Load balancing from external clients as well as between mid-tier services
A framework to develop uniform REST APIs
An easily consumable deployment framework for each service
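The load-balancing requirement can be illustrated with a minimal round-robin chooser in plain Java. This is a simplified sketch of the idea behind client-side load balancers such as Ribbon, not an actual implementation; the server URLs are made up:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin chooser over a fixed list of service instances.
// Real load balancers add health checks, dynamic server lists, etc.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger position = new AtomicInteger(0);

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Returns the next server in rotation, wrapping around at the end.
    public String next() {
        int index = Math.floorMod(position.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("http://svc-1:8080", "http://svc-2:8080", "http://svc-3:8080"));
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.next());
        }
    }
}
```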
The work begins…
As Java and the Spring Framework were already in use in our current development, we had a look at the Spring projects to evaluate what could be used from there. We already had some experience with Spring Boot from a few stand-alone applications.
The next interesting project in that area was Spring Cloud. I have to mention that there were also critical voices about relying on a single framework, but we wanted to use it for the PoC.
With Spring Cloud Netflix, some of our requirements could be satisfied, and the Netflix OSS components are tested at scale.
From the different available Netflix OSS components, we had a look at the following components:
Service Discovery: Eureka
Circuit Breaker: Hystrix also with Dashboard and Turbine
Declarative REST Client: Feign
Client Side Load Balancer: Ribbon
Router and Filter: Zuul
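The circuit breaker idea behind Hystrix can be sketched in a few lines of plain Java: after a number of consecutive failures the circuit opens and calls fail fast to a fallback until a cool-down period has passed. This is an illustrative simplification of the pattern, not Hystrix's actual implementation (which adds thread isolation, rolling metrics windows, and more):

```java
import java.util.function.Supplier;

// Minimal circuit breaker: opens after `failureThreshold` consecutive
// failures, fails fast during the cool-down, then lets one trial call through.
public class CircuitBreaker {
    private final int failureThreshold;
    private final long cooldownMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int failureThreshold, long cooldownMillis) {
        this.failureThreshold = failureThreshold;
        this.cooldownMillis = cooldownMillis;
    }

    public synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (isOpen()) {
            return fallback.get();          // fail fast while the circuit is open
        }
        try {
            T result = action.get();
            consecutiveFailures = 0;        // a success closes the circuit
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }

    private boolean isOpen() {
        if (consecutiveFailures < failureThreshold) {
            return false;
        }
        if (System.currentTimeMillis() - openedAt >= cooldownMillis) {
            // half-open: allow one trial call; a failure re-opens the circuit
            consecutiveFailures = failureThreshold - 1;
            return false;
        }
        return true;
    }
}
```

In Hystrix itself, one would instead wrap the remote call in a `HystrixCommand` with a fallback; the sketch above only shows the state machine at the core of the pattern.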
How to split the monolith?
We had long discussions about how to cut the big monolith into several services. Or, phrased differently: what are services, and how could we move the code into them?
For an already existing, productive system, the best decision is to stop adding new functionality to the monolith and to build new services on top of (or beside) the existing infrastructure. In addition, cut out services that are simple and that encapsulate components.
In the case of our PoC, we took the radical way and started with the core services. For background: the team working on the PoC had its main expertise in those core services. That way we were able to estimate how viable our concept would be in the end.
We could not only concentrate on the functional requirements; most of the non-functional requirements also came up very quickly.
One point was the secure communication between the different services. My colleague Daniel Sass described our approach in his blog.
As continuous delivery and automation were already well-established knowledge and practice, we did not have to spend too much time on these topics. Without knowing how to automate your infrastructure, you should not even start thinking about microservices.
Microservices, containers, and Docker are often mentioned as the terrific trio.
An easy way to transport your microservices through the different stages of your infrastructure is to use containers: build once, deploy everywhere, as the marketing slogan goes.
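To make the "build once, deploy everywhere" idea concrete: packaging a Spring Boot service as a container image can be as simple as a base JRE image plus the fat JAR. The image and JAR names below are illustrative examples, not our actual setup:

```dockerfile
# Minimal image for a Spring Boot fat JAR (names are illustrative)
FROM openjdk:8-jre-alpine
COPY target/order-service-1.0.jar /app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

The same image then moves unchanged through dev, test, and production; only the environment-specific configuration is injected from outside.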
So we had to take a closer look at Docker, the tools around it (Docker Registry, Swarm, etc.), and the orchestration, including centralized log management. It was not that easy.
Although we succeeded quite quickly in deploying something with Docker, there are still a lot of rough edges. Deploying one container is easy; deploying several containers on several Docker hosts can be done, but at that time only with some workarounds. We should have a look at the upcoming Cloud Foundry instance on SAP HANA Cloud Platform.
In general, we could show that the basic ideas work. Even the performance of the individual services was good. Additional tests must show whether the performance of an end-to-end scenario is also acceptable. Acceptable here means: something below 300 ms. 🙂
What I learned in this PoC is that we have to discuss again how we cut our monolith into smaller pieces. At the beginning, the services will be less micro and more something between a microservice and a monolith.
Development teams must also take over more responsibility for their services. They must be responsible end-to-end, with no operations team acting as a firewall when something goes wrong. A long way to go…