In this post I will explain how I came to believe that microservices are the future, and then walk through a short demonstration of deploying a simple microservices application with Kubernetes.
From Skyscrapers to Cottages
I’ve been what we would today call a full-stack developer since ASP.NET 2.0. Back then we used to have ‘code-behind’ files which typically contained *a lot* of code. Then the 3-tier architecture kicked in, making a clear separation between data access and business logic. This reduced the amount of code in each class and made the code more readable.
The success of online businesses created the need for web applications with more complex requirements, driving the need for new architectures. The rise of design patterns, n-tier architectures, and domain-driven design (DDD) addressed this need, but also made web applications more complex and harder to manage. So we started covering our applications with onion layers and DTOs for each layer, hoping to please the DDD gods.
As applications grew over time, the need to break them down became evident, and this is when SOA became popular. By breaking monoliths down into smaller parts and services, the code became easier to write and manage. The disadvantage was that the deployment process often remained monolithic. Each service was not independent from the rest of the system, so testing a single service was a nightmare: you needed to deploy a bunch of other services without which the service under test couldn’t act on its own. In the image below you can see that our ‘Shop Application’ requires deployment together with its service dependencies, yet each such dependency cannot be deployed or manually tested on its own.
Then came microservices.
Microservices came into this world as a concept in late 2013, ‘a new way of thinking about structuring applications’, as Martin Fowler, a well-known microservices guru, describes them.
Unlike SOA services, microservices are independently deployed and act as small parts, each doing one specific job, while inheriting all the good that came from SOA. They can be written in any language, which makes it easier to change the technology stack; they can be built by independent teams with no previous knowledge of a particular programming language or of the whole product; and they can scale horizontally with ease. For example, in the image below you can see a ‘Payment’ microservice which could theoretically support payment processing from any source, not just our application, because the main goal of a microservice is to decouple a unit from the system.
All seemed like heaven, but deployment remained complicated. It often required expertise no one previously had, tools that had only just come into the world, and a different mindset: focusing on a small component that does its ‘stuff’ well, rather than on the whole product.
Let’s talk about the cost of running a microservice that needs to scale within a complex system deployed on AWS or another cloud. If each microservice instance runs on its own machine instance, you pay for 100% of that instance even if you only use 15% of it. This represents a lot of waste, and an enormous amount of money can be saved just by moving from an instance way of thinking to a process way of thinking. By process I mean, of course, Docker containers, which are the most efficient way to deploy microservices, or serverless solutions like AWS Lambda.
Orchestration with Docker alone is full of complexity, as you will need to write your own scripts to scale up and to wire up networking between new machines running Docker. This is where container orchestrators kick in.
In the following demonstration I will use the most common container orchestrator — Kubernetes — which will help us to deploy a simple application consisting of a few microservices.
Taking it Live with Kubernetes
Let’s start with a brief look at the code. There are 4 parts to this application: Api-Gateway, Order-Service, Payment-Service and Db. Don’t get too familiar with the code, since it does not matter much; we will concentrate mainly on the deployment with Kubernetes. You can always clone or download the full repository.
Microservices
We have a REST API gateway built with node.js and express.js, through which we can order a product. Ordering creates an order entry in order-service and a transaction entry in payment-service, and both reach the db at the end. Again, this demonstration is about Kubernetes capabilities, not about microservices themselves.
Api-Gateway.js can get all successful orders and create a new order.
Order-service.js can get all successful orders and create a new order (sketched below).
Payment-service.js can get all transactions and create a new transaction.
DB.js is just a basic in-memory collection of orders and transactions.
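To give a sense of how small each part is, here is a rough sketch of what something like Order-service.js could look like. This is not the repository’s actual code: the route paths, the order shape and the use of node-fetch are assumptions for illustration; only the port (3002) and the idea of passing the db hostname as a command line argument come from the rest of this post.

```js
// Rough sketch of an order-service, not the repository's actual code.
const express = require('express');
const fetch = require('node-fetch'); // assumed HTTP client

const app = express();
app.use(express.json());

// The db hostname is passed in as an argument,
// e.g. `node app.js db.shop:3001`, resolved later by kube-dns.
const DB_HOST = process.argv[2];

// Return all orders that completed successfully
app.get('/order', async (req, res) => {
  const response = await fetch(`http://${DB_HOST}/orders`);
  const orders = await response.json();
  res.json(orders.filter(order => order.success));
});

// Create a new order and persist it via the db service
app.post('/order', async (req, res) => {
  const order = { ...req.body, success: true };
  await fetch(`http://${DB_HOST}/orders`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(order),
  });
  res.status(201).json(order);
});

// 3002 is the port the order-service is reached on later in this post
app.listen(3002, () => console.log('order-service listening on 3002'));
```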
Kubernetes
First we start with a Namespace; think of this entity as a ‘folder’ in which the rest of our resources will reside. You can read more about Namespaces here.
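A minimal sketch of the Namespace manifest, assuming the namespace is called shop (the name used by the hostnames later in this post):

```yaml
# namespace.yaml - the 'folder' the rest of our resources will live in
apiVersion: v1
kind: Namespace
metadata:
  name: shop
```

Everything else in the demo is then created inside this namespace, either with kubectl apply -n shop or by setting namespace: shop in each manifest’s metadata.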
Next, let’s talk about Services. Services are Kubernetes entities that define networking from containers to other containers in the cluster, or to public networks via the cloud provider’s native load balancing. Find more information about Services here.
Each Service routes traffic to the ports exposed by its containers.
We will use two Service types: LoadBalancer for our Api-Gateway, because it will be exposed to the public network, and ClusterIP for internal communication between the other microservices inside our namespace.
Kubernetes comes with the kube-dns add-on, which lets us know the hostname of any service running in our namespace even before it is deployed, by following the convention [service name].[namespace]:[port exposed by service]. In other words, if the name of the service is ‘order-service’, we can reach it from any other container inside the namespace at order-service.shop:3002. This is a crucial part of writing configuration management scripts, as you will see below.
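As a sketch, the two Service types could look roughly like this. The order-service name, namespace and port come from the kube-dns example above; the selector labels and the api-gateway ports are assumptions for illustration.

```yaml
# ClusterIP service: reachable inside the cluster as order-service.shop:3002
apiVersion: v1
kind: Service
metadata:
  name: order-service
  namespace: shop
spec:
  type: ClusterIP
  selector:
    app: order-service      # assumed label on the order-service pods
  ports:
    - port: 3002            # port exposed by the service
      targetPort: 3002      # port the container listens on
---
# LoadBalancer service: exposed to the public network by the cloud provider
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
  namespace: shop
spec:
  type: LoadBalancer
  selector:
    app: api-gateway        # assumed label on the api-gateway pods
  ports:
    - port: 80              # public port
      targetPort: 3000      # assumed api-gateway container port
```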
Last but not least are Deployments. Think of a Deployment as a watchdog controller: if you ask for 5 instances (containers) of your microservice, it will make sure there are always 5 online, even if some containers die due to a software crash. Moreover, Deployments are useful for upgrading and downgrading the application running in the containers; with a short command against kubectl, the CLI tool for interacting with a Kubernetes cluster, you can upgrade or downgrade a microservice.
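For instance, scaling a microservice or rolling it forward and back can be done with one-liners along these lines (the deployment and container names here are assumptions based on this demo):

```sh
kubectl -n shop scale deployment order-service --replicas=5
kubectl -n shop set image deployment/order-service order-service=node:8-alpine
kubectl -n shop rollout undo deployment/order-service
```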
As you might guess, ‘replicas’ is the number of containers that will be deployed for a specific microservice; feel free to change it to whatever makes you happy for any service besides our DB, since the data it holds is in memory and therefore cannot be scaled out.
The most important part of a Deployment is usually the container definition. As you can see, we are using the official node.js image running Alpine Linux, known for its small footprint, and then we execute our configuration management in a shell script; this is where some of the Kubernetes magic happens.
First we download both app.js and package.json from GitHub, install the packages defined in package.json, and run app.js, passing it the hostnames we already know, with a little help from kube-dns of course. This way each of our microservices knows how to reach the other microservices in our application.
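Putting the pieces together, a Deployment for order-service could look roughly like the following. This is a sketch, not the repository’s manifest: the GitHub URLs are placeholders, the db address is assumed, and the bootstrap script is a simplified shell version of the configuration management described above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  namespace: shop
spec:
  replicas: 3                     # scale freely for anything except the db
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: node:8-alpine    # official node.js image on Alpine Linux
          ports:
            - containerPort: 3002
          # Configuration management in shell: fetch the code, install
          # dependencies, then start the app with the kube-dns hostname
          # of the db service as an argument (db port assumed).
          command: ["/bin/sh", "-c"]
          args:
            - |
              wget -O /app.js https://raw.githubusercontent.com/<user>/<repo>/master/order-service.js &&
              wget -O /package.json https://raw.githubusercontent.com/<user>/<repo>/master/package.json &&
              cd / && npm install &&
              node app.js db.shop:3001
```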
I hope you enjoyed my experience with microservices and Kubernetes. Again, feel free to clone or download the repository, and check it for updates.