API gateways are becoming increasingly popular with the microservice architecture. Recently, Google announced its own API Gateway. The time is ripe to take a look at why a microservice architecture needs one, and how things currently look without an API gateway in place.
Let's look at a microservice architecture without an API gateway.
Traditionally, apart from its core functionality, each microservice has also handled the following:
- Authentication of requests based on OAuth, a JWT token, or simple key-based authentication. (This authentication verifies whether other services or users have access to this service; it is not the typical user authentication that happens against the application's data.)
- Handling CORS for cross-origin requests from other services/clients.
- Allowing/denying requests based on their IPs.
- Rate limiting - Allow only a certain number of requests. Requests over the specified limit are answered with status code 429 (Too Many Requests). These rules also protect the microservice from DDoS attacks that happen at the application layer.
- Monitoring - Collecting metrics from requests/responses to gain valuable insights. E.g. the number of requests per minute, the number of requests per API, or how often a particular user has hit the rate limit.
- Alerting - A subset of monitoring, where alerts are generated for specific events. E.g. generating an alert when the response time exceeds 500ms across 1000 requests. An observability tool like Prometheus helps with both monitoring and alerting.
- Logging - Logging all the requests made to the server.
- Request termination - Temporarily disabling requests for some APIs, or disabling the service entirely during downtime.
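To make one of these cross-cutting concerns concrete, here is a minimal token-bucket rate limiter of the kind each service would otherwise have to embed on its own. This is an illustrative sketch only; the class and parameter names are invented, and a real service would use a battle-tested library or the gateway's built-in limiter.

```python
import time

class TokenBucket:
    """Simple token-bucket rate limiter: allow `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should respond with 429 Too Many Requests

bucket = TokenBucket(rate=1.0, capacity=2)
print(bucket.allow(), bucket.allow(), bucket.allow())  # True True False
```

A burst of two requests is admitted immediately; the third is rejected until tokens refill at one per second.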
Alright, these are just a few of them; there may well be more.
Major Disadvantages of this approach:
- When a new microservice comes up, all these functionalities need to be replicated.
- Any change to one functionality must be repeated across all services. E.g. moving request logging from Loggly to StatsD.
- Logically speaking, none of these functionalities is specific to the underlying application; they can be decoupled from the application itself.
API Gateway:
An API gateway needs no introduction by now. It can be considered yet another microservice in your architecture, one that performs all the aforementioned functionalities.
- It is the entry point for your microservices and acts as a gatekeeper, performing all the basic functionalities before passing the request on to the respective microservice.
- All the functionalities now reside at a centralised place, making it easy to maintain and analyse them.
- When a new microservice comes up, all it has to do is process requests and send responses back to the gateway; the API gateway takes care of the rest.
- With an API gateway in place, functionalities like request/response transformation and rolling out canary deployments become possible as well.
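The gatekeeper role described above can be sketched as a toy request handler: authenticate, then route by path prefix to an upstream service. The keys, routes and service names are invented for illustration; a real deployment would use a gateway such as Kong or Traefik rather than hand-rolled code.

```python
# Toy gateway dispatch: authenticate, then route by path prefix.
# All names below (keys, routes, upstreams) are hypothetical.

VALID_KEYS = {"secret-key-1"}
ROUTES = {  # path prefix -> upstream service
    "/users": "user-service:8001",
    "/orders": "order-service:8002",
}

def handle(path: str, api_key: str) -> tuple[int, str]:
    if api_key not in VALID_KEYS:
        return 401, "Unauthorized"
    for prefix, upstream in ROUTES.items():
        if path.startswith(prefix):
            # A real gateway would proxy the request to `upstream` and
            # stream the response back; here we just report the target.
            return 200, f"forwarded to {upstream}"
    return 404, "No route"

print(handle("/users/42", "secret-key-1"))  # (200, 'forwarded to user-service:8001')
print(handle("/users/42", "wrong-key"))     # (401, 'Unauthorized')
```

Rate limiting, logging and metrics would slot into the same `handle` path, which is exactly why centralising them in one place pays off.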
I have jotted down some of the problems an API gateway solves. Having said that, it's really up to the architecture to decide whether an API gateway is a must-have or merely a nice-to-have. It becomes a must-have especially when there are a lot of microservices in the architecture.
Irrespective of whether there's an absolute need for an API gateway, a close look at the design before the gateway existed makes it evident that it was violating the Single Responsibility Principle, Don't Repeat Yourself (DRY), and the goal of high cohesion with low coupling.
Top comments (28)
I really don't like microservices. mainly due to the complexities introduced and the latencies added. I really wish there was a better way to scale systems and teams. here's to hoping for a better future 🤞
There is, it’s improving the single monolith to be easier to develop in. Shopify has been doing this for years. The problem is microservices are cool and people jump ship to them before putting any effort into solving their existing problems
yes, rightly said! my teams have adopted vertical slice architecture which deals with scaling human resources pretty well and a lot of performance optimizations + a good caching strategy has saved us from having to invest in microservices for the moment. maybe in v2.0 they can move to microservices but I'm out the door if they decide to do so 😁
They could be ok if you have a perfect understanding of where to draw the lines between them... But when does that happen
There is nothing wrong with monoliths. A monolith with some particularly heavy services living separately will work for most companies out there.
Don't solve problems you don't have.
Thanks for the article. One question: do you think inter-service calls should also be done through the API gateway? For example, should the Ruby service call the Golang service via the gateway in this case?
One microservice should not call another microservice. This introduces coupling and is worse than a monolith; in effect it creates a distributed monolith. Instead, the services should use a publish/subscribe event log to exchange data.
So do you mean that in such systems there would no longer be a request-response communication model between the services? I'm currently having a hard time imagining that kind of system (especially for customer-facing SaaS apps), but it might just be because I haven't seen it yet.
Yes. The most important property of a well-designed distributed architecture is that individual services are operationally completely independent of each other: if one service goes down, the rest continue to function and the system stays available. When services query each other, the monolith's coupling is still there, with the added 'benefit' of distribution faults. Why not stay with the monolith in the first place?
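The publish/subscribe exchange being described can be sketched with a minimal in-memory event log. This is purely illustrative (real systems use a durable log like Kafka, and delivery is asynchronous); the class, topic and service names are invented.

```python
from collections import defaultdict

class EventLog:
    """Minimal in-memory publish/subscribe log, for illustration only."""

    def __init__(self):
        self.topics = defaultdict(list)       # topic -> ordered, append-only events
        self.subscribers = defaultdict(list)  # topic -> callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, event):
        self.topics[topic].append(event)      # append to the log
        for callback in self.subscribers[topic]:
            callback(event)                   # asynchronous in a real system

log = EventLog()
orders_seen = []
# A hypothetical billing service reacts to order events without ever
# calling the order service directly, so the two stay operationally independent.
log.subscribe("orders", orders_seen.append)
log.publish("orders", {"id": 1, "total": 9.99})
print(orders_seen)  # [{'id': 1, 'total': 9.99}]
```

If the billing service is down, events simply accumulate in the log and can be consumed later, which is the availability property argued for above.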
For an infra composed of hundreds of very chatty microservices, most of which are interdependent, and where RMQ cannot handle the traffic, what do you recommend for handling eventing? We're using Kafka now, but curious for your thoughts.
We use Kafka too. For not so overwhelming loads and Ruby based services I would consider Eventide with PostgreSQL store.
Great question Bobby. There are no hard and fast rules. It is a matter of choice if you want any of the functionalities from your api gateway or not.
For instance, for services that aren't exposed to the outside world, authentication/security can be handled at the application layer, at the network layer, or both. We are fine with security at the network layer, and as a result we don't route inter-service calls through our API gateway as of now. (We use Kong.)
Having said that, if we need metrics for inter-service calls going forward, we might rethink this and route the inter-service calls through Kong as well.
Any open source api gateway you recommend? I know Kong but don't really like it.
I use Traefik, which sits at the intersection of ingress and gateway. The middlewares offered are well defined and easy to use, although they don't capture all requirements. For the Go devs who want to build that functionality, there is a new plugin architecture being introduced in v2.3 - doc.traefik.io/traefik/master/plug...
I haven't kicked the tires on this yet but will be soon
Traefik is very interesting as well. Do you see it as an alternative to HAProxy as a load balancer too? How about performance and configuration? I assume the configuration should be easier than HAProxy's?
Yep - we rolled our own cert-manager (which is a requirement if you want more than one instance running, which generally, you do) to get it to scale (unless you go Enterprise).
Obviously, it's not as thin as a true reverse proxy, but that's because it's not designed that way. It's still very fast for us and the flexibility pays off the performance. In terms of config, we use k8s so it falls into the same CD pipelines we have for everything else which is another bonus.
There are other ways to configure it and you can read more about it here: doc.traefik.io/traefik/providers/o...
dev.to/imthedeveloper/open-source-...
Thank you. I'm staring at haproxy now. It looks quite promising. It doesn't have lots of features yet but it has good performance overall and is also a load balancer.
I would recommend trying out Apache APISIX
apisix.apache.org/docs/apisix/gett...
Are there any specific reasons why you don't like Kong? We are using it in production.
Pretty much solves the problem at hand for us.
I've had countless issues with the development lifecycle (in particular the feedback loop) when writing plugins for it (using the Go PDK bridge).
Nothing in particular. It actually looks great. I may even use it for a toy project first. I'm using haproxy for load balancer so I'm thinking I should try out its other features as well.
If you're just looking for an LB, you should check out nginx as well. FYI, Kong is just a wrapper around nginx.
I completely agree with your advice, but moving the "logging" and "metrics" blocks out of the individual services into the API Gateway is an over-simplification from my point of view. Logging API requests is not the full story behind application logging; you're probably interested in metrics other than request/response counts and times (e.g. message queue sizes), and neither diagram shows a messaging infrastructure.
Yes, you're right. Individual services will still have application-specific observability tooling, their own authentication systems, and even quota/weight-based rate limiting. Those are very specific to the particular service.
But what I talked about are the higher-level metrics: for instance, the latency of each service, or the number of requests per service grouped by status code. These are not application-specific and apply to all the services.
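Those service-agnostic metrics are easy to derive from the gateway's access records alone. A small sketch, with invented field names standing in for whatever the gateway actually logs:

```python
from collections import Counter

# Hypothetical gateway access records; field names are illustrative.
records = [
    {"service": "users", "status": 200, "latency_ms": 12},
    {"service": "users", "status": 500, "latency_ms": 87},
    {"service": "orders", "status": 200, "latency_ms": 30},
    {"service": "users", "status": 200, "latency_ms": 15},
]

# Request counts grouped by (service, status code).
counts = Counter((r["service"], r["status"]) for r in records)
print(counts[("users", 200)])  # 2

# Average latency per service, another service-agnostic metric.
def avg_latency(service):
    samples = [r["latency_ms"] for r in records if r["service"] == service]
    return sum(samples) / len(samples)

print(avg_latency("users"))  # (12 + 87 + 15) / 3 = 38.0
```

Because every request flows through the gateway, these aggregations need no cooperation from the services themselves.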
Do one and only one thing, and do it well. UNIX and then Linux.
Hey, love the diagrams. They are super clear and really help illustrate your point. How did you create them?
Hey Douglas, I use excalidraw.