API Gateways are becoming increasingly popular with the microservice architecture. Recently, Google announced its own API Gateway. The time is ripe...
I really don't like microservices, mainly due to the complexities introduced and the latencies added. I really wish there were a better way to scale systems and teams. Here's to hoping for a better future 🤞
There is: improving the single monolith so it's easier to develop in. Shopify has been doing this for years. The problem is that microservices are cool, and people jump ship to them before putting any effort into solving their existing problems.
Yes, rightly said! My teams have adopted a vertical slice architecture, which deals with scaling human resources pretty well, and a lot of performance optimizations plus a good caching strategy have saved us from having to invest in microservices for the moment. Maybe in v2.0 they can move to microservices, but I'm out the door if they decide to do so 😁
They could be OK if you have a perfect understanding of where to draw the lines between them... but when does that ever happen?
There is nothing wrong with monoliths. A monolith with some particularly heavy services living separately will work for most companies out there.
Don't solve problems you don't have.
Thanks for the article. One question: do you think inter-service calls should also be done through the API gateway? For example, should the Ruby service call the Golang service via the gateway in this case?
One microservice should not call another microservice. This introduces coupling and is worse than a monolith – in fact, it creates a distributed monolith. Instead, the services should exchange data through a publish/subscribe event log.
So do you mean that in such systems there would no longer be a request-response communication model between the services? I am currently having a hard time imagining that kind of system (especially for customer-facing SaaS apps), but it might just be because I haven't seen one yet.
Yes. The most important property of a well-designed distributed architecture is that individual services are operationally independent of each other. That is, if one service goes down, the rest continue to function and the system stays available. When services query each other, the monolith's coupling is still there, with the added 'benefit' of distributed faults. Why not stay with the monolith in the first place?
For an infra composed of hundreds of very chatty microservices, most of which are interdependent, and where RabbitMQ cannot handle the traffic, what do you recommend for handling eventing? We're using Kafka now, but I'm curious to hear your thoughts.
We use Kafka too. For less overwhelming loads and Ruby-based services, I would consider Eventide with a PostgreSQL store.
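For what it's worth, publishing an event to the log from the Go service might look roughly like this. It's just a sketch assuming the `segmentio/kafka-go` client; the topic, event type, and broker address are made up for illustration:

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/segmentio/kafka-go"
)

// OrderPlaced is a hypothetical domain event another service may care about.
type OrderPlaced struct {
	OrderID string `json:"order_id"`
	Total   int    `json:"total_cents"`
}

func main() {
	// The producing service appends the event to a shared log (topic)
	// instead of calling the consuming service directly.
	w := &kafka.Writer{
		Addr:     kafka.TCP("localhost:9092"), // assumed broker address
		Topic:    "orders",                    // hypothetical topic name
		Balancer: &kafka.LeastBytes{},
	}
	defer w.Close()

	payload, _ := json.Marshal(OrderPlaced{OrderID: "o-123", Total: 4200})

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	if err := w.WriteMessages(ctx, kafka.Message{
		Key:   []byte("o-123"),
		Value: payload,
	}); err != nil {
		log.Fatal(err)
	}
	// Consumers read the topic at their own pace; if one is down, the event
	// waits in the log and the producer's availability is unaffected.
}
```

The key point is that the producer only knows about the log, not about any particular consumer.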
Great question, Bobby. There are no hard and fast rules; it's a matter of whether you want any of the API gateway's functionality for those calls or not.
For instance, for services that aren't exposed to the outside world, authentication/security can be done at the application layer, at the network layer, or both. We're fine with security at the network layer, and as a result we don't route inter-service calls through our API gateway as of now. (We use Kong.)
Having said that, if we need metrics for inter-service calls going forward, we might rethink this and route them through Kong.
Any open-source API gateway you recommend? I know Kong but don't really like it.
I use Traefik, which sits at the intersection of ingress and gateway. The middlewares offered are well defined and easy to use, although they don't capture all requirements. For the Go devs who want to build that functionality themselves, there is a new plugin architecture being introduced in v2.3 - doc.traefik.io/traefik/master/plug...
I haven't kicked the tires on this yet but will be soon
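For anyone curious, the shape of such a plugin is roughly a Go package exposing `CreateConfig` and `New`. Here's a minimal header-injecting sketch (untested against v2.3 on my side; the package, field names, and defaults are made up):

```go
// Package demoplugin sketches the shape of a Traefik middleware plugin.
package demoplugin

import (
	"context"
	"net/http"
)

// Config holds the plugin's options as declared in the dynamic configuration.
type Config struct {
	HeaderName  string `json:"headerName,omitempty"`
	HeaderValue string `json:"headerValue,omitempty"`
}

// CreateConfig returns the default configuration; Traefik calls this first.
func CreateConfig() *Config {
	return &Config{HeaderName: "X-Demo", HeaderValue: "enabled"}
}

type demo struct {
	next   http.Handler
	name   string
	config *Config
}

// New wires the middleware into the chain; Traefik calls it with the parsed config.
func New(ctx context.Context, next http.Handler, config *Config, name string) (http.Handler, error) {
	return &demo{next: next, name: name, config: config}, nil
}

// ServeHTTP sets one request header and hands off to the next middleware.
func (d *demo) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
	req.Header.Set(d.config.HeaderName, d.config.HeaderValue)
	d.next.ServeHTTP(rw, req)
}
```

As I understand it, the plugin source is interpreted by Traefik at runtime rather than compiled into the gateway binary.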
Traefik is very interesting as well. Do you see it as an alternative to HAProxy as a load balancer? How about performance and configuration? I assume the configuration should be easier than HAProxy's?
Yep - we rolled our own cert-manager (which is a requirement if you want more than one instance running, which you generally do) to get it to scale (unless you go Enterprise).
Obviously, it's not as thin as a true reverse proxy, but that's because it's not designed that way. It's still very fast for us, and the flexibility makes up for the performance cost. In terms of config, we use k8s, so it falls into the same CD pipelines we have for everything else, which is another bonus.
There are other ways to configure it and you can read more about it here: doc.traefik.io/traefik/providers/o...
dev.to/imthedeveloper/open-source-...
Thank you. I'm looking at HAProxy now. It looks quite promising. It doesn't have lots of gateway features yet, but it has good performance overall and is also a load balancer.
I would recommend trying out Apache APISIX
apisix.apache.org/docs/apisix/gett...
Are there any specific reasons why you don't like Kong? We are using it in production.
It pretty much solves the problem at hand for us.
I've had countless issues with the development lifecycle (in particular the feedback loop) when writing plugins for it (using the Go PDK bridge).
Nothing in particular. It actually looks great; I may even use it for a toy project first. I'm using HAProxy as a load balancer, so I'm thinking I should try out its other features as well.
If you're just looking for an LB, you should check out nginx as well. FYI, Kong is just a wrapper around nginx.
I completely agree with your advice, but moving the "logging" and "metrics" blocks out of the individual services into the API Gateway is an over-simplification from my point of view. Logging API requests is not the full story behind application logging; you're probably interested in metrics other than request/response counts and times (e.g. message queue sizes), and neither diagram shows a messaging infrastructure.
Yes, you're right. Individual services will still have application-specific observability tools, authentication systems, and even quota/weight-based rate limiting. Those are very specific to the particular service.
But what I talked about were the higher-level metrics: for instance, the latency of each service, or the number of requests per service grouped by status code. These are not application-specific and apply to all the services.
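To illustrate what I mean (just a sketch, not something from the article), those edge metrics can be captured by a thin wrapper in the gateway itself. A rough Go example using the Prometheus client, with a made-up upstream "orders" service:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strconv"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// requestDuration records latency per upstream service and response status code.
var requestDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "gateway_request_duration_seconds",
		Help:    "Latency of proxied requests, labeled by service and status code.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"service", "code"},
)

// statusRecorder captures the status code written by the upstream handler.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// instrument wraps any handler (here a reverse proxy) with latency/status metrics.
func instrument(service string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		start := time.Now()
		next.ServeHTTP(rec, req)
		requestDuration.
			WithLabelValues(service, strconv.Itoa(rec.status)).
			Observe(time.Since(start).Seconds())
	})
}

func main() {
	prometheus.MustRegister(requestDuration)

	// Hypothetical upstream: the "orders" service behind the gateway.
	upstream, _ := url.Parse("http://orders.internal:8080")
	http.Handle("/orders/", instrument("orders", httputil.NewSingleHostReverseProxy(upstream)))

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```

Individual services still keep their own application-level instrumentation; the gateway only sees what crosses it.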
Do one and only one thing, and do it well. UNIX and then Linux.
Hey, love the diagrams. They are super clear and really help illustrate your point. How did you create them?
Hey Douglas, I use excalidraw.