
Fikayo Adepoju for Hookdeck

Originally published at hookdeck.com

An Introduction to Asynchronous Processing and Message Queues

Introduction

Client-server communication is one of the core operations in software development today. A system (the client) can request work from another system (the server) by exchanging messages that describe the work and communicate its outcome. Managing this communication can quickly grow complex at the network layer once you need to control the rate at which messages are sent, the number of requests a service can handle at a given time, and how quickly the client needs a response.

In this post, we look at the two methods of managing communication between networked applications, and how to solve some of the problems that come with scaling the process.

Synchronous and asynchronous processing

What is synchronous processing?

Synchronous processing is the traditional way of processing in client-server communication. A client sends a request to a server and waits for the server to complete the job and send a response before the client can continue doing any other work. This process is often referred to as blocking (i.e. the client is blocked from doing any other work until a response is received).

*Figure: Synchronous processing*
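
As a minimal sketch of the idea (in Python, using the `requests` library and a hypothetical endpoint), a synchronous call blocks until the response arrives:

```python
import requests

# A minimal sketch of synchronous (blocking) processing.
# The call to requests.post() blocks the thread until the
# server finishes the job and returns a response.
response = requests.post(
    "https://api.example.com/jobs",   # hypothetical endpoint
    json={"task": "generate-report"},
    timeout=30,
)

# Nothing below this line runs until the response arrives.
print("Job result:", response.json())
```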

What is asynchronous processing?

Asynchronous processing is the opposite of synchronous processing, as the client does not have to wait for a response after a request is made, and can continue other forms of processing. This process is referred to as non-blocking because the execution thread of the client is not blocked after the request is made. This allows systems to scale better as more work can be done in a given amount of time.

*Figure: Asynchronous processing*
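
A matching non-blocking sketch, assuming the `aiohttp` library and the same hypothetical endpoint: the client fires off the request, keeps working, and only awaits the result when it needs it.

```python
import asyncio
import aiohttp

# A minimal sketch of asynchronous (non-blocking) processing.
async def submit_job(session: aiohttp.ClientSession) -> dict:
    async with session.post(
        "https://api.example.com/jobs",   # hypothetical endpoint
        json={"task": "generate-report"},
    ) as response:
        return await response.json()

async def main() -> None:
    async with aiohttp.ClientSession() as session:
        job = asyncio.create_task(submit_job(session))
        # The client is free to do other work while the request
        # is in flight; it only awaits the result when needed.
        print("Doing other work while the job runs...")
        result = await job
        print("Job result:", result)

asyncio.run(main())
```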

Differences between synchronous and asynchronous processing

Now that we understand what synchronous and asynchronous processing are, let's compare the two to see how asynchronous processing offers more benefits than its counterpart:

  • Synchronous requests block the client's execution thread, forcing it to wait for as long as the request takes before it can perform another action. Asynchronous requests, on the other hand, do not block and allow more work to be done in a given amount of time.
  • The blocking nature of synchronous requests makes the client consume more resources, as the execution thread is locked for the duration of the wait. Asynchronous requests immediately free up the execution thread to perform other functions.
  • Because there is no way to know how long a request will take, it is difficult to build responsive applications with synchronous processing: the more blocking operations you perform, the slower your system becomes. With asynchronous processing, the client stays responsive because it never waits on a request.
  • Failure tolerance is higher with asynchronous processing, as it is easy to build a retry system for failed requests; the client is not held back by how many times a request is retried or how long the retries take.
  • Asynchronous processing allows client requests to be executed in parallel, while synchronous requests can only be processed sequentially.

How to achieve asynchronous processing

The section above makes it clear that asynchronous processing is the way to go when building scalable, highly responsive applications. However, all the benefits of asynchronous processing come at a cost to the architecture of the system.

Synchronous processing is straightforward. You don't need to do anything special to achieve it; it's the default behavior in client-server communication.

To achieve asynchronous processing, a middle-man needs to be inserted between the client and the processing server. This middle-man serves as a message broker between the client and the server, ingesting the request messages and distributing them to the server based on the load the processing server can handle at any given moment. The client gets a quick response once the request message is ingested, and can continue doing other activities without waiting for the server.

Asynchronous processing architectures typically offer callback endpoints that are invoked once a response to the original request is ready.
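
As a rough illustration (assuming Flask; the route and payload fields are made up for the example), a callback endpoint might look like this:

```python
from flask import Flask, request

app = Flask(__name__)

# A minimal sketch of a callback endpoint. The broker (or the
# processing server) calls this URL when the result of an
# earlier asynchronous request is ready.
@app.route("/callbacks/job-complete", methods=["POST"])
def job_complete():
    payload = request.get_json()
    print("Job", payload.get("job_id"), "finished:", payload.get("status"))
    return {"received": True}, 200
```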

One common middle-man in today's asynchronous processing architectures is the message queue, discussed in the next section. An alternative is a publish/subscribe system such as an event bus.

What are message queues?

A message queue is a component that buffers and distributes asynchronous requests. A message is a piece of data, typically formatted as XML or JSON, that contains all the information necessary for a request to be served.

Clients are known as message producers: they send request messages to the message queue instead of to the processing server. They receive a quick response as soon as the message is added to the queue, which lets them continue with other forms of processing without having to wait for the server.
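
As an illustration, here is a minimal producer sketch using RabbitMQ through the `pika` library; the queue name and message fields are assumptions for the example:

```python
import json
import pika

# A minimal producer sketch: publish a JSON message to a queue
# instead of calling the processing server directly.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="email_tasks", durable=True)

message = {"order_id": 1234, "email": "buyer@example.com"}
channel.basic_publish(
    exchange="",
    routing_key="email_tasks",
    body=json.dumps(message),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)
print("Message queued; the client is free to continue.")
connection.close()
```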

*Figure: Message queues*

Servers are known as message consumers, and are served messages from the queue based on the number of requests they can serve at any given time. This continues until all the messages in the queue have been served.
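
A matching consumer sketch, again assuming RabbitMQ and `pika`; it acknowledges each message only after the work succeeds, so that unprocessed messages can be redelivered:

```python
import json
import pika

# A minimal consumer sketch matching the producer above.
def handle_message(channel, method, properties, body):
    task = json.loads(body)
    print("Sending delivery email for order", task["order_id"])
    # Acknowledge only after the work succeeds, so unprocessed
    # messages are redelivered if this consumer crashes.
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="email_tasks", durable=True)
channel.basic_qos(prefetch_count=1)  # take one message at a time
channel.basic_consume(queue="email_tasks", on_message_callback=handle_message)
channel.start_consuming()
```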

Not all clients making requests to your servers are controlled by you. Sometimes, you need to serve requests from SaaS applications that communicate with your servers using webhooks. These are also considered to be message producers.

Message queues can be built into your infrastructure using open source technologies like RabbitMQ and Apache Kafka, or by using an infrastructure as a service, like Hookdeck (more on this later).

Benefits of message queues

Below are a few of the instant benefits you get by adding a message queue to your infrastructure:

  • Message queues enable asynchronous processing even if your system was not originally built for it (unlike event buses, which require systems designed around them).
  • The inclusion of message queues keeps producers non-blocking (their execution thread does not need to block), as they don't have to wait for the consumer to be available or to finish executing.
  • Message producers and consumers can be scaled independently for more speed at scale. You can add more processing servers to serve requests faster while still enjoying the benefit of a non-blocking client.
  • Fault tolerance is increased: the processing server can shut down and requests from the client will still be accepted. Once the server is back up, the message queue serves it the pending requests for processing.

Using Hookdeck to queue your webhooks

Let's imagine you have a Shopify store, and you need to email each buyer the delivery details of their purchase. A webhook is configured on Shopify to call an endpoint on your server each time a purchase is made, so that the email can be sent.

You can easily run into a situation where your server is overloaded with webhook requests when many people are buying your items. If this is not handled properly, your server will shut down once it runs out of resources, which can lead to lost business and hurt customer satisfaction. This is where Hookdeck comes in.
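
For context, a minimal sketch of the kind of endpoint Shopify would call (assuming Flask; the route, payload fields, and email helper are illustrative):

```python
from flask import Flask, request

app = Flask(__name__)

def send_delivery_email(email: str, order_id: int) -> None:
    # Placeholder for the real email logic.
    print(f"Emailing {email} delivery details for order {order_id}")

# Hypothetical route; Shopify would be configured to POST its
# purchase webhook here. Under heavy buying activity, requests
# hit this endpoint directly, which is what overloads the server.
@app.route("/webhooks/orders-created", methods=["POST"])
def order_created():
    order = request.get_json()
    send_delivery_email(order["email"], order["id"])
    return "", 200
```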

Hookdeck is infrastructure as a service for webhooks that lets you benefit from message queues without writing a single line of code. Hookdeck sits between your webhook provider and your processing server, queueing requests and serving them to your endpoint based on the load your API can handle. You get a highly performant message queue system, request tracing, request logging, alerting, retries, error handling, and an intuitive dashboard to search and filter your requests.

Hookdeck can be set up in just a few steps with no installation required. All you have to do is:

  • Sign up for a Hookdeck account
  • Add your webhook provider (e.g. Shopify, Stripe, GitHub, etc.)
  • Add your API destination endpoint
  • Replace your webhook provider's destination endpoint with the one generated by Hookdeck

And that's it! The best part is that you can get started today for free.

Conclusion

Asynchronous processing introduces a non-blocking request handling process into your applications, giving you a more decoupled system that can be easily scaled for performance. In this post we have looked at the benefits of asynchronous processing and how it can be easily achieved using message queues.

To quickly and easily implement asynchronous processing in your applications, you can start using Hookdeck today.
