Event Architecture with Spring Cloud Stream

Today we are going to land our studies on Spring Cloud Stream and understand how, by implementing an event-driven architecture with this framework, we achieve the much desired scalability and responsiveness.

Event Architecture
An event-driven architecture is a style of software architecture in which the system reacts to events rather than to requests, a significant shift from the request/response model we are used to. In this structure, the components are defined as: event capture, communication, processing, and persistence. This pattern has been adopted in many applications because it is language agnostic and loosely coupled, two features that greatly facilitate building distributed systems. In the context of applications running entirely in the cloud, it also enables hybrid cloud architectures.

Loose coupling means that event producers do not know who their consumers are: the producer's responsibility ends when it sends the event to the message broker. From there the consumer takes over, and it only acts when an event arrives with the signature it expects, ensuring full isolation of responsibilities and failures. This scenario is perfect for distributed systems: when a certain part of the system needs more instances to meet momentary demand, only that part scales, which is ideal for anyone who wants cloud-scale computing combined with low cost.


Mediator or Broker?
When should you use each one? After being introduced to events, we start to see that there is a universe of content around this subject, and a very important theme is which topology to adopt.

Mediator
In short, a mediator is used to orchestrate processing. Two widely used solutions for this role are Apache Camel and Spring Integration, as in the sketch below.
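As a rough sketch of the mediator style, here is what a small orchestrated flow could look like with Spring Integration's Java DSL (5.x era); the channel names and processing steps are hypothetical.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;

@Configuration
public class OrderMediatorConfig {

    // The mediator receives every event on one channel and decides,
    // step by step, how it is transformed and where it is routed.
    @Bean
    public IntegrationFlow orderFlow() {
        return IntegrationFlows.from("ordersIn")                 // hypothetical entry channel
                .filter((String payload) -> !payload.isBlank())  // discard empty events
                .<String, String>transform(String::toUpperCase)  // stand-in for an enrichment step
                .channel("ordersOut")                            // hand off to the next component
                .get();
    }
}
```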


Broker
When communication is direct and there is no need for orchestration between those who publish and those who subscribe to events, a broker is the most recommended option. Usually run as a cluster for high availability, the broker is responsible for managing topics and queues. The best-known and most widely used brokers are Kafka and RabbitMQ.


Events: what now?
Put simply, an event is anything that generates a change of state within your system. To be more didactic, let's break the processing of an event into two steps. In the first step, we send a message to our broker, notifying the system that something happened. In the second, the system captures only the notifications it is responsible for, and a consumer may in turn publish and subscribe to further events. Events can be generated internally or externally; exception scenarios and data manipulation can both trigger them.
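As an illustration, an event could look like the plain Java class below: an immutable notification describing a state change that has already happened. The class and field names are assumptions, loosely based on the transfer example used later in this article.

```java
import java.time.Instant;
import java.util.UUID;

// An event is an immutable record of something that already happened;
// it carries just enough data for consumers to react to the state change.
public final class TransferCompletedEvent {

    private final UUID transferId;
    private final String account;
    private final Instant occurredAt;

    public TransferCompletedEvent(UUID transferId, String account) {
        this.transferId = transferId;
        this.account = account;
        this.occurredAt = Instant.now(); // when the state change happened
    }

    public UUID getTransferId() { return transferId; }
    public String getAccount() { return account; }
    public Instant getOccurredAt() { return occurredAt; }
}
```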

Event Sourcing
The fundamental idea of Event Sourcing is to ensure that every change to an application's state is captured in an event object, and that these event objects are stored in the sequence in which they were applied, for the same lifetime as the application state itself.
Following the principle defined by Martin Fowler, event sourcing gives us traceability: we store the entire timeline of our events. This approach is essential for monitoring, since any failure that generates data inconsistency can be traced through the log, and it enables auditing. Auditing is also of high value to the business team, which can extract essential information from the event history. Alongside event sourcing, another capability can be adopted: CQRS (Command Query Responsibility Segregation).
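Here is a minimal sketch of the idea in plain Java, with hypothetical names (AccountEventStore, BalanceChangedEvent): state changes are only ever appended, and the current state is rebuilt by replaying the log.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Append-only store: every state change is kept in the order it was applied,
// so the current state can always be rebuilt by replaying the events.
public class AccountEventStore {

    private final List<BalanceChangedEvent> log = new ArrayList<>();

    public void append(BalanceChangedEvent event) {
        log.add(event); // never updated in place, only appended
    }

    public List<BalanceChangedEvent> history() {
        return Collections.unmodifiableList(log); // full timeline, useful for audits
    }

    // Replaying the whole log yields the current balance.
    public long currentBalance() {
        return log.stream().mapToLong(BalanceChangedEvent::amount).sum();
    }

    // Hypothetical event: a signed amount applied to the account.
    public record BalanceChangedEvent(long amount) {}
}
```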

CQRS
CQRS stands for Command Query Responsibility Segregation. It is a pattern first described by Greg Young. At its heart is the notion that you can use a different model to update information than the model you use to read it. For some situations this separation can be valuable, but beware: for most systems, CQRS adds risky complexity.
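A minimal sketch of the idea, with hypothetical class names (TransferCommandModel, TransferReadModel): the command side enforces business rules and emits changes, while the read side keeps its own view purely for answering queries.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Write side: the command model validates and applies changes,
// then notifies interested parties of the resulting event.
public class TransferCommandModel {

    private final List<Consumer<Long>> listeners = new ArrayList<>();

    public void onTransfer(Consumer<Long> listener) {
        listeners.add(listener);
    }

    public void transfer(long amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive"); // business rule
        }
        listeners.forEach(l -> l.accept(amount)); // emit the change
    }
}

// Read side: a separate, denormalized view kept up to date from events,
// optimized purely for queries.
class TransferReadModel {

    private long total;

    TransferReadModel(TransferCommandModel commands) {
        commands.onTransfer(amount -> total += amount); // project the event
    }

    public long totalTransferred() {
        return total;
    }
}
```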

Middleware Event Managers
There are several event middlewares: Kafka, RabbitMQ, ActiveMQ, and so on. These tools are data streaming platforms capable of publishing and subscribing, storing, and processing events in real time, which makes them essential for this architecture. They are key pieces in making your solution scalable and fast, and they also help reduce latency.

Event Architecture Model
An architecture for this purpose can be oriented in two ways: publish/subscribe (pub/sub) or event broadcasting.

Pub/Sub model
It is a subscription-based messaging infrastructure. In this model, after an event occurs or is published, it is sent to subscribers who need to be informed.
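To make the idea concrete, here is a toy in-memory version of the pattern; in a real system this role belongs to a broker such as RabbitMQ or Kafka, and all names here are hypothetical.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy broker: publishers never know who the subscribers are;
// the broker delivers each published event to every current subscriber.
public class InMemoryPubSub {

    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, String event) {
        subscribers.getOrDefault(topic, List.of())
                .forEach(handler -> handler.accept(event)); // fan out to all subscribers
    }
}
```

Note that the publisher only calls publish; it never learns which handlers, if any, received the event. That is the same decoupling a real broker provides at scale.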

Event transmission model
In the broadcast model, events are written to a log. Consumers do not need to subscribe to a particular event stream; they can read from any part of the log and join it at any time. There are several types of event processing: event flow, simple events, and complex events.
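As an illustration of joining the log at an arbitrary point, below is a sketch of a late-joining Kafka consumer that starts from the oldest available record. The topic name reuses the one configured later in this article; the broker address and group id are assumptions.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LogReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "late-joiner");
        // A new consumer can join at any time and start from the oldest
        // record still in the log instead of only seeing new events.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("transferencia-processing"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}
```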

Event Flow
In this case, transmission is done via RabbitMQ or Kafka, where the events that make up the flow are inserted and processed as a stream. This is the pattern used when there is an expressive number of services and a high volume of events exchanged in the processing flow.

Simple Event
An event immediately triggers an action on the consumer.

Complex Event
It is necessary to consume numerous events in order to obtain the required information. An example would be a customer's credit approval.

Positive points of event architecture
As we come to understand events and start orienting our systems around them, we gain some benefits as a consequence. Due to the loose coupling, the system becomes more flexible, which allows us to implement new features more easily. Other interesting capabilities are real-time monitoring, improved scalability, and better responsiveness.

Negative points of event architecture
There is no silver bullet that solves every problem; everything has benefits and drawbacks, and it is no different with events. When you adopt this architecture you are of course aiming for the benefits, but do not forget that complexity increases, managing the data flow becomes much more costly due to concurrency, and there is a risk of data loss if the system is not implemented in a fault-tolerant way.

Spring Cloud Stream
Spring Cloud Stream is a framework that facilitates the implementation of event orientation, today in its 3.x version line. Cloud Stream makes implementation quite easy, providing ready-made support for several event models, information persistence, and state partitioning. It supports the following binders: RabbitMQ, Kafka, Kinesis, Solace, Azure Event Hub, and Google Pub/Sub. Its use does not require lengthy configuration: once the dependency is added, you only need to set up the application properties file. First, define your binder; in my example below I use Rabbit, configured with host, port, username, and password. After defining the binder, define the instance count and index, and then your bindings, with the name of your output event or of its listener. Next, define the name of your topic (the destination), the group, and the content type; I usually use application/json. Finally, define the details of your event: the producer sets a partition key taken from a message header, and the listener defines its consumption settings. The file will look like the following example:

#SPRING CLOUD STREAM
spring.cloud.stream.default-binder=rabbit
spring.cloud.stream.instance-count=1
spring.cloud.stream.instance-index=0
spring.cloud.stream.binding-retry-interval=0

#INPUT EVENT
spring.cloud.stream.bindings.input.destination=transferencia-processing
spring.cloud.stream.bindings.input.group=group1
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.consumer.concurrency=2
spring.cloud.stream.bindings.input.consumer.partitioned=true
spring.cloud.stream.bindings.input.consumer.max-attempts=3
spring.cloud.stream.bindings.input.consumer.back-off-initial-interval=1000
spring.cloud.stream.bindings.input.consumer.back-off-max-interval=1000

Consumer setup for Spring Cloud Stream in the properties file (the input binding above).

#OUTPUT EVENT
spring.cloud.stream.bindings.output.destination=transferencia-processing
spring.cloud.stream.bindings.output.group=group1
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.bindings.output.producer.partition-key-expression=headers['TRANSFERENCIA']
spring.cloud.stream.bindings.output.producer.partition-count=1
spring.cloud.stream.bindings.output.producer.required-groups=group1
spring.cloud.stream.rabbit.bindings.input.consumer.transacted=true

Producer setup for Spring Cloud Stream in the properties file (the output binding above).

Binder Configuration
You can use the message broker of your choice; in this case, RabbitMQ was used by default.

spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

After this configuration and the definition of your events, you will have taken your first steps with event-driven architecture and Spring Cloud Stream.
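To tie the configuration to code, here is a minimal sketch using the annotation-based programming model that the input and output binding names above come from (later Spring Cloud Stream versions favor the functional style instead); the class name and the processing logic are hypothetical.

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.Message;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.support.MessageBuilder;

// Binds to the "input" and "output" destinations configured above.
@EnableBinding(Processor.class)
public class TransferenciaProcessor {

    @StreamListener(Processor.INPUT)   // consumes from transferencia-processing
    @SendTo(Processor.OUTPUT)          // publishes the result to the output binding
    public Message<String> handle(String payload) {
        // Hypothetical processing step; real code would deserialize the JSON
        // payload and apply business rules before replying.
        return MessageBuilder.withPayload(payload.toUpperCase())
                // This header feeds the partition-key-expression defined
                // in the producer properties above.
                .setHeader("TRANSFERENCIA", payload)
                .build();
    }
}
```

Note that the TRANSFERENCIA header set on the outgoing message is what the partition-key-expression in the producer properties reads to choose a partition.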

Here we have a repository where we implement a microservice using the event model with RabbitMQ.
