Rogério Ramos

Migrating Spring Cloud Stream v2 to v3

Cloud Stream v2 vs v3

If you have landed here, there is a good chance you are looking for a way to migrate Spring Cloud Stream (SCS) to version 3.x, where almost everything has changed.

Version 3.x focuses mainly on serverless applications, where a service executes a single function and the combination of all functions performs a workflow or process for a given feature through Consumer, Supplier and/or Function beans. Not every application uses this new approach yet, although it is the current trend.

In version 2.x, Sink, Source and StreamListener make up the set of configurations that allows an application to consume messages from multiple upstream sources to execute a flow, as well as to publish messages with the resulting output of that flow.

This post aims to demonstrate how the migration from version 2.x to 3.x can be done.

The idea behind Cloud Function (CF) together with SCS is that a FaaS application exposes a single function with a single responsibility.

A non-FaaS application can consume information from several sources, which is a quite common approach in SCS v2.x. Below you will see how to migrate such an application to SCS v3.x. If you have already checked what's new in SCS version 3, you will have noticed that the Sink, Source and Processor definitions were replaced by Consumer, Supplier and Function beans.

The mapping of a function name to a topic or queue might look unclear at first sight. In SCS v2.x this used to be configured explicitly, and it was easy to identify which topic or queue the application would consume via the Sink configuration; now you have to map the function name directly to the input or output configured in application.yaml.

Personally, I didn't like this configuration that much, although I don't rule out that this is just a first impression or the natural resistance to novelty. Depending on the number of topics/queues, it may produce a big, unreadable list of mapped functions (spring.cloud.stream.function.definition) in the configuration file. I have not checked whether there is a limit, if any, on the size of this property value.

Configuration

The application.yaml file mixes Spring Cloud Stream and Spring Cloud Function configurations; the function and function definition properties belong to the Spring Cloud Function configuration.

There aren't big differences in the configuration, except for the spring.cloud.stream.function.definition property, which lists the functions to be mapped to the Consumer, Supplier and Function bean names. The standard suffixes for input and output bindings are <function-name>-in-0 and <function-name>-out-0.
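As an illustration, a minimal application.yaml could look like the sketch below; the function names consumeOrder and publishOrder and the destinations orders and processed-orders are hypothetical:

```yaml
spring:
  cloud:
    stream:
      function:
        # maps the Consumer/Supplier/Function bean names to bindings
        # (multiple functions are separated by ";")
        definition: consumeOrder;publishOrder
      bindings:
        # input binding: <function-name>-in-0
        consumeOrder-in-0:
          destination: orders
          group: order-service
        # output binding: <function-name>-out-0
        publishOrder-out-0:
          destination: processed-orders
```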

Consuming messages

In version 2, the Sink configures which topic/queue will be listened to for input messages, together with the @StreamListener annotation, which allows many configurations, including filtering messages from upstream systems (see the sketch below).
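A minimal sketch of the v2 model, assuming a hypothetical "type" header used to filter the messages:

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Spring Cloud Stream v2 (deprecated model): Sink binding + @StreamListener
@EnableBinding(Sink.class)
public class OrderListenerV2 {

    // The annotation's condition attribute filters messages declaratively,
    // here by a hypothetical "type" header
    @StreamListener(target = Sink.INPUT, condition = "headers['type'] == 'ORDER_CREATED'")
    public void handle(String payload) {
        System.out.println("Received: " + payload);
    }
}
```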
In version 3, a Consumer function exposed as a @Bean is used, and the filter that was previously defined via the annotation is now done programmatically (see the sketch below).
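The same consumer in the v3 model could look like the sketch below, with the bean name consumeOrder matching the hypothetical consumeOrder-in-0 binding from the configuration sketch above and the filter moved into the code:

```java
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;

// Spring Cloud Stream v3: a Consumer bean bound via consumeOrder-in-0
@Configuration
public class OrderListenerV3 {

    @Bean
    public Consumer<Message<String>> consumeOrder() {
        return message -> {
            // Filtering is now programmatic instead of annotation-driven
            Object type = message.getHeaders().get("type");
            if (!"ORDER_CREATED".equals(type)) {
                return; // ignore messages we are not interested in
            }
            System.out.println("Received: " + message.getPayload());
        };
    }
}
```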

Producing messages

The recommended way to produce messages is quite similar to the consumer function, using Supplier instead of Consumer, but an easier option in my opinion is the StreamBridge component, which allows sending messages dynamically just by passing the destination output (see the sketch below).
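A minimal StreamBridge sketch, assuming the hypothetical publishOrder-out-0 binding from the configuration sketch above:

```java
import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.stereotype.Service;

// Sends messages dynamically to the binding/destination passed as the first argument
@Service
public class OrderPublisher {

    private final StreamBridge streamBridge;

    public OrderPublisher(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    public void publish(String payload) {
        // "publishOrder-out-0" is the hypothetical output binding from the YAML sketch
        streamBridge.send("publishOrder-out-0", payload);
    }
}
```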

CloudEvents (optional)

CloudEvents is a heavily used message specification whose adoption is growing very fast. Basically, it wraps the message payload in an "envelope", enriching the message with attributes like source, publish time, type, content type, etc.

In the example below you can see the event serialization and deserialization. Keep in mind that using CloudEvents is completely optional.
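A minimal sketch of building, serializing and deserializing a CloudEvent with the Java SDK, assuming the cloudevents-core and cloudevents-json-jackson dependencies are on the classpath (the id, type, source and payload below are hypothetical):

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.time.OffsetDateTime;

import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;
import io.cloudevents.core.format.EventFormat;
import io.cloudevents.core.provider.EventFormatProvider;
import io.cloudevents.jackson.JsonFormat;

public class CloudEventExample {

    public static void main(String[] args) {
        // Build an event: the payload is wrapped in the CloudEvents "envelope"
        CloudEvent event = CloudEventBuilder.v1()
                .withId("42")
                .withSource(URI.create("https://example.com/order-service"))
                .withType("com.example.order.created")
                .withTime(OffsetDateTime.now())
                .withDataContentType("application/json")
                .withData("{\"orderId\":\"42\"}".getBytes(StandardCharsets.UTF_8))
                .build();

        // Serialize to the structured JSON format (application/cloudevents+json)
        EventFormat format = EventFormatProvider.getInstance()
                .resolveFormat(JsonFormat.CONTENT_TYPE);
        byte[] serialized = format.serialize(event);

        // Deserialize back into a CloudEvent
        CloudEvent restored = format.deserialize(serialized);
        System.out.println(restored.getType() + " from " + restored.getSource());
    }
}
```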

The CloudEvents SDK offers many conveniences that can be found here, e.g. the converter below, which enables the integration with Spring:
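A sketch of registering the SDK's CloudEventMessageConverter (from the cloudevents-spring module), so that bound functions can consume and produce CloudEvent instances directly:

```java
import io.cloudevents.spring.messaging.CloudEventMessageConverter;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.converter.MessageConverter;

// Registers the CloudEvents <-> Spring Message converter for the binder
@Configuration
public class CloudEventConverterConfig {

    @Bean
    public MessageConverter cloudEventMessageConverter() {
        return new CloudEventMessageConverter();
    }
}
```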

Conclusion and next steps

The goal here is to show, with examples and configurations, what has changed between SCS versions 2.x and 3.x and how such an upgrade can be done.

It's an abrupt change in how SCS is used, and it can cause resistance to adoption, keeping the already deprecated programming model in use for a while. On the other hand, it can impose some challenges on running applications, which will require a consistent set of tests, mainly load and resiliency tests.

Tests were not covered this time; they have also changed quite a lot and will be covered in further posts, but if you cannot wait, here you will find the steps to move forward.

I hope this info is helpful, and don't hold yourself back from sharing it everywhere.

This post was originally published here in Brazilian Portuguese.

cya

-Rogério
