DEV Community

Dylan Morley


Azure Service Bus - Replay Messages CLI

When working with an asynchronous messaging product such as Azure Service Bus (ASB), you're going to be working with queues and publish/subscribe scenarios. In either case, there will be times when message processing isn't successful - perhaps required data isn't available, or perhaps a dependent system is offline.

When this happens, you need to handle the failures, and what to do depends on your implementation pattern. Generally, after a certain number of attempts by your compute to process the message, the transport will dead-letter it. You may be using the dead-letter sub-queue of the ASB entity as your strategy for failed attempts, in which case messages will sit there until you do something about them.
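The retry-then-dead-letter flow can be sketched in a few lines. This is a simplified, in-memory illustration, not the ASB SDK: the limit of five attempts, the `Message` shape, and the list standing in for the dead-letter sub-queue are all assumptions for the example (ASB's own MaxDeliveryCount defaults to 10).

```python
from dataclasses import dataclass

MAX_DELIVERY_COUNT = 5  # hypothetical limit for the sketch; ASB defaults to 10


@dataclass
class Message:
    body: str
    delivery_count: int = 0


def process(message, handler, dead_letter_queue):
    """Try the handler; on failure re-deliver until the limit, then dead-letter."""
    while message.delivery_count < MAX_DELIVERY_COUNT:
        message.delivery_count += 1
        try:
            handler(message)
            return True  # processed successfully
        except Exception:
            continue  # simulate the transport re-delivering the message
    dead_letter_queue.append(message)  # attempts exhausted: dead-letter it
    return False
```

A handler that always throws (say, because a dependency is offline) leaves the message in the dead-letter list with its delivery count exhausted - which is exactly the backlog the rest of this post is about clearing.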

You could also be sending failures on to a centralised errors entity - but whatever pattern has been chosen, you'll have some outstanding messages to process. At small message volumes this is achievable with tools such as Service Bus Explorer and the inbuilt portal tooling, but it soon becomes problematic if you have many thousands of messages to deal with.

While your systems should be as self-healing as possible, there are times when you need to intervene, and you'd like an automated way to deal with these scenarios.

For example, simply replaying messages the moment they dead-letter isn't a good strategy - if there's a genuine system problem, you're just going to create a request storm of errors. You want to replay only once you're confident that processing will succeed, and you want an automated way to do this.
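One way to avoid that request storm is to gate the replay behind a health check and stop draining as soon as the dependency looks unhealthy. A minimal sketch, where the health-check callable and the in-memory list standing in for the dead-letter queue are placeholders rather than anything the tool exposes:

```python
def replay_dead_letters(dead_letter_queue, handler, is_healthy):
    """Drain the dead-letter queue only while a health gate passes.

    Stops early and leaves the remaining messages for a later run,
    rather than hammering a downed dependency with retries.
    """
    replayed = []
    while dead_letter_queue:
        if not is_healthy():
            break  # dependency still down: don't create a request storm
        message = dead_letter_queue.pop(0)
        handler(message)
        replayed.append(message)
    return replayed
```

The important design choice is checking the gate on every message rather than once up front, so a dependency that degrades mid-replay stops the drain instead of generating a fresh wave of failures.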

CLI support

There are numerous ways you can replay messages - you could write a function app, a logic app, or provision any other type of compute that allows you to execute some code.

However, replaying messages felt like a common enough problem that we could offer a reusable solution. Creating CLI support would make this easy for anyone to consume - as a CLI tool, it can be installed easily and becomes a cross-platform solution that lets people work the way they want.

We therefore created Asos.ServiceBus.MessageSiphon, which allows you to define a configuration file that represents the message work you want to perform.

In this example, we're connecting to a source namespace using a SAS key, peeking messages from a topic-subscription and cloning them into another namespace, this time connecting using RBAC.

```json
{
    "Logging": {
        "LogLevel": {
            "Default": "Information"
        }
    },
    "ReplayMessagesJob": {
        "JobType": "SourceToTarget",
        "JobName": "Clone-Message-To-Other-Namespace",
        "NumberOfConcurrentProcesses": 5,
        "ServiceBusDetails": [
            {
                "Name": "Source",
                "ConnectionMode": "ConnectionString",
                "ConnectionString": "Endpoint=sb://;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=access-key"
            },
            {
                "Name": "Target",
                "ConnectionMode": "Rbac",
                "FullyQualifiedNamespace": ""
            }
        ],
        "SiphonWork": [
            {
                "SiphonMode": "Clone",
                "SourceConnectionName": "Source",
                "SourceEntity": "topic-entity",
                "SourceSubscriptions": [ "test-subscription" ],
                "SourceBatchReceiveSize": 40,
                "TargetConnectionName": "Target",
                "TargetEntity": "copy-of-topic"
            }
        ]
    }
}
```

There are various examples in the README we've published with the package, but usage is always the same: define a configuration file, then execute it via the CLI after installing the tool.

```shell
siphon-asb-messages -n D:\temp\file-with-config.json
```

Wrapping up

By using a configuration-file-based CLI tool, we put the power in the hands of the user and allow them to define various configurations - supporting a variety of common scenarios and ways to filter the messages, such as by age or message header.

The tool can be installed on build agent pipelines and executed on a schedule, or can be installed by any engineer in their development environment.

This allows you to handle requirements such as network- and RBAC-restricted namespaces. Instead of allowing engineers to connect to namespaces and manipulate data directly, you can define the work as a job that's executed from a build pipeline in a controlled way. An example Azure DevOps pipeline is included in the README.
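As a rough sketch of what such a scheduled pipeline stage could look like - the install command, cron schedule, and config path below are assumptions for illustration; the README's published example is the authoritative one:

```yaml
# Hypothetical Azure DevOps pipeline sketch (see the package README for the real example)
schedules:
  - cron: "0 6 * * *"          # run the replay job daily at 06:00 UTC
    branches:
      include: [ main ]
    always: true

steps:
  # Assumes the package is published as a .NET global tool
  - script: dotnet tool install --global Asos.ServiceBus.MessageSiphon
    displayName: Install the siphon CLI
  - script: siphon-asb-messages -n $(Build.SourcesDirectory)/replay-config.json
    displayName: Replay dead-lettered messages
```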

Source code will be available on GitHub shortly.
