DEV Community

Michael Davis for Solace Developers

Originally published at solace.com

Making Event-Driven Music with MIDI, Solace, and Slack

I am a software developer with an interest in music. My professional work has been almost completely unrelated to music, but I recently had an opportunity to get about 200 people, mostly non-musicians, to create a couple of minutes of unique, spontaneous music using a Solace message router, Slack (the communications product), a software synthesizer, and a little bit of code. This article describes how it worked. It also includes links to the software that I used and samples that sound like that music.

For those of you who are not familiar with event brokers like our PubSub+, they are computer components that store and forward messages. The most common scenario is for some system called a “publisher” to send messages to the broker, and the broker sends the messages to other systems called “subscribers”. Messages can be temperature sensor readings, stock market transactions, horse racing bets, weather information, or anything else that can be represented as computer data. The message broker can be just software or a hardware appliance when high performance and reliability are required. Solace makes both and provides a free version of the PubSub+ software broker and a cloud service. Most users of message routers are in the commercial and industrial worlds, but I’m interested in exploring the use of message routers in the fine arts.

The music industry already has support for a type of messaging called MIDI (Musical Instrument Digital Interface). This was created in the early 1980s and quickly became an industry standard. For those not familiar with it, MIDI defines a hardware interface (connectors and cords), a wire protocol (how and which data gets sent through the wire), and a file format for storing musical pieces. MIDI does not deal directly with audio or sound. Rather, it works with a simplified notion of notes. Imagine a keyboard player hitting a note. MIDI will create an event when the key is depressed, called a “Note On” message, containing a number corresponding to the key (e.g. 60 for Middle C), a number representing how hard the key was struck (named Velocity in MIDI terminology), and a number indicating which instrument was being played (called the Channel). When the player releases the key, a “Note Off” message gets sent.

MIDI cable

MIDI can also transmit messages for various other instrument controls such as pressing on the sustain pedal or turning a pitch bend wheel. Up to 16 different instruments can share the same connection.
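To make the wire format concrete, here is a minimal sketch in Java of how a Note On/Note Off pair is laid out as raw bytes. The class and method names are mine; the byte layout (a status byte of 0x90 or 0x80 combined with the channel, then the key number, then the velocity) comes from the MIDI specification:

```java
public class RawMidiBytes {
    // Encode a Note On as the three raw bytes sent over the wire:
    // status (0x90 | channel), key number, velocity.
    public static byte[] noteOnBytes(int channel, int key, int velocity) {
        return new byte[]{(byte) (0x90 | (channel & 0x0F)), (byte) key, (byte) velocity};
    }

    // The matching Note Off uses status 0x80; its velocity is conventionally 0.
    public static byte[] noteOffBytes(int channel, int key) {
        return new byte[]{(byte) (0x80 | (channel & 0x0F)), (byte) key, (byte) 0};
    }

    public static void main(String[] args) {
        // Middle C (key 60) on channel 0, struck moderately hard
        for (byte b : noteOnBytes(0, 60, 96)) {
            System.out.printf("0x%02X ", b);
        }
        System.out.println();
    }
}
```

So a keystroke travels as just three bytes, which is part of why MIDI remained practical on 1980s hardware.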

It seemed to me that the simplest, most obvious first step in exploring music and messaging would be to send MIDI messages through a message broker. But how could that be useful? Most music depends on fairly precise timing. Even MIDI introduces unacceptable latency when the cables are too long or more than a couple of devices are daisy-chained together. But sending messages through a computer network usually introduces much larger timing problems.

Some forms of modern art music, pioneered by 20th-century composers, are more tolerant of imprecise timing. For example, the electronic works Poeme Electronique by Edgard Varèse (1958) and Kontakte by Karlheinz Stockhausen (1960) have no discernible rhythm. If one is working within that genre, network latency becomes a nonissue.

At Solace we have an annual event called Kanata Day, which is a kind of company-wide conference. We developers are encouraged to write demos that show off our products. I saw that as an opportunity to try out an idea. Why not have the 200-odd people in the room all send MIDI notes to a message broker and have a program collect the notes and play them on an instrument?

What could I use to enable a group of people to send musical notes? One idea was to have a Web application that had a few buttons that people could use on their phones to trigger sounds. That would have worked, but it would have been more limited than I would have liked.

We all use Slack extensively at work and have it on our phones. I decided that it would be easy for people to just send text messages to a Slack bot and have some way of translating text to MIDI notes. So, I wrote a bot that does just that. It maps each letter to a note based on the frequency with which the letter appears in the English language. The most frequently used letter is ‘e’, and I mapped that to Middle C (a note near the middle of a piano keyboard). I mapped the remaining letters, from most frequently used to most rarely used, to notes increasingly distant harmonically from Middle C. The image below shows which letters are mapped to which keys:

Letter-to-key assignment used in the Slack bot

Whenever someone sends a message, the Slack bot picks a time duration: one of 250, 375, 500, 750, 1000, or 1500 milliseconds. That duration is used for all the notes in that one message. Furthermore, if the user enters a capital letter, the duration is doubled for just that letter. That adds some temporal variety to the music. By the way, it is possible to play with this software locally without using Slack; this is documented in the slack-midi-web repo that is described below.
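The duration logic can be sketched in a few lines of Java. Note the heavy caveats: only the ‘e’ → Middle C (60) mapping and the six durations come from the description above; the other letter-to-key entries and the class name are illustrative placeholders, not the bot's real mapping:

```java
import java.util.Map;
import java.util.Random;

public class TextToNotes {
    // Only 'e' -> 60 (Middle C) is taken from the article; the other
    // entries here are placeholders, not the bot's actual assignment.
    static final Map<Character, Integer> LETTER_TO_KEY = Map.of(
            'e', 60, 't', 67, 'a', 64, 'o', 55, 'i', 72);

    // The six durations the bot chooses from, in milliseconds.
    static final int[] DURATIONS_MS = {250, 375, 500, 750, 1000, 1500};

    // One base duration is picked per message...
    public static int pickBaseDuration(Random rng) {
        return DURATIONS_MS[rng.nextInt(DURATIONS_MS.length)];
    }

    // ...and a capital letter doubles it for that letter only.
    public static int durationFor(char c, int baseMs) {
        return Character.isUpperCase(c) ? baseMs * 2 : baseMs;
    }

    public static void main(String[] args) {
        int base = pickBaseDuration(new Random());
        for (char c : "Beep".toCharArray()) {
            Integer key = LETTER_TO_KEY.get(Character.toLowerCase(c));
            System.out.printf("%c -> key %s, %d ms%n", c, key, durationFor(c, base));
        }
    }
}
```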

People can pick one of 10 instruments by typing a number between 0 and 9. The instruments are:

  • 0: Harpsichord (the default)
  • 1: Trumpet
  • 2: Bass guitar
  • 3: Electric guitar
  • 4: Electric organ
  • 5: Baa box (a toy that makes sheep sounds)
  • 6: Grand piano
  • 7: Double bass
  • 8: Synthesizer
  • 9: Drums

For example, if a user types “I saw 3 dogs”, then the first four notes would be played by the harpsichord and the last four would be played by the electric guitar.
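The instrument-switching pass can be sketched as a simple state machine over the message text. This is my own illustration of the rule as described, not the bot's code, and it deliberately leaves open how an instrument number ends up in MIDI (per-channel assignment vs. a Program Change message is not spelled out in the article):

```java
import java.util.ArrayList;
import java.util.List;

public class InstrumentSwitcher {
    // Walk through a message: digits 0-9 switch the current instrument,
    // letters become notes on that instrument, everything else is a rest.
    public static List<String> notesWithInstrument(String text) {
        List<String> out = new ArrayList<>();
        int instrument = 0; // 0 = harpsichord, the default
        for (char c : text.toCharArray()) {
            if (Character.isDigit(c)) {
                instrument = c - '0';
            } else if (Character.isLetter(c)) {
                out.add(instrument + ":" + c);
            }
            // any other character is a rest: no note emitted
        }
        return out;
    }

    public static void main(String[] args) {
        // "I saw 3 dogs": four harpsichord notes, then four electric-guitar notes
        System.out.println(notesWithInstrument("I saw 3 dogs"));
    }
}
```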

The loudness of each note is determined randomly, within a reasonable range.

Any characters that are neither letters nor numerals are treated as rests; that is, no sound is played for them.

For each letter in a message, the Slack bot sends a MIDI “Note On” message with the appropriate pitch, loudness, and choice of instrument. Then a corresponding “Note Off” message is sent. The messages conform to the actual MIDI specification and are just three bytes long.

At the receiving end is a Java program I named “midi-mercury” that receives the messages from the message broker and sends them to a MIDI device. Java has good built-in support for MIDI and can easily communicate with any connected MIDI device or software instrument. The program lists the devices it sees and lets the user select one. Java also provides a built-in software synthesizer that can play the sounds if no other devices are present. The program can also work in the other direction, sending MIDI notes from a device to a message broker. During the demo I used Steinberg’s Halion 6 as the software instrument.
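Java's built-in MIDI support lives in the javax.sound.midi package. As a small taste of the device enumeration midi-mercury performs (the class name is mine; `MidiSystem.getMidiDeviceInfo()` is the standard API):

```java
import javax.sound.midi.MidiDevice;
import javax.sound.midi.MidiSystem;

public class ListMidiDevices {
    // Enumerate every MIDI device Java can see, so the user can pick one.
    public static MidiDevice.Info[] devices() {
        return MidiSystem.getMidiDeviceInfo();
    }

    public static void main(String[] args) {
        MidiDevice.Info[] infos = devices();
        for (int i = 0; i < infos.length; i++) {
            System.out.printf("%d: %s - %s%n",
                    i, infos[i].getName(), infos[i].getDescription());
        }
    }
}
```

Once a device is chosen, notes can be delivered to it through its `Receiver`; the built-in software synthesizer shows up in this same list, which is what makes the no-hardware fallback possible.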

Here are some recordings that sound very similar to what we heard that day:

Programs I Used

The Java command-line program “midi-mercury” (available on GitHub) moves data to and from the Solace message broker and MIDI devices. Currently it only supports “Note On” and “Note Off” messages. The user can choose to convert the MIDI messages to a JSON format to enable interoperability with other systems. For example, one could write a program that generates notes and sends them to a message broker, and for such a program it might be more convenient to send text rather than binary data.
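The gain of a JSON option is that any language can produce or consume the events without knowing MIDI's binary layout. A rough sketch of such a conversion is below; the field names are my own guesses for illustration, and midi-mercury's actual JSON schema may differ:

```java
public class MidiToJson {
    // Render a three-byte MIDI note message as JSON.
    // These field names are illustrative, not midi-mercury's real schema.
    public static String toJson(String type, int channel, int key, int velocity) {
        return String.format(
                "{\"type\":\"%s\",\"channel\":%d,\"key\":%d,\"velocity\":%d}",
                type, channel, key, velocity);
    }

    public static void main(String[] args) {
        System.out.println(toJson("NOTE_ON", 0, 60, 96));
    }
}
```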

slack-midi-web is a NodeJS program. It was written as a Slack bot, but since it runs as a Web server, it can also be run standalone: one can send it HTTP POST requests and have it process the text just as if it had received a Slack message (it comes with a test script that does just that). It converts text to MIDI messages as described above and sends them to a Solace message router. It too is available on GitHub. For the Kanata Day demo, I deployed it in a Kubernetes cluster on Google Cloud; the scripts that enable that are included in the repository.
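Driving the standalone server from another program amounts to building an HTTP POST. A hedged sketch using Java's standard `java.net.http` API follows; the URL, port, and plain-text body are placeholder assumptions on my part, so check slack-midi-web's bundled test script for the real endpoint and payload shape:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PostToSlackMidiWeb {
    // Build an HTTP POST carrying the text to be played.
    // The URL and body format are placeholders, not slack-midi-web's
    // documented contract.
    public static HttpRequest buildRequest(String url, String text) {
        return HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString(text))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRequest("http://localhost:3000/", "Hello e");
        // To actually send it: HttpClient.newHttpClient().send(req, ...)
        System.out.println(req.method() + " " + req.uri());
    }
}
```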

