
Implementing Non-Trivial Containerized Systems - Part 1: Picking Components

James Hunt
James is a cloud-native kinda guy who likes new tech and beautiful interfaces (graphical or textual). He's contributed to SHIELD, Genesis, safe, spruce, BOSH, Tweed, and other open source projects.

So, you want to start a radio station, eh?

This is the first part of a multi-part series on designing and building non-trivial containerized solutions. We're making a radio station using off-the-shelf components and some home-spun software, all on top of Docker, Docker Compose, and eventually, Kubernetes.

In this part, we're going to explore how the different parts of the system interface with one another, to set the stage for our next post, where we Dockerize everything!

I first met Icecast (https://icecast.org/) when I worked at a web-hosting startup around the turn of the millennium. One night, one of my co-workers and I had the crazy idea to load a bunch of audio files on the networked file server and stream them to our workstations. We could listen to music while we worked 90+ hours a week. Strange times. After realizing it wasn't as simple as exporting .ogg files over HTTP, we found Icecast (and its pal, Ices2) and built a rudimentary, local-network broadcast radio station.
That's sort of what Icecast does – it makes streams of audio available to clients (er, listeners) over HTTP. These streams are analogous to old-school AM/FM radio stations. The listener can't pick what gets played (not directly, at least), and they can't control where they are in the stream. Wherever they connect, that's what they listen to.

Icecast is actually more of a go-between. It will keep track of which streams exist, and connect listeners to those streams, but it is not responsible for what's in the streams. For that, you need a source.
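To make that concrete, here's a minimal slice of an `icecast.xml` configuration showing the pieces that matter for us: the password a source must present, and the port listeners connect to. The hostname, port, and passwords here are placeholder assumptions, not values from this series.

```xml
<icecast>
  <hostname>radio.example.com</hostname>
  <authentication>
    <!-- sources (our stream generator) authenticate with this password -->
    <source-password>hackme</source-password>
    <admin-user>admin</admin-user>
    <admin-password>hackme</admin-password>
  </authentication>
  <listen-socket>
    <!-- listeners tune in over HTTP on this port -->
    <port>8000</port>
  </listen-socket>
</icecast>
```

With that running, a listener "tunes in" by pointing any HTTP-capable audio player at `http://radio.example.com:8000/<mountpoint>`.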

A source can be just about anything: a physical audio input device (i.e., a microphone), a CD player, audio files, you name it. The go-to source driver for Icecast is Ices, but we're going to use something with a bit more functionality: a little something called LiquidSoap.
LiquidSoap (https://www.liquidsoap.info/) is really cool. Like, phenomenally powerful. We're going to use it to combine multiple audio files into a single stream and send that off to Icecast. This doesn't even begin to scratch the surface of what LiquidSoap is capable of. I hope it isn't insulted.
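As a taste of what that looks like, here's a small LiquidSoap script in the spirit of what we'll build. The music directory, mountpoint, and password are assumptions for illustration; the real script comes later in the series.

```liquidsoap
# Build a source from a directory of audio files, shuffling between tracks.
radio = playlist(mode="randomize", "/var/lib/radio/music")

# Hand the stream to Icecast as an Ogg/Vorbis mountpoint.
output.icecast(%vorbis,
  host="localhost", port=8000,
  password="hackme", mount="radio.ogg",
  radio)
```

Two declarations, and you have a shuffling radio station feeding Icecast. That economy is a big part of why we're picking it over Ices.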

Finally, we need to talk about where we are going to get the actual audio that we want LiquidSoap to bundle up and stream to Icecast. If I were a good teacher, this paragraph would end with a link to an S3 bucket or a GitHub project or some internet endpoint where you can download test files to play with, and we could move on.

I'm not a good teacher. You're going to work for that audio.

We're actually going to build a means of acquiring audio into the system itself, using YouTube as our primary source. For that, we'll use a handy little tool called youtube-dl, in concert with ffmpeg, another sharp little tool that makes quick work of audio and video files. Here's how it will work:
We start with a YouTube video URL, which we feed to youtube-dl. From that, we get a video file on disk. Since we're only interested in the audio track, we'll feed that video file into ffmpeg to rip out just the audio into a more appropriate audio format (like Ogg/Opus).
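Sketched as shell commands, the pipeline looks something like this. The URL and filenames are placeholders, and this assumes youtube-dl and ffmpeg are already installed:

```shell
# Download the video; -o sets youtube-dl's output filename template.
youtube-dl -o 'download.%(ext)s' "https://www.youtube.com/watch?v=VIDEO_ID"

# Strip the video track (-vn) and transcode the audio to Ogg/Opus.
# (The input extension depends on which format youtube-dl actually picked.)
ffmpeg -i download.mp4 -vn -c:a libopus track.opus
```

The resulting `track.opus` is exactly the kind of file we'll drop into the music directory for LiquidSoap to pick up.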

We have arrived at our final architecture.

Next up, we'll start putting things into containers, and see if our design will actually work.
