Andrew Luchuk

Building a Browser-Based Synthesizer Using Rust and WebAssembly

Ever since I learned the math behind sine waves, I've been fascinated by the idea of building my own synthesizer, however rudimentary it might turn out to be.

For those who don't know what a synthesizer is: it's a configurable software musical instrument that generates periodic waves, which can mimic the sounds of existing instruments or produce sounds that would be impossible to reproduce on any physical instrument.
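
To make "periodic waves" concrete: a pure tone at frequency f is just sin(2πf·t), sampled at evenly spaced points in time. Here is a minimal Rust sketch of that idea (the function name and the 440 Hz example are my own illustration, not code from this project):

```rust
use std::f32::consts::PI;

/// Generate `seconds` of a pure sine tone as mono samples.
/// A pure tone at frequency `freq` is sin(2 * PI * freq * t),
/// evaluated `sample_rate` times per second.
fn sine_wave(freq: f32, sample_rate: f32, seconds: f32) -> Vec<f32> {
    let total = (sample_rate * seconds) as usize;
    (0..total)
        .map(|n| {
            let t = n as f32 / sample_rate; // time of the n-th sample
            (2.0 * PI * freq * t).sin()
        })
        .collect()
}

fn main() {
    // One second of concert A (440 Hz) at CD-quality sample rate.
    let samples = sine_wave(440.0, 44_100.0, 1.0);
    println!("generated {} samples", samples.len());
}
```

Summing several such waves at different frequencies and amplitudes is how a synthesizer starts to approximate richer, more instrument-like timbres.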

I loved the idea of directly applying mathematics to a problem and having it translate into a meaningful phenomenon. Then my imagination would run wild with all the things that I might be able to do with a self-made synthesizer.

In practice, I could probably achieve anything I imagine doing with a self-built synthesizer by combining a Digital Audio Workstation with a good synthesizer plugin, but that has never stopped me from imagining.

Project Goals

I've wanted to build a larger Rust project for a while now, and I realized that Rust would be a great language in which to build the synthesizer.

Rust delivers native-level performance and doesn't have a garbage collector, making it ideal for applications that have to perform large numbers of operations per second.

Depending on the output format and the desired audio quality, a synthesizer needs to generate between 44,100 and 192,000 samples per second at a bare minimum, so performance is clearly going to matter in any medium to large synthesizer.
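
To put those numbers in perspective, here's a quick back-of-the-envelope sketch of the per-sample time budget each rate implies:

```rust
fn main() {
    // All per-sample work must finish within this window
    // to keep up with real-time playback.
    for rate in [44_100.0_f64, 192_000.0] {
        let budget_us = 1_000_000.0 / rate; // microseconds per sample
        println!("{rate} Hz leaves about {budget_us:.1} µs per sample");
    }
}
```

At 44,100 Hz that works out to roughly 22.7 µs per sample, and at 192,000 Hz only about 5.2 µs, which is why a garbage-collection pause or slow inner loop is so costly here.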

Blockers

I had an idea and a language in which to build it. So I began researching system-level APIs for sending an audio signal to a device's speakers. And then, almost immediately, I decided I didn't want to use system-level APIs at all.

You see, I develop mainly on Windows, and after a cursory glance at the documentation for sending a signal to audio output devices on Windows, I realized that getting my newly generated audio signal to the speakers was going to be quite a bit harder than I had anticipated.

This was in the days before ChatGPT and its power to almost immediately connect the dots on sophisticated problems like this one. As a result, I decided building a synthesizer would be more trouble than I really wanted to put into it.

Enter The Web Audio API

A few weeks ago, I was reading the Mozilla Developer Network docs and stumbled upon the Web Audio API. Then, I had a flash of inspiration.

I knew Rust could compile to WebAssembly, which is essentially a low-level, machine-language-like format for the browser that can offer serious performance benefits over vanilla JavaScript.

Now that I knew there was an API for playing audio from JavaScript, and even some APIs for generating audio in JavaScript, all I had to do was build the synthesizer in Rust, compile it to WebAssembly, and let the Web Audio API do the hard work of sending the audio signal to the device's speakers. This was the solution I had been looking for to get my synthesizer started.
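
A minimal sketch of what that bridge might look like, assuming the crate is built with wasm-bindgen (the function name `render_tone` and its parameters are my own illustration, not this project's actual API):

```rust
use std::f32::consts::PI;
use wasm_bindgen::prelude::*;

/// Exported to JavaScript by wasm-bindgen. The returned Vec<f32> arrives
/// in JS as a Float32Array, which the glue code can copy into a Web Audio
/// AudioBuffer (via copyToChannel) and play back through an
/// AudioBufferSourceNode.
#[wasm_bindgen]
pub fn render_tone(freq: f32, sample_rate: f32, seconds: f32) -> Vec<f32> {
    let total = (sample_rate * seconds) as usize;
    (0..total)
        .map(|n| (2.0 * PI * freq * n as f32 / sample_rate).sin())
        .collect()
}
```

With that in place, the JavaScript side stays thin: it calls `render_tone` through the generated glue code and hands the resulting Float32Array to the Web Audio API for playback, so all of the actual signal generation lives in Rust.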

Easier Said Than Done

But, as often happens, I'm only two weeks into this project and have already encountered hurdles I could not have foreseen. These problems have forced me to push the existing tooling for both Rust-based WebAssembly and the Web Audio API to its limits.

I've had to abandon my starting codebase and restart several times just to find a way to use WebAssembly in the context I needed. But I have found an approach that works and is generic enough to apply to a wide variety of situations. In the process, I have greatly deepened my understanding of WebAssembly as a technology, and hopefully, in writing this, I can expand your understanding too.

In my next post, I will describe a severe flaw I discovered in the current Rust toolchain for building WebAssembly, along with my current, rudimentary workaround for it.
