Ever since I learned the math behind sine waves, I've been fascinated by the idea of building my own synthesizer, however rudimentary it might turn out to be.
For those unfamiliar: a synthesizer is a configurable instrument, in this case software, that generates periodic waves. These waves can mimic the sounds of existing instruments, or produce sounds that would be impossible to reproduce on any physical instrument.
I loved the idea of directly applying mathematics to a problem and having it translate into a meaningful phenomenon. Then my imagination would run wild with all the things that I might be able to do with a self-made synthesizer.
In practice, I can probably achieve anything I could imagine doing with a self-built synthesizer using a Digital Audio Workstation combined with a good synthesizer plugin, but that never stopped me from imagining.
I've wanted to build a larger Rust project for a while now, and I realized that Rust would be a great language in which to build the synthesizer.
Rust offers native-level performance and has no garbage collector, making it ideal for applications that have to perform large numbers of operations per second.
Depending on the output format and the desired audio quality, a synthesizer needs to generate between 44,100 and 192,000 samples per second at a bare minimum, so performance is clearly going to matter in any medium-to-large synthesizer.
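To give a feel for the kind of work those numbers imply, here is a minimal sketch of filling a buffer with samples of a sine wave at a given sample rate. The function name and signature are illustrative, not taken from any particular audio API.

```rust
use std::f32::consts::TAU; // TAU = 2π, one full cycle in radians

/// Generate `sample_rate * seconds` samples of a sine wave at `freq` Hz.
/// Sample n is sin(2π · freq · n / sample_rate).
fn sine_wave(freq: f32, sample_rate: u32, seconds: f32) -> Vec<f32> {
    let total = (sample_rate as f32 * seconds) as usize;
    (0..total)
        .map(|n| (TAU * freq * n as f32 / sample_rate as f32).sin())
        .collect()
}

fn main() {
    // One second of A440 at CD quality is already 44,100 samples.
    let samples = sine_wave(440.0, 44_100, 1.0);
    println!("generated {} samples", samples.len());
}
```

Even this trivial single-oscillator case produces tens of thousands of values per second; a real synthesizer layers many oscillators, envelopes, and filters on top of each one.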
I had an idea and a language in which to build that idea. So, I began researching system-level APIs for sending an audio signal to a device's speakers. And then, almost immediately, I decided I didn't want to use system-level APIs for playing audio.
You see, I develop mainly on Windows. After a rudimentary glance at the documentation for sending a signal to audio output devices on Windows, I realized that routing my newly generated audio signal to the speakers was going to be quite a bit harder than I had anticipated.
This was in the days before ChatGPT and its power to almost immediately connect the dots on such sophisticated problems. As a result, I decided building a synthesizer would take more effort than I was willing to put into it.
A few weeks ago, I was reading the Mozilla Developer Network docs and stumbled upon the Web Audio API. Then I had a flash of inspiration: I could write the synthesizer in Rust, compile it to WebAssembly, and let the browser handle audio output.
But, as often happens, I'm only two weeks into this project and have already encountered hurdles I could not have foreseen. These problems have forced me to push the existing tooling for both Rust-based WebAssembly and the Web Audio API to its limits.
I've had to abandon my starting codebase and restart several times just to get WebAssembly working in the context I needed. But I have found a solution that works and is generic enough to apply to a wide variety of situations. Along the way, I have greatly deepened my understanding of WebAssembly as a technology, and hopefully, in writing this, I can expand yours too.
In my next post, I will describe how I discovered a severe flaw in the current Rust toolchain for building WebAssembly and my current, rudimentary solution to compensate for that flaw.