The next step in my project was to build a prototype. The first milestone was to generate a sine wave and play it through the speakers. Once I had achieved this, I could start building more complex features on that foundation.
The Setup
After taking a good look through the Web Audio API documentation, I decided that the best way for my synthesizer to output sound to the speakers was through an AudioWorkletNode combined with an AudioWorkletProcessor. This combination would allow me to generate audio on a background thread and play it through the speakers.
If I wanted to take a different approach, I would need to extend AudioNode and implement the audio generation using much less documented features. Since I could find no documentation on extending AudioNode, I wanted to avoid doing so for as long as possible.
Implementing an Oscillator
Now I was ready to start writing Rust code. Almost all the documentation and tutorials I was able to find online recommended using wasm-pack.
Once I had followed the setup tutorials, I added the following code to my lib.rs file.
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub struct SineOsc {
    sample_rate: i32,
    cycler: i32,
}

#[wasm_bindgen]
impl SineOsc {
    pub fn new(sample_rate: i32) -> Self {
        SineOsc { sample_rate, cycler: 0 }
    }

    pub fn sample(&mut self, pitch: i32, gain: f32) -> f32 {
        // This number is the base from which the sine is calculated
        let seed: f32 = ((2.0 * std::f32::consts::PI) / (self.sample_rate / pitch) as f32) * self.cycler as f32;
        // `cycler` keeps track of the position within the wave
        self.cycler += 1;
        // Reset position if it gets too large
        if self.cycler > (self.sample_rate / pitch) {
            self.cycler = 0;
        }
        // Control the volume by multiplying gain by the sine
        gain * seed.sin()
    }
}
At this point, I had a basic oscillator. Passing a pitch (in hertz) and a gain value between 0 and 1 to the sample() function produces a single number which can be added to the audio buffer. Calling sample() repeatedly produces the sequence of numbers needed to create an audible sine wave.
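To sanity-check that logic outside the browser, here is a standalone sketch of the same oscillator (without the #[wasm_bindgen] attribute, so it runs natively). The sample rate and pitch, 44,100 Hz and 441 Hz, are hypothetical values chosen so the period divides evenly into exactly 100 samples:

```rust
// Standalone copy of the oscillator for testing outside the browser.
struct SineOsc {
    sample_rate: i32,
    cycler: i32,
}

impl SineOsc {
    fn new(sample_rate: i32) -> Self {
        SineOsc { sample_rate, cycler: 0 }
    }

    fn sample(&mut self, pitch: i32, gain: f32) -> f32 {
        let seed = ((2.0 * std::f32::consts::PI) / (self.sample_rate / pitch) as f32) * self.cycler as f32;
        self.cycler += 1;
        if self.cycler > (self.sample_rate / pitch) {
            self.cycler = 0;
        }
        gain * seed.sin()
    }
}

fn main() {
    let mut osc = SineOsc::new(44_100);
    let samples: Vec<f32> = (0..202).map(|_| osc.sample(441, 0.5)).collect();

    // The counter wraps after indices 0..=100, so the wave repeats exactly
    // from sample 101 onward.
    assert!((samples[0] - samples[101]).abs() < 1e-6);
    // The output never exceeds the gain.
    assert!(samples.iter().all(|s| s.abs() <= 0.5));
    // A quarter period in, the wave is near its positive peak.
    assert!(samples[25] > 0.49);
    println!("ok");
}
```

Note that the reset condition keeps one extra phase value (cycler reaches sample_rate / pitch before wrapping), but since that position corresponds to a full 2π of phase, the duplicated sample is the same near-zero crossing as position 0.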
To build this project, I ran wasm-pack build, which compiles the Rust code to a .wasm file and generates JavaScript bindings that can be used to call the Rust code. I integrated my newly generated .wasm file into my prototype JavaScript code. And that's when I started running into problems.
Roadblocks
In order to use an AudioWorkletNode, one must first set up a separate file containing a class that extends the AudioWorkletProcessor interface. The AudioWorkletProcessor class must be defined in a separate file because the AudioWorkletProcessor interface only exists in the AudioWorkletGlobalScope, which runs on a thread independent of the main JavaScript thread. Because JavaScript does not support conventional multithreading, any multi-threaded behavior must be achieved using the Web Workers API, which can only run a JavaScript file in the background instead of branching like typical multi-threaded applications.
At first, I tried to follow the tutorials' instructions for importing my newly minted WebAssembly package. But based on the console errors I kept getting, I quickly discovered that the AudioWorkletGlobalScope does not support importing modules.
Next, I tried to copy and paste the code from the JavaScript package directly into the AudioWorkletProcessor file and load it from there. But the AudioWorkletGlobalScope doesn't support fetch(), which is required to read the .wasm file behind the scenes, so this solution didn't work either.
I then tried passing the imported JavaScript package from the main JavaScript file to the AudioWorkletProcessor via the AudioWorkletNode's MessagePort instance. That didn't work because MessagePort relies on copying the objects being passed, and JavaScript functions cannot be copied. I tried variant after variant of these solutions, and nothing I did could get it to work.
Eventually, I stumbled upon this GitHub issue:
Support using wasm-pack in Worklets (particularly AudioWorklet) #689
💡 Feature description
One of the key use cases for WebAssembly is audio processing. wasm-pack doesn't currently support AudioWorklets (purportedly the future of custom audio processing on the web) with any of its current --target options.
The web and no-modules targets get close, but error out during instantiation because the AudioWorklet context is lacking several browser APIs which the JS wrappers expect. This problem may extend to other Worker/Worklet contexts, but I've only attempted this with AudioWorklets.
💻 Basic example
my_processor.worklet.js
class MyProcessor extends AudioWorkletProcessor {
  constructor() {
    super()
    import("../pkg/audio").then(module => {
      this._wasm = module
    });
  }

  process(inputs, outputs, parameters) {
    if (!this._wasm) {
      return true
    }
    let output = outputs[0]
    this._wasm.exports.process(this._outPtr, this._size)
    for (let channel = 0; channel < output.length; ++channel) {
      output[channel].set(this._outBuf)
    }
    return true
  }
}
Finally, I knew why I couldn't figure out how to make this work. The answer was simple: wasm-pack does not support Worklet compilation targets.
In my opinion, this is a significant deficiency in the Rust-to-WebAssembly toolchain, because Worklets are among the use cases that could benefit most from WebAssembly. Worklets exist to run computation-intensive tasks in the background, tasks which could be further optimized by offloading them to WebAssembly code that runs at near-native speed.
This discovery proved to be a major roadblock to getting a prototype working, but I wasn't out of ideas yet.
Solution
I did a few quick searches and discovered that Rust already has a WebAssembly compilation target built in. Unfortunately, because the features that wasm-pack provides are so much more convenient than what is available using only the built-in compilation target, there are basically no tutorials on how to use Rust WebAssembly without wasm-pack.
Nonetheless, after some digging, I found an old project that compiled Rust WebAssembly code without using wasm-pack, and I adapted what I learned from that project to my use case.
The cost of this approach was that I would no longer have JavaScript bindings for my Rust structs, because those bindings are a feature provided by wasm-pack. So I adapted my sampling function so that it no longer needed a struct, and this is what I ended up with:
#[no_mangle]
pub extern "C" fn samplex(pitch: i32, gain: f32, sample_rate: i32, transport: i32) -> f32 {
    // Does the same thing as above, but statelessly
    let seed = ((2.0 * std::f32::consts::PI) / (sample_rate / pitch) as f32) * transport as f32;
    gain * seed.sin()
}
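Because samplex() takes the phase position (transport) as an argument instead of storing it, the state now lives in the caller, which is exactly what the JavaScript side will do. A quick standalone sketch (without the #[no_mangle] and extern "C" boilerplate, and with hypothetical values of 44,100 Hz and 441 Hz) shows a caller driving the transport counter and getting the same periodic wave as the stateful oscillator:

```rust
// Stateless sampling function, as above but without the FFI attributes.
fn samplex(pitch: i32, gain: f32, sample_rate: i32, transport: i32) -> f32 {
    let seed = ((2.0 * std::f32::consts::PI) / (sample_rate / pitch) as f32) * transport as f32;
    gain * seed.sin()
}

fn main() {
    let (sample_rate, pitch, gain) = (44_100, 441, 0.3f32);
    let reset_point = sample_rate / pitch; // 100 samples per cycle
    let mut transport = 0;
    let mut out = Vec::new();

    // The caller owns the phase counter and wraps it, mirroring what
    // the stateful `sample()` did internally.
    for _ in 0..300 {
        out.push(samplex(pitch, gain, sample_rate, transport));
        transport += 1;
        if transport > reset_point {
            transport = 0;
        }
    }

    // Caller-managed transport makes the output periodic,
    // just like the stateful version.
    assert!((out[0] - out[101]).abs() < 1e-6);
    assert!(out.iter().all(|s| s.abs() <= gain));
    println!("ok");
}
```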
From here, all I had to do was run
cargo build --target wasm32-unknown-unknown
and it would produce a .wasm file ready for use.
I learned how to pass a WebAssembly module to a Worklet while reading the Mozilla Developer Network article on the WebAssembly.compile() function. Armed with that knowledge, this is what my two JavaScript files ended up looking like:
// index.js
let startBtn = document.getElementById('start-sound');

async function setup() {
  // Load and compile our WebAssembly module
  const bin = await WebAssembly.compileStreaming(await fetch('/wasm_demo.wasm'));
  let context = new AudioContext();
  // Add our processor to the Worklet
  await context.audioWorklet.addModule('processor.js');
  let node = new AudioWorkletNode(context, 'web-synth-proto');
  node.port.postMessage({type: 'init-wasm', wasmData: bin});
  // Connect to the speakers
  node.connect(context.destination);
  // Make sure sound can be stopped to prevent it from getting annoying
  let stopBtn = document.getElementById('stop-sound');
  stopBtn.addEventListener('click', () => {
    context.close();
  });
}

// User must click to start sound
startBtn.addEventListener('click', () => {
  setup();
});
// processor.js
class WebSynthProcessor extends AudioWorkletProcessor {
  constructor(options) {
    super(options);
    this.cycler = 0;
    this.transport = 0;
    // Set up the `MessagePort` so we can receive the WebAssembly
    // module from the main thread
    this.port.onmessage = event => this.onmessage(event.data);
    console.log(sampleRate);
  }

  onmessage(data) {
    // Receive WebAssembly module from main thread
    if (data.type === 'init-wasm') {
      // Declare this as an async function so we can use the `await` keyword
      const instance = async () => {
        try {
          // We need to instantiate the module to use it
          let mod = await WebAssembly.instantiate(data.wasmData, {});
          this._wasm = mod;
        } catch (e) {
          console.log('Caught error in instantiating wasm', e);
        }
      };
      // Call the setup function
      instance();
    }
  }

  process(inputs, outputs, parameters) {
    if (typeof this._wasm !== 'undefined' && this._wasm !== null) {
      let output = outputs[0];
      output.forEach(channel => {
        for (let i = 0; i < channel.length; i++) {
          let pitch = 880;
          // Call our WebAssembly function
          let sample = this._wasm.exports.samplex(pitch, 0.3, sampleRate, this.transport);
          // Add the sample to the audio buffer
          channel[i] = sample;
          this.transport += 1;
          let resetPoint = Math.ceil(sampleRate / pitch);
          if (this.transport > resetPoint) {
            this.transport = 0;
          }
        }
      });
    } else {
      console.log('wasm not instantiated yet');
    }
    // Must return `true` to continue processing audio
    return true;
  }
}

// Register our processor with the `AudioWorkletGlobalScope`
registerProcessor('web-synth-proto', WebSynthProcessor);
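To see what one block of that process() loop produces, here is a standalone Rust sketch mirroring the per-sample logic above. The 128-frame block is the render quantum size AudioWorklet currently uses; samplex is inlined as a plain Rust function rather than called through WebAssembly, and the 44,100 Hz sample rate is an assumed value (in the worklet it comes from the sampleRate global):

```rust
// Stateless sampling function, inlined for this sketch.
fn samplex(pitch: i32, gain: f32, sample_rate: i32, transport: i32) -> f32 {
    let seed = ((2.0 * std::f32::consts::PI) / (sample_rate / pitch) as f32) * transport as f32;
    gain * seed.sin()
}

fn main() {
    const SAMPLE_RATE: i32 = 44_100; // the `sampleRate` global in the worklet
    let pitch = 880;
    // Same wrap point the JavaScript computes with Math.ceil(sampleRate / pitch)
    let reset_point = (SAMPLE_RATE as f32 / pitch as f32).ceil() as i32;
    let mut transport = 0;

    // One render quantum: AudioWorklet processes audio in 128-frame blocks.
    let mut channel = [0.0f32; 128];
    for frame in channel.iter_mut() {
        *frame = samplex(pitch, 0.3, SAMPLE_RATE, transport);
        transport += 1;
        if transport > reset_point {
            transport = 0;
        }
    }

    // The block starts at a zero crossing and stays within the gain.
    assert!(channel[0] == 0.0);
    assert!(channel.iter().all(|s| s.abs() <= 0.3));
    assert!(channel.iter().any(|s| s.abs() > 0.1));
    println!("filled {} frames", channel.len());
}
```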
Time for Improvement
While this new solution worked, it wasn't ideal because there was no good way for me to track state in the Rust code, which I knew would prove to be problematic once I started making my synthesizer more advanced.
But, as I worked on redesigning my code to work without the need for wasm-pack, the puzzle pieces started to fall into place, and I began to see that, despite its flaws, I might still be able to use wasm-pack for this web-based synthesizer.
All it would take is a little adapting...
In my next article, I will describe my attempt to reincorporate wasm-pack into my synthesizer project.