As a new developer going through bootcamp and a longtime musician, I made the decision to put music aside and focus on studying software engineering. Needless to say, I miss music a great deal and look forward to the moment I can dive back in. Recently, however, I stumbled across something that sparked great interest and gave me a taste of what I've been missing these last few months: music programming.
I recently discovered Tone.js, and what it allows developers to accomplish inspired me to dig deeper. I feel I've only scratched the surface, but by studying the documentation I was able to make sense of how this framework is laid out. With a basic knowledge of music theory, so can you! Before we dive into Tone.js and what it has to offer, though, I must first discuss the built-in browser feature that Tone.js enhances: the Web Audio API.
Web Audio API
So what is the Web Audio API? Before its existence, audio in webpages was supported, but not nearly as flexibly as with the modern Web Audio API. The first draft of a web audio API appeared in the W3C in 2011, and since then several JavaScript libraries and frameworks have been built to help developers work with it. It is a built-in browser tool that allows developers to process and synthesize audio within an audio context. That audio context is simply an object that contains everything related to web audio: it acts as an interface representing an audio-processing graph built from audio modules (nodes) linked together.
A basic and fairly typical web audio workflow can generally be summarized as: create an audio context, create sources (oscillators, audio elements, streams), create effect nodes (gain, filter, reverb), choose a destination, and connect the sources through the effects to that destination.
In order to create these nodes and begin any audio processing, the audio context must first be instantiated. If you are interested in hacking with web audio, it is considered good practice to have no more than one audio context in a single project.
const audioContext = new AudioContext();
Instantiating the audio context means you now have an audio component to hack with! From this starting point, we developers can begin building out our web audio project. The input phase of the audio context entails creating sources, such as an `<audio>` element in the HTML file, or other sources available in the .js file where the audio context is defined.
// create an oscillator source node
const soundNode = audioContext.createOscillator();
// route the oscillator to the context's output (the speakers)
soundNode.connect(audioContext.destination);
The above snippet instantiates a common audio source in the Web Audio API called an oscillator. An oscillator in the audio context is an audio-processing module that generates a wave at a given frequency. This interface can produce different periodic waveforms (sine, square, triangle, sawtooth), each with a different sound. At this point the audio is still just a simple sound wave connected to a destination, the computer's speakers. Let's expand this process and explore a few ways developers can manipulate these sound waves. For starters, how do we know when the audio starts? Let's take a look at a more specific example of the Web Audio API.
// new instance of an audio context
const audioContext = new AudioContext();
// volumeNode uses a gain module to control the volume
const volumeNode = audioContext.createGain();
// connecting the gain module to the destination
volumeNode.connect(audioContext.destination);
// similar to oscillator as shown above but with assigned frequencies
const triangleNode = audioContext.createOscillator();
triangleNode.frequency.value = 347;
const squareNode = audioContext.createOscillator();
squareNode.frequency.value = 443;
// types of oscillator modules
triangleNode.type = "triangle";
squareNode.type = "square";
// start the oscillators
triangleNode.start();
squareNode.start();
// connecting the audio waves to the volumeNode
triangleNode.connect(volumeNode);
squareNode.connect(volumeNode);
The above snippet puts together what we've discussed so far with a few additional steps that bring the project closer to the developer's intentions. Starting with the volumeNode variable, I've assigned it a GainNode, which lets me control the volume of the audio context's output through the speakers. Once the GainNode is connected to the audio context's destination, any oscillator we want volume control over should route through it rather than connecting directly to the destination.
So instead, I connect the oscillators to the GainNode that is already connected to the audio context's destination. This keeps a linear signal path from start to finish, and it can be expanded by patching (stringing together) other AudioNodes that change the behavior of the oscillator's signal. Next, I've called the .start() method on each oscillator, letting the default value of its "when" parameter start playback immediately (the sketch below shows scheduling explicit start and stop times instead). Once the audio waves are signaled to start, we connect the oscillators to the volumeNode, which finally carries the audio through to the speakers.
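To make the patching and the "when" parameter concrete, here is a minimal, self-contained sketch; the node names and values are my own illustrative choices, not part of the example above. It patches a low-pass BiquadFilterNode between an oscillator and a gain node, then schedules the oscillator to start immediately and stop two seconds later against the context's clock.
// a minimal, self-contained sketch of patching and scheduling
// note: most browsers require a user gesture (e.g. a click) before audio will play
const ctx = new AudioContext();
// source: a sawtooth oscillator at 220 Hz (illustrative values)
const osc = ctx.createOscillator();
osc.type = "sawtooth";
osc.frequency.value = 220;
// effect: a low-pass filter patched between the source and the volume control
const filter = ctx.createBiquadFilter();
filter.type = "lowpass";
filter.frequency.value = 800;
// volume control, then out to the speakers
const gain = ctx.createGain();
gain.gain.value = 0.5;
osc.connect(filter);
filter.connect(gain);
gain.connect(ctx.destination);
// the "when" parameter schedules playback against the context's clock
osc.start(ctx.currentTime);      // start now
osc.stop(ctx.currentTime + 2);   // stop two seconds later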
To picture this patching, it helps to envision guitar pedals: a cable connects the guitar to the pedals, which eventually connect to the output of the amp. Web Audio projects operate in the same manner, just in text editors rather than on a stage.
Tone.js
Now that we've taken a look at a basic Web Audio API operation, it's clear that projects can get tedious and potentially messy if the workflow skips out on organization and proper patching of audio nodes to the destination. Fortunately, there are frameworks that help developers keep that workflow organized, such as Tone.js. Tone.js is a Web Audio framework that allows the creation of interactive music in the browser. Its syntax bridges the language gap by abstracting away low-level node management, and it was built with the intention of feeling familiar to both musicians and audio programmers.
Because it comes preloaded with a library of classes, Tone.js lets developers specify notes by name rather than relying on raw numerical frequencies, as with the Web Audio API.
const pluckSynth = new Tone.PluckSynth().toDestination();
pluckSynth.triggerAttackRelease("C3", "4n");
Tone.Transport.start();
This basic example already eliminates the need for all the extra connect methods seen in the previous snippets. In the instantiation of a pluck synth, we can chain the toDestination method in one line without having to establish the audio context ourselves. As for the note value and the timekeeping of the synth, this is where basic knowledge of music theory comes into play. The triggerAttackRelease method combines Tone.js's triggerAttack and triggerRelease methods, playing the "pitch-octave" note for the specified duration. In this case, the duration I chose is a tempo-relative value of a quarter note, which in basic terms pulses on the beat. If you're unfamiliar with the terminology, you've at least heard quarter notes and other tempo-relative values in any music with rhythm. Lastly, Tone.Transport.start() is invoked; strictly speaking, a direct triggerAttackRelease call plays immediately, but the Transport matters once events are scheduled against it, as in the loop example further below. You can think of Transport as the main keeper of time, one that can be adjusted on the go.
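One practical caveat before running any of these snippets: browsers suspend audio until the user interacts with the page, and Tone.js exposes Tone.start() for exactly this. A common pattern is to kick everything off from a click handler; the button selector here is a hypothetical example.
// resume the audio context from a user gesture, as browsers require
document.querySelector("#play-button").addEventListener("click", async () => {
  await Tone.start(); // resolves once the audio context is running
  const pluckSynth = new Tone.PluckSynth().toDestination();
  pluckSynth.triggerAttackRelease("C3", "4n");
});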
To further expand on what Tone.js is capable of, let's implement a loop and play with the timing of the audio, using code from the above example and another Transport method.
const pluckSynth = new Tone.PluckSynth().toDestination();
const metalSynth = new Tone.MetalSynth().toDestination();
const pluckLoop = new Tone.Loop(time => {
  pluckSynth.triggerAttackRelease("C5", "8n", time);
}, "4n").start("0");
const metalLoop = new Tone.Loop(time => {
  metalSynth.triggerAttackRelease("G5", "8n", time);
}, "4n").start("8n");
Tone.Transport.bpm.value = 80; // start below the target tempo so the ramp is audible
Tone.Transport.start();
Tone.Transport.bpm.rampTo(120, 15);
This may seem like somewhat of a jump, but it's more of a step up in creating a musical experience! In this example I added a MetalSynth and implemented a loop for each synth. The Loop module in Tone.js fires a looped callback at a specified interval. Inside each callback I call triggerAttackRelease on a synth with a different note value, the two notes together forming a perfect fifth in music theory lingo. The second argument, "8n", gives each note an eighth-note duration, and time refers to the Transport's precise timestamp for each callback, so every trigger lands exactly where it belongs in the sequence. Of course, the loops need their start signal after instantiation, and the last line, bpm.rampTo, tells the Transport to speed up to 120 beats per minute over the course of 15 seconds (I set a starting tempo of 80 BPM so the ramp is audible, since the Transport defaults to 120).
Another cool feature of this snippet is that the two synths are offset from each other by an eighth note. Pay attention to the start methods invoked on each loop: the "0" in pluckLoop begins it immediately, while the "8n" in metalLoop offsets the start of that loop by an eighth note. This creates an alternating effect that triggers each note right after the other, creating a simple but programmed "tune"!
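To round out the picture, loops and the Transport can be stopped on a schedule just as easily. Here is a small sketch, assuming the loops above are running; the four-measure stopping point is an arbitrary choice of mine.
// schedule both loops to stop after four measures, then halt the transport
Tone.Transport.scheduleOnce(time => {
  pluckLoop.stop(time);
  metalLoop.stop(time);
  Tone.Transport.stop(time);
}, "4:0:0"); // bars:quarters:sixteenths notation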
Just by using Tone.js, the implementation of basic Web Audio API functionality becomes more accessible and less tedious. Developers use Tone.js not only for familiar, readable code, but also to build out audio projects with either pre-existing instruments or instruments of their own creation. If you're interested in crafting your own sound, Tone.js offers a variety of tools that make that goal achievable without relying on pre-built synths.
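As a taste of that, here is a small sketch of what shaping your own sound might look like: a Tone.Synth configured with a custom oscillator and envelope, patched through a feedback delay on its way to the speakers. The parameter values are just my own illustrative picks, not a recipe.
// a synth voice with a custom waveform and amplitude envelope
const customSynth = new Tone.Synth({
  oscillator: { type: "sawtooth" },
  envelope: { attack: 0.05, decay: 0.2, sustain: 0.4, release: 1 }
});
// patch an eighth-note feedback delay between the synth and the output
const delay = new Tone.FeedbackDelay("8n", 0.4);
customSynth.chain(delay, Tone.Destination);
customSynth.triggerAttackRelease("E4", "2n");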
Final considerations
Aside from being a crucial tool for building sound in the web browser, Tone.js remains accessible to all developers, regardless of whether they are already familiar with the framework. Audio creation and manipulation may seem daunting to someone who isn't well versed in music composition, but it's more attainable than one might expect. The Tone.js framework invites deep exploration and provides clear examples of how its methods work, making this software well worth considering.