Making an Audio Visualizer with Redwood

Milecia

Working with audio on the web is an overlooked way of communicating information to users. You can take audio files and give users a real-time visualization of what's playing.

In this tutorial, we're going to make an audio visualizer with P5.js in the Redwood framework. It will take sounds that it picks up from the mic and visualize them. We'll also add a way to save a snapshot of the visual when we push a button.

Creating the app

To get started, we'll make a new Redwood app. Open a terminal and run the following command.

yarn create redwood-app audio-visualizer

This will generate a lot of files and directories for you. The two main directories you'll work in are api and web. The api directory is where you handle all of your back-end needs: it's where you define the models for your database and the types and resolvers for your GraphQL server.

The web directory holds all of the code for the React app. This is where we'll be focused, since everything we're doing happens on the front-end.
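Stripped down to the parts we care about, the generated project looks roughly like this (exact contents vary by Redwood version):

audio-visualizer/
  api/          # back-end: database models, GraphQL types and resolvers
  web/          # front-end: the React app, where all of our work happens
  redwood.toml  # project configuration

We'll start by importing a few JavaScript libraries.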

Setting up the front-end

Before we get started, a quick note if you're following along in TypeScript: you might run into issues with the P5 sound library. When I tried it, the integration only partially worked, and the failures were hard to pin down.

That's why we're going to work with JavaScript files even though I usually work with TypeScript. P5 is a little tricky to get working in React, and it took me a few tries to figure out a setup that works.

We're going to bring in the P5 libraries now, but we won't do it using npm or yarn. Instead, we'll go straight to the index.html in web > src and add a couple of script tags linking to the P5 files we need. So in the <head> element, add the following code after the <link> tag.

<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/addons/p5.sound.min.js"></script>

Now that we have the libraries in the project, we need to set up a page to render our visualizer. We'll take advantage of some Redwood functionality for this. In the terminal, run this command.

yarn rw g page visualizer /

This command will create a new page component at web > src > pages > VisualizerPage called VisualizerPage.js. You'll also see a Storybook file and a test file, generated by the same Redwood command.
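The generated folder should look roughly like this (file names can vary slightly between Redwood versions):

web/
  src/
    pages/
      VisualizerPage/
        VisualizerPage.js
        VisualizerPage.stories.js
        VisualizerPage.test.js

With those files in place, this is a good time to run the app and see what it looks like.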

In the terminal, run the following command to start the app.

yarn rw dev

This will start the front-end and back-end of the Redwood app, and when your browser loads, you should see something similar to this.

[Image: the initial page, before the visualizer is added]

We'll make a quick update to the text on the page. Inside the VisualizerPage.js file in web > src > pages > VisualizerPage, update the code to the following.

import { MetaTags } from '@redwoodjs/web'

const VisualizerPage = () => {
  return (
    <>
      <MetaTags
        title="Visualizer"
        description="Visualizer description"
      />

      <h1>Simple audio visualizer</h1>
      <p>
        This will take any sounds picked up by your mic and make a simple visualization for them.
      </p>
    </>
  )
}

export default VisualizerPage

Now we're ready to start adding the code we need to pick up sound from a user's mic and render a visualization.

Adding the music player

First, we'll add a new import statement. We're going to need to reference an element, so we're going to take advantage of the useRef hook. At the end of your import statements, add this one.

import { useRef } from 'react'

Then inside of the VisualizerPage component, add this line to make a reference we can use on an element.

const app = useRef(); // will point at the <div> that holds the P5 canvas

Now inside of the return statement, add this element right before the closing fragment tag (</>).

<div ref={app}></div>

With these things in place, we're ready to use that <div> as our visualizer element.

Integrating the visualizations

We can start using P5 to create the visualization. We'll add one more imported hook to the file. We'll be adding the useEffect hook. So in your existing import statements, add useEffect to the existing useRef line so it's all in one import statement.

import { useRef, useEffect } from 'react'

Then inside the VisualizerPage component, add the following hook beneath the useRef variable.

useEffect(() => {
  // p5 is available as a global because we loaded it from the CDN
  let newP5 = new p5(sketch, app.current);

  return () => {
    // tear down the P5 instance when the component unmounts
    newP5.remove();
  };
}, []);

This useEffect hook initializes our P5 instance inside the app ref we created. The cleanup function it returns removes the P5 instance when the component unmounts. Because we pass an empty dependency array as the second argument, the setup only runs once, when the page initially loads.

Next, we can define what sketch is. This is how we tell P5 what it should render, how it should do it, and when it should update. We'll build this piece by piece.

Let's define the sketch function.

const sketch = p => {
  let mic, fft, canvas;

  p.setup = () => {
    canvas = p.createCanvas(710, 400);
    p.noFill();

    // start listening to the user's mic
    mic = new p5.AudioIn();
    mic.start();

    // browsers suspend the audio context until it's explicitly resumed
    p.getAudioContext().resume();

    // route the mic signal into an FFT analyzer
    fft = new p5.FFT();
    fft.setInput(mic);
  }
}

We start by taking the current P5 instance as a variable called p. Then we declare variables to hold the mic input, the FFT analyzer, and the canvas element.

Then we define what P5 should do on setup. It creates a new canvas with the width and height we defined, and noFill tells it to draw shapes as outlines without any fill.

Now things start to get interesting. We grab a mic input object with the AudioIn method, then call mic.start to get the mic listening for sound. Because most browsers won't start capturing a user's mic automatically, we also have to resume the audio context explicitly.
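If the audio context still stays suspended in your browser, one option is to resume it on a user gesture instead, which is what autoplay policies are really asking for. As a sketch of that approach, p5.sound's userStartAudio helper can be wired to a click handler inside sketch; the placement here is just one way to do it:

p.mousePressed = () => {
  // resume the suspended AudioContext the first time the user clicks;
  // a user gesture is what browser autoplay policies require
  p.userStartAudio()
}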

Next, we create an fft object to process the input from the mic. The FFT (Fast Fourier Transform) breaks the incoming signal down into its individual frequencies, which is what lets the visualizer respond to the different pitches it picks up.
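For reference, the p5.FFT constructor also accepts a smoothing value and a bin count. The values below are the defaults that our bare new p5.FFT() call already uses, so you'd only write this out if you wanted to tune them:

// equivalent to new p5.FFT() with its defaults:
// 0.8 smoothing between frames, 1024 frequency bins
fft = new p5.FFT(0.8, 1024);
fft.setInput(mic);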

Since we have the setup ready to go, we need to define what should be drawn in the canvas. Below the setup method we just defined, add this code.

p.draw = () => {
  p.background(200); // repaint a light grey background each frame

  // amplitude (0-255) of each frequency bin picked up from the mic
  let spectrum = fft.analyze();

  p.beginShape();
  p.stroke('#1d43ad');
  p.strokeWeight(3);

  // one vertex per frequency bin; louder bins are drawn higher up
  // (bins past x = 710 fall outside the canvas and aren't visible)
  spectrum.forEach((spec, i) => {
    p.vertex(i, p.map(spec, 0, 255, p.height, 0));
  })

  p.endShape();
}

First, this repaints the background in a light grey, clearing the previous frame. Then we use fft.analyze to get the amplitude, from 0 to 255, of each frequency picked up from the mic.

Then we use beginShape to tell P5 we're going to draw a continuous line, and we give that line a stroke color and a strokeWeight to define how it will look.

Next, we take each point in the spectrum from our fft and add a vertex to the line for it. This gives us a visual representation of how the sound breaks down across pitches. Once all of those vertices are added to the shape, we finish the line by calling endShape.
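To make that p.map call concrete, it linearly rescales an amplitude from the 0-255 range onto the canvas's y axis, flipped so that louder means higher. With our 400-pixel-tall canvas:

p.map(0, 0, 255, 400, 0)   // 400: a silent bin sits at the bottom edge
p.map(255, 0, 255, 400, 0) // 0: a maxed-out bin reaches the top edge
p.map(128, 0, 255, 400, 0) // ~199: a mid-level bin lands near the middle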

All that's left now is saving a snapshot of the image when a key is pressed. We'll do that with the following code. Make sure to add this below the draw method we just finished.

p.keyPressed = () => {
  // 39 is the key code for the right arrow (also available as p.RIGHT_ARROW)
  if (p.keyCode === 39) {
    p.saveCanvas('canvasSnapshot', 'png')
  }
}

This is one of the ways you can interact with P5. Take a look through their docs if you want to learn more. I chose the right arrow, but feel free to change this to any other key. Just make sure you update the keyCode value to match.

Right now, if a user presses the right arrow key, a snapshot of the visualization will be downloaded to their device as a png file named canvasSnapshot.png.
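If you'd rather use a letter key, P5 also exposes p.key, which holds the character of the most recent key press. For example, this variant saves the snapshot when the user presses S:

p.keyPressed = () => {
  // p.key holds the character of the last key typed
  if (p.key === 's' || p.key === 'S') {
    p.saveCanvas('canvasSnapshot', 'png')
  }
}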

That's it! All that's left is to refresh the browser and grant the page permission to use your mic when the browser asks. You should see something like this in your browser now.

[Image: the visualization of sounds picked up from the mic]

If you hit the right arrow key, you'll get an image that looks similar to this.

[Image: the snapshot saved by the key press]

Finished code

If you want to see this working, you can check out this Code Sandbox, or you can get the code from the audio-visualizer folder in this repo.

Conclusion

Working with audio on the web can be an interesting way to provide data to users. It can help make your apps more accessible if you use it correctly. You can also generate images that might give you a better understanding of the sound you're working with. This definitely comes up a lot in machine learning!
