How to record an HTML canvas element and make a GIF

This sort of slipped into my recent post on doing the camera for my WebGL project because I needed something to make visual samples to post with the article. As of this writing, Chromium browsers work, Firefox should work with a couple of tweaks (noted below, but not present in the demo), and Safari will not.

Get your canvas

The canvas can contain anything: WebGL, normal 2D stuff, WebGPU, it doesn't matter. You can also capture video and audio elements the same way.

In my code I have a canvas that cycles through colors.
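
If you just want something animated to record, here's a minimal sketch along those lines. This is just an illustration of a color-cycling canvas, not the exact code from my demo:

const canvas = document.querySelector("canvas");
const context = canvas.getContext("2d");

function draw(timestamp){
    // sweep the hue over time so the canvas is always changing
    const hue = (timestamp / 20) % 360;
    context.fillStyle = `hsl(${hue}, 100%, 50%)`;
    context.fillRect(0, 0, canvas.width, canvas.height);
    requestAnimationFrame(draw);
}
requestAnimationFrame(draw);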

The code



const canvas = document.querySelector("canvas");
const recordBtn = document.querySelector("button");

let recording = false;
let mediaRecorder;
let recordedChunks;

recordBtn.addEventListener("click", () => {
    recording = !recording;
    if(recording){
        recordBtn.textContent = "Stop";
        // capture the canvas at a maximum of 25 frames per second
        const stream = canvas.captureStream(25);
        mediaRecorder = new MediaRecorder(stream, {
            mimeType: 'video/webm;codecs=vp9',
            ignoreMutedMedia: true
        });
        recordedChunks = [];
        mediaRecorder.ondataavailable = e => {
            if(e.data.size > 0){
                recordedChunks.push(e.data);
            }
        };
        mediaRecorder.start();
    } else {
        recordBtn.textContent = "Record";
        mediaRecorder.stop();
        // wait a tick so the final chunk has been flushed
        setTimeout(() => {
            const blob = new Blob(recordedChunks, {
                type: "video/webm"
            });
            const url = URL.createObjectURL(blob);
            const a = document.createElement("a");
            a.href = url;
            a.download = "recording.webm";
            a.click();
            URL.revokeObjectURL(url);
        }, 0);
    }
});



First we get a stream from the canvas with canvas.captureStream(25). The parameter is the maximum framerate; you can set it lower, but if the canvas animates faster than that you'll drop frames.

Next we create a MediaRecorder. This takes two parameters: a stream, which comes from the canvas (or a video, audio element, etc.), and an options object which includes things like the bitrate and which codec you want to use. The full list of options can be found on MDN, but I use two:

  • ignoreMutedMedia: true. This is because we don't have an audio track so we don't need to waste size with silence.
  • mimeType: 'video/webm;codecs=vp9'. This one is a bit odd, but the MIME type for WEBM can also include the codec, which can be VP8 or VP9. WEBM with VP9 will probably get you the best compression-to-quality ratio, but you can play around with it.

Then we have to keep track of the "chunks". In many cases this is a misnomer as you'll probably only get one chunk; it's not really streaming in the way you might think. The media recorder has a dataavailable event that fires every time the stream has a new chunk for you, so we check the size to make sure it actually contains something and then append it to the array with our other chunks. To start the whole process we call start().
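
If you actually do want multiple chunks, say for a long recording where you'd rather not buffer everything until the end, start() optionally takes a timeslice in milliseconds and dataavailable will fire roughly that often instead of only on stop. A small sketch of that variation:

mediaRecorder.ondataavailable = e => {
    if(e.data.size > 0){
        recordedChunks.push(e.data);
    }
};
// passing a timeslice makes dataavailable fire roughly every second
// instead of only when the recorder stops
mediaRecorder.start(1000);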

Note for Firefox users

You'll need to remove ignoreMutedMedia as it doesn't seem to work in Firefox and will just cause the recorder to not produce any chunks. You'll also need to use the VP8 codec (video/webm;codecs=vp8) as Firefox doesn't yet support VP9.
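
If you'd rather not hardcode the codec, you could feature-detect it with MediaRecorder.isTypeSupported and fall back from VP9 to VP8. A rough sketch that could drop into the recording branch above (the fallback order here is just my preference, not something from the demo):

const preferredTypes = [
    "video/webm;codecs=vp9",
    "video/webm;codecs=vp8",
    "video/webm"
];
// pick the first MIME type the current browser can actually record
const mimeType = preferredTypes.find(type => MediaRecorder.isTypeSupported(type));
mediaRecorder = mimeType
    ? new MediaRecorder(stream, { mimeType })
    : new MediaRecorder(stream);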

Stopping and downloading

If we were already recording then we take the other branch, which stops and downloads the recording. We call stop() on the media recorder. This causes the buffer to flush and you get a final chunk in ondataavailable. The sucky part is that this doesn't happen synchronously, so I wrap the next part in a setTimeout of 0 to make sure it runs after the flush. There are slightly more accurate ways to do this, but this is fine for most real cases.
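
One of those more accurate ways is the recorder's stop event, which fires after the final dataavailable has been delivered. A sketch of what the download branch could look like using that instead of setTimeout:

mediaRecorder.addEventListener("stop", () => {
    // by now the last chunk has been pushed into recordedChunks
    const blob = new Blob(recordedChunks, { type: "video/webm" });
    const url = URL.createObjectURL(blob);
    const a = document.createElement("a");
    a.href = url;
    a.download = "recording.webm";
    a.click();
    URL.revokeObjectURL(url);
}, { once: true });
mediaRecorder.stop();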

Using the Blob constructor we take the array of chunks and turn it into a single blob with the MIME type video/webm. If you use a different container than WEBM you'll need to change it.

Then we have the canonical way to download something in a web app: create an object URL for the blob, create a link (you don't need to attach it to the DOM), set its href to the object URL, add a download attribute so the browser triggers a download with the suggested filename, and then programmatically click it. After that, clean up the object URL, otherwise you'll have a memory leak.

And there we have a WEBM of whatever you were doing. If all media and blog platforms were nice this would be all we need.

Converting to GIF

Some platforms are less good than others. Despite the fact that WEBMs take up a fraction of the size, they often aren't accepted, but GIFs usually are, and this seems to include dev.to. So sadly, we need to bloat our video with a crappy image codec that looks bad and is much bigger. This part is also less easy. The simplest way I've found is to use FFMPEG, which is basically an industry-standard, open-source, CLI Swiss Army knife of video conversion.

You can find a download for it here:
https://ffmpeg.org/



ffmpeg -y -i input.webm -vf palettegen palette.png



(For PowerShell you need to use ./ffmpeg.exe)

Run this first. It extracts a palette as a PNG file for the GIF; without it we'll get worse-looking results. The -y flag says to overwrite the output file without asking, -i marks an input, and -vf applies a video filter, in this case palettegen, which generates an optimized 256-color palette from the video (GIFs are limited to 256 colors).



ffmpeg -y -i input.webm -i palette.png -filter_complex paletteuse -r 10 output.gif



This converts the WEBM to a GIF. We're also providing the palette we made in the last step as a second input. The -filter_complex paletteuse option tells FFMPEG to map the video's colors onto that palette. The last parameter, -r, is the framerate; you can match the video for better results or lower it for a smaller file.

You can also add the -loop parameter. By default it's 0, which means loop forever, but you can make it stop looping as well.

Here's what it produces:

Without generating a palette:
output0

Palette at 10 FPS:
output1

Palette at 25 FPS (the native recording framerate):
output2

Size-wise:

  • webm: 5 KB
  • gif (no palette, 10 FPS): 28.0 KB
  • gif (palette, 10 FPS): 12.0 KB
  • gif (palette, 25 FPS): 24.0 KB

Using the palette seems to improve file size as well.

If it doesn't already exist, perhaps the next step is to compile FFMPEG to WASM so we can run all of this in the browser without dealing with the command line. But this is a quick and dirty way to get an example for dev.to.

It's also worth mentioning, if you want to dive into the rabbit hole, that FFMPEG has zillions of options for better compression, different formats, and special features: http://www.ffmpeg.org/ffmpeg.html
