Shane Duffy

Posted on • Originally published at shaneduffy.io

Framr - Give Your Demo Video a Fancy Gaussian Blur Frame

Initially I wrote an FFmpeg script to give my YouTube videos a Gaussian blur frame, but I realized FFmpeg had actually been ported to WebAssembly, so I made a little web app to do it all from the browser.

Links

Web App: framr.dev

FFmpeg Script: framr-script

Rainmeter and Background Terminal

After I built my first open source project a few years ago, Background Terminal, I wanted to post it to the Rainmeter subreddit so that people using Rainmeter would see it. If you don't know what Rainmeter is, it's an application for making interactive, live Windows wallpapers. Background Terminal lets you overlay a terminal on your Windows wallpaper, so it pairs well with Rainmeter.

Anyway, people often post short clips of their interactive wallpaper designs in the Rainmeter subreddit, and at the time I had started to see people post them with these fancy, partially opaque frames with a bit of shadow. It looked really nice, so I installed DaVinci Resolve and spent a couple of days figuring out how to achieve the effect. And it ended up looking really good! (See the gif above.)

Starting a YouTube Channel

I recently started working on a YouTube channel, and while making my first video I realized that it might look good to use this same technique in the sections where I'm showing my monitor, to give them a more "polished" feel. After attempting to do this manually for each clip in DaVinci, it quickly became apparent that it would be prohibitively time-consuming. But perhaps I could automate it somehow? I looked into DaVinci extensions and scripts, but I was still a total noob with it, and that would have been far too much of a time investment.

A few months previously I had used a CLI tool, FFmpeg, to extract video frames for an AI training tool I was working on. Could it be used for something like this?

Learning FFmpeg

After scouring the web, I managed to put together the following script, achieving my desired effect:

# Create inner clip (the video scaled down to 1800x1012)
innerClip="inner_$1"
ffmpeg -i "$1" -vf scale=1800:1012 -preset slow -crf 18 "$innerClip" -y

# Create outer clip (a blurred copy of the full-size video)
outerClip="outer_$1"
ffmpeg -i "$1" -vf "boxblur=30" -c:a copy "$outerClip" -y

# Apply shadow to outer clip (shadow.png must be supplied)
shadowClip="shadow_$1"
ffmpeg -i "$outerClip" -i shadow.png -filter_complex "overlay" "$shadowClip" -y

# Overlay inner clip on the shadow clip, centered at 60:34
ffmpeg -i "$shadowClip" -i "$innerClip" -filter_complex "overlay=60:34" result.mp4 -y

It works by making two copies of the video: innerClip and outerClip. It applies a blur effect to outerClip, overlays a shadow image on top of it (which you must supply as shadow.png; I created mine manually in GIMP), and finally places the slightly shrunken innerClip on top. Here's an example of the transformation:

Before

After

You can probably tell from the fixed numbers in the script that it only works for 1080p videos, but this was fine for my purposes.
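
For reference, those constants are just a fixed ratio applied to 1920x1080, with the result centered. Here's a quick sketch of the general math (a hypothetical frameGeometry helper, not part of the script, but it mirrors what the web app does later):

// Compute the scaled inner size and centered overlay offset for any video.
// Dimensions are rounded down to even numbers, since yuv420p encoders
// require them. For 1920x1080 at a ratio of 0.9375 this yields 1800x1012
// and an offset of 60:34, matching the hard-coded values above.
function frameGeometry(width: number, height: number, ratio = 0.9375) {
  const innerWidth = 2 * Math.floor((width * ratio) / 2);
  const innerHeight = 2 * Math.floor((height * ratio) / 2);
  return {
    innerWidth,
    innerHeight,
    x: (width - innerWidth) / 2,
    y: (height - innerHeight) / 2,
  };
}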

Turning This Into a Web App: framr.dev

It wasn't long after this that I happened to come across FFmpeg ported to WebAssembly! I was occupied with several other projects at the time, but I knew right away that I wanted to migrate my cumbersome script into a nice, dynamic web application, so I whipped up a little prototype that night.

Generating Shadows Dynamically

After putting together a little UI with Angular, I realized that if I wanted this to work for every video size (not just 1080p), I would need a way to dynamically generate shadows of any size. So after a video is dropped into the app, I use a canvas to generate a shadow of the appropriate size:

// Ensure it is large enough to contain shadow
this.context.canvas.width = video.Width;
this.context.canvas.height = video.Height;

// Clear canvas
this.context.clearRect(0, 0, video.Width, video.Height);

// Calculate shadow dimensions
let innerWidth = video.Width * this.frameRatio;
let innerHeight = video.Height * this.frameRatio;
let x = (video.Width - innerWidth) / 2;
let y = (video.Height - innerHeight) / 2;

// Create shadow
this.context.rect(x, y, innerWidth, innerHeight);
this.context.shadowColor = '#000000';
this.context.shadowBlur = 20;
this.context.shadowOffsetX = 10;
this.context.shadowOffsetY = 10;
this.context.fill();

// Load FFmpeg
if (!this.ffmpeg.isLoaded()) {
    await this.ffmpeg.load();
}

// Save canvas image to FFmpeg
const buffer = Buffer.from(this.canvasElement.toDataURL().split(';base64,')[1], 'base64');
this.ffmpeg.FS('writeFile', 'shadow.png', buffer);
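One note on that last step: Buffer is a Node API, so in the browser it relies on a polyfill from the bundler. If you'd rather avoid that, a browser-native version might look like this (a minimal sketch, not the app's actual code):

// Hypothetical Buffer-free alternative: decode the canvas data URL's
// base64 payload with atob and copy it into a Uint8Array.
function canvasToPng(canvas: HTMLCanvasElement): Uint8Array {
  const base64 = canvas.toDataURL('image/png').split(';base64,')[1];
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}

// this.ffmpeg.FS('writeFile', 'shadow.png', canvasToPng(this.canvasElement));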

Running Our FFmpeg Script

After we write this file to FFmpeg's in-memory file system, we can run the steps from our original script:

// Save video to FFmpeg
this.ffmpeg.FS('writeFile', video.File.name, await fetchFile(video.File));

// Begin processing
let outputName = 'framed_' + video.File.name;
await this.ffmpeg.run('-i', video.File.name, '-vf', 'scale=' + innerWidth + ':' + innerHeight, '-preset', 'slow', '-crf', '18', 'inner-clip.mp4');
await this.ffmpeg.run('-i', video.File.name, '-vf', 'boxblur=30', '-c:a', 'copy', 'outer-clip.mp4');
await this.ffmpeg.run('-i', 'outer-clip.mp4', '-i', 'shadow.png', '-filter_complex', 'overlay', 'shadow-clip.mp4');
await this.ffmpeg.run('-i', 'shadow-clip.mp4', '-i', 'inner-clip.mp4', '-filter_complex', 'overlay=' + x + ':' + y, outputName);

// Download result
const data = this.ffmpeg.FS('readFile', outputName);
this.downloadBlob(new Blob([data.buffer]), outputName);

While the WebAssembly wrapper exposes proper functions for some operations, I found that not everything I needed here was supported, in which case we can just use the ffmpeg.run function to pass raw FFmpeg arguments.
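
For completeness, here's the setup the snippets above assume (the 0.x @ffmpeg/ffmpeg package, which is what the run/FS/isLoaded calls suggest):

// ffmpeg.wasm setup: createFFmpeg returns the instance whose load, run,
// and FS methods are used throughout; fetchFile reads a File into bytes.
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });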

I fed it a few files, and it yielded exactly the same result as my original script! However, it was much, much slower...

Optimizing FFmpeg

Running natively on my desktop, it was so fast that I never really cared about optimization. On WebAssembly, however, it was roughly 10x slower... So I started looking into how I could make some performance improvements.
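
One simple way to see where the time goes is to wrap each ffmpeg.run call with a timer (a minimal sketch, not code from the actual app):

// Hypothetical timing wrapper around ffmpeg.run, handy for comparing the
// multi-pass pipeline against the single-pass version below.
async function timedRun(
  ffmpeg: { run(...args: string[]): Promise<void> },
  label: string,
  ...args: string[]
): Promise<void> {
  const start = performance.now();
  await ffmpeg.run(...args);
  console.log(`${label} took ${((performance.now() - start) / 1000).toFixed(1)}s`);
}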

The biggest issue with my existing code was that I was applying the steps one at a time, meaning each layer of the video was individually processed and written to a file before finally combining them all into one video. But as it turns out, we can actually use a complex filter to run all of these steps in a single pass:

// Begin processing
let outputName = 'framed_' + video.File.name;
await this.ffmpeg.run('-i', video.File.name, '-i', 'shadow.png', '-filter_complex', 
'[0]boxblur=30[a];[a][1]overlay[b];[0]scale=' + innerWidth + ':' + innerHeight + '[c];[b][c]overlay=' + x + ':' + y, '-preset', 'slow', '-crf', '18', outputName);

// Download result
const data = this.ffmpeg.FS('readFile', outputName);
this.downloadBlob(new Blob([data.buffer]), outputName);

[0]boxblur=30[a] - Apply a boxblur to video input 0, label the result a

[a][1]overlay[b] - Overlay the shadow image (input 1) on top of a, label the result b

[0]scale=innerWidth:innerHeight[c] - Scale video input 0 to the specified dimensions, label the result c

[b][c]overlay=x:y - Overlay c on top of b at the specified position offset, producing the final output

This makes each step run in memory, with nothing written to disk until the final step, which notably improves performance. It is still much slower than running the script natively, but it's pretty cool that we can run all of this from the browser regardless.
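
To keep the geometry and the filter string in sync, the graph can also be assembled from the same values computed earlier (a hypothetical buildFrameFilter helper, not part of the app):

// Assemble the single-pass filtergraph described above:
// input 0 is the video, input 1 is the generated shadow PNG.
function buildFrameFilter(innerWidth: number, innerHeight: number, x: number, y: number): string {
  return [
    '[0]boxblur=30[a]',                         // blurred backdrop
    '[a][1]overlay[b]',                         // shadow over the blur
    `[0]scale=${innerWidth}:${innerHeight}[c]`, // shrunken original
    `[b][c]overlay=${x}:${y}`,                  // centered final overlay
  ].join(';');
}

// await this.ffmpeg.run('-i', video.File.name, '-i', 'shadow.png',
//   '-filter_complex', buildFrameFilter(innerWidth, innerHeight, x, y),
//   '-preset', 'slow', '-crf', '18', outputName);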
