Christian Heilmann

Originally published at wearedevelopers.com

How we edited 175 conference videos in 5 hours

Every year after the WeAreDevelopers World Congress is over, we have a ton of video footage to edit and release. Most of it is in raw format and needs editing by hand, but a lot of our sessions are also streamed live on YouTube and thus easier to re-use. This year, that meant 175 talks, panels and keynotes. My job was to find a way to edit their start and end times, add an intro and outro video and extract the audio for caption generation.

Last year this took a few months. This year, the conversion bit was done in about 5 hours. How? By using the power of … PHP!

For the impatient: here is the script you can also run. But let's step back a bit and look at what the task was and how we approached it.

The Setup

Here are the things we used to batch convert the videos in that amount of time:

  • 1 Mac Mini M2 16 GB
  • Google Sheets
  • PHP / HTML / CSS / JavaScript
  • Terminal
  • Google Drive
  • yt-dlp
  • ffmpeg

Step 1: Get and clean the data

The first step was to take a look at what we had. The third-party tool we use to track all sessions at the event is Swapcard, and whilst they provide an API, it seemed faster to do a full export of the data into a spreadsheet. Alas, that wasn't quite so easy, as the export came as two separate sheets: information about all the sessions and information about all the people (including speakers). The data was a lot more than we needed for this purpose, and it was messy, mixing upper and lower case and containing HTML in the descriptions.

A spreadsheet with lots of information in non-cleaned formats

The items I needed were:

  • The title of the talk
  • The description of the talk
  • The time and date to cross-reference with the conference schedule
  • The YouTube ID of the stream
  • The unique Swapcard ID of the item to cross-reference with the speaker sheet

Also, I am terrible with Excel, so the first thing I did was import the sheet into Google Sheets. There I deleted all the columns I didn't need and all the sessions that did not have a YouTube ID. This gave me a much smaller dataset, and what's really cool about Google Sheets is that you can share a sheet in TSV format as a URL. Go to File > Share > Publish to web, select "Tab separated values" and you get a URL you can use.

Google Sheets File menu open and with the Share entry selected showing the secondary menu with Publish to web as the selection

The Publish to the web dialog of Google Sheets with Tab Separated values selected as the output format

Tab separated values make more sense than CSV (comma separated values) in my book. With CSV exports, you always have the issue that values containing commas (like the title and description) get wrapped in quotes, and any quotes inside them need escaping in turn. With tabs, you do not have those issues.
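For illustration, here is the same made-up row in both formats: first as CSV with its quoting and doubled quotes, then as TSV, where a plain tab character separates the two values:

"Quotes, commas and you","A talk about ""proper"" escaping"

Quotes, commas and you	A talk about "proper" escaping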

That way I could cut down the dataset and use it in my own scripts. You can try this yourself in this Reading Google Sheet as TSV CodePen. The sheet itself is available here.

In JavaScript:

// TSV to JSON
let url = 'https://docs.google.com/spreadsheets/d/'+
          '1DCOd7LLkYxW7VOzjvxuovMCKElB4g_233h7WIvAuuxM/'+
          'pub?output=tsv';
// get the sheet as TSV          
fetch(url).then(response => response.text()).then(text => {
    // split at line breaks
    let lines = text.split('\n');
    // get header data of first line and split at tab
    let headers = lines[0].split('\t'); 
    // delete first entry (headers)
    lines.shift();
    // replace each line with the data stored in it
    lines = lines.map(line => line.split('\t'));
    // start dataset
    let dataset = [];
    // iterate over lines and assemble named objects
    lines.forEach(data => {
        let dummyobj = {}
        data.forEach((item, i) => {
            dummyobj[headers[i].trim()] = item.trim();
        });
        dataset.push(dummyobj);
    })
    // get glorious JSON
    console.log(JSON.stringify(dataset));
});

In PHP:

<?php
// get the sheet as TSV
$url = 'https://docs.google.com/spreadsheets/d/'.
'1DCOd7LLkYxW7VOzjvxuovMCKElB4g_233h7WIvAuuxM/'.
'pub?output=tsv';
$data = file_get_contents($url);
// split at line breaks
$list = explode("\n", $data);
// get header data of first line and split at tab
$keys = explode("\t", $list[0]);
// delete first entry (headers)
array_shift($list);
// iterate over lines and assemble named arrays
$dataset = array();
foreach($list as $l) {
    $row = explode("\t", $l);
    $n = array();
    for($i=0; $i<count($keys); $i++) {
        $n[trim($keys[$i])] = trim($row[$i]);
    }
    array_push($dataset, $n);
}
// get glorious JSON
echo json_encode($dataset);
?>

This gave me almost all the data, but one thing was missing: the correct start and end times of the videos. The streaming recordings had all kinds of introductory videos, sometimes started in the middle of a break and showed the recorded Q&A after the talks. None of which I wanted. What to do?

Step 2: Build a crowd-sourcing interface (of sorts)

Easy: use the power of having access to colleagues. Using the above live data from the Google Sheet, I embedded the sheet in an HTML document and offered links to the videos. I then asked colleagues to put their names next to a video and check it. Once they found the start and the end of the talk, they entered them into the sheet, which removed the video from the list. Here is what that looked like (slightly sped up):

The main stumbling block I found is that you can't just embed a YouTube video by changing the src of an iframe; you need to use the YouTube embed API instead. You can see this in action in this demo CodePen.
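Here is a minimal sketch of that approach, assuming a placeholder div and a sample video ID; showVideo() is just an illustrative helper, whereas onYouTubeIframeAPIReady is the global function the API script calls once it has loaded:

<div id="player"></div>
<script src="https://www.youtube.com/iframe_api"></script>
<script>
let player;
// the API script calls this global function once it is ready
function onYouTubeIframeAPIReady() {
    player = new YT.Player('player', {
        videoId: 'pOVduya8ytU'
    });
}
// swap to another video without re-creating the iframe
function showVideo(id) {
    player.loadVideoById(id);
}
</script>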

My colleagues came through like troopers, and two hours later I had all the start and end times in the spreadsheet. Once I had that, I could move on to the batch conversion.

Step 3: Pull in the timing data and batch convert the videos

In order to get the videos from YouTube, I am using yt-dlp, basically youtube-dl with a few more features.

Once installed, you can go to the Terminal and call yt-dlp with a valid YouTube ID. For example:

$ yt-dlp pOVduya8ytU

This is a pretty chatty experience, as yt-dlp tends to over-report:

Output of yt-dlp downloading a movie, giving all kinds of details you don't need or don't understand

This gave me the full video on my hard drive. There is also a feature in yt-dlp to download only a certain time segment of a video, but it seemed unreliable, and I had to use ffmpeg to add intro and outro videos in any case.
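For reference, that segment feature is the --download-sections option, which takes a *start-end time range (the times here are made up):

$ yt-dlp --download-sections "*00:05:00-00:45:00" pOVduya8ytU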

There were a few other niggles with downloading the full videos. First of all, the file name is terrible, often full of special characters, and it contains the YouTube ID. You can work around that by asking yt-dlp to use another file name:

yt-dlp --output simpler.mp4 pOVduya8ytU

As I wanted the original file name in order to clean it up, I asked yt-dlp to print it instead:

yt-dlp --print filename pOVduya8ytU

The next issue was that YouTube are clever clogs and always give the best format in terms of speed, size and performance. So quite a few of the videos were MKV files with WebM video and Opus audio. For the cleanup of the audio and to add my own intro and outro videos, I needed MP4 and M4A files though. There is an option to force that, so I used this one:

yt-dlp --output cleanedname.mp4 --merge-output-format mp4 pOVduya8ytU

That got me something to work with. Next I had to cut out the video segment I wanted. Using ffmpeg, this is pretty straightforward. For example, to get the video between second one and second two:

$ ffmpeg -ss 00:00:01 -to 00:00:02 -i myvideo.mp4 -c copy partvideo.mp4

However, if yt-dlp was chatty and flooded the terminal with messages, ffmpeg is a whole other level of screaming. You can make it less annoying by setting the log level to only show errors:

$ ffmpeg -hide_banner -loglevel error 

Now it came to adding the intro and outro files. The niggle I found was that some of the videos were 1920 × 1080 and others 1280 × 720, which meant I had to use different intro and outro videos. To get the size of a video file, you can use ffprobe, which is part of the ffmpeg install.

ffprobe -v error -select_streams v -show_entries stream=width,height -of csv=p=0:s=x video.mp4

ffprobe reporting the size of a video file in the terminal

OK, I forced the MP4 format and I know the video size. Adding intro and outro videos was the next step. This is a tad more involved ffmpeg sorcery and I am big enough to admit that I just looked it up on the web:

$ ffmpeg -i intro.mp4 -i video.mp4 -i outro.mp4 -filter_complex "[0:v] [0:a] [1:v] [1:a] [2:v] [2:a] concat=n=3:v=1:a=1 [v] [a]" -map "[v]" -map "[a]" final.mp4

The last bit was to get the audio stream of the video, which is a lot more straightforward:

$ ffmpeg -i movie.mp4 -vn -acodec copy audio.mp4

The wonderful thing about PHP is that you can run any other shell tool using the exec() function.
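As a minimal sketch, exec() returns the last line of a command's output, and you can pass an array and a status variable to capture the rest; the commands here are just examples:

<?php
// exec() returns the last line of the command's output.
$filename = exec("yt-dlp --print filename 'pOVduya8ytU'");

// Pass an array and a variable to also capture the full
// output and the exit code of the command.
exec("ffprobe -v error -select_streams v ".
     "-show_entries stream=width,height ".
     "-of csv=p=0:s=x 'video.mp4'", $output, $status);
if ($status !== 0) {
    echo "ffprobe failed\n";
}
?>

Which brings me to the final script I ran in a folder to get all the videos and convert them: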

<?php

// replaces non-valid characters in filenames
// converts spaces to hyphens
// removes leading and trailing hyphens
// and capitalizes the first letter of each word
function convertfilename($name) {
    $name = str_replace([
         "?", "[", "]", "/", "\\", "=", "<", ">", 
         ":", ";", ",", "'", '"', "&", "$", "#",
          "*", "(", ")", "|", "~" 
    ], "", $name);
    $name = preg_replace("/[\s-]+/", "-", $name);
    $name = trim($name, ".-_");
    return ucwords($name);
}

function convert($id,$start,$end,$uid){
    // Get the YouTube ID and remove share links.
    $id = preg_replace("/\?share.*/","",$id);
    $uid = str_replace('==','',$uid);
    // Set the ffmpeg log level.
    $tool = 'ffmpeg -hide_banner -loglevel error ';

    // Get the video file name.
    $cmd = "yt-dlp --print filename '$id'";
    $name = exec($cmd);
    $name = preg_replace("/\..*$/",".mp4",$name);

    // Convert the filename to a valid format.
    // Create a folder for the video 
    // and move into it.
    $dirname = convertfilename(
               str_replace('.mp4','',$name)).
               '---'.$uid;
    $name = convertfilename($name);
    mkdir($dirname);
    chdir($dirname);

    // Get the video from YouTube.
    echo "\n\n*** Getting ".$name." *** \n\n";
    $cmd = "yt-dlp --output '$name' "
           "--merge-output-format mp4 '$yt$id'";
    $x = exec($cmd);

    // Check if the video is small or big and define
    // intro and outro video names accordingly.
    $cmd = "ffprobe -v error -select_streams v ".
           "-show_entries stream=width,height -of "
           ".csv=p=0:s=x '$name'";
    $size = exec($cmd);
    echo '*'.$size.'*'."\n";
    $small = strstr($size, '1280') ? '-small' : '';
    $intro = "../Intro$small.mp4";
    $outro = "../Outro$small.mp4";

    // Get the part of the video from start to end. 
    echo "\n\n*** Getting clip from $start to $end *** \n\n";
    $chunkname = preg_replace("/\.mp4$/","-clip.mp4",$name);
    $cmd = $tool."-ss $start -to $end -i '$name' -c copy '$chunkname'";
    exec($cmd);

    // Add intro and outro videos by concatenating
    // the three files (same filter as shown above).
    echo "\n\n*** Adding intro and outro *** \n\n";
    $introname = str_replace('-clip.mp4','---'.$uid.'.mp4',$chunkname);
    $cmd = $tool."-i '$intro' -i '$chunkname' -i '$outro' ".
           "-filter_complex '[0:v] [0:a] [1:v] [1:a] [2:v] [2:a] ".
           "concat=n=3:v=1:a=1 [v] [a]' ".
           "-map '[v]' -map '[a]' '$introname'";
    exec($cmd);
    // Delete the old part of the video.
    unlink($chunkname);

    // Extract audio from the video.
    echo "*** Extracting audio from ".$name." *** \n";
    $cmd = $tool."-i '$introname' -vn -acodec copy '".str_replace(".mp4",".m4a",$introname)."'";
    exec($cmd);
    // Go back up one level.
    chdir('..');
}

// Get all the timings.
$timings = file_get_contents('timings.tsv');
$lines = explode("\n",$timings);

// If you only want to convert a few
// videos at a time, you can slice the 
// array like this:
// $lines = array_slice($lines,148,10);

// Loop through all timings and convert
// each video to a clip.
foreach($lines as $i => $l){
    // Skip empty lines (like a trailing newline).
    if(trim($l) === '') { continue; }
    echo ($i+1)." of ".sizeof($lines)."\n";
    $chunks = explode("\t",$l);
    $id = $chunks[6];
    $start = '0'.$chunks[1];
    $end = '0'.$chunks[2];
    $uid = $chunks[7];
    echo $chunks[3]."\n";
    echo ($id.'-'.$start.'-'.$end.'-'.$uid)."\n";
    convert($id, $start, $end, $uid);
}
?>

It's not pretty, it's not clever or suave. But it really did the job. I ended up with a folder full of videos all having the unique ID as part of the folder name.

The biggest time sink was the re-encoding with intro and outro. The rest is almost instantaneous, as ffmpeg makes copies for the other actions. One thing I also really like is that yt-dlp recognises when a video has already been downloaded and skips the download.

Using some more PHP, I merged the two TSV files and created a dataset of all the talk information. But that's the topic of another post - if you want.
