
Kade Esterline

Make Videos With JavaScript 2

Since my last blog post I've made a few changes to the app I've been working on. The app will take user input for a Stack Overflow question's URL and a YouTube video's URL, then create a narrated TikTok-style video similar to this project. First I want to talk about some of the changes I've made, then I'll talk about the main.js file and what it does.

Changes since last week

A few things have changed, but nothing major. I created a folder inside of /lib for the files used in edit-video.js, because the number of steps needed to actually edit the video together justified splitting them up. So far, I've been able to attach the audio files from the Google Cloud Text-to-Speech API to the screenshots from Puppeteer. I haven't found as much time to make progress as I'd initially hoped, but since I switched from etro.js to FFmpeg, it's been going a lot more smoothly. I decided to make the switch to FFmpeg after reading into Remotion and making an attempt at implementing it. I also found Shotstack, but didn't want to be potentially limited in the number of API calls I could make.

I also renamed the index.js file to main.js at some point and haven't changed it back yet.

Here are the changes I made to the /lib folder:

└─── lib
    │   api-call.js
    │   download-video.js
    │   parse-text.js
    │   screenshot.js
    │   text-to-speech.js
    │
    └─── edit-video
            add-audio.js
            edit-video.js
            image-to-video.js


There could still be a few more changes while I wrap the project up, and I'll include a similar update the next time I write a post.

The main.js File

main.js is the entry point to the app. In main.js, I'm importing modules from /lib as well as a few npm packages.

import chalk from "chalk";
import inquirer from "inquirer";
import { screenshot } from "./lib/screenshot.js";
import { makeApiCall } from "./lib/api-call.js";
import { downloadVideo } from "./lib/download-video.js";
import { convertTextToSpeech } from "./lib/text-to-speech.js";
import { editVideo } from "./lib/edit-video.js";

In the last post I talked a little about what each npm package I was using did, but I'm going to give a better explanation of how they're used in my app.

Chalk and Inquirer are used together when the app prints to the console and when it asks for user input. Inquirer is used to grab input and Chalk is used to style anything logged to the console.

Here's how they're being used in main.js

const greeting = chalk.green("STACKOVERFLOW VIDEO CREATOR");
console.log(greeting);

async function getQuestionURL() {
  const answer = await inquirer.prompt({
    name: "questionURL",
    type: "input",
    message: "Enter the stackoverflow questions URL:",
  });
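  // questionURL is a module-level variable declared elsewhere in main.js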
  questionURL = answer.questionURL;
}

greeting prints out in green text, and when getQuestionURL is called, the user is prompted to enter the URL of a Stack Overflow question.
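The YouTube URL is presumably collected the same way; here's a minimal sketch of a matching prompt, assuming the same pattern (getVideoURL is my name for it):

async function getVideoURL() {
  const answer = await inquirer.prompt({
    name: "videoURL",
    type: "input",
    message: "Enter the youtube videos URL:",
  });
  // videoURL is assumed to be another module-level variable
  videoURL = answer.videoURL;
}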

The remaining five import statements import modules exported from /lib. I'll show how each function is called in this post, and write a few more posts going over what those functions actually do.

After the app gets both URLs from the user, it asks the user to confirm their input before calling a function named startVideoEdit, which makes the calls to the /lib modules.
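The confirmation step isn't shown in the post, but with Inquirer's built-in confirm prompt it might look something like this sketch (confirmInput is a hypothetical name):

async function confirmInput() {
  const answer = await inquirer.prompt({
    name: "confirmed",
    type: "confirm",
    message: `Create a video from ${questionURL} with ${videoURL} as the background?`,
  });

  // Only kick off the edit pipeline if the user says yes
  if (answer.confirmed) {
    await startVideoEdit();
  }
}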

First, makeApiCall gets the text to be converted to audio. It's passed the question URL and a few global variables kept in main.js that are used throughout the app.

await makeApiCall(questionURL, questionDataObj, plainTextStrings, fileNames);
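For context, here's a rough idea of how those globals might be declared near the top of main.js. The exact shapes are assumptions based on how the objects get used below:

let questionURL;
let videoURL;

// Filled in by makeApiCall; title and textString are read later as arrays
const questionDataObj = { title: [], textString: [] };

// Plain-text answer strings pulled from the question page
const plainTextStrings = { strings: [] };

// File names the audio clips get saved under
const fileNames = [];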

Then screenshot is called and uses Puppeteer to take the screenshots needed. This function also takes the question URL and global variables.

await screenshot(questionURL, questionDataObj, fileNames);

Next, convertTextToSpeech gets called to get audio for the title, the question body, and the answers to the question. Text strings are passed into the function, along with a title to save the audio under. I'd eventually like to refactor this into just one function call, but for now it takes a few to get the job done. First I get the audio for the title, followed by the body of the question, before looping through an array of answer strings.

await convertTextToSpeech(questionDataObj.title[0], "question-title");
await convertTextToSpeech(questionDataObj.textString[0], "question-body");

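// fileNames[i + 1] starts at fileNames[2], so the first two slots
// presumably hold the title and body file names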
async function getAnswerAudio() {
  for (let i = 1; i <= plainTextStrings.strings.length; i++) {
    await convertTextToSpeech(
      plainTextStrings.strings[i - 1],
      fileNames[i + 1]
    );
  }
}

await getAnswerAudio();
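As a rough sketch of the refactor mentioned above, the three audio steps could be folded into a single helper (generateAllAudio is a hypothetical name, and I'm assuming fileNames[0] and fileNames[1] hold the title and body names):

async function generateAllAudio() {
  // Title and body first, then one clip per answer
  await convertTextToSpeech(questionDataObj.title[0], "question-title");
  await convertTextToSpeech(questionDataObj.textString[0], "question-body");

  // Same indexing as the original loop, just zero-based
  for (let i = 0; i < plainTextStrings.strings.length; i++) {
    await convertTextToSpeech(plainTextStrings.strings[i], fileNames[i + 2]);
  }
}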

Lastly, before editVideo is finally called to wrap everything up, downloadVideo is called to download the YouTube video that will be used as the background. Its only parameter is the YouTube video URL passed in by the user.

await downloadVideo(videoURL);
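Putting the pieces together, startVideoEdit presumably just runs these steps in order. This is only a sketch; the editVideo call isn't shown in the post, so its argument here is a guess:

async function startVideoEdit() {
  await makeApiCall(questionURL, questionDataObj, plainTextStrings, fileNames);
  await screenshot(questionURL, questionDataObj, fileNames);
  await convertTextToSpeech(questionDataObj.title[0], "question-title");
  await convertTextToSpeech(questionDataObj.textString[0], "question-body");
  await getAnswerAudio();
  await downloadVideo(videoURL);
  await editVideo(fileNames); // argument is a guess; the real call may differ
}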

That does it for main.js. If you have any questions, I'll do my best to reply. Thanks for reading, and feel free to check out some of my other posts on dev.to.
