Example repo here:
A while back, I wrote a version of this article using Ruby on Rails. While I was happy with how it turned out, at heart I'm a Node.js developer. Because of that, I decided to redo that project in my language of choice.
I'll also be using AdonisJS as my backend framework instead of Express. I've found the AdonisJS framework has a lot of the conventions and features that I enjoyed about Rails, but within the JS ecosystem.
What We'll Be Building:
At the end of this project, we'll have a simple webapp that is capable of doing the following:
- Uploading user-submitted .mp4 files to an S3 bucket
- Transcoding those .mp4 files into an HLS playlist for streaming
- Serving those HLS videos via the CloudFront CDN
Since we'll be using S3 and CloudFront, you will need an AWS account. However, the AWS charges for those two services should be nominal given our current use case.
Getting Started:
Start by running the following command to initialize a new AdonisJS app.
npm init adonis-ts-app@latest adonis-vod
Select the following settings:
CUSTOMIZE PROJECT
❯ Select the project structure · web
❯ Enter the project name · adonis-vod
❯ Setup eslint? (y/N) · false
❯ Configure webpack encore for compiling frontend assets? (y/N) · false
After the CLI tool finishes running, your new Adonis app will be set up in the adonis-vod directory. Open that up in your IDE of choice.
Next, we'll need to install Lucid, Adonis' default ORM, so that we can create models and interact with our database.
Run the following to install Lucid.
npm i @adonisjs/lucid
Then, after npm completes installation, configure Lucid by running the following command:
node ace configure @adonisjs/lucid
Select SQLite as a database driver, and for 'Select where to display instructions...' choose 'In the terminal'.
❯ node ace configure @adonisjs/lucid
❯ Select the database driver you want to use … Press <SPACE> to select
◉ SQLite
◯ MySQL / MariaDB
◯ PostgreSQL
◯ OracleDB
◯ Microsoft SQL Server
...
❯ Select where to display instructions … Press <ENTER> to select
In the browser
❯ In the terminal
From the CLI output, the only variable we'll worry about right now is the DB_CONNECTION env variable. Open env.ts and edit it to look like the following (note the addition of DB_CONNECTION at the bottom of the file):
import Env from '@ioc:Adonis/Core/Env'
export default Env.rules({
  HOST: Env.schema.string({ format: 'host' }),
  PORT: Env.schema.number(),
  APP_KEY: Env.schema.string(),
  APP_NAME: Env.schema.string(),
  DRIVE_DISK: Env.schema.enum(['local'] as const),
  NODE_ENV: Env.schema.enum(['development', 'production', 'test'] as const),
  DB_CONNECTION: Env.schema.string(),
})
If you're not familiar with AdonisJS, env.ts is NOT our .env file, but rather a file that checks the existence of the required env vars before starting our server.
If you open up the .env file, you should see DB_CONNECTION=sqlite on line 8, which is our actual env variable to select SQLite as our database connection.
The configuration settings for our SQLite database are stored in the config/database.ts file, but we don't need to make any modifications for our app to function.
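For reference, the sqlite connection block that the configure command generates in config/database.ts looks roughly like the following (a sketch from memory, so your generated file may differ slightly):

// config/database.ts (excerpt): roughly what the configure command generates
sqlite: {
  client: 'sqlite',
  connection: {
    filename: Application.tmpPath('db.sqlite3'),
  },
  migrations: {
    naturalSort: true,
  },
  healthCheck: false,
  debug: false,
},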
Next, we'll install the S3 driver for AdonisJS' Drive, which will allow us to upload and store our video files in AWS S3. Run the following command:
npm i @adonisjs/drive-s3
Once that finishes installing, run the configure command as well:
node ace configure @adonisjs/drive-s3
Once again, select "In the terminal" to display instructions. This time, we'll want to make updates to both the env variable names and the configuration settings.
First, open .env and change the newly created S3 env variables to the following:
AWS_ACCESS_KEY=dummyKey
AWS_SECRET_KEY=dummySecret
S3_BUCKET=dummyBucket
S3_REGION=dummyRegion
Note that we're changing the variable names from S3_KEY and S3_SECRET to AWS_ACCESS_KEY and AWS_SECRET_KEY, since we'll be using CloudFront as well. This name change isn't technically necessary, but I prefer to be clear that the AWS credentials we'll be using are for more than just S3. You can also delete the S3_ENDPOINT variable, since we won't be using that.
With the .env file updated, open up config/drive.ts and scroll down until you see the commented-out S3 driver settings. Uncomment the s3 driver configuration, and update the settings to the following:
s3: {
  driver: 's3',
  visibility: 'private', // <- This is VERY important
  key: Env.get('AWS_ACCESS_KEY'),
  secret: Env.get('AWS_SECRET_KEY'),
  region: Env.get('S3_REGION'),
  bucket: Env.get('S3_BUCKET'),
},
(If you see a TypeScript error with the driver property after installing and configuring @adonisjs/drive-s3, you might need to restart your TypeScript server.)
Take note that we updated the key and secret env variable names from the defaults to the new ones. We also changed the visibility setting from 'public' to 'private'. If you don't change that setting, you will run into errors when trying to upload to S3.
We also need to update our local drive settings in drive.ts. Update the settings to the following:
local: {
  driver: 'local',
  visibility: 'private',
  root: Application.tmpPath(''),
  serveFiles: false,
  basePath: '/',
},
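If you want a quick sanity check that both disks are wired up, here's a minimal sketch of Drive usage (the file path is just an example, and this would need to run inside an async function or an Ace command):

import Drive from '@ioc:Adonis/Core/Drive'

// Writes tmp/example/hello.txt via the local disk
await Drive.use('local').put('example/hello.txt', 'hello')

// Writes example/hello.txt to your S3 bucket via the s3 disk
await Drive.use('s3').put('example/hello.txt', 'hello')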
Since we went ahead and set up our environment variables for AWS, we should handle setting up our S3 bucket, CloudFront distribution, and IAM user as well.
Start by logging into the AWS Console. From there, type 'IAM' in the top search bar, then click the result to open the "Identity and Access Management" dashboard.
On the dashboard, select "users" on the left-hand navbar, then click the blue "Add users" button.
Type in a name for your user, then select "Access key - Programmatic access" for the credential type. Click "Next: Permissions" to continue.
For simplicity, we'll be using "Attach existing policies directly" with the AmazonS3FullAccess and CloudFrontFullAccess policies, but in a production app you would probably want to use a more restrictive policy following the principle of least privilege.
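For reference, a more restrictive S3 policy for this app might look something like the sketch below (the bucket name is a placeholder, and you'd scope the CloudFront permissions similarly):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}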
Simply check the box next to each Policy to attach it to our new user.
Click "Next: Tags" and skip this page by clicking "Next:Review" at the bottom right hand corner of the screen. As long as your user has the two permission policies from above listed under the "Permissions summary" you're all set to click "Create User"
The next screen will show you your credentials for the newly created user. As the page mentions, this is the last time AWS will show you these credentials, so it's best to download them as a CSV and store the file securely for future reference.
With our IAM user created, we can take those credentials from the CSV file and add them to our .env file. The CSV column Access key ID corresponds to our AWS_ACCESS_KEY variable, and Secret access key to AWS_SECRET_KEY.
Next, we'll create an S3 bucket to store our uploaded videos. In the same search bar you used to find 'IAM' start searching for 'S3' to open the 'S3' dashboard.
Click the orange "Create bucket" button to open up the bucket creation panel. You'll need to provide your bucket a globally unique name, but other than that, we won't need to change any other settings, so scroll to the bottom of the page and click the "Create bucket" button to finish creating your new bucket.
A quick note about S3 Buckets: By default, when you create a new Bucket the setting "Block all public access" will be set to true. Even though we will want our users to be able to access the media we store in our S3 bucket, we don't want users accessing files directly from S3.
Instead, we'll use a CloudFront CDN distribution to serve our S3 files. This gives us two main benefits: first, our media is served from whichever CloudFront server is closest to the requesting user, giving our app faster speeds; second, it's cheaper to serve our media from CloudFront than it is from S3.
Update the last two env variables, S3_BUCKET and S3_REGION, with the correct values for the S3 bucket you just created. For S3_BUCKET, you simply need the name, not the ARN.
Finally, let's set up our CloudFront distribution. Open up the CloudFront dashboard the same way we did with IAM and S3, then click "Create Distribution".
On the Create Distribution dashboard, the first setting we'll need to update is the "Origin domain". In our case, the origin should be the S3 bucket you just created.
In the "Choose origin domain" search dropdown, you should be able to see your newly created bucket listed. Choose that.
The "Name" field should populate with a default value after selecting your Bucket.
For "S3 bucket access" select "Yes use OAI (bucket can restrict access to only CloudFront)". Under "Origin access identitiy" there should be a default option for you to choose. Then finally under "Bucket policy" select "Yes, update the bucket policy".
You'll also need to update the Response headers policy to use the "CORS-with-preflight-and-SecurityHeadersPolicy" so that we don't run into any CORS issues.
In a later installment, we'll configure our CloudFront distribution to use signed cookies to limit media access to only our app users, but for now these settings are enough to get us up and running. Scroll to the very bottom and click "Create Distribution".
The final configuration step we need to take is adding our "tmp" directory to the exclude array in our tsconfig.json file. Since our application will be generating new files (.ts & .m3u8) in the tmp dir, we want to exclude that dir from being watched, so that the dev server doesn't automatically restart on those file changes. Open tsconfig.json and update the exclude array to look like the following:
...
"exclude": [
  "node_modules",
  "build",
  "tmp"
],
...
If you forget to add "tmp" to exclude, you'll see your dev server restarting a number of times while the transcode function we'll be implementing later is running.
With those configuration steps out of the way, we can start building out our app.
Run the following two commands to generate a database migration and a model for our Video object.
node ace make:migration videos
node ace make:model Videos
Note that AdonisJS will handle changing the word between singular and plural automatically, so node ace make:model Videos becomes app/Models/Video.ts.
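At this point, the freshly generated app/Models/Video.ts should look roughly like the following (a sketch of Adonis 5's default model scaffolding, including the imports we'll rely on in a moment):

import { DateTime } from 'luxon'
import { BaseModel, column } from '@ioc:Adonis/Lucid/Orm'

export default class Video extends BaseModel {
  @column({ isPrimary: true })
  public id: number

  @column.dateTime({ autoCreate: true })
  public createdAt: DateTime

  @column.dateTime({ autoCreate: true, autoUpdate: true })
  public updatedAt: DateTime
}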
Open the database/migrations/[timestamp]_videos.ts file, and update the public async up() method to look like the following:
public async up () {
  this.schema.createTable(this.tableName, (table) => {
    table.increments('id')
    table.string('name', 255)
    table.string('original_video', 255)
    table.string('hls_playlist', 255)

    /**
     * Uses timestamptz for PostgreSQL and DATETIME2 for MSSQL
     */
    table.timestamp('created_at', { useTz: true })
    table.timestamp('updated_at', { useTz: true })
  })
}
In the above code, we are adding 3 columns to our database table: name, which is the name of our video; original_video, which will store a reference to the path of our original video file in S3; and hls_playlist, which will store a reference to the HLS playlist file in S3 that our app will generate and upload.
Note that AdonisJS uses the snake_case convention for database column names in migration files.
Next, open app/Models/Video.ts and update the class definition to the following:
export default class Video extends BaseModel {
  @column({ isPrimary: true })
  public id: number

  @column()
  public name: string

  @column()
  public originalVideo: string

  @column()
  public hlsPlaylist: string

  @column.dateTime({ autoCreate: true })
  public createdAt: DateTime

  @column.dateTime({ autoCreate: true, autoUpdate: true })
  public updatedAt: DateTime
}
In the above code, you'll notice that we're adding the same database columns that we added in the migration, except this time as properties on our Video class, and in camelCase instead of snake_case.
With our database model and migration set up, we're almost ready to start using our database in our app. Before we can do so, though, we need to "apply" our migration to our database. If you're coming from a MongoDB or NoSQL background, the concept of migrations might be new to you, but fortunately applying database migrations is easy. Run the following command:
node ace migration:run
The above command will "apply" that migration by creating the tables and columns specified in our migration file, and setting them to the correct data type within the database. If you want to read up a little more on migrations, here's the AdonisJS docs on the topic.
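And if you ever need to undo your latest batch of migrations while experimenting, Lucid ships a rollback command as well:

node ace migration:rollback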
Now that our database migrations have been run, we can write our Controller, so that we can actually perform CRUD operations on our Video model. Run the following command:
node ace make:controller Videos
That will automatically generate an app/Controllers/Http/VideosController.ts file for you. We'll be adding 4 methods to our Videos controller so that we can create new videos, upload & transcode them, then watch our videos back via HLS streaming. The 4 methods will be:
- index() - List all the videos in our app
- create() - Render the video upload page
- store() - Save a new video
- show() - Watch a single video
For our VideosController, we'll also need to install a few more dependencies. Run the following:
npm install @ffmpeg-installer/ffmpeg @ffprobe-installer/ffprobe hls-transcoder
(Full Disclosure: hls-transcoder is a package I maintain)
Below is the implementation of the VideosController class. The only thing you'll need to update is {url: 'YOUR-CLOUDFRONT-URL-HERE'} in the show() method. As the comment there mentions, don't include the protocol, i.e. https://. This could be moved to an env variable, but for our purposes hard-coding is fine and not really a security risk in this instance.
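That said, if you'd prefer the env variable approach, here's a sketch of what it would look like (CLOUDFRONT_URL is a name I'm making up here; you'd add it to .env and to the rules in env.ts, just like we did with DB_CONNECTION):

// .env
// CLOUDFRONT_URL=your-distribution-domain.cloudfront.net

// Then in the show() method:
import Env from '@ioc:Adonis/Core/Env'

const cloudfront = { url: Env.get('CLOUDFRONT_URL') }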
Otherwise, feel free to copy/paste the Video Controller code from below, and I'll briefly explain what each method does.
import type { HttpContextContract } from '@ioc:Adonis/Core/HttpContext'
import Application from '@ioc:Adonis/Core/Application'
import Drive from '@ioc:Adonis/Core/Drive'
import Logger from '@ioc:Adonis/Core/Logger'
import * as fs from 'fs'
import * as path from 'path'
import ffmpeg from '@ffmpeg-installer/ffmpeg'
const ffprobe = require('@ffprobe-installer/ffprobe')
import Video from 'App/Models/Video'
import Transcoder from 'hls-transcoder'

export default class VideosController {
  public async index({ view }: HttpContextContract) {
    const videos = await Video.all()
    return view.render('videos/index.edge', { videos })
  }

  public async create({ view }: HttpContextContract) {
    return view.render('videos/create.edge')
  }

  public async show({ params, view }: HttpContextContract) {
    const video = await Video.findByOrFail('id', params.id)
    const cloudfront = { url: 'YOUR-CLOUDFRONT-URL-HERE' } // <- Put your cloudfront url here, DON'T include https://
    return view.render('videos/show.edge', { video, cloudfront })
  }

  public async store({ request, response }: HttpContextContract) {
    const videoFile = request.file('videoFile')
    const name = request.input('name')

    const video = await Video.create({
      name: name,
    })

    // Since id is generated at the database level, we can't use video.id before video is created
    video.originalVideo = `uploads/${video.id}/original.mp4`

    await videoFile?.moveToDisk(`uploads/${video.id}`, {
      name: `original.mp4`,
    }, 's3')

    await this.transcodeVideo(video)

    response.redirect().toPath('/videos')
  }

  private async transcodeVideo(video: Video): Promise<void> {
    const local = Drive.use('local')
    const s3 = Drive.use('s3')

    // Get FileBuffer from S3
    const videoFileBuffer = await s3.get(video.originalVideo)

    // Save S3 file to local tmp dir
    await local.put(`transcode/${video.id}/original.mp4`, videoFileBuffer)

    // Get reference to tmp file
    const tmpVideoPath = Application.tmpPath(`transcode/${video.id}/original.mp4`)

    const transcoder = new Transcoder(
      tmpVideoPath,
      Application.tmpPath(`transcode/${video.id}`),
      {
        ffmpegPath: ffmpeg.path,
        ffprobePath: ffprobe.path,
      }
    )

    // Log transcoder progress status
    transcoder.on('progress', (progress) => {
      Logger.info(progress)
    })

    // Run the transcoding
    await transcoder.transcode()

    // After transcoding, upload files to S3
    let files: string[] = []
    try {
      files = fs.readdirSync(Application.tmpPath(`transcode/${video.id}/`))
    } catch (err) {
      Logger.error(err)
    }

    // Use a for...of loop rather than forEach so each upload is actually awaited
    for (const file of files) {
      const extname = path.extname(file)
      if (extname === '.ts' || extname === '.m3u8') {
        const fileStream = await local.get(`transcode/${video.id}/${file}`)
        await s3.put(`uploads/${video.id}/${file}`, fileStream)
      }
    }

    // Then, clean up our tmp/ dir (rmSync is synchronous, so no await is needed)
    try {
      fs.rmSync(Application.tmpPath(`transcode/${video.id}/`), { recursive: true })
    } catch (err) {
      Logger.error(err)
    }

    video.hlsPlaylist = `uploads/${video.id}/index.m3u8`
    await video.save()
  }
}
With that code added, let me briefly explain what we're doing.
index() - The index method queries our database for every video, then renders the videos/index.edge template with the array of videos passed into the template as state.
create() - The create method returns the view with our form to upload and create new video objects.
show() - The show method, which accepts a video id as a url parameter, queries our database to find that video by the supplied id, and renders the videos/show.edge template with both the video and the url of our CloudFront distribution passed into the template as state.
The above 3 methods should be familiar if you've used frameworks like Rails or Laravel before. The 4th method, store(), is also part of the Rails and Laravel conventions, but we're adding a good bit of custom functionality here, so let's walk through it in more depth.
First, our store() method accepts 2 inputs from the request: videoFile and name. Next, we use the Video model class to create a new video object, and we pass in the name variable. Since we'll be using the video id in our transcoding and storage paths, we can't instantiate the new video object with the originalVideo or hlsPlaylist properties just yet.
After creating the new video, we can call video.originalVideo = `uploads/${video.id}/original.mp4` to set that property (it doesn't save to the database until we call the .save() method, though!).
Then, we use the .moveToDisk method from Drive to upload our user's video to S3 for storage.
In the next line, we call the private this.transcodeVideo() method and pass in our video as a parameter. Let's walk through what this method does.
First, transcodeVideo() gets references to the local and s3 Drives. Then, using the s3 Drive, we find the file we just uploaded by using the video.originalVideo property we just set, and save a reference to the file buffer.
Then, using the local Drive, we store that file in our tmp directory to use for transcoding. If it seems redundant to upload the file to S3 and then download it locally, it kind of is, but setting up our transcoder this way makes it significantly easier to move our transcoding to a background job if we decide to implement a Job Queue later on.
With the original video file saved to our tmp dir, we then pass that file to our transcoder and call the transcoder.transcode() method. The transcoder will emit progress events every time ffmpeg updates us on our transcoding progress, so I've included Logger.info calls to log that status as it comes through. Our frontend is very bare, so we won't be getting any progress updates or feedback on the frontend.
Finally, after the transcoder finishes running, we loop over the transcoder's output directory and upload the new .ts and .m3u8 files needed for HLS playback. After the files are uploaded, we remove the files in the tmp dir, then set the hlsPlaylist property on our video object before calling the video.save() method to write those property updates to the database.
Back in the store() method, after this.transcodeVideo() completes running, we redirect the user to the videos index, where their newly uploaded video should appear.
There's a lot happening in those 2 methods, and the code could (and should) be refactored before going into any sort of production usage, but for our example it works just fine.
You'll also remember that in a few of the methods we made mention of templates. These are references to Edge templates, AdonisJS's templating engine. Let's create those files now. Create a new folder, resources/views/videos, and add three new files:
resources/views/videos/create.edge
resources/views/videos/index.edge
resources/views/videos/show.edge
(This isn't really a frontend tutorial, so I'm loading video.js via CDN where needed, and I'm foregoing any styling or design. Basically, it's ugly, but it works.)
create.edge
<h1>Create Video</h1>
<a href="/videos">Back</a>
<hr />

<form
  action="{{ route('VideosController.store') }}"
  method="POST"
  enctype="multipart/form-data"
>
  <div>
    <p>
      <label for="name">Video Name</label>
    </p>
    <input type="text" id="name" name="name" />
  </div>
  <div>
    <p>
      <label for="videoFile">Choose a video file:</label>
    </p>
    <input type="file" id="videoFile" name="videoFile" accept="video/mp4" />
  </div>
  <br />
  <div>
    <button type="submit">Create Video</button>
  </div>
</form>
index.edge
<h1>Videos Index</h1>
<a href="/videos/create">Create new Video</a>
<hr />
<style>
  .card {
    margin-top: 2rem;
    margin-bottom: 2rem;
    padding-top: 1rem;
    padding-bottom: 1rem;
    box-shadow: 0 4px 8px 0 rgba(0,0,0,0.2);
    transition: 0.3s;
  }
  .card:hover {
    box-shadow: 0 8px 16px 0 rgba(0,0,0,0.2);
  }
  .container {
    padding: 2px 16px;
  }
</style>

@each(video in videos)
  <div class="card">
    <div class="container">
      <p><strong>Video Name:</strong> {{ video.name }}</p>
      <a href="/videos/{{ video.id }}">Watch</a>
    </div>
  </div>
@end
show.edge
<head>
  <link href="https://vjs.zencdn.net/7.19.2/video-js.css" rel="stylesheet" />
</head>

<h1>Show Video</h1>
<a href="/videos">Back</a>
<hr />

<p><strong>Video Name: </strong> {{ video.name }}</p>

<video-js id="vid1" width="852" height="480" class="vjs-default-skin" controls>
  <source
    src="http://{{ cloudfront.url }}/uploads/{{ video.id }}/index.m3u8"
    type="application/x-mpegURL">
</video-js>

<script src="https://vjs.zencdn.net/7.19.2/video.min.js"></script>
<script>
  var player = videojs('vid1', {
    limitRenditionByPlayerDimensions: false,
  });
  player.play();
</script>
We're almost ready to test our app. The last step before we can do so is to configure our routes to connect to our Videos controller.
Open the start/routes.ts file and add the following routes below the Hello World route.
Route.get('/videos/create', 'VideosController.create')
Route.get('/videos', 'VideosController.index')
Route.get('/videos/:id', 'VideosController.show')
Route.post('/videos', 'VideosController.store')
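As an aside, AdonisJS also supports resourceful routes, so the four routes above could be collapsed into a single declaration if you prefer (a sketch; both forms should behave the same for our four methods):

Route.resource('videos', 'VideosController').only(['index', 'create', 'store', 'show'])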
Once those routes are added, start the Adonis server in development mode by running:
npm run dev
In a web browser, open up http://localhost:3333/videos to find the videos index page.
Click the "Create new Video" to open up /videos/create
. I'm uploading a copy of Blender's Big Buck Bunny. Once you click the "Create Video" button, no feedback will be shown on the frontend. But, if you check the terminal where you're running your adonis server, you should start to see ffmpeg progress after a couple seconds.
(I'd also recommend not using a huge video file, just due to the lack of feedback.)
After video transcoding finishes, you'll automatically be redirected back to the video index, but this time, your new video should be listed.
If you click the "Watch" link, you'll be taken to the Show video page, where (fingers crossed) your video will be available for HLS playback inside a video.js player.
It works! To verify that our video is being streamed to us via HLS, you can open up the developer tools in the browser of your choice (I'm using Chrome) to look at the network requests.
If we look at the network tab, you can see the various .ts files being loaded for our video. (It's only loading at 720p, but that's a frontend video.js issue, so we won't troubleshoot that here.)
If we do want to test HLS adaptive playback though, we can use the Network throttling feature in Chrome to simulate a lower bandwidth connection. I'm using the "Slow 3G" preset.
You can see that on the slower connection, our HLS stream adapts to start using the lower-quality (and thereby smaller) versions to give the user a better playback experience.
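For a bit of context on why this works: the index.m3u8 we uploaded is a master playlist that lists each rendition with its bandwidth, and video.js switches between them based on measured throughput. A simplified sketch of a master playlist (the exact rendition file names depend on hls-transcoder's output, so these are illustrative):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360
360p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=842x480
480p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p.m3u8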
We now have a primitive, but functional, HLS transcoder and VOD streaming platform. In the next installment, we can implement CloudFront signed cookies to limit access to our videos, and also implement a Job Queue to give the end user a better experience during transcoding.
Top comments (3)
Great content!!
Have you thought of doing the transcoding on another EC2 instance whose job would be to just transcode video files?
Hey Bishal,
Thanks for the read! And funny that you mention that, because that's actually one of the ideas I was messing around with to keep extending this application.
I got a semi-working implementation using adonis-bull as a job queue, and I believe that module has functionality to set up workers for specific job types, i.e. running instances solely for transcoding.
The other option that I was kicking around was using AWS Elemental MediaConvert, but I haven't had enough time to fully dive into either of those options.