Over the past year, I’ve been working with 8BitJosh to create and maintain a website to showcase his YouTube video content. As a Twitch streamer and YouTube content creator, reaching new audiences can be a difficult task. Josh wanted a website to help make his content easier to find, so that’s what I’ve helped him do.
I turned to Eleventy and Netlify to create a low-cost website for content creators. By leveraging Eleventy’s excellent data cascade and the YouTube Data API, I was able to create a simple integration that ensures Josh’s website always features his latest videos. Using the YouTube Data API’s playlistItems endpoint, I can readily fetch videos from the “recent videos” playlist on 8BitJosh’s YouTube channel.
While he produces videos to publish on YouTube, Josh primarily livestreams on Twitch. Thanks to a recent policy change, Twitch streamers are now allowed to simulcast to multiple platforms. When this change went into effect, Josh started simulcasting his livestreams to YouTube. As a result, past streams now appear in the recent videos playlist after his livestreams end. For the website, Josh only wanted to feature his produced videos, not past livestreams. This required updating the script that fetches videos from YouTube.
This post details the initial integration I wrote, and the changes required to exclude livestreams from the data returned by the YouTube Data API. It’s written in JavaScript and structured as a module for use in Eleventy’s data cascade.
Fetching Recent Videos from the YouTube Data API
Here’s the initial version of my script, which I saved as a global data file for Eleventy called newestVideos.js:
require('dotenv').config();
const Cache = require('@11ty/eleventy-fetch');
const PLAYLIST_ID = 'PLAYLIST_ID';
const NUM_VIDEOS = 6;
const API_KEY = process.env.YOUTUBE_API_KEY;
module.exports = async () => {
  try {
    const {items} = await Cache(
      `https://youtube.googleapis.com/youtube/v3/playlistItems?part=snippet&maxResults=${NUM_VIDEOS}&playlistId=${PLAYLIST_ID}&key=${API_KEY}`,
      {
        duration: '12h',
        type: 'json'
      }
    );
    return items;
  } catch (ex) {
    console.log(ex);
    return [];
  }
};
If you’re familiar with fetching remote data in Eleventy, this script is probably pretty straightforward. But let’s go through it a few lines at a time.
In the first few lines, we’re doing a little bit of setup. We use dotenv to read an environment variable that holds our API key for the YouTube Data API. This keeps us from storing our API key in the code, which is great for security. We use eleventy-fetch to make the API request, which makes it simple to cache API responses and avoid making too many requests. Then we create a few constants: a string for our playlist ID, the number of videos we want to display on the site, and our API key read from the environment.
require('dotenv').config();
const Cache = require('@11ty/eleventy-fetch');
const PLAYLIST_ID = 'PLAYLIST_ID';
const NUM_VIDEOS = 6;
const API_KEY = process.env.YOUTUBE_API_KEY;
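For local development, that environment variable can come from a .env file at the project root (on Netlify, it would instead be set as a build environment variable in the site settings). A minimal sketch, with a placeholder value:

# .env (not committed to the repo; placeholder value shown)
YOUTUBE_API_KEY=your-api-key-here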
We’re creating this as an asynchronous function so we don’t block other build tasks, and using arrow function notation since our module only contains this one public method. The request is wrapped in a try/catch block so API errors don’t break the build. If there’s an error, we simply log it and return an empty array.
module.exports = async () => {
  try {
    ...
  } catch (ex) {
    console.log(ex);
    // If failed, return back an empty array
    return [];
  }
};
Within the try block, we’re making a request to the playlistItems endpoint, setting the number of results to our desired video count and passing the playlist ID and our API key. For 8BitJosh’s site, the playlist ID points to his channel’s recent videos, which the API returns sorted newest to oldest by default; that’s precisely what we want. Our API request also includes the part parameter, which is how we tell the YouTube API what details we want returned. We’re requesting the snippet property, which includes details like the video title, the date it was published, and thumbnail images.
const {items} = await Cache(
  `https://youtube.googleapis.com/youtube/v3/playlistItems?part=snippet&maxResults=${NUM_VIDEOS}&playlistId=${PLAYLIST_ID}&key=${API_KEY}`,
  {
    duration: '12h', // 12 hours
    type: 'json'
  }
);
return items;
With the above request, we'll receive a response like this:
{
  "kind": "youtube#playlistItemListResponse",
  "etag": "FzfXldAFpYl7qA-fl-t58SyrNYw",
  "nextPageToken": "EAAaIVBUOkNBWWlFRFV6TWtKQ01FSTBNakpHUWtNM1JVTW9BUQ",
  "items": [
    {
      "kind": "youtube#playlistItem",
      "etag": "c6kYc1nwdfWI0aVAuj49856IJfE",
      "id": "VVU4QnJ6NnJXV3hBUUtWZFgxVmV3cjlRLjMza1RDSUVYQV9Z",
      "snippet": {
        "publishedAt": "2023-11-17T14:11:13Z",
        "channelId": "UC8Brz6rWWxAQKVdX1Vewr9Q",
        "title": "Spider-Man Remastered! Playing through the whole series!",
        "description": "Playing through Spider-Man 1, then Miles Morales, the Spider-Man 2!",
        "thumbnails": {
          "default": {
            "url": "https://i.ytimg.com/vi/33kTCIEXA_Y/default_live.jpg",
            "width": 120,
            "height": 90
          },
          "medium": {
            "url": "https://i.ytimg.com/vi/33kTCIEXA_Y/mqdefault_live.jpg",
            "width": 320,
            "height": 180
          },
          "high": {
            "url": "https://i.ytimg.com/vi/33kTCIEXA_Y/hqdefault_live.jpg",
            "width": 480,
            "height": 360
          },
          "standard": {
            "url": "https://i.ytimg.com/vi/33kTCIEXA_Y/sddefault_live.jpg",
            "width": 640,
            "height": 480
          },
          "maxres": {
            "url": "https://i.ytimg.com/vi/33kTCIEXA_Y/maxresdefault_live.jpg",
            "width": 1280,
            "height": 720
          }
        },
        "channelTitle": "8bitJosh",
        "playlistId": "UU8Brz6rWWxAQKVdX1Vewr9Q",
        "position": 0,
        "resourceId": {
          "kind": "youtube#video",
          "videoId": "33kTCIEXA_Y"
        },
        "videoOwnerChannelTitle": "8bitJosh",
        "videoOwnerChannelId": "UC8Brz6rWWxAQKVdX1Vewr9Q"
      }
    }
  ],
  "pageInfo": {
    "totalResults": 542,
    "resultsPerPage": 6
  }
}
The only property we’re interested in is the videos themselves, so we’re using destructuring to extract the items array of video objects. We cache this response for up to 12 hours and return the items.
And that’s it! We now have a usable collection of video objects.
Displaying Remote Data in an Eleventy Template
As a bit of an aside, I wanted to show how this collection is used in Eleventy. Because this module is set up as a global data file, the array of videos is available to us in every template under the name of the file, in this case newestVideos.
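Assuming the project uses Eleventy’s default directory layout (the global data directory is configurable), a data file like this lives in the _data folder:

_data/
  newestVideos.js   <- exposed to every template as `newestVideos`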
The site uses Nunjucks templates and outputs links to the videos as “cards”. Our template code looks something like this (some site-specific markup has been excluded for brevity):
<div class="video-list">
{% for item in newestVideos %}
<a href="https://www.youtube.com/watch?v={{ item.snippet.resourceId.videoId }}" target="_blank">
<div style="background-image: url({{ item.snippet.thumbnails.high.url }});"></div>
<h3>{{ item.snippet.title }}</h3>
<time>{{ item.snippet.publishedAt }}</time>
</a>
{% endfor %}
</div>
In this part of the template, we’re looping through newestVideos and outputting the video thumbnail, title, and publish date inside a link. The live version includes some classes, which are used to style these links to look like “cards”, along with some structured data markup for SEO purposes.
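For illustration, that structured data markup might look something like the schema.org VideoObject sketch below. This is an assumption on my part; the properties on the live site may differ, and values containing quotes would need escaping, which I’m glossing over here.

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "{{ item.snippet.title }}",
  "description": "{{ item.snippet.description }}",
  "thumbnailUrl": "{{ item.snippet.thumbnails.high.url }}",
  "uploadDate": "{{ item.snippet.publishedAt }}",
  "embedUrl": "https://www.youtube.com/embed/{{ item.snippet.resourceId.videoId }}"
}
</script>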
Excluding Livestream Videos from a YouTube Playlist
When Josh started simulcasting his streams, the above version of the script included past livestreams whenever the site was rebuilt. In order to exclude them, the script had to be significantly altered.
As of this writing, the YouTube Data API doesn’t include a native way to exclude livestream videos (that I could find, anyway). In searching for other solutions, the recurring pattern seemed to be that livestream videos need to be filtered out manually, so that’s what I opted to do.
To do this, the script now consists of three functions.
The first function, fetchFromYouTube, handles making the request to our playlist. Because the API response can now include livestreamed videos, there’s no guarantee that a single request will return the desired number of produced videos. So we now extract both items and nextPageToken, which the YouTube API uses to page through the playlist in the event we need to make multiple requests.
For this, I added a new constant, VIDEOS_PER_PAGE, which is the number of videos to fetch from the playlist at a time. This number is significantly higher than the number of videos we display, since livestreams may occur more frequently than produced videos; fetching more per page reduces the total number of API requests.
The cache time has also been reduced: with livestreams included, this playlist may update more often than before, and, as you’ll see, we can cache the subsequent requests for much longer.
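For reference, the updated constants look like this (the values match the complete script shown later in this post):

const NUM_VIDEOS_TO_SHOW = 6; // videos displayed on the site
const VIDEOS_PER_PAGE = 25;   // videos fetched from the playlist per request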
async function fetchFromYouTube(nextPage = null) {
  let url = `https://youtube.googleapis.com/youtube/v3/playlistItems?part=snippet&maxResults=${VIDEOS_PER_PAGE}&playlistId=${PLAYLIST_ID}&key=${API_KEY}`;
  if (nextPage) url += `&pageToken=${nextPage}`; // include page token if provided

  try {
    const {nextPageToken, items} = await Cache(
      url,
      {
        duration: '1h', // 1 hour
        type: 'json'
      }
    );
    return [nextPageToken, items]; // return token to fetch next page and the results as array
  } catch (ex) {
    console.log(ex);
    // If failed, return back an empty array
    return [];
  }
}
The second function makes a request to a different API endpoint. To determine whether a given video in our playlist is a past livestream, we have to query the videos endpoint and request its liveStreamingDetails. I called this function checkIfLiveStreamed.
This function queries for livestream details (which are start and end timestamps from when the stream was live). If a video was not livestreamed, no details are returned.
For any given video ID, it’s either livestreamed or it isn’t; this will not change after the video is published. Therefore, the response from this endpoint is cached for two weeks, allowing subsequent builds of the site to avoid duplicate requests for quite a while. An expiration time was added simply to keep our cache from growing overly large with outdated videos.
async function checkIfLiveStreamed(videoId) {
  let url = `https://youtube.googleapis.com/youtube/v3/videos?part=liveStreamingDetails&id=${videoId}&key=${API_KEY}`;

  try {
    // Cache results from an individual video for 2 weeks, this info almost definitely will not change
    const {items} = await Cache(
      url,
      {
        duration: '2w', // 2 weeks
        type: 'json'
      }
    );
    // if liveStreamingDetails is defined, this video was livestreamed
    return items[0].liveStreamingDetails != null;
  } catch (ex) {
    console.log(ex);
    // If failed, return false; assume video was not livestreamed
    return false;
  }
}
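For illustration, here’s roughly the shape of the response from the videos endpoint for a video that was livestreamed. This is an abbreviated, made-up example showing only the fields the check relies on; for a video that wasn’t livestreamed, the item simply has no liveStreamingDetails property.

{
  "kind": "youtube#videoListResponse",
  "items": [
    {
      "kind": "youtube#video",
      "id": "33kTCIEXA_Y",
      "liveStreamingDetails": {
        "actualStartTime": "2023-11-17T14:12:01Z",
        "actualEndTime": "2023-11-17T17:45:30Z"
      }
    }
  ]
}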
With these two functions defined, we’re now able to set up a loop to perform this querying and filtering. The basic logic is:
- Fetch videos from the playlist
- For each video, check if it was livestreamed
- If the video was not livestreamed, add it to our list of videos to display
- Repeat until we have the number of non-livestreamed videos we want to display
So, the function our module exports now looks like this:
module.exports = async () => {
  const newestVideos = []; // we'll push video items to this array
  let nextPage = null; // prep var for pagination

  // while we're filling the list
  while(newestVideos.length < NUM_VIDEOS_TO_SHOW) {
    // fetch a page of videos from our playlist
    let ytResults = await fetchFromYouTube(nextPage);
    if (ytResults.length === 0) break; // nothing returned, probably an error, end the loop

    nextPage = ytResults[0]; // save id for the next page if we need to paginate
    let videos = ytResults[1]; // videos

    for(let i = 0; i < videos.length; i++) {
      const videoId = videos[i].snippet.resourceId.videoId;

      // if video is not livestreamed, add it to our list
      const liveStreamed = await checkIfLiveStreamed(videoId);
      if (!liveStreamed) {
        newestVideos.push(videos[i]);
      }

      // If we've hit our display limit, stop checking if videos in this set were livestreamed
      if (newestVideos.length === NUM_VIDEOS_TO_SHOW) {
        break;
      }
    } // end for loop
  } // end while loop

  return newestVideos;
};
We create a loop that will run until we reach our desired number of videos. Within it, we request videos from the YouTube playlist. We save the next page token in case we need it, then loop through the returned videos. If a video was not livestreamed, it goes into our output list.
If we reach our limit within the current page of results, we end early to avoid making additional API requests. If we don’t hit our limit on the current page, the outer while loop runs another iteration, fetching the next page from the playlist API.
Now the complete newestVideos.js script looks like this (comments removed for brevity):
require('dotenv').config();
const Cache = require('@11ty/eleventy-fetch');

const PLAYLIST_ID = 'PLAYLIST_ID';
const NUM_VIDEOS_TO_SHOW = 6;
const VIDEOS_PER_PAGE = 25;
const API_KEY = process.env.YOUTUBE_API_KEY;

async function fetchFromYouTube(nextPage = null) {
  let url = `https://youtube.googleapis.com/youtube/v3/playlistItems?part=snippet&maxResults=${VIDEOS_PER_PAGE}&playlistId=${PLAYLIST_ID}&key=${API_KEY}`;
  if (nextPage) url += `&pageToken=${nextPage}`;

  try {
    const {nextPageToken, items} = await Cache(
      url,
      {
        duration: '1h',
        type: 'json'
      }
    );
    return [nextPageToken, items];
  } catch (ex) {
    console.log(ex);
    return [];
  }
}

async function checkIfLiveStreamed(videoId) {
  let url = `https://youtube.googleapis.com/youtube/v3/videos?part=liveStreamingDetails&id=${videoId}&key=${API_KEY}`;

  try {
    const {items} = await Cache(
      url,
      {
        duration: '2w',
        type: 'json'
      }
    );
    return items[0].liveStreamingDetails != null;
  } catch (ex) {
    console.log(ex);
    return false;
  }
}

module.exports = async () => {
  const newestVideos = [];
  let nextPage = null;

  while(newestVideos.length < NUM_VIDEOS_TO_SHOW) {
    let ytResults = await fetchFromYouTube(nextPage);
    if (ytResults.length === 0) break;

    nextPage = ytResults[0];
    let videos = ytResults[1];

    for(let i = 0; i < videos.length; i++) {
      const videoId = videos[i].snippet.resourceId.videoId;
      const liveStreamed = await checkIfLiveStreamed(videoId);
      if (!liveStreamed) {
        newestVideos.push(videos[i]);
      }

      if (newestVideos.length === NUM_VIDEOS_TO_SHOW) {
        break;
      }
    }
  }

  return newestVideos;
};
Conclusion
It would be nice if the YouTube API offered a parameter to exclude livestreamed videos, which would keep this as simple as my initial implementation, but overall it wasn’t particularly difficult to do. So far, this is working really well for 8BitJosh’s site.
That said, there may be room for improvement. Maybe I missed something in the YouTube API documentation, or maybe there’s a more concise way to do the filtering. If you have any suggestions, you can always send me a toot.