Note: This post is for educational purposes only, to demonstrate the use of web scraping.
Have you ever been in a situation where your final exams are a week away and you haven't attended a single lecture (I mean mindfully, of course)? Then you ask YouTube for help (happens in my case every time), and what you get is a huge playlist (even larger than node_modules) to watch, with limited data or data speed at your place.
Or maybe you wanted to learn a new skill/language/framework from YouTube and found a good "playlist", but have limited storage space on your phone.
Now, there are websites and applications to download YouTube videos.
OOPS! Either they download one video at a time, or, if they download a complete playlist at once, they are paid. Sure, you can download the playlist videos one by one (unless it's a huge one). Exams are important, so you could pay; and if it's about learning, let it take some more time, we have our entire lives, right?
Wait a minute, you are a Python developer. Why pay for a service you can build with a few lines of code?
This post is about a simple project, "YoPlaDo - YouTube Playlist Downloader", built using Python. We will write a program that takes a YouTube playlist link, scrapes all the video links using Selenium, and downloads the videos using youtube-dl.
Have you ever searched for and downloaded images? Ever hit Ctrl+C and Ctrl+V (if you know, you know)? Or submitted an assignment with solutions you found online? Basically, that is what scraping is.
Collecting data, or to be more specific, extracting data from a website is web scraping. Instead of doing things manually, you can automate them. That's what a web scraper does: you give it a list of things to extract, and it goes shopping (scraping) on a website.
For example, if you need an image, it searches for img tags.
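To make that concrete, here is a tiny sketch (not part of the final project) using Python's built-in html.parser to pull img tags out of an HTML snippet. The snippet and file names are made up for illustration:

```python
from html.parser import HTMLParser

class ImageScraper(HTMLParser):
    """Collects the src attribute of every <img> tag it sees."""
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.images.append(src)

# A toy HTML snippet standing in for a real page
html = '<div><img src="cat.png"><p>hello</p><img src="dog.jpg"></div>'
scraper = ImageScraper()
scraper.feed(html)
print(scraper.images)  # ['cat.png', 'dog.jpg']
```

Real scrapers do the same thing at scale: walk the markup, match the tags you care about, collect their attributes.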
These days, web scraping is used in every field, from digital marketing to data science and AI.
Accordingly, various languages provide libraries, frameworks, and tools for building a web scraper or a web crawler. Python has Selenium, Beautiful Soup, Scrapy, and a few more.
In this post, we will build a basic project with Selenium for a dynamic website.
"Selenium is a portable framework for testing web applications." - Wikipedia
Primarily it is for automating web applications for testing purposes, but is certainly not limited to just that.
Boring web-based administration tasks can (and should) be automated as well.
I will be explaining things in a simpler way. For detailed information, you can go through the Docs.
"Program to download videos from YouTube.com and other video sites" - Pypi
For detailed information, you can go through the Docs.
We will extract the links from a playlist (like https://www.youtube.com/playlist?list=) and download each video automatically with the program. This is just a basic project overview for beginners; you can go through the documentation and make improvements.
If you want to follow along with the post, you can use "https://www.youtube.com/playlist?list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-" as the input link, since that is what the example below uses; it will make things easier to follow. Also, make sure the playlist you use has videos that are downloadable and contains no deleted videos. That problem can be solved with an exception block, but to keep this post simple for beginners, the block hasn't been added.
Before importing, make sure you have installed the required libraries. You can use:
pip install selenium
pip install youtube-dl
from selenium import webdriver
import time
import youtube_dl
import os
url = input("Enter the Youtube Playlist URL : ")
driver = webdriver.Chrome()
driver.get(url)
time.sleep(5)
This initiates the Chrome browser with the input URL.
time.sleep(5) gives the page a few seconds to load before we start scraping.
playlist = []
videos = driver.find_elements_by_class_name('style-scope ytd-playlist-video-renderer')
First we create an empty list "playlist" to store all the links to be extracted.
Then webscraping comes into play.
For simpler understanding, the line
driver.find_elements_by_class_name('style-scope ytd-playlist-video-renderer') extracts every element of the page source that falls under the specified class. (Note: in newer versions of Selenium, the find_elements_by_* helpers have been removed in favour of driver.find_elements(By.CLASS_NAME, ...).)
for video in videos:
    link = video.find_element_by_xpath('.//*[@id="content"]/a').get_attribute("href")
    end = link.find("&")
    link = link[:end]
    playlist.append(link)

"""
For example, for a playlist with 6 videos, the raw extracted links look like:
Enter youtube playlist link : https://www.youtube.com/playlist?list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-
['https://www.youtube.com/watch?v=iyL9-EE3ngk&list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-&index=1',
 'https://www.youtube.com/watch?v=G7E8YrOiYrQ&list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-&index=2',
 'https://www.youtube.com/watch?v=79D4Y1cUK7I&list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-&index=3',
 'https://www.youtube.com/watch?v=MUe0FPx8kSE&list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-&index=4',
 'https://www.youtube.com/watch?v=UkpmjbHYV0Y&list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-&index=5',
 'https://www.youtube.com/watch?v=WTOFLmB9ge0&list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2-&index=6']
"""
A new term? Xpath?(superhero path?)
"XPath is a technique in Selenium to navigate through the HTML structure of a page."
For simpler understanding, it is just a path for finding specific tags.
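To get a feel for what such an XPath matches, here is a small sketch using Python's built-in xml.etree.ElementTree, which supports a subset of XPath. The markup below is a made-up, simplified stand-in for YouTube's real page structure:

```python
import xml.etree.ElementTree as ET

# Simplified, well-formed stand-in for one playlist entry's markup
snippet = """
<root>
  <div id="content">
    <a href="https://www.youtube.com/watch?v=abc123&amp;list=PLxyz&amp;index=1">Video title</a>
  </div>
</root>
"""
root = ET.fromstring(snippet)
# Same idea as the Selenium XPath: any element with id="content", then its <a> child
anchor = root.find('.//*[@id="content"]/a')
print(anchor.get("href"))
```

Selenium evaluates the same kind of expression against the live page in the browser instead of a static string.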
The loop inspects every element in the "videos" object, finds the anchor tag under the division with id="content", and extracts, or scrapes, its href.
The rest of the code is validation: it strips the playlist id and index number from each link, so only the direct link to the video remains.
""" After processing it looks like: Enter youtube playlist link : https://www.youtube.com/playlist?list=PLGzz7pyosmlJfx9ivigemSouoZR9uLT2- ['https://www.youtube.com/watch?v=iyL9-EE3ngk', 'https://www.youtube.com/watch?v=G7E8YrOiYrQ', 'https://www.youtube.com/watch?v=79D4Y1cUK7I', 'https://www.youtube.com/watch?v=MUe0FPx8kSE', 'https://www.youtube.com/watch?v=UkpmjbHYV0Y', 'https://www.youtube.com/watch?v=WTOFLmB9ge0'] """
Wondering why those specific classes?
The links to the videos live under those sections.
ydl_opts = {}  # youtube-dl options; an empty dict means defaults
os.chdir('C:/Users/Trideep/Downloads')
for link in playlist:
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download([link])
driver.close()
os.chdir changes the working directory to "Downloads", so that's where the videos get saved.
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
Here comes the Youtube-dl. Looping through the "playlist" list, each link is processed and downloaded using YoutubeDL.
"ydl.download('Url to directory')" processes the link and downloads it to the mentioned directory.
You can further specify the video format, quality, or filename using other youtube-dl options.
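For instance, a ydl_opts dictionary like the one below asks for the best available quality, names files after the video title, and skips unavailable or deleted videos instead of aborting (the very problem mentioned near the top of the post). The option names are from the youtube-dl documentation; this particular combination is just an illustration:

```python
# Example youtube-dl options; pass this dict to youtube_dl.YoutubeDL(ydl_opts)
ydl_opts = {
    "format": "bestvideo+bestaudio/best",  # best quality, merged if needed
    "outtmpl": "%(title)s.%(ext)s",        # name files after the video title
    "ignoreerrors": True,                  # skip unavailable/deleted videos
}
```

With "ignoreerrors" set, a broken video in the playlist no longer stops the whole run.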
driver.close() closes the browser window opened by the driver.
And with merely 30 lines of code, you saved a few dollars and gifted yourself a good project.
Well, for proper execution you can add your own exception blocks and logic. I would suggest going through the documentation.
For complete code, you can visit:
Happy Scraping! Happy Coding.