DEV Community

hil for SerpApi

Posted on • Originally published at serpapi.com

How to scrape a website with Python (Beginner tutorial)

If you've ever been curious about how to extract valuable data from websites, you're in the right place. Web scraping is a powerful tool for gathering information from the internet, and Python, with its rich ecosystem of libraries, makes this task easy for us.


In this blog post, we'll cover:

  • A list of tools we can use for web scraping with Python.
  • Simple web scraping for static websites.
  • Using Selenium for dynamic content and JavaScript-heavy sites.
  • Using MechanicalSoup to automate tasks in the browser.

We have a lot of libraries in Python that we can use for scraping data from a website. Here are some of them:

  • HTTP Libraries
    • Requests: A simple HTTP library for Python, built for human beings.
    • urllib: A module for fetching URLs, included with Python.
    • urllib3: A powerful, user-friendly HTTP client for Python.
    • httpx: A fully featured HTTP client for Python 3, which provides sync and async APIs, and support for both HTTP/1.1 and HTTP/2.
  • Parsing Libraries
    • Beautiful Soup: A library for pulling data out of HTML and XML files.
    • lxml: Processes XML and HTML in Python, supporting XPath and XSLT.
    • pyquery: A jQuery-like library for parsing HTML.
  • Web Drivers
    • Selenium: An automated web browser, useful for complex scraping tasks.
    • Splinter: An open-source tool for testing web applications.
  • Automation Tools
    • Scrapy: An open-source web crawling and scraping framework.
    • MechanicalSoup: A Python library for automating interaction with websites.
  • Data Processing
    • pandas: A fast, powerful, flexible, and easy-to-use data analysis tool.
  • JavaScript Support
    • Pyppeteer (Python port of Puppeteer): A tool for browser automation and web scraping.

Feel free to suggest if you know any other tools out there!

Step-by-step basic web scraping tutorial in Python

Here's a basic tutorial on web scraping in Python. For this example, we will use two popular libraries: requests for making HTTP requests and Beautiful Soup for parsing HTML.

Prerequisites:

  • Basic understanding of Python.
  • Python installed on your machine.
  • pip for installing Python packages.

Step 1: Install Necessary Libraries

First, you need to install the requests and BeautifulSoup libraries. You can do this using pip:

pip install requests beautifulsoup4


Step 2: Import Libraries

In your Python script or Jupyter Notebook, import the necessary modules:

import requests
from bs4 import BeautifulSoup


Step 3: Make an HTTP Request

Choose a website you want to scrape and send a GET request to it. For this example, let's scrape Google's homepage.

url = 'https://google.com'
response = requests.get(url)


Step 4: Parse the HTML Content

Once you have the HTML content, you can use Beautiful Soup to parse it:

soup = BeautifulSoup(response.text, 'html.parser')


Step 5: Extract Data

Now, you can extract data from the HTML. Let's say you want to extract all the headings:

headings = soup.find_all(['h1', 'h2', 'h3'])
for heading in headings:
    print(heading.text.strip())

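Beyond find_all, Beautiful Soup also supports CSS selectors through its select() method. Here is a small offline sketch (the HTML snippet is made up for illustration):

```python
from bs4 import BeautifulSoup

# A made-up HTML snippet, standing in for a fetched page
html = """
<html><body>
  <h1>Products</h1>
  <div class="item"><a href="/a">Item A</a></div>
  <div class="item"><a href="/b">Item B</a></div>
</body></html>
"""
soup = BeautifulSoup(html, 'html.parser')

# CSS selector: all links inside elements with class "item"
links = [a['href'] for a in soup.select('div.item a')]
print(links)  # ['/a', '/b']
```

CSS selectors are often more compact than chained find_all calls when the elements you want are identified by classes or nesting.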

Step 6: Handle Errors

Always make sure to handle errors like bad requests or connection problems:

if response.status_code == 200:
    # Proceed with scraping
    # ...
else:
    print("Failed to retrieve the web page")

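Putting the steps above together, it helps to separate fetching from parsing, so the parsing logic can be exercised on any HTML string. A minimal sketch (the function names are illustrative, not from any library):

```python
import requests
from bs4 import BeautifulSoup

def fetch_page(url):
    """Fetch a page, returning its HTML or None on failure."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()  # raise on 4xx/5xx status codes
        return response.text
    except requests.RequestException as exc:
        print(f"Failed to retrieve {url}: {exc}")
        return None

def extract_headings(html):
    """Pull the text of every h1-h3 element out of an HTML document."""
    soup = BeautifulSoup(html, 'html.parser')
    return [tag.text.strip() for tag in soup.find_all(['h1', 'h2', 'h3'])]

# The parser works on any HTML string, live or saved
sample = "<html><body><h1>Title</h1><h2>Sub</h2></body></html>"
print(extract_headings(sample))  # ['Title', 'Sub']
```

Keeping the two stages separate also makes it easy to cache fetched pages to disk while you iterate on the extraction code.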

Notes

We need two primary tools to perform web scraping in Python: an HTTP client and an HTML parser.

  • An HTTP client to fetch web pages, e.g. requests, urllib, pycurl, or httpx.
  • An HTML parser to extract data from the fetched pages, e.g. Beautiful Soup, lxml, or pyquery.
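In fact, both roles can be filled from the standard library alone: urllib fetches pages and html.parser extracts data. A minimal sketch, parsing an inline snippet here so it runs without a network call:

```python
from html.parser import HTMLParser
from urllib.request import urlopen  # would be used for live fetches

class HeadingParser(HTMLParser):
    """Collect the text content of h1-h3 tags."""
    def __init__(self):
        super().__init__()
        self.headings = []
        self._in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in ('h1', 'h2', 'h3'):
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in ('h1', 'h2', 'h3'):
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading:
            self.headings.append(data.strip())

parser = HeadingParser()
parser.feed("<html><body><h1>Title</h1><p>text</p><h2>Sub</h2></body></html>")
print(parser.headings)  # ['Title', 'Sub']
```

The third-party libraries exist because this event-based style gets verbose quickly; Beautiful Soup gives you a searchable tree instead.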

Here is a concrete example on how to use these tools on a real world use case: How to scrape Google search results with Python

Step-by-step scraping of dynamic content in Python

What if the content you want to scrape is not loaded initially? Sometimes, the data hides behind a user interaction. To scrape dynamic content in Python, which often involves interacting with JavaScript, you'll typically use Selenium.

Unlike the requests and BeautifulSoup combination, which works well for static content, Selenium can handle dynamic websites by automating a web browser.

Prerequisites:

  • Basic knowledge of Python and web scraping (as covered in the previous section).
  • Python installed on your machine.
  • The Selenium package and a WebDriver installed.

Step 1: Install Selenium

First, install Selenium using pip:

pip install selenium


Step 2: Download WebDriver

You'll need a WebDriver for the browser you want to automate (e.g., Chrome, Firefox). For Chrome, download ChromeDriver. Make sure the WebDriver version matches your browser version, and place the WebDriver in a known directory or add it to your system PATH. Note that recent Selenium releases (4.6+) ship with Selenium Manager, which can download a matching driver automatically.

Step 3: Import Selenium and Initialize WebDriver

Import Selenium and initialize the WebDriver in your script.

from selenium import webdriver

driver = webdriver.Chrome()

Step 4: Fetch Dynamic Content

Open a website and fetch its dynamic content. Let's use https://google.com as an example.

url = 'https://google.com'
driver.get(url)


Step 5: Print title

Here is an example of reading a property of the loaded page:

print(driver.title)

Try to run this script. You'll see a new browser window pop up and open the page.

Step 6: Interact with the Page (if necessary)

If you need to interact with the page (like clicking buttons or filling forms), you can do so:

from selenium.webdriver.common.by import By

text_box = driver.find_element(By.NAME, "my-text")
text_box.send_keys("hello")  # fill the text field
submit_button = driver.find_element(By.CSS_SELECTOR, "button")
submit_button.click()


Step 7: Scrape Content

Now, you can scrape the content. For example, to get all paragraphs:

from selenium.webdriver.common.by import By

paragraphs = driver.find_elements(By.TAG_NAME, 'p')
for paragraph in paragraphs:
    print(paragraph.text)


Step 8: Close the Browser

Once done, don't forget to close the browser:

driver.quit()


Additional Tips:

  • Selenium can perform almost all actions that you can do manually in a browser.
  • For complex web pages, consider using explicit waits to wait for elements to load.
  • Remember to handle exceptions and errors.

Here is a video tutorial on using Selenium for automation in Python by NeuralNine on YouTube.

A basic example of web scraping using MechanicalSoup

MechanicalSoup is a Python library for web scraping that combines the simplicity of Requests with the convenience of BeautifulSoup. It's particularly useful for interacting with web forms, like login pages. Here's a basic example to illustrate how you can use MechanicalSoup for web scraping:

Please note that MechanicalSoup doesn't handle JavaScript-loaded content. That's a task for Selenium 😉

Prerequisites:

  • Python installed on your machine.
  • Basic understanding of Python and HTML.

Step 1: Install MechanicalSoup

You can install MechanicalSoup via pip:

pip install mechanicalsoup


Step 2: Import MechanicalSoup

In your Python script, import MechanicalSoup:

import mechanicalsoup


Step 3: Create a Browser Object

MechanicalSoup provides a StatefulBrowser class, which you'll use to interact with web pages:

browser = mechanicalsoup.StatefulBrowser()

Step 4: Make a Request

Let's say you want to scrape data from a simple example page. You can use the browser object to open the URL:

url = 'https://google.com'
print(browser.get(url))


Step 5: Parse the HTML Content

The browser now holds the fetched page. You can access the parsed BeautifulSoup object via browser.page:

page = browser.page
print(page)

Step 6: Extract Data

Now, you can extract data using BeautifulSoup methods. For example, to get all paragraphs:

page = browser.page
p_tags = page.find_all('p')
print(p_tags)

Step 7: Handling Forms (Optional)

If you need to interact with forms, you can do so easily.

Given this HTML content on a page:

<form action="/pages/forms/" class="form form-inline" method="GET">
<label for="q">Search for Teams:  </label>
<input class="form-control" id="q" name="q" placeholder="Search for Teams" type="text"/>
<input class="btn btn-primary" type="submit" value="Search"/>
</form>

To submit a search query on a form:

# Select the form
browser.select_form('form')

# Fill the form with your query
browser['q'] = 'red'

# Submit the form
response = browser.submit_selected()

# Print the URL (assuming the form is correctly submitted and a new page is loaded)
print("Form submitted to:", response.url)


What if you have multiple forms on the page?

select_form and other MechanicalSoup methods accept a CSS selector parameter, so whether the form has an id or a class, you can always target it specifically.

When to use MechanicalSoup (From their documentation)

MechanicalSoup is designed to simulate the behavior of a human using a web browser. Possible use cases include:

- Interacting with a website that doesn’t provide a webservice API, out of a browser.

- Testing a website you’re developing

Why use Python for web scraping?

Python is a popular choice for web scraping for several reasons. Here are the top three:

  1. Seamless Integration with Data Science Tools: After scraping data from the web, you often need to clean, analyze, and visualize this data, which is where Python's data science capabilities come in handy. Tools like pandas, NumPy, and Matplotlib integrate seamlessly with web scraping libraries, allowing for an efficient end-to-end process.
  2. Rich Ecosystem of Libraries: Python has a vast selection of libraries specifically designed for web scraping, such as Beautiful Soup, Scrapy, Selenium, Requests, and MechanicalSoup. These libraries simplify the process of extracting data from websites, parsing HTML and XML, handling HTTP requests, and even interacting with JavaScript-heavy sites. This rich ecosystem means that Python offers a tool for almost every web scraping need, from simple static pages to complex, dynamic web applications.
  3. Ease of Learning and Use: Python is known for its simplicity and readability, making it an excellent choice for beginners and experienced programmers alike. Its straightforward syntax allows developers to write less code compared to many other programming languages, making the process of writing and understanding web scraping scripts easier and faster. This ease of use is particularly beneficial in web scraping, where scripts can often become complex and difficult to manage.

That's it! I hope you enjoyed this tutorial!
