Ochwada Linda

Fetching and Loading Data from Github

It's usually preferable to write a function that downloads and decompresses data from GitHub (or any online source) rather than doing it manually.

This is especially true if the data changes frequently: you can write a small script that uses the function to retrieve the most recent data (or you can set up a scheduled job to do that automatically at regular intervals). Automating the data retrieval process is also useful if you need to install the dataset on multiple machines.
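As a sketch of that scheduling idea, here is a small helper (hypothetical, not part of the fetch function below) that decides whether the local copy is stale, so a scheduled job only re-downloads when needed:

```python
import time
from pathlib import Path


def is_stale(path: Path, max_age_seconds: float) -> bool:
    """Return True if the file is missing or older than max_age_seconds."""
    if not path.is_file():
        return True
    return (time.time() - path.stat().st_mtime) > max_age_seconds


# A cron job (or Windows Task Scheduler entry) could run a script like:
# if is_stale(Path("datasets/files.tgz"), 24 * 3600):
#     ... call the fetch function to refresh the local copy ...
```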

Here is the function to fetch and load the data:

# ----- Libraries ---------
from pathlib import Path

import pandas as pd
import tarfile
import urllib.request
# --------------------------

# Function to fetch and load data --->
def function_name():
    zipped_path = Path("datasets/files.tgz")

    if not zipped_path.is_file():
        Path("datasets").mkdir(parents=True, exist_ok=True)
        url = "****/Datasets/raw/main/files.tgz"
        urllib.request.urlretrieve(url, zipped_path)

        with tarfile.open(zipped_path) as file_name:
            file_name.extractall(path="datasets")

    return pd.read_csv(Path("datasets/files/file.csv"))

data_file = function_name()

When function_name() is called, it looks for the dataset archive at datasets/files.tgz. If it does not find it, it creates a datasets directory inside your working directory, then downloads files.tgz from ****/Datasets/raw/main/files.tgz. This files.tgz archive contains the file file.csv.
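To see the extraction step in isolation, here is a self-contained sketch that builds a tiny .tgz locally (a stand-in for the real download, whose URL is omitted above) and unpacks it with tarfile the same way the function does:

```python
import tarfile
import tempfile
from pathlib import Path

# Build a tiny .tgz in a temp dir so the example needs no network access
tmp = Path(tempfile.mkdtemp())
csv_path = tmp / "file.csv"
csv_path.write_text("a,b\n1,2\n")

tgz_path = tmp / "files.tgz"
with tarfile.open(tgz_path, "w:gz") as tar:
    tar.add(csv_path, arcname="files/file.csv")

# Extract it exactly as the fetch function does
out_dir = tmp / "datasets"
with tarfile.open(tgz_path) as tar:
    tar.extractall(path=out_dir)

print((out_dir / "files" / "file.csv").read_text())  # a,b\n1,2\n
```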

The function will then load the CSV file into a pandas DataFrame object containing all the data, and return it.
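For illustration, the same pd.read_csv call on a tiny in-memory CSV (a stand-in for file.csv, with made-up columns) shows the kind of object the function returns:

```python
import io

import pandas as pd

# A stand-in for datasets/files/file.csv
csv_text = "id,value\n1,10\n2,20\n"
df = pd.read_csv(io.StringIO(csv_text))

print(type(df).__name__)  # DataFrame
print(df.shape)           # (2, 2)
```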

You can check your data with:

data_file.head(10)
# Or
data_file.head()

This displays the top 10 rows (or top 5 rows) of your data.
