Webs254

Exploratory Data Analysis Ultimate Guide

Exploratory Data Analysis (EDA) is a crucial process in any data analysis project. It involves examining, cleaning, and visualizing data to discover patterns, relationships, and anomalies that may not be evident in the raw data. Python has become a popular tool for EDA because of its versatility and powerful data manipulation libraries such as NumPy, Pandas, and Matplotlib. In this article, we will explore the steps involved in performing EDA using Python, including data cleaning, summary statistics, and visualization.

Before we dive into EDA, let's first look at the tools and libraries we need. Python has many powerful libraries for data analysis and visualization. Some of the most popular ones include NumPy, Pandas, Matplotlib, and Seaborn. NumPy is a library for scientific computing with support for arrays and matrices. Pandas is a library for data manipulation and analysis that provides easy-to-use data structures and data analysis tools. Matplotlib is a plotting library for creating static, animated, and interactive visualizations in Python. Seaborn is a data visualization library based on Matplotlib that provides a high-level interface for creating informative and attractive statistical graphics.
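
These libraries are conventionally imported under short aliases, and the rest of the data analysis ecosystem assumes them. A minimal sketch of the idiomatic imports:

```python
# Conventional aliases used across the Python data ecosystem
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Each library exposes a version string, useful for checking your setup
print(np.__version__, pd.__version__)
```

Using these aliases consistently makes your code immediately readable to anyone familiar with the stack.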

Before starting the analysis, install these libraries using pip, the Python package installer. Open a command prompt or terminal and run the following commands:

pip install numpy
pip install pandas
pip install matplotlib
pip install seaborn


The first step in any EDA project is to load the data. Python has many libraries that can read data from various sources, such as CSV files, Excel spreadsheets, and databases. For our example, we will use a CSV file containing information on fitness, diet, and obesity.

To load the data, we can use Pandas' read_csv() function. Here's an example:

import pandas as pd

data = pd.read_csv('fitness_data.csv')


This will load the data into a Pandas DataFrame, which is a two-dimensional table-like data structure with rows and columns.
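
Since the article's fitness_data.csv is not included here, we can see what read_csv() produces by parsing a small inline CSV instead; the column names and values below are assumptions for illustration:

```python
import io
import pandas as pd

# A few hypothetical rows in the same shape as the article's dataset
csv_text = """age,weight,height,fitness,obesity
25,70.5,1.75,8.1,20.3
32,82.0,1.80,6.4,25.1
41,95.2,1.68,4.2,31.7
"""
data = pd.read_csv(io.StringIO(csv_text))

print(data.shape)           # (3, 5): three rows, five columns
print(list(data.columns))   # column labels parsed from the header row
```

The DataFrame keeps the header row as column labels and infers a numeric dtype for each column automatically.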

Once we have loaded the data, we need to clean and preprocess it. Data cleaning involves identifying and handling inconsistent data, removing duplicates, and converting data types. For instance, we can use the dropna() function to remove rows with missing data, and the drop_duplicates() function to remove duplicate rows. We can also use the astype() function to convert columns to their appropriate data types. Here's an example:


# Remove rows with missing data
data = data.dropna()

# Remove duplicate rows
data = data.drop_duplicates()

# Convert data types
data['age'] = data['age'].astype('int')
data['weight'] = data['weight'].astype('float')
data['height'] = data['height'].astype('float')
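
Note that astype() raises an error if any value cannot be converted. When a raw column may contain stray strings, pd.to_numeric() with errors="coerce" (which turns unparseable entries into NaN) is a more forgiving alternative; the values below are made up for illustration:

```python
import pandas as pd

# A hypothetical raw column with one unparseable entry
raw = pd.DataFrame({"weight": ["70.5", "82", "n/a", "95.2"]})

# Unparseable entries become NaN instead of raising an exception
raw["weight"] = pd.to_numeric(raw["weight"], errors="coerce")

print(raw["weight"].isna().sum())  # 1: only "n/a" failed to parse
```

The resulting NaN values can then be handled with dropna() or fillna() as shown later in this article.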


We also need to check the data's structure, column types, and any missing values. We can do this using various functions provided by Pandas. For example, the head() function displays the first few rows of the data, and the info() function shows information about the data, such as column names, data types, and the number of non-null values.

print(data.head())
print(data.info())

Once the data is clean, we can start exploring it with summary statistics. Summary statistics are descriptive measures that provide a quick overview of the data. They include measures such as the mean, median, mode, variance, standard deviation, and quartiles.

Pandas provides several functions for calculating summary statistics, such as mean(), median(), std(), min(), and max(). Here's an example of how to calculate summary statistics for a dataset:

# Calculate summary statistics for numeric columns
summary_stats = data.describe()

# Print the summary statistics
print(summary_stats)
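
describe() bundles these measures together, but the individual methods mentioned above can also be called directly. A quick sketch on a hypothetical weight column:

```python
import pandas as pd

# Hypothetical weights in kilograms
weights = pd.Series([60.0, 68.5, 72.0, 75.5, 80.0, 95.0])

print(weights.mean())                       # arithmetic mean, approx. 75.17
print(weights.median())                     # 73.75
print(weights.std())                        # sample standard deviation (ddof=1)
print(weights.min(), weights.max())         # range of the data
print(weights.quantile([0.25, 0.5, 0.75])) # quartiles
```

Calling these methods individually is handy when you only need one statistic, for example inside a groupby aggregation.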


After inspecting the data, we can proceed with the analysis. One of the most important tasks in EDA is to explore the relationships between variables. We can use scatter plots, line plots, and histograms to visualize the distribution of the data and the relationships between variables.

import matplotlib.pyplot as plt

# Scatter plot
plt.scatter(data["fitness"], data["obesity"])
plt.xlabel("Fitness")
plt.ylabel("Obesity")
plt.show()

# Line plot (sort by age so the line is drawn left to right)
data_sorted = data.sort_values("age")
plt.plot(data_sorted["age"], data_sorted["diet"])
plt.xlabel("Age")
plt.ylabel("Diet")
plt.show()

# Histogram
plt.hist(data["weight"], bins=20)
plt.xlabel("Weight")
plt.ylabel("Count")
plt.show()

In this example, the scatter plot shows the relationship between fitness and obesity, with a visible negative correlation: obesity tends to decrease as fitness increases. The line plot shows the trend between age and diet, with the diet score increasing as age increases. The histogram shows the distribution of weight; most people in this dataset weigh between 60 and 80 kg.
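
Eyeballing a scatter plot is a good start, but the strength of a linear relationship can also be quantified with the Pearson correlation coefficient via Series.corr(). The column values below are made up for illustration:

```python
import pandas as pd

# Hypothetical fitness and obesity scores
data = pd.DataFrame({
    "fitness": [9.0, 7.5, 6.0, 4.5, 3.0],
    "obesity": [18.0, 21.0, 25.0, 29.0, 33.0],
})

# Pearson correlation ranges from -1 (perfect negative) to +1 (perfect positive)
r = data["fitness"].corr(data["obesity"])
print(round(r, 3))  # close to -1, a strong negative correlation
```

A value near -1 confirms numerically what the scatter plot suggests visually.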

Another important aspect of EDA is dealing with missing values and outliers. Pandas provides functions to handle missing values, such as dropna() and fillna(), and we can use statistical methods such as the interquartile range (IQR) to detect outliers and remove them or replace them with appropriate values.

# Option 1: remove rows with missing values
data = data.dropna()

# Option 2: replace missing values with the column mean instead
data["weight"] = data["weight"].fillna(data["weight"].mean())

# Detect and remove outliers
Q1 = data["height"].quantile(0.25)
Q3 = data["height"].quantile(0.75)
IQR = Q3 - Q1
data = data[(data["height"] >= Q1 - 1.5*IQR) & (data["height"] <= Q3 + 1.5*IQR)]


In conclusion, EDA is an essential step in any data analysis project. Python provides powerful libraries to handle data manipulation and visualization, making it a popular tool for EDA. In this article, we have covered some basic concepts of EDA and provided examples of code to perform various tasks. By applying EDA techniques to your data, you can discover insights and patterns that can lead to better decision-making.
