Varun Gujarathi

A Better Approach to Data Cleaning in Large Datasets with Python

As a software engineer and tech enthusiast, I recently embarked on a challenging yet rewarding project. My task involved working with a massive criminal records dataset to aid in the analysis of the criminal records relief system. This analysis is pivotal in helping lawmakers pass the 'Clean Slate Act' in various states.

The Challenge

I recently led analytics for a statewide criminal records relief initiative. This involved anonymizing a database of over 5 million records to protect identities by hashing each name combined with its birthdate, since it's highly improbable for two individuals with the same name to also share the same birthdate.
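
For a concrete picture of that anonymization step, here is a minimal sketch using hashlib; the salt, the normalization, and the sample values are illustrative assumptions rather than the exact production scheme:

import hashlib

def anonymize(name: str, dob: str, salt: str = "example-salt") -> str:
    # The same normalized name + birthdate always hashes to the same opaque ID
    key = f"{name.strip().lower()}|{dob}|{salt}"
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

print(anonymize("John Smith", "1990-04-12"))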

However, a significant issue arose: inconsistent spellings of the same individual's name, which complicated identifying unique individuals in the dataset. Traditional fuzzy matching across the full dataset was prohibitively slow, with estimates running to over three days.
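
For context, the brute-force version compares every name against every other name, roughly like the sketch below (the 75 threshold is illustrative); that quadratic number of comparisons over millions of rows is what drove the multi-day estimate:

from rapidfuzz import fuzz

# Naive all-pairs fuzzy matching: O(n^2) comparisons over the whole dataset
def naive_match(names, threshold=75):
    matches = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if fuzz.ratio(a, b) >= threshold:
                matches.append((a, b))
    return matches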

Optimizing with Python

To address this, I developed a Python-based solution that drastically reduced processing time. Here's an overview:

  1. Grouping by Birthdate: I grouped records by birthdate, allowing for localized fuzzy matching within each group.
  2. Consolidating Name Variants: By replacing all matched names with a single variant, I streamlined the dataset, avoiding redundant processing.
  3. Leveraging Multithreading: Using joblib's threading backend, I processed the birthdate groups concurrently, significantly speeding up the matching.
  4. Vectorized Operations: Wherever possible, I used pandas' vectorized operations instead of row-by-row Python loops to enhance processing speed (see the sketch after this list).
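
To illustrate step 4, here is a small, hypothetical sketch of the kind of vectorized pre-cleaning this refers to; the sample data is made up, but the pattern of applying pandas string methods to whole columns instead of looping over rows is the point:

import pandas as pd

# Hypothetical frame; the column names mirror the snippet below
df = pd.DataFrame({
    "name": ["  John  Smith ", "JOHN SMITH", "Jon Smyth"],
    "DOB":  ["1990-04-12"] * 3,
})

# One vectorized pandas call per step instead of a Python loop over rows
df["name"] = (
    df["name"]
    .str.strip()                            # drop leading/trailing whitespace
    .str.lower()                            # case-fold
    .str.replace(r"\s+", " ", regex=True)   # collapse internal spaces
)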

The Code

Here's a snippet of the Python code that made it all possible:

import pandas as pd
from joblib import Parallel, delayed
from rapidfuzz import process, fuzz

def match_names(group):
    """Consolidate spelling variants of the same name within one birthdate group."""
    unique_names = group['name'].unique()
    # Maps each matched variant to the canonical name it will be replaced with
    replacement_dict = {}

    for name in unique_names:
        if name not in replacement_dict:
            # process.extract returns the best-scoring candidates (top 5 by default)
            matches = process.extract(name, unique_names, scorer=fuzz.ratio)

            # Keep only matches scoring 75 or higher
            for matched_name, score, _ in matches:
                if score >= 75:
                    replacement_dict[matched_name] = name

    # Replace every variant in the group with its canonical name
    if replacement_dict:
        group = group.copy()
        group['name'] = group['name'].replace(replacement_dict)

    return group

# Group by birthdate and process the groups in parallel
grouped = df.groupby('DOB')
results = Parallel(n_jobs=-1, backend="threading")(
    delayed(match_names)(group) for _, group in grouped
)
final_df = pd.concat(results)
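
To see the consolidation in action, here is a tiny usage sketch with made-up names and dates, reusing match_names, Parallel, and delayed from the snippet above:

sample = pd.DataFrame({
    "name": ["John Smith", "Jon Smith", "Jane Doe"],
    "DOB":  ["1990-04-12", "1990-04-12", "1985-07-01"],
})

cleaned = pd.concat(
    Parallel(n_jobs=-1, backend="threading")(
        delayed(match_names)(group) for _, group in sample.groupby("DOB")
    )
)
# fuzz.ratio("John Smith", "Jon Smith") is ~95, above the 75 cutoff,
# so both rows in the 1990-04-12 group end up with the same name variant.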

Results

The results were striking: what was initially estimated to take several days was reduced to mere minutes. Most importantly, it expedited critical policy planning that could reshape millions of lives.

Conclusion

This project not only reinforced my Python skills but also highlighted the importance of thinking creatively to solve complex problems:

  • Problem-Solving Skills: Tackling complex issues with innovative solutions.
  • Analytical Thinking: Understanding the data and its intricacies to make informed decisions.
  • Efficiency in Coding: Writing code that is not just functional but also optimized for performance.

These skills were crucial in cutting the processing time from days to minutes, showcasing the power of efficient coding and the right algorithms. It's a testament to how technology can be leveraged to make significant strides in data analysis and legislative support.

Comment below with your top techniques, tools, and tips for optimizing large dataset processing.

Top comments (1)

Viraj Vhatkar • Edited

I loved the approach of grouping by DOB and then applying fuzzy matching, but I would suggest using additional variables for better results.
E.g., if you have two individuals "Jame Dov" and "Jane Dov", the algorithm would give both the same name; in that case, address, city, or other more specific criteria could be added to avoid incorrectly merging the names.
But still, I learned something new today.