Ramkumar Jayakumar

Coding Efficiency: Designing a High-Performance Data Import System with Excel Integration

Introduction: Bridging the Data Gap - Efficiently Importing Data from Other Systems

In today's digital world, our applications often need to work together. Imagine you have a customer relationship management (CRM) system that stores all your customer information, but you also have a separate system for tracking sales and orders. To get a complete picture of your business, you might need to import data (like customer details) from one system into the other. This process of bringing data into a system from an external source is called a data import.

Data imports can be crucial for making informed decisions. For example, by combining your customer information with your sales data, you can analyze trends and identify your most valuable customers. But how do you actually move this data between systems?

There are several ways to approach data migration, each with its own advantages and limitations. This blog post will explore how you can design a data import system using a familiar tool: Microsoft Excel.

We'll delve into strategies for handling large datasets efficiently, ensuring data accuracy, and creating a user-friendly experience for the import process. By leveraging the right techniques, you can unlock the power of Excel import and bridge the data gap between your systems seamlessly.

Navigating the Maze of Data Migration

Moving data between different systems can feel like navigating a maze. There are several approaches you can take, each with its own pros and cons. Let's explore some common methods for data import:

  1. Manual Creation: Imagine having to type in all your customer information one by one from a spreadsheet into your CRM system. This is manual data creation, and while it works for very small datasets, it's time-consuming, error-prone, and not scalable for large amounts of data.

  2. Custom Scripts: For more complex data transfers, you might consider writing custom scripts or programs. This approach offers a lot of flexibility, but custom scripts can be time-consuming to develop and maintain, and their efficiency depends entirely on how well they are written.

  3. API Integration: Many applications offer APIs (Application Programming Interfaces) that allow them to talk to each other. Imagine an API as a special translator between your CRM system and your sales data system. By using the API, you can build a connection that lets you import data automatically. However, using APIs often requires development effort for both sides and may not be available for all systems.

  4. Excel Import: Spreadsheets are simple and ubiquitous, which makes Excel an attractive import format. However, a naive implementation raises concerns about scalability and data integrity, especially at high volumes.

Excel Import: Reimagined for Efficiency

While Excel import is a familiar and accessible solution for bringing data into your system, it can face challenges when dealing with large datasets or complex data validation needs. Imagine you have a massive customer list with thousands of entries in an Excel spreadsheet. Traditionally, importing this data might take a long time or lead to errors if the data isn't formatted correctly.

Here's where we can reimagine Excel import for efficiency:

  • High-Volume Processing: By leveraging specialized techniques, we can optimize the import process to handle large datasets without sacrificing performance or system stability. This might involve splitting the data into smaller chunks for processing or using efficient algorithms for data validation.

  • Robust Validation and Error Handling: Data accuracy is crucial. Our system can perform thorough checks on your Excel data to ensure it meets the requirements of your target system. For example, it can verify data types (like numbers or dates) and identify any inconsistencies or missing information. This helps catch errors early on and prevents inaccurate data from entering your system.

  • Streamlined Workflow: The data import process shouldn't be a hassle. We can design a user-friendly interface that guides you through each step, from configuring the import to uploading your Excel file. This makes it easy for anyone, even beginners, to import data efficiently.

By incorporating these features, we can transform Excel import from a simple data transfer tool into a robust and efficient system. Imagine being able to import your large customer list with confidence, knowing the system will handle the process smoothly and ensure the accuracy of your data.

System Design

Let's delve into the design of an efficient Excel-based data import system, walking through each phase and the key concerns it addresses:

1. Dynamic Field Mapping: Empowering User Flexibility

One of the key challenges in data import systems is accommodating the diverse structure of data from different sources. This is where dynamic field mapping plays a pivotal role. Here's a closer look at how it works:

  • User-Driven Configuration: Instead of relying on fixed mappings, users are empowered to configure the import process based on their specific needs. Upon initiating a data import task, the system prompts users to define mappings between the columns in the Excel file and corresponding fields in the destination system.

  • Flexible Mapping Interface: To ensure user-friendly interaction, the system provides an intuitive interface for mapping configuration. Users are presented with a list of available fields in the destination system, allowing them to easily match them with the columns in the Excel file. Additionally, the interface supports dynamic suggestions and auto-completion to streamline the mapping process.

  • Customizable Mapping Profiles: Recognizing that users may need to import data from multiple sources with varying structures, the system supports the creation of customizable mapping profiles. Users can define and save multiple mapping configurations, each tailored to a specific source or import scenario. This flexibility allows for efficient handling of diverse data formats without the need for repetitive mapping tasks.

  • Validation and Error Handling: To ensure data accuracy and integrity, the system performs validation checks during the mapping configuration phase. This includes verifying the consistency of data types, detecting potential mismatches between source and destination fields, and flagging any inconsistencies or conflicts for user review. Comprehensive error handling mechanisms are in place to guide users in resolving mapping issues effectively.

To ensure seamless configuration and mapping between Excel data and the database, a structured approach leveraging MySQL is adopted. Here's how the database schema facilitates the process:

1. Import Configuration Table: This table stores various import configurations available in the system, enabling users to manage different import scenarios effectively.

CREATE TABLE `import_configuration` (
  `import_configuration_id` int UNSIGNED NOT NULL,
  `configuration_name` varchar(55) NOT NULL,
  `created_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `is_deleted` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`import_configuration_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;

2. Configuration Mapping Table: This table establishes mappings between Excel columns and database fields for each import configuration. It defines column titles, mandatory fields, sort order, and default values, facilitating accurate data mapping. A sample mapping profile is shown after the schema below.

CREATE TABLE `import_mapping_columns` (
  `import_mapping_column_id` int UNSIGNED NOT NULL,
  `import_configuration_id` int UNSIGNED NOT NULL,
  `import_column_name` varchar(55) DEFAULT NULL,
  `import_column_title` varchar(100) NOT NULL,
  `is_mandatory` enum('0','1') NOT NULL,
  `sort_order` int NOT NULL,
  `default_value` json DEFAULT NULL,
  `is_deleted` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`import_mapping_column_id`),
  FOREIGN KEY (`import_configuration_id`) REFERENCES `import_configuration`(`import_configuration_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;

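To make the schema concrete, here is a hypothetical mapping profile for a customer import. The configuration name, column names, and default value below are illustrative assumptions, not data from the original system.

-- Hypothetical example: a "Customer Import" profile and its column mappings.
INSERT INTO `import_configuration` (`import_configuration_id`, `configuration_name`)
VALUES (1, 'Customer Import');

INSERT INTO `import_mapping_columns`
  (`import_mapping_column_id`, `import_configuration_id`, `import_column_name`,
   `import_column_title`, `is_mandatory`, `sort_order`, `default_value`)
VALUES
  (1, 1, 'customer_name',  'Customer Name', '1', 1, NULL),
  (2, 1, 'customer_email', 'Email Address', '1', 2, NULL),
  (3, 1, 'country',        'Country',       '0', 3, '{"value": "India"}');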

2. File Upload Phase: Streamlining Data Entry

The file upload phase initiates the import process, ensuring seamless integration of Excel data into the system. Here's a concise overview of this crucial phase:

  • Configuration Preparation: Users begin by configuring the import process, selecting the appropriate import configuration from the predefined options. This step ensures that the system knows how to interpret the incoming Excel data.

  • Excel Upload: Once configured, users upload the Excel file containing the data to be imported. The system validates the file format and size to ensure compatibility and prevent data loss.

  • Storage in S3: The uploaded file is securely stored in an Amazon S3 bucket, providing scalability, reliability, and easy access for subsequent processing steps.

  • Background Task Creation: Concurrently, the system creates a background task or job to manage the import process efficiently. This task orchestrates the various steps involved in reading, validating, and importing the Excel data into the system; a minimal sketch of this step follows the list.
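As a rough illustration of the upload and task-creation steps above, here is a minimal Laravel-style sketch. The ImportTask model, the S3 disk name, and the validation rules are assumptions for illustration and are not taken from the original post.

// Hypothetical controller action (uses Illuminate\Http\Request): validate the upload,
// store it in S3, and record a pending background task for the scheduled worker.
public function upload(Request $request)
{
    $request->validate([
        'import_configuration_id' => 'required|integer|exists:import_configuration,import_configuration_id',
        'file' => 'required|file|mimes:xlsx,xls|max:10240', // cap file size at ~10 MB
    ]);

    // Store the raw file on the S3 disk for the background worker to read later.
    $path = $request->file('file')->store('imports', 's3');

    // Create a task record with a "pending" status; a CRON-driven job will process it.
    $task = ImportTask::create([
        'import_configuration_id' => $request->input('import_configuration_id'),
        'file_path' => $path,
        'status' => 'pending',
    ]);

    return response()->json(['task_id' => $task->id]);
}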

3. Excel Reading and Temporary Record Creation: Ensuring Data Integrity

In the Excel Reading and Temporary Record Creation phase, the system processes the uploaded Excel file, validates its contents, and creates temporary records for further validation. Here's a detailed breakdown of this crucial step:

  • CRON Picks up the Background Task: The scheduled task responsible for managing the import process retrieves the Excel file from the designated Amazon S3 bucket.

  • Updating Task Status: To prevent duplicate processing, the system updates the status of the background task to "in progress." This prevents multiple instances from picking up the same task simultaneously, maintaining data integrity.

  • Heading Validation: The system verifies the headers of the Excel file to ensure they match the expected format defined in the import configuration. Any discrepancies are flagged for user attention, preventing data misinterpretation.

  • Excel Reading: Using specialized libraries or tools, the system reads the data from the Excel file, extracting each row as a separate dataset for further processing. This step ensures accurate data extraction while handling large volumes efficiently.

  • Row Validation: Each row of data undergoes validation against predefined rules and constraints specified in the import configuration. Mandatory fields, data types, and format checks are performed to identify any inconsistencies or errors. A hash map keyed on an MD5 hash of each row's significant columns can be used to detect duplicate entries within the Excel file itself (a minimal sketch of this check appears below).

  • Splitting into Chunks: To optimize performance and manage memory usage, the dataset is split into manageable chunks for bulk insertion into the database. This ensures efficient processing, especially with large datasets.

  • Bulk Insertion: Validated data is bulk-inserted into temporary tables within the database, segregating valid records from exceptions. This step lays the groundwork for further validation and refinement of the imported data.

  • Within System Validations: Concurrently, the system performs additional validations within the database environment, such as duplicate checks and cross-referencing against existing records. This ensures data consistency and integrity at every stage of the import process.

  • Temporary Records Creation: Validated records are stored as temporary entries within the database, providing a snapshot of the imported data for subsequent review and refinement. Exception records are also retained for further investigation and correction.

By meticulously executing the Excel Reading and Temporary Record Creation phase, the system establishes a solid foundation for data integrity, enabling accurate and reliable integration of external data into the system.
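As an example of the hash-based duplicate check mentioned above, each row's key columns can be hashed and compared against the rows already seen in the same file. This is a minimal sketch; the column names and the $seenHashes property are illustrative assumptions.

// Hypothetical helper: flag rows that duplicate an earlier row in the same file.
protected array $seenHashes = [];

protected function isDuplicateRow(array $row): bool
{
    // Hash only the columns that define uniqueness for this import.
    $hash = md5(strtolower(($row['customer_email'] ?? '') . '|' . ($row['customer_name'] ?? '')));

    if (isset($this->seenHashes[$hash])) {
        return true; // already seen earlier in this file
    }

    $this->seenHashes[$hash] = true;
    return false;
}

The snippets below, from the import class, show how each row is mapped and cleaned, how post-import bookkeeping is hooked in, and how chunked data is bulk-inserted as temporary records.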

// Map and validate a single Excel row before it is inserted as a temporary record.
public function map($row): array
{
    // Remove white spaces and non-breaking spaces from every cell.
    $this->trimValues($row);

    $row['failed_columns'] = array();
    $row['validation_message'] = array();
    $row['import_status'] = 1;

    // Apply configured default values, then run the configured validation rules.
    $this->setDefaultValues($row);
    $this->validate($row);

    // A row passes only if no column failed validation.
    $row['import_status'] = count($row['failed_columns']) > 0 ? 0 : 1;
    $row['failed_columns'] = json_encode($row['failed_columns']);
    $row['validation_message'] = !empty($row['validation_message']) ? implode(", ", $row['validation_message']) : null;

    return $row;
}

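The map() method above calls setDefaultValues() and validate(), which are not shown in the post. A minimal sketch of what they might look like, assuming the rows of import_mapping_columns have been loaded into a $this->mappingColumns collection, is:

// Hypothetical helpers, driven by the import_mapping_columns configuration.
protected function setDefaultValues(array &$row): void
{
    foreach ($this->mappingColumns as $column) {
        $name = $column->import_column_name;
        // Fill empty cells with the configured default value, if one exists.
        if (($row[$name] ?? null) === null && $column->default_value !== null) {
            $row[$name] = json_decode($column->default_value, true)['value'] ?? null;
        }
    }
}

protected function validate(array &$row): void
{
    foreach ($this->mappingColumns as $column) {
        $name = $column->import_column_name;
        // Mandatory columns must have a value after trimming and defaults.
        if ($column->is_mandatory === '1' && ($row[$name] ?? null) === null) {
            $row['failed_columns'][] = $name;
            $row['validation_message'][] = "{$column->import_column_title} is required";
        }
    }
}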
// Remove leading/trailing whitespace and non-breaking spaces from every cell.
protected function trimValues(&$row): void
{
    $row = array_map(function ($value) {
        // Keep NULL cells and legitimate zero values as-is.
        if ($value === null || (is_numeric($value) && $value == 0)) {
            return $value;
        }
        // Strip UTF-8 non-breaking spaces, then trim; blank cells become NULL.
        $value = trim(str_replace("\xc2\xa0", '', (string) $value));
        return $value !== '' ? $value : NULL;
    }, $row);
}

// Run post-import bookkeeping once the import library fires its AfterImport event.
public function registerEvents(): array
{
    return [
        AfterImport::class => function () {
            $this->updateImportDetails();
            $this->updateTimesheetImportCount();
        },
    ];
}

// Bulk-insert each chunk of mapped rows as temporary records.
public function array(array $array)
{
    ImportDetails::insert($array);
    return $array;
}
// Number of rows read and inserted per chunk.
public function chunkSize(): int
{
    return $this->chunk_size;
}
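The temporary records produced by the steps above could live in a staging table along the following lines. This schema is a hypothetical sketch that mirrors the bookkeeping fields set in map(); the business columns shown are placeholders and would depend on the import configuration.

CREATE TABLE `import_details` (
  `import_detail_id` int UNSIGNED NOT NULL AUTO_INCREMENT,
  `import_configuration_id` int UNSIGNED NOT NULL,
  -- Illustrative business columns; real columns depend on the import configuration.
  `customer_name` varchar(100) DEFAULT NULL,
  `customer_email` varchar(100) DEFAULT NULL,
  -- Validation bookkeeping written by map().
  `failed_columns` json DEFAULT NULL,
  `validation_message` varchar(500) DEFAULT NULL,
  `import_status` tinyint(1) NOT NULL DEFAULT '1',
  `created_at` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`import_detail_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb3;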

4. Verification & Real Record Creation: Validating and Finalizing Imported Data

After the temporary records are created from the Excel data, the system proceeds to verify and finalize the imported data. This phase ensures that only accurate and validated records are permanently stored in the system. Here's a detailed explanation of this crucial step:

  • Validate Imported Records: All temporary records generated from the Excel data undergo thorough validation checks. This includes verifying data consistency, ensuring adherence to business rules, and detecting any anomalies or discrepancies.

  • Fix Exception Records: If any validation errors are identified during the verification process, the system prompts users to review and rectify the exception records. Users can correct data inaccuracies or provide missing information to ensure completeness and accuracy.

  • Background Task for Real Record Generation: Once the imported data passes validation, a background task is initiated to generate real records from the validated entries. This task orchestrates the finalization process, ensuring data integrity before permanent storage.

  • CRON Picks Up the Task: The scheduled task manager periodically picks up the task for real record creation, ensuring timely processing and minimal delay in data integration.

  • Validate for Duplicate Records: Before creating real records in the database, the system performs a final check to ensure that no duplicate entries exist. This prevents redundancy and maintains data consistency within the system.

  • Construct Insertion Query: Based on the validated temporary records, the system constructs an optimized insertion query using efficient bulk-insert techniques. This minimizes database overhead and maximizes performance during the final data insertion step.

  • Bulk Insertion of Real Records: The validated and deduplicated records are bulk-inserted into the system's database tables, finalizing the data import process. This step ensures that only accurate and verified data is permanently stored for use within the application (a minimal sketch follows this list).
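As a rough sketch of the final deduplication and bulk-insert steps above, the validated temporary records could be copied into the real table in a single set-based statement. The customers table and its columns are illustrative assumptions.

-- Hypothetical final step: copy validated, non-duplicate temporary records
-- into the real table in one bulk statement.
INSERT INTO `customers` (`customer_name`, `customer_email`)
SELECT d.`customer_name`, d.`customer_email`
FROM `import_details` d
LEFT JOIN `customers` c ON c.`customer_email` = d.`customer_email`
WHERE d.`import_status` = 1      -- only rows that passed validation
  AND c.`customer_id` IS NULL;   -- skip rows that already exist in the system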


By meticulously verifying and finalizing imported data, the system ensures data accuracy, integrity, and consistency, laying the groundwork for effective data-driven decision-making and business operations. This phase marks the culmination of the data import process, ensuring that the system is equipped with reliable and trustworthy information for optimal performance and functionality.

Conclusion

In the dynamic landscape of data management, the seamless integration of external data into our systems is crucial. Throughout this journey, we've delved into the intricacies of designing a high-performance data import system, focusing on the versatility of Excel and the efficiency of our approach.

From configuring import settings to finalizing real records, each phase of the data import process is pivotal in ensuring accuracy, integrity, and efficiency. Our commitment to optimization extends beyond functionality to include minimizing database hits and connections, while also striving to maintain O(n) time complexity.

Through dynamic field mapping and a structured database schema, users can tailor import configurations to their needs, fostering flexibility and customization. This empowers organizations to navigate the complexities of data migration with confidence.

During file upload, we prioritize compatibility and security, laying the groundwork for efficient processing. Subsequent phases, from Excel reading to temporary record creation and verification, ensure meticulous validation with minimal database interactions.

As we conclude, it's clear that a well-designed data import system enhances operational efficiency and underpins informed decision-making. By embracing innovative solutions and optimization strategies, we navigate the complexities of data migration, driving growth and innovation in the digital age while ensuring minimal database overhead and optimal performance. Together, let's continue harnessing the power of efficient data integration to unlock new possibilities and opportunities.
Feel free to share your ideas on how you would approach processing data imports.
