With the growing importance of data-powered decision-making, data engineering is becoming critical to organizations in just about every industry.
This glossary is designed to be a resource for those looking to learn about the field, hire data engineers, or brush up on the terminology. It’s also intended to help you understand the fundamentals of data engineering and its growing importance in today's data-driven world.
What is data engineering?
At its core, data engineering is the practice of designing, building, and maintaining the infrastructure and systems that support the collection, storage, and processing of large amounts of data. This includes creating and maintaining data pipelines, data warehouses, and data storage systems. It also includes establishing data quality and governance processes and ensuring the security and accessibility of data.
Data engineering is a critical part of any organization that relies on data to make decisions. It provides the foundation for data-driven decision-making, machine learning, analytics, and reporting. It is a highly technical field that requires knowledge of programming, databases, data warehousing, and cloud computing.
Data engineers work closely with data scientists and analysts to understand their data needs and help them access and use data effectively. They are responsible for making sure that the data is accurate, complete, and accessible to the people who need it.
Glossary of Data Engineering Terms
Big Data
Big data refers to extremely large and complex sets of data that are difficult or impossible to process using traditional methods. Big data can come from a variety of sources such as social media, sensor networks, and online transactions. It can include structured data (such as numbers or dates) as well as unstructured data (such as images or video).
The 3 Vs (Volume, Variety, and Velocity) are often used to describe the characteristics of big data. Volume refers to the sheer size of the data, variety refers to the different types of data, and velocity refers to the speed at which the data is generated. Big data requires specialized technologies and methods to process, store, and analyze it.
Business Intelligence
Business Intelligence (BI) is the process of collecting, analyzing, and presenting data to support decision-making and strategic planning within an organization. This can include data from internal systems, as well as external sources like market research and competitor analysis.
BI includes a variety of tools and techniques such as data visualization, reporting, data mining, and OLAP (Online Analytical Processing) to help organizations make sense of their data. The goal of BI is to provide organizations with a complete and accurate picture of their performance, customers, and market to make informed decisions.
Data Analyst
A data analyst is a professional who is responsible for collecting, cleaning, analyzing, and interpreting large sets of data. They use statistical methods, data visualization techniques, and other tools to gain insights and knowledge from data. They use this information to support decision-making and problem-solving within an organization. Data analysts work closely with data scientists, business analysts, and other stakeholders to understand their data needs and help them access and use the data effectively.
Data Architecture
Data architecture refers to the overall design and organization of data within a system or organization. It includes the data models, data flows, and storage systems used to manage and access the data, as well as the processes and policies that ensure data quality, security, and accessibility. The goal of data architecture is to structure and store data in a way that supports the needs of the organization and the users of the data.
Data Compliance
Data compliance refers to the adherence to regulations and guidelines that govern the collection, storage, use, and disposal of data. It notably includes the protection of sensitive data, such as personal information, and ensuring that such data is handled, stored, and disposed of in accordance with legal and regulatory requirements. Data compliance is a critical aspect of data governance, and it helps organizations mitigate risks and protect sensitive information.
Data Exploration
Data exploration is the process of analyzing and understanding a dataset. It includes visualizing data, identifying patterns and relationships, and finding outliers and anomalies. The goal of data exploration is to gain insights and knowledge about the data that can be used to make informed decisions. It is often an iterative process and is useful for building an understanding of the data before getting into more structured approaches like data analysis, machine learning, or statistical modeling.
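As a minimal sketch of what a first pass might look like in pandas (the file and column names here are hypothetical, and pandas is assumed to be installed):

```python
import pandas as pd

# Load the dataset (hypothetical file name used for illustration).
df = pd.read_csv("orders.csv")

# Get a feel for the shape, types, and summary statistics.
print(df.shape)
print(df.dtypes)
print(df.describe(include="all"))

# Check for missing values and duplicated rows.
print(df.isna().sum())
print(df.duplicated().sum())

# Look at the distribution of a categorical column and flag numeric outliers.
print(df["status"].value_counts())
q1, q3 = df["amount"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["amount"] < q1 - 1.5 * iqr) | (df["amount"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} potential outliers in 'amount'")
```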
Data Enrichment
Data enrichment is the process of adding additional data to a dataset to make it more valuable. This can include adding external data, such as weather data or geographic data, to a dataset to gain new insights. It can also include adding derived variables and features, such as calculated fields or aggregated data, to the dataset. Data enrichment is a common step when preparing data for machine learning and statistical modeling, as it can lead to better model performance.
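A small pandas sketch of both flavors of enrichment, joining in an external lookup table and deriving new fields; the file and column names are made up for illustration:

```python
import pandas as pd

# Base dataset and an external lookup table (hypothetical files for illustration).
orders = pd.read_csv("orders.csv")             # order_id, customer_id, amount, order_date
regions = pd.read_csv("customer_regions.csv")  # customer_id, region

# Enrich with external data: attach each customer's region to their orders.
enriched = orders.merge(regions, on="customer_id", how="left")

# Enrich with derived features: a calculated field and an aggregate.
enriched["order_month"] = pd.to_datetime(enriched["order_date"]).dt.to_period("M")
enriched["customer_total"] = enriched.groupby("customer_id")["amount"].transform("sum")

print(enriched.head())
```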
Data Governance
Data governance is the set of policies, standards, and procedures that an organization uses to manage, protect, and ensure the quality of its data. It includes the management of data policies, procedures, standards, and metrics to ensure data is accurate, complete, consistent, and accessible. It also involves monitoring compliance with relevant regulations.
Data Ingestion
Data ingestion is the process of bringing data into a system for storage and processing. This includes the collection, extraction, and loading of data from various sources such as databases, files, or the Internet. It is the first step in the data pipeline and it's critical for data quality and accuracy. The data ingestion process can be done with various tools such as ETL (Extract, Transform, Load) processes, data integration platforms, or custom scripts.
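As a minimal example, the sketch below reads a CSV export and loads it into a local SQLite database standing in for a warehouse; the file and table names are hypothetical:

```python
import sqlite3

import pandas as pd

# Extract: read a raw export from a source system (hypothetical file name).
raw = pd.read_csv("crm_export.csv")

# Load: write the records into a local SQLite database standing in for a warehouse.
conn = sqlite3.connect("analytics.db")
raw.to_sql("raw_crm_contacts", conn, if_exists="replace", index=False)
conn.close()

print(f"Ingested {len(raw)} rows into raw_crm_contacts")
```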
Data Integration
Data integration is the process of combining data from multiple sources into a single, unified dataset. This can include combining data from different databases, applications, or file formats. Data integration is a critical step in creating a single source of truth, and it can be done with various tools such as ETL (Extract, Transform, Load) processes, data integration platforms, or custom scripts.
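A minimal sketch of integrating two sources that describe the same entity under different schemas (file and column names are hypothetical):

```python
import pandas as pd

# Two sources describing the same entity with different column names (hypothetical files).
shop = pd.read_csv("web_shop_customers.csv")     # id, email, signup_date
store = pd.read_csv("retail_pos_customers.csv")  # customer_no, email_address, created_at

# Map each source onto a common schema before combining.
shop_unified = shop.rename(columns={"id": "customer_id", "signup_date": "created_at"})
store_unified = store.rename(columns={"customer_no": "customer_id",
                                      "email_address": "email"})

# Stack the sources and drop records that appear in both systems.
customers = (
    pd.concat([shop_unified, store_unified], ignore_index=True)
      .drop_duplicates(subset="email")
)
print(customers.head())
```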
Data Lake
A data lake is a centralized repository that allows organizations to store all their structured and unstructured data at any scale. It is a way to store raw data in its original format and allows for easy data discovery and access through a self-service model. Data lakes are designed to handle large volumes of data, and they are often implemented on distributed storage systems such as Hadoop or cloud storage platforms.
Data Lakehouse
A data lakehouse is a combination of a data lake and a data warehouse. It is a unified, hybrid data platform that enables organizations to store, manage, and analyze both structured and unstructured data in a single repository. The data lakehouse architecture is a new approach that combines the scalability and flexibility of a data lake with the performance and governance of a data warehouse, enabling organizations to get insights faster and make data-driven decisions more effectively.
Data Mart
A data mart is a subset of a data warehouse that is focused on a specific business function or department. It is a repository of data that is tailored to the specific needs of a particular business unit. Data marts are designed to provide a specific set of data to a specific set of users.
Data Mesh
A data mesh is an architectural pattern that aims to decouple data services from application services by breaking down monolithic data systems into small, decentralized, and autonomous data services. This allows for greater flexibility, scalability, and resilience in how data is managed and accessed within an organization. In a data mesh architecture, each data service is responsible for a specific domain or subset of data, and they are loosely coupled to allow for independent development, deployment, and scaling.
Data Mining
Data mining is the process of discovering patterns and knowledge from large sets of data. It involves the use of various techniques such as statistical analysis, machine learning, and artificial intelligence to extract insights from data. Data mining can be applied to a wide range of fields including business, medicine, and science, and it can be used to predict future trends, identify customer behavior, and detect fraud.
Data Observability
Barr Moses, CEO and co-founder of Monte Carlo Data, coined the term data observability back in 2019. According to Barr, data observability is an organization's ability to fully understand the health of the data in its systems. By applying DevOps best practices to data pipelines, data observability aims to eliminate data downtime.
Data Orchestration
Data orchestration is the automated coordination and scheduling of the jobs that make up a data pipeline. The main purpose of a data orchestration tool is to ensure that jobs are executed on a schedule, and only after their dependencies are satisfied. Data orchestration tools do most of the grunt work for you, such as managing connection secrets and plumbing job inputs and outputs. They also typically provide a user interface that helps engineers visualize all the work flowing through the system.
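To illustrate the core idea, running each job only once its dependencies have finished, here is a toy, framework-free sketch built on Python's standard-library graphlib; a real deployment would use an orchestrator such as Airflow or Dagster rather than code like this:

```python
from graphlib import TopologicalSorter

# A toy dependency graph: each job maps to the set of jobs it depends on.
jobs = {
    "extract_orders": set(),
    "extract_customers": set(),
    "transform_sales": {"extract_orders", "extract_customers"},
    "load_dashboard": {"transform_sales"},
}

def run(job_name: str) -> None:
    # Stand-in for real work (API calls, SQL, Spark jobs, and so on).
    print(f"running {job_name}")

# Execute jobs in an order that respects the dependencies.
for job in TopologicalSorter(jobs).static_order():
    run(job)
```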
[Architecture Review: Dagster vs. Airflow](https://www.plural.sh/blog/dagster-vs-airflow/): We dive into the weeds to figure out what separates Dagster from Airflow at the architectural level.
Data Modeling
Data modeling is the process of creating a conceptual representation of data and the relationships between data elements. It is useful for designing and implementing infrastructure such as databases and data warehouses that can store and manage data effectively.
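As a small illustration of what a data model can look like once implemented, here is a sketch of a simple star schema (all table and column names are invented) created in SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect("analytics.db")

# A simple star schema: one fact table referencing two dimension tables.
conn.executescript("""
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT,
    region      TEXT
);

CREATE TABLE IF NOT EXISTS dim_date (
    date_id   INTEGER PRIMARY KEY,
    full_date TEXT,
    month     TEXT,
    year      INTEGER
);

CREATE TABLE IF NOT EXISTS fact_orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES dim_customer(customer_id),
    date_id     INTEGER REFERENCES dim_date(date_id),
    amount      REAL
);
""")
conn.commit()
conn.close()
```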
Data Pipeline
A data pipeline is a set of processes that move data from one system or stage to another, typically involving extracting data from one or more sources, transforming it to fit the needs of downstream consumers, and loading it into a target system or data store. Data pipelines can be used to automate the flow of data between systems, to ensure data is processed consistently and efficiently, and to support real-time processing and analytics.
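A minimal, illustrative pipeline written as three chained stages; the file names and fields are hypothetical, and a production pipeline would typically run under an orchestrator rather than as a single script:

```python
import csv
from typing import Iterable, Iterator

def extract(path: str) -> Iterator[dict]:
    # Stream records from a source file (hypothetical name) one at a time.
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(rows: Iterable[dict]) -> Iterator[dict]:
    # Keep only completed orders and normalize the amount field.
    for row in rows:
        if row.get("status") == "completed":
            row["amount"] = float(row["amount"])
            yield row

def load(rows: Iterable[dict], path: str) -> int:
    # Write the cleaned records to a target file and report how many flowed through.
    count = 0
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["order_id", "status", "amount"])
        writer.writeheader()
        for row in rows:
            writer.writerow({k: row[k] for k in ("order_id", "status", "amount")})
            count += 1
    return count

if __name__ == "__main__":
    loaded = load(transform(extract("orders.csv")), "completed_orders.csv")
    print(f"loaded {loaded} rows")
```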
Data Preparation
Data preparation is the process of cleaning, transforming, and normalizing data to make it ready for analysis or modeling. This can include tasks such as handling missing values, removing duplicate records, treating outliers, and converting data into a consistent format. Data preparation is a critical step in the data science process, as it can greatly impact the quality and accuracy of the final analysis or model.
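A short pandas sketch of typical preparation steps (the dataset and column names are hypothetical):

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # hypothetical raw export

# Drop exact duplicates and rows missing the key identifier.
df = df.drop_duplicates()
df = df.dropna(subset=["respondent_id"])

# Convert types into a consistent format.
df["submitted_at"] = pd.to_datetime(df["submitted_at"], errors="coerce")
df["age"] = pd.to_numeric(df["age"], errors="coerce")

# Cap extreme outliers in a numeric column at the 1st and 99th percentiles.
low, high = df["age"].quantile([0.01, 0.99])
df["age"] = df["age"].clip(lower=low, upper=high)

df.to_csv("survey_responses_clean.csv", index=False)
```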
Data Science
Data science is an interdisciplinary field that involves using scientific methods, processes, algorithms, and systems to extract insights and knowledge from data. It includes various steps such as data exploration, data modeling, and data visualization. Data science can be applied to a wide range of fields, including business, healthcare, and science. It is a combination of many techniques and skills such as statistics, machine learning, data visualization, data engineering, and domain knowledge.
Data Quality
Data quality refers to the degree to which data is accurate, complete, consistent, and reliable. It is an important aspect of data management, as poor data quality can lead to incorrect or unreliable insights and poor decision-making. Data quality can be managed through a variety of techniques such as data validation, data cleansing, and data governance. Ensuring data quality is a continuous process that should be accounted for throughout data ingestion, transformation, and analysis.
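As a minimal sketch, data quality checks can be expressed as simple assertions over a dataset; in practice, teams often reach for dedicated tooling such as Great Expectations or dbt tests. The file and column names below are hypothetical:

```python
import pandas as pd

df = pd.read_csv("orders.csv")  # hypothetical dataset

checks = {
    "no missing order ids": df["order_id"].notna().all(),
    "order ids are unique": df["order_id"].is_unique,
    "amounts are non-negative": (df["amount"] >= 0).all(),
    "status values are known": df["status"].isin(["pending", "completed", "cancelled"]).all(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```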
Data Source
A data source is a location or system where data is stored or generated. It can be a database, file, or external system such as a website or sensor network. Data sources can provide structured or unstructured data and can be used for a variety of purposes such as business intelligence, data warehousing, and machine learning.
Data Stack
A data stack refers to the collection of technologies and tools that are used to manage and analyze data within an organization. It often includes databases, data warehousing, data pipelines, data visualization, and machine learning. The data stack can vary depending on the specific needs and requirements of an organization, but it is typically designed to support the collection, storage, processing, and analysis of large amounts of data.
[Plural | Deploying Data Stack on Kubernetes](https://www.plural.sh/plural-stacks/data): Use Plural to deploy and manage a data stack on your own cloud.
Data Warehouse
A data warehouse is a large, centralized repository of data that is specifically designed to support business intelligence and reporting. Data is extracted from various sources, transformed to fit a common data model, and loaded into the warehouse for analysis. Data warehouses are optimized for reading and querying large amounts of data, and they often include features such as indexing, partitioning, and aggregations to support efficient querying.
Data Wrangling
Data wrangling (often used interchangeably with data preparation) is the process of cleaning, transforming, and normalizing data to make it ready for analysis or modeling. This can include tasks such as handling missing values, removing duplicate records, treating outliers, and converting data into a consistent format. Data wrangling can be time-consuming and labor-intensive, but it is an important step in the data science process, as it can greatly impact the quality and accuracy of the final analysis or model.
Deduplication
Deduplication is the process of identifying and removing duplicate records from a dataset. Deduplication can be performed on various fields such as name, address, or email, and it can be done using various techniques such as hashing, string matching, and machine learning. Deduplication is an important step in data preparation, as duplicate records can lead to inaccurate analysis and decision-making.
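A small sketch of exact deduplication using a hash over normalized identifying fields (the file and column names are hypothetical; fuzzy matching is considerably more involved):

```python
import hashlib

import pandas as pd

df = pd.read_csv("contacts.csv")  # hypothetical dataset with name and email columns

def record_key(row: pd.Series) -> str:
    # Hash a normalized combination of identifying fields (assumes both are present).
    normalized = f"{row['name'].strip().lower()}|{row['email'].strip().lower()}"
    return hashlib.sha256(normalized.encode()).hexdigest()

df["dedup_key"] = df.apply(record_key, axis=1)
deduped = df.drop_duplicates(subset="dedup_key").drop(columns="dedup_key")

print(f"removed {len(df) - len(deduped)} duplicate records")
```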
ELT
ELT stands for Extract, Load, Transform. It is a process where data is first extracted from various sources, loaded into a target system, and then transformed to fit the needs of downstream consumers. This differs from the traditional ETL process (Extract, Transform, Load), where the data is transformed before being loaded into the target system. ELT allows for more efficient processing of large volumes of data because it can take advantage of the processing power of modern data warehousing and big data platforms.
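A sketch of the ELT pattern, using SQLite as a stand-in for a warehouse: the raw export is loaded untouched, then transformed with SQL inside the target system (file and table names are made up):

```python
import sqlite3

import pandas as pd

# Extract and Load: land the raw export in the target system untouched.
raw = pd.read_csv("raw_orders.csv")  # hypothetical source export
conn = sqlite3.connect("warehouse.db")
raw.to_sql("raw_orders", conn, if_exists="replace", index=False)

# Transform: reshape the data with SQL inside the "warehouse" itself.
conn.executescript("""
DROP TABLE IF EXISTS daily_revenue;
CREATE TABLE daily_revenue AS
SELECT order_date, SUM(amount) AS revenue
FROM raw_orders
WHERE status = 'completed'
GROUP BY order_date;
""")
conn.commit()
conn.close()
```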
ETL
ETL stands for Extract, Transform, Load. It is a process for moving data from one or more sources into a target system, such as a data warehouse, for further analysis and reporting. The process consists of three main steps: extracting data from various sources, transforming the data to fit a common data model, and loading the data into the target system. ETL processes are often automated and scheduled to run regularly to ensure that the target system stays up to date.
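For contrast with the ELT sketch above, here is the same task expressed as ETL, with the transformation done in application code before loading (again, the file and table names are hypothetical):

```python
import sqlite3

import pandas as pd

# Extract from the source system.
raw = pd.read_csv("raw_orders.csv")  # hypothetical source export

# Transform before loading: filter and aggregate in application code.
daily_revenue = (
    raw[raw["status"] == "completed"]
      .groupby("order_date", as_index=False)["amount"].sum()
      .rename(columns={"amount": "revenue"})
)

# Load the already-transformed result into the target system.
conn = sqlite3.connect("warehouse.db")
daily_revenue.to_sql("daily_revenue", conn, if_exists="replace", index=False)
conn.close()
```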
Machine Learning
Machine learning (ML) is a subfield of artificial intelligence that allows systems to learn from data and improve their performance without being explicitly programmed. Machine learning algorithms can be used to classify, cluster, or predict outcomes based on data. There are various types of machine learning algorithms, such as supervised learning, unsupervised learning, and reinforcement learning. The goal of machine learning is to create models that can make predictions or decisions based on historical data.
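A tiny supervised-learning example using scikit-learn (assumed to be installed), training a classifier on one of the library's built-in datasets:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Learn to classify iris flowers from their measurements.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=500)
model.fit(X_train, y_train)

print(f"accuracy on held-out data: {model.score(X_test, y_test):.2f}")
```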
Reverse ETL
Reverse ETL is the process of moving data from a target system, such as a data warehouse, back out to one or more operational systems, such as a CRM or marketing platform. It is the opposite of the traditional ETL process, where data is extracted from source systems and loaded into the target system. Reverse ETL is used when data needs to be propagated back to operational systems after it has been transformed, consolidated, or processed in the warehouse. This is often important for data governance and data compliance reasons.
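A sketch of the idea: read a modeled table out of the warehouse and push each record to an operational tool. Everything here, the table, the endpoint, and the fields, is hypothetical, and the requests library is assumed to be available:

```python
import sqlite3

import requests  # assumed available; any HTTP client would do

# Read a modeled table from the warehouse (SQLite stands in here).
conn = sqlite3.connect("warehouse.db")
rows = conn.execute(
    "SELECT customer_id, lifetime_value FROM customer_metrics"
).fetchall()
conn.close()

# Push each record to an operational tool; the endpoint below is hypothetical.
for customer_id, lifetime_value in rows:
    requests.post(
        "https://crm.example.com/api/contacts",  # hypothetical CRM endpoint
        json={"id": customer_id, "lifetime_value": lifetime_value},
        timeout=10,
    )
```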
Plural for Data Engineers
Recently, we have noticed a trend among data teams choosing open-source tools for their data stacks when building or re-evaluating their existing infrastructure. With the current state of the market, it makes sense to continuously evaluate your stack to ensure that you are keeping costs down.
Open-source toolkits are growing in popularity among data teams due to their low cost, high flexibility, and helpful developer communities. These toolkits are most commonly deployed on Kubernetes.
However, the biggest struggle data teams face when using open-source technology is managing, deploying, and integrating the tools themselves in their own cloud.
Plural aims to make deploying open-source applications on the cloud a breeze for organizations of all sizes. In under ten minutes, you can deploy a production-ready open-source data infrastructure stack on a Kubernetes cluster with Plural.
To learn more about how Plural works and how we are helping engineering teams across the world deploy open-source applications in a cloud production environment, reach out to our team to schedule a demo.
Ready to effortlessly deploy and operate open-source applications in minutes? Get started with Plural today.
Join us on our Discord channel for questions, discussions, and to meet the rest of the community.