Abstract
A range of open-source and commercial tools can provide Extract, Load and Transform (ELT) capabilities. These tools provide source and destination connectors and can perform automatic data format conversion. This article will use a commercial ELT tool to replicate a MongoDB Atlas database to SingleStoreDB Cloud.
Introduction
In a previous article, we used open-source Airbyte to create an ELT pipeline between SingleStoreDB and Apache Pulsar. We have also seen in another article several methods to ingest MongoDB JSON data into SingleStoreDB. In this article, we'll evaluate a commercial ELT tool called Hevo Data to create a pipeline between MongoDB Atlas and SingleStoreDB Cloud. Switching to SingleStoreDB has many benefits, as described in the Captain Metrics Customer Case Study and webinar.
SingleStoreDB configuration
A previous article showed the steps required to create a free SingleStoreDB Cloud account. We'll use Hevo Demo Group as our Workspace Group Name and hevo-demo as our Workspace Name. We'll make a note of our password and host name. Finally, we'll create a new database using the SQL Editor:
CREATE DATABASE IF NOT EXISTS hevo_demo;
MongoDB configuration
We'll use the MongoDB Atlas shared cluster deployment on AWS. This will give us a 3-node cluster (one primary and two secondaries). Once the cluster is deployed, we'll load the sample restaurants dataset. We'll also create a user called hevo and assign the readAnyDatabase role to this user for our initial tests.
Hevo Data configuration
1. Configure Source
We'll search and choose MongoDB Atlas as the source.
We'll fill in the Configure your MongoDB Atlas Source form, as follows:
- Pipeline Name: MongoDB Atlas Source
- General Connection Settings: Select Paste Connection String
- Connection URI: mongodb+srv://hevo:<password>@<cluster>
- Select an Ingestion Mode: Change Streams
- Advanced Settings: Disable Load All Databases and select sample_restaurants from the list
We'll replace the <password> and <cluster> with the values from MongoDB Atlas.
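One practical detail: if the password contains characters that are special in URIs (such as @, : or /), it must be percent-encoded before being pasted into the connection string. A minimal sketch, using a made-up password and cluster host:

```python
from urllib.parse import quote_plus

# Hypothetical credentials for illustration only; percent-encode the
# password so characters like '@' or ':' don't break the URI.
user = "hevo"
password = "p@ss:word"                   # made-up example password
cluster = "cluster0.abcde.mongodb.net"   # made-up cluster host

uri = f"mongodb+srv://{user}:{quote_plus(password)}@{cluster}"
print(uri)  # mongodb+srv://hevo:p%40ss%3Aword@cluster0.abcde.mongodb.net
```

A plain alphanumeric password needs no encoding, but quoting it is harmless either way.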
Several Hevo Data IP addresses are also listed, and these should be added to the IP Access List in MongoDB Atlas.
We'll use the TEST CONNECTION button, and the connection should be successful.
Next, we'll click TEST & CONTINUE.
2. Select Objects
On the next page, we'll check (✔) All objects selected and click CONTINUE.
3. Configure Destination
We'll search and choose MySQL as the destination.
We'll fill in the Configure your MySQL Destination form, as follows:
- Destination Name: SingleStoreDB Destination
- Database Host: <host>
- Database Port: 3306
- Database User: admin
- Database Password: <password>
- Database Name: hevo_demo
We'll replace the <host> and <password> with the values from our SingleStoreDB Cloud account.
Several Hevo Data IP addresses are also listed, and these should be added to the Inbound IP Allowlist in the SingleStoreDB Cloud Firewall.
We'll use the TEST CONNECTION button, and the connection should be successful.
Next, we'll click SAVE & CONTINUE.
4. Final Settings
We'll use Auto Mapping and Replicate JSON fields to JSON columns.
Next, we'll click CONTINUE.
The pipeline should start running shortly afterwards, as shown in Figure 1.
It will take a little time for the pipeline to complete.
In SingleStoreDB, two tables should be created, restaurants and neighborhoods, to match the two collections in MongoDB.
We can quickly check each table in SingleStoreDB to view what the Hevo Data pipeline has generated. Here is an example from the restaurants table:
*************************** 1. row ***************************
_id: 5eb3d668b31de5d588f43c79
address: {"building":"94","coord":[-73.99883899999999,40.7245064],"street":"Prince Street","zipcode":"10012"}
borough: Manhattan
cuisine: American
grades: [{"date":1394668800000,"grade":"A","score":2},{"date":1362441600000,"grade":"A","score":9},{"date":1330646400000,"grade":"A","score":11},{"date":1300665600000,"grade":"A","score":8}]
name: Fanelli Cafe
restaurant_id: 41040365
__hevo__database_name: sample_restaurants
__hevo_id: 6a2ef09b93623b083d2ed2d727e7bfcdd2a44dcda1460b8c89f001ee015eba94
__hevo__ingested_at: 1728195951623
__hevo__marked_deleted: 0
__hevo__source_modified_at: NULL
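Two details of this row are worth noting: nested MongoDB documents such as address and grades arrive as JSON strings, and values such as the grade dates and __hevo__ingested_at are Unix epoch milliseconds. A quick sketch of decoding both, using values copied from the row above:

```python
import json
from datetime import datetime, timezone

# The address column arrives as a JSON string, not a nested structure.
address = json.loads('{"building":"94","coord":[-73.99883899999999,40.7245064],'
                     '"street":"Prince Street","zipcode":"10012"}')
street = address["street"]

# Timestamps such as the grade dates are epoch milliseconds, so divide
# by 1000 before converting to a UTC datetime.
grade_date = datetime.fromtimestamp(1394668800000 / 1000, tz=timezone.utc)
print(street, grade_date.date())  # Prince Street 2014-03-13
```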
and here is a truncated example from the neighborhoods table:
*************************** 1. row ***************************
_id: 55cb9c666c522cafdb053ad3
geometry: {"coordinates":[[[-73.72586264116339,40.683241080946296], ...,[-73.72586264116339,40.683241080946296]]],"type":"Polygon"}
name: Rosedale
__hevo__database_name: sample_restaurants
__hevo_id: c9c391c729805b91c3b21f1227c106697d3d67569c7f31b2db823154c5358985
__hevo__ingested_at: 1728195950854
__hevo__marked_deleted: 0
__hevo__source_modified_at: NULL
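The geometry column holds GeoJSON, and in a valid GeoJSON Polygon the first and last coordinate pairs of each ring are identical, which is why the truncated row above begins and ends with the same point. A small sketch with a made-up square polygon (the real rows store the same structure as a JSON string):

```python
import json

# A made-up GeoJSON Polygon (a unit square) standing in for the real
# geometry value, which arrives as a JSON string in SingleStoreDB.
geometry_json = json.dumps({
    "type": "Polygon",
    "coordinates": [[[0.0, 0.0], [1.0, 0.0], [1.0, 1.0],
                     [0.0, 1.0], [0.0, 0.0]]],
})

geometry = json.loads(geometry_json)
ring = geometry["coordinates"][0]
assert geometry["type"] == "Polygon"
assert ring[0] == ring[-1]  # a valid ring closes on its starting point
```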
Eventually, there should be 25,359 rows in the restaurants table and 195 rows in the neighborhoods table. These counts match the numbers from MongoDB Atlas.
Summary
This short article has shown one example of a database automatically replicated to SingleStoreDB using a commercial ELT platform. Overall, the results have been very promising. Auto Mapping worked successfully, and all the data were copied from MongoDB Atlas to SingleStoreDB Cloud.