1. Solution Overview
GBase 8a MPP Cluster is the only analytical MPP database in China with disaster recovery capability across two cities and three data centers. Using the RsyncTool synchronization tool provided with the product, near real-time data synchronization can be achieved between clusters in the same city or in remote locations. The solution has already been deployed by many customers, including Agricultural Bank of China, PICC, and the People's Bank of China.
In 2024, the 8a product launched an enhanced version of the RsyncTool called GVR (GBase Visual RsyncTool), which introduces a graphical user interface (GUI) for configuration management, synchronization task orchestration, and monitoring, greatly improving the manageability, maintainability, and usability of cluster data synchronization and disaster recovery. In addition, incremental synchronization performance has been optimized and the granularity of table locking during synchronization has been reduced. On one hand, by comparing metadata between the primary and standby clusters in parallel, tables without data changes are excluded from the synchronization task, improving performance. On the other hand, the lock granularity of tables on the primary cluster during synchronization is reduced: tables are only read-locked during the metadata snapshot phase, and DML operations are supported during the data synchronization phase, significantly reducing the impact on business operations.
This article aims to introduce the active-active disaster recovery solution for GBase 8a MPP Cluster based on the GVR tool, including resource requirements, applicable scenarios, and a brief introduction to GVR tool features.
2. Overview of the Active-Active Synchronization Solution
The GBase 8a MPP Cluster active-active disaster recovery solution is based on the visual data synchronization tool GVR (GBase Visual RsyncTool). GVR supports incremental data synchronization at the table level between two GBase 8a MPP Cluster setups, which can be deployed in the same city or in different locations, forming an active-active disaster recovery solution with mutual data backups.
The GBase 8a active-active disaster recovery solution recommends using one cluster as the primary cluster for data entry (it can also handle query tasks), while the other cluster operates as a standby in read-only mode. In normal operations, the primary cluster handles data write tasks and synchronizes incremental data from the tables to the standby cluster at the table level using the synchronization tool. Both clusters can provide query services.
In case of a complete failure of the primary cluster, the system can switch business operations to the standby cluster, which will then handle write tasks. After the primary cluster is restored, the synchronization tool can synchronize the incremental data from the standby cluster back to the primary cluster, and business operations can switch back to the primary cluster or remain on the standby, turning the original primary cluster into the standby.
3. Resource Requirements
3.1 Scale Requirements for the Primary and Standby Clusters
1) Logical architecture of primary and standby clusters must be identical:
- The active-active solution requires the primary and standby clusters to have the same number of primary shards. For instance, if the primary cluster has 10 gnode compute nodes (or gnode instances in a multi-instance setup), each with 1 primary shard, the number of primary shards should also be 10 on the standby cluster.
- To ensure that the standby cluster can provide equivalent computing performance after a switchover, it is recommended that the standby cluster is built with the same number of nodes and server configurations as the primary cluster.
2) Identical data distribution rules between primary and standby clusters:
- The active-active solution requires the data hash distribution rules in the primary and standby clusters to be identical, i.e., the hash partitioning rules recorded in the gbase.nodedatamap system table must be consistent between the two clusters.
3.2 Network Connectivity Requirements
1) Network connectivity between nodes:
- The active-active synchronization solution involves three components: the primary cluster, the standby cluster, and the GVR sync tool. All three must be reachable from one another over the network; that is, all node servers (management node gcluster, compute node gnode) of both the primary and standby clusters, as well as the server hosting the GVR sync tool, must be able to communicate with each other.
- GBase 8a cluster supports two-plane network deployment, where one plane carries the cluster's private internal network and the other carries external services. For geographically separated clusters, the business network can be used for data synchronization without requiring internal network connectivity; GVR supports IP mapping to achieve synchronization over the business network.
- Ports for database services (default: 5258, 5050) and data sync services (default: 5288) must be opened between all nodes in the primary and standby clusters and the GVR sync tool server.
2) Bandwidth requirements between nodes:
- The active-active synchronization adopts a node-to-node (shard-to-shard) data synchronization model. Data is synchronized between the corresponding shards in the primary and standby clusters in parallel. Multiple tables can be synchronized in parallel, requiring adequate bandwidth based on the Recovery Time Objective (RTO) and the amount of incremental data. In other words, the total bandwidth between nodes in the primary and standby clusters must be greater than the daily incremental data size divided by the sync window time.
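Both requirements can be sanity-checked before go-live. The sketch below is illustrative only: the node addresses are hypothetical placeholders, the port numbers are the defaults listed above, and the traffic figures in the example are assumptions rather than measured values.

```python
import socket

# Default ports named above: database services (5258, 5050), data sync service (5288).
PORTS = [5258, 5050, 5288]

# Hypothetical node addresses; replace with the real gcluster/gnode and GVR hosts.
NODES = ["10.0.1.11", "10.0.1.12", "10.0.2.11", "10.0.2.12"]


def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def min_bandwidth_gbps(daily_increment_gb: float, sync_window_hours: float) -> float:
    """Minimum aggregate bandwidth (Gbit/s) so the daily increment fits the sync window."""
    return daily_increment_gb * 8 / (sync_window_hours * 3600)


if __name__ == "__main__":
    for host in NODES:
        for port in PORTS:
            state = "open" if port_reachable(host, port) else "BLOCKED"
            print(f"{host}:{port} -> {state}")

    # Example: 2 TB of daily incremental data and a 4-hour sync window.
    print(f"required aggregate bandwidth: {min_bandwidth_gbps(2048, 4):.2f} Gbit/s")
```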
3.3 Resource Requirements for GVR Sync Tool Deployment
- It is recommended to deploy the GVR tool on dedicated servers; multi-node deployment is supported to ensure high availability. Allocating 1-3 servers for the GVR deployment is advised, including storage for sync logs and configuration data, with each server providing at least an 8-core CPU, 32 GB of RAM, and 1 TB of storage.
4. Applicable Scenarios
4.1 T+1 Synchronization Scenario
4.1.1 Solution Description
The T+1 asynchronous active-active sync scenario is suitable for situations where data synchronization does not require real-time accuracy, and there is a large time window for synchronization, such as batch processing scenarios. After the daily batch processing jobs are completed, the data in the clusters remains static, allowing incremental data to be synchronized. During synchronization, the tables being synchronized on the primary cluster can support queries and append-only DML operations, while the corresponding tables on the standby cluster are not available for external services.
T+1 asynchronous synchronization is loosely coupled with business operations. Because synchronization and business processing run in different time periods, inconsistencies between table data slices are avoided. The procedure is as follows:
1) The batch jobs of the application processing are executed on the primary cluster according to the original logic.
2) The modified tables involved in the application processing jobs are added to the task list for synchronization.
3) After the batch jobs are completed, the active-active synchronization is triggered, either immediately or at a fixed time. For example, if the batch jobs finish at 8 AM, the active-active synchronization starts at 9 AM.
4) Once all the tables in the synchronization list are synchronized, the data in the standby cluster becomes consistent with the data in the primary cluster, completing the active-active synchronization. At this point, the standby cluster can provide external data query services.
Users can also predefine the databases or a larger list of tables to be synchronized, which skips step 2. During active-active synchronization, the GVR tool automatically checks whether each table is consistent between the primary and standby clusters; tables that were not modified during the current job cycle are not synchronized again.
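To make the flow concrete, here is a minimal Python sketch of the T+1 procedure described above. It is purely illustrative: collect_changed_tables and sync_table are hypothetical stand-ins for whatever the application and the GVR tool actually provide, not part of the GVR API.

```python
from concurrent.futures import ThreadPoolExecutor


def collect_changed_tables(job_log: list[dict]) -> set[str]:
    """Hypothetical helper: derive the sync list from the batch-job log (step 2).
    GVR can also skip unchanged tables itself by comparing metadata between
    the primary and standby clusters."""
    return {entry["table"] for entry in job_log if entry.get("modified")}


def sync_table(table: str) -> bool:
    """Placeholder for triggering one table-level incremental sync task."""
    print(f"syncing {table} ...")
    return True


def run_t_plus_1_sync(job_log: list[dict], parallelism: int = 4) -> bool:
    tables = collect_changed_tables(job_log)
    # Multiple tables are synchronized in parallel.
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        results = list(pool.map(sync_table, tables))
    # Only when every table has been synchronized is the standby consistent (step 4).
    return all(results)


if __name__ == "__main__":
    demo_log = [{"table": "ods.orders", "modified": True},
                {"table": "dim.branch", "modified": False}]
    print("standby consistent:", run_t_plus_1_sync(demo_log))
```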
4.1.2 Incremental Synchronization Time Assessment
The T+1 asynchronous active-active synchronization needs to complete the synchronization of incremental data changes within a fixed time window, which can take a relatively long time. The required time can be estimated from the volume of data changes and the network transmission bandwidth. GBase 8a's active-active synchronization transmits data shard-to-shard in parallel and supports synchronizing multiple tables concurrently, allowing full utilization of network bandwidth.
For scenarios where the synchronization time window is insufficient, data synchronization can be carried out in batches according to business characteristics. For example, if the batch processing time window is from midnight to 8 AM, where 12 AM to 2 AM is used for daily incremental data loading, 2 AM to 3 AM for processing and generating public data, 3 AM to 5 AM for processing and generating summary-level data, and 5 AM to 8 AM for statistical calculations, then the synchronization batches can be arranged as follows:
- 3 AM: Perform active-active synchronization of the daily incremental data tables in the ODS layer.
- 5 AM: Perform active-active synchronization of the public processing data tables.
- 9 AM: Perform active-active synchronization of the remaining unsynchronized tables.
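One way to picture this batching is a simple time-triggered schedule. The sketch below uses hypothetical table-group names and plain Python scheduling rather than GVR's actual scheduled-task interface.

```python
import datetime

# Hypothetical table groups mirroring the batching above: ODS layer at 3 AM,
# public processing data at 5 AM, everything remaining at 9 AM.
SYNC_BATCHES = {
    datetime.time(3, 0): ["ods_daily_increment_tables"],
    datetime.time(5, 0): ["public_processing_tables"],
    datetime.time(9, 0): ["remaining_tables"],
}


def due_batches(now: datetime.time) -> list[list[str]]:
    """Return the table groups whose trigger time has already passed."""
    return [tables for trigger, tables in SYNC_BATCHES.items() if now >= trigger]


# At 05:30 the ODS and public-data groups are due; the rest waits until 9 AM.
print(due_batches(datetime.time(5, 30)))
```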
4.1.3 Disaster Recovery Metric Assessment
The disaster recovery metrics under the T+1 asynchronous active-active synchronization scheme depend on the time required to re-run the daily batch jobs.
Since the original data in an OLAP system comes from external systems and the batch processing is typically idempotent, data can be recovered by reloading and re-running the batch jobs; for such data the RPO (Recovery Point Objective) is 0. For data that cannot be recalculated through batch jobs, the worst-case RPO is the interval from the last completed synchronization to the time of the disaster, i.e., RPO < 24 hours. For such business scenarios, the RPO can be reduced by increasing the synchronization frequency, for example synchronizing this type of data every hour to shorten the RPO to under 1 hour.
As for the RTO (Recovery Time Objective), the recovery time starts from the last completed synchronization and is determined by the time needed to re-run the batch jobs. For instance, if re-running the daily batch jobs takes 8 hours, then the RTO will be less than 8 hours.
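These estimates follow directly from the synchronization interval and the batch rerun time; a minimal calculation sketch with placeholder figures is shown below.

```python
def worst_case_rpo_hours(sync_interval_hours: float) -> float:
    """For data that batch jobs cannot regenerate, the worst case is losing
    everything written since the last completed synchronization."""
    return sync_interval_hours


def estimated_rto_hours(batch_rerun_hours: float) -> float:
    """Recovery time is bounded by re-running the daily batch jobs on the standby."""
    return batch_rerun_hours


# Daily T+1 sync vs. hourly sync of critical tables, and an 8-hour batch rerun.
print(worst_case_rpo_hours(24), worst_case_rpo_hours(1), estimated_rto_hours(8))
```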
4.2 Job-Level Near Real-Time Synchronization Scenario
4.2.1 Solution Description
Job-level near real-time synchronization involves adding table data synchronization tasks to the batch processing job scheduling of the application. After the completion of a batch processing job, active-active synchronization is performed for the tables involved in that job. The GVR provides users with a synchronization task scheduling interface, allowing the addition of an active-active synchronization task as the last step in the batch processing job. Once the synchronization task is completed, the job execution is considered complete.
In this solution, active-active synchronization is somewhat coupled with business operations. By using job scheduling, timely data synchronization between the primary and backup clusters can be achieved, meeting the goal of reducing RPO (Recovery Point Objective) and RTO (Recovery Time Objective).
The architecture diagram of this solution is as follows:
1) Modify the business program scripts to add active-active synchronization task scheduling at the end of each batch processing job.
2) Predefine or dynamically generate the list of tables to be synchronized during the job execution. After the job is completed, data synchronization is performed based on the predefined list or the list of tables with data changes during the job.
3) Once the table synchronization task is complete, the batch job is officially considered finished. In case the data synchronization task fails, the job scheduling program should support rerunning the synchronization task without rerunning the entire job.
4) To minimize the performance impact of data synchronization on batch jobs, the final synchronization task can be executed asynchronously. Once the batch job is completed, subsequent batch jobs can proceed while data synchronization runs asynchronously, eventually returning the synchronization results.
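A possible shape for the asynchronous hand-off in step 4 is sketched below; submit_sync_task is a hypothetical stand-in for the synchronization task scheduling interface mentioned above, not the actual GVR API.

```python
from concurrent.futures import Future, ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=2)


def submit_sync_task(tables: list[str]) -> Future:
    """Hypothetical: launch the active-active sync for the tables touched by a
    job without blocking the next batch job (step 4)."""
    def _run() -> bool:
        print("synchronizing:", ", ".join(tables))
        return True  # placeholder for the real synchronization result
    return _pool.submit(_run)


def run_batch_job(name: str, touched_tables: list[str]) -> Future:
    """Run the business job, then hand the touched tables to the sync task."""
    print(f"running batch job {name} ...")
    # ... business processing on the primary cluster ...
    return submit_sync_task(touched_tables)


if __name__ == "__main__":
    pending = run_batch_job("load_orders", ["dw.orders", "dw.order_items"])
    # The scheduler checks the result later; on failure only the synchronization
    # task is re-run, not the whole batch job (step 3).
    print("sync succeeded:", pending.result())
```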
4.2.2 Incremental Data Synchronization Time Assessment
Job-level near real-time data synchronization distributes data synchronization tasks between the primary and backup clusters into various job tasks. Each job has a small data synchronization volume, and the synchronization time is short.
Synchronization performance can still be estimated from the data volume and network bandwidth. In scenarios with long chains of serial, performance-sensitive batch jobs, table synchronization tasks can be executed asynchronously to minimize the impact on the job flow.
4.2.3 Disaster Recovery Metric Assessment
Job-level near real-time data synchronization is similar to T+1 asynchronous active-active synchronization. For business processes that can regenerate data through batch jobs, the RPO is 0. For data that cannot be recalculated through batch jobs, the RPO is equal to the active-active switch time plus the job batch processing time, which can be within minutes.
The RTO (Recovery Time Objective) is close to the time required to re-run the job. For example, if a job that takes 1 hour is running when a disaster occurs in the primary cluster, requiring a switch to the backup cluster, the RTO would be less than 1 hour. The most recent job batch processing can be re-run on the backup cluster based on the tables synchronized during the last synchronization.
4.3 Impact of Active-Active Synchronization on Business and Functional Limitations
1) Impact on Business: The synchronization granularity of the active-active synchronization solution is at the table level, meaning the business impact is also at the table level during synchronization.
- Primary Cluster: During synchronization, the table being synchronized supports reads and DML operations such as INSERT SELECT, INSERT VALUES, LOAD, DELETE, and append-only UPDATE or MERGE with fast_update mode enabled. DDL operations, as well as UPDATE and MERGE without fast_update mode, are not supported.
- Backup Cluster: The table being synchronized does not support read, DML, or DDL operations.
2) Scope of Synchronization:
- The GVR tool supports synchronization of DML operations on the synchronized tables but does not support DDL operations.
- The GVR tool supports metadata synchronization between the primary and backup clusters, including synchronization of tables, stored procedures, views, and user-defined functions.
5. Introduction to the GVR Active-Active Synchronization Tool
The GVR (GBase_Visual_RsyncTool) encapsulates the implementation details of underlying data synchronization. It simplifies active-active synchronization configuration through a graphical interface, supports synchronization of metadata such as stored procedures, and provides monitoring and operational features for active-active synchronization.
The architecture of the GVR synchronization tool is shown in the diagram above, and the responsibilities of each module are as follows:
- Frontend Service: Provides the visual interface and SNMP/RESTful interfaces.
- Backend Service: Receives requests from the frontend, performs user and permission authentication, and manages synchronization task scheduling.
- Underlying Tool: Executes synchronization, encapsulated internally and transparent to the user.
- Logging System: Outputs and records operation logs from the backend service and underlying tool modules.
- Configuration Database: Stores active-active synchronization configuration information persistently and records the execution status of synchronization tasks.
The GVR tool supports the following features:
- Visual configuration of primary-backup cluster synchronization.
- Metadata synchronization.
- Dynamic progress bar to display synchronization progress.
- Viewing of historical synchronization tasks.
- Visual editing of scheduled tasks.
- Synchronization task alert features.
- Data consistency verification between primary and backup clusters.
- Data source management.
- Synchronization task management.
- Synchronization scheduling strategy configuration.
- Synchronization task monitoring.
- Audit of operation logs.