Qing
MOT Usage Scenarios

MOT can significantly speed up an application's overall performance, depending on the characteristics of its workload. MOT improves transaction-processing performance by making data access and transaction execution more efficient, and by removing lock and latch contention between concurrently executing transactions.

MOT's extreme speed stems from being optimized for concurrent in-memory access, not merely from residing in memory. Its data storage, access and processing algorithms were designed from the ground up to take advantage of state-of-the-art techniques in in-memory and high-concurrency computing.

openGauss enables an application to use any combination of MOT tables and standard disk-based tables. MOT is especially beneficial for your most active, high-contention and performance-sensitive application tables (those that have proven to be bottlenecks) and for tables that require predictable low-latency access and high throughput.
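As a sketch of how the two engines mix, the table and column names below are illustrative (not from the source). In openGauss, MOT tables are declared through the FOREIGN TABLE syntax, since the engine is exposed via the Foreign Data Wrapper mechanism:

```sql
-- Standard disk-based table:
CREATE TABLE customer (
    cust_id   INT PRIMARY KEY,
    cust_name VARCHAR(64)
);

-- Memory-optimized table: created as a FOREIGN TABLE, which routes it
-- to the MOT storage engine instead of the disk-based row store.
CREATE FOREIGN TABLE account (
    acct_id INT PRIMARY KEY,
    cust_id INT,
    balance DECIMAL(16,2)
);
```

Note that, depending on the openGauss release, queries and transactions that span both engines may be restricted; check the cross-engine support notes for your version before mixing MOT and disk tables in a single statement.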

MOT tables can be used in a variety of application use cases, including:

· Real-Time Decision Making – Risk assessment and fraud detection are business-critical in banking, finance and insurance institutions. Such scenarios require processing of complex business-compliance rules with millisecond latency and high throughput.

· High-Throughput Transaction Processing – This is the primary scenario for using MOT, because it supports large transaction volumes while maintaining consistently low latency for individual transactions. Examples of such applications are real-time decision systems, payment systems, financial instrument trading, sports betting, mobile gaming, ad delivery and so on.

· Acceleration of Performance Bottlenecks – High-contention tables can significantly benefit from using MOT, even when other tables remain on disk. Converting such tables (along with related tables and tables that are referenced together in queries and transactions) results in a significant performance boost, due to lower latencies, less contention and locking, and increased server throughput capacity.

· Elimination of Mid-Tier Cache – Cloud and mobile applications tend to experience periodic or sudden spikes of massive workload. Additionally, many of these applications are 80% or more read-oriented, with frequent repetitive queries. To sustain the workload spikes, as well as to provide an optimal user experience through low-latency response times, applications sometimes deploy a mid-tier caching layer. Such additional layers increase development complexity and time, and also increase operational costs. MOT provides a strong alternative, simplifying the application architecture with a consistent, high-performance data store while shortening development cycles and reducing CAPEX and OPEX.

· Large-Scale Data Streaming and Data Ingestion – MOT tables enable large-scale, streamlined data processing in the Cloud (for Mobile, M2M and IoT), Transactional Processing (TP), Analytical Processing (AP) and Machine Learning (ML). MOT tables are especially good at consistently and quickly ingesting large volumes of data from many different sources at the same time. The data can later be processed, transformed and moved into slower disk-based tables. Alternatively, MOT enables querying of consistent, up-to-date data to support real-time conclusions. In IoT and cloud applications with many real-time data streams, it is common to have dedicated data-ingestion and processing tiers. For instance, an Apache Kafka cluster might be used to ingest 100,000 events per second at roughly 10 ms latency, while a periodic batch-processing task enriches and converts the collected data into a format suitable for loading into a relational database for further analysis. MOT can support such scenarios, while eliminating the separate data-ingestion tier, by ingesting data streams directly into MOT relational tables, ready for analysis and decision making. This speeds up data collection and processing, eliminates costly tiers and slow batch processing, improves the consistency and freshness of analyzed data, and lowers Total Cost of Ownership (TCO).

· Lower TCO – Higher resource efficiency and elimination of the mid-tier cache can reduce costs by an estimated 30% to 90%. Vendors of comparable in-memory technologies (for example, MemSQL and Azure) have reported similar case studies.
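The streaming-ingestion pattern from the Data Streaming scenario above can be sketched in SQL. This is an illustrative sketch, not the product's documented pipeline: all names are hypothetical, and it assumes an openGauss instance with MOT enabled whose release permits statements that read from an MOT table and write to a disk-based one.

```sql
-- Hot ingestion target: events land directly in an MOT table,
-- replacing a separate ingestion tier.
CREATE FOREIGN TABLE events_hot (
    event_id   BIGINT PRIMARY KEY,
    device_id  INT,
    payload    VARCHAR(256),
    created_at TIMESTAMP
);

-- Disk-based archive for data that has cooled down.
CREATE TABLE events_archive (
    event_id   BIGINT PRIMARY KEY,
    device_id  INT,
    payload    VARCHAR(256),
    created_at TIMESTAMP
);

-- Periodic batch job: copy older rows to disk, then trim the hot table.
INSERT INTO events_archive
SELECT * FROM events_hot
WHERE created_at < now() - INTERVAL '1 hour';

DELETE FROM events_hot
WHERE created_at < now() - INTERVAL '1 hour';
```

Queries for real-time analysis run directly against `events_hot`, so conclusions are drawn from fresh data without waiting for the batch step.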
