How to Configure a Highly Available MariaDB Master-Slave Setup with MaxScale and Galera Cluster
In this tutorial, we'll deploy a MariaDB Galera cluster on Ubuntu 20.04 LTS (Focal) nodes behind a MaxScale database proxy. MaxScale is an intelligent proxy that routes database requests from clients to the appropriate server in a cluster. Unlike HAProxy, MaxScale uses the Linux kernel's asynchronous I/O, which improves performance.
The cluster we're building will use a read-write split: MaxScale sends all writes to the master, and Galera replicates them to all nodes, which can then serve reads.
In this article, node-01 serves as the master and node-02 and node-03 as slaves. Node-04 is our MaxScale machine.
Main Features of MaxScale
a) If a database server fails, connections are automatically re-established to a different node.
b) Connections can be dynamically added to or removed from a session.
c) MaxScale forwards client requests to multiple database servers.
Installing the cluster
The first step is to add all of your hosts' private IP addresses and hostnames to the hosts file on each server. This lets the nodes communicate over private IP addresses and avoids the need to encrypt traffic. Here is my hosts file (/etc/hosts) on all four servers:
sudo su -
nano /etc/hosts

10.5.108.14 node-01 node-01
10.5.108.13 node-02 node-02
10.5.108.15 node-03 node-03
10.5.108.16 node-04 node-04
Our Galera cluster will be the first three, and the MaxScale proxy will be the fourth.
Let's add the MariaDB repository key on the first three servers.
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xF1656F24C74CD1D8
We add repositories for the x86 and POWER little-endian architectures; the package matching your architecture will be installed.
add-apt-repository 'deb [arch=amd64,i386,ppc64el] http://nyc2.mirrors.digitalocean.com/mariadb/repo/10.6/ubuntu focal main'
Update the sources list:

apt update
And then install MariaDB along with rsync (used below as the SST method):
apt install mariadb-server rsync
Configuring and building the cluster
The cluster must then be built and the configuration files modified. Node-01 will bootstrap the cluster, and the other nodes will join it. So let's start by editing the configuration files on the three nodes.
There we need to find the [galera] section (on Ubuntu, the Galera settings typically live in /etc/mysql/mariadb.conf.d/60-galera.cnf) and change these lines:
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="linoxide-cluster"
wsrep_cluster_address="gcomm://node-01,node-02,node-03"
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
bind-address=0.0.0.0
#
wsrep_sst_method=rsync
wsrep_node_address="10.5.108.14"
wsrep_node_name="node-01"
The last two lines must contain the address and hostname of the current node, so the file above is from node-01. Change those two lines accordingly on each node; the remaining lines stay the same.
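Since only those two lines differ between nodes, a minimal sketch of generating them per node, assuming the node-name-to-IP mapping from our /etc/hosts above:

```shell
#!/bin/sh
# Emit the two node-specific wsrep lines for a given node name.
# The name->IP mapping mirrors the /etc/hosts entries above (an assumption
# about your network; substitute your own addresses).
node="node-02"
case "$node" in
  node-01) ip="10.5.108.14" ;;
  node-02) ip="10.5.108.13" ;;
  node-03) ip="10.5.108.15" ;;
esac
printf 'wsrep_node_address="%s"\nwsrep_node_name="%s"\n' "$ip" "$node"
```

Appending that output to node-02's [galera] section gives exactly the two lines that node needs.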
After finishing this, we must launch the cluster. If the database server is currently running, stop it on each of the three nodes.
systemctl stop mysql
On the first node, bootstrap the cluster (galera_new_cluster is MariaDB's bootstrap helper):

galera_new_cluster
On the other two nodes, start the server normally:
systemctl start mysql
We must set the root password on the first node again, therefore we will run

mysql_secure_installation
After that, you can run this command on any node:
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
It should give you output similar to this:

+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
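If you want to script this check, here is a tiny sketch of the parsing step; sample output stands in for a live mysql query, so only the text handling is illustrated, not the connection:

```shell
#!/bin/sh
# Parse "SHOW STATUS LIKE 'wsrep_cluster_size'" style output and judge health.
# In a real check you would capture the query output instead, e.g.:
#   out=$(mysql -u root -p... -Nse "SHOW STATUS LIKE 'wsrep_cluster_size'")
out="wsrep_cluster_size	3"
size=$(printf '%s\n' "$out" | awk '{print $2}')
if [ "$size" -ge 3 ]; then
  status="cluster healthy ($size nodes)"
else
  status="cluster degraded ($size nodes)"
fi
echo "$status"
```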
MaxScale Proxy installation and preparing the cluster
Once the cluster is operational, we can move to node-04 and install MaxScale there. MaxScale currently supports only the x86_64 architecture. Let's use wget to download the MaxScale deb package.
Next we will install dependencies
apt install libcurl3
And then install MaxScale
dpkg -i maxscale-2.5.0-1.ubuntu.focal.x86_64.deb
After MaxScale has been installed, we must once more open the mysql prompt on our Galera cluster to create a maxscale user and grant it the necessary privileges.
mysql -u root -p
Then, at the mysql prompt, run these statements one by one:
CREATE USER 'maxscale'@'%' IDENTIFIED BY 'your-password-here';
Query OK, 0 rows affected (0.00 sec)

GRANT SELECT ON mysql.db TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

GRANT SELECT ON mysql.user TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)

GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%';
Query OK, 0 rows affected (0.01 sec)
Let's look at how the configuration file below works. The [maxscale] section exposes the admin interface on all interfaces on port 8989 (with the secure GUI disabled), enables info-level logging, sets threads=auto so MaxScale starts one worker thread per CPU core, allows environment-variable substitution in the configuration, logs notice-level messages, disables logging to syslog, and enables logging to MaxScale's own log file under /var/log/maxscale.
[Galera-Monitor] is the section that comes next. Several lines there demand our attention. The servers= line must list the server names as they are defined further down in this config file; these are MaxScale's names for the servers, not their hostnames. In our case they are stg-galeramaster101 through stg-galeramaster103. user is the username we created in the previous section, maxscale, and password is the password you chose for that user. Out of our three nodes, the Galera monitor will choose one to serve as master and the others as slaves. The node with the lowest wsrep_local_index is chosen as master. If the cluster configuration changes, a fresh selection may take place and a new node with the lowest index may become master. If you don't want the master to change in this way, set disable_master_failback=true as in our config file; then, even if a new node with a lower index joins the cluster, the master won't change.
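The selection rule itself is simple: lowest index wins. A toy illustration with hypothetical index values (standing in for querying wsrep_local_index on each node):

```shell
#!/bin/sh
# Pick the master the way galeramon's default selection does:
# the node with the lowest wsrep_local_index wins.
# The "node index" pairs below are hypothetical sample values.
master=$(printf '%s\n' "node-01 2" "node-02 0" "node-03 1" \
  | sort -k2 -n | head -n1 | awk '{print $1}')
echo "master: $master"
```

With these sample indexes, node-02 holds the lowest index and would be selected.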
Then we move on to the [Galera-RW-Service] section, the next crucial part. Here we again enter the names of the three servers, along with the same user name and password from the [Galera-Monitor] section.
Finally, we must define those three servers, giving them the names MaxScale will recognise ([stg-galeramaster101] through [stg-galeramaster103]) and the IP addresses it will use to contact them. We utilise private IP addresses to avoid having to encrypt the traffic.
Now on maxscale server, which is node-04, we are going to configure and start the maxscale proxy database. First lets set up the ufw to allow connections on needed ports.
ufw allow 4006
ufw allow 4007
Then backup the config file
mv /etc/maxscale.cnf /etc/maxscale.cnf.bk
After the file has been backed up and moved, let's create a new file from scratch.
You can use the following as a template for the configuration, although obviously you'll need to edit the <username> and <password> placeholders:
[maxscale]
admin_host=0.0.0.0
admin_port=8989
admin_secure_gui=false
log_info=true
# The number of worker threads that are handling the events coming from the kernel.
threads=auto
#admin_ssl_key=/var/lib/maxscale/certs/server.key
#admin_ssl_cert=/var/lib/maxscale/certs/server.crt
#admin_ssl_ca_cert=/var/lib/maxscale/certs/ca.crt
# Enable or disable the substitution of environment variables in the MaxScale configuration file
substitute_variables=true
# Enable or disable the logging of messages whose syslog priority is notice
log_notice=on
syslog=0
maxlog=1

# List of servers in the cluster
[stg-galeramaster101]
type=server
address=10.5.108.14
port=3306
protocol=MariaDBBackend
priority=1

[stg-galeramaster102]
type=server
address=10.5.108.13
port=3306
protocol=MariaDBBackend
priority=2

[stg-galeramaster103]
type=server
address=10.5.108.15
port=3306
protocol=MariaDBBackend
priority=3

# Monitoring for the Galera server nodes
[Galera-Monitor]
type=monitor
module=galeramon
servers=stg-galeramaster101,stg-galeramaster102,stg-galeramaster103
user=<username>
password=<password>
monitor_interval=200
use_priority=true
available_when_donor=true
# This will ensure that the current master remains the master as long as it's up and running
disable_master_failback=true
backend_connect_timeout=3s
backend_write_timeout=3s
backend_read_timeout=3s

# Galera Read/Write Splitter service
[Galera-RW-Service]
type=service
router=readwritesplit
servers=stg-galeramaster101,stg-galeramaster102,stg-galeramaster103
user=<username>
password=<password>
master_reconnection=true
transaction_replay=true
transaction_replay_retry_on_deadlock=true
master_failure_mode=error_on_write
# ADAPTIVE_ROUTING, based on server average response times.
slave_selection_criteria=ADAPTIVE_ROUTING

# Galera cluster listener
[Galera-Listener]
type=listener
service=Galera-RW-Service
protocol=MariaDBClient
port=4006
address=0.0.0.0

[Replication-Service]
type=service
router=readconnroute
router_options=master
servers=stg-galeramaster101,stg-galeramaster102,stg-galeramaster103
user=<username>
password=<password>

[Replication-Listener]
type=listener
service=Replication-Service
protocol=MariaDBClient
port=4007
address=0.0.0.0
After this has been saved, you can start the maxscale service.
systemctl start maxscale.service
And test whether it is working:
maxctrl -pmariadb list servers
We have successfully installed the load balancer for our Galera cluster, which consists of three Ubuntu 20.04 LTS (Focal) nodes plus a fourth node for MaxScale. MaxScale is a suitable option for large clusters; we built a minimal configuration today, but scaling up is also an option. I sincerely hope that this post helped you become familiar with MaxScale settings.
Enjoy your day:)