Introduction
Out of all the tutorials out there, cluster setup is always shown for non-private networks. Since this setup is for a private network, you need to skip certain steps and add a few others.
First things first, the checklist:
- All the server nodes on the same private network.
- A minimum of 3 server nodes.
- MySQL 8.0 Community edition installed on all of those servers: MySQL Installation.
- MySQL Shell installed on one of those servers: MySQL Shell Installation.
Server Setup Details
- App Server : apt 127.0.0.4
- DB Server node 1 : dbt1 127.0.0.1
- DB Server node 2 : dbt2 127.0.0.2
- DB Server node 3 : dbt3 127.0.0.3
After ticking everything on the checklist, start off with the hosts configuration so that the nodes on the private network can talk to each other.
- Run
sudo nano /etc/hosts
- Set each node's internal IP in the hosts file:
127.0.0.1 dbt1
127.0.0.2 dbt2
127.0.0.3 dbt3
- Remove all iptables rules by running
sudo iptables -F
or, preferably, configure iptables properly.
- In firewalld, permanently allow these ports:
3306
33060
33061
33062
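If you go the firewalld route, the four ports above can be opened in one loop. This is a sketch under the assumption of a default firewalld zone; adjust it to your setup:

```shell
# Open the MySQL and Group Replication ports permanently in firewalld.
# 3306  = classic MySQL protocol, 33060 = X protocol,
# 33061 = Group Replication, 33062 = admin connections.
for port in 3306 33060 33061 33062; do
  sudo firewall-cmd --permanent --add-port="${port}/tcp"
done
sudo firewall-cmd --reload   # apply the permanent rules
```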
- Configure the my.cnf settings in
/etc/my.cnf
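The exact values depend on your environment, but a minimal sketch of the settings InnoDB Cluster expects in /etc/my.cnf looks like this (server_id and report_host are per-node values; the ones shown are for dbt1 and are assumptions, not the author's exact config):

```ini
[mysqld]
# Must be unique per node: e.g. 1 for dbt1, 2 for dbt2, 3 for dbt3
server_id = 1
# GTID-based replication is required by Group Replication
gtid_mode = ON
enforce_gtid_consistency = ON
# Hostname this node reports to the other cluster members
report_host = dbt1
```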
- Back up the MySQL databases (in case of emergency).
- Create a clusteradmin user with administrative access to all the server nodes.
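One convenient way to create that account (an assumption on my part, not the only way) is to let MySQL Shell do it while configuring each instance: the clusterAdmin option of dba.configureInstance creates the user with the grants InnoDB Cluster needs and prompts you for its password:

```js
// Run inside MySQL Shell, connected as root to each node in turn.
// The clusterAdmin option creates the administration account with
// the required privileges; you will be prompted for its password.
dba.configureInstance('root@dbt1:3306', {clusterAdmin: 'clusteradmin'})
```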
Setting up MySQL InnoDB Cluster
- Start setting up the cluster by launching MySQL Shell from the CLI:
$ mysqlsh
- In the MySQL Shell, run
dba.configureInstance('clusteradmin@127.0.0.1')
to configure dbt1 for MySQL clustering,
dba.configureInstance('clusteradmin@127.0.0.2')
to configure dbt2 for MySQL clustering, and
dba.configureInstance('clusteradmin@127.0.0.3')
to configure dbt3 for MySQL clustering.
- In the same MySQL Shell, run
\c clusteradmin@127.0.0.1
to connect to the first master node, which is dbt1.
- Then run
cluster=dba.createCluster('clusterName')
to instantiate and create the cluster group.
- After that, add the remaining instances to the group by running these commands:
cluster.addInstance('clusteradmin@127.0.0.2')
This command adds the second DB node to the cluster group clusterName. Key in the clusteradmin user's password and choose
clone
to clone the data from the master node.
cluster.addInstance('clusteradmin@127.0.0.3')
This command adds the third DB node to the cluster group clusterName. Key in the clusteradmin user's password and choose
Incremental Recovery
to get the metadata from the master node.
- After all the instances have been added successfully, run
cluster.status()
in the connected MySQL Shell to check whether the clustering is working. If the output shows all 3 nodes as available, the cluster is working.
Setting up the MySQL Router
Always make sure that you have installed mysql-router on your application server by following this tutorial: MySQL Router Installation.
Steps to Configure:
- If you have mysqld installed and running on your app server, stop it first by running the command
$ sudo systemctl stop mysqld
- Run this command to bootstrap the router against the cluster server:
$ mysqlrouter --bootstrap mysqlrouter@127.0.0.1:3306 --conf-use-sockets --directory=/tmp/mysqlrouter
- Change directory to
/tmp/mysqlrouter
- Run
bash start.sh
- Press
Enter
on your keyboard; the router sockets will be created for you automatically.
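For reference, the bootstrap step writes a mysqlrouter.conf into that directory. The routing sections it generates typically look something like the sketch below (6446 and 6447 are the default read-write and read-only ports; the exact section names derive from your cluster name and Router version):

```ini
[routing:clusterName_rw]
bind_port = 6446
destinations = metadata-cache://clusterName/?role=PRIMARY
routing_strategy = first-available

[routing:clusterName_ro]
bind_port = 6447
destinations = metadata-cache://clusterName/?role=SECONDARY
routing_strategy = round-robin
```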
- Configure your MySQL client at
/etc/my.cnf
with:
[client]
socket=/tmp/mysqlrouter/mysql.sock
Verifying that the MySQL Router Works
- Configure your application to use the new port given.
- Try to log into the cluster through the router config by running
mysql -uroot -p -P 6446
- Check that the databases shown in the output have the same number of tables as in the cluster.
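To confirm the router is really distributing connections, you can also ask each connection which backend it landed on; the hostname reported on the read-write port (6446) should be the primary, while the read-only port (6447) should report a secondary. The user and host here follow the setup above:

```shell
# Read-write port: should report the primary (dbt1)
mysql -uclusteradmin -p -h 127.0.0.1 -P 6446 -e 'SELECT @@hostname, @@port;'
# Read-only port: should report one of the secondaries (dbt2 or dbt3)
mysql -uclusteradmin -p -h 127.0.0.1 -P 6447 -e 'SELECT @@hostname, @@port;'
```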