MySQL: Calculate Read and Write Ratios in Percentage

I have been assigned a task to help a client get an idea of his database usage on MySQL Cluster before migrating it to a Galera cluster. Galera scales well for reads but not for writes, so the first thing to do is calculate the read/write ratio:

SELECT @total_com := SUM(IF(variable_name IN ('Com_select', 'Com_delete', 'Com_insert', 'Com_update', 'Com_replace'), variable_value, 0)) AS `Total`,
 @total_reads := SUM(IF(variable_name = 'Com_select', variable_value, 0)) AS `Total reads`,
 @total_writes := SUM(IF(variable_name IN ('Com_delete', 'Com_insert', 'Com_update', 'Com_replace'), variable_value, 0)) as `Total writes`,
 ROUND((@total_reads / @total_com * 100),2) as `Reads %`,
 ROUND((@total_writes / @total_com * 100),2) as `Writes %`
FROM information_schema.GLOBAL_STATUS;

The output would be as below:

+------------+-------------+--------------+---------+----------+
| Total      | Total reads | Total writes | Reads % | Writes % |
+------------+-------------+--------------+---------+----------+
| 1932344732 |  1899878513 |     32466219 |   98.32 |     1.68 |
+------------+-------------+--------------+---------+----------+
1 row in set (0.00 sec)

The values above are cumulative since the last restart, so if you have just restarted your MySQL server, you may need to wait at least several hours to get a meaningful result.
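
If you want a figure that reflects current traffic rather than everything since the last restart, a rough alternative is to sample the same counters twice and work on the difference. Below is a minimal sketch (save it as, say, rw_ratio.sh); it assumes the root credentials are available via ~/.my.cnf and that the sampling interval is long enough to be representative:

#!/bin/bash
# Sample the Com_* counters twice and report the read/write ratio
# for the interval in between, instead of since the last restart.
INTERVAL=60

snapshot() {
    # prints: <total reads> <total writes>
    mysql -N -B -e "SHOW GLOBAL STATUS WHERE Variable_name IN
        ('Com_select','Com_delete','Com_insert','Com_update','Com_replace')" |
    awk '{ if ($1 == "Com_select") r += $2; else w += $2 } END { print r+0, w+0 }'
}

read r1 w1 < <(snapshot)
sleep "$INTERVAL"
read r2 w2 < <(snapshot)

awk -v r="$((r2 - r1))" -v w="$((w2 - w1))" 'BEGIN {
    t = r + w
    if (t > 0)
        printf "Reads: %d (%.2f%%)  Writes: %d (%.2f%%)\n", r, r*100/t, w, w*100/t
    else
        print "No read/write traffic observed during the interval"
}'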

As a simple initial estimate: if the write percentage is below 50%, the schema should run fine on a Galera cluster. However, you also need to fully understand the application side. Hope this helps people out there!

Converting Magento to Work Well on Galera Cluster

I have a Magento data set running on MySQL-wsrep with Galera. Galera has its known limitations, and one of them is:

DELETE operation is unsupported on tables without primary key. Also rows in tables without primary key may appear in different order on different nodes. Don’t use tables without primary key.

Basically, if you want your database to be served by a Galera cluster, use the InnoDB storage engine and define a primary key for each table. That's all. Since the Magento data set was not designed with this limitation in mind, you can see that many tables do not meet the criteria.

You can use the following query to identify objects that Galera does not support (thanks to Giuseppe Maxia for this):

SELECT DISTINCT Concat(t.table_schema, '.', t.table_name)     AS tbl,
                t.engine,
                IF(Isnull(c.constraint_name), 'NOPK', '')     AS nopk,
                IF(s.index_type = 'FULLTEXT', 'FULLTEXT', '') AS ftidx,
                IF(s.index_type = 'SPATIAL', 'SPATIAL', '')   AS gisidx
FROM   information_schema.tables AS t
       LEFT JOIN information_schema.key_column_usage AS c
              ON ( t.table_schema = c.constraint_schema
                   AND t.table_name = c.table_name
                   AND c.constraint_name = 'PRIMARY' )
       LEFT JOIN information_schema.statistics AS s
              ON ( t.table_schema = s.table_schema
                   AND t.table_name = s.table_name
                   AND s.index_type IN ( 'FULLTEXT', 'SPATIAL' ) )
WHERE  t.table_schema NOT IN ( 'information_schema', 'performance_schema','mysql' )
       AND t.table_type = 'BASE TABLE'
       AND ( t.engine <> 'InnoDB'
              OR c.constraint_name IS NULL
              OR s.index_type IN ( 'FULLTEXT', 'SPATIAL' ) )
ORDER  BY t.table_schema,
          t.table_name;

Example:

mysql> SELECT DISTINCT
    ->        CONCAT(t.table_schema,'.',t.table_name) as tbl,
    ->        t.engine,
    ->        IF(ISNULL(c.constraint_name),'NOPK','') AS nopk,
    ->        IF(s.index_type = 'FULLTEXT','FULLTEXT','') as ftidx,
    ->        IF(s.index_type = 'SPATIAL','SPATIAL','') as gisidx
    ->   FROM information_schema.tables AS t
    ->   LEFT JOIN information_schema.key_column_usage AS c
    ->     ON (t.table_schema = c.constraint_schema AND t.table_name = c.table_name
    ->         AND c.constraint_name = 'PRIMARY')
    ->   LEFT JOIN information_schema.statistics AS s
    ->     ON (t.table_schema = s.table_schema AND t.table_name = s.table_name
    ->         AND s.index_type IN ('FULLTEXT','SPATIAL'))
    ->   WHERE t.table_schema NOT IN ('information_schema','performance_schema','mysql')
    ->     AND t.table_type = 'BASE TABLE'
    ->     AND (t.engine <> 'InnoDB' OR c.constraint_name IS NULL OR s.index_type IN ('FULLTEXT','SPATIAL'))
    ->   ORDER BY t.table_schema,t.table_name;
+-------------------------------------------------+--------+------+----------+--------+
| tbl                                             | engine | nopk | ftidx    | gisidx |
+-------------------------------------------------+--------+------+----------+--------+
| magento.api2_acl_user                           | InnoDB | NOPK |          |        |
| magento.api_session                             | InnoDB | NOPK |          |        |
| magento.catalogsearch_fulltext                  | MyISAM |      | FULLTEXT |        |
| magento.catalog_category_anc_categs_index_idx   | InnoDB | NOPK |          |        |
| magento.catalog_category_anc_categs_index_tmp   | InnoDB | NOPK |          |        |
| magento.catalog_category_anc_products_index_idx | InnoDB | NOPK |          |        |
| magento.catalog_category_anc_products_index_tmp | InnoDB | NOPK |          |        |
| magento.catalog_category_product_index_enbl_idx | InnoDB | NOPK |          |        |
| magento.catalog_category_product_index_enbl_tmp | InnoDB | NOPK |          |        |
| magento.catalog_category_product_index_idx      | InnoDB | NOPK |          |        |
| magento.catalog_category_product_index_tmp      | InnoDB | NOPK |          |        |
| magento.catalog_product_index_price_downlod_tmp | MEMORY |      |          |        |
| magento.oauth_nonce                             | MyISAM | NOPK |          |        |
| magento.weee_discount                           | InnoDB | NOPK |          |        |
| magento.widget_instance_page_layout             | InnoDB | NOPK |          |        |
| magento.xmlconnect_config_data                  | InnoDB | NOPK |          |        |
+-------------------------------------------------+--------+------+----------+--------+

 

I do not know much about the Magento data set and structure, so I am assuming that the output above will simply cause problems later, given the Galera limitations; it is better to comply and alter whatever is necessary on those tables.

So I start by adding a simple auto-increment primary key to the tables labelled NOPK:

mysql> ALTER TABLE magento.api2_acl_user ADD id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST;
mysql> ALTER TABLE magento.api_session ADD id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST;
mysql> ALTER TABLE magento.weee_discount ADD id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST;
mysql> ALTER TABLE magento.widget_instance_page_layout ADD id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST;
mysql> ALTER TABLE magento.xmlconnect_config_data ADD id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST;
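
If the list of NOPK tables is long, you could also generate these statements in bulk instead of typing them one by one. This is only a sketch, under the assumption that none of the affected tables already has an id column; review the generated file before applying it:

$ mysql -N -B -e "SELECT CONCAT('ALTER TABLE ', t.table_schema, '.', t.table_name, ' ADD id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST;') FROM information_schema.tables t LEFT JOIN information_schema.key_column_usage c ON (t.table_schema = c.constraint_schema AND t.table_name = c.table_name AND c.constraint_name = 'PRIMARY') WHERE t.table_schema = 'magento' AND t.table_type = 'BASE TABLE' AND t.engine = 'InnoDB' AND c.constraint_name IS NULL" > add_pk.sql
$ less add_pk.sql
$ mysql < add_pk.sql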

Next, add a primary key and convert the storage engine to InnoDB:

mysql> ALTER TABLE magento.oauth_nonce ADD id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY FIRST, ENGINE='InnoDB';

Then, remove the full-text indexing and convert the storage engine to InnoDB:

mysql> ALTER TABLE magento.catalogsearch_fulltext DROP INDEX FTI_CATALOGSEARCH_FULLTEXT_DATA_INDEX;
mysql> ALTER TABLE magento.catalogsearch_fulltext ENGINE='InnoDB';

I am quite sure the DROP INDEX statement above will have some performance impact. Maybe Magento does not really fit a Galera multi-master environment, but it is worth a try. I will keep updating this post to share the outcome.

Am going to sleep now. Cheers!

Install MySQL Cluster in Debian

MySQL Cluster is different from a normal MySQL server. It has 3 roles:

  • management
  • data
  • SQL or API

Data nodes require a lot of memory. It is recommended that these nodes do not share any workload with SQL or management nodes, as that would end up in resource exhaustion. So we will set up the SQL nodes together with the management nodes to reduce the number of servers to 4 (instead of 6 – 3 roles x 2 servers).

For the best minimum setup, we will use 2 servers as data nodes, and the other 2 servers will be SQL nodes co-located with management nodes:

  • sql1 (192.168.10.101) – sql node #1 + management node #1
  • sql2 (192.168.10.102) – sql node #2 + management node #2
  • data1 (192.168.10.103) – data node #1
  • data2 (192.168.10.104) – data node #2

I am using Debian 6.0.7 Squeeze 64bit.

 

All Nodes

1. Install libaio-dev, which is required by MySQL Cluster:

$ apt-get update && apt-get install libaio-dev -y

2. Download the package from MySQL Downloads page here and extract it under /usr/local directory:

$ cd /usr/local
$ wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.2/mysql-cluster-gpl-7.2.12-linux2.6-x86_64.tar.gz/from/http://cdn.mysql.com/
$ tar -xzf mysql-cluster-gpl-7.2.12-linux2.6-x86_64.tar.gz

3. Rename the extracted directory to a shorter name which is mysql:

$ mv mysql-cluster-gpl-7.2.12-linux2.6-x86_64 mysql

4. Create MySQL configuration directory:

$ mkdir /etc/mysql

5. Export MySQL bin path into user environment:

$ export PATH="$PATH:/usr/local/mysql/bin"
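
Note that the export above only lasts for the current shell session. Assuming you want the path available after a reboot as well, you could append it to root's shell profile:

$ echo 'export PATH="$PATH:/usr/local/mysql/bin"' >> /root/.bashrc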

6. Create mysql user and group and assign the correct permission to MySQL base directory:

$ useradd mysql
$ chown -R mysql:mysql /usr/local/mysql

Once completed, make sure that MySQL path is /usr/local/mysql with correct ownership of user mysql.

Data Nodes

1. Log in to data1 and data2, create the MySQL configuration file at /etc/mysql/my.cnf and add the following lines:

[mysqld]
ndbcluster
ndb-connectstring=sql1,sql2
 
[mysql_cluster]
ndb-connectstring=sql1,sql2

2. Create the MySQL cluster data directory:

$ mkdir /usr/local/mysql/mysql-cluster

SQL Nodes + Management Nodes

1. Login to sql1 and sql2 and configure SQL nodes by adding following lines into /etc/mysql/my.cnf:

[mysqld]
ndbcluster
ndb-connectstring=sql1,sql2
port=3306
default_storage_engine=ndbcluster
 
[mysql_cluster]
ndb-connectstring=sql1,sql2

2. Create data directory for MySQL cluster:

$ mkdir -p /usr/local/mysql/mysql-cluster

3. Create MySQL Cluster configuration file at /etc/mysql/config.ini:

$ vim /etc/mysql/config.ini

And add following lines:

[ndb_mgmd default]
DataDir=/usr/local/mysql/mysql-cluster
 
[ndb_mgmd]
NodeId=1
HostName=sql1
 
[ndb_mgmd]
NodeId=2
HostName=sql2
 
[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/usr/local/mysql/mysql-cluster
 
[ndbd]
NodeId=3
HostName=data1
 
[ndbd]
NodeId=4
HostName=data2
 
[mysqld]
[mysqld]

4. Copy the mysql.server init script from support-files directory into /etc/init.d directory and setup auto-start on boot:

$ cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
$ update-rc.d mysql defaults

5. Install MySQL system tables:

$ cd /usr/local/mysql
$ scripts/mysql_install_db --user=mysql

Starting the Cluster

1. Start MySQL cluster management service in sql1 and sql2:

For sql1:

$ ndb_mgmd -f /etc/mysql/config.ini --ndb-nodeid=1

For sql2:

$ ndb_mgmd -f /etc/mysql/config.ini --ndb-nodeid=2

2. Start MySQL cluster storage service:

For data1:

$ ndbd --ndb-nodeid=3

For data2:

$ ndbd --ndb-nodeid=4

3. Start the MySQL API service at sql1 and sql2:

$ service mysql start

4. Check the MySQL cluster status in sql1 or sql2:

$ ndb_mgm -e show
Connected to Management Server at: sql1:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.10.103 (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=4 @192.168.10.104 (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.10.101 (mysql-5.5.30 ndb-7.2.12)
id=2 @192.168.10.102 (mysql-5.5.30 ndb-7.2.12)
 
[mysqld(API)] 2 node(s)
id=5 @192.168.10.101 (mysql-5.5.30 ndb-7.2.12)
id=6 @192.168.10.102 (mysql-5.5.30 ndb-7.2.12)

5. Since the MySQL user table does not run on the NDB storage engine, we need to set the MySQL root password on both nodes (sql1 and sql2):

$ mysqladmin -u root password 'r00tP4ssword'
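
To confirm that both SQL nodes really serve the same data, you can create a small NDB table on sql1 and read it back from sql2. This is only a sanity-check sketch; it assumes the default test database created by mysql_install_db still exists:

On sql1:

$ mysql -u root -p -e "CREATE TABLE test.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER; INSERT INTO test.t1 VALUES (1)"

On sql2:

$ mysql -u root -p -e "SELECT * FROM test.t1"

And clean up on sql1 afterwards:

$ mysql -u root -p -e "DROP TABLE test.t1"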

Cluster ready! Now you can start querying the MySQL cluster by connecting to sql1 or sql2. For load balancing and failover, you can set up HAProxy in front of sql1 and sql2.

High Availability: cPanel with MySQL Cluster, Keepalived and HAProxy

I have successfully installed and integrated MySQL Cluster with HAProxy and Keepalived to provide a scalable MySQL service for a cPanel server running on CentOS 6.3 64bit. As you may know, cPanel has a function called “Setup Remote MySQL server” which we can use to remotely access and control a MySQL server from cPanel.

This brings a big advantage: the cPanel server load is reduced tremendously because the mysqld service and its resources are served by a cluster of servers. The following picture shows my architecture:

cpanel_cluster

I will be using following variables:

OS: CentOS 6.3 64bit
WHM/cPanel version: 11.34.0 (build 11)
MySQL root password: MhGGs4wYs

The Tricks

  • We will need to use the same MySQL root password on all servers, including the cPanel server
  • The cPanel server’s SSH key needs to be installed on all database servers to allow passwordless SSH login
  • All servers must have the same /etc/hosts entries
  • At least 4 servers are needed for MySQL Cluster (2 SQL/management/LB nodes and 2 data nodes)
  • mysql1 (active) and mysql2 (passive) will share a virtual IP managed by Keepalived
  • mysql1 and mysql2 will also act as the load balancers, redirecting MySQL traffic from the cPanel server to mysql1 and mysql2
  • MySQL Cluster only supports the ndbcluster storage engine, so databases will be created in ndbcluster by default
  • On mysql1 and mysql2, MySQL will listen on port 3307 because 3306 will be used by HAProxy for load balancing

All Servers

1. In this post, I am going to turn off the firewall and SELinux on all servers:

$ service iptables stop
$ chkconfig iptables off
$ setenforce 0
$ sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

2. Install ntp using yum to make sure all servers’ time is in sync:

$ yum install ntp -y
$ ntpdate -u my.pool.ntp.org

Prepare the cPanel Server

1. Let’s start by declaring all host names in /etc/hosts:

192.168.10.100     cpanel      cpanel.mydomain.com
192.168.10.101     mysql1      mysql1.mydomain.com
192.168.10.102     mysql2      mysql2.mydomain.com
192.168.10.103     mysql-data1 mysql-data1.mydomain.com
192.168.10.104     mysql-data2 mysql-data2.mydomain.com
192.168.10.110     mysql       mysql.mydomain.com    #Virtual IP for mysql service

2. Copy /etc/hosts file to other servers:

$ scp /etc/hosts mysql1:/etc
$ scp /etc/hosts mysql2:/etc
$ scp /etc/hosts mysql-data1:/etc
$ scp /etc/hosts mysql-data2:/etc

3. Setup SSH key. This will allow passwordless SSH between cPanel server and MySQL servers:

$ ssh-keygen -t dsa

Just press ‘Enter’ for all prompts.

4. Copy the SSH key to other servers:

$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql2
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data2

5. Setup MySQL root password in WHM. Login to WHM > SQL Services > MySQL Root Password. Enter the MySQL root password and click “Change Password”.

6. Go to WHM > SQL Services > Additional MySQL Access Hosts and add the hosts that should be allowed to access the MySQL cluster, as below:

add_host

 

 

Data Nodes (mysql-data1 and mysql-data2)

1. Download and install MySQL storage package from this page:

$ cd /usr/local/src
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm

2. Create a MySQL configuration file at /etc/my.cnf and add the following lines. This configuration tells the data node to connect to mysql1 and mysql2 as the management nodes:

[mysqld]
ndbcluster
ndb-connectstring=mysql1,mysql2
 
[mysql_cluster]
ndb-connectstring=mysql1,mysql2

 

SQL Nodes (mysql1 and mysql2)

1. Install required package using yum:

$ yum install perl libaio* pcre* popt* openssl openssl-devel gcc make -y

2. Download all required packages for Keepalived, HAProxy and MySQL Cluster package from this site (management, tools, shared, client, server):

$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.7.tar.gz
$ wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.22.tar.gz
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm

3. Extract and compile Keepalived:

$ tar -xzf keepalived-1.2.7.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install

4. Extract and compile HAProxy:

$ tar -xzf haproxy-1.4.22.tar.gz
$ cd haproxy-*
$ make TARGET=linux26 ARCH=x86_64 USE_PCRE=1
$ make install

5. Install the MySQL packages in the following order (management > tools > shared > client > server):

$ cd /usr/local/src
$ rpm -Uhv MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm

6. Create a new directory for MySQL Cluster. We also need to create the cluster configuration file config.ini underneath it:

$ mkdir /var/lib/mysql-cluster
$ vim /var/lib/mysql-cluster/config.ini

And add following line:

[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
 
[ndb_mgmd]
NodeId=1
HostName=mysql1
 
[ndb_mgmd]
NodeId=2
HostName=mysql2
 
[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster
 
[ndbd]
NodeId=3
HostName=mysql-data1
 
[ndbd]
NodeId=4
HostName=mysql-data2
 
[mysqld]
NodeId=5
HostName=mysql1
 
[mysqld]
NodeId=6
HostName=mysql2

7. Create the mysql configuration file at /etc/my.cnf and add following line:

[mysqld]
ndbcluster
port=3307
ndb-connectstring=mysql1,mysql2
default_storage_engine=ndbcluster
 
[mysql_cluster]
ndb-connectstring=mysql1,mysql2

Starting the Cluster

1. Start mysql cluster management service:

For mysql1:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=1

For mysql2:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=2

2. Start the mysql cluster storage service in both data nodes (mysql-data1 & mysql-data2):

$ ndbd

3. Start the mysql service (mysql1 & mysql2):

$ service mysql start

4. Login to mysql console and run following command  (mysql1 & mysql2):

mysql> use mysql;
mysql> alter table user engine=ndbcluster;
mysql> alter table db engine=ndbcluster;

5. Check the engines of the user and db tables in the mysql database; they should appear as below:

mysql> SELECT table_name,engine FROM INFORMATION_SCHEMA.TABLES WHERE table_schema=DATABASE();
+---------------------------+------------+
| table_name                | engine     |
+---------------------------+------------+
| user                      | ndbcluster |
| columns_priv              | MyISAM     |
| db                        | ndbcluster |
| event                     | MyISAM     |
| func                      | MyISAM     |
| general_log               | CSV        |
| help_category             | MyISAM     |
| help_keyword              | MyISAM     |
| help_relation             | MyISAM     |
| help_topic                | MyISAM     |
| host                      | MyISAM     |
| ndb_apply_status          | ndbcluster |
| ndb_binlog_index          | MyISAM     |
| plugin                    | MyISAM     |
| proc                      | MyISAM     |
| procs_priv                | MyISAM     |
| servers                   | MyISAM     |
| slow_log                  | CSV        |
| tables_priv               | MyISAM     |
| time_zone                 | MyISAM     |
| time_zone_leap_second     | MyISAM     |
| time_zone_name            | MyISAM     |
| time_zone_transition      | MyISAM     |
| time_zone_transition_type | MyISAM     |
+---------------------------+------------+
24 rows in set (0.00 sec)

6. Check the management status in mysql1. You should see output similar to below:

$ ndb_mgm -e show
Connected to Management Server at: mysql1:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.10.103 (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0, Master)
id=4 @192.168.10.104 (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.10.101 (mysql-5.1.66 ndb-7.1.25)
id=2 @192.168.10.102 (mysql-5.1.66 ndb-7.1.25)
 
[mysqld(API)] 2 node(s)
id=5 @192.168.10.101 (mysql-5.1.66 ndb-7.1.25)
id=6 @192.168.10.102 (mysql-5.1.66 ndb-7.1.25)

7. Change the MySQL root password to match the MySQL root password on the cPanel server (mysql1):

$ mysqladmin -u root password 'MhGGs4wYs'

8. Add the MySQL root password to root’s environment so we do not need to specify the password when accessing the mysql console (mysql1 & mysql2):

$ vim /root/.my.cnf

And add following line:

[client]
user="root"
password="MhGGs4wYs"

9. Add a haproxy user without password, to be used by HAProxy to check the availability of the real servers (mysql1):

mysql> GRANT USAGE ON *.* TO haproxy@'%';

10. Add root user from any host so cPanel servers can access and control the MySQL cluster (mysql1):

mysql> GRANT USAGE ON *.* TO root@'%' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql1' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql2' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'%';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql1';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql2';

11. As the last step, we need to give the GRANT privilege to root@’%’ by running the following commands in the mysql console (mysql1):

mysql> UPDATE mysql.user SET `Grant_priv` = 'Y' WHERE `User` = 'root';
mysql> FLUSH PRIVILEGES;
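
You can then double-check the resulting privileges before moving on; this is just a read-only verification:

$ mysql -e "SHOW GRANTS FOR 'root'@'%'"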

 

Configuring Virtual IP and Load Balancer (mysql1 & mysql2)

1. Configure HAProxy by creating a configuration /etc/haproxy.cfg:

$ vim /etc/haproxy.cfg

And add following line:

defaults
    log global
    mode http
    retries 2
    option redispatch
    maxconn 4096
    contimeout 50000
    clitimeout 50000
    srvtimeout 50000
 
listen mysql_proxy 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcpka
    option httpchk
    option mysql-check user haproxy
    server mysql1 192.168.10.101:3307 weight 1
    server mysql2 192.168.10.102:3307 weight 1

2. Next we need to configure the virtual IP. Open /etc/sysctl.conf and add the following line to allow binding to a non-local IP:

net.ipv4.ip_nonlocal_bind = 1

And run following command to apply the changes:

$ sysctl -p

3. Create Keepalived configuration file at /etc/keepalived.conf and add following line:

For mysql1:

vrrp_script chk_haproxy {
      script "killall -0 haproxy"    # verify the pid is exist or not
      interval 2                     # check every 2 seconds
      weight 2                       # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
      interface eth0                 # interface to monitor
      state MASTER
      virtual_router_id 51           # Assign one ID for this route
      priority 101                   # 101 on master, 100 on backup
      virtual_ipaddress {
            192.168.10.110           # the virtual IP
      }
      track_script {
            chk_haproxy
      }
}

For mysql2:

vrrp_script chk_haproxy {
      script "killall -0 haproxy"    # verify the pid is exist or not
      interval 2                     # check every 2 seconds
      weight 2                       # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
      interface eth0                 # interface to monitor
      state MASTER
      virtual_router_id 51           # Assign one ID for this route
      priority 100                   # 101 on master, 100 on backup
      virtual_ipaddress {
            192.168.10.110 # the virtual IP
      }
      track_script {
            chk_haproxy
      }
}

4. Start HAProxy:

$ haproxy -D -f /etc/haproxy.cfg

5. Start Keepalived:

$ keepalived -f /etc/keepalived.conf

6. Add following line into /etc/rc.local to make sure Keepalived and HAProxy start on boot:

$ vim /etc/rc.local

And add following line:

/usr/local/sbin/haproxy -D -f /etc/haproxy.cfg
/usr/local/sbin/keepalived -f /etc/keepalived.conf

7. Check that the virtual IP is up on mysql1:

$ ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.110/32 scope global eth0

8. Verify in mysql2 whether Keepalived is running in backup mode:

$ tail /var/log/messages
Dec 14 12:08:56 mysql2 Keepalived_vrrp[3707]: VRRP_Instance(VI_1) Entering BACKUP STATE

9. Check that HAProxy is listening on port 3306 and mysqld on port 3307:

$ netstat -tulpn | grep -e mysql -e haproxy
tcp   0   0   0.0.0.0:3306     0.0.0.0:*       LISTEN      3587/haproxy
tcp   0   0   0.0.0.0:3307     0.0.0.0:*       LISTEN      3215/mysqld
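
At this point you can also verify from the cPanel server (or any allowed host) that connections through the virtual IP are really load balanced; running the query a few times should alternate between the two SQL nodes’ hostnames:

$ mysql -h mysql.mydomain.com -u root -p'MhGGs4wYs' -e "SELECT @@hostname"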

 

Setup Remote MySQL Server in cPanel Server

1. Go to WHM > SQL Services > Setup Remote MySQL server and enter the following details. Make sure the remote server address resolves to the virtual IP address managed by Keepalived:

Remote server address (IP address or FQDN): mysql.mydomain.com
Remote SSH Port                           : 22
Select authentication method              : Public Key (default)
Select an installed SSH Key               : id_dsa

2. Wait for a while and you will see following output:

remote_success

3. Now MySQL Cluster is integrated with WHM/cPanel. You may verify this by opening phpMyAdmin in WHM at WHM > SQL Services > PHPmyAdmin; you should see that you are connected to the MySQL Cluster as in the screenshot below:

my_cluster

Testing

We can test our MySQL high-availability architecture by turning off the power completely on mysql1 or mysql2, and on mysql-data1 or mysql-data2, at the same time. You will notice that the MySQL service is still available from cPanel’s point of view.

Here is phpMyAdmin for my test blog running on WordPress. You can see that the database tables were created with the ndbcluster engine:

blog_cluster

I have not tested this architecture on any production server yet, and I cannot guarantee that all WHM/cPanel SQL functionality works as expected. The following cPanel features have been tried and work well:

  • PHPMyAdmin
  • cPanel MySQL features (MySQL Database and MySQL Database Wizard)

Monitor MySQL Galera Cluster to Prevent Split-Brain

I have another MySQL Galera Cluster running on Percona XtraDB Cluster, with 2 nodes plus 1 arbitrator, so in total there are 3 votes in the quorum. The known risk of running a 2-node cluster is split-brain: if the arbitrator goes down and the network switch fails at the same time, it will have a big impact on database consistency and cluster availability.

To avoid this, we need to closely monitor the number of nodes seen by the cluster. With 3 nodes everything is normal and we do nothing. With 2 nodes we should send a warning that one server is down, and with only 1 node we shut down the MySQL server to prevent a split-brain from happening.

To check the number of nodes currently seen by the cluster in MySQL Galera, we can use this command:

$ mysql -e "show status like 'wsrep_cluster_size'";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
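
Besides the cluster size, it is also worth watching wsrep_cluster_status on each node: a node that is part of the quorum reports Primary, while a node that has been cut off reports non-Primary, which is another strong hint of a split-brain situation:

$ mysql -e "show status like 'wsrep_cluster_status'"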

We will create a BASH script which monitors this value and takes the appropriate action.

1. First of all, install sendmail and mailx. We need these in order to send the alert via email:

$ yum install sendmail mailx -y

2. Make sure sendmail starts on boot, and start the service now:

$ chkconfig sendmail on
$ service sendmail start

3. Create the monitoring BASH script, named galera_monitor, in the /root/scripts directory:

$ mkdir -p /root/scripts
$ vim /root/scripts/galera_monitor

And paste following line:

#!/bin/bash
## Monitor galera cluster size
 
## Where the alert should be sent to
EMAIL="[email protected]"
 
cluster_size=$(mysql -e "show status like 'wsrep_cluster_size'" | tail -1 | awk '{print $2}')
hostname=$(hostname)
error=$(tail -100 /var/lib/mysql/mysql-error.log)
 
SUBJECT1="ERROR: [$hostname] Galera Cluster Size"
SUBJECT2="WARNING: [$hostname] Galera Cluster Size"
EMAILMESSAGE="/tmp/emailmessage.txt"
 
echo "Cluster size result: $cluster_size" > $EMAILMESSAGE
echo "Latest error: $error" >> $EMAILMESSAGE
 
## If mysql is unreachable the value will be empty; skip this run
if [ -z "$cluster_size" ]; then
    exit 0
fi
 
if [ "$cluster_size" -eq 1 ]; then
    /bin/mail -s "$SUBJECT1" "$EMAIL" < $EMAILMESSAGE
    /etc/init.d/mysql stop                    # stop the mysql server to prevent split-brain
elif [ "$cluster_size" -eq 2 ]; then
    /bin/mail -s "$SUBJECT2" "$EMAIL" < $EMAILMESSAGE
fi

4. Add the root login credentials into /root/.my.cnf so the script can log in to the mysql console automatically:

[client]
user=root
password=MyR00tP4ss

5. Change the permission of /root/.my.cnf so it is only accessible by root:

$ chmod 400 /root/.my.cnf

6. Change the permission of the script so it is executable:

 $ chmod 755 /root/scripts/galera_monitor

7. Add the script to cron:

$ echo "* * * * * /bin/sh /root/scripts/galera_monitor" >> /var/spool/cron/root

8. Reload cron daemon to apply changes:

$ service crond reload

Done. You will receive an email every minute if your cluster size has dropped to 2, and you should do something to bring all 3 nodes back up. If the cluster size is 1, the script will stop the MySQL server.

Notes: You should NOT enable the cron job while re-initializing your Galera cluster, as the script will keep stopping MySQL. This script is only suitable for monitoring a production cluster.

CentOS 6: Install MySQL Cluster – The Simple Way

In this post I am going to install a MySQL cluster following the architecture below:

 

 

MySQL Cluster consists of 3 types of node:

  • Data node (mysql-data1 & mysql-data2)
  • Sql daemon node (mysql-mysqld1 & mysql-mysqld2)
  • Management node (mysql-management)

Data nodes hold the database, which is replicated automatically across all data nodes. The SQL daemon node is the interface between the database and the client; it serves the queries it gets from the data nodes, acting like a “gateway”. The management node is required to monitor and manage the whole cluster. The recommended minimum setup for high availability and scalability is 5 servers, as highlighted in the picture above. I will be using CentOS 6.3 64bit for all servers.

All Servers

1. SELinux must be disabled on all servers. Change the SELinux configuration file at /etc/sysconfig/selinux:

SELINUX=disabled

2. Firewall is disabled on all servers:

$ service iptables stop
$ chkconfig iptables off
$ setenforce 0

3. Entries in /etc/hosts on all servers should be as below:

192.168.1.21     web-server
192.168.1.51     mysql-mysqld1
192.168.1.52     mysql-mysqld2
192.168.1.53     mysql-management
192.168.1.54     mysql-data1
192.168.1.55     mysql-data2

Management Node

1. Download and install MySQL Cluster (management & tools) package from here:

$ cd /usr/local/src
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-management-7.0.35-1.rhel5.x86_64.rpm
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-tools-7.0.34-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-management-7.0.35-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-tools-7.0.34-1.rhel5.x86_64.rpm

2. Create the mysql-cluster directory and configuration file config.ini:

$ mkdir -p /var/lib/mysql-cluster
$ vim /var/lib/mysql-cluster/config.ini

And add following line:

[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
 
[ndb_mgmd]
HostName=mysql-management
 
[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster
 
[ndbd]
HostName=mysql-data1
 
[ndbd]
HostName=mysql-data2
 
[mysqld]
HostName=mysql-mysqld1
 
[mysqld]
HostName=mysql-mysqld2

Data Nodes

1. The following steps should be executed on both data nodes (mysql-data1 and mysql-data2). Download and install the MySQL storage package from here:

$ cd /usr/local/src
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-storage-7.0.35-1.rhel5.x86_64.rpm 
$ rpm -Uhv MySQL-Cluster-gpl-storage-7.0.35-1.rhel5.x86_64.rpm

2. Add following line under /etc/my.cnf:

[mysqld]
ndbcluster
ndb-connectstring=mysql-management
 
[mysql_cluster]
ndb-connectstring=mysql-management

SQL Nodes

1. The following steps should be executed on both SQL nodes (mysql-mysqld1 and mysql-mysqld2). Remove mysql-libs using yum:

$ yum remove mysql-libs -y

2. Install required package using yum:

$ yum install libaio -y

3. Download the MySQL client, shared and server package from MySQL download site here:

$ cd /usr/local/src
$ wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-client-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
$ wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-shared-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
$ wget http://www.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-server-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/

4. Install all packages:

$ rpm -Uhv MySQL-Cluster-*

5. Add following line into /etc/my.cnf:

[mysqld]
ndbcluster
ndb-connectstring=mysql-management
default_storage_engine=ndbcluster
 
[mysql_cluster]
ndb-connectstring=mysql-management

Start the Cluster

1. To start the cluster, we must follow this order:

Management Node > Data Node > SQL Node

2. So, login to management node (mysql-management) and execute following command:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.1.63 ndb-7.0.35
2012-11-22 07:36:55 [MgmtSrvr] INFO -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2012-11-22 07:36:55 [MgmtSrvr] INFO -- Sucessfully created config directory

3. Next, start the ndbd service on the data node (mysql-data1):

$ ndbd
2012-11-22 07:37:24 [ndbd] INFO -- Angel connected to 'mysql-management:1186'
2012-11-22 07:37:24 [ndbd] INFO -- Angel allocated nodeid: 2

4. Next, start the ndbd service on the data node (mysql-data2):

$ ndbd
2012-11-22 07:37:24 [ndbd] INFO -- Angel connected to 'mysql-management:1186'
2012-11-22 07:37:24 [ndbd] INFO -- Angel allocated nodeid: 3

5. Next, start the mysql service on the SQL node (mysql-mysqld1):

$ service mysql start

6. Next, start the mysql service on the SQL node (mysql-mysqld2):

$ service mysql start

Monitor the Cluster

Monitoring the cluster requires you to log in to the management server. To check the overall status of the cluster:

$ ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.54 (mysql-5.1.63 ndb-7.0.35, Nodegroup: 0, Master)
id=3 @192.168.1.55 (mysql-5.1.63 ndb-7.0.35, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.53 (mysql-5.1.63 ndb-7.0.35)
 
[mysqld(API)] 2 node(s)
id=4 @192.168.1.51 (mysql-5.5.27 ndb-7.2.8)
id=5 @192.168.1.52 (mysql-5.5.27 ndb-7.2.8)

To check the Data nodes status:

$ ndb_mgm -e "all status"
Connected to Management Server at: localhost:1186
Node 2: started (mysql-5.1.63 ndb-7.0.35)
Node 3: started (mysql-5.1.63 ndb-7.0.35)

To check the memory usage of data nodes:

$ ndb_mgm -e "all report memory"
Connected to Management Server at: localhost:1186
Node 2: Data usage is 0%(23 32K pages of total 8192)
Node 2: Index usage is 0%(20 8K pages of total 16416)
Node 3: Data usage is 0%(23 32K pages of total 8192)
Node 3: Index usage is 0%(20 8K pages of total 16416)

Stopping the Cluster

1. To stop the cluster, we must follow this order:

SQL Node > Management Node / Data Node

2. Log in to the SQL nodes (mysql-mysqld1 & mysql-mysqld2) and run the following command:

$ service mysql stop

3. Login to management node (mysql-management) and run following command:

$ ndb_mgm -e shutdown
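
If you only need to take a single data node down for maintenance rather than stopping the whole cluster, you can also stop it by its node ID from the management client (the IDs match the ndb_mgm output above):

$ ndb_mgm -e "2 stop"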

 

Done! You should now be able to create or import databases on one of the SQL nodes. You can put a load balancer in front of the SQL nodes to take advantage of the performance and high availability.

Notes

You may notice that the distribution version I installed is “rhel5”. You can get the “el6” distribution package at this page: http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.2/ and search for any “el6” package name.

High Availability: Configure Piranha for HTTP, HTTPS and MySQL

Piranha is a simple yet powerful tool to manage virtual IPs and services through its web-based GUI.

As a follow-up to my previous post on how to install and configure Piranha for HTTP service (http://blog.secaserver.com/2012/07/centos-configure-piranha-load-balancer-direct-routing-method/), in this post we will complete the Piranha configuration with HTTP and HTTPS load balancing using direct routing with firewall marks, and MySQL load balancing using direct routing only.

HTTP/HTTPS will be accessed by users via the virtual public IP 130.44.50.120, while the MySQL service will be accessed by the web servers via the virtual private IP 192.168.100.30. Kindly refer to the picture below for the full architecture:

 

All Servers

SELINUX must be turned off on all servers. Change the SELINUX configuration file at /etc/sysconfig/selinux:

SELINUX=disabled

Load Balancers

1. All steps should be done in both servers unless specified. We will install Piranha and other required packages using yum:

$ yum install piranha ipvsadm mysql -y

2. Open firewall ports as below:

$ iptables -A INPUT -m tcp -p tcp --dport 3636 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 80 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 443 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 539 -j ACCEPT
$ iptables -A INPUT -m udp -p udp --dport 161 -j ACCEPT

3. Start all required services and make sure they will auto-start if the server reboots:

$ service piranha-gui start
$ chkconfig piranha-gui on
$ chkconfig pulse on

4. Run following command to set password for user piranha. This will be used when accessing the web-based configuration tools:

$ piranha-passwd

5. Turn on IP forwarding. Open /etc/sysctl.conf and make sure following line has value 1:

net.ipv4.ip_forward = 1

And run following command to activate it:

$ sysctl -p

6. Check whether iptables is loaded properly as the kernel module:

$ lsmod | grep ip_tables
ip_tables 17733 3 iptable_filter,iptable_mangle,iptable_nat

7. Since we will serve HTTP and HTTPS from the same servers, we need to group this traffic so it is forwarded to the same destination. To achieve this, we mark the packets with iptables so they are recognized correctly on the destination server. Set the iptables rules to mark all packets destined for the same server with mark “80”:

$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 80 -j MARK --set-mark 80
$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 443 -j MARK --set-mark 80
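
Note that these mangle rules, like the filter rules opened earlier, only live in memory. Assuming you want them to survive a reboot on CentOS, save them to /etc/sysconfig/iptables:

$ service iptables save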

Load Balancer #1

1. Check that the IP addresses are correctly set up:

$ ip a | grep inet
inet 130.44.50.121/28 brd 110.74.131.15 scope global eth0
inet 192.168.100.41/24 brd 192.168.10.255 scope global eth1

2. Log in to Piranha at http://130.44.50.121:3636/ as user piranha, with the password set up in step #4 of the Load Balancers section.

3. Enable redundancy. Go to Piranha > Redundancy > Enable.

4. Enter the IP information as below:

Redundant server public IP     : 130.44.50.122
Monitor NIC links for failures : Enabled
Use sync daemon                : Enabled

Click ‘Accept’.

5. Go to Piranha > Virtual Servers > Add > Edit. Add information as below and click ‘Accept’:

 

6. Next, go to Real Server. Here we will put the IP addresses of all the real servers that serve HTTP. Fill in the required information as below:

7. Now we need to do a similar setup for HTTPS. Just change the ‘Application port’ to 443. For the Real Server, change the real server’s destination port to 443 as well.

8. For MySQL virtual server, enter information as below:

 

9. For MySQL real servers, enter information as below:

 

10. Configure monitoring script for MySQL virtual server. Click on ‘Monitoring Script’ and configure as below:

 

11. Setup the monitoring script for mysql:

$ vim /root/mysql_mon.sh

And add following line:

#!/bin/sh
# Nanny calls this script with the real server's IP address as the first argument
USER=monitor
PASS=M0Npass5521
####################################################################
CMD=/usr/bin/mysqladmin
 
IS_ALIVE=`$CMD -h "$1" -u $USER -p$PASS ping | grep -c "alive"`
 
if [ "$IS_ALIVE" = "1" ]; then
    echo "UP"
else
    echo "DOWN"
fi

12. Change the script permission to executable:

$ chmod 755 /root/mysql_mon.sh

13. Now copy over the script and Piranha configuration file to load balancer #2:

$ scp /etc/sysconfig/ha/lvs.cf lb2:/etc/sysconfig/ha/lvs.cf
$ scp /root/mysql_mon.sh lb2:/root/

14. Restart Pulse to activate the Piranha configuration in LB#1:

$ service pulse restart
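
Once pulse is running, you can confirm that LVS has picked up the virtual servers and the firewall mark by listing the IPVS table (ipvsadm was installed in the first step):

$ ipvsadm -L -n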

Load Balancer #2

On this server, we just need to enable and restart the pulse service as below:

$ chkconfig pulse on
$ service pulse restart

Database Cluster

1. We need to allow the MySQL monitoring user used by nanny (the load balancer) in the MySQL cluster. Log in to the MySQL console and run the following SQL command on one of the servers:

mysql> GRANT USAGE ON *.* TO monitor@'%' IDENTIFIED BY 'M0Npass5521';
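
From either load balancer you can now check that the nanny script authenticates correctly against a real server; it should print UP (the IP below is one of the MySQL real servers shown later in this post, so adjust it to your environment):

$ /root/mysql_mon.sh 192.168.100.33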

2. Add the virtual IP manually using iproute:

$ /sbin/ip addr add 192.168.100.30 dev eth1

3. Add following entry into /etc/rc.local to make sure the virtual IP is up after boot:

$ echo '/sbin/ip addr add 192.168.100.30 dev eth1' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #2 to bring up the virtual IP manually. The VIP cannot be configured in the network scripts to start on boot.

4. Check the IPs in the server. Example below was taken from server Mysql1:

$ ip a | grep inet
inet 130.44.50.127/24 brd 130.44.50.255 scope global eth0
inet 192.168.100.33/24 brd 192.168.100.255 scope global eth1
inet 192.168.100.30/32 scope global eth1

Web Cluster

1. On each and every web server, we need to install a package called arptables_jf using yum. We will use this to manage our ARP table entries and rules:

$ yum install arptables_jf -y

2. Add following rules respectively for every server:

Web1:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.123

Web 2:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.124

Web 3:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.125

3. Enable arptables_jf to start on boot, save the rules and restart the service:

$ service arptables_jf save
$ chkconfig arptables_jf on
$ service arptables_jf restart

4. Add the virtual IP manually into the server using iproute command as below:

$ /sbin/ip addr add 130.44.50.120 dev eth0

5. Add following entry into /etc/rc.local to make sure the virtual IP is up after boot:

$ echo '/sbin/ip addr add 130.44.50.120 dev eth0' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #4 to bring up the virtual IP manually. The VIP cannot be configured in the network scripts to start on boot.

6. Check the IPs in the server. Example below was taken from server Web1:

$ ip a | grep inet
inet 130.44.50.123/28 brd 110.74.131.15 scope global eth0
inet 130.44.50.120/32 scope global eth0
inet 192.168.100.21/24 brd 192.168.100.255 scope global eth1

You now have a complete high-availability MySQL and HTTP/HTTPS service with automatic failover and load balancing provided by Piranha using the direct routing method.

In this tutorial I am not focusing on HTTPS, because in this test environment I do not have SSL set up correctly and did not have much time to do so. In any case, you may use the following BASH script to monitor HTTPS from Piranha (nanny):

#!/bin/bash
 
if [ $# -eq 0 ]; then
        echo "host not specified"
        exit 1
fi
 
curl -s --insecure \
	--cert /etc/crt/hostcert.pem \
	--key /etc/crt/hostkey.pem \
	https://${1}:443 | grep "" \
	&> /dev/null
 
if [ $? -eq 0 ]; then
        echo "UP"
else
        echo "DOWN"
fi

I hope this tutorial could be useful for some guys out there!

CentOS: Integrate ClusterControl into Existing MySQL Galera Cluster

I am going to integrate Severalnines’ ClusterControl into my existing MySQL Galera cluster, which is currently being used by our development team. ClusterControl allows us to manage, control and monitor our MySQL cluster from a single point.

Actually, you can use their installer script to install a complete MySQL Galera suite, as described in this post.

In this case, I already have a MySQL Galera Cluster running for our development team, which was installed the hard way, similar to this post. So my mission is to integrate this great tool into my existing MySQL Galera cluster.

Requirement

  • 3 servers already synced into a Galera cluster
  • 1 server for ClusterControl
  • Firewall and SELinux disabled
  • The /etc/hosts entries on all servers should be as below:
192.168.1.201    galera1.local galera1
192.168.1.202    galera2.local galera2
192.168.1.203    galera3.local galera3
192.168.1.210    clustercontrol.local clustercontrol

ClusterControl Server

1. ClusterControl requires passwordless SSH between all servers, so we need to set up the SSH key for all of the Galera nodes. Let’s generate the key as user root:

$ ssh-keygen -t rsa

Notes: Just accept the default values by pressing Enter until it finishes.

2. Copy the public key that we just created to all Galera nodes:

$ ssh-copy-id -i ~/.ssh/id_rsa root@galera1
$ ssh-copy-id -i ~/.ssh/id_rsa root@galera2
$ ssh-copy-id -i ~/.ssh/id_rsa root@galera3

Notes: You may need to enter the root password of the destination server each time you execute the command.
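
A quick way to confirm that passwordless SSH now works to every node before running the bootstrap scripts:

$ for host in galera1 galera2 galera3; do ssh root@$host hostname; done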

3. Download the bootstrap script from Severalnines page here. This script will help us a lot for this integration:

$ cd /usr/local/src
$ wget http://severalnines.com/downloads/cmon/cc-bootstrap-1.1.34.tar.gz
$ tar -xzf cc-bootstrap-1.1.34.tar.gz

4. Before we start, we need to make sure several packages are installed using yum; sudo and crond are required:

$ yum install cronie sudo -y

5. Run the bootstrap script for the controller. This will install and configure all required packages in order to run ClusterControl:

$ cd cc-bootstrap-1.1.34
$ bin/bootstrap-controller.sh

Notes: During installation you might see the installer trying to regenerate the RSA key. Just choose ‘N’.

Is this your Controller host IP, 192.168.1.210 [Y/n]: Y
What is your username [ubuntu] (e.g, ubuntu or root for RHEL): root
Where do you want to have the ClusterControl config files [/root/s9s]:
Where is your ssh key [/root/.ssh/id_rsa]:
Enter a MySQL root password to be set for the MySQL server on the Controller host [password]:
* On debian/ubunu when installing the MySQL server please enter the above root password when prompted
 
What is your DB cluster type [galera] (mysqlcluster|replication|galera|mysql_singl):
Where are your DB hosts [ip1 ip2 ip3 ... ipN]: 192.168.1.201 192.168.1.202 192.168.1.203
Enter the MySQL root password on the agent/DB hosts [password]: MyDBp4ss67
Thank you...
 
Install a MySQL Server and rrdtools? [Y/n]: Y
Install ClusterControl Controller? [Y/n]: Y
Install apache2 and php5 [Y/n]: Y
Install ClusterControl's Web application? [Y/n]: Y

6. Once done, we need to run the bootstrap script for the agents. This will remotely install and configure the required packages so that the Galera cluster can be controlled from the ClusterControl server:

$ bin/bootstrap-agents.sh

7. Install JpGraph. This is required for ClusterControl to render graphs correctly:

$ bin/install-jpgraph.sh

8. If you have already configured your MySQL Galera cluster in /etc/my.cnf, copy the file to the ClusterControl server under the directory /root/s9s/mysql/config. In my case, I skip this step because I have nothing configured inside /etc/my.cnf on the cluster nodes.

$ scp root@galera1:/etc/my.cnf /root/s9s/mysql/config

9. Start the cmon service, then enable and start cron:

$ service cmon start
$ chkconfig crond on
$ service crond start

10. Installation done. Log in to the ClusterControl page at http://192.168.1.210/cmon. We need to change the cluster name to the Galera cluster name, which you can retrieve with the following command on any of the Galera nodes:

$ mysql -e 'show variables' | grep wsrep_cluster_name
wsrep_cluster_name      my_wsrep_cluster

So my Galera cluster name is my_wsrep_cluster.

Go to ClusterControl > Galera Cluster ‘default_cluster_1’ (1/0) > Administration > Cluster Settings and change the Cluster Name as below:

Save the changes and you are done. Just wait a while and you will notice your Galera cluster nodes being counted as 1/3 on the Dashboard, similar to below:

You do not even need to log in to your Galera cluster nodes to complete this integration. Believe me, this tool is a must if you have a MySQL Galera cluster setup. I will start integrating ClusterControl into all of my Galera cluster sets ASAP!

Easiest Way to Install A Complete MySQL Galera Cluster

Some of us might not have much time for installing and configuring the several servers that run in a database cluster. As you can see from my previous posts on installing MySQL Galera and Percona XtraDB Cluster, it always takes an hour or two to complete the installation and configuration.

Severalnines provides a super-great tool to install, configure and monitor a MySQL cluster. Basically, you just go to their website, follow the configuration wizard and it creates an installer script for you. All you need is several servers ready with basic networking. A minimum of 4 servers is required: 3 running the MySQL Galera cluster and another host running ClusterControl, which does all the automation work for you, including installation, configuration and monitoring.

I will use the latest CentOS 6.3 64bit in this post, even though they do not mention whether it is supported yet; I have already tested it and it works as expected. You may refer to the architecture I used, as below:

Preparing the Servers

Make sure /etc/hosts on all servers has the same entries as below:

192.168.0.221     galera1 galera1.local
192.168.0.222     galera2 galera2.local
192.168.0.223     galera3 galera3.local
192.168.0.230     clustercontrol clustercontrol.local

Disable SELINUX completely in all servers:

$ sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Generate Deployment Script

1. Go to Severalnines.com Galera configurator website here: http://www.severalnines.com/galera-configurator/

2. Choose “Codership’s release (based on MySQL 5.5)” and click Next. Start configuring your servers under General Settings as below:

Cloud Provider                   : none/on-premise
Operating System                 : RHEL6 - Redhat 6.2/Fedora/Centos 6.2/OLN 6.2
Platform                         : Linux 64-bit (x86_64)
Number of Galera Servers         : 3+1
MySQL PortNumber                 : 3306
Galera PortNumber                : 4567
Galera SST PortNumber            : 4444 
SSH PortNumber                   : 22
OS User                          : root
MySQL Server password (root user): BTrHs87cE
CMON DB password (cmon user)     : VfjLs12yz
Firewall (iptables)              : disable

3. Click Next. Under the Configure Storage tab, enter the respective information, for example:

System Memory (MySQL Servers): 512MB
WAN                          : no
Skip DNS Resolve             : yes
Database Size                : <8GB
MySQL Usage                  : Medium write/high read
Number of cores              : 2
innodb_buffer_pool_size      : (auto configured)
innodb_file_per_table        : Checked

4. Click Next. Under Configure Directories tab, enter respective information as example below:

5. Click Next. Under Review Config tab, enter respective information as example below:

6. On the final page, take note of the URL of the deployment script so we can download it directly onto our server:

Deploy the Cluster

1. Login via console or SSH to the ClusterControl server which is 192.168.0.230 and download the deployment script:

$ cd /usr/local/src
$ wget http://www.severalnines.com/galera-configurator/tmp/r18g0ggko1svcaje8ude7gjmm4/s9s-galera-2.1.0-rpm.tar.gz
$ tar -xzf s9s-galera-2.1.0-rpm.tar.gz

2. Navigate to the installer and start the deployment process:

$ cd s9s-galera-2.1.0-rpm/mysql/scripts
$ ./deploy.sh 2>&1 |tee cc.log

3. Answer a few questions, for example:

Redhat/Centos/Fedora detected - do you want to set SELinux to Permissive mode.
** IF YOU HAVE SELinux == disabled ON ALL HOSTS PRESS 'n' AND IGNORE THE WARNING. **
(you can set SELinux to Enforcing again later if you want) - 'y' is recommended (y/n): n
 
Can you SSH from this host to all other hosts without password?
Choosing 'n' will allow you to setup shared keys. (y/n): n

Your cluster will be ready within a few minutes. Once completed, it will show you the ClusterControl URL. This is the monitoring tool that is integrated seamlessly with the MySQL Galera cluster deployment.

ClusterControl

1. Before we connect to ClusterControl, we need to install JpGraph so we can view graphs correctly, as described on the support page here: http://support.severalnines.com/entries/20978841-jpgraph-installation. Download JpGraph from this website, http://jpgraph.net/download/:

$ tar xvfz jpgraph-3.5.0b1.tar.gz -C /var/www/html/cmon
$ cd /var/www/html/cmon
$ ln -s jpgraph-3.5.0b1 jpgraph
$ chown apache:apache -R jpgraph*

2. Start the crond service so the RRDtool graph works correctly:

$ chkconfig crond on
$ service crond start

3. Let’s connect to ClusterControl. Open a web browser and go to http://192.168.0.230/cmon/setup.php. On this page, we are required to enter the CMON password:

4. Click “Test the connection and save config data“. You should then be redirected to the Dashboard page as below:

Done! Your MySQL Galera Cluster is running perfectly with monitoring, configuration and alerting in place. As you can see in the screenshot above, “Host Alarms” has found that the server’s RAM usage is critical (96% usage). It even gives you some advice on how to solve the problem. Isn’t that helpful?

Considering how easy it is to deploy this cluster at no added cost, you now have one of the most complete MySQL Galera Cluster suites!

PHP Session using Sharedance in Apache Web Cluster

Our new online shopping cart site runs on 3 Apache servers which all mount the same document root, with a load balancer in front distributing the HTTP/HTTPS connections equally using a weighted round-robin algorithm. We are facing a big problem with session handling for the site: when a user’s session does not exist on the current server, the user has to authenticate again if the load balancer redirects them to another server. We considered the following solutions:

  • Mount the same session.save_path partition on all nodes using GFS2:
    • Serious I/O issues due to heavy writes into a single directory
    • Increased server load, especially from the GFS2 locking (dlm) process
  • Use a memcached server:
    • Needs a simple modification of the PHP code
    • Sessions are saved in memory instead of on disk. It is fast, but session IDs are lost if the server reboots or the service restarts
  • Store sessions in a MySQL database:
    • Needs changes to the PHP code, especially around locking. When sessions are stored as files, the file system handles the locking automatically
    • Increases the database server workload due to heavy reads/writes, and the table can grow to millions of rows within months

At the end of the day, we decided to use Sharedance because it is super simple. You just install the server, make some changes in php.ini, restart Apache and you are done! You then have a session server which all web cluster nodes will look up.

OS: CentOS 6.3 64bit
Server IP: 192.168.10.50
Session directory: /var/sharedance
Website: misterryan.com
Webserver #1: 192.168.10.101
Webserver #2: 192.168.10.102
Webserver #3: 192.168.10.103

Session Server (Sharedance)

1. We will use RPMforge to make our life easier:

$ rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
$ rpm -Uhv http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

2. Install Sharedance using yum:

$ yum install sharedance -y

3. I want Sharedance to listen on the main IP with a cache expiration of 6 hours (21600 seconds). Open /etc/sysconfig/sharedance in a text editor and make sure you have the following line:

SHAREDANCE_OPTIONS="--ip=192.168.10.50 --expiration=21600"

You can use the following command to check the complete list of available options:

$ sharedanced --help

4. Make sure it auto-starts on boot and start the Sharedance service:

$ chkconfig sharedance on
$ service sharedance start

5. You can check the process list; these 2 processes should exist and be listening on port 1042:

$ ps aux | grep sharedanced
496    2621    0.0   0.0   111520  568  ?  Ss   11:44   0:00   sharedanced [SERVER] 
496    2622    0.0   0.0   111520  352  ?  SN   11:44   0:00   sharedanced [CLEANUP]
$ netstat -tulpn | grep sharedanced
tcp   0   0     192.168.0.171:1042     0.0.0.0:*       LISTEN       2621/sharedanced [SERVER]

To allow the Sharedance port through iptables:

$ iptables -I INPUT -p tcp --dport 1042 -j ACCEPT

6. We need to copy the PHP session handler and the sharedance PHP file provided by Sharedance, located under the /usr/share/doc/sharedance-0.6/php/ directory, to all web cluster nodes so they can be prepended in php.ini. I will copy them to the /etc/php.d directory on all nodes:

$ cd /usr/share/doc/sharedance-0.6/php/
$ scp session_handler.php sharedance.php 192.168.10.101:/etc/php.d/
$ scp session_handler.php sharedance.php 192.168.10.102:/etc/php.d/
$ scp session_handler.php sharedance.php 192.168.10.103:/etc/php.d/

Web Servers

1. Change the php.ini values on your server as below:

auto_prepend_file = /etc/php.d/session_handler.php
session.save_handler = user

2. Edit /etc/php.d/session_handler.php in a text editor and make sure the first line contains the IP address of the Sharedance server:

define('SESSION_HANDLER_HOST', '192.168.10.50');

3. Restart Apache web server to apply the changes:

$ service httpd restart
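
To confirm the changes were picked up, you can do a quick check (a minimal sketch; it assumes the CLI and Apache read the same php.ini, which is typical on CentOS):

$ php -i | grep -E 'session.save_handler|auto_prepend_file'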

Testing

I downloaded this file, http://blog.secaserver.com/files/session.tar.gz, and executed it from the web server to get the following result:

We should see the same session ID in the Sharedance session directory at /var/lib/sharedance:

$ ls -al | grep tqsncjk23k78cm747n4b1eq5l4
-rw------- 1 sharedance sharedance 24 Aug 1 12:17 tqsncjk23k78cm747n4b1eq5l4
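
You can also verify that a session started on one web server is visible from another by reusing the same cookie jar. This is only a rough sketch: the /session/ path below is an assumption about where the test script was extracted, so adjust it to your setup.

$ curl -s -c /tmp/sess.txt http://192.168.10.101/session/ > /dev/null    # start a session on web server #1
$ curl -s -b /tmp/sess.txt http://192.168.10.102/session/                # reuse the same session ID on web server #2

Both requests should end up using the same session file on the Sharedance server.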

CentOS: Configure Piranha as Load Balancer (Direct Routing Method)

I am currently working on a web cluster project using CentOS. In this project, I have 2 web servers running Apache and mounting the same document root to serve the HTTP content. I also have 2 servers in front of them acting as load balancer and failover to increase the availability of the two-node web server cluster. The virtual IP will be held by load balancer #1 with automatic failover to load balancer #2.

You may refer to the diagram below to get a clearer picture:

I am using the following variables:

All servers’ OS: CentOS 6.2 64bit
Web server #1: 192.168.0.221
Web server #2: 192.168.0.222
Load balancer #1: 192.168.0.231
Load balancer #2: 192.168.0.232
Virtual IP: 192.168.0.220

Load Balancer Server

1. All steps should be done on both servers unless specified otherwise. We will install Piranha and other required packages using yum:

$ yum install piranha ipvsadm -y

2. Open the firewall ports listed below (a sample iptables rule set follows the list):

  • Piranha: 3636
  • HTTP: 80
  • Heartbeat: 539
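
A minimal iptables sketch for these ports (assumed to sit on top of your existing rule set; the Pulse heartbeat uses UDP):

$ iptables -I INPUT -p tcp -m multiport --dports 80,3636 -j ACCEPT
$ iptables -I INPUT -p udp --dport 539 -j ACCEPT
$ service iptables save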

3. Start the required services and make sure they auto-start if the server reboots:

$ service piranha-gui start
$ chkconfig piranha-gui on
$ chkconfig pulse on

4. Run the following command to set a password for the piranha user. This will be used when accessing the web-based configuration tool:

$ piranha-passwd

5. Turn on IP forwarding. Open /etc/sysctl.conf and make sure the following line has the value 1:

net.ipv4.ip_forward = 1

And run the following command to activate it:

$ sysctl -p

Load Balancer #1

1. Open the Piranha web-based configuration tool at http://192.168.0.231:3636 and log in as piranha with the password set earlier. We start by configuring Global Settings as below:

2. Then go to the Redundancy tab and enter the secondary server IP. In this case, we will put the load balancer #2 IP as the redundant server in case load balancer #1 goes down:

3. Under the Virtual Servers tab, click Add and enter the required information as below:

4. Now we need to map the virtual IP and virtual HTTP server to the real HTTP servers. Go to Virtual Servers > Real Server and add them to the list as below:

Make sure you activate each real server once it has been added by clicking the (DE)ACTIVATE button.

5. Now copy the configuration file to load balancer #2 as below:

$ scp /etc/sysconfig/ha/lvs.cf 192.168.0.232:/etc/sysconfig/ha/
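
For reference, the settings entered through the GUI are written to /etc/sysconfig/ha/lvs.cf. The sketch below only illustrates what the file might contain for this setup (the layout follows the standard Piranha lvs.cf format and the values are taken from the variables above); always treat the file generated by the GUI as authoritative:

serial_no = 1
primary = 192.168.0.231
service = lvs
backup_active = 1
backup = 192.168.0.232
heartbeat = 1
heartbeat_port = 539
network = direct
virtual http {
    address = 192.168.0.220 eth0:1
    active = 1
    port = 80
    scheduler = lblc
    protocol = tcp
    server web1 {
        address = 192.168.0.221
        active = 1
        weight = 1
    }
    server web2 {
        address = 192.168.0.222
        active = 1
        weight = 1
    }
}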

6. Restart the Pulse service to apply the new configuration:

$ service pulse restart

You can monitor what Pulse is doing by tailing /var/log/messages as below:

$ tail -f /var/log/messages

Load Balancer #2

No need to configure anything on this server. We just need to restart the Pulse service so it picks up the new configuration copied over from load balancer #1.

$ service pulse restart

If you look at /var/log/messages, Pulse on this server will report that it is running in BACKUP mode.

Web Servers

1. Since we are using the direct-routing method, in addition to your Apache installation we also need to install another package called arptables_jf. Here is a quote from the Red Hat documentation:

Using the arptables_jf method, applications may bind to each individual VIP or port that the real server is servicing. For example, the arptables_jf method allows multiple instances of Apache HTTP Server to be running bound explicitly to different VIPs on the system. There are also significant performance advantages to using arptables_jf over the IPTables option.

However, using the arptables_jf method, VIPs can not be configured to start on boot using standard Red Hat Enterprise Linux system configuration tools.

We will install it using yum:

$ yum install arptables_jf -y

2. Configure arptables_jf by executing the following commands:

In web server #1:

$ arptables -A IN -d 192.168.0.220 -j DROP
$ arptables -A OUT -d 192.168.0.220 -j mangle --mangle-ip-s 192.168.0.221

In web server #2:

$ arptables -A IN -d 192.168.0.220 -j DROP
$ arptables -A OUT -d 192.168.0.220 -j mangle --mangle-ip-s 192.168.0.222

3. Save the arptables rules and make sure the service starts on boot:

$ service arptables_jf save
$ chkconfig arptables_jf on

4. Add the virtual IP address on the servers:

$ ip addr add 192.168.0.220 dev eth0

5. Since the IP cannot be brought up during sysinit (boot time), we can add it automatically after sysinit completes. Open /etc/rc.local in a text editor:

$ vim /etc/rc.local

And add the following line:

/sbin/ip addr add 192.168.0.220 dev eth0

Warning: Every time you restart the network service, make sure to run step #4 again to bring up the virtual IP on the real servers.

Done. You can now point your website to the virtual IP, and load balancer #1 will report as below:

$ ipvsadm -L
 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port       Forward Weight  ActiveConn  InActConn
TCP 192.168.0.220:http lblc
-> 192.168.0.221:http       Route   1       0           34
-> 192.168.0.222:http       Route   1       0           19

CentOS: Install Percona XtraDB Cluster

Percona has just released the stable version of XtraDB Cluster, which is similar to the Galera-patched MySQL (MySQL-wsrep). Percona XtraDB Cluster is a high availability and high scalability solution for MySQL users. XtraDB Cluster integrates Percona Server with the Galera library of high availability solutions in a single product package.

Variables as below:

OS: CentOS 6.2 64bit
MySQL root password: r00t123##
Node #1 server IP: 192.168.0.201
Node #2 server IP: 192.168.0.202
Node #3 server IP: 192.168.0.203

Preparing the Servers

In order to achieve good performance and high availability, Percona XtraDB Cluster requires at least 3 hosts running with the same hardware specs (please refer to the advantages and disadvantages of Galera here).

You must make sure that no other MySQL-related packages and libraries are installed on the servers. Make sure the following command returns nothing (you may need to remove any packages that do appear):

$ rpm -qa | grep mysql

This cluster is for InnoDB only! If you are using MyISAM, convert your tables’ storage engine to InnoDB before proceeding.

All servers are assumed to have SELinux disabled and the firewall turned off. Make sure every host has the following entries in /etc/hosts:

192.168.0.201     percona1
192.168.0.202     percona2
192.168.0.203     percona3
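
A quick sanity check (a simple sketch) to confirm that each node resolves and can reach its peers:

$ for h in percona1 percona2 percona3; do ping -c 1 $h > /dev/null && echo "$h reachable"; done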

Installing Percona XtraDB Cluster

1. Download Percona XtraDB Cluster and all required packages (client, shared, galera and server) from http://www.percona.com/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/:

$ mkdir /usr/local/src/percona
$ cd /usr/local/src/percona
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-client-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-galera-2.0-1.109.rhel6.x86_64.rpm
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-server-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-shared-5.5.20-23.4.3748.rhel6.x86_64.rpm

2. This installation also requires Percona XtraBackup, which you can get from here:

$ wget http://www.percona.com/redir/downloads/XtraBackup/XtraBackup-2.0.0/RPM/rhel6/x86_64/percona-xtrabackup-2.0.0-417.rhel6.x86_64.rpm

3. Usually, you will get this error if you try to install any of the packages straight away:

error: Failed dependencies:
libmysqlclient.so.16()(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64
libmysqlclient.so.16(libmysqlclient_16)(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64

There is a trick to overcome this problem: run the following commands in order:

$ yum install mysql-libs -y
$ rpm -e --nodeps mysql-libs

4. Install the packages in the following order (shared > client > galera > xtrabackup > server):

$ rpm -Uhv Percona-XtraDB-Cluster-shared-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ rpm -Uhv Percona-XtraDB-Cluster-client-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ rpm -Uhv Percona-XtraDB-Cluster-galera-2.0-1.109.rhel6.x86_64.rpm
$ rpm -Uhv percona-xtrabackup-2.0.0-417.rhel6.x86_64.rpm
$ rpm -Uhv Percona-XtraDB-Cluster-server-5.5.20-23.4.3748.rhel6.x86_64.rpm

5. Start the mysql server:

$ service mysql start

6. Configure root password:

$ /usr/bin/mysqladmin -u root password 'r00t123##'

We also need to create a /root/.my.cnf file to simplify root logins to the MySQL database. Create a new file in the /root directory using a text editor:

$ vim /root/.my.cnf

And add the following lines:

[client]
user=root
password='r00t123##'

7. Change the permission so only root can read the file:

$ chmod 400 /root/.my.cnf

Configuring Percona XtraDB Cluster

1. A new installation usually does not have a MySQL configuration file under the /etc directory. We will create the configuration file using a text editor:

$ vim /etc/my.cnf

For percona1, add the following lines:

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://
wsrep_slave_threads=8
wsrep_sst_method=rsync
wsrep_cluster_name=percona_cluster
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

For percona2, add the following lines:

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.0.201
wsrep_slave_threads=8
wsrep_sst_method=rsync
wsrep_cluster_name=percona_cluster
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

For percona3, add the following lines:

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.0.201
wsrep_slave_threads=8
wsrep_sst_method=rsync
wsrep_cluster_name=percona_cluster
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

The configuration above tells Percona to use libgalera_smm.so as the wsrep_provider; gcomm:// with no address means this server is the reference node, and the SST (the method used to copy data between nodes) will use rsync. We use percona_cluster as the cluster name, the identifier that lets the other members work together as a group.

2. Restart the Percona database servers in the following order (you can watch the cluster grow with the quick check after the list):

  1. percona1
  2. percona2
  3. percona3
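
As each node comes back up, the cluster size reported on the reference node should increase. A quick check, using the root credentials configured earlier (run it on percona1):

$ mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"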

At this point, the cluster should be working as expected. You can write to and read from any node, and you can also try stopping one node at a time. You will see that your MySQL service remains available on the remaining nodes, and the stopped node automatically rejoins the cluster when it comes back.

To take advantage of this, you can configure HAProxy to front the MySQL instances with load balancing and automatic failover. You may refer to this post for more information on how to set up HAProxy as part of MySQL high availability.
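
As a rough illustration only (the listener address, port and balancing choices below are assumptions, not part of this setup), an HAProxy TCP listener in front of the three nodes could look something like this in /etc/haproxy/haproxy.cfg:

# minimal sketch - adjust the bind address/port to your environment
listen mysql-galera 192.168.0.200:3306
    mode tcp
    balance leastconn
    server percona1 192.168.0.201:3306 check
    server percona2 192.168.0.202:3306 check
    server percona3 192.168.0.203:3306 check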

Notes

– You may need to open the following ports if you want to run Percona XtraDB Cluster in a firewalled environment (see the iptables example after the list):

  • Galera: 4567
  • SST: 4444 (if you are using xtrabackup)
  • SST incremental port: 4445 (if you are using xtrabackup)
  • MySQL: 3306
  • Rsync: 873
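
A minimal iptables sketch for these ports (adjust to your own rule set; the rsync rule is only needed with the rsync SST method):

$ iptables -I INPUT -p tcp -m multiport --dports 3306,4444,4445,4567 -j ACCEPT
$ iptables -I INPUT -p tcp --dport 873 -j ACCEPT
$ service iptables save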

– As advised by the Galera team:

Only use an empty gcomm:// address when you want to create a NEW cluster. Never use it when your intention is to reconnect to an existing one. Therefore, never leave it hardcoded in any configuration files.

Therefore, once the cluster is built, change the gcomm:// URL on server #1 to point to server #2 (gcomm://192.168.0.202) or server #3 (gcomm://192.168.0.203).
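
For example, the relevant line in /etc/my.cnf on percona1 would become:

wsrep_cluster_address=gcomm://192.168.0.202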

– To monitor the cluster status, you can run the following command:

$ mysql -e "show status like 'wsrep_%'"

And monitor the following values:

wsrep_cluster_size = 3
wsrep_local_state_comment = Synced
wsrep_ready = ON