In this post, I’m going to cover how to add a load balancer for the MariaDB Galera Cluster. The major steps are:
- Ensure all Galera nodes are running as a single cluster (all nodes are primary and synced).
- Install HAproxy (you can install it on a separate node or on the application server).
- Configure the clustercheck script. This script performs health checks on each backend server.
- Configure the HAproxy statistics page.
- Point the application to the load balancer.
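Before moving on, it’s worth confirming the first point. A quick sanity check, run on any DB node (the root user and password prompt here are placeholders for whatever privileged MySQL account you use):

```shell
# Sanity check on any Galera node. Expect:
#   wsrep_cluster_status      = Primary
#   wsrep_local_state_comment = Synced
#   wsrep_cluster_size        = your node count (3 in this post)
mysql -u root -p -e "SHOW STATUS WHERE Variable_name IN
  ('wsrep_cluster_status','wsrep_local_state_comment','wsrep_cluster_size');"
```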
Our architecture looks like below:
Configure clustercheck script
* Steps described in this section should be performed on all DB nodes unless specified otherwise.
1. Firstly, we need to configure backend health-check reporting. We will use Percona’s clustercheck script available here. Get the script and put it under /usr/local/bin by running the following commands:
$ git clone https://github.com/olafz/percona-clustercheck
$ cp percona-clustercheck/clustercheck /usr/local/bin
2. The clustercheck script performs a regular check on the Galera node by monitoring several MySQL variables/statuses. It yields a simple HTML output with a corresponding HTTP return code (either 503 – Service Unavailable or 200 – OK). To make it easier for HAproxy to trigger the script and get the latest status of the backend, we have to make it listen on a port. We can use xinetd to turn the script into a service daemon and make it listen on a custom port; in this case, I’m going to use 9200.
Create a new file called /etc/xinetd.d/mysqlchk, and add the following lines:
# default: on
# description: mysqlchk
service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /usr/local/bin/clustercheck
        log_on_failure  += USERID
        only_from       = 0.0.0.0/0
        per_source      = UNLIMITED
}
3. Then, we need to register the mysqlchk service and its port in /etc/services so xinetd can resolve it:
$ echo 'mysqlchk 9200/tcp # MySQL check' >> /etc/services
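After that, restart xinetd so it picks up the new service, and exercise the check over HTTP. The commands below are a sketch (the restart command may differ per distro, and until the MySQL user from the next step exists, the node will report 503):

```shell
# Reload xinetd so it registers the new mysqlchk service on port 9200.
service xinetd restart

# Probe the port; -i includes the HTTP status line in the output.
# A synced node answers "200 OK"; otherwise you get "503 Service Unavailable".
curl -i http://127.0.0.1:9200/
```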
4. By default, the script uses a MySQL user called “clustercheckuser” with password “clustercheckpassword!”. We need to ensure this MySQL user exists with the corresponding password before the script is able to perform health checks. Run the following statements on one of the DB nodes (Galera will replicate them to the other nodes):
mysql> GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
mysql> FLUSH PRIVILEGES;
You can change these values inside the clustercheck script on lines 32 and 33. In this post, I’m going to use the default username and password.
5. Verify if the script returns a correct value:
$ /usr/local/bin/clustercheck > /dev/null
$ echo $?
0
If the DB node is synced, you should get 0; otherwise, the output should be 1. The backend health check is now configured.
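The exit code is easy to script against. Here is a minimal sketch (the `node_state` helper is my own, not part of clustercheck) that maps the exit code to a readable label:

```shell
# Map a health-check command's exit code to a human-readable label.
# Usage: node_state /usr/local/bin/clustercheck
node_state() {
    if "$@" > /dev/null 2>&1; then
        echo "synced"        # exit code 0 -> node is available
    else
        echo "unavailable"   # non-zero    -> node should not receive traffic
    fi
}

# Demonstration with stub commands:
node_state true    # prints "synced"
node_state false   # prints "unavailable"
```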
Install HAproxy
* Steps described in this section should be performed on the HAproxy node or the application server.
1. The easy way is to use a package manager (yum/apt). However, it’s highly recommended to use the latest version available on the HAproxy site. Either way, I’ll show the installation steps here.
a) If you choose to install HAproxy via package manager:
$ yum install haproxy          # Redhat/CentOS
$ sudo apt-get install haproxy # Debian/Ubuntu
b) Via source from HAproxy download site:
$ yum install php-curl gcc make      # Redhat/CentOS
$ apt-get install php5-curl gcc make # Debian/Ubuntu
$ wget http://www.haproxy.org/download/1.5/src/
$ tar xvzf
$ cd
$ make TARGET=linux26
$ cp -f haproxy /usr/sbin/haproxy
Installing from source (option b) comes with no init script, so you have to create one manually or simply start the process from the command line (which is a non-standard way). I’m not going to cover that unconventional approach in this post.
Configure HAproxy
Now that we have HAproxy installed, we need to configure it to listen on port 3307 for the MySQL service and to perform backend health checks. In /etc/haproxy/haproxy.cfg, ensure the following lines exist:
global
        pidfile /var/run/haproxy.pid
        daemon
        user haproxy
        group haproxy
        stats socket /var/run/haproxy.socket user haproxy group haproxy mode 600 level admin
        maxconn 8192
        spread-checks 3
        quiet

defaults
        mode tcp
        option dontlognull
        option tcp-smart-accept
        option tcp-smart-connect
        retries 3
        option redispatch
        maxconn 8192
        timeout check 3500ms
        timeout queue 3500ms
        timeout connect 3500ms
        timeout client 10800s
        timeout server 10800s

userlist STATSUSERS
        group admin users admin
        user admin insecure-password admin
        user stats insecure-password yourpassword

listen admin_page 0.0.0.0:9600
        mode http
        stats enable
        stats refresh 60s
        stats uri /
        acl AuthOkay_ReadOnly http_auth(STATSUSERS)
        acl AuthOkay_Admin http_auth_group(STATSUSERS) admin
        stats http-request auth realm admin_page unless AuthOkay_ReadOnly

listen mysql_3307
        bind *:3307
        mode tcp
        timeout client 10800s
        timeout server 10800s
        balance leastconn
        option httpchk
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server db1 10.0.0.187:3306 check
        server db2 10.0.0.188:3306 check
        server db3 10.0.0.189:3306 check
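Before starting the service, you can ask HAproxy itself to validate the file; this catches typos without touching the running process:

```shell
# -c runs in check mode: parse the configuration and report errors,
# without actually starting the proxy. -f points at the config file.
haproxy -c -f /etc/haproxy/haproxy.cfg
```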
Now enable the service on boot and fire it up:
RHEL/CentOS 6:
$ chkconfig haproxy on  # RHEL6
$ service haproxy start # RHEL6
Ubuntu 14.04 and lower, Debian 7 and lower:
$ update-rc.d haproxy defaults
$ sudo service haproxy start
RHEL/CentOS 7, Debian 8, Ubuntu 15.04:
$ systemctl enable haproxy
$ systemctl start haproxy
Verify that HAproxy is listening on the correct ports:
$ sudo netstat -tulpn | grep haproxy
tcp        0      0 0.0.0.0:9600      0.0.0.0:*      LISTEN      370/haproxy
tcp        0      0 0.0.0.0:3307      0.0.0.0:*      LISTEN      370/haproxy
3307 is the MySQL load-balanced port, while 9600 serves the HAproxy statistics page. You can check the status by going to http://haproxy_ip_address:9600/ and logging in with username ‘admin’ and password ‘yourpassword’, as configured inside haproxy.cfg. You should see something like below:
Now, you can point the application or MySQL client to the HAproxy host on port 3307 for load-balanced MySQL connections with automatic failover.
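A quick way to confirm the setup end to end is to ask the cluster which node served the connection. The host and credentials below are placeholders for your environment:

```shell
# Connect through HAproxy on the load-balanced port; wsrep_node_name
# reveals which Galera backend actually handled the session.
mysql -h haproxy_ip_address -P 3307 -u myapp -p \
      -e "SHOW VARIABLES LIKE 'wsrep_node_name';"
```

If you then stop MySQL on that node, HAproxy should mark it down after two failed checks (the `fall 2` setting in haproxy.cfg) and subsequent connections will land on the remaining backends.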