High Availability: MySQL Cluster with Galera + MySQL Proxy

In this tutorial, I am going to show you how to achieve higher MySQL uptime with some help from MySQL Proxy, a Galera cluster, and a virtual IP managed by keepalived.

Actually, the process is similar to my previous post, with some added steps to configure MySQL Proxy and the virtual IP. The architecture is illustrated in the image below:

Variables used in this tutorial:

OS: CentOS 6.0 64bit
MySQL server1: 192.168.0.171
MySQL server2: 192.168.0.172
MySQL proxy server1: 192.168.0.151
MySQL proxy server2: 192.168.0.152
Virtual IP to be shared among MySQL proxies: 192.168.0.170
MySQL root password: q1w2e3!@#
Cluster root username: clusteroot
Cluster root password: q1w2e3!@#
Galera SST user: sst
Galera SST password: sstpass123

Server hostnames are important in a cluster. The following entries have been added to the /etc/hosts file on all servers. All configurations shown below assume that the firewall is turned OFF:

192.168.0.151 myproxy1.cluster.local myproxy1
192.168.0.152 myproxy2.cluster.local myproxy2
192.168.0.171 galera1.cluster.local galera1
192.168.0.172 galera2.cluster.local galera2
127.0.0.1     localhost.localdomain localhost
::1           localhost6 localhost6.localdomain

MySQL Cluster with Galera

1. The following steps are similar to my previous post, but I am rewriting them here for this case with the latest versions of Galera and MySQL. No MySQL server is installed on these hosts at the moment. On both galera1 and galera2, download the latest Galera library, MySQL with wsrep, MySQL client and MySQL shared packages from the MySQL download page:

$ mkdir -p /usr/local/src/galera
$ cd /usr/local/src/galera
$ wget https://launchpad.net/galera/2.x/23.2.0/+download/galera-23.2.0-1.rhel5.x86_64.rpm
$ wget https://launchpad.net/codership-mysql/5.5/5.5.20-23.4/+download/MySQL-server-5.5.20_wsrep_23.4-1.rhel5.x86_64.rpm
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/MySQL-client-5.5.20-1.el6.x86_64.rpm/from/http://ftp.jaist.ac.jp/pub/mysql/
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/MySQL-shared-5.5.20-1.el6.x86_64.rpm/from/http://ftp.jaist.ac.jp/pub/mysql/

2. Remove the conflicting mysql-libs package and install the packages in the following order:

$ yum remove mysql-libs -y
$ rpm -Uhv galera-23.2.0-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-client-5.5.20-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-shared-5.5.20-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-server-5.5.20_wsrep_23.4-1.rhel5.x86_64.rpm

3. Start the MySQL service and make sure it starts on boot:

$ chkconfig mysql on
$ service mysql start

4. Setup the MySQL root password:

$ /usr/bin/mysqladmin -u root password 'q1w2e3!@#'

5. Set up the MySQL client login for root. Create a new text file /root/.my.cnf with a text editor and add the following lines:

[client]
user=root
password='q1w2e3!@#'

6. Change the permissions so the file is not readable by others:

$ chmod 600 /root/.my.cnf

7. Log in to the MySQL server by running the “mysql” command and execute the following statements. We also need to create another root-level user called clusteroot, with the password stated in the variables above:

mysql> DELETE FROM mysql.user WHERE user='';
mysql> GRANT USAGE ON *.* TO root@'%' IDENTIFIED BY 'q1w2e3!@#';
mysql> UPDATE mysql.user SET Password=PASSWORD('q1w2e3!@#') WHERE User='root';
mysql> GRANT USAGE ON *.* to sst@'%' IDENTIFIED BY 'sstpass123';
mysql> GRANT ALL PRIVILEGES on *.* to sst@'%';
mysql> GRANT USAGE on *.* to clusteroot@'%' IDENTIFIED BY 'q1w2e3!@#';
mysql> GRANT ALL PRIVILEGES on *.* to clusteroot@'%';
mysql> FLUSH PRIVILEGES;
mysql> quit

8. Create the configuration directory, copy the example configuration into it, and create a MySQL configuration file containing the include directive:

$ mkdir -p /etc/mysql/conf.d/
$ cp /usr/share/mysql/wsrep.cnf /etc/mysql/conf.d/
$ touch /etc/my.cnf
$ echo '!includedir /etc/mysql/conf.d/' >> /etc/my.cnf
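The four commands above can also be wrapped in a small idempotent helper, so re-running it does not append the include line twice. This is only a sketch; setup_includedir is a hypothetical name and the paths are passed in as arguments:

```shell
# Recreate step 8 idempotently: make the conf.d directory, ensure
# the my.cnf file exists, and append the !includedir line only once.
setup_includedir() {
  conf="$1"   # e.g. /etc/my.cnf
  dir="$2"    # e.g. /etc/mysql/conf.d/
  mkdir -p "$dir"
  touch "$conf"
  grep -qx "!includedir $dir" "$conf" || echo "!includedir $dir" >> "$conf"
}
```

Running `setup_includedir /etc/my.cnf /etc/mysql/conf.d/` a second time leaves a single include line in place.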

9. Configure MySQL wsrep with the Galera library. Open /etc/mysql/conf.d/wsrep.cnf in a text editor, then find and edit the following lines:

For galera1.cluster.local:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://"
wsrep_sst_auth=sst:sstpass123

For galera2.cluster.local:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.171:4567"
wsrep_sst_auth=sst:sstpass123
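Since only three lines change, the edit can also be scripted. The sketch below assumes the file layout from step 8; apply_wsrep is a hypothetical helper that patches an existing wsrep.cnf in place:

```shell
# Patch wsrep.cnf in place; the file path and cluster address are
# arguments, while the provider path and SST credentials follow
# the variables used in this tutorial.
apply_wsrep() {
  cnf="$1"; cluster_addr="$2"
  sed -i \
    -e 's|^wsrep_provider=.*|wsrep_provider=/usr/lib64/galera/libgalera_smm.so|' \
    -e "s|^wsrep_cluster_address=.*|wsrep_cluster_address=\"${cluster_addr}\"|" \
    -e 's|^wsrep_sst_auth=.*|wsrep_sst_auth=sst:sstpass123|' \
    "$cnf"
}

# On galera1: apply_wsrep /etc/mysql/conf.d/wsrep.cnf "gcomm://"
# On galera2: apply_wsrep /etc/mysql/conf.d/wsrep.cnf "gcomm://192.168.0.171:4567"
```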

10. Restart the MySQL service on both servers:

$ service mysql restart

11. Check whether Galera replication is running fine:

$ mysql -e "show status like 'wsrep%'"

If the cluster is working, you should see the following value on both servers:

wsrep_ready = ON
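Besides wsrep_ready = ON, wsrep_cluster_size should report 2 once both nodes have joined. To script such checks (for example in a monitoring job), the status output can be parsed with a small helper; wsrep_status is a hypothetical name, and the mysql invocation assumes the /root/.my.cnf credentials from step 5:

```shell
# wsrep_status: extract one variable from the tab-separated
# "SHOW STATUS" output, as produced by mysql -NB.
wsrep_status() { awk -v k="$1" '$1 == k { print $2 }'; }

# Example against a live node (assumed environment):
#   mysql -NBe "SHOW STATUS LIKE 'wsrep%'" | wsrep_status wsrep_cluster_size
```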

MySQL proxy servers

1. On both myproxy1 and myproxy2, we start by setting up the virtual IP so that the MySQL Proxy address is always available. Let's download and install keepalived. The OpenSSL headers and the popt library are required, so install them first using yum:

$ yum install -y openssl openssl-devel popt*
$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.2.tar.gz
$ tar -xzf keepalived-1.2.2.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install

2. Since the virtual IP is shared between these two servers, we need to tell the kernel that a non-local IP will be bound to the MySQL Proxy service later. Add the following line to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Run the following command to apply the change:

$ sysctl -p

3. By default, the keepalived configuration file is located at /usr/local/etc/keepalived/keepalived.conf. We will make things easier by symlinking it into the /etc directory, and empty out the example configuration:

$ ln -s /usr/local/etc/keepalived/keepalived.conf /etc/keepalived.conf
$ cat /dev/null > /etc/keepalived.conf

4. Download MySQL Proxy from http://dev.mysql.com/downloads/mysql-proxy/. We will set up MySQL Proxy under the /usr/local directory:

$ cd /usr/local
$ wget http://mysql.oss.eznetsols.org/Downloads/MySQL-Proxy/mysql-proxy-0.8.2-linux-rhel5-x86-64bit.tar.gz
$ tar -xzf mysql-proxy-0.8.2-linux-rhel5-x86-64bit.tar.gz
$ mv mysql-proxy-0.8.2-linux-rhel5-x86-64bit mysql-proxy
$ rm -Rf mysql-proxy-0.8.2-linux-rhel5-x86-64bit.tar.gz

5. The keepalived configuration differs between the two servers.

For myproxy1, add the following lines to /etc/keepalived.conf:

vrrp_script chk_mysqlproxy {
        script "killall -0 mysql-proxy" # verify a mysql-proxy process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state MASTER
        virtual_router_id 51		# must be the same on both routers
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.170		# the virtual IP
        }
        track_script {
            chk_mysqlproxy
        }
}

 

For myproxy2, add the following lines to /etc/keepalived.conf (note the lower priority):

vrrp_script chk_mysqlproxy {
        script "killall -0 mysql-proxy" # verify a mysql-proxy process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state BACKUP			# myproxy2 starts as the backup
        virtual_router_id 51		# must be the same on both routers
        priority 100                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.170		# the virtual IP
        }
        track_script {
            chk_mysqlproxy
        }
}

6. Run the following command to start MySQL Proxy on both servers (-D daemonizes, -P sets the listen address, and each -b adds a backend MySQL server):

$ /usr/local/mysql-proxy/bin/mysql-proxy -D -P 0.0.0.0:3306 -b 192.168.0.171:3306 -b 192.168.0.172:3306
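Since nothing stops a second copy from being launched by accident (for example from a boot script), the command can be wrapped in a small guard. This is only a sketch; proxy_running and start_proxy are hypothetical helper names:

```shell
# proxy_running: report whether a process with the given name
# (default mysql-proxy) exists; start_proxy only daemonizes the
# proxy when no instance is running yet.
proxy_running() { pgrep -x "${1:-mysql-proxy}" >/dev/null; }

start_proxy() {
  proxy_running && return 0
  /usr/local/mysql-proxy/bin/mysql-proxy -D -P 0.0.0.0:3306 \
    -b 192.168.0.171:3306 -b 192.168.0.172:3306
}
```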

7. Now let's start keepalived and test whether the IP failover is working. Run the following command on both servers:

$ keepalived -f /etc/keepalived.conf

Ping IP 192.168.0.170 from another host. Then, on myproxy1, stop the network:

$ service network stop

You will notice that the IP goes down for about 2 seconds and then comes up again: myproxy2 has taken over 192.168.0.170 from myproxy1. If you bring the network back up on myproxy1, you will notice the same thing happen again, because myproxy1 takes the IP back from myproxy2, as configured in /etc/keepalived.conf. You can also try killing the mysql-proxy process, and you will see the virtual IP being taken over by myproxy2 again.

Your MySQL service is now running in high-availability mode. MySQL clients just need to use 192.168.0.170 as the database server host.
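With that, a client host only needs the virtual IP. As an illustration, a [client] section in a my.cnf on a hypothetical application server could look like this:

```
[client]
host=192.168.0.170
user=clusteroot
password='q1w2e3!@#'
```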

7 thoughts on “High Availability: MySQL Cluster with Galera + MySQL Proxy”

  1. Hello!

    Thank you for this howto! Now I have a question: if my servers are in different locations around the world, how do I configure it then? Do I connect via VPN, or how can I do it?

    1. Do you mean that you want the cluster to run on several servers in different geographical locations? This is possible, but not recommended except for disaster recovery (i.e. a backup of the live database). For a live database cluster, you are better off setting everything up in one internal network to get the advantages of the cluster.

  2. Hello,

    I followed your tutorial, specifically for the load balancing. I already had a working MySQL Galera cluster (based on 3 servers).

    At the moment I use only a single load-balancer server, configured accordingly, and it works fine.

    I brought up the virtual IP 10.10.10.239 successfully.

    But my problem is that I am not able to access the database servers from a remote system.

    For example, I log in to a different server, and ping 10.10.10.239 works.

    Then I type:

    mysql -u root -p -h 10.10.10.239

    and it gives me this error:

    ERROR 2003 (HY000): Can’t connect to MySQL server on ‘10.10.10.239’ (111)

    I commented out the bind-address and skip-external-locking parameters on all 3 servers in my cluster, but it still did not work.

    Any ideas what I am missing?

    1. Error 111 is “connection refused”. Have you opened mysql-proxy’s port 3306? Try turning off the firewall on all servers and start from there.

  3. I don’t understand where the load-balancing functionality is in this setup. All I see is a failover setup where Proxy 2 takes over from Proxy 1 if it fails.

    There doesn’t seem to be any mechanism for sharing load between the two servers.

    Am I missing something here?!
