CentOS: Install Percona XtraDB Cluster

Percona has just released the stable version of XtraDB Cluster, which is similar to the Galera InnoDB patches for MySQL. Percona XtraDB Cluster is a high availability and high scalability solution for MySQL users. It integrates Percona Server with the Galera library of high availability solutions in a single product package.

The variables used in this guide are as below:

OS: CentOS 6.2 64bit
MySQL root password: r00t123##
Node #1 server IP: 192.168.0.201
Node #2 server IP: 192.168.0.202
Node #3 server IP: 192.168.0.203

Preparing the Servers

In order to achieve good performance and high availability, Percona XtraDB Cluster requires at least 3 hosts with the same hardware specs (please refer to the advantages and disadvantages of Galera here).

You must make sure that there are no other MySQL-related packages or libraries installed on this server. Make sure the following command returns nothing (you may need to remove packages if any results appear):

$ rpm -qa | grep mysql

This cluster is for InnoDB only! If you are using MyISAM, convert your database tables’ storage engine to InnoDB before proceeding.
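
A minimal sketch of the conversion, assuming a hypothetical table mytable in database mydb on the MySQL server that currently holds your data (the first command lists any remaining MyISAM tables):

$ mysql -e "SELECT table_schema, table_name FROM information_schema.tables WHERE engine = 'MyISAM' AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');"
$ mysql -e "ALTER TABLE mydb.mytable ENGINE=InnoDB;"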

All servers are assumed to have SELinux disabled and the firewall turned off. Make sure every host has the following values in /etc/hosts:

192.168.0.201     percona1
192.168.0.202     percona2
192.168.0.203     percona3

Installing Percona XtraDB Cluster

1. Download Percona XtraDB Cluster and all required packages (client, shared, galera and server) from http://www.percona.com/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/:

$ mkdir /usr/local/src/percona
$ cd /usr/local/src/percona
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-client-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-galera-2.0-1.109.rhel6.x86_64.rpm
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-server-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ wget http://www.percona.com/redir/downloads/Percona-XtraDB-Cluster/5.5.20-23.4/RPM/rhel6/x86_64/Percona-XtraDB-Cluster-shared-5.5.20-23.4.3748.rhel6.x86_64.rpm

2. This installation also requires Percona XtraBackup, which you can get from here:

$ wget http://www.percona.com/redir/downloads/XtraBackup/XtraBackup-2.0.0/RPM/rhel6/x86_64/percona-xtrabackup-2.0.0-417.rhel6.x86_64.rpm

3. Usually, you will get the following error if you try to install any of the packages straight away:

error: Failed dependencies:
libmysqlclient.so.16()(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64
libmysqlclient.so.16(libmysqlclient_16)(64bit) is needed by (installed) postfix-2:2.6.6-2.2.el6_1.x86_64

There is a trick to overcome this problem: make sure mysql-libs is installed, then remove it without touching its dependents. Postfix stays installed while the conflicting library package gets out of the way of the Percona packages:

$ yum install mysql-libs -y
$ rpm -e --nodeps mysql-libs

4. Install the packages in the following order (shared > client > galera > xtrabackup > server):

$ rpm -Uhv Percona-XtraDB-Cluster-shared-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ rpm -Uhv Percona-XtraDB-Cluster-client-5.5.20-23.4.3748.rhel6.x86_64.rpm
$ rpm -Uhv Percona-XtraDB-Cluster-galera-2.0-1.109.rhel6.x86_64.rpm
$ rpm -Uhv percona-xtrabackup-2.0.0-417.rhel6.x86_64.rpm
$ rpm -Uhv Percona-XtraDB-Cluster-server-5.5.20-23.4.3748.rhel6.x86_64.rpm
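
All five packages should now show up in the RPM database:

$ rpm -qa | grep -i percona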

5. Start the MySQL server:

$ service mysql start

6. Configure root password:

$ /usr/bin/mysqladmin -u root password 'r00t123##'

We also need to create a /root/.my.cnf file to simplify root login to the MySQL database. Create a new file in the /root directory using a text editor:

$ vim /root/.my.cnf

And add the following lines:

[client]
user=root
password='r00t123##'

7. Change the permission so only root can read the file:

$ chmod 400 /root/.my.cnf
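
Root should now be able to log in without being prompted for the password. A quick check:

$ mysql -e "SELECT VERSION();"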

Configuring Percona XtraDB Cluster

1. A new installation usually does not have a MySQL configuration file under the /etc directory. We will create the configuration file using a text editor:

$ vim /etc/my.cnf

For percona1, add the following lines:

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://
wsrep_slave_threads=8
wsrep_sst_method=rsync
wsrep_cluster_name=percona_cluster
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

For percona2, add the following lines:

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.0.201
wsrep_slave_threads=8
wsrep_sst_method=rsync
wsrep_cluster_name=percona_cluster
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

For percona3, add the following lines:

[mysqld]
wsrep_provider=/usr/lib64/libgalera_smm.so
wsrep_cluster_address=gcomm://192.168.0.201
wsrep_slave_threads=8
wsrep_sst_method=rsync
wsrep_cluster_name=percona_cluster
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1

The configuration above tells Percona to use libgalera_smm.so as the wsrep_provider. The empty gcomm:// address means percona1 will be the reference node, and SST (State Snapshot Transfer, the way the nodes copy data to each other) will use the rsync method. We use percona_cluster as the cluster name, the identifier that lets the members work together as a group.

2. Restart the Percona database servers in the following order, using the command shown below on each node:

  1. percona1
  2. percona2
  3. percona3
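
The restart command is the same on every node:

$ service mysql restart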

At this point, the cluster should be working as expected. You can read from and write to any of the nodes, and you can try stopping one node at a time. You will see that your MySQL service remains available on the other nodes, and the stopped node automatically rejoins the cluster when it comes back up.
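
A quick way to confirm replication is working (cluster_test is a hypothetical database name; run each command locally on the node indicated in the comment):

$ mysql -e "CREATE DATABASE cluster_test;"            # on percona1
$ mysql -e "SHOW DATABASES LIKE 'cluster_test';"      # on percona2, should return cluster_test
$ mysql -e "DROP DATABASE cluster_test;"              # cleanup, on any node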

To take advantage of this, you can configure HAProxy to serve the MySQL instances with load balancing and automatic failover. You may refer to this post for more information on how to set up HAProxy as part of MySQL high availability.
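
A minimal sketch of the relevant haproxy.cfg section, assuming HAProxy runs on a separate host; the listener name and port 33306 are arbitrary choices, not something this guide has configured:

listen mysql-cluster
    bind 0.0.0.0:33306
    mode tcp
    balance roundrobin
    server percona1 192.168.0.201:3306 check
    server percona2 192.168.0.202:3306 check
    server percona3 192.168.0.203:3306 check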

Notes

– You may need to open the following ports if you want to run Percona XtraDB Cluster in a firewalled environment (see the iptables sketch after the list):

  • Galera: 4567
  • SST: 4444 (if you are using xtrabackup)
  • SST incremental port: 4445 (if you are using xtrabackup)
  • MySQL: 3306
  • Rsync: 873
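
A minimal iptables sketch to open these ports, assuming the rule can simply be inserted at the top of the INPUT chain:

$ iptables -I INPUT -p tcp -m multiport --dports 873,3306,4444,4445,4567 -j ACCEPT
$ service iptables save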

– As advised by the Galera team:

Only use an empty gcomm:// address when you want to create a NEW cluster. Never use it when your intention is to reconnect to an existing one. Therefore, never leave it hardcoded in any configuration files.

Therefore, once the cluster is built, change the gcomm:// URL on percona1 to point to node 2 (gcomm://192.168.0.202) or node 3 (gcomm://192.168.0.203).
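
For example, edit /etc/my.cnf on percona1 so the address line reads:

wsrep_cluster_address=gcomm://192.168.0.202

Then restart MySQL on percona1 for the change to take effect.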

– To monitor the cluster status, you can run the following command:

$ mysql -e "show status like 'wsrep_%'"

And monitor the following values:

wsrep_cluster_size = 3
wsrep_local_state_comment = Synced
wsrep_ready = ON
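
To watch these values continuously, something like the following works (the 5-second interval is an arbitrary choice):

$ watch -n 5 "mysql -e \"SHOW STATUS LIKE 'wsrep%'\""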

Comments

  1. Interesting post.

    Just one query – there is no mention above of setting the cluster address for the percona1 node AFTER the cluster is set up and working. If it remains as “gcomm://”, when percona1 is restarted it will create a new cluster with 1 member. Surely percona1 should be set to the IP address of percona2 or percona3 as the last step?

    Regards,

    Alun.

    1. Only one reference node can exist in a cluster. When percona1 is down, you need to manually change the address on percona2 so it becomes the reference node:
      mysql> SET GLOBAL wsrep_cluster_address='gcomm://';

      In this case, try to use the server with the highest uptime as percona1. Restarting percona1 will require you to restart percona2 and percona3 to make sure all nodes are syncing together.

      IMHO, there is no AFTER-cluster-setup step as you mentioned.

  2. The cluster should be HA and survive any node restart – but surely if node1 restarts, it requires manual intervention to fix the cluster?

    I had thought the reference node (“gcomm://”) is only required at the beginning to bootstrap the other nodes. I will continue to look into it!

    Thanks,

  3. I did some testing today… set up the cluster as per your guide above.

    Once the cluster was operational with 3 nodes active – I then changed node1 gcomm:// to gcomm://node2-ip and restarted percona/mysql on node1. The cluster continued to be healthy with 3 nodes and data replicating nicely.

    Next I powered off node1, made some DB updates via node2 and node3 – which replicated to each other OK.

    Powered node1 back on – all data was replicated successfully from the node2 donor, and the cluster remained healthy with 3 nodes.

    The cluster can now survive any of the nodes being offline.

    Cheers.

    1. Nice finding, Alun. So that means your XtraDB cluster is running like this:
      percona1 <-> percona2 <- percona3

      Instead of my setup:
      percona1 <- percona2 <- percona3

      Am I right?

      1. For testing, yes… perhaps I will move to a circular formation as follows:

        percona1 > percona2
        percona2 > percona3
        percona3 > percona1

        1. Use wsrep_urls instead of wsrep_cluster_address (but this goes in the [mysqld_safe] section of my.cnf) – this way, you specify a list of servers, and if all of them fail, then it forms a new cluster:
          wsrep_urls=gcomm://192.168.0.29:4567,gcomm://192.168.0.30:4567,gcomm://192.168.0.31:4567,gcomm://

          You can bring all nodes down and start them at almost the same time.
          Cheers,
          Andrija
