Easiest Way to Install A Complete MySQL Galera Cluster

Some of us might not have much time to install and configure the several servers that make up a database cluster. As you can see from my previous posts on installing MySQL Galera and Percona XtraDB Cluster, the installation and configuration usually take one to two hours.

Severalnines provides a great tool to install, configure and monitor a MySQL cluster. Basically, you just go to their website, follow the configuration wizard, and it generates an installer script for you. All you need is several servers with basic networking in place. A minimum of four servers is required: three will run the MySQL Galera cluster, and the fourth will run ClusterControl, which does all the automation work for you, including installation, configuration and monitoring.

I will use the latest CentOS 6.3 64-bit in this post, even though Severalnines does not mention whether it is supported yet. I have already tested it, and it works as expected. The architecture I used is three Galera nodes plus one ClusterControl host, as described in the following sections.

Preparing the Servers

Make sure /etc/hosts on all servers contains the same entries as below:

192.168.0.221     galera1 galera1.local
192.168.0.222     galera2 galera2.local
192.168.0.223     galera3 galera3.local
192.168.0.230     clustercontrol clustercontrol.local
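
You can quickly verify that each host resolves these names, for example with getent:

$ getent hosts galera1 galera2 galera3 clustercontrol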

Disable SELinux completely on all servers:

$ sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
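
The change above only takes effect after a reboot. To also switch SELinux to permissive mode for the current session without rebooting:

$ setenforce 0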

Generate Deployment Script

1. Go to the Severalnines Galera Configurator website here: http://www.severalnines.com/galera-configurator/

2. Choose “Codership’s release (based on MySQL 5.5)” and click Next. Start configuring your servers under General Settings as below:

Cloud Provider                   : none/on-premise
Operating System                 : RHEL6 - Redhat 6.2/Fedora/Centos 6.2/OLN 6.2
Platform                         : Linux 64-bit (x86_64)
Number of Galera Servers         : 3+1
MySQL PortNumber                 : 3306
Galera PortNumber                : 4567
Galera SST PortNumber            : 4444 
SSH PortNumber                   : 22
OS User                          : root
MySQL Server password (root user): BTrHs87cE
CMON DB password (cmon user)     : VfjLs12yz
Firewall (iptables)              : disable

3. Click Next. Under the Configure Storage tab, enter the respective information, for example:

System Memory (MySQL Servers): 512MB
WAN                          : no
Skip DNS Resolve             : yes
Database Size                : <8GB
MySQL Usage                  : Medium write/high read
Number of cores              : 2
innodb_buffer_pool_size      : (auto configured)
innodb_file_per_table        : Checked

4. Click Next. Under the Configure Directories tab, enter the respective information.

5. Click Next. Under the Review Config tab, review the information you have entered.

6. On the final page, take note of the deployment script URL; we will use it to download the script directly onto our server.

Deploy the Cluster

1. Log in via console or SSH to the ClusterControl server (192.168.0.230) and download the deployment script:

$ cd /usr/local/src
$ wget http://www.severalnines.com/galera-configurator/tmp/r18g0ggko1svcaje8ude7gjmm4/s9s-galera-2.1.0-rpm.tar.gz
$ tar -xzf s9s-galera-2.1.0-rpm.tar.gz

2. Navigate to the installer and start the deployment process:

$ cd s9s-galera-2.1.0-rpm/mysql/scripts
$ ./deploy.sh 2>&1 |tee cc.log

3. Answer a few questions, for example as below:

Redhat/Centos/Fedora detected - do you want to set SELinux to Permissive mode.
** IF YOU HAVE SELinux == disabled ON ALL HOSTS PRESS 'n' AND IGNORE THE WARNING. **
(you can set SELinux to Enforcing again later if you want) - 'y' is recommended (y/n): n
 
Can you SSH from this host to all other hosts without password?
Choosing 'n' will allow you to setup shared keys. (y/n): n
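
If you answer 'n' here, the script helps you set up the shared keys. You can also prepare passwordless SSH yourself beforehand; a minimal sketch, run as root on the ClusterControl host:

$ ssh-keygen -t rsa
$ for host in galera1 galera2 galera3; do ssh-copy-id root@$host; done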

Your cluster will be ready within a few minutes. Once completed, the script will show you the ClusterControl URL. This is the monitoring tool that integrates seamlessly with the MySQL Galera cluster deployment.
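
You can verify from any database node that all members have joined the cluster, for example:

$ mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
# should report wsrep_cluster_size = 3 once all three nodes are in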

ClusterControl

1. Before we connect to ClusterControl, we need to install JpGraph so that graphs display correctly, as described on the support page here: http://support.severalnines.com/entries/20978841-jpgraph-installation. Download JpGraph from http://jpgraph.net/download/, copy it to the ClusterControl server, and install it:

$ tar xvfz jpgraph-3.5.0b1.tar.gz -C /var/www/html/cmon
$ cd /var/www/html/cmon
$ ln -s jpgraph-3.5.0b1 jpgraph
$ chown apache:apache -R jpgraph*

2. Start the crond service so that the RRDtool graphs work correctly:

$ chkconfig crond on
$ service crond start

3. Let’s connect to ClusterControl. Open a web browser and go to http://192.168.0.230/cmon/setup.php. On this page, we are required to enter the CMON password.

4. Click “Test the connection and save config data“. You should then be redirected to the Dashboard page.

Done! Your MySQL Galera cluster is now running, complete with monitoring, configuration and alerting. In my case, “Host Alarms” had already detected that a server’s RAM usage was critical (96% usage), and it even gave advice on how to solve the problem. Isn’t that helpful?

Considering how easy it is to deploy this cluster at no added cost, you now have one of the most complete MySQL Galera cluster suites!

23 thoughts on “Easiest Way to Install A Complete MySQL Galera Cluster”

  1. Wow! I will definitely be using the ClusterControl interface for my current cluster and the wizard for future deployments!

  2. Excellent!
    Now I have ClusterControl with 3 nodes running, but I need to increase the MySQL ‘max-connections’ setting.
    Can I do that without a license?

    1. You need to do this manually in each MySQL Galera node’s /etc/my.cnf file. Save and restart all the MySQL servers to apply the changes. If you have the license, you can do this in ClusterControl with a single click.
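
      For example, a minimal sketch, assuming the MySQL service name used by this deployment is mysql (restart one node at a time so the cluster stays up):

      $ vi /etc/my.cnf          # under [mysqld], set e.g. max_connections=500
      $ service mysql restart   # repeat on each node, one at a time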

  3. Hi,

    I’m new to this and I have a question: do I have to install MySQL Server on each node, or does the script do this? I don’t understand it well.

    Thanks for your help.

    1. Nope. The script will install it for you; even the configuration is handled by the script. You just need to pre-configure everything on their website first, then run the installation script. It’s really easy and free. Worth a try!

  4. Any suggestions on the resources each node should have (CPU, RAM, storage)? I want to use the cluster to replace MySQL databases currently running on individual servers (4 total) hosting around 20 websites, including WordPress, streaming media sites and static websites.

    1. 8 cores, 16 GB RAM and 500 GB x 2 (RAID 1) should be more than enough to handle 20 databases, but it also depends on concurrent access to the sites. You may use a simple, low-end server for ClusterControl. It is highly recommended to build the system on top of virtualization or cloud infrastructure to take advantage of availability and scalability at lower cost.

  5. Nice tutorial!

    Oh, I have one question: is it possible to do this with just 2 cluster nodes?
    (by inserting a dummy IP for the 3rd box at step 3?)

    1. Hi Ketam, as advised by the Severalnines team:

      You could try to remove the last dummy IP from this file and then deploy:
      s9s-galera-2.1.0/mysql/scripts/install/.s9s/hostnames

      But a 3rd Galera node (or a Galera arbitrator) is still needed to avoid network partitioning/split brain.
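
      For reference, running the Galera arbitrator daemon (garbd) on a third machine would look roughly like this; the cluster name below is an assumption, so check wsrep_cluster_name in your my.cnf:

      $ garbd --address gcomm://192.168.0.221:4567,192.168.0.222:4567 \
              --group my_wsrep_cluster --daemon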

  6. Sorry for the previous double post; it was due to a network problem.
    Yes, it’s possible to do it with 1 management node and 2 cluster nodes. I figured it out by manually editing the third “dummy” IP in every file from s9s-galera-2.1.0.

    Thanks, nice tutorial!

    Oh, and last but not least: is ClusterControl publicly available?

    1. Yup. It is publicly available, and it’s free with limited functionality (monitoring & simple management tasks only). You need to contact Severalnines for a license so you can fully control the cluster using ClusterControl, without even needing to access each server.

  7. I’m using a port other than 22 for SSH, and I tried to change it in the config, but no luck. Any suggestions?

    1. What do you mean by changing the config? Did you change it in their website configurator or in the deployment script?

  8. Good article.

    However, I was looking at the following test configuration, where I would have three instances on the same server (for testing purposes).

    That kind of config does not seem to be possible with this kind of UI?

    1. I can’t see your test configuration here in the comment section. Can you send it to my email, seca @ email.com?

      1. Hi,

        Below is the first node my.cnf:
        ——————————
        [client]
        socket=/var/lib/mysql/mysqld.sock
        user=root
        password=[password]

        [mysqld]
        server_id=1
        port = 4000
        datadir=/opt/mysql/data/d1/
        basedir=/opt/mysql/percona-cluster/5.5/current
        tmpdir=/tmp
        user=mysql
        socket=/var/lib/mysql/mysqld.sock
        pid-file=/var/lib/mysql/mysqld.pid
        log_error=/opt/mysql/data/d1/my_4000_cnf_error.log
        binlog_format=ROW
        skip-name-resolve
        skip-slave-start
        max-allowed-packet=64M
        wait-timeout=900
        default_storage_engine=InnoDB
        sql-mode=TRADITIONAL,NO_ENGINE_SUBSTITUTION
        character-set-server=utf8
        collation_server=utf8_general_ci

        #[mysqld_safe]
        #on primary node
        wsrep_cluster_address=gcomm://
        #wsrep_cluster_address=gcomm://192.168.0.1:4030

        ###Galera
        wsrep_provider=/opt/mysql/percona-cluster/5.5/current/lib/libgalera_smm.so
        wsrep_sst_receive_address=192.168.0.1:4020
        wsrep_node_address=192.168.0.1
        wsrep_node_name=ip-192-168-0-1-4000
        wsrep_slave_threads=8
        wsrep_cluster_name=triple_mysql
        wsrep_provider_options = "gmcast.listen_addr=tcp://192.168.0.1:4030;"
        #wsrep_sst_method=xtrabackup
        wsrep_sst_method=rsync
        wsrep_node_name=node4000
        innodb_locks_unsafe_for_binlog=1
        innodb_autoinc_lock_mode=2
        default_storage_engine=InnoDB
        wsrep_sst_auth=root:[password]
        wsrep_certify_nonPK=1
        wsrep_drupal_282555_workaround=0
        wsrep_notify_cmd=/opt/mysql/percona-cluster/5.5/current/bin/wsrep_notify.sh

        ——————————

        And the second node my5000.cnf:
        ——————————
        [client]
        socket=/var/lib/mysql/mysqld5000.sock
        user=root
        password=[password]

        [mysqld]
        server_id=2
        port = 5000
        datadir=/opt/mysql/data/d2
        basedir=/opt/mysql/percona-cluster/5.5/current
        tmpdir=/tmp
        user=mysql
        socket=/var/lib/mysql/mysqld5000.sock
        pid-file=/var/lib/mysql/mysqld5000.pid
        log_error=/opt/mysql/data/d2/my_5000_cnf_error.log
        binlog_format=ROW
        skip-name-resolve
        skip-slave-start
        max-allowed-packet=64M
        wait-timeout=900
        default_storage_engine=InnoDB
        sql-mode=TRADITIONAL,NO_ENGINE_SUBSTITUTION
        character-set-server=utf8
        collation_server=utf8_general_ci

        #[mysqld_safe]
        wsrep_cluster_address=gcomm://192.168.0.1:4030

        ###Galera
        wsrep_provider=/opt/mysql/percona-cluster/5.5/current/lib/libgalera_smm.so
        wsrep_sst_receive_address=192.168.0.1:4020
        wsrep_node_address=192.168.0.1
        wsrep_node_name=ip-192-168-0-1-5000
        wsrep_slave_threads=8
        wsrep_cluster_name=triple_mysql
        wsrep_provider_options = "gmcast.listen_addr=tcp://192.168.0.1:5030;"
        wsrep_sst_method=rsync
        #wsrep_sst_method=xtrabackup
        wsrep_node_name=node5000
        innodb_locks_unsafe_for_binlog=1
        innodb_autoinc_lock_mode=2
        default_storage_engine=InnoDB
        wsrep_sst_auth=root:[password]
        wsrep_certify_nonPK=1
        wsrep_drupal_282555_workaround=0
        wsrep_notify_cmd=/opt/mysql/percona-cluster/5.5/current/bin/wsrep_notify.sh

        ——————————

        And the third node my6000.cnf:
        ——————————
        [client]
        socket=/var/lib/mysql/mysqld6000.sock
        user=root
        password=[password]

        [mysqld]
        server_id=3
        port = 6000
        datadir=/opt/mysql/data/d3
        basedir=/opt/mysql/percona-cluster/5.5/current
        tmpdir=/tmp
        user=mysql
        socket=/var/lib/mysql/mysqld6000.sock
        pid-file=/var/lib/mysql/mysqld6000.pid
        log_error=/opt/mysql/data/d3/my_6000_cnf_error.log
        binlog_format=ROW
        skip-name-resolve
        skip-slave-start
        max-allowed-packet=64M
        wait-timeout=900
        default_storage_engine=InnoDB
        sql-mode=TRADITIONAL,NO_ENGINE_SUBSTITUTION
        character-set-server=utf8
        collation_server=utf8_general_ci

        #[mysqld_safe]
        wsrep_cluster_address=gcomm://192.168.0.1:4030

        ###Galera
        wsrep_provider=/opt/mysql/percona-cluster/5.5/current/lib/libgalera_smm.so
        wsrep_sst_receive_address=192.168.0.1:4020
        wsrep_node_address=192.168.0.1
        wsrep_node_name=ip-192-168-0-1-6000
        wsrep_slave_threads=8
        wsrep_cluster_name=triple_mysql
        wsrep_provider_options = "gmcast.listen_addr=tcp://192.168.0.1:6030;"
        wsrep_sst_method=rsync
        #wsrep_sst_method=xtrabackup
        wsrep_node_name=node6000
        innodb_locks_unsafe_for_binlog=1
        innodb_autoinc_lock_mode=2
        default_storage_engine=InnoDB
        wsrep_sst_auth=root:[password]
        wsrep_certify_nonPK=1
        wsrep_drupal_282555_workaround=0
        wsrep_notify_cmd=/opt/mysql/percona-cluster/5.5/current/bin/wsrep_notify.sh
        ——————————

        1. This is the reply from the Severalnines team:
          “ClusterControl cannot work with that because it only has one MySQL port that you can set. A workaround is to use Linux containers and put each instance in a container; then each container has a unique IP and the same MySQL port can be used.”

  9. Thanks for the great post. I can’t find anywhere whether there is pricing for this solution, including the ClusterControl panel. If yes, is there an alternative free way to have a control panel for this setup?

    1. Stelios, the following is the reply from the Severalnines team:

      “We are the only management/monitor tool for Galera Cluster, and the Galera developer team uses us as well.

      He can use the community edition of ClusterControl, there’s a lot of useful monitoring functionality available for free:
      http://www.severalnines.com/features-clustercontrol-mysql-galera

      For the commercial edition, he should contact us.”

  10. If I set up 3 nodes and wanted to add a 4th later… how would that be done?

    1. If you subscribe to the Severalnines license, you can add/remove nodes with a single click. If not, you can manually join the 4th node into the cluster, but ClusterControl will not provision that node. Or, you can rebuild the whole cluster with a new deployment script that includes the 4th node.
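
      A rough sketch of the manual join, assuming the 4th node already has the same MySQL Galera packages installed and a my.cnf copied from an existing node (with its node-specific values adjusted):

      # in /etc/my.cnf on the new node, point it at the running cluster:
      #   wsrep_cluster_address=gcomm://192.168.0.221:4567,192.168.0.222:4567,192.168.0.223:4567
      $ service mysql start   # the new node requests an SST from a donor and joins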
