Installing MySQL 5.7 (Oracle) using APT repository

I got a task to test and deploy the latest MySQL 5.7 from Oracle on Ubuntu and Debian servers. It turns out that, with MySQL's APT repository, this is trivial. Here is how to get MySQL 5.7 installed on Debian-based systems:

1) Download mysql-apt-config from the MySQL download page. Just click the "Download" button, then "No thanks, just start my download". Copy the link and use wget:

$ wget https://dev.mysql.com/get/mysql-apt-config_0.7.2-1_all.deb

2) Install the package:

$ sudo dpkg -i mysql-apt-config_0.7.2-1_all.deb

* It will launch a wizard to configure the APT repository. Choose the first option (the MySQL server) to select which server version to install:

Then, choose mysql-5.7.

Then, choose OK to apply the selection and exit the menu.

3) Update repository package list:

$ sudo apt-get update

4) Install MySQL community server:

$ sudo apt-get install mysql-community-server

* This will install all required dependency packages as well.
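
To confirm that MySQL 5.7 is installed and the service is running, a quick sanity check:

$ mysql --version
$ sudo service mysql status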

MySQL 5.7 is now installed! That’s it. Easy peasy.

Install and Configure HAProxy for MySQL Galera Cluster

In this post, I'm going to cover how to add a load balancer to a MariaDB Galera Cluster. The major steps are:

  1. Ensure all Galera nodes are running as a single cluster (all nodes are primary and synced).
  2. Install HAproxy (you can install it on a separate node or on the application server).
  3. Configure the clustercheck script. This script performs health checks on each backend DB server.
  4. Configure the HAproxy statistics page.
  5. Point the application to the load balancer.

Our architecture looks like below:

[Diagram: application → HAproxy (port 3307) → three Galera DB nodes]

Configure clustercheck script

* Steps described in this section should be performed on all DB nodes unless specified otherwise.

1. Firstly, we need to configure the backend health check reporting. We will use Percona's clustercheck script, available on GitHub. Get the script and put it under /usr/local/bin by running the following commands:

$ git clone https://github.com/olafz/percona-clustercheck
$ cp percona-clustercheck/clustercheck /usr/local/bin

2. The clustercheck script performs a health check on the Galera node by monitoring several MySQL variables/status counters. It yields a simple HTML output with a corresponding HTTP return code (either 503 – Service Unavailable or 200 – OK). To make it easy for HAproxy to trigger the script and get the latest status of the backend, we have to make it listen on a port. We can use xinetd to turn the script into a service daemon and make it listen on a custom port; in this case, I'm going to use 9200.

Create a new file called /etc/xinetd.d/mysqlchk, and add the following lines:

# default: on
# description: mysqlchk
service mysqlchk
{
  disable = no
  flags = REUSE
  socket_type = stream
  port = 9200
  wait = no
  user = nobody
  server = /usr/local/bin/clustercheck
  log_on_failure += USERID
  only_from = 0.0.0.0/0
  per_source = UNLIMITED
}

3. Then, register the mysqlchk service name and its port in /etc/services so xinetd can resolve it:

$ echo 'mysqlchk      9200/tcp    # MySQL check' >> /etc/services
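
Then restart xinetd so it picks up the new service (install the xinetd package first if it is not already present):

$ service xinetd restart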

4. By default, the script uses a MySQL user called "clustercheckuser" with password "clustercheckpassword!". We need to ensure this MySQL user exists with the corresponding password before the script can perform health checks. Run the following statements on one of the DB nodes (Galera will replicate them to the other nodes):

mysql> GRANT PROCESS ON *.* TO 'clustercheckuser'@'localhost' IDENTIFIED BY 'clustercheckpassword!';
mysql> FLUSH PRIVILEGES;

You can change these credentials on lines 32 and 33 of the clustercheck script. In this post, I'm going to use the default username and password.

5. Verify that the script returns the correct exit code:

$ /usr/local/bin/clustercheck > /dev/null
$ echo $?
0

If the DB node is synced, you should get 0; otherwise, the output should be 1. The backend health check is now configured.
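
Once xinetd is running, you can also exercise the health check over HTTP; port 9200 should return a 200 response while the node is synced:

$ curl -i http://localhost:9200/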

Install HAproxy

* Steps described in this section should be performed on HAproxy or application server.

1. The easy way is to use the package manager (yum/apt). However, it's highly recommended to use the latest version available on the HAproxy site. Either way, I'll show both installation methods here.

a) If you choose to install HAproxy via package manager:

$ yum install haproxy # Redhat/CentOS
$ sudo apt-get install haproxy # Debian/Ubuntu

b) Via source from the HAproxy download site:

$ yum install php-curl gcc make # Redhat/CentOS
$ apt-get install php5-curl gcc make # Debian/Ubuntu
$ wget http://www.haproxy.org/download/1.5/src/haproxy-1.5.x.tar.gz # replace 1.5.x with the actual version
$ tar xvzf haproxy-1.5.x.tar.gz
$ cd haproxy-1.5.x
$ make TARGET=linux26
$ cp -f haproxy /usr/sbin/haproxy

Installing from source (option b) comes with no init script, so you have to create one manually or simply run the process from the command line (which is a non-standard way). I'm not going to cover that unconventional approach in this post.

 

Configure HAproxy

Now that we have HAproxy installed, we need to configure it to listen on port 3307 for the MySQL service and to perform backend health checks. In /etc/haproxy/haproxy.cfg, ensure the following lines exist:

global
        pidfile /var/run/haproxy.pid
        daemon
        user haproxy
        group haproxy
        stats socket /var/run/haproxy.socket user haproxy group haproxy mode 600 level admin
 
        maxconn 8192
        spread-checks 3
        quiet
defaults
        mode    tcp
        option  dontlognull
        option tcp-smart-accept
        option tcp-smart-connect
        retries 3
        option redispatch
        maxconn 8192
        timeout check   3500ms
        timeout queue   3500ms
        timeout connect 3500ms
        timeout client  10800s
        timeout server  10800s
 
userlist STATSUSERS
        group admin users admin
        user admin insecure-password admin
        user stats insecure-password yourpassword
 
listen admin_page 0.0.0.0:9600
        mode http
        stats enable
        stats refresh 60s
        stats uri /
        acl AuthOkay_ReadOnly http_auth(STATSUSERS)
        acl AuthOkay_Admin http_auth_group(STATSUSERS) admin
        stats http-request auth realm admin_page unless AuthOkay_ReadOnly
 
listen  mysql_3307
        bind *:3307
        mode tcp
        timeout client  10800s
        timeout server  10800s
        balance leastconn
        option httpchk
        option allbackups
        default-server port 9200 inter 2s downinter 5s rise 3 fall 2 slowstart 60s maxconn 64 maxqueue 128 weight 100
        server db1 10.0.0.187:3306 check
        server db2 10.0.0.188:3306 check
        server db3 10.0.0.189:3306 check
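
Before starting the service, it's worth validating the configuration syntax:

$ haproxy -c -f /etc/haproxy/haproxy.cfg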

Now enable the service on boot and fire it up:

RHEL/CentOS 6:

$ chkconfig haproxy on # RHEL6
$ service haproxy start # RHEL6

Ubuntu 14.04 and lower, Debian 7 and lower:

$ update-rc.d haproxy defaults
$ sudo service haproxy start

RHEL/CentOS 7, Debian 8, Ubuntu 15.04:

$ systemctl enable haproxy
$ systemctl start haproxy

Verify that HAproxy is listening on the correct ports:

$ sudo netstat -tulpn | grep haproxy
tcp        0      0 0.0.0.0:9600            0.0.0.0:*               LISTEN      370/haproxy
tcp        0      0 0.0.0.0:3307            0.0.0.0:*               LISTEN      370/haproxy

3307 is the MySQL load-balanced port, while 9600 serves the HAproxy statistics page. You can check the status by going to http://haproxy_ip_address:9600/ and logging in with username 'admin' and password 'yourpassword', as configured inside haproxy.cfg. You should see something like below:

[Screenshot: HAproxy statistics page showing the mysql_3307 backend with db1, db2 and db3 up]

Now you can point the application or MySQL client to the HAproxy host on port 3307 for load-balanced MySQL connections with automatic failover.
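
As a quick smoke test, run a few queries through the load-balanced port and watch which backend serves each connection (a sketch: haproxy_ip and the password are placeholders, and the MySQL user must be allowed to connect from the HAproxy host):

$ for i in 1 2 3; do mysql -uroot -p'yourpassword' -h haproxy_ip -P3307 -NBe 'SELECT @@hostname'; done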

Various Ways to Determine Public IP on Linux CLI

When you work a lot in a CLI environment, particularly on Linux, you often want to know your public IP address, especially when running behind NAT. Here is a list of commands you can use to determine the public IP of your host from the command line.

Using curl

cURL is available on almost all Linux distributions and is the most popular way to determine the public IP address of a host. You just need to know a URL or host that will respond with the correct public IP, as shown below:

$ curl ifconfig.me
52.74.127.152
 
$ curl ipecho.net/plain
52.74.127.152
 
$ curl ident.me
52.74.127.152
 
$ curl icanhazip.com
52.74.127.152
 
$ curl bot.whatismyipaddress.com
52.74.127.152

Using wget

Basically, every curl command above can be replaced with wget -qO-, as shown below:

$ wget -qO- http://ipecho.net/plain
52.74.127.152

Using Lynx

Lynx is a text-based browser that runs entirely in your CLI:

$ lynx checkip.dyndns.org # you will be redirected to a text-based browser
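
Using dig

You can also ask a DNS resolver instead of an HTTP service. OpenDNS exposes your public address through a special hostname (this assumes dig from the bind-utils/dnsutils package is installed):

$ dig +short myip.opendns.com @resolver1.opendns.com
52.74.127.152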

If you know an even simpler method than the ones mentioned here, please share it!

 

MySQL Encryption using SSH and Supervisor

If MySQL security is one of your concerns, you should use encryption when connecting to the server. Setting up SSL in MySQL is not really straightforward, as you have to generate a key and certificate and GRANT a specific user with the REQUIRE SSL statement. This adds an extra maintenance task for the DBA.

The easiest way to achieve this is by using SSH encryption. Instead of directing the application to connect to the standard port 3306 with a plain connection, why not connect it to a 'forwarding' port that maps to the MySQL port via SSH? SSH is secure, and almost everyone with basic Linux knowledge knows how to manage it.

Setting up SSH access and port forwarding

Let's say we have an application/web server and a MySQL server listening on the standard port 3306:

  • 10.0.0.20 – web server
  • 10.0.0.21 – mysql server

Following steps should be performed on the application/web server.

1. As the root user, generate an SSH key:

$ whoami 
root
$ ssh-keygen -t rsa # press enter on all prompts

2. Copy the SSH key from the web server to the MySQL node:

$ ssh-copy-id 10.0.0.21 # enter the root password for 10.0.0.21 if prompted

3. Verify that you can run the following command without the MySQL node prompting for a password:

$ ssh 10.0.0.21 "ls /usr"

4. As root, run the following SSH command to listen on port 10001 of the local host and forward it to port 3306 on 10.0.0.21 (-f backgrounds the session, -N skips running a remote command, -g allows other hosts to connect to the forwarded port, and -L defines the local port forwarding):

$ ssh -fNg -L 10001:127.0.0.1:3306 10.0.0.21

5. Verify that the forwarded port is listening and that the SSH process is running:

$ netstat -tulpn | grep 10001
$ ps aux | grep ssh

6. Finally, connect to the MySQL server securely using the MySQL client:

$ mysql -uroot -p -h127.0.0.1 -P10001

At this point, you can point your application to 127.0.0.1 port 10001 for a secured MySQL connection via SSH.

Dedicated SSH user

The above method works fine if you are running as the root user, but it is not the safest approach. Since running the SSH command does not require super-user privileges, we should create a dedicated user other than root for this process. In this example, I created a user called 'myuser'.

1. On both servers, create the user and assign a password:

$ useradd myuser
$ passwd myuser

2. On the application/web server, generate an SSH key for myuser:

$ su - myuser
$ whoami 
myuser
$ ssh-keygen -t rsa # press enter on all prompts

3. Then, copy the SSH key to the MySQL server, 10.0.0.21:

$ ssh-copy-id 10.0.0.21 # enter the myuser password for 10.0.0.21 if prompted

4. Start the SSH port forwarding with the following command:

$ ssh -fNg -L 10001:127.0.0.1:3306 10.0.0.21

5. To re-establish the tunnel automatically after a reboot, add the following line to /etc/rc.local:

runuser -l myuser -c 'ssh -fNg -L 10001:127.0.0.1:3306 10.0.0.21'

Now, it’s safer to have a dedicated user to perform the port forwarding process.

Setting up Supervisor

Now that we have secured our MySQL connection, we need to ensure the SSH process is monitored, so that when it fails (or the server reboots) it is automatically restarted. You can put the command directly inside /etc/rc.local (as shown above), which is executed automatically upon startup, but this does not cover the worst-case scenarios where the process crashes, stops or is killed. This is where Supervisor comes in handy.

Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.

1. Install Supervisor via python easy_install:

$ yum install -y python-setuptools
$ easy_install supervisor

2. Create a configuration file, /etc/supervisord.conf:

$ vim /etc/supervisord.conf

And add the following lines:

[supervisord]
nodaemon=false
logfile=/var/log/supervisord.log
pidfile=/var/run/supervisord.pid
 
[program:mysql_secure]
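; -f is omitted here so that ssh stays in the foreground, letting Supervisor monitor the process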
command=ssh -Ng -L 10001:127.0.0.1:3306 10.0.0.21
user=myuser

3. To start Supervisor, just run the following command:

$ supervisord -c /etc/supervisord.conf
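
You can check on the managed process at any time with supervisorctl:

$ supervisorctl -c /etc/supervisord.conf status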

4. To ensure it starts on boot, add the following line to /etc/rc.local:

/usr/bin/supervisord -c /etc/supervisord.conf

Here is an excerpt from /var/log/supervisord.log showing that it monitors the process correctly:

$ less /var/log/supervisord.log
2015-05-19 20:22:14,093 CRIT Supervisor running as root (no user in config file)
2015-05-19 20:22:14,100 INFO daemonizing the supervisord process
2015-05-19 20:22:14,101 INFO supervisord started with pid 1944
2015-05-19 20:22:15,105 INFO spawned: 'mysql_secure' with pid 1945
2015-05-19 20:22:16,108 INFO success: mysql_secure entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2015-05-19 20:22:24,581 CRIT Supervisor running as root (no user in config file)
2015-05-19 20:22:24,585 INFO daemonizing the supervisord process
2015-05-19 20:22:24,585 INFO supervisord started with pid 1952
2015-05-19 20:22:25,591 INFO spawned: 'mysql_secure' with pid 1953
2015-05-19 20:22:26,801 INFO success: mysql_secure entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

That’s all folks!

 

CentOS 7: Installing and Managing MySQL

Starting from CentOS/RHEL 7, the standard MySQL (Oracle) package is no longer available and has been replaced by MariaDB. There is almost no difference when managing MariaDB, since it is basically a drop-in replacement for MySQL. MariaDB has certainly attracted huge attention, and many existing MySQL users, including Google and Tumblr, have switched to it.

To install MySQL/MariaDB on a CentOS 7 box, just use the following command:

$ yum install mariadb mariadb-server

In RHEL, when you run yum install mysql, it will automatically install MariaDB 5.5. The MySQL configuration is still in the familiar location, /etc/my.cnf, the error log is located at /var/log/mariadb/mariadb.log, and the data directory is still at /var/lib/mysql.

CentOS 7 runs on systemd, thus to start the service (similar to service mysqld start):

$ systemctl start mariadb.service

** Other options are: restart, stop, status

To enable the service to start on boot (similar to chkconfig mysqld on):

$ systemctl enable mariadb.service
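
After the first start, it's also worth running the bundled hardening script to set the root password and remove the test database:

$ mysql_secure_installation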

The systemd commands above are the only real differences when managing MySQL on CentOS/RHEL 7. To retrieve the list of services with their status, use the following command:

$ systemctl list-units

Or:

$ systemctl list-unit-files
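
For example, to check just the MariaDB unit:

$ systemctl list-unit-files | grep mariadb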

That's it. Having MariaDB as a replacement for MySQL is definitely a great choice. You won't regret using it!

CentOS/Redhat: How to Install MySQL Sandbox

When you start to work heavily with MySQL servers and need to create, install, configure and destroy MySQL instances, MySQL Sandbox is a great tool to automate those tasks. It lets us deploy a MySQL test environment within seconds using simple commands.

The following is what I did to install MySQL Sandbox on my CentOS box and create three MySQL instances for my test environment:

1. Install the Perl CPAN package:

$ yum install perl-CPAN

2. Connect to the Perl CPAN shell:

$ perl -MCPAN -e shell

** Answer 'yes' at the prompt; Perl will auto-configure most of the stuff for you.

3. Install required Perl modules:

install Test::More
install MySQL::Sandbox

4. MySQL Sandbox should be run as a non-root user. Create a user called sandbox and switch to that user's environment:

$ useradd sandbox
$ su - sandbox

5. Download the MySQL Sandbox package from Launchpad:

$ wget https://launchpadlibrarian.net/187745286/MySQL-Sandbox-3.0.47.tar.gz

6. Add MySQL Sandbox to the PATH in the sandbox user's .bashrc:

$ vim ~/.bashrc

And append the following line:

export PATH=$PATH:/home/sandbox/MySQL-Sandbox-3.0.47/bin

7. Log in again as the sandbox user to activate the new environment:

$ exit
$ su - sandbox

8. Download the MySQL generic TAR file from the MySQL Server archive page:

$ wget http://downloads.mysql.com/archives/get/file/mysql-5.6.20-linux-glibc2.5-x86_64.tar.gz

** I used MySQL 5.6.20 for this installation

The installation is complete. To create a single MySQL instance, use the following command:

$ make_sandbox mysql-5.6.20-linux-glibc2.5-x86_64.tar.gz

I used the following command to create three MySQL instances for my test environment:

$ make_multiple_sandbox mysql-5.6.20-linux-glibc2.5-x86_64.tar.gz

Your instances should be ready and running within minutes. They will be created under /home/sandbox/sandboxes.
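
Each sandbox comes with helper scripts to control the instances. A sketch for the multiple-sandbox layout (the exact directory name depends on the MySQL version used):

$ cd ~/sandboxes/multi_msb_5_6_20
$ ./start_all
$ ./use_all 'SELECT @@port'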

Restoring xbstream Backup Created by Percona Xtrabackup

 

Backing up

Once Percona XtraBackup is installed, you can create a full backup with parallel compression as shown below:

$ innobackupex --stream=xbstream --parallel=4 --compress --compress-threads=2 /var/lib/mysql > /root/backup/backup.xbstream

Once completed, you should see that a backup file with the .xbstream extension has been created:

$ du -sh /root/backup/backup.xbstream
106M backup.xbstream

 

Restoration

1. Extract the file using the xbstream -x command, as shown below:

$ xbstream -x < /root/backup/backup.xbstream

2. If you look into the current folder recursively, you will see a lot of files ending with the .qp extension. We need to extract these files using qpress, which is available in the Percona repository. Install qpress as shown below:

CentOS/Redhat:

$ yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
$ yum install qpress

Debian/Ubuntu:

$ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
$ echo 'deb http://repo.percona.com/apt VERSION main' >> /etc/apt/sources.list
$ echo 'deb-src http://repo.percona.com/apt VERSION main' >> /etc/apt/sources.list
$ apt-get update
$ apt-get install qpress

* Replace VERSION with your Debian/Ubuntu release name

3. Extract each .qp file into its respective directory, then remove the .qp file once it is extracted:

$ for i in $(find -name "*.qp"); do qpress -vd $i $(dirname ${i}) && rm -f $i; done
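
Alternatively, recent Percona XtraBackup releases can drive qpress for you via the --decompress option (it still requires qpress to be installed, and unlike the loop above it leaves the original .qp files in place):

$ innobackupex --decompress ./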

4. You can now use the --apply-log option to prepare the MySQL data:

$ innobackupex --apply-log ./

5. Stop the MySQL server:

$ service mysql stop

6. Move or remove the current data directory and use the --copy-back option to let xtrabackup copy the prepared data into the MySQL data directory:

$ rm -Rf /var/lib/mysql/*
$ innobackupex --copy-back ./

7. Assign the correct ownership to the MySQL data directory:

$ chown -Rf mysql.mysql /var/lib/mysql

8. Start the MySQL server:

$ service mysql start

 

Restoration completed!

MySQL: Calculate Read and Write Ratios in Percentage

I was assigned a task to assist a client in getting some idea of his database usage in MySQL Cluster before migrating it to a Galera cluster. Galera scales well for reads but not for writes. So the first thing to do is calculate the read/write ratio:

SELECT @total_com := SUM(IF(variable_name IN ('Com_select', 'Com_delete', 'Com_insert', 'Com_update', 'Com_replace'), variable_value, 0)) AS `Total`,
 @total_reads := SUM(IF(variable_name = 'Com_select', variable_value, 0)) AS `Total reads`,
 @total_writes := SUM(IF(variable_name IN ('Com_delete', 'Com_insert', 'Com_update', 'Com_replace'), variable_value, 0)) as `Total writes`,
 ROUND((@total_reads / @total_com * 100),2) as `Reads %`,
 ROUND((@total_writes / @total_com * 100),2) as `Writes %`
FROM information_schema.GLOBAL_STATUS;

The output would be as below:

+------------+-------------+--------------+---------+----------+
| Total      | Total reads | Total writes | Reads % | Writes % |
+------------+-------------+--------------+---------+----------+
| 1932344732 |  1899878513 |     32466219 |   98.32 |     1.68 |
+------------+-------------+--------------+---------+----------+
1 row in set (0.00 sec)

The values above are cumulative since the last restart, so if you have just restarted your MySQL server, you may need to wait at least several hours to get a more accurate result. (On MySQL 5.7 and later, where information_schema.GLOBAL_STATUS is deprecated, the same counters are available in performance_schema.global_status.)

As a simple initial estimate, if the write percentage is less than 50%, the schema should run fine on a Galera cluster. However, you need to fully understand the application side as well. Hope this helps people out there!
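
If you prefer to pull the numbers from the shell, the same calculation can be scripted with the mysql client and awk (a minimal sketch; it assumes client credentials are available, e.g. in ~/.my.cnf):

$ mysql -NBe "SHOW GLOBAL STATUS" | awk '
  $1 == "Com_select" { r = $2 }
  $1 ~ /^Com_(insert|update|delete|replace)$/ { w += $2 }
  END { t = r + w; printf "Reads: %.2f%%  Writes: %.2f%%\n", r/t*100, w/t*100 }'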

MySQL: Export Table to Split CSV Files using a Stored Procedure

I have been working with Amazon Redshift lately, and I needed to export some huge tables (hundreds of millions of rows) to load into a Redshift cluster. Since Redshift supports parallel bulk loading of data files (CSV, TSV, JSON), I took advantage of this feature by splitting the MySQL CSV data files before uploading them to S3 for loading. The only problem was that I needed to split the records evenly for the selected tables.

So I came up with the following stored procedure. Let's say we have two tables in MySQL that we want to export: activity_log and messaging_log.

To export and split, load the following stored procedure into MySQL:

DELIMITER //
DROP PROCEDURE IF EXISTS export_csv_split //
 
CREATE PROCEDURE export_csv_split (IN table_name VARCHAR(50), IN rows INT)
BEGIN
DECLARE x INT;
DECLARE y INT;
DECLARE total_row INT;
 
SET x=0;
SET y=1;
 
-- count the total rows of the select statement as @total_row
SET @SQLString1 = CONCAT('SELECT COUNT(*) INTO @total_row FROM ',table_name);
PREPARE test1 FROM @SQLString1;
EXECUTE test1;
 
WHILE x <= @total_row DO
 
-- export the rows from select statement with limit
SET @SQLString = CONCAT('SELECT * FROM ',table_name,' LIMIT ',x,',',rows,' INTO OUTFILE "/tmp/',table_name,'-',y,'.csv" FIELDS TERMINATED BY '','' OPTIONALLY ENCLOSED BY ''"'' LINES TERMINATED BY ''\n''');
PREPARE test2 FROM @SQLString;
EXECUTE test2;
 
SET x=x+rows;
SET y=y+1;
 
END WHILE;
 
END //
DELIMITER ;

Then, call the stored procedure, passing the table name and the number of rows per file: 10M rows per file for table activity_log and 5M rows per file for table messaging_log:

mysql> call export_csv_split('activity_log',10000000);
mysql> call export_csv_split('messaging_log',5000000);

The data files will be generated under the /tmp directory (note that INTO OUTFILE writes the files on the database server host, and the MySQL user needs the FILE privilege):

$ ls -1 /tmp
activity_log-1.csv
activity_log-2.csv
activity_log-3.csv
activity_log-4.csv
activity_log-5.csv
activity_log-6.csv
...
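
As a quick sanity check, you can confirm the per-file row counts (rows are newline-terminated, so line counts equal row counts):

$ wc -l /tmp/activity_log-*.csv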

 

For another case, some huge tables needed to be exported with conditions; for example, export all records created within the last 10 years:

DELIMITER //
DROP PROCEDURE IF EXISTS export_csv_split_condition //
 
CREATE PROCEDURE export_csv_split_condition (IN table_name VARCHAR(50), IN rows INT)
BEGIN
DECLARE x INT;
DECLARE y INT;
DECLARE total_row INT;
 
SET x=0;
SET y=1;
 
-- condition of the select statement
SET @condition = CONCAT('WHERE DATE(created) < ''2014-01-01'' AND DATE(created) >= ''2004-01-01''');
 
-- count the total rows of the select statement + condition as @total_row
SET @SQLString1 = CONCAT('SELECT COUNT(*) INTO @total_row FROM ',table_name,' ',@condition);
PREPARE test1 FROM @SQLString1;
EXECUTE test1;
 
WHILE x <= @total_row DO
 
-- export the rows from select statement + condition with limit
SET @SQLString = CONCAT('SELECT * FROM ',table_name,' ',@condition,' LIMIT ',x,',',rows,' INTO OUTFILE "/tmp/',table_name,'-',y,'.csv" FIELDS TERMINATED BY '','' OPTIONALLY ENCLOSED BY ''"'' LINES TERMINATED BY ''\n''');
PREPARE test2 FROM @SQLString;
EXECUTE test2;
 
SET x=x+rows;
SET y=y+1;
 
END WHILE;
 
END //
DELIMITER ;

Similar to the first case, just call the stored procedure as below:

mysql> call export_csv_split_condition('tbl_user_activities',1000000);

You may need to change COUNT(*) to a single-column lookup instead, e.g. COUNT(id), to speed up the row-counting process in the InnoDB/XtraDB storage engine. Hope this simple sharing helps you when exporting huge tables to CSV.

Install Puppet on CentOS, Redhat, Ubuntu, Debian

Lately, I have been spending my time doing research on developing a Puppet module. As you might know, Puppet is an orchestration tool used by many sysadmins to deploy and configure servers without the hassle of repeating the same installation commands over and over again.

With Puppet, you simply:

  1. Install Puppet master
  2. Define node configuration in Puppet master
  3. Install Puppet agent
  4. Let the Puppet agent deploy what you have defined

Developing a Puppet module requires me to test the module's manifests on many popular OS distributions. I am going to stick with the subject of this post: installing Puppet on the following OS distributions:

  • RHEL 6/CentOS 6
  • RHEL 5/CentOS 5
  • Ubuntu 12.04
  • Ubuntu 14.04
  • Debian 6
  • Debian 7

 

Requirement

Ensure the host's date and time are synced through NTP and that /etc/hosts is configured correctly. The following is the example /etc/hosts definition that I used:

192.168.10.100    puppetmaster.local
192.168.10.101    mysql1.local   # puppet-agent
192.168.10.102    mysql2.local   # puppet-agent
192.168.10.103    mysql3.local   # puppet-agent

** The /etc/hosts file must be the same on all nodes so hostnames can be resolved to IPs. This is required later, during the certificate signing stage by the puppet master.

RHEL-based:

yum install -y ntpdate
ntpdate -u my.pool.ntp.org

Debian-based:

sudo apt-get install -y ntpdate
ntpdate -u my.pool.ntp.org

Installing Puppetlabs Repository

We need to install the official Puppetlabs repository on all nodes, regardless of their role in puppet. Install the repository definition for your respective OS distribution:

RHEL 6/CentOS 6:

rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

RHEL 5/CentOS 5:

rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-5.noarch.rpm

Ubuntu 12.04 (Precise):

wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb

Ubuntu 14.04 (Trusty):

wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
sudo dpkg -i puppetlabs-release-trusty.deb

Debian 6 (Squeeze):

wget https://apt.puppetlabs.com/puppetlabs-release-squeeze.deb
dpkg -i puppetlabs-release-squeeze.deb

Debian 7 (Wheezy):

wget https://apt.puppetlabs.com/puppetlabs-release-wheezy.deb
dpkg -i puppetlabs-release-wheezy.deb

 

Installing Puppet Master

On the puppetmaster.local node, run the following command to install the Puppet master:

Redhat-based:

yum install -y puppet-server openssl

Debian-based:

sudo apt-get update
sudo apt-get install -y puppetmaster openssl

 

Installing Puppet Agent

On all puppet agent nodes (mysql1.local, mysql2.local, mysql3.local), install the puppet agent and its dependencies:

Redhat-based:

yum install -y puppet facter openssl

Debian-based:

sudo apt-get update
sudo apt-get install -y puppet facter openssl

 

Signing the Agent Certificates

Puppet communicates through a channel secured with SSL. When the puppet agent runs for the first time, it auto-generates a new SSL key and certificate request, which the puppet master must sign before any communication begins. Run the following command on each agent node:

$ puppet agent --server=puppetmaster.local --no-daemonize --verbose
Info: Creating a new SSL key for mysql1.local
Info: Caching certificate for ca
Info: csr_attributes file loading from /etc/puppet/csr_attributes.yaml
Info: Creating a new SSL certificate request for mysql1.local
Info: Certificate Request fingerprint (SHA256): 6F:8B:92:46:B0:3F:04:0A:4F:8D:BD:56:77:24:77:50:1C:E9:F4:EE:C6:00:5E:82:4F:B0:85:B5:26:72:43:E0
Info: Caching certificate for ca

This generates a certificate request to be signed by the puppet master. Now, on the puppet master, list the pending certificate signing requests:

$ puppet ca list
mysql1.local  (SHA256) 6F:8B:92:46:B0:3F:04:0A:4F:8D:BD:56:77:24:77:50:1C:E9:F4:EE:C6:00:5E:82:4F:B0:85:B5:26:72:43:E0

Sign the certificate for this agent:

$ puppet ca sign mysql1.local
Notice: Signed certificate request for mysql1.local
Notice: Removing file Puppet::SSL::CertificateRequest mysql1.local at '/var/lib/puppet/ssl/ca/requests/mysql1.local.pem'
"-----BEGIN CERTIFICATE-----\n
...the key..."

Repeat the above steps on the other nodes, mysql2.local and mysql3.local. Now the puppet master should be able to communicate with its agents securely.

 

Configure Puppet Agent

The last step is to update /etc/puppet/puppet.conf on each agent and add the following line under the [main] directive:

server=puppetmaster.local

** If you do not configure this, you will need to add --server=puppetmaster.local to each of the puppet agent commands below.

Now you can test from the agent node:

puppet agent --test

 

Deploy MySQL through Puppet

Puppet is now ready. Let's deploy a MySQL server in the simplest way. Go to Puppet Forge and look for a puppet module called puppetlabs-mysql. To install this module, run the following command on the puppet master node:

puppet module install puppetlabs-mysql

On puppetmaster.local, create a puppet manifest at /etc/puppet/manifests/site.pp to define what the agents should deploy:

# /etc/puppet/manifests/site.pp

# Default node - this is compulsory
node default {
}

# Define the agent nodes
node "mysql1.local", "mysql2.local", "mysql3.local" {
  class { '::mysql::server':
    root_password => 'strongpassword'
  }
  class { '::mysql::client':
    package_ensure => 'present'
  }
}

The above definition tells Puppet to install the MySQL server and client packages on mysql1.local, mysql2.local and mysql3.local. To start the deployment immediately, go to an agent node and run:

puppet agent --test

Or, wait for the Puppet agent service to apply the catalog automatically (depending on the runinterval value, default is 30 minutes).
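
Once the catalog has been applied, you can verify the result directly on an agent node, using the root password defined in the manifest above:

$ mysql -uroot -p'strongpassword' -e 'SELECT VERSION();'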

Once done, you will have three MySQL servers ready to serve! Imagine how much time you would save if you had many servers, applications, software packages and configurations to maintain.

Galera Cluster – Simpler way to view GRA file content

Galera Cluster (MySQL from Codership, Percona XtraDB Cluster, MariaDB Galera Cluster) generates GRA log files when it fails to apply a writeset on the target node. These files live in the MySQL data directory. You can get an overview of the files (if any exist) by listing your MySQL data directory (in my case, the data directory is at /var/lib/mysql):

$ ls -1 /var/lib/mysql | grep GRA
GRA_10_104865779.log
GRA_13_104865781.log
GRA_5_104865780.log

The MySQL Performance Blog has covered this topic in a well-explained post. I'm going to make it simple: a GRA file is a binlog event stream missing the standard binlog header, so the script below prepends a saved header and runs the result through mysqlbinlog. Download the script and copy it to your /usr/bin directory:

wget http://blog.secaserver.com/files/grareader -P /usr/bin/
chmod 755 /usr/bin/grareader

Just run the following command to convert a GRA log file to human-readable output:

grareader [gra_log_file]

Here is the example output:

/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#140114 3:12:42 server id 3 end_log_pos 120 Start: binlog v 4, server v 5.6.15-log created 140114 3:12:42 at startup
ROLLBACK/*!*/;
BINLOG '
qjrUUg8DAAAAdAAAAHgAAAAAAAQANS42LjE1LWxvZwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAACqOtRSEzgNAAgAEgAEBAQEEgAAXAAEGggAAAAICAgCAAAACgoKGRkAAf73
8eY=
'/*!*/;
# at 120
#140114 3:12:43 server id 3 end_log_pos 143 Stop
# at 143
#140507 14:55:42 server id 4 end_log_pos 126 Query thread_id=3173489 exec_time=0 error_code=0
use `test_shop`/*!*/;
SET TIMESTAMP=1399445742/*!*/;
SET @@session.pseudo_thread_id=3173489/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=0/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
ALTER TABLE `tblreshipment_header` DROP `ShipmentStatus`
/*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

You can download the script here or copy and paste the code:

#!/bin/bash
# Convert Galera GRA_* files to human readable output
# Usage: grareader <gra_log_file>
# Example: grareader /var/lib/mysql/GRA_1_1.log
 
##
## GRA header file path
##
path=/tmp
gra_header_path=$path/GRA-Header
tmp_path=$path/grareader.tmp
 
input=$1
[ ! -e "$input" ] && echo 'Error: File does not exist' && exit 1
 
get_gra_header()
{
        download_url='http://secaserver.com/files/GRA-Header'
        wget_bin=`command -v wget`
        [ -z "$wget_bin" ] && echo 'Error: Unable to locate wget. Please install it first' && exit 1
        echo "Downloadling GRA-Header file into $path"
        $wget_bin --quiet $download_url -P $path
        [ $? -ne 0 ] && echo 'Error: Download failed' && exit 1
}
 
locate_files()
{
        mysqlbinlog_bin=`command -v mysqlbinlog`
        [ -z "$mysqlbinlog_bin" ] && echo 'Error: Unable to locate mysqlbinlog binary. Please install it first' && exit 1
        [ ! -e $gra_header_path ] && echo 'GRA header file not found, downloading it' && get_gra_header
 
}
 
locate_files
 
cat $gra_header_path >> $tmp_path
cat $input >> $tmp_path
 
echo ''
clear
$mysqlbinlog_bin -v -v -v $tmp_path
echo ''
rm -rf $tmp_path

 

Hope this helps make your Galera administrative tasks simpler!

Percona Server Installation Error – libssl.so.10 and libcrypto.so.10

I stumbled upon an error when installing Percona Server and socat via the yum repository:

--> Processing Dependency: libssl.so.10(libssl.so.10)(64bit) for package: socat-1.7.2.3-1.el6.x86_64
--> Processing Dependency: libcrypto.so.10(libcrypto.so.10)(64bit) for package: socat-1.7.2.3-1.el6.x86_64

It turns out that:

“Red Hat upgraded the version of OpenSSL in EL6 from 1.0.0 to 1.0.1 during the 6.4-6.5 cycle, in order to resolve a years-old feature request. This package is no longer binary compatible, and programs that were built against OpenSSL 1.0.0 must be rebuilt from source against 1.0.1.”
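
You can confirm which package is expected to provide the missing library before pulling anything in:

$ yum provides 'libssl.so.10()(64bit)'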

What do we need to do then?

Luckily, the packages are available in the IUS repository:

rpm -Uhv http://dl.iuscommunity.org/pub/ius/stable/Redhat/6/x86_64/epel-release-6-5.noarch.rpm
rpm -Uhv http://dl.iuscommunity.org/pub/ius/stable/Redhat/6/x86_64/ius-release-1.0-13.ius.el6.noarch.rpm
yum install yum-plugin-replace
yum replace --enablerepo=ius-archive openssl --replace-with openssl10

(Press y for any question)

Verify that the dependent libraries (libcrypto.so.10 and libssl.so.10) now exist:

$ ls /usr/lib64/ | grep -e ssl.so -e crypto.so
libcrypto.so.10
libcrypto.so.1.0.1e
libssl3.so
libssl.so.10
libssl.so.1.0.1e

Then, retry the Percona Server installation command. In some cases, you might need to remove the installed epel-release package, since it's a little outdated compared to the current release, 6.8.

Hope the solution will help you guys out there!