CentOS/Redhat: How to Install MySQL Sandbox

Once you start working heavily with MySQL servers and regularly need to create, install, configure and destroy MySQL instances, MySQL Sandbox is a great tool for automating those tasks. It lets you deploy a MySQL test environment within seconds using a few simple commands.

The following is what I did to install MySQL Sandbox on my CentOS box and create three MySQL instances for my test environment:

1. Install the Perl CPAN package:

$ yum install perl-CPAN

2. Open the Perl CPAN shell:

$ perl -MCPAN -e shell

** Answer ‘yes’ at the prompt. CPAN will auto-configure most of the settings for you.

3. Install the required Perl modules from within the CPAN shell:

install Test::More
install MySQL::Sandbox
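
Once both installs report success, you can optionally confirm that the module is visible to Perl with a quick one-liner that prints its version:

$ perl -MMySQL::Sandbox -e 'print $MySQL::Sandbox::VERSION, "\n"'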

4. MySQL Sandbox should be run as a non-root user. Create a user called sandbox and switch to that user’s environment:

$ useradd sandbox
$ su - sandbox

5. Download the MySQL Sandbox package from Launchpad:

$ wget https://launchpadlibrarian.net/187745286/MySQL-Sandbox-3.0.47.tar.gz
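
The PATH entry added in the next step points at the unpacked directory, so extract the tarball first:

$ tar -xzf MySQL-Sandbox-3.0.47.tar.gz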

6. Add the MySQL Sandbox bin directory to the PATH in the sandbox user’s .bashrc:

$ vim ~/.bashrc

Then append the following line:

export PATH=$PATH:/home/sandbox/MySQL-Sandbox-3.0.47/bin

7. Log back in as the sandbox user to reload the environment:

$ exit
$ su - sandbox
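
You can optionally verify that the script is now found on the PATH:

$ which make_sandbox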

8. Download the generic MySQL tarball from the MySQL Server archives page:

$ wget http://downloads.mysql.com/archives/get/file/mysql-5.6.20-linux-glibc2.5-x86_64.tar.gz

** I used MySQL 5.6.20 for this installation

Installation is complete. To create a single MySQL instance, use the following command:

$ make_sandbox mysql-5.6.20-linux-glibc2.5-x86_64.tar.gz

I used the following command to create three MySQL instances for my test environment:

$ make_multiple_sandbox mysql-5.6.20-linux-glibc2.5-x86_64.tar.gz

Your instances should be ready and running within minutes. They will be created under /home/sandbox/sandboxes.
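
Each sandbox comes with its own helper scripts. Assuming the default layout (the group directory name below may differ depending on your MySQL Sandbox version), you can manage and connect to the instances like this:

$ cd /home/sandbox/sandboxes/multi_msb_5_6_20
$ ./start_all
$ ./n1
$ ./stop_all

Here start_all and stop_all control all three instances at once, while n1, n2 and n3 open a mysql client session on the respective node.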

Restoring an xbstream Backup Created by Percona XtraBackup


Backing up

Once Percona XtraBackup is installed, you can create a full backup with parallel compression as below:

$ innobackupex --stream=xbstream --parallel=4 --compress --compress-threads=2 /var/lib/mysql > /root/backup/backup.xbstream

Once completed, you should see that a backup file with the .xbstream extension has been created:

$ du -sh /root/backup/backup.xbstream
106M backup.xbstream
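
If you take backups regularly, a date-stamped variant of the same command keeps one file per run; the --user/--password options and the /root/backup path below are assumptions you should adapt to your setup:

$ innobackupex --user=root --password='yourpass' --stream=xbstream --parallel=4 --compress --compress-threads=2 /var/lib/mysql > /root/backup/backup-$(date +%F).xbstream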


Restoration

1. Extract the file using the xbstream -x command as below:

$ xbstream -x < /root/backup/backup.xbstream
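
Since xbstream extracts into the current working directory, it is cleaner to run it from an empty restore directory, for example (the /root/backup/restore path is just an illustration):

$ mkdir -p /root/backup/restore
$ cd /root/backup/restore
$ xbstream -x < /root/backup/backup.xbstream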

2. If you look into the current folder recursively, you will see many files ending with the .qp extension. These need to be decompressed using qpress, which is available in the Percona repository. Install qpress as below:

CentOS/Redhat:

$ yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm
$ yum install qpress

Debian/Ubuntu:

$ apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
$ echo 'deb http://repo.percona.com/apt VERSION main' >> /etc/apt/sources.list
$ echo 'deb-src http://repo.percona.com/apt VERSION main' >> /etc/apt/sources.list
$ apt-get update
$ apt-get install qpress

* Replace VERSION with your Debian/Ubuntu release codename
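
On most Debian/Ubuntu systems the codename can be filled in automatically, for example:

$ echo "deb http://repo.percona.com/apt $(lsb_release -sc) main" >> /etc/apt/sources.list
$ echo "deb-src http://repo.percona.com/apt $(lsb_release -sc) main" >> /etc/apt/sources.list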

3. Decompress each .qp file into its respective directory, then remove the .qp file once it has been extracted:

$ for i in $(find . -name "*.qp"); do qpress -vd "$i" "$(dirname "$i")" && rm -f "$i"; done
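
Recent Percona XtraBackup releases also ship a built-in decompress option that calls qpress for you; if your version supports it, the following achieves the same result (it may leave the .qp files behind, which you can then delete as above):

$ innobackupex --decompress ./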

4. You can now use the --apply-log option to prepare the MySQL data:

$ innobackupex --apply-log ./
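
If the server has memory to spare, the prepare phase can be sped up by giving it a larger working buffer; the 1G below is just an example value:

$ innobackupex --use-memory=1G --apply-log ./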

5. Stop the MySQL server:

$ service mysql stop

6. Move or remove the contents of the current data directory, then use the --copy-back option to let xtrabackup copy the prepared data back into the MySQL data directory:

$ rm -Rf /var/lib/mysql/*
$ innobackupex --copy-back ./
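
If you would rather keep the old data around until the restore is verified, moving it aside works too; note that --copy-back expects the target data directory to exist and be empty:

$ mv /var/lib/mysql /var/lib/mysql.old
$ mkdir /var/lib/mysql
$ innobackupex --copy-back ./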

7. Assign the correct ownership to the MySQL data directory:

$ chown -Rf mysql.mysql /var/lib/mysql

8. Start the MySQL server:

$ service mysql start
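
As a quick sanity check once the server is up (assuming you have the root credentials at hand):

$ mysql -u root -p -e "SHOW DATABASES;"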


Restoration completed!

MySQL: Calculate Read and Write Ratios in Percentage

I was assigned a task to help a client get an idea of his database usage in MySQL Cluster before migrating it to Galera Cluster. Galera scales well for reads but not for writes, so the first thing to do is calculate the read/write ratio:

SELECT @total_com := SUM(IF(variable_name IN ('Com_select', 'Com_delete', 'Com_insert', 'Com_update', 'Com_replace'), variable_value, 0)) AS `Total`,
 @total_reads := SUM(IF(variable_name = 'Com_select', variable_value, 0)) AS `Total reads`,
 @total_writes := SUM(IF(variable_name IN ('Com_delete', 'Com_insert', 'Com_update', 'Com_replace'), variable_value, 0)) as `Total writes`,
 ROUND((@total_reads / @total_com * 100),2) as `Reads %`,
 ROUND((@total_writes / @total_com * 100),2) as `Writes %`
FROM information_schema.GLOBAL_STATUS;

The output would be as below:

+------------+-------------+--------------+---------+----------+
| Total      | Total reads | Total writes | Reads % | Writes % |
+------------+-------------+--------------+---------+----------+
| 1932344732 |  1899878513 |     32466219 |   98.32 |     1.68 |
+------------+-------------+--------------+---------+----------+
1 row in set (0.00 sec)

The values above are cumulative since the last server restart, so if you have just restarted your MySQL server, you may need to wait at least several hours to get a more accurate picture.
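
If you want to sample the ratio periodically, the same statement can be run non-interactively; here it is assumed to be saved in a file named read_write_ratio.sql and that the client credentials come from ~/.my.cnf:

$ mysql -t < read_write_ratio.sql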

As a simple initial estimation: if the write percentage is below 50%, the schema should run fine in a Galera cluster. However, you also need to fully understand the application side. Hope this helps people out there!