Customize and Disable phpMyAdmin ‘Export’ Menu

In my development environment, we have two levels of phpMyAdmin users: the superuser (root) and the developer user. The superuser can access all features available in phpMyAdmin, while the developer user is the database user for the planet_shop database, with limitations as stated in the MySQL user privilege table.

The current problem is that the developer user, who uses phpMyAdmin to access and manage the database, is also able to export the database using the phpMyAdmin Export menu, as in the screenshot below:

pma_export

My boss wants this menu hidden and disabled for developers, to prevent them from dumping the MySQL data, which is strictly confidential. This feature should be accessible to the superuser only. To do this, I need to make some changes to the phpMyAdmin code, located under the /var/www/html/phpmyadmin directory on my web server. I am using the following variables:

OS: CentOS 6 64bit
phpMyAdmin web directory: /var/www/html/phpmyadmin
phpMyAdmin version: 3.4.3.2 (from the README)
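
Since we are editing phpMyAdmin core files (and a phpMyAdmin upgrade will overwrite these changes), it is worth backing up the affected files first. This is my own precaution, not part of the original steps:

$ cd /var/www/html/phpmyadmin
$ cp -a libraries/server_links.inc.php libraries/server_links.inc.php.bak
$ cp -a libraries/db_links.inc.php libraries/db_links.inc.php.bak
$ cp -a db_export.php db_export.php.bak
$ cp -a server_export.php server_export.php.bak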

1. We need to hide the Export menu in two places: libraries/server_links.inc.php and libraries/db_links.inc.php. Open /var/www/html/phpmyadmin/libraries/server_links.inc.php with a text editor and find the following lines (around line 67):

$tabs['export']['icon'] = 'b_export.png';
$tabs['export']['link'] = 'server_export.php';
$tabs['export']['text'] = __('Export');

and change it to:

if ($is_superuser) {
    $tabs['export']['icon'] = 'b_export.png';
    $tabs['export']['link'] = 'server_export.php';
    $tabs['export']['text'] = __('Export');
}

2. Then we need to hide the Export menu on the database page. Open /var/www/html/phpmyadmin/libraries/db_links.inc.php with a text editor and find the following lines (around line 107):

$tabs = array();
$tabs[] =& $tab_structure;
$tabs[] =& $tab_sql;
$tabs[] =& $tab_search;
$tabs[] =& $tab_qbe;
$tabs[] =& $tab_export;

and change it to:

$tabs = array();
$tabs[] =& $tab_structure;
$tabs[] =& $tab_sql;
$tabs[] =& $tab_search;
$tabs[] =& $tab_qbe;
if ($is_superuser) {
    $tabs[] =& $tab_export;
}

3. The first two steps only hide the Export tab from non-superusers. Now we need to disable the export pages themselves, starting with the database page. Open /var/www/html/phpmyadmin/db_export.php with a text editor and find the following lines:

// $sub_part is also used in db_info.inc.php to see if we are coming from
// db_export.php, in which case we don't obey $cfg['MaxTableList']
$sub_part = '_export';
require_once './libraries/db_common.inc.php';
$url_query .= '&goto=db_export.php';
require_once './libraries/db_info.inc.php';

And add the following lines after that:

if (!$is_superuser) {
    require './libraries/server_links.inc.php';
    echo '<h2>' . "\n"
       . PMA_getIcon('b_usrlist.png')
       . __('Privileges') . "\n"
       . '</h2>' . "\n";
    PMA_Message::error(__('No Privileges'))->display();
    require './libraries/footer.inc.php';
}

4. We also need to disable this on the server page. Open /var/www/html/phpmyadmin/server_export.php with a text editor and find the following lines:

/**
* Does the common work
*/
require_once './libraries/common.inc.php';
 
$GLOBALS['js_include'][] = 'export.js';

And add the following lines after that:

if (!$is_superuser) {
    require './libraries/server_links.inc.php';
    echo '<h2>' . "\n"
       . PMA_getIcon('b_usrlist.png')
       . __('Privileges') . "\n"
       . '</h2>' . "\n";
    PMA_Message::error(__('No Privileges'))->display();
    require './libraries/footer.inc.php';
}

 

Done. Now we can verify by logging in to phpMyAdmin as the developer user; you will notice that the Export menu is hidden:

pma_hide

 

If the user still tries to access the Export page via a direct URL, for example http://192.168.0.100/phpmyadmin/server_export.php, they will see the following error:

pma_nopriv

 

High Availability: cPanel with MySQL Cluster, Keepalived and HAProxy

I have successfully installed and integrated MySQL Cluster with HAProxy and Keepalived to provide a scalable MySQL service for a cPanel server running on CentOS 6.3 64bit. As you may know, cPanel has a feature called “Setup Remote MySQL server” which we can use to remotely access and control a MySQL server from cPanel.

This brings a big advantage: the cPanel server load is reduced tremendously because the mysqld service and its resources are served by a cluster of servers. The following picture shows my architecture:

cpanel_cluster

I will be using the following variables:

OS: CentOS 6.3 64bit
WHM/cPanel version: 11.34.0 (build 11)
MySQL root password: MhGGs4wYs

The Tricks

  • We will need to use the same MySQL root password on all servers, including the cPanel server
  • The cPanel server’s SSH key needs to be installed on all database servers to allow passwordless SSH login
  • All servers must have the same /etc/hosts entries
  • At least 4 servers are required for MySQL Cluster (2 SQL/management/LB nodes and 2 data nodes)
  • mysql1 (active) and mysql2 (passive) will share a virtual IP managed by Keepalived
  • mysql1 and mysql2 will also act as the load balancers, redirecting MySQL traffic from the cPanel server to mysql1 and mysql2
  • MySQL Cluster only supports the ndbcluster storage engine, so databases will be created with ndbcluster by default
  • On mysql1 and mysql2, MySQL will listen on port 3307 because 3306 will be used by HAProxy for load balancing

All Servers

1. In this post, I am going to turn off the firewall and SELinux on all servers:

$ service iptables stop
$ chkconfig iptables off
$ setenforce 0
$ sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

2. Install ntp using yum to make sure all servers’ clocks are in sync:

$ yum install ntp -y
$ ntpdate -u my.pool.ntp.org
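
Optionally (not part of the original steps), you can keep the clocks in sync continuously by enabling the ntpd service on boot:

$ chkconfig ntpd on
$ service ntpd start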

Prepare the cPanel Server

1. Let’s start by declaring all host names in /etc/hosts:

192.168.10.100     cpanel      cpanel.mydomain.com
192.168.10.101     mysql1      mysql1.mydomain.com
192.168.10.102     mysql2      mysql2.mydomain.com
192.168.10.103     mysql-data1 mysql-data1.mydomain.com
192.168.10.104     mysql-data2 mysql-data2.mydomain.com
192.168.10.110     mysql       mysql.mydomain.com    #Virtual IP for mysql service

2. Copy the /etc/hosts file to the other servers:

$ scp /etc/hosts mysql1:/etc
$ scp /etc/hosts mysql2:/etc
$ scp /etc/hosts mysql-data1:/etc
$ scp /etc/hosts mysql-data2:/etc

3. Set up an SSH key. This will allow passwordless SSH between the cPanel server and the MySQL servers:

$ ssh-keygen -t dsa

Just press ‘Enter’ for all prompts.

4. Copy the SSH key to other servers:

$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql2
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data2
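
To confirm passwordless login works before proceeding, run a quick remote command against each host; every line should print the hostname without asking for a password:

$ for h in mysql1 mysql2 mysql-data1 mysql-data2; do ssh root@$h hostname; done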

5. Set up the MySQL root password in WHM. Log in to WHM > SQL Services > MySQL Root Password, enter the MySQL root password and click “Change Password”.

6. Go to WHM > SQL Services > Additional MySQL Access Hosts and add the hosts that should be allowed to access the MySQL cluster, as below:

add_host

Data Nodes (mysql-data1 and mysql-data2)

1. Download and install the MySQL Cluster storage package:

$ cd /usr/local/src
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm

2. Create a MySQL configuration file at /etc/my.cnf and add the following lines. This configuration tells the storage node to communicate with mysql1 and mysql2 as the management nodes:

[mysqld]
ndbcluster
ndb-connectstring=mysql1,mysql2
 
[mysql_cluster]
ndb-connectstring=mysql1,mysql2
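
Note that config.ini (created on the SQL nodes in the next section) sets DataDir=/var/lib/mysql-cluster for the data nodes. If that directory does not exist yet, create it here so ndbd can start later (an assumption based on the DataDir value used below):

$ mkdir -p /var/lib/mysql-cluster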

 

SQL Nodes (mysql1 and mysql2)

1. Install the required packages using yum:

$ yum install perl libaio* pcre* popt* openssl openssl-devel gcc make -y

2. Download all the required packages for Keepalived, HAProxy and MySQL Cluster (management, tools, shared, client, server):

$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.7.tar.gz
$ wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.22.tar.gz
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm

3. Extract and compile Keepalived:

$ tar -xzf keepalived-1.2.7.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install

4. Extract and compile HAProxy:

$ tar -xzf haproxy-1.4.22.tar.gz
$ cd haproxy-*
$ make TARGET=linux26 ARCH=x86_64 USE_PCRE=1
$ make install

5. Install the MySQL packages in the following order (management > tools > shared > client > server):

$ cd /usr/local/src
$ rpm -Uhv MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm

6. Create a new directory for MySQL Cluster. We also need to create the cluster configuration file, config.ini, underneath it:

$ mkdir /var/lib/mysql-cluster
$ vim /var/lib/mysql-cluster/config.ini

And add the following lines:

[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
 
[ndb_mgmd]
NodeId=1
HostName=mysql1
 
[ndb_mgmd]
NodeId=2
HostName=mysql2
 
[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster
 
[ndbd]
NodeId=3
HostName=mysql-data1
 
[ndbd]
NodeId=4
HostName=mysql-data2
 
[mysqld]
NodeId=5
HostName=mysql1
 
[mysqld]
NodeId=6
HostName=mysql2

7. Create the MySQL configuration file at /etc/my.cnf and add the following lines:

[mysqld]
ndbcluster
port=3307
ndb-connectstring=mysql1,mysql2
default_storage_engine=ndbcluster
 
[mysql_cluster]
ndb-connectstring=mysql1,mysql2

Starting the Cluster

1. Start the MySQL Cluster management service:

For mysql1:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=1

For mysql2:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=2

2. Start the MySQL Cluster storage service on both data nodes (mysql-data1 & mysql-data2):

$ ndbd

3. Start the MySQL service (mysql1 & mysql2):

$ service mysql start

4. Log in to the MySQL console and run the following commands (mysql1 & mysql2):

mysql> use mysql;
mysql> alter table user engine=ndbcluster;
mysql> alter table db engine=ndbcluster;

5. Check the engines of the user and db tables in the mysql database; both should now appear as ndbcluster, as below:

mysql> SELECT table_name,engine FROM INFORMATION_SCHEMA.TABLES WHERE table_schema=DATABASE();
+---------------------------+------------+
| table_name                | engine     |
+---------------------------+------------+
| user                      | ndbcluster |
| columns_priv              | MyISAM     |
| db                        | ndbcluster |
| event                     | MyISAM     |
| func                      | MyISAM     |
| general_log               | CSV        |
| help_category             | MyISAM     |
| help_keyword              | MyISAM     |
| help_relation             | MyISAM     |
| help_topic                | MyISAM     |
| host                      | MyISAM     |
| ndb_apply_status          | ndbcluster |
| ndb_binlog_index          | MyISAM     |
| plugin                    | MyISAM     |
| proc                      | MyISAM     |
| procs_priv                | MyISAM     |
| servers                   | MyISAM     |
| slow_log                  | CSV        |
| tables_priv               | MyISAM     |
| time_zone                 | MyISAM     |
| time_zone_leap_second     | MyISAM     |
| time_zone_name            | MyISAM     |
| time_zone_transition      | MyISAM     |
| time_zone_transition_type | MyISAM     |
+---------------------------+------------+
24 rows in set (0.00 sec)

6. Check the management status on mysql1. You should see output similar to the following:

$ ndb_mgm -e show
Connected to Management Server at: mysql1:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.10.103 (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0, Master)
id=4 @192.168.10.104 (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.10.101 (mysql-5.1.66 ndb-7.1.25)
id=2 @192.168.10.102 (mysql-5.1.66 ndb-7.1.25)
 
[mysqld(API)] 2 node(s)
id=5 @192.168.10.101 (mysql-5.1.66 ndb-7.1.25)
id=6 @192.168.10.102 (mysql-5.1.66 ndb-7.1.25)

7. Change the MySQL root password to match the MySQL root password on the cPanel server (mysql1):

$ mysqladmin -u root password 'MhGGs4wYs'

8. Add the MySQL root password into root’s environment so we do not need to specify the password to access the MySQL console (mysql1 & mysql2):

$ vim /root/.my.cnf

And add the following lines:

[client]
user="root"
password="MhGGs4wYs"

9. Add a haproxy user without a password, to be used by HAProxy to check the availability of the real servers (mysql1):

mysql> GRANT USAGE ON *.* TO haproxy@'%';

10. Add the root user from any host so the cPanel server can access and control the MySQL cluster (mysql1):

mysql> GRANT USAGE ON *.* TO root@'%' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql1' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql2' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'%';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql1';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql2';

11. The last step: we need to allow GRANT privileges for root@'%' by running the following commands in the MySQL console (mysql1):

mysql> UPDATE mysql.user SET `Grant_priv` = 'Y' WHERE `User` = 'root';
mysql> FLUSH PRIVILEGES;

 

Configuring Virtual IP and Load Balancer (mysql1 & mysql2)

1. Configure HAProxy by creating the configuration file /etc/haproxy.cfg:

$ vim /etc/haproxy.cfg

And add the following lines:

defaults
    log global
    mode http
    retries 2
    option redispatch
    maxconn 4096
    contimeout 50000
    clitimeout 50000
    srvtimeout 50000
 
listen mysql_proxy 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcpka
    option mysql-check user haproxy
    server mysql1 192.168.10.101:3307 weight 1
    server mysql2 192.168.10.102:3307 weight 1
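
Before starting HAProxy later on, you can optionally syntax-check the configuration; it catches typos early:

$ haproxy -c -f /etc/haproxy.cfg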

2. Next, we need to configure the virtual IP. Open /etc/sysctl.conf and add the following line to allow binding to a non-local IP:

net.ipv4.ip_nonlocal_bind = 1

Then run the following command to apply the change:

$ sysctl -p

3. Create the Keepalived configuration file at /etc/keepalived.conf and add the following lines:

For mysql1:

vrrp_script chk_haproxy {
      script "killall -0 haproxy"    # verify the pid is exist or not
      interval 2                     # check every 2 seconds
      weight 2                       # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
      interface eth0                 # interface to monitor
      state MASTER
      virtual_router_id 51           # Assign one ID for this route
      priority 101                   # 101 on master, 100 on backup
      virtual_ipaddress {
            192.168.10.110           # the virtual IP
      }
      track_script {
            chk_haproxy
      }
}

For mysql2:

vrrp_script chk_haproxy {
      script "killall -0 haproxy"    # verify the haproxy process exists
      interval 2                     # check every 2 seconds
      weight 2                       # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
      interface eth0                 # interface to monitor
      state BACKUP                   # mysql2 starts as the backup
      virtual_router_id 51           # Assign one ID for this route
      priority 100                   # 101 on master, 100 on backup
      virtual_ipaddress {
            192.168.10.110           # the virtual IP
      }
      track_script {
            chk_haproxy
      }
}

4. Start HAProxy:

$ haproxy -D -f /etc/haproxy.cfg

5. Start Keepalived:

$ keepalived -f /etc/keepalived.conf

6. Add the following lines into /etc/rc.local to make sure Keepalived and HAProxy start on boot:

$ vim /etc/rc.local

And add the following lines:

/usr/local/sbin/haproxy -D -f /etc/haproxy.cfg
/usr/local/sbin/keepalived -f /etc/keepalived.conf

7. Check that the virtual IP is up on mysql1:

$ ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.110/32 scope global eth0

8. Verify on mysql2 that Keepalived is running in backup mode:

$ tail /var/log/messages
Dec 14 12:08:56 mysql2 Keepalived_vrrp[3707]: VRRP_Instance(VI_1) Entering BACKUP STATE

9. Check that HAProxy is listening on port 3306 and mysqld on port 3307:

$ netstat -tulpn | grep -e mysql -e haproxy
tcp   0   0   0.0.0.0:3306     0.0.0.0:*       LISTEN      3587/haproxy
tcp   0   0   0.0.0.0:3307     0.0.0.0:*       LISTEN      3215/mysqld
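
As a final sanity check of my own (not part of the original steps), connect through the virtual IP and see which backend answers; repeat it a few times and the reported hostname should alternate between mysql1 and mysql2 thanks to the round-robin balancing:

$ mysql -h 192.168.10.110 -uroot -p'MhGGs4wYs' -e 'SELECT @@hostname;'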

 

Setup Remote MySQL Server in cPanel Server

1. Go to WHM > SQL Services > Setup Remote MySQL server and enter the following details. Make sure the remote server address is the host name that resolves to the virtual IP configured in Keepalived:

Remote server address (IP address or FQDN): mysql.mydomain.com
Remote SSH Port                           : 22
Select authentication method              : Public Key (default)
Select an installed SSH Key               : id_dsa

2. Wait a moment and you will see the following output:

remote_success

3. MySQL Cluster is now integrated with WHM/cPanel. You may verify this by opening phpMyAdmin in WHM at WHM > SQL Services > phpMyAdmin; you should see that you are connected to the MySQL Cluster, as in the screenshot below:

my_cluster

Testing

We can test this MySQL high availability architecture by turning off the power completely to one of mysql1/mysql2 and one of mysql-data1/mysql-data2 at the same time. You will notice that the MySQL service is still available from cPanel’s point of view.

Here is phpMyAdmin for my test blog running on WordPress. You can see that the database tables were created with the ndbcluster engine:

blog_cluster

I have not tested this architecture on any production server yet, so I cannot guarantee that all WHM/cPanel SQL functionality works as expected. The following cPanel features have been tried and work well:

  • phpMyAdmin
  • cPanel MySQL features (MySQL Database and MySQL Database Wizard)

cPanel with CentOS 6 as Internet Gateway

I am going to install a web server running cPanel, with several database servers connected only to the internal network (192.168.10.0/24). Since I need to run yum installations on every box, I need internet access on each of the backend servers.

My problem is that I have only one public IP provided by my ISP. I have no choice but to add another role to my cPanel box running CentOS 6.3: it will act as an internet gateway so my database servers can have an internet connection during this deployment phase.

The following picture explains the architecture I am going to use:

Web Server (cPanel)

1. Since this server is going to be a gateway, we must allow IP forwarding in the kernel. Open /etc/sysctl.conf and change the following value:

net.ipv4.ip_forward = 1

2. Save the file and run the following command to apply the changes:

$ sysctl -p

3. Let’s clear the iptables rules first, as we are going to add different rules later:

$ iptables -F

4. We need to enable IP masquerading on the interface facing the internet, in my case eth0. We also need to accept all connections from/to the internal network (192.168.10.0/24):

$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ iptables -A FORWARD -d 192.168.10.0/24 -j ACCEPT 
$ iptables -A FORWARD -s 192.168.10.0/24 -j ACCEPT

5. Save the rules:

$ service iptables save

 

Database Servers

1. On every server, add the internal IP address into /etc/sysconfig/network-scripts/ifcfg-eth0 as below:

Database Server #1:

DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.10.101
NETMASK=255.255.255.0
NETWORK=192.168.10.0

Database Server #2:

DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.10.102
NETMASK=255.255.255.0
NETWORK=192.168.10.0

Database Server #3:

DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.10.103
NETMASK=255.255.255.0
NETWORK=192.168.10.0

2. Change the gateway to point to the web server (cPanel) by adding the following line into /etc/sysconfig/network:

GATEWAY=192.168.10.100

3. Add DNS resolvers into /etc/resolv.conf as below:

nameserver 8.8.8.8
nameserver 8.8.4.4

4. Restart the network service:

$ service network restart
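
After the restart, a quick way to confirm the gateway works from any database server is to test both raw IP connectivity and DNS resolution:

$ ping -c 2 8.8.8.8
$ ping -c 2 google.com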

 

Done! All the database servers should have internet connectivity once the network service restarts. One public IP shared among many servers? Not a problem!

 

Monitor MySQL Galera Cluster to Prevent Split-Brain

I have another MySQL Galera Cluster, running on Percona XtraDB Cluster, with 2 nodes plus 1 arbitrator; in total, I have 3 votes in the quorum. The expected problem when running a 2-node cluster is the possibility of split-brain if the arbitrator goes down, followed by the network switch going down at the same time. This would surely have a great impact on your database consistency and cluster availability.

To avoid this, we need to closely monitor the number of nodes seen by the cluster. With 3 nodes, everything is normal and we do nothing. With 2 nodes, we should send a warning notifying that one server is down, and with 1 node, we shut down the MySQL server to prevent split-brain from happening.

To check the number of nodes currently seen by the cluster in MySQL Galera, we can use this command:

$ mysql -e "show status like 'wsrep_cluster_size'";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+

We will create a BASH script which monitors this value and takes the appropriate action.

1. First of all, install sendmail and mailx. We need these in order to send the alert via email:

$ yum install sendmail mailx -y

2. Make sure sendmail starts on boot, and start the service now as well:

$ chkconfig sendmail on
$ service sendmail start
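
You can verify that mail delivery works before relying on the script (the recipient address below is a placeholder, use your own):

$ echo "sendmail test" | mail -s "Test alert from $(hostname)" admin@example.com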

3. Create the monitoring BASH script, named galera_monitor, under the /root/scripts directory using a text editor:

$ mkdir -p /root/scripts
$ vim /root/scripts/galera_monitor

And paste the following script (the recipient address is a placeholder, replace it with your own):

#!/bin/bash
## Monitor galera cluster size
 
## Where the alert should be sent to (placeholder, use your own address)
EMAIL="admin@example.com"
 
cluster_size=$(mysql -e "show status like 'wsrep_cluster_size'" | tail -1 | awk '{print $2}')
hostname=$(hostname)
error=$(tail -100 /var/lib/mysql/mysql-error.log)
 
SUBJECT1="ERROR: [$hostname] Galera Cluster Size"
SUBJECT2="WARNING: [$hostname] Galera Cluster Size"
EMAILMESSAGE="/tmp/emailmessage.txt"
 
echo "Cluster size result: $cluster_size" > $EMAILMESSAGE
echo "Latest error: $error" >> $EMAILMESSAGE
 
## Do nothing if the cluster size could not be retrieved (e.g. mysql is not running)
[ -z "$cluster_size" ] && exit 0
 
if [ "$cluster_size" -eq 1 ]; then
    /bin/mail -s "$SUBJECT1" "$EMAIL" < $EMAILMESSAGE
    /etc/init.d/mysql stop        # stop the mysql server to prevent split-brain
elif [ "$cluster_size" -eq 2 ]; then
    /bin/mail -s "$SUBJECT2" "$EMAIL" < $EMAILMESSAGE
fi

4. Add the root login credentials into /root/.my.cnf so the script can log in to the MySQL console automatically:

[client]
user=root
password=MyR00tP4ss

5. Change the permissions of /root/.my.cnf so it is only accessible by root:

$ chmod 400 /root/.my.cnf

6. Change the permissions of the script so it is executable:

$ chmod 755 /root/scripts/galera_monitor
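
At this point you can run the script once manually to make sure it executes cleanly; on a healthy 3-node cluster it should finish silently without sending any mail:

$ /bin/sh /root/scripts/galera_monitor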

7. Add the script into cron:

$ echo "* * * * * /bin/sh /root/scripts/galera_monitor" >> /var/spool/cron/root

8. Reload the cron daemon to apply the changes:

$ service crond reload

Done. You should receive an email every minute if your cluster size has dropped to 2, in which case you should do something to bring all 3 nodes back up. If the cluster size is 1, the script will stop the MySQL server.

Note: you should NOT enable this cron job while re-initializing your Galera cluster, as it will keep stopping MySQL. The script is only suitable for monitoring a production cluster.

CentOS 6: Install VPN PPTP Client – The Simple Way

I have a PPTP server running on a Mikrotik RouterBoard, and I need to connect one of my CentOS 6.3 boxes to this VPN to retrieve some information from an internal server. The VPN account has already been created on the PPTP server, so this post just shows how to connect from a CentOS CLI box.

I will be using the following variables:

Client OS: CentOS 6.3 64bit
PPTP Server: 192.168.100.1
Username: myvega
Password: CgK888ar$

1. Install PPTP using yum:

$ yum install pptp -y

2. Add the username and password to /etc/ppp/chap-secrets:

myvega     PPTPserver     CgK888ar$    *

The format is: [username][space][server name][space][password][space][allowed IP addresses]

3. Create a configuration file called vpn.myserver.org under the /etc/ppp/peers directory using a text editor:

$ vim /etc/ppp/peers/vpn.myserver.org

And add the following lines:

pty "pptp 192.168.100.1 --nolaunchpppd"
name myvega
remotename PPTPserver
require-mppe-128
file /etc/ppp/options.pptp
ipparam vpn.myserver.org

4. Load the ppp_mppe kernel module:

$ modprobe ppp_mppe

5. Make sure the following options are not commented out in /etc/ppp/options.pptp:

lock
noauth
refuse-pap
refuse-eap
refuse-chap
nobsdcomp
nodeflate
require-mppe-128

6. Connect to the VPN by executing the following command:

$ pppd call vpn.myserver.org

Done! You should be connected to the VPN server now. Let’s check our VPN interface status:

$ ip a | grep ppp
3: ppp0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1456 qdisc pfifo_fast state UNKNOWN qlen 3
link/ppp
inet 192.168.100.10 peer 192.168.100.1/32 scope global ppp0
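
If you need to reach other internal subnets through the tunnel, you may have to add a route via ppp0 manually; the subnet below is only an example, adjust it to your network:

$ ip route add 10.10.10.0/24 dev ppp0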

If you face any problems, look into /var/log/messages for any errors regarding the pppd service:

$ tail -f /var/log/messages | grep ppp
Dec 4 04:56:48 localhost pppd[1413]: pppd 2.4.5 started by root, uid 0
Dec 4 04:56:48 localhost pptp[1414]: anon log[main:pptp.c:314]: The synchronous pptp option is NOT activated
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
Dec 4 04:56:48 localhost pppd[1413]: Using interface ppp0
Dec 4 04:56:48 localhost pppd[1413]: Connect: ppp0  /dev/pts/1
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 137).
Dec 4 04:56:49 localhost pppd[1413]: CHAP authentication succeeded
Dec 4 04:56:49 localhost pppd[1413]: MPPE 128-bit stateless compression enabled
Dec 4 04:56:50 localhost pppd[1413]: local IP address 192.168.100.10
Dec 4 04:56:50 localhost pppd[1413]: remote IP address 192.168.100.1

To disconnect the VPN, just kill the pppd process:

$ killall pppd