Customize and Disable PHPmyAdmin ‘Export’ Menu

In my development environment, we have two levels of PHPmyAdmin users: the superuser (root) and the developer user. The superuser can access all features available in PHPmyAdmin, while the developer user is the database user for the planet_shop database, with limitations as defined in the MySQL user privilege table.

The current problem is that the developer user, who uses PHPmyAdmin to access and manage the database, is also able to export the database using the PHPmyAdmin Export menu, as in the screenshot below:

pma_export

My boss wants this menu hidden and disabled for the developer user, to prevent them from dumping the MySQL data, which is strictly confidential. This feature should be accessible to the superuser only. To do this, I need to make some changes to the PHPmyAdmin code, located under the /var/www/html/phpmyadmin directory on my web server. I am using the following variables:

OS: CentOS 6 64bit
PHPmyAdmin web directory: /var/www/html/phpmyadmin
PHPmyAdmin version: 3.4.3.2 (inside README)

1. We need to hide the Export menu in two places: libraries/server_links.inc.php and libraries/db_links.inc.php. Open /var/www/html/phpmyadmin/libraries/server_links.inc.php in a text editor and find the following lines (around line 67):

$tabs['export']['icon'] = 'b_export.png';
$tabs['export']['link'] = 'server_export.php';
$tabs['export']['text'] = __('Export');

and change them to:

if ($is_superuser) {
    $tabs['export']['icon'] = 'b_export.png';
    $tabs['export']['link'] = 'server_export.php';
    $tabs['export']['text'] = __('Export');
}

2. Then, we need to hide the Export menu on the database page. Open /var/www/html/phpmyadmin/libraries/db_links.inc.php in a text editor and find the following lines (around line 107):

$tabs = array();
$tabs[] =& $tab_structure;
$tabs[] =& $tab_sql;
$tabs[] =& $tab_search;
$tabs[] =& $tab_qbe;
$tabs[] =& $tab_export;

and change them to:

$tabs = array();
$tabs[] =& $tab_structure;
$tabs[] =& $tab_sql;
$tabs[] =& $tab_search;
$tabs[] =& $tab_qbe;
if ($is_superuser) {
    $tabs[] =& $tab_export;
}

3. The first two steps only hide the Export tab from non-superusers. Now we need to disable the feature itself on the database page. Open /var/www/html/phpmyadmin/db_export.php in a text editor and find the following lines:

// $sub_part is also used in db_info.inc.php to see if we are coming from
// db_export.php, in which case we don't obey $cfg['MaxTableList']
$sub_part = '_export';
require_once './libraries/db_common.inc.php';
$url_query .= '&goto=db_export.php';
require_once './libraries/db_info.inc.php';

And add the following lines after them:

if (!$is_superuser) {
    require './libraries/server_links.inc.php';
    echo '<h2>' . "\n"
       . PMA_getIcon('b_usrlist.png')
       . __('Privileges') . "\n"
       . '</h2>' . "\n";
    PMA_Message::error(__('No Privileges'))->display();
    require './libraries/footer.inc.php';
}

4. We also need to disable this on the server page. Open /var/www/html/phpmyadmin/server_export.php in a text editor and find the following lines:

/**
* Does the common work
*/
require_once './libraries/common.inc.php';
 
$GLOBALS['js_include'][] = 'export.js';

And add the following lines after them:

if (!$is_superuser) {
    require './libraries/server_links.inc.php';
    echo '<h2>' . "\n"
       . PMA_getIcon('b_usrlist.png')
       . __('Privileges') . "\n"
       . '</h2>' . "\n";
    PMA_Message::error(__('No Privileges'))->display();
    require './libraries/footer.inc.php';
}

 

Done. Now we can verify by logging in to PHPmyAdmin as the developer user; you will notice that the Export menu has been hidden:

pma_hide

 

If the user still tries to access the Export page using a direct URL, for example http://192.168.0.100/phpmyadmin/server_export.php, they will see the following error:

pma_nopriv
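The lockout can also be confirmed from the command line by fetching both export pages and grepping for the error string. This is a rough sketch only: it uses the example address above and skips PHPmyAdmin session handling, so adapt it to your authentication setup.

```shell
# classify_page: report whether a fetched page body shows the lockout error
classify_page() {
  case "$1" in
    *"No Privileges"*) echo locked ;;
    *)                 echo accessible ;;
  esac
}

PMA_URL="http://192.168.0.100/phpmyadmin"   # example address from this post
for page in server_export.php db_export.php; do
  body=$(curl -s --connect-timeout 5 "$PMA_URL/$page")
  echo "$page: $(classify_page "$body")"
done
```

Both pages should report "locked" when fetched as the developer user.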

 

High Availability: cPanel with MySQL Cluster, Keepalived and HAProxy

I have successfully installed and integrated MySQL Cluster with HAProxy and Keepalived to provide a scalable MySQL service for a cPanel server running on CentOS 6.3 64bit. As you may know, cPanel has a function called “Setup Remote MySQL server” which we can use to remotely access and control a MySQL server from cPanel.

This brings a big advantage: the cPanel server load will be reduced tremendously because the mysqld service and its resources will be served from a cluster of servers. The following picture shows my architecture:

cpanel_cluster

I will be using the following variables:

OS: CentOS 6.3 64bit
WHM/cPanel version: 11.34.0 (build 11)
MySQL root password: MhGGs4wYs

The Tricks

  • We need to use the same MySQL root password on all servers, including the cPanel server
  • The cPanel server’s SSH key needs to be installed on all database servers to allow passwordless SSH login
  • All servers must have the same /etc/hosts values
  • At least 4 servers are required for the MySQL Cluster (2 SQL/management/load balancer nodes and 2 data nodes)
  • mysql1 (active) and mysql2 (passive) will share a virtual IP managed by Keepalived
  • mysql1 and mysql2 will also act as load balancers, redirecting MySQL traffic from the cPanel server to mysql1 and mysql2
  • MySQL Cluster only supports the ndbcluster storage engine, so databases will be created with ndbcluster by default
  • On mysql1 and mysql2, MySQL will serve on port 3307 because 3306 will be used by HAProxy for load balancing

All Servers

1. In this post, I am going to turn off the firewall and SELinux on all servers:

$ service iptables stop
$ chkconfig iptables off
$ setenforce 0
$ sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

2. Install ntp using yum to make sure all servers’ clocks are in sync:

$ yum install ntp -y
$ ntpdate -u my.pool.ntp.org

Prepare the cPanel Server

1. Let’s start by declaring all host names in /etc/hosts:

192.168.10.100     cpanel      cpanel.mydomain.com
192.168.10.101     mysql1      mysql1.mydomain.com
192.168.10.102     mysql2      mysql2.mydomain.com
192.168.10.103     mysql-data1 mysql-data1.mydomain.com
192.168.10.104     mysql-data2 mysql-data2.mydomain.com
192.168.10.110     mysql       mysql.mydomain.com    #Virtual IP for mysql service

2. Copy the /etc/hosts file to the other servers:

$ scp /etc/hosts mysql1:/etc
$ scp /etc/hosts mysql2:/etc
$ scp /etc/hosts mysql-data1:/etc
$ scp /etc/hosts mysql-data2:/etc
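The four copies can also be done in one loop; the host list below is taken from the /etc/hosts entries above:

```shell
# Push /etc/hosts to every database server in one pass
DB_HOSTS="mysql1 mysql2 mysql-data1 mysql-data2"
for h in $DB_HOSTS; do
  scp /etc/hosts "$h":/etc/ || echo "copy to $h failed"
done
```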

3. Set up an SSH key. This will allow passwordless SSH between the cPanel server and the MySQL servers:

$ ssh-keygen -t dsa

Just press ‘Enter’ for all prompts.

4. Copy the SSH key to the other servers:

$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql2
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data1
$ ssh-copy-id -i ~/.ssh/id_dsa root@mysql-data2
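A quick sanity check that the keys landed correctly: BatchMode disables password prompts, so any host that still asks for a password shows up as FAILED. A small sketch:

```shell
# check_ssh: succeed only if passwordless root SSH works for the given host
check_ssh() {
  ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$1" true 2>/dev/null \
    && echo "$1: OK" || echo "$1: FAILED"
}

for h in mysql1 mysql2 mysql-data1 mysql-data2; do
  check_ssh "$h"
done
```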

5. Set up the MySQL root password in WHM. Log in to WHM > SQL Services > MySQL Root Password, enter the MySQL root password and click “Change Password”.

6. Add additional hosts in WHM > SQL Services > Additional MySQL Access Hosts to allow the required hosts to access the MySQL cluster, as below:

add_host

 

 

Data Nodes (mysql-data1 and mysql-data2)

1. Download and install the MySQL Cluster storage package:

$ cd /usr/local/src
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-storage-7.1.25-1.el6.x86_64.rpm

2. Create a MySQL configuration file at /etc/my.cnf and add the following lines. This configuration tells the storage node to communicate with mysql1 and mysql2 as the management nodes:

[mysqld]
ndbcluster
ndb-connectstring=mysql1,mysql2
 
[mysql_cluster]
ndb-connectstring=mysql1,mysql2

 

SQL Nodes (mysql1 and mysql2)

1. Install the required packages using yum:

$ yum install perl libaio* pcre* popt* openssl openssl-devel gcc make -y

2. Download all required packages: Keepalived, HAProxy and the MySQL Cluster packages (management, tools, shared, client, server):

$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.7.tar.gz
$ wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.22.tar.gz
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ wget http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.1/MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm

3. Extract and compile Keepalived:

$ tar -xzf keepalived-1.2.7.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install

4. Extract and compile HAProxy:

$ tar -xzf haproxy-1.4.22.tar.gz
$ cd haproxy-*
$ make TARGET=linux26 ARCH=x86_64 USE_PCRE=1
$ make install

5. Install the MySQL packages in the following order (management > tools > shared > client > server):

$ cd /usr/local/src
$ rpm -Uhv MySQL-Cluster-gpl-management-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-tools-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-shared-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-client-7.1.25-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-server-7.1.25-1.el6.x86_64.rpm

6. Create a new directory for the MySQL cluster. We also need to create the cluster configuration file config.ini underneath it:

$ mkdir /var/lib/mysql-cluster
$ vim /var/lib/mysql-cluster/config.ini

And add the following lines:

[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
 
[ndb_mgmd]
NodeId=1
HostName=mysql1
 
[ndb_mgmd]
NodeId=2
HostName=mysql2
 
[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster
 
[ndbd]
NodeId=3
HostName=mysql-data1
 
[ndbd]
NodeId=4
HostName=mysql-data2
 
[mysqld]
NodeId=5
HostName=mysql1
 
[mysqld]
NodeId=6
HostName=mysql2

7. Create the MySQL configuration file at /etc/my.cnf and add the following lines:

[mysqld]
ndbcluster
port=3307
ndb-connectstring=mysql1,mysql2
default_storage_engine=ndbcluster
 
[mysql_cluster]
ndb-connectstring=mysql1,mysql2

Starting the Cluster

1. Start the MySQL Cluster management service:

For mysql1:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=1

For mysql2:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini --ndb-nodeid=2

2. Start the MySQL Cluster storage service on both data nodes (mysql-data1 & mysql-data2):

$ ndbd

3. Start the mysql service (mysql1 & mysql2):

$ service mysql start

4. Log in to the MySQL console and run the following commands (mysql1 & mysql2):

mysql> use mysql;
mysql> alter table user engine=ndbcluster;
mysql> alter table db engine=ndbcluster;

5. Check the engines of the tables in the mysql database; the db and user tables should now appear as ndbcluster, as below:

mysql> SELECT table_name,engine FROM INFORMATION_SCHEMA.TABLES WHERE table_schema=DATABASE();
+---------------------------+------------+
| table_name                | engine     |
+---------------------------+------------+
| user                      | ndbcluster |
| columns_priv              | MyISAM     |
| db                        | ndbcluster |
| event                     | MyISAM     |
| func                      | MyISAM     |
| general_log               | CSV        |
| help_category             | MyISAM     |
| help_keyword              | MyISAM     |
| help_relation             | MyISAM     |
| help_topic                | MyISAM     |
| host                      | MyISAM     |
| ndb_apply_status          | ndbcluster |
| ndb_binlog_index          | MyISAM     |
| plugin                    | MyISAM     |
| proc                      | MyISAM     |
| procs_priv                | MyISAM     |
| servers                   | MyISAM     |
| slow_log                  | CSV        |
| tables_priv               | MyISAM     |
| time_zone                 | MyISAM     |
| time_zone_leap_second     | MyISAM     |
| time_zone_name            | MyISAM     |
| time_zone_transition      | MyISAM     |
| time_zone_transition_type | MyISAM     |
+---------------------------+------------+
24 rows in set (0.00 sec)

6. Check the management status on mysql1. You should see output similar to the following:

$ ndb_mgm -e show
Connected to Management Server at: mysql1:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.10.103 (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0, Master)
id=4 @192.168.10.104 (mysql-5.1.66 ndb-7.1.25, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.10.101 (mysql-5.1.66 ndb-7.1.25)
id=2 @192.168.10.102 (mysql-5.1.66 ndb-7.1.25)
 
[mysqld(API)] 2 node(s)
id=5 @192.168.10.101 (mysql-5.1.66 ndb-7.1.25)
id=6 @192.168.10.102 (mysql-5.1.66 ndb-7.1.25)

7. Change the MySQL root password to match the MySQL root password on the cPanel server (mysql1):

$ mysqladmin -u root password 'MhGGs4wYs'

8. Add the MySQL root password into root’s environment so we do not need to specify a password to access the MySQL console (mysql1 & mysql2):

$ vim /root/.my.cnf

And add the following lines:

[client]
user="root"
password="MhGGs4wYs"

9. Add a haproxy user without a password, to be used by HAProxy to check the availability of the real servers (mysql1):

mysql> GRANT USAGE ON *.* TO haproxy@'%';

10. Add the root user for any host so the cPanel server can access and control the MySQL cluster (mysql1):

mysql> GRANT USAGE ON *.* TO root@'%' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql1' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT USAGE ON *.* TO root@'mysql2' IDENTIFIED BY 'MhGGs4wYs';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'%';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql1';
mysql> GRANT ALL PRIVILEGES ON *.* TO root@'mysql2';

11. As the last step, we need to give the GRANT privilege to root@'%' by running the following commands in the MySQL console (mysql1). Since this updates the grant tables directly, we also need to flush privileges:

mysql> UPDATE mysql.user SET `Grant_priv` = 'Y' WHERE `User` = 'root';
mysql> FLUSH PRIVILEGES;
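Before wiring up HAProxy, it is worth confirming that the grants above actually allow a remote root login against each SQL node on port 3307. A hedged sketch (host names and password as used throughout this post):

```shell
# report: print a human-readable result for an exit status
report() { [ "$2" -eq 0 ] && echo "$1: root login OK" || echo "$1: root login FAILED"; }

for h in mysql1 mysql2; do
  mysql -h "$h" -P 3307 -u root -pMhGGs4wYs -e 'SELECT 1' >/dev/null 2>&1
  report "$h" $?
done
```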

 

Configuring Virtual IP and Load Balancer (mysql1 & mysql2)

1. Configure HAProxy by creating a configuration file at /etc/haproxy.cfg:

$ vim /etc/haproxy.cfg

And add the following lines:

defaults
    log global
    mode http
    retries 2
    option redispatch
    maxconn 4096
    contimeout 50000
    clitimeout 50000
    srvtimeout 50000
 
listen mysql_proxy 0.0.0.0:3306
    mode tcp
    balance roundrobin
    option tcpka
    option mysql-check user haproxy
    server mysql1 192.168.10.101:3307 weight 1 check
    server mysql2 192.168.10.102:3307 weight 1 check

2. Next, we need to configure the virtual IP. Open /etc/sysctl.conf and add the following line to allow binding of non-local IPs:

net.ipv4.ip_nonlocal_bind = 1

Then run the following command to apply the changes:

$ sysctl -p

3. Create the Keepalived configuration file at /etc/keepalived.conf and add the following lines:

For mysql1:

vrrp_script chk_haproxy {
      script "killall -0 haproxy"    # verify the pid is exist or not
      interval 2                     # check every 2 seconds
      weight 2                       # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
      interface eth0                 # interface to monitor
      state MASTER
      virtual_router_id 51           # Assign one ID for this route
      priority 101                   # 101 on master, 100 on backup
      virtual_ipaddress {
            192.168.10.110           # the virtual IP
      }
      track_script {
            chk_haproxy
      }
}

For mysql2:

vrrp_script chk_haproxy {
      script "killall -0 haproxy"    # check that the haproxy process exists
      interval 2                     # check every 2 seconds
      weight 2                       # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
      interface eth0                 # interface to monitor
      state BACKUP
      virtual_router_id 51           # Assign one ID for this route
      priority 100                   # 101 on master, 100 on backup
      virtual_ipaddress {
            192.168.10.110           # the virtual IP
      }
      track_script {
            chk_haproxy
      }
}

4. Start HAProxy:

$ haproxy -D -f /etc/haproxy.cfg

5. Start Keepalived:

$ keepalived -f /etc/keepalived.conf

6. Add the following lines into /etc/rc.local to make sure Keepalived and HAProxy start on boot:

$ vim /etc/rc.local

And add the following lines:

/usr/local/sbin/haproxy -D -f /etc/haproxy.cfg
/usr/local/sbin/keepalived -f /etc/keepalived.conf

7. Check that the virtual IP is up on mysql1:

$ ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
inet 192.168.10.101/24 brd 192.168.10.255 scope global eth0
inet 192.168.10.110/32 scope global eth0

8. Verify on mysql2 that Keepalived is running in backup mode:

$ tail /var/log/messages
Dec 14 12:08:56 mysql2 Keepalived_vrrp[3707]: VRRP_Instance(VI_1) Entering BACKUP STATE

9. Check that HAProxy is listening on port 3306 and mysqld on port 3307:

$ netstat -tulpn | grep -e mysql -e haproxy
tcp   0   0   0.0.0.0:3306     0.0.0.0:*       LISTEN      3587/haproxy
tcp   0   0   0.0.0.0:3307     0.0.0.0:*       LISTEN      3215/mysqld
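With both daemons up, repeated queries through the HAProxy port should alternate between the two mysqld back ends thanks to the round-robin balancing. A rough check, run from mysql1 (the reported hostnames should flip between mysql1 and mysql2):

```shell
# query_backend: ask whichever back end HAProxy picks for its hostname
query_backend() {
  mysql -h 127.0.0.1 -P 3306 -u root -pMhGGs4wYs -N -s -e 'SELECT @@hostname' 2>/dev/null \
    || echo unreachable
}

for i in 1 2 3 4; do query_backend; done
```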

 

Setup Remote MySQL Server in cPanel Server

1. Go to WHM > SQL Services > Setup Remote MySQL server and enter the following details. Make sure the remote server address is the virtual IP address set up in Keepalived on mysql1:

Remote server address (IP address or FQDN): mysql.mydomain.com
Remote SSH Port                           : 22
Select authentication method              : Public Key (default)
Select an installed SSH Key               : id_dsa

2. Wait for a while and you will see the following output:

remote_success

3. MySQL Cluster is now integrated with WHM/cPanel. You can verify this by opening PHPmyAdmin in WHM at WHM > SQL Services > PHPmyAdmin; you should see that you are connected to the MySQL Cluster, as in the screenshot below:

my_cluster

Testing

We can test our MySQL high-availability architecture by turning off the power completely on mysql1 or mysql2, and on mysql-data1 or mysql-data2, at the same time. You will notice that the MySQL service is still available from cPanel’s point of view.
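During the power-off test, a simple poll against the virtual MySQL address makes any failover window visible (hostname and credentials as used throughout this post):

```shell
# probe: one availability sample against the virtual MySQL address
probe() {
  mysql -h mysql.mydomain.com -u root -pMhGGs4wYs -e 'SELECT 1' >/dev/null 2>&1 \
    && echo "$(date '+%T') up" || echo "$(date '+%T') DOWN"
}

for i in 1 2 3 4 5; do probe; sleep 1; done
```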

Here is PHPmyAdmin for my test blog running on WordPress. You can see that the database was created with the ndbcluster engine:

blog_cluster

I have not tested this architecture on any production server yet, so I cannot assure that all WHM/cPanel SQL functionalities work as expected. The following cPanel features have been tried and work well:

  • PHPMyAdmin
  • cPanel MySQL features (MySQL Database and MySQL Database Wizard)

cPanel with CentOS 6 as Internet Gateway

I am going to install a web server running on cPanel with several database servers connected only to the internal network (192.168.10.0/24). Since I need to run some yum installations on every box, I need internet access on each of the backend servers.

My problem is that I have only 1 public IP, provided by my ISP. I have no choice but to add another role to my cPanel box running on CentOS 6.3: an internet gateway, so my database servers can have an internet connection during this deployment phase.

The following picture explains the architecture I am going to use:

Web Server (cPanel)

1. Since this server is going to be a gateway, we must enable IP forwarding in the kernel. Open /etc/sysctl.conf and change the following value:

net.ipv4.ip_forward = 1

2. Save the file and run following command to apply the changes:

$ sysctl -p

3. Let’s clear the iptables rules first, as we are going to add different rules later:

$ iptables -F

4. We need to enable IP masquerading on the interface facing the internet, in my case eth0. We also need to accept all connections from/to the internal network (192.168.10.0/24):

$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ iptables -A FORWARD -d 192.168.10.0/24 -j ACCEPT 
$ iptables -A FORWARD -s 192.168.10.0/24 -j ACCEPT

5. Save the rules:

$ service iptables save

 

Database Servers

1. On every server, add the internal IP address into /etc/sysconfig/network-scripts/ifcfg-eth0 as below:

Database Server #1:

DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.10.101
NETMASK=255.255.255.0
NETWORK=192.168.10.0

Database Server #2:

DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.10.102
NETMASK=255.255.255.0
NETWORK=192.168.10.0

Database Server #3:

DEVICE="eth0"
ONBOOT="yes"
IPADDR=192.168.10.103
NETMASK=255.255.255.0
NETWORK=192.168.10.0

2. Change the gateway to point to the web server (cPanel) by adding the following line into /etc/sysconfig/network:

GATEWAY=192.168.10.100

3. Add DNS resolvers into /etc/resolv.conf as below:

nameserver 8.8.8.8
nameserver 8.8.4.4

4. Restart network service:

$ service network restart
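A minimal sketch to confirm the result from any database server: the default route should point at the cPanel box, and an outside HTTP request should succeed (gateway IP from this post):

```shell
# check_route: is the default route via the expected gateway?
check_route() {
  ip route 2>/dev/null | grep -q "default via $1" \
    && echo "gateway $1: OK" || echo "gateway $1: missing"
}

check_route 192.168.10.100
curl -s --connect-timeout 5 -o /dev/null http://mirror.centos.org/ \
  && echo "internet: reachable" || echo "internet: unreachable"
```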

 

Done! All the database servers should have internet connectivity once the network service is restarted. One public IP shared among servers? Not a problem!

 

Monitor MySQL Galera Cluster from Split-Brain

I have another MySQL Galera Cluster running on Percona XtraDB Cluster, with 2 nodes and 1 arbitrator. In total, I have 3 votes in the quorum. The expected problem when running 2 nodes in a cluster is the possibility of split-brain if the arbitrator goes down, followed by the network switch going down at the same time. This would have a great impact on your database consistency and cluster availability.

To prevent this from happening, we need to closely monitor the number of nodes seen by the cluster. With 3 nodes, everything is normal and we do nothing. With 2 nodes, we send a warning notifying that one server is down, and with 1 node, we shut down the MySQL server to prevent split-brain.

To check the number of nodes seen by the cluster in MySQL Galera, we can use this command:

$ mysql -e "show status like 'wsrep_cluster_size'";
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
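For scripting, the value is easier to grab with the client’s -N (skip column names) and -s (silent) flags than by parsing the table output above; a small helper:

```shell
# get_cluster_size: print just the numeric wsrep_cluster_size value
get_cluster_size() {
  mysql -N -s -e "SHOW STATUS LIKE 'wsrep_cluster_size'" 2>/dev/null | awk '{print $2}'
}

size=$(get_cluster_size)
echo "cluster size: ${size:-unknown}"
```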

We will create a BASH script which monitors this value and takes the appropriate action.

1. First of all, install sendmail and mailx. We need these to send the alert via email:

$ yum install sendmail mailx -y

2. Make sure sendmail starts on boot, and start the service now as well:

$ chkconfig sendmail on
$ service sendmail start

3. Create the BASH monitoring script, named galera_monitor, in the /root/scripts directory using a text editor:

$ mkdir -p /root/scripts
$ vim /root/scripts/galera_monitor

And paste the following lines:

#!/bin/bash
## Monitor galera cluster size
 
## Where the alert should be sent to
EMAIL="[email protected]"
 
cluster_size=$(mysql -e "show status like 'wsrep_cluster_size'" | tail -1 | awk '{print $2}')
cluster_size=${cluster_size:-0}    # treat an unreachable server as size 0
hostname=$(hostname)
error=$(tail -100 /var/lib/mysql/mysql-error.log)
 
SUBJECT1="ERROR: [$hostname] Galera Cluster Size"
SUBJECT2="WARNING: [$hostname] Galera Cluster Size"
EMAILMESSAGE="/tmp/emailmessage.txt"
 
echo "Cluster size result: $cluster_size" > $EMAILMESSAGE
echo "Latest error: $error" >> $EMAILMESSAGE
 
if [ "$cluster_size" -eq 1 ]; then
    /bin/mail -s "$SUBJECT1" "$EMAIL" < $EMAILMESSAGE
    /etc/init.d/mysql stop                    # stop the mysql server to prevent split-brain
elif [ "$cluster_size" -eq 2 ]; then
    /bin/mail -s "$SUBJECT2" "$EMAIL" < $EMAILMESSAGE
fi

4. Add the root login credentials into /root/.my.cnf so the script can log in to the MySQL console automatically:

[client]
user=root
password=MyR00tP4ss

5. Change the permissions of /root/.my.cnf so it is accessible only by root:

$ chmod 400 /root/.my.cnf

6. Change the permissions of the script so it is executable:

$ chmod 755 /root/scripts/galera_monitor

7. Add the script into cron:

$ echo "* * * * * /bin/sh /root/scripts/galera_monitor" >> /var/spool/cron/root

8. Reload the cron daemon to apply the changes:

$ service crond reload
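The size-to-action mapping in galera_monitor can be dry-run in isolation by factoring the branch into a function. Nothing below sends mail or stops MySQL; it just prints the action the monitor would take for each size:

```shell
# decide_action: map a cluster size to the monitor's reaction
decide_action() {
  if [ "$1" -eq 1 ]; then
    echo "alert + stop mysql"     # split-brain risk: shut mysqld down
  elif [ "$1" -eq 2 ]; then
    echo "warn"                   # one member missing: email only
  else
    echo "ok"                     # full quorum: do nothing
  fi
}

for n in 1 2 3; do echo "size=$n -> $(decide_action "$n")"; done
```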

Done. You should receive an email every minute if your cluster size has dropped to 2, and you should act on it to bring all 3 nodes up. If the cluster size drops to 1, the script will stop the MySQL server.

Note: You should NOT enable this cron job while re-initializing your Galera cluster, as it will keep stopping MySQL. The script is only suitable for monitoring a production cluster.

CentOS 6: Install VPN PPTP Client – The Simple Way

I have a PPTP server running on a Mikrotik RouterBoard, and I need to connect one of my CentOS 6.3 boxes to this VPN to retrieve some information from an internal server. The VPN account has already been created on the PPTP server, and this post will just show how to connect from a CentOS CLI box.

I will be using the following variables:

Client OS: CentOS 6.3 64bit
PPTP Server: 192.168.100.1
Username: myvega
Password: CgK888ar$

1. Install PPTP using yum:

$ yum install pptp -y

2. Add the username and password inside /etc/ppp/chap-secrets:

myvega     PPTPserver     CgK888ar$    *

The format is: [username][space][server name][space][password][space][allowed IP addresses]

3. Create a configuration file called vpn.myserver.org under the /etc/ppp/peers directory using a text editor:

$ vim /etc/ppp/peers/vpn.myserver.org

And add the following lines:

pty "pptp 192.168.100.1 --nolaunchpppd"
name myvega
remotename PPTPserver
require-mppe-128
file /etc/ppp/options.pptp
ipparam vpn.myserver.org

4. Load the ppp_mppe kernel module:

$ modprobe ppp_mppe

5. Make sure the following options are not commented out in /etc/ppp/options.pptp:

lock
noauth
refuse-pap
refuse-eap
refuse-chap
nobsdcomp
nodeflate
require-mppe-128

6. Connect to the VPN by executing the following command:

$ pppd call vpn.myserver.org

Done! You should be connected to the VPN server now. Let’s check our VPN interface status:

$ ip a | grep ppp
3: ppp0:  mtu 1456 qdisc pfifo_fast state UNKNOWN qlen 3
link/ppp
inet 192.168.100.10 peer 192.168.100.1/32 scope global ppp0

If you face any problems, look into /var/log/messages for any errors regarding the pppd service:

$ tail -f /var/log/messages | grep ppp
Dec 4 04:56:48 localhost pppd[1413]: pppd 2.4.5 started by root, uid 0
Dec 4 04:56:48 localhost pptp[1414]: anon log[main:pptp.c:314]: The synchronous pptp option is NOT activated
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
Dec 4 04:56:48 localhost pppd[1413]: Using interface ppp0
Dec 4 04:56:48 localhost pppd[1413]: Connect: ppp0  /dev/pts/1
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 137).
Dec 4 04:56:49 localhost pppd[1413]: CHAP authentication succeeded
Dec 4 04:56:49 localhost pppd[1413]: MPPE 128-bit stateless compression enabled
Dec 4 04:56:50 localhost pppd[1413]: local IP address 192.168.100.10
Dec 4 04:56:50 localhost pppd[1413]: remote IP address 192.168.100.1

To disconnect the VPN, just kill the pppd process:

$ killall pppd

Enable Intel 82579LM NIC in VMware ESXi 5.0

We have just bought a new server with a Supermicro X9SCL-F motherboard for our backup server. This server comes with 2 NICs:

  • Intel 82579LM Gigabit
  • Intel 82574L Gigabit

Unfortunately, once the hypervisor installation completed, only one network interface was detected by VMware ESXi 5.0: the Intel 82574L port. Our architecture requires 2 different cards so we can use them for fault tolerance, giving us high-availability features.

What we need to do is basically this:

  1. Download the driver here: http://dl.dropbox.com/u/27246203/E1001E.tgz
  2. Use ESXi-Customizer to merge the driver and generate a new VMware installation ISO
  3. Burn the custom ISO into disk or USB drive
  4. Reinstall the server

 

Using ESXi-Customizer

1. Download it from here: http://esxi-customizer.googlecode.com/files/ESXi-Customizer-v2.7.1.exe

2. Double click on it and extract the files. Open the folder (ESXi-Customizer-v2.7.1) and double click at ESXi-Customizer.cmd

3. You will see the following window. Enter the required details as in the screenshot below:

Note: My installation ISO is VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64

4. Click Run. The process will start and you will see the following prompt:

Just accept it by clicking “Yes”.

5. Once finished, you will find your new ISO as ESXi-5.x-Custom.iso. Use this new ISO for the VMware ESXi hypervisor installation.

After the installation finishes, you can verify the result under vSphere > host > Configuration > Network Adapters; you should see something similar to the screenshot below:

CentOS 6: Install MySQL Cluster – The Simple Way

In this post I am going to install a MySQL cluster following the architecture below:

 

 

A MySQL cluster consists of 3 types of nodes:

  • Data node (mysql-data1 & mysql-data2)
  • Sql daemon node (mysql-mysqld1 & mysql-mysqld2)
  • Management node (mysql-management)

The data nodes hold the database, which is replicated automatically across all data nodes. The SQL daemon nodes are the interface between the database and the clients; they serve queries by fetching data from the data nodes, acting like a “gateway”. The management node is required to monitor and manage the whole cluster. The recommended minimum setup for high availability and scalability is 5 servers, as highlighted in the picture above. I will be using CentOS 6.3 64bit for all servers.

All Servers

1. SELinux must be disabled on all servers. Change the SELinux configuration file at /etc/sysconfig/selinux:

SELINUX=disabled

2. Disable the firewall on all servers:

$ service iptables stop
$ chkconfig iptables off
$ setenforce 0

3. Entries in /etc/hosts on all servers should be as below (IP address first, then host name):

192.168.1.21     web-server
192.168.1.51     mysql-mysqld1
192.168.1.52     mysql-mysqld2
192.168.1.53     mysql-management
192.168.1.54     mysql-data1
192.168.1.55     mysql-data2

Management Node

1. Download and install the MySQL Cluster management and tools packages:

$ cd /usr/local/src
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-management-7.0.35-1.rhel5.x86_64.rpm
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-tools-7.0.34-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-management-7.0.35-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-tools-7.0.34-1.rhel5.x86_64.rpm

2. Create the mysql-cluster directory and configuration file config.ini:

$ mkdir -p /var/lib/mysql-cluster
$ vim /var/lib/mysql-cluster/config.ini

And add the following lines:

[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
 
[ndb_mgmd]
HostName=mysql-management
 
[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster
 
[ndbd]
HostName=mysql-data1
 
[ndbd]
HostName=mysql-data2
 
[mysqld]
HostName=mysql-mysqld1
 
[mysqld]
HostName=mysql-mysqld2

Data Nodes

1. The following steps should be executed on both data nodes (mysql-data1 and mysql-data2). Download and install the MySQL Cluster storage package:

$ cd /usr/local/src
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-storage-7.0.35-1.rhel5.x86_64.rpm 
$ rpm -Uhv MySQL-Cluster-gpl-storage-7.0.35-1.rhel5.x86_64.rpm

2. Add the following lines to /etc/my.cnf:

[mysqld]
ndbcluster
ndb-connectstring=mysql-management
 
[mysql_cluster]
ndb-connectstring=mysql-management

SQL Nodes

1. The following steps should be executed on both SQL nodes (mysql-mysqld1 and mysql-mysqld2). Remove mysql-libs using yum:

$ yum remove mysql-libs -y

2. Install the required package using yum:

$ yum install libaio -y

3. Download the MySQL Cluster client, shared and server packages from the MySQL download site:

$ cd /usr/local/src
$ wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-client-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
$ wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-shared-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
$ wget http://www.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-server-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/

4. Install all packages:

$ rpm -Uhv MySQL-Cluster-*

5. Add the following lines to /etc/my.cnf:

[mysqld]
ndbcluster
ndb-connectstring=mysql-management
default_storage_engine=ndbcluster
 
[mysql_cluster]
ndb-connectstring=mysql-management

Start the Cluster

1. To start the cluster, we must follow this order:

Management Node > Data Node > SQL Node

2. So, log in to the management node (mysql-management) and execute the following command:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.1.63 ndb-7.0.35
2012-11-22 07:36:55 [MgmtSrvr] INFO -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2012-11-22 07:36:55 [MgmtSrvr] INFO -- Sucessfully created config directory

3. Next, start the ndbd service on the first data node (mysql-data1):

$ ndbd
2012-11-22 07:37:24 [ndbd] INFO -- Angel connected to 'mysql-management:1186'
2012-11-22 07:37:24 [ndbd] INFO -- Angel allocated nodeid: 2

4. Then start the ndbd service on the second data node (mysql-data2):

$ ndbd
2012-11-22 07:37:24 [ndbd] INFO -- Angel connected to 'mysql-management:1186'
2012-11-22 07:37:24 [ndbd] INFO -- Angel allocated nodeid: 3

5. Next, start the mysql service on the first SQL node (mysql-mysqld1):

$ service mysql start

6. Finally, start the mysql service on the second SQL node (mysql-mysqld2):

$ service mysql start
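Since the order matters, the whole start sequence can be wrapped in a small script. This is only a sketch: it assumes passwordless root SSH to the hostnames defined earlier in /etc/hosts, and with DRY_RUN=1 (the default here) it only prints what it would run:

```shell
#!/bin/sh
# Sketch: start the whole cluster in the required order.
# Assumes passwordless root SSH to the hostnames in /etc/hosts.
DRY_RUN=${DRY_RUN:-1}

run() {
    # run <host> <command> -- honours DRY_RUN
    if [ "$DRY_RUN" = 1 ]; then
        echo "ssh $1 '$2'"
    else
        ssh "$1" "$2"
    fi
}

start_cluster() {
    # 1) management node first
    run mysql-management "ndb_mgmd -f /var/lib/mysql-cluster/config.ini"
    # 2) then both data nodes
    run mysql-data1 "ndbd"
    run mysql-data2 "ndbd"
    # 3) SQL nodes last
    run mysql-mysqld1 "service mysql start"
    run mysql-mysqld2 "service mysql start"
}

start_cluster
```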

Monitor the Cluster

Monitoring the cluster requires you to log in to the management server. To check the overall status of the cluster:

$ ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.54 (mysql-5.1.63 ndb-7.0.35, Nodegroup: 0, Master)
id=3 @192.168.1.55 (mysql-5.1.63 ndb-7.0.35, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.53 (mysql-5.1.63 ndb-7.0.35)
 
[mysqld(API)] 2 node(s)
id=4 @192.168.1.51 (mysql-5.5.27 ndb-7.2.8)
id=5 @192.168.1.52 (mysql-5.5.27 ndb-7.2.8)

To check the data nodes' status:

$ ndb_mgm -e "all status"
Connected to Management Server at: localhost:1186
Node 2: started (mysql-5.1.63 ndb-7.0.35)
Node 3: started (mysql-5.1.63 ndb-7.0.35)

To check the memory usage of data nodes:

$ ndb_mgm -e "all report memory"
Connected to Management Server at: localhost:1186
Node 2: Data usage is 0%(23 32K pages of total 8192)
Node 2: Index usage is 0%(20 8K pages of total 16416)
Node 3: Data usage is 0%(23 32K pages of total 8192)
Node 3: Index usage is 0%(20 8K pages of total 16416)
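For unattended monitoring, the report above is easy to parse. Here is a sketch that flags any node whose data or index usage crosses a threshold (the 80% limit is an arbitrary choice):

```shell
#!/bin/sh
# Warn when any data node's Data or Index usage reaches THRESHOLD percent.
THRESHOLD=80

check_memory() {
    # reads the output of: ndb_mgm -e "all report memory"
    awk -v max="$THRESHOLD" '
        /usage is/ {
            pct = $6                 # e.g. "0%(23"
            sub(/%.*/, "", pct)      # keep just the number
            if (pct + 0 >= max)
                printf "WARNING: %s %s %s usage at %s%%\n", $1, $2, $3, pct
        }'
}

# normally: ndb_mgm -e "all report memory" | check_memory
```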

Stopping the Cluster

1. To stop the cluster, we must follow this order:

SQL Node > Management Node / Data Node

2. Log in to both SQL nodes (mysql-mysqld1 and mysql-mysqld2) and run the following command:

$ service mysql stop

3. Login to management node (mysql-management) and run following command:

$ ndb_mgm -e shutdown

 

Done! You should now be able to create or import a database on one of the SQL nodes. You can put a load balancer in front of the SQL nodes to take advantage of the performance and high availability.

Notes

You may notice that the distribution version I installed is “rhel5”. You can get the “el6” distribution package at this page: http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.2/ and search for any “el6” package name.

High Availability: Configure Piranha for HTTP, HTTPS and MySQL

Piranha is a simple yet powerful tool to manage virtual IP and service with its web-based GUI.

Following my previous post on how to install and configure Piranha for HTTP service (http://blog.secaserver.com/2012/07/centos-configure-piranha-load-balancer-direct-routing-method/), in this post we will complete the Piranha configuration with HTTP and HTTPS load balancing using direct routing with firewall marks, and MySQL load balancing using direct routing only.

Users will access HTTP/HTTPS via the virtual public IP 130.44.50.120, while the web servers will access the MySQL service via the virtual private IP 192.168.100.30. Kindly refer to the picture below for the full architecture:

 

All Servers

SELinux must be turned off on all servers. Change the SELinux configuration file at /etc/sysconfig/selinux:

SELINUX=disabled

Load Balancers

1. All steps should be done on both load balancers unless specified otherwise. Install Piranha and the other required packages using yum:

$ yum install piranha ipvsadm mysql -y

2. Open firewall ports as below:

$ iptables -A INPUT -m tcp -p tcp --dport 3636 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 80 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 443 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 539 -j ACCEPT
$ iptables -A INPUT -m udp -p udp --dport 161 -j ACCEPT

3. Start the required services and make sure they will start automatically on boot:

$ service piranha-gui start
$ chkconfig piranha-gui on
$ chkconfig pulse on

4. Run the following command to set a password for user piranha. This will be used when accessing the web-based configuration tool:

$ piranha-passwd

5. Turn on IP forwarding. Open /etc/sysctl.conf and make sure the following line has the value 1:

net.ipv4.ip_forward = 1

Then run the following command to activate it:

$ sysctl -p

6. Check whether the iptables kernel modules are loaded properly:

$ lsmod | grep ip_tables
ip_tables 17733 3 iptable_filter,iptable_mangle,iptable_nat

7. Since we will serve HTTP and HTTPS from the same servers, we need to group the traffic so it is forwarded to the same destination. To achieve this, we mark the packets using iptables so they are recognized as one service. Set iptables rules to mark all packets destined for the virtual IP with firewall mark “80”:

$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 80 -j MARK --set-mark 80
$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 443 -j MARK --set-mark 80

Load Balancer #1

1. Check that the IP addresses are correctly set up:

$ ip a | grep inet
inet 130.44.50.121/28 brd 110.74.131.15 scope global eth0
inet 192.168.100.41/24 brd 192.168.10.255 scope global eth1

2. Log in to Piranha at http://130.44.50.121:3636/ as user piranha, with the password set up in step #4 of the Load Balancers section.

3. Enable redundancy. Go to Piranha > Redundancy > Enable.

4. Enter the IP information as below:

Redundant server public IP     : 130.44.50.122
Monitor NIC links for failures : Enabled
Use sync daemon                : Enabled

Click ‘Accept’.

5. Go to Piranha > Virtual Servers > Add > Edit. Add information as below and click ‘Accept’:

 

6. Next, go to Real Server. Here we will enter the IP addresses of all real servers that serve HTTP. Fill in the required information as below:

7. Now repeat a similar setup for HTTPS. Just change the ‘Application port’ to 443, and for each Real Server change the destination port to 443.

8. For MySQL virtual server, enter information as below:

 

9. For MySQL real servers, enter information as below:

 

10. Configure monitoring script for MySQL virtual server. Click on ‘Monitoring Script’ and configure as below:

 

11. Set up the monitoring script for MySQL:

$ vim /root/mysql_mon.sh

And add the following lines:

#!/bin/sh
# Health check called by nanny; $1 is the real server's IP address.
USER=monitor
PASS=M0Npass5521
CMD=/usr/bin/mysqladmin

# "mysqladmin ping" prints "mysqld is alive" when the server answers
IS_ALIVE=`$CMD -h "$1" -u $USER -p$PASS ping | grep -c "alive"`

if [ "$IS_ALIVE" = "1" ]; then
    echo "UP"
else
    echo "DOWN"
fi

12. Change the script permission to executable:

$ chmod 755 /root/mysql_mon.sh

13. Now copy over the script and Piranha configuration file to load balancer #2:

$ scp /etc/sysconfig/ha/lvs.cf lb2:/etc/sysconfig/ha/lvs.cf
$ scp /root/mysql_mon.sh lb2:/root/

14. Restart Pulse to activate the Piranha configuration in LB#1:

$ service pulse restart

Load Balancer #2

On this server, we just need to enable and restart the pulse service:

$ chkconfig pulse on
$ service pulse restart

Database Cluster

1. We need to allow the MySQL monitoring user used by nanny (the load balancer) in the MySQL cluster. Log in to the MySQL console on one of the servers and enter the following SQL command:

mysql> GRANT USAGE ON *.* TO 'monitor'@'%' IDENTIFIED BY 'M0Npass5521';

2. Add the virtual IP manually using iproute:

$ /sbin/ip addr add 192.168.100.30 dev eth1

3. Add the following entry to /etc/rc.local to make sure the virtual IP comes up after boot:

$ echo '/sbin/ip addr add 192.168.100.30 dev eth1' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #2 again to bring the virtual IP up manually. VIPs cannot be brought up automatically as part of the normal interface configuration.
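One way to soften this caveat is to make the add idempotent, so the same snippet is safe to run from /etc/rc.local and again after an interface restart. A sketch, using the VIP and interface from this setup:

```shell
#!/bin/sh
# Add the MySQL virtual IP only if it is not already configured.
VIP=192.168.100.30
IFACE=eth1

has_vip() {
    # reads "ip addr show" output on stdin
    grep -qw "$VIP"
}

if ! ip addr show dev "$IFACE" 2>/dev/null | has_vip; then
    /sbin/ip addr add "$VIP" dev "$IFACE" 2>/dev/null \
        || echo "could not add $VIP on $IFACE (interface missing or not root?)"
fi
```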

4. Check the IPs on the server. The example below was taken from server Mysql1:

$ ip a | grep inet
inet 130.44.50.127/24 brd 130.44.50.255 scope global eth0
inet 192.168.100.33/24 brd 192.168.100.255 scope global eth1
inet 192.168.100.30/32 scope global eth1

Web Cluster

1. On every web server, we need to install a package called arptables_jf using yum. We will use this to manage our ARP table entries and rules:

$ yum install arptables_jf -y

2. Add the following rules on each respective server:

Web1:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.123

Web 2:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.124

Web 3:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.125

3. Enable arptables_jf to start on boot, save the rules and restart the service:

$ service arptables_jf save
$ chkconfig arptables_jf on
$ service arptables_jf restart

4. Add the virtual IP manually on the server using the iproute command as below:

$ /sbin/ip addr add 130.44.50.120 dev eth0

5. Add the following entry to /etc/rc.local to make sure the virtual IP comes up after boot:

$ echo '/sbin/ip addr add 130.44.50.120 dev eth0' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #4 again to bring the virtual IP up manually. VIPs cannot be brought up automatically as part of the normal interface configuration.

6. Check the IPs on the server. The example below was taken from server Web1:

$ ip a | grep inet
inet 130.44.50.123/28 brd 110.74.131.15 scope global eth0
inet 130.44.50.120/32 scope global eth0
inet 192.168.100.21/24 brd 192.168.100.255 scope global eth1

You now have a complete high-availability MySQL and HTTP/HTTPS service with automatic failover and load balancing, provided by Piranha using the direct routing method.

In this tutorial I am not focusing on HTTPS, because in this test environment I do not have SSL set up correctly. You may, however, use the following Bash script to monitor HTTPS from Piranha (nanny):

#!/bin/bash
# HTTPS health check for nanny; $1 is the real server's IP address.

if [ $# -eq 0 ]; then
        echo "host not specified"
        exit 1
fi

# --insecure: skip certificate validation, since the check hits the real
# server's IP rather than the certificate's hostname
curl -s --insecure \
        --cert /etc/crt/hostcert.pem \
        --key /etc/crt/hostkey.pem \
        https://${1}:443 | grep "" \
        &> /dev/null

if [ $? -eq 0 ]; then
        echo "UP"
else
        echo "DOWN"
fi

I hope this tutorial could be useful for some guys out there!

Build Low-cost Call Center using Elastix and Asterisk (Part 2)

This post is a continuation of the previous post: http://blog.secaserver.com/2012/10/build-low-cost-call-center-elastix-asterisk-1/

Now we should have our PBX ready to receive calls, but we have not yet configured how calls and agents are managed. We also need a system to monitor agents' activities. We can achieve this using the Call Center module inside Elastix. Install it by going to Elastix > Addons > Call Center > Install, and wait until the process finishes.

Configure Call Center

1. Create agents. An agent is the person who will log in to the call center system and answer the calls in the queue. Go to Elastix > Call Center > Agent Options > Show Filter > New Agent. Enter the agent's details, for example:

Agent Number: 501
Agent Name: Mark Derp
Password: 123456
Retype Password: 123456

Click Save and add another agent's details:

Agent Number: 502
Agent Name: Karim Benz
Password: 121212
Retype Password: 121212

2. Create a group. We need to allow agents to log in to our Elastix system to view the agent console. This console will tell the agent which queue an incoming call comes from, the call duration, what type of call is coming in and much more. This will be configured later. Go to Elastix > System > Users > Groups > Create New Groups. Enter the information as below:

Group: Agent
Description: Call Center Agents

3. Assign Agent Console to Agent group. Go to Elastix > System > Users > Group Permissions > go to page 4 > tick on Agent Console > Save Selected as Accessible.

4. Create users. These will be used by the call center agents to log in to the Elastix system to view campaigns, calls and the phone book. Go to Elastix > System > Users > Create New User and enter agent #1's details as below:

Login: mark
Name: Mark Derp
Password: q1w2e3
Retype Password: q1w2e3
Group: Agent
Extension: 201

Click Save and create another user for Agent #2 as below:

Login: karim
Name: Karim Benz
Password: hgp4ss
Retype Password: hgp4ss
Group: Agent
Extension: 202

5. Create a form. This form will be used to enter customers' details when they call in, which is required for reporting. With it, agents can fill in details such as the call description and remarks. Go to Elastix > Call Center > Forms > Show Filter > Create New Form. Enter the details as in the screenshot below:

 

6. Select a queue. We need to select which queue will be used inside our call center and activate it. I will activate my SalesCallQueue, which was created in the previous post. Go to Elastix > Ingoing Calls > Queues > Show Filter > Select Queue > 301 SalesCallQueue and click Save.

7. Now we can create a campaign. Every incoming and outgoing call that agents receive or make needs to go through a campaign. Inside this campaign, we will set which queue and which form to use, plus some description of the caller type. Go to Elastix > Call Center > Ingoing Calls > Ingoing Campaigns > Show Filter > Create New Campaign and enter the required details as in the screenshot below:

Click Save, and you are done configuring the call center agents, call routes, IVR and call queue. We can now try out the call center with our agents.

 

Install & Configure Softphone (SIP Client)

There are a lot of SIP clients available which you can use for this purpose. I am going to use X-Lite. You can download the software here: http://www.counterpath.com/x-lite-5-for-windows-download.html

1. Log in to the call center agent's PC, then download and install the software. We will be using this application to connect to the PBX server, which will route calls to the PC based on the extension number. For agent #1's PC, log in as extension 201 with password [email protected] as below:

For agent #2's laptop, log in as extension 202 with password [email protected], similar to the screenshot above. Your extension (think of it as your phone) should now be registered with the PBX system.

NOTES: You are not logged in as an agent yet in this step! You have only registered your phone. Just think of this softphone as your normal desk phone.

2. Log in to Elastix's Agent Console at https://192.168.0.70 using the login created in step #4 under the Configure Call Center section.

3. Now select the Agent Number and Extension that have been set up on the PC for agent #1, as in the screenshot below:

4. After you click Enter, you should receive a call from the PBX server (the caller ID will be Anonymous) and you will be asked to enter the agent's password. Enter the password that you set for this agent in step #1 under the Configure Call Center section using the PC keyboard, followed by the ‘#’ (shift + 3) key. You will now be logged in as an agent and redirected to the console as below:

 

Testing and Costing

Now call the number associated with the direct line from a mobile phone. You will be greeted by the digital receptionist with its IVR menu. Select the appropriate options and your call will be put in the queue; the respective call center agent will then automatically get the call and hear your voice. The ‘Hangup’ and ‘Transfer’ buttons in the agent console are only available while the agent has an active call.

As for the total cost of this call center, here is my calculation:

===================================
Stuff                |   Cost (MYR)
===================================
Phone line setup fee |   50 x 4
Phone line rental    |   800/month (200/month x 4)
Server Hardware      |   1400
Telephony Card       |   1980
Agent's Phone        |   0
Agent's Headset      |   250 x 2
PBX system           |   0
Call center system   |   0
===================================
Total                |   4080
===================================
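The one-off total can be sanity-checked with shell arithmetic (the monthly line rental of MYR 800 is recurring, so it is excluded from the one-off sum):

```shell
#!/bin/sh
# Verify the one-off setup cost from the table above (figures in MYR).
SETUP=$((50 * 4))       # phone line setup fee, 4 lines
HARDWARE=1400           # server hardware
CARD=1980               # telephony card
HEADSETS=$((250 * 2))   # agents' headsets
TOTAL=$((SETUP + HARDWARE + CARD + HEADSETS))

echo "One-off total: MYR $TOTAL"
```

This prints "One-off total: MYR 4080", matching the table.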

As stated in the title of this post, I have achieved my objective of building a call center with the lowest cost possible, using mostly open-source tools. My total setup cost is only MYR 4080 (around USD 1330), with a monthly cost for the telco lines of MYR 800/month (around USD 261/month).

Build Low-cost Call Center using Elastix and Asterisk (Part 1)

My company requires me to build a simple call center that:

  • Can receive incoming calls from customers
  • Has a digital receptionist with interactive voice response (IVR)
  • Can make outgoing calls
  • Supports call queues
  • Has 2 call center agents working to attend all calls

To achieve this, we need to know how many concurrent calls we usually get during peak hours. In my case, we usually get/make 3 concurrent calls at the same time, so we will require 4 direct lines from the telco provider. The steps will be as below:

  1. Apply for 4 direct phone lines from the telco provider.
  2. Apply for a toll-free number to be mapped to these 4 direct phone lines. Example: 1-800-88-1919.
  3. Prepare a server to control this communication, aka the PBX server.
  4. Buy a telephony interface card to be attached to the PBX server.
  5. Install Elastix.
  6. Configure Elastix (extensions, call routes, IVR, queues).
  7. Install and configure a softphone on the agents’ PCs. This is the tool that call center agents use to communicate with customers.
  8. Buy headsets for the call center agents.

I am not going to cover all the steps in this post. I assume that steps 1 and 2 have been done on your side, so I will start from step 3 onwards. The architecture that we are going to set up will be as below:

 

I will be using following variables:

Operating System: Elastix 2.3.0 Stable
PBX Server: 192.168.0.70
PBX Hostname: callcenter.mydomain.org

 

Prepare the PBX Server

1. Get a server. I will be using a simple Core i3 server with a 250 GB HDD and 2 GB RAM.

2. Download the Elastix ISO from here: http://www.elastix.org/index.php/en/downloads/main-distro.html. At the time of writing, I will download the 64-bit version of Elastix 2.3.0 Stable.

3. Burn the ISO into a CD. We will use this CD to install the Elastix operating system.

4. Start the installation process by booting from the CD. Follow all required steps until the installation finishes. Make sure you do not miss the steps for setting the MySQL root password and the Elastix administrator password.

 

Telephony Interface Card

After the installation finishes and before we configure Elastix, we need to buy a telephony interface card, which is not usually available in a normal IT store. These cards are usually produced by PBX companies like Digium, Sangoma and Rhino.

Make sure you understand the difference between FXO and FXS before you buy the telephony card. Since I will be using softphones for all call agents, I need a card with 4 FXO ports (for the direct lines) and no FXS ports (for station lines). You may refer here for further explanation: http://www.3cx.com/PBX/FXS-FXO.html

I am going to buy this analog telephony card from Digium:

 

Configure Elastix

1. Now log in to the Elastix server as admin using a web browser. In my case, I will be logging in to https://192.168.0.70.

2. We need to detect the telephony card before starting to configure the PBX server. Make sure the direct phone lines have been attached to the telephony card's ports at the back. Go to Elastix > System > Hardware Detector > Detect New Hardware. You should see something like below:

This picture shows that only 1 direct phone line is detected and connected to the server. If you attach more than 1 line, you should see the respective ports turn green.

 

3. Let's start configuring Elastix. The most basic thing we need is extensions. Go to Elastix > PBX > PBX Configuration > Extensions. Select Generic SIP Device and click Submit. Fill in the required data as below for Agent 1:

  • Under ‘Add Extension’
    • User Extension: 201
    • Display Name: Agent 1
  • Under ‘Device Options’
  • Under ‘Voicemail & Directory’
    • Status: Enabled

Click Submit. Now for Agent 2 repeat similar steps and add information as below:

  • Under ‘Add Extension’
    • User Extension: 202
    • Display Name: Agent 2
  • Under ‘Device Options’
  • Under ‘Voicemail & Directory’
    • Status: Enabled

 

4. Create queues. These define how incoming calls are queued and which agent is required to answer them. I need to create 2 queues: a Sales queue and a CustomerService queue. Go to Elastix > PBX > PBX Configuration > Queues and enter the following details:

  • Under ‘Add Queue’:
    • Queue Number: 301
    • Queue Name: SalesCallQueue
    • Static Agents: A201,0

Click Submit. Now for the customer service queue, repeat similar steps and add the information as below:

  • Under ‘Add Queue’:
    • Queue Number: 302
    • Queue Name: CustomerServiceCallQueue
    • Static Agents: A202,0

 

5. We need to create some recordings. We will use a voice recording to welcome every incoming call via the IVR. You can refer to the following video on how to use System Recordings:

As for me, I will be using the Text to Wav feature under Elastix > PBX > Tools > Text to Wav to create a simple welcome message.

 

6. Using the voice recording from the previous step, we need to configure the IVR so our digital receptionist can route calls correctly. Go to Elastix > PBX > PBX Configuration > IVR and enter the details as in the screenshot below:

 

7. We then need to configure the Inbound Routes. This tells the PBX what to do first when receiving a call. Go to Elastix > PBX > PBX Configuration > Inbound Routes and enter the following details:

  • Under ‘Add Incoming Route’:
    • Description: IncomingLine
  • Under ‘Options’:
    • CID name prefix: IncomingCall
  • Under ‘Set Destination’:
    • IVR: Welcome
Click Submit and also click the notification: Apply Configuration Changes Here.

Notes:

This post continues in part 2, which covers how to install the Call Center module in Elastix and configure the call center agents' PCs to use a softphone.

Fixing Auto Start and Auto Shutdown Issue in VMware ESXi 5.0

I am a free VMware ESXi 5.0.0 user. The biggest problem with this release is that VM auto-start after, and auto-shutdown before, a hardware node reboot fails. Whenever you start or restart the ESXi node, you need to manually turn on every single virtual machine inside it. This causes a lot of inconvenience, especially when your ESXi host is a production server.

VMware has described this bug in detail at this link: http://blogs.vmware.com/vsphere/2012/07/clarification-on-the-auto-start-issues-in-vsphere-51-update-1.html.

1. In my case, I need to download the patch for ESXi 5.0.0. Go to this page: http://www.vmware.com/patchmgr/findPatch.portal, log in to your VMware account, and search with the following criteria:

 

I will download this to my Windows 7 PC and use SSH to apply the patch.

 

2. Enable SSH. Go to vSphere Client > ESXi host > Configuration > Security Profile > Services > Properties and make sure SSH is running as screenshot below:

 

3. Go to vSphere Client > ESXi host > Configuration > right click storage > Browse Datastore. Create a new folder called ‘update’ inside the datastore and upload the patch as screenshot below:

 

4. Log in to the ESXi server using SSH. Run the following command to verify the image profiles:

$ esxcli software sources profile list --depot=[datastore1]/update/ESXi500-201207001.zip
Name                              Vendor        Acceptance Level
--------------------------------  ------------  ----------------
ESXi-5.0.0-20120701001s-standard  VMware, Inc.  PartnerSupported
ESXi-5.0.0-20120704001-no-tools   VMware, Inc.  PartnerSupported
ESXi-5.0.0-20120701001s-no-tools  VMware, Inc.  PartnerSupported
ESXi-5.0.0-20120704001-standard   VMware, Inc.  PartnerSupported

 

5. Put the host into maintenance mode. Go to vSphere client > right click to the ESXi node > Enter Maintenance Mode > Yes.

 

6. Start the update process by running the following command. We will use the profile ESXi-5.0.0-20120701001s-standard for this update:

$ esxcli software profile update --depot=[datastore1]/update/ESXi500-201207001.zip --profile=ESXi-5.0.0-20120701001s-standard

 

7. Reboot the ESXi node using the command line or the vSphere client. Once it is up, exit maintenance mode. You can then verify whether the issue is fixed by going to vSphere client > ESXi node > Configuration > Software > Virtual Machine Startup/Shutdown > Properties and enabling VM auto-start as in the screenshot below:

 

 

Done. Try to reboot the ESXi host and you should see something like below on your vSphere client:

 

Linux: Add New User and Group into .htpasswd

We have several directories which are restricted to certain users in our company. Since users need to authenticate before being able to access these directories via a web browser, I manage simple Apache user authentication using htpasswd.

User Authentication

To password-protect the directory /home/website/public_html/secure1, create a new .htaccess file:

$ vim /home/website/public_html/secure1/.htaccess

And enter the following lines:

AuthUserFile /home/website/.htpasswd
AuthType Basic
AuthName "User Authentication"
Require valid-user

This tells Apache to refer to .htpasswd for the user authentication data. Now let's create a user to be inserted into the .htpasswd file:

$ htpasswd -c /home/website/.htpasswd myfirstuser
New password:
Re-type new password:
Adding password for user myfirstuser

Format: htpasswd [options] [path to the .htpasswd file] [username]

Now you can try to access the secure directory at http://mywebsite.com/secure1. You should see a login box pop up asking for a username and password.

To add another user:

$ htpasswd /home/website/.htpasswd myseconduser

This inserts another line into the .htpasswd file. If you view its current contents, they should look like:

$ cat /home/website/.htpasswd
myfirstuser:Ob5Y/eFTeSXEw
myseconduser:9oopndPXV7sdE
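Each entry is a single user:hash line, so simple text tools work on the file. For example, a small sketch that lists the configured users:

```shell
#!/bin/sh
# List the usernames in an htpasswd-style file read on stdin.
# Each line has the form "user:hash", so the username is field 1.
list_users() {
    cut -d: -f1
}

# normally: list_users < /home/website/.htpasswd
```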

Group Authentication

In some cases, I need a group of people to be able to access certain secure folders. Let's say we have the following users:

=================================================================
 USER     | GROUP     | DIRECTORY
=================================================================
 David    | IT        | /home/website/public_html/secure-it
 Nade     | IT        | /home/website/public_html/secure-it
 Mike     | Admin     | /home/website/public_html/secure-admin
 Seth     | Boss      | /home/website/public_html/secure-boss
=================================================================

1. Insert the users into the htpasswd file. I will put it under /home/website/.htpasswd:

$ htpasswd -c /home/website/.htpasswd david
$ htpasswd /home/website/.htpasswd nade 
$ htpasswd /home/website/.htpasswd mike
$ htpasswd /home/website/.htpasswd seth

2. Create an htgroup file. This describes the group membership for every user. Create a new file /home/website/.htgroup and add the following lines. The Boss group can access all secure directories, while the others can only access their respective directories:

it: david nade seth
admin: mike seth
boss: seth
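The .htgroup format is one group per line, "group: member member ...". A quick sketch to list which groups a user belongs to (the file path is the one used above):

```shell
#!/bin/sh
# Print every group in an .htgroup file that contains the given user.
GROUPFILE=${GROUPFILE:-/home/website/.htgroup}

groups_of() {
    # usage: groups_of <username>
    awk -v u="$1" -F': *' '
        {
            n = split($2, members, " ")
            for (i = 1; i <= n; i++)
                if (members[i] == u) print $1
        }' "$GROUPFILE"
}

if [ -f "$GROUPFILE" ]; then
    groups_of seth    # with the file above: it, admin, boss
fi
```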

3. Apply the access control in the .htaccess files for every directory that you want to secure.

For the IT group, create a new .htaccess file:

$ vim /home/website/public_html/secure-it/.htaccess

And add the following lines:

AuthUserFile /home/website/.htpasswd
AuthGroupFile /home/website/.htgroup
AuthName "User Authentication"
AuthType Basic
Require group it

For the Admin group, create a new file:

$ vim /home/website/public_html/secure-admin/.htaccess

And add the following lines:

AuthUserFile /home/website/.htpasswd
AuthGroupFile /home/website/.htgroup
AuthName "User Authentication"
AuthType Basic
Require group admin

For the Boss group, create a new file:

$ vim /home/website/public_html/secure-boss/.htaccess

And add the following lines:

AuthUserFile /home/website/.htpasswd
AuthGroupFile /home/website/.htgroup
AuthName "User Authentication"
AuthType Basic
Require group boss