How to Fix ‘Too many open files’ Problem

I have been facing the following problem when executing Percona XtraBackup on my CentOS 6.3 box:

xtrabackup_55 version 2.1.3 for Percona Server 5.5.16 Linux (x86_64) 
(revision id: 608) 
xtrabackup: uses posix_fadvise(). 
xtrabackup: cd to /var/lib/mysql 
xtrabackup: Target instance is assumed as followings. 
xtrabackup: innodb_data_home_dir = ./ 
xtrabackup: innodb_data_file_path = ibdata1:100M:autoextend 
xtrabackup: innodb_log_group_home_dir = ./ 
xtrabackup: innodb_log_files_in_group = 2 
xtrabackup: innodb_log_file_size = 67108864 
xtrabackup: using O_DIRECT 
130619 12:57:36 InnoDB: Warning: allocated tablespace 2405, old maximum 
was 9 
130619 12:57:37 InnoDB: Operating system error number 24 in a file 
InnoDB: Error number 24 means 'Too many open files'. 
InnoDB: Some operating system error numbers are described at 
InnoDB: Error: could not open single-table tablespace file 
InnoDB: We do not continue the crash recovery, because the table may become 
InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it. 
InnoDB: To fix the problem and start mysqld:

Linux/UNIX sets a soft and a hard limit on the number of file handles and open files. By default the value is quite low, as you can check using the following command:

$ ulimit -n
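The soft and hard limits can differ; a quick way to see both (standard ulimit flags):

```shell
# -S shows the soft (currently enforced) limit; -H shows the hard
# ceiling that a non-root user may raise the soft limit up to.
ulimit -Sn
ulimit -Hn
```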

There are several ways to increase the open files limit:

1. Set the limit using ulimit command

$ ulimit -n 8192

This is a temporary solution, as it only raises the limit for the current login session. Once you log out and log back in, the value reverts to the default.

2. Permanently define in /etc/security/limits.conf

To make it permanent, you can define the values (soft and hard limits) in /etc/security/limits.conf by adding the following lines:

* soft nofile 8192
* hard nofile 8192

The soft limit is the value that the kernel enforces for the corresponding resource, while the hard limit acts as a ceiling for the soft limit. Reboot the server to apply the changes. Alternatively, if you do not want to reboot, add the following line to the respective user's .bashrc file (root, in my case):

$ echo "ulimit -n 8192" >> ~/.bashrc

You will then need to log in again for the change to take effect.
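To confirm which limit a shell or daemon is actually running with, you can read its limits from /proc (shown here for the current shell; substitute mysqld's PID to check the database server itself):

```shell
# The "Max open files" row shows the soft and hard limits that this
# process ($$ is the current shell's PID) is running with.
grep "Max open files" /proc/$$/limits
```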

If the problem persists, you might need to raise the limit further and retry the failed process.


Do not set the value to unlimited, as it can cause PAM to fail, and you will no longer be able to SSH or console into the box, with the following error:

Apr 19 09:22:15 rh02 sshd[5679]: error: PAM: pam_open_session(): Permission denied

This issue has been reported in this bug report.


Install MySQL Cluster in Debian

MySQL Cluster is different from a standard MySQL server. It has 3 roles:

  • management
  • data
  • SQL or API

Data nodes require a lot of memory. It is recommended that these nodes do not share any workload with the SQL or management nodes, as that would end in resource exhaustion. So we will set up the SQL nodes together with the management nodes, reducing the number of servers to 4 (instead of 6 – 3 roles x 2 servers).

For the best minimal setup, we will use 2 servers as data nodes, and another 2 servers as SQL nodes co-located with management nodes:

  • sql1 – SQL node #1 + management node #1
  • sql2 – SQL node #2 + management node #2
  • data1 – data node #1
  • data2 – data node #2

I am using Debian 6.0.7 Squeeze 64bit.


All Nodes

1. Install libaio-dev, which is required by MySQL Cluster:

$ apt-get update && apt-get install libaio-dev -y

2. Download the package from the MySQL Downloads page and extract it under the /usr/local directory:

$ cd /usr/local
$ wget
$ tar -xzf mysql-cluster-gpl-7.2.12-linux2.6-x86_64.tar.gz

3. Rename the extracted directory to a shorter name, mysql:

$ mv mysql-cluster-gpl-7.2.12-linux2.6-x86_64 mysql

4. Create MySQL configuration directory:

$ mkdir /etc/mysql

5. Export MySQL bin path into user environment:

$ export PATH="$PATH:/usr/local/mysql/bin"

6. Create the mysql user and group and assign the correct permissions to the MySQL base directory:

$ useradd mysql
$ chown -R mysql:mysql /usr/local/mysql

Once completed, make sure that the MySQL base path is /usr/local/mysql, owned by the mysql user.

Data Nodes

1. Login to data1 and data2, create the MySQL configuration file at /etc/mysql/my.cnf, and add the following lines:
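The file contents were trimmed from this post; for NDB data nodes, a minimal my.cnf typically only needs to point at the management nodes (connect string assumed from the host list above):

```
[mysql_cluster]
ndb-connectstring=sql1,sql2
```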


2. Create the MySQL cluster data directory:

$ mkdir /usr/local/mysql/mysql-cluster

SQL Nodes + Management Nodes

1. Login to sql1 and sql2 and configure the SQL nodes by adding the following lines to /etc/mysql/my.cnf:
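The file contents were trimmed from this post; for SQL nodes, a minimal my.cnf typically enables the NDB engine and points at the management nodes (connect string assumed from the host list above):

```
[mysqld]
ndbcluster
ndb-connectstring=sql1,sql2

[mysql_cluster]
ndb-connectstring=sql1,sql2
```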


2. Create data directory for MySQL cluster:

$ mkdir -p /usr/local/mysql/mysql-cluster

3. Create MySQL Cluster configuration file at /etc/mysql/config.ini:

$ vim /etc/mysql/config.ini

And add following lines:

[ndb_mgmd default]
[ndbd default]
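Most of the file was trimmed above; for this four-server layout, a config.ini typically looks like the sketch below (node IDs match the --ndb-nodeid values used when starting the services; the replica and DataDir settings are assumptions):

```
[ndb_mgmd default]
DataDir=/usr/local/mysql/mysql-cluster

[ndbd default]
NoOfReplicas=2
DataDir=/usr/local/mysql/mysql-cluster

[ndb_mgmd]
NodeId=1
HostName=sql1

[ndb_mgmd]
NodeId=2
HostName=sql2

[ndbd]
NodeId=3
HostName=data1

[ndbd]
NodeId=4
HostName=data2

[mysqld]
NodeId=5

[mysqld]
NodeId=6
```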

4. Copy the mysql.server init script from support-files directory into /etc/init.d directory and setup auto-start on boot:

$ cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
$ update-rc.d mysql defaults

5. Install MySQL system tables:

$ cd /usr/local/mysql
$ scripts/mysql_install_db --user=mysql

Starting the Cluster

1. Start MySQL cluster management service in sql1 and sql2:

For sql1:

$ ndb_mgmd -f /etc/mysql/config.ini --ndb-nodeid=1

For sql2:

$ ndb_mgmd -f /etc/mysql/config.ini --ndb-nodeid=2

2. Start MySQL cluster storage service:

For data1:

$ ndbd --ndb-nodeid=3

For data2:

$ ndbd --ndb-nodeid=4

3. Start the MySQL API service at sql1 and sql2:

$ service mysql start

4. Check the MySQL cluster status in sql1 or sql2:

$ ndb_mgm -e show
Connected to Management Server at: ndb1:1186
Cluster Configuration
[ndbd(NDB)] 2 node(s)
id=3 @ (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0, Master)
id=4 @ (mysql-5.5.30 ndb-7.2.12, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @ (mysql-5.5.30 ndb-7.2.12)
id=2 @ (mysql-5.5.30 ndb-7.2.12)
[mysqld(API)] 2 node(s)
id=5 @ (mysql-5.5.30 ndb-7.2.12)
id=6 @ (mysql-5.5.30 ndb-7.2.12)

5. Since the MySQL user tables do not run on the NDB storage engine, we need to set the MySQL root password on both nodes (sql1 and sql2):

$ mysqladmin -u root password 'r00tP4ssword'

Cluster ready! You can now start querying the MySQL cluster by connecting to sql1 or sql2. For the best load balancing and failover, you can set up HAProxy in front of sql1 and sql2.

CentOS: Install Nagios – The Simple Way

Nagios is the most popular open-source infrastructure monitoring tool. Nagios offers monitoring and alerting for servers, switches, applications, and services. It alerts users when things go wrong and alerts them again when the problem has been resolved.

I have created a script to install Nagios and the Nagios Plugins on RHEL/CentOS:

#!/bin/bash
# Install nagios and nagios plugin in RHEL/CentOS/Fedora
# Disable SElinux
sed -i.bak 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0
# Nagios requirement
yum install gd gd-devel httpd php gcc glibc glibc-common make openssl openssl-devel -y
# Installation directory
[ ! -d $installdir ] && mkdir -p $installdir
rm -Rf $installdir/*
cd $installdir
wget $nagios_latest_url
wget $nagios_plugin_latest_url
# Nagios
nagios_package=`ls -1 | grep nagios | grep -v plugin`
tar -xzf $nagios_package
cd nagios
echo "Installing Nagios.."
useradd nagios
# configure the build (defaults to prefix /usr/local/nagios) before compiling
./configure
make all
make install
make install-init
make install-commandmode
make install-config
make install-webconf
echo "Create .htpasswd for nagios"
htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin
cd $installdir
# Nagios Plugin
nagios_plugin_package=`ls -1 | grep nagios-plugin`
tar -xzf $nagios_plugin_package
cd nagios-plugin*
echo "Installing Nagios Plugin.."
# the plugins also need configure + make before install
./configure --with-nagios-user=nagios --with-nagios-group=nagios
make
make install
echo "Starting Nagios.."
chkconfig nagios on
service nagios start
echo "Starting Apache.."
service httpd restart
chkconfig httpd on
# Configure IPtables
iptables -I INPUT -m tcp -p tcp --dport 80 -j ACCEPT
service iptables save
ip_add=`hostname -I | tr -d ' '`
echo "Installation done.."
echo "Connect using browser http://$ip_add/nagios/"
echo "username: nagiosadmin"
echo "password: (nagios password)"
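Note that the script references a few variables it never defines; they were presumably set at the top of the original script. Set them yourself before running (the directory below is an assumption, and the two URLs must be filled in with the current tarball download links):

```shell
# Working directory for downloads/builds (assumed value):
installdir=/usr/local/src/nagios-install
# Fill in with the current Nagios core and nagios-plugins tarball URLs:
nagios_latest_url=""
nagios_plugin_latest_url=""
```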

You can download the script directly here:

$ wget

Change the script permission and run the script:

$ chmod +x
$ ./

Once completed, you can open the Nagios page at http://<your_ip_address>/nagios and login with the username nagiosadmin and the password you entered during the installation. You should see a Nagios page similar to the screenshot below:




Installation done!


CentOS: Install and Configure MongoDB Sharded Cluster

In this post I am going to deploy a MongoDB sharded cluster. MongoDB is an open-source NoSQL, document-oriented database designed for ease of development and scaling. I am going to use 3 servers; the /etc/hosts definitions on every host are as below:

mongo1 mongo1.cluster.local
mongo2 mongo2.cluster.local
mongo3 mongo3.cluster.local

All servers running CentOS 6.3 64bit with firewall and SElinux turned off. All steps must be executed in all servers unless specified.
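The IP addresses were trimmed from the host list above; a complete /etc/hosts entry set would look like this (the addresses are examples only):

```
192.168.10.101    mongo1 mongo1.cluster.local
192.168.10.102    mongo2 mongo2.cluster.local
192.168.10.103    mongo3 mongo3.cluster.local
```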

Install MongoDB

1. Install EPEL repo:

$ rpm -Uhv

2. Install MongoDB and all required components:

$ yum install mongodb* -y --enablerepo=epel


Config Servers

1. Create the config database directory. By default, MongoDB uses /data/configdb:

$ mkdir -p /data/configdb

2. Default port for config server is 27019. Start config servers:

$ mongod --configsvr --fork --logpath /var/log/mongodb.log --logappend

You should see the following output:

forked process: 5464
all output going to: /var/log/mongodb.log
child process started successfully, parent exiting


Routing Servers

1. By default, mongos will listen on port 27017. Start mongos as below:

$ mongos --configdb mongo1,mongo2,mongo3 --fork --logpath /var/log/mongodb.log --logappend

You should see the following output:

forked process: 5534
all output going to: /var/log/mongodb.log
child process started successfully, parent exiting

Shard Servers

1. Create default data directory. By default, MongoDB will use /data/db:

$ mkdir -p /data/db

2. By default, mongod with the --shardsvr option listens on port 27018. Start mongod as below:

$ mongod --shardsvr --fork --logpath /var/log/mongodb.log --logappend

You should see the following output:

forked process: 5675
all output going to: /var/log/mongodb.log
child process started successfully, parent exiting


MongoDB Sharding

1. Verify that the MongoDB services are listening on the correct ports:

$ netstat -tulpn | grep mongo
tcp        0      0        *           LISTEN    5534/mongos
tcp        0      0        *           LISTEN    5675/mongod
tcp        0      0        *           LISTEN    5464/mongod
tcp        0      0       *           LISTEN    5534/mongos
tcp        0      0       *           LISTEN    5675/mongod
tcp        0      0       *           LISTEN    5464/mongod

2. SSH into mongo1 and type mongo to access the mongos console:

$ mongo

3. Use the admin database to list the sharding status:

mongos> use admin
mongos> db.runCommand( { listshards : 1 } );

You should get this reply:

{ "shards" : [ ], "ok" : 1 }

4. Add the shard servers by specifying the hostname and the MongoDB shard service port:

mongos> sh.addShard( "mongo1:27018");
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> sh.addShard( "mongo2:27018");
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> sh.addShard( "mongo3:27018");
{ "shardAdded" : "shard0002", "ok" : 1 }

5. Download this JSON example file and import it into the database mydb:

$ wget
$ mongoimport --db mydb --collection zip --file zips.json
connected to:
Mon Mar 25 06:22:35 imported 29470 objects

6. Enable sharding for mydb:

mongos> sh.enableSharding("mydb");
{ "ok" : 1 }

7. Check sharding status:

mongos> sh.status()
--- Sharding Status ---
sharding version: { "_id" : 1, "version" : 3 }
{ "_id" : "shard0000", "host" : "mongo1:27018" }
{ "_id" : "shard0001", "host" : "mongo2:27018" }
{ "_id" : "shard0002", "host" : "mongo3:27018" }
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "mydb", "partitioned" : true, "primary" : "shard0000" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard0001" }

You can see that the database mydb has been partitioned by MongoDB ("partitioned" : true).

CentOS: Install MongoDB – The Simple Way

I am in the phase of learning a NoSQL database called MongoDB. I will be using a CentOS 6.3 64bit box installed from the minimal ISO, with several packages like perl, vim, wget, screen, sudo and cronie installed using yum.

We will use the EPEL repo, which includes the MongoDB packages, to simplify the deployment.

1. Install the EPEL repo for CentOS 6:

$ rpm -Uhv

2. Install MongoDB using yum:

$ yum install mongodb* -y

3. Configure mongod to start on boot and start the service:

$ chkconfig mongod on
$ service mongod start

4. MongoDB uses ports 27017-27019 and 28017. We will add them to the iptables rules:

$ iptables -A INPUT -m tcp -p tcp --dport 27017:27019 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 28017 -j ACCEPT

5. Check whether MongoDB is listening on the correct port:

$ netstat -tulpn | grep mongod
tcp        0      0        *                   LISTEN      26575/mongod

6. Log in to the MongoDB console using this command:

$ mongo

7. In the console, you can use the help command to see the list of supported commands, as below:

> help
        db.help()                    help on db methods
        db.mycoll.help()             help on collection methods
        sh.help()                    sharding helpers
        rs.help()                    replica set helpers
help admin        administrative help
help connect      connecting to a db help
help keys         key shortcuts
help misc         misc things to know
help mr           mapreduce
show dbs                    show database names
show collections            show collections in current database
show users                  show users in current database
show profile                show most recent system.profile entries with time >= 1ms
show logs                   show the accessible logger names
show log [name]             prints out the last segment of log in memory, 'global' is default
use <db_name>               set current database
db.foo.find()               list objects in collection foo
db.foo.find( { a : 1 } )    list objects in foo where a == 1
it                          result of the last line evaluated; use to further iterate
DBQuery.shellBatchSize = x  set default number of items to display on shell
exit                        quit the mongo shell

So now I have everything required for MongoDB installed. Let's start learning MongoDB from this page:

Basic Linux Command in PDF

My new assistant has zero knowledge of Linux, so I prepared some basic Linux commands for him, with examples and descriptions. Even though he can use the 'man' command to get a detailed explanation of a specific command, he still needs to familiarize himself with the environment and gain some experience.

I have created the following PDF, which you can view and download here:

Download (PDF, Unknown)


CentOS: Restore/Recover from Amanda Backup

So I have an Amanda backup server configured across 2 servers, as per my previous post. In that setup, I was using Amanda to back up one server's directory, /home/webby/public_html. Now I need to restore all files in the directory /home/webby/public_html/blog from the latest backup.


Configure Amanda Client for Restore

1. Login to the Amanda client as root and create a new text file called amanda-client.conf. This file defines the server details that the client will connect to for the restoration:

$ vim /etc/amanda/amanda-client.conf

And add the following lines:

conf "ServerNetBackup"                # your config name in Amanda server
index_server ""   # your amindexd server
tape_server ""    # your amidxtaped server
ssh_keys ""                           # your ssh keys file if you use ssh auth
unreserved-tcp-port 1025,65535

2. Restart Amanda client service in this server:

$ service xinetd restart

3. Then, we need to login to the Amanda backup server and change the server_args under /etc/xinetd.d/amanda. This allows Amanda clients to browse the index and tapes on the Amanda server:

$ vim /etc/xinetd.d/amanda

And change following line to be as below:

server_args             = -auth=bsd amdump amindexd amidxtaped

4. Restart xinetd service:

$ service xinetd restart


Restoring Files

1. To restore files, you simply need to login to the client as the root user. The process flow is as below:

Login to client > Go to the directory that you want to restore > Access Amanda server using amrecover > Select which disk > Select which date > Add into restoration list > Extract > Done

2. So now I am logged in as root and navigate to the folder that I want to restore. I am going to restore all files in the directory /home/webby/public_html/blog from the latest backup, because this directory was accidentally deleted from the server:

$ cd /home/webby/public_html

3. Connect to the Amanda server using the following command:

$ amrecover ServerNetBackup -s
AMRECOVER Version 2.6.1p2. Contacting server on ...
220 amanda AMANDA index server (2.6.1p2) ready.
Setting restore date to today (2013-02-06)
200 Working date set to 2013-02-06.
200 Config set to ServerNetBackup.
200 Dump host set to
Use the setdisk command to choose dump disk to recover

4. Let's list the disks for this host on the Amanda backup server:

amrecover> listdisk
200- List of disk for host
201- /home/webby/public_html
200 List of disk for host

5. Choose the disk for this backup:

amrecover> setdisk /home/webby/public_html
200 Disk set to /home/webby/public_html.

6. I do not know which tape holds the latest backup, so I will use the history command to list them all:

amrecover> history
200- Dump history for config "ServerNetBackup" host "" disk /home/webby/public_html
201- 2013-02-05-18-29-38  0  ServerNetBackup-2:1
201- 2013-02-05-13-00-58  0  ServerNetBackup-1:1
201- 2013-02-05-12-59-41  0  ServerNetBackup-15:1
200 Dump history for config "ServerNetBackup" host "" disk /home/webby/public_html

7. Now I choose the latest backup, 2013-02-05-18-29-38, which means the backup was created at 6:29:38 PM on the 5th of February 2013:

amrecover> setdate 2013-02-05-18-29-38
200 Working date set to 2013-02-05-18-29-38.

8. I have set the backup date to the latest. I can then list all the files in this backup directory as below:

amrecover> ls
2013-02-05-18-29-38 web.config.txt
2013-02-05-18-29-38 tmp/
2013-02-05-18-29-38 test/
2013-02-05-18-29-38 templates/
2013-02-05-18-29-38 robots.txt
2013-02-05-18-29-38 plugins/
2013-02-05-18-29-38 modules/
2013-02-05-18-29-38 media/
2013-02-05-18-29-38 logs/
2013-02-05-18-29-38 libraries/
2013-02-05-18-29-38 language/
2013-02-05-18-29-38 joomla.xml
2013-02-05-18-29-38 installation/
2013-02-05-18-29-38 index.php
2013-02-05-18-29-38 includes/
2013-02-05-18-29-38 images/
2013-02-05-18-29-38 htaccess.txt
2013-02-05-18-29-38 components/
2013-02-05-18-29-38 cli/
2013-02-05-18-29-38 cache/
2013-02-05-18-29-38 blog/
2013-02-05-18-29-38 administrator/
2013-02-05-18-29-38 README.txt
2013-02-05-18-29-38 LICENSE.txt
2013-02-05-18-29-38 .

9. Since I just want to restore the blog directory, I need to add blog to the extraction list:

amrecover> add blog
Added dir /blog/ at date 2013-02-05-18-29-38

10. Once added, we can extract the backup into the working directory as below:

amrecover> extract
Extracting files using tape drive changer on host
The following tapes are needed: ServerNetBackup-2
Restoring files into directory /home/webby/public_html
Continue [?/Y/n]? Y
Extracting files using tape drive changer on host
Load tape ServerNetBackup-2 now
Continue [?/Y/n/s/d]? Y

It will then restore all your files into the working directory. Just exit the amrecover console and you will see that the restored directory exists there, as in the example below:

$ ls -al | grep blog
drwxr-xr-x   5    webby  webby    4096    Jan 25 04:53    blog

Restoration complete!

ESXi 5.1: can’t create multiextent node Error

I just upgraded my VMware ESXi 5.0 to VMware ESXi 5.1 using the installation disc, and found that one of the VMs on this machine cannot be started from the vSphere client, with the following error:

Cannot open the disk '/vmfs/volumes/4a365b5d-eceda119-439b-000cfc0086f3/examplevm/examplevm-000002.vmdk' or one of the snapshot disks it depends on.

Further troubleshooting required me to SSH into the ESXi host and analyze vmware.log under this VM's datastore directory, where I found the following error:

DISKLIB-CHAINESX : ChainESXOpenSubChainNode: can't create multiextent node 6b8b4567-MyServer-flat001.vmdk failed with error The system cannot find the file specified (0xbad0003, Not found)

It turns out that the multiextent module is not loaded in ESXi 5.1, and the following command fixed my problem:

$ vmkload_mod multiextent
Module multiextent loaded successfully

Add the command into /etc/rc.local.d/ to make sure it is loaded automatically on boot:

/sbin/vmkload_mod multiextent

Once done, try to power on the VM again using the vSphere client.


CentOS: Install OpenLDAP with Webmin – The Simple Way

Installing OpenLDAP with Webmin requires a lot of steps, so I have created a BASH script to install OpenLDAP with Webmin on CentOS 6 servers. To install, simply download the installer script here:

An installation example is below. I am using a freshly installed CentOS 6.3 64bit box (minimal ISO), with wget and perl installed.

1. Download and extract the installer script:

$ cd /usr/local/src
$ wget

2. Change the permission to 755:

$ chmod 755

3. Execute the script and follow the wizard as example below:

$ ./
           This script will install OpenLDAP
It assumes that there is no OpenLDAP installed in this host
   SElinux will be disabled and firewall will be stopped
What is the root domain? [eg]:
What is the administrator domain? [eg or]:
What is the administrator password that you want to use?: MyN23pQ
Do you want to install Webmin/Do you want me to configure your Webmin LDAP modules? [Y/n]: Y

You should see the installation process output as below:

Kindly review following details before proceed with installation:
Root DN: dc=majimbu,dc=net
Administrator DN: cn=ldap,dc=majimbu,dc=net
Administrator Password: MyN23pQ
Webmin installation: Y
Can I proceed with the installation? [Y/n]: Y
Checking whether openldap-servers has been installed..
openldap-servers package not found. Proceed with installation
Disabling SElinux and stopping firewall..
iptables: Flushing firewall rules:                                 [ OK ]
iptables: Setting chains to policy ACCEPT: filter                  [ OK ]
iptables: Unloading modules:                                       [ OK ]
Installing OpenLDAP using yum..
Package cronie-1.4.4-7.el6.x86_64 already installed and latest version
Package sudo-1.7.4p5-13.el6_3.x86_64 already installed and latest version
OpenLDAP installed
Configuring OpenLDAP database..
Configuring monitoring privileges..
Configuring database cache..
Generating SSL..
Generating a 2048 bit RSA private key
writing new private key to '/etc/openldap/certs/majimbu_key.pem'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:MY
State or Province Name (full name) []:Kuala Lumpur
Locality Name (eg, city) [Default City]:Bukit Bintang
Organization Name (eg, company) [Default Company Ltd]:Majimbu Net Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server's hostname) []
Email Address []:[email protected]
Configuring LDAP service..
Checking OpenLDAP configuration..
config file testing succeeded
OpenLDAP installation done. Starting SLAPD..
Starting slapd:                                                    [ OK ]
Configuring LDAP client inside this host..
Checking the Webmin installation..
Webmin package not found in this host. Installing Webmin..
warning: /var/tmp/rpm-tmp.XmXunn: Header V3 DSA/SHA1 Signature, key ID 11f63c51: NOKEY
Preparing... ########################################### [100%]
Operating system is CentOS Linux
    1:webmin ########################################### [100%]
Webmin install complete. You can now login to
as root with your root password.
Webmin installed.
Configuring webmin LDAP server module..
Configuring webmin LDAP client module..
Installation completed! [ OK ]
    You may need to open following port in firewall: 389, 636, 10000
Dont forget to refresh your Webmin module! Login to Webmin > Refresh Modules


4. Installation done. We need to refresh the Webmin modules from the Webmin page: login to Webmin > Refresh Modules:



5. Refresh the Webmin page again so the activated modules are listed in the side menu, as in the screenshot below:


You can now start creating your LDAP objects using the Webmin modules: Webmin > Servers > LDAP Server. To add the port exceptions to the firewall rules, you can use the following commands:

$ iptables -I INPUT -m tcp -p tcp --dport 389 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 636 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 10000 -j ACCEPT

Clone VM in VMware ESXi 5 using vSphere Client

I am a free VMware ESXi user. Cloning a VM is nowhere near as easy as in VMware Workstation or vCenter: as a free user, you can only manage your ESXi host directly using the vSphere client. Cloning saves tremendous time when you want several machines with the same configuration inside one host.

I will be cloning a VM running CentOS 6.3 64bit with VMware Tools installed, using the following details:

ESXi version: VMware ESXi 5.0.0 build 768111
ESXi host:
Main VM name: CentOS-Test
Main VM IP:
Cloned VM name: CentOS-Clone1
Cloned VM IP:

1. We will use SSH to do the cloning. You can enable the SSH service (if disabled) using the vSphere client connected to your ESXi host: go to Configuration > Security Profile > Services > Properties > select SSH > Options > Start, as per the screenshot below:



2. Create the cloned VM with a specification similar to the main VM. The one difference is that we DO NOT create a disk for this cloned VM, because we will use the cloned virtual hard disk created in step #3:



3. Now we need to clone the disk from the main VM into the cloned VM's directory. Connect to the ESXi host using SSH and run the following command:

~ # vmkfstools -i /vmfs/volumes/datastore1/CentOS-Test/CentOS-Test.vmdk /vmfs/volumes/datastore1/CentOS-Clone1/CentOS-Clone1.vmdk
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore1/CentOS-Test/CentOS-Test.vmdk'...
Clone: 100% done.

4. Once done, add the virtual hard disk to the cloned VM using the vSphere client: Edit the VM Properties > Add > Hard Disk > Use an existing virtual disk > locate the hard disk as in the screenshot below > Next > Next > Finish.




5. You should then have a virtual machine properties summary as below:



6. Start the cloned virtual machine. Once you login to CentOS, you will notice that the network interface has been renamed to eth1:


We need to run the following commands and restart the CentOS box to bring back eth0; this renaming is simply how the OS reacts to the cloned NIC's new MAC address:

$ rm -Rf /etc/udev/rules.d/70-persistent-net.rules
$ init 6
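In addition to removing the udev rule above, on a cloned CentOS guest it is common (an extra step, not from the original post) to drop the old MAC address binding from the interface config so eth0 comes up cleanly with the clone's new MAC:

```shell
# Remove the stale HWADDR line from the eth0 config, if the file exists.
cfg=/etc/sysconfig/network-scripts/ifcfg-eth0
if [ -f "$cfg" ]; then
    sed -i '/^HWADDR=/d' "$cfg"
fi
```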


7. After reboot, you should see the network interface has been changed to eth0 as screen shot below:



Done! Even though cloning this way requires more steps, it can still save a lot of OS installation time, especially if you just want to use the VM temporarily.


CentOS 6: Install Remote Logging Server (rsyslog)

In my office network, we have a lot of small devices like routers and switches. My boss wants a report on all of our network devices for auditing purposes. To accomplish this, I need a server that runs as a logging server, accepting various types of logs from several devices. This consolidates my audit trail in one centralized location.

I will use my development server, which runs CentOS, to receive logs from my Mikrotik router, as in the picture below:


I am using following variables:

Rsyslog OS: CentOS 6.0 64bit
Rsyslog Server IP:
Router hostname:
Router IP:

Rsyslog Server

1. Install Rsyslog package:

$ yum install rsyslog -y

2. Make sure you have the following lines uncommented in /etc/rsyslog.conf:

$UDPServerRun 514
$InputTCPServerRun 514
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
*.info;mail.none;authpriv.none;cron.none /var/log/messages
authpriv.* /var/log/secure
mail.* -/var/log/maillog
cron.* /var/log/cron
*.emerg *
uucp,news.crit /var/log/spooler
local7.* /var/log/boot.log
$AllowedSender TCP,,
$AllowedSender TCP,

3. We need to add the following rule to /etc/rsyslog.conf so logs received from the router are written to a file called /var/log/router.log:

:fromhost-ip,isequal,""                      /var/log/router.log

There are a lot of options you can use to define your remote logging rules; you can refer to this page:
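For instance, the filter above uses rsyslog's property-based filter syntax, and other properties follow the same :property, operation, "value" pattern (the log file names here are examples):

```
:fromhost-ip, startswith, "192.168."     /var/log/lan-devices.log
:hostname, contains, "switch"            /var/log/switches.log
:msg, contains, "login failure"          /var/log/auth-fail.log
```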

4. Open firewall port 514 on TCP and UDP:

$ iptables -A INPUT -m tcp -p tcp --dport 514 -j ACCEPT
$ iptables -A INPUT -m udp -p udp --dport 514 -j ACCEPT

5. Restart Rsyslog daemon to apply the configuration:

$ service rsyslog restart

6. We also need to rotate this log file so it will not end up eating the server's disk space. Create a new text file called router under the /etc/logrotate.d/ directory:

$ vim /etc/logrotate.d/router

And add the following lines:

/var/log/router.log {
    rotate 5
    postrotate
        /bin/kill -HUP `cat /var/run/ 2> /dev/null` 2> /dev/null || true
    endscript
}

Router (Rsyslog Client)

1. The Mikrotik router supports remote logging. I just need to login to Winbox > System > Logging and configure Actions as in the screenshot below:


2. Next, we need to create the rules for which logging levels we want sent to the rsyslog server. Go to Winbox > System > Logging and configure Rules as in the screenshot below:



Now the router should send its logs to the rsyslog server, and we can check the router logs by running the following command:

$ tail -f /var/log/router.log
Jan 8 17:23:28 system,info log action changed by admin
Jan 8 17:26:09 system,info filter rule changed by admin
Jan 8 17:26:09 system,info filter rule changed by admin
Jan 8 17:26:23 system,info PPP AAA settings changed by admin
Jan 8 17:26:40 system,info L2TP Server settings changed by admin
Jan 8 17:26:49 system,info filter rule changed by admin
Jan 8 17:26:50 system,info filter rule changed by admin



Customize and Disable PHPmyAdmin ‘Export’ Menu

In my development environment, we have 2 levels of PHPmyAdmin users: the superuser (root) and the developer user. The superuser can access all features available in PHPmyAdmin, while the developer user is the database user for the database planet_shop, with limitations as stated in the MySQL user privilege table.

The current problem is that the developer user, who uses PHPmyAdmin to access and manage the database, is also able to export the database using the PHPmyAdmin Export menu, as in the screenshot below:


My boss wants this menu hidden and disabled for the developer, to prevent them from dumping the MySQL data, which is strictly confidential. This feature should be accessible to the superuser only. To do this, I need to make some changes to the PHPmyAdmin code, which is located under the /var/www/html/phpmyadmin directory on my web server. I am using the following variables:

OS: CentOS 6 64bit
PHPmyAdmin web directory: /var/www/html/phpmyadmin
PHPmyAdmin version: (inside README)

1. We need to hide the Export menu in 2 places: libraries/ and libraries/ Open /var/www/html/phpmyadmin/libraries/ with a text editor and find the following lines (line 67):

$tabs['export']['icon'] = 'b_export.png';
$tabs['export']['link'] = 'server_export.php';
$tabs['export']['text'] = __('Export');

and change it to:

if ($is_superuser) {
    $tabs['export']['icon'] = 'b_export.png';
    $tabs['export']['link'] = 'server_export.php';
    $tabs['export']['text'] = __('Export');
}

2. Then, we need to hide the Export menu on the database page. Open /var/www/html/phpmyadmin/libraries/ with a text editor and find the following lines (line 107):

$tabs = array();
$tabs[] =& $tab_structure;
$tabs[] =& $tab_sql;
$tabs[] =& $tab_search;
$tabs[] =& $tab_qbe;
$tabs[] =& $tab_export;

and change it to:

$tabs = array();
$tabs[] =& $tab_structure;
$tabs[] =& $tab_sql;
$tabs[] =& $tab_search;
$tabs[] =& $tab_qbe;
if ($is_superuser) {
    $tabs[] =& $tab_export;
}

3. The first 2 steps only hide the Export tab from non-superusers. Now we need to disable it as well on the database page. Open /var/www/html/phpmyadmin/db_export.php with a text editor and find the following lines:

// $sub_part is also used in to see if we are coming from
// db_export.php, in which case we don't obey $cfg['MaxTableList']
$sub_part = '_export';
require_once './libraries/';
$url_query .= '&goto=db_export.php';
require_once './libraries/';

And add the following lines after that:

if (!$is_superuser) {
    require './libraries/';
    echo '<h2>' . "\n"
       . PMA_getIcon('b_usrlist.png')
       . __('Privileges') . "\n"
       . '</h2>' . "\n";
    PMA_Message::error(__('No Privileges'))->display();
    require './libraries/';
    exit;
}

4. We also need to disable this on the server page. Open /var/www/html/phpmyadmin/server_export.php with a text editor and find the following lines:

* Does the common work
require_once './libraries/';
$GLOBALS['js_include'][] = 'export.js';

And add the following lines after that:

if (!$is_superuser) {
    require './libraries/';
    echo '<h2>' . "\n"
       . PMA_getIcon('b_usrlist.png')
       . __('Privileges') . "\n"
       . '</h2>' . "\n";
    PMA_Message::error(__('No Privileges'))->display();
    require './libraries/';
    exit;
}


Done. Now we can verify it in PHPmyAdmin: login as the developer, and you will notice that the Export menu is hidden:



If a user still accesses the Export page using a direct URL, for example: , they will see the following error: