CentOS: Install MongoDB – The Simple Way

I am in the process of learning a NoSQL database called MongoDB. I will be using a CentOS 6.3 64bit box installed from the minimal ISO, with several packages like perl, vim, wget, screen, sudo and cronie installed using yum.

We will use the EPEL repo, which includes the MongoDB packages, to simplify the deployment.

1. Install EPEL repo for CentOS 6. You can get the link from here, http://dl.fedoraproject.org/pub/epel/6/x86_64/:

$ rpm -Uhv http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

2. Install MongoDB using yum:

$ yum install mongodb* -y

3. Configure mongod to start on boot and start the service:

$ chkconfig mongod on
$ service mongod start

4. MongoDB will be using ports 27017-27019 and 28017. We will add them into the iptables rules (note that rules added this way do not survive a reboot; run "service iptables save" if you want them to persist):

$ iptables -A INPUT -m tcp -p tcp --dport 27017:27019 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 28017 -j ACCEPT

5. Check whether MongoDB is listening on the correct ports:

$ netstat -tulpn | grep mongod
tcp        0      0   *                   LISTEN      26575/mongod

6. Login into the MongoDB console using this command:

$ mongo

7. In the console, you can use the help command to see the list of supported commands, as below:

> help
db.help()         help on db methods
db.mycoll.help()  help on collection methods
sh.help()         sharding helpers
rs.help()         replica set helpers
help admin        administrative help
help connect      connecting to a db help
help keys         key shortcuts
help misc         misc things to know
help mr           mapreduce
show dbs                    show database names
show collections            show collections in current database
show users                  show users in current database
show profile                show most recent system.profile entries with time >= 1ms
show logs                   show the accessible logger names
show log [name]             prints out the last segment of log in memory, 'global' is default
use <db_name>               set current database
db.foo.find()               list objects in collection foo
db.foo.find( { a : 1 } )    list objects in foo where a == 1
it                          result of the last line evaluated; use to further iterate
DBQuery.shellBatchSize = x  set default number of items to display on shell
exit                        quit the mongo shell
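As a quick first exercise, you can insert and query a document straight from this console. A minimal session sketch (assuming the mongod service started above is running; testdb and people are names made up for this example):

```
> use testdb
switched to db testdb
> db.people.insert({ name: "Ali", age: 30 })
> db.people.find({ age: 30 })
{ "_id" : ObjectId("..."), "name" : "Ali", "age" : 30 }
```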

So now I have everything required for MongoDB installed. Let's learn MongoDB, starting at this page: http://docs.mongodb.org/manual/tutorial/getting-started/#create-a-collection-and-insert-documents

Basic Linux Command in PDF

My new assistant has zero knowledge of Linux, so I should prepare some basic Linux commands for him, with examples and descriptions. Even though he can use the 'man' command to get a detailed explanation of any specific command, he still needs to familiarize himself with the environment and gain some experience.

I have created the following PDF, which you can view and download here:



CentOS: Restore/Recover from Amanda Backup

So I have an Amanda backup server configured across 2 servers, as described in my previous post. In that setup, I was using Amanda to back up the directory /home/webby/public_html on the server sv101.krispykream.net. Now I need to restore all files in the directory /home/webby/public_html/blog from the latest backup.


 Configure Amanda Client for Restore

1. Login into the Amanda client, which in my case is the server sv101.krispykream.net, as root. Create a new text file called amanda-client.conf. This file defines the server details that the client will connect to for restoration:

$ vim /etc/amanda/amanda-client.conf

And add the following lines:

conf "ServerNetBackup"                # your config name in Amanda server
index_server "office.servering.com"   # your amindexd server
tape_server "office.servering.com"    # your amidxtaped server
ssh_keys ""                           # your ssh keys file if you use ssh auth
unreserved-tcp-port 1025,65535

2. Restart Amanda client service in this server:

$ service xinetd restart

3. Then we need to login to the Amanda backup server, in my case office.servering.com, to change the server_args under /etc/xinetd.d/amanda. This will allow Amanda clients to browse the index and tapes on the Amanda server:

$ vim /etc/xinetd.d/amanda

And change the following line to be as below:

server_args             = -auth=bsd amdump amindexd amidxtaped

4. Restart xinetd service:

$ service xinetd restart


Restoring Files

1. To restore files, you simply need to login to the client as the root user. The process flow is as below:

Login to client > Go to the directory that you want to restore > Access Amanda server using amrecover > Select which disk > Select which date > Add into restoration list > Extract > Done

2. So now I am logged in to sv101.krispykream.net as root, and I navigate to the folder that I want to restore. I am going to restore all files in the directory /home/webby/public_html/blog from the latest backup, because this directory was accidentally deleted from the server:

$ cd /home/webby/public_html

3. Connect to the Amanda server using the following command:

$ amrecover ServerNetBackup -s office.servering.com
AMRECOVER Version 2.6.1p2. Contacting server on office.servering.com ...
220 amanda AMANDA index server (2.6.1p2) ready.
Setting restore date to today (2013-02-06)
200 Working date set to 2013-02-06.
200 Config set to ServerNetBackup.
200 Dump host set to sv101.krispykream.net.
Use the setdisk command to choose dump disk to recover

4. Let's list the disks for this host on the Amanda backup server:

amrecover> listdisk
200- List of disk for host sv101.krispykream.net
201- /home/webby/public_html
200 List of disk for host sv101.krispykream.net

5. Choose the disk for this backup:

amrecover> setdisk /home/webby/public_html
200 Disk set to /home/webby/public_html.

6. I do not know which tape holds the latest backup, so I will use the history command to list them all:

amrecover> history
200- Dump history for config "ServerNetBackup" host "sv101.krispykream.net" disk /home/webby/public_html
201- 2013-02-05-18-29-38  0  ServerNetBackup-2:1
201- 2013-02-05-13-00-58  0  ServerNetBackup-1:1
201- 2013-02-05-12-59-41  0  ServerNetBackup-15:1
200 Dump history for config "ServerNetBackup" host "sv101.krispykream.net" disk /home/webby/public_html

7. Now I will choose the latest backup, 2013-02-05-18-29-38, which means the backup was created at 6:29:38 PM on the 5th of February 2013:

amrecover> setdate 2013-02-05-18-29-38
200 Working date set to 2013-02-05-18-29-38.

8. With the latest backup date selected, I can now list all the files in this backup directory, as below:

amrecover> ls
2013-02-05-18-29-38 web.config.txt
2013-02-05-18-29-38 tmp/
2013-02-05-18-29-38 test/
2013-02-05-18-29-38 templates/
2013-02-05-18-29-38 robots.txt
2013-02-05-18-29-38 plugins/
2013-02-05-18-29-38 modules/
2013-02-05-18-29-38 media/
2013-02-05-18-29-38 logs/
2013-02-05-18-29-38 libraries/
2013-02-05-18-29-38 language/
2013-02-05-18-29-38 joomla.xml
2013-02-05-18-29-38 installation/
2013-02-05-18-29-38 index.php
2013-02-05-18-29-38 includes/
2013-02-05-18-29-38 images/
2013-02-05-18-29-38 htaccess.txt
2013-02-05-18-29-38 components/
2013-02-05-18-29-38 cli/
2013-02-05-18-29-38 cache/
2013-02-05-18-29-38 blog/
2013-02-05-18-29-38 administrator/
2013-02-05-18-29-38 README.txt
2013-02-05-18-29-38 LICENSE.txt
2013-02-05-18-29-38 .

9. Since I just want to restore the blog directory, I need to add blog to the extraction list:

amrecover> add blog
Added dir /blog/ at date 2013-02-05-18-29-38

10. Once added, we can extract the backup into the working directory, as below:

amrecover> extract
Extracting files using tape drive changer on host office.servering.com.
The following tapes are needed: ServerNetBackup-2
Restoring files into directory /home/webby/public_html
Continue [?/Y/n]? Y
Extracting files using tape drive changer on host office.servering.com.
Load tape ServerNetBackup-2 now
Continue [?/Y/n/s/d]? Y

It will then restore all your files into the working directory. Just exit the amrecover console and you will see the restored directory there, as in the example below:

$ ls -al | grep blog
drwxr-xr-x   5    webby  webby    4096    Jan 25 04:53    blog

Restoration complete!

CentOS: Install OpenLDAP with Webmin – The Simple Way

Installing OpenLDAP with Webmin requires a lot of steps, so I have created a BASH script that installs OpenLDAP with Webmin on CentOS 6 servers. To install, simply download and run the installer script as shown below.

An installation example follows. I am using a freshly installed CentOS 6.3 64bit box (minimal ISO) with wget and perl installed.

1. Download and extract the installer script:

$ cd /usr/local/src
$ wget http://blog.secaserver.com/files/openldap_installer.sh

2. Change the permission to 755:

$ chmod 755 openldap_installer.sh

3. Execute the script and follow the wizard as example below:

$ ./openldap_installer.sh
           This script will install OpenLDAP
It assumes that there is no OpenLDAP installed in this host
   SElinux will be disabled and firewall will be stopped
What is the root domain? [eg mydomain.com]: majimbu.net
What is the administrator domain? [eg ldap.majimbu.net or manager.majimbu.net]: ldap.majimbu.net
What is the administrator password that you want to use?: MyN23pQ
Do you want to install Webmin/Do you want me to configure your Webmin LDAP modules? [Y/n]: Y

You should see the installation process output as below:

Kindly review following details before proceed with installation:
Hostname: ldap.majimbu.net
Root DN: dc=majimbu,dc=net
Administrator DN: cn=ldap,dc=majimbu,dc=net
Administrator Password: MyN23pQ
Webmin installation: Y
Can I proceed with the installation? [Y/n]: Y
Checking whether openldap-servers has been installed..
openldap-servers package not found. Proceed with installation
Disabling SElinux and stopping firewall..
iptables: Flushing firewall rules:                                 [ OK ]
iptables: Setting chains to policy ACCEPT: filter                  [ OK ]
iptables: Unloading modules:                                       [ OK ]
Installing OpenLDAP using yum..
Package cronie-1.4.4-7.el6.x86_64 already installed and latest version
Package sudo-1.7.4p5-13.el6_3.x86_64 already installed and latest version
OpenLDAP installed
Configuring OpenLDAP database..
Configuring monitoring privileges..
Configuring database cache..
Generating SSL..
Generating a 2048 bit RSA private key
writing new private key to '/etc/openldap/certs/majimbu_key.pem'
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [XX]:MY
State or Province Name (full name) []:Kuala Lumpur
Locality Name (eg, city) [Default City]:Bukit Bintang
Organization Name (eg, company) [Default Company Ltd]:Majimbu Net Corp
Organizational Unit Name (eg, section) []:IT
Common Name (eg, your name or your server's hostname) []:ldap.majimbu.net
Email Address []:[email protected]
Configuring LDAP service..
Checking OpenLDAP configuration..
config file testing succeeded
OpenLDAP installation done. Starting SLAPD..
Starting slapd:                                                    [ OK ]
Configuring LDAP client inside this host..
Checking the Webmin installation..
Webmin package not found in this host. Installing Webmin..
Retrieving http://www.webmin.com/download/rpm/webmin-current.rpm
warning: /var/tmp/rpm-tmp.XmXunn: Header V3 DSA/SHA1 Signature, key ID 11f63c51: NOKEY
Preparing... ########################################### [100%]
Operating system is CentOS Linux
    1:webmin ########################################### [100%]
Webmin install complete. You can now login to http://ldap.majimbu.net:10000/
as root with your root password.
Webmin installed.
Configuring webmin LDAP server module..
Configuring webmin LDAP client module..
Installation completed! [ OK ]
    You may need to open following port in firewall: 389, 636, 10000
Dont forget to refresh your Webmin module! Login to Webmin > Refresh Modules


4. Installation done. We now need to refresh the Webmin modules from the Webmin page. Login into Webmin > Refresh Modules:



5. Refresh the Webmin page again so the activated modules are listed in the side menu, as in the screenshot below:


You can now start creating your LDAP objects using the Webmin modules under Webmin > Servers > LDAP Server. To add the port exceptions into the firewall rules, you can use the following commands:

$ iptables -I INPUT -m tcp -p tcp --dport 389 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 636 -j ACCEPT
$ iptables -I INPUT -m tcp -p tcp --dport 10000 -j ACCEPT

Clone VM in VMware ESXi 5 using vSphere Client

I am a free VMware ESXi user. Cloning a VM is nowhere near as easy as in VMware Workstation or vCenter: as a free user, you can only manage your ESXi host directly using the vSphere client. Still, cloning saves a tremendous amount of time when you want several machines with the same configuration inside one host.

I will be cloning a VM running CentOS 6.3 64bit with VMware Tools installed, using the following details:

ESXi version: VMware ESXi 5.0.0 build 768111
ESXi host:
Main VM name: CentOS-Test
Main VM IP:
Cloned VM name: CentOS-Clone1
Cloned VM IP:

1. We will use SSH to do the cloning. You can enable the SSH service (if disabled) using the vSphere client connected to your ESXi host. Go to Configuration > Security Profile > Services > Properties > select SSH > Options > Start, as in the screenshot below:



2. Create the cloned VM with a specification similar to the main VM. One thing is different here: we DO NOT need to create a disk for this cloned VM, because we will use the cloned virtual hard disk created in step #3:



3. Now we need to clone the disk from the main VM into the cloned VM's directory. Connect to the ESXi host using SSH and run the following command:

~ # vmkfstools -i /vmfs/volumes/datastore1/CentOS-Test/CentOS-Test.vmdk /vmfs/volumes/datastore1/CentOS-Clone1/CentOS-Clone1.vmdk
Destination disk format: VMFS zeroedthick
Cloning disk '/vmfs/volumes/datastore1/CentOS-Test/CentOS-Test.vmdk'...
Clone: 100% done.

4. Once done, we need to add the virtual hard disk to the cloned VM using the vSphere client. Edit the VM Properties > Add > Hard Disk > Use an existing virtual disk > locate the hard disk as in the screenshot below > Next > Next > Finish.




5. You should then have a virtual machine properties summary like the one below:



6. Start the cloned virtual machine. Once you login into CentOS, you will notice that the network interface has been renamed to eth1:


We need to run the following commands and restart the CentOS box to bring back eth0; this renaming is simply how VMware behaves when cloning:

$ rm -Rf /etc/udev/rules.d/70-persistent-net.rules
$ init 6


7. After reboot, you should see the network interface has been changed to eth0 as screen shot below:



Done! Even though cloning this way requires more steps, it still saves a lot of time on OS installation, especially if you just want to use the VM temporarily.


CentOS 6: Install Remote Logging Server (rsyslog)

In my office network, we have a lot of small devices like routers and switches. My boss wants me to produce a report on all of our network devices for auditing purposes. To accomplish this, I need a server that runs as a logging server, accepting various types of logs from several devices. This consolidates my audit trail in one centralized location.

I will use my development server, which runs on CentOS, to receive logs from my Mikrotik router, as in the picture below:


I am using following variables:

Rsyslog OS: CentOS 6.0 64bit
Rsyslog Server IP:
Router hostname: router.mynetwork.org
Router IP:

Rsyslog Server

1. Install Rsyslog package:

$ yum install rsyslog -y

2. Make sure you have the following lines uncommented in /etc/rsyslog.conf:

$ModLoad imuxsock.so
$ModLoad imklog.so
$ModLoad imudp.so
$UDPServerRun 514
$ModLoad imtcp.so
$InputTCPServerRun 514
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
*.info;mail.none;authpriv.none;cron.none /var/log/messages
authpriv.* /var/log/secure
mail.* -/var/log/maillog
cron.* /var/log/cron
*.emerg *
uucp,news.crit /var/log/spooler
local7.* /var/log/boot.log
$AllowedSender TCP,,
$AllowedSender TCP,

3. We need to add the following rule into /etc/rsyslog.conf so that logs received from the router are written to a file called /var/log/router.log:

:fromhost-ip,isequal,""                      /var/log/router.log

There are a lot of options you can use to define your remote logging rules; refer to this page: http://www.rsyslog.com/doc/property_replacer.html
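As an illustration of those properties (a sketch of my own, not taken from the post's setup; PerHostLog is an arbitrary template name), you could write every remote client's messages into a file named after the sending host:

```
# Any message not originating from this host goes to /var/log/remote/<hostname>.log
$template PerHostLog,"/var/log/remote/%HOSTNAME%.log"
:fromhost-ip, !isequal, "127.0.0.1"    ?PerHostLog
# Discard what was just written so it does not also land in /var/log/messages
& ~
```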

4. Open firewall port 514 on TCP and UDP:

$ iptables -A INPUT -m tcp -p tcp --dport 514 -j ACCEPT
$ iptables -A INPUT -m udp -p udp --dport 514 -j ACCEPT

5. Restart Rsyslog daemon to apply the configuration:

$ service rsyslog restart

6. We also need to rotate this log file so it does not end up eating the server's disk space. Create a new text file called router under the /etc/logrotate.d/ directory:

$ vim /etc/logrotate.d/router

And add the following lines:

/var/log/router.log {
    rotate 5
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2> /dev/null` 2> /dev/null || true
    endscript
}

Router (Rsyslog Client)

1. The Mikrotik router supports remote logging. I just need to login into Winbox > System > Logging and configure Actions as in the screenshot below:


2. Next, we need to create the rules determining which logging levels we want sent to the rsyslog server. Go to Winbox > System > Logging and configure Rules as in the screenshot below:



Now the router should be sending its logs remotely to the rsyslog server, and we can check the router logs by running the following command:

$ tail -f /var/log/router.log
Jan 8 17:23:28 system,info log action changed by admin
Jan 8 17:26:09 system,info filter rule changed by admin
Jan 8 17:26:09 system,info filter rule changed by admin
Jan 8 17:26:23 system,info PPP AAA settings changed by admin
Jan 8 17:26:40 system,info L2TP Server settings changed by admin
Jan 8 17:26:49 system,info filter rule changed by admin
Jan 8 17:26:50 system,info filter rule changed by admin



CentOS 6: Install VPN PPTP Client – The Simple Way

I have a PPTP server running on a Mikrotik RouterBOARD, and I need to connect one of my CentOS 6.3 boxes to this VPN to retrieve some information from an internal server. The VPN account has already been created on the PPTP server, so this post will just show how to connect from a CLI-only CentOS box.

I will be using following variables:

Client OS: CentOS 6.3 64bit
PPTP Server:
Username: myvega
Password: CgK888ar$

1. Install PPTP using yum:

$ yum install pptp -y

2. Add the username and password inside /etc/ppp/chap-secrets:

myvega     PPTPserver     CgK888ar$    *

The format is: [username][space][server name][space][password][space][ip address allowed]
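The entry can also be assembled from shell variables. A small sketch using the values from this post; the echo only previews the line, and the commented command shows where it would be appended (as root):

```shell
# Build a chap-secrets entry: username, server name, password, allowed IPs.
USER="myvega"
REMOTE="PPTPserver"
PASS='CgK888ar$'        # single quotes keep the $ literal
ALLOWED="*"

LINE="$USER $REMOTE $PASS $ALLOWED"
echo "$LINE"
# As root: echo "$LINE" >> /etc/ppp/chap-secrets
```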

3. Create a configuration file called vpn.myserver.org under the /etc/ppp/peers directory using a text editor:

$ vim /etc/ppp/peers/vpn.myserver.org

And add the following lines:

pty "pptp --nolaunchpppd"
name myvega
remotename PPTPserver
file /etc/ppp/options.pptp
ipparam vpn.myserver.org

4. Register the ppp_mppe kernel module:

$ modprobe ppp_mppe

5. Make sure that in /etc/ppp/options.pptp, the following options are not commented out:


6. Connect to the VPN by executing following command:

$ pppd call vpn.myserver.org

Done! You should be connected to the VPN server now. Let's check our VPN interface status:

$ ip a | grep ppp
3: ppp0:  mtu 1456 qdisc pfifo_fast state UNKNOWN qlen 3
inet peer scope global ppp0

If you face any problems, look into /var/log/messages for any errors regarding the pppd service:

$ tail -f /var/log/messages | grep ppp
Dec 4 04:56:48 localhost pppd[1413]: pppd 2.4.5 started by root, uid 0
Dec 4 04:56:48 localhost pptp[1414]: anon log[main:pptp.c:314]: The synchronous pptp option is NOT activated
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 1 'Start-Control-Connection-Request'
Dec 4 04:56:48 localhost pppd[1413]: Using interface ppp0
Dec 4 04:56:48 localhost pppd[1413]: Connect: ppp0  /dev/pts/1
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:739]: Received Start Control Connection Reply
Dec 4 04:56:48 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:773]: Client connection established.
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_rep:pptp_ctrl.c:251]: Sent control packet type is 7 'Outgoing-Call-Request'
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:858]: Received Outgoing Call Reply.
Dec 4 04:56:49 localhost pptp[1420]: anon log[ctrlp_disp:pptp_ctrl.c:897]: Outgoing call established (call ID 0, peer's call ID 137).
Dec 4 04:56:49 localhost pppd[1413]: CHAP authentication succeeded
Dec 4 04:56:49 localhost pppd[1413]: MPPE 128-bit stateless compression enabled
Dec 4 04:56:50 localhost pppd[1413]: local IP address
Dec 4 04:56:50 localhost pppd[1413]: remote IP address

To disconnect the VPN, just kill the pppd process:

$ killall pppd

MailMe: Simple Bash to Notify Your Command Status via Email

I often have the problem of forgetting to check on a copy or download in progress on the server. This gave me the idea to create a script that notifies me via email once the command has executed and completed.

For example, I often download big installer files, which makes me constantly check the download progress. I just need an alert to be sent to me once the command completes, whether it failed or succeeded. Another case is a big migration: I need to copy the whole /home directory to an external hard disk, which will take days to complete. Using MailMe will definitely improve my efficiency; I just need to run the command and wait for the notification email. That's all.

1. Install sendmail and mailx using yum. Mailx is required; you can use Postfix or any other SMTP server to send the email instead of sendmail:

$ yum install sendmail mailx -y

2. Enable and start the sendmail service:

$ chkconfig sendmail on
$ service sendmail start

3. Download the script and integrate it into the environment. We need to place it under the /usr/local/bin directory:

$ wget http://blog.secaserver.com/files/mailme -P /usr/local/bin
$ chmod 755 /usr/local/bin/mailme

4. We need to change the MAILTO value so the script sends the notification to your email automatically. Open the script using a text editor:

$ vim /usr/local/bin/mailme

And change the following line:

Done. Now you can integrate mailme into your commands. Examples as below:

– Download the CentOS 6.3 64bit ISO:

$ mailme 'wget http://centos.ipserverone.com/centos/6.3/isos/x86_64/CentOS-6.3-x86_64-bin-DVD1.iso'

– Rsync the whole backup directory to another server:

$ mailme 'rsync -avzP /backup/*.tar.gz [email protected]:/backup'

Once the command has completed, you will get a simple email notification like the one below:

Subject: MailMe Command Notification: test.servering.com
Command: wget http://wordpress.org/latest.tar.gz
Date/Time: Mon Oct 1 11:14:54 MYT 2012
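The core idea of the wrapper can be sketched in a few lines of BASH. This is a hypothetical re-implementation for illustration, not the downloaded script itself; swap the echo lines for something like mail -s "MailMe Command Notification" "$MAILTO" once mailx is working:

```shell
# Run a command, then report whether it succeeded or failed.
notify_when_done() {
    "$@"                                   # execute the wrapped command
    local status=$?
    local result="succeeded"
    [ $status -ne 0 ] && result="failed (exit code $status)"
    # The real script would pipe this into mail(1) instead of printing it
    echo "Command: $*"
    echo "Result: $result on $(hostname) at $(date)"
    return $status
}

notify_when_done true
```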

CentOS: Install and Configure Amanda Backup Server

I am going to set up an Amanda backup server in our office to enable network backups of all our servers located in different geographical areas. The idea is as below:


All servers are running CentOS 6 64bit with iptables and SELinux turned off.

Install Amanda Server

1. Install amanda packages using yum:

$ yum install -y amanda*

2. Create the configuration file. I am going to name this backup project ServerNetBackup. We need to create a directory named after the project, and all configuration files for the project will live underneath it:

$ mkdir /etc/amanda/ServerNetBackup

3. Create the core configuration file, amanda.conf:

$ vim /etc/amanda/ServerNetBackup/amanda.conf

And paste the following lines:

org "ServerNetBackup"                 # Organization name for reports
mailto "[email protected]"        # Email address to receive reports
netusage 10000 Kbps                   # Bandwidth limit, 10M
dumpcycle 1 week                      # Backup cycle is 7 days
runspercycle 7                        # Run 7 times every 7 days
tapecycle 15 tapes                    # Dump to 15 different tapes during the cycle
tpchanger "chg-disk"                  # The tape-changer glue script
changerfile "/etc/amanda/ServerNetBackup/changer"     # The tape-changer file
tapedev "file://central_backup/ServerNetBackup/slots" # The no-rewind tape device to be used
tapetype HARDDISK                                     # Define the type of tape
infofile "/etc/amanda/ServerNetBackup/curinfo"        # Database directory
logdir "/etc/amanda/ServerNetBackup/logs"             # Log directory
indexdir "/etc/amanda/ServerNetBackup/index"          # Index directory

define tapetype HARDDISK {            # Define our tape behaviour
    length 100000 mbytes              # Every tape is 100GB in size
}

amrecover_changer "changer"           # Changer for amrecover

define dumptype global {              # The global dump definition
    maxdumps 2                        # The maximum number of backups run in parallel
    estimate calcsize                 # Estimate the backup size before dump
    holdingdisk yes                   # Dump to temp disk (holdingdisk) before backup to tape
    index yes                         # Generate index. For restoration usage
}

define dumptype root-tar {            # How to dump root's directory
    global                            # Include global (as above)
    program "GNUTAR"                  # Program name for compress
    comment "root partitions dumped with tar"
    compress none                     # No compress
    index                             # Index this dump
    priority low                      # Priority level
}

define dumptype user-tar {            # How to dump user's directory
    root-tar                          # Include root-tar (as above)
    comment "user partitions dumped with tar"
    priority medium                   # Priority level
}

define dumptype comp-user-tar {       # How to dump & compress user's directory
    user-tar                          # Include user-tar (as above)
    compress client fast              # Compress in client side with less CPU (fast)
}

Configure Backup Location

1. Prepare the directory to store all backups:

$ mkdir -p /central_backup/ServerNetBackup/slots

2. Assign correct permission to user amandabackup for the configuration directory and backup directory:

$ chown amandabackup.disk /central_backup -Rf
$ chown amandabackup.disk /etc/amanda/ServerNetBackup -Rf

3. Login as user amandabackup:

$ su - amandabackup

4. Create the virtual tapes. This is where the backup files will be stored. We need to create 15 slots, matching the tapecycle setting:

$ for n in `seq 1 15`; do mkdir /central_backup/ServerNetBackup/slots/slot${n}; done

5. We then need to label all the slots:

$ for n in `seq 1 15` ; do amlabel ServerNetBackup ServerNetBackup-${n} slot ${n}; done

6. Create all required directories as defined in the configuration file:

$ mkdir /etc/amanda/ServerNetBackup/curinfo
$ mkdir /etc/amanda/ServerNetBackup/logs
$ mkdir /etc/amanda/ServerNetBackup/index

Configure Service and What to Backup

1. We need to define what to back up in a file called disklist. As the amandabackup user, create this file:

$ su - amandabackup
$ vim /etc/amanda/ServerNetBackup/disklist

And add the following lines:

sv101.krispykream.net /home/webby/public_html   comp-user-tar
gogogo.my-server.org  /etc                      root-tar

Notes: Make sure each hostname is an FQDN and can be resolved to an IP. Adding the host entries into /etc/hosts is recommended.

2. Exit from the amandabackup user and get back to the root user:

$ exit

3. Enable amanda service in xinetd.d directory:

$ vim /etc/xinetd.d/amanda

And change the following line from “yes” to “no”:

disable = no

4. Enable on boot and restart xinetd service:

$ chkconfig xinetd on
$ service xinetd restart

5. Check whether the Amanda server is running properly by using the following command:

$ netstat -a | grep amanda
udp        0          0       *:amanda                *:*

If you see a result like the above, the Amanda server is ready to serve!


Install Amanda Backup Client

1. Login to the client server and install the required packages for Amanda using yum:

$ yum install -y amanda amanda-client

2. As the amandabackup user, add the following lines into /var/lib/amanda/.amandahosts to specify where the Amanda backup server is:

$ su - amandabackup
$ vim /var/lib/amanda/.amandahosts

And make sure the values are as below:

office.servering.com amandabackup amdump
localhost amandabackup amdump
localhost.localdomain amandabackup amdump

3. Exit from the amandabackup user and return to the root user:

$ exit

4. Enable amanda service in xinetd.d directory:

$ vim /etc/xinetd.d/amanda

And change the following line from “yes” to “no”:

disable = no

5. Enable on boot and start the xinetd service:

$ chkconfig xinetd on
$ service xinetd start

6. Add an entry in /etc/hosts to define backup server IP by adding following line:      office.servering.com

7. In some cases, you may need to change the permissions of the directory that you want to back up. For example, I need to allow the amandabackup user to access the directory /home/webby/public_html to create the backup:

As the root user, change the permissions of the directory:

$ chmod 755 /home/webby

Run the Backup Process

1. Now go back to the Amanda server and check our configuration as the amandabackup user:

$ su - amandabackup
$ amcheck ServerNetBackup

You should see the output similar to this:

Client check: 2 host checked in 2.070 seconds.  0 problems found.

2. If no errors are found, you can start the backup process immediately by running the following command:

$ amdump ServerNetBackup

Or we can automate this process using a cron job. Run the following command as the amandabackup user:

$ crontab -e

And add the following line:

45 0 * * 2-6 /usr/sbin/amdump ServerNetBackup
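For reference, the five schedule fields in that entry read as: minute 45, hour 0, any day of month, any month, days 2-6 (Tuesday through Saturday), i.e. amdump fires at 00:45 each night from Tuesday to Saturday:

```
# min hour day-of-month month day-of-week  command
  45  0    *            *     2-6          /usr/sbin/amdump ServerNetBackup
```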

3. As root user, reload the crond service to activate this job:

$ service crond reload

When the backup process completes, you should receive an email with a backup report telling you the backup location and a process summary. I will cover the Amanda restoration process in the next post!

Update: I updated this post on 5th Feb 2013 to use the yum repository instead of the packages from Zmanda.

MySQL – Recover Data Using mysqlbinlog

Our company just launched a new shopping cart web application, currently in the user acceptance phase. Last night, they discovered a bug which caused payments not to synchronize with the sales order data. They assumed it was a faulty transaction and proceeded to delete the data to synchronize the payments back.

Now they want me to recover only the deleted data related to tblpayment between 9 PM and 11 PM. The worst part is that I had not activated any MySQL backup yet, because I thought this was just for testing. Luckily, I am using InnoDB (XtraDB) and had binary logging activated. This could save my day!

I am using following variables:

OS: CentOS 6.2 64bit
MySQL: Percona XtraDB Cluster version 5.5.24, wsrep_23.6.r341
Database: webshop
Tables: tblpayment

1. Let's see if we have the binary log active on our server. Login into the MySQL console:

mysql> SHOW BINARY LOGS;
| Log_name         | File_size  |
| mysql-bin.000022 | 12497220   |
| mysql-bin.000023 | 828371469  |

2. Before we start recovering, it is good to flush the binary logs and create a full backup:

$ mysqladmin -u root -p flush-logs
$ mysqldump -u root -p webshop > webshop.sql

3. You should see that a new binary log has been generated as below:

mysql> SHOW BINARY LOGS;
| Log_name         | File_size  |
| mysql-bin.000022 | 12497220   |
| mysql-bin.000023 | 828371469  |
| mysql-bin.000024 | 4280       |

4. There are several binlog formats, and I am using the ROW format, as shown by my global variables:

mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_format';
| Variable_name | Value |
| binlog_format | ROW   |
1 row in set (0.00 sec)

5. We will use a tool shipped with MySQL/Percona called mysqlbinlog. This application will help us read the binary log. We know that all of last night's database transactions, including the moment we want to recover, are logged inside mysql-bin.000023. So we will use mysqlbinlog to read mysql-bin.000023 between 9 PM and 11 PM as below:

$ mysqlbinlog --start-datetime="2012-07-27 21:00:00" --stop-datetime="2012-07-27 23:00:00" mysql-bin.000023 > recovery.txt

6. Since I am using the ROW binlog format, mysqlbinlog will print the row events as base64-encoded data. I need something human-readable to analyze so I can recover the data from just before it was deleted from the database. So we need to add decoding options to the previous command:

$ mysqlbinlog --start-datetime="2012-07-27 21:00:00" --stop-datetime="2012-07-27 23:00:00" mysql-bin.000023 --base64-output=decode-rows --verbose > recover_decoded.txt
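To locate the delete event quickly inside the decoded dump, grep works well. This is a sketch using a small inline sample that mimics the decoded format (the real file is recover_decoded.txt from the command above):

```shell
#!/bin/sh
# Create a tiny sample standing in for the decoded binlog dump,
# then print each DELETE event with the "# at" position lines above it.
cat > /tmp/sample_decoded.txt <<'EOF'
# at 467634
# at 467711
### DELETE FROM webshop.tblpayment
### @1=15744
# at 467972
EOF
grep -n -B2 'DELETE FROM' /tmp/sample_decoded.txt
```

The -B2 option keeps the two lines of context before each match, which is where the binlog positions appear.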

7. Now we have something readable. Using a text editor, open recover_decoded.txt and locate the position of the delete transaction. In my case, I found mine as below:

#120727 22:35:02 server id 1 end_log_pos 467634 Query thread_id=25395 exec_time=0 error_code=0
SET TIMESTAMP=1343392502/*!*/;
# at 467634
# at 467711
#120727 22:35:02 server id 1 end_log_pos 467711 Table_map: `webshop`.`tblpayment` mapped to number 75
#120727 22:35:02 server id 1 end_log_pos 467972 Delete_rows: table id 75 flags: STMT_END_F
### DELETE FROM webshop.tblpayment
### @1=15744
### @2=0
### @3=0
### @4=1343392202
# at 467972
#120727 22:35:02 server id 1 end_log_pos 467999 Xid = 12984
# at 467999

From the above log, I can tell that the first column (@1=15744) is the paymentID which has been deleted by my developer. So I just need to find the INSERT query for paymentID 15744, and I found it as below:

#120727 21:12:05 server id 1 end_log_pos 458079 Query thread_id=25395 exec_time=0 error_code=0
SET TIMESTAMP=1343392502/*!*/;
# at 458079
# at 458156
#120727 21:12:05 server id 1 end_log_pos 458156 Table_map: `webshop`.`tblpayment` mapped to number 75
#120727 21:12:05 server id 1 end_log_pos 458417 Write_rows: table id 75 flags: STMT_END_F
### INSERT INTO webshop.tblpayment
### SET
### @1=15744
### @2=0
### @3=0
### @4=1343392202
# at 458417
#120727 21:12:05 server id 1 end_log_pos 458444 Xid = 12985
# at 458444

8. Now I have located the INSERT statement's position in the binlog: it runs from position 458079 to 458444. Let's insert this row back into the database by running the following command:

$ mysqlbinlog --start-position=458079 --stop-position=458444 mysql-bin.000023 | mysql -u root -p

Done! Let's verify that the missing payment has been inserted back into the database:

mysql> SELECT paymentID FROM webshop.tblpayment WHERE paymentID=15744;
| paymentID |
| 15744     |
1 row in set (0.00 sec)

What I learnt today:

  • Do backups no matter what kind of environment your application runs in (development, testing or production).
  • Enable binary logging. It surely increases your chances of recovering the database!

BASH: Some of My Looping Command Collections

Here are several of my BASH command collections related to looping which I frequently use. This list will always be updated for reference and as a knowledge base.

1. Copy the .htaccess file under /home/website1/public_html to all directories and sub-directories under /home/website2/public_html, excluding .svn directories:

cd /home/website2/public_html
for i in $(find . -type d | egrep -v '\.svn'); do cp /home/website1/public_html/.htaccess "$i"; done
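Note that a $(find ...) loop splits on whitespace, so directory names containing spaces will break it. A space-safe alternative uses find -exec directly (a sketch demonstrated on a throwaway tree; the temporary paths stand in for the real ones above):

```shell
#!/bin/sh
# Build a throwaway source file and destination tree, including a
# directory with a space in its name and a .svn directory to skip.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/.htaccess"
mkdir -p "$dst/sub dir/.svn" "$dst/other"
# Copy .htaccess into every directory except .svn ones; find hands each
# directory to cp directly, so spaces in names are handled correctly.
find "$dst" -type d -not -path '*/.svn*' -exec cp "$src/.htaccess" {} \;
ls "$dst/sub dir/.htaccess" "$dst/other/.htaccess"
```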

2. Append a .bak extension to all files and directories in the current path:

for i in *; do mv "$i" "$i.bak"; done

3. Remove the .bak extension from all files and directories in the current path (undo of command #2):

for i in *; do mv "$i" "$(basename "$i" .bak)"; done

4. Return the number of files in each directory and sub-directory:

find . -type f -execdir pwd \; | sort | uniq -c
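As a quick sanity check of command #4, here is the same pipeline run on a small temporary tree (a sketch; the directory names a and b are made up):

```shell
#!/bin/sh
# Two files under a/, one under b/ -- the pipeline should report
# counts of 2 and 1 next to the corresponding directory paths.
d=$(mktemp -d)
mkdir -p "$d/a" "$d/b"
touch "$d/a/1" "$d/a/2" "$d/b/1"
cd "$d"
find . -type f -execdir pwd \; | sort | uniq -c
```

-execdir runs pwd from inside each file's directory, so the pipeline counts how many files share each directory.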

5. Generate 24 files of 10 MB each under the current directory:

for i in $(seq 1 1 24); do dd bs=1024 count=10000 if=/dev/zero of=file.$i; done

6. Generate some random data for database foo and table bar in 3 fields (val1,val2,val3):

mysql -e "INSERT INTO foo.bar (val1, val2, val3) VALUES ((SELECT floor(rand() * 10) as randNum), (SELECT floor(rand() * 10) as randNum),(SELECT floor(rand() * 10) as randNum));"


Your shares and opinions are welcome!

Apache: Kill Certain httpd/PHP Processes in DSO

Our development team is working on a new project which involves many long-running scripts executed through Apache. These scripts are used to migrate and convert the old database to the new database fields and formats. Most of the scripts are still under development, which requires me to monitor the processes and terminate them when required.

One problem when you run Apache and PHP in DSO mode (which is enabled by default when installing using yum) is that we cannot monitor and see the PHP process that executes the script. PHP runs as a dynamic shared object under Apache, so the only process you can see on the server is httpd.

We will use the following example to illustrate what happens:

OS: CentOS 6.2 64bit
PHP script URL: http://develteam.org/migration/convert.php
PHP script directory: /home/devel/public_html/migration

If you are running PHP under CGI, suPHP or FastCGI in Apache, you can easily see which PID holds the process, and we can kill the process immediately. Example as below:

$ ps aux | grep convert.php | grep -v grep
devel  21003    29.0    0.4    217472    36080   ?    S   13:56   0:00   /usr/bin/php /home/devel/public_html/migration/convert.php

The PID (column 2) is 21003, and we can use the kill command to terminate the process. But when you configure PHP to run under DSO, the same command will produce nothing, as below:

$ ps aux | grep php | grep -v grep

In this case, we need to get some help from another application called lsof (list open files). This command lists every open file and directory used by each PID. Since we know that the PHP script is located under the /home/devel/public_html/migration directory, we can use this path to filter the lsof output:

$ lsof | grep /home/devel/public_html/migration
httpd   32117    nobody   cwd    DIR      8,5    12288     40142612    /home/devel/public_html/migration

From the output, we can see the PID (column 2) of the httpd process that has this directory open. This indicates the process that we need to terminate. I will then use the kill command to terminate the httpd process:

$ kill -9 32117

To terminate all processes returned by lsof, we can use awk to extract only column 2 (the PID) and run the kill command accordingly:

$ kill -9 `lsof | grep /home/devel/public_html/migration | awk '{print $2}'`
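The awk step can be checked in isolation. Here is a sketch that extracts unique PIDs (column 2) from some made-up lsof-style lines matching the output shown above; the sort -u de-duplication is an addition, since one process can hold several files open in the same directory:

```shell
#!/bin/sh
# Pull column 2 (the PID) from lsof-style output and de-duplicate it;
# the resulting list is what would be passed to kill.
cat <<'EOF' | awk '{print $2}' | sort -u
httpd 32117 nobody cwd DIR 8,5 12288 40142612 /home/devel/public_html/migration
httpd 32117 nobody txt REG 8,5 4096  40142613 /home/devel/public_html/migration/convert.php
httpd 32120 nobody cwd DIR 8,5 12288 40142612 /home/devel/public_html/migration
EOF
# prints:
# 32117
# 32120
```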

Now our development team can modify the scripts and start the PHP processes again for their next test.