How to Fix ‘Too many open files’ Problem

I have been facing the following problem when executing Percona XtraBackup on my CentOS 6.3 box:

xtrabackup_55 version 2.1.3 for Percona Server 5.5.16 Linux (x86_64) 
(revision id: 608) 
xtrabackup: uses posix_fadvise(). 
xtrabackup: cd to /var/lib/mysql 
xtrabackup: Target instance is assumed as followings. 
xtrabackup: innodb_data_home_dir = ./ 
xtrabackup: innodb_data_file_path = ibdata1:100M:autoextend 
xtrabackup: innodb_log_group_home_dir = ./ 
xtrabackup: innodb_log_files_in_group = 2 
xtrabackup: innodb_log_file_size = 67108864 
xtrabackup: using O_DIRECT 
130619 12:57:36 InnoDB: Warning: allocated tablespace 2405, old maximum 
was 9 
130619 12:57:37 InnoDB: Operating system error number 24 in a file 
operation. 
InnoDB: Error number 24 means 'Too many open files'. 
InnoDB: Some operating system error numbers are described at 
InnoDB: 
http://dev.mysql.com/doc/refman/5.5/en/operating-system-error-codes.html 
InnoDB: Error: could not open single-table tablespace file 
InnoDB: We do not continue the crash recovery, because the table may become 
InnoDB: corrupt if we cannot apply the log records in the InnoDB log to it. 
InnoDB: To fix the problem and start mysqld:

Linux/UNIX sets soft and hard limits on the number of file handles and open files. By default the value is quite low, as you can check using the following command:

$ ulimit -n
1024

There are several ways to increase the open files limit:

1. Set the limit using the ulimit command

$ ulimit -n 8192

This is a temporary solution, as it only raises the limit for the current login session. Once you log out and log in again, the value reverts to the default.

2. Define it permanently in /etc/security/limits.conf

To make it permanent, define the values (soft and hard limits) in /etc/security/limits.conf by adding the following lines:

* soft nofile 8192
* hard nofile 8192

The soft limit is the value that the kernel enforces for the corresponding resource, while the hard limit acts as a ceiling for the soft limit. Reboot the server to apply the changes. Or, if you do not want to reboot, add the following line to the respective user's .bashrc file (root, in my case):

$ echo "ulimit -n 8192" >> ~/.bashrc

You will then need to log in to the session again to see the changes.
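
To confirm that a running process has actually picked up the new limit, you can read its limits straight from /proc. A quick sketch against mysqld, assuming that is the process you care about:

$ cat /proc/$(pidof -s mysqld)/limits | grep "open files"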

If the problem still persists, you might need to raise the limit even higher and retry the failed process.

Warning

Do not set the value to unlimited, as it can cause PAM to fail, leaving you unable to SSH or console into the box with the following error:

Apr 19 09:22:15 rh02 sshd[5679]: error: PAM: pam_open_session(): Permission denied

This issue has been reported in this bug report.

Further reading:

http://ss64.com/bash/ulimit.html
http://ss64.com/bash/limits.conf.html

Linux: Rsync using Web Interface

We have just launched a new website which is deployed directly from our development server. There are constant changes to the source code, as our programmers always need to debug issues reported by our users. The problem I face frequently is that every time they want to sync new PHP code, I need to sync the files for them manually. My boss does not allow anyone except himself to have FTP access to the server.

I am using rsync to sync the files from the development server to the live server. Both servers have an identical file path for the PHP code: the Apache document root is /home/mywebs/public_html on the production server as well as the development server. So I needed a tool to solve this problem. Instead of me doing this for them, why don't they sync the files to the live server themselves?

To achieve this, I will be using rsync with Webmin and Usermin, web-based interfaces for Unix system administration. Both servers run CentOS 6.2 64bit. My server architecture and variables are as below:


Notes: All steps below should be completed on the development (source) server. Nothing needs to be set up on the production (target) server.

1. Download and install Webmin:

$ cd /usr/local/src
$ wget http://prdownloads.sourceforge.net/webadmin/webmin-1.590-1.noarch.rpm
$ rpm -Uhv webmin-1.590-1.noarch.rpm

2. Download and install Usermin:

$ cd /usr/local/src
$ wget http://cdnetworks-kr-1.dl.sourceforge.net/project/webadmin/usermin/1.510/usermin-1.510-1.noarch.rpm
$ rpm -Uhv usermin-1.510-1.noarch.rpm

3. Install rsync using yum:

$ yum install -y rsync

4. Open the Webmin, Usermin and rsync ports in iptables. Using a text editor, add the following lines to /etc/sysconfig/iptables before any REJECT rules:

-A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 20000 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 873 -j ACCEPT

5. Save and restart iptables:

$ service iptables restart

6. Open Webmin in a web browser. In my setup, the URL is http://211.43.12.12:10000. Log in as the root user and navigate to Webmin > Others > Custom Commands > Create a new custom command.

7. Add the required information as highlighted below:
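
The custom command itself boils down to a one-line rsync push from the development document root to the live server. A sketch using the variables above (--delete is optional; it removes files on the target that were deleted on the source):

rsync -avz --delete /home/mywebs/public_html/ mywebs@server1.mywebs.biz:/home/mywebs/public_html/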

8. Now we need to create a user for the programmer team. Go to Webmin > Webmin Users > Create a new Webmin user.

Under the “Available Webmin modules” section, tick “Custom Commands” as in the screenshot below:

9. Edit the user again. We only want them to be able to choose files under the /home/mywebs directory. Go to Webmin > Webmin Users > choose user ‘developer’ > Permissions for all modules and choose as below:

10. Let's create the user developer. Go to Webmin > System > Users and Groups > Create a new user and fill in the required information as in the screenshot below:

11. As the last step, we need to set up shared SSH keys between these 2 servers so rsync can run without password authentication for user mywebs on the destination server:

$ su - mywebs
$ ssh-keygen -t dsa # just press enter for all questions
$ ssh-copy-id -i ~/.ssh/id_dsa mywebs@server1.mywebs.biz
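
Once done, verify that you can reach the destination server without being prompted for a password:

$ ssh mywebs@server1.mywebs.biz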


Done. Now ask your developers to access Usermin at http://211.43.12.12:20000 and go to Usermin > Others > Custom Commands. The rsync script that we created before is now available to the developer. They can use this feature to sync the files to the live server whenever they want:

Fix Windows MBR using Ubuntu Live CD (USB)

I have a user who ran into problems after removing a CentOS partition (using a partition manager) on a PC dual-booting with Windows Vista. Afterwards, Windows Vista was simply unable to boot due to a missing MBR. Since I no longer have a Windows Vista installation disc with me, I needed some other way to fix the MBR.

Luckily I have an Ubuntu Live CD ISO which I downloaded several days ago. The idea is to boot into the Live CD, use a tool called ms-sys in Ubuntu, and fix the MBR right away.

1. Get the Ubuntu Desktop Live CD from the Ubuntu download page here: http://www.ubuntu.com/download.

2. Download UNetbootin from here. We will use this application to write our ISO to the flash drive.

3. Prepare our flash drive. Format it with the FAT32 file system. Since I am using Windows 7, I just right-click the drive and click ‘Format’.

4. Launch UNetbootin and select ‘Diskimage’. Locate the ISO file on your PC and click OK as in the screenshot below:


5. Download ms-sys from http://ms-sys.sourceforge.net/. I will use the most stable version, which is 2.2.1. Extract it and put it on the flash drive together with the Ubuntu Live CD, as in the screenshot below:


6. Now we have enough tools to start the recovery process. Go to the problematic PC and boot from USB. Select “Try Ubuntu without installing” and make sure the Ubuntu Live CD boots all the way to the desktop.


7. Open a terminal: go to the Dash icon and type “terminal” to open it. We need to copy the ms-sys directory onto the local disk and build it:

$ cp /cdrom/ms-sys ~ -Rf
$ cd ~/ms-sys
$ sudo make

8. Run the following command to analyze your disk partitions:

$ sudo fdisk -l | grep /dev
Disk /dev/sda: 160.0 GB, 160041885696 bytes
/dev/sda1              16    12288527     6144256   12  Compaq diagnostics
/dev/sda2   *    12288528    96175455    41943464    7  HPFS/NTFS/exFAT
/dev/sda3        96175456   312577823   108201184    7  HPFS/NTFS/exFAT
Disk /dev/sdb: 4007 MB, 4007657472 bytes
/dev/sdb1   *         128     7827455     3913664    b  W95 FAT32

The Windows installation is located on the disk that has the NTFS partitions. In this case, it is /dev/sda.

9. Navigate to the bin directory under ms-sys as below and install the MBR using the following command:

$ cd ~/ms-sys/bin
$ sudo ./ms-sys --mbrvista /dev/sda
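
To double-check the result before rebooting, note from the usage text below that running ms-sys against the device with no options only inspects the current boot record without writing anything:

$ sudo ./ms-sys /dev/sda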

Done! You can now reboot the PC and remove the USB drive. Your Windows Vista should be able to load after that.

Notes

Other options that can be used with ms-sys:

$ ./ms-sys 
Usage:
	./ms-sys [options] [device]
Options:
    -1, --fat12     Write a FAT12 floppy boot record to device
    -2, --fat32nt   Write a FAT32 partition NT boot record to device
    -3, --fat32     Write a FAT32 partition DOS boot record to device
    -4, --fat32free Write a FAT32 partition FreeDOS boot record to device
    -5, --fat16free Write a FAT16 partition FreeDOS boot record to device
    -6, --fat16     Write a FAT16 partition DOS boot record to device
    -l, --wipelabel Reset partition disk label in boot record
    -p, --partition Write partition info (hidden sectors, heads and drive id)
                    to boot record
    -H, --heads <n> Manually set number of heads if partition info is written
    -7, --mbr7      Write a Windows 7 MBR to device
    -i, --mbrvista  Write a Windows Vista MBR to device
    -m, --mbr       Write a Windows 2000/XP/2003 MBR to device
    -9, --mbr95b    Write a Windows 95B/98/98SE/ME MBR to device
    -d, --mbrdos    Write a DOS/Windows NT MBR to device
    -s, --mbrsyslinux    Write a public domain syslinux MBR to device
    -z, --mbrzero   Write an empty (zeroed) MBR to device
    -f, --force     Force writing of boot record
    -h, --help      Display this help and exit
    -v, --version   Show program version
    -w, --write     Write automatically selected boot record to device
 
    Default         Inspect current boot record

This tutorial is also applicable to fixing the MBR for other Windows platforms, as listed in the command options above.

Install OpenFiler from USB Drive

We just received a new storage server from DELL which will be used to host a web server cluster. We will use Openfiler, a free NAS/SAN operating system, to manage our RAID-10 storage.

The problem we have now is that our storage server has no optical drive and we do not have any external optical drive available here in the office. Alternatively, we can use a USB flash drive, provided our server is able to boot from USB.

Server: DELL PowerEdge R510
OS version: Openfiler 2.99 64bit
USB flash drive: /dev/sdb
RAID 10 virtual disk: /dev/sda

Preparing the Flash Drive

1. Download the ISO from here onto your local PC. In my case, I downloaded the x86_64 distribution ISO.

2. Download UNetbootin from here. We will use this application to write our ISO to the flash drive.

3. Prepare our flash drive. Format it with the FAT32 or FAT file system. Since I am using Windows 7, I just right-click the drive and click ‘Format’.

4. Launch UNetbootin and select ‘Diskimage’. Locate the ISO file on your PC and click OK as in the screenshot below:

5. Once ready, we need to copy the whole ISO into a directory called ‘root’. Navigate to your USB drive and create a directory ‘root’ at the top level:


Once the copy completes, verify that the ISO exists, as in the screenshot below:


Installing into the Server

1. Plug the drive into the server’s USB port and press F11 to show the boot options as below:

2. Accept the default values until you reach the installation method page. Choose “Hard Drive”, select /dev/sdb1 (our flash drive) and enter “root/” (without quotes) so the installer can find the Openfiler ISO file (which we saved earlier):

3. The installer should now load properly and you can proceed with the Openfiler installation wizard. When you reach the Bootloader Configuration setting, select “/dev/sda1    First sector of boot partition” as in the screenshot below:

4. Proceed with the installation wizard until it finishes. A reboot is required after the installation completes. REMOVE THE USB DRIVE AS WELL!


Post-Installation Configuration

1. By default, Openfiler will boot to “Other” because this is the disk partition that we installed from (see the screenshot below). On the first boot after installation, make sure to select the Openfiler kernel manually and press Enter:

2. After sysinit completes, you should see the Openfiler login prompt. Log in as the root user and open this text file:

$ vi /etc/bootloader.conf

And change the following line from this:

default other0

To this:

default 2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64
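
If you prefer a one-liner instead of editing the file manually, the same change can be made with sed; a sketch, assuming your kernel entry matches the one above:

$ sed -i 's/^default other0/default 2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64/' /etc/bootloader.conf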

3. Save the file and run the following command to apply the new bootloader configuration:

$ bootman

If you see an error, ignore it. The error indicates that /dev/sdb1 (the USB flash drive) no longer exists, which is correct.

Done. You can reboot once more to make sure that the bootloader automatically selects Openfiler instead of another device when booting.

Apache: Create/Mount Same Identical Directory in Different Path

One of our web developers requires 2 directories to always be identical, meaning whatever files are contained in directory ‘a’ will also appear in directory ‘b’. From the server and operating system point of view, this can be achieved using several methods:

  • Use symbolic link
  • Use mount bind
  • Use bindfs

Each method has advantages and disadvantages, which will be explained accordingly. I will be using the following variables:

OS: CentOS 6 64bit
Document root: /home/user/public_html
Directory #1 (reference): /home/user/public_html/system1/filesharing/
Directory #2 (follower): /home/user/public_html/system2/filesharing/

Method 1: Symbolic Link

1. Before you can use symlinks in Apache, you need to allow the functionality in the Apache configuration file. Add the following line to /etc/httpd/conf/httpd.conf (this affects the global configuration):

Options +FollowSymLinks -SymLinksIfOwnerMatch

Alternatively, you can add the following line to a .htaccess file in the user’s public_html directory:

Options +FollowSymLinks -SymLinksIfOwnerMatch

This requires AllowOverride to be enabled in httpd.conf, as below:

AllowOverride ALL
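
For reference, both directives normally live inside the <Directory> block covering the document root; a minimal sketch of how that section of httpd.conf might look:

<Directory /home/user/public_html>
    Options +FollowSymLinks -SymLinksIfOwnerMatch
    AllowOverride All
</Directory>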

2. Restart the Apache server for the new configuration to be loaded:

$ /etc/init.d/httpd restart

3. Navigate to the secondary directory and create a symbolic link:

$ cd /home/user/public_html/system2
$ ln -s ../system1/filesharing filesharing

This virtually maps the filesharing directory under system1 into the system2 directory using a relative path.

Advantages:

  • A symlink can be created at the user level, as long as the user has write permission to the current directory.
  • You can use the PHP symlink function to create a symlink.
  • Deleting the follower directory does not delete the reference directory; it only removes the symbolic link.
  • You can use relative paths.

Disadvantages:

  • By default Apache turns the above option off, so you might see the following common error:
     Symbolic link not allowed or link target not accessible
  • Anyone can remove the symlink easily. For example, user A creates a symlink to user B’s folder, and user B can then remove the symlink without user A’s acknowledgement.
  • Symlinking is one of the most popular methods for attackers to browse around directories on a server. Whichever directory they get into, a symlink will work as long as they have write permission to that directory, usually /tmp. For example, from the /tmp folder I could symlink to /var/lib/mysql and browse all database names on the server.

Method 2: Mount bind

1. As the root user, you can use mount bind to mount the same directory under a different name. Create a new directory to be used as the follower mount point:

$ mkdir -p /home/user/public_html/system2/filesharing

2. Bind-mount the reference directory onto the follower directory:

$ mount --bind /home/user/public_html/system1/filesharing /home/user/public_html/system2/filesharing

3. Add the following line to /etc/fstab if you want it mounted during boot (sysinit), or to /etc/rc.local if you want it mounted after boot completes:

For /etc/fstab:

/home/user/public_html/system1/filesharing    /home/user/public_html/system2/filesharing    none    bind    0 0

For /etc/rc.local:

mount --bind /home/user/public_html/system1/filesharing /home/user/public_html/system2/filesharing

You can unmount it manually using the following command:

$ umount /home/user/public_html/system2/filesharing

Advantages:

  • Apache treats both directories as normal directories, which avoids some of the errors expected with symlinks.
  • Only root and sudoers are able to execute this.
  • Mount options such as read-only, permissions and ownership can be applied to the follower directory.

Disadvantages:

  • You need to make sure the path is mounted correctly, for example after a reboot or whenever the reference’s hard disk has mounting or I/O problems.
  • Use mount bind with caution. Most Linux and Unix file systems don’t allow hard links to directories (except for the . and .. entries that mkdir creates itself). The reasons are pretty obvious: you could really confuse programs like ls (ls -R), find and of course fsck if you created links that recursed back to themselves.

Method 3: Bindfs

1. Bindfs works in a similar way to mount bind, except that it uses FUSE for mounting and has better functionality and permission configuration compared to mount. Before installing bindfs, we need to install FUSE with its development package using yum:

$ yum install fuse fuse-devel -y

2. Download bindfs from here and install it:

$ cd /usr/local/src
$ wget http://bindfs.googlecode.com/files/bindfs-1.10.3.tar.gz
$ tar -xzf bindfs-1.10.3.tar.gz
$ cd bindfs-* 
$ ./configure
$ make
$ make install

3. Create the ‘filesharing’ directory and mount the directory as below:

$ cd /home/user/public_html/system2
$ mkdir filesharing
$ bindfs -p 755 /home/user/public_html/system1/filesharing filesharing

4. Add the following line to /etc/fstab if you want it mounted during boot (sysinit), or to /etc/rc.local if you want it mounted after boot completes:

For /etc/fstab:

bindfs#/home/user/public_html/system1/filesharing    /home/user/public_html/system2/filesharing    fuse    perms=755    0 0

For /etc/rc.local:

bindfs -p 755 /home/user/public_html/system1/filesharing /home/user/public_html/system2/filesharing

You can use the mount command to check whether it is mounted correctly:

$ mount | grep bindfs
bindfs on /home/user/public_html/system2/filesharing type fuse.bindfs (rw,nosuid,nodev,allow_other,default_permissions)

To unmount it manually, simply use the umount command:

$ umount /home/user/public_html/system2/filesharing

Advantages:

  • You can create custom rules depending on your policy, as described in the man page here. This is useful if you want different people to access the mount with different attributes instead of inheriting the reference directory’s attributes.
  • Apache treats both directories as normal directories, which avoids some of the errors expected with symlinks.
  • Only root and sudoers are able to execute this.

Disadvantages:

  • It runs on top of FUSE. On some kernels, FUSE has performance issues and hangs easily.
  • You need to make sure the path is mounted correctly, for example after a reboot or whenever the reference’s hard disk has mounting or I/O problems.


MySQL: Live Backup using LVM Snapshots

An LVM snapshot is an exact copy of an LVM partition, holding all the data from the LVM volume as of the time the snapshot was created. The advantage of this is that we can get a reliable backup in a very short time without suspending the MySQL service. A normal backup using mysqldump or mysqlhotcopy creates a logical backup, which is usually expensive and CPU intensive.

The idea is like this:

  1. Create a new logical volume in new hard disk
  2. Mount the logical volume into MySQL data and log directory
  3. Create LVM snapshot to the MySQL partition that hold MySQL data and log
  4. Mount the LVM snapshot into the server
  5. Create MySQL backup from that snapshot

I will use following variables:

OS: CentOS 6.2 64bit
MySQL: Percona 5.5.20
Old MySQL data & log directory: /var/lib/mysql
New MySQL data & log directory: /mysql
Backup MySQL partition: /mysql_snap

1. We will use another hard disk, mounted at /mysql via a logical volume. Let's create the partition first:

$ fdisk /dev/sdb

Sequence pressed on keyboard: n > p > 1 > Enter > Enter > w

2. You should see that the disk partition has been created as /dev/sdb1, as below:

$ fdisk -l /dev/sdb
 
Disk /dev/sdb: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaa7ca5e3
 
Device Boot  Start      End        Blocks         Id     System
/dev/sdb1        1     1435      11526606         83     Linux

3.  Check the current physical volume, volume group and logical volume details:

$ pvs && vgs && lvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup lvm2 a-   19.51g       0
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   2   2   0 wz--n- 30.50g  1022m
  LV       VG       Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lv_root  VolGroup -wi-ao 17.54g
  lv_swap  VolGroup -wi-ao  1.97g

You can see that this server has a volume group called VolGroup on /dev/sda2. Inside this volume group there are another 2 logical volumes, called lv_root and lv_swap.

4. What we are going to do now is use /dev/sdb1 (our new hard disk) to extend VolGroup and create another logical volume for MySQL called lv_mysql:

$ pvcreate /dev/sdb1
$ vgextend VolGroup /dev/sdb1

The VolGroup volume group should now be extended and have about 10G more. You can check the VFree value using this command:

$ vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup 2   2    0  wz--n- 30.50g 10.99g

5. We will use 5G for MySQL and the remaining VFree space will be dedicated to the snapshot volume. Now let's create the MySQL logical volume, called lv_mysql:

$ lvcreate -L 5G -n lv_mysql VolGroup

6. When you run the following command, you should see that lv_mysql has been created under the VolGroup volume group:

$ lvs
  LV VG Attr LSize Origin Snap% Move Log Copy% Convert
  lv_mysql VolGroup -wi-a- 5.00g
  lv_root VolGroup -wi-ao 17.54g
  lv_swap VolGroup -wi-ao 1.97g

7. Logical volume created. Let's format it with the ext4 filesystem before we mount it on the /mysql directory:

$ mkfs.ext4 /dev/mapper/VolGroup-lv_mysql

8. Add the following line to /etc/fstab and mount the partition:

/dev/mapper/VolGroup-lv_mysql           /mysql                  ext4    defaults        0 0

Mount the logical volume:

$ mount -a

9. Stop the MySQL service and copy the data over to the newly mounted logical volume. We will use rsync for the copy to preserve permissions, ownership and timestamps. Don't forget to change the ownership of the /mysql directory as well:

$ service mysql stop
$ rsync -avzP /var/lib/mysql/ /mysql/
$ chown mysql.mysql /mysql

10. Change the following values in /etc/my.cnf to map to the new directory:

datadir = /mysql
log_bin = 1

Start the Percona server:

$ service mysql start

11. MySQL should start, mapped to the new directory. From now on we can use LVM snapshots, since MySQL is already inside an LVM partition. Now we can start to create the snapshot; I will dedicate 5 GB of space for this purpose:

$ lvcreate -L 5G --snapshot -n mysql_backup /dev/VolGroup/lv_mysql
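
Note that a snapshot taken while MySQL is accepting writes may not be fully consistent. A common refinement, sketched below assuming root can log in without a password prompt (for example via ~/.my.cnf), is to hold FLUSH TABLES WITH READ LOCK while the snapshot is created, since the lock is released as soon as the session ends:

$ mysql -uroot <<'EOF'
FLUSH TABLES WITH READ LOCK;
\! lvcreate -L 5G --snapshot -n mysql_backup /dev/VolGroup/lv_mysql
UNLOCK TABLES;
EOF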

12. Creating the snapshot is very fast. Once done, we can check the snapshot status as below:

$ lvs | grep mysql_backup
mysql_backup VolGroup swi-a-  5.00g lv_mysql  31.32

13. Now let's mount the snapshot partition so we can see the backup data:

$ mkdir /mysql_snap
$ mount /dev/mapper/VolGroup-mysql_backup /mysql_snap

Done. As for me, I will use NFS to mount this partition on another server and start backing up the data into another MySQL instance. The snapshot is just a way to capture the data at the moment of backup without heavy CPU utilization or costly downtime.

CentOS: Using XFS File System for MySQL

MySQL is often preferred to run on the XFS file system due to its performance with direct I/O. Even though many benchmarks have already compared the latest default Linux file system, ext4, against XFS, it is still worthwhile to use this file system for our MySQL data directory.

XFS does not come by default in CentOS, so we need to install the required utilities to manage it. I will use a separate virtual hard disk specifically for this and map it to the MySQL data directory.

OS: CentOS 6.2 64bit
Device: /dev/sdb
Old MySQL data directory: /var/lib/mysql
New MySQL data directory:  /mysql

1. Since this is a new hard disk without any partition table, we need to create the partition table first:

$ fdisk /dev/sdb

Sequence pressed on keyboard: n > p > 1 > Enter > Enter > w

2. You should see that the disk partition has been created as /dev/sdb1, as below:

$ fdisk -l /dev/sdb
 
Disk /dev/sdb: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaa7ca5e3
 
Device Boot  Start      End        Blocks         Id     System
/dev/sdb1        1     1435      11526606         83     Linux

3. Install XFS utilities via yum:

$ yum install -y xfs*

4. Format the drive with the XFS file system:

$ mkfs.xfs /dev/sdb1

5. Check the file system, create the mount point and mount the file system:

$ xfs_check /dev/sdb1
$ mkdir /mysql
$ mount /dev/sdb1 /mysql

6. To mount it automatically after boot, add this line to /etc/fstab. Open the file using a text editor and add the following line:

/dev/sdb1    /mysql     xfs    defaults     0 0
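
As an optional tweak (a common suggestion for database volumes, not something this setup strictly requires), you can add noatime so reads do not generate access-time updates:

/dev/sdb1    /mysql     xfs    defaults,noatime     0 0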

7. Stop MySQL, copy the data over using rsync (to preserve permissions, ownership and timestamps), and assign the correct ownership to the directory:

$ service mysql stop
$ rsync -avz /var/lib/mysql/ /mysql
$ chown mysql.mysql /mysql

8. Change the MySQL configuration to map to the new directory. Open /etc/my.cnf and change or add the following line under the [mysqld] directive:

datadir = /mysql

9. Start MySQL service:

$ service mysql start

Done. You can verify this in MySQL by executing the following command:

mysql> SHOW VARIABLES LIKE 'datadir';
 
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| datadir       | /mysql/ |
+---------------+---------+
1 row in set (0.00 sec)

To check the mount status:

$ mount | grep xfs
/dev/sdb1 on /mysql type xfs (rw)

To repair the XFS file system, we need to unmount it first:

$ umount /mysql
$ xfs_repair /dev/sdb1

Linux: Mount Box.net Account Locally

Cloud storage has created a trend of storing and accessing data from anywhere around the world. The 2 most popular cloud storage providers are Dropbox and Box.net. In this post, I am going to show you how to mount a Box.net account inside a Linux box. You are required to have a Box.net account, which is free if you register for the personal plan, and it comes with 5 GB of online storage space.

We will use davfs2 to mount the Box.net account via WebDAV. Dropbox does not offer this feature at the moment. The variables I used are as follows:

OS: CentOS 6.2 64bit
Box.net username: [email protected]
Box.net password: MyGu1234
Mount point: /mnt/box/

1. For the simplest installation, we will use the RPMforge repository:

$ cd /usr/local/src
$ rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
$ wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
$ rpm -K rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
$ rpm -Uhv rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

2. Once done, let's install davfs2 via yum:

$ yum install -y davfs2

3. Create the mount point:

$ mkdir /mnt/box

4. Add the following line to /etc/fstab:

https://www.box.com/dav    /mnt/box    davfs    rw,user,noauto 0 0

5. Add the Box.net account information to /etc/davfs2/secrets using a text editor:

https://www.box.com/dav [email protected]  MyGu1234

Notes: If you use special characters in your password, put a backslash in front of each special character (thanks to Keith for this highlight).

6. Change the lock mechanism of the mounted filesystem. Open /etc/davfs2/davfs2.conf and find the following value:

#use_locks    1

And change to:

use_locks     0

7. Mount the partition:

$ mount /mnt/box
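
Before copying anything in, you can confirm the share is mounted and see how much of the 5 GB quota is in use:

$ df -h /mnt/box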

Done! Now you can start syncing files to your cloud storage by copying them into the /mnt/box directory. Log in to your Box.net account and verify that the files exist. Depending on the file size, you might need to wait for a while after the copy completes before they appear in the Box.net account.

The following screenshot is the Box.net account. I just synced my public_html backup files to the cloud storage.

Happy ‘clouding’. Cheers!

Linux: Using lsyncd – Live Syncing (Mirror) Daemon

I have a situation where one critical website on our company server urgently needs to be synced to our backup server. Sync means that whatever changes happen on the master server are replicated to the slave server. Yes, you could rsync, but I do not want to schedule the task as a cron job. There is a better tool, very suitable for this situation, called lsyncd.

Lsyncd watches a local directory tree's event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one (or more) process(es) to synchronize the changes. By default this is rsync. Lsyncd is thus a light-weight live mirror solution that is comparatively easy to install, not requiring new filesystems or block devices, and does not hamper local filesystem performance.

OS: RHEL 5.3 (Tikanga) 64bit
Source IP: 192.168.50.10
Source directory: /home/webmedia/public_html
Destination IP: 192.168.50.20
Destination directory: /backup/webmedia/public_html

Destination server

On the destination server, we just need to make sure that the destination directory exists. If not, create the destination folder:

$ mkdir -p /backup/webmedia/public_html

Source server

1. It is good to have an embedded host entry that points to the IP address. Open /etc/hosts and add the destination host entry:

192.168.50.20   backup-server.local

2. We need to allow passwordless authentication to the destination server. We will use the root user to execute the sync process. Run the following command as root:

$ ssh-keygen -t dsa

Just press Enter at all prompts. Once done, run the following command to copy the public key to the destination server:

$ cat ~/.ssh/id_dsa.pub | ssh root@backup-server.local "cat >> ~/.ssh/authorized_keys"

Or we can use another command which does the same thing, called ssh-copy-id:

$ ssh-copy-id -i ~/.ssh/id_dsa root@backup-server.local

Once done, try to access the destination server with the following command. Make sure you can access the server without any password prompt:

$ ssh root@backup-server.local

3. Lsyncd requires Lua to be installed. For the latest RedHat distribution (RHEL 6) you can use yum to install it:

$ yum install -y lua*

In my case, I am using RHEL 5, and the packages are not available via yum. So I needed to find the RPMs somewhere else, which is here: http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/

Download these 3 packages and install:

  1. lua-5.1.4-1.el5.art.x86_64.rpm
  2. lua-static-5.1.4-1.el5.art.x86_64.rpm
  3. lua-devel-5.1.4-1.el5.art.x86_64.rpm

$ mkdir -p /usr/local/src/lua
$ cd /usr/local/src/lua
$ wget http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/lua-5.1.4-1.el5.art.x86_64.rpm
$ wget http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/lua-devel-5.1.4-1.el5.art.x86_64.rpm
$ wget http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/lua-static-5.1.4-1.el5.art.x86_64.rpm
$ rpm -Uhv *.rpm

4. Download and install lsyncd from this website: http://code.google.com/p/lsyncd/downloads/list:

$ cd /usr/local/src
$ wget http://lsyncd.googlecode.com/files/lsyncd-2.0.6.tar.gz
$ tar -xzf  lsyncd-2.0.6.tar.gz
$ cd lsyncd-*
$ ./configure
$ make 
$ make install

5. Once done, start the daemon on the source server using this command:

$ lsyncd -rsyncssh /home/webmedia/public_html root@backup-server.local /backup/webmedia/public_html

Done. The files should be in sync now. You can check the process using the ‘ps’ command and monitor the progress by watching the destination directory. In some cases, the sync terminates immediately with the following error in /var/log/messages:

lsyncd: Error, Consider increasing /proc/sys/fs/inotify/max_user_watches

Depending on the directory that you are watching, you might need to increase max_user_watches to a higher value using the following command:

$ echo 65500 > /proc/sys/fs/inotify/max_user_watches
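
That echo only lasts until the next reboot. To make the value persistent, the usual approach is to set it in /etc/sysctl.conf and reload:

$ echo "fs.inotify.max_user_watches = 65500" >> /etc/sysctl.conf
$ sysctl -p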

Once the synchronization completes, you can test by adding more files on the source server; after about 5 seconds you will see that the files have been synced to the destination directory. To make this daemon run automatically after boot, you can put the lsyncd command into /etc/rc.local.

Linux: 2 Way File Synchronization and Replication using Unison + FTP

Usually, when we want to replicate or synchronize files across the network, we use rsync, scp or sftp. These are one-way replication methods: they sync from the master (source) to the slave (destination) only. What if we want 2 folders which are both masters? Then we need a two-way replication method.

Why do I need 2 folders that sync with each other? Because I already have a load balancer running on top of my web servers. In this case, I need 2-way replication so the web contents are always the same for any user who accesses the web site. The load balancer runs on Pound with a normal round-robin algorithm. The following diagram may give us a better understanding:

I will use the HTTP load balancer server as the middleman to execute synchronization via FTP. I need to create 2 FTP accounts (one each on Web #1 and Web #2) and, using CurlFTPFS, mount both FTP accounts inside the HTTP load balancer server. Then Unison will do the 2-way replication.

Before starting, make sure the HTTP load balancer server has the load balancer running (you can refer to this post) and CurlFTPFS running (you can refer to this post). Variables as below:

OS: CentOS 6.2 64bit
HTTP Load Balancer IP: 192.168.20.20
Web Server #1: 192.168.20.21
Web Server #2: 192.168.20.22
Directory to be synced: /home/mywebfile/

1. We will install using the simplest method, which is yum. Make sure RPMforge is installed on your system. Follow this step if you have no idea how to enable the RPMforge repository:

$ yum install -y unison

2. I am assuming that you have installed and configured CurlFTPFS. Mount both FTP accounts:

$ curlftpfs 192.168.20.21 /mnt/ftp/ftpuser1 -o allow_other
$ curlftpfs 192.168.20.22 /mnt/ftp/ftpuser2 -o allow_other

3. Configure Unison. Since we want to synchronize the folders /mnt/ftp/ftpuser1 and /mnt/ftp/ftpuser2 as the root user, we need to create a default profile so Unison knows what to sync, where to sync and how to sync. Using a text editor, open the following file:

$ vim /root/.unison/default.prf

And add the following lines:

root=/mnt/ftp/ftpuser1
root=/mnt/ftp/ftpuser2
batch=true

4. Run Unison for the first synchronization:

$ unison default

5. You will notice that both directories are now in sync, but this has to be done manually. To automate it, we can use a cron job set to run unison every minute:

$ crontab -e

And add the following line:

* * * * * /usr/bin/unison default

Save the file and restart crond:

$ service crond restart
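
One caveat with a per-minute cron entry is that a slow sync over FTP can overlap with the next run. A sketch of guarding against that with flock (part of util-linux on CentOS; the /tmp lock path is just an example):

* * * * * /usr/bin/flock -n /tmp/unison.lock /usr/bin/unison default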

Or, we can use Fsniper to trigger the “unison default” command. You may see this post on how to install and configure Fsniper. For more information on Unison, you can refer to the manual page here.

Warning: CurlFTPFS is not really good at handling remote file synchronization if the connection between the FTP servers is slow. You might need to consider other network file systems like NFS or Samba to make sure the synchronization works smoothly.

CentOS: Upgrading CentOS Release 6.0 to 6.2

The best server maintenance practice is to keep all software up-to-date by following the latest stable release. Most of our servers have been upgraded to CentOS 6 from CentOS 5 (a major release upgrade), but they also need to be upgraded from CentOS 6.0 to CentOS 6.2 (a minor release), which usually comes out about 4 to 8 weeks after the upstream (RedHat) release.

It is just a simple procedure, by the way, with variables as below:

OS: CentOS 6.0 64bit
Current release:  CentOS 6.2

1. Check our current kernel and release version:

$ uname -a
Linux centos.local 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/centos-release
CentOS Linux release 6.0 (Final)

2. Before upgrading, it is recommended to clean all cached files from any enabled repository:

$ yum clean all

3. Let's start upgrading. It takes some time depending on your connectivity to the CentOS repository:

$ yum update

4. Once completed, proceed to reboot:

$ init 6

5. Check the new kernel and release version:

$ uname -a
Linux centos.local 2.6.32-220.el6.x86_64 #1 SMP Tue Dec 6 19:48:22 GMT 2011 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/centos-release
CentOS release 6.2 (Final)

To keep the operating system up-to-date with the current release, we can schedule this process to repeat every two weeks. Run crontab -e and add the following line:

0 0 */14 * * yum clean all; yum -y update

Don't forget to restart crond:

$ service crond restart

Linux: Mount FTP as File System

My developer team has requested that I mount an external FTP account into our web server so they can do their file manipulation work on it. To achieve this, I need to mount the FTP account as a file system, so they can transparently browse what is actually an FTP account as if it were local to the server.

I will be using CurlFTPFS, an FTP filesystem based on cURL and FUSE. Variables as below:

OS: CentOS 6 64bit
FTP host: ftp.mydear.org
FTP directory: public_html
FTP username: [email protected]
FTP password: By55k#ds
Mount directory: /mnt/ftp/ftpuser

1. Let's install all the requirements via yum:

$ yum install fuse* libcurl* glib* glibc.i686 file-libs file-devel file-static curl -y

2. Download and install CurlFTPFS:

$ cd /usr/local/src
$ wget http://cdnetworks-kr-2.dl.sourceforge.net/project/curlftpfs/curlftpfs/0.9.1/curlftpfs-0.9.1.tar.gz
$ tar -xzf curlftpfs-0.9.1.tar.gz
$ cd curlftpfs-*
$ ./configure
$ make
$ make install

3. We will use the .netrc facility to store the FTP credentials. Using a text editor, create a file called /root/.netrc (if it does not already exist) and enter the following information:

machine ftp.mydear.org
login [email protected]
password By55k#ds

4. Change the permissions so it is not accessible by others, and prepare the mount directory:

$ chmod 600 /root/.netrc
$ mkdir -p /mnt/ftp/ftpuser

5. Since the developer team needs to browse the mounted directory, I need to create a specific user and assign the correct permissions and ownership to the directory:

$ useradd -m developer -p 'develPASS'
$ chown developer.developer /mnt/ftp/ftpuser -Rf

6. Let's get the UID and GID of the user/group developer, to be used when mounting the FTP account:

$ id -u developer
501
$ id -g developer
502

7. Mount the FTP account onto the directory with the ownership options and allow_other (since we mount it as root):

$ curlftpfs ftp.mydear.org /mnt/ftp/ftpuser -o uid=501 -o gid=502 -o allow_other

Done! My developer team can now browse /mnt/ftp/ftpuser on the server and do their file manipulation work. To manage this FTP mount via /etc/fstab, you can put the following line into the file:

curlftpfs#ftp.mydear.org /mnt/ftp/ftpuser fuse rw,uid=501,gid=502,user,noauto,allow_other 0 0
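
Note that the noauto option in this line means the share is not mounted at boot automatically; it just lets you (or the developer user) mount it on demand by path. Drop noauto, or put the curlftpfs command in /etc/rc.local, if you want it mounted at boot:

$ mount /mnt/ftp/ftpuser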

And if you want to unmount it, simply run the following command:

$ fusermount -uz /mnt/ftp/ftpuser

Notes: You might encounter an unmount error when using the above command. Depending on the kernel and FUSE versions, upgrading them might solve the problem.