CentOS: Restore/Recover from Amanda Backup

So I have an Amanda backup server configured across two servers, as described in my previous post here. In that setup, I was using Amanda to back up one of the servers' directories, /home/webby/public_html on server sv101.krispykream.net. Now I need to restore all files in directory /home/webby/public_html/blog from the latest backup.

 

Configure Amanda Client for Restore

1. Log in to the Amanda client, which in my case is the server sv101.krispykream.net, as root. Create a new text file called amanda-client.conf. This file defines the server details that the client will connect to for restoration:

$ vim /etc/amanda/amanda-client.conf

And add the following lines:

conf "ServerNetBackup"                # your config name in Amanda server
 
index_server "office.servering.com"   # your amindexd server
tape_server "office.servering.com"    # your amidxtaped server
 
ssh_keys ""                           # your ssh keys file if you use ssh auth
unreserved-tcp-port 1025,65535

2. Restart the Amanda client service on this server:

$ service xinetd restart

3. Then, we need to log in to the Amanda backup server, which in my case is office.servering.com, to change the server_args under /etc/xinetd.d/amanda. This allows Amanda clients to browse the index and tapes on the Amanda server:

$ vim /etc/xinetd.d/amanda

And change the following line to be as below:

server_args             = -auth=bsd amdump amindexd amidxtaped

4. Restart xinetd service:

$ service xinetd restart

 

Restoring Files

1. To restore files, you simply need to log in to the client as the root user. The process flow is as below:

Login to client > Go to the directory that you want to restore > Access Amanda server using amrecover > Select which disk > Select which date > Add into restoration list > Extract > Done

2. So now I am logged in to sv101.krispykream.net as root, navigating to the folder that I want to restore. I am going to restore all files in directory /home/webby/public_html/blog from the latest backup because this directory has been accidentally deleted from the server:

$ cd /home/webby/public_html

3. Connect to the Amanda server using the following command:

$ amrecover ServerNetBackup -s office.servering.com
 
AMRECOVER Version 2.6.1p2. Contacting server on office.servering.com ...
220 amanda AMANDA index server (2.6.1p2) ready.
Setting restore date to today (2013-02-06)
200 Working date set to 2013-02-06.
200 Config set to ServerNetBackup.
200 Dump host set to sv101.krispykream.net.
Use the setdisk command to choose dump disk to recover

4. Let's list the disks for this host on the Amanda backup server:

amrecover> listdisk
200- List of disk for host sv101.krispykream.net
201- /home/webby/public_html
200 List of disk for host sv101.krispykream.net

5. Choose the disk for this backup:

amrecover> setdisk /home/webby/public_html
200 Disk set to /home/webby/public_html.

6. I do not know which tape is holding the latest backup, so I will use the history command to list them all:

amrecover> history
200- Dump history for config "ServerNetBackup" host "sv101.krispykream.net" disk /home/webby/public_html
201- 2013-02-05-18-29-38  0  ServerNetBackup-2:1
201- 2013-02-05-13-00-58  0  ServerNetBackup-1:1
201- 2013-02-05-12-59-41  0  ServerNetBackup-15:1
200 Dump history for config "ServerNetBackup" host "sv101.krispykream.net" disk /home/webby/public_html

7. Now I should choose the latest backup, which is 2013-02-05-18-29-38, meaning the backup was created at 6:29:38 PM on the 5th of February 2013:

amrecover> setdate 2013-02-05-18-29-38
200 Working date set to 2013-02-05-18-29-38.

8. I have now pointed amrecover at the latest backup and tape. I can list all the files in this backup directory as below:

amrecover> ls
2013-02-05-18-29-38 web.config.txt
2013-02-05-18-29-38 tmp/
2013-02-05-18-29-38 test/
2013-02-05-18-29-38 templates/
2013-02-05-18-29-38 robots.txt
2013-02-05-18-29-38 plugins/
2013-02-05-18-29-38 modules/
2013-02-05-18-29-38 media/
2013-02-05-18-29-38 logs/
2013-02-05-18-29-38 libraries/
2013-02-05-18-29-38 language/
2013-02-05-18-29-38 joomla.xml
2013-02-05-18-29-38 installation/
2013-02-05-18-29-38 index.php
2013-02-05-18-29-38 includes/
2013-02-05-18-29-38 images/
2013-02-05-18-29-38 htaccess.txt
2013-02-05-18-29-38 components/
2013-02-05-18-29-38 cli/
2013-02-05-18-29-38 cache/
2013-02-05-18-29-38 blog/
2013-02-05-18-29-38 administrator/
2013-02-05-18-29-38 README.txt
2013-02-05-18-29-38 LICENSE.txt
2013-02-05-18-29-38 .

9. Since I just want to restore the blog directory, I need to add blog to the extraction list:

amrecover> add blog
Added dir /blog/ at date 2013-02-05-18-29-38
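
Before extracting, you can double-check what is queued using amrecover's list command, which should print the tapes and paths currently on the extraction list:

amrecover> list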

10. Once added, we can extract the backup to the working directory as below:

amrecover> extract
 
Extracting files using tape drive changer on host office.servering.com.
The following tapes are needed: ServerNetBackup-2
 
Restoring files into directory /home/webby/public_html
Continue [?/Y/n]? Y
 
Extracting files using tape drive changer on host office.servering.com.
Load tape ServerNetBackup-2 now
Continue [?/Y/n/s/d]? Y

It will then restore all your files into the working directory. Just exit the amrecover console and you will see the restored directory there, as in the example below:

$ ls -al | grep blog
drwxr-xr-x   5    webby  webby    4096    Jan 25 04:53    blog

Restoration complete!

CentOS: Install and Configure Amanda Backup Server

I am going to set up an Amanda backup server on our office's server to enable network backups for all of our servers located in different geographical areas. The idea is as below:

 

All servers are running CentOS 6 64bit with iptables and SELinux turned off.

Install Amanda Server

1. Install the Amanda packages using yum:

$ yum install -y amanda*

2. Create the configuration file. I am going to name this backup project ServerNetBackup. We need to create a directory named after the project; all configuration files for this project will live underneath it:

$ mkdir /etc/amanda/ServerNetBackup

3. Create the core configuration file, amanda.conf:

$ vim /etc/amanda/ServerNetBackup/amanda.conf

And paste the following lines:

org "ServerNetBackup"                 # Organization name for reports
mailto "[email protected]"        # Email address to receive reports
netusage 10000 Kbps                   # Bandwidth limit, 10M
 
dumpcycle 1 week                      # Backup cycle is 7 days
runspercycle 7                        # Run 7 times every 7 days
tapecycle 15 tapes                    # Dump to 15 different tapes during the cycle
tpchanger "chg-disk"                  # The tape-changer glue script
 
changerfile "/etc/amanda/ServerNetBackup/changer"     # The tape-changer file
 
tapedev "file://central_backup/ServerNetBackup/slots" # The no-rewind tape device to be used
tapetype HARDDISK                                     # Define the type of tape
 
infofile "/etc/amanda/ServerNetBackup/curinfo"        # Database directory
logdir "/etc/amanda/ServerNetBackup/logs"             # Log directory
indexdir "/etc/amanda/ServerNetBackup/index"          # Index directory
 
define tapetype HARDDISK {                            # Define our tape behaviour
    length 100000 mbytes                              # Every tape is 100GB in size
}

amrecover_changer "changer"                           # Changer for amrecover

define dumptype global {                              # The global dump definition
    maxdumps 2                                        # The maximum number of backups run in parallel
    estimate calcsize                                 # Estimate the backup size before dumping
    holdingdisk yes                                   # Dump to temp disk (holdingdisk) before writing to tape
    index yes                                         # Generate an index, needed for restoration
}

define dumptype root-tar {                            # How to dump root's directories
    global                                            # Include global (as above)
    program "GNUTAR"                                  # Backup program to use
    comment "root partitions dumped with tar"
    compress none                                     # No compression
    index                                             # Index this dump
    priority low                                      # Priority level
}

define dumptype user-tar {                            # How to dump users' directories
    root-tar                                          # Include root-tar (as above)
    comment "user partitions dumped with tar"
    priority medium                                   # Priority level
}

define dumptype comp-user-tar {                       # How to dump & compress users' directories
    user-tar                                          # Include user-tar (as above)
    compress client fast                              # Compress on the client side with less CPU (fast)
}

Configure Backup Location

1. Prepare the directory to store all backups:

$ mkdir -p /central_backup/ServerNetBackup/slots

2. Assign the correct permissions for user amandabackup on the configuration directory and the backup directory:

$ chown amandabackup.disk /central_backup -Rf
$ chown amandabackup.disk /etc/amanda/ServerNetBackup -Rf

3. Log in as user amandabackup:

$ su - amandabackup

4. Create the virtual tapes. This is where the backup files will be stored. We need to create 15 slots, as per the tapecycle setting:

$ for n in `seq 1 15`; do mkdir /central_backup/ServerNetBackup/slots/slot${n}; done

5. We then need to label all the slots:

$ for n in `seq 1 15` ; do amlabel ServerNetBackup ServerNetBackup-${n} slot ${n}; done
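
To verify the labels, you can list the virtual tapes with amtape (hedged; its show subcommand scans all slots and reports their labels):

$ amtape ServerNetBackup show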

6. Create all the required directories as defined in the configuration file:

$ mkdir /etc/amanda/ServerNetBackup/curinfo
$ mkdir /etc/amanda/ServerNetBackup/logs
$ mkdir /etc/amanda/ServerNetBackup/index

Configure Service and What to Backup

1. We need to define what to back up in a file called disklist. As user amandabackup, create this file:

$ su - amandabackup
$ vim /etc/amanda/ServerNetBackup/disklist

And add the following lines:

sv101.krispykream.net /home/webby/public_html   comp-user-tar
gogogo.my-server.org  /etc                      root-tar

Note: Make sure each hostname is an FQDN and can be resolved to an IP. Adding the host entries to /etc/hosts is recommended.

2. Exit from the amandabackup user and get back to the root user:

$ exit

3. Enable the amanda service in the xinetd.d directory:

$ vim /etc/xinetd.d/amanda

And change the following line from “yes” to “no”:

disable = no

4. Enable on boot and restart xinetd service:

$ chkconfig xinetd on
$ service xinetd restart

5. Check whether the amanda server is running properly using the following command:

$ netstat -a | grep amanda
udp        0          0       *:amanda                *:*

If you see a result like the above, the amanda server is ready to serve!

 

Install Amanda Backup Client

1. Log in to the client server and install the required packages for Amanda using yum:

$ yum install -y amanda amanda-client

2. As user amandabackup, add the following lines to /var/lib/amanda/.amandahosts to specify where the Amanda backup server is:

$ su - amandabackup
$ vim /var/lib/amanda/.amandahosts

And make sure the values are as below:

office.servering.com amandabackup amdump
localhost amandabackup amdump
localhost.localdomain amandabackup amdump

3. Exit from user amandabackup and return to the root user:

$ exit

4. Enable the amanda service in the xinetd.d directory:

$ vim /etc/xinetd.d/amanda

And change the following line from “yes” to “no”:

disable = no

5. Enable on boot and start the xinetd service:

$ chkconfig xinetd on
$ service xinetd start

6. Add an entry in /etc/hosts to define the backup server's IP by adding the following line:

125.10.90.90      office.servering.com

7. In some cases, you may need to change the permissions of the directory that you want to back up. For example, I need to allow user amandabackup to access the directory /home/webby/public_html to create the backup.

As the root user, change the permissions of the directory:

$ chmod 755 /home/webby

Run the Backup Process

1. Now go back to the Amanda server and check our configuration as the amandabackup user:

$ su - amandabackup
$ amcheck ServerNetBackup

You should see the output similar to this:

Client check: 2 host checked in 2.070 seconds.  0 problems found.

2. If no errors are found, you can start the backup process immediately by running the following command:

$ amdump ServerNetBackup

Or, we can automate this process with a cron job. Run the following command as the amandabackup user:

$ crontab -e

And add the following line (this runs amdump at 00:45, Tuesday through Saturday):

45 0 * * 2-6 /usr/sbin/amdump ServerNetBackup

3. As root user, reload the crond service to activate this job:

$ service crond reload
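
To check on a dump in progress or review the last run from the server, Amanda's amstatus tool helps (a quick, hedged example):

$ amstatus ServerNetBackup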

Once the backup process completes, you should receive an email with the backup report. This email tells you the backup location and gives a process summary. I will continue with the Amanda restoration process in the next post!

Update: I just updated this post on 5th Feb 2013 to use the yum repository instead of the packages from Zmanda.

Install OpenFiler from USB Drive

We just received a new storage server from DELL which will be used to host a web server cluster. We will use Openfiler, a free NAS/SAN operating system, to manage our RAID-10 storage.

The problem is that our storage server has no optical drive and we do not have any external optical drive available in the office. Alternatively, we can use a USB flash drive, provided the server is able to boot from USB.

Server: DELL PowerEdge R510
OS version: Openfiler 2.99 64bit
USB flash drive: /dev/sdb
RAID 10 virtual disk: /dev/sda

Preparing the Flash Drive

1. Download the ISO from here to your local PC. In my case, I downloaded the x86_64 distribution ISO.

2. Download UNetbootin from here. We will use this application to burn our ISO into flash drive.

3. Prepare the flash drive. Format it with the FAT32 or FAT file system. Since I am using Windows 7, I just right-click the drive and click ‘Format’.

4. Launch UNetbootin and select ‘Diskimage’. Locate the ISO file on your PC and click OK as in the screenshot below:

5. Once ready, we need to copy the whole ISO into a directory called ‘root’. Navigate to your USB pendrive and create the ‘root’ directory at the top level of the drive:

 

Once the copy completes, verify that the ISO exists there, as in the screenshot below:

 

Installing into the Server

1. Plug the drive into the server's USB port and press F11 to show the boot options as below:

2. Accept the default values until you reach the installation method page. Choose “Hard Drive”, select /dev/sdb1 (our flash drive) and enter “root/” (without quotes) so the installer can find the Openfiler ISO file (which we saved earlier):

3. The installer should load properly now and you can proceed with the Openfiler installation wizard. When you reach the Bootloader Configuration setting, select “/dev/sda1    First sector of boot partition” as in the screenshot below:

4. Proceed with the installation wizard until Finish. A reboot is required once the installation completes. REMOVE THE USB DRIVE BEFORE REBOOTING!

 

Post-Installation Configuration

1. By default, Openfiler will boot to “Other” because this is the disk partition that we installed from (see the screenshot below). During the first boot after installation, make sure to select the Openfiler kernel manually and press Enter:

2. After sysinit completes, you should see the Openfiler login prompt. Log in as the root user and open this text file:

$ vi /etc/bootloader.conf

And change the following line from this:

default other0

To this:

default 2.6.32-71.18.1.el6-0.20.smp.gcc4.1.x86_64

3. Save the file and run following command to apply the new bootloader configuration:

$ bootman

If you see some errors, ignore them. They indicate that /dev/sdb1 (the USB flash drive) no longer exists, which is correct.

Done. You can reboot once more to make sure that the bootloader automatically selects Openfiler instead of another device when booting.

CentOS: ClamAV Scanning on FTP Service

Scanning FTP uploads is really important in order to protect your server, since FTP is the most popular file transfer method available to users. In my case, my boss wants to make sure every file uploaded via FTP is free from viruses, trojans and malware.

To achieve this, I need to use PureFTPd as the FTP server because it supports calling a script after each upload. This feature triggers a script, which we will use to run the antivirus scan on each uploaded file.

I am using following variables:

OS: CentOS 6.2 64bit
FTP user: ryan
FTP password: Brr432$A
FTP home directory: /home/ryan
Antivirus: ClamAV
Script to scan: /root/scripts/clamav_scan
Quarantine directory: /root/quarantine

1. To make the installation steps easier, we will configure the RPMforge repository for yum:

$ rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
$ rpm -Uhv http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

2. Install ClamAV and PureFTPD via yum:

$ yum install clamav pure-ftpd -y

3. Update ClamAV database:

$ freshclam

Note: By default, ClamAV updates the virus database on a daily basis, as you can see under /etc/cron.daily/freshclam.

4. Configure PureFTPd to suit our environment. Open the PureFTPd configuration file at /etc/pure-ftpd/pure-ftpd.conf in a text editor and make sure the following lines are set as below:

#PAMAuthentication           yes
UnixAuthentication           yes
CallUploadScript             yes

5. Create the home directory for user ryan and assign a password:

$ useradd -m ryan
$ passwd ryan

6. Create the script that PureFTPd will call to have ClamAV scan each file. We will also create a quarantine folder where ClamAV collects suspected files. We will use a BASH script called clamav_scan under the /root/scripts directory:

$ mkdir -p /root/quarantine
$ mkdir -p /root/scripts
$ vim /root/scripts/clamav_scan

And add the following lines:

#!/bin/bash
QUA_DIR=/root/quarantine
SUBJECT="Something detected by ClamAV"
EMAILTO="[email protected]"
EMAILMESSAGE="$QUA_DIR/scan.log"
DATE=`date`
 
# Scan the uploaded file and move it to quarantine if it is suspicious
/usr/bin/clamscan --move="$QUA_DIR" --quiet --no-summary "$1"
 
# Send an email if anything suspicious was found
if [ "$(ls -A "$QUA_DIR")" ]; then
     echo "Date: $DATE" > "$EMAILMESSAGE"
     /usr/bin/clamscan -i -r -l "$EMAILMESSAGE" "$QUA_DIR"
     /bin/mail -s "$SUBJECT" "$EMAILTO" < "$EMAILMESSAGE"
     rm -f "$QUA_DIR/scan.log"
fi

7. Make the script executable, enable PureFTPd at boot and start it:

$ chmod 755 /root/scripts/clamav_scan
$ chkconfig pure-ftpd on
$ service pure-ftpd start

8. PureFTPd requires the pure-uploadscript process to run separately once the pure-ftpd service has started. This process calls the custom script we created for scanning:

$ pure-uploadscript -r /root/scripts/clamav_scan -B

We also need to put this command into /etc/rc.local to make sure it starts automatically after boot:

$ echo "/usr/sbin/pure-uploadscript -r /root/scripts/clamav_scan -B" >> /etc/rc.local

Done. Now let's try uploading some files into the FTP directory. You can upload a normal file and also try uploading an unwanted file like r57.php. You will see that the suspicious file is moved to the quarantine folder instead of Ryan's home directory.
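
If you would rather not keep a real web shell like r57.php around for testing, the industry-standard EICAR test file is a harmless string that every antivirus engine, including ClamAV, detects:

$ echo 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > /tmp/eicar.txt

Upload /tmp/eicar.txt via FTP as user ryan; it should end up in /root/quarantine and trigger the notification email.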

MySQL: Live Backup using LVM Snapshots

An LVM snapshot is an exact copy of an LVM partition holding all the data from the LVM volume at the time the snapshot was created. The advantage is that we can get a reliable backup in very little time without suspending the MySQL service. A normal backup using mysqldump or mysqlhotcopy creates a logical backup, which is usually expensive and CPU intensive.

The idea is like this:

  1. Create a new logical volume in new hard disk
  2. Mount the logical volume into MySQL data and log directory
  3. Create LVM snapshot to the MySQL partition that hold MySQL data and log
  4. Mount the LVM snapshot into the server
  5. Create MySQL backup from that snapshot

I will use following variables:

OS: CentOS 6.2 64bit
MySQL: Percona 5.5.20
Old MySQL data & log directory: /var/lib/mysql
New MySQL data & log directory: /mysql
Backup MySQL partition: /mysql_snap

1. We will use another hard disk to mount /mysql via a logical volume. Let's create the partition first:

$ fdisk /dev/sdb

Sequence pressed on keyboard: n > p > 1 > Enter > Enter > w

2. You should see that the disk partition has been created as /dev/sdb1:

$ fdisk -l /dev/sdb
 
Disk /dev/sdb: 11.8 GB, 11811160064 bytes
255 heads, 63 sectors/track, 1435 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaa7ca5e3
 
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            1        1435    11526606   83  Linux

3. Check the current physical volume, volume group and logical volume details:

$ pvs && vgs && lvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup lvm2 a-   19.51g       0
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   2   2   0 wz--n- 30.50g  1022m
  LV       VG       Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lv_root  VolGroup -wi-ao 17.54g
  lv_swap  VolGroup -wi-ao  1.97g

You can see that this server has a volume group called VolGroup on /dev/sda2. Inside this volume group we have two logical volumes called lv_root and lv_swap.

4. What we are going to do now is use /dev/sdb1 (our new hard disk) to extend VolGroup and create another logical volume for MySQL called lv_mysql:

$ pvcreate /dev/sdb1
$ vgextend VolGroup /dev/sdb1

Volume group VolGroup should now be extended with roughly 10G more space. You can check the VFree value using this command:

$ vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup 2   2    0  wz--n- 30.50g 10.99g

5. We will use 5G for MySQL, and the remaining VFree space will be dedicated to the snapshot volume. Now let's create the MySQL logical volume called lv_mysql:

$ lvcreate -L 5G -n lv_mysql VolGroup

6. When you run the following command, you should see that lv_mysql has been created under the VolGroup volume group:

$ lvs
  LV VG Attr LSize Origin Snap% Move Log Copy% Convert
  lv_mysql VolGroup -wi-a- 5.00g
  lv_root VolGroup -wi-ao 17.54g
  lv_swap VolGroup -wi-ao 1.97g

7. Logical volume created. Let's format it with the ext4 filesystem before we mount it to the /mysql directory:

$ mkfs.ext4 /dev/mapper/VolGroup-lv_mysql

8. Add the following line into /etc/fstab and mount the partition:

/dev/mapper/VolGroup-lv_mysql           /mysql                  ext4    defaults        0 0

Mount the logical volume:

$ mount -a

9. Stop the MySQL service and copy the data over to the newly mounted logical volume. We will use rsync to preserve permissions, ownership and timestamps. Don't forget to change the ownership of the /mysql directory as well:

$ service mysql stop
$ rsync -avzP /var/lib/mysql/ /mysql/
$ chown mysql.mysql /mysql

10. Change the following values in /etc/my.cnf to map the new data and log directory:

datadir = /mysql
log_bin = 1

Start the Percona server:

$ service mysql start

11. MySQL should start, mapped to the new directory. We can use LVM snapshots from now on since the MySQL data is already inside an LVM partition. Now we can create the snapshot; I will dedicate 5 GB of space for this purpose:

$ lvcreate -L 5G --snapshot -n mysql_backup /dev/VolGroup/lv_mysql
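
Note: for a consistent snapshot on a busy server, it is safer to hold a global read lock while the snapshot is created. A minimal sketch from an interactive mysql session (the client's built-in system command runs the shell command while the lock is still held):

mysql> FLUSH TABLES WITH READ LOCK;
mysql> system lvcreate -L 5G --snapshot -n mysql_backup /dev/VolGroup/lv_mysql
mysql> UNLOCK TABLES;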

12. Creating a snapshot is very fast. Once done, we can check the snapshot status as below:

$ lvs | grep mysql_backup
mysql_backup VolGroup swi-a-  5.00g lv_mysql  31.32

13. Now let's mount the snapshot partition so we can see the backup data:

$ mkdir /mysql_snap
$ mount /dev/mapper/VolGroup-mysql_backup /mysql_snap
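
As a hedged sketch, one way to take the actual backup from the mounted snapshot and then release it (snapshots consume space as the origin volume changes, so remove them when you are finished):

$ tar -czf /root/mysql_backup_$(date +%F).tar.gz -C /mysql_snap .
$ umount /mysql_snap
$ lvremove -f /dev/VolGroup/mysql_backup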

Done. As for me, I will use NFS to mount this partition on another server and back the data up into another MySQL instance. The snapshot simply captures the data at the moment of backup without high CPU utilization or costly downtime.

Linux: Mount FTP as File System

My developer team requested that I mount an external FTP account into our web server so they can manipulate files on it together. To achieve this, I need to mount the FTP account as a file system, so they can work transparently without realizing they are browsing an FTP account localized into the server.

I will be using CurlFTPFS, an FTP filesystem based on cURL and FUSE. Variables as below:

OS: CentOS 6 64bit
FTP host: ftp.mydear.org
FTP directory: public_html
FTP username: [email protected]
FTP password: By55k#ds
Mount directory: /mnt/ftp/ftpuser

1. Let's install all the requirements via yum:

$ yum install fuse* libcurl* glib* glibc.i686 file-libs file-devel file-static curl -y

2. Download and install CurlFTPFS:

$ cd /usr/local/src
$ wget http://cdnetworks-kr-2.dl.sourceforge.net/project/curlftpfs/curlftpfs/0.9.1/curlftpfs-0.9.1.tar.gz
$ tar -xzf curlftpfs-0.9.1.tar.gz
$ cd curlftpfs-*
$ ./configure
$ make
$ make install

3. We will use .netrc to store the FTP credentials. Using a text editor, create a file called /root/.netrc (if it does not exist yet) and enter the following information:

machine ftp.mydear.org
login [email protected]
password By55k#ds

4. Change the permissions so it is not accessible by others, and prepare the mount directory:

$ chmod 600 /root/.netrc
$ mkdir -p /mnt/ftp/ftpuser

5. Since the developer team needs to browse the mounted directory, I need to create a specific user and assign the correct permissions and ownership to the directory. Note that useradd -p expects an already-encrypted password, so we generate one with openssl:

$ useradd -m developer -p "$(openssl passwd -1 'develPASS')"
$ chown developer.developer /mnt/ftp/ftpuser -Rf

6. Let's get the UID and GID of the developer user/group, to be used when mounting the FTP account:

$ id -u developer
501
$ id -g developer
502

7. Mount the FTP account into the directory with the ownership options and allow_other (since we mount it as root):

$ curlftpfs ftp.mydear.org /mnt/ftp/ftpuser -o uid=501 -o gid=502 -o allow_other

Done! My developer team can now browse /mnt/ftp/ftpuser on the server and do their file manipulation work. To mount this FTP account automatically after a reboot, you can put the following line into /etc/fstab:

curlftpfs#ftp.mydear.org /mnt/ftp/ftpuser fuse rw,uid=501,gid=502,user,noauto,allow_other 0 0

And if you want to unmount it, simply run the following command:

$ fusermount -uz /mnt/ftp/ftpuser

Note: You might encounter an unmount error when using the above command. Depending on the kernel and fuse versions, upgrading them might solve the problem.

cPanel: Exclude Directory during Backup

Backup is the first thing to do and should not be forgotten by a good system administrator. Since cPanel has a built-in backup creator as well as a backup management system, we can take advantage of this tool to suit our needs. In my situation, we have many cPanel accounts and some of them use more than 10GB of disk space, mostly due to uploaded website content.

Creating a backup is hard if you have too many inodes or too much disk consumption. It helps if we can exclude some directories, for example user_uploaded, when creating the cPanel backup; the rest can be backed up manually by downloading it to a local server.

In this tutorial I will create a full backup with some directories excluded. Variables as below:

OS: RHEL 4 32bit
cPanel account: premen
Home directory for user: /home/premen

1. Identify the directories that we want to exclude. In this case, I will exclude the directories with the highest disk usage. The following command might help:

$ cd /home/premen
$ du -h | grep G

The command greps for lines containing 'G', which catches the directories whose size is reported in gigabytes. Example as below:

6.4G    ./public_html/portal/tmp
8.7G    ./public_html/portal/user_uploaded
15.1G   ./public_html
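
If you prefer exact numbers over grepping for 'G', a hedged alternative from the same directory lists the largest entries (sizes in MB, biggest last):

$ du -sm public_html/* | sort -n | tail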

2. Then we need to create a file called cpbackup-exclude.conf under the user's home directory, as described in the cPanel documentation here:

$ cd /home/premen
$ touch cpbackup-exclude.conf
$ vi  cpbackup-exclude.conf

Paste the following lines:

public_html/portal/tmp
public_html/portal/user_uploaded

3. Now we will create the backup, either via the pkgacct script:

$ /usr/local/cpanel/scripts/pkgacct premen /home userbackup

or you can click “Download or Generate a Full Website Backup” under cPanel as screenshot below:

Once the backup is ready, you will notice that both directories have been excluded from the cPanel full backup; your backup will be smaller and faster to compress. Cheers!

Create iSCSI Target in OpenFiler

If you have SAN storage, or a dedicated server to provide file and storage services to other servers, I suggest using Openfiler. This operating system is specifically built to manage and deliver file-based Network Attached Storage and block-based Storage Area Networking in a single framework.

In this tutorial, I will not show you how to install Openfiler. I am just showing how to set up an iSCSI target to be mounted on another server. Variables as follows:

OS: Openfiler 2.99 64bit
Openfiler IP: 10.1.1.1
Disk device: /dev/sdb
Disk size: 50 GB
Server that mount the iSCSI: 10.1.1.100

1. We start by reviewing the block device layout detected by the system. Log in to the Openfiler web administration portal with the default credentials as below:

Username: openfiler
Password: password

2. Make sure the iSCSI services are turned on and running. Go to Openfiler > Services and make sure it appears as below:

3. Let's specify which hosts can connect to this storage server. In this case, I want to allow 10.1.1.100 to access the iSCSI target which we will create later. Go to Openfiler > System > Network Access Configuration and specify which host you want to allow:

4. We need to create a physical volume for /dev/sdb. Go to Openfiler > Volumes > Block Devices, select the information as in the screenshot below and click Create:

5. Create a volume group for /dev/sdb1 by going to Openfiler > Volumes > Volume Groups. I will use server1_vg as the name because I want to mount this on server1 once it is ready.

You should see something like this:

6. Create a volume called ‘data‘ inside the server1_vg volume group by going to Openfiler > Volumes > Add Volume. Make sure you select ‘block (iSCSI, FC, etc)‘ as the volume type:

7. Now we can do iSCSI mapping. Go to Openfiler > iSCSI Targets > LUN Mapping, and click Map.

8. Make sure we allow host access to this target. Go to Openfiler > iSCSI Targets > Network ACL, and allow the host that should access the target:

9. The iSCSI target is ready. You can now connect it from any host you want; just make sure the iSCSI initiator is installed on the remote server.
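
On the CentOS side, a hedged sketch of connecting the initiator to this target (assuming the Openfiler IP 10.1.1.1) could look like this:

$ yum install -y iscsi-initiator-utils
$ service iscsi start
$ iscsiadm -m discovery -t sendtargets -p 10.1.1.1
$ iscsiadm -m node -p 10.1.1.1 --login

Once logged in, the LUN shows up as a new block device on the client (for example /dev/sdb), which you can partition and mount as usual.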

Process summary will be as below:

  1. Create physical volume
  2. Create volume group
  3. Create volume
  4. Map volume with LUN
  5. Allow the hosts defined in step 3
  6. Mount into the destination server

Linux: Yum Repository from DVD

In my case, I need to set up a web server without an internet connection. Since I find it convenient to use yum for package installation on Linux, we need to tell yum to look at the CentOS DVD instead of the Internet (the default).

Kindly find variables as below:

OS: CentOS 6 64bit
DVD device: /dev/cdrom
Mount point: /media/CentOS

1. Create mount point directory:

$ mkdir /media/CentOS

2. Insert CentOS installation DVD #1 into the drive and mount it to our mount point:

$ mount /dev/cdrom /media/CentOS
mount: block device /dev/sr0 is write-protected, mounting read-only

3. We need to tell yum to refer to the installation DVD instead of the internet repositories. To do this, I need to enable the CentOS-Media repository. Open /etc/yum.repos.d/CentOS-Media.repo in a text editor and change the following line:

enabled=0

To:

enabled=1
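
Alternatively, you can leave the other repositories in place and enable the media repository per command. On CentOS 6 the repo id inside CentOS-Media.repo is c6-media (verify it in the file); installing a package, say httpd, would then look like:

$ yum --disablerepo=\* --enablerepo=c6-media install httpd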

4. Since I will not use the default repositories at all, I will move all the other .repo files to a folder called yum.repos.d.bak under the /etc directory:

$ mkdir /etc/yum.repos.d.bak
$ cd /etc/yum.repos.d
$ ls -1 | grep -v CentOS-Media.repo | xargs -I {} mv {} /etc/yum.repos.d.bak/

Done! Now you can install packages with yum directly from the installation disc. If you plan to use this method from now on, you might want to add an entry to /etc/fstab or a line to /etc/rc.local so the installation disc is mounted automatically at boot.
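
As a hedged example, an /etc/fstab entry for the disc (assuming the device is /dev/cdrom) could look like this:

/dev/cdrom              /media/CentOS           iso9660 ro              0 0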

Smartd Error: 1 Currently unreadable (pending) sectors

I am encountering the following error in /var/log/messages:

Aug 15 03:55:42 hostname smartd[2366]: Device: /dev/sda, 1 Currently unreadable (pending) sectors

This caused the / partition to be mounted read-only. The server is still accessible, but you cannot do much inside it. Let's troubleshoot this.

Collecting Information/Troubleshooting

I noticed the read-only filesystem when creating a test file in the /root directory:

$ touch /root/testfile
touch: cannot touch `/root/testfile': Read-only file system

What is the SMART daemon (smartd)?

Self-Monitoring, Analysis and Reporting Technology (SMART) is a system built into many ATA-3 and later ATA, IDE and SCSI-3 hard drives. The purpose of SMART is to monitor the reliability of the hard drive, predict drive failures, and carry out different types of drive self-tests. We will use the smartctl command to help us find out what is wrong with the disk.

Continue reading “Smartd Error: 1 Currently unreadable (pending) sectors” »

Linux: Create and Mount Swap via SSH

Some servers that I work with have no swap space mounted. Swap is necessary as a backup for physical memory in case the system needs more memory than is physically available, and it can also improve application loading speed, especially when starting and closing applications.

Swap space can be set up in two ways: as a partition or as a file. Since this server is already online and I have plenty of free space left in the “/” partition, it is easier for me to create a swap file rather than a swap partition.

My variables as below:

OS: RHEL 5.7 64bit (Tikanga)
Swap file location: /mnt/swapfile
Swap size: 4 GB
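
As a hedged sketch using the variables above, creating and enabling a 4 GB swap file usually looks like this:

$ dd if=/dev/zero of=/mnt/swapfile bs=1M count=4096
$ chmod 600 /mnt/swapfile
$ mkswap /mnt/swapfile
$ swapon /mnt/swapfile

To keep it across reboots, an /etc/fstab line like the following is typical:

/mnt/swapfile           swap                    swap    defaults        0 0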

Continue reading “Linux: Create and Mount Swap via SSH” »

CentOS 5: Converting Ext3 to Ext4

Ext4 (fourth extended file system) is the successor to the widely used Ext3 filesystem in Linux.

Since the Ext4 filesystem is already on the market, we can take full advantage of it to improve IO (input/output) performance. Ext4 is well known for handling large storage well, reducing file system check (fsck) time by up to 9 times compared to Ext3 (refer to this), and adding checksums to the journal.

Variables as follow:

OS: CentOS 5.6 64bit
Kernel version: 2.6.18-238.19.1.el5
Backup partition: /backup (mount from /dev/sdb)

1. First of all, it is recommended to back up everything first. We will use the ‘dd‘ command to back up the whole partition to another hard disk, attached via a SATA cable. We will format the backup hard disk with the ext3 filesystem and mount it as the /backup partition:

$ fdisk /dev/sdb
.....
 
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-3916, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-3916, default 3916):
Using default value 3916
 
Command (m for help): w
The partition table has been altered!
 
......

The sequence I press on the keyboard is: n > p > 1 > enter > enter > w
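
As a hedged sketch of the next steps: format /dev/sdb1 with ext3, mount it as /backup, then image the source partition (here hypothetically /dev/sda1) with dd:

$ mkfs.ext3 /dev/sdb1
$ mkdir -p /backup
$ mount /dev/sdb1 /backup
$ dd if=/dev/sda1 of=/backup/sda1.img bs=4M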

Continue reading “CentOS 5: Converting Ext3 to Ext4” »