Debian: Converting Apache + PHP to FastCGI – The Simple Way

I have a server running Debian 6 64bit with Apache and PHP5 installed via the apt-get package manager. By default, this configures DSO as the PHP handler. I need to convert it to serve PHP through FastCGI (mod_fcgid) to reduce memory usage. It turns out the conversion is easy and requires only a few simple steps.

 

Here is my pre-configured Apache + PHP settings (installed using apt-get install apache2 php5 command):

$ dpkg --get-selections | grep apache
apache2             install
apache2-mpm-prefork install
apache2-utils       install
apache2.2-bin       install
apache2.2-common    install
libapache2-mod-php5 install
 
$ dpkg --get-selections | grep php
libapache2-mod-php5 install
php5                install
php5-cli            install
php5-common         install
php5-suhosin        install

 

1. Update repository:

$ apt-get update

2. Install required packages for fcgid:

$ apt-get install apache2-mpm-worker libapache2-mod-fcgid php5-cgi

3. Enable cgi.fix_pathinfo in /etc/php5/cgi/php.ini (the php.ini read by php5-cgi):

cgi.fix_pathinfo=1

4. Stop Apache:

$ /etc/init.d/apache2 stop

5. Disable php5, since we will be using php-cgi:

$ a2dismod php5

6. Set up the virtual host for the website under /etc/apache2/sites-available/default:

 <VirtualHost *:80>
        ServerName www.example.com
        ServerAdmin admin@example.com
        DocumentRoot /var/www
 
        <Directory /var/www>
                Options +ExecCGI
                AllowOverride AuthConfig FileInfo Limit
                AddHandler fcgid-script .php
                FCGIWrapper /usr/lib/cgi-bin/php .php
                Order Deny,Allow
                Allow from All
        </Directory>
 
        ErrorLog /var/log/apache2/error.log
        LogLevel warn
 
        CustomLog /var/log/apache2/access.log combined
 
</VirtualHost>

7. Start Apache:

$ /etc/init.d/apache2 start

 

Done! You are now running FastCGI as the PHP handler. You can verify this by checking the Server API value in the phpinfo() output; it should read CGI/FastCGI rather than Apache 2.0 Handler.
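From the shell, a quick sketch of that check (assuming the /var/www document root from the virtual host above; info.php is a throwaway file name):

```shell
# Drop a temporary phpinfo() page into the document root.
cat > /var/www/info.php <<'EOF'
<?php phpinfo();
EOF

# Under mod_fcgid the "Server API" row should read CGI/FastCGI
# instead of "Apache 2.0 Handler" (DSO).
curl -s http://localhost/info.php | grep 'Server API'

# Remove the page when done -- phpinfo() leaks server details.
rm /var/www/info.php
```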

 

High Availability: Configure Piranha for HTTP, HTTPS and MySQL

Piranha is a simple yet powerful tool to manage virtual IP and service with its web-based GUI.

Following up on my previous post on installing and configuring Piranha for HTTP service (http://blog.secaserver.com/2012/07/centos-configure-piranha-load-balancer-direct-routing-method/), in this post we will complete the Piranha configuration: HTTP and HTTPS load balancing using direct routing with firewall marks, and MySQL load balancing using direct routing only.

HTTP/HTTPS will need to be accessed by users via virtual public IP 130.44.50.120 while MySQL service will be accessed by web servers using virtual private IP 192.168.100.30. Kindly refer to picture below for the full architecture:

 

All Servers

SELinux must be disabled on all servers. Change the SELinux configuration file at /etc/sysconfig/selinux:

SELINUX=disabled

Load Balancers

1. All steps should be done in both servers unless specified. We will install Piranha and other required packages using yum:

$ yum install piranha ipvsadm mysql -y

2. Open firewall ports as below:

$ iptables -A INPUT -m tcp -p tcp --dport 3636 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 80 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 443 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 539 -j ACCEPT
$ iptables -A INPUT -m udp -p udp --dport 161 -j ACCEPT

3. Start all required services and make sure they start automatically on reboot:

$ service piranha-gui start
$ chkconfig piranha-gui on
$ chkconfig pulse on

4. Run the following command to set a password for the user piranha. This will be used when accessing the web-based configuration tool:

$ piranha-passwd

5. Turn on IP forwarding. Open /etc/sysctl.conf and make sure the following line is set to 1:

net.ipv4.ip_forward = 1

And run following command to activate it:

$ sysctl -p

6. Check whether iptables is loaded properly as a kernel module:

$ lsmod | grep ip_tables
ip_tables 17733 3 iptable_filter,iptable_mangle,iptable_nat

7. Since we will serve HTTP and HTTPS from the same servers, we need to group the traffic so it is forwarded to the same destination. To achieve this, we mark the packets with iptables so they are recognized correctly as one service. Set iptables rules to mark all packets destined for the virtual IP on ports 80 and 443 with the mark "80":

$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 80 -j MARK --set-mark 80
$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 443 -j MARK --set-mark 80
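To confirm the rules took effect, list the mangle table; note that iptables prints mark values in hexadecimal, so mark 80 shows up as 0x50:

```shell
# List the PREROUTING chain of the mangle table (numeric, verbose).
iptables -t mangle -L PREROUTING -n -v
# Both rules should end with "MARK set 0x50" (80 in decimal).

# Persist the rules across reboots:
service iptables save
```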

Load Balancer #1

1. Check the IP address is correctly setup:

$ ip a | grep inet
inet 130.44.50.121/28 brd 130.44.50.127 scope global eth0
inet 192.168.100.41/24 brd 192.168.100.255 scope global eth1

2. Log in to Piranha at http://130.44.50.121:3636/ as user piranha with the password set in step #4 of the Load Balancers section.

3. Enable redundancy. Go to Piranha > Redundancy > Enable.

4. Enter the IP information as below:

Redundant server public IP     : 130.44.50.122
Monitor NIC links for failures : Enabled
Use sync daemon                : Enabled

Click ‘Accept’.

5. Go to Piranha > Virtual Servers > Add > Edit. Add information as below and click ‘Accept’:

 

6. Next, go to Real Server. Here we will put the IP addresses of all real servers that serve HTTP. Fill in the required information as below:

7. Now we need to do a similar setup for HTTPS. Just change the 'Application port' to 443. For the Real Server, change the real server's destination port to 443 as well.

8. For MySQL virtual server, enter information as below:

 

9. For MySQL real servers, enter information as below:

 

10. Configure monitoring script for MySQL virtual server. Click on ‘Monitoring Script’ and configure as below:

 

11. Set up the monitoring script for MySQL:

$ vim /root/mysql_mon.sh

And add following line:

#!/bin/sh
USER=monitor
PASS=M0Npass5521
####################################################################
CMD=/usr/bin/mysqladmin
 
IS_ALIVE=`$CMD -h $1 -u $USER -p$PASS ping | grep -c "alive"`
 
if [ "$IS_ALIVE" = "1" ]; then
    echo "UP"
else
    echo "DOWN"
fi

12. Change the script permission to executable:

$ chmod 755 /root/mysql_mon.sh
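Before handing the script to nanny, it is worth running it by hand against one of the real servers (the IP below is Mysql1 from this setup):

```shell
# Should print UP once the monitoring grant (Database Cluster
# section, step #1) is in place; DOWN otherwise.
/root/mysql_mon.sh 192.168.100.33
```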

13. Now copy over the script and Piranha configuration file to load balancer #2:

$ scp /etc/sysconfig/ha/lvs.cf lb2:/etc/sysconfig/ha/lvs.cf
$ scp /root/mysql_mon.sh lb2:/root/

14. Restart Pulse to activate the Piranha configuration in LB#1:

$ service pulse restart

Load Balancer #2

On this server, we just need to enable and restart the pulse service as below:

$ chkconfig pulse on
$ service pulse restart

Database Cluster

1. We need to allow the MySQL monitoring user from nanny (the load balancer) in the MySQL cluster. Log in to the MySQL console on one of the servers and run the following SQL statement:

mysql> GRANT USAGE ON *.* TO 'monitor'@'%' IDENTIFIED BY 'M0Npass5521';

2. Add the virtual IP manually using iproute:

$ /sbin/ip addr add 192.168.100.30 dev eth1

3. Add following entry into /etc/rc.local to make sure the virtual IP is up after boot:

$ echo '/sbin/ip addr add 192.168.100.30 dev eth1' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to run step #2 again to bring the virtual IP up manually. VIPs cannot be configured to start on boot.

4. Check the IPs in the server. Example below was taken from server Mysql1:

$ ip a | grep inet
inet 130.44.50.127/24 brd 130.44.50.255 scope global eth0
inet 192.168.100.33/24 brd 192.168.100.255 scope global eth1
inet 192.168.100.30/32 scope global eth1

Web Cluster

1. On every web server, we need to install a package called arptables_jf from yum. We will use it to manage our ARP table entries and rules:

$ yum install arptables_jf -y

2. Add the following rules on each web server respectively:

Web1:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.123

Web 2:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.124

Web 3:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.125

3. Enable arptables_jf to start on boot, save the rules and restart the service:

$ service arptables_jf save
$ chkconfig arptables_jf on
$ service arptables_jf restart

4. Add the virtual IP manually into the server using iproute command as below:

$ /sbin/ip addr add 130.44.50.120 dev eth0

5. Add following entry into /etc/rc.local to make sure the virtual IP is up after boot:

$ echo '/sbin/ip addr add 130.44.50.120 dev eth0' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to run step #4 again to bring the virtual IP up manually. VIPs cannot be configured to start on boot.

6. Check the IPs in the server. Example below was taken from server Web1:

$ ip a | grep inet
inet 130.44.50.123/28 brd 130.44.50.127 scope global eth0
inet 130.44.50.120/32 scope global eth0
inet 192.168.100.21/24 brd 192.168.100.255 scope global eth1

You now have a complete high-availability MySQL and HTTP/HTTPS service with automatic failover and load balancing, provided by Piranha using the direct routing method.

In this tutorial I am not focusing on HTTPS monitoring, because I do not have SSL set up correctly in this test environment. You may, however, use the following bash script to monitor HTTPS from Piranha (nanny):

#!/bin/bash
 
if [ $# -eq 0 ]; then
        echo "host not specified"
        exit 1
fi
 
curl -s --insecure \
	--cert /etc/crt/hostcert.pem \
	--key /etc/crt/hostkey.pem \
	https://${1}:443 | grep "" \
	&> /dev/null
 
if [ $? -eq 0 ]; then
        echo "UP"
else
        echo "DOWN"
fi

I hope this tutorial could be useful for some guys out there!

Linux: Add New User and Group into .htpasswd

We have several directories in our company that are restricted to certain users. Since users need to authenticate before they can access these directories via a web browser, I need to manage simple Apache user authentication using htpasswd.

User Authentication

To create a new password-protected directory under /home/website/public_html/secure1, create a new .htaccess file:

$ vim /home/website/public_html/secure1/.htaccess

And enter following line:

AuthUserFile /home/website/.htpasswd
AuthType Basic
AuthName "User Authentication"
Require valid-user

This tells Apache to refer to .htpasswd for the user authentication data. Now let's create a user and insert it into the .htpasswd file:

$  htpasswd -c /home/website/.htpasswd myfirstuser
New password:
Re-type new password:
Adding password for user myfirstuser

Format: htpasswd [options] [path to the .htpasswd file to be created] [username]

Now try to access the secure directory at http://mywebsite.com/secure1. You should see a login box pop up asking for a username and password.

To add another user:

$ htpasswd /home/website/.htpasswd myseconduser

This inserts another line into the .htpasswd file. If you view the current contents, you should see something like:

$ cat /home/website/.htpasswd
myfirstuser:Ob5Y/eFTeSXEw
myseconduser:9oopndPXV7sdE
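You can also exercise the protection from the shell with curl; the host below is the example site used above, and PASSWORD stands for whatever you chose when running htpasswd:

```shell
# Without credentials Apache should answer 401 Unauthorized.
curl -s -o /dev/null -w '%{http_code}\n' http://mywebsite.com/secure1/

# With a valid user it should answer 200 OK.
curl -s -o /dev/null -w '%{http_code}\n' \
    -u myfirstuser:PASSWORD http://mywebsite.com/secure1/
```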

Group Authentication

In some cases, I need a group of people to be able to access certain secure folders. Let's say we have the following users:

=================================================================
 USER     | GROUP     | DIRECTORY
=================================================================
 David    | IT        | /home/website/public_html/secure-it
 Nade     | IT        | /home/website/public_html/secure-it
 Mike     | Admin     | /home/website/public_html/secure-admin
 Seth     | Boss      | /home/website/public_html/secure-boss
=================================================================

1. Insert the users into htpasswd file. I will put this under /home/website/.htpasswd:

$ htpasswd -c /home/website/.htpasswd david
$ htpasswd /home/website/.htpasswd nade 
$ htpasswd /home/website/.htpasswd mike
$ htpasswd /home/website/.htpasswd seth

2. Create an htgroup file describing the group for each user. Create a new file /home/website/.htgroup and add the following lines. The Boss group can access all secure directories, while the others can only access their respective directories:

it: david nade seth
admin: mike seth
boss: seth

3. Apply the access control in the .htaccess file of every directory you want to secure.

For IT group, create new .htaccess file:

$ vim /home/website/public_html/secure-it/.htaccess

And add following line:

AuthUserFile /home/website/.htpasswd
AuthGroupFile /home/website/.htgroup
AuthName "User Authentication"
AuthType Basic
Require group it

For admin group, create new file:

$ vim /home/website/public_html/secure-admin/.htaccess

And add following line:

AuthUserFile /home/website/.htpasswd
AuthGroupFile /home/website/.htgroup
AuthName "User Authentication"
AuthType Basic
Require group admin

For Boss group, create new file:

$ vim /home/website/public_html/secure-boss/.htaccess

And add following line:

AuthUserFile /home/website/.htpasswd
AuthGroupFile /home/website/.htgroup
AuthName "User Authentication"
AuthType Basic
Require group boss

PHP Session using Sharedance in Apache Web Cluster

Our new online shopping cart site runs on 3 Apache servers which mount the same document root on all nodes. With a load balancer in front distributing HTTP/HTTPS connections equally using a weighted round-robin algorithm, we faced a big problem with session handling: when a user's session does not exist on the current server, the user has to authenticate again whenever the load balancer redirects them to another server. We considered the following solutions:

  • Mount the same partition for the session.save_path directory on all nodes using GFS2:
    • Serious I/O issues due to heavy writes into a single directory
    • Increased server load, especially from the GFS2 locking (dlm) process
  • Use a memcached server:
    • Needs a simple modification to the PHP code
    • Sessions are stored in memory instead of on disk. It is fast, but session IDs are lost if the server reboots or the service restarts.
  • Store sessions in a MySQL database:
    • Needs PHP code changes, especially around locking. When sessions are stored as files, the file system handles the locking automatically.
    • Increases the database server workload due to heavy reads/writes, and the table can grow to millions of rows within months.

At the end of the day, we decided to use Sharedance because it is super simple. You just need to install the server, make a few changes in php.ini, restart Apache and you are done! You then have a session server that all web cluster nodes look up.

OS: CentOS 6.3 64bit
Server IP: 192.168.10.50
Session directory: /var/lib/sharedance
Website: misterryan.com
Webserver #1: 192.168.10.101
Webserver #2: 192.168.10.102
Webserver #3: 192.168.10.103

Session Server (Sharedance)

1. We will use RPMforge to make our life easier:

$ rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
$ rpm -Uhv http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

2. Install Sharedance using yum:

$ yum install sharedance -y

3. I want Sharedance to listen on the main IP with a cache expiration of 6 hours (21600 seconds). Open /etc/sysconfig/sharedance in a text editor and make sure you have the following line:

SHAREDANCE_OPTIONS="--ip=192.168.10.50 --expiration=21600"

You can use following command to check the complete list of options available:

$ sharedanced --help

4. Make sure it is auto start on boot and start the Sharedance service:

$ chkconfig sharedance on
$ service sharedance start

5. Check the process list; these 2 processes should exist and be listening on port 1042:

$ ps aux | grep sharedanced
496    2621    0.0   0.0   111520  568  ?  Ss   11:44   0:00   sharedanced [SERVER] 
496    2622    0.0   0.0   111520  352  ?  SN   11:44   0:00   sharedanced [CLEANUP]
$ netstat -tulpn | grep sharedanced
tcp   0   0     192.168.10.50:1042     0.0.0.0:*       LISTEN       2621/sharedanced [SERVER]

To allow firewall rules in iptables for Sharedance port:

$ iptables -I INPUT -p tcp --dport 1042 -j ACCEPT

6. We need to copy the PHP session handler files provided by Sharedance, session_handler.php and sharedance.php under the /usr/share/doc/sharedance-0.6/php/ directory, to all web cluster nodes so they can be prepended via php.ini. I will copy them to the /etc/php.d directory on all nodes:

$ cd /usr/share/doc/sharedance-0.6/php/
$ scp session_handler.php sharedance.php 192.168.10.101:/etc/php.d/
$ scp session_handler.php sharedance.php 192.168.10.102:/etc/php.d/
$ scp session_handler.php sharedance.php 192.168.10.103:/etc/php.d/

Web Servers

1. Change the php.ini value of your server as below:

auto_prepend_file = /etc/php.d/session_handler.php
session.save_handler = user

2. Edit /etc/php.d/session_handler.php in a text editor and make sure the SESSION_HANDLER_HOST line contains the IP address of the Sharedance server:

define('SESSION_HANDLER_HOST', '192.168.10.50');

3. Restart Apache web server to apply the changes:

$ service httpd restart

Testing

I downloaded this file: http://blog.secaserver.com/files/session.tar.gz and executed it from the web server.

We should see the same session ID in the Sharedance session directory at /var/lib/sharedance:

$ ls -al | grep tqsncjk23k78cm747n4b1eq5l4
-rw------- 1 sharedance sharedance 24 Aug 1 12:17 tqsncjk23k78cm747n4b1eq5l4
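To reproduce that check by hand from any client, request the site twice with a curl cookie jar and pull the PHPSESSID out of it; a session file of the same name should then exist under /var/lib/sharedance on the session server (the URL is this setup's site):

```shell
# First request creates the session, second one reuses the cookie.
curl -s -c /tmp/cookies.txt http://misterryan.com/ > /dev/null
curl -s -b /tmp/cookies.txt http://misterryan.com/ > /dev/null

# In Netscape cookie-jar format the cookie name is field 6,
# its value field 7.
sid=$(awk '$6 == "PHPSESSID" {print $7}' /tmp/cookies.txt)
echo "$sid"

# Then, on the Sharedance server:
#   ls -al /var/lib/sharedance | grep "$sid"
```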

CentOS: Configure Piranha as Load Balancer (Direct Routing Method)

I am currently working on a web cluster project using CentOS. In this project, I have 2 web servers running Apache and mounting the same document root to serve the HTTP content. In front of them, I have 2 more servers acting as load balancer and failover to increase the availability of the two-node web server cluster. The virtual IP will be held by load balancer #1, with automatic failover to load balancer #2.

You may refer to diagram below to get clearer picture:

I am using following variables:

All servers’ OS: CentOS 6.2 64bit
Web server #1: 192.168.0.221
Web server #2: 192.168.0.222
Load balancer #1: 192.168.0.231
Load balancer #2: 192.168.0.232
Virtual IP: 192.168.0.220

Load Balancer Server

1. All steps should be done in both servers unless specified. We will install Piranha and other required packages using yum:

$ yum install piranha ipvsadm -y

2. Open firewall ports as below:

  • Piranha: 3636
  • HTTP: 80
  • Heartbeat: 539

3. Start all required services and make sure they start automatically on reboot:

$ service piranha-gui start
$ chkconfig piranha-gui on
$ chkconfig pulse on

4. Run the following command to set a password for the user piranha. This will be used when accessing the web-based configuration tool:

$ piranha-passwd

5. Turn on IP forwarding. Open /etc/sysctl.conf and make sure the following line is set to 1:

net.ipv4.ip_forward = 1

And run following command to activate it:

$ sysctl -p

Load Balancer #1

1. Open the Piranha web-based configuration tool at http://192.168.0.231:3636 and log in as piranha with the respective password. We start by configuring the Global Settings as below:

2. Then, go to the Redundancy tab and enter the secondary server IP. In this case, we will put load balancer #2 IP as the redundant server in case load balancer #1 is down:

3. Under Virtual Servers tab, click Add and enter required information as below:

4. Now we need to configure the virtual IP and virtual HTTP server to map into the real HTTP server. Go to Virtual Servers > Real Server and add into the list as below:

Make sure you activate the real server once you have finished adding it, by clicking the (DE)ACTIVATE button.

5. Now copy the configuration file to load balancer #2 as below:

$ scp /etc/sysconfig/ha/lvs.cf 192.168.0.232:/etc/sysconfig/ha/

6. Restart Pulse service to apply the new configuration:

$ service pulse restart

You can monitor what Pulse is doing by tailing the /var/log/messages output as below:

$ tail -f /var/log/messages

Load Balancer #2

No need to configure anything on this server. We just need to restart the pulse service to pick up the new configuration that was copied over from LB1.

$ service pulse restart

If you watch /var/log/messages, pulse on this server will report that it is running in BACKUP mode.

Web Servers

1. Since we are using the direct routing method, in addition to your Apache installation we also need to install another package called arptables_jf. Here is a quote from the Red Hat documentation:

Using the arptables_jf method, applications may bind to each individual VIP or port that the real server is servicing. For example, the arptables_jf method allows multiple instances of Apache HTTP Server to be running bound explicitly to different VIPs on the system. There are also significant performance advantages to using arptables_jf over the IPTables option.

However, using the arptables_jf method, VIPs can not be configured to start on boot using standard Red Hat Enterprise Linux system configuration tools.

We will install it using yum:

$ yum install arptables_jf -y

2. Configure arptables_jf by executing following command:

In web server #1:

$ arptables -A IN -d 192.168.0.220 -j DROP
$ arptables -A OUT -d 192.168.0.220 -j mangle --mangle-ip-s 192.168.0.221

In web server #2:

$ arptables -A IN -d 192.168.0.220 -j DROP
$ arptables -A OUT -d 192.168.0.220 -j mangle --mangle-ip-s 192.168.0.222

3.  Save the arptables rules and make sure the service is started on boot:

$ service arptables_jf save
$ chkconfig arptables_jf on

4.  Add the virtual IP address in the servers:

$ ip addr add 192.168.0.220 dev eth0

5. Since the IP cannot be brought up during sysinit (boot time), we start it automatically after sysinit completes. Open /etc/rc.local in a text editor:

$ vim /etc/rc.local

And add following line:

/sbin/ip addr add 192.168.0.220 dev eth0

Warning: Every time you restart the network service, make sure to run step #4 again to bring up the virtual IP on the real servers.

Done. You can now point your website to the virtual IP, and load balancer #1 will report as below:

$ ipvsadm -L
 
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port       Forward Weight  ActiveConn  InActConn
TCP 192.168.0.220:http lblc
-> 192.168.0.221:http       Route   1       0           34
-> 192.168.0.222:http       Route   1       0           19

CentOS: Setup IPv6 using HE Tunnel Broker with Apache

Even though the IPv4 address space is exhausted, many people still have not realized that they need to start implementing IPv6 for their services. In this post, I am going to show how to add IPv6 connectivity to an HTTP service running on Apache.

We will use a dual-stack configuration, which allows IPv4 and IPv6 to run simultaneously on a single server. In this tutorial, I am assuming a standard Apache installation from yum.

IPv6 Kernel Module

I am using CentOS 5.6 32bit, where the IPv6 module is disabled by default if it was not configured during installation. You will see the following error when you try to load the IPv6 kernel module:

$ modprobe ipv6
FATAL: Module off not found.

This is not an issue on CentOS 5.7 and later. We need to enable the IPv6 module and make sure it is loaded into the kernel.

Open /etc/modprobe.conf using text editor:

$ vim /etc/modprobe.conf

and delete the following lines:

alias ipv6 off
options ipv6 disable=1

Save the file and load the ipv6 module:

$ modprobe ipv6

To check whether ipv6 is correctly loaded, use lsmod command as below:

$ lsmod | grep ipv6
ipv6       270049   1 cnic

To complete the process, reboot the server.

Once done, let's look at the network interfaces on this server. We have 2 active interfaces: loopback (lo) and ethernet (eth0), which is the default route to the Internet:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:16:17:27:7f:9d brd ff:ff:ff:ff:ff:ff
inet 85.127.181.30/26 brd 85.127.181.63 scope global eth0
inet6 fe80::216:17ff:fe27:7f9d/64 scope link

Get the IPv6

1. Let's check the main IPv4 address of our server. Run the following command:

$ curl cpanel.net/myip
85.127.181.30

2. Since we are configuring a dual-stack setup, we need our IPv6 connection to be tunneled over IPv4. Hurricane Electric (HE) provides this service for free. We need to create an account, create the IPv6 tunnel and configure it on our server.

Once registered, log in to the portal, click 'Create Regular Tunnel', add the main IPv4 address of your server and select a tunnel location. Since this server is located in Europe, I will select Berlin, as shown in the screenshot below:

3. Click 'Create Tunnel'. You will then be redirected to a summary page. Go to the 'Example Configurations' tab and select 'Linux-route2' as in the screenshot below:

Those are the commands we need to execute to activate IPv6 on the server.

 

Activate the IPv6

1. There are 2 ways to activate the IPv6 interface: from the command line, or via a network interface configuration file. We will activate it from the command line and also create a network configuration file so we can use the ifup and ifdown commands to control the interface (just like a normal interface script such as ifcfg-eth0).

2. Execute all commands as stated in the example configuration above:

$ modprobe ipv6
$ ip tunnel add he-ipv6 mode sit remote 216.66.80.30 local 85.127.181.30 ttl 255
$ ip link set he-ipv6 up
$ ip addr add 2001:470:1f0a:6ef::2/64 dev he-ipv6
$ ip route add ::/0 dev he-ipv6
$ ip -f inet6 addr

3. Check whether the interface is up. You should get the IPv6 address provided by TunnelBroker:

$ ifconfig he-ipv6

4. Create the network config file. Go to /etc/sysconfig/network-scripts/ and create a new file using text editor called ifcfg-he:

$ vim /etc/sysconfig/network-scripts/ifcfg-he

And add following line:

DEVICE=he-ipv6
TYPE=sit
BOOTPROTO=none
ONBOOT=yes                         # set to "no" if you prefer to start the tunnel manually
IPV6INIT=yes
IPV6TUNNELIPV4=216.66.80.30        # Server IPv4 address
IPV6ADDR=2001:470:1f0a:6ef::2      # Client IPv6 address

5. Add the following lines to /etc/sysconfig/network to make sure all IPv6 traffic is routed through this interface:

NETWORKING_IPV6=yes
IPV6_DEFAULTDEV=he-ipv6

6. Since this server already has the APF firewall loaded, we need to disable it, because APF does not support IPv6 yet. If you want an IPv6 firewall, configure your rules under /etc/sysconfig/ip6tables instead:

$ apf -f
$ rm /etc/init.d/apf

7. You can bring the IPv6 interface up and down using the ifup and ifdown commands:

$ ifdown he-ipv6
$ ifup he-ipv6

 

Point Domain Name to IPv6

The next step is DNS. We need our hostname to resolve to IPv6 on lookup. Log in to the name server and add the following AAAA record (the IPv6 counterpart of an A record):

www.mydomain.org      A           85.127.181.30
www.mydomain.org      AAAA        2001:470:1f0a:6ef::2

Done! Wait for DNS propagation to complete before you can test your website.
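Once propagation has finished, you can verify both records from the shell; dig ships in the bind-utils package on CentOS:

```shell
# The A record should return the IPv4 address...
dig +short A www.mydomain.org

# ...and the AAAA record the tunnel's client IPv6 address.
dig +short AAAA www.mydomain.org
```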

Configure Apache

1. Since we want the website to be accessible via both IPv4 and IPv6, the Listen value in httpd.conf remains at its default. Open the Apache configuration file at /etc/httpd/conf/httpd.conf and confirm the following line:

Listen 80

2. My new virtual host for the website will be as below:

NameVirtualHost 85.127.181.30:80
NameVirtualHost [2001:470:1f0a:6ef::2]:80
 
# VirtualHost for IPv4
<VirtualHost 85.127.181.30:80>
    ServerName www.mydomain.org
    ServerAdmin admin@localhost
    DocumentRoot /home/mydomain/public_html
    ErrorLog /home/mydomain/logs/error_log
    CustomLog /home/mydomain/logs/access_log combined
</VirtualHost>
# Virtual host for IPv6
<VirtualHost [2001:470:1f0a:6ef::2]:80>
    ServerName www.mydomain.org
    ServerAdmin admin@localhost
    DocumentRoot /home/mydomain/public_html
    ErrorLog /home/mydomain/logs/error_log
    CustomLog /home/mydomain/logs/access_log combined
</VirtualHost>

3. Check the Apache configuration and restart if the syntax is correct:

$ service httpd configtest
$ service httpd restart
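After the restart, you can confirm Apache is reachable over IPv6 from the shell; the second command needs a curl built with IPv6 support:

```shell
# httpd should now be listening on the IPv6 address as well.
netstat -tulpn | grep httpd

# Force curl to connect over IPv6; expect an HTTP status code back.
curl -6 -s -o /dev/null -w '%{http_code}\n' http://www.mydomain.org/
```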

IPv6 Browsing Test

To test the website over IPv6, I used http://www.ipv6proxy.net/. I accessed one of my pages, http://www.mydomain.org/ipv6.html, through this web proxy and it loaded as expected.

Done! Your website now can be accessed via IPv4 and IPv6!

Apache: Kill Certain httpd/PHP Processes in DSO

Our development team is working on a new project that involves many long-running scripts executed through Apache. These scripts migrate and convert an old database to the new database fields and formats. Most of the scripts are still under development, which requires me to monitor them and terminate processes when needed.

One problem when running Apache and PHP in DSO mode (the default when installing from yum) is that you cannot see the PHP process executing the script. PHP runs as a dynamic shared object inside Apache, so the only process visible on the server is httpd.

We will use the following example to illustrate:

OS: CentOS 6.2 64bit
PHP script URL: http://develteam.org/migration/convert.php
PHP script directory: /home/devel/public_html/migration

If you run PHP under CGI, suPHP or FastCGI in Apache, you can easily see which PID holds the process and kill it immediately. For example:

$ ps aux | grep convert.php | grep -v grep
devel  21003    29.0    0.4    217472    36080   ?    S   13:56   0:00   /usr/bin/php /home/devel/public_html/migration/convert.php

The PID (column 2) is 21003, and we can use the kill command to terminate the process. But when PHP runs under DSO, the same command produces nothing:

$ ps aux | grep php | grep -v grep

In this case, we need help from another tool called lsof (list open files), which lists the files and directories each PID has open. Since we know the PHP script lives under the /home/devel/public_html/migration directory, we can use that path to filter the lsof output:

$ lsof | grep /home/devel/public_html/migration
httpd   32117    nobody   cwd    DIR      8,5    12288     40142612    /home/devel/public_html/migration

From the output we can see the PID (column 2) of the httpd process that has this directory open. This is the process we need to terminate, using the kill command:

$ kill -9 32117

To terminate all processes returned by lsof, we can use awk to extract column 2 (the PID) and pass the result to kill:

$ kill -9 `lsof | grep /home/devel/public_html/migration | awk '{print $2}'`
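A slightly more defensive variant of the same one-liner, assuming GNU findutils' xargs: let lsof recurse the directory itself with +D, strip the header row, de-duplicate PIDs, and skip the kill entirely when nothing matches:

```shell
# +D recurses into the directory; NR > 1 drops lsof's header line;
# xargs -r runs kill only when at least one PID came through.
lsof +D /home/devel/public_html/migration \
    | awk 'NR > 1 {print $2}' \
    | sort -u \
    | xargs -r kill -9
```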

Now our development team can modify the script and start the PHP process again for their next test.

Apache: Create/Mount Same Identical Directory in Different Path

One of our web developers needs 2 directories to always be identical: whatever files exist in directory 'a' must also appear in directory 'b'. From the server and operating system point of view, this can be achieved using several methods:

  • Use symbolic link
  • Use mount bind
  • Use bindfs

Each method has advantages and disadvantages, which will be explained accordingly. I will use the following variables:

OS: CentOS 6 64bit
Document root: /home/user/public_html
Directory #1 (reference): /home/user/public_html/system1/filesharing/
Directory #2 (follower): /home/user/public_html/system2/filesharing/

Method 1: Symbolic Link

1. Before you can use symlinks in Apache, you need to allow the functionality in the Apache configuration file. Add the following line to /etc/httpd/conf/httpd.conf (this affects the global configuration):

Options +FollowSymLinks -SymLinksIfOwnerMatch

Or:

You can instead add the following line to the .htaccess file in the user’s public_html directory:

Options +FollowSymLinks -SymLinksIfOwnerMatch

This requires AllowOverride to be enabled in httpd.conf, as below:

AllowOverride All

2. Restart the Apache server for new configuration to be loaded:

$ /etc/init.d/httpd restart

3. Navigate to the secondary directory and create a symbolic link:

$ cd /home/user/public_html/system2
$ ln -s ../system1/filesharing filesharing

This virtually maps the filesharing directory under system1 into the system2 directory using a relative path.

Advantages:

  • Symlinks can be created at the user level, as long as the user has write permission to the current directory.
  • You can use the PHP symlink function to create symlinks.
  • Deleting the follower only removes the symbolic link; the reference directory is untouched.
  • You can use relative paths.

Disadvantages:

  • Apache turns the option above off by default, so you might see the following common error:
     Symbolic link not allowed or link target not accessible
  • Anyone can remove the symlink easily. For example, user A creates a symlink to user B’s folder, and user B can then remove the symlink without user A’s acknowledgement.
  • Symlinks are one of the most popular ways for attackers to browse around directories on a server. No matter which directory they get into, symlinks will work as long as they have write permission to that directory, usually /tmp. For example, from the /tmp folder I can symlink to /var/lib/mysql and browse all database names on the server.
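The relative-symlink step can be rehearsed safely in a scratch directory before touching the real document root; readlink -f shows where the link actually resolves. The paths below are temporary, not the article’s real ones:

```shell
# Rehearse the ln -s step in a throwaway directory tree.
base=$(mktemp -d)
mkdir -p "$base/system1/filesharing" "$base/system2"

cd "$base/system2"
ln -s ../system1/filesharing filesharing

# readlink -f canonicalises the relative link to an absolute target path.
readlink -f "$base/system2/filesharing"
```

If readlink prints the system1 path, the same ln -s invocation will work in the real public_html tree.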

Method 2: Mount bind

1. As the root user, you can use a bind mount to expose the same directory under a different path. Create a new directory to serve as the mount point for the follower:

$ mkdir -p /home/user/public_html/system2/filesharing

2. Mount the reference directory onto the follower:

$ mount --bind /home/user/public_html/system1/filesharing /home/user/public_html/system2/filesharing

3. Add the following line to /etc/fstab if you want it mounted during boot (sysinit), or to /etc/rc.local if you want it mounted after boot completes:

For /etc/fstab:

/home/user/public_html/system1/filesharing    /home/user/public_html/system2/filesharing    none    bind    0 0

For /etc/rc.local:

mount --bind /home/user/public_html/system1/filesharing /home/user/public_html/system2/filesharing

You can unmount it manually using the following command:

$ umount /home/user/public_html/system2/filesharing
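After mounting (or after a reboot) you can confirm that the bind took effect: both paths of a working bind mount resolve to the same device and inode, which stat can compare. A sketch assuming GNU coreutils stat; the helper name is mine:

```shell
# True when both paths are the same underlying filesystem object,
# as they are for a working bind mount (GNU coreutils stat assumed).
same_object() {
    test "$(stat -c '%d:%i' "$1")" = "$(stat -c '%d:%i' "$2")"
}

# A path compared with itself is trivially the same object:
same_object / / && echo "same filesystem object"
```

Run it against the reference and follower paths; if it fails after a reboot, the fstab or rc.local entry did not take effect.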

Advantages:

  • Apache treats both directories as normal directories, which avoids the errors you may hit with symlinks.
  • Only root and sudoers are able to do this.
  • Mount options such as read-only, permissions and ownership can be applied to the follower directory.

Disadvantages:

  • You need to make sure the path is mounted correctly, for example after a reboot, or whenever the reference’s hard disk has mounting or I/O problems.
  • Use mount bind with precaution. Most Linux and Unix file systems don’t allow hard links to directories (except for the . and .. entries that mkdir creates itself). The reasons are pretty obvious: you could really confuse programs like ls (ls -R), find and of course fsck if you created links that recursed back to themselves.

Method 3: Bindfs

1. Bindfs works similarly to mount bind, but it uses FUSE for mounting and has better functionality and permission configuration compared to mount. Before installing bindfs, we need to install FUSE and its development package using yum:

$ yum install fuse fuse-devel -y

2. Download bindfs from here and install it:

$ cd /usr/local/src
$ wget http://bindfs.googlecode.com/files/bindfs-1.10.3.tar.gz
$ tar -xzf bindfs-1.10.3.tar.gz
$ cd bindfs-*
$ ./configure
$ make
$ make install

3. Create the ‘filesharing’ directory and mount the directory as below:

$ cd /home/user/public_html/system2
$ mkdir filesharing
$ bindfs -p 755 /home/user/public_html/system1/filesharing filesharing

4. Add the following line to /etc/fstab if you want it mounted during boot (sysinit), or to /etc/rc.local if you want it mounted after boot completes:

For /etc/fstab:

bindfs#/home/user/public_html/system1/filesharing    /home/user/public_html/system2/filesharing    fuse    perms=755    0 0

For /etc/rc.local:

bindfs -p 755 /home/user/public_html/system1/filesharing /home/user/public_html/system2/filesharing

You can use the mount command to check whether it is mounted correctly:

$ mount | grep bindfs
bindfs on /home/user/public_html/system2/filesharing type fuse.bindfs (rw,nosuid,nodev,allow_other,default_permissions)

To unmount manually, simply use umount command:

$ umount /home/user/public_html/system2/filesharing

Advantages:

  • You can create custom rules depending on your policy; refer to the man page here. This is useful if you want different users to access the mount with different attributes instead of inheriting the reference directory’s attributes.
  • Apache treats both directories as normal directories, which avoids the errors you may hit with symlinks.
  • Only root and sudoers are able to do this.

Disadvantages:

  • It runs on top of FUSE. On some kernels, FUSE has performance issues and is prone to hanging.
  • You need to make sure the path is mounted correctly, for example after a reboot, or whenever the reference’s hard disk has mounting or I/O problems.
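That last caveat (which applies to mount bind as well) can be scripted: check the mount table before trusting the follower path, and remount only when the entry is missing. A sketch with my own function names; the mount-table path is a parameter so the check itself can be exercised against any file:

```shell
# True when the given mount point appears in the mount table.
# $1 = mount point, $2 = mount table file (defaults to /proc/mounts).
is_mounted() {
    grep -q " $1 " "${2:-/proc/mounts}"
}

# Boot-time guard: remount with bindfs only when the entry is missing.
ensure_filesharing() {
    is_mounted /home/user/public_html/system2/filesharing || \
        bindfs -p 755 /home/user/public_html/system1/filesharing \
                      /home/user/public_html/system2/filesharing
}

# The root filesystem is always present in /proc/mounts:
is_mounted / && echo "root filesystem mounted"
```

Calling ensure_filesharing from /etc/rc.local instead of a bare bindfs line makes the script safe to re-run.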

 

Linux: Install and Configure Varnish as Cache Server

In some cases, one of my servers has problems delivering static content due to low disk I/O capability. So we need something in front of the web service to cache and deliver the static content (mostly pictures and HTML) and help reduce the load on the main server.

You can refer to the diagram below to get a clearer picture:

As usual, I will be using CentOS, with Varnish as the cache in front of the web servers. Varnish will also provide failover if one of the web servers goes down. All servers behind the cache server communicate using internal IPs, so the web servers are not exposed to the outside world.

Variables as below:

OS: CentOS 6.2 64bit
Web1: 192.168.100.11
Web2: 192.168.100.12
Cache Server RAM installed: 16GB
Domain: supremedex.org

1. Installing Varnish is super easy:

$ rpm --nosignature -i http://repo.varnish-cache.org/redhat/varnish-3.0/el5/noarch/varnish-release-3.0-1.noarch.rpm
$ yum install varnish

2. We should tell Varnish how it should start. Open /etc/sysconfig/varnish and make sure the following lines are uncommented and have the correct values:

NFILES=131072
MEMLOCK=82000
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_PORT=80
VARNISH_ADMIN_LISTEN_PORT=8888
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_MIN_THREADS=2
VARNISH_MAX_THREADS=1000
VARNISH_THREAD_TIMEOUT=120
VARNISH_CACHE_SIZE=12G
VARNISH_CACHE="malloc,${VARNISH_CACHE_SIZE}"
VARNISH_TTL=120
DAEMON_OPTS="-a ${VARNISH_LISTEN_ADDRESS}:${VARNISH_LISTEN_PORT} \
             -f ${VARNISH_VCL_CONF} \
             -T ${VARNISH_ADMIN_LISTEN_ADDRESS}:${VARNISH_ADMIN_LISTEN_PORT} \
             -t ${VARNISH_TTL} \
             -w ${VARNISH_MIN_THREADS},${VARNISH_MAX_THREADS},${VARNISH_THREAD_TIMEOUT} \
             -u varnish -g varnish \
             -S ${VARNISH_SECRET_FILE} \
             -s ${VARNISH_CACHE}"

3. Let’s configure Varnish. Open /etc/varnish/default.vcl and make sure the following lines are in place:

# Define the internal network subnet.
acl internal {
  "192.168.100.0"/24;
}
 
# Define the list of web servers
# Port 80 Backend Servers
backend web1 { .host = "192.168.100.11"; .probe = { .url = "/server_status.php"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; }}
backend web2 { .host = "192.168.100.12"; .probe = { .url = "/server_status.php"; .interval = 5s; .timeout = 1s; .window = 5; .threshold = 3; }}
 
# Define the director that determines how to distribute incoming requests.
director web_director round-robin {
  { .backend = web1; }
  { .backend = web2; }
}
 
# Respond to incoming requests
sub vcl_recv {
  # Set the director to cycle between web servers.
  set req.backend = web_director;
 
  if (req.url ~ "^/server_status\.php$") {
       return (pass);
  }
 
  # Pipe these paths directly to Apache for streaming.
  if (req.url ~ "^/backup") {
    return (pipe);
  }
 
  # Always cache the following file types for all users.
  if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm)(\?[a-z0-9]+)?$") {
    unset req.http.Cookie;
  }
}
 
sub vcl_hash {
}
 
# Code determining what to do when serving items from the Apache servers.
sub vcl_fetch {
  # Don't allow static files to set cookies.
  if (req.url ~ "(?i)\.(png|gif|jpeg|jpg|ico|swf|css|js|html|htm)(\?[a-z0-9]+)?$") {
    # beresp == Back-end response from the web server.
    unset beresp.http.set-cookie;
  }
 
  # Allow items to be stale if needed.
  set beresp.grace = 6h;
}
 
# In the event of an error, show friendlier messages.
sub vcl_error {
  # Redirect to some other URL in the case of a homepage failure.
  if (req.url ~ "^/?$") {
    set obj.status = 302;
    set obj.http.Location = "http://dl.dropbox.com/u/68546782/maintenance.jpg";
  }
}

4. Now we need to create a PHP file on each backend server so Varnish can verify that the PHP and Apache services are running well; this is for Varnish’s health monitoring. Create a new file called server_status.php under the Apache document root (in my case, /var/www/html):

$ touch /var/www/html/server_status.php
$ chown nobody:nobody /var/www/html/server_status.php

And add the following line:

<?php echo "Status: OK"; ?>
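Varnish will mark a backend sick when this probe URL stops answering. The same check is easy to run by hand with curl when debugging; a sketch where the helper name is mine and the IP is this article’s web1:

```shell
# Compare a fetched probe body against the marker the PHP file prints.
check_backend() {
    test "$1" = "Status: OK"
}

# Typical manual probe of web1 (fails harmlessly when the host is absent):
body=$(curl -s --max-time 2 http://192.168.100.11/server_status.php || true)
check_backend "$body" && echo "backend healthy" || echo "backend down"
```

If this reports the backend down while Apache is up, check that server_status.php is readable by the web server user.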

5. Now we can start Varnish using the following command:

$ service varnish start

6. Let’s check whether Varnish is working fine:

$ netstat -tulpn | grep varnish
tcp   0   0     0.0.0.0:80        0.0.0.0:*     LISTEN      10200/varnishd
tcp   0   0     0.0.0.0:8888      0.0.0.0:*     LISTEN      10199/varnishd
tcp   0   0     :::80             :::*          LISTEN      10200/varnishd
tcp   0   0     :::8888           :::*          LISTEN      10199/varnishd

7. Point the domain itself to the cache server as below:

supremedex.org    A       202.133.14.80
www               CNAME   supremedex.org

8. Once DNS propagation is completed, you should be able to access the website directly via a web browser at http://www.supremedex.org/.

Notes

To change the memory allocation for Varnish, revisit step #2 and change VARNISH_CACHE_SIZE. The optimal cache size is 70% – 80% of total RAM, depending on how many static files you want to cache.
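The 70–80% rule of thumb is simple arithmetic; a sketch taking the midpoint, with a helper name of my own and values in megabytes:

```shell
# Suggest a Varnish cache size as ~75% of installed RAM (both in MB).
cache_size_mb() {
    echo $(( $1 * 75 / 100 ))
}

# For the 16 GB (16384 MB) cache server used in this article:
cache_size_mb 16384   # prints 12288, i.e. the 12G used in VARNISH_CACHE_SIZE
```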

To monitor the Varnish logs, run the following command in a console or SSH session:

$ varnishlog

To reload the configuration after making changes to the Varnish configuration file (default.vcl):

$ service varnish reload

Reference

http://www.lullabot.com/sites/lullabot.com/files/default_varnish3.vcl_.txt

High Availability: Web Server Cluster using Apache + Pound + Unison

To achieve high availability, we need to eliminate as many single points of failure as possible, but doing so is usually expensive. What if we have just 2 servers and want the highest possible web service availability at the lowest cost?

The most important part of this high-availability setup is that data must stay in sync between the servers. We need several tools to achieve our target:

  • Apache – Web server
  • Pound – HTTP load balancer/failover
  • Keepalived – IP failover
  • Unison – 2 way file synchronization
  • Fsniper – Monitor file and trigger the file synchronization

The following image gives a clearer explanation of the architecture we will set up:

In this setup, SELinux and iptables have been turned OFF, and root privileges are required. I am using the following variables:

OS: CentOS 6.2 64bit
Web server #1 IP: 210.48.111.21
Web server #2 IP: 210.48.111.22
Domain: icreateweb.net
Web site public IP: 210.48.111.20
Web directory: /home/icreate/public_html

The steps are similar on both servers unless specified otherwise. To make things easier, we enable the RPMforge repository, because almost all of the applications we need are available there:

$ cd /usr/local/src
$ rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
$ wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
$ rpm -Uhv rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

The following lines have been added to /etc/hosts on both servers to ease SSH communication:

210.48.111.21 webserver1
210.48.111.22 webserver2

We also need to allow passwordless SSH between the nodes for automatic file synchronization. Execute the following command:

$ ssh-keygen -t dsa

Press ‘Enter’ until it finishes. Then copy the public key to the other server (webserver2):

$ ssh-copy-id -i ~/.ssh/id_dsa root@webserver2

Do the same on the other server, but copy the public key to webserver1 in the ssh-copy-id command above. This step is critical and should not be skipped.

Web servers

1. Install all required applications using yum. We skip RPMforge for the moment because we only need basic Apache and PHP packages:

$ yum install httpd* php* -y --disablerepo=rpmforge

2. Create web and logs directory for user icreate:

$ useradd -m icreate
$ chmod 755 /home/icreate
$ mkdir /home/icreate/public_html
$ mkdir /home/icreate/logs
$ touch /home/icreate/logs/access_log
$ touch /home/icreate/logs/error_log

3. We will use Pound as the reverse proxy and load balancer on port 80, so Apache needs to run on a different port; we will use port 88. Open /etc/httpd/conf/httpd.conf in a text editor and make sure the following value is set:

Listen 88

4. Create vhosts.conf under the /etc/httpd/conf.d directory and paste the following lines:

Web server #1:

NameVirtualHost 210.48.111.21:88
 
# Default host
<VirtualHost 210.48.111.21:88>
    ServerName localhost
    ServerAdmin admin@localhost
    DocumentRoot /var/www/html
</VirtualHost>
 
# Virtual host for domain icreateweb.net
<VirtualHost 210.48.111.21:88>
    ServerName icreateweb.net
    ServerAlias www.icreateweb.net
    ServerAdmin webmaster@icreateweb.net
    DocumentRoot /home/icreate/public_html
    ErrorLog /home/icreate/logs/error_log
    CustomLog /home/icreate/logs/access_log combined
</VirtualHost>

Web server #2:

NameVirtualHost 210.48.111.22:88
 
# Default host
<VirtualHost 210.48.111.22:88>
    ServerName localhost
    ServerAdmin admin@localhost
    DocumentRoot /var/www/html
</VirtualHost>
 
# Virtual host for domain icreateweb.net
<VirtualHost 210.48.111.22:88>
    ServerName icreateweb.net
    ServerAlias www.icreateweb.net
    ServerAdmin webmaster@icreateweb.net
    DocumentRoot /home/icreate/public_html
    ErrorLog /home/icreate/logs/error_log
    CustomLog /home/icreate/logs/access_log combined
</VirtualHost>

5. Restart and enable Apache service:

$ chkconfig httpd on
$ service httpd restart

6. Let’s create an HTML test file to differentiate between the 2 web servers:

Web server #1:

$ echo "web server 1" > /home/icreate/public_html/server.html

Web server #2:

$ echo "web server 2" > /home/icreate/public_html/server.html

The website should now be served on the local IP of both servers. Next, we need to install and configure the other applications that help us achieve high availability.

Unison & Fsniper

1. To keep the files on both servers correctly in sync, we will use Unison for file synchronization. Install Unison via yum:

$ yum install unison -y

2. Type the following command to initialize the Unison profile:

$ unison

3. Let’s create the Unison configuration file. Using a text editor, open /root/.unison/default.prf and add the following lines. We ignore server.html, which we use to determine whether an HTTP connection from Pound reached web #1 or web #2:

Web server #1:

root=/home/icreate/public_html
root=ssh://webserver2//home/icreate/public_html
batch=true
ignore=Name{server.html}

Web server #2:

root=/home/icreate/public_html
root=ssh://webserver1//home/icreate/public_html
batch=true
ignore=Name{server.html}

4. Now let’s run the first synchronization, which is important. Run the following command on either server; in this case, I will run it on web server #1:

$ unison default

5. Once completed, your files should be in sync between both servers. Next, download and install Fsniper:

$ cd /usr/local/src
$ yum install pcre* file-libs file-devel -y
$ wget http://projects.l3ib.org/fsniper/files/fsniper-1.3.1.tar.gz
$ tar -xzf fsniper-1.3.1.tar.gz
$ cd fsniper-*
$ ./configure
$ make
$ make install

6. Create the Fsniper configuration file to watch the directory and trigger the synchronization script. Open /root/.config/fsniper/config and add the following lines:

watch {
      /home/icreate/public_html {
      recurse = true
      * {
        handler = echo %%; /root/scripts/file_sync
        }
      }
}

7. Let’s create the file_sync script that triggers Unison and checks for a running process. Unison performs 2-way replication, so only one process needs to be running across both servers at any one time:

$ mkdir -p /root/scripts
$ vim /root/scripts/file_sync

Web server #1:

#!/bin/bash
# Trigger Unison to do 2 way synchronization
 
# Check if Unison is running on both servers
if [ "$(pidof unison)" ] || [ "$(ssh root@webserver2 pidof unison)" ]
then
    echo "Unison is running. Exiting"
    exit 0
else
    /usr/bin/unison default
fi

Web server #2:

#!/bin/bash
# Trigger Unison to do 2 way synchronization
 
# Check if Unison is running on both servers
if [ "$(pidof unison)" ] || [ "$(ssh root@webserver1 pidof unison)" ]
then
    echo "Unison is running. Exiting"
    exit 0
else
    /usr/bin/unison default
fi
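The pidof test above has a small race: two fsniper events can both see no unison process and start it twice. A flock(1)-based wrapper closes the local half of that race; this is my own variation, not part of the original scripts, and the remote pidof check would still be needed for cross-server exclusion:

```shell
#!/bin/sh
# Run a command under an exclusive, non-blocking file lock.
# $1 = lock file, remaining arguments = command to run.
run_locked() {
    lock=$1; shift
    (
        # fd 9 is opened on the lock file by the redirection below;
        # a second invocation finds the lock held and skips cleanly.
        flock -n 9 || { echo "already running"; exit 0; }
        "$@"
    ) 9>"$lock"
}

# In file_sync this would wrap the synchronization call:
#   run_locked /var/run/file_sync.lock /usr/bin/unison default
run_locked /tmp/file_sync.demo.lock echo "sync ran"
```

Because the lock is released when fd 9 closes, a crashed sync cannot leave a stale lock behind the way a PID file can.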

8. Now let’s start the Fsniper process and allow it to start on boot:

$ /usr/local/bin/fsniper --daemon
$ echo "/usr/local/bin/fsniper --daemon" >> /etc/rc.local

KeepAlived

1. Download and install Keepalived. This application allows web server #1 and web server #2 to share the public IP (210.48.111.20) between them:

$ yum install -y openssl openssl-devel popt*
$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.2.tar.gz
$ tar -xzf keepalived-1.2.2.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install

2. Since the virtual IP is shared between these 2 servers, we need to tell the kernel that a non-local IP will be bound by Pound later. Add the following line to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Run the following command to apply the changes:

$ sysctl -p

3. By default, the Keepalived configuration file lives at /usr/local/etc/keepalived/keepalived.conf. We will make things easier by symlinking it into the /etc directory. We also need to clear the example configuration inside it:

$ ln -s /usr/local/etc/keepalived/keepalived.conf /etc/keepalived.conf
$ cat /dev/null > /etc/keepalived.conf

4. Let’s configure Keepalived:

For web server #1, add the following lines to /etc/keepalived.conf:

vrrp_script chk_pound {
        script "killall -0 pound"       # check that the pound process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state MASTER
        virtual_router_id 51		# Assign one ID for this route
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            210.48.111.20		# the virtual IP
        }
        track_script {
            chk_pound
        }
}

For web server #2, add the following lines to /etc/keepalived.conf:

vrrp_script chk_pound {
        script "killall -0 pound"       # check that the pound process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state BACKUP
        virtual_router_id 51		# Assign one ID for this route
        priority 100                    # 101 on master, 100 on backup
        virtual_ipaddress {
            210.48.111.20		# the virtual IP
        }
        track_script {
            chk_pound
        }
}

5. Start Keepalived and make it start automatically after boot:

$ keepalived -f /etc/keepalived.conf
$ echo "/usr/local/sbin/keepalived -f /etc/keepalived.conf" >> /etc/rc.local

Pound

1. Install Pound:

$ rpm -Uhv http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS/pound-2.4.3-1.el5.rf.x86_64.rpm

2. Configure Pound by editing its configuration file, /etc/pound.cfg, and paste the following configuration:

User            "nobody"
Group           "nobody"
 
LogLevel        1
Alive           2
 
ListenHTTP
        Address 0.0.0.0
        Port    80
End
 
Service
        HeadRequire "Host: .*icreateweb.net.*"
 
        BackEnd
                Address 210.48.111.21
                Port    88
		TimeOut 300
        End
 
        BackEnd
                Address 210.48.111.22
                Port 88
		TimeOut 300
        End
 
        Session
                Type Cookie
                ID   "JSESSIONID"
                TTL  300
        End
End

3. Allow the service to start automatically after boot, and start Pound:

$ chkconfig pound on
$ service pound start

4. Check whether Pound is running correctly and listening on port 80:

$ netstat -tulpn | grep pound
tcp        0      0 0.0.0.0:80                  0.0.0.0:*                   LISTEN      6175/pound

 

Done! Now you can test your website’s availability by bringing down the HTTP service or a whole server. Using only 2 servers, it is possible to push service uptime as high as possible. Cheers!
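With the server.html markers from step 6 you can also watch the balancing in action: request the page repeatedly and count which backend answered. The fetch loop is shown as a comment since it needs the live site; the tally itself is plain sort | uniq:

```shell
# Count identical response lines, trimming uniq's leading padding.
tally() {
    sort | uniq -c | sed 's/^ *//'
}

# Live usage against this article's domain would be:
#   for i in $(seq 1 10); do curl -s http://www.icreateweb.net/server.html; done | tally
printf 'web server 1\nweb server 2\nweb server 1\nweb server 2\n' | tally
```

With session cookies disabled (or a fresh client per request) the counts should be roughly even; bring one Apache down and all answers should come from the survivor.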

cPanel: Install PHP SSH2 Module

One of my developers requires the PHP SSH2 module to be loaded on the cPanel server. Since this module is not available inside EasyApache, I need to install it separately and integrate it with the current configuration we built using EasyApache.

1. Download and install libssh2 from http://www.libssh2.org/snapshots/:

$ cd /usr/local/src
$ wget http://www.libssh2.org/snapshots/libssh2-1.4.0-20120319.tar.gz
$ tar -xzf libssh2-1.4.0-20120319.tar.gz
$ cd libssh2-*
$ ./configure
$ make all install

2. Before we install the ssh2 module, we need to know the PHP extension_dir location:

$ php -i | grep extension_dir
/usr/local/lib/php/extensions/no-debug-non-zts-20090626

3. Then download the PECL ssh2 module from http://pecl.php.net/package/ssh2 and install it:

$ cd /usr/local/lib/php/extensions/no-debug-non-zts-20090626
$ wget http://pecl.php.net/get/ssh2-0.11.3.tgz
$ tar -xzf ssh2-0.11.3.tgz
$ mv ssh2-0.11.3 php-ssh2
$ cd php-ssh2
$ phpize
$ ./configure --with-ssh2
$ make
$ make install

4. Now we need to enable the module in php.ini. Retrieve the php.ini location:

$ php -i | grep "Loaded Configuration File"
 Loaded Configuration File => /usr/local/lib/php.ini

And run the following command to map the extension into PHP:

$ echo "extension=ssh2.so" >> /usr/local/lib/php.ini

5. Restart Apache web server (if you are using DSO):

$ service httpd restart

Done! You can check whether the SSH2 module is loaded using the following command:

$ php -i | grep ssh2
Registered PHP Streams => compress.zlib, compress.bzip2, php, file, glob, data, http, ftp, phar, zip, ssh2.shell, ssh2.exec, ssh2.tunnel, ssh2.scp, ssh2.sftp
ssh2
libssh2 version => 1.4.0-20120319
banner => SSH-2.0-libssh2_1.4.0-20120319
PWD => /usr/local/lib/php/extensions/no-debug-non-zts-20090626/php-ssh2
_SERVER["PWD"] => /usr/local/lib/php/extensions/no-debug-non-zts-20090626/php-ssh2
_ENV["PWD"] => /usr/local/lib/php/extensions/no-debug-non-zts-20090626/php-ssh2

Web Server Benchmarking using Apache Benchmark and gnuplot

Apache Benchmark (aka ab) is a tool for benchmarking HTTP web servers. It is recommended to test your web server’s performance before switching it to a production environment. I use this tool to benchmark and stress-test our development server before it goes live.

Make sure you have the following points prepared before benchmarking:

  • URL is accessible via public domain or IP: http://blog.secaserver.com
  • Expected number of clients your server should be ready to serve: 50 concurrent users
  • Expected number of requests per client your server should be ready to serve: 10 requests/user/second
  • Verify whether HTTP keep-alive is supported. You can use phpinfo or examine the server headers: keep-alive supported
  • How the output should be presented: Graph

I will use a CentOS terminal server to run this benchmark remotely. The test should be performed at least 3 times so we can see a pattern. We will use gnuplot to present the data as a graph, so we need to export the Apache Benchmark output to a file gnuplot understands, in TSV format (tab-separated values), called bench1.tsv; subsequent tests will use bench2.tsv and so on.

1. Install required tools using yum. We will need to install httpd and gnuplot:

$ yum install -y httpd httpd-tools gnuplot

2. For the first test, start benchmarking with the following command:

$ ab -c 10 -n 50 -k -g /var/www/html/bench1.tsv http://blog.secaserver.com/

Repeat the step above for the 2nd and 3rd tests using the following commands:

$ ab -c 10 -n 50 -k -g /var/www/html/bench2.tsv http://blog.secaserver.com/
$ ab -c 10 -n 50 -k -g /var/www/html/bench3.tsv http://blog.secaserver.com/

3. The TSV files are now ready to be plotted. Let’s generate the graph in PNG format:

$ cd /var/www/html
$ gnuplot

You will enter the gnuplot console. Run the following commands to generate the image:

gnuplot> set terminal png
Terminal type set to 'png'
gnuplot> set output "benchmark.png"
gnuplot> set title "Benchmark for blog.secaserver.com"
gnuplot> set size 1,1
gnuplot> set grid y
gnuplot> set xlabel 'Request'
gnuplot> set ylabel 'Response Time (ms)'
gnuplot> plot "bench1.tsv" using 10 smooth sbezier with lines title "Benchmark 1:", "bench2.tsv" using 10 smooth sbezier with lines title "Benchmark 2:", "bench3.tsv" using 10 smooth sbezier with lines title "Benchmark 3:"
gnuplot> exit

Done! You should now be able to see the image via a browser. Example benchmarking output as below:

Notes

We can create a template script for gnuplot to simplify graph generation. The following template performs the same actions as step #3. We will name this template file benchmark.tpl:

# output as png image
set terminal png
 
# save file to "benchmark.png"
set output "benchmark.png"
 
# graph title
set title "Benchmark for blog.secaserver.com"
 
# aspect ratio for image size
set size 1,1
 
# enable grid on y-axis
set grid y
 
# x-axis label
set xlabel "Request"
 
# y-axis label
set ylabel "Response Time (ms)"
 
# plot data from bench1.tsv,bench2.tsv and bench3.tsv using column 10 with smooth sbezier lines
plot "bench1.tsv" using 10 smooth sbezier with lines title "Benchmark 1:", \
"bench2.tsv" using 10 smooth sbezier with lines title "Benchmark 2:", \
"bench3.tsv" using 10 smooth sbezier with lines title "Benchmark 3:"

To execute the template, just run the following command:

$ gnuplot benchmark.tpl

If you examine the TSV files created by ab, you will see the column headers ctime, dtime, ttime and wait. Definitions as below:

ctime: Connection Time
dtime: Processing Time
ttime: Total Time
wait: Waiting Time
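Besides plotting, the same data can be summarized numerically. A sketch that averages the ttime column, assuming the ab -g layout (tab-separated, one header row, ttime as the 5th field); the helper name and the sample file are mine:

```shell
# Average total response time (ttime, 5th tab-separated field) from ab -g output.
avg_ttime() {
    awk -F'\t' 'NR > 1 { sum += $5; n++ } END { if (n) printf "%.1f\n", sum / n }' "$1"
}

# Build a two-row sample in the same shape as a real bench1.tsv:
f=$(mktemp)
printf 'starttime\tseconds\tctime\tdtime\tttime\twait\n' > "$f"
printf 'Sat Jun 09 05:00:05 2012\t1339218005\t2\t40\t42\t40\n' >> "$f"
printf 'Sat Jun 09 05:00:05 2012\t1339218005\t3\t55\t58\t54\n' >> "$f"

avg_ttime "$f"   # (42 + 58) / 2 = 50.0
```

Run the same function over bench1.tsv through bench3.tsv to compare the three runs as single numbers alongside the graph.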