Linux: Duplicate MySQL Database

My boss wants to duplicate our main database so our developer team can use it for a new design project. The database is already running on the live system, and the developers need the same database contents in order to complete the project.

The easiest way to duplicate a MySQL database is with the mysqldump command. These are the variables I used:

OS: CentOS 6.0 64bit
MySQL root password: [email protected]#
Database name: grand_shopper
Duplicate database name: grand_shopper_dev
New database user: dev_user
Password: D3eVVbf7

Firstly, since I only want to duplicate the database inside the same server, I need to create the destination database:

$ mysql -u root -p'[email protected]#' -e "create database grand_shopper_dev"

And use the following command to duplicate the database contents from grand_shopper to grand_shopper_dev:

$ mysqldump -u root -p'[email protected]#' grand_shopper | mysql -u root -p'[email protected]#' grand_shopper_dev

Format: mysqldump [user] [password] [source database] | mysql [user] [password] [destination database]
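As a sketch, the whole duplication can be wrapped in a small shell function. This is illustrative only: it echoes the pipeline instead of executing it (running it for real needs a live MySQL server), and `-p` with no value makes both tools prompt for the password. The `--single-transaction` flag is a standard mysqldump option worth adding so InnoDB tables are dumped consistently without locking the live database:

```shell
# Dry-run sketch: build the duplication pipeline and echo it instead of
# executing it (running it for real needs a live MySQL server).
# --single-transaction gives a consistent dump of InnoDB tables without
# locking the live database; -p with no value prompts for the password.
duplicate_db() {
    src=$1; dst=$2
    echo "mysqldump -u root -p --single-transaction $src | mysql -u root -p $dst"
}

duplicate_db grand_shopper grand_shopper_dev
```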

Now create a MySQL user to associate with the new database. Access the MySQL server using:

$ mysql -u root -p'[email protected]#'

And run the following SQL statements:

mysql> GRANT USAGE ON grand_shopper_dev.* TO dev_user@localhost IDENTIFIED BY 'D3eVVbf7';
mysql> GRANT ALL PRIVILEGES ON grand_shopper_dev.* TO dev_user@localhost;

Try accessing the new database with the new user:

$ mysql -h localhost -u dev_user -p'D3eVVbf7'

FreeBSD: Upgrade from 8.2 to 9.0

If you use this command to upgrade to the latest release, FreeBSD 9.0:

$ freebsd-update -r 9.0-RELEASE upgrade

You might see the following error:

The update metadata is correctly signed, but
failed an integrity check.
Cowardly refusing to proceed any further.

This error indicates that freebsd-update's file-name pattern does not accept the % and @ characters, which appear in some FreeBSD 9 file names. To work around it, patch the pattern with the following command:

$ sed -i '' -e 's/=_/=%@_/' /usr/sbin/freebsd-update
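To see what the sed one-liner actually does, here it is applied to a stand-in line (the pattern below is illustrative, not the real contents of /usr/sbin/freebsd-update): it widens a character class so that file names containing % and @ pass the check.

```shell
# Illustration of the substitution: "=_" inside a character class becomes
# "=%@_", so % and @ are now accepted characters. The sample line is a
# stand-in for the pattern inside /usr/sbin/freebsd-update.
printf 'FILEPAT="[0-9a-zA-Z=_]"\n' | sed -e 's/=_/=%@_/'
```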

Now start the upgrade process:

$ freebsd-update -r 9.0-RELEASE upgrade

Accept all prompted values and follow the wizard. This process downloads all files and patches required for the upgrade, so it takes time. You might need to press ‘Enter’ once to review the /etc/hosts file. Once complete, run the following command to start installing the updates:

$ freebsd-update install

After a while, the system should print something like this:

Installing updates...rmdir: ///boot/kernel: Directory not empty
 
Kernel updates have been installed. Please reboot and run "/usr/sbin/freebsd-update install"
again to finish installing updates.

Reboot the server:

$ init 6

Once up, it will boot into FreeBSD 9. Run the installation command again:

$ freebsd-update install

After the process completes, the system will ask you to rebuild all applications installed from ports. Once done, rerun the command above to complete the upgrade process; you should see something like this:

$ freebsd-update install
Installing updates... Done

Your upgrade should now be complete. To check the new version, run:

$ uname -r
9.0-RELEASE

Source: http://lists.freebsd.org/pipermail/freebsd-stable/2011-October/064321.html

High Availability: MySQL Cluster with Galera + HAProxy

In this post, I am going to show my implementation of a highly available MySQL setup with load balancing, using HAProxy, a Galera cluster, garbd and a virtual IP managed by keepalived.

Actually, the process is similar to my previous post, with some added steps to configure HAProxy and garbd.

Variables that I used:

OS: CentOS 6.0 64bit
MySQL server1: 192.168.0.171
MySQL server2: 192.168.0.172
HAProxy server1: 192.168.0.151
HAProxy server2: 192.168.0.152
Virtual IP to be shared among HAProxy: 192.168.0.170
MySQL root password: [email protected]#
Cluster root username: clusteroot
Cluster root password: [email protected]#
Galera SST user: sst
Galera SST password: sstpass123

Server hostnames are important in a cluster. The following entries have been set up in every server's /etc/hosts file. All configurations shown below assume that the firewall is turned OFF and SELinux is DISABLED:

192.168.0.151 haproxy1.cluster.local haproxy1
192.168.0.152 haproxy2.cluster.local haproxy2
192.168.0.171 galera1.cluster.local galera1
192.168.0.172 galera2.cluster.local galera2
127.0.0.1     localhost.localdomain localhost
::1           localhost6 localhost6.localdomain

MySQL Cluster with Galera

1. The following steps are similar to my previous post, but I am rewriting them for this case with the latest versions of Galera and MySQL. No MySQL server is installed on these servers yet. Download the latest Galera library, MySQL with wsrep, MySQL client and MySQL shared packages from the MySQL download page:

$ mkdir -p /usr/local/src/galera
$ cd /usr/local/src/galera
$ wget https://launchpad.net/galera/2.x/23.2.0/+download/galera-23.2.0-1.rhel5.x86_64.rpm
$ wget https://launchpad.net/codership-mysql/5.5/5.5.20-23.4/+download/MySQL-server-5.5.20_wsrep_23.4-1.rhel5.x86_64.rpm
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/MySQL-client-5.5.20-1.el6.x86_64.rpm/from/http://ftp.jaist.ac.jp/pub/mysql/
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/MySQL-shared-5.5.20-1.el6.x86_64.rpm/from/http://ftp.jaist.ac.jp/pub/mysql/

2. Remove the conflicting library and install the packages in the following sequence:

$ rpm -e --nodeps mysql-libs
$ rpm -Uhv galera-23.2.0-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-client-5.5.20-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-shared-5.5.20-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-server-5.5.20_wsrep_23.4-1.rhel5.x86_64.rpm

3. Start the MySQL service and make sure it starts on boot:

$ chkconfig mysql on
$ service mysql start

4. Set up the MySQL root password:

$ /usr/bin/mysqladmin -u root password '[email protected]#'

5. Set up the MySQL client for root. Create a new text file /root/.my.cnf using a text editor and add the following lines:

[client]
user=root
password='[email protected]#'

6. Change the permission so the file is not readable by others:

$ chmod 600 /root/.my.cnf

7. Log in to the MySQL server by running the “mysql” command and execute the following statements. We also need to create another root-level user called clusteroot (with a password) and a haproxy user without a password (for HAProxy monitoring), as stated in the variables above:

mysql> DELETE FROM mysql.user WHERE user='';
mysql> GRANT USAGE ON *.* TO root@'%' IDENTIFIED BY '[email protected]#';
mysql> UPDATE mysql.user SET Password=PASSWORD('[email protected]#') WHERE User='root';
mysql> GRANT USAGE ON *.* to sst@'%' IDENTIFIED BY 'sstpass123';
mysql> GRANT ALL PRIVILEGES on *.* to sst@'%';
mysql> GRANT USAGE on *.* to clusteroot@'%' IDENTIFIED BY '[email protected]#';
mysql> GRANT ALL PRIVILEGES on *.* to clusteroot@'%';
mysql> INSERT INTO mysql.user (host,user) values ('%','haproxy');
mysql> FLUSH PRIVILEGES;
mysql> quit

8. Create the configuration directory, copy the example configuration and create the MySQL include file:

$ mkdir -p /etc/mysql/conf.d/
$ cp /usr/share/mysql/wsrep.cnf /etc/mysql/conf.d/
$ touch /etc/my.cnf
$ echo '!includedir /etc/mysql/conf.d/' >> /etc/my.cnf

9. Configure MySQL wsrep with the Galera library. Open /etc/mysql/conf.d/wsrep.cnf using a text editor, then find and edit the following lines:

For galera1.cluster.local:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://"
wsrep_sst_auth=sst:sstpass123

For galera2.cluster.local:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.171:4567"
wsrep_sst_auth=sst:sstpass123

10. Restart the MySQL service on both servers:

$ service mysql restart

11. Check whether Galera replication is running correctly:

$ mysql -e "show status like 'wsrep%'"

If the cluster is working, you should see the following value on both servers:

wsrep_ready = ON
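The check can also be scripted, as a sketch. The here-doc below simulates the mysql output; on a real node you would replace the `wsrep_status` function body with `mysql -e "show status like 'wsrep%'"`:

```shell
# Scripted version of the check. The here-doc simulates mysql output;
# on a real node, replace wsrep_status with:
#   mysql -e "show status like 'wsrep%'"
wsrep_status() {
    cat <<'EOF'
wsrep_local_state_comment Synced
wsrep_cluster_size 2
wsrep_ready ON
EOF
}

# exit 0 only when wsrep_ready is ON
if wsrep_status | awk '$1 == "wsrep_ready" && $2 == "ON" {ok=1} END {exit !ok}'; then
    echo "cluster ready"
else
    echo "cluster NOT ready"
fi
```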

HAProxy servers

1. On both haproxy1 and haproxy2, we start by setting up the virtual IP so the HAProxy IP is always available. Download and install keepalived; the OpenSSL headers and popt library are required, so install them first using yum:

$ yum install -y openssl openssl-devel popt*
$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.2.tar.gz
$ tar -xzf keepalived-1.2.2.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install

2. Since the virtual IP is shared between these 2 servers, we need to tell the kernel that a non-local IP will be bound by the HAProxy service later. Add the following line to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Run the following command to apply the change:

$ sysctl -p

3. By default, the keepalived configuration file lives under /usr/local/etc/keepalived/keepalived.conf. We will make things easier by symlinking it into the /etc directory. We also need to clear the example configuration inside it:

$ ln -s /usr/local/etc/keepalived/keepalived.conf /etc/keepalived.conf
$ cat /dev/null > /etc/keepalived.conf

4. The keepalived configuration differs between the two servers.

For haproxy1, add the following lines to /etc/keepalived.conf:

vrrp_script chk_haproxy {
        script "killall -0 haproxy"     # check whether the haproxy process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state MASTER
        virtual_router_id 51		# Assign one ID for this route
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.170		# the virtual IP
        }
        track_script {
            chk_haproxy
        }
}

For haproxy2, add the following lines to /etc/keepalived.conf:

vrrp_script chk_haproxy {
        script "killall -0 haproxy"     # check whether the haproxy process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state MASTER
        virtual_router_id 51		# Assign one ID for this route
        priority 100                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.170		# the virtual IP
        }
        track_script {
            chk_haproxy
        }
}
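The interplay between priority and weight is what drives the failover: keepalived adds the track_script weight to the base priority while the check succeeds, and the node with the highest effective priority holds the virtual IP. A small sketch of that arithmetic:

```shell
# keepalived adds "weight" to the base "priority" while the vrrp_script
# check succeeds; the highest effective priority holds the virtual IP.
effective_priority() {
    base=$1; weight=$2; haproxy_alive=$3    # alive: 1 = yes, 0 = no
    if [ "$haproxy_alive" -eq 1 ]; then
        echo $((base + weight))
    else
        echo "$base"
    fi
}

effective_priority 101 2 1   # haproxy1, haproxy running -> 103, holds the VIP
effective_priority 100 2 1   # haproxy2, haproxy running -> 102
effective_priority 101 2 0   # haproxy1 after haproxy dies -> 101, VIP moves
```

So when haproxy dies on haproxy1, its effective priority (101) drops below haproxy2's (102) and the VIP moves, even though both instances are declared MASTER.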

5. Download and install HAProxy. Get the source from http://haproxy.1wt.eu/#down. We also need to install some required libraries using yum:

$ yum install pcre* -y
$ cd /usr/local/src
$ wget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.19.tar.gz
$ tar -xzf haproxy-1.4.19.tar.gz
$ cd haproxy-*
$ make TARGET=linux26 ARCH=x86_64 USE_PCRE=1
$ make install

6. Create the HAProxy configuration file and paste the following configuration. It tells HAProxy to act as a reverse proxy for the virtual IP on port 3306 and forward requests to the backend servers. The MySQL health check is done via the haproxy user:

$ mkdir -p /etc/haproxy
$ touch /etc/haproxy/haproxy.cfg

Add the following lines to /etc/haproxy/haproxy.cfg:

defaults
        log global
        mode http
        retries 3
        option redispatch
        maxconn 4096
        contimeout 50000
        clitimeout 50000
        srvtimeout 50000
 
listen mysql_proxy 0.0.0.0:3306
        mode tcp
        balance roundrobin
        option tcpka
        option httpchk
        option mysql-check user haproxy
        server galera1 192.168.0.171:3306 weight 1
        server galera2 192.168.0.172:3306 weight 1

7. Since we only have 2 database servers, we have only 2 members in the cluster. Even though this works, it is not a good idea for database failover because it can cause “split brain”. Split brain refers to a state in which each database server does not know the high availability (HA) role of its redundant peer, and cannot determine which server currently holds the primary HA role. So we will use both HAProxy servers as the 3rd and 4th members, called arbitrators. Galera provides a binary called garbd for this purpose. Download and install the Galera library:

$ cd /usr/local/src
$ wget https://launchpad.net/galera/2.x/23.2.0/+download/galera-23.2.0-1.rhel5.x86_64.rpm
$ rpm -Uhv galera-23.2.0-1.rhel5.x86_64.rpm

8. Run the following command to start garbd as a daemon and join the my_wsrep_cluster group:

$ garbd -a gcomm://192.168.0.171:4567 -g my_wsrep_cluster -d
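The reason an arbitrator helps: Galera keeps serving writes only while a majority of the last known cluster is reachable. A rough sketch of that majority rule (the real quorum calculation can also weight nodes, so treat this as illustrative):

```shell
# Majority rule sketch: a component stays primary only if it holds more
# than half of the cluster's members.
has_quorum() {
    alive=$1; total=$2
    [ $((alive * 2)) -gt "$total" ] && echo "primary" || echo "non-primary (blocked)"
}

has_quorum 1 2   # two-node cluster, one node lost: writes blocked
has_quorum 2 3   # same failure with one garbd arbitrator added: still primary
```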

9. Now let's start keepalived and HAProxy and test whether IP failover, database failover and load balancing are working. Run the following commands on both servers:

$ keepalived -f /etc/keepalived.conf
$ haproxy -D -f /etc/haproxy/haproxy.cfg

Ping 192.168.0.170 from another host. Now, on haproxy1, stop the network:

$ service network stop

You will notice that the IP goes down for about 2 seconds and then comes up again. This means haproxy2 has taken over IP 192.168.0.170 from haproxy1. If you bring the network on haproxy1 back up, the same thing happens in reverse, as haproxy1 takes the IP back from haproxy2, as configured in /etc/keepalived.conf. You can also try killing the haproxy process and watch the virtual IP get taken over by haproxy2 again.

On the other hand, you can stop the mysql process on galera2 and create a new database on galera1. After a while, start mysql again on galera2 and you should see galera2 synchronize from galera1 as its reference node.

If everything works as expected, add the following lines to /etc/rc.local so the services start automatically after boot:

/usr/local/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg
/usr/local/sbin/keepalived -f /etc/keepalived.conf
/usr/bin/garbd -a gcomm://192.168.0.171:4567 -g my_wsrep_cluster -d

Now your MySQL setup is running in high availability mode. MySQL clients just need to use 192.168.0.170 as the database server host.

*Note: all of the steps above need to be done on all respective servers unless specified otherwise.

High Availability: MySQL Cluster with Galera + MySQL Proxy

In this tutorial, I am going to show you how to achieve higher MySQL uptime with help from MySQL Proxy, a Galera cluster and a virtual IP managed by keepalived.

Actually, the process is similar to my previous post, with some added steps to configure MySQL Proxy and the virtual IP.

Variables that I used:

OS: CentOS 6.0 64bit
MySQL server1: 192.168.0.171
MySQL server2: 192.168.0.172
MySQL proxy server1: 192.168.0.151
MySQL proxy server2: 192.168.0.152
Virtual IP to be shared among MySQL proxies: 192.168.0.170
MySQL root password: [email protected]#
Cluster root username: clusteroot
Cluster root password: [email protected]#
Galera SST user: sst
Galera SST password: sstpass123

Server hostnames are important in a cluster. The following entries have been set up in every server's /etc/hosts file. All configurations shown below assume that the firewall is turned OFF:

192.168.0.151 myproxy1.cluster.local myproxy1
192.168.0.152 myproxy2.cluster.local myproxy2
192.168.0.171 galera1.cluster.local galera1
192.168.0.172 galera2.cluster.local galera2
127.0.0.1     localhost.localdomain localhost
::1           localhost6 localhost6.localdomain

MySQL Cluster with Galera

1. The following steps are similar to my previous post, but I am rewriting them for this case with the latest versions of Galera and MySQL. No MySQL server is installed on these servers yet. Download the latest Galera library, MySQL with wsrep, MySQL client and MySQL shared packages from the MySQL download page:

$ mkdir -p /usr/local/src/galera
$ cd /usr/local/src/galera
$ wget https://launchpad.net/galera/2.x/23.2.0/+download/galera-23.2.0-1.rhel5.x86_64.rpm
$ wget https://launchpad.net/codership-mysql/5.5/5.5.20-23.4/+download/MySQL-server-5.5.20_wsrep_23.4-1.rhel5.x86_64.rpm
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/MySQL-client-5.5.20-1.el6.x86_64.rpm/from/http://ftp.jaist.ac.jp/pub/mysql/
$ wget http://dev.mysql.com/get/Downloads/MySQL-5.5/MySQL-shared-5.5.20-1.el6.x86_64.rpm/from/http://ftp.jaist.ac.jp/pub/mysql/

2. Remove the conflicting library and install the packages in the following sequence:

$ yum remove mysql-libs -y
$ rpm -Uhv galera-23.2.0-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-client-5.5.20-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-shared-5.5.20-1.el6.x86_64.rpm
$ rpm -Uhv MySQL-server-5.5.20_wsrep_23.4-1.rhel5.x86_64.rpm

3. Start the MySQL service and make sure it starts on boot:

$ chkconfig mysql on
$ service mysql start

4. Set up the MySQL root password:

$ /usr/bin/mysqladmin -u root password '[email protected]#'

5. Set up the MySQL client for root. Create a new text file /root/.my.cnf using a text editor and add the following lines:

[client]
user=root
password='[email protected]#'

6. Change the permission so the file is not readable by others:

$ chmod 600 /root/.my.cnf

7. Log in to the MySQL server by running the “mysql” command and execute the following statements. We also need to create another root-level user called clusteroot with a password, as stated in the variables above:

mysql> DELETE FROM mysql.user WHERE user='';
mysql> GRANT USAGE ON *.* TO root@'%' IDENTIFIED BY '[email protected]#';
mysql> UPDATE mysql.user SET Password=PASSWORD('[email protected]#') WHERE User='root';
mysql> GRANT USAGE ON *.* to sst@'%' IDENTIFIED BY 'sstpass123';
mysql> GRANT ALL PRIVILEGES on *.* to sst@'%';
mysql> GRANT USAGE on *.* to clusteroot@'%' IDENTIFIED BY '[email protected]#';
mysql> GRANT ALL PRIVILEGES on *.* to clusteroot@'%';
mysql> quit

8. Create the configuration directory, copy the example configuration and create the MySQL include file:

$ mkdir -p /etc/mysql/conf.d/
$ cp /usr/share/mysql/wsrep.cnf /etc/mysql/conf.d/
$ touch /etc/my.cnf
$ echo '!includedir /etc/mysql/conf.d/' >> /etc/my.cnf

9. Configure MySQL wsrep with the Galera library. Open /etc/mysql/conf.d/wsrep.cnf using a text editor, then find and edit the following lines:

For galera1.cluster.local:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://"
wsrep_sst_auth=sst:sstpass123

For galera2.cluster.local:

wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://192.168.0.171:4567"
wsrep_sst_auth=sst:sstpass123

10. Restart the MySQL service on both servers:

$ service mysql restart

11. Check whether Galera replication is running correctly:

$ mysql -e "show status like 'wsrep%'"

If the cluster is working, you should see the following value on both servers:

wsrep_ready = ON

MySQL proxy servers

1. On both myproxy1 and myproxy2, we start by setting up the virtual IP so the MySQL Proxy IP is always available. Download and install keepalived; the OpenSSL headers and popt library are required, so install them first using yum:

$ yum install -y openssl openssl-devel popt*
$ cd /usr/local/src
$ wget http://www.keepalived.org/software/keepalived-1.2.2.tar.gz
$ tar -xzf keepalived-1.2.2.tar.gz
$ cd keepalived-*
$ ./configure
$ make
$ make install

2. Since the virtual IP is shared between these 2 servers, we need to tell the kernel that a non-local IP will be bound by the MySQL Proxy service later. Add the following line to /etc/sysctl.conf:

net.ipv4.ip_nonlocal_bind = 1

Run the following command to apply the change:

$ sysctl -p

3. By default, the keepalived configuration file lives under /usr/local/etc/keepalived/keepalived.conf. We will make things easier by symlinking it into the /etc directory. We will also clear the example configuration inside it:

$ ln -s /usr/local/etc/keepalived/keepalived.conf /etc/keepalived.conf
$ cat /dev/null > /etc/keepalived.conf

4. Download MySQL Proxy from http://dev.mysql.com/downloads/mysql-proxy/. We will set up MySQL Proxy under the /usr/local directory:

$ cd /usr/local
$ wget http://mysql.oss.eznetsols.org/Downloads/MySQL-Proxy/mysql-proxy-0.8.2-linux-rhel5-x86-64bit.tar.gz
$ tar -xzf mysql-proxy-0.8.2-linux-rhel5-x86-64bit.tar.gz
$ mv mysql-proxy-0.8.2-linux-rhel5-x86-64bit mysql-proxy
$ rm -Rf mysql-proxy-0.8.2-linux-rhel5-x86-64bit.tar.gz

5. The keepalived configuration differs between the two servers.

For myproxy1, add the following lines to /etc/keepalived.conf:

vrrp_script chk_mysqlproxy {
        script "killall -0 mysql-proxy" # check whether the mysql-proxy process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state MASTER
        virtual_router_id 51		# Assign one ID for this route
        priority 101                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.170		# the virtual IP
        }
        track_script {
            chk_mysqlproxy
        }
}

 

For myproxy2, add the following lines to /etc/keepalived.conf:

vrrp_script chk_mysqlproxy {
        script "killall -0 mysql-proxy" # check whether the mysql-proxy process exists
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
}
 
vrrp_instance VI_1 {
        interface eth0			# interface to monitor
        state MASTER
        virtual_router_id 51		# Assign one ID for this route
        priority 100                    # 101 on master, 100 on backup
        virtual_ipaddress {
            192.168.0.170		# the virtual IP
        }
        track_script {
            chk_mysqlproxy
        }
}

6. Run the following command to start MySQL Proxy on both servers:

$ /usr/local/mysql-proxy/bin/mysql-proxy -D -P 0.0.0.0:3306 -b 192.168.0.171:3306 -b 192.168.0.172:3306

7. Now let's start keepalived and test whether IP failover is working. Run the following command on both servers:

$ keepalived -f /etc/keepalived.conf

Ping 192.168.0.170 from another host. Now, on myproxy1, stop the network:

$ service network stop

You will notice that the IP goes down for about 2 seconds and then comes up again. This means myproxy2 has taken over IP 192.168.0.170 from myproxy1. If you bring the network on myproxy1 back up, the same thing happens in reverse, as myproxy1 takes the IP back from myproxy2, as configured in /etc/keepalived.conf. You can also try killing the mysql-proxy process and watch the virtual IP get taken over by myproxy2 again.

Now your MySQL setup is running in high availability mode. MySQL clients just need to use 192.168.0.170 as the database server host.

10 Simple Mistakes that Webmasters Do

The following points are drawn from my experience and observation of webmasters as a server administrator of various web servers:

Directory Browsing Enabled

Depending on your web host's server configuration, you should check that this feature is DISABLED. If it is not, it allows the public unnecessary access to other files, and lets others understand how your site's directories are laid out, which is not good.

Bear in mind that browsable directories get indexed by search engine crawlers. This increases the chance that others find and target your website because of the viewable content.
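On Apache, directory listings can be switched off with the Options directive; a minimal example (the path is illustrative):

```apache
# Disable directory listings for the whole document root
<Directory /var/www/html>
    Options -Indexes
</Directory>
```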

Allow Hotlinking To Static Content

Bandwidth is expensive. Do not allow others to use your content as part of theirs and consume your bandwidth. To prevent bandwidth theft, do not forget to disallow hotlinking of your static content, including images (.jpg, .png, .gif, .bmp), presentation material (.pdf, .swf, .flv) and scripts (.js, .css, .xml).

Depending on your web host, there may be a built-in function to disable hotlinking to your static content. Contact them for more information.
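If you manage the server yourself, a common Apache approach is a mod_rewrite rule that rejects static-file requests carrying a foreign Referer; a sketch, where example.com stands in for your own domain:

```apache
# Return 403 for static files requested with a foreign Referer.
# An empty Referer is allowed so direct visits and some proxies still work.
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
RewriteRule \.(jpe?g|png|gif|bmp|pdf|swf|flv|js|css|xml)$ - [F,NC]
```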

No Watermark

Always assume that others might steal your images, so do not forget to add a watermark to every image on your website.

Users will notice a stolen image when they see it on another website. Indirectly you get advertised, and people will start looking for the real content rather than the duplicate, bringing more traffic your way.

PHPinfo Page is Accessible

During web development, phpinfo is one of the things a developer needs in order to understand the web server environment. Even though the phpinfo page may not be discoverable via search engines, this file MUST NOT exist on your web server, or at least must not be publicly accessible, once your site goes live.

Most webmasters forget to delete this page after the development process completes, which means exposing the web server environment to the world.
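If the page has to stay for internal use, one option on Apache is to restrict it by source IP instead of deleting it; a sketch in Apache 2.2 syntax (matching the CentOS 6 era of this post), with a placeholder office IP:

```apache
# Only the listed address may fetch phpinfo.php; everyone else gets 403.
# 203.0.113.10 is a placeholder for your own office IP.
<Files phpinfo.php>
    Order deny,allow
    Deny from all
    Allow from 203.0.113.10
</Files>
```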

Ignore Website Appearance in Linux and Mobile Device

Most webmasters test-run their website in all the browsers that run on Windows or Mac. Assuming that Linux and mobile devices use the same browser engines, they usually forget that the appearance might differ on other platforms. Even though Linux and mobile users currently account for less than 7% of operating system market share (statistics by W3Schools), you should not ignore them entirely.

The site’s fonts might look standard on Windows, but on Ubuntu they will look slightly bigger because the default system font size differs. The same goes for mobile devices, where fonts look smaller.

Open Hyperlinks in the Same Window

Make sure hyperlinks inside your content open in a new tab/window by adding the target=”_blank” attribute to the <a> tag.

Do not interrupt your users' experience while they are reading your content. Many webmasters forget this, resulting in a bad experience for users who get redirected away from the information they actually wanted.

Display Email Address on the Website

Email addresses are easily harvested by address-harvesting bots. They just need a search engine to build a list of victim sites, then read the HTML “mailto:” attribute or look for anything matching the standard email address format. You might start getting spam within 3 to 6 months of the address being publicly displayed, unless your site blocks search engine crawlers.

There are alternative ways to display your email address on a website, such as the CloudFlare service, which protects email addresses on your site, or the example at http://csarven.ca/hiding-email-addresses on how to hide an email address in HTML.
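One classic do-it-yourself trick is encoding the address as HTML numeric entities, which defeats naive regex-based harvesters (though not all of them). A shell sketch of the encoding:

```shell
# Encode each byte of the address as an HTML numeric entity (&#NN;).
obfuscate() {
    printf '%s' "$1" | od -An -tu1 | tr -s ' ' '\n' | sed '/^$/d' |
    while read -r byte; do
        printf '&#%d;' "$byte"
    done
    echo
}

obfuscate "me@example.org"
```

Browsers render the entities back into the original text, so visitors still see a normal address while simple scrapers miss it.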

The best solution is to never reveal your email address at all and use a contact form instead.

No CAPTCHA for Form

Do not assume your website's visitors are all human. There are many bad bots out there (comment spam bots, forum spam bots) trying to do nasty things to your website, mostly to generate backlinks for Search Engine Optimization (SEO).

Use CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) for every form you have, such as comment, contact or registration forms, even though some CAPTCHAs are breakable. The most popular CAPTCHA provider nowadays is reCAPTCHA, which was acquired by Google. It is a free service and helps prevent spambots from messing up your site.

Backup in the Same Server

Your website backup should NOT live on the same server as your web server, especially if the backup directory is publicly accessible. Usually, a webmaster creates the backup from inside the web server; the backup file should then be downloaded and removed from the server. Some webmasters forget to remove the backup files, which then fill up the disk space and unintentionally leave the backups downloadable by others.

The best backup practice is to have a remote backup repository server, with backups scheduled to run daily during off-peak hours.
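A minimal sketch of such a daily backup: a date-stamped archive, shipped off-box, then removed locally. The scp target is a hypothetical host, and a temp directory stands in for the real web root; schedule something like this from cron during off-peak hours.

```shell
# site_dir stands in for the real web root; backup.example.com is a
# hypothetical remote repository server.
site_dir=$(mktemp -d)
echo '<html>hi</html>' > "$site_dir/index.html"

name="site-$(date +%Y-%m-%d).tar.gz"
tar -czf "/tmp/$name" -C "$site_dir" .

# scp "/tmp/$name" backup@backup.example.com:/backups/  # ship it off-box
# rm -f "/tmp/$name"                                    # then remove the local copy
echo "created $name"
```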

Simple Password Usage

Do not use simple passwords for any credentials on your website or web hosting service. Hackers and bots can gain access through any point of authentication (email accounts, database users, protected directory users, back-end system users, web hosting accounts, FTP users) if they succeed in guessing your password, usually via brute force.

The best password practice is to combine letters, numbers and symbols in a password of more than 10 characters, and to change it at least every 3 months.
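A quick way to generate such a password on the shell, as a sketch (it reads /dev/urandom, so it is Linux/Unix-specific):

```shell
# Pull random bytes and keep only letters, digits and symbols.
gen_password() {
    LC_ALL=C tr -dc 'A-Za-z0-9!@#$%^&*_-' < /dev/urandom | head -c "${1:-16}"
}

pw=$(gen_password 16)
echo "length: ${#pw}"
```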

Conclusion

A simple mistake can lead to a bigger problem if we are not careful and do not realize the consequences we might face. Standard operating procedures, checklists and reminders are some methods for overcoming the common human weakness of forgetfulness.

Linux: Install and Configure Apache with SuPHP

If you install the Apache and PHP web server packages from your distribution repository, the system auto-configures the web server to handle PHP scripts using the Dynamic Shared Object (DSO) handler. The major effect of DSO is that Apache processes run under the Apache user (which can be nobody, www-data or wwwroot). If some PHP script overloads the server, we still see user ‘nobody’ as the executor, which makes it hard to determine which user's script caused the problem.

To differentiate between the available PHP handlers, you can refer to this post. In this tutorial, we will install and configure suPHP to work with a default installation of Apache. SuPHP is a tool for executing PHP scripts with the permissions of their owners. It consists of an Apache module (mod_suphp) and a setuid root binary (suphp) that is called by the Apache module to change the uid of the process executing the PHP interpreter.

Variables that I used:

Server: CentOS 6.0 64bit
Server IP: 210.84.17.110
User: ryan
Web directory: /home/ryan/public_html
Website: http://www.misterryan.com/

1. Install Apache, PHP and required compiler via yum:

$ yum install -y httpd* php gcc-c++

2. Download the mod_suphp RPM. The RPM version must match the suPHP version we will download in step 3:

$ cd /usr/local/src
$ wget ftp://rpmfind.net/linux/dag/redhat/el6/en/x86_64/dag/RPMS/mod_suphp-0.7.1-1.el6.rf.x86_64.rpm
$ rpm -Uhv  mod_suphp-0.7.1-1.el6.rf.x86_64.rpm

3. Download and prepare suPHP from this website. At this moment, the latest version is suphp-0.7.1.tar.gz:

$ cd /usr/local/src
$ wget http://www.suphp.org/download/suphp-0.7.1.tar.gz
$ tar -xzf suphp-0.7.1.tar.gz
$ cd suphp-*

4. Locate the APR config binary, then build suPHP:

$ which apr-1-config
/usr/bin/apr-1-config

Build suPHP:

$ ./configure --with-apr=/usr/bin/apr-1-config --with-setid-mode=owner
$ make
$ make install

5. Create the website and log directories for the user:

$ useradd -m ryan
$ mkdir /home/ryan/public_html
$ mkdir /home/ryan/logs
$ touch /home/ryan/logs/error_log
$ touch /home/ryan/logs/access_log

6. Create the Apache configuration to serve the website under user ryan. We will create a new virtual host in the Apache conf.d directory:

$ vi /etc/httpd/conf.d/vhost.conf

And add the following lines:

NameVirtualHost 210.84.17.110:80
 
<VirtualHost 210.84.17.110:80>
ServerName misterryan.com
ServerAlias www.misterryan.com
 
ServerAdmin [email protected]
DocumentRoot /home/ryan/public_html
 
ErrorLog /home/ryan/logs/error_log
CustomLog /home/ryan/logs/access_log combined
 
suPHP_Engine on
suPHP_UserGroup ryan ryan
AddHandler x-httpd-php .php .php3 .php4 .php5 .phtml
suPHP_AddHandler x-httpd-php
</VirtualHost>

7. Change the PHP handler from mod_php to mod_suphp. Edit the mod_suphp configuration file at /etc/httpd/conf.d/suphp.conf and uncomment the following line:

LoadModule suphp_module modules/mod_suphp.so

And remove the mod_php configuration file:

$ rm -Rf /etc/httpd/conf.d/php.conf

8. Make sure the suPHP configuration file at /etc/suphp.conf has the values below:

[global]
logfile=/var/log/httpd/suphp_log
loglevel=info
webserver_user=apache
docroot=/var/www/html:${HOME}/public_html
env_path=/bin:/usr/bin
umask=0077
min_uid=500
min_gid=500

; Security options
allow_file_group_writeable=false
allow_file_others_writeable=false
allow_directory_group_writeable=false
allow_directory_others_writeable=false

;Check whether script is within DOCUMENT_ROOT
check_vhost_docroot=true

;Send minor error messages to browser
errors_to_browser=false

[handlers]
;Handler for php-scripts
x-httpd-php="php:/usr/bin/php-cgi"

;Handler for CGI-scripts
x-suphp-cgi=execute:!self

9. suPHP may throw errors because it tries to look up suphp.conf under the /usr/local/etc directory. So we need to create a symbolic link to the real /etc/suphp.conf:

$ ln -s /etc/suphp.conf /usr/local/etc/suphp.conf

10. We also need to fix the permissions and ownership for user ryan:

$ chmod 755 /home/ryan
$ chown -R ryan:ryan /home/ryan/public_html
$ find /home/ryan -type d -exec chmod 755 {} \;
$ find /home/ryan -type f -exec chmod 644 {} \;
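If you want to sanity-check the permission fix before running it against a live home directory, the same two find commands can be exercised on a scratch tree first. A small sketch (all paths here are throwaway, created by mktemp):

```shell
# build a scratch tree with deliberately wrong permissions
demo=$(mktemp -d)
mkdir "$demo/sub"
touch "$demo/sub/index.php"
chmod 777 "$demo/sub" "$demo/sub/index.php"

# same fix as above: directories become 755, files become 644
find "$demo" -type d -exec chmod 755 {} \;
find "$demo" -type f -exec chmod 644 {} \;

# show the resulting modes
stat -c '%a %n' "$demo/sub" "$demo/sub/index.php"
rm -rf "$demo"
```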

11. The suPHP configuration is now done. We need to restart the Apache service and set it to start on boot:

$ chkconfig httpd on
$ service httpd restart

Done! Now we can upload the website into the user's web directory at /home/ryan/public_html. suPHP will only allow files owned by the respective user to run, so step #10 is quite important for fixing permissions and ownership; otherwise the web server will throw an "Internal Server Error" in the browser.

All related file locations for suPHP:

SuPHP log – /var/log/httpd/suphp_log
SuPHP config – /etc/suphp.conf
SuPHP module config – /etc/httpd/conf.d/suphp.conf
SuPHP module – /etc/httpd/modules/mod_suphp.so
Apache error log – /etc/httpd/logs/error_log

Apache: Create Fake PHPinfo

My boss recently asked me to create a dummy PHP information page, also known as phpinfo. This page gives out a lot of information about the server environment and the applications supported by the web server. His purpose was simply to mislead anyone trying to view the phpinfo.php file on the web server. And surprisingly, we detected connections to this phpinfo page only 2 days after it went live on the production web server.

In the following phpinfo example, I extracted the information from a WAMP server and placed it on an Apache web server running on Linux. With a few variable changes, we can make a fake phpinfo.php file that looks exactly like a real phpinfo page.

Variables as below:

OS: CentOS 5.2 64bit
Apache root document: /var/www/html
Server IP: 192.168.100.10
Phpinfo URL: http://192.168.100.10/phpinfo.php

1. Create a new file inside your root document of web server called phpinfo.php (or whatever name you want):

$ cd /var/www/html
$ touch phpinfo.php

2. Using a text editor, open the file and paste the fake phpinfo code below:

$ nano phpinfo.php

Paste the fake phpinfo code, which you can grab from the link below:

Click here for the PHP source code

3. Save the file and you can view it in a browser at http://192.168.100.10/phpinfo.php. You can also track users who access this phpinfo page by adding the following lines to the PHP code above:

$to = '[email protected]'; //replace with your email address
$subject = 'Some one is viewing the fake phpinfo page!';
$message = '
Date: '.date('l jS \of F Y h:i:s A').'
Source IP: '.$_SERVER['REMOTE_ADDR'];
 
mail($to, $subject, $message);

Done. You can also use the same code in a Windows environment. Even though this page disallows search engine robots, so it will never be indexed, you will still notice people trying to play with your phpinfo page. Believe me!
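Besides the mail() hook, the Apache access log can tell you after the fact who requested the decoy page. A minimal sketch with a fabricated two-line sample log (the real file would be the access_log of your vhost):

```shell
# fabricated access-log sample; both lines and IPs are illustrative
log=$(mktemp)
cat <<'EOF' > "$log"
203.0.113.5 - - [10/Jan/2012:10:00:01 +0800] "GET /index.php HTTP/1.1" 200 512
198.51.100.7 - - [10/Jan/2012:10:02:14 +0800] "GET /phpinfo.php HTTP/1.1" 200 1024
EOF

# list every request that touched the decoy page
grep 'phpinfo.php' "$log"
rm -f "$log"
```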

Linux: Run Command in Many Servers Simultaneously

Server administrators usually have many servers to manage. There will be times when we need to run the same command on every server we have. In my case, I need to update the MySQL version on all 6 of our database servers to the latest version via yum. It is a simple command, but I need to repeat the process 6 times, once per server. Furthermore, we will keep doing this over and over again in the future.

To achieve this, I am going to use a Webmin cluster. All servers must have Webmin installed, and we need to integrate them all by joining them to the Webmin cluster. In this tutorial, I will use only 2 servers as an example setup.

Variables as below:

OS: CentOS 6.2 64bit
Webmin (master)/MySQL server #1: 192.168.0.160
Webmin (node1)/MySQL server #2: 192.168.0.163
Webmin username: root
Webmin password: Gn&Pe42#e

1. Download and install Webmin on all servers. You need to repeat steps 1 to 3 on every server:

$ cd /usr/local/src
$ wget http://cdnetworks-kr-2.dl.sourceforge.net/project/webadmin/webmin/1.580/webmin-1.580-1.noarch.rpm
$ rpm -Uhv  webmin-1.580-1.noarch.rpm

2. Configure the firewall to open port 10000 for Webmin communication. Add the following line into /etc/sysconfig/iptables before any REJECT line (-j REJECT):

-A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT

Save the file and restart firewall:

$ service iptables restart

3. Start Webmin:

$ service webmin start

4. On the master node, access Webmin at http://192.168.0.160:10000 and log in as root. Go to Webmin > Webmin Servers Index and click "Register a new server". Enter the server information as in the screenshot below:

And click “Save“.

5. The server should now be registered in Webmin, but it has not joined the cluster yet. Go to Webmin > Cluster > Cluster Webmin Servers and click "Add server" for "this server" and "192.168.0.163".

6. Once the join is complete, you should see all servers in the managed list like below:

7. Now we can run any command on all servers. Go to Webmin > Cluster > Cluster Shell Commands, enter the command to run, and choose which servers to run it on. I will choose "<All hosts>" and click "Run Command Now" as in the screenshot below:

The output from both servers will then appear. In my real case, I needed to repeat steps 4, 5 and 6 to add the other 4 servers, and then just execute the command as in step 7. This simple setup is a great help for your server administration in the future. Cheers!
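For comparison, the same idea can be done without Webmin using a plain shell loop over SSH (assuming key-based root SSH to each node; the yum line is commented out, so this sketch only prints which hosts it would touch):

```shell
# the two servers from this example; extend the list as needed
servers="192.168.0.160 192.168.0.163"

for s in $servers; do
    echo "== $s =="
    # ssh root@"$s" 'yum -y update mysql mysql-server'
done
```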

Linux: 2 Way File Synchronization and Replication using Unison + FTP

Usually, when we want to replicate or synchronize files across the network, we use rsync, scp or sftp. These are one-way replication methods: they sync from the master (source) to the slave (destination) only. What if we want 2 folders which are both masters? Then we need a two-way replication method.

Why do I need 2 folders that sync with each other? Because I already have a load balancer running on top of my web servers. In this case, I need two-way replication so the web contents are always the same for any user who accesses the site. The load balancer runs Pound with a normal round-robin algorithm. The following diagram might give a better understanding:

I will use the HTTP load balancer server as the middleman to execute the synchronization via FTP. I need to create 2 FTP accounts (one each on Web#1 and Web#2), and using CurlFTPFS I will mount both FTP accounts inside the HTTP load balancer server. Then Unison will do the two-way replication.

Before starting, make sure the HTTP load balancer server has the load balancer running (refer to this post) and CurlFTPFS installed (refer to this post). Variables as below:

OS: CentOS 6.2 64bit
HTTP Load Balancer IP: 192.168.20.20
Web Server #1: 192.168.20.21
Web Server #2: 192.168.20.22
Directory to be synced: /home/mywebfile/

1. We will install using the simplest method, which is yum. Make sure RPMforge is installed on your system. Follow this step if you have no idea how to enable the RPMforge repository:

$ yum install -y unison

2. I am assuming that you have installed and configured CurlFTPFS. Mount both FTP accounts:

$ curlftpfs 192.168.20.21 /mnt/ftp/ftpuser1 -o allow_other
$ curlftpfs 192.168.20.22 /mnt/ftp/ftpuser2 -o allow_other

3. Configure Unison. Since we want to synchronize /mnt/ftp/ftpuser1 and /mnt/ftp/ftpuser2 as the root user, we need to create a default profile so Unison knows what to sync, where to sync and how to sync. Using a text editor, open the following file:

$ vim /root/.unison/default.prf

And add the following lines:

root=/mnt/ftp/ftpuser1
root=/mnt/ftp/ftpuser2
batch=true
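The profile can optionally be extended with a few standard Unison preferences, for example to skip temporary files and keep a log (the option names come from the Unison manual; the values here are only illustrative):

```
prefer=newer
ignore=Name *.tmp
log=true
logfile=/var/log/unison.log
```

prefer=newer matters when batch=true: conflicting changes are resolved in favour of the most recently modified copy instead of being skipped.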

4. Start and run Unison for the first synchronization:

$ unison default

5. You will notice that both directories are now in sync. But this has to be done manually. To automate it, we can use a cron job set to run unison every minute:

$ crontab -e

And add the following line:

* * * * * /usr/bin/unison default

Save the file and restart crond:

$ service crond restart

Or, we can use Fsniper to trigger the "unison default" command. See this post on how to install and configure Fsniper. For more information on Unison, refer to the manual page here.
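Whichever trigger you use, a sync that takes longer than a minute can overlap the next cron run. A simple guard is a lock; this is a sketch of a small wrapper script (the script name and lock path are made up), and util-linux flock(1) can do the same job:

```shell
# sketch of a wrapper (e.g. /usr/local/bin/unison-cron.sh): skip if a sync is running
lockdir=/tmp/unison-cron.lock
if mkdir "$lockdir" 2>/dev/null; then
    echo "lock acquired, syncing"
    # /usr/bin/unison default      # the real sync command goes here
    rmdir "$lockdir"
else
    echo "previous sync still running, skipping"
fi
```

In the crontab, the flock equivalent is a one-liner: * * * * * /usr/bin/flock -n /tmp/unison.lock /usr/bin/unison default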

Warning: CurlFTPFS is not really good at handling remote file synchronization when the connection between the FTP servers is slow. You might need to consider another network file system, such as NFS or Samba, to make sure the synchronization works smoothly.

Linux: Remove Files/Folders Older Than a Certain Time

One of our servers ran into a backup problem because there were too many files to back up, nearly 200 million in total. What I need to do is remove some files in certain folders, and let it run automatically every day to clear out the unwanted files. On our web server, we have one temporary folder that collects a lot of temporary files. The folder is located under /home/mywebsite/temp_upload/ .

I started by counting the files in this folder:

$ find /home/mywebsite/temp_upload/ -type f | wc -l
10543660

As you can see, there are over 10 million files inside this directory. Our developer forgot to remove the unused files, so I need to create a cron job to remove files older than 3 months (90 days) from this directory. The removal command is as below (find's -delete is safer than piping to xargs rm, since it also copes with odd filenames):

$ find /home/mywebsite/temp_upload/ -type f -mtime +90 -delete
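Before deleting anything on a production box, a dry run tells you how many files would match. This sketch reproduces the idea in a throwaway directory (the paths and the fake timestamp are illustrative):

```shell
# scratch directory with one "old" file and one fresh file
tmp=$(mktemp -d)
touch -t 201001010000 "$tmp/old.tmp"   # mtime forced far into the past
touch "$tmp/new.tmp"                   # mtime is now

# dry run: count what -mtime +90 would remove
find "$tmp" -type f -mtime +90 | wc -l

# actual removal, then confirm only the fresh file remains
find "$tmp" -type f -mtime +90 -delete
ls "$tmp"
rm -rf "$tmp"
```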

The removal took some time to complete, and once done, the file count had dropped to 2077:

$ find /home/mywebsite/temp_upload/ -type f | wc -l
2077

To automate this, just add the command into crontab and schedule it to run weekly (at 6 AM every Sunday):

$ crontab -e

Add the following line:

0 6 * * 0 /bin/find /home/mywebsite/temp_upload/ -type f -mtime +90 -delete

Restart crond to apply the cron changes:

$ service crond restart

Warning: Make sure you run the command during off-peak hours. This process might overload your server, as happened to me once due to a wrong time zone 🙂

Now the file removal is automated and you can focus on other things!