Enable Intel 82579LM NIC in VMware ESXi 5.0

We have just bought a new server with a Supermicro X9SCL-F motherboard to use as our backup server. It comes with 2 onboard NICs:

  • Intel 82579LM Gigabit
  • Intel 82574L Gigabit

Unfortunately, once the hypervisor installation completed, VMware ESXi 5.0 detected only one network interface, the Intel 82574L port. Our architecture requires 2 separate NICs so that we can use them for fault tolerance and high availability.

What we need to do is basically this:

  1. Download the driver here: http://dl.dropbox.com/u/27246203/E1001E.tgz
  2. Use ESXi-Customizer to merge the driver and generate a new VMware installation ISO
  3. Burn the custom ISO to a disc or write it to a USB drive (see the sketch after this list)
  4. Reinstall the server
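
For steps 1 and 3, here is a minimal sketch from a Linux workstation. The tar command only lists the archive contents as a sanity check, /dev/sdX is a placeholder for your USB stick, and if the resulting image does not boot from USB on your hardware, use a tool such as UNetbootin instead:

$ wget http://dl.dropbox.com/u/27246203/E1001E.tgz
$ tar -tzf E1001E.tgz                                  # list the driver bundle contents without extracting
$ dd if=ESXi-5.x-Custom.iso of=/dev/sdX bs=4M && sync  # write the customized ISO to the USB stick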

 

Using ESXi-Customizer

1. Download it from here: http://esxi-customizer.googlecode.com/files/ESXi-Customizer-v2.7.1.exe

2. Double-click it and extract the files. Open the extracted folder (ESXi-Customizer-v2.7.1) and double-click ESXi-Customizer.cmd

3. You will see the following window. Enter the required details as shown in the screenshot below:

Note: My installation ISO is VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64

4. Click Run. The process will start and you will see the following prompt:

Just accept it by clicking “Yes”.

5. Once finished, you will find your new ISO named ESXi-5.x-Custom.iso. Use this ISO for the VMware ESXi hypervisor installation.

After the installation finishes, you can verify the result under vSphere > host > Configuration > Network Adapters; you should see something similar to the screenshot below:
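
If you prefer the command line, the same check can be done from the ESXi shell (assuming SSH or the local ESXi Shell is enabled on the host); both vmnic entries should now be listed:

~ # esxcfg-nics -l
~ # esxcli network nic list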

CentOS 6: Install MySQL Cluster – The Simple Way

In this post I am going to install a MySQL cluster following the architecture below:

 

 

A MySQL cluster consists of 3 types of nodes:

  • Data node (mysql-data1 & mysql-data2)
  • SQL daemon node (mysql-mysqld1 & mysql-mysqld2)
  • Management node (mysql-management)

The data nodes hold the database, and data is automatically replicated across all data nodes. The SQL daemon nodes are the interface between the database and its clients: they serve queries with the data they fetch from the data nodes, acting much like a “gateway”. The management node is required to monitor and manage the whole cluster. The recommended minimum setup for high availability and scalability is 5 servers, as highlighted in the picture above. I will be using CentOS 6.3 64-bit on all servers.

All Servers

1. SELinux must be disabled on all servers. Change the SELinux configuration file at /etc/sysconfig/selinux:

SELINUX=disabled
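
A quick way to apply this on every server (a sketch; note that on CentOS 6 /etc/sysconfig/selinux is normally a symlink to /etc/selinux/config, so sed is pointed at the real file here). A reboot, or the setenforce 0 in the next step, makes it effective immediately:

$ sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config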

2. The firewall must be disabled on all servers:

$ service iptables stop
$ chkconfig iptables off
$ setenforce 0

3. The /etc/hosts entries on all servers should be as below:

192.168.1.21   web-server
192.168.1.51   mysql-mysqld1
192.168.1.52   mysql-mysqld2
192.168.1.53   mysql-management
192.168.1.54   mysql-data1
192.168.1.55   mysql-data2
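
A quick loop to confirm that every host name resolves and is reachable from each server (a minimal check; adjust the list if your naming differs):

$ for h in web-server mysql-mysqld1 mysql-mysqld2 mysql-management mysql-data1 mysql-data2; do
>   ping -c 1 $h > /dev/null && echo "$h OK" || echo "$h FAILED"
> done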

Management Node

1. Download and install the MySQL Cluster management and tools packages:

$ cd /usr/local/src
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-management-7.0.35-1.rhel5.x86_64.rpm
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-tools-7.0.34-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-management-7.0.35-1.rhel5.x86_64.rpm
$ rpm -Uhv MySQL-Cluster-gpl-tools-7.0.34-1.rhel5.x86_64.rpm

2. Create the mysql-cluster directory and configuration file config.ini:

$ mkdir -p /var/lib/mysql-cluster
$ vim /var/lib/mysql-cluster/config.ini

And add the following lines:

[ndb_mgmd default]
DataDir=/var/lib/mysql-cluster
 
[ndb_mgmd]
HostName=mysql-management
 
[ndbd default]
NoOfReplicas=2
DataMemory=256M
IndexMemory=128M
DataDir=/var/lib/mysql-cluster
 
[ndbd]
HostName=mysql-data1
 
[ndbd]
HostName=mysql-data2
 
[mysqld]
HostName=mysql-mysqld1
 
[mysqld]
HostName=mysql-mysqld2

Data Nodes

1. The following steps should be executed on both data nodes (mysql-data1 and mysql-data2). Download and install the MySQL Cluster storage package:

$ cd /usr/local/src
$ wget http://download.softagency.net/MySQL/Downloads/MySQL-Cluster-7.0/MySQL-Cluster-gpl-storage-7.0.35-1.rhel5.x86_64.rpm 
$ rpm -Uhv MySQL-Cluster-gpl-storage-7.0.35-1.rhel5.x86_64.rpm

2. Add the following lines to /etc/my.cnf:

[mysqld]
ndbcluster
ndb-connectstring=mysql-management
 
[mysql_cluster]
ndb-connectstring=mysql-management
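
The config.ini on the management node points the data nodes' DataDir to /var/lib/mysql-cluster, so make sure that directory exists on both data nodes before starting ndbd (as far as I know the storage RPM does not create it for you):

$ mkdir -p /var/lib/mysql-cluster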

SQL Nodes

1. The following steps should be executed on both SQL nodes (mysql-mysqld1 and mysql-mysqld2). Remove mysql-libs using yum:

$ yum remove mysql-libs -y

2. Install the required package using yum:

$ yum install libaio -y

3. Download the MySQL Cluster client, shared and server packages from the MySQL download site:

$ cd /usr/local/src
$ wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-client-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
$ wget http://dev.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-shared-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/
$ wget http://www.mysql.com/get/Downloads/MySQL-Cluster-7.2/MySQL-Cluster-server-gpl-7.2.8-1.el6.x86_64.rpm/from/http://cdn.mysql.com/

4. Install all packages:

$ rpm -Uhv MySQL-Cluster-*

5. Add the following lines to /etc/my.cnf:

[mysqld]
ndbcluster
ndb-connectstring=mysql-management
default_storage_engine=ndbcluster
 
[mysql_cluster]
ndb-connectstring=mysql-management

Start the Cluster

1. To start the cluster, we must follow this order:

Management Node > Data Node > SQL Node

2. So, log in to the management node (mysql-management) and execute the following command:

$ ndb_mgmd -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-5.1.63 ndb-7.0.35
2012-11-22 07:36:55 [MgmtSrvr] INFO -- The default config directory '/usr/mysql-cluster' does not exist. Trying to create it...
2012-11-22 07:36:55 [MgmtSrvr] INFO -- Sucessfully created config directory

3. Next, start the ndbd service on data node mysql-data1:

$ ndbd
2012-11-22 07:37:24 [ndbd] INFO -- Angel connected to 'mysql-management:1186'
2012-11-22 07:37:24 [ndbd] INFO -- Angel allocated nodeid: 2

4. Next, start the ndbd service on data node mysql-data2:

$ ndbd
2012-11-22 07:37:24 [ndbd] INFO -- Angel connected to 'mysql-management:1186'
2012-11-22 07:37:24 [ndbd] INFO -- Angel allocated nodeid: 3

5. Next, start the mysql service on SQL node mysql-mysqld1:

$ service mysql start

6. Next, start the mysql service on SQL node mysql-mysqld2:

$ service mysql start

Monitor the Cluster

Monitoring the cluster requires you to log in to the management server. To check the overall status of the cluster:

$ ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.1.54 (mysql-5.1.63 ndb-7.0.35, Nodegroup: 0, Master)
id=3 @192.168.1.55 (mysql-5.1.63 ndb-7.0.35, Nodegroup: 0)
 
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.1.53 (mysql-5.1.63 ndb-7.0.35)
 
[mysqld(API)] 2 node(s)
id=4 @192.168.1.51 (mysql-5.5.27 ndb-7.2.8)
id=5 @192.168.1.52 (mysql-5.5.27 ndb-7.2.8)

To check the data nodes' status:

$ ndb_mgm -e "all status"
Connected to Management Server at: localhost:1186
Node 2: started (mysql-5.1.63 ndb-7.0.35)
Node 3: started (mysql-5.1.63 ndb-7.0.35)

To check the memory usage of data nodes:

$ ndb_mgm -e "all report memory"
Connected to Management Server at: localhost:1186
Node 2: Data usage is 0%(23 32K pages of total 8192)
Node 2: Index usage is 0%(20 8K pages of total 16416)
Node 3: Data usage is 0%(23 32K pages of total 8192)
Node 3: Index usage is 0%(20 8K pages of total 16416)

Stopping the Cluster

1. To stop the cluster, we must follow this order:

SQL Node > Management Node / Data Node

2. Log in to the SQL nodes (mysql-mysqld1 and mysql-mysqld2) and run the following command:

$ service mysql stop

3. Log in to the management node (mysql-management) and run the following command:

$ ndb_mgm -e shutdown

 

Done! You should now be able to create or import databases on any of the SQL nodes. You can put a load balancer in front of the SQL nodes to take advantage of the performance and high availability.
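
As a quick smoke test, you can create an NDB table through one SQL node and read it back through the other. This is only a sketch: it assumes the default empty root password of a fresh install (set a proper one in production), and the cluster_test database name is arbitrary:

mysql-mysqld1$ mysql -u root -e "CREATE DATABASE cluster_test"
mysql-mysqld1$ mysql -u root -e "CREATE TABLE cluster_test.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER"
mysql-mysqld1$ mysql -u root -e "INSERT INTO cluster_test.t1 VALUES (1)"
mysql-mysqld2$ mysql -u root -e "SELECT * FROM cluster_test.t1"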

Notes

You may notice that the distribution version I installed is “rhel5”. You can get the “el6” distribution packages from this page: http://mirror.services.wisc.edu/mysql/Downloads/MySQL-Cluster-7.2/ and search for the “el6” package names.

High Availability: Configure Piranha for HTTP, HTTPS and MySQL

Piranha is a simple yet powerful tool to manage virtual IP and service with its web-based GUI.

Referring to my previous post on how to install and configure Piranha for the HTTP service (http://blog.secaserver.com/2012/07/centos-configure-piranha-load-balancer-direct-routing-method/), in this post we will complete the Piranha configuration with HTTP and HTTPS load balancing using direct routing with firewall marks, and MySQL load balancing using direct routing only.

HTTP/HTTPS will be accessed by users via the virtual public IP 130.44.50.120, while the MySQL service will be accessed by the web servers via the virtual private IP 192.168.100.30. Kindly refer to the picture below for the full architecture:

 

All Servers

SELinux must be turned off on all servers. Change the SELinux configuration file at /etc/sysconfig/selinux:

SELINUX=disabled

Load Balancers

1. All steps should be done on both load balancers unless specified otherwise. We will install Piranha and the other required packages using yum:

$ yum install piranha ipvsadm mysql -y

2. Open the firewall ports as below:

$ iptables -A INPUT -m tcp -p tcp --dport 3636 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 80 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 443 -j ACCEPT
$ iptables -A INPUT -m tcp -p tcp --dport 539 -j ACCEPT
$ iptables -A INPUT -m udp -p udp --dport 161 -j ACCEPT

3. Start all required services and make sure they will auto-start if the server reboots:

$ service piranha-gui start
$ chkconfig piranha-gui on
$ chkconfig pulse on

4. Run the following command to set a password for user piranha. This will be used when accessing the web-based configuration tool:

$ piranha-passwd

5. Turn on IP forwarding. Open /etc/sysctl.conf and make sure the following line has the value 1:

net.ipv4.ip_forward = 1

Then run the following command to activate it:

$ sysctl -p

6. Check that ip_tables is loaded properly as a kernel module:

$ lsmod | grep ip_tables
ip_tables 17733 3 iptable_filter,iptable_mangle,iptable_nat

7. Since we need to serve HTTP and HTTPS from the same servers, we have to group the traffic so that both are forwarded to the same destination. To achieve this, we mark the packets using iptables so they are recognized correctly on the destination server. Set the iptables rules to mark all packets destined for the same group with the mark “80”:

$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 80 -j MARK --set-mark 80
$ iptables -t mangle -A PREROUTING -p tcp -d 130.44.50.120/32 --dport 443 -j MARK --set-mark 80
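
These mangle rules live only in memory. If you manage the rest of your firewall rules through the iptables service on the load balancers, you may want to persist them so they survive a reboot (optional):

$ service iptables save
$ chkconfig iptables on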

Load Balancer #1

1. Check that the IP addresses are correctly set up:

$ ip a | grep inet
inet 130.44.50.121/28 brd 130.44.50.127 scope global eth0
inet 192.168.100.41/24 brd 192.168.100.255 scope global eth1

2. Log in to Piranha at http://130.44.50.121:3636/. Log in as user piranha with the password set up in step #4 of the Load Balancers section.

3. Enable redundancy. Go to Piranha > Redundancy > Enable.

4. Enter the IP information as below:

Redundant server public IP     : 130.44.50.122
Monitor NIC links for failures : Enabled
Use sync daemon                : Enabled

Click ‘Accept’.

5. Go to Piranha > Virtual Servers > Add > Edit. Add information as below and click ‘Accept’:

 

6. Next, go to Real Server. Here we put the IP addresses of all the real servers that serve HTTP. Fill in the required information as below:

7. Now do a similar setup for HTTPS. Just change the ‘Application port’ to 443, and for the Real Server, change the real server’s destination port to 443 as well.

8. For the MySQL virtual server, enter the information as below:

 

9. For the MySQL real servers, enter the information as below:

 

10. Configure the monitoring script for the MySQL virtual server. Click on ‘Monitoring Script’ and configure as below:

 

11. Set up the monitoring script for MySQL:

$ vim /root/mysql_mon.sh

And add the following lines:

#!/bin/sh
USER=monitor
PASS=M0Npass5521
####################################################################
CMD=/usr/bin/mysqladmin
 
IS_ALIVE=`$CMD -h $1 -u $USER -p$PASS ping | grep -c "alive"`
 
if [ "$IS_ALIVE" = "1" ]; then
    echo "UP"
else
    echo "DOWN"
fi

12. Make the script executable:

$ chmod 755 /root/mysql_mon.sh
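
With the script executable you can test it by hand before nanny uses it. This assumes the monitor grant from the Database Cluster section below is already in place and that 192.168.100.33 (Mysql1) is up:

$ /root/mysql_mon.sh 192.168.100.33
UP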

13. Now copy the script and the Piranha configuration file over to load balancer #2:

$ scp /etc/sysconfig/ha/lvs.cf lb2:/etc/sysconfig/ha/lvs.cf
$ scp /root/mysql_mon.sh lb2:/root/
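
For reference, since the Piranha screenshots are not reproduced here, the lvs.cf you just copied over should look roughly like the excerpt below. Treat every value as an assumption based on the IPs used in this post and on Piranha's defaults (interface aliases, scheduler, timeouts and so on may differ on your system); only one real server is shown per pool and the others follow the same pattern:

primary = 130.44.50.121
backup = 130.44.50.122
backup_active = 1
service = lvs
network = direct
syncdaemon = 1
virtual HTTP {
    active = 1
    address = 130.44.50.120 eth0:1
    vip_nmask = 255.255.255.255
    fwmark = 80
    port = 80
    send = "GET / HTTP/1.0\r\n\r\n"
    expect = "HTTP"
    scheduler = wlc
    protocol = tcp
    server web1 {
        address = 130.44.50.123
        active = 1
        weight = 1
    }
}
virtual MYSQL {
    active = 1
    address = 192.168.100.30 eth1:1
    vip_nmask = 255.255.255.255
    port = 3306
    send_program = "/root/mysql_mon.sh %h"
    expect = "UP"
    scheduler = wlc
    protocol = tcp
    server mysql1 {
        address = 192.168.100.33
        active = 1
        weight = 1
    }
}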

14. Restart Pulse to activate the Piranha configuration in LB#1:

$ service pulse restart

Load Balancer #2

On this server, we just need to enable and restart the pulse service as below:

$ chkconfig pulse on
$ service pulse restart

Database Cluster

1. We need to allow the MySQL monitoring user used by nanny (the load balancers) in the MySQL cluster. Log in to the MySQL console on one of the servers and enter the following SQL command:

mysql> GRANT USAGE ON *.* TO 'monitor'@'%' IDENTIFIED BY 'M0Npass5521';
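
From either load balancer you can then confirm the monitoring credentials work against a real server (192.168.100.33 is Mysql1 in this setup; the mysql client package installed earlier provides mysqladmin):

$ mysqladmin -h 192.168.100.33 -u monitor -pM0Npass5521 ping
mysqld is alive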

2. Add the virtual IP manually using iproute:

$ /sbin/ip addr add 192.168.100.30 dev eth1

3. Add the following entry to /etc/rc.local to make sure the virtual IP comes up after boot:

$ echo '/sbin/ip addr add 192.168.100.30 dev eth1' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #2 to bring up the virtual IP manually. VIPs cannot be configured to start on boot.

4. Check the IPs on the server. The example below was taken from server Mysql1:

$ ip a | grep inet
inet 130.44.50.127/24 brd 130.44.50.255 scope global eth0
inet 192.168.100.33/24 brd 192.168.100.255 scope global eth1
inet 192.168.100.30/32 scope global eth1

Web Cluster

1. On each and every web server, we need to install a package called arptables_jf from yum. We will use it to manage our ARP table entries and rules:

$ yum install arptables_jf -y

2. Add the following rules, respectively, for each web server:

Web1:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.123

Web 2:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.124

Web 3:

arptables -A IN -d 130.44.50.120 -j DROP
arptables -A OUT -d 130.44.50.120 -j mangle --mangle-ip-s 130.44.50.125

3. Save the rules, enable arptables_jf to start on boot and restart the service:

$ service arptables_jf save
$ chkconfig arptables_jf on
$ service arptables_jf restart

4. Add the virtual IP manually to the server using the ip command as below:

$ /sbin/ip addr add 130.44.50.120 dev eth0

5. Add the following entry to /etc/rc.local to make sure the virtual IP is up after boot:

$ echo '/sbin/ip addr add 130.44.50.120 dev eth0' >> /etc/rc.local

Attention: If you restart the interface that holds the virtual IP on this server, you need to execute step #4 to bring up the virtual IP manually. VIPs cannot be configured to start on boot.

6. Check the IPs on the server. The example below was taken from server Web1:

$ ip a | grep inet
inet 130.44.50.123/28 brd 130.44.50.127 scope global eth0
inet 130.44.50.120/32 scope global eth0
inet 192.168.100.21/24 brd 192.168.100.255 scope global eth1

You now have a complete high-availability MySQL and HTTP/HTTPS service with automatic failover and load balancing, provided by Piranha using the direct routing method.

In this tutorial I am not focusing on HTTPS because I do not have SSL set up correctly in this test environment and did not have much time to do so. In any case, you may use the following BASH script to monitor HTTPS from Piranha (nanny):

#!/bin/bash
 
if [ $# -eq 0 ]; then
        echo "host not specified"
        exit 1
fi
 
curl -s --insecure \
	--cert /etc/crt/hostcert.pem \
	--key /etc/crt/hostkey.pem \
	https://${1}:443 | grep "" \
	&> /dev/null
 
if [ $? -eq 0 ]; then
        echo "UP"
else
        echo "DOWN"
fi
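
The script takes the real server address as its first argument, the same way nanny would pass it via %h. A quick manual test, assuming you saved it as /root/https_mon.sh on the load balancers and the certificate paths in the script exist:

$ chmod 755 /root/https_mon.sh
$ /root/https_mon.sh 130.44.50.123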

I hope this tutorial is useful for some of you out there!