Linux: VMware Tools Installation Error

Yesterday, I was installing a new CentOS 6 server inside VMware Workstation 7 to serve as a file server for internal use. As usual, it is highly recommended to install VMware Tools on every guest operating system so that hardware integration is smooth and the virtual server does not suffer degraded performance.

During the installation process, I ran into the following error:

Searching for a valid kernel header path...
The path "" is not valid.
Would you like to change it? [yes]
 
What is the location of the directory of C header files that match your running kernel?

What VMware needs are the kernel-headers and kernel-devel packages matching the version of your currently loaded kernel. You can check the loaded kernel with the following command:

$ uname -r
2.6.32-71.el6.x86_64

SOLUTION 1

Solution #1 is the recommended one, because it is better to update your kernel to the latest stable version provided by the repository, but it requires downtime for a reboot. Steps as below:

1. Update the kernel:

$ yum update kernel -y

2. Install the kernel-headers, kernel-devel and other required packages:

$ yum install kernel-headers kernel-devel gcc make -y

3. Reboot the server so that it loads the new kernel:

$ init 6

4. The kernel version has now been updated, together with kernel-headers and kernel-devel:

$ uname -r
2.6.32-71.29.1.el6.x86_64
$ rpm -qa | grep -e kernel-headers -e kernel-devel
kernel-headers-2.6.32-71.29.1.el6.x86_64
kernel-devel-2.6.32-71.29.1.el6.x86_64

SOLUTION 2

Solution #2 requires you to install kernel-headers and kernel-devel matching your current kernel version. Steps as below:

1. Install the same version of kernel-headers and kernel-devel via yum:

$ yum install kernel-headers-$(uname -r) kernel-devel-$(uname -r) -y

NOTE: If you have installed gcc previously, you will face an error here because kernel-headers is already installed, but at the latest kernel version rather than the running one. You need to remove it first using the following command:

$ yum remove kernel-headers -y

2. Install the required packages:

$ yum install gcc make -y

3. No need to reboot the server. Just make sure the kernel, kernel-headers and kernel-devel versions are the same:

$ uname -r
2.6.32-71.el6.x86_64
$ rpm -qa | grep -e kernel-headers -e kernel-devel
kernel-headers-2.6.32-71.el6.x86_64
kernel-devel-2.6.32-71.el6.x86_64

Once you have completed one of the solutions above, proceed with the VMware Tools installation by following the wizard. The installer should now be able to detect the kernel header path.
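If you already quit the installer at the prompt above, simply run it again and it should now detect the header path. As a rough sketch, assuming the VMware Tools tarball was extracted to /tmp (adjust the path to wherever you extracted it):

$ cd /tmp/vmware-tools-distrib
$ ./vmware-install.pl

If VMware Tools was installed previously, you can re-run the configuration step alone with vmware-config-tools.pl.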

Linux: The Best and Safest Way to Copy Files

What is the best and safest way to copy files on a Linux machine? In some cases the directory I need to copy is large, with 10,000+ files and 50GB+ in total size, and I want to transfer the files in the most convenient and safest way. The conditions I need are:

  • I can monitor the copying progress
  • I can resume if a problem occurs during the transfer
  • A summary report after completion
  • The progress is logged into a file
  • No prompts for 'yes' or 'no' or anything else
  • And it must be one single command!

The answer is using rsync!

This tool needs no introduction; it is popular enough that you can find plenty about it on the Internet yourself. So I will just show you the way that meets all the conditions mentioned above.

Variables that I used:

OS: CentOS 6.0 64bit
Source directory: /mnt/nfs/contents
Destination directory: /home/user1/public_html
Log file location: /home/user1/logs/copy.txt

1. Let's check the total number of files in the source directory:

$ tree /mnt/nfs/contents/ | wc -l
16247

2. The rsync command will be (-P shows progress and keeps partial files for resuming, -a preserves attributes and recurses, -v is verbose, -z compresses during transfer and -h prints human-readable sizes):

$ rsync -Pavzh --log-file=/home/user1/logs/copy.txt /mnt/nfs/contents /home/user1/public_html &

3. Check the progress:

$ tail -f /home/user1/logs/copy.txt

4. Count the total number of files in the destination directory:

$ tree /home/user1/public_html/contents/ | wc -l
16247

If the copy process gets interrupted, just repeat step 2 and the transfer will continue from where it stopped. Good luck!
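One extra trick: if the transfer keeps dying (for example over a flaky NFS mount), you can wrap the rsync command in a retry loop so it resumes automatically. This is just a sketch of my own, using the same paths as above:

#!/bin/bash
# Keep retrying rsync until it exits successfully
until rsync -Pavzh --log-file=/home/user1/logs/copy.txt /mnt/nfs/contents /home/user1/public_html
do
    echo "rsync interrupted, retrying in 30 seconds..."
    sleep 30
done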

System Administration: Managing Remote Locations

As a system administrator administrating many branches, I need to support the end-user environment as well. Doing this from a single location is quite hard, and I need to create the best environment to manage all of this efficiently.

I am listing some tips on what we can do to improve communication and collaboration between branches:

VPN between branches

  • A VPN can connect the computers in different branches to each other over a secured network. This ensures that data communication between colleagues is protected, and users can feel like they are all in one place.
  • The recommended way to do this is to set up a VPN (PPTP) server at one location (let's say the headquarters). Create a VPN account for everyone in the company, each assigned a dedicated internal IP (for better tracking).
  • All sensitive information should be located in one place and only be accessible over the VPN. This prevents data leakage, and you have logs of every access to the internal systems via the VPN server.

Internal instant messaging system (chat)

  • Instant messaging is important for improving communication and collaboration. You can use any public messaging service available online, like GTalk, MSN Messenger, Yahoo Messenger or Skype. It is up to you, but it is highly recommended to use an internal instant messaging system like Microsoft Lync 2010, BigAnt Office Messenger, Outlook Messenger and many more.
  • Using an internal messenger gives you advantages like:
    • Preventing employees from chatting with their gossip friends (as happens with public messengers like GTalk and MSN)
    • Simple file transfer and sharing
    • The ability to trace back the chat history if the boss suspects something is not right with an employee (good for the boss!)
    • Preventing outsiders from sniffing your conversations

DMZ zone

  • Depending on how your network infrastructure is set up, you may need a DMZ to secure the internal network. A DMZ is what we call 'another network zone that is exposed to the public network'. Basically, it helps you isolate your internal network while still being able to reach the web server that is exposed to the public network.
  • Example of a simple DMZ setup: (diagram omitted)
  • For comparison, without a DMZ and with the same peripherals as above (diagram omitted), you can see how insecure it is when those servers sit inside the one internal network.
  • To set up a DMZ, what you need to do is just the following (a minimal firewall sketch follows this list):
    • Create another network on your router with a separate subnet and IP range
    • Make sure incoming connections from the public network to the web and email services reach the DMZ only via the router
    • Make sure your internal LAN can connect to the DMZ via the router
    • Make sure your external firewall blocks all incoming connections except web, email and NAT
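For illustration only, here is a minimal sketch of such a forwarding policy on a Linux-based router using iptables. The interface names are my own assumptions (eth0 = public, eth1 = LAN, eth2 = DMZ); adjust them and the allowed ports to match your environment:

# Drop all forwarded traffic unless a rule allows it
iptables -P FORWARD DROP
# Allow replies to connections that are already established
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
# The internal LAN may reach the DMZ
iptables -A FORWARD -i eth1 -o eth2 -j ACCEPT
# The public network may reach the DMZ mail and web services only
iptables -A FORWARD -i eth0 -o eth2 -p tcp -m multiport --dports 25,80 -j ACCEPT
# The DMZ may never initiate connections into the internal LAN
iptables -A FORWARD -i eth2 -o eth1 -j DROP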

Network drive and file sharing

  • File sharing and a network drive are needed whenever users have to send big files, usually more than 10MB, which are not recommended to be sent via instant messenger or email.
  • The most popular and easiest file sharing to set up is Samba, where you can map the public sharing directory directly on each PC. SMB client support comes by default in Windows, Mac and Linux.
  • Using a VPN that connects all employees in one secure network makes Samba easier to set up and implement.
  • Other file sharing protocols like FTP can be more time consuming to set up, while NFS, on the other hand, does not come by default on Windows, where you need to install a client to connect. A minimal Samba share definition is sketched below.
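As an illustration, a minimal Samba share definition could look like the block below. The share name, path and group are assumptions of mine; append a block like this to /etc/samba/smb.conf on the file server and restart the smb service:

[public]
    comment = Company public share
    path = /home/samba/public
    browseable = yes
    writable = yes
    valid users = @staff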

Collaboration portal

  • This is really important if you rely on teamwork. A collaboration portal is something most companies consider a waste that should not be implemented. This is wrong. I suggest you try any collaboration portal, install it on a private server, and play around with it; you will see its importance.
  • Collaboration portal can help you to achieve:
    • Create, manage and monitor projects, tasks and reports, and assign them to users
    • One central point to store and share confidential documents
    • Applying for leave and checking leave balances
    • Synchronizing and connecting accounts to mail, calendar, instant messaging, Active Directory, CRM and other services
    • Editing any document online, without needing to download and resend it
  • There is a lot of collaborative software available in the market, and some of it is open source. You can browse the list at http://en.wikipedia.org/wiki/List_of_collaborative_software

This is how I have set up the office network at the company I am working for. Do share with us if you have more points to highlight!

Export SVN Repository to Web Files in public_html

It is not safe to host web files directly from an SVN working copy, because web users can see the .svn folders, which contain metadata, source code, structure and more. My PHP programmers want to test the website directly on the internal web server. In this case, I need to export the SVN repository into the public_html directory (the HTTP document root) so they can see the changes they make to their source code.

The following script exports the SVN repository to a temporary directory, then uses rsync to copy from that directory into public_html, replacing or removing files as needed. This keeps public_html in sync with the repository quickly.

The variables I use are as below:

OS: CentOS 6.0 64bit
SVN URL: svn://192.168.100.100/svnrepo/web/php
Web directory: /home/user/public_html
Web URL: http://192.168.100.100/

1. Create the script:

$ touch /root/svn_exporter

2. Paste the following code and change the values to suit your environment:

 

#!/bin/bash
# SVN exporter from Subversion repository to web directory

HOMEDIR='/home/user'
SVNHOST='192.168.100.100'
SVNURL='svnrepo/web/php'
SVNUSER='exporter'
SVNPASS='expass123'
PUBLIC_HTML='/home/user/public_html'

# Export a fresh copy of the repository into a temporary directory
rm -Rf $HOMEDIR/svnexport.temp
svn export svn://$SVNHOST/$SVNURL --username $SVNUSER --password $SVNPASS $HOMEDIR/svnexport.temp

# Rotate the previous export so we always keep one older copy
rm -Rf $HOMEDIR/svnexport.old
mv $HOMEDIR/svnexport $HOMEDIR/svnexport.old
mv $HOMEDIR/svnexport.temp $HOMEDIR/svnexport

# Sync the fresh export into the document root; --delete removes
# files that no longer exist in the repository
rsync -avz --delete $HOMEDIR/svnexport/ $PUBLIC_HTML

 

3. Change the permission to executable:

$ chmod 755 /root/svn_exporter

4. Run the script:

$ /root/svn_exporter

5. Put it into a cron job to run every minute:

$ crontab -e

Insert the following line:

* * * * * /root/svn_exporter

6. Save the file and restart the crond service:

$ service crond restart

Done! Every minute, the SVN repository will be exported to the public_html directory, and your programmers can test it online via http://192.168.100.100.
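One caveat with a one-minute cron job: if an export ever takes longer than a minute, two runs can overlap and fight over the temporary directory. A simple guard, assuming the flock utility from util-linux is available (my addition, not part of the original setup), is to change the crontab line to:

* * * * * flock -n /tmp/svn_exporter.lock /root/svn_exporter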

Linux: Init Script for FSniper

As mentioned in my previous post on how to install fsniper, fsniper can run in daemon mode. We can take advantage of this by writing an init script to ease the stop and start process. Without one, you need to kill the PID manually and also start it manually using the following command:

$ fsniper --daemon

Since I use fsniper quite a lot to monitor file changes on servers, I have created a simple init script for it. Let's create it!

1. Create the init script in init.d directory:

$ touch /etc/init.d/fsniper

2. Using your favourite text editor, open /etc/init.d/fsniper and paste the following code:

#!/bin/bash
# Simple init script for the fsniper daemon

# fsniper looks for its configuration relative to $HOME
export HOME=/root

case "$1" in
start)
    echo -n "Starting Fsniper: "
    /usr/local/bin/fsniper --daemon
    echo -e "... [ \e[00;32mOK\e[00m ]"
    ;;
stop)
    echo -n "Shutdown Fsniper: "
    # Find the daemon's PID and kill it
    kill -9 `ps aux | grep "fsniper --daemon" | grep -v grep | awk '{print $2}'`
    echo -e "... [ \e[00;32mOK\e[00m ]"
    ;;
restart)
    $0 stop
    sleep 1
    $0 start
    ;;
*)
    echo "Usage: `basename $0` start|stop|restart"
    exit 1
esac

exit 0

3. Change the permission to executable:

$ chmod 755 /etc/init.d/fsniper

Done! You can now use either of the following commands to start fsniper:

$ service fsniper start
$ /etc/init.d/fsniper start
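If you also want fsniper to start automatically on boot, the script needs a chkconfig header before it can be registered. As a sketch (the run levels and priorities here are arbitrary choices on my part), add these two comment lines near the top of /etc/init.d/fsniper:

# chkconfig: 2345 90 10
# description: fsniper file monitoring daemon

Then register and enable it:

$ chkconfig --add fsniper
$ chkconfig fsniper on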

Subversion Authz Examples

After the Subversion server has been delivered, I need to set up some ACLs (Access Control Lists) so everyone can access their respective repositories.

By understanding the rules of Subversion path-based authorization, we can create a solid ACL that guarantees no repository can be overwritten by an unwanted person. To use path-based ACLs, we need the following options enabled in svnserve.conf:

anon-access = none
auth-access = write
password-db = passwd
authz-db = authz
realm = My Subversion Repository

The following example is for two different developer teams (PHP and Ruby) accessing one repository under the /svn directory on the server. The SVN path is svn://192.168.1.100/svnrep. We also have the boss and the system administrator acting as the admins/owners of the system.

In my /svn/conf/authz file, I put the following directives, with descriptions:

# Users defined in groups
[groups]
admin = boss, sysadmin
phpteam = php1, php2, php3
rubyteam = ruby1, ruby2

# The SVN root should only be accessed by the boss and the system admin
[/]
@admin = rw
* =

# The main web repository should also only be accessed by the boss and the system admin;
# others cannot read or write at all. When specifying a plain SVN path, we need to list
# which users/groups can and cannot access it
[/web]
@admin = rw
* =

# The PHP project repository can only be accessed by the PHP developers, the boss and the sysadmin.
# When using the svnrep: prefix, we only need to list the users/groups that can access the path;
# everyone else is rejected automatically
[svnrep:/web/php]
@phpteam = rw
@admin = rw

# The Ruby project repository can only be accessed by the Ruby developers, the boss and the sysadmin;
# again, everyone else is rejected automatically
[svnrep:/web/ruby]
@rubyteam = rw
@admin = rw

After we create the ACL, the changes take effect immediately, without a restart. Our PHP team can now do their development work without interruption from the Ruby team, while on the other hand the boss can monitor their progress and the system administrator can still manage and perform maintenance on the Subversion repository.
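To verify the ACL behaves as intended, you can try listing a protected path as different users. A quick sanity check (the exact output will vary with your setup):

$ svn list svn://192.168.1.100/svnrep/web/php --username php1    # should succeed
$ svn list svn://192.168.1.100/svnrep/web/ruby --username php1   # should fail with an authorization error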

Linux: Yum Repository from DVD

In my case, I needed to set up a web server without an Internet connection. As I find it convenient to use yum for package installation on Linux, we need to tell yum to look at the CentOS DVD instead of the Internet (the default).

Kindly find the variables below:

OS: CentOS 6 64bit
DVD device: /dev/cdrom
Mount point: /media/CentOS

1. Create mount point directory:

$ mkdir /media/CentOS

2. Insert CentOS installation DVD #1 into the drive and mount it to our mount point (/dev/cdrom is normally a symlink to the actual drive device, e.g. /dev/sr0):

$ mount /dev/cdrom /media/CentOS
mount: block device /dev/sr0 is write-protected, mounting read-only

3. We need to tell yum to refer to the installation DVD instead of the repositories on the Internet. To do this, I need to enable the CentOS-Media repository. Open /etc/yum.repos.d/CentOS-Media.repo in a text editor and change the following line:

enabled=0

To:

enabled=1

4. Since I will not use the default repositories at all, I will move all the other .repo files to a backup folder called yum.repos.d.bak under /etc:

$ mkdir /etc/yum.repos.d.bak
$ cd /etc/yum.repos.d
$ ls -1 | grep -v CentOS-Media.repo | xargs -I {} mv {} /etc/yum.repos.d.bak/

Done! Now you can run the yum package installer directly from the installation disc. If you plan to use this method from now on, you may want to edit /etc/fstab, or use the /etc/rc.local script, to mount the installation disc automatically on reboot.
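For reference, a typical fstab entry to mount the disc automatically at boot might look like the line below (the device name is an assumption; adjust it to your system), and you can then target the media repository explicitly when installing packages (c6-media is the repository id defined inside CentOS 6's CentOS-Media.repo):

/dev/cdrom  /media/CentOS  iso9660  ro  0 0

$ yum --disablerepo=\* --enablerepo=c6-media install httpd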

Unable to Copy/Paste in Remote Desktop

Remote Desktop Connection (RDC) is the most important tool you need in order to manage a Windows server using the Remote Desktop Protocol (RDP).

Sometimes, after you have been forcibly disconnected from an RDC session, you cannot use the copy and paste feature between your host and the remote host in the next session. For those who face this problem frequently: we need to kill rdpclip.exe and run it again. There is no need to restart the remote server (which is what I did when I first encountered this problem).

To do this, log in to the remote server via RDC and open Task Manager. Find rdpclip.exe, right-click on it and choose 'End Process Tree'.

After that, restart the process by opening Task Manager > New Task > type 'rdpclip' > OK.
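If you prefer the command line, the same reset can be done from a command prompt on the remote server:

C:\> taskkill /F /IM rdpclip.exe
C:\> rdpclip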

Voila! Your copy/paste function, also known as the clipboard, is working again!

FreeBSD: Setup IP and Port Redirection using NAT

Yesterday, our development team delivered the new website on a new server. This website replaces our old website and really needs to be pointed to the new server immediately. Since my boss does not want to risk any data inconsistency caused by DNS propagation, I am using this method to redirect all connections on port 80 of the old server to port 80 on the new server. Only once the redirect is in place will I change the DNS records to the new server. The result is zero DNS propagation time.

I will use the simplest way to achieve this objective: IPFirewall (ipfw) as the firewall and natd as the address/port redirector. Both come by default with FreeBSD.

The variables I used are:

OS: FreeBSD 8.0 64bit
Old server main IP: 202.188.90.11
Old web server IP: 202.188.90.12
New web server IP: 202.188.100.77
Domain: mywebsite.net

1. Since we want to make this FreeBSD server act as a router, we need to make sure it has two interfaces configured. One is for us to connect via the public network and the other one is for the IP redirection. Make sure you have the following IP setup in /etc/rc.conf:

ifconfig_em0="inet 202.188.90.11 netmask 255.255.255.0"
ifconfig_em1="inet 202.188.90.12 netmask 255.255.255.0"

2. Restart the network interfaces and check whether the IPs are attached to the interfaces or not:

$ /etc/rc.d/netif restart
$ ifconfig | grep inet
        inet 202.188.90.11 netmask 0xffffff00 broadcast 202.188.90.255
        inet 202.188.90.12 netmask 0xffffff00 broadcast 202.188.90.255

3. In this case I am going to use em1 as the interface that receives connections for the web server (since the domain points to this IP/interface). Add the following lines to /etc/rc.conf using a text editor:

gateway_enable="YES"
firewall_enable="YES"
firewall_type="OPEN"
natd_enable="YES"
natd_interface="em1"
natd_flags="-f /etc/natd.conf"

4. As you can see in the configuration above, natd_flags tells natd to read its configuration from /etc/natd.conf. So we need to create this file and put the rules into it using a text editor:

port 8668
interface em1
redirect_port tcp 202.188.100.77:80 202.188.90.12:80

5. Sadly, we need to reboot the server for the new route to work. Reboot as follows:

$ init 6

6. Let's check whether all the required processes are running:

$ ps aux | grep natd
root    858  0.0  0.2 14256  1628  ??  Ss   11:37AM   0:00.01 /sbin/natd -f /etc/natd.conf -n em1
$ ipfw list
00050 divert 8668 ip4 from any to any via em1
00100 allow ip from any to any via lo0
00200 deny ip from any to 127.0.0.0/8
00300 deny ip from 127.0.0.0/8 to any
00400 deny ip from any to ::1
00500 deny ip from ::1 to any
00600 allow ipv6-icmp from :: to ff02::/16
00700 allow ipv6-icmp from fe80::/10 to fe80::/10
00800 allow ipv6-icmp from fe80::/10 to ff02::/16
00900 allow ipv6-icmp from any to any ip6 icmp6types 1
01000 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136
65000 allow ip from any to any
65535 deny ip from any to any

7. Now, let's browse the website, http://mywebsite.net, and see where it goes. It should load the website from the new server, 202.188.100.77. We have just redirected the website to the new server without any worries about DNS propagation!
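If you want to test the redirection before touching DNS, you can also hit the old IP directly. Assuming curl is installed on your workstation (it is not part of the FreeBSD base system), something like this should return the response headers of the new server:

$ curl -I -H "Host: mywebsite.net" http://202.188.90.12/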

cPanel: Berkeley DB error

I found this error in /var/log/exim_mainlog. When I tried to fix the Exim database using /scripts/exim_tidydb, the same error occurred:

$ tail -f /var/log/exim_mainlog
2011-09-08 10:08:13 1R1cV2-0003Yq-J2 Berkeley DB error: page 40: illegal page type or format
2011-09-08 10:08:13 1R1cV2-0003Yq-J2 Berkeley DB error: PANIC: Invalid argument
2011-09-08 10:08:13 1R1cV2-0003Yq-J2 Berkeley DB error: fatal region error detected; run recovery

This can be fixed with the following steps: re-updating Exim and clearing out the Exim database files.

1. Back up /etc/exim.conf and /var/spool/exim/db:

$ cp /etc/exim.conf /etc/exim.conf.bak
$ cp -R /var/spool/exim/db /var/spool/exim/db.bak

2. Stop Exim:

$ service exim stop

3. Remove all files under /var/spool/exim/db to make sure we get a fresh Exim database:

$ rm -Rfv /var/spool/exim/db/*

4. Update Exim:

$ /scripts/eximup --force

5. Restore the Exim configuration:

$ cp /etc/exim.conf.bak /etc/exim.conf

6. Restart Exim to load our configuration:

$ service exim restart
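After the restart, tail the log again for a few minutes to confirm the Berkeley DB errors are gone:

$ tail -f /var/log/exim_mainlog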