Migrating from Rackspace Cloud to Cpanel

Fed up with cloud hosting? You're not alone. Just recently, I assisted a mass exodus of over 50 MySQL/Joomla-based sites. After migrating to just a modest dedicated server with Cpanel, MySQL query performance improved by 200% on average, and some longer queries and page load times improved by over 1000%. Additionally, the dedicated server won't fall on its face when a single script such as a DB backup process consumes 'too many' resources and the Cloud decides to put your whole site in the timeout corner.

Here are some scripts to migrate all of your files and databases from a Rackspace Cloud Sites instance to a Cpanel account quickly and easily. Run the file-transfer script as the Cpanel user you're migrating to in order to avoid ownership issues. If you run it as root, you'll need to run the ownership repair script in this post.
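
If you don't have that repair script handy, here's a minimal sketch of the fix (cpanel_username is a placeholder for your actual Cpanel account):

# placeholder account name; substitute your real Cpanel user
chown -R cpanel_username:cpanel_username /home/cpanel_username/public_html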

I was able to pull from 10-20 sites simultaneously and even .htaccess and other ‘hidden’ files came across intact.

The progress will be saved in /home/cpanel_username/xferlog.txt so you can monitor it in real time. You can launch multiple copies of the script simultaneously to transfer many sites at once.

#!/bin/sh
# Pull a site's files from Rackspace Cloud Sites over FTP into the Cpanel account's public_html

RACKSPACE_CLOUD_FTP_USERNAME="rackspace_ftp_user"
RACKSPACE_CLOUD_FTP_PASSWORD="rackspace_ftp_password"
RACKSPACE_CLOUD_FTP_IP="rackspace_ftp_host_or_ip"
DOMAIN="www.myrackspacedomain.com"

LOCAL_CPANEL_USERNAME=cpanel_username

# -r: recursive, -c: resume partial files, -nH/--cut-dirs=3: strip the remote path prefix
wget -rc --level=0 --no-parent --cut-dirs=3 -nH \
   --directory-prefix=/home/$LOCAL_CPANEL_USERNAME/public_html/ \
   --user="$RACKSPACE_CLOUD_FTP_USERNAME" \
   --ftp-password="$RACKSPACE_CLOUD_FTP_PASSWORD" \
   "ftp://$RACKSPACE_CLOUD_FTP_IP/$DOMAIN/web/content/*" \
   -o /home/$LOCAL_CPANEL_USERNAME/xferlog.txt -nv &
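
To watch the transfer log from the script above in real time:

tail -f /home/cpanel_username/xferlog.txt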

Now, let's migrate a MySQL database (this can actually be used for migrating from any host, not just Rackspace). Place the file in the Cpanel user's home folder so it can be run again right before your DNS switch, ensuring your records are completely up to date. You can run it as many times as you wish.

#!/bin/sh
# Rackspace Cloud to Cpanel DB copy

REMOTE_HOST="rackspace_cloud_mysql_ip"
REMOTE_DB="rackspace_cloud_mysql_db"
REMOTE_USER="rackspace_cloud_mysql_user"
REMOTE_PASS="rackspace_cloud_mysql_pass"

LOCAL_HOST="127.0.0.1"
LOCAL_DB="local_cpanel_mysql_db"
LOCAL_USER="local_cpanel_mysql_user"
LOCAL_PASS="local_cpanel_mysql_pass"

MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"

CMD="$MYSQLDUMP --lock-tables --add-drop-table \
   -h'$REMOTE_HOST' -u'$REMOTE_USER' -p'$REMOTE_PASS' $REMOTE_DB \
   | $MYSQL -h'$LOCAL_HOST' -u'$LOCAL_USER' -p'$LOCAL_PASS' --database $LOCAL_DB"

echo "Running: $CMD"
echo
eval "$CMD"
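
A minimal way to stage and re-run the copy (the dbcopy.sh filename here is just an example):

chmod +x /home/cpanel_username/dbcopy.sh
/home/cpanel_username/dbcopy.sh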

If you’re looking for cheap and reliable Cpanel, Windows, or other types of Managed Hosting, please check out Fast Serv.

Xen – Using Netfront instead of rtl8139 on CentOS, RedHat, Fedora

On an HVM domain, the guest OS will automatically detect your virtual network device as a buggy and slow Realtek RTL8139. You can maximize network performance with the Netfront driver, which is now built into the latest RedHat / CentOS kernels. Older releases needed the kmod-xenpv package for this, but it's built in as of RHEL/CentOS 5.3, I believe. The method is poorly documented from my research, so hopefully this saves someone some time.

1. In /etc/modprobe.conf, change 8139cp to xen-vnif using the following command:

sed -i 's/8139cp/xen-vnif/g' /etc/modprobe.conf

2. Then blacklist the rtl8139 drivers

echo 'blacklist 8139cp' >> /etc/modprobe.d/blacklist
echo 'blacklist 8139too' >> /etc/modprobe.d/blacklist

3. After a reboot, you can test that it worked by first making sure you're online (duh!), then checking lsmod to make sure you're rid of the Realtek junk at last:

lsmod | grep 8139

The output should return nothing (blank).
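
To confirm the paravirtual network driver loaded in its place (exact module names can vary slightly between kernel builds):

lsmod | grep xen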

Now let’s activate the Xen SCSI driver to maximize disk performance by inserting the following in /etc/modprobe.conf:

alias scsi_hostadapter xen-vbd

Be sure to remove the existing scsi_hostadapter entry… if you use both, you’ll zap your disk.
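
In the same spirit as step 1, this one-liner swaps the entry in place (it assumes a single scsi_hostadapter alias line in /etc/modprobe.conf):

sed -i 's/^alias scsi_hostadapter.*/alias scsi_hostadapter xen-vbd/' /etc/modprobe.conf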

Now reboot to activate.

Good luck!

Easy Javascript-based Bookmark Link

Here’s a simple Javascript-based bookmarking script for your site. It automatically populates the page name and URL so you don’t have to. Put this in the <HEAD>:

<script language="javascript" type="text/javascript">
function addToFav() {
  if (window.sidebar && window.sidebar.addPanel) {
    // Older Firefox / Netscape
    window.sidebar.addPanel(document.title, window.location.href, "");
  } else if (window.external && window.external.AddFavorite) {
    // Internet Explorer
    window.external.AddFavorite(window.location.href, document.title);
  } else {
    // Browsers without a bookmarking API: ask the visitor to do it manually
    alert("Press Ctrl+D (Cmd+D on a Mac) to bookmark this page.");
  }
}
</script>

Then, you can add a link in your <BODY>:

<a href="javascript:addToFav();">Bookmark Us!</a>

Sysctl and ip_conntrack_max optimization

On a busy web server, you have to be very careful that you don't run out of connection tracking entries.

Check how many you have set as your max:

/sbin/sysctl net.ipv4.ip_conntrack_max

Check how many you’re using:

wc -l /proc/net/ip_conntrack

A good maximum setting for most web servers with at least 2 GB of RAM is 65536. Change the setting and lock it in (RedHat variants):

echo "net.ipv4.ip_conntrack_max = 65535" >> /etc/sysctl.conf
/sbin/sysctl -w
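
A quick way to compare current usage against the new limit (assuming the /proc/net/ip_conntrack file exists on your kernel, as above):

echo "$(wc -l < /proc/net/ip_conntrack) of $(/sbin/sysctl -n net.ipv4.ip_conntrack_max) conntrack entries in use"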

Find files modified/created within N days ago

This proved to be useful in cleaning up a compromised site. List all the files created or modified within a certain time frame; in this case we are looking 30 days into the past:

find . -mtime -30 -type f -print

If you want to delete all files created/modified within the last N days, you can do something like this:

find . -mtime -30 -type f -exec rm {} \;

Or this:

find . -mtime -30 -type f -print0 | xargs -0 rm
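
When auditing a compromised site, it also helps to see the modification timestamps next to each path; this variant assumes GNU find:

find . -mtime -30 -type f -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort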

Sorting disk usage by folder in Linux

Normally you would use something like this:

du -k | sort -nr > sorted.txt

But the output is not pretty, since nobody likes counting raw kilobytes. This sorts it and prints each size in a human-readable format:

du -k | sort -nr | awk '
     BEGIN {
        split("KB,MB,GB,TB", Units, ",");
     }
     {
        u = 1;
        while ($1 >= 1024) {
           $1 = $1 / 1024;
           u += 1
        }
        $1 = sprintf("%.1f %s", $1, Units[u]);
        print $0;
     }
    ' > sorted.txt
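
If your system has a GNU sort new enough to support -h (human-numeric sort), you can skip the awk entirely; a sketch assuming GNU coreutils:

du -h | sort -hr > sorted.txt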