AS Number Stats on Linux

I’ve got a couple of Linux machines sitting outside of the sFlow ‘zone’, so their AS traffic stats go unmeasured. I wanted a rough idea of the number of connections per AS, so here’s a little app that parses netstat output and sorts AS numbers by connection count.

NOTE 1: This won’t work on cPanel servers due to tmp restrictions.

NOTE 2: Specify an alternative (e.g. newer) GeoIPASNum.dat file with the --geo option.

wget http://djlab.com/stuff/asnum
chmod +x asnum
./asnum

Example output:

[root@mx1 ~]# ./asnum
(18) | RFC1918 or BOGON
(11) | AS29889 Fast Serv Networks, LLC
(4) | AS3320 Deutsche Telekom AG
(2) | AS7922 Comcast Cable Communications, Inc.
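
Under the hood the idea is simple: pull remote IPs out of netstat, count them, and resolve each to an AS. A rough sketch of the same approach using geoiplookup from the GeoIP package (the data-file path is an assumption; point it at your own GeoIPASNum.dat):

```shell
#!/bin/sh
# Sketch only: count connections per remote IP, then resolve each IP to an
# AS with geoiplookup. The .dat path below is an assumed default location.
netstat -ntu | awk 'NR>2 {sub(/:[0-9]+$/, "", $5); print $5}' \
  | sort | uniq -c | sort -rn \
  | while read count ip; do
      asn=$(geoiplookup -f /usr/share/GeoIP/GeoIPASNum.dat "$ip" \
            | sed 's/^GeoIP ASNum Edition: //')
      echo "($count) | $asn"
    done
```

The awk skips netstat’s two header lines and strips the trailing :port from the foreign address; the real asnum tool additionally groups by AS rather than by IP.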

Quick and (very) dirty cron script

#!/bin/sh
#/root/doasnum.sh
#*/5 * * * * /root/doasnum.sh >> /var/log/asnum.log

thedate=`date`
echo "***********************"
echo "$thedate"
/root/asnum
echo "***********************"

Don’t forget to rotate the logs.

#/etc/logrotate.d/asnum
/var/log/asnum.log
{
        rotate 7
        daily
        missingok
}

Clone a live Linux system with rsync over SSH

All commands are run on the new server.

1. Boot into rescue mode (ISO, PXE, etc.).

2. Create partitions with ‘fdisk /dev/sda’. Use type 83 for non-RAID filesystems, 82 for swap, and type fd for all partitions in a RAID setup. Flag the boot partition as bootable.

In the case of SSD, add -S 32 -H 32 to the fdisk command and start the first partition on sector 2 for proper alignment.

If using RAID, duplicate the partition table after creating it on the first disk:

dd if=/dev/sda of=/dev/sdb bs=1 count=64 skip=446 seek=446
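
For reference, those offsets come from the MBR layout: bytes 0–445 are bootstrap code, the four 16-byte partition entries occupy bytes 446–509, and the boot signature sits at 510–511, so the dd above copies exactly the partition table. You can verify the copy non-destructively:

```shell
# Compare the 64-byte partition tables on both disks; cmp exits 0 (and the
# message prints) only if the dd copy succeeded.
cmp <(dd if=/dev/sda bs=1 count=64 skip=446 2>/dev/null) \
    <(dd if=/dev/sdb bs=1 count=64 skip=446 2>/dev/null) \
  && echo "partition tables match"
```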

3. Create RAID array (if applicable).

# For SSD, add: --chunk=128
mdadm --create /dev/md0 -e 0.90 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  ## /boot
mdadm --create /dev/md1 -e 0.90 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  ## Swap
mdadm --create /dev/md2 -e 0.90 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3  ## /

4. Create filesystems

For spin disk:

mkfs.ext4 /dev/md0 # /dev/sda1 for non-RAID
mkfs.ext4 /dev/md2 # /dev/sda3 for non-RAID

For SSD (non RAID):

mkfs.ext4 -b 1024 -E stride=128,stripe-width=128 -O ^has_journal /dev/sda1
mkfs.ext4 -b 1024 -E stride=128,stripe-width=128 -O ^has_journal /dev/sda3

For SSD (RAID):

mkfs.ext4 -b 1024 -E stride=128,stripe-width=256 -O ^has_journal /dev/md0 ## stripe-width = stride x N disks
mkfs.ext4 -b 1024 -E stride=128,stripe-width=256 -O ^has_journal /dev/md2 ## stripe-width = stride x N disks
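
To confirm the options took effect, the superblock can be inspected read-only with tune2fs from e2fsprogs (device name here is an example):

```shell
# Read-only check: has_journal should be absent from the feature list, and
# the RAID stride / stripe width should match what was passed to mkfs.
tune2fs -l /dev/md2 | grep -Ei 'features|stride|stripe'
```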

5. Mount filesystems

mkdir /mount
mount /dev/md2 /mount  ## /dev/sda3 for non-RAID
mkdir /mount/{boot,dev,sys,proc,tmp}
mount /dev/md0 /mount/boot ## /dev/sda1 for non-RAID

6. Sync filesystems with rsync over SSH (1.2.3.4 is the source machine in these examples)

rsync -aHxv --numeric-ids --progress root@1.2.3.4:/* /mount --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp
rsync -aHxv --numeric-ids --progress root@1.2.3.4:/boot/* /mount/boot --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp  ## Only if /boot is on separate partition in source machine
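
If unsure about the excludes, an rsync dry run (-n) lists what would transfer without writing anything:

```shell
# Same transfer with -n (--dry-run) added: nothing is written to /mount,
# rsync just prints the file list it would copy.
rsync -aHxvn --numeric-ids root@1.2.3.4:/* /mount \
  --exclude=/dev --exclude=/proc --exclude=/sys --exclude=/tmp
```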

If applicable: stop mysql on the source machine and resync the databases to prevent corruption:

rsync -aHxv root@1.2.3.4:/var/lib/mysql/* /mount/var/lib/mysql

7. Update mdadm.conf

mdadm --examine --scan > /mount/etc/mdadm.conf

8. Update fstab (if needed)

vi /mount/etc/fstab
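
For the RAID layout above, the cloned fstab would end up looking something like this (a sketch only; filesystem types, options, and extra mounts depend on the source system):

```
/dev/md2    /         ext4    defaults,noatime    1 1
/dev/md0    /boot     ext4    defaults            1 2
/dev/md1    swap      swap    defaults            0 0
tmpfs       /dev/shm  tmpfs   defaults            0 0
```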

9. Install bootloader

grub
root (hd0,0)
setup (hd0)
root (hd1,0)  ## for RAID
setup (hd1)  ## for RAID
exit

10. Optional: change the IP address if both machines need to be online

vi /mount/etc/sysconfig/network-scripts/ifcfg-eth0

11. Hint: you can chroot into the cloned filesystem, e.g. to rebuild the initramfs:

cd /mount/
mount -t proc proc proc/
mount -t sysfs sys sys/
mount -o bind /dev dev/
chroot .
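
From inside the chroot you can then rebuild the initramfs so it picks up the md/raid modules. A sketch, with the kernel version pulled from /boot (tool depends on the distro: mkinitrd on older CentOS, dracut on CentOS 6+):

```shell
# Inside the chroot -- pick the kernel version that exists in /boot:
KVER=$(ls /boot | sed -n 's/^vmlinuz-//p' | head -1)
mkinitrd -f /boot/initrd-$KVER.img $KVER        # CentOS 5
# dracut -f /boot/initramfs-$KVER.img $KVER     # CentOS 6+
```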

12. Cross fingers and reboot

reboot

Find symlinks on cPanel

Find all possible root symlinks (leftover from attempted exploits) and save them as a list in ‘symlinks.txt’.

ls /var/cpanel/users | grep -v "\`\|\.\|cpanel\|root\|mysql\|nobody" | \
while read CPUSER; do find "/home/$CPUSER" -type l -not \( -lname "/home/$CPUSER/*" \
-o -lname "*rvsitebuilder*" -o -lname "[^/]*" -o -lname "/usr/local/apache/domlogs/*" \
-o -lname "/usr/local/urchin/*" \) ; done \
> symlinks.txt &

Simple Monitoring with Email Alerts that works on LSI MegaRAID and Adaptec

This tool will poll the output of command(s) or URL(s) and send email alerts if the output changes, contains (or does not contain) certain text, or becomes unavailable. It’s a lightweight, reliable monitoring replacement for the pile of garbage most RAID vendors include with their cards. Complex excludes and finds can be specified as Perl regex patterns, useful for suppressing various noise (RAID verify, etc.). The default configuration includes examples that will monitor local LSI or Adaptec cards.

Note: tested on CentOS only. If someone wants to try alien on Debian, please let me know how it works.

rpm -ivh http://djlab.com/stuff/cloud-mon-1.1-3.i686.rpm

After installation, edit /etc/cloud.ini then:

service cloud-mon restart

Make sure you have logrotate installed to prevent log growth. This package includes a simple logrotate script.

Guest isolation in XenServer 6.1 / XCP 1.6

XenServer 6.1 (XCP 1.6) introduced a feature that lets you lock a VIF to specific MAC and IP addresses. This is nice (and also very buggy!), but it doesn’t provide any security beyond keeping VMs from stealing each other’s IPs. A better solution would also (optionally) isolate traffic between groups of VMs: for example, preventing users from accessing other users’ VMs over a common private/backend network, while still allowing communication between VMs in the same group and with external networks.

Step 1: We must relocate the VIF locking data into the VM’s vm-data store, similar to how security groups are managed by Nova (OpenStack). This lets us use more fields and options.

Step 2: Patch /opt/xensource/libexec/setup-vif-rules to fix *several* bugs and to accept an extra locking mode (isolated). In isolated mode, VMs can only communicate with IPs on the allowed list. Use a common set of IPs on each set of VMs in the same ‘security group’ and your various groups of guests are isolated from each other.

Step 3: Patch another bug in /etc/xensource/scripts/vif to prevent orphaned rules from piling up in openvswitch when VMs are restarted.

Patches are here:

http://djlab.com/stuff/xs61/setup-vif-rules.patch
http://djlab.com/stuff/xs61/vif.patch
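
One way to apply them on the host, assuming a stock patch(1) and after backing up the originals:

```shell
# Back up the originals, then apply each patch in place.
cp /opt/xensource/libexec/setup-vif-rules{,.orig}
cp /etc/xensource/scripts/vif{,.orig}
wget -q http://djlab.com/stuff/xs61/setup-vif-rules.patch \
        http://djlab.com/stuff/xs61/vif.patch
patch /opt/xensource/libexec/setup-vif-rules < setup-vif-rules.patch
patch /etc/xensource/scripts/vif < vif.patch
```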

This will only work in openvswitch networking mode and has been tested on XS 6.1 and XCP 1.6. Do not try it in bridged mode (you shouldn’t be using bridged mode anyway).