Enable megasas2.sys Critical Mass Storage Driver

This reg key enables the ability to boot from MegaRAID storage devices using the megasas2.sys driver. It’s useful to run beforehand when upgrading or otherwise changing hardware to something with an LSI 9260, 9271 or similar card. This reg key REQUIRES that the correct megasas2.sys driver is already installed one way or another.

http://djlab.com/stuff/megasas2.reg
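
If you want to sanity-check things from an elevated command prompt first, something like the following works. This is only a sketch: it assumes the driver file lives in the usual drivers directory and that the service is registered as “megasas2”.

REM Confirm the driver file is actually present before merging the key
dir C:\Windows\System32\drivers\megasas2.sys
REM Merge the key, then verify the service start type (0 = boot start)
reg import megasas2.reg
reg query HKLM\SYSTEM\CurrentControlSet\Services\megasas2 /v Start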

OpenVPN Slow Downloads on Windows clients

Recently I upgraded my remote office connection to 100Mbps (cable) and spent almost a month dealing with slow downloads over an OpenVPN client connection. Uploads were fine (maxing out the pipe) but downloads were 12-20 Mbps at best and fluctuating like crazy. TCP client mode actually performed better than UDP, so I knew something was wrong. After making sure my MSS/MTU was fine and confirming that UDP iperf tests could max out the connection without the VPN (proving my ISP wasn’t throttling UDP), the following config on the server side is what ended up fixing it:

sndbuf 0
rcvbuf 0
push "sndbuf 393216";
push "rcvbuf 393216";

I now get 105 Mbps downloads over the OpenVPN connection (115 without the VPN). It turns out the default buffers are just too small for high-speed tunnels. Who would have known.
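
For reference, this is the sort of baseline test that rules out ISP throttling before touching the server config. The iperf3 flags below are an assumption; the original testing may have used classic iperf:

# On the remote end
iperf3 -s
# On the local end: UDP at 100 Mbps, then reversed (-R) to test the download direction
iperf3 -c remote.host -u -b 100M
iperf3 -c remote.host -u -b 100M -R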

Munin plugin for Motorola Surfboard 6141

I cooked up a Munin plugin for my Surfboard 6141 which monitors ALL available data from the status page. If you’re having intermittent signal issues and want to keep detailed data for cable technicians, this is for you!

Check it out on Github:

https://github.com/djamps/munin-surfboard6141

Debian PXE boot image from scratch

After facing issue after issue with Idera/R1Soft’s PXE and CD-ROM boot media (specifically, ancient kernels unable to obtain network connectivity on modern hardware), I decided to roll my own PXE boot image with the latest Debian stable, including the R1Soft agent and a few other tools. It turned out to be easier than I expected; kudos to the debian-live project.

Boot into a Debian live CD and set up the bootstrap environment. I’m using 7.6.0 (the latest stable at the time of writing).

apt-get install debootstrap squashfs-tools
mkdir -p work/chroot
cd work

If you’re creating the image from scratch, use the following commands.

debootstrap --arch=amd64 stable chroot
cp /etc/network/interfaces chroot/etc/network/interfaces # Needed for network connectivity

If you are re-working an existing image, you can import it instead.

wget http://webserver/debian-live/filesystem.squashfs
unsquashfs -f -d chroot filesystem.squashfs
rm -f filesystem.squashfs

Now we prep the image and chroot.

cp /etc/resolv.conf chroot/etc/resolv.conf # Needed for network connectivity
chroot chroot
mount none -t proc /proc # Mount kernel filesystems inside the chroot
mount none -t sysfs /sys
mount none -t devpts /dev/pts
export HOME=/root
export LC_ALL=C

Now you can do whatever you want to the image by installing packages and modifying configurations. At a minimum you need to install a kernel if you want to load modules such as RAID support after boot. You can also install the kernel headers and compile third-party modules like the R1Soft CDP agent. This is a minimal image, so don’t forget basic filesystem utilities like mdadm if you’re installing the R1Soft agent; I learned that the hard way and had to re-work the image several times. I enabled auto-login on tty1 through tty3 by installing mingetty and modifying inittab, and I also like to redirect syslog to tty4 and disable console logging on tty1 through tty3.
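
As a rough sketch of the in-chroot work (package names assume Debian 7 “wheezy”, and the inittab edit below is illustrative rather than the exact change I made):

apt-get update
apt-get install linux-image-amd64 mdadm mingetty # Kernel, RAID tools and auto-login getty
# Auto-login root on tty1 (repeat for tty2/tty3)
sed -i 's|^1:2345:respawn:/sbin/getty 38400 tty1|1:2345:respawn:/sbin/mingetty --autologin root tty1|' /etc/inittab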

When you think you’re done, clean up and exit the chroot.

apt-get clean
rm -rf /tmp/*
rm /etc/resolv.conf
umount -lf /dev/pts
umount -lf /sys
umount -lf /proc
exit

Compress and package the new filesystem image.

mksquashfs chroot filesystem.squashfs -e boot
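
Optionally, give the result a quick sanity check before shipping it off (not part of the original write-up, just a verification step):

unsquashfs -s filesystem.squashfs # Print superblock info to confirm it's a valid image
ls -lh filesystem.squashfs # Check the compressed size is sane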

Send it off to your HTTP webserver:

scp filesystem.squashfs user@webserver:/var/www/html/debian-live/.

Finally, here’s a working pxelinux.cfg entry, once the kernel and initrd are in the correct place on the TFTP server:

LABEL deblive
        KERNEL /debian-live/debian-live-7.6.0-amd64-standard.vmlinuz
        APPEND initrd=/debian-live/debian-live-7.6.0-amd64-standard.initrd.img dhcp ethdevice=eth0,eth1 boot=live fetch=http://webserver/debian-live/filesystem.squashfs
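
The kernel and initrd can be lifted straight from the stock debian-live standard ISO. The /live/ paths and destination paths below reflect the usual debian-live image layout and are assumptions; adjust to your own TFTP layout:

mkdir -p /mnt/iso && mount -o loop debian-live-7.6.0-amd64-standard.iso /mnt/iso
scp /mnt/iso/live/vmlinuz tftpserver:/srv/tftp/debian-live/debian-live-7.6.0-amd64-standard.vmlinuz
scp /mnt/iso/live/initrd.img tftpserver:/srv/tftp/debian-live/debian-live-7.6.0-amd64-standard.initrd.img
umount /mnt/iso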

Grub with Xen Guests

In order for grub to ‘see’ virtual disks, you have to tell it about them with the ‘device’ command:

# grub
grub> device (hd0) /dev/xvda
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Keep in mind, installing grub on the MBR is not always necessary with Xen PV guests, but is highly useful for V2P conversions.
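
If you prefer to script it, the same thing can usually be done non-interactively with a device.map entry. This assumes legacy GRUB 0.97, which is what the commands above apply to:

echo "(hd0) /dev/xvda" > /boot/grub/device.map
grub-install --no-floppy /dev/xvda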

XenServer 6.2 with Software RAID

Historically XenServer has not supported software RAID out of the box, and this is unchanged in the latest 6.2 release. We can, however, convert it to RAID after installation.

First we set up the 2nd disk by copying the partition table over and creating degraded RAID arrays on it, then copy the data across and build a custom initrd.

sgdisk -R/dev/sdb /dev/sda  # Replicate partition table from /dev/sda to /dev/sdb
sgdisk --typecode=1:fd00 /dev/sdb  # Mark all three partitions as Linux RAID
sgdisk --typecode=2:fd00 /dev/sdb
sgdisk --typecode=3:fd00 /dev/sdb
# Sleep 5 seconds here if you script this...
yes|mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing # Create md0 (root)
yes|mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing # Create md1 (swap)
yes|mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 missing # Create md2 (storage)
# Sleep 5 seconds here if you script this...
mkfs.ext3 /dev/md0 # Create root FS
mount /dev/md0 /mnt # Mount root FS
cp -xR --preserve=all / /mnt # Replicate root files
sed -i 's/LABEL=[a-zA-Z\-]*/\/dev\/md0/' /mnt/etc/fstab # Update fstab for new RAID device
mkdir /mnt/root/initrd-raid && cd /mnt/root/initrd-raid
mkinitrd -v --fstab=/mnt/etc/fstab /mnt/root/initrd-raid/initrd-`uname -r`-raid.img `uname -r`
zcat initrd-`uname -r`-raid.img | cpio -i
sed -i 's/raidautorun \/dev\/md0/raidautorun \/dev\/md0\nraidautorun \/dev\/md1\nraidautorun \/dev\/md2/' init
rm -f initrd-`uname -r`-raid.img
find . -print | cpio -o -Hnewc | gzip -c > /mnt/boot/initrd-`uname -r`-raid.img
rm -f /mnt/boot/initrd-2.6-xen.img
ln -s initrd-`uname -r`-raid.img /mnt/boot/initrd-2.6-xen.img
sed -i 's/LABEL=[a-zA-Z\-]*/\/dev\/md0/' /mnt/boot/extlinux.conf
cat /mnt/usr/share/syslinux/gptmbr.bin > /dev/sdb
cd /mnt && extlinux  --raid -i boot/
sgdisk /dev/sda --attributes=1:set:2
sgdisk /dev/sdb --attributes=1:set:2
cd && umount /dev/md0
sync
reboot

Now, make sure the BIOS forces booting from the 2nd disk. This is VERY important! After the reboot, finish off:

sgdisk -R/dev/sda /dev/sdb  # Replicate partition table from /dev/sdb to /dev/sda
mdadm -a /dev/md0 /dev/sda1
mdadm -a /dev/md1 /dev/sda2
mdadm -a /dev/md2 /dev/sda3  # If this command gives an error, you need to forget/destroy an active SR first
mdadm --detail --scan > /etc/mdadm.conf
xe sr-create content-type=user device-config:device=/dev/md2 host-uuid=[host_uuid] name-label="Local Storage" shared=false type=lvm

Before going into production, try booting from each disk (with the other removed from the boot priorities), then restore the boot priority to normal.
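
Before pulling any disks, it’s also worth confirming that every array has finished resyncing; a quick check like this works (purely a sanity check, not part of the original procedure):

cat /proc/mdstat # All arrays should show [UU]
mdadm --detail /dev/md0 | grep -E 'State|Rebuild'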

Be careful with XenServer patches, especially any patch which requires a reboot: if the initrd image is rewritten (for example, when the kernel is updated), you need to carefully rebuild the initrd for RAID support again, which is NOT covered in this article.

EXT and LVM Local Storage in XenServer

On a new XenServer deployment, EXT storage is the default. Although EXT is nice (you can manipulate raw VHD files and use sparse storage), LVM is faster and more stable. Here is how to switch between storage modes (be sure to back up all data first, as this will wipe everything in the repository):

Use ‘xe sr-list’ and ‘xe pbd-list’ to find your local storage SR and its PBD. Also note the physical device / partition in the PBD’s device-config, so you can re-create the SR on the same device later on.
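
For example (the name-label below assumes the stock “Local storage” SR; adjust to match your own setup):

xe sr-list name-label="Local storage" params=uuid,type,content-type
xe pbd-list sr-uuid=<sr_uuid> params=uuid,device-config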

Unplug the active SR’s PBD (physical block device):

xe pbd-unplug uuid=<pbd_uuid>

Destroy the SR:

xe sr-destroy uuid=<sr_uuid>

Create a new SR of the specified type and location:

xe sr-create content-type=user device-config:device=/dev/<device> host-uuid=<host_uuid> name-label="Local Storage" shared=false type=<lvm|ext>

You can verify the new SR with ‘xe sr-list’, and it should already show up in XenCenter as the default.

Track deleted (but open) file usage

From time to time the du command doesn’t match up with df. This is usually due to deleted files that are still held open by a process. You can total the deleted-but-still-open file usage (in bytes):

lsof | awk '/deleted/ {sum+=$7} END {print sum}'

Or, to simply list all the files and their processes:

lsof | grep deleted
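
A related trick, once you’ve identified the offending process from that output, is to truncate the deleted file through /proc instead of restarting the process. The pid 1234 and fd 5 below are made-up examples; take the real values from the lsof output:

# Truncate the deleted file held open as fd 5 by pid 1234, reclaiming the space immediately
: > /proc/1234/fd/5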

Removing RAID metadata

Sometimes a hardware RAID controller or fakeraid (BIOS RAID) leaves behind metadata that makes it impossible to install Windows or Linux, or the install completes but causes a kernel panic or a 0xb7 blue screen error on the first boot. The only method I could find to delete the metadata *quickly* is to zero out the last 512KB of the disk using the following command:

dd if=/dev/zero of=$YOUR_DEV bs=512 seek=$(( $(blockdev --getsz $YOUR_DEV) - 1024 )) count=1024

Replace $YOUR_DEV with the physical device, such as /dev/sda.

You could just zero the whole disk, but that could take hours. This command executes in less than a second.
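
On systems with newer userland tools there are also purpose-built alternatives that target the signatures directly rather than zeroing the tail of the disk. These are shown as options, not what the original fix used:

wipefs -a /dev/sdX # Erase all known filesystem and RAID signatures (util-linux)
mdadm --zero-superblock /dev/sdX # For Linux md software RAID members
dmraid -rE /dev/sdX # Erase BIOS fakeraid metadata discovered by dmraid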