
§ WiFi Access Point in Debian

If you're lucky enough to have a wireless interface compatible with hostapd, using it in AP mode is trivial. Not sure if yours is compatible? The Linux Wireless wiki has a list of drivers – if yours has a "yes" in the cfg80211 and AP columns you may be in luck.

Go ahead and install hostapd:

apt-get install hostapd

Next, create a configuration file at /etc/hostapd/hostapd.conf with the following contents:
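A minimal WPA2 sketch might look like this (the interface name, driver and channel here are assumptions; adjust them to your hardware, and fill in ssid and wpa_passphrase with your own values):

```ini
interface=wlan0
driver=nl80211

ssid=
hw_mode=g
channel=6

wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=
wpa_pairwise=CCMP
rsn_pairwise=CCMP
```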


I left ssid and wpa_passphrase intentionally blank, you should populate these with your own values. The only thing to watch out for here is that ssid must not contain the characters < or >. All options are documented in this example config file. You can test the config by passing it as an argument to the hostapd binary (see below for notes on DHCP):

hostapd /etc/hostapd/hostapd.conf

Note that launching hostapd in this way will block your terminal session as it runs in the foreground, but it's a good way to check for errors. If you're happy that the SSID you entered is being broadcast and no errors appear, you can press Ctrl+C to quit and use the init script to launch it as a background service. First make sure the init script is pointing at your config file by checking the DAEMON_CONF variable in /etc/init.d/hostapd:
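On Debian, the variable should point at the config file you just created:

```shell
DAEMON_CONF=/etc/hostapd/hostapd.conf
```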


Then you can launch the service in the usual way:

service hostapd start

Hostapd acts as a wireless network switch, so it does not include DNS or DHCP features (if you tried connecting to the SSID earlier, you'll have noticed no IP address was assigned to you). These can easily be added with dnsmasq:

apt-get install dnsmasq

The nice thing about dnsmasq (which generally targets small-scale deployments) is that anything that appears in /etc/hosts becomes a DNS entry, so adding new records can be done in a familiar way.

The DHCP configuration is declared in a file at /etc/dnsmasq.conf, e.g.:
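As a sketch, a small setup might look like this (the wlan0 interface and the 10.0.0.0/24 range are placeholders for your own values):

```ini
interface=wlan0
dhcp-range=10.0.0.50,10.0.0.150,12h
dhcp-option=option:router,10.0.0.1
```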


The configuration above is fairly self-explanatory; a complete example config file is available from the authors. At this point, restarting your services should give you a working access point complete with DNS and DHCP:

service dnsmasq restart
service hostapd restart

Depending on how your wlan0 is configured, you may need to set an IP address for the interface (substitute an address from your chosen subnet):

ifconfig wlan0 10.0.0.1 netmask 255.255.255.0

There are a couple of options here: either set the IP address statically in /etc/network/interfaces, or add the above line to your hostapd init script (more on this below).

There are caveats with restarting hostapd in this way; if it fails to bring up the access point, try rebooting your machine. If the access point works after a reboot, you may need to augment the hostapd init script with additional commands to encourage a proper hardware reset of the WiFi chip during service restarts. A commonly required command is:

rfkill unblock wifi

Some laptops have a hardware switch which enables or disables the WiFi radio; rfkill tracks the state of this switch even if a physical switch isn't present on your machine. Running the command above clears any software locks on the wireless interface (which can be caused by previous executions of hostapd, for example).

You are also able to power cycle the WiFi hardware using:

ip link set wlan0 down
ip link set wlan0 up

Reloading the kernel driver could help, e.g.:

modprobe -r brcmfmac
modprobe brcmfmac

Finally, turning off the power save feature can increase reliability:

iw wlan0 set power_save off

I've created a gist which puts all of this together into a reliable hostapd init script.

At this point, if you wish to gain internet access you'll need to enable IP forwarding onto another network interface if you have one available (e.g. eth0):

sysctl -w net.ipv4.ip_forward=1
iptables -A FORWARD --in-interface wlan0 -j ACCEPT
# Substitute your own wireless subnet for 10.0.0.0/24 below
iptables --table nat -A POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE

The above can be made persistent between reboots with the iptables-persistent Debian package and by uncommenting the following line in /etc/sysctl.conf:

# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1

Note that whatever subnet you choose for your access point, you must use it consistently in all relevant places (dnsmasq config, hostapd init script, iptables etc).

§ Optimising Composer Behind a Forward Proxy

The composer dependency manager for PHP can be incredibly slow behind a forward proxy, especially if that proxy blocks the Git and SSH protocols. Fortunately, there are a number of things you can do to improve performance.

To help diagnose your situation, the first thing to do is run:

composer diag

The following ensures that your proxy environment variables are set. You can place these lines in /etc/profile to apply to all users, or just in a single user's ~/.bashrc (the proxy URL below is a placeholder, substitute your own):

proxy="http://user:password@proxy.example.com:8080"

export http_proxy=$proxy
export HTTP_PROXY=$proxy
export https_proxy=$proxy
export HTTPS_PROXY=$proxy
export ftp_proxy=$proxy
export FTP_PROXY=$proxy
export all_proxy=$proxy
export ALL_PROXY=$proxy
export HTTP_PROXY_REQUEST_FULLURI=false
export HTTPS_PROXY_REQUEST_FULLURI=false

Notice the last two variables, HTTP_PROXY_REQUEST_FULLURI and HTTPS_PROXY_REQUEST_FULLURI, which may not be part of your existing proxy configuration; if your proxy does not support fully-qualified request URIs, setting these to false can help. Once you've loaded these variables into your session, run composer diag again.

By default, Composer first attempts to use the Git protocol for cloning repositories and the timeout is quite generous; this can result in incredibly poor performance during a composer update if that protocol is blocked by your proxy. To always use the HTTPS protocol, simply execute the following command:

composer config --global github-protocols https
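For reference, the resulting file should contain something like:

```json
{
    "config": {
        "github-protocols": ["https"]
    }
}
```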

The changes to your configuration will be saved in ~/.composer/config.json. Finally, if your composer diag recommends disabling xdebug, simply create a copy of your php.ini and comment out or remove the relevant line:

# Find current php.ini file
php --ini

# Create a copy
cp /etc/php/php.ini /etc/php/php-noxdebug.ini

# Open php-noxdebug.ini and comment out or remove the xdebug line,
# which will look something like: zend_extension=/path/to/xdebug.so

# Run composer with the custom ini file
php -c /etc/php/php-noxdebug.ini `which composer` update --optimize-autoloader

# Create an alias in your ~/.bashrc or ~/.profile
alias composer="php -c /etc/php/php-noxdebug.ini `which composer`"

The final trick here is the --optimize-autoloader flag, which generates a classmap for you; this helps to improve class autoloading performance at runtime.

§ Manually Downloading Vagrant Boxes

Vagrant sometimes takes a long time to download boxes from the internet. If this is the case for you, try a download accelerator such as DownThemAll!. The problem, however, is getting Vagrant to recognise the box you've manually downloaded when running vagrant up.

The first thing you need to decide is the name of your box; I'll use ubuntu1204_x86_64 in this example. Your Vagrantfile should contain at least:


Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box     = "ubuntu1204_x86_64"
  config.vm.box_url = ""
end

You can see that I've decided to use one of the Puppet Labs images from the URL above. Copy and paste this URL into Firefox with DownThemAll! installed and you should get a decent transfer speed. Once downloaded, you'll want to extract the .box file into ~/.vagrant.d/boxes/ubuntu1204_x86_64:

cd ~/.vagrant.d/boxes
mkdir ubuntu1204_x86_64
cd ubuntu1204_x86_64
mv ~/Downloads/ .

Now that the .box file is in the right place, you just need to extract it with the usual tar command:

tar xzf

This should leave you with the following:

$ tree ~/.vagrant.d/boxes/ubuntu1204_x86_64/
└── virtualbox
    ├── Vagrantfile
    ├── box-disk1.vmdk
    ├── box.ovf
    └── metadata.json

Finally, you're free to reference the ubuntu1204_x86_64 box in any Vagrantfile on your system, and when you run vagrant up it will pick up your manually cached version.
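You can confirm that Vagrant now recognises the cached box with:

```shell
vagrant box list
```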

§ Building Your First Android App (without an IDE)

This article is a supplement to the official Building Your First Android App guide from Google's Android Developer website. I followed the guide to the letter but could not get the basic “Starting Another Activity” project to compile without adding some additional import statements and configuration. Although there are some instructions on avoiding the use of Eclipse or Android Studio, several assumptions are made which are left out of even the 'complete' code examples. I realise this may be a simple case of Ctrl+Shift+O in Eclipse, but I prefer to understand my fixes rather than have them automatically applied for me.

On Debian there are a few prerequisites:

sudo apt-get install ant openjdk-7-jdk

If you're on a 64-bit machine (i.e. output of uname -m is x86_64) then you'll also need to add the 32-bit architecture libraries:

sudo dpkg --add-architecture i386
sudo apt-get install ia32-libs

Installing the Android SDK and Platform Tools (including the Emulator) and deploying an app to either an AVD or to a hardware device is fairly straightforward. I didn't come across any issues after following the instructions linked to by the guide. I installed the SDK to /usr/local/android-sdk and added two entries to my PATH in ~/.bashrc:

export PATH=/usr/local/android-sdk/tools:$PATH
export PATH=/usr/local/android-sdk/platform-tools:$PATH

There are no instructions for installing the Support Library without an IDE; as it turns out, you need to add the following line to the file:


Then you need to symlink (or copy) the required Support Library:

ln -s /usr/local/android-sdk/extras/android/support/v4/android-support-v4.jar /path/to/project/libs/android-support-v4.jar

In the DisplayMessageActivity class I had to add in the following at the top of the file:

package com.example.myfirstapp;

import android.os.Bundle;
import android.os.Build;
import android.view.MenuItem;
import android.widget.TextView;
import android.content.Intent;
import android.annotation.SuppressLint;

Then it was just a case of ant debug and deploying to the device (-r flag to replace previous version):

adb install -r bin/MyFirstApp-debug.apk

Prior to doing the above, ant debug was displaying several error messages caused by the missing imports and configuration described here.

I've created the Git repository skl/android-training to show the project in its entirety.

§ Installing Crunchbang Linux on a Mid 2011 MacBook Air

After attending a lecture by Richard Stallman at the University of Lincoln, I felt somewhat compelled to "liberate" my MacBook Air by dual-booting a flavour of the GNU/Linux operating system. I read various articles proposing different approaches, with Ubuntu and rEFIt being seemingly the easiest/popular choice. The Unity interface is not my preference however, so I decided to look at other distributions with lightweight window managers that were known to work on the MacBook Air. An excellent set of notes popped up in one search suggesting Crunchbang Linux, which turns out to be based on Debian and Openbox – perfect!

The article linked above by Erik Schneider was straightforward to follow; my experience only differed in a couple of places. Firstly, I created a 40GB partition rather than 8GB, as this was to become my new primary development environment.

Secondly, after the installation of the Crunchbang OS, the live version of Crunchbang (which booted from a USB image) was entirely useless. Although it did manage to boot and reach the desktop, every menu option I tried (including launching the terminal) failed with various error messages. At this point I instead burned a Knoppix DVD (which includes the 64-bit OS as opposed to the 32-bit only CD) and booted it with the flag knoppix64 which you enter when the boot: _ prompt appears. Using this approach I managed to continue with the chroot portion of the notes and install the required packages with apt-get. The grub package versions did not require any customisation as Crunchbang is now based on Debian Wheezy (whereas it was based on the earlier Debian Squeeze in the article). Getting the WiFi adapter to work is as easy as apt-get install firmware-brcm80211.

Finally, I added a few lines to ~/.config/openbox/autostart:

## 1-finger tap left-click, 2-finger tap right-click and 3-finger middle-click
synclient TapButton1=1 TapButton2=3 TapButton3=2 &

## Load custom key map, baselining to GB layout first
( sleep 1 && setxkbmap gb && xmodmap ~/.Xmodmap ) &

## Dim the LCD (apt-get install xbacklight for dependency)
( sleep 1 && xbacklight -set 10 ) &

## Slightly increase the gamma (roughly calibrated to 1.8)
( sleep 1 && xgamma -gamma 1.150 ) &

Here's my ~/.Xmodmap, which suits the UK MacBook Air layout. There are a couple of caveats: the # symbol is mapped to the § key because Alt+3 behaves differently in Crunchbang (the § symbol is still available by pressing Shift+§), and I've remapped Caps Lock to Control for my Vim tendencies:

keycode  11 = 2 at 2 at twosuperior oneeighth twosuperior
keycode  48 = apostrophe quotedbl apostrophe quotedbl dead_circumflex dead_caron dead_circumflex
keycode  49 = numbersign section numbersign section bar bar bar
keycode  51 = backslash bar
keycode  66 = Caps_Lock NoSymbol Caps_Lock
keycode  94 = grave asciitilde
add lock = Caps_Lock
remove lock = Caps_Lock
add control = Control_L Control_R
remove Control = Control_L
keysym Caps_Lock = Control_L
add Control = Control_L

Be sure to reboot after the changes to the autostart script to make sure the keymap loads correctly. To get the keyboard backlight to work and to map it to the keyboard hotkeys, simply follow this article from the Crunchbang forum.

§ Vagrant: Bridged Network Configuration on SLES 11

Vagrant simplifies the creation of virtual machines by providing a layer atop, in my case, VirtualBox. Static IP address assignment through bridged network adapters is not yet available by default in the Vagrantfile configuration (this article is based on Vagrant 1.2.2). To achieve this on a SLES 11 VM, you need to manually configure the interface. Firstly, inform Vagrant that you wish to use custom bridged networking by adding the following to your Vagrantfile:

# Set "XXXXXX" to your desired MAC address suffix for this VM
config.vm.network :public_network, :auto_config => false, :mac => "080027XXXXXX"

# Run the provisioning script below once the VM is online
config.vm.provision :shell do |s|
  s.path = ""
end

The reason that you should also set a static MAC address is so that your network gateway ARP table remains correct even after a vagrant destroy && vagrant up. By default, a new MAC address is generated for you per instance and you would have to wait for the ARP cache to clear before a new route would be established.

As you can see from the above, you need to create a shell provisioning script in the same directory as your Vagrantfile. Populate it with the following:

#!/bin/bash
# This file is designed to be called by `vagrant up`

# Desired static and router IP addresses (placeholders, set your own)
MY_STATIC_IP=""
MY_ROUTER_IP=""

# Desired hostname and fully-qualified domain name
MY_HOSTNAME=""
MY_FQDN=""

# Create the network interface 'eth1' (the /24 prefix is an assumption)
sudo tee /etc/sysconfig/network/ifcfg-eth1 <<EOF
BOOTPROTO='static'
IPADDR='${MY_STATIC_IP}/24'
STARTMODE='auto'
EOF

# Set the default gateway
sudo tee /etc/sysconfig/network/routes <<EOF
# Destination   Gateway           Netmask   Device
default         ${MY_ROUTER_IP}   -         eth1
EOF

sudo /sbin/service network restart

# Set the hostname (Vagrant seems to do this incorrectly for SLES)
sudo sed -i "s/.*/${MY_FQDN}/" /etc/HOSTNAME
sudo tee -a /etc/hosts <<EOF
${MY_STATIC_IP} ${MY_FQDN} ${MY_HOSTNAME}
EOF
sudo hostname -F /etc/HOSTNAME

# Notify the gateway
echo "Waiting a couple of seconds before attempting to ping the gateway..."
sleep 2
ping -Ieth1 -c1 ${MY_ROUTER_IP}

Be sure to set MY_STATIC_IP and MY_ROUTER_IP to suit your network setup. You may also set the desired hostname of the machine by editing MY_HOSTNAME and MY_FQDN as the Vagrantfile option config.vm.hostname is not currently compatible with SLES 11.

There is a thread about adding support for static IP address assignment into Vagrant on Github.

§ PHP-FPM: Child exited with code 70

I came across an issue today on one of the production servers at work where the php-fpm.log contained entries similar to the following:

[09-Jul-2013 14:17:42] WARNING: [pool www] child 18868 exited with code 70 after 211.761716 seconds from start

I managed to reproduce this on PHP 5.4.13 and 5.4.14 and nginx 1.2.7 and 1.2.8. Nothing was appearing in the PHP error log or syslog. There was a related entry in the nginx error log:

recv() failed (104: Connection reset by peer) while reading response header from upstream

This actually ended up being caused by an ini_set('memory_limit', '16M'); removing it and relying on the 128M set in php.ini instead resolved the issue.

By the way, I was unable to find any reference to the exited with code 70 message on Google. After digging through the PHP source code, it appears to equate to the constant FPM_EXIT_SOFTWARE defined in fpm.h; it's used in fpm_main.c, fpm_process_ctl.c and fpm_unix.c. I thought I'd stick these references on here in the hope that they may help someone who's searching for the same error.

§ Arch Linux ARM with unified /bin and /lib for Raspberry Pi

The latest Arch Linux disk image for Raspberry Pi (2013-06-06) contains a couple of key changes that may require a re-image to your SD card. I was running an image of Arch from what must have been a little over a month ago when I attempted a pacman -Syu today. This failed with the error:

error: failed to commit transaction (conflicting files)
filesystem: /bin exists in filesystem

This error is documented on the Arch wiki; however, the suggested fix starting with pacman -Qqo /bin /sbin /usr/sbin | pacman -Qm - failed for me with an out of memory error. I tried to execute the command with only /bin, but it was taking so long that I ended up dd'ing the SD card with the 2013-06-06 image. The new image comes with the unified /bin and /lib directories out of the box, so this is no longer an issue.

Secondly, it's been a while since I've been in the Arch shell and discovered /etc/rc.d is no more, or at least it isn't what it used to be. Arch have shifted to the increasingly popular systemd with a rather nice explanation over on this forum post.

§ Improving Xen Performance on Debian

There are several Xen Best Practices set out on the Xen wiki; however, they did not directly translate to Xen 4 on Debian Squeeze. The general idea is to dedicate some resources to dom0 (in this case an entire CPU core) so that it can cleanly deal with all the domU IO. Given a multi-core CPU, you can specify which cores each domU should use by adding the following line to each <domain>.cfg file:

cpus = '2-3'

The above will set the domU to use CPU cores 3 and 4 (the cpus parameter is zero-indexed). By setting an even distribution of assigned CPU cores amongst your domUs, you are helping to spread the load. Taking this a step further, if you assign all of your domUs a non-zero value for the parameter above, you can dedicate a single CPU core to dom0 by adding the following to /etc/default/grub:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M,max:512M dom0_max_vcpus=1 dom0_vcpus_pin"

Then execute update-grub and reboot the host machine. Incidentally, this will also fix the amount of allocated RAM to dom0 at 512MB (disabling ballooning). Be aware, some guides recommend adding nosmp to the boot flags, this is a Bad Idea™ and will render multi-CPU domUs nearly unusable. You can also execute the following after the reboot to prioritise the dom0 CPU cycles:

xm sched-credit -d Domain-0 -w 512

You need to execute the above on each boot, so it might be an idea to create an init script; the Xen wiki recommends that it should be one of the last things to execute in the boot process.

§ Expanding an LVM Physical Volume

I recently purchased a 256GB SSD to replace the 120GB drive that I originally built into my Debian Xen host. Having powered off the machine, connected the new drive and booted into the Knoppix Live CD, I copied the data using dd. The following will execute the drive-to-drive clone as a background process:

fdisk -l # identify your source (if) and destination (of) devices
dd if=/dev/sdb of=/dev/sda bs=32M &

Note the blocksize of 32M; this increased the data transfer rate from 40-50MB/s up to 240-250MB/s in my case. The clone took approximately ten minutes; it would have taken nearly an hour with the default blocksize. You can inspect the progress of the operation by signalling the process ID which was output by the dd command:

kill -SIGUSR1 <pid>

As I have the maximum of three primary partitions plus one extended partition (boot, root, swap and lvm respectively), I opted to grow the extended partition and the LVM physical volume that resides within it. I first rebooted back into the Knoppix Live CD after removing the old drive (otherwise you may see Found duplicate PV error messages).

The next step was to expand the fourth (extended) partition of the new drive to fill the available space:

parted /dev/sda
u s # Change Unit to Sectors
print # Display current partitions

At this point take note of the starting sector of the fourth partition (37611518 in my case) and the final sector of the new, larger disk (500117503 in my case; the cloned partition table will still show the old end sector, 231252993). Now delete the fourth partition and recreate it with the same starting sector (this ensures the data within the partition remains readable) and set the new final sector:

rm 4
mkpart extended 37611518s 500117503s

Now do the same for the LVM partition:

rm 5
mkpart logical 37611518s 500117503s
toggle 5 lvm

The quit command will take you back to the normal prompt. It is recommended that you reboot the machine again at this point. Now you can see the current reported size of your LVM physical volume using the pvscan binary. To increase the size of the physical volume and take advantage of the newly available space, execute the pvresize binary:

pvresize /dev/sda5

Run pvscan again and the new size should be displayed. I use the vgs binary to give a nice readout of the available space within the volume group. Note this hasn't increased the size of any of the logical volumes within the volume group, only the physical volume itself.
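If you also want to grow one of the logical volumes into the new space, the usual follow-up is lvextend plus a filesystem resize. A sketch, assuming a volume group named vg0 containing a logical volume named data with an ext4 filesystem (adjust the names to your own setup):

```shell
# Allocate all remaining free extents in the volume group to the LV
lvextend -l +100%FREE /dev/vg0/data

# Grow the ext4 filesystem to fill the resized LV
resize2fs /dev/vg0/data
```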

§ The 5-minute LAMP Stack

With the release of Debian 7.0 "Wheezy", PHP 5.4 is available through apt-get from the word go. This means it now only takes a few minutes to set up a lightweight web server capable of dealing with the latest web applications with very little configuration. I use the term LAMP loosely in this article because it's easier to pronounce than LNMP.

My usual requirements for a web server are NGINX, PHP with FPM and an opcode cache, MySQL and outbound email. These are now all available in the Wheezy repositories, so all we need are a few commands.

Firstly, for a fresh build you'll want to get the basics right:

apt-get install vim htop ntp nscd

echo "your.fullyqualified.hostname" > /etc/hostname
hostname -F /etc/hostname

dpkg-reconfigure tzdata

Grab the LAMP packages:

apt-get install nginx php5-cli php5-fpm php5-mysqlnd php5-xcache php5-curl mysql-server mysql-client

This is always good practice for your MySQL server:

mysql_secure_installation
Setup SMTP, see Linode's simple guide for step-by-step instructions on the second command here:

apt-get install exim4-daemon-light mailutils
dpkg-reconfigure exim4-config

Now let's lock down the server a bit. You may like to use this gist as a base for your firewall rules; be sure to read through them though, or you'll lock yourself out:

apt-get install iptables-persistent fail2ban

vim /etc/iptables/rules.v4 # Add your rules to this file
service iptables-persistent restart

Time to wire PHP-FPM and NGINX together:

# Disable fix_pathinfo
sed -i "s/^;cgi.fix_pathinfo=1/cgi.fix_pathinfo=0/" /etc/php5/fpm/php.ini

# Tell FPM to use TCP
sed -i "s/^listen = \/var\/run\/php5-fpm.sock/listen =" /etc/php5/fpm/pool.d/www.conf
sed -i "s/^;listen.allowed_clients/listen.allowed_clients/" /etc/php5/fpm/pool.d/www.conf

service php5-fpm restart

mkdir /etc/nginx/global

Create the file /etc/nginx/global/fpm.conf and use the following for its contents:

location ~* \.php$ {
    try_files                $uri =404;

    fastcgi_split_path_info  ^(.+\.php)(/.+)$;
    include                  fastcgi_params;
    fastcgi_index            index.php;
    fastcgi_param            SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_intercept_errors on;
    fastcgi_pass   ;
}

Nearly there, create your server block in /etc/nginx/sites-available/

server {
    listen 80;
    root         /var/www/;
    index        index.php;
    include      /etc/nginx/global/fpm.conf;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
}

Finally, symlink your configuration file and reload NGINX:

ln -s /etc/nginx/sites-available/ /etc/nginx/sites-enabled/
service nginx reload

That's it! As long as your DNS is pointing in the right place you should be good to go.
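Before pointing real traffic at it, you can sanity-check that PHP requests are reaching FPM by dropping a phpinfo() file into the document root and visiting /info.php in a browser (the /var/www path here is an assumption; match it to the root directive of your server block). Remove the file once you're done:

```shell
# Write a one-line PHP script into the document root
echo "<?php phpinfo();" > /var/www/info.php
```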

The setup above has been designed to give decent performance for PHP web applications. The htop binary gives a much nicer view of resource usage, whilst ntp keeps your clock in sync and nscd will cache DNS queries made by your web applications.

The PHP Fast-CGI Process Manager allows the web server to delegate run-time compilation of your PHP scripts to a separate process, unlike mod_php5 for Apache. The MySQL Native Driver for PHP is the recommended option above mysqli and is part of the PHP core. Xcache will give you opcode caching, you could also use APC or Zend Opcache.

Hopefully all of the above will decrease server load and latency, and maximise throughput on your server by making the most of the available resources.

§ Building a Xen Hypervisor

A hypervisor or virtual machine manager (VMM) is a piece of software, firmware or hardware that creates and runs virtual machines.


I started off by reading about which operating systems were best suited to what I would've called the Xen host. I now know the correct terminology to be "Domain 0" or dom0 – as hypervisor terminology dictates that each machine is a domain (including the host). It seems you can run Xen on a variety of Linux distributions but I noticed quite a bit of documentation and articles focusing on Debian. For this reason I picked Debian Squeeze.

The primary hard drive on my target system is 120 GiB, this doesn't leave much room for the VMs. Luckily the Xen wiki recommends LVM which is trivial to setup using the Debian OS installer during dom0 creation. With the LVM partition in place on the drive I can easily grow and shrink partitions as requirements for each virtual machine changes over time. This allows me to make the most of what little space I have available on the drive.

Initial installation and configuration of Xen with Xen Tools can be done by following the guide on the Debian wiki.

One concern that struck me was that VMs would be able to access my home LAN, as everything is connected to the same router. To solve this issue I set up four LAN subnets with 802.1q VLAN tagging in order to create network segregation. The following steps are not required to set up Xen; they just happen to suit my particular network setup and requirements.

Setup the VLANs on your router and make note of the tag IDs (10, 20, 30 and 40 in my case).

Enable the 802.1q kernel module and make it permanent:

modprobe 8021q && echo '8021q' >> /etc/modules

Once you've done that you'll need to install the vlan package:

apt-get install vlan

Set up /etc/network/interfaces with as many of the above VLANs as you require for the dom0 and domUs. Here's mine (shortened for display):

# The local loopback interface
auto lo
iface lo inet loopback

# Instruct all interfaces to startup automatically
auto eth0
auto eth0.20 xenbr20

# The only physical interface (Ethernet Port 0)
iface eth0 inet dhcp

# VLAN ID: 20
iface eth0.20 inet manual
    vlan_raw_device eth0

# The Xen Bridge allowing VMs to access VLAN 20
iface xenbr20 inet static
    bridge_ports eth0.20
    bridge_stp on
    bridge_maxwait 10

In order to give the VMs within the VLANs internet access you'll need to enable IPv4 forwarding and NAT on dom0:

echo 1 > /proc/sys/net/ipv4/ip_forward
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

Ensuring that iptables retains these rules on boot:

apt-get install iptables-persistent
iptables-save > /etc/iptables/rules

With all of this setup you can begin creating VMs. Here's an example using xen-tools to create a VM with hostname 'newvm':

# Each domU should have a different hostname; --lvm specifies the logical
# volume group, --dhcp allows DHCP, --dist sets the distribution to Debian
# Squeeze and --passwd prompts for a root password during creation
xen-create-image --hostname=newvm \
    --memory=256mb \
    --vcpus=1 \
    --lvm=vg0 \
    --dhcp \
    --pygrub \
    --dist=squeeze \
    --passwd

Here's a quick bash script that makes it easy to delete VMs:

#!/bin/bash
[ -z "$1" ] && { echo "Specify hostname!"; exit 1; }

xm destroy $1
xm delete $1
xen-delete-image --lvm vg0 $1       # You may wish to change the LVM group here

Incidentally, LVM has an excellent snapshot feature which I've wrapped up into this gist.

§ A Website in a Single HTTP Request

This website has no assets. There are no CSS files, no external JavaScript source files, no backend scripting engine and no database storage system. It's just one HTML file and that's it. I've decided to get back to basics with my personal site; maybe because I spend my working life optimising complex systems, it's a sort of joyful escape.

There's something about plain text that's always appealed to me. Besides character encoding, there's not much really to worry about. I know what I'm typing now will display correctly on your laptop, phone or tablet, regardless of your browser, because there isn't really anything to go wrong. I guess it's like my first car in a way: forget about the electric windows, heated windscreen and all those other fancy gadgets and you still have a car.

I must admit that resisting the temptation to use CSS3 transitions or pure JavaScript to spice up the interactivity layer is somewhat difficult. It does however provide a nice clean scope for the design: either I can achieve it with raw HTML and a couple of lines of CSS, or I'm simply not going to implement it here. I want to focus on the content of this site and share what I'm learning as it happens. Redesigning has been on my to-do list for too long.

All content on this site is free to share, just like the web should be.