technical

Posts containing technical information.

Routing by port number

Due to a very restrictive firewall at the CHPC, I need to run a VPN to get access to things like email, Jabber and SSH. This, however, degrades my web browsing experience, since all my traffic gets tunnelled as well. I therefore wanted a setup where only ports which are blocked get tunnelled through the VPN, while everything else goes out normally.

The routing part was fairly straightforward: it consists of an iptables rule to mark certain packets, and an alternate routing table for these marked packets. I first created a name for the new table by adding the following to /etc/iproute2/rt_tables.

10  vpn

I then added a default route to the new table specifying the IP address of the VPN server and the VPN interface, and a rule to use this table for packets marked by iptables.

ip route add default via 10.8.0.3 dev tun0 table vpn
ip rule add fwmark 0x1 table vpn

The following iptables rule will mark packets destined for the listed port numbers. Note that this applies to packets originating from the firewall host itself; if you want it to apply to packets forwarded for other hosts, the rule must be added to the PREROUTING chain instead (as shown below).

iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 22,995,587,5223 -j MARK --set-mark 0x1
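
To mark packets forwarded on behalf of other hosts, the same rule goes in the PREROUTING chain instead. A minimal sketch, assuming the LAN-facing interface is eth0:

iptables -t mangle -A PREROUTING -i eth0 -p tcp -m multiport --dports 22,995,587,5223 -j MARK --set-mark 0x1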

The actual routing worked, but packets were being sent with the wrong source IP. I therefore needed to NAT packets going out on the VPN interface (the address used below is the local IP of the VPN connection).

iptables -t nat -A POSTROUTING -o tun0 -j SNAT --to 10.8.0.4
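
If the local VPN address changes between connections, MASQUERADE should work just as well and avoids hard-coding the address (an untested alternative):

iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE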

I could then see packets going out on the VPN interface with the correct source IP as well as the replies, but it still wasn't working. I eventually discovered that rp_filter must be disabled in order for this to work.

echo 0 > /proc/sys/net/ipv4/conf/tun0/rp_filter

Photo blogging via email with Drupal

The one thing missing from my posting by email setup was support for images. The Mailsave module has finally been updated for Drupal 6, and so I can now submit attachments with email posts. The one shortcoming is that files are simply added to posts as normal attachments, and so images aren't automatically displayed. I therefore have to manually insert images in the body of the post, but I actually prefer this since it's a simpler system and gives me more control.

I also needed a way of resizing images on my phone since they are too big. I found the Nokia Image Editor1 which seems to work fairly well, although it only allows resizing to specific resolutions.


  1. It does work on my phone even though it's supposedly only for the Nokia 3250. 

Cisco un-Clean Access

The [CHPC][] installed a new network this past weekend as part of the [SANReN][] project. The new network consists of [Cisco][] equipment, including their [NAC][] (or "Clean Access") system. This requires all clients to authenticate before they are allowed access to the network, and can also enforce a configured security policy (such as requiring operating system updates and anti-virus).

The system works as follows. By default, the ports on the switch are in an "unauthenticated" [VLAN][]. When a client is connected, it is provided with an IP address (via [DHCP][]) in an "unauthenticated" subnet. The system then presents a captive portal which requires the user to authenticate with a username and password using their browser. If the authentication is successful, the port is moved to a different VLAN (depending on the user's access level), and the switch briefly disconnects the link which causes the client to negotiate a new IP address (in a different subnet).

Before the portal presents the login page it requires that a [Java applet][] be run on the client. The applet gathers various bits of information about the client (including the operating system) and submits this information to the portal. (I assume that the portal uses this information to determine what policies must be enforced. In our setup, Windows machines must have the Clean Access Client installed, while Linux and Mac OS X machines are simply allowed access.) The portal then presents the login page.

Being a geek, I wasn't very happy to go through this rigmarole every time I connected to the network. (I also couldn't use my [normal browser][konq] since the applet didn't work in it.) So I set out to automate the process. Initially I tried to script everything (including the Java applet) but then I noticed that the output of the applet wasn't sent with the login form submission. The only other information the form contained was a session key and random string, both of which were present on the [HTML][] page which contained the applet. A manual test confirmed that the login page could be submitted successfully as long as the session key and random string were correct — the applet could be bypassed.

I quickly scripted the login process using a [bash][] script and [wget][]. I then installed it in <code>/etc/network/if-up.d</code> after adding some logic to only execute if the current IP address was on the unauthenticated network. The result is that I can plug in the cable, and my machine automatically authenticates to the system.
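
The approach boils down to something like the following. This is only a simplified sketch: the portal URL and form field names below are placeholders for illustration, not the real Clean Access ones.

#!/bin/bash
# Placeholder portal URL and credentials
PORTAL="https://cleanaccess.example.com/login"
USERNAME="myuser"
PASSWORD="secret"

# Fetch the login page and pull out the session key and random string
# (the field names are guesses for illustration)
PAGE=$(wget -q -O - --no-check-certificate "$PORTAL")
KEY=$(echo "$PAGE" | sed -n 's/.*name="key" value="\([^"]*\)".*/\1/p')
RAND=$(echo "$PAGE" | sed -n 's/.*name="rand" value="\([^"]*\)".*/\1/p')

# Submit the login form directly; the applet output is never checked
wget -q -O /dev/null --no-check-certificate \
    --post-data "username=$USERNAME&password=$PASSWORD&key=$KEY&rand=$RAND" "$PORTAL"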
 
While searching for information about the Clean Access system, I came across this [Slashdot article][] about a guy who was suspended from university for bypassing the Clean Access checks. I only realised last night that this is exactly what my script does![^1] I haven't tested it on Windows yet, but the only possible change I can think of is to change the [user agent][]. Seriously Cisco, the fact that I managed to bypass the applet simply by submitting the login form programmatically is ridiculous.
 
I have attached my script to this post. The way in which I have parsed the HTML page is rather ugly and likely to only work on this specific version of Clean Access. I plan to rewrite it in [Python][] sometime.
 
<strong>Update:</strong> I have rewritten the script in Python, which should be a bit more solid since it parses the HTML using a [DOM][]. The script requires [libxml2dom][] and [ipy][]. After configuring the parameters it can be dropped in <code>/etc/network/if-up.d</code>[^2] where it should run automatically.
 
[^1]: Note that it doesn't bypass the authentication: you still need a valid account in order to gain access.
 
[^2]: Make sure not to use a dot in the filename though.
 
*[CHPC]: Centre for High Performance Computing
*[SANReN]: South African National Research Network
*[NAC]: Network Admission Control
*[VLAN]: Virtual LAN
*[DHCP]: Dynamic Host Configuration Protocol
*[HTML]: HyperText Markup Language
*[DOM]: Document Object Model
 
[chpc]: http://www.chpc.ac.za/
[sanren]: http://www.meraka.org.za/sanren.htm
[cisco]: http://en.wikipedia.org/wiki/Cisco_Systems
[vlan]: http://en.wikipedia.org/wiki/Virtual_LAN
[dhcp]: http://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
[nac]: http://en.wikipedia.org/wiki/Cisco_NAC_Appliance
[java applet]: http://en.wikipedia.org/wiki/Java_applet
[konq]: http://en.wikipedia.org/wiki/Konqueror
[Slashdot article]: http://it.slashdot.org/article.pl?sid=07/04/27/203232
[user agent]: http://en.wikipedia.org/wiki/User_agent
[python]: http://en.wikipedia.org/wiki/Python_(programming_language)
[html]: http://en.wikipedia.org/wiki/HTML
[wget]: http://en.wikipedia.org/wiki/Wget
[bash]: http://en.wikipedia.org/wiki/Bash
[dom]: http://en.wikipedia.org/wiki/Document_Object_Model
[libxml2dom]: http://www.boddie.org.uk/python/libxml2dom.html
[ipy]: http://russell.rucus.net/2008/ipy/

Slipstreaming Windows XP SP3 in Linux

Unfortunately Windows is still a necessary evil sometimes: I keep a Windows virtual machine for the occasions when it's unavoidable, and I still give my friends Windows tech support. I like to do things properly though, and so I wanted to create a Windows XP install CD with Service Pack 3 slipstreamed in1. I had two CDs to do, and slipstreamed the first one using a Windows VM, but then got curious and wondered if I could do it without Windows.

The answer is yes, by using Wine to run the service pack installer. I followed this blog post (which was interesting, since it's in French), but then found another blog post which explains the process in English. The steps are as follows:

  1. Copy the contents of the original CD to the hard drive.
  2. Extract the service pack using cabextract.
  3. Use Wine to run the service pack installer.

    wine ~/sp3/i386/update/update.exe /integrate:~/xp/
    
  4. Use geteltorito to extract the boot image from the original CD.

  5. Make sure that all the filenames are upper case.

    convmv -r --upper --notest ~/xp/*
    
  6. Create the new CD image. I did this in K3b with the following settings.

    • Boot emulation: none
    • Boot load segment: 0x7c0
    • Boot load size: 0x4
    • Generate Joliet extensions
    • Omit version numbers in ISO9660 filenames (nothing else enabled under "ISO9660 Settings")
    • ISO Level 1
  7. Test in a virtual machine.

Windows setup seems to be quite particular about the ISO9660 settings and the upper case filenames, so if the CD doesn't boot, check these settings first.
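
For reference, roughly the same image can probably be built from the command line with geteltorito and genisoimage. This is only a sketch along the lines of the settings above (1984 is 0x7c0 in decimal); I used K3b, so the exact flags may need tweaking.

geteltorito -o ~/xp/boot.img original.iso
genisoimage -o xpsp3.iso -b boot.img -no-emul-boot -boot-load-seg 1984 -boot-load-size 4 -iso-level 1 -J -N -V XPSP3 ~/xp/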


  1. This integrates the service pack into the install CD so that a fresh installation is already updated. 

Mobile interface to Vodacom4me and MyMTN

Vodacom4me and MyMTN allow you to send free SMSs from a computer. Unfortunately those sites are not accessible from a cellphone. I came across a site which provides a mobile interface for Vodacom4me and MyMTN1. This means that you can send SMSs from your cellphone for the cost of the GPRS/UMTS data required to access the site. I have been using this for quite a while, and it works fairly well.

However, there are a few aspects of the site which I don't like, and so I wrote my own version which performs the same function with the following extra features:

  1. Uses cookies to store login data instead of a URL with parameters which needs to be bookmarked (although it will fall back to this method if the user agent doesn't support cookies).
  2. Submits forms using POST instead of GET (but will fall back to GET if the user agent doesn't support POST).
  3. Allows multiple recipients (although only Vodacom4me supports this).
  4. Specifies the maximum message length in the textarea so that phones which support it can show how many characters are left.2
  5. Automatically logs into Vodacom4me/MyMTN again if session has expired.3
  6. Cleaner, less cluttered interface (mainly optimised for my phone ;-) ).
  7. Accessible over HTTPS for extra security.

The site is available at http://m.mene.za.net/ (or with HTTPS). Obviously the restrictions enforced by Vodacom4me and MyMTN still apply. Vodacom4me allows 20 SMSs per day to Vodacom numbers for Vodacom subscribers only. MyMTN allows 5 SMSs per day to MTN numbers for anyone. The source code is available for anyone who is interested (and brave enough).


  1. There is also an interface for CellC's site, but mine does not implement this. 

  2. This is technically not allowed by the HTML specification, but it works on my phone. 

  3. This allows the message composition page to be saved on phones which support this (like my Nokia E70) instead of reloading it every time a message is composed. 

Publishing SSH and GPG keys using DNS

I was looking through a list of DNS record types, and noticed the SSHFP and CERT records. I then proceeded to implement these in my domain... just because I can ;-)

SSH Host Keys

The SSHFP record is used to publish the fingerprint of a host's SSH key. When an SSH client connects to a server for the first time, it can verify the host's key by checking for this DNS record. The format of the record is specified in RFC 4255, but there is also a tool which will generate the records for you.

$ sshfp -s mammon.gorven.za.net
mammon.gorven.za.net IN SSHFP 1 1 5e6772b6962f3328a0d73f7765097b7622f21447
mammon.gorven.za.net IN SSHFP 2 1 00e59b1843421f13d75e21abb06bf032a6e60b8b

The SSH client needs to be configured to check these records. Specifying "VerifyHostKeyDNS ask" in ~/.ssh/config will make the client look for SSHFP records, but will still prompt you to accept the key. (It will output a message saying that it found a matching key.) Specifying "VerifyHostKeyDNS yes" will skip the prompt if the record exists and matches the key presented by the server.
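
The option can also be given on the command line for a once-off test:

ssh -o VerifyHostKeyDNS=ask mammon.gorven.za.net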

GPG Keys

The CERT record is used to publish public keys or fingerprints. It can be used for PGP, X.509 or SPKI keys. It is specified in RFC 4398, but there is very little documentation about it other than this blog post I found. To generate records you need the make-dns-cert tool which is part of GnuPG. It isn't included in the Ubuntu package however, and so I had to compile GnuPG from source.

To determine the name of the record to use, convert your email address into a domain name by replacing the at sign with a dot1. To publish your entire public key, run the tool as follows.

$ make-dns-cert -k ~/pubkey -n michael

The first parameter specifies the file containing your public key in binary format, and the second parameter specifies the domain name to use. To publish a reference to your public key, run the tool as follows.

$ make-dns-cert -f BF6FD06EA9DAABB6649F60743BD496BD6612FE85 -u http://michael.gorven.za.net/files/mgorven.asc -n michael

The first parameter specifies the fingerprint of your key, and the second parameter the URL at which the public key can be found. It is also possible to only publish the fingerprint or only the URL. Simply add the record which the tool outputs to your zone file2.
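
Once published, the record can be checked with dig (the name matches the converted email address):

dig michael.gorven.za.net CERT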

There is also another method to publish GPG keys called PKA. The only documentation I can find is a specification in German linked from the blog post mentioned above. I still managed to set it up though. This method uses a TXT record (similar to SPF). Here is my record.

michael._pka.gorven.za.net. TXT
"v=pka1\;fpr=BF6FD06EA9DAABB6649F60743BD496BD6612FE85\;uri=http://michael.gorven.za.net/files/mgorven.asc"

This specifies the fingerprint and URL, just as with the second CERT method above. In order to get gpg to check DNS for keys, you need to specify "--auto-key-locate cert,pka" on the command line or in the configuration file.
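
As a usage example, something like the following should make gpg fetch the key via DNS when it doesn't already have it:

gpg --auto-key-locate cert,pka --recipient michael@gorven.za.net --encrypt message.txt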


  1. So john@example.com becomes john.example.com

  2. It should be possible to clean the record up by using mnemonics, but I couldn't get nsd to accept it and so just left it as is. 

OpenVPN through an HTTP proxy server

I discovered that OpenVPN supports connections through an HTTP proxy server. This makes it possible to establish a VPN from a completely firewalled network where the only external access is through a proxy server1. It takes advantage of the fact that SSL connections are simply tunnelled through the server and aren't interfered with like unencrypted connections.

The server setup is almost identical to a normal configuration, except that the tunnel must use TCP instead of UDP (since the proxy server will establish a TCP connection). Since most proxy servers only allow SSL connections to certain ports, you will also need to change the port number that the server listens on. The best choice is 443, since that is used for HTTPS, but if the machine is also running a web server on port 443, then 563 is probably the next best option. This port is assigned to NNTPS, and is allowed by the default Squid configuration. The following two lines enable TCP connections and change the port number.

proto tcp-server
port 563

The client configuration is also very similar. It simply needs to enable TCP connections, set the correct port number, and specify the proxy server.

remote vpn.example.com 563
http-proxy cache.saix.net 8080
proto tcp-client

OpenVPN can also authenticate to the proxy server using either Basic or NTLM authentication. To enable this add "stdin basic" or "stdin ntlm" to the http-proxy line. This will prompt for the username and password when the VPN is started. For more details see the OpenVPN documentation.
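
For example, with Basic authentication the client's proxy line becomes the following (same proxy details as above):

http-proxy cache.saix.net 8080 stdin basic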


  1. I am not commenting on the ethics of this. If you need to resort to this method, you probably shouldn't be doing it. 

Python Decorators

For my Masters project I need a method by which the user can specify which functions should be run on an SPE1. This method should be simple, clear and easy to turn on and off. I stumbled upon a blog post a little while ago (I think it was this one) which explained decorators in Python, and they are the perfect tool for the job. Decorators are used to transform functions, but without changing the function itself or the calls to it.

def spe(func):
    # Decorator: replace func with a wrapper which compiles and runs it
    # on an SPE instead of calling it directly. compile() here stands for
    # the project's own SPE compilation routine, not the builtin.
    def run(*args):
        return compile(func, *args)
    return run

@spe
def sub(a, b):
    return a - b

print sub(2, 4)

The spe function is the actual decorator. The @spe line applies the decorator to the sub function. Implicitly, the following declaration is made:

sub = spe(sub)

The sub function is being wrapped by the spe function, and so all calls to sub (such as the print line) will use the wrapped function instead. The decorator creates and returns a new function called run which will (eventually) cause the original function to be compiled and executed on an SPE. This means that running a function on an SPE will be as simple as adding @spe before the function declaration2, without having to change the way in which the function is called. Turning it off is as simple as commenting out this line, and it is fairly clear as to what is happening.


  1. Trying to make this decision automatically would be a massive project in itself and would probably be worse than a human decision. 

  2. There will probably be some restrictions on what the function may contain, but that's a different matter. 

My Drupal setup

Seeing that I've spent countless hours setting up my Drupal installation, I thought that I would share this with others and document it for future reference. Drupal is an extremely powerful CMS which can be used to create a wide variety of sites. The disadvantage of this is that it requires a fair amount of work to set up a straightforward blog, which involves installing and configuring numerous modules.

Installation

Since there is no Ubuntu package for Drupal 6, I created my own package based on the drupal5 one. I set it up as a virtual host in Lighttpd by simply symlinking the domain name to /usr/share/drupal6. I created a MySQL database for the site and went through the Drupal install process. Since I'm using a multi-site installation, I also needed to alias the /files directory for each site.

$HTTP["host"] == "michael.gorven.za.net" {
    alias.url = ( "/files/" => "/home/mgorven/public_html/" )
}

Clean URLs

Clean URLs allow one to have URLs like /about instead of /index.php?q=about. This however requires that the web server rewrite URLs from the former to the latter. Drupal includes an htaccess file containing rewrite rules for Apache, but not for Lighttpd. Lighttpd does have a rewrite module, but it doesn't support the conditions that Drupal needs (such as checking whether a file exists).

Lighttpd does however have a module which allows one to add scripts to the request process written in [Lua][]. A script has already been [developed][drupal.lua-devel] which implements the required rewriting for Drupal. The following lines in lighttpd.conf enable this for a specified site (after enabling [mod_magnet][] and downloading the [script][drupal.lua]).

$HTTP["host"] == "michael.gorven.za.net" {
    index-file.names = ( "index.php" )
    magnet.attract-physical-path-to = ( "/etc/lighttpd/drupal.lua" )
}

Blog

When Drupal is first installed, there is no mention of blogging as such. The first step is to enable the Blog core module1. This creates a blog content type and enables a blog for each user. (The module is designed for a multi-user blog, but can be used for a single user as well.) However, this doesn't give you all the functionality you expect from a blog engine.

Tagging is handled by the Taxonomy core module. You first need to create a vocabulary though, and enable it for blog posts. (This took me ages to discover.) In order to get nice URLs (including the date and post title, for example) you need to install the [Pathauto][] module and configure a pattern for blog posts. You may also want to define a pattern for tags.

There is also no archive functionality. The best solution I could find is the [Views][] module. It includes a pre-defined "archive" view which can display posts from a specific month, and links to the monthly pages. Even after much investigation I couldn't get the archive to behave like typical blog archives (i.e. /blog/2008/07/07, /blog/2008/07 and /blog/2008 for daily, monthly and yearly archives respectively).

Other Blog Features

The [Trackback][] and [Pingback][] modules implement automatic linking with other blogs. (I haven't actually tested these yet.) The Blog API core module allows the blog to be managed with external clients. The [Markdown][markdown-mod] module allows you to write posts using [Markdown][] syntax instead of HTML.

Comments

Drupal enables comments for blog posts by default. The [Akismet][akismet-mod] module implements spam filtering using the [Akismet][] service. The [CAPTCHA][captcha-mod] and [reCAPTCHA][recaptcha-mod] modules allow you to require users to answer a [reCAPTCHA][] when submitting comments. (I haven't actually enabled [CAPTCHAs][captcha] since I haven't gotten any comment spam yet. Or real comments for that matter...)

Posting by email

The [Mailhandler][] module allows you to submit posts via email. The configuration is fairly straightforward, except for the available commands which can be found [here][commands]. These can be specified at the beginning of emails and in the configuration as defaults. I use the following commands.

type: blog
taxonomy: [mail]
promote: 1
status: 1
comment: 2

This creates blog posts and tags them with the "mail" tag. Posts are published and promoted to the front page, and comments are enabled.

The one thing it doesn't handle is attachments (such as images). There are a couple of modules2 which support this, but they aren't available for Drupal 6 yet. ([Vhata][] has also hacked together a [photo blogging][phoblog] system, but this isn't implemented as a Drupal module.) I don't really need this feature, so I'm going to wait until these modules are updated.

OpenID

The [OpenID][openid-mod] module allows you to log into your site using [OpenID][]. The [OpenID URL][openidurl] module allows you to delegate your Drupal site as an OpenID by specifying your provider and/or your [Yadis][] document.

Yadis Advertisement

Yadis documents are advertised with a meta header in the HTML document, but this isn't the ideal method of doing so since the relying party needs to download the entire HTML file. The [preferred methods][intertwingly] are to insert an X-XRDS-Location header in the HTTP response, or to serve the Yadis document directly if the user agent specifies application/xrds+xml in the Accept header.

The former method can be accomplished with the setenv module for Lighttpd. The second is essentially a conditional rewrite, and so requires some Lua scripting again. The following script will do the job.

if lighty.request["Accept"] == "application/xrds+xml" then
    lighty.env["uri.path"] = "/files/yadis.xrdf"
end

The following lines in lighttpd.conf will announce the Yadis document for the root URL.

$HTTP["url"] == "/" {
    magnet.attract-raw-url-to = ( "/etc/lighttpd/yadis.lua" )
    setenv.add-response-header = ( "X-XRDS-Location" => "http://michael.gorven.za.net/files/yadis.xrdf" )
}

Random Stuff

The tag block is generated by the [Tagadelic][] module. The "Recent Tracks" block is generated from my [LastFM][] feed by the Aggregator core module, and the list of networks is simply a custom block. The [Atom][] feed is generated by the [Atom][atom-mod] module. The contact form, search and file upload are all core modules.

Missing Stuff

The one thing I haven't sorted out is image handling. There are a couple of ways to [handle images][drupal-images] in Drupal, but none of them appeals to me (they're too complicated). I will probably just upload images as attachments and insert them manually in the body.


  1. It is however possible to run a blog without the Blog module. 

  2. [Mailsave][] and [Mobile Media Blog][mmb]. 

Gmail-like mail setup

I have been using Gmail for a while now, and really think that it's about the best email provider out there. I recently moved my mail over from Google Apps to my own server, but I wanted to keep the major features that I liked. I've always used a desktop mail client with POP3 and SMTP to receive and send mail.

These are the features I particularly like:

  1. Secure access with TLS/SSL
  2. Outgoing SMTP with authentication
  3. Messages sent via SMTP are automatically stored in the mailbox
  4. Messages downloaded via POP3 are still stored on the server
  5. IMAP and Web access

I therefore set out to recreate this setup as closely as possible. The first two are satisfied by a standard Postfix setup with TLS and SMTP AUTH, and the last one by Dovecot and Roundcube. The remaining two took a bit more work, as described below.

To automatically store sent messages on the server, I used Postfix's sender_bcc_maps to BCC messages I send to myself, and the following Procmail recipe to move these messages to the Sent folder.

:0
* ^Return-Path.*me@example.com
.Sent/
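
For completeness, the Postfix side looks something like this (a sketch: the map file location is arbitrary, and the map must be rebuilt with postmap after editing):

# main.cf
sender_bcc_maps = hash:/etc/postfix/sender_bcc

# /etc/postfix/sender_bcc
me@example.com    me@example.com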

To make POP3 access independent of IMAP, I first configured Dovecot to use a different mail location for each protocol as follows.

protocol imap {
    mail_location = maildir:~/Maildir
}
protocol pop3 {
    mail_location = /var/mail/%u
}

I then used the following Procmail recipe to send incoming messages to both locations.

DEFAULT=$HOME/Maildir/
:0c:
/var/mail/mgorven

At the moment this is only set up for my user, but it should be possible to do it for all users by creating a global procmailrc and telling Postfix to deliver all mail using Procmail. This is working fairly well. The only part missing is that Gmail can archive or mark messages as read when they are downloaded via POP3, whereas in my setup POP3 and IMAP are completely independent.
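
Delivering all mail through Procmail would be a one-line change in Postfix's main.cf; this is the standard example from the Postfix documentation, which I haven't switched to yet:

mailbox_command = /usr/bin/procmail -a "$EXTENSION"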
