ubuntu

Clicking on Apple Macbook Pro trackpads under Ubuntu Precise

Apple trackpads don't have separate buttons: the entire trackpad is itself a clickable button. The default OS X behaviour is to treat clicking the pad with two fingers as the right button, and clicking the pad with three fingers as the middle button. Enabling tap to click (i.e. touching the trackpad but not actually clicking it) tends to result in false positives since the trackpad is so big. I therefore set up this behaviour under Ubuntu Oneiric.

When I upgraded to Ubuntu Precise, two finger clicks started registering as the left button. (Three finger clicks still worked.) It turns out that this is due to the new clickpad support in Precise. The solution is to disable the ClickPad attribute. My Synaptics configuration now looks like this:

TapButton1              = 0
TapButton2              = 0
TapButton3              = 0
ClickFinger1            = 1
ClickFinger2            = 3
ClickFinger3            = 2
ClickPad                = 0

Sharing links from Konqueror, including to IRC

I follow the main feeds of a couple of social news sites (namely Digg, Reddit and Muti). When I find an article which I like, I go back and vote it up on the site. However, when I come across good articles via other sources, I don't submit them to these news sites (or try to find out if they've already been submitted) simply because it's too much effort.

When I started aggregating my activity on these sites on my blog and on FriendFeed, I needed a way to share pages that I didn't get to via one of these social news sites. I ended up setting up Delicious because I found a plugin for Konqueror which made it easy to bookmark pages.

I still wanted to solve the original problem though, and so started looking for an easy way to submit links to these sites from Konqueror. Konqueror has a feature called service menus which allows you to add entries to the context menu of files. I then needed to work out how to submit links to these services, which turned out to simply involve loading a URL with a query parameter specifying the link you want to share.

I created entries for Reddit, Digg, Muti, Delicious, Facebook and Google Bookmarks. These take you to the submission page of the service where you can fill in the title[1]. Digg and Reddit will show existing submissions if the link has already been submitted.

I often share links on IRC, and wondered if I could integrate that with my menu. It turns out that WeeChat has a FIFO control pipe, and I could send messages by piping them to it. I therefore wrote a script which prompts me for a headline or excerpt using kdialog, and then sends the link to the specified channel. My menu now looks like this:

[Screenshot: sharemenu.png]

If you want to set this up yourself, download share.desktop and put it in ~/.kde/share/apps/konqueror/servicemenus. If you want the icons, download shareicons.tar.gz, extract them somewhere, and fix the paths in social.desktop[2]. To set up the IRC feature (assuming you're using WeeChat), download postirc.sh and save it in ~/bin/. You will need to change the commands in social.desktop depending on the servers and channels you wish to use.


  1. One shortcoming is that the title of the page is not automatically filled in. 

  2. I couldn't work out how to use relative paths, or ~. 

My personal backup solution

I've been using an external hard drive to store backups of my laptop for a while now. At first I manually created a set of compressed tar archives about once a month. That was a bad system though, because it used a lot of space and retrieving files from the backups was a mission. I then started using pdumpfs, which can do incremental backups by hard linking files which haven't changed. The problem I found with it however was that if a file's ownership or timestamps changed it wouldn't be hard linked, even if the content hadn't changed.

I therefore set out to find a better backup solution. My requirements were as follows.

  1. Incremental backups
  2. Easy to access specific files from backups
  3. Able to delete certain backups, preferably arbitrarily[1]
  4. Compression
  5. Encryption

I finally settled on storeBackup, which supports everything except number 5. It works similarly to pdumpfs, except that it stores ownership and timestamp data separately and can therefore still hard link identical files even if these change. It compresses on a per-file basis, which makes it easy to access specific files (as opposed to having to find them in an archive). Old backups can be deleted arbitrarily since they are only related by hard links. I then added encryption by backing up to an encfs-encrypted directory.
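To make the hard-linking property concrete, here is an added example (it is not storeBackup's code) that takes snapshots the same way: a file is hard linked to the copy in the previous snapshot whenever its content hash matches, even if its timestamp or ownership changed, because mode, uid, gid and mtime are kept in a separate index.json per snapshot rather than on the backup files. Compression and the encfs layer are left out; the source tree and snapshot names in demo() are constructed.

```python
"""Incremental snapshots that hard link unchanged content (added example)."""
import hashlib
import json
import os
import shutil
import tempfile


def file_digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def load_index(backup_root, name):
    with open(os.path.join(backup_root, name, "index.json")) as f:
        return json.load(f)


def take_snapshot(source, backup_root, name, previous=None):
    """Snapshot `source` into backup_root/name.  Returns (linked, copied)."""
    snap = os.path.join(backup_root, name)
    os.makedirs(snap)
    # Map content digest -> path in the previous snapshot, so a renamed or
    # re-touched file is still linked rather than copied.
    prev_by_digest = {}
    if previous is not None:
        for rel, meta in load_index(backup_root, previous)["files"].items():
            prev_by_digest.setdefault(meta["sha256"], os.path.join(backup_root, previous, rel))
    index = {}
    linked = copied = 0
    for dirpath, dirnames, filenames in os.walk(source):
        rel_dir = os.path.relpath(dirpath, source)
        for d in dirnames:
            os.makedirs(os.path.join(snap, rel_dir, d), exist_ok=True)
        for fn in filenames:
            src = os.path.join(dirpath, fn)
            if os.path.islink(src) or not os.path.isfile(src):
                continue  # symlinks and special files would need their own handling
            rel = os.path.normpath(os.path.join(rel_dir, fn))
            dst = os.path.join(snap, rel)
            st = os.lstat(src)
            digest = file_digest(src)
            # Ownership and timestamps live here, not on the backup file.
            index[rel] = {"sha256": digest, "mode": st.st_mode, "uid": st.st_uid,
                          "gid": st.st_gid, "mtime": st.st_mtime}
            if digest in prev_by_digest:
                os.link(prev_by_digest[digest], dst)
                linked += 1
            else:
                shutil.copyfile(src, dst)  # content only
                copied += 1
    with open(os.path.join(snap, "index.json"), "w") as f:
        json.dump({"files": index}, f, indent=1)
    return linked, copied


def demo():
    """Constructed scenario: a second weekly snapshot after touching one
    file's timestamp and rewriting another, then deleting the older one."""
    root = tempfile.mkdtemp()
    try:
        src = os.path.join(root, "src")
        backups = os.path.join(root, "backups")
        os.makedirs(src)
        os.makedirs(backups)
        with open(os.path.join(src, "keep.txt"), "w") as f:
            f.write("unchanged content\n")
        with open(os.path.join(src, "edit.txt"), "w") as f:
            f.write("old content\n")
        take_snapshot(src, backups, "week1")
        os.utime(os.path.join(src, "keep.txt"), (0, 0))  # mtime changes, content doesn't
        with open(os.path.join(src, "edit.txt"), "w") as f:
            f.write("new content\n")
        linked, copied = take_snapshot(src, backups, "week2", previous="week1")
        same_inode = (os.stat(os.path.join(backups, "week1", "keep.txt")).st_ino
                      == os.stat(os.path.join(backups, "week2", "keep.txt")).st_ino)
        # Snapshots are only related by hard links, so the old one can go.
        shutil.rmtree(os.path.join(backups, "week1"))
        with open(os.path.join(backups, "week2", "keep.txt")) as f:
            survives = f.read() == "unchanged content\n"
        return linked, copied, same_inode, survives
    finally:
        shutil.rmtree(root)


if __name__ == "__main__":
    print(demo())  # (1, 1, True, True)
```

The index is what lets a restore put ownership and timestamps back without them affecting deduplication; if the backup target is an encrypted directory, the index is encrypted along with the data.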


  1. I want to be able to backup every week, but then delete old backups so that I have one backup per month for the last year. 

Routing by port number

Due to a very restrictive firewall at the CHPC, I need to run a VPN to get access to things like email, Jabber and SSH. This however degrades my web browsing experience, since that gets tunnelled as well. I therefore wanted a setup where only ports which are blocked get tunnelled through the VPN, while everything else goes out normally.

The routing part was fairly straightforward: it consists of an iptables rule to mark certain packets, and an alternate routing table for those marked packets. I first created a name for the new table by adding the following to /etc/iproute2/rt_tables.

10  vpn

I then added a default route to the new table specifying the IP address of the VPN server and the VPN interface, and a rule to use this table for packets marked by iptables.

ip route add default via 10.8.0.3 dev tun0 table vpn
ip rule add fwmark 0x1 table vpn

The following iptables rule will mark packets destined to the listed port numbers. Note that this is for packets originating from the firewall host — if you want this to apply to packets forwarded for other hosts it must be in the PREROUTING chain.

iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 22,995,587,5223 -j MARK --set-mark 0x1

The actual routing worked, but packets were being sent with the wrong source IP. I therefore needed to NAT packets going out on the VPN interface (the IP address is the local IP of the VPN connection).

iptables -t nat -A POSTROUTING -o tun0 -j SNAT --to 10.8.0.4

I could then see packets going out on the VPN interface with the correct source IP, as well as the replies, but it still wasn't working. I eventually discovered that rp_filter (reverse path filtering) must be disabled on the VPN interface for this to work. The kernel checks whether the source address of each incoming packet would be routed back out the interface it arrived on, and that check consults the main table without the fwmark rule, so replies arriving on tun0 from hosts which the main table routes via the normal interface are dropped as spoofed.

echo 0 > /proc/sys/net/ipv4/conf/tun0/rp_filter

Note that the kernel uses the larger of conf/all/rp_filter and the per-interface value, so conf/all must also be 0 for this to take effect, and the /proc setting doesn't survive a reboot (put it in /etc/sysctl.conf to make it permanent).

Cisco un-Clean Access

The [CHPC][] installed a new network this past weekend as part of the [SANReN][] project. The new network consists of [Cisco][] equipment, including their [NAC][] (or "Clean Access") system. This requires all clients to authenticate before they are allowed access to the network, and can also enforce a configured security policy (such as requiring operating system updates and anti-virus).

The system works as follows. By default, the ports on the switch are in an "unauthenticated" [VLAN][]. When a client is connected, it is provided with an IP address (via [DHCP][]) in an "unauthenticated" subnet. The system then presents a captive portal which requires the user to authenticate with a username and password using their browser. If the authentication is successful, the port is moved to a different VLAN (depending on the user's access level), and the switch briefly disconnects the link which causes the client to negotiate a new IP address (in a different subnet).

Before the portal presents the login page it requires that a [Java applet][] be run on the client. The applet gathers various bits of information about the client (including the operating system) and submits this information to the portal. (I assume that the portal uses this information to determine what policies must be enforced. In our setup, Windows machines must have the Clean Access Client installed, while Linux and Mac OS X machines are simply allowed access.) The portal then presents the login page.

Being a geek, I wasn't very happy to go through this rigmarole every time I connected to the network. (I also couldn't use my [normal browser][konq] since the applet didn't work in it.) So I set out to automate the process. Initially I tried to script everything (including the Java applet) but then I noticed that the output of the applet wasn't sent with the login form submission. The only other information the form contained was a session key and random string, both of which were present on the [HTML][] page which contained the applet. A manual test confirmed that the login page could be submitted successfully as long as the session key and random string were correct — the applet could be bypassed.

I quickly scripted the login process using a [bash][] script and [wget][]. I then installed it in <code>/etc/network/if-up.d</code> after adding some logic to only execute if the current IP address was on the unauthenticated network. The result is that I can plug in the cable, and my machine automatically authenticates to the system.
 
While searching for information about the Clean Access system, I came across this [Slashdot article][] about a guy who was suspended from university for bypassing the Clean Access checks. I only realised last night that this is exactly what my script does![^1] I haven't tested it on Windows yet, but the only possible change I can think of is to change the [user agent][]. Seriously Cisco, the fact that I managed to bypass the applet simply by submitting the login form programmatically is ridiculous.
 
I have attached my script to this post. The way in which I have parsed the HTML page is rather ugly and likely to only work on this specific version of Clean Access. I plan to rewrite it in [Python][] sometime.
 
<strong>Update:</strong> I have rewritten the script in Python, which should be a bit more solid since it parses the HTML using a [DOM][]. The script requires [libxml2dom][] and [ipy][]. After configuring the parameters it can be dropped in <code>/etc/network/if-up.d</code>[^2] where it should run automatically.
 
[^1]: Note that it doesn't bypass the authentication: you still need a valid account in order to gain access.
 
[^2]: Make sure not to use a dot in the filename though.
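The two pieces the script needs, as an added example written against the standard library (this is not the post's own script, and it uses `ipaddress` and `html.parser` in place of ipy and libxml2dom): a guard that only runs the login when the interface got an address in the "unauthenticated" subnet, and a parser that pulls the hidden form fields out of the portal page and builds the login POST body without ever running the applet. The subnet, the hidden field names (`session_key`, `rnd`) and the sample page are constructed for this example; the real Clean Access page uses its own names, which is why the post's parser is version specific.

```python
"""Captive-portal login pieces (added example, not the post's script)."""
import ipaddress
from html.parser import HTMLParser
from urllib.parse import urlencode


class HiddenInputs(HTMLParser):
    """Collect the name/value of every <input type="hidden"> in the page."""

    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        if tag.lower() != "input":
            return
        a = {k.lower(): (v or "") for k, v in attrs}
        if a.get("type", "text").lower() == "hidden" and a.get("name"):
            self.fields[a["name"]] = a.get("value", "")


def extract_hidden_fields(page):
    parser = HiddenInputs()
    parser.feed(page)
    parser.close()
    return parser.fields


def build_login_body(page, username, password, required=("session_key", "rnd")):
    """Form body for the login POST.  Fails loudly if the portal page changed
    and the expected hidden fields (constructed names) are not there."""
    fields = extract_hidden_fields(page)
    missing = [name for name in required if name not in fields]
    if missing:
        raise ValueError("portal page changed, hidden fields missing: %s" % ", ".join(missing))
    fields["username"] = username
    fields["password"] = password
    return urlencode(fields)  # percent-encodes the credentials


def is_unauthenticated(ip, unauth_subnet="10.1.0.0/24"):
    """True when `ip` is in the pre-login subnet (constructed subnet value)."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(unauth_subnet)


# Constructed stand-in for the portal page.  Note that nothing the applet
# gathers is a form field, so the login POST carries none of it.
SAMPLE_PAGE = """<html><body>
<applet code="PostureCheck.class" width="1" height="1"></applet>
<form method="post" action="/login">
<input type="hidden" name="session_key" value="abc123">
<input type="hidden" name="rnd" value="9f8e7d">
<input type="text" name="username">
<input type="password" name="password">
<input type="submit" value="Login">
</form></body></html>"""


if __name__ == "__main__":
    if is_unauthenticated("10.1.0.57"):
        print(build_login_body(SAMPLE_PAGE, "user", "p&ss w"))
```

Since scripts in /etc/network/if-up.d run as root, keep the username and password in a separate file readable only by root rather than inside the script, and fail (as build_login_body does) rather than posting credentials to a page whose structure has changed.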
 
*[CHPC]: Centre for High Performance Computing
*[SANReN]: South African National Research Network
*[NAC]: Network Admission Control
*[VLAN]: Virtual LAN
*[DHCP]: Dynamic Host Configuration Protocol
*[HTML]: HyperText Markup Language
*[DOM]: Document Object Model
 
[chpc]: http://www.chpc.ac.za/
[sanren]: http://www.meraka.org.za/sanren.htm
[cisco]: http://en.wikipedia.org/wiki/Cisco_Systems
[vlan]: http://en.wikipedia.org/wiki/Virtual_LAN
[dhcp]: http://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
[nac]: http://en.wikipedia.org/wiki/Cisco_NAC_Appliance
[java applet]: http://en.wikipedia.org/wiki/Java_applet
[konq]: http://en.wikipedia.org/wiki/Konqueror
[Slashdot article]: http://it.slashdot.org/article.pl?sid=07/04/27/203232
[user agent]: http://en.wikipedia.org/wiki/User_agent
[python]: http://en.wikipedia.org/wiki/Python_(programming_language)
[html]: http://en.wikipedia.org/wiki/HTML
[wget]: http://en.wikipedia.org/wiki/Wget
[bash]: http://en.wikipedia.org/wiki/Bash
[dom]: http://en.wikipedia.org/wiki/Document_Object_Model
[libxml2dom]: http://www.boddie.org.uk/python/libxml2dom.html
[ipy]: http://russell.rucus.net/2008/ipy/

Slipstreaming Windows XP SP3 in Linux

Unfortunately Windows is still a necessary evil sometimes: I keep a Windows virtual machine for times when it's absolutely necessary, and I still give my friends Windows tech support. I still like to do things properly, and so I wanted to create a Windows XP install CD with Service Pack 3 slipstreamed in[1]. I had two CDs to do, and slipstreamed the first one using a Windows VM, but then got curious and wondered if I could do it without Windows.

The answer is that it is possible, using Wine to run the service pack installer. I followed this blog post (which was interesting since it's in French), but then found another blog post which explains it in English. The steps are as follows:

  1. Copy the contents of the original CD to the hard drive.
  2. Extract the service pack using cabextract.
  3. Use Wine to run the service pack installer.

    wine ~/sp3/i386/update/update.exe /integrate:~/xp/
    
  4. Use geteltorito to extract the bootloader from the original CD.

  5. Make sure that all the filenames are upper case.

    convmv -r --upper --notest ~/xp/*
    
  6. Create the new CD image. I did this in K3b with the following settings.

    • Boot emulation: none
    • Boot load segment: 0x7c0
    • Boot load size: 0x4
    • Generate Joliet extensions
    • Omit version numbers in ISO9660 filenames (nothing else enabled under "ISO9660 Settings")
    • ISO Level 1
  7. Test in a virtual machine

It seems to be quite particular about the ISO9660 settings and the upper case filenames, so if it doesn't boot check the settings.


  1. This integrates the service pack into the install CD so that a fresh installation is already updated. 

OpenVPN through an HTTP proxy server

I discovered that OpenVPN supports connections through an HTTP proxy server. This makes it possible to establish a VPN from a completely firewalled network where the only external access is through a proxy server[1]. It takes advantage of the fact that the proxy's CONNECT method (the one used for HTTPS) simply relays the TCP connection: the proxy neither inspects nor modifies the traffic, as it does for unencrypted HTTP.

The server setup is almost identical to a normal configuration, except that the tunnel must use TCP instead of UDP (since the proxy server will establish a TCP connection). Since most proxy servers only allow CONNECT to certain ports, you will also need to change the port number that the server listens on. The best choice is 443, since that is the HTTPS port, but if the server is also running a web server on port 443 then 563 is probably the next best choice: it is assigned to NNTPS and is allowed by the default Squid configuration. The following two lines enable TCP connections and change the port number.

proto tcp-server
port 563

The client configuration is also very similar. It simply needs to enable TCP connections, set the correct port number, and specify the proxy server.

remote vpn.example.com 563
http-proxy cache.saix.net 8080
proto tcp-client

OpenVPN can also authenticate to the proxy server using either Basic or NTLM authentication. To enable this, add "stdin basic" or "stdin ntlm" to the http-proxy line; OpenVPN will then prompt for the username and password when the VPN is started. Bear in mind that Basic sends the credentials base64-encoded, not encrypted, so they are readable on the client-to-proxy hop; the VPN's own TLS session is unaffected, since the proxy only relays bytes. For more details see the OpenVPN documentation.
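The exchange described above, as an added example (not OpenVPN or Squid code): the client sends a CONNECT request for host:port, the same request a browser makes for an HTTPS site, and once the proxy answers 200 everything after the blank line is relayed untouched, so the OpenVPN TCP session runs through it. proxy_answer() models the proxy side, including the port restriction (Squid's SSL_ports ACL) and optional proxy authentication. The hostname and credentials are made up.

```python
"""HTTP CONNECT exchange as used by OpenVPN's http-proxy option (added example)."""
import base64


def connect_request(host, port, username=None, password=None):
    """The CONNECT request the client sends, optionally with Basic auth."""
    lines = ["CONNECT %s:%d HTTP/1.0" % (host, port), "Host: %s:%d" % (host, port)]
    if username is not None:
        # Basic is base64, not encryption: anyone on the client-to-proxy hop
        # can read the password.  The OpenVPN TLS session inside is unaffected.
        token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8")).decode("ascii")
        lines.append("Proxy-Authorization: Basic " + token)
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")


def parse_connect_response(data):
    """Return (status_code, leftover).  `leftover` is tunnel payload that
    arrived in the same read as the headers and must not be discarded."""
    head, sep, leftover = data.partition(b"\r\n\r\n")
    if not sep:
        raise ValueError("incomplete proxy response")
    status_line = head.split(b"\r\n", 1)[0].decode("ascii", "replace")
    parts = status_line.split(" ", 2)
    if len(parts) < 2 or not parts[0].startswith("HTTP/") or not parts[1].isdigit():
        raise ValueError("not an HTTP status line: %r" % status_line)
    return int(parts[1]), leftover


def tunnel_established(response):
    return parse_connect_response(response)[0] == 200


def proxy_answer(request, allowed_ports=(443, 563), required_token=None):
    """Model of the proxy: CONNECT only to allowed ports, and only with the
    right Proxy-Authorization header when `required_token` is set."""
    lines = request.decode("ascii").split("\r\n")
    method, target, _version = lines[0].split(" ")
    if method != "CONNECT":
        return b"HTTP/1.0 405 Method Not Allowed\r\n\r\n"
    headers = dict(line.split(": ", 1) for line in lines[1:] if ": " in line)
    if required_token is not None and headers.get("Proxy-Authorization") != "Basic " + required_token:
        return b"HTTP/1.0 407 Proxy Authentication Required\r\n\r\n"
    _host, _, port = target.rpartition(":")
    if int(port) not in allowed_ports:
        return b"HTTP/1.0 403 Forbidden\r\n\r\n"
    return b"HTTP/1.0 200 Connection established\r\n\r\n"


if __name__ == "__main__":
    for port in (563, 1194):
        reply = proxy_answer(connect_request("vpn.example.com", port))
        print(port, tunnel_established(reply), reply.split(b"\r\n", 1)[0])
```

The 403 for port 1194 is exactly what the post's port change works around; the proxy sees only the CONNECT line and the port, never the OpenVPN handshake that follows.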


  1. I am not commenting on the ethics of this. If you need to resort to this method, you probably shouldn't be doing it. 

Gmail-like mail setup

I have been using Gmail for a while now, and really think that it's about the best email provider out there. I recently moved my mail over from Google Apps to my own server, but I wanted the major features that I liked. I've always used a desktop mail client and used POP3 and SMTP to receive and send mail.

These are the features I particularly like:

  1. Secure access with TLS/SSL
  2. Outgoing SMTP with authentication
  3. Messages sent via SMTP are automatically stored in the mailbox
  4. Messages downloaded via POP3 are still stored on the server
  5. IMAP and Web access

I therefore set out to recreate this setup as closely as possible. The first two are satisfied by a standard Postfix setup with TLS and SMTP AUTH. The last one is done with Dovecot and Roundcube.

To automatically store sent messages on the server, I used Postfix's sender_bcc_maps to BCC messages I send to myself, and the following Procmail recipe to move these messages to the Sent folder.

:0
* ^Return-Path.*me@example.com
.Sent/

To make POP3 access independent from IMAP, I first configured Dovecot to use a different mail location for each as follows.

protocol imap {
    mail_location = maildir:~/Maildir
}
protocol pop3 {
    mail_location = /var/mail/%u
}

I then used the following Procmail recipe to send incoming messages to both locations.

DEFAULT=$HOME/Maildir/
:0c:
/var/mail/mgorven

At the moment this is only set up for my user, but it should be possible to do it for all users by creating a global procmailrc and telling Postfix to deliver all mail using Procmail. This is working fairly well. The only part missing is that Gmail can archive or mark messages as read when they are downloaded via POP3, whereas in my setup POP3 and IMAP are completely independent.

Postfix with SPF and DKIM

As most people know, email is horribly insecure. It is trivial to forge the From address in emails since there is no authentication when sending[1]. This means that one cannot trust the From address, and also that people can forge messages from your address. In order to address this a number of new schemes have been developed. These include SPF, DomainKeys, DKIM and SenderID. All of these aim to verify that mail is actually from the address it appears to be from. SPF and SenderID do so by restricting which hosts are allowed to send messages from a certain domain, while DomainKeys and DKIM use cryptographic signatures.

Unfortunately all of these schemes have problems due to the fact that they are an addition to the existing mail system. SPF and SenderID prevent plain forwarding (requiring additional schemes like SRS or whitelisting of forwarders), and MTAs and mailing lists which modify messages break DomainKeys and DKIM signatures. Despite these issues, email forgery is a problem which needs to be addressed, and we cannot wait for a perfect solution before adopting one. Some major mail providers (including Gmail and Yahoo) are already implementing these schemes.

I have therefore configured SPF and DKIM in my Postfix mail setup. My SPF policy allows mail from my server and SOFTFAILs all other hosts, and all outgoing mail is signed with DKIM. Incoming mail is checked for SPF and DKIM, but isn't discarded even if the checks fail. I will be keeping an eye on things and will revise my policy when I think it safe.

SPF Configuration

To create an SPF policy, add a TXT record to your DNS records according to the SPF syntax. The policy should authorise all hosts from which you send mail. (Mine simply authorises my mail server since I send all mail through it.) You also need a policy for the hostname presented by your mail server in its HELO/EHLO command. You should also create policies for all subdomains which aren't used for mail (typically v=spf1 -all), since a domain's SPF record does not apply to its subdomains.

To check SPF records for incoming mail, I used the SPF policy daemon for Postfix. It is packaged for Ubuntu as postfix-policyd-spf-python. Simply follow the instructions in /usr/share/doc/postfix-policyd-spf-python/README.Debian[2], and set defaultSeedOnly = 0 in the configuration file if you don't wish to reject mail which fails the test. Remember to whitelist any servers which forward mail to you (i.e. you have another address which gets forwarded to your mail server), unless they implement SRS[3].
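To see what the policy daemon actually decides, here is an added example (it is not pypolicyd-spf's code) of a small SPF evaluator covering the parts a policy like the one above uses: the ip4, ip6, include and all mechanisms with the +, -, ~ and ? qualifiers, and the two lookups the daemon performs (HELO name and envelope sender domain). DNS is replaced by a constructed dictionary so it runs offline; a, mx, ptr, exists, redirect and macros are not implemented and return permerror.

```python
"""Minimal SPF evaluator (added example, not pypolicyd-spf)."""
import ipaddress

# Constructed zone data: the sending server, the HELO name, a subdomain that
# never sends mail, and a forwarding service that is included.
DNS_TXT = {
    "example.com": "v=spf1 ip4:192.0.2.10 include:forwarder.example ~all",
    "mail.example.com": "v=spf1 ip4:192.0.2.10 -all",
    "www.example.com": "v=spf1 -all",
    "forwarder.example": "v=spf1 ip4:198.51.100.0/24 -all",
}

QUALIFIERS = {"+": "pass", "-": "fail", "~": "softfail", "?": "neutral"}


def check_spf(ip, domain, dns=DNS_TXT, depth=0):
    """Evaluate the SPF record of `domain` for the connecting address `ip`.

    Returns one of pass, fail, softfail, neutral, none or permerror, with
    the meanings from RFC 7208.
    """
    if depth > 10:
        return "permerror"  # RFC 7208 caps nested includes
    record = dns.get(domain.lower())
    if record is None:
        return "none"
    terms = record.split()
    if not terms or terms[0].lower() != "v=spf1":
        return "none"
    addr = ipaddress.ip_address(ip)
    for term in terms[1:]:
        if "=" in term:
            continue  # modifiers (redirect=, exp=) are not handled here
        qualifier = "+"
        if term[0] in QUALIFIERS:
            qualifier, term = term[0], term[1:]
        name, _, arg = term.partition(":")
        name = name.lower()
        if name == "all":
            matched = True
        elif name in ("ip4", "ip6"):
            try:
                net = ipaddress.ip_network(arg, strict=False)
            except ValueError:
                return "permerror"  # malformed address in the record
            matched = addr.version == net.version and addr in net
        elif name == "include":
            inner = check_spf(ip, arg, dns, depth + 1)
            if inner in ("none", "permerror"):
                return "permerror"  # a missing included record is an error
            matched = inner == "pass"
        else:
            return "permerror"  # mechanism not implemented in this example
        if matched:
            return QUALIFIERS[qualifier]
    return "neutral"  # nothing matched and there was no all term


def check_connection(ip, helo, mail_from, dns=DNS_TXT):
    """The two checks the policy daemon performs: the HELO name and the
    domain of the envelope sender."""
    return {"helo": check_spf(ip, helo, dns),
            "mailfrom": check_spf(ip, mail_from.rpartition("@")[2], dns)}


if __name__ == "__main__":
    print(check_connection("192.0.2.10", "mail.example.com", "alice@example.com"))
    print(check_connection("203.0.113.5", "evil.example", "alice@example.com"))
```

The second call is the forgery case: the HELO lookup returns none (no record for the attacker's host) and the envelope sender returns softfail, which with the post's policy is logged in the Received-SPF header rather than rejected.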

DKIM Configuration

To sign and check DKIM I use DKIMproxy. There isn't an Ubuntu package so I installed it from source. The instructions on the site are good, and include details for Postfix. You will need to generate a key to sign with and publish it in DNS, and then configure Postfix to sign outgoing messages and validate incoming messages. DKIMproxy won't discard messages with invalid signatures by default.

DKIM includes a component called ADSP which allows domains to publish their signing policy. The strongest policy states that all messages are signed with DKIM and any messages without signatures should be discarded. This will allow mail servers to reject messages not sent through your mail server. However, the standard is not finalised yet, and issues regarding mailing lists still need to be addressed.
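To make the mailing-list caveat concrete, here is an added example (not DKIMproxy's code) of the relaxed canonicalisation and the body hash (the bh= tag) from RFC 6376 that a DKIM verifier recomputes. Trailing-whitespace changes made by a relay are absorbed by relaxed canonicalisation, but a footer appended by a mailing list changes bh= and so the signature fails. The RSA signature over the headers (b=) is not computed, since that needs a crypto library; the message and body text are constructed.

```python
"""DKIM relaxed canonicalisation and body hash (added example, not DKIMproxy)."""
import base64
import hashlib
import re


def relaxed_body(body):
    """RFC 6376 3.4.4: strip trailing WSP on each line, collapse runs of WSP
    to one SP, drop empty lines at the end, end a non-empty body with CRLF."""
    lines = [re.sub(r"[ \t]+", " ", line.rstrip(" \t")) for line in body.split("\r\n")]
    while lines and lines[-1] == "":
        lines.pop()
    return "\r\n".join(lines) + "\r\n" if lines else ""


def relaxed_header(name, value):
    """RFC 6376 3.4.2: lowercase the name, unfold, collapse WSP, and remove
    WSP around the colon."""
    value = re.sub(r"\r\n[ \t]+", " ", value)
    value = re.sub(r"[ \t]+", " ", value).strip(" \t")
    return "%s:%s\r\n" % (name.lower(), value)


def body_hash(body):
    """The bh= value for a relaxed/relaxed rsa-sha256 signature."""
    digest = hashlib.sha256(relaxed_body(body).encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")


def signing_input(headers, signed_names, dkim_signature_value):
    """Data the b= signature covers: each header listed in h= (relaxed), then
    the DKIM-Signature header itself with the b= value emptied and without
    its trailing CRLF (RFC 6376 3.7).  For repeated header names the
    bottom-most one is used, hence the reversed scan."""
    out = []
    for wanted in signed_names:
        for hname, hvalue in reversed(headers):
            if hname.lower() == wanted.lower():
                out.append(relaxed_header(hname, hvalue))
                break
    stripped = re.sub(r"(^|[;\s])b=[^;]*", r"\1b=", dkim_signature_value)
    out.append(relaxed_header("DKIM-Signature", stripped).rstrip("\r\n"))
    return "".join(out)


def check_body_hash(raw_message):
    """Verifier side: split a raw message at the first blank line, read bh=
    from the DKIM-Signature header and compare with the recomputed hash.
    Returns True/False, or None when there is no bh= tag."""
    head, _, body = raw_message.partition("\r\n\r\n")
    m = re.search(r"^DKIM-Signature:.*?bh=([^;]+)", head, re.I | re.M | re.S)
    if not m:
        return None
    claimed = re.sub(r"\s+", "", m.group(1))
    return claimed == body_hash(body)


# Constructed bodies.
ORIGINAL_BODY = "Hello,\r\n\r\nPlease  review the  patch.\r\n"
# A relay that trims trailing whitespace and appends blank lines.
WHITESPACE_MANGLED = "Hello,  \r\n\r\nPlease review the patch.   \r\n\r\n\r\n"
# A mailing list that appends a footer.
WITH_FOOTER = ORIGINAL_BODY + "--\r\nlist footer\r\n"

# Constructed signed message (b= is a placeholder, not a real signature).
SAMPLE_MESSAGE = (
    "DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com;\r\n"
    " s=mail; h=from:to:subject; bh=" + body_hash(ORIGINAL_BODY) + ";\r\n"
    " b=not-a-real-signature\r\n"
    "From: Alice  <alice@example.com>\r\n"
    "To: bob@example.com\r\n"
    "Subject: patch\r\n"
    "\r\n" + ORIGINAL_BODY
)


if __name__ == "__main__":
    print(body_hash(ORIGINAL_BODY) == body_hash(WHITESPACE_MANGLED))  # True
    print(body_hash(ORIGINAL_BODY) == body_hash(WITH_FOOTER))         # False
    print(check_body_hash(SAMPLE_MESSAGE))                            # True
```

A verifier that sees a bh= mismatch stops before checking the RSA signature at all, which is why list footers and disclaimers added by an outbound gateway show up as DKIM failures rather than as anything more subtle.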


  1. Yes, I know about SMTP authentication, but people can simply use a relay. 

  2. Just watch out for the location of the configuration file -- the README uses a different location to the package. 

  3. Gmail doesn't implement SRS as such, but does use a compatible rewriting scheme. 
