TLDR: DEFLATE decompressor in 3K of RAM

For a Pebble app I've been writing, I need to send images from the phone to
the watch and cache them in persistent storage on the watch. Since the
persistent storage is very limited (and the Bluetooth connection is relatively
slow) I need these to be as small as possible, so my original plan was to
use the PNG format.
## The constraint

The major constraint for Pebble watchapps is memory. On the Pebble Classic,
apps have 24K of RAM available for the compiled code. Initially, trying to
decompress something simply crashed the app, and it took some debug prints to
determine where in the code it was failing.

## Huffman trees

Huffman coding is a method to represent frequently used symbols with fewer
bits. It uses a tree (otherwise referred to as a dictionary) to convert
symbols to bits and vice versa. DEFLATE can use Huffman coding in two modes:
dynamic and fixed. In dynamic mode, the compressor constructs an optimal tree
based on the data being compressed. This results in the smallest
representation of the actual input data; however, it has to include the
computed tree in the output in order for a decompressor to know how to decode
the data. In some cases the space used to serialise the tree negates the
improvement in the input representation. In that case the compressor can use
fixed mode, where it uses a static tree defined by the DEFLATE spec. Since the
decompressor knows what this static tree is, it doesn't need to be serialised
in the output. The original tinf implementation builds this fixed tree in
memory. The dynamic trees are themselves serialised using Huffman encoding
(yo dawg).
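To make the tree-walking concrete, here is a tiny conceptual sketch in Python
of decoding a bit string with a Huffman tree. The tree is a made-up example
rather than one of the DEFLATE trees, and this is not how tinf represents its
trees internally.

```python
# A made-up Huffman tree as nested tuples: index 0 is the "0" branch and
# index 1 is the "1" branch; strings are leaves (symbols).
tree = ('A', (('B', 'C'), 'D'))   # codes: A=0, B=100, C=101, D=11

def decode(bits, tree):
    """Decode a string of '0'/'1' characters into a list of symbols."""
    symbols = []
    node = tree
    for bit in bits:
        node = node[int(bit)]      # follow the branch for this bit
        if isinstance(node, str):  # reached a leaf: emit the symbol
            symbols.append(node)
            node = tree            # start again at the root
    return symbols

print(decode('010010111', tree))   # ['A', 'B', 'C', 'D']
```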
## The result

With the stack saving I was able to move the heap allocation back to the
stack. (Since the stack memory can't be used for anything else it's
effectively free, and it allows the non-stack memory to be used for something
else.) The end result is 1.2K of ...
|||
I recently bought a Raspberry Pi, which is a credit card sized computer with
an ARM processor. I'm using it as my TV frontend, running Raspbian and XBMC.
I'm building my own packages for XBMC since it requires the latest development
version.

I initially installed my Pi with the foundation image, but found that it
included a lot of packages which I didn't need. Since I have a slight
obsession with doing things as efficiently as possible, I decided to build my
own image with XBMC from scratch. I implemented a script in Bash,
mkraspbianxbmc.sh, which does this. It uses debootstrap to install a minimal
Raspbian system in a chroot. It then installs XBMC and a couple of extra
packages, and does some necessary configuration. Finally it creates an image
file with the necessary partitions, creates the filesystems, and copies the
installation into the image file.

The resultant image fits onto a 1GiB SD card. You can download a pre-built
image from this page. The script can be modified to build images with
different packages, or even a very minimal image which fits onto a 512MiB SD
card.
|||
## Extracting specific track segments from a track

I have an i-Blue 747A+ GPS logger which I use to track my runs (amongst other
things). Afterwards I use BT747 to retrieve the data from the device and
create a GPX file of the run, which I then upload to Endomondo, which gives me
nice graphs and statistics. I need to modify the GPX file slightly before I
can do so, however: I use the button on the device to mark the beginning and
end of the run, and these marks appear as waypoints in the GPX file. BT747
creates separate track segments (within a single track) between each waypoint,
but Endomondo ignores these. I therefore need to extract the single track
segment covering the actual run and create a new GPX file with just that
segment. I wrote a script in Python, splittrack.py, to do this. It uses the
gpxdata library to parse the input file, locates any track segments which
match a set of time, distance, displacement and speed criteria, and outputs a
new GPX file with just those.
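The actual script relies on the gpxdata library and checks all of those
criteria; purely as an illustration of the idea, here is a simplified sketch
using only the standard library and a duration check (the file names and the
20-90 minute window are made-up assumptions):

```python
# Simplified sketch: keep only track segments whose duration looks like a run.
# The real splittrack.py uses gpxdata and also checks distance, displacement
# and speed; this version only looks at segment duration.
import xml.etree.ElementTree as ET
from datetime import datetime

NS = {'gpx': 'http://www.topografix.com/GPX/1/1'}
ET.register_namespace('', NS['gpx'])

def parse_time(text):
    return datetime.strptime(text, '%Y-%m-%dT%H:%M:%SZ')

tree = ET.parse('run.gpx')  # hypothetical input file
for trk in tree.getroot().findall('gpx:trk', NS):
    for seg in trk.findall('gpx:trkseg', NS):
        times = [parse_time(t.text)
                 for t in seg.findall('gpx:trkpt/gpx:time', NS)]
        duration = (max(times) - min(times)).total_seconds() if times else 0
        if not 20 * 60 <= duration <= 90 * 60:  # assumed 20-90 minute window
            trk.remove(seg)
tree.write('run-only.gpx', xml_declaration=True, encoding='UTF-8')
```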
## Integrating heart rate data

I then recently bought an Oregon Scientific WM100 heart rate logger. It
listens to the broadcasts from a heart rate strap and records the measurements
every 2 seconds. I retrieve the data using the wm100 driver for Linux, which
writes out a CSV file of the timestamped measurements.

In order to get this data into Endomondo, I needed to combine the GPS trace
with the HRM data into a single file format which Endomondo accepts. I
initially started implementing a library for the TCX format, but then
discovered that there is a GPX extension for including heart rate data which
Endomondo accepts. So I wrote a script in Python, wm100gpx.py, which reads the
input GPX and CSV files, merges the heart rate measurements into the GPX
records, and outputs a new GPX file.
The entries look like this:

    <trkpt lat="37.392051" lon="-122.090240">
      <ele>-44.400761</ele>
      <time>2012-10-15T01:20:13Z</time>
      <extensions>
        <gpxtpx:TrackPointExtension>
          <gpxtpx:hr>175</gpxtpx:hr>
        </gpxtpx:TrackPointExtension>
      </extensions>
    </trkpt>
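The merging step amounts to something like the following sketch (this is not
the actual wm100gpx.py: the file names, the CSV layout and the extension
namespace URI are assumptions):

```python
# Sketch of merging heart rate samples into GPX track points. Assumes the CSV
# has an ISO timestamp in the first column and a heart rate in the second,
# and uses a Garmin TrackPointExtension namespace for the <gpxtpx:hr> tag.
import csv
import xml.etree.ElementTree as ET
from datetime import datetime

GPX = 'http://www.topografix.com/GPX/1/1'
TPX = 'http://www.garmin.com/xmlschemas/TrackPointExtension/v1'  # assumed URI
NS = {'gpx': GPX}

def parse_time(text):
    return datetime.strptime(text, '%Y-%m-%dT%H:%M:%SZ')

with open('hrm.csv') as f:  # hypothetical file names throughout
    samples = [(parse_time(row[0]), int(row[1])) for row in csv.reader(f)]

tree = ET.parse('run-only.gpx')
for trkpt in tree.getroot().iter('{%s}trkpt' % GPX):
    when = parse_time(trkpt.find('gpx:time', NS).text)
    # Use the heart rate sample closest in time to this track point.
    hr = min(samples, key=lambda s: abs((s[0] - when).total_seconds()))[1]
    ext = ET.SubElement(trkpt, '{%s}extensions' % GPX)
    tpx = ET.SubElement(ext, '{%s}TrackPointExtension' % TPX)
    ET.SubElement(tpx, '{%s}hr' % TPX).text = str(hr)
tree.write('run-with-hr.gpx', xml_declaration=True, encoding='UTF-8')
```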
|||
My wife recently got a Samsung Exhibit II 4G Android phone to replace her
aging Nokia E63. Migrating her contacts was accomplished fairly easily by
exporting them to CSV with Nokia PC Suite and then importing them into Google
Contacts. Migrating SMSes was not so trivial, however.

## Other approaches

There are a couple of methods floating around the web, but none were suitable.
One uses Gammu to retrieve the SMSes from the Nokia, and then a script to
convert them to an XML format readable by SMS Backup & Restore; it turns out
that Gammu doesn't work on Symbian S60v3 devices, however. Another script can
convert SMSes exported by the MsgExport app to XML for SMS Backup & Restore,
but I didn't feel like paying for it or dealing with the Ovi Store. VeryAndroid
is a Windows application which can convert SMSes from Nokia PC Suite CSV
format and sync them directly to an Android device, and Nokia2AndroidSMS can
convert SMSes from the Ovi Suite database to XML for SMS Backup & Restore. I
didn't want to deal with more Windows software though, so I just decided to
write my own.

## Formats

I already had Nokia PC Suite installed and was using it to migrate contacts,
so I decided to work with the CSV output it generates for messages. Received
and sent SMSes each appear as a CSV row with the same set of fields; fields 4
and 6 are always empty for SMSes (they are probably used for MMSes, one being
the message subject). For the output I decided to generate XML for the SMS
Backup & Restore app, whose format can be reduced down to a fairly small set
of elements and attributes.
## The script

I implemented a script called [nokia2android.py] in [Python] to convert one or
more CSV files to this XML format.
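To show the overall shape of such a conversion, here is a rough sketch (the
CSV column positions, the timestamp format and the direction markers are
assumptions made for illustration, and the reduced XML uses only a handful of
attributes; see the actual script for the real mapping):

```python
# Sketch of converting exported SMS CSV files into an SMS Backup & Restore
# style XML file. The CSV column positions, timestamp format and direction
# markers are illustrative assumptions, not the exact Nokia PC Suite layout.
import csv
import sys
import time
import xml.etree.ElementTree as ET

def convert(csv_paths, xml_path):
    smses = ET.Element('smses')
    for path in csv_paths:
        with open(path, newline='') as f:
            for row in csv.reader(f):
                direction, number, when, body = row[1], row[2], row[6], row[7]  # assumed positions
                epoch = time.mktime(time.strptime(when, '%Y.%m.%d %H:%M'))      # assumed format
                ET.SubElement(smses, 'sms', {
                    'address': number,
                    'date': str(int(epoch * 1000)),  # milliseconds since the epoch
                    'type': '1' if direction == 'DELIVER' else '2',  # assumed markers; 1=received, 2=sent
                    'body': body,
                })
    smses.set('count', str(len(smses)))
    ET.ElementTree(smses).write(xml_path, xml_declaration=True, encoding='UTF-8')

if __name__ == '__main__':
    convert(sys.argv[1:-1], sys.argv[-1])
```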
The XML file can then be transferred to the Android device (using USB or
Bluetooth) and stored where SMS Backup & Restore looks for its backup files,
so that the app can restore the messages.
|||
After over a year of development, we have finally released Ibid. Ibid is a
general purpose chat bot written in Python. We've suffered from a bit of
feature creep, so despite being a 0.1 release it can speak 7 messaging
protocols and has over 100 features provided by plugins. I think we also have
an excellent architecture and a very developer-friendly plugin API. The 0.1.0
release can be downloaded from Launchpad or installed from our PPA.
|||
Following on from yesterday's post, I decided to try to implement proper
content negotiation. After a fair amount of time spent getting to grips with
[Lua][], I got a [script][] which works very nicely. It implements [server
driven][] content negotiation for [media types][mime].

The basic idea of content negotiation is that a resource (e.g., this [graph][])
exists in multiple formats (in this case, [SVG][graph-svg], [PNG][graph-png]
and [GIF][graph-gif]). When a user agent requests the resource, it indicates
which formats it understands by listing them in the Accept header.

The script works by searching the directory for files with the requested name
but with an additional extension (each of which is a variant). The [media
type][mime] of each variant is inferred from its extension. Some browsers
include wildcard entries such as `*/*` in their Accept headers.

To install the [script][], download and save it somewhere.
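The negotiation logic itself boils down to something like this conceptual
sketch (written in Python for illustration; the actual script is Lua running
inside Lighttpd, and real Accept handling has more corner cases):

```python
# Conceptual sketch of server-driven negotiation: parse the Accept header
# into (media type pattern, quality) pairs and pick the available variant
# with the highest quality.
def parse_accept(header):
    """Turn 'text/html,image/png;q=0.9,*/*;q=0.1' into [(pattern, q), ...]."""
    prefs = []
    for item in header.split(','):
        parts = [p.strip() for p in item.split(';')]
        q = 1.0
        for p in parts[1:]:
            if p.startswith('q='):
                q = float(p[2:])
        prefs.append((parts[0], q))
    return prefs

def negotiate(accept, variants):
    """variants maps media types to filenames, e.g. {'image/png': 'graph.png'}."""
    best, best_q = None, 0.0
    for media_type, filename in variants.items():
        wildcard = media_type.split('/')[0] + '/*'
        for pattern, q in parse_accept(accept):
            if pattern in (media_type, wildcard, '*/*') and q > best_q:
                best, best_q = filename, q
    return best

variants = {'image/svg+xml': 'graph.svg', 'image/png': 'graph.png', 'image/gif': 'graph.gif'}
print(negotiate('image/png;q=0.9,*/*;q=0.5', variants))  # graph.png
```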
|||
URLs shouldn't really contain file extensions, and for dynamically generated
pages it's easy enough to leave them out. Doing the same for static files
(i.e. files served directly by the webserver) isn't straightforward, because
most webservers use the file extension to determine the MIME type to send in
the Content-Type header.

I decided to try to find a solution to this for my webserver of choice,
Lighttpd. Lighttpd has a module which embeds a [Lua][] interpreter and allows
you to write scripts which modify (or even handle) requests. So I wrote a
[script][] which searches the directory for files with the same name as
requested but with an extension. This means that any file can be accessed with
the file extension removed from the URL while still being served with the
correct MIME type.
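The lookup the script performs amounts to something like this (a conceptual
Python sketch of the idea, not the actual Lua code):

```python
# Conceptual sketch: map an extensionless request path to a real file by
# taking the first file on disk that matches "<name>.*".
import glob
import os

def resolve(docroot, path):
    """Map a request path like '/about' to a file like '<docroot>/about.html'."""
    candidate = os.path.join(docroot, path.lstrip('/'))
    if os.path.isfile(candidate):           # an exact match still wins
        return candidate
    matches = sorted(glob.glob(candidate + '.*'))
    return matches[0] if matches else None  # first match only

print(resolve('/srv/www', '/about'))
```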
The script currently chooses the first matching file, which means that having
multiple files with the same name but different extensions doesn't do anything
useful. The proper method, however, is to actually do [content negotiation][],
which chooses the format based on the preferences indicated by the HTTP client
in the Accept header.

To use this script, download it and save it somewhere.
|||
I follow the main feeds of a couple of social news sites (namely Digg, Reddit
and Muti). When I find an article which I like, I go back and vote it up on
the site. However, when I come across good articles via other sources, I don't
submit them to these news sites (or try to find out if they've already been
submitted) simply because it's too much effort.

When I started aggregating my activity on these sites on my blog and on
FriendFeed, I needed a way to share pages that I didn't get to via one of
these social news sites. I ended up setting up Delicious because I found a
plugin for Konqueror which made it easy to bookmark pages. I still wanted to
solve the original problem though, and so started looking for an easy way to
submit links to these sites from Konqueror.

Konqueror has a feature called service menus which allows you to add entries
to the context menu of files. I then needed to work out how to submit links to
these services, which turned out to simply involve loading a URL with a query
parameter specifying the link you want to share. I created entries for Reddit,
Digg, Muti, Delicious, Facebook and Google Bookmarks. These take you to the
submission page of the service where you can fill in the title. Digg and
Reddit will show existing submissions if the link has already been submitted.

I often share links on IRC, and wondered if I could integrate that with my
menu. It turns out that WeeChat has a control socket, and I could send
messages by piping them to the socket. I therefore wrote a script which
prompts me for a headline or excerpt using kdialog, and then sends the link to
the specified channel. The context menu ends up with an entry for each of
these services, plus the IRC option.

If you want to set this up yourself, download share.desktop and put it in the
directory where Konqueror looks for service menus.
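As an example of the query-parameter approach, here is a minimal Python sketch
of building and opening a submission URL (only Reddit's endpoint is shown; the
other services follow the same pattern with their own submission URLs):

```python
# Minimal sketch: open a service's submission page with the link (and
# optionally a title) passed as query parameters.
import subprocess
from urllib.parse import urlencode

def share_on_reddit(link, title=None):
    params = {'url': link}
    if title:
        params['title'] = title
    submit_url = 'https://www.reddit.com/submit?' + urlencode(params)
    subprocess.run(['xdg-open', submit_url])  # open in the default browser

share_on_reddit('https://example.com/article', 'An interesting article')
```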
|||
A couple of people on #clug were updating their Political Compass scores,
which prompted me to jump on the bandwagon and do the test. I came out with
the following scores:

Economic Left/Right: -4.38

I then thought that it would be interesting to compare everyone's scores on a
graph, so I wrote a [Python][] [script][py] to get the scores from Spinach and
a [Gnuplot][] [script][p] to plot them. To add yourself to the graph, tell
Spinach your score; the graph is regenerated every hour.
|||
I used Google Apps to host mail for this domain for a while, and wanted to
close down the account since I don't use it anymore. Before I did that I
wanted to move all the data onto my server. Transferring the emails was fairly
straightforward using [POP3][], but I couldn't find a way to download the
[Google Talk][] logs. [Gmail][] handles the logs as emails, but they aren't
accessible using either POP3 or [IMAP][]. I therefore wrote a [Python][]
script which downloads the logs via the web interface. On [Jeremy's][]
[suggestion][] I used [BeautifulSoup][] to parse the [HTML][] this time, which
worked very well. The script works with both Google Apps and normal Gmail,
although my account got locked twice while downloading the 3500 logs it
contained.
|||