python |
Extracting specific track segments from a track

I have an i-Blue 747A+ GPS logger which I use to track my runs (amongst other things). Afterwards I use BT747 to retrieve the data from the device and create a GPX file of the run, which I then upload to Endomondo, which gives me nice graphs and statistics. I need to modify the GPX file slightly before I can do so, however: I use the button on the device to mark the beginning and end of the run, and these marks appear as waypoints in the GPX file. BT747 creates separate track segments (within a single track) between each waypoint, but Endomondo ignores these. I therefore need to extract the single track segment covering the actual run and create a new GPX file with just that segment. I wrote a script in Python, splittrack.py, to do this. It uses the gpxdata library to parse the input file, locates any track segments which match a set of time, distance, displacement and speed criteria1, and outputs a new GPX file with just those.
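splittrack.py itself uses gpxdata, which isn't shown here; the segment-matching idea can be sketched with the standard library alone. A single duration criterion stands in for the full set of time, distance, displacement and speed checks:

```python
import xml.etree.ElementTree as ET
from datetime import datetime

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def segment_duration(seg):
    """Seconds between the first and last timestamped point in a <trkseg>."""
    times = [datetime.strptime(t.text, "%Y-%m-%dT%H:%M:%SZ")
             for t in seg.findall("gpx:trkpt/gpx:time", NS)]
    return (max(times) - min(times)).total_seconds() if len(times) > 1 else 0.0

def matching_segments(gpx_xml, min_seconds):
    """Return the <trkseg> elements whose duration is at least min_seconds."""
    root = ET.fromstring(gpx_xml)
    return [seg for seg in root.findall(".//gpx:trkseg", NS)
            if segment_duration(seg) >= min_seconds]
```

A run of half an hour or more will match while the short walking segments between button presses will not, which is exactly the filtering the script needs to do.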
Integrating heart rate data

I then bought an Oregon Scientific WM100 heart rate logger. It listens to the broadcasts from a heart rate strap2 and records a measurement every 2 seconds. I retrieve the data using the wm100 driver for Linux, which writes a CSV file like this:
In order to get this data into Endomondo, I needed to combine the GPS trace and the HRM data into a single file format which Endomondo accepts. I initially started implementing a library for the TCX format3, but then discovered that there is a GPX extension for including heart rate data which Endomondo accepts. So I wrote a script in Python, wm100gpx.py, which reads the input GPX and CSV files, merges the heart rate measurements into the GPX records, and outputs a new GPX file.
The entries look like this:

    <trkpt lat="37.392051" lon="-122.090240">
      <ele>-44.400761</ele>
      <time>2012-10-15T01:20:13Z</time>
      <extensions>
        <gpxtpx:TrackPointExtension>
          <gpxtpx:hr>175</gpxtpx:hr>
        </gpxtpx:TrackPointExtension>
      </extensions>
    </trkpt>
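The merge that produces entries like the one above boils down to finding, for each track point, the heart rate sample nearest its timestamp. A sketch, assuming the wm100 CSV holds one timestamp,bpm pair per row (the driver's actual column layout may differ):

```python
import csv
from bisect import bisect_left
from datetime import datetime
from io import StringIO

def load_heart_rates(csv_file):
    """Parse timestamp,bpm rows into a sorted list of (datetime, int)."""
    rows = [(datetime.strptime(t, "%Y-%m-%dT%H:%M:%SZ"), int(bpm))
            for t, bpm in csv.reader(csv_file)]
    rows.sort()
    return rows

def heart_rate_at(rates, when):
    """Return the measurement nearest to `when`.  The logger records every
    2 seconds, so the nearest sample is at most a second away."""
    times = [t for t, _ in rates]
    i = bisect_left(times, when)
    candidates = rates[max(i - 1, 0):i + 1]
    return min(candidates,
               key=lambda r: abs((r[0] - when).total_seconds()))[1]
```

Each looked-up value would then be written into a gpxtpx:hr element under the track point's extensions, as shown above.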
|
|||
My wife recently got a Samsung Exhibit II 4G Android phone to replace her aging Nokia E63. Migrating her contacts was accomplished fairly easily by exporting them to CSV with Nokia PC Suite and then importing them into Google Contacts. Migrating SMSes was not so trivial, however.

Other approaches

There are a couple of methods floating around the web, but none were suitable. This one uses Gammu to retrieve the SMSes from the Nokia, and then a script to convert them to an XML format readable by SMS Backup & Restore. It turns out that Gammu doesn't work on Symbian S60v3 devices, however. This script can convert SMSes exported by the MsgExport app to XML for SMS Backup & Restore, but I didn't feel like paying for the app or dealing with the Ovi Store. VeryAndroid is a Windows application which can convert SMSes from Nokia PC Suite CSV format and sync them directly to an Android device, and Nokia2AndroidSMS can convert SMSes from the Ovi Suite database to XML for SMS Backup & Restore. I didn't want to deal with more Windows software though, so I decided to write my own.

Formats

I already had Nokia PC Suite installed and was using it to migrate contacts, so I decided to work with the CSV output it generates for messages. A received SMS looks like this:
and a sent SMS looks like this:
The fields are:
Fields 4 and 6 are always empty for SMSes (they are probably used for MMSes, one being the message subject). I also decided to generate XML for the SMS Backup & Restore app. The XML format looks like this:
but can be reduced down to this:
The attributes of the sms element are:
The script

I implemented a script called [nokia2android.py] in [Python] to convert one or more CSV files to this XML format.
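The XML-rendering half of such a conversion can be sketched like this. The attribute subset here (protocol, address, date, type, body, read, status) is my reading of the SMS Backup & Restore format rather than a copy of the script: type 1 marks a received message, type 2 a sent one, and dates are Unix milliseconds.

```python
from xml.sax.saxutils import quoteattr

def to_backup_xml(messages):
    """Render (address, unix_seconds, direction, body) tuples as SMS
    Backup & Restore XML.  quoteattr handles quoting and escaping of
    message bodies, which routinely contain quotes and ampersands."""
    lines = ['<?xml version="1.0" encoding="UTF-8"?>',
             '<smses count="%d">' % len(messages)]
    for address, ts, direction, body in messages:
        lines.append('  <sms protocol="0" address=%s date="%d" type="%d" '
                     'body=%s read="1" status="-1" />'
                     % (quoteattr(address), ts * 1000,
                        1 if direction == "received" else 2,
                        quoteattr(body)))
    lines.append('</smses>')
    return "\n".join(lines)
```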
The XML file can then be transferred to the Android device (using USB or Bluetooth) and stored in
|
|||
After over a year of development, we have finally released Ibid. Ibid is a general purpose chat bot written in Python. We've suffered from a bit of feature creep, so despite being a 0.1 release it speaks 7 messaging protocols and has over 100 features provided by plugins. I think we also have an excellent architecture and a very developer-friendly plugin API. The 0.1.0 release can be downloaded from Launchpad or installed from our PPA. |
|||
I wrote three IRC bots in [Python][] this last week (although one was a rewrite). They probably aren't very useful to most people, but I'm going to share them anyway in case someone finds them interesting.

The first one was prompted by [Adrian][], who is maintaining a [countdown][] until his wedding as a factoid in [Spinach][]. Since [Knab][] doesn't actually support countdowns, it has to be updated manually. This clearly isn't the Right Way to do this, so I hacked together a [script][irccountdown] which connects to IRC and teaches Spinach the updated factoid. I run this as a daily [cronjob][] to keep the countdown up to date. As is usually the case with Python, there was already a library for accessing IRC, namely [irclib][]. It isn't documented very well, but it comes with a couple of example scripts which are fairly easy to follow. It follows an event-based model: you write functions which are called when certain events occur (such as receiving a message).

The final of the [Currie Cup][] was held on Saturday (my team, the [Sharks][], won), and I followed the match online using [SuperSport's][] live score site1. I thought it would be cool to have the score announced on IRC when it changed, and since I was bored I wrote a simple [bot][rugby] to do this. It worked well, but it was very simple: it supported only one hardcoded channel and one hardcoded game. Since I was also bored on Sunday, I [rewrote][rugbybot] the bot properly. I added a subscription mechanism so that channels and users can subscribe and unsubscribe to games by sending the bot a command. It's mostly working, except for listing the available games (there aren't any rugby games coming up, which means I can't test it ;-) ). Games are specified by the ID used by SuperSport's site, and finding the right ID is currently a manual process.
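irclib's event-based model boils down to registering handler functions for named events. A toy dispatcher (not the real library, whose connection handling and event objects are omitted) shows the shape the bots follow:

```python
class EventDispatcher:
    """A miniature version of irclib's model: code registers handlers for
    named events, and the library calls them as events arrive."""
    def __init__(self):
        self._handlers = {}

    def add_global_handler(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, *args):
        for handler in self._handlers.get(event, []):
            handler(*args)

# In the countdown script's style: react to a public channel message.
heard = []

def on_pubmsg(nick, message):
    heard.append((nick, message))

irc = EventDispatcher()
irc.add_global_handler("pubmsg", on_pubmsg)
irc.fire("pubmsg", "Spinach", "wedding is in 42 days")
```

In the real library the fire calls happen inside its processing loop as data arrives from the server, so a bot is mostly just a set of these handlers.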
|
|||
A couple of people on #clug were updating their Political Compass scores, which prompted me to jump on the bandwagon and take the test. I came out with the following scores:

    Economic Left/Right: -4.38

I then thought that it would be interesting to compare everyone's scores on a graph, so I wrote a [Python][] [script][py] to fetch the scores from Spinach and a [Gnuplot][] [script][p] to plot them. To add yourself to the graph, tell Spinach your score in the following format. The graph is regenerated every hour.
|
|||
I used Google Apps to host mail for this domain for a while, and wanted to close down the account since I don't use it anymore. Before doing that I wanted to move all the data onto my server. Transferring the emails was fairly straightforward using [POP3][], but I couldn't find a way to download the [Google Talk][] logs. [Gmail][] handles the logs as emails, but they aren't accessible via either POP3 or [IMAP][]. I therefore wrote a [Python][] script which downloads the logs via the web interface. On [Jeremy's][] [suggestion][] I used [BeautifulSoup][] to parse the [HTML][] this time, which worked very well. The script works with both Google Apps and normal Gmail, although my account got locked twice while it downloaded my 3500 logs. |
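The script itself isn't shown, but the BeautifulSoup part of such a scrape is pleasantly small. A sketch in the modern bs4 spelling (the original predates bs4, and the "chat-log" class here is a made-up stand-in for whatever Gmail's markup actually used):

```python
from bs4 import BeautifulSoup

def extract_log_links(html):
    """Collect the hrefs of anchors marked as chat logs.  The class name
    is hypothetical; matching Gmail's real markup is exactly the fiddly
    part that BeautifulSoup makes tolerable."""
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", class_="chat-log")]
```

The downloader would then fetch each href in turn, which is also where rate limiting matters: hammering 3500 requests at the web interface is presumably what got the account locked.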
|||
For my Masters project I need a method by which the user can specify which functions should be run on an SPE1. This method should be simple, clear and easy to turn on and off. I stumbled upon a blog post a little while ago (I think it was this one) which explained decorators in Python, and they are the perfect tool for the job. Decorators transform functions without changing the function itself or the calls to it.
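As a sketch of the idea (the decorator name and attribute are my guesses, not the project's actual API): a marker decorator tags the function, and the backend can later check the tag to decide where to run it.

```python
def spe(func):
    """Mark a function for execution on an SPE.  Here the decorator only
    tags the function; the real backend would also hand it to the code
    generator.  Removing the @spe line turns the behaviour off without
    touching the function or its callers."""
    func.run_on_spe = True
    return func

@spe
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))
```

The function still works unchanged when called directly, which is what makes this an easy on/off switch.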
The
The |
|||
Since I might be posting entries regarding my Masters project, I thought that I would provide a brief overview of the project to put it in perspective. I am doing my MSc in Electrical Engineering at UCT as part of the ACE group headed by Prof. Inggs. The group is based at the CHPC, which is part of the Meraka Institute, which in turn is part of the CSIR. The group's research is focused on developing new platforms and methods for HPC.

My project is to investigate the suitability of the Cell processor for HPC. The Cell processor is found in the PlayStation 3 and in BladeCenters, and is a very powerful processor. It achieves this by using two different types of cores: one type (the PPU) is a general purpose core capable of running an operating system, while the other (the SPU) is designed specifically to crunch numbers. The disadvantage of this architecture is that it is very difficult to program for. When using the IBM Cell SDK, the user needs to write separate programs for each type of core, and needs to manage the SPEs manually as well as take care of all memory transfers. This requires a good knowledge of the architecture, and results in a lengthy development process and unportable code.

For the Cell processor to be a successful platform in HPC, the development process must be made easier while still making efficient use of the Cell's capabilities. There are a number of commercial and non-commercial tools which aim to do this using a variety of methods. I have looked into these tools and have not found one which is both effective and open. I therefore aim to create my own platform with which to program the Cell processor. The idea is to use [Python][] as the end user language, and to make a backend which transparently runs certain functions on the SPEs. This will involve converting the functions into C or C++, adding code to manage execution on an SPE and perform the required memory transfers, compiling the result with the [GCC][] compiler, and then executing it.
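The Python-to-C translation is the research problem and isn't sketched here, but the tail of the pipeline is mundane: wrap the generated code in a C program, hand it to GCC, run the binary. A toy stand-in for the wrapping step (the real generator would also emit SPE management and DMA transfer code):

```python
def c_program(expression):
    """Wrap an integer C expression in a minimal program that prints its
    value.  The returned source would be fed to gcc (or spu-gcc for the
    SPE side) and the resulting binary executed; the backend's generated
    code is of course far more involved than this."""
    return ('#include <stdio.h>\n'
            'int main(void) {\n'
            '    printf("%d\\n", ' + expression + ');\n'
            '    return 0;\n'
            '}\n')
```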
It is quite an ambitious plan, and there are a lot of potential pitfalls. If it succeeds however, I think that it will be a very easy way to develop for the Cell processor while still having portable code. |
|||
