Personal Blog



12 Jun 2018 : GetiPlay now actually plays, too #
For some time now I've been meaning to add a proper media player to GetiPlay. Why, you may well ask, bother to do this when Sailfish already has a perfectly good media player built in? Well, there are two reasons. First, for TV and radio programmes, one of the most important controls you can have is 'jump back a few seconds'. I need this when I'm watching something and get interrupted, or miss an important bit of the narrative, or whatever. It's such a useful button, it's worth writing a completely new media player for. Second, it's just far more seamless to have it all in one application.

So I finally got to adding it in. Here's the video player screen.

The Qt framework really does make it easy to add media like this. It still took a good few days to code up of course, but it'd be a lot quicker for someone who knew what they were doing.

I'm also quite proud of the audio player, with the same super-useful '10 seconds back' button. It also keeps playing no matter where you move to in the app. Here it is, showing the controls at the bottom of the screen.

If you'd like to get these new features in your copy of GetiPlay, just download the latest version from OpenRepos, grab yourself the source from GitHub, or check out the GetiPlay page.
6 Jun 2018 : Huge GetiPlay release 0.3-1 #
I'm really pleased to release version 0.3-1 of GetiPlay, the unofficial interface for accessing BBC iPlayer stuff on Sailfish OS. This latest version is a huge update compared to previous releases, with a completely new tab-based UI and a lovely download queue so you can download multiple programmes without interruption.

Immediate info about every one of the thousands and thousands of TV and radio programmes is also now just a tap away.

Install yourself a copy from OpenRepos, grab the MIT-licensed source from GitHub or visit the GetiPlay page on this site.
30 May 2018 : My last teaching at Cambridge #
In 2016 I did my first teaching at Cambridge, and now I've just finished what is likely to be my last ever supervision at Cambridge. The course was Part IB Security (the second course out of three the students study), and as with all of the Cambridge courses, the structure is lectures and small-group supervisions (tutorials with two or three students). This term I was teaching students from St John's and Peterhouse colleges. My experience this term was made particularly good by a set of diligent and engaged students. In large classes, if there are too many questions it can become overwhelming, but with small groups there's much more scope to cover questions more deeply. Security covers a breadth of topics, from those that are quite straightforward to those that are much more conceptual, and all of the students this year were on the ball, both asking very sensible questions and answering questions for each other. That makes for a much more enjoyable teaching experience (and if you're reading this: good job; I hope you enjoyed the supervisions too).

The Computer Lab, Cambridge

So, I didn't think I'd say this, but I'll miss this teaching. I've had the privilege to experience teaching across multiple HE institutions in the UK (Oxford, Birmingham, Liverpool John Moores, Cambridge). Living up to the high teaching standards of my colleagues and what students rightfully demand has been hard across all of these, but it's been great motivation and inspiration at the same time.

And, having grown up in a household of teachers, and after twenty years in the business, I think I've now seen enough of a spectrum to understand both the importance of teaching, but also its limitations. The attitude and aptitude of students play such a crucial role in their learning. When you only get to interact with students in one small slice of their overall curriculum, there's a limit to how much you can affect this. That's not to downplay the importance of encouraging students in the right way, but rather to emphasise that teaching is a group activity. Students need good teachers across the board, and also need to bring an appetite.

It's great to teach good, enthusiastic students, and to see them grasp ideas as they're going along. But my ultimate conclusion is a rather selfish one: the best way to learn a practical subject is to do it; the best way to learn a theoretical subject is to teach it.
8 May 2018 : Finally addressing gitweb's gitosis #

My life seems to move in cycles. Back in February 2014 I set up git on my home server to host bare repositories for my personal dev projects. Up until then I'd been using Subversion on the same machine, and since most of my projects are personal this worked fine. Inevitably git became a sensible shift to make, so I set up gitolite for administration, with the Web front-end served up using gitweb.

Unfortunately, back then I couldn't get access control for the Web front-end to synchronise with gitolite. It's been a thorn in my side ever since, and left me avoiding my own server in favour of others. There were two reasons for this. First, the inability to host truly private projects was an issue. I often start projects, such as research papers where I host the LaTeX source on git, in private, but then want to make them public later, for example when the paper has been published. Second, I was just unhappy that I couldn't set things up the way I wanted. It was important for me that the access control of the Web front-end should be managed through the same config approach as used by gitolite for the git repositories themselves. Anything else just seemed backwards.

Well, I've suddenly found myself with a bit of time to look at it, and it turned out to be far easier than I'd realised. With a few global configuration changes and some edits to the repository config, it's now working as it should.

So, this isn't intended as a tutorial, but in case anyone else is suffering from the same mismatched configuration approach between gitweb and gitolite, here's a summary of how I found to set things up in a coherent way.

First, the gitweb configuration. On the server git is set up with its own user (called 'git') and with the repositories stored in the project root folder /srv/git/repositories. The gitweb configuration file is /etc/gitweb.conf. In this file, I had to add the following lines:

$projects_list = $projectroot . "/../projects.list";
$strict_export = 1;

The first tells gitweb that the Web interface should only list the projects shown in the /srv/git/projects.list file. The second tells gitweb not to allow access to any repository that's not listed, even if someone knows (or can guess) the direct URL for accessing it.

However, that projects.list file has to be populated somehow. For this, I had to edit the gitolite config file at /srv/git/.gitolite.rc. This was already set up mostly correctly (probably with info I put in it four years ago), apart from the following line, which I had to add:

$GL_GITCONFIG_KEYS = "gitweb.owner|gitweb.description|gitweb.category";

This tells gitolite that any of these three keys can validly be added to the overall gitolite configuration files, to be propagated on to the repositories. The three values are used to display the owner, description and category in the Web interface served by gitweb. However, even more importantly, any project that appears in the gitolite file with one of these variables will also be added to the projects.list file automatically.

That's great, because it means I can now add entries to my gitolite.conf that look like this:

repo    myproject
        RW+     =   flypig
        R       =   @all
        R       =   gitweb
        config gitweb.owner = flypig
        config gitweb.description = "A project which is publicly accessible"
        config gitweb.category = "Public stuff"

When these changes are pushed to the gitolite-conf repository, hey-presto! gitolite will automatically add the project to the projects.list file, and the project will be accessible through the Web interface. Remove the last four lines, and the project will go dark, hidden from external access.

It's a small change, but I'm really pleased that it's finally working properly after such a long time and I can get back to developing stuff using tools set up just the way I like them.

2 Apr 2018 : Apple believes privacy is a fundamental human right #
The latest update from Apple brought with it a rather grand statement about privacy, stating that "Apple believes privacy is a fundamental human right". So do I, as it happens, so I'm glad Apple are making it known. However, we've heard similar claims from companies like Microsoft in the past (remember Scroogled?), so I'm always sceptical when large multi-national companies that run successful advertising platforms make grand claims about their customers' privacy. Maybe it's even made me a bit cynical.

I much prefer to judge companies by their privacy policies than by their slick advertising statements, and to their credit Apple seem to be delivering on their privacy claims by putting their privacy policies right in front of their users. Unfortunately they've done it in a way that's totally unusable. The fact that all of the privacy statements are in one place is great. The fact that they're in a tiny box that doesn't allow you to export -- or even select and copy out -- all of the text, is a usability clusterfuck. Please Apple, by all means put the policy front and centre of your user interface, but provide us with a nicely formatted text file or Web page to view it all on as well.

  The Apple privacy window

If you're concerned about your privacy like me, you'll want to read through this material in full. But worry not. I've gone to the trouble of selecting each individual piece of text and pasting it into a markdown file that, I think, makes things much more readable. View the whole thing on GitHub, and if you notice any errors or changes, please submit a pull request and I'll try to keep it up-to-date.

In spite of my cynicism, I actually believe Apple, Microsoft, Google and especially Facebook take user privacy incredibly seriously. They know that the whole model is built on trust and that users will be offended if they abuse this trust. Everyone says that 'the user is the product' on platforms like Facebook, as if to suggest they don't really care about you, but all of these companies also know that their value is based on the satisfaction of their users. They have to provide a good service or users will go elsewhere. The value they get from your data is based on their ability to control your data, which means privacy is important to them.

Unfortunately, the motivation these tech companies have for protecting your data is also something that undermines your and my privacy as users of their services. Privacy is widely misunderstood as being about whether data is made public or not, whereas -- at least by one definition -- it's really about having control over who has access to information about you. By this argument a person who chooses to make all of their data public is enjoying privacy, as long as they've done it without coercion, and can change their stance later.

The tech companies have placed themselves as the means by which we maintain this control, but this means we have to trust them fully, and it also means we have to understand them fully. Privacy policies are one of the most important tools for getting this understanding. As users, we should assume that their privacy policies are the only constraint on what they'll really be willing to do with our data. Anything they write elsewhere is subordinate to the policy, and given the mixture of jurisdictions and wildly varying capabilities of oversight bodies around the world, I'd even put more weight on these policies than I would on local laws. In short, the policies are what matters, and they should be interpreted permissively.
12 Mar 2018 : Spring time at Howe Farm Zoo #
The house Joanna and I are currently renting is right on the edge of Cambridge. The city centre is due south-east, but to the north and to the west it’s just fields and the odd motorway as far as the eye can see (which, it turns out, according to Google Maps, is the Cambridge American Cemetery 2 miles away).

The view according to Google.

The view according to Google Maps

The view according to my window.

The view according to my window

Because it’s so close to the edge of the city, it’s really quite rural and as a result we share our house and garden with large numbers of other animals. It’s not unusual for rabbits, squirrels, deer and pheasants to wander around the grounds (all 100 square meters of it). What’s more, the boundary between the outside and inside of our house is distressingly porous, with insects and arachnids apparently enjoying free movement between the two.

Last night my programming was interrupted by a vicious buzzing sound. It turned out to be a queen wasp, awoken from its slumber over the winter and now angrily headbutting my light shade in a bid to head towards the sun. I’m not keen on wasp stings to be honest, so extracting it was quite a delicate exercise that involved gingerly opening and closing the door, dashing in and out of the room, turning the light on and off and chasing the wasp with a Tupperware box. I got it eventually and dragged it out into the cold; I’m sure it’ll return.

I take this to be a clear sign that spring has arrived. The turning of the seasons marks the four points of the year I love most, so I’m excited by this. Other signs that we’re reaching spring include the spiders that have started stalking me during my morning showers, and the arrival of beautiful clumps of daffodils on the lawn in our garden. So, roll on spring I say. Let’s get the dull winter behind us and start to sprout.

Daffodils in the garden

6 Mar 2018 : Beauty and the User Agent String #
Thanks to OSNews for linking to this great article about the messed up history of the Browser User Agent String. There's a moral in this story somewhere, but only if you can overcome the immediate feeling of despair about human progress this article induces.
25 Feb 2018 : Being successful as a thief #
Ars has a great video interviewing Paul Neurath about the troubled development of Thief. I loved sneaking around in the Thief games, from the original right through Deadly Shadows and up to the latest remake. But apart from a wonderful excuse to replay the games in my head, the real message of the video is about the challenges and time pressures of development, something I'm acutely aware of right now with Pico.

"You have to make mistakes. You try things, you go down a lot of dead ends. In this case a lot of those dead ends didn't pan out. But we were learning... That was the key thing. We finally had the mental model after doggedly pursuing this for a year. Now we know what we need to do to get this done and we figured it out and got it done."

When I was young, game developers were my heroes. It's good to know that such an inspirational series of games suffered failures and challenges, but still came out as the amazing games they were. We're all working towards the moments they experienced, when "it worked and it felt great."

12 Feb 2018 : Countdown #
I'm not convinced it was good use of my time, but I spent the weekend writing some code to solve the Countdown numbers game. In case you're not familiar with Countdown, here's a clip.

There are lots of ways to do this, but my solution hinges on being able to enumerate all binary trees with a given number of nodes. Doing this efficiently (both in terms of time and memory) turned out to be tricky, and there's a hinge for this too, based on how the trees are represented. The key is to note that each layer can't have more than n nodes, where n is the number of nodes the tree can have overall.

Each tree is stored as a list, with each item in the list representing the nodes at a given depth in the tree (a layer). Each item is a bit sequence representing which nodes in the layer have children.

These bit sequences would get long quickly if they represented every possible node in a layer (since there are 2, 4, 8, 16, 32, ... possible nodes at each layer). Instead, the index of the bit represents the index into the actual nodes in the layer, rather than the possible nodes. This greatly limits the length of the bit sequence, because there can be no more than n actual nodes in each layer, and there can be no more than n layers in total. The memory requirement is therefore n².

Here's an example:
T = [[1], [1, 1], [1, 1, 0, 0], [0, 1, 0, 0]]
which represents a tree like this:

A binary tree

It's really easy to cycle through all of these, because you can just enumerate each layer individually, which involves cycling through all sequences of binary strings.
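To make the encoding concrete, here's a small sanity-check helper (my own sketch, with a name I've made up; it's not part of the code on GitHub). Because the tree is binary, every 1 in a layer accounts for exactly two slots in the layer below, so the node count falls straight out of the layer lengths.

```python
def total_nodes(tree):
    """Count the nodes encoded by the layer-list representation.

    Each layer is a list of bits; bit i is 1 when the i-th actual
    node in that layer has children. Every flagged node contributes
    exactly two entries to the layer below.
    """
    for parent, child in zip(tree, tree[1:]):
        # each 1 in the parent layer accounts for two slots below it
        assert len(child) == 2 * sum(parent)
    return sum(len(layer) for layer in tree)

T = [[1], [1, 1], [1, 1, 0, 0], [0, 1, 0, 0]]
print(total_nodes(T))  # 11
```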

It's not a new problem, but it was a fun exercise to figure out.

The code is up on GitHub in Python if you want to play around with it yourself.
30 Sep 2017 : Connecting to an iPhone using BLE and gatttool #

On the Pico project we've recently been moving from Bluetooth Classic to BLE. We have multiple motivations for this, and not just the low energy promise. In addition, BLE provides RSSI values, which means we can control proximity detection better, and frankly, Bluetooth has been causing us a lot of reliability problems that we've had to work around. We're hoping BLE will work better. Finally, we're developing an iPhone client. iOS, it seems, doesn't properly support Bluetooth Classic, so BLE is our best option for cross-platform compatibility.

One of the challenges developing anything that uses a protocol built on top of some transport is that typically both ends of the protocol have to be developed simultaneously. This slows things down, especially when we're trying to distribute tasks across several developers. So we were hoping to use gatttool, part of bluez on Linux, as an intermediate step, to allow us to check the initial iOS BLE code worked before moving on to the Pico protocol proper.

So, here's a quick summary of how we used gatttool to write characteristics to the iPhone.

One point to note is that things weren't smooth for us. In retrospect we had the iPhone correctly running as a BLE peripheral, but we had real trouble connecting. I'll explain how we fixed this too.

Writing to a BLE peripheral using bluez is a four step process:

  1. Scan for the device using hcitool.
  2. Having got the MAC from the scan, connect to it using gatttool.
  3. Find the handle of the characteristic you want to write to.
  4. Perform the write.

The first step, scanning for a device, can be done using the following command.

sudo hcitool -i hci0 lescan

This command uses hcitool to perform a BLE scan (lescan) using the local device (-i hci0). If you have more than one Bluetooth adaptor, you may want to specify the use of something other than hci0.

When we first tried this, we kept on getting input/output errors, even when run as root. I don't know why this was, but eventually we found a solution:

sudo hciconfig hci0 down
sudo hciconfig hci0 up

Not very elegant, but it seemed to work. After this, the scan started throwing up results.

flypig@delphinus:~$ sudo hcitool -i hci0 lescan
LE Scan ...
58:C4:C5:1F:C7:70 (unknown)
58:C4:C5:1F:C7:70 Pico's iPhone
CB:A5:42:40:F8:68 (unknown)
58:C4:C5:1F:C7:70 (unknown)
58:C4:C5:1F:C7:70 Pico's iPhone

Note the repeated entries. The device I was interested in was "Pico's iPhone", where we were running our test app. On other occasions when I've performed the scan, the iPhone MAC address came up, but without the name (marked as "unknown"). Again, I don't know why this is, but just trying the MACs eventually got me connected to the correct device.

Having got the MAC, now it's time to connect (step 2).

sudo gatttool -t random -b 58:C4:C5:1F:C7:70 -I

What's this all about? Here we're using gatttool to connect to the remote device using its Bluetooth address (-b 58:C4:C5:1F:C7:70). Obviously if you're doing this at home you should use the correct MAC which is likely to be different from this. Our iPhone is using a random address type, so we have to specify this too (-t random). Finally, we set it to interactive mode with -I. This will open gatttool's own command console so we can do other stuff.

If everything goes well, the console prompt will change to include the MAC address.


So far we've only set things up and not actually connected. So we should connect.

[58:C4:C5:1F:C7:70][LE]> connect
Attempting to connect to 58:C4:C5:1F:C7:70
Connection successful

Great! Now there's a time problem. The iPhone will throw us off this connection after only a few seconds. If it does, enter 'connect' again to re-establish the connection. There's another catch though, so be careful: the iPhone will also periodically change its MAC address. If it does, you'll need to exit the gatttool console (Ctrl-D), rescan and then reconnect to the device as above.

Having connected, we want to know what characteristics are available, which we do by entering 'characteristics' at the console.

[58:C4:C5:1F:C7:70][LE]> characteristics
handle: 0x0002, char properties: 0x02, char value handle: 0x0003, uuid: 00002a00-0000-1000-8000-00805f9b34fb
handle: 0x0004, char properties: 0x02, char value handle: 0x0005, uuid: 00002a01-0000-1000-8000-00805f9b34fb
handle: 0x0007, char properties: 0x20, char value handle: 0x0008, uuid: 00002a05-0000-1000-8000-00805f9b34fb
handle: 0x000b, char properties: 0x98, char value handle: 0x000c, uuid: 8667556c-9a37-4c91-84ed-54ee27d90049
handle: 0x0010, char properties: 0x98, char value handle: 0x0011, uuid: af0badb1-5b99-43cd-917a-a77bc549e3cc
handle: 0x0034, char properties: 0x12, char value handle: 0x0035, uuid: 00002a19-0000-1000-8000-00805f9b34fb
handle: 0x0038, char properties: 0x12, char value handle: 0x0039, uuid: 00002a2b-0000-1000-8000-00805f9b34fb
handle: 0x003b, char properties: 0x02, char value handle: 0x003c, uuid: 00002a0f-0000-1000-8000-00805f9b34fb
handle: 0x003e, char properties: 0x02, char value handle: 0x003f, uuid: 00002a29-0000-1000-8000-00805f9b34fb
handle: 0x0040, char properties: 0x02, char value handle: 0x0041, uuid: 00002a24-0000-1000-8000-00805f9b34fb
handle: 0x0043, char properties: 0x88, char value handle: 0x0044, uuid: 69d1d8f3-45e1-49a8-9821-9bbdfdaad9d9
handle: 0x0046, char properties: 0x10, char value handle: 0x0047, uuid: 9fbf120d-6301-42d9-8c58-25e699a21dbd
handle: 0x0049, char properties: 0x10, char value handle: 0x004a, uuid: 22eac6e9-24d6-4bb5-be44-b36ace7c7bfb
handle: 0x004d, char properties: 0x98, char value handle: 0x004e, uuid: 9b3c81d8-57b1-4a8a-b8df-0e56f7ca51c2
handle: 0x0051, char properties: 0x98, char value handle: 0x0052, uuid: 2f7cabce-808d-411f-9a0c-bb92ba96c102
handle: 0x0055, char properties: 0x8a, char value handle: 0x0056, uuid: c6b2f38c-23ab-46d8-a6ab-a3a870bbd5d7
handle: 0x0059, char properties: 0x88, char value handle: 0x005a, uuid: eb6727c4-f184-497a-a656-76b0cdac633b

In this case, there are many characteristics, but the one we're interested in is the last one, with UUID 'eb6727c4-f184-497a-a656-76b0cdac633b'. We know this is the one we're interested in, because this was the UUID we used in our iPhone app. We set this up to be a writable characteristic, so we can also write to it.
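As an aside, the 'char properties' values in that listing are bitmasks, with the bit meanings defined in the Bluetooth Core specification. A quick sketch in Python (my own helper, not part of bluez) decodes them, confirming that our characteristic's 0x88 includes the write bit:

```python
# GATT characteristic property bits, per the Bluetooth Core spec
PROPS = {
    0x01: "broadcast",
    0x02: "read",
    0x04: "write-without-response",
    0x08: "write",
    0x10: "notify",
    0x20: "indicate",
    0x40: "authenticated-signed-writes",
    0x80: "extended-properties",
}

def decode_properties(mask):
    """Return the property names set in a gatttool properties bitmask."""
    return [name for bit, name in sorted(PROPS.items()) if mask & bit]

print(decode_properties(0x88))  # ['write', 'extended-properties']
```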

[58:C4:C5:1F:C7:70][LE]> char-write-req 0x005a 5069636f205069636f205069636f205069636f205069636f205069636f205069636f20
Characteristic value was written successfully

Success! On the iPhone side, we set it up to output the characteristic to the log if it was written to. So we see the following.

2017-09-29 20:09:21.206744+0100 Pico[241:25455] QRCodeReader:start()
2017-09-29 20:09:21.802875+0100 Pico[241:25455] BLEPeripheral: State Changed
2017-09-29 20:09:21.803002+0100 Pico[241:25455] BLEPeripheral: Powered On
2017-09-29 20:09:22.801024+0100 Pico[241:25455] BLEPeripheral:start()
2017-09-29 20:10:01.122027+0100 Pico[241:25455] BLE received: Pico Pico Pico Pico Pico Pico Pico

Where did all those 'Pico's come from? That's the value we wrote in, but in hexadecimal ASCII:

50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20
P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o    
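That mapping is easy to double-check, for example with a couple of lines of Python:

```python
# The value written with char-write-req, as a hex string
payload = (
    "5069636f205069636f205069636f2050"
    "69636f205069636f205069636f205069"
    "636f20"
)
print(bytes.fromhex(payload).decode("ascii"))  # Pico Pico Pico Pico Pico Pico Pico
```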

So, to recap, the following is the command sequence we used.

sudo hciconfig hci0 down
sudo hciconfig hci0 up
sudo hcitool -i hci0 lescan
sudo gatttool -t random -b 58:C4:C5:1F:C7:70 -I
[LE]> connect
[LE]> characteristics
[LE]> char-write-req 0x005a 5069636f205069636f205069636f205069636f205069636f205069636f205069636f20

When it's working, my experience is that gatttool works well. But BLE is a peculiar paradigm, very different from general networking, and it offers lots of opportunities for confusion.

30 May 2017 : Catastrophic success #
I’ve been using computers in a serious way for the last 32 years and have been taking backup seriously for about half of that. Starting with backup to CD-RW in 2002, then to a removable disk-caddy a few years later, and a USB hard drive in 2007. For most of that time I’ve been aware of the importance of off-site backups, but it wasn’t until October last year that I actually started doing it. Now my machines all perform weekly incremental backups to my home server, which are then in turn client-side encrypted and transferred to Amazon S3.
CD backup in 2002 Hard drive backup in 2007

Despite all of this effort I’ve never had to resort to restoring any of these backups. It’s surprising to think that over all this time, none of my hard drives have ever failed catastrophically.

That was until last Thursday, when I arrived home to discover Constantia, my home server, had suffered a serious failure due to a sequence of power cuts during the day. A bit of prodding made clear that it was the hard drive that had failed. I rely heavily on Constantia to manage my diary, cloud storage, git repos, DNS lookup and so on, so this was a pretty traumatic realisation. On Friday I ordered a replacement hard drive, which arrived Sunday morning.

Luckily Constantia has her operating system on a separate solid state drive, so with a bit of fiddling with fstab I was able to get her to boot again, allowing me to install and format the new drive. I then started the process of restoring the backup from S3.

Backup in progress
Thirteen hours and 55 minutes later, the restore is complete. Astonishingly, Constantia is now as she was before the backup. Best practice is to test not just your backup process regularly, but your restore process as well. But it’s a time-consuming and potentially dangerous process in itself, so I’m not proud to admit that this was the first time I’d attempted a restore. I’m therefore happy and astonished to say that it worked flawlessly. It’s as if I turned Constantia off and then three days later turned her back on again.

Credit goes to the duplicity and Déjà Dup authors. Your hard work made my life so much easier. What could have been hugely traumatic turned out to be just some lost time. On the other hand, it also puts into perspective other events that have been happening this weekend. BA also suffered a power surge which took out its systems on Saturday morning. It took them two days to get their 500 machines spread across two data centres back up and running, while it took me three days to get my one server restored.
28 May 2017 : Catastrophic failure #
A series of power cuts last Thursday left Constantia, my home server, in a sorry state. On start-up, she would make a sort of repeating four-note melody, then crash out to a recovery terminal.

Constantia is poorly

I've subsequently discovered that the strange noises were from the hard drive failing, presumably killed by the repeated power outages. A replacement hard drive arrived this morning (impressively on a Sunday, having been ordered from Amazon Friday evening), which I'm in the process of restoring a backup onto.

Old drive on the left, new drive on the right

Right now I'm apprehensive to say the least. This is the first real test of my backup process, which stores encrypted snapshots on Amazon S3 using Déjà Dup. If it works, I'll be happy and impressed, but I'm preparing myself for trouble.

When I made the very first non-incremental backup of Constantia to S3 it took four days. I'm hoping restoring will be faster.
6 May 2017 : Detectorists #
Last week while away in Paris at EuroUSEC I received a distraught phone call from Joanna. She'd been mowing the lawn (reason enough for distress in itself) and in the process lost her engagement ring. She was pretty upset to be honest, which made me upset being so far away and not able to help. The blame, it transpired, could be traced back to the stinging nettles in our garden. Joanna had been stung while clearing them and moved the ring onto her right hand as a result. That left it more loose than usual, and it probably then fell off while bailing grass cuttings.

We determined to search and find the ring when I got back, and as a backup plan we'd source a metal detector and try that if it came to it. Having seen every episode of Detectorists and loved them, we knew this would work. Secretly, neither of us were quite so certain.

Our unaided search proved fruitless. We scoured the garden over the whole weekend, but ultimately decided our rudimentary human senses weren't going to cut it. We ordered a £30 metal detector from Amazon. In case you're not familiar with the metal-detector landscape, that really is at the bottom end of the market. We weren't really prepared to pay more for something we anticipated using only once, and that might anyway turn out to be pointless. As you can see, we really didn't fancy our chances.

We used the metal detector for a bit, but again, didn't seem to be getting anywhere. It would happily detect my silver wedding ring, and buzzed aggressively when I swooshed it too close to my shoes (metal toe caps; they confuse airport security no end as well), but finding anything other than my feet was proving to be a lot harder.
We discovered that the detector doesn't just detect metal in general, but can differentiate between different types of metal depending on how it's configured. Joanna's ring is white gold, not silver, so we had to find another piece of white gold in the house to test it on.

Soon after that we started to uncover treasure. First a scrunched-up piece of aluminium foil buried a few centimetres under our lawn. Then a rusty corner of a piece of old iron sheeting about 5mm thick, buried some 10cm below the ground. As you can imagine, we were feeling a lot more confident after having found some real treasure.
And then, just a few minutes later, the detector buzzed again and scrabbling through the grass cuttings revealed Joanna's lost engagement ring, lost no more.

We were pretty chuffed with ourselves. And we were pretty chuffed with the metal detector. If the Detectorists taught us anything, it's that finding treasure is hard. Granted our treasure-hunting creds are somewhat undermined by us having lost the treasure in the first place, but we found the treasure nonetheless. And it was gold we found, so justification enough for us to perform a small version of the gold dance.
Joanna found a white-gold ring... ...while I found a rusty old sheet of iron
15 Apr 2017 : Terrible computing choices #
I've just done a terrible thing. For literally months I've been planning my next laptop upgrade, weighing the alternatives and comparing specs. This will end up being my daily workhorse, and these aren't cheap machines, so it's worth getting it right. I narrowed it down to two different devices: the Dell XPS 13 and the Razer Blade Stealth.
Razer Blade Stealth Dell XPS 13

Physically the RBS is a beautifully crafted device, small and light but with a solidity and finish that left me drooling when I handled it in the Razer store in San Francisco. In comparison the XPS is dull and uninspiring. It's competently made for sure, but suffers from the sort of classic PC over-design that makes the Apple-crowd smug. For the record if I owned an RBS I'd find it hard to hide my smugness.

The XPS is indisputably the better machine. It has a larger screen in a smaller chassis and a much better battery life all for a slightly lower price. In spite of this, the excitement of the RBS won out over the cold hard specs of the XPS. The Dell is simply not an exciting machine in the same way as the RBS with its magically colourful keyboard.

Why then, after all this, have I just gone and ordered the Dell? After making my decision to buy the RBS I dug deeper into how to run Linux on it. The Web reports glitches with a flickering screen, dubious Wi-fi drivers, crashing caps-lock keys and broken HDMI output. On the other hand, Dell supports Ubuntu as a first-class OS, which reassures me that the experience will be glitch-free.

After months of deliberation I chose specs over beauty, which I fear may mean I've finally strayed into adulthood. It feels like a terrible decision, while at the same time almost certainly being the right decision. Clearly I'm still not convinced I made the right choice, but at least I finally did.


Razer Blade Stealth
  CPU: 3.5GHz Intel Core i7-7500U
  RAM: 16GB, 1866MHz LPDDR3
  Graphics: Intel HD620
  Resolution: 3840 x 2160
  Backlit keyboard: Whoa yes
  Ports: USB-C, 2 x USB-3, HDMI, 3.5mm
  Looks: Real nice
  Linux compat: Unsupported, glitches

Dell XPS 13
  CPU: 3.5GHz Intel Core i7-7500U
  RAM: 16GB, 1866MHz LPDDR3
  Graphics: Intel HD620
  Resolution: 3200 x 1800
  Ports: USB-C, 2 x USB-3, SD card, 3.5mm, AC
  Looks: Dull :(
  Linux compat: Officially supported

20 Mar 2017 : Rise of the Tomb Raider #
Rise of the Tomb Raider was released for PC over a year ago now, so it's about time I got back on track with my quest to complete all the Tomb Raider games. After scouring caverns, military bases, villages and, well, tombs, for artefacts and challenges, I've finally got there again.
It was a good game as always, not as tight as the originals but enjoyable and kept me searching for treasure. Perhaps the biggest surprise was to find myself chasing chickens through tombs as the ultimate game finale.

Here it is, added to my ongoing list of completed Croft games, previously updated a few years back now.
  • Tomb Raider.
  • Unfinished Business and Shadow of the Cat.
  • Tomb Raider II: Starring Lara Croft.
  • Tomb Raider III: Adventures of Lara Croft.
  • The Golden Mask.
  • Tomb Raider: The Last Revelation.
  • Tomb Raider: The Lost Artefact.
  • Tomb Raider Chronicles.
  • Tomb Raider: The Angel of Darkness.
  • Tomb Raider Legend.
  • Tomb Raider Anniversary.
  • Tomb Raider Underworld.
  • Lara Croft and the Guardian of Light.
  • Tomb Raider (reboot).
  • Lara Croft and the Temple of Osiris.
  • Rise of the Tomb Raider.
And, because chickens don't make for the most visually-stunning screenshots, here's a spectacular vista from the section in Syria, including obligatory lens flare and carefully undisturbed artefact.

Classic Tomb Raider beauty
10 Mar 2017 : Minor Pico victories #
Late last night (or more correctly this morning) my SailfishOS phone completed its first ever successful authentication with my laptop using Pico over Bluetooth. A minor, but very fulfilling, victory. One step closer to making Pico a completely seamless part of my everyday life.

Authentication-wrangling results
4 Mar 2017 : A tale of woe: failing to heed the certificate-pinning warnings #
As I mentioned previously, last month I discovered rather abruptly that Firefox had revoked the StartCom root certificate used to sign the TLS certificate on my site. Ouch. To ease the pain, I planned to move over to using Let's Encrypt, a free service that will automatically generate a new certificate for my site every few months. Both StartCom and Let's Encrypt use a similar technique: they verify only that I have control over the apache2 user on my server by demonstrating that I can control the contents of the site. But the pain hurt particularly badly because I'd been using certificate-pinning, which essentially prevents me using any other certificates apart from a small selection that I keep as backups. Let's Encrypt doesn't give you control over the certificates it signs. The result: anyone who visited my site in the last month (of which there are no doubt countless millions) would be locked out of it. It's the certificate-pinning nightmare everyone warns you about. So I ratcheted the pinning down from a month to 60 seconds and waited for browsers across the world to forget my previously-pinned certificate.
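For a sense of what a pin actually is: an HPKP pin-sha256 value is just the base64-encoded SHA-256 digest of a certificate's public key. Here's a sketch of computing one with the openssl command line, using a throwaway self-signed certificate as a stand-in for a real one (the filenames and subject are made up for the example):

```shell
# Generate a disposable self-signed certificate to play with
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=example" -keyout key.pem -out cert.pem 2>/dev/null

# Extract the public key, DER-encode it, hash it and base64 the result:
# this is the pin-sha256 value a Public-Key-Pins header would carry
openssl x509 -in cert.pem -pubkey -noout \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -binary \
    | base64
```

Swap cert.pem for a real certificate (or a backup key, which is what the pinning spec expects you to keep) to generate the values for an actual header.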
Today, the 30 days finally expired. In theory, my previously pinned certificates are no longer in force and it's safe for me to switch over to Let's Encrypt. And so this is what I've done.
Check for yourself by visiting this site and clicking the little green padlock that appears in the address bar. Depending on the browser, it should state that it's a secure connection, verified by Let's Encrypt.
Does the stark black-and-white page render beautifully? Then great! Does it say the certificate has expired, is invalid, or has been revoked? Well, then I guess I screwed up, so please let me know.
I didn't really learn my lesson though. In my desperate need to get a good security score, I've turned certificate-pinning back on (thanks Henrik Lilleengen for leading me astray). Nothing could possibly go wrong this time, right?
22 Feb 2017 : Fedora’s horribly hobbled OpenSSL implementation #
For reasons best known to their lawyers, Red Hat have chosen to hobble their implementation of OpenSSL. According to a related bug, possible patent issues have led them to remove a large number of the elliptic curve parametrisations, as you can see by comparing the curves supported on Fedora 25:
[flypig@blaise ~]$ openssl ecparam -list_curves
  secp256k1 : SECG curve over a 256 bit prime field
  secp384r1 : NIST/SECG curve over a 384 bit prime field
  secp521r1 : NIST/SECG curve over a 521 bit prime field
  prime256v1: X9.62/SECG curve over a 256 bit prime field
with those supported on Ubuntu 16.04:
flypig@Owen:~$ openssl ecparam -list_curves
  secp112r1 : SECG/WTLS curve over a 112 bit prime field
  secp112r2 : SECG curve over a 112 bit prime field
  secp128r1 : SECG curve over a 128 bit prime field
  secp128r2 : SECG curve over a 128 bit prime field
  secp160k1 : SECG curve over a 160 bit prime field
  secp160r1 : SECG curve over a 160 bit prime field
  secp160r2 : SECG/WTLS curve over a 160 bit prime field
  secp192k1 : SECG curve over a 192 bit prime field
  secp224k1 : SECG curve over a 224 bit prime field
  secp224r1 : NIST/SECG curve over a 224 bit prime field
  secp256k1 : SECG curve over a 256 bit prime field
  secp384r1 : NIST/SECG curve over a 384 bit prime field
  secp521r1 : NIST/SECG curve over a 521 bit prime field
  prime192v1: NIST/X9.62/SECG curve over a 192 bit prime field
  prime192v2: X9.62 curve over a 192 bit prime field
  prime192v3: X9.62 curve over a 192 bit prime field
  prime239v1: X9.62 curve over a 239 bit prime field
  prime239v2: X9.62 curve over a 239 bit prime field
  prime239v3: X9.62 curve over a 239 bit prime field
  prime256v1: X9.62/SECG curve over a 256 bit prime field
  sect113r1 : SECG curve over a 113 bit binary field
  sect113r2 : SECG curve over a 113 bit binary field
  sect131r1 : SECG/WTLS curve over a 131 bit binary field
  sect131r2 : SECG curve over a 131 bit binary field
  sect163k1 : NIST/SECG/WTLS curve over a 163 bit binary field
  sect163r1 : SECG curve over a 163 bit binary field
  sect163r2 : NIST/SECG curve over a 163 bit binary field
  sect193r1 : SECG curve over a 193 bit binary field
  sect193r2 : SECG curve over a 193 bit binary field
  sect233k1 : NIST/SECG/WTLS curve over a 233 bit binary field
  sect233r1 : NIST/SECG/WTLS curve over a 233 bit binary field
  sect239k1 : SECG curve over a 239 bit binary field
  sect283k1 : NIST/SECG curve over a 283 bit binary field
  sect283r1 : NIST/SECG curve over a 283 bit binary field
  sect409k1 : NIST/SECG curve over a 409 bit binary field
  sect409r1 : NIST/SECG curve over a 409 bit binary field
  sect571k1 : NIST/SECG curve over a 571 bit binary field
  sect571r1 : NIST/SECG curve over a 571 bit binary field
  c2pnb163v1: X9.62 curve over a 163 bit binary field
  c2pnb163v2: X9.62 curve over a 163 bit binary field
  c2pnb163v3: X9.62 curve over a 163 bit binary field
  c2pnb176v1: X9.62 curve over a 176 bit binary field
  c2tnb191v1: X9.62 curve over a 191 bit binary field
  c2tnb191v2: X9.62 curve over a 191 bit binary field
  c2tnb191v3: X9.62 curve over a 191 bit binary field
  c2pnb208w1: X9.62 curve over a 208 bit binary field
  c2tnb239v1: X9.62 curve over a 239 bit binary field
  c2tnb239v2: X9.62 curve over a 239 bit binary field
  c2tnb239v3: X9.62 curve over a 239 bit binary field
  c2pnb272w1: X9.62 curve over a 272 bit binary field
  c2pnb304w1: X9.62 curve over a 304 bit binary field
  c2tnb359v1: X9.62 curve over a 359 bit binary field
  c2pnb368w1: X9.62 curve over a 368 bit binary field
  c2tnb431r1: X9.62 curve over a 431 bit binary field
  wap-wsg-idm-ecid-wtls1: WTLS curve over a 113 bit binary field
  wap-wsg-idm-ecid-wtls3: NIST/SECG/WTLS curve over a 163 bit binary field
  wap-wsg-idm-ecid-wtls4: SECG curve over a 113 bit binary field
  wap-wsg-idm-ecid-wtls5: X9.62 curve over a 163 bit binary field
  wap-wsg-idm-ecid-wtls6: SECG/WTLS curve over a 112 bit prime field
  wap-wsg-idm-ecid-wtls7: SECG/WTLS curve over a 160 bit prime field
  wap-wsg-idm-ecid-wtls8: WTLS curve over a 112 bit prime field
  wap-wsg-idm-ecid-wtls9: WTLS curve over a 160 bit prime field
  wap-wsg-idm-ecid-wtls10: NIST/SECG/WTLS curve over a 233 bit binary field
  wap-wsg-idm-ecid-wtls11: NIST/SECG/WTLS curve over a 233 bit binary field
  wap-wsg-idm-ecid-wtls12: WTLS curvs over a 224 bit prime field
  Oakley-EC2N-3:
    IPSec/IKE/Oakley curve #3 over a 155 bit binary field.
    Not suitable for ECDSA.
    Questionable extension field!
  Oakley-EC2N-4:
    IPSec/IKE/Oakley curve #4 over a 185 bit binary field.
    Not suitable for ECDSA.
    Questionable extension field!
  brainpoolP160r1: RFC 5639 curve over a 160 bit prime field
  brainpoolP160t1: RFC 5639 curve over a 160 bit prime field
  brainpoolP192r1: RFC 5639 curve over a 192 bit prime field
  brainpoolP192t1: RFC 5639 curve over a 192 bit prime field
  brainpoolP224r1: RFC 5639 curve over a 224 bit prime field
  brainpoolP224t1: RFC 5639 curve over a 224 bit prime field
  brainpoolP256r1: RFC 5639 curve over a 256 bit prime field
  brainpoolP256t1: RFC 5639 curve over a 256 bit prime field
  brainpoolP320r1: RFC 5639 curve over a 320 bit prime field
  brainpoolP320t1: RFC 5639 curve over a 320 bit prime field
  brainpoolP384r1: RFC 5639 curve over a 384 bit prime field
  brainpoolP384t1: RFC 5639 curve over a 384 bit prime field
  brainpoolP512r1: RFC 5639 curve over a 512 bit prime field
  brainpoolP512t1: RFC 5639 curve over a 512 bit prime field
I only discovered this when trying to build a libpico rpm. The missing curves cause particular problems for Pico, because we use prime192v1 for our implementation of the Sigma-I protocol. Getting around this is awkward, since we don’t have a crypto-negotiation step (maybe there’s a lesson there, although protocol negotiation is also a source of vulnerabilities).
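If you want to check whether your own OpenSSL build is affected, a quick grep over the curve list does it; prime192v1 is the one Pico cares about, but the same check works for any curve name:

```shell
# List the supported curves and test for the one we need
if openssl ecparam -list_curves | grep -q 'prime192v1'; then
    echo "prime192v1 supported"
else
    echo "prime192v1 missing - looks like a hobbled build"
fi
```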
There’s already a bug report covering the missing curves, but given that the situation has persisted since at least 2007 and remains unresolved, it seems unlikely Red Hat’s lawyers will relent any time soon. They’ve added the 256-bit prime field version since this was licensed by the NSA, but the others remain AWOL.
Wikipedia shows the various patents expiring around 2020. Until then, one way to address the problem is to build your own OpenSSL RPM without all of the disabled code. Daniel Pocock produced a nice tutorial back in 2013, but this was for Fedora 19 and OpenSSL 1.0.1e. Things have now moved on and his patch no longer works correctly, so I’ve updated his steps to cover Fedora 25.
Check out my blog post about it if you want to code along.
22 Feb 2017 : Building an unhobbled OpenSSL 1.0.2j RPM for Fedora 25 #
For most people it makes sense to use the latest (at time of writing) 1.0.2k version of OpenSSL on Fedora 25 (in which case, see my other blog post). However, if for some reason you need a slightly earlier build (version 1.0.2j to be precise), then you can switch out the middle part of the process I wrote about for 1.0.2k with the following set of commands.
# Install the fedora RPM with all the standard Red Hat patches
cd ~/rpmbuild/SRPMS
rpm -i openssl-1.0.2j-1.fc25.src.rpm
# Install the stock OpenSSL source which doesn’t have the ECC code removed
# Patch the spec file to avoid all of the nasty ECC-destroying patches
cd ../SPECS
patch -p0 <
# And build
rpmbuild -bb openssl.spec
And to install the resulting RPMs:
cd ~/rpmbuild/RPMS/$(uname -i)
rpm -Uvh --force openssl-1.0.2j*rpm openssl-devel-1.0.2j*rpm openssl-libs-1.0.2j*rpm
I’m not sure why you might want to use 1.0.2j over 1.0.2k, but since I already had the patch lying around, it seemed sensible to make it available.
22 Feb 2017 : Building an unhobbled OpenSSL 1.0.2k RPM for Fedora 25 #
Fedora’s OpenSSL build is actually a cut-down version with many of the elliptic curve features removed due to patent concerns. These are available in stock OpenSSL and in other distros such as Ubuntu, so it’s a pain they’re not available in Fedora. Daniel Pocock provided a nice tutorial on how to build an RPM that restores the functionality, but it’s a bit old now (Fedora 19, 2013) and generated errors when I tried to follow it more recently. Here’s an updated process that’ll work for OpenSSL 1.0.2k on Fedora 25.
Prepare the system
Remove the existing openssl-devel package and install the dependencies needed to build a new one. These all have to be done as root (e.g. by adding sudo to the front of them).
rpm -e openssl-devel
dnf install rpm-build krb5-devel zlib-devel gcc gmp-devel \
  libcurl-devel openldap-devel NetworkManager-devel \
  NetworkManager-glib-devel sqlite-devel lksctp-tools-devel \
  perl-generators rpmdevtools
Set up an rpmbuild environment
If you don’t already have one, something like this should do the trick.
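For reference, rpmdev-setuptree (from the rpmdevtools package installed above) sets this up automatically; the manual equivalent is just a handful of directories plus a macro telling rpmbuild where to find them:

```shell
# Create the standard rpmbuild directory layout in your home directory
mkdir -p ~/rpmbuild/BUILD ~/rpmbuild/RPMS ~/rpmbuild/SOURCES \
         ~/rpmbuild/SPECS ~/rpmbuild/SRPMS

# Point rpmbuild at the new tree
echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros
```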
Obtain the packages and build
The following will download the sources and apply a patch to reinstate the ECC functionality. This is broadly the same as Daniel's, but with more recent package links and an updated patch to work with them.
# Install the fedora RPM with all the standard Red Hat patches
cd ~/rpmbuild/SRPMS
rpm -i openssl-1.0.2k-1.fc25.src.rpm
# Install the stock OpenSSL source which doesn’t have the ECC code removed
# Patch the spec file to avoid all of the nasty ECC-destroying patches
cd ../SPECS
patch -p0 <
# And build
rpmbuild -bb openssl.spec
Install the OpenSSL packages
cd ~/rpmbuild/RPMS/$(uname -i)
rpm -Uvh --force openssl-1.0.2k*rpm openssl-devel-1.0.2k*rpm openssl-libs-1.0.2k*rpm
Once this has completed, your ECC functionality should be restored. You can check by entering
openssl ecparam -list_curves
to list the curves your currently installed package supports. That should be it. In case you want to use the slightly older 1.0.2j version of OpenSSL, you can follow my separate post on the topic.
24 Dec 2016 : You are old, Acer Laptop #
"You are old, Acer Laptop" this blog-writer wrote,
"And your battery has become rather shite;
And yet you incessantly compile all this code –
Do you think, at your age, it is right?"

"In my youth," Acer Laptop replied to the man,
"I feared it might injure my core;
But now that I'm perfectly sure I have none
Why, I do it much more than before."

"You are old," said the man, "As I mentioned before,
And have grown most uncommonly hot;
Yet you render in Blender in HD or more –
Pray, don't you think that's rather a lot?"

"In my youth," said the Acer, as he wiggled his lid,
"I kept all my ports very supple
By the use of this app—installed for a quid—
Allow me to sell you a couple?"

"You are old," said the man, "And your threading's too weak
For anything tougher than BASIC;
Yet you ran Java 5 with its memory leak –
Pray, how do you manage to face it?"

"In my youth," said the laptop, "I took a huge risk,
And argued emacs over vim;
And the muscular strength which it gave my hard disk,
Has lasted through thick and through thin."

"You are old," said the man, "one can only surmise
That your circuits are falling apart;
Yet you balanced a bintree of astonishing size—
What made you so awfully smart?"

"I have answered three questions, now leave me alone,"
Said the Acer; "It's true I'm not brand new!
Do you think I'm like Siri on a new-fangled phone?
Be off, or I'll have to unfriend you!"

My current laptop is getting a bit long-in-the-tooth. It's an Acer Aspire S7 which Joanna and I bought cheap a couple of years ago as an ex-display machine. It's a thin, light ultrabook that's worked really well with Linux and still feels powerful enough to use as my main development machine. Amongst its excellent qualities, the only two negatives have been a rather loud fan, and a less-than-perfect keyboard.

Still, it's getting a bit worn-out now and I've used it so much some of the keys have worn through to the backlight. I've also noticed some very appealing ultrabook releases recently, including the new Acer Swift 7 and the Asus Zenbook 3. Both of these hit important milestones, with the Swift being less than 1cm thick, and the Zenbook coming in at under 1kg in weight. Impressive stuff.

My rather bruised keyboard

With these two releases having piqued my interest, and with my current machine due for renewal, it seemed like a good time to reassess the ultrabook landscape and figure out whether I can justify getting a new machine.

Most manufacturers now offer some impressive ultrabook designs. HP has its Elitebook and Spectre ranges, Apple's MacBook Pro now falls firmly within the category, Dell has the XPS devices and Razer is a newcomer to the ultrabook party with its new Blade Stealth laptop. They all seem to have received decent reviews and there's clearly been some design-love spent on them all.

However, they are also all expensive machines (around the £1000 mark). I'm going to use this as my main work machine for the next couple of years, during which time it'll get daily use, so I have no qualms about spending a lot on a good laptop. On the other hand, if I make a bad decision it'll be an expensive mistake. Given this, it's only sensible I should spend some time considering the various options and try to make a decision not just based on instinct, but on the hard specs for each machine.

There are plenty of reviews online which there's no need for me to duplicate; however I have some particular requirements and preferences, so this analysis is based firmly on these.

My requirements are for a thin, light laptop that's got a really good screen (the larger and higher the resolution the better). When I say thin, I mean ideally 1cm or thinner. By light, I mean as close to 1kg as possible. By good screen, it should be at least a 13in screen with better-than-FHD resolution (given FHD is what my current laptop supports). Any new machine must be better than my current laptop by a significant margin. My current laptop is still perfectly usable, and I'm happy with the size, weight, processing speed and resolution; but it doesn't make sense to get a new machine if it's not going to be a noticeable upgrade.

I've been single-booting Linux for many years now, and plan to do the same with whichever laptop I get next. That means the Windows/macOS distinction is irrelevant for me: they'll get wiped off as the first thing I do with the machine either way.

Before starting this task I was certain I'd end up getting the Acer Swift 7. Based on the copy I'd read, it's the thinnest 13.3in laptop you can buy and looks quite attractive to me (apart from the horrible 'rose gold' colour; ugh). If this didn't work out, I thought the numbers would point in the direction of an Apple device, given almost everyone I know in the Computer Lab uses an Apple laptop (there must be something in that, right?). After carefully working through the specs, I've been really surprised by the results.

The MacBook Pro appears to be decent in most areas, but in fact is worse than the best of its competitors in almost all respects. Since I don't want to run macOS, the only thing in its favour is the attractive design. The MacBook Air is really showing its age now, and is even beaten by the MacBook Pro on everything except price.

The Swift 7 is thin, but turns out to be a really poor choice. That just goes to show how unreliable my gut instinct is and I'm glad I didn't buy it without looking at the alternatives. It's running an M-class processor with no touchscreen or keyboard backlight. The port selection is average and in practice its only strengths are the thin chassis and fanless design. Both are nice features, but the package as a whole is hardly an upgrade over my existing machine.

The Razer Blade Stealth was originally down as my alternative choice. It has a gloriously high-resolution (3840 x 2160) screen, and personally I love the multi-coloured keyboard lighting. Some might say it's just a gimmick, and I could never justify a purchase because of it (especially bearing in mind it almost certainly won't work properly on Linux), but I still think it's glorious. Unfortunately the Stealth turns out to have a small screen size and suffers problems running Linux. Both are show-stoppers for me.

The Zenbook also looks really appealing, with its incredible lightness. Unfortunately, like the Stealth it suffers from a smaller screen size and Linux problems. Too bad.

I kept the Spectre in for comparison, but I could never have gone for it given its horrific aesthetics. I admit, I'm shallow. Nevertheless, it turns out it doesn't offer enough of an upgrade over my existing system anyway (same resolution, worse dimensions and weight).

The unequivocal standout winner is the Dell XPS. In some ways I'm sad about this, as in my mind I associate Dell with being the height of box-shifting PC dullness. Dell's aggressive product placement really puts me off. The machine itself doesn't have a particularly spectacular design. Yet there's no denying the numbers, and the screen really does appear to be way ahead of the competition, with its unusually thin bezel, high resolution and decent size. I was tempted by the 15in version, given its discrete graphics, but the size and weight just nudge outside the area I feel is acceptable for me.

That leaves only the XPS 13 standing. To top everything else off, Dell is the only company to officially support Linux (Ubuntu) on its machines, which it deserves credit for. I'm not sure whether I'll end up getting a new laptop at all, but if I do I'd want it to be this.

Scroll past the pictures to see my full 'analysis' of the different laptops.
Acer Aspire S7 Acer Swift 7 HP Spectre 13t
Razer Blade Stealth MacBook Pro MacBook Air
Asus Zenbook 3 UX390UA Dell XPS 13 Dell XPS 15

Acer Aspire S7 (current)
  CPU: 1.9GHz Intel Core i5-3517U
  RAM (max): 4GB, 1600MHz DDR3
  Graphics: Intel HD4000
  Ports: USB2*2, 3.5mm, HDMI, SD, AC
  Notes: Has been perfect, apart from the poor keyboard

Acer Swift 7
  CPU: 1.2GHz Intel Core i5-7Y54
  Graphics: Intel HD615
  Ports: USB3*2, 3.5mm
  Linux compat: Reportedly works OK
  Price spec: 8GB, 256GB
  Notes: Underpowered, and not big enough upgrade to be worthwhile

HP Spectre 13t (or 13-v151nr)
  CPU: 2.5GHz Intel Core i7-6500U
  Graphics: Intel HD620
  Ports: USB3*3, 3.5mm
  Price spec: 8GB, 256GB
  Notes: Ugly, ugly, ugly

Razer Blade Stealth
  CPU: 2.7GHz Intel Core i7-7500U
  RAM (max): 16GB, 1866MHz LPDDR3
  Graphics: Intel HD620
  Resolution: 2560x1440 or 3840x2160
  Ports: USB3*2, 3.5mm, HDMI, AC
  Linux compat: Works with glitches (e.g. WiFi)
  Price spec: 16GB, 256GB
  Notes: Really tempting, good value, but small screen size is a problem

MacBook Pro
  CPU: 3.3GHz Intel Core i7
  RAM (max): 16GB, 2133MHz LPDDR3
  Graphics: Intel HD550
  Ports: USB3*2, 3.5mm
  Linux compat: Currently flaky (will improve)
  Price spec: 8GB, 256GB
  Notes: Quite big and heavy. Decent, but the Dell XPS 13 is better in every respect


Acer Aspire S7 (current)
  CPU: 1.9GHz Intel Core i5-3517U
  RAM (max): 4GB, 1600MHz DDR3
  Graphics: Intel HD4000
  Ports: USB2*2, 3.5mm, HDMI, SD, AC
  Notes: Has been perfect, apart from the poor keyboard

MacBook Air
  CPU: 2.2GHz Intel Core i7
  RAM (max): 8GB, 1600MHz LPDDR3
  Graphics: Intel HD6000
  Ports: USB3*2, 3.5mm, TB, SD, AC
  Price spec: 8GB, 256GB
  Notes: The low resolution being worse than my current laptop, as well as being thick, rules this out

Asus Zenbook 3 UX390UA
  CPU: Intel Core i7-7500U
  RAM (max): 16GB, 2133MHz LPDDR3
  Graphics: Intel HD620
  Ports: USB3, 3.5mm
  Linux compat: Works but volume, FP, HDMI issues
  Price spec: 16GB, 512GB
  Notes: Thin and really light, makes it really appealing, but the small screen size is a problem

Dell XPS 13
  CPU: 3.1GHz Intel Core i5-7200U
  RAM (max): 8GB, 1866MHz LPDDR3
  Graphics: Intel HD620
  Ports: USB3*3, 3.5mm, SD, AC
  Linux compat: Officially supported
  Price spec: 8GB, 256GB
  Notes: Relatively thick and heavy, but the screen is really great

Dell XPS 15
  CPU: 3.5GHz Intel Core i7-6700HQ
  RAM (max): 32GB, 2133MHz DDR4
  Ports: USB3*3, 3.5mm, SD, HDMI, AC
  Linux compat: Reported to work well
  Price spec: 16GB, 512GB
  Notes: Just a bit too big and heavy to be viable

9 Dec 2016 : Cracking PwdHash #
On Wednesday Graham Rymer and I presented our work on cracking PwdHash at the Passwords 2016 conference. It's the first time I've done a joint presentation, which made for a new experience. It was also a very enjoyable one, especially having the chance to work with such a knowledgeable co-author.

The work we did allowed us to search for the original master passwords that people use with PwdHash: the passwords used to generate the more complex site-specific passwords given to websites, and which may then have been exposed in hashed form by recent password leaks. We were surprised, both by the number of master passwords we were able to find, and by the speed with which hashcat was able to eat its way through the leaked hashes.

Running on an Amazon EC2 instance, we were able to work through the SHA1-hashed LinkedIn leak by generating 40 million hashes per second. In total we were able to recover 75 master passwords from the leak, as well as further master passwords from the Stratfor and Rootkit leaks.
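For anyone unfamiliar with how PwdHash works under the hood: roughly speaking, it derives each site-specific password from an HMAC-MD5 of the site's domain keyed with the master password, then re-encodes the result to satisfy the site's password composition rules. A simplified sketch of just the hash step, with a made-up master password and the final encoding step omitted:

```shell
# HMAC-MD5 of the domain, keyed with a (hypothetical) master password.
# The real PwdHash additionally re-encodes these bytes to meet per-site
# password rules, so this is only the core of the derivation.
printf '%s' "linkedin.com" \
    | openssl dgst -md5 -hmac "hunter2" -binary \
    | base64
```

Cracking then amounts to running candidate master passwords through this derivation and comparing against the leaked hashes, which is the bulk work hashcat did for us.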

Feel free to download the paper and presentation slides, or watch the video captured during the conference (unfortunately there's only audio with no video for the first segment).

Here are a few of the master passwords Graham was able to recover from the password leaks.
Domain Leaked hash Password
Stratfor e9c0873319ec03157f3fbc81566ddaa5 frogdog
Rootkit 2261bac1dfe3edeac939552c0ca88f35 zugang
Rootkit 43679e624737a28e9093e33934c7440d ub2357
Rootkit dd70307400e1c910c714c66cda138434 erpland
LinkedIn 508c2195f51a6e70ce33c2919531909736426c6a 5tgb6yhn
LinkedIn ed92efc65521fe5074d65897da554d0a629f9dc7 Superman1938
LinkedIn 5a9e7cc189fa6cf1dac2489c5b81c28a3eca8b72 Fru1tc4k3
LinkedIn ba1c6d86860c1b0fa552cdb9602fdc9440d912d4 meideprac01
LinkedIn fd08064094c29979ce0e1c751b090adaab1f7c34 jose0849
LinkedIn 5264d95e1dd41fcc1b60841dd3d9a37689e217f7 linkedin

I'll leave it as an exercise for the reader to decide whether these are sensible master passwords or not.
16 Oct 2016 : Fixing snap apps with relocatable DATADIRS #

It's luminous, not fluorescent

Recently I've been exploring how to create snaps of some of my applications. Snap is the new 'universal packaging format' that Canonical is hoping will become the default way to deliver apps on Linux. The idea is to package up an app with all of its dependencies (everything needed apart from ubuntu-core), then have the app deployed in a read-only container. The snap creator gets to set what are essentially a set of permissions for their app, with the default preventing it from doing any damage (either to itself or others). However, it's quite possible to give enough permissions to allow a snap to do bad stuff, so we still have to trust the developers of the snap (or spend your life reading through source-code to check for yourself). If you want to know more about how snaps work, the material out there is surprisingly limited right now. Most of the good stuff - and happily it turns out to be excellent - can be found on the official snapcraft site.

Predictably the first thing I tried out was creating a snap for functy, which means you can now install it just by typing 'snap install functy' on Yakkety Yak. If your application already uses one of the conventional build systems like cmake or autotools, creating a snap is pretty straightforward. If it's a command-line app, just specifying a few details in a yaml file may well be enough. Here's an example for a fictional utility called useless, which you can get hold of from GitLab if you're interested (the code isn't fictional, but the utility is!).

The snapcraft file for this looks like this.
name: useless
version: 0.0.1
summary: A poem transformation program
description: |
  Has a very limited purpose. It's mostly an arbitrary example of code.
confinement: strict

apps:
  useless:
    command: useless
    plugs: []

parts:
  useless:
    plugin: autotools
    source: <repository URL>
    build-packages:
      - pkg-config
      - libpcre2-dev
    stage-packages:
      - libpcre2-8-0
    after: []

This just specifies the build system (plugin), some general description, the repository for the code (source), a list of build and runtime dependencies (build-packages and stage-packages respectively) and the command to actually run the utility (command).

This really is all you need. To test it just copy the lot into a file called snapcraft.yaml, then enter this command while in the same directory.
snapcraft cleanbuild

And a snap is born.

This will create a file called useless_0.0.1_amd64.snap which you can install just fine. When you try to execute it things will go wrong though: you'll get some output like this.
flypig@Owen:~/Documents/useless/snap$ snap install --force-dangerous useless_0.0.1_amd64.snap

useless 0.0.1 installed
flypig@Owen:~/Documents/useless/snap$ useless
Opening poem file: /share/useless/dong.txt

Couldn't open file: /share/useless/dong.txt

The dong.txt file contains the Edward Lear poem "The Dong With a Luminous Nose". It's a great poem, and the utility needs it to execute properly. This file can be found in the assets folder, installed to the $(datadir)/@PACKAGE@ folder as specified in assets/Makefile.am:
uselessdir = $(datadir)/@PACKAGE@
useless_DATA = dong.txt COPYING
EXTRA_DIST = $(useless_DATA)

In practice the file will end up being installed somewhere like /usr/local/share/useless/dong.txt depending on your distribution. One of the nice things about using autotools is that neither the developer nor the user needs to know exactly where in advance. Instead the developer can set a compile-time define that autotools will fill and embed in the app at compile time. Take a look inside src/Makefile.am:
bin_PROGRAMS = ../useless
___useless_SOURCES = useless.c

___useless_LDADD = -lm @USELESS_LIBS@

___useless_CPPFLAGS = -DUSELESSDIR=\"$(datadir)/@PACKAGE@\" -Wall @USELESS_CFLAGS@

Here we can see the important part which sets the USELESSDIR macro define. Prefixing this in front of a filename string literal will ensure our data gets loaded from the correct place, like this (from useless.c)
char * filename = USELESSDIR "/dong.txt";

If we were to package this up as a deb or rpm package this would work fine. The application and its data get stored in the same place and the useless app can find the data files it needs at runtime.

Snappy does things differently. The files are managed in different ways at build-time and run-time, and the $(datadir) variable can't point to two different places depending on the context. As a result the wrong path gets baked into the executable and when you run the snap it complains just like we saw above. The snapcraft developers have a bug registered against the snapcraft package explaining this. Creating a generalised solution may not be straightforward, since many packages - just like functy - have been created on the assumption the build and run-time paths will be the same.

One solution is to allow the data directory location to be optionally specified at runtime as a command-line parameter. This is the approach I settled on for functy. If you want to snap an application that also has this problem, it may be worth considering something similar.

The first change needed is to add a suitable command line argument (if you're packaging someone else's application, check first in case there already is one; it could save you a lot of time!). The useless app didn't previously support any command line arguments, so I augmented it with some argp magic. Here's the diff for doing this. There's a fair bit of scaffolding required, but once in, adding or changing the command line arguments in the future becomes far easier.
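In case it helps, the shape of that scaffolding is roughly as follows. This is a hypothetical, stripped-down sketch using glibc's argp, not the actual diff; only the --datadir option is shown, and the struct and option names are my own.

```c
#include <argp.h>
#include <stdlib.h>

/* Hypothetical stand-in for the argp-populated arguments struct */
typedef struct {
    char const * datadir;
} arguments;

/* One entry per command line option, terminated by an empty entry */
static struct argp_option options[] = {
    { "datadir", 'd', "DIR", 0, "Directory to load data files from", 0 },
    { 0 }
};

/* Called by argp once per option found on the command line */
static error_t parse_opt (int key, char * arg, struct argp_state * state) {
    arguments * args = state->input;
    switch (key) {
    case 'd':
        args->datadir = arg;
        break;
    default:
        return ARGP_ERR_UNKNOWN;
    }
    return 0;
}

static struct argp argp = { options, parse_opt, NULL,
    "useless -- a poem transformation program" };

/* In main: set the compile-time default, then let argp override it:
 *
 *   arguments args = { .datadir = USELESSDIR };
 *   argp_parse (&argp, argc, argv, 0, NULL, &args);
 */
```

The nice property is that the compile-time USELESSDIR default still applies when no option is given, so existing package formats are unaffected.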

The one part of this that isn't quite boilerplate is the following generate_data_path function.
char * generate_data_path (char const * leaf, arguments const * args) {
    char * result = NULL;
    int length;

    if (leaf) {
        length = snprintf(NULL, 0, "%s/%s", args->datadir, leaf);
        result = malloc(length + 2);
        snprintf(result, length + 2, "%s/%s", args->datadir, leaf);
    }

    return result;
}

This takes the leafname of the data file to load and pieces together the full pathname using the path provided at the command line. It's simple stuff; the only catch is to remember to free the memory this function allocates after it's been used.
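To make the ownership explicit, here's a self-contained version together with a typical call site. The arguments struct is cut down to just the datadir field for illustration, and open_poem is a hypothetical caller, not code from the real app.

```c
#include <stdio.h>
#include <stdlib.h>

/* Cut-down stand-in for the argp-populated arguments struct */
typedef struct {
    char const * datadir;
} arguments;

/* As in the article: allocate "<datadir>/<leaf>"; the caller frees the result */
static char * generate_data_path (char const * leaf, arguments const * args) {
    char * result = NULL;
    int length;

    if (leaf) {
        length = snprintf(NULL, 0, "%s/%s", args->datadir, leaf);
        result = malloc(length + 2);
        snprintf(result, length + 2, "%s/%s", args->datadir, leaf);
    }

    return result;
}

/* Hypothetical call site: build the path, use it, then free it */
static int open_poem (arguments const * args) {
    char * filename = generate_data_path ("dong.txt", args);
    int found = 0;

    if (filename) {
        printf ("Opening poem file: %s\n", filename);
        FILE * file = fopen (filename, "r");
        if (file) {
            found = 1;
            fclose (file);
        }
        free (filename);   /* the easily-forgotten part */
    }

    return found;
}
```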

For functy I'm using GTK, so I use a combination of GOption for command line parsing and GString for the string manipulation. The latter in particular makes for much cleaner and safer code, and helps simplify the memory management of this generate_data_path function.

Now we can execute the app and load in the dong.txt file from any location we choose.
useless --datadir=~/Documents/Development/Projects/useless/assets

There's one final step, which is to update the snapcraft file so that this gets added automatically when the snap-installed app is run. The only change now is to set the executed command as follows.
    command: useless --datadir="${SNAP}/share/useless"

Here's the full, updated, snapcraft file.
name: useless
version: 0.0.1
summary: A poem transformation program
description: >
  Has a very limited purpose. It's mostly an arbitrary example of code.
confinement: strict

apps:
  useless:
    command: useless --datadir="${SNAP}/share/useless"
    plugs: []

parts:
  useless:
    plugin: autotools
    build-packages:
      - pkg-config
      - libpcre2-dev
    stage-packages:
      - libpcre2-8-0
    after: []

And that's it! Install the snap package, execute it (just by typing 'useless') and the utility will run and find the dong.txt file it needs.

There's definitely a sieve involved
2 Oct 2016 : Server time-travel, upgrading four years in nine days #
Years of accumulation have left me with a haphazard collection of networked computers at home, from a (still fully working) 2002 Iyonix through to the ultrabook used as my daily development machine. There are six serious machines connected to the network, if you don't count the penumbra of occasional and IoT devices (smart TV, playstation, Raspberry Pis, retired laptops, A9).

All of this is ably managed by Constantia, my home server. As her webpage explains, Constantia's a small 9 Watt fanless server I bought in 2011, designed to be powered using solar panels in places like Africa with more sun than infrastructure (and not in a Transcendence kind of way).

Constantia's physical presence

Although Constantia's been doing a phenomenal job, until recently she was stuck running Ubuntu 12.04 LTS Precise Pangolin. Since there's no fancy graphics card and 12.04 was the last version of Ubuntu to support Unity 2D, I've been reluctant to upgrade. A previous upgrade from 8.04 to 10.04 many years ago - when Constantia inhabited a different body - caused me a lot of display trouble, so she has form in this regard.

Precise is due to fall out of support next year, and I'd already started having to bolt on a myriad PPAs to keep all the services pumped up to the latest versions. So during my summer break I decided to allocate some time to performing the upgrade, giving me the scope to fix any problems that might arise in the process.

This journey, which started on 19th September, finished today, a full two weeks later.

As expected, the biggest issue was Unity, although the surprise was that it ran at all. Unity has a software graphics-rendering fallback using LLVMPipe, which was actually bearable to use, at least for the small amount of configuration needed to get a replacement desktop environment up and running. After some research comparing XFCE, LXDE and Gnome classic (the official fallback option) I decided to go for XFCE: lightweight but also mature and likely to be supported for the foreseeable future. Having been running it for a couple of weeks, I'm impressed by how polished it is, although it's not quite up there with Unity in terms of tight integration.

The XFCE desktop running on Constantia with beautiful NASA background

There were also problems with some of the cloud services I have installed. MediaWiki has evaporated, but I was hardly using it anyway. The PPA-cruft needed to support ownCloud, which I use a lot, has been building up all over the place. Happily these have now been stripped back to the standard repos, which makes me feel much more comfortable. Gitolite, bind, SVN and the rest all transferred with only minor incident.

The biggest and most exciting change is that I've switched my server backups from USB external storage to Amazon AWS S3 (client-side encrypted, of course). Following a couple of excellent tutorials on configuring deja-dup to use S3 by Juan Domenech, and on S3 IAM permissions by Max Goodman, I got things up and running.

But even with these great tutorials, it was a bit of a nail-biting experience. My first attempt to back things up took five days of continuous uploading to reach less than 50% before I decided to reconfigure. I've now got it down to a full backup in four days. By the end of it, I feared I might have to re-mortgage to pay the Amazon fees.

So, how much does it cost to upload and store 46 GiB? As it turns out, not so much: $1.06. I'm willing to pay that each month for effective off-site backup.

The upgrade of Constantia also triggered some other life-refactoring, including the moving of my software from SourceForge to GitLab, but that's a story for another time.

After all this, the good news is that Constantia is now fully operational and up-to-date with Ubuntu 16.04 LTS Xenial Xerus. This should get her all the way through to 2021. Kudos to the folks at Aleutia for creating a machine up to the task, and to Ubuntu for the unexpectedly smooth upgrade process.

The bad news is that Nextcloud is now waxing as ownCloud wanes. It doesn't yet seem to be the right time to switch, but that time is approaching rapidly. At that point, I'll need another holiday.
18 Jul 2016 : Using a CDN for #
This weekend I've been playing around with Amazon's CloudFront CDN. I've been setting up a new site, and although I'm not expecting it to be heavily used, the site is bandwidth-heavy and entirely static on the server-side, so a good candidate for deployment via CDN. For those unfamiliar with the term, CDN stands for Content Delivery Network, able to push the content of a website out to multiple servers across the world. This moves the content closer to the end-users, in theory reducing latency and making the site feel more responsive.

There are other benefits of using a CDN. Because the site is served from multiple locations it also makes it less susceptible to denial of service attacks. Since I work in security, there's been a lot of discussion in my research group about DoS attacks and I recently saw a fascinating talk by Virgil Gligor on the subject (the paper's not yet out, but Ross Anderson has written up a convenient summary).

The availability that DoS attempts to undermine offers a wholly different dynamic from the confidentiality, integrity and authenticity that I'm more familiar with. These four together make up the CIAA 'triad' (traditionally just CIA, but authenticity is often added as another important facet of information security). Tackling DoS feels much more practical than the often cryptographic approaches used in the other three areas. An attacker can scale up their denial of service by sending from multiple sources (for example using a botnet), while a CDN redresses the balance by serving from multiple sources, so there's an elegant symmetry to it.

In addition to all of that, CloudFront looks to be pretty cheap, at least compared to spinning up an EC2 instance to serve the site. That makes it both educational and practical. What's not to like?

Amazon makes it exceptionally easy to serve a static site from an S3 bucket. Simply create a new bucket, upload the files using the Web interface and select the option to serve it as a site.

S3 bucket

The only catch is that you also have to apply a suitable policy to the bucket to make it public. Why Amazon doesn't provide a simpler way of doing this is beyond me, but there are plenty of how-tos on the Web to plug the gap.

S3 bucket policy

Driving a website from S3 offers serviceable, but not great, performance. A lot of sites do this: as far back as May 2013, netcraft identified 24.7 thousand hostnames serving an entire site directly from S3 (with many more serving part of the site from S3). It's surely much higher now.

Once a site's been set up on S3, hosting it via CloudFront is preposterously straightforward. Create a new distribution, set the origin to the S3 bucket and use the new address.

S3 distribution origin settings

The default CloudFront domains aren't exactly user-friendly. This is fine if they're only used to serve static content in the background (such as the images for a site, just as the retail Amazon site does), but an end-user-facing URL needs a bit more finesse. Happily it's straightforward to set up a CNAME to alias the cloudfront subdomain. Doing this ensures Amazon can continue to manage the DNS entry it points to, including which location to serve the content from. So I spent £2.39 on the domain and am now fully finessed.
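For reference, the record itself is a one-liner in the DNS zone, something like the following. The hostnames here are made up for illustration; the real cloudfront subdomain comes from the distribution's settings page.

```
; Hypothetical zone entry: alias the friendly name to the CloudFront distribution
www.example.com.   3600   IN   CNAME   d111111abcdef8.cloudfront.net.
```

Because it's a CNAME rather than a fixed A record, Amazon remains free to change which IP addresses, and hence which edge location, the cloudfront name resolves to.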

Finally I have three different domain names all pointing to the same content.

The process, which is in theory very straightforward, was in practice somewhat glitchy. The bucket policy I've already mentioned above. The part that caused me most frustration was in getting the domain name to work. Initially the S3 bucket served redirects to the content (why? Not sure). This was picked up by CloudFront, which happily continued to serve the redirects even after I'd changed the content. The result was that visiting the CloudFront URL (or the domain) redirected to S3, changing the URL in the process, even though the correct content was served. It took several frustrating hours before I realised I had to invalidate the material through the CloudFront Web interface before all of the edge servers would be updated. Things now seem to update immediately without the need for human intervention; it's not entirely clear what changed, but it certainly hindered progress before I realised.

The whole episode took about a day's work and next time it should be considerably shorter. The cost of running via CloudFront and S3 is a good deal less than the cost of running even the most meagre EC2 instance. Whether it gives better performance is questionable.

Comparing basic S3 access with the equivalent CloudFronted access gives a 25% speed-up when accessed from the UK. However, to put this in context, serving the same material from my basic fasthosts web server results in a further 10% speed-up on top of the CloudFront increase.

Loading times for S3
Loading times accessing the site on S3 (2.54s total).

Loading times for CloudFront
Loading times accessing the site via CloudFront (1.92s total).

Loading times for fasthosts
Loading times accessing the site on fasthosts (1.75s total).

If I'm honest, I was expecting CloudFront to be faster. On the other hand this is checking only from the UK where my fasthosts server is based. The results across the world are somewhat more complex, as you can see for yourself from the table below.

Ping times for the three access methods from across the world (all times in ms, from dotcom).
Location S3 CloudFront Fasthosts
Amsterdam, Netherlands 16 9 12
London, UK 13 19 9
Paris, France 14 10 26
Frankfurt, Germany 26 7 23
Copenhagen, Denmark 32 18 33
Warsaw, Poland 48 26 38
Tel-Aviv, Israel 79 58 72
VA, USA 88 93 86
NY, USA 76 105 87
Amazon-US, East 80 99 100
Montreal, Canada 92 100 92
MN, USA 106 114 106
FL, USA 107 118 109
TX, USA 114 117 138
CO, USA 129 118 138
Mumbai, India 142 124 130
WA, USA 135 144 136
CA, USA 141 149 137
South Africa 157 165 155
CA, USA (IPv6) 278 149 230
Tokyo, Japan 243 229 231
Buenos Aires, Argentina 263 260 224
Beijing, China 260 253 249
Hong Kong, China 283 293 287
Sydney, AU 298 351 334
Brisbane, AU 313 343 331
Shanghai, China 332 419 369

We can render this data as a graph to try to make it more comprehensible. It helps a bit, but not much. In the graph, a steeper line is better, so CloudFront does well at the start and mid-table, but also has the site with the longest ping time overall. The lines jostle for the top spot, from which it's reasonable to conclude they're all giving pretty similar performance in the aggregate.

Ping times cumulative over location

In conclusion, apart from the unexpected redirects, setting up CloudFront was really straightforward and the result is a pretty decent and cheap website serving platform. While I'm not in a position to compare with other CDN services, I'd certainly use CloudFront again even without the added incentive of wanting to know more about it.

I'm now looking in to adding an SSL cert to the site. Again Amazon have made it really straightforward to do, but the trickiest part is figuring out the cost implications. The site doesn't accept any user data and SSL would only benefit the integrity of the site (which, for this site, is of arguable benefit), so I'd only be doing it for the experience. If I do, I'll post up my experiences here.
24 Jun 2016 : A bit more. #
No comfort at all, but looking at the results across the country, Cambridge was one of the few places in England outside London that voted to remain (overwhelmingly, 74% to 26%). I was also happily surprised given the north-south balance that Liverpool (58% to 41%) and the Wirral (52% to 48%) also voted to remain. That could be because both areas have benefited greatly from European investment, but that must be true of many other parts of England too. Maybe they're just saner people? Less surprising is that Castle Point voted overwhelmingly to leave (73% to 27%).
For me the argument about popular sovereignty was far more important than the argument about the economy and my guess would be that this persuaded many who voted to leave (although my darker more cynical side fears it may have been immigration). It's sad for me that this argument about sovereignty was exactly my reason for wanting to remain. So many important international decisions where the UK has now lost its voice and vote.
24 Jun 2016 : EU Referendum #
As a British European I feel like part of my identity, and part of my voice in the world, was taken away from me today. I just hope as a country, we can turn this decision to leave the EU into something positive.
27 Feb 2016 : Losing My Religion #
For the last 18 years this site has stuck rigidly to a dynamic-width template. That's because I've always believed fixed-width templates to be the result of either lazy design or a misunderstanding of HTML's strengths. Unfortunately fashion seems to be against me, so in a bid to regain credibility, I'm now testing out a fixed-width template.

Look closely at the original design from 1998 and you'll see the structure of the site has hardly changed, while the graphics - which drew heavy inspiration from the surface of the LEGO moon - have changed drastically. At the time I was pretty pleased with the design, which just goes to show how much tastes, as well as web technologies, have changed in the space of two decades.

By moving to a fixed-width template I've actually managed to annoy myself. The entire principle of HTML is supposed to be that the user has control over the visual characteristics of a site. 'Separate design and content' my jedi-master used to tell me, just before mind-tricking me into doing the dishes. The rot set in when people started using tables to lay out site content. The Web fought back with CSS, which was a pretty valiant attempt, even if we're now left with the legacy of a non-XML-based format (why W3C? Why?!).

But progress marches sideways and Javascript is the new Tables. Don't get me wrong, I think client-side programmability is a genuine case of progress, but it inevitably prevents proper distinction between content and design. It doesn't help that Javascript lives in the HTML rather than the CSS, which is where it should be if its only purpose is to affect the visual design. Except good interactive sites often mix visuals and content in a complex way, forcing dependencies across the two that are hard to partition.

Happily computing has already found a solution to this in the form of MVC. In my opinion MVC will be the inevitable next stage of web enlightenment, as the W3C strives to pull it back to its roots separating content from design. Lots of sites implement their own MVC approach, but it should be baked into the standards. The consequence will be a new level of abstraction that increases the learning-curve gradient, locks out newcomers and spawns a new generation of toolkits attempting to simplify things (by pushing the content and design together again).

Ironically, the motivation for me to move to a fixed-width came from a comment by Kochise responding to a story about how websites are becoming hideous bandwidth-hogs. Kochise linked to a motherfucking website. So much sense I thought! Then he gave a second link. This was still a motherfucking website, but claimed to be better. Was it better? Not in my opinion it wasn't. And anyway, both websites use Google Analytics, which immediately negates anything worthwhile they might have had to say. The truly remarkable insight of Maciej Cegłowski in the original article did at least provoke me into reducing the size of my site by over 50%. Go me!

It highlighted something else also. The 'better' motherfucking website, in spite of all the mental anguish it caused me, did somehow look more modern. There are no doubt many reasons, but the most prominent is the fixed column width, which just fits in better with how we expect websites to look. It's just fashion, and this is the fashion right now, but it does make a difference to how seriously people take a site.

I actually think there's something else going on as well. When people justify fixed-width sites, they say it makes the text easier to read, but on a dynamic-width site surely I can just reduce the width of the window to get the same effect? This says something about the way we interact with computers: the current paradigm is for full-screen windows with in-application tabs. As a result, changing the width of the window is actually a bit of a pain in the ass, since it involves intricate manipulation of the window border (something which the window manager makes far more painful than it should be) while simultaneously messing up the widths of all the other open tabs.

It's a rich tapestry of fail, but we are where we are. My view hasn't changed: fixed width sites are at best sacrificing user-control for fashion and at worst nothing more than bad design. But I now find myself at peace with this.

If you think the same, but unlike me you're not willing to give up just yet, there's a button on the front page to switch back to the dynamic width design.
1 Feb 2016 : Pebble SDK Review #
Although Pebble smartwatches have been around for some time, I only recently became one of the converted after buying a second-hand Pebble Classic last October. Over Christmas I was lucky enough to be upgraded to a Pebble Time Round. This version was only released recently, and the new form factor requires a new approach to app development. Not wildly different from the existing Classic and Time variants, but enough to necessitate recompilation and some UI redesign of existing apps.

As a consequence many of the apps I'd got used to on my Classic no longer appear in the app store for the Round. This, I thought, offered a perfect opportunity for me to get to grips with the SDK by upgrading some of those that are open source.

Although I'm a total newb when it comes to Pebble and smartwatch development generally, I have plenty more experience with other toolchains, SDKs and development environments, from Visual Studio and QT Creator through to GCC and Arduino IDE, as well as the libraries and platforms that go with them. I was interested to know how the Pebble dev experience would compare to these.

It turns out there are essentially two ways of developing Pebble apps. You can use a local devchain, built around Waf, QEMU and a custom C compiler. This offers a fully command line approach without an IDE, leaving you to choose your own environment to work in. Alternatively there's the much slicker CloudPebble Web IDE. This works entirely online, including the source editor, compiler and pebble emulator.

CloudPebble IDE
I worked through some of the tutorials on CloudPebble and was very impressed by it. The emulator works astonishingly well and I didn't feel restricted by being forced to use a browser-based editor. What I found particularly impressive was the ability to clone projects from GitHub straight into CloudPebble. This makes it ideal for testing out the example projects (all of which are up on GitHub) without having to clutter up your local machine. Having checked the behaviour on the CloudPebble emulator, if it suits your needs you can then easily find the code to make it work and replicate it in your own projects.

Although there's much to recommend it, I'm always a bit suspicious of Web-based approaches. Experience suggests they can be less flexible than their command line equivalents, imposing a barrier on more complex projects. In the case of CloudPebble there's some truth to this. If you want to customise your build scripts (e.g. to pre-generate some files) or combine your watch app with an Android app, you'll end up having to move your build locally. In practice these may be the fringe cases, but it's worth being aware.

So it can be important to understand the local toolchain too. There's no particular IDE to use, but Pebble have created a Python wrapper around the various tools so they can all be accessed through the parameters of the pebble command.
Pebble Tool command:

    build               Builds the current project.
    new-project         Creates a new pebble project with the given name in a
                        new directory.
    install             Installs the given app on the watch.
    logs                Displays running logs from the watch.
    screenshot          Takes a screenshot from the watch.
    insert-pin          Inserts a pin into the timeline.
    delete-pin          Deletes a pin from the timeline.
    emu-accel           Emulates accelerometer events.
    emu-app-config      Shows the app configuration page, if one exists.
    emu-battery         Sets the emulated battery level and charging state.
    emu-bt-connection   Sets the emulated Bluetooth connectivity state.
    emu-compass         Sets the emulated compass heading and calibration
    emu-control         Control emulator interactively
    emu-tap             Emulates a tap.
    emu-time-format     Sets the emulated time format (12h or 24h).
    ping                Pings the watch.
    login               Logs you in to your Pebble account. Required to use
                        the timeline and CloudPebble connections.
    logout              Logs you out of your Pebble account.
    repl                Launches a python prompt with a 'pebble' object
                        already connected.
    transcribe          Starts a voice server listening for voice
                        transcription requests from the app
    data-logging        Get info on or download data logging data
    sdk                 Manages available SDKs
    analyze-size        Analyze the size of your pebble app.
    convert-project     Structurally converts an SDK 2 project to an SDK 3
                        project. Code changes may still be required.
    kill                Kills running emulators, if any.
    wipe                Wipes data for running emulators. By default, only
                        clears data for the current SDK version.

Although it does many things, the most important are build, install and logs. The first compiles a .pbw file (a Pebble app, essentially a zip archive containing binary and resource files); the second uploads and runs the application; and the last offers runtime debugging. These will work on both the QEMU emulator, which can mimic any of the current three watch variants (Original, Time, Time Round; or aplite, basalt and chalk for those on first name terms), or a physical watch connected via a phone on the network.

CloudPebble IDE
It's all very well thought out and works well in practice. You quickly get used to the build-install-log cycle during day-to-day coding.

So, that's the dev tools in a nutshell, but what about the structure, coding and libraries of an actual app? The core of each app is written in C, so my first impression was that everything felt a bit OldSkool. It didn't take long for the picture to become more nuanced. Pebble have very carefully constructed a simple (from the developer's perspective) but effective event-based library. For communication between the watch and phone (and via that route to the wider Internet) the C hands over to fragments of Javascript that run on the phone. This felt bizarre and overcomplicated at first, but actually serves to bridge the otherwise rough boundary between embedded (watch) and abstract (phone) development. It also avoids having to deal with threading in the C portion of the code. All communication is performed using JSON, which gets converted to iterable key-value dictionaries when handled on the C side.

This seems to work well: the UI written in C remains fluid and lightweight with Javascript handling the more infrequent networking requirements.

The C is quite restrictive. For example, I quickly discovered there's no square root function, arguably one of the more useful maths functions on a round display (some trig is provided by cos and sin lookup functions). The libraries are split into various categories such as graphics, UI, hardware functions and so on. They're built as objects with their own hierarchy and virtual functions implemented as callbacks. It all works very well and with notable attention to detail. For example, in spite of it being C, the developers have included enough hooks for subclasses to be derived from the existing classes.
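Working around the missing square root is simple enough, though. Here's the sort of integer-only helper you can drop in (my own sketch, not part of the Pebble SDK), which is plenty for things like computing a pixel's distance from the centre of the round display.

```c
#include <stdint.h>

/* Integer square root by binary search: returns floor(sqrt(n)).
 * My own sketch of a workaround for the Pebble C library having no
 * sqrt(); integer precision is fine for pixel distances. */
static uint32_t isqrt (uint32_t n) {
    uint32_t lo = 0;
    uint32_t hi = 65536; /* 65535 is the largest possible root of a uint32 */

    while (hi - lo > 1) {
        uint32_t mid = lo + (hi - lo) / 2;
        if ((uint64_t)mid * mid <= n) {
            lo = mid;   /* mid^2 fits, so the root is at least mid */
        } else {
            hi = mid;   /* mid^2 overshoots, so the root is below mid */
        }
    }

    return lo;
}

/* e.g. distance from the display centre (cx, cy) to pixel (x, y):
 *
 *   int32_t dx = x - cx, dy = y - cy;
 *   uint32_t dist = isqrt((uint32_t)(dx * dx + dy * dy));
 */
```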

The downside to all of this is that you have to be comfortably multilingual: C for the main code and interface, Javascript for communication with a phone, Java and Objective-C to build companion Apps and Python for the build scripts. Whew.

Different people will want different things in a development environment: is it well structured? Does it support a developer's particular preference of language? Is it simple at the start but flexible enough to deal with more complex projects? Does it support different development and coding styles? How much boilerplate overhead is there before you can get going? How familiar does it all feel?

It just so happens that I really like C, but dislike Javascript, although I'm certain there are many more developers who feel the exact opposite. The Pebble approach is a nice compromise. I was happy dealing with the C and the Javascript part was logical (e.g. no need to deal with browser incompatibilities). If you're a dyed-in-the-wool Web developer, there's even a pre-built JS shim for creating watch faces.

It all seems to work well together and I've come away impressed. Many developers will find the CloudPebble interface slicker and easier to use. But after wading through the underlying complexities – and opacity – of IDEs like Visual Studio and Eclipse, the thoughtful clarity of the Pebble SDK makes for a refreshing change. I wouldn't recommend it for a complete newcomer to C or JS, but if you have any experience with these languages, you'll find yourself up and running with the Pebble toolchain in no time.
24 Jan 2016 : Deconstructing Gone Home #
They say that mastery is not a question of specialization, but sureness of purpose and dedication to craft. Gone Home demonstrates that the application of all three can generate wonderful results. While most games revel in their use of varied and multifarious mechanics – singleplayer, multiplayer, cover mechanisms, rewards, dizzying weapon counts and location changes – Gone Home sticks to a single plan with minimal mechanics and delivers it flawlessly.

Everything is driven by the narrative, which takes a layered approach. There’s no choice as such and this isn’t a choose-your-own adventure. In spite of that, a huge amount of trust is bestowed on the player which allows them to miss large portions of the story if they so choose. This trust is rooted in the mechanics of the game rather than the story, and ultimately makes the game far more rewarding.

To understand this better we need to deconstruct the game with a more analytic approach. A good place to start with this is the gameplay mechanics, but it will inevitably require us to consider the story as well. So, here be spoilers. If you’ve not yet played the game, I urge you to do so before reading any further.

There are spoilers beyond this door

Even though the game is full 3D first-person perspective, the mechanics are pretty sparse. The broad picture is that you have scope to move around the world, pick up and inspect objects, discover ‘keys’ to unlock new areas, and listen to audio diaries. This is a common mechanic used in games and even the use of audiologs has become somewhat of a gaming trope. The widely acclaimed Bioshock franchise uses them as an important (but not the only) narrative device. They’re used similarly in Dead Space, Harley Quinn’s recordings in Batman, the audiographs in Dishonored, and the audio diaries in the rebooted Tomb Raider. Variants include Deus Ex’s email conversations and Skyrim’s many books that provide context for the world. There are surely many others, but while some of these rely heavily on audiologs to maintain their story, few of them use it as a central gameplay mechanic. Bioshock, for example, emphasises fight sequences far more and includes interactions with other characters such as Atlas or Elizabeth for story development. Gone Home provides perhaps the most pure example of the use of audiologs as a central mechanic.

So mechanically this is a pure exploration game. This makes it an ideal game for further analysis, since the depth of mechanics remains tractable. As we’ll see, the mechanics in play actually feel sparser than they are. By delving just a bit into the game we find there’s more going on than we might have imagined on first inspection.

Starting with the interactions, we can categorise these into eight core ‘active’ mechanics and a further five ‘passive’ types.

Active interaction types
  1. Movement and crouching/zooming
  2. Picking up objects
    1. Full rotation
    2. Lateral rotation
    3. Reading (possibly with multiple pages)
    4. Adding an object to your backpack
    5. Triggering a journal entry
    6. Playing an audio cassette
  3. Return object to the same place
  4. Throw object
  5. Turn on/off an object (e.g. light, fan, record player, TV)
  6. Open/close door (including some with locks or one-way entry)
  7. Open/close cupboards/drawers
  8. Lock combinations
Passive interaction types
  1. Hover text
  2. Reading object
  3. Finding clues in elusive hard-to-see places
  4. Viewing places ahead-of-time (e.g. conservatory)
  5. Magic eye pictures
The distinction between active and passive is not just qualitative. All of the active interactions require specifically coded mechanisms to allow them to operate. This contrasts with the passive interactions, which capitalise on design elements made available through the existing toolset (e.g. the placement or design of objects).

While the key mechanism for driving the narrative forward is exploration through inspecting objects, it’s perhaps more enlightening to first understand the mechanisms used to restrict progress. All games must balance player agency against narrative cohesion. If the player skips too far forward they may miss information that’s essential for understanding the story. If the player is forced carefully along a particular route the sense of agency is lost, which can also lead to frustration if progress is hindered unnecessarily. Sitting in between is a middle ground that trusts the player to engage with the game and relies on them to manage and reconstruct information that may be presented out-of-order, incomplete and in multiple ways.

There are then seven main ‘bulkheads’ (K1-K7) that define eight areas that force the narrative to follow a given sequence. On top of this there are two optional ‘sidequest bulkheads’ (K8, K9). The map itself can be split into twelve areas, and the additional breakpoints help direct the flow of the player, although where no keys are indicated this occurs through psychological coercion rather than compulsion.

Gone Home progression map
The areas shown in the diagram are as follows.
  • P1. Porch
  • P2. Ground floor west
  • P3. Upstairs
  • P4. Stairs between upstairs and library
  • P5. Three secret panels
  • P6. Locker
  • P7. Basement
  • P8. Stairs between basement, guestroom and ground floor east
  • P9. Ground floor east
  • P10. Room under the stairs
  • P11. Attic
  • P12. Filing cabinet (optional)
  • P13. Safe (optional)
The keys needed to unlock progress are the following.
  • K1. Christmas duck key
  • K2. Sewing room map
  • K3. Secret panel map
  • K4. Locker combination
  • K5. Basement key
  • K6. Map to room under stairs in conservatory
  • K7. Attic key
  • K8. Safe combination in library (optional)
  • K9. Note in guestroom (optional)

Some of the keys in Gone Home
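The sequential lock-and-key structure described above can be sketched as a tiny dependency check. This is purely my own illustration of the idea (the key labels match the list, but the model is mine, not data from the game itself):

```python
# Sketch of Gone Home's main progression: seven sequential bulkheads
# (K1-K7) partition the game into eight story areas; K8 and K9 gate
# optional side content. Illustrative only, not data from the game.

MAIN_KEYS = [f"K{i}" for i in range(1, 8)]  # K1..K7, in narrative order

def furthest_area(keys_found):
    """Return the furthest main area (1-8) reachable with these keys.

    Area n is only reachable once every key K1..K(n-1) has been found,
    so a missing key blocks everything beyond it.
    """
    area = 1
    for key in MAIN_KEYS:
        if key not in keys_found:
            break
        area += 1
    return area

print(furthest_area({"K1", "K2"}))    # 3: the first two bulkheads are open
print(furthest_area({"K2", "K3"}))    # 1: without K1 everything stays locked
print(furthest_area(set(MAIN_KEYS)))  # 8: the attic and the ending
```

The second call is the point of the model: later keys are worthless until the earlier ones are found, which is exactly how the game forces the narrative sequence while leaving everything else optional.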
Given there are twenty-five audio diaries, and a huge number of other written items and objects which add to the story, it’s clear that The Fullbright Company (the Gone Home developers) assume a reasonable amount of flexibility in the ordering of the information within these eight areas. It’s very easy to miss a selection of them on a single run-through of the game.

The diaries themselves only capture the main narrative arc – Sam’s coming-of-age – which interacts surprisingly loosely with the other arcs that can be found. These can be most easily understood by categorising them in terms of characters:
  1. Sam’s coming-of-age (sister)
  2. Terrance’s literary career (Dad)
  3. Jan’s affair (Mum)
  4. Oscar’s life (great uncle)
  5. Kaitlin’s travel (protagonist)
Other incidental characters are used to develop these stories, such as Carol, Jan’s college housemate whose letters are used to frame Jan’s possible affair with her new work colleague. However, these five characters and five story arcs provide the main layers that enrich the game.

An interesting feature of these stories is that they each conform to different literary genres, and this helps to obscure the nature of the story, allowing the ending to remain a surprise up until the last diary entry. Terrance’s career has elements of tragedy which are reinforced by the counterbalancing romance of Jan’s affair. Oscar’s story, which is inseparable from that of the house itself, introduces elements of horror. Kaitlin’s story is the least developed, but is perhaps best seen as a detective story driven by the player. Even though you act through Kaitlin as the protagonist, it’s clearly Sam who’s the star of the show. And even though it’s clear from early on that the main narrative, seen through Sam’s eyes, is a coming-of-age story, the ending that defines the overall mood (love story? tragedy?) is left open until the very end.

Perhaps another interesting feature is the interplay between the genres and the mechanics. The feel of the game, with bleak weather, temperamental lighting, darkened rooms and careful exploration, is one of survival horror. Initially it seems like the game might fall into this category, with Oscar’s dubious past and the depressed air. This remains a recurrent theme throughout. But ultimately this is used more to provide a backdrop to Sam’s story, transporting Kaitlin through her (your) present-day experiences to those of Sam as described through her audio diaries and writings.

Ultimately then, it’s possible to deconstruct Gone Home into its thirteen main interaction types, eight areas and five narrative arcs. This provides the layering for a rich story and involving game, even though, compared to many of its contemporaries in the gaming arena, it’s mechanically rather limited. By delving into it I was hoping it might provide some insight into how the reverse can take place: the construction of a game based on a fixed set of mechanics and restricted world. It goes without saying that the impact of the story comes from its content and believability, along with pitching the trust balance in the right spot. Neither of these can be captured in an easily reproducible form.

Nonetheless it would be really neat if it were possible to derive a formal approach from this for simplifying the process of creating new material that follows a similar layered narrative approach. Unlike many games Gone Home is complex enough to be enjoyable but simple enough to understand as a whole. It was certainly one of my favourite games of 2014, and if there's a formula for getting such things right, it's a game that's worth replicating.

You Can Do *Better*
Addendum: I wrote this back in July 2014 while lecturing on the Computer Games Development course at Liverpool John Moores University and recently rediscovered it languishing on my hard drive. At the time I thought it might be of interest to my students and planned to develop it into a proper theory. Since I never got around to doing so, and now probably never will, I felt I may as well publish it in its present form.
23 Jan 2016 : How Not to Write #
Each week I read a column in the Guardian Weekly called "This column will change your life" by Oliver Burkeman. It's full of insightful but unsubstantiated claims about how efficiency, mental state, tidiness or whatnot can be improved if only you can follow some simple advice. Always a good read.

Oliver Burkeman's Column

This week it explained how getting over writer's block is simply a case of being disciplined: the trick to writing is to write often and in small doses. Not only should you create a schedule to start, but you should also create a schedule to stop. Once your time runs out, stop writing immediately ("even if you've got momentum and could write more"). It's the same advice that was given to me about revision when I was sixteen and is probably as valid now as it was then.

The advice apparently comes from a book by Robert Boice. I was a bit dismissive of the claim in the article that used copies sell for $190, but I've just checked on Amazon and FastShip is selling it for $1163 (Used - Acceptable). That's $4 per page, so it must be saturated with wisdom.

My interest was piqued by the fact that the book's aimed at academics struggling to write. I wouldn't say I struggle to write, but I would say I struggle to write well. Following Boice's advice, writing often and in small doses should probably help with that, but here are a few other things I genuinely think will probably help if - like me - you want to improve your writing ability.

  1. Read a lot. Personally I find it much easier to get started if I already have a style in mind. Mimicking a style makes the process less personal, and that distance can make it easier (at least for me, but this might only work if you suffer from repressed-Britishness). For the record and to avoid any claims of deception from those who know me, I do hardly any reading.
  2. Plan and structure. Breaking things into smaller pieces makes them more manageable and once you have the headings it's just a case of filling in the blanks. Planning what you intend to say will result in better arguments and more coherent ideas.
  3. Leave out loads of ideas. Clear ideas are rarely comprehensive and if you try to say everything you'll end up with a web of thoughts rather than a nice linear progression.
  4. Let it settle overnight. Sometimes the neatest structures and strongest ideas are the result of fermentation rather than sugar content. I don't really know what that means, but hopefully you get the idea.
  5. Don't let it settle for another night. It's better to write something than to allow it to become overwhelming.
  6. And most important of them all... oh, time's up.

How Not to Live Your Life

21 Jan 2016 : Are smartwatches better than watches? #
The Pebble Time Round is a beautiful device in many ways. Aesthetically it's one of the few smartwatches that manages to hide its programmable heart inside the slim dimensions of a classic analogue shell. This sets it apart from its existing Pebble brethren, all of which have what can only charitably be described as an eighties charm. Given one of my most treasured possessions during my teenage years was a Casio AE-9W Alarm Chrono digital watch, and for the last three months I've been proudly wearing a Pebble Classic, I feel I speak with some authority on the matter.

Pebble Classic (above) and Pebble Time Round (below)

The Pebble Time Round can't entirely shed its geek chic ancestry. The round digital face suffers from a sub-optimally wide bezel. The colour e-ink display - although with many advantages - simply isn't as vivid and crisp as most other smartwatches on the market.

In spite of this, Pebble have managed to create a near perfect smartwatch for my purposes. I still get a kick out of receiving messages on my watch. My phone, which used to sit on my desk in constant need of attention, now stays in my pocket muted and with vibration turned off. Whenever some communication arrives I can check it no matter what I'm doing in the space of three seconds. For important messages this isn't a great advantage; the real benefit lies in avoiding the disruption caused by all those unimportant messages that can be left until later.

Obviously the apps are great too. In practice I've found myself sticking to just a few really useful apps, but those few that do stick make me feel like I'm living in the future I was promised as a child. Most of all, the real excitement comes from being able to program the thing. There's nothing more thrilling than knowing there's a computer on my wrist that's just waiting to do anything asked of it, imagination and I/O permitting. I would say that though, wouldn't I?!

Of course, that's not just true for Pebble; you could say the same for just about any current generation smartwatch: Google Wear, iWatch, Tizen, whatever. Still, it's great that Pebble are forging a different path to these others, focussing on decent battery life, nonemissive displays and a minimalist interface design.

For the last decade I've been dismissive of watches in general and never felt the need to wear one. I arrived late to the smartwatch party, but having taken the time to properly try some out, I'm now convinced they're a viable form factor. Even if only to fulfil the childhood fantasies of middle-aged geeks like me, they'll surely be here to stay (after all, there's a lot of us around!).

3 Jan 2016 : Slimming Down in 2016 #
Today is the last day of my Christmas break and the last thing I need is distractions, but when I saw this article on The Website Obesity Crisis by Maciej Cegłowski I couldn't stop myself reading through to the end. Maciej ("MAH-tchay") is a funny guy, and the article - which is really the text and slides from a presentation he gave in October - is really worth a read.

The central point Maciej makes is that websites have become script-ridden quagmires of bloat. Even viewing a single tweet will result in nearly a megabyte of download. He identifies a few reasons for this. First, ever-increasing bandwidth and decreasing latency mean web developers don't notice how horrifically obese their creations have become. Second, while the problem is well known, with no end of articles discussing the issue and presenting approaches for fixing it, they invariably miss the point: they focus on complex, clever optimisations rather than straightforward byte-count. Those that do consider byte-count can make things worse by shifting the goalposts upwards, inflating what can be considered 'normal'. Finally, the unsustainability of the Web economy has caused the scaffold of scripts used by advertisers and trackers to balloon in complexity.

There are some sublime examples in the presentation, like the 400-word article complaining about bloat that itself manages to somehow accumulate 1.2 megabytes of fatty residue on its way through the interflume arteries. If you've not read it, go do so now and heed its message.

Like I said, the last thing I need is distractions right now, which is why the article immediately prompted me to check my own website's bandwidth stats. Having nodded along enthusiastically with everything written in Maciej's presentation, I could hardly just leave it there. I needed to apply the Russian Literature Test:

"text-based websites should not exceed in size the major works of Russian literature"
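The test is simple enough to state as code. Here's a minimal sketch, where the novel sizes are rough figures I've assumed for illustration rather than measured plain-text counts:

```python
# The Russian Literature Test: a text-based page passes if its total
# transfer size is no bigger than a major work of Russian literature.
# The novel sizes below are rough, assumed plain-text figures.

NOVEL_BYTES = {
    "The Master and Margarita": 800_000,  # assumed: roughly 800 KB of text
    "The Gambler": 360_000,               # assumed: a much shorter novella
}

def russian_literature_test(page_bytes, novel="The Master and Margarita"):
    """True if the page weighs no more than the chosen novel."""
    return page_bytes <= NOVEL_BYTES[novel]

# The unoptimised root page, at 800 KB (819,200 bytes), just tips over.
print(russian_literature_test(800 * 1024))  # False
```

The stricter you are about which novel to test against, the leaner the page has to be, which is rather the point.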

What I found was pretty embarrassing. The root page is one of the simplest on my site. Here's what it looks like:

The root page of the site

Yet it weighed in at 800KB. That's the same size as the full text of The Master and Margarita by Bulgakov. Where's all that bandwidth going? The backend of my site is like Frankenstein's monster: cobbled together from random bits of exhumed human corpse. Nonetheless it should be relatively terse in its output and it certainly shouldn't need all that. Checking with Mozilla's developer tools, here's what I found.

The original network analysis

There are some worrying things here. For some reason the server spent ages sitting on some of the CSS requests. More worrying yet is that the biggest single file is the widget script for AddThis. I've been using AddThis to add a 'share' button to my site. No-one ever uses it. The script for the button adds nearly a third of a megabyte to the size, and also gives AddThis the ability to track anyone visiting the site without their knowledge.

Not good; so I dug around on the Web and found an alternative called AddToAny. It doesn't use any scripts, just slurps the referrer URL if you happen to click on the link. This means it also doesn't track users unless they click on the link. Far preferable.

After making this simple change, the network stats now look a lot healthier.

The network analysis with AddThis scripts removed

Total bandwidth dropped from 800KB to 341KB, cutting it by more than half (see the totals in the bottom right corners). It also reduced load time from 2s down to 1.5s.
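As a quick sanity check on those totals:

```python
# Check the saving quoted above: 800 KB before, 341 KB after.
before_kb, after_kb = 800, 341
saving = (before_kb - after_kb) / before_kb
print(f"{saving:.0%}")  # 57%: comfortably over half
```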

But I wasn't done yet. I harbour a pathological distrust of Apple, Google, Facebook and Microsoft, and ditched my Google account over a year ago. I've always been sad about this because Google in particular makes some excellent products that I'd otherwise love to use. Google Fonts is a case in point, with its rich collection of high quality typefaces and a really easy API for using them on the web. Well look there in the downloads and you'll see my site pulls down 150KB of font data from Google. That's the Cabin font used on the site if you're interested.

Sadly then, in my zeal to minimise Google's ability to track me, I totally ignored the plight of those visiting my site. Every time the font is downloaded Google gets some juicy analytics for it to hoard and mine.

The solution I've chosen is to copy the fonts over to my own server (the fonts themselves are open source, so that's okay). Google's servers are considerably faster at responding than my shared-hosting server, but the change doesn't seem to impact the overall download time, and even reduces the overall size by 0.17KB (relative URLs are shorter!). Okay, that's not really a benefit, but the lack of tracking surely is.

The network analysis with Google Fonts removed

The final result has improved page load and reduced bandwidth usage to less than Fyodor Dostoyevsky's The Gambler, which I think is fitting given Dostoyevsky was forced to keep it short, writing to a deadline to pay off his gambling debts. Russian Literature Test passed!

I feel chuffed that my diversionary tactics yielded positive results. All is not quite peachy in the orchard though. Many will argue that including a massive animated background on my page is hypocritical, and they're probably right. Although the shader's all of 2KB of source, it'll be executed 100 million times per second on a 4K monitor. Some of the pages also use Disqus for the comments. I've never really liked having to use Disqus, but I feel forced to include some kind of pretence at being social. Here's why it's a problem.

The network analysis when there are Disqus comments on the page

Not only does Disqus pull in hundreds of KB of extra data, it also provides another perfect Trojan horse for tracking. I've not yet found a decent solution to this, and I fear the Web is just too busy eating itself to allow for any kind of sensible fix.

27 Dec 2015 : Finally, Syberian snow #
Not in real life, but finally I'm getting the snow I feel I deserve in Syberia II. Good work Microïds!

Snowing in Syberia II

16 Dec 2015 : Let's not Encrypt just yet #
The TLS certificate for Constantia, my home server, ran out this evening. I've been using StartSSL for my certificate for several years now, and given their free automated service I've been very pleased. The downside is you can only generate one certificate at a time, so if you screw it up, there's not much that can be done (apart from ponying up). That always made me nervous as I've been known to screw things up in the past.

With the new Let's Encrypt service I was tempted to try that, but the certificates need renewing every 90 days, so I stuck to what I know. It seems I'm getting better at it though: the new certificate appears to have worked without a hitch.

14 Dec 2015 : Siberian Odyssey #
After many years of very careful observation, I've discovered I'm worryingly susceptible to advertising. If I see someone drinking a cool beer on TV my thirst will fire up. Technology adverts make me fiddle with my phone. Pizza ads will make me hungry. (Apparently I'm still immune to sports adverts though).

One of the consequences is that at certain times of year I like my games to match the season. Costume Quest at Halloween, A Bird Story in the Spring, Broken Sword in the Summer. It helps me get into the right frame of mind.

Syberia game, but not in Siberia (or even Syberia)

Last Christmas I decided Syberia would be the way to get into the Christmas spirit. Lots of wintry images, ice and snow. I played through the whole game solving the puzzles and waiting for the ice and snow to kick in. Eventually, I thought, the game would have to take me to Siberia. It's the name of the f**cking game!

So, eventually after 13 hours of play I got on a train heading for Syberia, only for the game to abruptly end.

It turns out Benoît Sokal - the game's director - misjudged how long the story was and Syberia (or even Siberia) doesn't happen until game 2.

I've now waited the entire year and it's time to go for a second attempt: my game this Christmas is going to be Syberia II. I enjoyed the first game, so I don't regret having played it, but this one had better take me to Siberia or I'll be contacting trading standards!

5 Sep 2015 : Flying livestock at Gatwick #
On their journey towards Crete via Gatwick my mum and step dad noticed this rather elegant flying pig. Or maybe it's meant to be a flying horse?! I'd like to think the implied pig reference wasn't entirely unintentional!

Pegasus airlines demonstrates their appreciation for porcine aviation

25 Jul 2015 : GameJam videos #
Game Jam was exactly a month ago and while it was pretty intense at the time, it was also a load of fun.

Alongside all their incredible help with the event, OpenLab also commissioned this great video summarising the event.

If you're still up for more footage after watching that, check out the showreel of the five phenomenal games the teams created.

And you can even download, install and play the games themselves.

18 May 2015 : Compiling OpenVDB using MinGW on Windows #

OpenVDB seems to work best on Linuxy systems. Nick Avramoussis has posted some useful and clear instructions on how to build it using VC++10/11. Unfortunately C++ libraries aren't portable between compilers, and I needed it integrated into an existing project built using MinGW.

This post chronicles my experiences with getting it to work. If you're planning to travel the same path, you should know from the start that it's quite an odyssey. OpenVDB has several dependencies which also need to be built with MinGW. But it is possible. Here's how.

The Dependencies

OpenVDB relies on several libraries you'll need to build before you can even start on the feature presentation. The best place to start is therefore downloading each of these dependencies and collecting them together.

I've listed the version numbers I'm using. It's likely newer versions will work too.

  1. Boost 1.58
  2. ilmbase 1.0.3 source code (part of OpenEXR)
  3. OpenVDB 3.0.0. Not a dependency, but you're certainly going to need it
  4. TBB 4.3 Update 5 Source

You also need zlib, but MinGW comes with a version you can use for free. Finally, grab yourself this skeleton archive which contains some files needed to complete the build.

The Structure

Each of these will end up generating a library you'll link in to OpenVDB. In theory it doesn't matter where you stick them as long as you can point g++ to the appropriate headers and libraries. Still, to make this process (and description) easier, it'll be a big help if your folders are structured the same way I did it. By all means mix it around and enjoy the results!

I've unpacked each archive into its own folder all at the same level with the names boost, ilmbase, openvdb, tbb and test. The last contains a couple of test files, which you can grab from the skeleton archive. You can download a nice ASCII-art version of the folder structure I ended up with (limited to a depth of 2) to avoid any uncertainty.

In the next few sections I'll explain how to build each of the prerequisites. This will all be done at the command line, so you should open a command window and navigate to the folder you unpacked all of the archives into.

Building Boost

Boost comes with a neat process for building with all sorts of toolchains, including MinGW. Assuming the folder structure described above, here's what I had to do.

cd boost
bootstrap.bat mingw
.\b2 toolset=gcc
cd ..

If you've downloaded the skeleton archive, you'll find the build-boost.bat script will do this for you. This will build a whole load of boost libraries inside the boost\stage\lib folder. As we'll see later, the ones you'll need are libboost_system-mgw48-mt-1_58 and libboost_iostreams-mgw48-mt-1_58.

Building ilmbase

Actually, we don't need all of ilmbase; we only need the Half.cpp file. Here's what I did to build it into the library needed.

cd ilmbase\Half
g++ -UOPENEXR_DLL -DHALF_EXPORTS=\"1\" -c -I"." -I"..\" Half.cpp
cd ..\..
ar rcs libhalf.a ilmbase\Half\*.o

This will leave you with a library libhalf.a in the root folder, which is just where you need it.

Building TBB

TBB comes with a makefile you can use straight away, which is handy. This means you can build it with this.

cd tbb
mingw32-make compiler=gcc arch=ia32 runtime=mingw tbb
cd ..

Now copy the files you need into the root.

copy tbb\build\windows_ia32_gcc_mingw_release\tbb.dll .
copy tbb\build\windows_ia32_gcc_mingw_release\tbb.def .

Building OpenVDB

Phew. If everything's gone to plan so far, you're now ready to build OpenVDB. However, there are a few changes you need to make to the code first.

Following the steps from Nick's VC++ instructions, I made these changes:

  1. Add #define NOMINMAX in Coord.h and add #define ZLIB_WINAPI in
  2. Change the include path in Types.h from <OpenEXR/half.h> to <half.h>
  3. Add #include "mkstemp.h" to the top of openvdb\io\ This is to add in the mkstemp function supplied in the skeleton archive, which for some reason isn't included as part of MinGW.

The following should now do the trick.

cd openvdb
g++ -DOPENVDB_OPENEXR_STATICLIB=\"1\" -UOPENEXR_DLL -DHALF_EXPORTS=\"1\" -c -w -mwindows -mms-bitfields -I"..\..\libzip\lib" -I".." -I"..\boost" -I"..\ilmbase\Half" -I"..\tbb\include" *.cc io\*.cc math\*.cc util\*.cc metadata\*.cc ..\mkstemp.cpp
cd ..
ar rcs libopenvdb.a openvdb\*.o

And bingo! You should have a fresh new libopenvdb.a library file in the root folder of your project.

Testing the Library

Okay, what now?

You want to use your new creation? No problemo! The skeleton archive has a couple of test programs taken from the OpenVDB cookbook.

These tests also provide a great opportunity to demonstrate how the libraries can be integrated into the MinGW build process. Here are the commands I used to build them.

g++ -DOPENVDB_OPENEXR_STATICLIB=\"1\" -UOPENEXR_DLL -DHALF_EXPORTS=\"1\" -w -c -I"." -I"boost" -I"ilmbase\Half" -I"tbb\include" test\test1.cpp
g++ -DOPENVDB_OPENEXR_STATICLIB=\"1\" -UOPENEXR_DLL -DHALF_EXPORTS=\"1\" -w -c -I"." -I"boost" -I"ilmbase\Half" -I"tbb\include" test\test2.cpp
g++ -g -O2 -static test1.o tbb.dll zlib1.dll -Wl,-luuid -L"." -o test1.exe -lhalf -lopenvdb -L"boost\stage\lib" -lboost_system-mgw48-mt-1_58 -lboost_iostreams-mgw48-mt-1_58
g++ -g -O2 -static test2.o tbb.dll zlib1.dll -Wl,-luuid -L"." -o test2.exe -lhalf -lopenvdb -L"boost\stage\lib" -lboost_system-mgw48-mt-1_58 -lboost_iostreams-mgw48-mt-1_58

The key points are the pre-processor defines for compilation:

  1. Define: OPENVDB_OPENEXR_STATICLIB
  2. Define: HALF_EXPORTS
  3. Undefine: OPENEXR_DLL

the include folders needed also for compilation:

  1. boost
  2. ilmbase\Half
  3. tbb\include

and the libraries needed during linking:

  1. tbb.dll
  2. zlib1.dll (can be found inside the MinGW folder C:\MinGW\bin)
  3. libhalf.a
  4. libopenvdb.a
  5. libboost_system-mgw48-mt-1_58.a
  6. libboost_iostreams-mgw48-mt-1_58.a

Finally you should be left with two executables test1.exe and test2.exe to try out your new creation.

27 Apr 2015 : New home help #
A homeless friend of mine thinks he may finally be getting a place to stay and it could be an opportunity for him to turn things around. It would be great news, but the prospect of him landing in an empty flat with almost no furnishings is depressing at best.

He doesn't have access to the Internet, so asked if I'd try to track down stuff people might be throwing out, but which would make good furnishings for someone with no money moving into a new place.

Anyone know of sites to search for local people offering to have things taken off their hands for little or no cash?

Anyone in the Liverpool area have spare stuff you would otherwise be thinking of throwing away?

I want to help, but I'm not really sure where to start, so any suggestions would be good. Please drop me an email, or comment below if you have any.

7 Apr 2015 : Sailfish Really Is Linux #
One of the great things about smartphone operating systems is that, despite being really quite mature, they're nonetheless still fairly well differentiated. This means there are good reasons to choose one over another. For example iOS has a very mature app ecosystem, but with restrictions that prevent some types of software being made available (crucially restrictions on software that downloads other code). In contrast, Android and Google Play have much more liberal policies. This results in a broader ecosystem, but where the overall average quality is often said to be lower.

Android also has the claim of being Linux, which in theory means it has access to the existing - incredibly mature - Linux software ecosystem. In practice for most people this is moot, since their focus is on the very different type of software available from the Play Store. For developers though, this can be important. For me the distinction is important partly because I'm already familiar with Linux, and partly as a matter of principle. In my world computing is very much about control. I love the idea of having a computer in my pocket not because it gives me access to software, or as a means of communication, but because it's a blank slate just waiting to perform the precise tasks I ask of it. That sounds authoritarian, but better to apply it to a computer than a person. I'm pretty strict about it too. Ever since being exposed to the wonder of OPL on a Psion 3a (way back in 1998), direct programmability has always been one of the main criteria when choosing a phone.

This weekend was the Easter Bank Holiday, meaning a lengthy train ride across the country to visit my family. I wanted to download some radio programmes and possibly some videos to watch en-route, but didn't get time before we set off. I'd managed to install the Android version of BBC iPlayer on my Jolla, but for some reason this doesn't cover BBC Radio, which has been split off into a separate application. Hence I embarked on a second journey while sitting on the train: installing get_iplayer entirely using my phone. This meant no use of a laptop with the Sailfish IDE, and building things completely from source as required.

The experience was enlightening: during the course of the weekend I was able to install everything from source straight on my phone. This included the rtmp streaming library and ffmpeg audio/video converter all obtained direct from their git repositories, all just using my phone.

Banished downloaded using get_iplayer

Why would anyone want to do this when you can download the BBC radio app from the store? You wouldn't, but I still think it's very cool that you can.

Here's how it happened.

get_iplayer is kind-of underground software. It shouldn't really exist, and the BBC barely tolerates it.

It's written in Perl. Getting it is just a matter of running the following command in the shell:

git clone git://

Perl is already installed on Sailfish OS by default (or at least was on my phone and is in the repositories otherwise). There were some other Perl libraries that needed installing, but which were also in the repositories. I was able to add them like this:

pkcon install perl-libwww-perl
pkcon install perl-URI

Because it's Perl, there's no need to build anything, and at this point get_iplayer will happily query the BBC listing index and search for programmes. However, trying to download a programme generates an error about rtmpdump being missing.

The rtmpdump library isn't in the Sailfish repositories, but can be built from source really easily. I was able to clone the source straight from its git repository:

git clone git://

Building from source requires the OpenSSL development libraries, which are in the repositories:

pkcon install openssl-devel

After this it can be built (although note developer mode is needed to complete the install):

cd rtmpdump
make install
cd ..

As part of this build the librtmp library will be created, which needs to be added to the library path.

echo /usr/local/lib > /etc/

This should be enough to allow programmes to be downloaded in flv format. However, Sailfish won't be comfortable playing these unless you happen to have installed something to play them with. get_iplayer will convert them automatically as long as you have ffmpeg installed, so getting this up and running was the next step. Once again, the ffmpeg source can be cloned directly from its git repository:

git clone git://

ffmpeg installation

The ffmpeg developers have done an astonishing job of managing ffmpeg's dependencies. It allows many extras to be baked into it, but even without any of the other dependencies it'll use the autoconfig tools to allow a minimal build to be created:

pkcon install autotools
cd ffmpeg
./configure
make install
cd ..

ffmpeg is no small application, and compiling it on my phone took over an hour and a half. I know this because we watched an entire episode of Inspector Montalbano in the meantime, which get_iplayer helpfully tells me is 6000 seconds long!

Inspector Montalbano info from get_iplayer

Nonetheless, with that done the puzzle is complete, and get_iplayer will download and convert audio and video to formats that can be listened to or viewed in the Sailfish media player.

For me there's something beautiful about the ability to build, install and run these applications directly on the phone. get_iplayer is command-line, so lacks the polished GUIs of the official applications, but it's still very efficient and usable. I get that this makes me sound like Maddox, but that only makes me more right.

Three, my mobile carrier, insists I'm using tethering and cuts my connection whenever I try to download files using get_iplayer. It's annoying to say the least, but highlights the narrow gap between GNU/Linux on a laptop and GNU/Linux on a Sailfish OS phone.

7 Feb 2015 : Impressed by GitHub #
We recently started working on the Horizon 2020-funded Wi-5 project, and one of the questions that immediately came up was "where to host our code repositories?" The nature of the project is that not all of the code can be made public, so private repositories are essential. After looking at GitHub's pricing policy, I'd almost come to the conclusion we might have to rule it out, until stumbling on their Education Team. A quick submission later and they got back to say they'd upgraded the Wi-5 GitHub organisation to the Silver plan for free. I'm genuinely impressed. Thank you GitHub!

31 Dec 2014 : Automarking Progress #
I've always hated marking. Of all the tasks that gravitate around the higher education process, like lecturing, tutoring, creating coursework specifications and writing exams, marking has always felt amongst the least rewarding. I understand its importance, both as a means of providing feedback (formative) and applying judgement (summative). But good feedback takes a great deal of time, and assigning a single number that could significantly impact a student's life chances also takes a great deal of responsibility. Multiply that by the size of a class, and it can become impossible to give it the time - and energy - it deserves.

Automation has always offered the prospect of a partial solution. My secondary-school maths teacher - who was a brilliant man and worth listening to - always said that maths was for the lazy. It uncovers routes to generalisations that reduce the amount of thinking and work needed to solve a problem. Programming is the practical embodiment of this. So if there's one area which needs the support of automation in higher education, it must be marking.

Back in 1995 when I was doing my degree in Oxford, they were already using automated marking for Maple coursework. When I started at Liverpool John Moores in 2004 I was pretty astonished that they weren't doing something similar for marking programming coursework. Roll on ten years and I'm still at LJMU, and programming coursework is still being marked by hand. We have 300 students on our first year programming module, so this is no small undertaking.

To the University's credit, they've agreed to provide funds as a Curriculum Enhancement Project to research into whether this can be automated, and I'm privileged to be working alongside my colleagues Bob Askwith, Paul Fergus and Michael Mackay to try to find out. As I've implied, there are already good tools out there to help with this, but every course has its own approach and requirements. Feedback is a particularly important area for us, so we can't just give a mark based on whether a program executes correctly and gives the right outputs.

For this reason while Google has spent the tail-end of 2014 evangelising about their self-driving cars, I've been busy setting my sights for automation slightly lower. If a computer can drive me to work, surely it's only right it should then do my work for me when I get there?

There are many existing approaches and tools, along with lots of literature to back it up. For example Ceilidh/CourseMarker (Higgins, Gray, Symeonidis, & Tsintsifas, 2005; Lewis & Davies, 2004), Try (Reek, 1989), HoGG (Morris, 2003), Sphere Engine (Cheang, Kurnia, Lim, & Oon, 2003), BOSS (Joy & Luck, 1999), GAME (Blumenstein, Green, Nguyen, & Muthukkumarasamy, 2004), CodeLab, ASSYST (Jackson & Usher, 1997) and others.

Unfortunately many of these existing tools don't seem to be available either publicly or commercially. For those that are, they're not all appropriate for what we need. CourseMarker looked promising, but its site is down and I've not been able to discover any other way to access it. CodeLab is a neat site, which our students would likely benefit from, but at present it wouldn't give us the flexibility we need to fit it in with our existing course structure. The BOSS online submission system looks very viable but deploying it and getting everyone using it would be quite an undertaking; it's something I definitely plan to look into further though. Finally Sphere Engine provides a really neat and simple way to test out programs. In essence it's a simple web service that you upload a source file to, which it then compiles and executes with a given set of inputs. It returns the generated output which can then be checked. It can do this for an astonishing array of language variants (around 65 at the last count: from Ada to Whitespace) and is also the engine that powers the fantastic Sphere Online Judge. Sphere Engine were very helpful when we contacted them, and the simplicity and flexibility of their service was a real draw. Consequently the approach we're developing uses Sphere Engine as the backend processor for our marking checks.
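As an illustration of the final step - checking the output Sphere Engine hands back - here's a minimal Python sketch. The function name and the leniency rules (ignoring trailing whitespace and trailing blank lines) are my own illustrative choices, not something taken from Sphere Engine or the project's actual scripts:

```python
def outputs_match(expected, actual):
    """Compare a program's captured output against the expected transcript,
    ignoring trailing whitespace on each line and trailing blank lines."""
    def normalise(text):
        # Drop trailing blank lines, then trailing spaces on each line
        return [line.rstrip() for line in text.rstrip().splitlines()]
    return normalise(expected) == normalise(actual)
```

A stricter or looser comparison is easy to swap in; the point is that once the service returns the program's output, the check itself is only a few lines of code.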

Compilation and input/output checks aren't our only concerns though. The feedback sheet we've been using for the last few years on the module covers code efficiency, good use of variable names, indentation and spacing, and appropriate commenting, as you can see in the example here.

Marking by human hand

With the aim of matching these as closely as possible, we're therefore applying a few other metrics:

Comment statistics: Our automated approach doesn't measure comments, but rather the spacing between them. For Java code the following regular expression will find all of the comments as multi-line blocks: '/\*.*?\*/|//.*?$(?!\s*//)' (beautiful huh?!). The mean and standard deviation of the gap between all comments is used as a measure of quality. Obviously this doesn't capture the actual quality of the comments, but in my anecdotal experience, students who are commenting liberally and consistently are on the right tracks.
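To make the metric concrete, here's a small Python sketch built around the regular expression above. The function name and the decision to measure gaps in whole lines are mine for illustration, not lifted from the actual marking scripts:

```python
import re
import statistics

# The regex from the post: block comments, or runs of consecutive
# line comments (the lookahead keeps a run of // lines in one match).
COMMENT_RE = re.compile(r'/\*.*?\*/|//.*?$(?!\s*//)', re.DOTALL | re.MULTILINE)

def comment_gap_stats(java_source):
    """Return (mean, standard deviation) of the line gaps between
    the starts of successive comments."""
    starts = [java_source.count('\n', 0, match.start())
              for match in COMMENT_RE.finditer(java_source)]
    gaps = [b - a for a, b in zip(starts, starts[1:])]
    if len(gaps) < 2:
        return (gaps[0] if gaps else 0, 0.0)
    return (statistics.mean(gaps), statistics.stdev(gaps))
```

A low mean with a low standard deviation suggests liberal, evenly spread commenting, which is the pattern being rewarded.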

Variable naming: Experience shows that students often use single letter or sequentially numbered variable names when they're starting out, as it feels far easier than inventing sensible names. In fact, given the first few programs they write are short and self-explanatory, this isn't unreasonable. But at this stage our job is really to teach them good habits (they'll have plenty of opportunity to break them later). So I've added a check to measure the length of variable names, and whether they have numerical postfixes, by pulling variable declarations from the AST of the source code.
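The real check pulls declarations from the AST; as a stand-in, this hedged Python sketch applies the same two tests (single-letter names and numeric postfixes) using a deliberately simplified regex over local declarations, so it will miss plenty of valid Java:

```python
import re

# Simplification: only matches "<primitive or String> <name>" declarations,
# not the full range of Java declaration forms an AST would capture.
DECL_RE = re.compile(
    r'\b(?:int|long|float|double|boolean|char|byte|short|String)'
    r'\s+([A-Za-z_]\w*)')

def naming_flags(java_source):
    """For each declared variable, report whether its name is a single
    letter and whether it carries a numeric postfix."""
    return [(name, len(name) == 1, bool(re.search(r'\d+$', name)))
            for name in DECL_RE.findall(java_source)]
```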

Indentation: As any programmer knows, indentation is stylistic, unless you're using Python or Whitespace. Whether you tack your curly braces on the end of a line or give them a line of their own is a matter of choice, right? Wrong. Indentation is a question of consistency and discipline. Anything less than perfection is inexcusable! This is especially the case when just a few keypresses will provoke Eclipse into reformatting everything to perfection anyway. Okay, so I soften my stance a little with students new to programming, but in practice it's easiest for students to follow a few simple rules (Open a bracket: indent the line afterwards an extra tab. Close a bracket: indent its line one tab fewer. Everything else: indent it the same. Always use tabs, never spaces). These rules are easy to follow, and easy to test for, although in the tests I've implemented they're allowed to use spaces rather than tabs if they really insist.
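Those rules lend themselves to a very direct check. This hypothetical Python checker keeps a running brace count and counts lines whose leading whitespace deviates from the expected level; it ignores braces inside strings and comments, so treat it as a sketch rather than the implemented tests:

```python
def indentation_errors(java_source, indent='\t'):
    """Count lines whose leading indentation breaks the simple rules:
    one extra level after '{', one level fewer on a line starting '}'."""
    errors = 0
    depth = 0
    for line in java_source.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # blank lines are exempt
        expected = depth - 1 if stripped.startswith('}') else depth
        leading = line[:len(line) - len(line.lstrip())]
        if leading != indent * max(expected, 0):
            errors += 1
        depth += stripped.count('{') - stripped.count('}')
    return errors
```

Passing `indent='    '` instead of a tab covers the students who really insist on spaces.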

Efficient coding: This one has me a bit stumped. Maybe something like McCabe's cyclomatic complexity would work for this, but I'm not sure. Instead, I've lumped this one in as part of the correct execution marks, which isn't right, but probably isn't too far off how it's marked in practice.

Extra functionality: This is a non-starter as far as automarking's concerned, at least in the immediate future. Maybe someone will one day come up with a clever AI technique for judging this, but in the meantime, this mark will just be thrown away.

Our automarking script performs all of these checks and spits out a marking sheet based on the feedback sheet we were previously filling out by hand. Here's an example:

Marking but not as we know it

As you can see, it's not only filling out the marks, but also adding a wodge of feedback based on the student's code at the end. This is a working implementation for the first task the students have to complete on their course. It's by far the easiest task (both in terms of assessment and marking), but the fact it's working demonstrates some kind of viability. I'm confident that most of the metrics will transfer reasonably elegantly to the later assessments too.

There's a lot of real potential here. Based on the set of scripts I marked this year, the automarking process is getting within one mark of my original assessment 80% of the time (with discrepancy mean=1.15, SD=1.5). Rather than taking an evening to mark, it now takes 39.38 seconds.

The ultimate goal is not just to simplify the marking process for us lazy academics, but also to provide better formative feedback to the students. If they're able to submit their code and get near-instant feedback before they submit their final coursework, then I'm confident their final marks will improve as well. Some may say this is a bit like cheating, but I've thought hard about this. Yes, it makes it easier for them to improve their marks. But their improved marks won't be chimeras; rather, they'll come about because the students have grasped the concepts we've been trying to teach them. Personally I have no time for anyone who thinks it's a good idea to dumb down our courses, but if we can increase students' marks through better teaching techniques that ultimately improve their capabilities, then I'm all for it.

As we roll into 2015 I'm hoping this exercise will reduce my marking load. If that sounds good for you too, feel free to contribute or join me for the ride: the automarking code is up on GitHub, and this is all new for me, so I have a lot to learn.

14 Dec 2014 : Adafruit Backlights as Nightlights #
Yesterday I spent a fun and enlightening day at DoESLiverpool for their monthly Maker Day. It was my first time, and I'm really glad I went (if you live near Liverpool and fancy spending the day building stuff, I recommend it). I got loads of help from the other makers there, and at the end of the day I'd built a software-controllable blinking light and gained a new-found confidence for soldering (not bad for someone who's spent the last twenty years finding excuses to avoid using a soldering iron). Thanks JR, Jackie, Doris, Dan and everyone else I met on the day! Here's the little adafruit Trinket-controlled light (click to embiggen):

Adafruit Trinket with a backlight module attached, alongside a tealight

The light itself is an adafruit Backlight Module, an LED encased in acrylic that gives a nice consistent light across the surface. In the photos it looks pretty bright, and Molex1701 asked whether it'd be any good for a nightlight. Thanks for the question!

The only thing is I know nothing about lights and lumens and wouldn't trust my own judgement when wandering around in the semi-dark. So to answer the question I thought it'd be easiest to take a few photos. The only room in the flat where we get total darkness during the day is the bathroom, so I stuck the adafruit in the bath along with some helpful gubbins for reference (ruler, rubber duck, copy of Private Eye) and took some photos. As well as the backlight module, there are also some photos with the full light and a standard tealight (like in the photo above) for comparison. I reckon tealights must be a pretty universal standard for photon output levels.

These first three below (also clickable) show the same shot in different lighting conditions from afar. Respectively they're the main bathroom light, the backlight module, and a tealight.

Bathtub with standard fluorescent bulb from above
Bathtub with backlight light inside
Bathtub with tealight light inside

Here are two close-up shots with backlight and tealight respectively.

Bathtub close-up with backlight light inside
Bathtub close-up with tealight light inside

As you can see from the results, the backlight isn't as bright as a tealight. Whether it'd be bright enough to use as a nightlight is harder to judge, but my inclination is to say it probably isn't. Maybe if you ran a couple of them side by side they'd work better. It's also worth noting the backlight module is somewhat directional. There is light seepage from the back of the stick, but most of the light comes out from one side and things are brighter when in line with it.

It may also be worth saying something about power output. Yesterday JR, Doris and I measured the current going through it. The backlight was set up with 3.3V and drew 10 mA of current. The battery I'm using is a 150mAh Lithium Ion polymer battery, so I'm guessing the backlight should run for around 15 hours (??) on a single charge. Add in the power needed for the trinket and a pinch of reality salt and it's probably much less. Last night it ran from 8pm through to some time between 4am and 10am (it cut out while I was asleep), so that's between 8-14 hours.
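The arithmetic behind that estimate, with the Trinket's own draw included as an explicitly made-up placeholder figure (it wasn't measured):

```python
# Measured in the post: 150 mAh battery, backlight drawing 10 mA at 3.3 V.
battery_mah = 150
backlight_ma = 10

ideal_hours = battery_mah / backlight_ma  # 15.0 hours, backlight alone

# Placeholder only: the Trinket's draw wasn't measured. A few extra mA
# comfortably drags the estimate down into the observed 8-14 hour range.
trinket_ma = 5
realistic_hours = battery_mah / (backlight_ma + trinket_ma)  # 10.0 hours
print(ideal_hours, realistic_hours)
```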

If you do end up building a nightlight from some of these Molex1701, please do share!

11 Aug 2014 : Thieving scum! #
It's been nearly seven years since my previous venture into the criminal mind of the master thief, but as part of my holiday therapy I'm becoming Garrett again. There have been many great stealth games to fill the gap since the last Thief release, including the quite brilliant Dishonored from 2012. This was the closest yet to reproducing the setting and atmosphere of the Thief series, and many would say it surpassed it in many ways. Dunwall captured the same steampunk aesthetic, divided society and solitary exploration as The City. The no-kill stealth mechanics and multipath approach to gameplay were bloodline descendants of the original Thief. As a game it was an astonishing achievement. But it lacked one crucial element: a voice. The prospect of taking on the role of Garrett the master thief is just too exciting. To become a truly accomplished larcenist, you have to submit to his amoral self-justification. His sardonic narrative is a crucial counterbalance to the despair and suffering of the environment.

There have been criticisms levelled at the game for its gameplay linearity, repetitive ambient dialogue and failure to achieve the same level of psychological tension. These are all no doubt valid criticisms, but while I've so far only played through the first chapter, none of these are yet detracting from my enjoyment of the game. The shadows still make you feel invisible and there's still a sense of invincibility as you nick a diamond right from under the jeweller's nose. I can tell already: this is going to be really great therapy!

Thief in the rain Thief streets More Thief streets

7 Jul 2014 : Real or Render? Render or Real? #
The astonishing ability of computers to turn entirely imaginary objects into realistic representations is obvious just by watching pretty much any recent blockbuster movie. I know I go on about it a lot, but it bears repeating that with 3D printing you can take it a step further: turning entirely imaginary objects into their physical counterparts. This isn't the first time I've compared renders to reality (or is it the other way around? I forget), but the question of how close they can get remains a bit of a fascination.

So what do you think? One of these images is a render, created using Blender Cycles. The other is a photograph of a 3D print generated from the same model and cast in bronze. Which is which though?

If you're not sure, click on the image for a larger version, and leave a comment if you think you've figured it out!

Cubic Celtic knot rendered using Blender Cycles and 3D printed in raw bronze

11 Jun 2014 : How much information's created when I stare out the window? #
This afternoon I received an advertising email from the Viglen Marketing team. It boldly repeats the oft-quoted statement of Eric Schmidt from Google's Atmosphere convention in 2010:

"Between the birth of the world and 2003, there were 5 exabytes of information created. We now create 5 exabytes every 2 days."

Every time I read this quote my faith in human intelligence dies a little more. It's an old quote now, but it still riles me: it's such a patently absurd statement. I can understand Dr. Schmidt making it for the sake of theatre, but please don't repeat it as if it's fact.

There have been far more detailed and convincing critiques of the claim than I'm able to offer, but I wouldn't even extend the benefit of the doubt that these lavish on Google's Executive Chairman. The fact is, the same amount of information is being created now as has ever been the case. If you want to somehow massage the quote into plausibility you have to narrow its meaning beyond recognition. Perhaps it means data recorded, rather than information created? Perhaps it only means by humans? Perhaps it means only in sharable form? When the information is useful? On Earth? When someone is watching?

How much information is there in a cave painting? I'd wager more than Google explicitly stores in all of its data warehouses. How much information gets sucked into a black hole every second? I can't even be bothered to think about it. It's just the basic difference between discrete and continuous stuff.

Frankly, it probably means "data that has been recorded permanently by humans in discrete form". So why not say so?

This morning I was relatively happy; now I'm just annoyed.

Stupid quotes that shouldn't be repeated

22 May 2014 : Technology vs The Law #
Broken CD image by omernos The problem with technology is that it has created a new and unique power struggle; a struggle that the law has found itself on the wrong side of. The legal bullying of Ladar Levison, which ultimately resulted in him having to shut down his company Lavabit, is a nasty symptom of the way the law reacts when it feels threatened.

I won't go into the details here, but recommend you take a look at Levison's description of what happened in his Guardian article.

How can the legal system have got so fucked up that this can happen? How is it - to use Levison's words - that he can find himself "standing in a secret courtroom, alone, and without any of the meaningful protections that were always supposed to be the people's defense against an abuse of the state's power"?

To understand this, we need to figure out where the law gets its power from. The nature of the law has always been inextricably linked with power. It's the people in power who define the laws and this gives them credibility through process (although it doesn't give any guarantee that the laws are just). How do you get to be in power? If you're lucky, you might live in a country where there's a process for this too. In the US they exercise what they call democracy (it's not exactly what I'd call democracy, but it's still a lot better than what we have here in the UK). Still, the legitimacy of the process is really seeded elsewhere: it's a redistribution of powers granted conditionally by those who are physically most powerful. Some might say the legitimacy comes from something like the constitution, but in practice the legitimacy of a constitution comes from the war that was won beforehand. Without the demonstration of superior power, the constitution would have rather been just a manifesto put together by a bunch of terrorists.

All laws are founded on power and all power is founded on force. Except that technology has a tendency to destabilise this equation. Take guns (I'm not a big fan of guns in practice, and I'm going to conveniently classify them as technology for the purposes of this argument). Guns have the potential to be an amazing leveller. Prior to their introduction, the force behind the power was premised on physical strength and numbers. Suddenly with guns physical strength becomes an irrelevance. And this isn't just about the advantage of being the first to have one. If everyone owned a gun then actual physical strength would no longer be a consideration since everyone would have the means to end another person's life at the click of a button. I'm not advocating this as a wise move of course (just think what would happen if there was a "Terminate user" option next to the "Report abuse" link on YouTube), but it does illustrate the point.

The law is ultimately reliant on physical force for its legitimacy. Not only does it rely on political power (which is underwritten by force), but it also uses force as its last-resort sanction. There are many intermediate sanctions (removal of money and property, restrictions of rights, threat of surveillance, storing details on a database...), but if these fail, or if someone refuses to submit to them, the ultimate sanctions are incarceration or death, both of which are physical threats. And it's not just legal outcomes, but also the legal process that relies on the threat of physical force. During an investigation, if someone refuses to comply with a search warrant, the police are within their rights to break down the door. Take away the physical threat and you leave the law impotent.

New technologies, and especially encryption and distributed networking technologies, pose a real threat to this. While you can break down a door with a sledgehammer, you can't decrypt an encrypted message by smashing open a computer. If the encryption is done right, you can't decrypt the message at all: you're fighting against the laws of nature and mathematical axioms*. Up until now, the solution sought by the law has been to go after the encryptor rather than the encryption (take for example RIPA in the UK). But technology is nibbling away at this too. Distributed technologies support actions that have no single enactor; information and processes that don't belong to anyone. You can't pursue a physical protector if none exists.

The events surrounding Lavabit and the actions of the intelligence and police services uncovered by Edward Snowden demonstrate a response by the law to try to address a threat which is conceptual rather than physical. The growing realisation that physical solutions can't work has led to laws and processes that were designed to protect being contorted into tools that many people no longer recognise as just.

Unless the law can find new ways to deal with the conceptual threats to its processes that new technologies have introduced, the temptation to become increasingly draconian will remain. There need to be new solutions that don't amount to "if we can't attack the problem with physical force, we'll attack an innocent bystander instead."

On the other hand, individuals will continue to invest in more robust cryptography and make more widespread use of distributed technologies (by which I absolutely do not mean the Cloud!) as a way of preserving the privacy and (ethical) rights that recent events suggest the law has started taking away.

* May be subject to change.

15 May 2014 : Treading More Lightly #
Footprints image by mailsparky Some time ago I started the process of disentangling myself from Google's clutches. This morning I finally finished the process by deleting the last vestiges of my account.

When Google first appeared it demonstrated a refreshingly open and efficient approach to the Internet, so I was making prolific use of their services until a couple of years ago. Since switching away from Google's search it's felt like their other services have become increasingly irrelevant to me.

In spite of this I discovered this morning the tentacles were still embedded pretty deep. I had documents scattered all over Google Drive, a languishing Google+ profile mostly used for access to hangouts, a Google Talk account (as a front for getting people to use Jabber), Google Analytics, Android accounts, an old Blogger blog; the list goes on.

And this was just the exposed information. I dread to think about the mountain of data being amassed in the background. The reality check really hit last year when Google's services went offline for four minutes in August. Subsequent reports suggested that Internet traffic dropped by 40% as a result. That's a dangerous over-reliance we have there. I was also impressed when one of my students, involved in the CodePool project (if you're reading this: you know who you are!) attempted to remove her Web footprint; I was surprised at how successful she was.

This isn't an attempt to remove my Web presence though and sadly I don't expect the data accumulation to stop. I'm sure Google will continue to know more about my movements than anyone else, whether company or individual. The biggest problem for me is that, even though everyone knows that Google knows, we don't really know the extent of knowledge Google can derive from our data. That's a real concern.

Google still offers outstanding services. I've found no replacement for the public-facing calendar sharing of Google Calendar. I'll inevitably continue to use Google Scholar, Google Maps and Google Images but without the login. Yet most of Google's services are replicated by smaller and less intrusive companies. I'm under no illusion about the motives of these smaller rivals: they still want my data and ad-revenue. But by virtue of their size they're less of a threat to my privacy.

23 Feb 2014 : Adventures with The Other Half #
It's fair to say this is a misleading title. As you'll discover if you take the trouble to read through (and now you've started, you'd be missing out if you didn't), this has nothing to do with either feats of derring-do or my wife Joanna.

No, this is to do with my Jolla phone. Back in the day, before smartphones were ubiquitous, many phone manufacturers tried to lure in the punters by offering interchangeable fascias or backplates. Not very subtle, or high-tech, but presumably effective.

Well, Jolla have decided to return to this, while taking the opportunity to update it for the 21st Century. Each Jolla smartphone appears to be built in two halves, split parallel to the screen and with the back half ("The Other Half") replaceable to provide not just different styles, but also additional functionality. The extra functionality is provided by cleverly using NFC-detection of different covers, along with the ability for covers to draw power from and communicate with the main phone via a selection of pins on the back.

At the moment there are only four official Other Halves that I'm aware of: Snow White (the one that comes as standard), Keira Black, Aloe and Poppy Red (the preorder-only cover). They use the NFC capability to change the styling of the phone theme as the cover is changed, but in the future there's a hope that new covers might provide things like wireless charging, solar charging, a pull-out keyboard, etc.

For me, the interesting thing about the phone has always been the Sailfish OS that powers it. As anyone who's ever set eyes on me will attest, I've never been particularly fashion conscious, so the prospect of switching my phone cover to match my outfit has never offered much appeal. However, since the good sailors at Jolla have released a development kit for The Other Half, and since it seemed like an ideal challenge to test out the true potential of future manufacturing - by which I mean 3D printing - this was an opportunity I could not miss.

Rather brilliantly, the development kit includes a 3D model which loads directly into Blender.


From there it's possible to export it in a suitable format for upload directly to the Shapeways site. The model is quite intricate, since it has various hooks and tabs to ensure it'll fit cleanly on to the back of the phone. Sadly this means that most of the usual materials offered by Shapeways are unavailable without making more edits to the model (it will take a bit more work before it can be printed in sterling silver or ceramic!). My attempt to print in polished Strong & Flexible failed, and eventually I had to go with Frosted Ultra Detail. Not a problem from a design perspective, but a bit more expensive.

The result was immaculate. All of the detail retained, a perfect fit on the phone and a curious transparent effect that allows the battery, sim and SD card to be seen through the plastic.


Although a perfect print, it wasn't a good look. Being able to see the innards of the phone is interesting in an industrial kind of way, but the contouring on the inside results in a fussy appearance.

The good news is that all of the undulations causing this really are on the inside. The outer face is slightly curved but otherwise smooth. The printing process results in a very slight wood-grain effect, which I wasn't anticipating, but in hindsight makes sense. The solution to all of this was therefore to sand the outside down and then add some colour.


The colour I chose was a pastel blue, or to give its full title according to the aerosol it came in, Tranquil Blue. Irrespective of the paint company's choice of name, the result was very pleasing, as you can see from the photos below. The 3D-printed surface isn't quite as nicely textured as the original Other Half cover that came with the phone, but I believe most people would be hard-pressed to identify it as a 3D-printed cover. It looks as good as you might expect from mass-produced commercial plasticware.

With the design coming straight from the developer kit, I can't claim to have made any real input to the process. And that's an amazing thing. Anyone can now generate their own 3D printed Other Half direct from Shapeways with just a few clicks (and some liberal unburdening of cash, of course!). A brand-new or updated design can be uploaded and tested out just as easily.

It's genuinely exciting to see how 3D printing can produce both practical and unique results. The next step will be to add in the NFC chip (it turns out they're very cheap and easy to source), so that the phone can identify when the cover is attached.



9 Feb 2014 : Jolla: Easy Wins #
This weekend I tried my hand at a bit of SailfishOS programming, and once again have been pleasantly surprised.

There's no shortage of places to get Apps from for a Jolla phone: the Jolla Store, the Yandex Store and the OpenRepos Warehouse being just a few. But even with this smörgåsbord of stores there are still obvious gaps. For example, I wanted to connect my phone through my home VPN, so that I can access things like SMB shares and ssh into my machines.

The iPhone has an OpenVPN client, but the frustrating file management on the iPhone meant I never got it up and running. Unsurprisingly Android has good OpenVPN support which combines well with the broad range of other good network tools for the platform.

In contrast the various SailfishOS stores are sadly bereft of OpenVPN solutions. However, a quick search using pkcon showed the command line openvpn client available in the Jolla repositories. I was astonished when, after a few commands to transfer the relevant client certificates and install the tool, it was able to connect to my VPN first time.


This is what I'm loving about SailfishOS. It speaks the same language as my other machines and runs the same software. Getting it to talk to my VPN server was really easy, even though you won't find this advertised in the headline features list.

Still, having a command line tool isn't the same as having a nicely integrated GUI App, so this seemed like a great opportunity to try out Jolla's Qt development tools. I've not done any Qt development in the past so started by working through the examples on the Sailfish site.

Qt seems to be a nice toolkit and it's set up well for the phone, but Qt Quick and QML in particular require a shift in approach compared to what I'm used to. Qt Quick blurs the boundary between the QML and C++ code. It's effective, but I find it a bit confusing.


Still, after a weekend of learning and coding, I've been able to knock together a simple but effective front-end for controlling OpenVPN connections from my phone.

As well as providing a simple fullscreen interface, you can also control the connection directly from the home screen using the clever SailfishOS multi-tasking cover gestures: pull the application thumbnail left or right to connect to or disconnect from the server.


What I think this demonstrates is how quick and easy it is to get a useful application up and running. The strength is the combination of the existing powerful Linux command line tools, and the ability to develop well-integrated SailfishOS user interfaces using Qt. I'm really pleased with the result given the relatively small amount of effort required.

If I get time, there's plenty more to be done. Currently the configuration runs directly from the openvpn script, but allowing this to be configured from the front-end would be an obvious and simple improvement. After this, mounting SMB shares will be next.
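Under the hood, a front-end like this only has to start and stop the command-line client. Here's a minimal sketch of that idea in Python; the class, config path and method names are illustrative assumptions of mine, not the app's actual code:

```python
import signal
import subprocess

class VpnController:
    """Start and stop an OpenVPN client process.

    The default config path is hypothetical; pass `command` to
    override the whole command line (handy for testing).
    """

    def __init__(self, config="/home/nemo/vpn/client.ovpn", command=None):
        self.command = command or ["openvpn", "--config", config]
        self.process = None

    def connect(self):
        # Launch the client only if it isn't already running
        if self.process is None or self.process.poll() is not None:
            self.process = subprocess.Popen(self.command)

    def disconnect(self):
        # Ask the client to shut down cleanly, then reap it
        if self.process is not None and self.process.poll() is None:
            self.process.send_signal(signal.SIGTERM)
            self.process.wait()
        self.process = None

    def is_connected(self):
        return self.process is not None and self.process.poll() is None
```

A GUI would then simply call connect() and disconnect() from its button or cover-gesture handlers.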

2 Feb 2014 : Smartphone Homecoming #
First, a warning: if technology doesn't interest you then you're likely to find what you read below just a bit odd. If it does then you might find it a bit opinionated. If you're normal, you'll find it boring. If you're not sure which category you fall into, go ahead and read on, and then check back here to find out!

For many months now I've been stuck in the smartphone wilderness, wandering between platforms trying to find one that makes me feel empowered in the way a good computer should.

Well, I think I've finally found my nirvana, having received my Jolla smartphone yesterday. After playing around with it for just a day, it's already in a much more usable state than the iPhone it's replacing. Although the hardware's nothing to write home about, the whole package is beautifully designed with a flair you rarely see on a mobile device. Programs run well, with fluid and transparent multitasking. The gestures are simple, consistent and brilliantly effective: you can use the phone with just a single hand. For a first device, the completeness of the functionality is impressive. Best yet, the console is just a couple of clicks away, giving full access to the entire device (I already have gcc and python installed).

I have to admit, this is all very exciting. I've used multiple devices over the last year trying to find something interesting without luck, so it's worth considering the path that brought me here. It can be neatly summarised by the photo below.

My smartphone experience has been coloured by the earlier devices that defined my computing development. The strength of a device has always been measured - for me - by the potential to program directly on the device. What's the point of carrying a computer around if you can't use it to compute?! From Psions to Nokia Communicators through to the ill-fated Maemo devices, this has always been by far their most exciting trait.

When Maemo/Meego was killed off, the only real alternatives were iOS and Android. I tried both. Android is the spiritual successor to Windows. Its strength is defined by the software that runs on top of it, and it's open enough to interest developers. It's not so bad that people want to avoid it but nonetheless doesn't excel in any particular way. The iPhone on the other hand is an astonishing device. It achieves simplicity through a mixture of control and illusion. In its own way it's perfect, making an excellent communication device. A computing device: less so.

As an aside, both devices are also Trojan horses. Google just wants you logged in to your Google account so it can collect data. Apple wants to seduce you into its ecosystem, if necessary by making it harder to use anything else. Both are fine as long as the value proposition is worth it.

In February 2013 I finally decided to retire my N900. The provocation for this was actually the release of the Ubuntu Touch developer preview. I purchased a Nexus 4, which is a beautiful piece of hardware, and flashed it with Ubuntu. Sadly, the operating system wasn't ready yet. I've kept the OS on the phone up-to-date (the device is now dual-boot) and in fact it's still not ready yet. If it fulfils its goal of becoming a dual mobile/desktop OS, it could have real potential. But (in the immortal words of Juba) "not yet".

So, in May 2013 I moved to an iPhone. The main motivation for this was to try to establish what data Apple collects during its use, especially given the way Siri works. I've continued using it for this purpose until now, maintaining it exclusively as my main phone in order to ensure valid results. After ten months of usage I think I've given it a fair tryout, but it's definitely not for me. It implements non-standard methods where existing standards would have worked just as well. Options are scattered around the interfaces or programs through a mixture of soft-buttons, hardware-buttons and gestures. I find this constantly frustrating, since most of the time the functionality I'm after doesn't actually exist. Yes, mystery meat navigation has escaped the nineties: it's alive and well on the iPhone. The hardware - while well made - is fussy with its mixture of materials and over-elaborate bevelling. However, ultimately what rules it out is the lack of support for programming the device on the device. There are some simple programming tools, but nothing that really grants proper control.

Finally I've ended up with a Jolla phone running Sailfish OS. There's no doubt that this is the true successor to Maemo. If you have fond memories of the Internet Tablet/N900/N9/N950 line of devices, then I'd recommend a Jolla. If you like Linux and want a phone that really is Linux, rather than a Java VM that happens to be running on the Linux kernel, then I'd recommend a Jolla. Clearly, I'm still suffering from the first-flush of enthusiasm, but it definitely feels good to be finally in possession of a phone that I feel like I can control, rather than one that controls me.

For the record, the photo shows (from right to left) Ubuntu Touch running on a Nexus 4, an iPhone 5 running iOS 7.0.4, Android 4.4.2 KitKat on a Nexus 4 and a Jolla device running Sailfish OS (Naamankajärvi). There are actually only three devices here: both Nexuses are the same. The overall photo and the shot of the Android device were taken using the Jolla; the Jolla and Ubuntu phones were shot with the iPhone; the iPhone photo was taken with the Android.

I had an interesting experience getting all of the photos off the phones and onto my computer for photoshopping together. Getting the photos off the Jolla and Android devices was easy enough using Bluetooth transfer. The iPhone inexplicably doesn't support Bluetooth file transfer (except with the uselessly myopic AirDrop), and getting anything off the device is generally painful. Eventually I used a third-party application to share the photos over Wi-Fi. However, it was Ubuntu Touch that gave the most trouble. The Nexus 4 doesn't support memory cards, Ubuntu Touch doesn't yet support Bluetooth and the only option offered was to share via Facebook. I gave up on this. No doubt Ubuntu Touch will improve and ultimately outdo iOS on this, but... not yet.

8 Jan 2014 : Digital Forensics: can it really be an academic discipline? #
Although Digital Forensics isn't my main research area, it is one that I've had involvement with for some time. I work with many very talented researchers in the area of digital forensics, and have worked in the past with the Police in testing new digital forensics tools.

Yet in spite of this, I've struggled with the underpinnings of digital forensics for some time. Unlike security research, which is built on a set of clear principles that remain consistent over time, the principal techniques of digital forensics appear to me to be inevitably temporary and fleeting.

To be clear, I do understand that there are clearly defined goals for good digital forensics practice, and that the overarching aim is to collect evidence within the constraints of these requirements. For example, the need to collect data in a non-destructive way, while ensuring traceability, collecting information about provenance, and ideally supporting repeatability of collection. If digital forensics constrained itself to the pure pursuit of managing data based on these principles, then that would provide scope for a practically useful, but theoretically unremarkable, area for future research.

I also understand that there are interesting questions related to how data can be analysed and interpreted to better understand it. But to me this falls under intelligence gathering rather than digital forensics. It fits into a much broader class of research (data analysis) which exists separately and independently.

Instead, at the heart of most digital forensics research you'll invariably find a data collection technique that's designed to uncover unexpected data. Data that the user never intended to persist or become accessible. As others have noted, this goal is diametrically opposed to the central goal of security, which is to offer a strict decision over what access is granted and to whom (where access can apply to not just data but also actions). Presumably, a tightly configured and accurately implemented security policy would prevent any effective digital forensics techniques from being used.

As a consequence, much digital forensics research focusses on bypassing security measures, making use of unanticipated data leaks or amalgamating data that had hitherto been considered benign. As soon as these techniques have been identified, a good security process should provide a countermeasure.

Certainly this offers a lucrative seam of challenges to undertake research around. However, each is just the exploitation of a transient mistake, framed from a perspective of intent. Consequently, when I read about digital forensics research I always struggle to understand the enduring principles which have been uncovered by it.

In contrast, the enduring principles of security research are clear. The aim there is to provide control: the ability to allow or disallow access to digital functionality or information based on a stated security policy. The security policy might change, and so the controls and feedback given to the user might also change, but good security research accommodates this without diverging from this clearly defined goal.

No doubt security doesn't always work like this and there are many challenges to achieving it. Security policies must be suitably complete, definable and understood by the user to achieve the intended results. There must be methods for applying the policy which ensure that the model (policy) and design (controls) align. Finally, the implementation must be correct, so that it - ideally verifiably - matches the requirements.

There will always be difficulties that arise in achieving this, but there is no reason why the methods developed today, which fulfil these objectives within a particular context, shouldn't be as applicable in the future as they are now. I'll grant that the completeness part may be an unachievable aspiration. But this doesn't make the steps towards it any less valid.

On the other hand, the goal of digital forensics is always moving: not forwards but sideways. So what are the underlying principles of digital forensics that an academic research discipline can uncover? How will digital forensics survive as a research area in the future, other than through the drive for practical outcomes? What area is there left for digital forensics to inhabit, once the security problem has been solved?

31 Dec 2013 : Music in the Air #
After wrangling for days with all of the other services to try to get them set up properly on our new home server (mostly the print and folder shares), setting up media streaming has been a breath of fresh air. A quick install of MiniDLNA from the repositories and some minor tweaking of the configuration file, and we can now access music and video from anywhere in the house. Particularly nice is the fact we can upload music via ownCloud and immediately access it direct from the TV. It's all very impressive, for negligible effort on my part (which is just how I like it)!
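For anyone wanting to replicate the setup, the "minor tweaking" amounted to little more than pointing MiniDLNA at the right folders. The tweaks were along these lines; the paths and name here are examples, not my actual layout:

```conf
# /etc/minidlna.conf (illustrative paths)
friendly_name=Constantia
media_dir=A,/srv/owncloud/data/music
media_dir=V,/srv/media/video
# Pick up newly uploaded files without restarting the server
inotify=yes
```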
28 Dec 2013 : Constantia Mk II Goes Live #
After over five years running as my main server resource, the time has finally arrived to retire my mini Koolu server, called Constantia. The last few days have been spent transferring its contents over to a new server, ready to take on the same role. The switch has been necessitated by the ageing hardware of the Koolu device. While it's still running beautifully, the last Ubuntu release to support the hardware dropped out of its support period earlier this year.

The new hardware is an Aleutia T1 device. With its fanless chassis, low (10W) power consumption, tiny (20cm × 18cm) footprint and supported hardware it makes an ideal successor to the Koolu, as you can see below (Koolu on the left, Aleutia on the right).

Aleutia build the devices for projects such as solar classrooms in rural Africa, but they were also very happy to supply a single machine, even going to the trouble of preloading Ubuntu with a bespoke configuration.

I've been working with it for a couple of days now, and first impressions are good. There's a big performance improvement (noticeable even when accessing basic server tasks over the LAN). The more recent Ubuntu support means a host of new useful features, and so far the new server has picked up the following roles:

  • DNS server.
  • Apache SSL/TLS web server.
  • Subversion repository.
  • SMB shared folders.
  • Shared print server.
  • OwnCloud cloud storage and services.
  • Trac project management.
  • OpenVPN secure remote access.
  • DLNA media streaming.

Most of these were transferred over seamlessly; for example clients see the Subversion repository as just a continuation of what was there before. I'm looking forward to the improved performance, increased functionality and a more robust server to run the network for the next five years or more! Constantia Mk II can be found at

23 Nov 2013 : A Very Exciting Day #
Today is very exciting. I hear you ask: is it because of the Liverpool Derby? The Day of the Doctor? The Xbox One release? 1D Day?*

No. Today is when I get to try out my new server, which will be replacing Constantia and which will basically be running my entire life. For the last five years this has been very ably managed by a Koolu box (actually an FIC built Ion A603 with an AMD Geode LX processor) running Ubuntu 8.04. It's served beautifully all this time and never let me down. Sadly, Ubuntu 8.04 drifted out of its LTS support cycle earlier this year and the hardware combination isn't usable with newer versions of Ubuntu. It's taken me ages to choose a worthy successor given my demanding requirements (very small, passively cooled, low power, silent, good Linux and software compatibility, etc.). Finally I settled on an Aleutia T1 Fanless PC.

Hence my excitement. It's not the highest specced device in the world, but it runs at 10 Watts, is fanless, with supported chipsets. It arrived yesterday and I've not yet even turned it on. Actually getting it to the stage where it can replace my existing server wholesale is going to take a lot of configuration and data transfer between the two, but that'll all be part of the fun challenge.

In my small world, this is a big event, which could very well end in disaster. If this is my last ever post, you'll know why.

* (The Liverpool what? A little. Waiting for SteamBoxen. Please save me!)

27 Jul 2013 : Raiding Revisited #

Over the years I've collected a lot of screenshots of the various games I've played. Still, the games that have captured the essence of adventure and exploration most consistently for me over a long period of time are those from the Tomb Raider series.

The thing they've consistently managed to get right throughout the series is the sense of scale needed to pull the adventure forwards. Surprisingly evocative vistas and large internal cavernous rooms (captured using clever cinematic long-shots) are balanced against intricate mazes with hidden alcoves. The large scale of the vistas offers the promise of future adventure; the claustrophobic corridors achieve the sense of exploration.

On top of this, there have even been some beautiful weather effects (contrast the atmospheric storm at Dr Willard's Scottish castle against the bright burning sunlight of the Coastal Ruins in Alexandria).

The Tomb Raider Reboot didn't disappoint. To celebrate this (it's a small, private celebration to which only me and the Internet have been invited) below are a selection of some of the more powerful screenshots captured during my playthrough of the game.

7 Jul 2013 : Tombs: Raided #

No one other than me will care about this, but I've finally completed the full complement of Tomb Raider games. It's been a long slog, over 10 years in the making. It doesn't help that they continue to make things harder by releasing new games every so often.

Perhaps surprisingly, but fittingly, the last game that I managed to complete wasn't the latest Tomb Raider reboot, but instead was Unfinished Business, where Lara returns to the Atlantean Hive from Tomb Raider 1. To be fair, I'd already completed this, but had taken the shortcut to skip the Atlantean Stronghold level. I've now done it properly.

Although there are lots of Tomb Raider games I've not played, most of them are mobile, Gameboy or Xbox exclusives which I don't imagine I'll ever get to have access to. I like to think of them as not being canon! Here's the full list of conquered games.

  • Tomb Raider.
  • Unfinished Business and Shadow of the Cat.
  • Tomb Raider II: Starring Lara Croft.
  • Tomb Raider III: Adventures of Lara Croft.
  • The Golden Mask.
  • Tomb Raider: The Last Revelation.
  • Tomb Raider: The Lost Artefact.
  • Tomb Raider Chronicles.
  • Tomb Raider: The Angel of Darkness.
  • Tomb Raider Legend.
  • Tomb Raider Anniversary.
  • Tomb Raider Underworld.
  • Lara Croft and the Guardian of Light.
  • Tomb Raider (reboot).

From all of these, the standout levels are the Venice Level from Tomb Raider II, and (ironically, given the bad reviews) the Louvre level from Angel of Darkness. I loved leaping around those roofs. The latest Tomb Raider was a great game and worked really well as a fresh approach. Still, Edge had it spot on when they said it was a reversal of the formula: from precise platforming and loose shooting to loose platforming and precise shooting. I'd rather have precise platforming and no shooting myself. In spite of this it was a great game and it would be a lie to say I didn't enjoy it a lot.

Thankfully Tomb Raider is a bit like Doctor Who. There's more than enough non-canon material to fill a lifetime, so I have absolutely no plans to stop here. Even the original block-based adventures have their place in modern gaming as rare examples of games worth playing on a laptop without the need for a mouse (it's a dubious accolade I admit). With a bit of luck they'll continue to release great games in the future.

Below are a few more images taken from my final foray into the original world of Lara Croft: Unfinished Business.

28 Jun 2013 : Bitcash #
A while ago I traded my first Bitcoins, and now the purchased product has arrived, all the way from Switzerland. What did I buy with my Bitcoins? Well, a Bitcoin of course! Except this one is a Casascius physical Bitcoin. It’s an interesting idea: create a physical coin that contains the private key to access an "actual" (virtual!) Bitcoin. The private key is printed on the coin under an opaque tamper-proof cover, so that anyone can easily confirm the coin still holds its value by checking the seal. Consequently it can be passed between people like a normal coin. With a real coin if you want the government to make good on its promise to pay the bearer you’re out of luck. With this coin, to redeem the amount just pull off the cover to reveal the key. In practice you’d never want to do this, and the virtual Bitcoin you’d get wouldn’t necessarily be worth any more (or less) than the government’s promise, but it’s still a neat idea.

Casascius physical Bitcoins

3 May 2013 : Finished Business #
After destroying the Scion and fighting off Natla I thought my work would be done. Not so. There were many other mysteries to solve, and given my late arrival, lots of adventures to pursue in a largely random order. Finally, after completing all of the other adventures, it was time to return to the Atlantean Stronghold and destroy the mutants created by Natla once and for all. To that end, I returned with the intent of ultimately destroying the remaining inner Hive.

From the top of the structure overlooking where the hive pyramid erupted from the rock I could see the far cliffs ahead, but no way to reach them. On the ground below, in the far distance, I perceived two golden doors, tempting me forwards as the best means of progress. Alas, despite my best investigatory efforts, there was no way to open the doors and although I knew I needed to ascend to reach my goal, the only way forwards was now down into the hive pyramid itself.

Working my way through the pyramid, I dispatched various terrestrial and winged mutants en route, including those showering me with deadly darts and explosive projectiles. Luckily many of them suffered from idiosyncratic perception difficulties – no doubt a result of the mutation process – that made them more likely to follow my shadow than me. Fooling them using acrobatic prowess – dangling from ledges and leaping on top of blocks – and showering them with persistent pistol fire while dodging their own deadly projectiles saw me prevail. Yet this was no easy fight through the chambers and passageways.

As I continued onwards the way became more treacherous still, with lava flows cutting off my path, dangerous precipices to be scaled over lethal spikes and watery pools containing hidden switches that bore the secrets to opening the passageways ahead. Oftentimes I saw glimpses of future dangers, obliquely viewable through the many impenetrable glass and tissue structures of which the pyramid was built. But these ominous forewarnings only drove me harder to complete my journey.

Eventually, working down and then higher again into the rocks above, I found myself overlooking the same pyramid again, but now from the opposite side, from an angle where my goal was visible. Leaping into the unknown, I dived through the darkened hole in the pyramid with only serendipity and an unwavering belief in the existence of a path forwards to trust in. My faith was rewarded, with the pool below deep enough to buffer the impact of my fall. I climbed out to find myself in the inner hive of the mutants, and able to finally finish what had been started all those years ago in the Peruvian Andes searching for the Scion.

28 Apr 2013 : Bitcoins #
Today I traded my very first Bitcoins. It's possibly the worst time to be buying them, given the amazing amount of publicity they've been getting recently (and the upsurge in their value that's resulted). Still, today I'm buying them for a reason rather than as an investment, so I've convinced myself that it's okay. Why the rush? I just discovered that Casascius is no longer selling physical Bitcoins to individuals. Since I'm keen to have one, the high price is just something I have to suck up. I'm looking forward to getting hold of a physical coin (even if it epitomises everything Bitcoins aren't!), and it's exciting to actually own some of the currency. The Web right now manages to make Bitcoins look a lot more daunting than they actually are, which is quite an accomplishment.
9 Dec 2012 : PiBot2 parts #
I've been really quite shocked (in a good way) at the interest that PiBot has generated. Apparently the world needs more Raspberry flavoured Lego robots, so to help anyone aspiring to own their own robot army, here's the list of parts that was used for PiBot2.

Pretty much everything came from Amazon, so most of the links are to the UK Amazon site. Apologies if you're from outside the UK or are currently boycotting Amazon for their dubious tax practices, but all of these should be readily available from lots of other places too.

The table is split into two parts. The first part covers just those bits and pieces that you're likely to need to get a Raspberry Pi up and running. If you've already got a Raspberry Pi, you probably already have all of these things. The second part covers the materials needed to get the robot working.

Parts needed for the Pi

  • Raspberry Pi: The computer. £25.92
  • Logitech K340 Wireless Keyboard: Keyboard works well with Pi. £34.95
  • Logitech M505 Wireless Mouse: Mouse works well with Pi. The Logitech unifying receiver takes one USB port for both keyboard and mouse. £30.98
  • HDMI cable: To connect the Pi to a screen. £1.03
  • Micro USB Mains Charger: To power the Pi when it's not attached to the battery. £2.75
  • 16GB Micro SDHC card: To run the OS from. £8.22

Parts needed for the Robot

  • LEGO MINDSTORMS NXT 2.0: The actual robot. This includes the motors and ultrasonic sensor needed for control. £234.99
  • TeckNet iEP387 7000mAh 2.1Amp Output USB Battery: For powering the Pi when it's on the move. I tried cheaper less powerful chargers (including AA batteries), but they couldn't provide enough juice to keep the Pi running. £23.97
  • USB 2.0 A/B Cable, 1m: For connecting the Pi to the Mindstorm control Brick. £1.69
  • USB A to Micro B Cable, 6 inch: For connecting the Pi to the battery. £2.13

The total bill for this lot was around £370. However, £235 of this is the LEGO Mindstorm and £65 is for the wireless keyboard and mouse, so if you've already got these I'd say the rest is pretty reasonable. I had to try a number of wireless keyboards before finding one which didn't cause the Raspberry Pi to reset randomly though. If anyone knows of a cheaper keyboard/mouse combo that works well with the Pi, let me know and I can alter the list.

If you're building a PiBot, I hope this helps to get things underway. I'd be really interested to know how other people get on; it'd be fantastic to feature some other PiBot designs on the site!

9 Aug 2012 : PiBot2 #
After a frantic buying spree on Amazon and some tense anticipation each day with the post, PiBot has now been augmented (Deus Ex style) with better hardware, neater design and improved software. Meet PiBot2! The upgrades include a much larger (7000 mAh) battery; a USB connector that doesn't cut power when riding over bumps; a mere 1m long cable (compared to the previous 5m version); and auto-roaming code that will explore the room without intervention (mostly!).

The cable is still a good 80cm too long, and the exploration code is simple to say the least, but it's one step further on. Using PyGame for the code means proper asynchronous keyboard input, so that human-control and auto-exploration can be switched between seamlessly. The next part of the plan is to draw objects in the PyGame window as PiBot senses them. I don't expect this to work very well, but I plan to have fun trying it!

Below are a few screenshots of the new PiBot, along with the code in its latest state.

#!/usr/bin/env python

import pygame
import sys
from pygame.locals import *
import nxt
import nxt.locator
from nxt.sensor import *
from nxt.motor import *
from time import sleep

# Handle pygame events: w/s/a/d drive the robot, f moves the head,
# r toggles auto-explore, q quits
def input(events, state):
    for event in events:
        if event.type == QUIT:
            state = 0
        if event.type == KEYDOWN:
            if event.key == K_q:
                print "q"
                state = 0
            elif event.key == K_w:
                print "Forwards"
                both.turn(100, 360, False)
            elif event.key == K_s:
                print "Backwards"
                both.turn(-100, 360, False)
            elif event.key == K_a:
                print "Left"
                leftboth.turn(100, 90, False)
            elif event.key == K_d:
                print "Right"
                rightboth.turn(100, 90, False)
            elif event.key == K_f:
                print "Head"
                head.turn(30, 45, False)
            elif event.key == K_r:
                state = explore(state)

    return state

# Toggle between command mode (state 1) and auto-explore mode (state 2)
def explore(state):
    if state == 1:
        state = 2
        print "Explore"
    elif state == 2:
        state = 1
        print "Command"
    return state

# Back up and turn when the ultrasonic sensor detects a nearby obstacle
def autoroll():
    if Ultrasonic(brick, PORT_2).get_sample() < 20:
        both.turn(-100, 360, False)
        leftboth.turn(100, 360, False)

def update(state):
    # In auto-explore mode, let the robot roam on its own
    if state == 2:
        autoroll()
    return state

window = pygame.display.set_mode((400, 400))
fpsClock = pygame.time.Clock()

# Connect to the NXT brick over USB and set up the motors
brick = nxt.locator.find_one_brick()
left = Motor(brick, PORT_B)
right = Motor(brick, PORT_C)
both = nxt.SynchronizedMotors(left, right, 0)
leftboth = nxt.SynchronizedMotors(left, right, 100)
rightboth = nxt.SynchronizedMotors(right, left, 100)
head = Motor(brick, PORT_A)

state = 1
print "Running"
# Main loop: poll for input and update at a steady frame rate
while (state > 0):
    state = input(pygame.event.get(), state)
    state = update(state)
    fpsClock.tick(30)

print "Quit"

22 Jul 2012 : The PiBot Raspberry Pi NXT robot #

Inspired by the amazing things the Boreatton Scouts group are doing with their Raspberry Pis, as well as a conversation with David Lamb and Andrew Attwood – two colleagues of mine at LJMU – I thought it was about time I actually tried to use my Pi for something other than recompiling existing software. I'm not a hardware person. Not at all. But I do have a Lego Mindstorms NXT robot which has always had far more potential than I've ever had the energy to extract from it.

But after reading about how it's possible to control the NXT brick with Python using nxt-python, and with David pointing out how manifestly great it would be to get the first year undergraduates learning programming using it, I couldn't resist giving it a go.

It turned out to be surprisingly easy. The hard parts? First was getting the Pi to discover the NXT brick over USB. The instructions for this aren't too great, but in fact it turned out to be as simple as copying the NXT MAC address into the PyUSB configuration file. Second was getting the Pi, battery pack and 5 metres (yes, you read that right) of USB lead to balance on top of the robot!

The PiBot components

I'm not exactly sure why I bought such a huge lead given I knew it would all end up on top of the robot, but that's planning for you!

The result really is as crazy and great as I'd hoped. I wrote a 50 line python programme to read key presses and drive the robot appropriately – right, left, forward and back – and nxt-python does all of the hard work. The keyboard is wireless, attached to the Raspberry Pi using a micro dongle. The USB lead connects the Pi with the NXT brick. The Raspberry Pi is powered by a USB phone charger. The monitor lead and ethernet aren't needed when the machine's running, which means the robot/pi combination is entirely self-contained and can be controlled using the wireless keyboard.

It was also possible to read data from the sensors, allowing the robot to drive itself entirely autonomously around the room avoiding objects and generally exploring. The next step is to collect more input about the distance it's travelled so that it can be mapped on to a virtual room on the Raspberry Pi and build a picture of the world.
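The avoidance behaviour can be sketched as a small decision function, kept separate from the motor calls so the logic can be followed (and checked) without a robot attached. This is just an illustrative sketch: the function name and action labels are hypothetical, and the 20cm threshold is an assumption, with distances taken to be in centimetres as returned by an ultrasonic sensor's get_sample() call in nxt-python.

```python
# Hypothetical sketch of the obstacle-avoidance decision, decoupled from
# the hardware. Distances are assumed to be in centimetres, as returned
# by nxt-python's ultrasonic get_sample(); names here are illustrative.
def avoidance_action(distance_cm, threshold_cm=20):
    """Decide what the robot should do given an ultrasonic reading."""
    if distance_cm < threshold_cm:
        # An obstacle is too close: back away, then spin to face elsewhere.
        return ['reverse', 'spin']
    # Nothing nearby: carry on exploring.
    return ['forward']
```

On the robot itself the reading would come from something like Ultrasonic(brick, PORT_2).get_sample(), with each returned action mapped onto the appropriate Motor.turn() call.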

Here's a video of Joanna controlling the Heath-Robinson contraption as well as some photos showing all of the different parts balanced on top of one another.

The PiBot components

The wonderful thing about all of this is that although it requires a huge amount of effort and insight to get each of the individual pieces working, none of the effort was mine. Pulling the pieces together is really straightforward, building on so much clever work by so many people. It's got to the stage where you can grab a phone charger, some Lego, a £35 PC the size of a credit card, a wireless keyboard, an entirely open source software stack, 5m of USB cable and a Sunday afternoon and end up with a complete robot you can programme directly in Python. Brilliant.

The PiBot components

#!/usr/bin/env python

import nxt
import sys
import tty, termios
import nxt.locator
from nxt.sensor import *
from nxt.motor import *

brick = nxt.locator.find_one_brick()
left = nxt.Motor(brick, PORT_B)
right = nxt.Motor(brick, PORT_C)
both = nxt.SynchronizedMotors(left, right, 0)
leftboth = nxt.SynchronizedMotors(left, right, 100)
rightboth = nxt.SynchronizedMotors(right, left, 100)

def getch():
	fd = sys.stdin.fileno()
	old_settings = termios.tcgetattr(fd)
	try:
		# put the terminal into raw mode so a single key press
		# can be read without waiting for Enter
		tty.setraw(fd)
		ch = sys.stdin.read(1)
	finally:
		# always restore the original terminal settings
		termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
	return ch

ch = ' '
print "Ready"
while ch != 'q':
	ch = getch()

	if ch == 'w':
		print "Forwards"
		both.turn(100, 360, False)
	elif ch == 's':
		print "Backwards"
		both.turn(-100, 360, False)
	elif ch == 'a':
		print "Left"
		leftboth.turn(100, 90, False)
	elif ch == 'd':
		print "Right"
		rightboth.turn(100, 90, False)

print "Finished"

19 Feb 2012 : The World Wild West #
The Web used to be like the Wild West: lawless and anarchic, yet at the same time inspirational and free. But frontiers get pushed back, and beasts get tamed. Today the Web is a far 'safer' place, with much of the control ceded to governments and corporations. One of the happier casualties of this appears to be spam, which, through a combination of law and technology, is now a far less aggressive problem than it once was.
Since the start of this site, I've always used a public email address that was separate from the private email address I gave to people personally. The reason was to reduce spam, and also because companies couldn't be trusted to use my email address responsibly. Today the amount of spam I receive, even on the public address, is bearable and companies are much more likely to actually comply with the data protection laws preventing distribution of contact details. As a result, I've decided to finally move over to using just a single, simple, email address. The plan is to make my life easier and have fewer addresses to deal with. Whenever I write out my name on official forms it's always hard to fit it into the space provided. Finally I can now avoid having the same problem with my email address as well!
12 Feb 2012 : Celtic Knots: moving from 2D to 3D #
A couple more prints have arrived from Shapeways and once again I'm really pleased with the results. The first was a bit of an experimental print for a number of reasons. It's another 3D Celtic knot. First, I tried it with much thinner threads, right down to the minimum of 0.7mm thickness recommended by Shapeways. I get the feeling this recommendation is intended for walls, so I'd feared the threads wouldn't be strong enough to hold together. In fact, the final result is perfectly sturdy and the threads seem quite robust. Second, I tried the polished version of the "white strong and flexible" material (which is apparently a kind of nylon). The polishing process involves shaking the model with lots of tiny polishing balls, so again I'd feared this might affect the model's strength. And again, it seems my fears were unfounded. Finally, I generated the model to have gaps where the threads cross over, the hope being it would be printed in four separate pieces. Unfortunately I apparently didn't give enough clearance, and some of the threads fuse at these intersections. Nonetheless, some of them are still loose, and the result is really great. I may try it again with a bit more of a gap next time though.

The second knot is a proper 2D Celtic knot. The idea is that this is generated from the same seed as the 3D knot, making it in some sense the 'same' knot. That's not really true, but until I figure out what's really meant by 'the same', this is as close as I can think of. I was pleased to find that, since they're both printed with the same dimensions, resting the 2D version on one of the faces of the 3D knot makes them align nicely, so they really look like one is an extruded version of the other.
Once again, printing out these knots has provided some really nice results, leaving the biggest problem the question of what to print next.
4 Feb 2012 : From theory to practice #
Yesterday I received another print from Shapeways. It's my first metal creation using a clever printing process that takes a 3D model as input and creates a completely formed bronze object as output.
Perhaps unsurprisingly it's another 3D Celtic knot. Once again, in spite of the dubious model I provided, the result is just brilliant. It's a real chunk of metal that looks like it's been hewn and polished into a complex shape through hours of craftsmanship. It did take hours of work of course, but in reality it was largely done using completely routine machine production techniques. Here's a shot of the result.

We all have dreams about the things we want to do when we're grown up, like becoming pop stars, train drivers, footballers or whatever. As we grow older we find we have to shed some of these hopes. There comes a point when the realisation sets in that perhaps there are people better suited to fighting dragons. Having spent practically all my life working with either maths or computers, I'd pretty much given up hope of ever doing something that actually produced physical results. It sounds like a strange dream, but the prospect of being able to create something tangible has always seemed exciting.
It's surprising then to find a path of entirely abstract ideas can lead so naturally into a process of creating physical constructs. This is the solution 3D printing offers. It allows people to turn abstract ideas into physical form, without ever having to leave the comfort of a computer screen. No need to get your hands dirty.
Of course, the physical infrastructure needed to get to this point is phenomenal (electricity, Internet, banking, etc.). Someone had to build it and huge numbers of people are still needed to maintain it. But as far as I'm concerned, sitting behind a computer screen, it's still an utterly seamless and physically effortless process. Only thought required.
You can download the 3D model for this, or buy a physical artefact, direct from the Shapeways website.
6 Sep 2011 : 3D printing #
In case you're interested to know more about my recent 3D printing experience, I've put together some more words and pictures. Feel free to take a look. Alternatively, there's also a link in case you want to print a copy of the Celtic knot yourself. That's right, you really can print your own. Still doesn't seem right.
3 Sep 2011 : 3D printing #
Today I received my first ever 3D print. It's a 3D Celtic Knot which was generated by some code I put together while Joanna and I were in Tuscany a couple of weeks back. The model was sent off to a company called Shapeways in the Netherlands, and today I received the final printed object. The technology that allows you to print 3D objects is just phenomenal, both in terms of how clever it is, and the astonishing potential it promises. It really does provide the opportunity to create just about anything, turning the wildest imaginings into reality.
I was pretty nervous getting it out of its box as I really wasn't sure how well it would come out, but in fact I'm astonished at how clean the printing is and how sturdy the object is. I hope to put together the full story of my 3D printing experience, from theory to reality, tomorrow.
One of the things I love about the idea of 3D printing, is that it really seems as close as we can get right now to the Star Trek replicator way of doing things. That might seem like an irrelevance - just a nerdy reaction - but I see it as a real vision of how things will change in the future. I find the shift from small-scale-bespoke, through mass-production, to mass-bespoke just a little exhilarating to be lucky enough to experience.
3D printed celtic knot
31 Aug 2011 : Syncing my Google Calendar #
At work I use Outlook, since the University uses MS Exchange and the nature of collaboration tools is that you have to use what other people are using. However, for some time now I've also been syncing this with a Google Calendar so that I can also make some of the details available on this site. Google provides a free syncing tool, but this had various limitations, such as only being able to sync one calendar, making it no good for what I wanted. The solution was to use a piece of software called SyncMyCal. For the record, this is a great piece of software that does a straightforward task very well. Once it's properly configured, it's the kind of software that works best if you don't notice it again, which is exactly how things were until recently. It was well worth the asking price.
So, this worked great for ages, until half a year or so ago the University started upgrading the Exchange servers, and I upgraded my machine to Outlook 2010. SyncMyCal was only compatible with Outlook 2003.
My solution at the time was to continue running Outlook 2003 with SyncMyCal on a separate machine. This kind of worked, but had problems. The machine would get turned off and I wouldn't notice, or it would reboot after an automatic update leaving Outlook asleep on the hard drive. My Google calendar was only updated intermittently. Nobody really cared, except for me, since it increased the disorder in my world and kept me locked in to running an old machine just for the sake of syncing.
Until yesterday that is. On the offchance I checked the SyncMyCal site yesterday and found they'd finally released an Outlook 2010 version of their tool. Yay!
The result is that now my calendars are syncing normally, the version on this site is telling the truth, rather than some partial version of it, and the world - for me at least - has become a little more ordered!
3 Oct 2010 : Guardian of Light #
Guardian of Light
It's been a long time since I wrote anything in this blog, but I guess some things just warrant waking up from a slumber. What's the big news? Well, I've just completed the latest Tomb Raider game. Actually, scratch that, as it's not a Tomb Raider game, it's the latest Lara Croft game: The Guardian of Light. It's a lot different from previous instalments in the series, in that rather than being third-person following Lara, it's third-person bird's-eye-view 'isometric'. I don't think it's true isometric because they left some perspective in, but you get the idea. It was an enjoyable game (I finished it pretty quickly, for me, which says something), but compared to the previous ones I found it easier to forget that I was playing as Lara Croft. The puzzles were good, and ironically I thought the combat was much better than in the 3D-view games. It's just a shame that there wasn't more dialogue and story elements to keep the game grounded in the Tomb Raider world.
Guardian of Light
Anyway, I'm glad I played it, and that the series is continuing with the same energy. I had to finish it after all, just to keep up my Tomb Raider completion rate. Eventually I still plan to go back and complete Unfinished Business (I only have The Hive to do, but it really is still unfinished business); I console myself with the fact that this was really an add-on for the original game.
30 Sep 2010 : The seasons change again #
The trees decided that Autumn has begun today!
22 Sep 2008 : Sparky the Dragon #
Joanna wakes up Sparky for the first time
Joanna's been asking to have a baby red dragon for a present for every birthday and Christmas for several years now. With the release of the Pleo, it finally looked like it might be possible to fulfil her wish. The result of my attempt to do this is Sparky the Dragon.
15 Sep 2008 : Raiding the last tomb... so far #
Tomb Raider 1
Finally, after what feels like an æon, I've managed to complete Tomb Raider 1. It feels like it's taken some Herculean effort (not that I'd know!) after nearly 12 years, and for me is something to celebrate. To most people it won't sound like a big deal at all, but the real reason it feels like such an achievement is that this actually means I've now finished all of the Tomb Raider games. All eight of them. I can finally say that I've scaled the Tomb Raider mountain.
Tomb Raider 1, looking back at the carnage
It seems kind of odd to have finished the first game last, but when it was released I wasn't really interested in playing it (even though I loved computer games and remember being mesmerised watching my house mate Alex complete it at the time). Rather strangely I didn't start playing the games until thoroughly enjoying playing through Angel of Darkness. Apparently I was the only person who did enjoy it, but it got me hooked and I moved on to the others.
Tomb Raider 1, the end awaits
After completing Legend I didn't think I'd ever play the first game (I thought it'd feel too much like playing it twice) and so would never finish them all, but eventually that human-collector-instinct got the better of me and I had to give it a go. It was well worth it, in spite of its venerable age. So, I'm glad I can revel in the fact I've completed them all, at least until they release Underworld, which I'm nonetheless looking forward to. In the meantime, I've not yet played all of the extra Gold levels, so there's still work to be done. And for the record, maybe Angel of Darkness was my favourite, although Chronicles was the best of the vintage games and I loved the Venice level in Tomb Raider 2 as well.
11 Oct 2007 : Autumn #
Autumn at the Commercial Road to Vauxhall Road junction
Every time the season changes my journey to work becomes far more enjoyable. It seems to happen so quickly. One day the trees are green and suffering the end-of-summer storms, the next day the air is still and crisp, and the world has turned a golden orange colour. Yesterday evening on my journey home there was a deep mist. The halo of the lights and the glowing golden trees made things feel just a bit magical.
I've decided that Liverpool is a beautiful place at this time of year. I'm sure this is true of everywhere else too, but it's only recently that I remember noticing such vibrant changes in the seasons. I was wondering why this might be, and then I realised. When we lived in Pall Mall there were basically no plants or trees on my way to work. A total absence of nature. Thinking about it, that's really strange, and it makes me realise how important it is to live where there is more than just concrete. It's also true that the relative harshness of the climate here (being a northern port city), compared to other places I've lived, is quite bracing. And also just a little annoying when you have to cycle somewhere!
12 Sep 2007 : Deadly Shadows #
Deadly Shadows
Kirkdale industry at dusk
Industrial silhouette
I've just finished the game Thief - Deadly Shadows. It's a great game, full of dark atmosphere. Like many of the games I enjoy most, it's always the atmosphere that makes the game immersive and enjoyable. What I particularly enjoy about Thief is that it reminds me of wandering around the docks in Liverpool at dusk. With all of the industrial architecture and crumbling infrastructure it's a scary place, but with the continuously working industry -- container ships arriving all through the night -- it also feels alive with a kind of eternal energy. Whilst I enjoy visiting the docks I basically get too fearful to stay there when it gets really dark. Entirely psychological fear I'm sure. That's the beauty of Thief. You can do things you'd never dream of doing in real life, with some of the same fear, but without the consequences. The pictures are of the game and a couple of photos of Liverpool Docks I took trying to overcome my fear one summer evening!
21 Apr 2007 : With Every HeartBeat #
Kleerup and Robyn
I felt rubbish this morning, but my day was saved by a beautiful piece of music. So much of the time, music seems to be in the background. It's so rare that you hear a song that's so powerful that you can't help but give in to its effect on your emotions. There's a track called "Last Night a DJ Saved My Life," and every so often, I can understand it. Ironically, this track doesn't hold much power, but it has its truth nonetheless.
The music I heard today was called "With Every HeartBeat" by Kleerup and Robyn. It stopped me in my tracks and I swear I stopped breathing for the 4 minutes the track played.
Later I listened to "Be Mine," also by Robyn. It's another beautiful, if equally tragic, track. When you're not feeling too great they're utterly self indulgent. And beautiful. And somehow helpful.
14 Apr 2007 : A walk in the park #
Kirkdale skyline
I went for a walk around the neighbourhood this evening. It's amazing how warm it is. The air is completely still and although it's not been sunny all day, the air is hot but not humid. It's a very unusual combination for around here at this time of year. It's especially surprising that it's so warm in the evening. It's now nearly 10pm and I have the windows fully open as I sit in my study. The temperature in here is the same as it is outside and it feels just perfect. Like a mediterranean evening.
So I went for a walk because it's wonderfully warm and I'd not yet been outside today, but mostly because I find the industrial area that we live in to be utterly mesmerising at night. The huge great industrial storage drums and buildings. They sit, looking both alive and silent at night like sleeping giants. Some of them have glistening lights whereas others are just dark looming silhouettes against the night sky. We're near the docks, which is an important part of the magic, because it feels like a space port or space station. Technological, but also grimy and real.
I didn't feel safe walking around the neighbourhood. There were few people around and I was on my own. There was some noise because the only people around at 9:30pm on a Saturday evening in this part of town are kids. Kids are intimidating and I walked past a couple of gangs of kids, which felt a bit uncomfortable. But they didn't actually cause any trouble at all. Just walked straight past.
We don't live in what could be called a nice part of town, although I find it okay. So I wonder whether it really is a dangerous or scary place to be. I'm sure almost all of the fear I experienced was self inflicted. I suppose by definition all fear is self inflicted, but what I mean is that it's almost certainly entirely unnecessary. But in spite of reminding myself of this I couldn't get it out of my mind. I wonder whether there really is something to fear? Lots of people say that it's a modern phenomenon, but I'm sure walking around industrial areas has always been scary. "Every day do something that scares you." This is so important. It wasn't the reason for my walk, and I hope over the next few months the weather is such that I can do it more often. Part of the reason for doing scary things is realising that there's no need for the fear. I'm not entirely convinced just yet, but I'm glad I had the walk nonetheless.
Another Kirkdale skyline
6 Mar 2007 : Spring #
Daffodils
A couple of days ago, on the 27th February in fact (the day before St. David's day) I suddenly noticed hundreds of daffodils had appeared in the grassy patches that line my journey in to work. It felt like they'd just appeared overnight. I don't know if this is a reflection of what actually happened, or of my not noticing them earlier, but it was a very uplifting realisation. At the time, I put it down to my enjoyment of Spring as a season. After all, my birthday tends to fall in Spring so it's bound to enthuse me a bit! On further thought though, I realised that I feel uplifted at every change in the season. The onset of Spring, Summer, Autumn and Winter are all exciting times. Perhaps it's the possibility of renewal, the chance for a change? Whatever it is, I'm hoping the optimism of the new season brings positive effects.
6 Mar 2007 : The secret of success #
"Success begins at you believing you can do it. But it ends where that belief begins to blind you." (As said by l3v1 in a comment on OSNews).
6 Feb 2007 : Pigeon AI #
It wasn't until after writing up that last post about birds in computer games that I discovered - whilst searching for suitable screenshots on the Web - that the birds in Broken Sword Angel of Death were actually intended as a clue in the game. Not quite as incidental as I'd imagined! It just goes to show how much thought has often gone into such things.
6 Feb 2007 : Free as a bird #
Bird
Over the years, I have to admit, I've played a fair few computer games. A common theme that runs through many games is that of 'struggle'. You take on the role of an - often reluctant - hero, struggling against some alien or mystical aggressor so as to fulfil your destiny and, coincidentally, complete the game.
Another common theme that I've noticed is that many computer games like to include incidental aesthetic features. Bushes that move with the wind, birds that fly away in reaction to some movement, the changing light as the sun moves through the sky. These all add to the atmosphere of the game, immersing the player and increasing the feeling that the environment is real. It's touches like these that I find particularly beautiful in computer games; things that aren't necessary, but which nonetheless add depth. Half-life 2, Broken Sword Angel of Death, Ico, Shadow of the Colossus, Tomb Raider Legend and Prince of Persia the Two Thrones are games with these kinds of incidental effects that immediately spring to mind.
Birds from Shadow of the Colossus
However, whenever I notice such things - usually as a result of me disturbing some quietly perched birds - it also has another effect, serving to highlight the disparity between the tumultuous struggle inherent to the gameplay and the unchanging indifference of the world it inhabits.
Why are evil overlord alien races or mystical enemies never interested in subjugating or annihilating the bird population? Why are they only interested in the humans? Why is it that they are happy to share their world with the birds, but not with the humans? There might be all manner of pain and suffering, battling, fighting and enslaving going on - massive gun battles and destruction - yet the birds just seem to go about their business oblivious to the disaster going on around them. It's not just the birds either. It's pretty much all of the other animals: fish, lizards, insects and so on. All of nature in fact.
Okay, perhaps I'm reading too much into what is really just an incidental addition to a game. But I think it's an interesting metaphor for everyday life. It's easy to get caught up in the whirlwind of stress and work that can consume our daily lives, whilst at the same time the world goes on, oblivious and unaffected. The stress and work is often entirely of our own making.
You can extend this further to more serious matters too. I'm lucky to never have experienced wartime in any real sense (although Britain seems to have been at war with at least one country for as long as I can remember; it's testament to the aggressive nature of our democracy that this always seems to be happening somewhere else). But even during the most horrendous times, nature carries on, the weather changes and animals continue their lives indifferent to the human follies around them. It's somehow reassuring.
It may seem a little odd, but it's this that I'm reminded of when I disturb a flock of birds in a computer game.
3 Feb 2007 : The longest journey of all #
Athena
I've really fallen in love with small games (maybe "become addicted to" would be more honest!). Most games are so huge and cinematic that you can't really pick them up for a quick game and put them down again. Sometimes you want something with epic production values, but just at the moment I'm enjoying a game that's epic in a totally different way: The Odyssey: Winds of Athena is a small game based on Homer's epic poem. It's sort of puzzle based, but mostly it requires dexterity, creating currents in the water and wind with the mouse to blow Odysseus's boats to safety. At the same time as providing fun and frustrating gameplay, it also charts its way through the classic story, which is enjoyable in itself.
The Odyssey: Winds of Athena
It isn't expensive, and I for one reckon I'd spend a lot more overall buying several games of this size and price than on fewer of the more costly variety. This is certainly what's happened over the last couple of weeks (last weekend I bought Gumboy Crazy Adventures; another great game).
To be fair, it could be that the reason I find these kinds of games more appealing is because they tend to be 2D and puzzle oriented, with quirky rather than realistic game mechanics. This is the kind of game I grew up with before 3D games became the norm. Nevertheless, there's no denying that they're fun to play, and much better if you just want to have a quick break between doing other things, so I'm indebted to the Out of Eight PC Game Reviews site for introducing it to me.
30 Jan 2007 : Crazy Czech Adventures #
One of the big attractions of computer games for me is the brilliant use of graphics that they routinely incorporate, and I often think that they're one of the best ways (both as a medium and as a commercially viable proposition) to be able to create beautiful things.
Samorost 2
However, in spite of this it's not often that I am totally awestruck by the beauty and atmosphere of a game. There are only two recent games that I can think of that have had what I consider to be a quite profound beauty. The first is Samorost, or more accurately, Samorost 2 which I discovered first, created by Amanita Design. The second, which I only stumbled across over the weekend in spite of the fact that it's a good few months old, is called Gumboy Crazy Adventures, by Cinemax.
Gumboy Crazy Adventures
You'd not expect something with a title that includes the words gum and crazy to inspire beautiful imagery, but it has the same magical, whimsical, and fantastical atmosphere that I found so beautiful in Samorost. Although the graphics and sound - which form a major part of the beauty - are similar between the two games, the gameplay differs greatly between them. Luckily, they're also both great fun to play. Both Cinemax and Amanita Design are based in the Czech Republic. The combination of eerie sound and music, incomprehensible speech, and graphics that integrate real rustic objects is very unusual to me, and I wonder if it's something that's grown out of traditional East European art. Given how much I seem to enjoy it, this is something I probably ought to look into more deeply.
21 Dec 2006 : Too much of too little #
It's a strange phenomenon, finding yourself with nowhere to call home. This is the situation I find myself in this evening. This is not the same as having nowhere to live, the tragedy and unpleasantness of which the word 'strange' doesn't even begin to capture. On the contrary, it is ironically because - as of today - I technically have two flats to live in that I find I don't have anywhere to sleep. One flat is practically empty, save for a rather lonely bed that belongs to our landlord. The other flat is packed full of our possessions, all laboriously moved over the last few days, and all carefully concealed in easily transportable (but not easily accessible) boxes. So the choice is between an empty, cold flat where the few remaining contents are easily accessible but offer little in the way of comfort, or a congested, cold flat packed full of practically everything you could wish for, none of which can be effectively used.
I'm not complaining mind. The circumstance represents a small part of a much larger journey and is at any rate entirely of my own making. Even if none of this were the case, it would be churlish to claim that this is a bad situation, when there are so many deeper levels of being worse off that people suffer.
It is a strange journey nonetheless, both in circumstance and emotional effect. We've decided where we're going to stay tonight. There really was only one choice, even if it's not very practical. When you're on a journey, the only choice, after all, is to move forwards.
6 Nov 2006 : Corporate art that doesn't work #
Whenever I visit a company's website or watch a company presentation, it often strikes me how utterly irrelevant to the content the images used are. The pictures invariably portray exceptionally happy people in bright sunshine doing fun things. Sometimes, if a designer really exceeds expectations, there might be a stock image of someone using a computer for a technology site, say, but often they won't even have bothered to do this. I understand why happy people sell products, but it just saddens me that so little effort appears to have gone into finding relevant images. Maybe I just don't understand? Take for example the Sun JDK download site. It'll probably be gone by tomorrow, but take a look at the capture of the site that I just took. As I say, maybe I'm missing the point, but what's the connection between migrating projects between IDEs and some kids climbing a tree? I'm inclined to think it's even quite irresponsible. Climbing trees requires full attention. You wouldn't want to try migrating projects up a tree: it'd be dangerous! The sad thing is that it's got to the point where I hardly even notice the artwork anymore. It's just a fleeting wash of colour that passes through my consciousness. This is a real shame, because the pictures themselves are often very good and a lot of effort was probably put into them. What's more, I'm sure a designer could have some real fun working out some pithy connection between the picture and the content. Surely there must be some exciting and relevant pictures that could go with JBuilder migration? How hard can it be...
15 Aug 2006 : The Last of the Colossi #
Well, that's the last of the colossi dealt with. It's another great game, but I think the story is even more ambiguous than Ico's. There's definitely a link between the two games, and maybe that's the part that makes the most sense, but the meaning of the game -- if there is one -- is a little less clear. It might require a bit more thought tonight, and that's no bad thing. I've never played a game, and don't know of any either, that is anything like Shadow of the Colossus. This isn't true for Ico, although perhaps that's because all of the similar games follow rather than precede it. Nonetheless, it's impressive to find such a great and original game as Shadow of the Colossus.
13 Aug 2006 : Shadow of the Colossus #
Not a great deal of success doing more today than yesterday, it has to be said! I did manage to dispose of a further 9 colossi or so. I've now reached 13 of them, which means there should be only 3 more to go. Not quite an achievement, but it is a very good game. It'll be a shame when it's finished.
12 Aug 2006 : Saturday #
I didn't really achieve a great deal today. Got up relatively early. Helped Tom to transport his parents' budgie, started playing Shadow of the Colossus, listened to the radio a bit. But that really is about it. That's pretty bad really. I'll have to try to achieve a bit more tomorrow. Having said that, yesterday was quite busy (and we even got to have a nice Indian takeaway), so I did need the break. The real problem, though, is deciding exactly what I should really be doing. I have plenty of things that I want to do, but many of them involve starting up new projects, and I'm not sure I'm ready to start getting into something too deeply whilst I still have so many half-finished things to do. I should probably get down and finish a few things off properly.
11 Aug 2006 : Tidying up some important tasks (end of the week) #
I managed to meet with Bo today about forums and Content Management Systems for the WARP. It was a useful discussion. There've also been a couple of things that I needed to do that have been playing on my mind for a while now. I had to suggest some changes to the Chinacom Security Symposium programme, and the reorganisation was actually a surprisingly tricky thing to get right. I also had to write an important email to foster some collaboration with another organisation on some of the work that we're doing. I've finally got around to tackling these tasks, and both of them are now done. Quite a relief. It sounds really stupid, but things really begin to weigh on my mind after a bit. Of course I do have a number of other things that I still need to do -- this is a universal constant -- but these are manageable, and hopefully I will now feel a bit more free to get engrossed in some of the programming work that I've been needing (and hoping) to do recently.
11 Aug 2006 : Ico and Prince of Persia #
Just as a quick addendum to the previous entry, whilst playing Ico it surprised me how similar many of the game elements were to the first 3D Prince of Persia - The Sands of Time. Of course, Ico predates The Sands of Time, and when Sands of Time was released, I think Ico had only been a critical but not popular success in the UK (I don't know about Canada, where Sands of Time was developed, though). At any rate, it does look like Sands of Time has taken many good ideas from Ico: the idea of protecting someone else, gradually falling in love with her, the way the camera movement when you enter a new area gives a clue as to where you should go, the general gameplay characteristics combining architectural puzzles with intermittent fighting, the need to use both characters to solve puzzles. There is a whole host of similarities. To some extent, Sands of Time could be seen as a more mainstream Ico (the fighting is much more involved in the Prince of Persia title, for example), and it certainly has interesting new elements of its own too (such as the ability to rewind time). None of this reuse of ideas is a bad thing of course, but it does perhaps highlight how Ico is a game that has left an important legacy. The gaming world is better for it. It might also explain why they are both such great games, probably two of my favourites. The use of two characters (only one of which you control) to solve puzzles is part of what makes both games great and also, I believe, why the Sands of Time is so much more fulfilling than its two sequels.
11 Aug 2006 : Ico (possible spoilers if you've not played it) #
I just finished the game Ico this evening. It was really quite sublime in many ways. There seems to be a lot of confusion about what it all means and what actually happens in the game. As far as I can tell, it seems to be an allegory on coming of age and the importance of mutual help. Ico is a misunderstood miscreant. Yorda's mother is protective of her, and not ready to believe that she is able to survive on her own. This may be true, but what we grow to understand is that whilst Yorda is able to reform Ico, so Ico can also help Yorda to grow independent of her mother. The result is that by helping each other, they are able to survive on their own, against Yorda's mother's wishes. As with anything like this, the beauty is as much in the enigma as in the truth, but that's my interpretation.
10 Aug 2006 : Submitted review #
Submitted the review I've been working on. Have lots of other stuff still to do, though...
9 Aug 2006 : More paper reading #
Finished reading the paper S. Kremer, O. Markowitch and J. Zhou, "An Intensive Survey of Fair Non-Repudiation Protocols". Also finished writing up a review of a paper.
8 Aug 2006 : Reading and reviewing papers #
Today I read a couple of papers. They were J. Riordan and B. Schneier, "Environmental Key Generation towards Clueless Agents" and G. Vigna, "Protecting Mobile Agents through Tracing". I've also been reviewing a paper, and working out the Security Track timetable for Chinacom this year. Neither of these last two tasks is finished yet, though.
21 Jun 2006 : Lacking motivation today #
I spent many hours yesterday writing reviews for Chinacom into the middle of the night. It was a shame, because Joanna has just been offered a new job and we were supposed to be celebrating. Today the result has been that I just can't seem to get motivated. I'm going to have to try harder at being motivated.
16 Jun 2006 : The beginning #
Well, this is going to be a bit of a test, and I'll see how it goes. I'm not big on keeping a diary, but I figure it may be useful to keep some notes about things on a daily basis. You never know, I may even get the hang of it (yeah, right!). Anyway, this is just getting the ball rolling. I guess time will tell how it pans out.