flypig.co.uk

Personal Blog


Blog

5 most recent items

25 Feb 2018 : Being successful as a thief #
Ars has a great video interviewing Paul Neurath about the troubled development of Thief. I loved sneaking around in the Thief games, from the original right through Deadly Shadows and up to the latest remake. But apart from a wonderful excuse to replay the games in my head, the real message of the video is about the challenges and time pressures of development, something I'm acutely aware of right now with Pico.

"You have to make mistakes. You try things, you go down a lot of dead ends. In this case a lot of those dead ends didn't pan out. But we were learning... That was the key thing. We finally had the mental model after doggedly pursuing this for a year. Now we know what we need to do to get this done and we figured it out and got it done."

When I was young, game developers were my heroes. It's good to know that such an inspirational series of games suffered failures and challenges, but still came out as the amazing games they were. We're all working towards the moments they experienced, when "it worked and it felt great."

 
12 Feb 2018 : Countdown #
I'm not convinced it was a good use of my time, but I spent the weekend writing some code to solve the Countdown numbers game. In case you're not familiar with Countdown, here's a clip.
 

There are lots of ways to do this, but my solution hinges on being able to enumerate all binary trees with a given number of nodes. Doing this efficiently (in both time and memory) turned out to be tricky, and the answer to that hinges on how the trees are represented. The key is to note that each layer can't have more than n nodes, where n is the number of nodes the tree can have overall.

Each tree is stored as a list, with each item in the list representing the nodes at a given depth in the tree (a layer). Each item is a bit sequence representing which nodes in the layer have children.

These bit sequences would get long quickly if they represented every possible node in a layer (since there are 2, 4, 8, 16, 32, ... possible nodes at each layer). Instead, the index of the bit represents the index into the actual nodes in the layer, rather than the possible nodes. This greatly limits the length of the bit sequence, because there can be no more than n actual nodes in each layer, and there can be no more than n layers in total. The memory requirement is therefore n².

Here's an example:
T = [[1], [1, 1], [1, 1, 0, 0], [0, 1, 0, 0]]
which represents a tree like this:

A binary tree
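
To make the representation concrete, here's a minimal Python sketch (an illustration of the layout, not the actual solver code) that expands a layered bit-list like T into explicit nodes and child links. It assumes layer 0 always holds just the root and that the final, all-leaf layer isn't stored in the list.

def decode(layers):
    """Expand a layered bit-list into (depth, index, children) tuples,
    where children is None for a leaf or a pair of (depth, index) refs."""
    nodes = []
    width = 1  # layer 0 always holds just the root
    for depth, bits in enumerate(layers):
        assert len(bits) == width, "layer width must match the parent layer"
        child = 0
        for index, bit in enumerate(bits):
            if bit:
                # a set bit means this node has two children in the next layer
                nodes.append((depth, index, ((depth + 1, child), (depth + 1, child + 1))))
                child += 2
            else:
                nodes.append((depth, index, None))
        width = 2 * sum(bits)
    # the final layer, not stored in the list, consists entirely of leaves
    for index in range(width):
        nodes.append((len(layers), index, None))
    return nodes

T = [[1], [1, 1], [1, 1, 0, 0], [0, 1, 0, 0]]
for node in decode(T):
    print(node)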

It's really easy to cycle through all of these, because you can enumerate each layer individually, which just means cycling through all the possible bit sequences for that layer.
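
As a rough sketch of that enumeration (an assumption about the approach, not the code from the repository), this generator yields every layered bit-list with exactly n internal nodes by choosing, layer by layer, which of the current layer's nodes get children.

from itertools import product

def layered_trees(n):
    """Yield layered bit-lists for every binary tree with n internal nodes."""
    def extend(layers, remaining, width):
        if remaining == 0:
            # no internal nodes left: the current layer is all leaves
            yield layers
            return
        # choose which of the width nodes in this layer become internal
        for bits in product((0, 1), repeat=width):
            used = sum(bits)
            if 0 < used <= remaining:
                yield from extend(layers + [list(bits)], remaining - used, 2 * used)
    yield from extend([], n, 1)

# the counts follow the Catalan numbers: 1, 2, 5, 14, ...
for n in range(1, 5):
    print(n, sum(1 for _ in layered_trees(n)))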

It's not a new problem, but it was a fun exercise to figure out.

The code is up on GitHub in Python if you want to play around with it yourself.
 
30 Sep 2017 : Connecting to an iPhone using BLE and gatttool #

On the Pico project we've recently been moving from Bluetooth Classic to BLE. We have multiple motivations for this, not just the promise of low energy use. BLE provides RSSI values, which means we can control proximity detection better, and frankly, Bluetooth Classic has been causing us a lot of reliability problems that we've had to work around, so we're hoping BLE will behave better. Finally, we're developing an iPhone client, and iOS, it seems, doesn't properly support Bluetooth Classic, so BLE is our best option for cross-platform compatibility.

One of the challenges of developing anything that uses a protocol built on top of some transport is that typically both ends of the protocol have to be developed simultaneously. This slows things down, especially when we're trying to distribute tasks across several developers. So we were hoping to use gatttool, part of bluez on Linux, as an intermediate step, to let us check that the initial iOS BLE code worked before moving on to the Pico protocol proper.

So, here's a quick summary of how we used gatttool to write characteristics to the iPhone.

One point to note is that things weren't smooth for us. In retrospect we had the iPhone correctly running as a BLE peripheral, but we had real trouble connecting. I'll explain how we fixed this too.

Writing to a BLE peripheral using bluez is a four step process:

  1. Scan for the device using hcitool.
  2. Having got the MAC from the scan, connect to it using gatttool.
  3. Find the handle of the characteristic you want to write to.
  4. Perform the write.

The first step, scanning for a device, can be done using the following command.

sudo hcitool -i hci0 lescan

This command uses hcitool to perform a BLE scan (lescan) using the local device (-i hci0). If you have more than one Bluetooth adaptor, you may want to specify something other than hci0.

When we first tried this, we kept on getting input/output errors, even when run as root. I don't know why this was, but eventually we found a solution:

sudo hciconfig hci0 down
sudo hciconfig hci0 up

Not very elegant, but it seemed to work. After this, the scan started throwing up results.

flypig@delphinus:~$ sudo hcitool -i hci0 lescan
LE Scan ...
58:C4:C5:1F:C7:70 (unknown)
58:C4:C5:1F:C7:70 Pico's iPhone
CB:A5:42:40:F8:68 (unknown)
58:C4:C5:1F:C7:70 (unknown)
58:C4:C5:1F:C7:70 Pico's iPhone

Note the repeated entries. The device I was interested in was "Pico's iPhone", where we were running our test app. On other occasions when I've performed the scan, the iPhone MAC address came up, but without the name (marked as "unknown"). Again, I don't know why this is, but just trying the MACs eventually got me connected to the correct device.

Having got the MAC, now it's time to connect (step 2).

sudo gatttool -t random -b 58:C4:C5:1F:C7:70 -I

What's this all about? Here we're using gatttool to connect to the remote device using its Bluetooth address (-b 58:C4:C5:1F:C7:70). Obviously if you're doing this at home you should use the correct MAC, which is likely to be different from this one. Our iPhone is using a random address type, so we have to specify this too (-t random). Finally, we set it to interactive mode with -I. This will open gatttool's own command console so we can do other stuff.

If everything goes well, the console prompt will change to include the MAC address.

[58:C4:C5:1F:C7:70][LE]>

So far we've only set things up and not actually connected, so the next step is to connect.

[58:C4:C5:1F:C7:70][LE]> connect
Attempting to connect to 58:C4:C5:1F:C7:70
Connection successful

Great! Now there's a time problem. The iPhone will throw us off this connection after only a few seconds. If it does, enter 'connect' again to re-establish the connection. There's another catch though, so be careful: the iPhone will also periodically change its MAC address. If it does, you'll need to exit the gatttool console (Ctrl-D), rescan and then reconnect to the device as above.

Having connected, we want to know what characteristics are available, which we do by entering 'characteristics' at the console.

[58:C4:C5:1F:C7:70][LE]> characteristics
handle: 0x0002, char properties: 0x02, char value handle: 0x0003, uuid: 00002a00-0000-1000-8000-00805f9b34fb
handle: 0x0004, char properties: 0x02, char value handle: 0x0005, uuid: 00002a01-0000-1000-8000-00805f9b34fb
handle: 0x0007, char properties: 0x20, char value handle: 0x0008, uuid: 00002a05-0000-1000-8000-00805f9b34fb
handle: 0x000b, char properties: 0x98, char value handle: 0x000c, uuid: 8667556c-9a37-4c91-84ed-54ee27d90049
handle: 0x0010, char properties: 0x98, char value handle: 0x0011, uuid: af0badb1-5b99-43cd-917a-a77bc549e3cc
handle: 0x0034, char properties: 0x12, char value handle: 0x0035, uuid: 00002a19-0000-1000-8000-00805f9b34fb
handle: 0x0038, char properties: 0x12, char value handle: 0x0039, uuid: 00002a2b-0000-1000-8000-00805f9b34fb
handle: 0x003b, char properties: 0x02, char value handle: 0x003c, uuid: 00002a0f-0000-1000-8000-00805f9b34fb
handle: 0x003e, char properties: 0x02, char value handle: 0x003f, uuid: 00002a29-0000-1000-8000-00805f9b34fb
handle: 0x0040, char properties: 0x02, char value handle: 0x0041, uuid: 00002a24-0000-1000-8000-00805f9b34fb
handle: 0x0043, char properties: 0x88, char value handle: 0x0044, uuid: 69d1d8f3-45e1-49a8-9821-9bbdfdaad9d9
handle: 0x0046, char properties: 0x10, char value handle: 0x0047, uuid: 9fbf120d-6301-42d9-8c58-25e699a21dbd
handle: 0x0049, char properties: 0x10, char value handle: 0x004a, uuid: 22eac6e9-24d6-4bb5-be44-b36ace7c7bfb
handle: 0x004d, char properties: 0x98, char value handle: 0x004e, uuid: 9b3c81d8-57b1-4a8a-b8df-0e56f7ca51c2
handle: 0x0051, char properties: 0x98, char value handle: 0x0052, uuid: 2f7cabce-808d-411f-9a0c-bb92ba96c102
handle: 0x0055, char properties: 0x8a, char value handle: 0x0056, uuid: c6b2f38c-23ab-46d8-a6ab-a3a870bbd5d7
handle: 0x0059, char properties: 0x88, char value handle: 0x005a, uuid: eb6727c4-f184-497a-a656-76b0cdac633b

In this case there are many characteristics, but the one we want is the last, with UUID 'eb6727c4-f184-497a-a656-76b0cdac633b'. We know this because it's the UUID we used in our iPhone app. We set it up as a writable characteristic, so we can also write to it.
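
As an aside, the 'char properties' field is the standard GATT properties bitmask, so you can check what a characteristic supports rather than guessing. Here's a small Python helper (just an illustration, using the property bit values from the Bluetooth spec):

GATT_PROPERTIES = {
    0x01: "broadcast", 0x02: "read", 0x04: "write without response",
    0x08: "write", 0x10: "notify", 0x20: "indicate",
    0x40: "authenticated signed writes", 0x80: "extended properties",
}

def describe(props):
    return [name for bit, name in sorted(GATT_PROPERTIES.items()) if props & bit]

print(describe(0x88))  # ['write', 'extended properties'] -- so it's writable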

[58:C4:C5:1F:C7:70][LE]> char-write-req 0x005a 5069636f205069636f205069636f205069636f205069636f205069636f205069636f20
Characteristic value was written successfully

Success! On the iPhone side, we set the app up to log the characteristic value whenever it's written to, so we see the following.

2017-09-29 20:09:21.206744+0100 Pico[241:25455] QRCodeReader:start()
2017-09-29 20:09:21.802875+0100 Pico[241:25455] BLEPeripheral: State Changed
2017-09-29 20:09:21.803002+0100 Pico[241:25455] BLEPeripheral: Powered On
2017-09-29 20:09:22.801024+0100 Pico[241:25455] BLEPeripheral:start()
2017-09-29 20:10:01.122027+0100 Pico[241:25455] BLE received: Pico Pico Pico Pico Pico Pico Pico

Where did all those 'Pico's come from? That's the value we wrote in, but in hexadecimal ASCII:

50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20 50 69 63 6f 20
P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o     P  i  c  o    
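
If you'd rather check a payload like this from Python than by hand, a couple of lines will do it:

payload = "5069636f20" * 7  # the hex string passed to char-write-req
print(bytes.fromhex(payload).decode("ascii"))  # Pico Pico Pico Pico Pico Pico Pico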

So, to recap, the following is the command sequence we used.

sudo hciconfig hci0 down
sudo hciconfig hci0 up
sudo hcitool -i hci0 lescan
sudo gatttool -t random -b 58:C4:C5:1F:C7:70 -I
[LE]> connect
[LE]> characteristics
[LE]> char-write-req 0x005a 5069636f205069636f205069636f205069636f205069636f205069636f205069636f20
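
Once you know the handle, the interactive session can also be scripted. Here's a rough sketch using Python and pexpect to drive gatttool (pexpect is an extra dependency, not something we used in the original session); the MAC address and handle are the ones from our run and will be different on your setup, and you may need root just as with the commands above.

import pexpect

MAC = "58:C4:C5:1F:C7:70"    # address from the lescan output
HANDLE = "0x005a"            # value handle of the writable characteristic
VALUE = "5069636f20" * 7     # "Pico " repeated seven times, hex-encoded

gatt = pexpect.spawn("gatttool -t random -b {} -I".format(MAC))
gatt.sendline("connect")
gatt.expect("Connection successful", timeout=10)
gatt.sendline("char-write-req {} {}".format(HANDLE, VALUE))
gatt.expect("Characteristic value was written successfully", timeout=10)
gatt.sendline("exit")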

When it's working, my experience is that gatttool works well. But BLE is a peculiar paradigm, very different from general networking, and it offers plenty of opportunity for confusion.

30 May 2017 : Catastrophic success #
I’ve been using computers in a serious way for the last 32 years and have been taking backup seriously for about half of that: starting with backup to CD-RW in 2002, then to a removable disk caddy a few years later, and to a USB hard drive in 2007. For most of that time I’ve been aware of the importance of off-site backups, but it wasn’t until October last year that I actually started doing it. Now my machines all perform weekly incremental backups to my home server, which then gets client-side encrypted and transferred to Amazon S3.
 
CD backup in 2002 Hard drive backup in 2007

Despite all of this effort I’ve never had to resort to restoring any of these backups. It’s surprising to think that over all this time, none of my hard drives have ever failed catastrophically.

That was until last Thursday, when I arrived home to discover Constantia, my home server, had suffered a serious failure due to a sequence of power cuts during the day. A bit of prodding made clear that it was the hard drive that had failed. I rely heavily on Constantia to manage my diary, cloud storage, git repos, DNS lookup and so on, so this was a pretty traumatic realisation. On Friday I ordered a replacement hard drive, which arrived Sunday morning.

Luckily Constantia has her operating system on a separate solid state drive, so with a bit of fiddling with fstab I was able to get her to boot again, allowing me to install and format the new drive. I then started the process of restoring the backup from S3.

Backup in progress
Thirteen hours and 55 minutes later, the restore is complete. Astonishingly, Constantia is now as she was before the failure. Best practice is to test not just your backup process regularly, but your restore process as well. But it’s a time-consuming and potentially dangerous process in itself, so I’m not proud to admit that this was the first time I’d attempted a restore. I’m therefore happy and astonished to say that it worked flawlessly. It’s as if I turned Constantia off and then three days later turned her back on again.

Credit goes to the duplicity and déjà-dup authors. Your hard work made my life so much easier. What could have been hugely traumatic turned out to be just some lost time. On the other hand, it also puts into perspective other events that have been happening this weekend. BA also suffered a power surge which took out its systems on Saturday morning. It took them two days to get their 500 machines spread across two data centres back up and running, while it took me three days to get my one server restored.
28 May 2017 : Catastrophic failure #
A series of power cuts last Thursday left Constantia, my home server, in a sorry state. On start-up, she would make a sort of repeating four-note melody, then crash out to a recovery terminal.

Constantia is poorly

I've subsequently discovered that the strange noises were from the hard drive failing, presumably killed by the repeated power outages. A replacement hard drive arrived this morning (impressively on a Sunday, having been ordered from Amazon Friday evening), which I'm in the process of restoring a backup onto.

Old drive on the left, new drive on the right

Right now I'm apprehensive to say the least. This is the first real test of my backup process, which stores encrypted snapshots on Amazon S3 using Déjà Dup. If it works, I'll be happy and impressed, but I'm preparing myself for trouble.

When I made the very first non-incremental backup of Constantia to S3 it took four days. I'm hoping restoring will be faster.