"All the world's a stage, and all the men and women merely players..."Last year we acquitted ourselves well at Over The Air, winning Best Overall Prototype for Octobastard, a many-limbed pile of technology we had loving nailed together in an all-night frenzy. Octobastard was many things, but immediately after the event I caught myself thinking that next time around we ought to try for something less obviously overengineered to the gills...
...so this year, I had strong urges for us to produce something not just clever, but beautiful. I wanted to do something huge and participative - mainly because I've been thinking a lot recently about helping people feel they're a meaningful part of something bigger. Smule do this, I think our Ghost Detector did it, Burning Man does it... it's a bit of a theme for me right now. And I read recently about David Byrne's audio installation at the Roundhouse, which filled me with wow.
So in the week before the event I was chatting to Thom and James (FPers who also attended OTA) and we got some ideas together. We work with applications every day at work and we know how tricky it can be to get a large number of people to install and run any piece of software on their phones in a short period of time, so we wanted to avoid that... which led us back to the fundamentals of radio. Could we do something along the lines of detecting radio signals (GSM, Wi-Fi or Bluetooth) and perhaps turn them into something interesting? Yes, it turns out we could. In the end we settled on Bluetooth, because every phone has it and we can access it from software we write for phones - and Project Bluebell took form.
The idea was to turn the whole audience of the event into unwitting musicians, and have them create a Son et Lumière with their very presence.
To do this, we wrote a piece of custom J2ME software which we installed onto 4 commodity phones (cheapo Sony Ericsson devices with PAYG SIMs and a fiver of credit apiece). We hid these phones in the corners of the auditorium. This software scanned for Bluetooth devices nearby, and recorded their names, as well as whether they were mobile phones or something else (e.g. laptops, which were quite common at the event, and which we wanted to ignore).
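The original J2ME source isn't reproduced here, but the "is this a phone?" decision the receivers had to make can be sketched in plain Java. A Bluetooth inquiry reports a Class of Device (CoD) integer for every device it finds, and in the standard CoD encoding bits 8-12 hold the major device class: phones are major class 0x02, while laptops fall under "computer" (0x01), which is what let us ignore them. The class and method names below are mine, not Bluebell's.

```java
// Sketch of the receivers' device filtering, assuming the standard
// Bluetooth Class of Device layout: bits 8-12 = major device class,
// where 0x02 means "phone" and 0x01 means "computer" (e.g. laptops).
public class DeviceFilter {
    static final int MAJOR_CLASS_PHONE = 0x02;

    // Extract the 5-bit major device class from a raw CoD value.
    static int majorClass(int cod) {
        return (cod >> 8) & 0x1F;
    }

    // Keep only phones; everything else (laptops etc.) is ignored.
    static boolean isPhone(int cod) {
        return majorClass(cod) == MAJOR_CLASS_PHONE;
    }
}
```

On a real device the CoD would come back from a JSR-82 inquiry alongside the friendly name; here it's just an integer argument so the logic stands alone.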
These four receivers then reported the phones near them to a server, every 10 seconds or so. The server, a web application run inside Google App Engine, received all these reports and stored them in a database. It then built a list of the most recently seen phones, together with their location (which receiver had picked them up), and exposed this through a simple API over HTTP.
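The server's core bookkeeping is simple enough to sketch. This is a minimal in-memory version under assumed names (the real thing stored sightings in App Engine's datastore and served them over HTTP): each receiver report upserts the latest sighting for a device, and the API's "recently seen" list is just the sightings inside a time window, newest first.

```java
import java.util.*;

// Minimal sketch of the server's state, assuming: one latest sighting
// per device name, and a windowed "recently seen" query for the API.
public class SightingStore {
    static class Sighting {
        final String deviceName;
        final String receiverId; // which corner of the auditorium saw it
        final long seenAtMillis;
        Sighting(String deviceName, String receiverId, long seenAtMillis) {
            this.deviceName = deviceName;
            this.receiverId = receiverId;
            this.seenAtMillis = seenAtMillis;
        }
    }

    private final Map<String, Sighting> latest = new HashMap<>();

    // Called once per device in each receiver's ~10-second report;
    // keeps only the newest sighting for that device.
    void report(String deviceName, String receiverId, long nowMillis) {
        Sighting prev = latest.get(deviceName);
        if (prev == null || nowMillis >= prev.seenAtMillis) {
            latest.put(deviceName, new Sighting(deviceName, receiverId, nowMillis));
        }
    }

    // Devices seen within the last windowMillis, most recent first -
    // roughly what the HTTP API handed to the audio and visual clients.
    List<Sighting> recentlySeen(long nowMillis, long windowMillis) {
        List<Sighting> out = new ArrayList<>();
        for (Sighting s : latest.values()) {
            if (nowMillis - s.seenAtMillis <= windowMillis) out.add(s);
        }
        out.sort((a, b) -> Long.compare(b.seenAtMillis, a.seenAtMillis));
        return out;
    }
}
```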
Two pieces of software consumed data through this API: firstly, an audio processor which turned the Bluetooth names of phones into simple tunes (by analysing the characters in each name and decomposing them into notes, note lengths and delays) and mixed the tunes together using a library called JFugue.
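The exact mapping we used isn't documented above, so here's an assumed scheme in the same spirit: each character picks a pitch from a pentatonic scale (pentatonic because random, overlapping input stays consonant) and its position picks a duration, producing a JFugue-style music string. All names here are hypothetical.

```java
// Assumed name-to-tune mapping (not Bluebell's actual one): map each
// letter or digit to a C major pentatonic pitch and a cycling duration,
// emitting a JFugue-style pattern string (e.g. "C5q D5i G5s").
public class NameTunes {
    // C major pentatonic: collisions between simultaneous tunes stay consonant.
    private static final String[] SCALE = {"C5", "D5", "E5", "G5", "A5"};
    private static final char[] DURATIONS = {'q', 'i', 's'}; // quarter, eighth, sixteenth

    static String toPattern(String deviceName) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        for (char c : deviceName.toCharArray()) {
            if (!Character.isLetterOrDigit(c)) continue; // skip spaces/punctuation
            String note = SCALE[c % SCALE.length];       // char code picks the pitch
            char dur = DURATIONS[i % DURATIONS.length];  // position picks the length
            if (sb.length() > 0) sb.append(' ');
            sb.append(note).append(dur);
            i++;
        }
        return sb.toString();
    }
}
```

A string like this could then be handed to JFugue's `Player` to render as audio; the real processor also mixed the per-phone tunes together.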
And secondly, a visualiser written in Processing which took the same data and used it as the input for several games of Conway's Life, each running simultaneously in a slightly different shade of blue atop the others. Periodically this visualiser would refresh and dump some new cells into each game, depending on the phones found since the last check - ensuring that the animation was continually running but seeded with real data coming from the room. We'd also occasionally show some of the device names we'd found, which we felt might help build more of a connection between folks on the floor and the light-show.
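One layer of that visualiser can be sketched in plain Java: a standard Conway step plus the "dump new cells" seeding done on each refresh. In the real thing the seed positions were derived from fresh sightings; in this minimal sketch the caller just supplies coordinates, and the class name is mine.

```java
// One Life layer: Conway's standard rules on a bounded grid (cells beyond
// the border count as dead), plus a seed() used on each data refresh to
// inject cells for newly-seen phones.
public class LifeLayer {
    final boolean[][] grid; // grid[y][x], true = live cell
    final int w, h;

    LifeLayer(int w, int h) { this.w = w; this.h = h; grid = new boolean[h][w]; }

    // Dump new cells into the game; each entry is an {x, y} pair.
    void seed(int[][] cells) {
        for (int[] c : cells) grid[c[1]][c[0]] = true;
    }

    // One generation: live cells survive with 2-3 neighbours,
    // dead cells come alive with exactly 3.
    void step() {
        boolean[][] next = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                int n = liveNeighbours(x, y);
                next[y][x] = grid[y][x] ? (n == 2 || n == 3) : (n == 3);
            }
        for (int y = 0; y < h; y++) grid[y] = next[y];
    }

    private int liveNeighbours(int x, int y) {
        int n = 0;
        for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue;
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && nx < w && ny >= 0 && ny < h && grid[ny][nx]) n++;
            }
        return n;
    }
}
```

Running several of these at once, each drawn in its own shade of blue and reseeded from the API every few seconds, gives the layered effect described above.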
"...Full of strange oaths and bearded like the pard,
Jealous in honour, sudden and quick in quarrel..."
All the above was produced between 11am on the Friday and around 4:30am Saturday morning: you can do a lot with a small number of smart caffeinated geeks, particularly if you're comfortable with their becoming vaguely hysterical in the process. Who convinced us that the glowsticks were a hotline to Google? Why did we start extracting new software development methodologies from Deal or No Deal? What exactly *did* we do to that bear?
How would we improve the product? I'd personally like to see a stronger link between individuals and output - so that as a participant I can see myself and the difference I'm making. I think it's possible to work this out right now by looking and listening carefully, but it could be made more obvious.
I was surprised at how well the music side of it worked (generating anything that sounds like music from what is effectively random and varying data was one of the tougher challenges). I'd like to see if we could vary the music further and perhaps add samples. I'd also like to set this up and run it again somewhere, to see how it works over a long period of time with a smaller, faster-moving audience.
Adam Cohen-Rose has kindly put some video of the second demonstration online, where you can see the Life and hear a little of the music. I'm going to try to get a better-quality video of the two together over the next few days; I'll post it up when I have it, perhaps with some audio for iTunes :)
We also had a team who were covering the event for the BBC get quite interested, and I understand we may get a mention in an upcoming episode of Click.
"Last scene of all,
That ends this strange eventful history,
Is second childishness and mere oblivion..."
We were absolutely maxi-chuffed when Project Bluebell was awarded Best of Show by the panel of judges. Thank you :) Only a year to prepare for the next one, now...
Update: Project Bluebell was covered by BBC news; we also appear in the video clip bundled into this article.