I've been getting quietly interested in the sensors embedded in mobiles over the last few years, and Carsonified kindly gave me a chance to think out loud about it, in a talk at Future of Mobile a couple of weeks ago.

At dConstruct I noted that Bryan and Steph produce slides for their talks which work well when read (as opposed to presented), and I wanted to try and do the same; hopefully there's more of a narrative in my deck than in the past. And it goes something like this...

Our mental models of ourselves are brains-driving-bodies, and the way we've structured our mind-bicycles is a bit like this too: emphasis on the "brain" (processor and memory). But this is quite limiting, and as devices multiply in number and miniaturise, it's an increasingly unhelpful analogy: a modern mobile is physically and economically more sensor than CPU. And perhaps we're due for a shift in thinking along the lines of geo-to-heliocentrism, or gene selection theory, realising that the most important bit of personal computing isn't, as we've long thought, the brain-like processor in the middle, but rather the flailing little sensor-tentacles at the edges.

There's no shortage of sensors; when I ran a test app on my Samsung Galaxy S2, I was surprised to find not just an accelerometer, a magnetometer and an orientation sensor, but also light, proximity, gyroscope, gravity, linear acceleration and rotation vector sensors. There are also, obviously, the microphone, touch screen, physical keys, GPS, and all the radio kit (which can measure signal strengths) - plus cameras of course. And in combination with internet access, there are second-order uses for some of these: a wi-fi SSID can be resolved to a physical location, say.
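If you fancy running the same experiment, this is roughly all it takes on Android - a minimal sketch, assuming nothing more than a bare Activity, which asks the SensorManager for every sensor the device reports and logs its name, vendor and quoted power draw:

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.util.Log;

import java.util.List;

// Minimal sketch: enumerate every hardware sensor the device reports.
public class SensorListActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        SensorManager sm = (SensorManager) getSystemService(SENSOR_SERVICE);
        List<Sensor> sensors = sm.getSensorList(Sensor.TYPE_ALL);
        for (Sensor s : sensors) {
            // Name, vendor and power draw (mA) as reported by the driver.
            Log.d("Sensors", s.getName() + " / " + s.getVendor()
                    + " / " + s.getPower() + " mA");
        }
    }
}
```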

Maybe we need these extra senses in our devices. The bandwidth between finger and screen is starting to become a limiting factor in the communication between man and device - so to communicate more expressively, we need to look beyond poking a touch-screen. Voice is one avenue a few players (Google thus far, Apple probably real soon) are exploring - and voice recognition nowadays is tackled principally with vast datasets. Look around the academic literature and you'll find many MSc projects which hope to derive context from accelerometer measurements; many of them work reasonably well in the lab and fail in the real world, which leads me to wonder whether a similar data-driven, statistical approach could be usefully taken here too.
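To make the accelerometer-to-context idea concrete, here's a toy sketch of my own (not taken from any of those projects): it takes a window of acceleration samples, computes the standard deviation of their magnitude, and applies a hand-picked threshold to guess whether the owner is walking - exactly the kind of brittle heuristic that behaves in the lab and misbehaves in the street. Both the sample format and the threshold value are assumptions for illustration:

```java
// Toy illustration of deriving context from accelerometer data: compute the
// standard deviation of acceleration magnitude over a window of samples and
// apply a hand-tuned threshold. The 1.5 m/s^2 threshold is an assumption.
public class MotionHeuristic {
    public static boolean looksLikeWalking(float[][] window) {
        double[] magnitudes = new double[window.length];
        double sum = 0;
        for (int i = 0; i < window.length; i++) {
            float[] s = window[i]; // {x, y, z} in m/s^2
            magnitudes[i] = Math.sqrt(s[0] * s[0] + s[1] * s[1] + s[2] * s[2]);
            sum += magnitudes[i];
        }
        double mean = sum / window.length;
        double variance = 0;
        for (double m : magnitudes) {
            variance += (m - mean) * (m - mean);
        }
        double stdDev = Math.sqrt(variance / window.length);
        // Roughly: a phone at rest shows near-zero spread around 1g;
        // a walking user shows a much larger spread.
        return stdDev > 1.5;
    }
}
```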

But today, the operating system tends to use sensors really subtly, and in ways that seem a bit magical the first time you see them. I remember vividly turning my first iPhone around and around, watching the display rotate landscape-to-portrait and back again. Apps don't tend to be quite so magical; the original Google voice search was the best example of this kind of magic I could think of in a third-party app: hold the phone up to your ear, and it used the accelerometer and proximity sensor to know it was there, and prompted you to say what you were looking for - beautiful.
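You can approximate that trick with public APIs on Android: a crude sketch that treats "something is covering the proximity sensor" as "the phone is at the ear". The real app combined proximity with accelerometer readings; here the accelerometer is left out and the callback wiring is an assumption:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Crude "held to the ear" approximation: fire a callback whenever something
// covers the proximity sensor. A real implementation would also consult the
// accelerometer to check the phone has been raised.
public class RaiseToListenDetector implements SensorEventListener {
    private final SensorManager sensorManager;
    private final Runnable onRaised; // assumed callback, e.g. start listening

    public RaiseToListenDetector(SensorManager sensorManager, Runnable onRaised) {
        this.sensorManager = sensorManager;
        this.onRaised = onRaised;
    }

    public void start() {
        Sensor proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        if (proximity != null) {
            sensorManager.registerListener(this, proximity,
                    SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Many proximity sensors only report "near" (a small value) or "far"
        // (their maximum range); anything below the maximum counts as near.
        if (event.values[0] < event.sensor.getMaximumRange()) {
            onRaised.run();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```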

Why aren't apps as magical as the operating system's use of sensors? In the case of the iPhone, a lot of the stuff you might want to use is tucked away in private APIs. There was a bit of a furore about that Google search app at the time, and the feature has since been withdrawn.

(In fact the different ways in which mobile platforms expose sensors are, Conway-like, a reflection of the organisations behind those platforms. iOS is a carefully curated Disneyland, beautifully put together but with no stepping outside the park gates. Android offers studly raw access to anything you like. And the web is still under discussion, with a Generic Sensor API listed as "exploratory work" - so veer off-piste to PhoneGap if you want to get much done in the meantime.)

I dug around for a few examples of interesting stuff in this area: GymFu, Sonar Ruler, NoiseTube and Hills Are Evil; and I spoke to a few of the folks behind these projects. One message which came through loud and clear is that the processing required for all their applications could be done on-device. This surprised me; I'd expected some of this analysis to be done on a server somewhere.

Issues of real-world messiness also came up a few times: unlike a lab, the world is full of noise, and in a lab setting you would never pack your sensors so tightly together inside a single casing. GymFu found, for instance, that the case design of the second-generation iPod touch fed vibrations from the speaker into the accelerometer. And NoiseTube noticed that smartphone audio recording is optimised for speech, which made it less suitable for general-purpose noise monitoring.
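There's no general cure for that messiness, but the usual first line of defence is filtering. A minimal sketch of the sort of low-pass smoothing an accelerometer-driven app might apply before doing anything clever with the signal - the smoothing factor is an assumed value you'd tune empirically, not something from any of the projects above:

```java
// Minimal low-pass filter to tame jittery accelerometer readings.
// ALPHA is an assumed smoothing factor; real apps tune it empirically.
public class LowPassFilter {
    private static final float ALPHA = 0.15f;
    private final float[] smoothed = new float[3];

    // Pass in raw {x, y, z} readings; get back a smoothed copy.
    public float[] apply(float[] raw) {
        for (int i = 0; i < 3; i++) {
            smoothed[i] = smoothed[i] + ALPHA * (raw[i] - smoothed[i]);
        }
        return smoothed;
    }
}
```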

Components vary, too, as you can see in this video comparing the proximity sensor of the iPhone 3G and 4, which Apple had to apologise for. With operating systems designed to sit atop varying hardware from many manufacturers (i.e. Android), we can reasonably expect variance in hardware sensors to be much worse. Again, NoiseTube found that the HTC Desire HD was unsuitable for their app because it cut off sound at around 78dB - entirely reasonable for a device designed to transmit speech, not so good for their purposes.
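Given that spread in hardware, it seems sensible to interrogate a sensor's declared capabilities before trusting it; Android at least exposes range and resolution per sensor. The thresholds in this sketch are illustrative assumptions rather than figures from any of the projects above (and note that the NoiseTube case couldn't be caught this way, since the microphone path isn't described by the sensor APIs):

```java
import android.hardware.Sensor;
import android.hardware.SensorManager;

// Defensive check: query a sensor's declared range and resolution before
// relying on it, since these vary widely across manufacturers. The ~4g and
// 0.1 m/s^2 thresholds below are assumptions for illustration.
public class SensorCapabilityCheck {
    public static boolean canMeasureVigorousMotion(SensorManager sm) {
        Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        if (accel == null) {
            return false;
        }
        float requiredRange = 4 * SensorManager.GRAVITY_EARTH; // ~4g in m/s^2
        return accel.getMaximumRange() >= requiredRange
                && accel.getResolution() <= 0.1f;
    }
}
```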

And don't forget battery life: you can't escape this constraint in mobile, and most sensor-based applications will be storing, analysing or transmitting data after gathering it - all of which takes power.
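The usual discipline, on Android at least, is to sample no faster than you need and to stop listening the moment you don't. A sketch, assuming an Activity that only cares about the accelerometer while it's in the foreground:

```java
import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Battery discipline sketch: sample no faster than necessary, and stop
// listening as soon as the Activity leaves the foreground.
public class FrugalSensorActivity extends Activity {
    private SensorManager sensorManager;

    private final SensorEventListener listener = new SensorEventListener() {
        @Override
        public void onSensorChanged(SensorEvent event) {
            // Keep this cheap; heavy analysis or uploads cost power too.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    };

    @Override
    protected void onResume() {
        super.onResume();
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        if (accel != null) {
            // SENSOR_DELAY_UI is markedly cheaper than SENSOR_DELAY_FASTEST.
            sensorManager.registerListener(listener, accel,
                    SensorManager.SENSOR_DELAY_UI);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(listener);
    }
}
```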

I closed on a slight tangent, talking about a few places we can look for inspiration. I want to talk about these separately some time, but briefly they were:

  • Philip Pullman's His Dark Materials trilogy, and daemons in particular. If these little animalistic manifestations of souls, simultaneously representing their owners whilst acting independently on their behalf, aren't a good analogy for a mobile, I don't know what is. And who hasn't experienced intercision when leaving their phone at home, eh…? When I chatted to Mark Curtis about this a while back, he also reckoned that the Subtle Knife itself ("a tool that can create windows between worlds") is another analogy for mobile lurking in the trilogy;
  • The work of artists who give us new ways to play with existing senses - and I'm thinking particularly of Animal Superpowers by Kenichi Okada and Chris Woebken, which I saw demonstrated a few years back and has stuck with me ever since;
  • Artists who open up new senses. Timo Arnall and BERG. Light paintings. Nuff said.

It's not all happy-clappy-future-shiny of course. I worry about who stores or owns rights to my sensor data and what future analysis might show up. When we have telehaematologists diagnosing blood diseases from camera phone pictures, what will be done with the data gathered today? Most current projects like NoiseTube sidestep the issue by being voluntary, but I can imagine incredibly convenient services which would rely on its being gathered constantly.

So in summary: the mental models we have for computers don’t fit the devices we have today, which can reach much further out into the real world and do stuff - whether it be useful or frivolous. We need to think about our devices differently to really get all the possible applications, but a few people are starting to do this. Different platforms let you do this in different ways, and standardisation is rare - either in software or hardware. And there’s a pile of interesting practical and ethical problems just around the corner, waiting for us.

I need to thank many people who helped me with this presentation: in particular Trevor May, Dan Williams, Timo Arnall, Jof Arnold, Ellie D’Hondt, Usman Haque, Gabor Paller, Sterling Udell, Martyn Davies, Daniele Pietrobelli, Andy Piper and Jakub Czaplicki.