This comment from Amit Singhal of Google, tucked into an interview on SearchEngineLand, jumped out at me:

"But out of the gate, whereas we had limited users to train this system with, I’m actually very happy with the outcome of the personal results."

For me, it was a gentle reminder that the search algorithms most of us use unwittingly every day are neither predictable nor fully understandable. It brought to mind this recent piece by Don Norman:

"It is no longer easy or even possible to understand why a machine has taken the action it did: not even the designer of the machine may know, because not only are the algorithms complex and difficult to understand in the realities of a dynamic, ever-changing real environment, but the learning algorithms may have adjusted weights and rules in ways not easy to decipher. If the designers cannot always predict or understand the behavior, what chance does the ordinary person have?"

Much of the effort that goes into designing user interfaces for software centres on considering, and deliberately designing for, quite specific experiences. In some cases (Nudge and Persuasion Design spring to mind) we're trying to steer an audience in particular directions.

What techniques do we need when the system our audience interacts with is too complex to predict? Clear seams between the human-designed and machine-provided aspects of an experience, like those many call for between the real world and ubicomp? Total front-end simplicity à la Google and Siri seems obvious and ideal, but it implies that as behind-the-scenes computing gets more complex, the front-of-house technology will need to keep pace. Right now, it's all proprietary…

(I'm also tickled by the idea that religious dignitaries, who've been designing and nudging around the unknowable for millennia, might be roped in to consult on all this)