PICNIC07: Blaise Agüera y Arcas, Microsoft Live Labs
Showing off a couple of things:
First, Seadragon: a client/server image-manipulation package. Shows a demo of hundreds of photos on-screen, including some extremely high-res (300-megapixel) ones. It's all about taking the idea of multi-resolution and generalising it to the web. "These images are small: those are FAR AWAY...": you don't need the full detail of an image to view it from a distance. The client tells the server exactly which regions and resolutions it needs to see, so arbitrarily large objects can be interacted with over even moderate bandwidth. Demos zooming into the entire text of Bleak House.
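The "client tells the server what it needs" idea can be sketched as a standard image-pyramid calculation: pick a resolution level appropriate to the on-screen size, then fetch only the tiles inside the viewport. This is a minimal illustration of the general multi-resolution technique, not Seadragon's actual protocol; all names here are invented:

```python
import math

def pyramid_level(full_width: int, display_width: int) -> int:
    """Choose the coarsest pyramid level whose resolution still covers
    the on-screen size. Level 0 is the full-resolution image; each
    level above halves both dimensions."""
    if display_width >= full_width:
        return 0
    # Number of halvings before the image would fall below display size.
    return int(math.floor(math.log2(full_width / display_width)))

def visible_tiles(level_width: int, tile_size: int,
                  view_left: int, view_right: int) -> list:
    """Tile column indices intersecting [view_left, view_right) at the
    chosen level -- only these need to be requested from the server."""
    first = max(0, view_left // tile_size)
    last = min((level_width - 1) // tile_size, (view_right - 1) // tile_size)
    return list(range(first, last + 1))
```

For a 300-megapixel image viewed in a small window, the client fetches a handful of coarse tiles rather than the full-resolution data, which is why zooming stays interactive over modest bandwidth.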
(Interesting, a lovely visual demo and a great demonstration of zooming between levels of scale... but where's the rocket science here? Isn't this the sort of thing Google Earth has been demonstrating for a while?)
Shows demo of multi-resolution advertising, where you zoom into details of tech specs of a car. Reminds me of those TV ads which broadcast recipes you had to read by recording to VHS and playing back slowly.
(Ah, the difference between this and conventional mapping software is that the latter usually moves between discrete layers of magnification, whilst Seadragon doesn't)
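That distinction can be made concrete: instead of snapping to the nearest discrete magnification layer, a continuous zoom computes a fractional pyramid level and cross-fades between the two discrete levels that bracket it. An illustrative sketch (not Seadragon's implementation):

```python
import math

def blend_levels(full_width: int, display_width: int):
    """Fractional pyramid level for a continuous zoom. Returns the two
    discrete levels to render and the cross-fade weight toward the
    coarser one; a discrete-layer viewer would simply round instead."""
    f = max(0.0, math.log2(full_width / max(1, display_width)))
    lo = math.floor(f)
    return lo, lo + 1, f - lo
```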
Phototourism: taking photos and showing the relationships between them. Shows a set of photos taken in St Mark's Square, Venice; you can zoom in and out of the images and see great levels of detail. Relationships between the photos have been inferred: features common to several photos are matched across them, then plotted in 3D space. Shows a nice demo of panning around a 360-degree view built from photos pasted together. Shows a camera view indicating where each photographer was standing when their photo was taken.
(I'm not clear how much of a manual process it is to take these photos and plot them in 3D space)
Shows a demo of photos from Flickr, all of Notre Dame cathedral. Shows the placement of all the cameras used to take these photos - all taken independently by different people. From this set of photos they build up a zoomable 3D reconstruction of the cathedral.
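The inference step behind this - relating common features between photos - typically starts with nearest-neighbour descriptor matching plus a ratio test to discard ambiguous pairings, before any 3D geometry is estimated. A toy sketch of just that matching step (real pipelines use SIFT-style descriptors and much more; this uses plain Euclidean distance on hypothetical descriptor vectors):

```python
from math import dist  # Euclidean distance, Python 3.8+

def match_features(desc_a, desc_b, ratio=0.8):
    """For each descriptor in photo A, find its nearest neighbour in
    photo B, keeping the match only if it clearly beats the runner-up
    (the ratio test). Returns (index_in_a, index_in_b) pairs."""
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted((dist(d, e), j) for j, e in enumerate(desc_b))
        (best_d, best_j), (second_d, _) = ranked[0], ranked[1]
        # Ambiguous matches (two near-equal candidates) are dropped.
        if best_d < ratio * second_d:
            matches.append((i, best_j))
    return matches
```

Matched points seen from several cameras can then be triangulated into 3D, which is what lets the photos be plotted in a common space.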
There aren't many cities on earth for which we have enough photos to reconstruct them; on the other hand, top-down approaches to mapping can't scale to the volume of visual information out there (or reach inside every building). Knitting together public spaces and views of the world with personal ones is the biggest promise of Photosynth.
Creating 3D environments out of video is easier than out of still images, thanks to the close correlation between sequential frames. "Video painting": take a camera, sweep it across a space, and get a reconstruction out.
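That close correlation is what makes frame-to-frame tracking cheap: a patch from one frame can be found in the next by searching only a small window around its old position. A minimal sum-of-squared-differences sketch (illustrative only; real systems track many features and estimate camera motion from them):

```python
def track_patch(prev, curr, top, left, size, search=3):
    """Find where a size x size patch from the previous frame (at
    row `top`, column `left`) moved in the current frame, by exhaustive
    SSD search over offsets within +/- `search` pixels. Frames are 2D
    lists of intensities; returns the best (dy, dx) offset."""
    def ssd(dy, dx):
        return sum(
            (prev[top + y][left + x] - curr[top + dy + y][left + dx + x]) ** 2
            for y in range(size) for x in range(size))
    # Small search window: consecutive video frames differ only slightly.
    return min(((dy, dx) for dy in range(-search, search + 1)
                         for dx in range(-search, search + 1)),
               key=lambda d: ssd(*d))
```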
Imagine a shop using their physical environment as a basis for their virtual environment.
They haven't released Photosynth yet, and they're not ready to announce a date, "but it won't be long now".
Conference organiser points out that connecting photos together, and connecting interiors to exteriors, raises significant privacy concerns.