The TED Ideas website has posted a great clip demonstrating Photosynth (and other visualization technologies), an application that composites 3D spatial renderings from a global pool of images tagged with metadata. The example of Notre Dame cathedral is truly mind-boggling. It links to photos people have taken and posted to sites like Flickr and positions them in a model of real space on top of, in front of, or beside other identified images. The result is a composite of the real object. The implications are far reaching (unless Microsoft Labs, which bought the company that created Photosynth a year ago, fails to deliver on its potential).
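For the curious, the core trick behind placing overlapping photos into a shared space is estimating how the cameras were positioned relative to each other. Here is a minimal toy sketch of that idea in Python with OpenCV (not Photosynth's actual pipeline); the image file names and the rough camera guess are placeholders:

```python
import cv2
import numpy as np

# Placeholder file names; substitute any two overlapping photos of the same landmark.
img1 = cv2.imread("notre_dame_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("notre_dame_2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe local features in each photo.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match features between the two photos and keep only the strong matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Assume a crude pinhole camera (focal length and principal point are guesses).
h, w = img1.shape
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)

# Recover the relative rotation and translation between the two camera positions.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

print("Relative rotation between the two shots:\n", R)
print("Relative translation direction:\n", t.ravel())
```

Do that across thousands of photos at once and you get the kind of composite model of Notre Dame shown in the clip.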
I think we will see compositing software like this ported into mapping software like Google Earth to model the world (and then the Universe), forming the basis for a self-creating, self-maintaining VR world. I can't wait.