The traditional limitations of augmented reality (as far as something that is a couple of years old can be traditional) can be summarized like this:
- your virtual world lives on a marker. Even so-called markerless AR solutions make use of some kind of beautified marker in the form of a photograph, drawing, etc. The only exception that comes to my mind is PTAM, but it still needs camera calibration (read: waving the camera around so that the software can build an idea of the physical space). Having a physical object that pops out a virtual world can be a great feature, but in some scenarios you simply don’t want to carry a marker with you.
- your virtual world does not really interact with the physical one. You can’t easily touch what’s on your marker or make it collide with something that does not live on a marker too.
- there is no foreground/background segmentation. Virtual stuff is always in the foreground: you can’t stick your hand in front of it.
With the depth data captured by the Kinect you can avoid these limitations: you can project a 3D virtual world onto the 3D physical world, you can make a virtual object collide with your desk or your hand, you can walk between physical and virtual stuff and get correct foreground/background segmentation.
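The segmentation part boils down to a per-pixel depth test. A minimal sketch of the idea, assuming aligned color, depth, and virtual-render buffers of the same resolution (the function name and the millimetre convention are mine, not from any specific library):

```python
import numpy as np

def composite(color_frame, depth_frame, virtual_rgb, virtual_depth):
    """Per-pixel occlusion: a virtual pixel is shown only where it is
    closer to the camera than the physical surface the Kinect measured.
    color_frame, virtual_rgb: HxWx3 images; depth_frame, virtual_depth:
    HxW depth buffers (hypothetically in millimetres)."""
    # The Kinect reports 0 where it has no depth reading; treat those
    # pixels as infinitely far so virtual content stays visible there.
    physical = np.where(depth_frame == 0, np.inf, depth_frame)
    mask = virtual_depth < physical      # True where the virtual pixel wins
    out = color_frame.copy()
    out[mask] = virtual_rgb[mask]        # your hand in front occludes the ball
    return out
```

With this test in place, sticking your hand between the camera and the virtual object makes the hand pixels win, which is exactly the occlusion that classic marker-based AR cannot do.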
Here’s a quick demo of this deep augmented reality idea: the virtual ball has x, y, z coordinates that make sense in the physical room too, and it can bounce on people and objects.