Posts Tagged ‘openkinect’

Here’s the teaser for the performance I’ll present @ Borderline Biennale 2011 together with Stefano Moscardini and Lukas Zpira. There will be blood, steel, neural waves, computer vision and synths; if you can be around Lyon at the beginning of September, be there!


Read Full Post »

As you could imagine reading my last post, I’m working on an interactive floor using a projector + Kinect combo. It’s still too early to talk about the final installation, but I’ve just recorded a couple of videos of the current beta and wanted to share them.


Read Full Post »

The traditional limitations of augmented reality (as far as something that is a couple of years old can be traditional) can be summarized like this:

  • your virtual world lives on a marker. Even so-called markerless AR solutions make use of some kind of beautified marker in the form of a photograph, drawing, etc. The only exception that comes to my mind is PTAM, but it still needs camera calibration (read: waving the camera around so that the software can get an idea of the physical space). Having a physical object that pops out a virtual world can be a great feature, but in some scenarios you simply don’t want to carry a marker with you.
  • your virtual world does not really interact with the physical one. You can’t easily touch what’s on your marker or make it collide with something that does not live on a marker too.
  • there is no foreground/background segmentation. Virtual stuff is always in the foreground: you can’t stick your hand in front of it.

With the depth data captured by the Kinect you can avoid these limitations: you can project a 3D virtual world over the 3D physical world, you can make a virtual object collide with your desk or your hand, and you can walk between physical and virtual stuff and get a correct foreground/background segmentation.

Here’s a quick demo of this deep augmented reality idea: the virtual ball has x, y, z coordinates that also make sense in the physical room, and it can bounce on people and objects.
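The foreground/background segmentation part boils down to a per-pixel depth test. Here’s a minimal sketch of the idea, not the actual installation code: it assumes a Kinect-style depth map in millimeters where 0 means “no reading”, a pre-rendered depth map of the virtual scene, and a grayscale camera image for brevity (all the names are mine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Composite a virtual scene over the camera image: a virtual pixel is
// drawn only where it is closer to the camera than the physical surface,
// so real objects (your hand, your desk) correctly occlude it.
std::vector<std::uint8_t> compositeWithOcclusion(
        const std::vector<std::uint16_t>& physicalDepthMm, // Kinect depth, 0 = unknown
        const std::vector<std::uint16_t>& virtualDepthMm,  // virtual scene depth, 0 = empty
        const std::vector<std::uint8_t>& cameraPixels,     // grayscale camera image
        std::uint8_t virtualColor) {
    std::vector<std::uint8_t> out(cameraPixels);
    for (std::size_t i = 0; i < out.size(); ++i) {
        bool hasVirtual = virtualDepthMm[i] != 0;
        bool noPhysicalReading = physicalDepthMm[i] == 0;
        // draw the virtual pixel if it is in front of the physical surface
        if (hasVirtual && (noPhysicalReading || virtualDepthMm[i] < physicalDepthMm[i]))
            out[i] = virtualColor;
    }
    return out;
}
```

With a real RGB image you would do the same test per pixel and keep the virtual object’s color; the comparison itself doesn’t change.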

Read Full Post »

Obviously the RGB and depth cameras in the Kinect need to be calibrated in order to associate the correct pixel with the correct depxel (OK, it’s an ugly neologism, but you know what I mean). Also, every Kinect is different, so you can’t just take your friend’s calibration data and use it for your device: you have to do the dirty job yourself.

This morning I quickly created a simple program that makes calibration fast and painless. It’s OF friendly and based on Arturo‘s code; my code is *DIRTY* ‘cause it’s written in a stream-of-consciousness mood while doing 1k other things, but it works well enough, it pops out calibration data in XML format, and someone might find it useful, so here it is.

Read Full Post »
