Posts Tagged ‘ofxkinect’

Recently a client bought a Kinect to be used with an OpenFrameworks app I wrote for them; we were doing some standard depth tracking, so we expected a smooth ride, but a few seconds after the Kinect was plugged in, the application froze.
To keep it short, it seems that the Kinect model 1473 (the one you’ll find in shops these days) comes with a new firmware that auto-disconnects the camera after a few seconds, causing a freeze whenever you plug it into a computer and try to use it with libfreenect; this of course means that most creative coding toolkits are affected by the problem: I ran into it using ofxKinect, but it will also happen with the libfreenect-based Cinder block, Processing library, etc.
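
For context, the kind of depth tracking involved is nothing exotic; a minimal ofxKinect app along the lines of the sketch below (written from memory, not the client app itself) is already enough to hit the auto-disconnect freeze on a 1473 unit with an unpatched libfreenect:

```cpp
// Minimal ofxKinect depth grab -- a generic sketch, not the actual client app.
#include "ofMain.h"
#include "ofxKinect.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;

    void setup() {
        kinect.init();   // plain depth + RGB, no IR
        kinect.open();   // open the first available device
    }

    void update() {
        kinect.update(); // with the old libfreenect a 1473 drops out here after a few seconds
    }

    void draw() {
        kinect.drawDepth(0, 0, 640, 480); // grayscale depth image
    }

    void exit() {
        kinect.close();
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```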

Luckily Theo Watson already came up with a solution: you can find a fixed libfreenect here or, if you’re using OF, you can update to the latest version on GitHub.
The fix also works with the Kinect for Windows and, of course, it does not break compatibility with the older 1414 Kinects.
Finally, if you don’t know the model of your Kinect, this picture shows how to check it:

Read Full Post »

Athens Video Art Festival is over and I’m back in Italy (at least for a few days). To keep it short: George is a wonderful person and a very talented artist, people loved Trails, and here’s a short video of the installation:

Read Full Post »

Can you guess what’s going on here?

Read Full Post »

The traditional limitations of augmented reality (as far as something that is a couple of years old can be traditional) can be summarized like this:

  • your virtual world lives on a marker. Even so-called markerless AR solutions make use of some kind of beautified marker in the form of a photograph, drawing, etc. The only exception that comes to my mind is PTAM, but it still needs camera calibration (read: waving the camera around so that the software can get an idea of the physical space). Having a physical object that pops out a virtual world can be a great feature, but in some scenarios you simply don’t want to carry a marker with you.
  • your virtual world does not really interact with the physical one. You can’t easily touch what’s on your marker or make it collide with something that does not live on a marker too.
  • there is no foreground/background segmentation. Normally virtual stuff is always in the foreground; you can’t stick your hand in front of it.

With the depth data captured by the Kinect you can avoid these limitations: you can project a 3D virtual world over the 3D physical world, you can make a virtual object collide with your desk or your hand, and you can walk between physical and virtual stuff and get correct foreground/background segmentation.
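
The segmentation part boils down to a per-pixel depth comparison: wherever the Kinect sees something closer than the virtual geometry, the physical world wins. Here’s a hedged CPU sketch of that idea (the real thing would typically live in a shader, and the function and parameter names are made up for illustration):

```cpp
// Build an occlusion mask for the 640x480 depth image: 255 where the physical
// world is in front of the virtual object, 0 where the virtual object should be
// drawn on top. 'virtualDepthMm' would come from your renderer (e.g. the depth
// buffer converted to millimetres); here it is just a parameter of the sketch.
ofPixels buildOcclusionMask(ofxKinect& kinect, float virtualDepthMm) {
    ofPixels mask;
    mask.allocate(640, 480, 1); // Kinect depth resolution, single channel
    for (int y = 0; y < 480; y++) {
        for (int x = 0; x < 640; x++) {
            float d = kinect.getDistanceAt(x, y);  // millimetres, 0 = no reading
            bool physicalInFront = (d > 0 && d < virtualDepthMm);
            mask.setColor(x, y, physicalInFront ? ofColor(255) : ofColor(0));
        }
    }
    return mask;
}
```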

Here’s a quick demo of this deep augmented reality idea: the virtual ball has x, y, z coordinates that make sense in the physical room too, and it can bounce on people and objects.
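
For the bouncing part, the test is roughly this (a sketch with made-up names, everything in the Kinect’s camera space and in millimetres): look up the physical depth along the pixel the ball projects to, and if the ball has reached it, flip its velocity.

```cpp
// Very rough collision test between a virtual ball and the physical scene.
// (x, y) is the ball's centre projected onto the 640x480 depth image -- how you
// project it depends on your calibration, so that step is left out here.
void bounceOnPhysicalWorld(ofVec3f& pos, ofVec3f& vel, ofxKinect& kinect, int x, int y) {
    float physicalDepth = kinect.getDistanceAt(x, y); // millimetres, 0 = no reading
    if (physicalDepth > 0 && pos.z >= physicalDepth) {
        vel.z *= -1.0f;                 // bounce back towards the camera
        pos.z  = physicalDepth - 1.0f;  // push the ball just outside the surface
    }
}
```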

Read Full Post »


Obviously the RGB and depth cameras in the Kinect need to be calibrated in order to associate the correct pixel with the correct depxel (ok, it’s an ugly neologism, but you know what I mean). Also, every Kinect is different, so you can’t just take your friend’s calibration data and use it for your device: you have to do the dirty job yourself.

This morning I quickly created a simple program that makes calibration fast and painless. It’s OF friendly and based on Arturo‘s code; my code is *DIRTY* ‘cause it was written in a stream-of-consciousness mood while doing 1k other things, but it works well enough, it pops out calibration data in XML format, and someone might find it useful, so here it is.
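
If you want an idea of how the output could be consumed, here’s a hedged sketch of reading calibration data back with ofxXmlSettings; the tag names are invented for illustration, so check the file the tool actually writes (a real calibration would also carry a rotation, omitted here for brevity):

```cpp
// Reading RGB/depth calibration back from XML -- tag names are hypothetical.
#include "ofxXmlSettings.h"

struct KinectCalibration {
    float fx, fy, cx, cy;   // depth camera intrinsics
    ofVec3f t;              // translation from the depth to the RGB camera
};

KinectCalibration loadCalibration(const std::string& path) {
    KinectCalibration c;
    ofxXmlSettings xml;
    xml.loadFile(path);
    c.fx  = xml.getValue("calibration:depth:fx", 0.0);
    c.fy  = xml.getValue("calibration:depth:fy", 0.0);
    c.cx  = xml.getValue("calibration:depth:cx", 0.0);
    c.cy  = xml.getValue("calibration:depth:cy", 0.0);
    c.t.x = xml.getValue("calibration:extrinsics:tx", 0.0);
    c.t.y = xml.getValue("calibration:extrinsics:ty", 0.0);
    c.t.z = xml.getValue("calibration:extrinsics:tz", 0.0);
    return c;
}
```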

Read Full Post »
