
Vomitino

Vomitino is a “tongue-in-cheek” firmware for the HexBright flashlight.
The word “vomitino”, in Italian, means something like “little puker” and was chosen for two reasons:

– it follows the -ino suffix tradition, so common in Arduino-based projects;

– it refers to the fact that the firmware implements a “stroboscopic dazzle mode”, inspired by the LED Incapacitator, a security device that is rumoured to be able to stop a hypothetical attacker by inducing vomiting and disorientation.

While I used the same frequency range, the Vomitino lacks some of the features of its inspiration (e.g. multicoloured light). It’s meant to be just a joke and should not be considered a “self-defence tool” at all: in the end it’s just a very bright and somewhat annoying strobe light, like the ones you find in a disco :)
If you’re interested in the original device, go check Lady Ada’s complete replica. If you’re curious, but don’t want to invest too much time and money into it, you can grab my humbler version on github and flash it onto your HexBright.
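
To give a rough idea of what the core of a “stroboscopic dazzle mode” boils down to, here is a stripped-down sketch; it’s an illustration, not the actual Vomitino code: the pin names follow the stock HexBright factory firmware, and the 7–13 Hz band below is a placeholder (check the repo for the real range):

// toggle the LED driver at a randomly drifting low frequency
#define DPIN_PWR      8   // power latch: keeps the light on
#define DPIN_DRV_MODE 9   // driver brightness mode (HIGH = max)
#define DPIN_DRV_EN   10  // driver enable

void setup() {
  pinMode(DPIN_PWR, OUTPUT);
  digitalWrite(DPIN_PWR, HIGH);       // latch power so the light stays on
  pinMode(DPIN_DRV_MODE, OUTPUT);
  digitalWrite(DPIN_DRV_MODE, HIGH);  // full brightness
  pinMode(DPIN_DRV_EN, OUTPUT);
  randomSeed(analogRead(0));          // seed from a floating pin
}

void loop() {
  // pick a new flash frequency every cycle so the strobe never settles
  unsigned long halfPeriod = 1000UL / (2 * random(7, 14)); // ~7-13 Hz
  digitalWrite(DPIN_DRV_EN, HIGH);
  delay(halfPeriod);
  digitalWrite(DPIN_DRV_EN, LOW);
  delay(halfPeriod);
}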

Keep in mind that some people have weird reactions to flashing lights, some simply do not like them, and I’m not responsible if you nag people with my code :)

Expression-Swapping demo

A few weeks ago I met a young designer interested in digitally augmented mirrors: in particular, he was interested in messing with people’s faces. Since this is the kind of stuff I have some experience with, we ordered a couple of drinks and brainstormed about how he could do this and that.
I ended up writing a little demo showing how to easily change parts of people’s faces in real time and, since I think it could be helpful to other people too, I wanted to share it and quickly explain how it works.
Basically, I track the user’s face with Jason Saragih’s library and create a mesh that can be overlaid on the lower part of the tracker’s face mesh; I can then use this “partial mesh” to create a UV map from the user’s mouth expression, or to blend a saved mouth expression into the live feed.
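
If you want to play with the same idea before digging into the repo, here is a very rough sketch of the “blend a saved expression into the live feed” half; it assumes Kyle McDonald’s ofxFaceTracker addon (which wraps Saragih’s tracker), the savedMesh/savedFrame names are mine, and for simplicity it warps the whole face mesh instead of the lower-face partial mesh the demo actually uses:

#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;
    ofMesh savedMesh;   // face mesh captured on keypress (image-space positions)
    ofImage savedFrame; // camera frame captured at the same moment

    void setup(){
        cam.setup(640, 480);
        tracker.setup();
    }

    void update(){
        cam.update();
        if(cam.isFrameNew()) tracker.update(ofxCv::toCv(cam));
    }

    void keyPressed(int key){
        if(key == ' ' && tracker.getFound()){
            savedMesh = tracker.getImageMesh();
            savedFrame.setFromPixels(cam.getPixels());
        }
    }

    void draw(){
        cam.draw(0, 0);
        if(!tracker.getFound() || savedMesh.getNumVertices() == 0) return;

        // live and saved meshes share the same topology, so the saved
        // image-space positions can be reused as texture coordinates into
        // the saved frame: the old expression gets warped onto the live face
        // (OF's default rectangle textures take pixel coordinates).
        ofMesh live = tracker.getImageMesh();
        live.clearTexCoords();
        for(size_t i = 0; i < savedMesh.getNumVertices(); i++){
            live.addTexCoord(glm::vec2(savedMesh.getVertex(i)));
        }
        savedFrame.getTexture().bind();
        live.draw();
        savedFrame.getTexture().unbind();
    }
};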

You can find the source code on my github and here’s a video showing how it works:

Recently I found some time to fiddle a little with CSS3D and I took the chance to restyle the company home page a bit; normally I don’t do web-based stuff, but I admit that all this super-performant new tech (I’m mainly thinking of WebGL and the possibility of using some GPU power in a webpage) is making JavaScript interesting again :)

While I’m here, I’ll also make a quick summary of all the things that happened in the last months and that I was too busy (maybe also a little lazy?) to write about:

– During Milan’s Design Week I had the opportunity to give a hand with the setup of M.I.T. Tangible Media Group’s Transform; needless to say, it was an interesting toy to play with and a great way to spend a week with stimulating people;

– a good part of the spring was spent on art projects, including a software tool for Eileen Cowin, a couple of dynamic sculptures for George Theonas and some experimental real-time visuals for an upcoming Empress Stah show; a couple of SPECTRE installations also travelled to a lovely gallery in Paris.

Right now I’m finishing a VJ tool making use of laser projectors and working on some new computer vision stuff.

Performing with Robots


Yesterday Marco Tempest was on TED’s stage again, but this time he was not alone; he performed together with our new creation (creature?): EDI, the Magic Robot!

EDI is a heavily customised Baxter robot: we created a set of custom manipulators for him, a few top-secret hardware attachments (you know, for magic ;) ), and a complete “software brain” that enabled him to display a personality, to train with humans and to learn from them.

While waiting for the official video to be released, you can read the post on the TED blog.

A final note: as you can imagine, giving life to EDI was an exciting but complex job, so I think it’s not surprising that Marco, David (our robotics guru, borrowed directly from MIT’s Media Lab) and I were a little apprehensive about this first live performance. Given the common belief that machines are cold and unfeeling, it might surprise you that EDI was anxious too: if you don’t believe me, have a look at the following behind-the-scenes video ;)

Today I was writing a function to save a specific configuration file from an OF application and I noticed that ofSystemSaveDialog() (the function commonly used to open a save dialog) does not allow me to specify a default save path.

Since I wanted my files saved in a specific location, I quickly wrote a custom function that includes a path argument; it’s super easy and Mac-only (Objective-C++), but I thought someone might find it useful, so here it is:


// Objective-C++: this needs to live in a .mm file, with Cocoa available
ofFileDialogResult customSaveDialog(string defaultName, string messageName, string defaultPath){
    ofFileDialogResult dr;

    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
    NSSavePanel * saveDialog = [NSSavePanel savePanel];

    // message shown at the top of the panel
    [saveDialog setMessage:[NSString stringWithUTF8String:messageName.c_str()]];

    // this is the part ofSystemSaveDialog() lacks: a default directory
    if(!defaultPath.empty()){
        NSString * s = [NSString stringWithUTF8String:defaultPath.c_str()];
        s = [[s stringByExpandingTildeInPath] stringByResolvingSymlinksInPath];
        NSURL * defaultPathUrl = [NSURL fileURLWithPath:s];
        [saveDialog setDirectoryURL:defaultPathUrl];
    }

    // pre-filled file name
    [saveDialog setNameFieldStringValue:[NSString stringWithUTF8String:defaultName.c_str()]];

    NSInteger buttonClicked = [saveDialog runModal];

    // give focus back to the app window once the panel is dismissed
    NSWindow * appWindow = (NSWindow *)ofGetCocoaWindow();
    if(appWindow){
        [appWindow makeKeyAndOrderFront:nil];
    }

    if(buttonClicked == NSFileHandlingPanelOKButton){
        dr.filePath = string([[[saveDialog URL] path] UTF8String]);
    }
    [pool drain];

    if(dr.filePath.length() > 0){
        dr.bSuccess = true;
        dr.fileName = ofFilePath::getFileName(dr.filePath);
    }
    return dr;
}
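
Usage is the same as the built-in dialog helpers; something like this (the path below is just an example):

// hypothetical usage from inside an OF app (remember: .mm file)
ofFileDialogResult res = customSaveDialog("settings.xml", "Save configuration as", "~/Documents/MyApp");
if(res.bSuccess){
    ofLogNotice("save") << "writing to " << res.filePath;
}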

Recently a client bought a Kinect to be used with an OpenFrameworks app I wrote for them; we were doing some plain depth tracking, so we expected a smooth ride, but a few seconds after the Kinect was plugged in, the application froze.
To keep it short: the Kinect model 1473 (the one you’ll find in shops these days) ships with a new firmware that auto-disconnects the camera after a few seconds, causing a freeze whenever you plug it into a computer and try to use it with libfreenect. This of course means that most creative coding toolkits are affected by the problem: I ran into it using ofxKinect, but it will also happen with the libfreenect-based Cinder Block, Processing library, etc.
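
For reference, the freeze shows up with a perfectly ordinary ofxKinect setup; a minimal sketch like the following is enough to trigger it with an un-patched libfreenect and a 1473 unit:

#include "ofMain.h"
#include "ofxKinect.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;

    void setup(){
        kinect.init(); // depth + RGB
        kinect.open(); // with the old libfreenect, a model 1473 disconnects shortly after this
    }
    void update(){
        kinect.update();
    }
    void draw(){
        kinect.drawDepth(0, 0, 640, 480);
    }
};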

Luckily Theo Watson already came up with a solution: you can find a fixed libfreenect here or, if you’re using OF, you can update to the latest version on github.
The fix also works with the Kinect for Windows and, of course, it will not break compatibility with the older 1414 Kinects.
Finally, if you don’t know the model of your Kinect, this picture shows how to check it:

I know it’s been a while since my last post, but I’ve been really busy with many, many projects.
Anyway, I just wanted to quickly mention a brand-new interactive digital signage tool I developed for the folks at EasySoft: it’s essentially an augmented-reality jukebox where you can load a client’s media assets (logos, video clips, …), select an interaction model (computer vision algorithm + particle system style) and watch people play on your LED wall of choice.
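
To make the “interaction model” idea a little more concrete, you can think of it as a simple pairing of a vision trigger and a particle look; this is purely illustrative, none of the names below come from the actual tool:

// hypothetical sketch of the configuration idea, not EasySoft's real code
#include <string>
#include <vector>

struct InteractionModel {
    std::string cvAlgorithm;    // e.g. "optical-flow", "contour-tracking"
    std::string particleStyle;  // e.g. "snow", "sparks", "confetti"
};

struct ClientSetup {
    std::vector<std::string> mediaAssets; // logos, video clips, ...
    InteractionModel interaction;         // how people on camera drive the particles
};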

We just had a Christmas-themed test run at Stazione Termini, Rome’s main station, and people seemed to enjoy it.
