I finally had some time to rework the chromaSampler:
I mainly made some minor tweaks to the code (mouseless colour picking, timed snapshots of the painting in progress, live thresholding adjustment). Since some people were interested in the source code, I posted it here; it’s licensed under BSD, so you can use it as you wish, just drop me a line if you decide to make something cool out of it 🙂
I also put together a brief description to introduce the toy at exhibitions, shows, etc. Here it is:
chromaSampler is something between a groovebox for streaming pixels, a generative action painting experience and a mixed-reality mirror.
Visitors, captured by a camera and mirrored on a screen, can sample colours from the objects in their pockets, their clothes, or whatever else they wish; this way they can create impromptu virtual brushes that can be waved in the air to paint the digital canvas offered by the screen.
Every single stroke painted by every visitor is absolutely unique, since the shape, texture and colour of the virtual brush that generated it are direct consequences of the physical attributes of the unique object the visitor chose to sample: this creates a deep correlation between the exhibition context, the individuality of the people who choose to visit the installation, and the resulting artwork.
Technically speaking, chromaSampler takes simple computer vision techniques (originally developed for tasks like driving robots or surveillance) and bends them to create an expressive tool that samples elements from its physical and emotional surroundings, trying to track their psychogeographic imprint.
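To give an idea of the thresholding involved, here is a minimal sketch of colour-distance masking in Python with numpy. This is not the actual chromaSampler code (which is in the posted source); the function name and the tolerance value are my own assumptions, standing in for the live-adjustable threshold mentioned above.

```python
import numpy as np

def colour_mask(frame, sampled_rgb, tolerance=40):
    """Mark the pixels of a camera frame that are close to a sampled colour.

    frame:       H x W x 3 uint8 array (one camera frame)
    sampled_rgb: the colour the visitor picked, e.g. (255, 0, 0)
    tolerance:   max Euclidean distance in RGB space (live-adjustable)
    Returns a boolean H x W mask: the footprint of the "virtual brush".
    """
    diff = frame.astype(np.int32) - np.asarray(sampled_rgb, dtype=np.int32)
    dist = np.sqrt((diff ** 2).sum(axis=2))
    return dist < tolerance

# Tiny demo: a 2x2 "frame" with two red-ish pixels
frame = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[250, 10, 5], [0, 0, 255]]], dtype=np.uint8)
mask = colour_mask(frame, (255, 0, 0), tolerance=40)
# The red-ish pixels in the left column end up inside the mask
```

Raising the tolerance makes the brush grab a wider range of shades from the sampled object; lowering it keeps only near-exact matches, which is why adjusting it live changes the character of the strokes.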
Finally, for those who did not see the original video: