Generative Modeling for Rhino
We have been working on a method of interacting with the Grasshopper canvas through intuitive multi-touch gestures, registered by the RGBD camera on the Microsoft Kinect. The Kinect's depth resolution is sufficient to detect touch events on an arbitrary surface, so we've recreated a giant canvas using a projector, a mirror, and a pane of tinted glass. So far, we've implemented the basic canvas navigation features by sending mouse and keyboard events to Grasshopper; these are just the first in a host of potential gestures.
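The post doesn't describe the touch-detection algorithm itself, but the standard approach for depth-camera touch sensing on an arbitrary surface is to capture a background depth image of the empty surface, then flag pixels that sit slightly closer to the camera than that background (a fingertip hovers a few millimetres above the glass). The sketch below illustrates this idea; the function name, the 5–30 mm band, and the blob-area threshold are our own assumptions, not details from the project.

```python
import numpy as np

def detect_touches(background: np.ndarray, frame: np.ndarray,
                   min_mm: float = 5.0, max_mm: float = 30.0,
                   min_area: int = 20):
    """Return (row, col) centroids of touch blobs in a depth frame.

    A pixel is a touch candidate when it is closer to the camera than
    the empty-surface background by 5-30 mm (thresholds are illustrative).
    """
    diff = background - frame          # positive where something is above the surface
    mask = (diff > min_mm) & (diff < max_mm)

    # Naive 4-connected flood fill to group candidate pixels into blobs;
    # small blobs (depth noise) are discarded via min_area.
    labels = np.zeros(mask.shape, dtype=int)
    label = 0
    touches = []
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                label += 1
                labels[r, c] = label
                stack, pixels = [(r, c)], []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = label
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    touches.append((int(np.mean(ys)), int(np.mean(xs))))
    return touches

# Synthetic example: a flat surface 1000 mm away, with one finger-sized
# blob 15 mm above it.
background = np.full((40, 40), 1000.0)
frame = background.copy()
frame[10:16, 10:16] = 985.0
print(detect_touches(background, frame))  # one centroid near (12, 12)
```

Each detected centroid would then be mapped from depth-image coordinates into projected-canvas coordinates and injected as a mouse event, which is presumably how the pan and zoom gestures drive Grasshopper.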
Read more at: