Grasshopper

algorithmic modeling for Rhino

Now using two webcams (the second is mounted above, out of view) to get 3D information from my gestures. Grasshopper detects which object in the Rhino scene my hand is closest to; pausing on that object selects it in Rhino itself, and moving out of camera view deselects everything. The goal is to make Rhino's existing commands workable through gestures.
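
For anyone curious how the dwell-selection part might be wired up, here is a rough GhPython-style sketch of the idea: take a 3D hand point (assumed to be already triangulated from the two Firefly camera streams, which is not shown here), find the nearest Rhino object by bounding-box center, and select it once the hand has paused on it long enough. The names `hand_point`, `DWELL_SECONDS`, `update`, and the `state` dictionary are placeholders for illustration, not the actual definition used in the video.

```python
import time
import rhinoscriptsyntax as rs
import Rhino.Geometry as rg

DWELL_SECONDS = 1.5          # how long to pause near an object before selecting it

# Persistent state between updates (hypothetical; in a GhPython component this
# would normally live in scriptcontext.sticky so it survives re-solves).
state = {"candidate": None, "since": 0.0}

def bbox_center(obj_id):
    """Approximate an object's location by the center of its bounding box."""
    corners = rs.BoundingBox(obj_id)
    if not corners:
        return None
    n = len(corners)
    return rg.Point3d(sum(p.X for p in corners) / n,
                      sum(p.Y for p in corners) / n,
                      sum(p.Z for p in corners) / n)

def closest_object(hand_point):
    """Return the id of the document object nearest to the tracked hand point."""
    best_id, best_dist = None, float("inf")
    for obj_id in rs.AllObjects():
        center = bbox_center(obj_id)
        if center is None:
            continue
        d = hand_point.DistanceTo(center)
        if d < best_dist:
            best_id, best_dist = obj_id, d
    return best_id

def update(hand_point):
    """Call once per solution with the current 3D hand position (or None)."""
    if hand_point is None:                    # hand left the camera view
        rs.UnselectAllObjects()
        state["candidate"] = None
        return
    nearest = closest_object(hand_point)
    if nearest != state["candidate"]:         # new candidate: restart the dwell timer
        state["candidate"] = nearest
        state["since"] = time.time()
    elif time.time() - state["since"] >= DWELL_SECONDS:
        rs.UnselectAllObjects()
        rs.SelectObject(nearest)              # dwell satisfied: select in Rhino
```

The dwell timer is what keeps the selection from flickering as the hand drifts between objects; anything shorter than the threshold is treated as pass-through movement rather than intent.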

Major credit goes to Andy Payne for his Firefly components, and to Andrew Heumann and Chris Tietjen for scripting help.

Comment by Andy Payne on February 27, 2013 at 10:04am

Very cool. BTW, you're going to love the LEAP controller :)

Comment by Scott Penman on February 27, 2013 at 8:12am

Thanks - I've fooled around a bit with gestural stuff before: http://www.grasshopper3d.com/video/object-creation-line-and-extrusion

I might go back to it eventually, but at the moment I'm trying to avoid setting up a preprogrammed set of gestural commands. It would be feasible to do a simple set, but Rhino's command set is just too large and complex to be completely gestural...

Comment by Daniel Piker on February 27, 2013 at 3:30am

Great work!

How about some movement-based gestures as well, such as tapping or circling?

Comment by Nick Tyrer on February 27, 2013 at 3:13am

Very impressive, dude. Minority Report. Though I'm slightly distracted by that book. It's huge!
