
Hi all,

First of all, David has already responded to some of the ideas I was planning to raise here.

With 1.0 getting close, I thought I'd come back out from lurking on these forums and give a bit of feedback:

Last year a group of graduate students at McGill did this: http://web.farmmresearch.com/pavilion/

I was a member of this group and one of the people involved in creating the Grasshopper file (see the screenshot at the bottom of this post; please excuse any spelling errors you find). There were effectively no drawings: only the Grasshopper script, a bit of post-processing in SolidWorks, CNC fabrication, and hand assembly using labels etched at the time of CNC fabrication.

As you can see from the screenshot, the GH file is a hellish nightmare, right on the edge of the memory limit of Rhino 4 (which we had to use because of a minor difference in the way Rhino 5's GH handled sweeping). Towards the end, it took about 15 minutes of computer time just to get to the point where we could work on the script, because of all the calculations needed in the stack.

Here is a list of things that would have really helped us:

1. More programming experience. I had a bit; most others didn't. David can't help too much with this in a direct way.

2. Clusters: at the time we started, clusters didn't work well; by the time they did, we were almost done. Oh well. I do have some ideas about how it would be nifty for clusters to have version control, so that the master assembly could revert things that cause other things to break. I'm sure this can be done manually right now, but I have never tried it. Could GH be plumbed into something like SVN?

3. Concurrent editing, like a video game. It would, I imagine, require some sort of cloud file storage and server solution, but there were MANY hours where three or more people were crowded around one computer trying to solve a problem. Allowing everyone to have their own view of the file that stayed in sync would just have been more comfortable...

4. Progress bars, and the UI and display running in a separate thread from the solver. David's addition of Esc let us sometimes save ourselves when we connected a wire wrong, but often not. It would be cool to be able to interact with the Grasshopper file right away and still know that the solver was working away (especially when a single run of the file took 15 minutes). Progress bars would be nice on any component taking more than a second or two to run (which was the case with many of our components, especially the ones we wrote ourselves), but they only make sense to add after the threads for the solver and the UI have been separated. A rough sketch of the threading pattern I mean follows below.
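This is not how GH is built internally, just a minimal .NET sketch of the pattern, assuming only Task.Run and IProgress<T> from the standard library; the solver here is a stand-in:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class SolverSketch
{
    static async Task Main()
    {
        // Progress<T> marshals reports back to the thread that created it,
        // which is what would let a UI repaint while the solve continues.
        var progress = new Progress<int>(p => Console.WriteLine($"solving... {p}%"));

        await Task.Run(() => Solve(progress));   // long solve off the UI thread
        Console.WriteLine("done; the UI thread stayed free the whole time");
    }

    static void Solve(IProgress<int> progress)
    {
        for (int step = 0; step <= 100; step += 10)
        {
            Thread.Sleep(200);        // stand-in for a slow component
            progress.Report(step);    // would drive a per-component progress bar
        }
    }
}
```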

So, anyone else done a really huge GH file? Thoughts?


Replies to This Discussion

That would be very useful. Would the cache be binary? Unity has a similar system called AssetBundles that works really well.

The file would be written with GH_IO.dll, so even though binary will probably be the default, XML would also be supported. I probably won't do this for GH1 as the data type SDK is too fragile. It should not be a problem for GH2, though.
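As a rough illustration (not a committed design), writing such a cache with GH_IO.dll could look like the sketch below. GH_Archive and its serialization methods are real; the *.ghcache chunk layout is invented for the example:

```csharp
using GH_IO.Serialization;

static class GhCacheSketch
{
    // Hypothetical *.ghcache writer; the "ghcache" chunk name and the
    // "count"/"value" items are made up purely for illustration.
    public static void WriteCache(string binaryPath, string xmlPath, double[] values)
    {
        var archive = new GH_Archive();
        var root = archive.CreateTopLevelNode("ghcache");
        root.SetInt32("count", values.Length);
        for (int i = 0; i < values.Length; i++)
            root.SetDouble("value", i, values[i]);   // name + index items

        // One archive, two on-disk formats:
        System.IO.File.WriteAllBytes(binaryPath, archive.Serialize_Binary());
        System.IO.File.WriteAllText(xmlPath, archive.Serialize_Xml());
    }
}
```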

--

David Rutten

david@mcneel.com

Tirol, Austria

Perhaps you could simply extend the enable/disable components option so that the furthest downstream components had the option to retain the last calculated info...

On the face of it, the idea of having cache files as xrefs is pretty cool. If nothing else it would keep the file size down. In addition, if you emailed your GH file to someone without the cache, or deleted the cache file locally, maybe GH could detect that and know to re-solve that section of code....


I was actually imagining it working between files. You'd have one file which generates your basic surface; that surface gets cached. The next file generates a lattice over the surface; cache again. File number 3 creates the necessary beams over the lattice; cache. And so on and so forth.

You could of course put it all in the same file as well, nothing stopping you. But it seems like the [Data Dam] would be less work in that case.

Another benefit is that you can easily send other people only parts of your algorithms and provide cache files for the parts you don't want to share.

Anyway, it seems like people like this idea so it'll probably make it into GH2 early on.

--

David Rutten

david@mcneel.com

Tirol, Austria

The *.ghcache sounds like a very interesting idea.

How would you imagine accessing the data? Would there be something like a drop-down, a rack, a file-linked cluster, a sub-menu, a save-state-like menu, something completely different, ...?

I imagine one component with variable inputs. You plug in any data you want. Then you get to click on the component itself to save the data to a file. Or maybe saving is automated, not sure yet.

Another component would be the inverse. Specify a file and it reads the data and populates the outputs.
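Sketching the inverse in the same speculative spirit, using the invented chunk layout from the writer sketch earlier in the thread (nothing final):

```csharp
using GH_IO.Serialization;

static class GhCacheReaderSketch
{
    // Hypothetical *.ghcache reader, mirroring the writer sketched above.
    public static double[] ReadCache(string path)
    {
        var archive = new GH_Archive();
        if (!archive.ReadFromFile(path))     // false if the file can't be parsed
            return null;

        var root = archive.GetRootNode;      // the "ghcache" chunk written earlier
        int count = root.GetInt32("count");
        var values = new double[count];
        for (int i = 0; i < count; i++)
            values[i] = root.GetDouble("value", i);
        return values;
    }
}
```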

--

David Rutten

david@mcneel.com

Tirol, Austria

Hello:

This is a topic I have wanted to write about for quite a long time. I have also made huge files and encountered problems similar to yours. Attached is one of the large files used in my thesis.

To tackle large definitions and their extremely long and confusing wires, I use hidden wires and panels to create variables, so that I can partition large files into sections without having wires everywhere.

Here is the way I do it:

I take the result of a component and connect the corresponding primitive to it. After that, I put a panel on top of it with the name of the variable and leave the default yellow color.

Then I create a copy of the two originals, connect the first to the second, and make the wire connecting them hidden. After that I change the color of the second panel to some other color (in my case green), so that I know it is a child object of the first.

I have attached an example also.

This allows me to cut definitions into sections and improve their clarity. Having all those little green panels telling you what is what, and the type of primitive you are using, helps in documenting and understanding the definition.

Also, you do not need to bake parts. That said, auto-bake and auto-read is another trick to speed up your definitions.
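In case it helps anyone, the heart of an auto-bake is tiny. A minimal sketch from inside a C# script component, assuming nothing beyond RhinoCommon (the input names are just for illustration):

```csharp
// Inside a C# script component: bake the input curve whenever "bake" is true.
private void RunScript(Rhino.Geometry.Curve crv, bool bake, ref object id)
{
    if (bake && crv != null)
    {
        var doc = Rhino.RhinoDoc.ActiveDoc;   // the open Rhino document
        id = doc.Objects.AddCurve(crv);       // returns the new object's Guid
    }
}
```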

This is the way I tackle large definitions. I know it is cumbersome, and probably everyone has developed or will develop their own notation, but I believe it is necessary to define this sort of variable in large definitions.

Perhaps David, given these examples, will find a better solution or develop something in the UI.

Regards,

Diego

And here are the example Grasshopper file and another very large file which uses my "variable definition technique"; the latter required its own post due to its 5 MB file size. I also had to cut a bit in order to fit within the 5 MB limit.


Hi,

I know this thread is already two months old, but I'm sure many people are struggling with this issue every day. I have developed a GH add-on that tries to address the issue as described above.

Elefront is an add-on that is a sort of enhanced version of the [geometry cache] component. It allows the user to bake objects to Rhino while at the same time storing an unlimited amount of meta-data INSIDE the Rhino object. These objects can then be referenced back into GH in a different session. Referencing can be done simply by a nickname that was provided upon baking, much like [geometry cache], but it can also happen based on any of the user attributes (meta-data) stored inside these objects.

This way, a Rhino model can become more of a database than just a model. To keep your models clean and to enable collaboration on your GH projects, the reference components also work on objects that are brought into your Rhino model as a worksession.

This way you can divide your Grasshopper process into manageable chunks that output data models which can be queried like a database.

The workflow would be very similar to what David is describing in the *.ghcache proposal, except that the *.ghcache file would be an actual Rhino model that can be opened and interpreted by a human, even without the use of GH. These intermediate models can also be used for several different purposes. For instance, one could "bake" a wireframe of a steel structure that has all attributes for all steel members embedded in the wires. The wireframe can be used by a structural engineer for calculation purposes, while someone else interprets the data to define IFC models or Rhino surface models from the very same wireframe.
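For readers who want the flavor of this before installing anything: the following is not Elefront's actual API, just a minimal RhinoCommon sketch of the mechanism described above, baking key/value meta-data into object attributes and then querying it back (all names are illustrative):

```csharp
using System;
using System.Collections.Generic;
using Rhino;
using Rhino.DocObjects;
using Rhino.Geometry;

static class MetaDataBakeSketch
{
    // Bake a curve with a key/value pair stored in its attributes.
    public static Guid BakeWithData(RhinoDoc doc, Curve crv, string key, string value)
    {
        var attr = new ObjectAttributes();
        attr.SetUserString(key, value);       // meta-data travels with the object
        return doc.Objects.AddCurve(crv, attr);
    }

    // Reference objects back by querying that meta-data, database-style.
    public static IEnumerable<RhinoObject> FindByData(RhinoDoc doc, string key, string value)
    {
        foreach (RhinoObject obj in doc.Objects)
            if (obj.Attributes.GetUserString(key) == value)
                yield return obj;
    }
}
```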

The add-on is free and can be downloaded here:

http://www.food4rhino.com/project/elefront

Please let me know if you find this interesting.

Best regards,

Ramon van der Heijden

I think I would need to try out this workflow to know whether it would be satisfying. Enhanced bake tools would really have aided the fabrication steps for us. Pragmatically, the structural engineer seems like a good example, although it would be cooler to me if they were able to set up their simulation and analysis in the GH environment, so that the structural solution space could be explored with minimal barriers. On the other hand, if things can exit AND re-enter the GH environment in an automated and non-breaking way, then Elefront or *.ghcache could be a great solution.

To recapitulate, the things that worry me about baking-based approaches are:

  1. Breaking the ability to rebuild the entire solution from scratch easily/automatically.
  2. Encouraging collaborators to write fragile GH/scripted subsections because they come not to expect variance in the inputs (generally a problem on any project with novice programmers, like we were).
