Custom (VB, C#, Python, etc.) Component that loads a cluster and/or .gh file?

Hello everyone!

I'm trying to create a custom 'metacomponent' that loads a cluster and/or a .gh file. Given a .ghcluster, .gh, or .ghx file, is there any way to create a custom component that can load/parse the Grasshopper file and effectively perform the calculations done by the cluster?

The idea is to create a custom database that loads pre-made clusters, so that this meta-component can easily switch between operating as the different clusters. A team of collaborators would then be able to easily switch between all the clusters made by each other, without having to swap files / load clusters over, etc.

I'm proficient with Python/VB.NET/C#, and have made some custom components before; I'm just wondering if this is completely outside the bounds of the SDK, or whether there's any way it's possible. I could imagine a Python component that loads other Python code snippets, but I'd like to do this specifically with Grasshopper components. Even a component that 'summons' or 'bakes' the components onto the Grasshopper canvas would be okay, although not as ideal.

Thanks!

Dan


Replies to This Discussion

I think it should be possible. GH_Cluster is defined in the core DLL, so you have access to it from a VB/C#/Python script. It's also not too difficult to set up a new GH_Document in memory, insert a cluster, provide it with some data, have it solve itself and harvest the outputs. There'll be a lot of code involved of course, but it's probably not very difficult code.

Clusters are quite different in the next release, so anything you write now may well break when 0.9.0050 goes out.

I'm not quite sure what the SDK of the current cluster object looks like, but you can easily create a new cluster from a document like so:

using Grasshopper.Kernel;
using Grasshopper.Kernel.Special;

...

GH_Cluster cluster = new GH_Cluster();
cluster.CreateFromFilePath(@"C:\someghfile.ghx");

  - or -

cluster.CreateFromDocument(someGHDocumentYouAlreadyHave);

Then you need to run the cluster. I think it's best to just put it in a brand new document.

GH_Document doc = new GH_Document();
doc.AddObject(cluster, false);

Next you'll have to create some parameters and copy data into them, add them to the document as well, and hook them up to the cluster inputs.
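A minimal, untested sketch of that wiring step might look something like this (the parameter type, path and value are just placeholders; 'cluster' and 'doc' are the objects from above):

using Grasshopper.Kernel.Data;
using Grasshopper.Kernel.Parameters;
using Grasshopper.Kernel.Types;

...

// Create a floating parameter and push a value straight into its volatile data.
Param_Number input = new Param_Number();
input.CreateAttributes();
input.AddVolatileData(new GH_Path(0), 0, new GH_Number(12.5));

// Add the parameter to the document and wire it to the first cluster input.
doc.AddObject(input, false);
cluster.Params.Input[0].AddSource(input);

// Ask the document to solve itself so the cluster outputs get populated.
doc.NewSolution(true);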

Is this the sort of approach you were looking for?

--

David Rutten

david@mcneel.com

Poprad, Slovakia

Hi David,

This is spectacularly exactly the kind of approach I was looking for! I'm ecstatic to know that it's possible. Thanks so very very much for replying so helpfully. I won't be able to get started on this until May, but I'll be certain to update this thread as I progress.

Any hints / notes on how the clusters will be different in the next release, by any chance?

In any case, thanks again!

Best,

Dan

There are lots of changes, though most probably won't matter to you. Hopefully I'll have released 0.9.0050 come May (the new installer is building fine as of yesterday, and that was the last major missing piece).

I'm pretty interested in whether this will work without me adding functions to the SDK. 

--

David Rutten

david@mcneel.com

Poprad, Slovakia

Hi David,

I'm finally getting around to resuming work on this. I'm having a little trouble hooking up the cluster/document outputs and getting the result back out. I think it's mostly because I don't have a good model for the relationship between GH_Document, GH_Cluster, IGH_Goo vs. IGH_Param, and the Grasshopper solver. Any help or links to documentation would be much appreciated. (P.S. It looks like the Grasshopper SDK help is missing the sub-section 'Examples > C# > Simple Parameters > Collecting and Post-Processing Data'.)

Here's the relevant section of my code. The cluster in question (located at 'filepath') is a test cluster that creates a sphere - no inputs, one Brep output.

// create cluster

GH_Cluster thiscluster = new GH_Cluster();
thiscluster.CreateFromFilePath(filepath);

// add cluster to document

GH_Document newdoc = new GH_Document();
newdoc.AddObject(thiscluster, true);

//create output parameter

Grasshopper.Kernel.Parameters.Param_Geometry paramOut = new Grasshopper.Kernel.Parameters.Param_Geometry();

//connect output parameter to cluster parameter
thiscluster.Params.RegisterOutputParam(paramOut);

//compute and collect data

thiscluster.ComputeData();

thiscluster.CollectData();

// connect component output to output parameter

DA.SetData(1, paramOut);

1) When I do this, the output of the component says 'Grasshopper.Kernel.Parameters.Param_Geometry', yet I can't seem to convert it to geometry within Grasshopper. Do I need to cast this output to a Brep?

2) Alternatively, should I be using a GH_Param object instead?

3) What's the role of the document - do clusters have to exist within documents in order to operate?

Thanks a lot!

Alternatively, any example snippets or code would be helpful. As it turns out, this page is the only search result for "GH_Cluster" on Google.

Thanks so much!

Here's how to load a file as a cluster, solve it and harvest the output data:

// Create cluster object and load it from the file.

Grasshopper.Kernel.Special.GH_Cluster cluster = new Grasshopper.Kernel.Special.GH_Cluster();
cluster.CreateFromFilePath(F);

// Create a new document, enable it and add the cluster.

GH_Document doc = new GH_Document();
doc.Enabled = true;
doc.AddObject(cluster, true, 0);

// Get a pointer to the data inside the first cluster output.

IGH_Structure data = cluster.Params.Output[0].VolatileData;

// Create a copy of this data (the original data will be wiped)

DataTree<object> copy = new DataTree<object>();
copy.MergeStructure(data, new Grasshopper.Kernel.Parameters.Hints.GH_NullHint());
A = copy;

// Cleanup!

doc.Enabled = false;
doc.RemoveObject(cluster, false);
doc.Dispose();
doc = null;

--

David Rutten

david@mcneel.com


That was it! Thanks a million, David! 

Question - do you foresee this process being any more memory intensive than loading/processing the cluster directly? I'm sure there's a little bit of overhead, but will creating/removing a document for each cluster computation be okay?

There is some overhead, and technically a document is not necessary. It won't work without one at the moment, however, because I specifically forbid components from solving themselves if they are not part of a document.

Steve Baer has been doing some cool work exposing GH components as Python calls, so we'll remove the must-be-part-of-a-document restriction in GH2.

Another big overhead is the reading and parsing of the GH file, so if you can cache the cluster in any way, that will be a good way to improve performance.
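A rough, untested sketch of such a cache, keyed by file path, might look like this (whether a single GH_Cluster instance can safely be reused across solutions is something you'd want to verify):

using System.Collections.Generic;
using Grasshopper.Kernel.Special;

...

// Cache of parsed clusters, keyed by file path, so each GH file is only read and parsed once per session.
static readonly Dictionary<string, GH_Cluster> _clusterCache = new Dictionary<string, GH_Cluster>();

GH_Cluster GetCluster(string path)
{
  GH_Cluster cluster;
  if (!_clusterCache.TryGetValue(path, out cluster))
  {
    cluster = new GH_Cluster();
    cluster.CreateFromFilePath(path); // the expensive part: reading and parsing the file
    _clusterCache[path] = cluster;
  }
  return cluster;
}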

--

David Rutten

david@mcneel.com
