Grasshopper

algorithmic modeling for Rhino

I'm evaluating Rhino/Grasshopper as a development ecosystem for our project, one that people with different backgrounds and skill sets can use and/or contribute to.

A scenario can be as follows:

An algorithm developer is skilled at creating Grasshopper definitions. At some point one of these definitions is so useful that we want to incorporate it as a component created with the SDK, for better testing and deployment. We could create a cluster/user object, but in my experience that is horrible for deployment. In order to keep the "code" as comprehensible as possible for everybody, it would be great if we could write the Grasshopper plugin components with Grasshopper components. This way, "porting" the knowledge is almost one-to-one. This is very similar to the GHPython library ghpythonlib.components, but AFAIK it is also not very suitable for code re-use and deployment.
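
For context, this is the kind of one-to-one mapping ghpythonlib.components gives you inside a GHPython component (a minimal sketch; the input name 'curve' is an assumption, not from the thread):

    # Runs inside a GHPython component; 'curve' is assumed to be one of its inputs.
    import ghpythonlib.components as ghcomp

    # Call the native Divide Curve component as if it were a function.
    # Multiple outputs come back as a namedtuple (points, tangents, parameters).
    result = ghcomp.DivideCurve(curve, 10, False)
    points = result.points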

So the question is:

Is it possible to get access to the Grasshopper component API with the SDK?


Replies to This Discussion

It is possible (Python proves it can be done), but it is difficult and you still get a huge amount of overhead compared to 'clean' code. I think it very unlikely that GH1 will ever get better SDK tools for invoking components; however, I'd love to see a centralised mechanism for this in GH2, something which can be leveraged by Python, C# and VB scripting as well as GHA developers.

David, wouldn't it be enough, for that, to put the functionality and the GH_Component shell in different DLLs? Is there any reason not to take this approach in GH2? Are you going to try something more systemic?

By the way, can you give us a sneak peek if there is any major change that is worth telling? :3

There are three major problems that make components difficult to use from code:
- their availability depends on the runtime state of GH.
- some of them have a variable number of inputs/outputs.
- some of them have options beyond input values.

Problem number 1 means the names of all invokeable components can only be determined at the moment you're trying to call them, which rules out a type-safe scheme. Problems 2 and 3 mean components don't always behave like method invocations.

The lack of type-safety and the baseline support for tuples in Python make it pretty ideal for this purpose, but I'm not sure yet how to expose this for any language.
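
As an illustration of the tuple point: a component with several outputs maps naturally onto Python's tuple unpacking (a sketch; 'plane' is an assumed GHPython input):

    import ghpythonlib.components as ghcomp

    # Deconstruct Plane has four outputs; the namedtuple result unpacks
    # positionally, so no static method signature is needed.
    origin, x_axis, y_axis, z_axis = ghcomp.DeconstructPlane(plane)
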
Thanks. Then I won't consider this an option. The question remains: how easy is it to "port" a definition/cluster to an SDK plugin where only the RhinoCommon API is available? Assuming only standard Grasshopper components are used.

E.g., the Multiplication component is not only overloaded, but it can also deal with different data tree structures, i.e., a scalar multiplied by an array results in an array in which each element is multiplied.
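A hedged sketch of what that means when porting to plain code: RhinoCommon has no implicit list matching, so Grasshopper's 'longest list' behaviour has to be written out by hand (the function below is illustrative, not the component's actual implementation):

    def multiply(a, b):
        # Treat scalars as one-item lists, then apply longest-list matching:
        # the shorter list's last item is reused, as Grasshopper does.
        a_list = a if isinstance(a, (list, tuple)) else [a]
        b_list = b if isinstance(b, (list, tuple)) else [b]
        n = max(len(a_list), len(b_list))
        get = lambda xs, i: xs[min(i, len(xs) - 1)]
        return [get(a_list, i) * get(b_list, i) for i in range(n)]

    print(multiply(2, [1, 2, 3]))  # -> [2, 4, 6]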

We decided to write this and to provide it in Rhino 6. We are working on it now.

Giulio
--
Giulio Piacentino
for Robert McNeel & Associates
giulio@mcneel.com

Hi Giulio, what do you mean exactly by this? Do you mean that all standard Grasshopper components will be ported to RhinoCommon, or at least to an assembly that exposes a similar API?

Thanks for the reply. It seems that GHPython is more realistic for the "porting" scenario. How can I re-use Python code by means of e.g. modules and still have a proper way to deploy it? Furthermore, re-use by means of clusters also seems hard to deploy, since paths are absolute (currently we replace the paths in the .ghx definitions before deploying). And updated user objects seem to "duplicate" when added while an older version is in place.
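
For reference, the path replacement mentioned above can be a plain text substitution, since .ghx files are XML (a sketch; all paths and file names are illustrative):

    # Rewrite absolute cluster paths in a .ghx definition before deploying.
    old_root = r"C:\Users\dev\repo\clusters"
    new_root = r"C:\deploy\clusters"
    with open("main_definition.ghx") as f:
        text = f.read()
    with open("main_definition.ghx", "w") as f:
        f.write(text.replace(old_root, new_root))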

Hi Wei

I do not think we have a full picture of your setup, so it's hard to give suggestions.


One way to deploy GhPython libraries in Rhino WIP is to use the GhPython assembly compiler. This compiler has a simple interface that is meant to let users get started and test its functionality, and an advanced way of creating assemblies that directly leverages clr.CompileModules().
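
The advanced route looks roughly like this in IronPython (a minimal sketch; module and assembly names are placeholders):

    import clr

    # Compile one or more .py modules into a single .NET assembly.
    clr.CompileModules("mylib.dll", "geom_utils.py", "tree_helpers.py")

    # Later, e.g. from a GHPython component, reference the assembly and
    # import the compiled module by its original name.
    clr.AddReferenceToFileAndPath(r"C:\libs\mylib.dll")
    import geom_utils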

To activate the simple mode, you need to switch to compilation mode. There is not much documentation at present, but it is also possible to compile several files together via the _EditPythonScript editor. All compiled assemblies are then dependent on the Python runtime that is shipped with Rhino, but they are fully pre-compiled, so they require less time to load.

Other methods are: a shared module folder on the network, having a local copy of a shared module folder, and putting all functionality in a single 'code' component.
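
The shared-folder variants boil down to extending sys.path before importing (a sketch; the share path and module name are assumptions):

    import sys

    # Make modules on a shared network folder importable from GHPython.
    shared = r"\\server\gh-shared\modules"  # example network share
    if shared not in sys.path:
        sys.path.append(shared)

    import geom_utils  # hypothetical module living in the shared folder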


In general, I think that these methods make sense mainly for thoroughly thought-out libraries, and not particularly for small scripted 'helper' functions.

If you could tell us more about the background of this, we might have better suggestions.

Hi Giulio, thank you for your reply. We have clusters that already consist of a lot of components. These clusters are treated as our own components and are (re)used in other (main) definitions for faster development. So we have a set of main definitions that are easy to use and build on common parts. Currently we share the "set" with other users, i.e. main definitions and clusters, via Git so that we have some control over the versions. Hence the struggle with the hard-coded absolute paths to the clusters. We stick to the cluster definitions because they are fast for prototyping and therefore easy for some to modify. So using ghpythonlib.components is the closest thing to Grasshopper definitions. I think automatic conversion should be possible to some degree :)

I haven't worked with the WIP version yet. However, triggered by your reply, I successfully compiled a module with IronPython and referenced it in a Python component. During development, though, it is rather annoying that the DLL is locked by Rhino and can't be updated easily. Is this approach in general different from your suggestion with the WIP version and its "internal" tooling?
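
One common workaround sketch for the lock (my own assumption, not part of the official tooling): copy the DLL to a unique temp path and reference the copy, so the build output itself stays writable:

    import clr, os, shutil, tempfile

    src = r"C:\libs\mylib.dll"  # assumed build output
    tmp = os.path.join(tempfile.mkdtemp(), "mylib.dll")
    shutil.copy2(src, tmp)

    # Rhino locks the copy instead of the original, so 'src' can be rebuilt.
    # Note: already-imported modules still need a fresh import to pick up changes.
    clr.AddReferenceToFileAndPath(tmp)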

Furthermore, I noticed that compiling a module with IronPython will always be "successful", i.e., there is no feedback at all. Maybe this is because Python is dynamically typed and no real validation can be done. So I read that the best way for quick development would be: first, develop directly in a Python component; second, compile the script into an assembly. If so, then it would be better if one could just switch the "environment" once one decides the code is ready. So in the first setup one can just import a development module (let's call it dev_mod) as one would with normal Python. This top module imports our library. That library can be compiled once it is stable, and the only change in the Python component will be importing the production module (prod_mod) instead. The difference in the production top module is that it references our lib assembly so that the same code can be imported. Does this make sense?
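
A minimal sketch of that switch (dev_mod/prod_mod are the hypothetical names from the description above; the path is an assumption):

    # prod_mod.py: reference the compiled assembly, then re-export the library.
    import clr
    clr.AddReferenceToFileAndPath(r"C:\libs\mylib.dll")
    from geom_utils import *

    # dev_mod.py would skip the clr reference and import the plain .py files;
    # the GHPython component only ever changes 'import dev_mod' to 'import prod_mod'.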

I also briefly tried to set up a Visual Studio environment for creating IronPython projects, but failed to find out how to set it to a DLL target. I was wondering whether it is possible to set breakpoints for debugging that way, similar to what was done with Atom for Python debugging in a Python component.
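
As an alternative to a Visual Studio project, IronPython ships a pyc.py script that can target a DLL from the command line (a sketch; the install path assumes a default IronPython 2.7 setup, and the module name is a placeholder):

    ipy.exe "C:\Program Files\IronPython 2.7\Tools\Scripts\pyc.py" /target:dll geom_utils.py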
