Hi Giulio and Steve,
I've recently noticed a couple of bottlenecks in the GHPython component which can quite severely impede performance. I thought I would bring them up here in the hope of helping others facing similar issues.
1) Letting Grasshopper perform the "implied" loop can be substantially slower than writing the loop yourself inside the Python script. This is understandable; however, the strangest thing is that it is MUCH slower if the definition has been saved than when it has not (by about a factor of 10)!
2) Setting type hints seems to be slower than inputting data with "No Type Hint". This depends a bit on which type is being input, but it seems fairly consistent; in the attached example it is slower by about a factor of 3. I suppose this is understandable, but it is not exactly ideal.
3) Outputting lists with many items will often take longer than the actual computation performed by the script. I suppose this is more of a Grasshopper thing. My workaround has been to wrap the list in a Python list and pass this along as an item, which is a LOT faster with large lists (this was crucial to both the Tower and ShapeOP projects, where we pass around large numbers of constraints).
4) Calling certain RhinoCommon methods appears to be randomly much more expensive than in the C# scripting component. For instance, when iterating over a mesh's vertices and calling Mesh.Vertices.GetConnectedVertices(), the summed elapsed time is dominated by just a few vertices, which change randomly every time the script is run. The number of such vertices differs between machines, but the pattern remains consistent.
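To make point 1's workaround concrete, here is a rough sketch in plain Python (the input name `pts` is illustrative, and a dummy range stands in for the data Grasshopper would supply): set the input to List Access with no type hint, then loop inside the script so the component runs once per solution instead of once per item.

```python
# "pts" would be a GHPython input set to List Access, no type hint;
# here a plain range stands in for the incoming list.
pts = range(10)

a = []
for p in pts:
    a.append(p * 2)  # per-item work happens inside a single component run
```
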
I'm not sure if these bottlenecks are just examples of me being dumb, if so I hope you can enlighten me to the errors of my ways :)
Attached are some screenshots of an unsaved/saved definition which demonstrate the described issues. Please also find the gh definition attached.
Edit: Logged this on GitHub here.
Update: Added point 4), plus a new screenshot and file demonstrating this behaviour.
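For reference, the point 3 workaround can be sketched roughly as below (plain Python; in Grasshopper the two halves would live in two separate GHPython components, and the variable names are illustrative):

```python
# Upstream component: wrap the big list in another list, so Grasshopper
# outputs ONE item (the Python list object) rather than converting and
# emitting every element individually.
big_list = [i * 0.5 for i in range(100000)]
a = [big_list]            # output: a single item holding the whole list

# Downstream component: read the input with item access and "No Type
# Hint"; the Python list comes through intact.
x = a[0]                  # unwrap
result = x[0] + x[-1]     # work directly on the Python list
```
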
Guys, this is a very interesting discussion, and I blame myself for only getting here now.
My question is: how does all of this apply to a .gha component?
Lastly, I have been dealing with a large number of output elements from a custom-made component in C#.
Surprisingly, the calculation itself is in the order of milliseconds.
However, when outputting the results the cost rises to the order of tens of seconds.
Has anybody explored that as well?
Giulio elaborates on the performance of compiled vs. scripting C# components here.
Thanks for the hint Anders.
What I was asking about is a little different.
I was interested in the reasons for the cost of outputting data, i.e. whether explicit casting is more expensive than letting GH do it.
In my script, the calculation occurs anyway.
But when the values are output with the DA.SetData method, it takes 10 s.
If the calculation occurs but no outputs are sent to the GH component, it takes 10 ms; in other words, outputting the data makes it 1000 times slower.
I don't really compile much, so David or Giulio would have to chime in here. That said, wrapping output data in Grasshopper.Kernel.Types certainly speeds things up on the GHPython side of things.
Yes, as you can see if you open the examples above: sometimes it takes more time for Grasshopper to go through the long list of types it knows about, figure out which one you gave it, and create the corresponding Grasshopper.Kernel.Types type than it takes to do the calculation itself.
In those cases, just go through the "small pain" of creating the correct Grasshopper.Kernel.Types type yourself. This is valid both in scripting and in compiled GHAs.
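In GHPython that "small pain" might look roughly like the sketch below. Grasshopper.Kernel.Types is only importable inside Rhino, so the GH_Number class here is a minimal stand-in for the real wrapper; inside Rhino you would instead do `from Grasshopper.Kernel.Types import GH_Number` and skip the class definition.

```python
class GH_Number(object):
    """Stand-in for Grasshopper.Kernel.Types.GH_Number (illustration only)."""
    def __init__(self, value):
        self.Value = float(value)

results = [0.1 * i for i in range(10)]  # some computed output values

# Wrap each value explicitly, so Grasshopper does not have to search its
# list of known types for every single item when the data leaves the
# component.
a = [GH_Number(v) for v in results]
```
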
Thanks Giulio, I'll get my head around this!
Just to provide help for others, here is a useful thread on creating custom Grasshopper.Kernel.Types.
Also, the Grasshopper SDK (to get it, open Grasshopper > Help > Download SDK) has been a wonderful resource.