Hello, I have two general questions about GH:
Which GH tools result in high processing demands, and where do I have to pay attention to keep the processing load low? Will GH work better on a computer with more power, or does it depend on the software because it has its own processing speed? In my case I removed all the Boolean tools from my definition and replaced them with other tools, which I guess are easier to calculate. Does that make sense?
Are there some rules for keeping the necessary processing power low?
When will GH stop spending processing power on a tool?
When it is orange, meaning the input is not correct?
When it is red, meaning wrong input?
When it is cut off from data, for example by a stream filter?
When it is hidden?
When it is deactivated?
I ask because one of my definitions is really huge and starts to stutter.
Is it possible to hide / preview / bake / ... more than one selected item? I often want to select a bunch of items and hide them via right-click, but this only works for one tool at a time.
Grasshopper is an event based solver, not a continuous solver. It will stop using processor cycles once a solution is complete and it will start using processor cycles again once the solution expires. Sometimes Grasshopper behaves like a continuous solver, mostly when using tools such as Kangaroo or Galapagos.
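The event-based behaviour described above can be sketched in plain Python. This is a toy model with hypothetical names, not the actual Grasshopper internals: a component caches its last result and only burns CPU cycles again after something (e.g. a slider change) has expired it.

```python
# Toy model of an event-based solver: compute only when expired,
# otherwise return the cached solution and stay idle.

class Component:
    def __init__(self, compute):
        self._compute = compute   # the (possibly expensive) operation
        self._cache = None
        self.expired = True       # no valid solution yet
        self.runs = 0             # count how often real work happens

    def expire(self):
        self.expired = True       # mark for recomputation

    def solve(self, *inputs):
        if self.expired:          # spend processor cycles only here
            self._cache = self._compute(*inputs)
            self.expired = False
            self.runs += 1
        return self._cache        # idle case: cached value, no work

adder = Component(lambda a, b: a + b)
print(adder.solve(2, 3))  # computes: 5
print(adder.solve(2, 3))  # cached, no computation happens
adder.expire()            # e.g. a slider moved upstream
print(adder.solve(2, 4))  # recomputes: 6
print(adder.runs)         # only 2 real computations in total
```

A continuous solver, by contrast, would call `_compute` on every tick regardless of whether anything expired, which is why Kangaroo or Galapagos keep the CPU busy.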
Grasshopper 1.0 is a single threaded application. All computations are performed on the same thread which also hosts the Rhino and Grasshopper interface. It doesn't matter how many processors you have, the only thing that matters is how fast the processors are. I.e. a 2-core machine where each core runs at 2.5GHz will be somewhat faster than an 8-core machine where each core runs at 2.4GHz. This will not necessarily be true for GH2, we're going to at least try and introduce multi-threading for individual components there.
Some components perform much more 'expensive' operations than others. Adding two numbers together will be much faster than creating a Solid Boolean of two Breps, but that doesn't mean you can use Addition instead of Solid Union to speed things up. If you need to compute a Solid Union you'll need the Solid Union component. Sometimes it's possible to replace an expensive component with a cheaper one, or a collection of cheaper ones, but certainly not always.
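The point about replacing an expensive operation with a cheaper one that yields the same answer can be illustrated in plain Python (nothing Grasshopper-specific here). Sorting a whole list just to read off its smallest item is the "expensive component"; a single-pass `min` is the "cheap" one. Counting comparisons makes the cost difference visible without flaky timing:

```python
import random

random.seed(0)
values = [random.random() for _ in range(10_000)]

class Counted(float):
    """A float that counts how many '<' comparisons it takes part in."""
    comparisons = 0
    def __lt__(self, other):
        Counted.comparisons += 1
        return float.__lt__(self, other)

wrapped = [Counted(v) for v in values]

Counted.comparisons = 0
smallest_via_sort = sorted(wrapped)[0]   # "expensive": full O(n log n) sort
sort_cost = Counted.comparisons

Counted.comparisons = 0
smallest_via_min = min(wrapped)          # "cheap": one O(n) pass
min_cost = Counted.comparisons

print(smallest_via_sort == smallest_via_min)  # True: identical result
print(min_cost < sort_cost)                   # True: far fewer comparisons
```

The same logic applies in GH: if a lighter component (or a small collection of them) produces the exact geometry you need, use it; if only a Solid Union produces it, the Solid Union cost is unavoidable.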
When a component is orange it means there is at least one warning (and no errors). You can find these warnings via the menu or you can click on the little orange balloon widget.
When a component is red it means there is at least one error. Again, all errors and warnings are available both via the component menu and via the balloon widget.
Components are hidden when you switch the preview off. Hidden components appear in a slightly darker grey than visible components. If 'Selected Only Preview' is enabled then only selected objects will be previewed, regardless of their preview on/off state.
Components are deactivated (disabled) when the Enabled menu option is not checked. If the solver is locked then all components are disabled, regardless of their enabled on/off state.
I don't understand what you mean by "When it is cut off from data, for example by a stream filter?" or "Which GH tools will result in high processing power."
"When it is cut off from data" means, that at least one input is virtually disconnected. Theinput datastream is empty or null. If this input is neccessary for the component to work, it will typically display an error (red).
Some components will continue to work with only a warning.
There are optional inputs, a component might work without connecting data. But normally "optional" means, that the component provides a suitable default value if nothing is connected. You will get at least a warning here too, since the default value will be overridden by the empty input data stream.
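A minimal sketch of that optional-input behaviour in plain Python (the names and warning text are hypothetical, not the real GH internals): an unconnected input silently falls back to its default, while a connected-but-empty wire overrides the default and produces a warning.

```python
_UNCONNECTED = object()  # sentinel: no wire attached at all

def evaluate(radius=_UNCONNECTED, default=1.0):
    """Toy component with one optional input called 'radius'."""
    warnings = []
    if radius is _UNCONNECTED:
        value = default            # optional input: default kicks in silently
    elif radius is None or radius == []:
        value = None               # empty stream overrides the default
        warnings.append("Input 'radius' is connected but carries no data")
    else:
        value = radius             # real data wins over everything
    return value, warnings

print(evaluate())           # (1.0, [])           default used, no warning
print(evaluate(radius=2))   # (2, [])             real data
print(evaluate(radius=[]))  # (None, [warning])   empty wire beats default
```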
That's what I meant!
Thank you so much for your detailed description.
Now I have a better understanding of the behaviour of Grasshopper.
If I understand you right, RAM also doesn't matter, right?
I know the meaning of the colours orange, red, grey, etc., but what I was thinking about was that even if components are not in action, they might still consume CPU cycles. So it might be better to delete all unused components, or something like that. But it seems I don't need to worry about that anymore.
The stuttering I can observe in my GH definition happens when I adjust a slider. In my case it would be nice to have a smooth change of my resulting geometry when I try to change the parameters with feeling.
But right now I use a CPU with only 2.33 GHz, so I think I can improve this a bit by working on a better PC.
For large definitions, a certain amount of "stuttering" is normal. You might double your Grasshopper performance with a better CPU, but that doesn't necessarily prevent your definition from stuttering. It just costs a lot of money and energy. Typical consumer CPUs max out at about 4GHz. You can ramp that up by adding extensive cooling and another zero to your energy bill. ;)
Every time you move a slider, even if it is just to move a control point of a curve, all the geometry gets destroyed and is created from scratch. This takes time. If you need fast updates, try to keep the definition light.
"I ask because one of my definitions is really huge and starts to stutter."
One way to reduce long computation times is to insert Data Dams before expensive calculations. You can delay the data from left to right so the part of the GH file that is to the left of the dam can run faster, and when you're happy with it you can toggle the dam and trigger the more expensive computation.
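The Data Dam idea can be sketched in plain Python (hypothetical class, not the actual component's implementation): upstream edits land in a buffer, and the expensive downstream step only ever sees data once the dam is released.

```python
class DataDam:
    """Toy model of a dam between cheap upstream and heavy downstream."""
    def __init__(self):
        self._buffer = None       # latest upstream data, held back
        self._released = None     # what downstream is allowed to see

    def push(self, data):
        self._buffer = data       # cheap upstream edits land here freely

    def release(self):
        self._released = self._buffer   # the toggle on the component

    @property
    def output(self):
        return self._released

def expensive_downstream(data):
    return sum(x * x for x in data)     # stand-in for a heavy operation

dam = DataDam()
dam.push([1, 2, 3])       # tweak sliders freely: nothing heavy runs
dam.push([1, 2, 3, 4])
print(dam.output)         # None: downstream is still starved
dam.release()             # happy with the upstream part? open the dam
print(expensive_downstream(dam.output))  # 30
```

While the dam is closed, the left half of the definition stays fast because the right half never recomputes.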
Another very useful strategy is to keep the definition light. Quite often you can get the same result with only maths or lighter line/curve/point components. Try to introduce (heavy) geometry only when there is no way around it.
Try to reduce the number of identical paths. For beginners it's often easier to just copy a bunch of components for another set of input geometry. This results in a large number of data copies. That's no problem for a small definition, but if you keep all similar data in one tree, there will be fewer copies. This has more impact on memory than on the CPU, but copying and memory management take time too... especially if you get close to, or max out, your physical RAM.
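In plain Python the contrast looks roughly like this (a sketch with made-up data; GH trees use path keys like `{0}`, which a dict models loosely): instead of duplicating the same chain of operations once per input set, keep the sets as branches of one structure and run the chain once.

```python
def offset_and_measure(points, d):
    """Stand-in for a chain of GH components applied to one branch."""
    moved = [(x + d, y + d) for x, y in points]
    return len(moved)

# Copy-per-set style: three separate chains, three separate data copies.
set_a = offset_and_measure([(0, 0), (1, 1)], 1.0)
set_b = offset_and_measure([(2, 2)], 1.0)
set_c = offset_and_measure([(3, 3), (4, 4), (5, 5)], 1.0)

# Tree style: one structure, one chain, branch paths kept as keys.
tree = {
    "{0}": [(0, 0), (1, 1)],
    "{1}": [(2, 2)],
    "{2}": [(3, 3), (4, 4), (5, 5)],
}
results = {path: offset_and_measure(branch, 1.0)
           for path, branch in tree.items()}

print((set_a, set_b, set_c))  # (2, 1, 3)
print(results)                # {'{0}': 2, '{1}': 1, '{2}': 3}
```

Same answers either way, but the tree version has one chain to maintain and no duplicated wiring.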
I already cleaned up my definition as you suggested, as far as my knowledge allows.
Surely it could be improved, but I'm still learning. ;)
Deactivate everything that comes after the geometry you are currently inspecting. That's essentially what the Data Dam does.
Try to think outside the box:
Do you need surfaces/breps for the preview or will wireframe outlines do? Do you need the last detail or will some earlier stage give you enough of an impression?...
Hi, by the way: can substituting a collection of native GH components with a custom Python component that produces the same result cut the processing cost in any way / keep the definition light?
Nope, it all depends on how the Python is implemented. The regular components are compiled C#, so they're already pretty fast in most cases. The main benefit of merging functionality into a single component is to reduce the overhead involved with data transfer between components. Unless you have millions of data items, this overhead will not be significant.
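A rough plain-Python model of that overhead (the counter is an artificial stand-in, not a measurement of real GH internals): each "component" hand-off builds a fresh output list, while a merged script does the same maths in one pass. The results are identical; only the number of intermediate copies differs.

```python
copies = 0

def component(func, data):
    """Toy component: applies func item-wise, producing a new list."""
    global copies
    copies += 1                      # every hop allocates a fresh list
    return [func(x) for x in data]

data = list(range(1000))

# Three native-style components wired in a row: three intermediate lists.
copies = 0
chained = component(lambda x: x * 3,
          component(lambda x: x + 1,
          component(lambda x: x * x, data)))
chain_copies = copies

# One merged "script component": a single pass, a single output list.
copies = 0
merged = component(lambda x: (x * x + 1) * 3, data)
merged_copies = copies

print(chained == merged)              # True: same result either way
print((chain_copies, merged_copies))  # (3, 1)
```

With a thousand items the saving is negligible, which is the point above: merging only starts to matter once the item count gets very large.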
OK, I will check if the Data Dam could be useful here.
Thank you once again!