Grasshopper

algorithmic modeling for Rhino

Hi,

I'd like to put the following idea up for debate, and for David's consideration.

Problem:

When you build an algorithm, you have to be aware of the data structure (item, list, tree) that will flow through its parameters. By default, algorithms are not created to be responsive: they may work fine with data in lists, but if you add branches to a parameter, the algorithm breaks. This can of course be corrected by adding the necessary handling. In fact, that is what I had to do with Peacock v0.96 (which consists of user objects), with more or less good results, but almost all of its bugs come from this. External plugins help deal with branches, but it should be easy to do with native components (the fact that the Shift Paths component fails when you run out of branches is a big flaw). To be fair, though, I have always managed to solve it with native components (or with some code, perhaps).

I still run into this problem very often. Making an algorithm responsive can greatly complicate things, and it is something that simply does not happen when working from code. Script components or compiled components on the one hand, and definitions or UserObjects on the other, do not process data in the same way.

Code components work perfectly with different data structures thanks to the access type of their parameters. How could we achieve the same for entire definitions?

Possible solution:

Allow the container components (the parameters with the hexagonal icon on a black background) to be assigned an access type, so that data is transmitted according to that access type, just as it is for components.
This way, the parameters of a UserObject or definition could be set to operate on a specific kind of structure, and when a different structure arrives, the definition would still execute in the appropriate manner.


Does this make sense? Does it force too many things to change? Is there a better solution?


Thank you.


Replies to This Discussion

I find that one reason for this kind of mistake is that Grasshopper is too permissive when you have different path depths, or even path counts that don't match, so sometimes things work when they really shouldn't. The user thinks everything is fine until they change something upstream and things don't match anymore (usually because they didn't match in the first place).

*or doesn't work as the user expects.

The main thing to be aware of is that "as list" outputs always automatically add a branch level, even if they are not adding extra elements, while components with "as list" inputs that output a single object (returning a single element per branch) don't automatically flatten one level down; you have to do that explicitly.
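A toy sketch of this behaviour (plain Python, with tuples standing in for Grasshopper paths; all names are illustrative, not the real SDK):

```python
# Toy model of a Grasshopper data tree: a dict mapping path tuples to lists.
tree = {(0,): ["crvA", "crvB"]}   # one branch holding two curves

# A component with an item input and a list output (e.g. Divide Curve)
# typically grafts each item into its own branch, adding one path level:
def divide(tree, n):
    out = {}
    for path, branch in tree.items():
        for i, crv in enumerate(branch):
            out[path + (i,)] = [f"{crv}_pt{k}" for k in range(n)]
    return out

divided = divide(tree, 3)
# paths are now (0, 0) and (0, 1): one level deeper than the input

# A list-input, single-output component (e.g. List Length) keeps that
# extra level; nothing flattens it back down unless you do so explicitly:
lengths = {path: [len(branch)] for path, branch in divided.items()}
# lengths == {(0, 0): [3], (0, 1): [3]}
```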

I find it makes for more maintainable Grasshopper files to keep the path depth in mind after inserting a component, and to use Shift Paths and Graft to match paths, rather than resorting to the "carpet bombing" solution of the Simplify toggle or relying on Grasshopper's permissiveness with mismatched paths.

As a quick note:

The main thing to be aware of is that "as list" outputs always automatically add a branch level [...]

is not actually true. There are cases (for example, when all inputs and all outputs are lists) where the structure is maintained. See the Sort component, for example: it does not complicate the tree.

Your example is a good one, but the question for me is: should I have made it so that it does work correctly when adding another curve? Are my algorithms too stupid, or just plain wrong?

I was arguing that the algorithm is too smart. For beginners to understand more quickly why it is not working, it shouldn't have worked with one curve in the first place. The problem with this is that keeping a tidy data tree by adding components (or grafting) all the time is tedious, and the way it is right now lets you work faster.

I guess it's a similar problem with casting: it makes you work faster but discourages learning what's really happening. It's typical to find beginners using casts improperly, creating very inefficient definitions.

I'm currently writing a GH plugin and I'm trying to avoid having "as list" inputs combined with item inputs in the same component if it's expected that the item inputs will be a list.

For example, imagine I have an input called "plane" and another called "property", but each plane could have more than one property. I could rename the input to "properties" and set it as list access, but I know people will make mistakes.

What I've done is set plane and property inputs as item but then have an additional component called "property group" that allows you to combine a list of properties into a single property.

Another problem is having to duplicate data.

If input A has branches:

{0;0;0},{0;0;1},{0;1;0},{0;1;1}

and input B has branches:

{0;0},{0;1}

It shouldn't be necessary to duplicate data and graft for the algorithm to work as expected.
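A sketch of the kind of prefix-based matching being asked for (hypothetical; plain Python with tuples as paths; this is not how Grasshopper currently matches branches):

```python
# Hypothetical matcher: pair each branch of A with the B branch whose path
# is the longest prefix of A's path, so B's data needs no duplication.
a = {(0, 0, 0): [1, 2], (0, 0, 1): [3], (0, 1, 0): [4], (0, 1, 1): [5, 6]}
b = {(0, 0): ["x"], (0, 1): ["y"]}

def match_by_prefix(a, b):
    out = {}
    for path, items in a.items():
        # find the B path that is the longest prefix of this A path
        best = max((p for p in b if path[:len(p)] == p), key=len, default=None)
        if best is not None:
            # B's single item is reused; no grafting or duplicating by the user
            out[path] = [(i, b[best][0]) for i in items]
    return out
# match_by_prefix(a, b)[(0, 0, 0)] == [(1, 'x'), (2, 'x')], and so on
```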

P.S. I know I didn't write any solution in all of this, sorry...

When working with complicated trees, paying attention to tree structure helps me ensure that I am actually matching the correct geometry. Sure, tree structures are frustrating, and yes, when adding curves or adding complexity I have to rewrite a definition, or follow the structure all the way through to ensure the results are correct. That said, it works, and in almost all cases I can find a solution to a computation, either through my primary approach or through a workaround. One of the great things about trees is that you do not have to declare a structure: it takes whatever you have and works with it.

[...] can assign access type, so that data are transmitted depending on the type of access [...]

Data is always transmitted as trees. It is only components that can decide in what size bites they like to be confronted with their input data.
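A minimal sketch of that idea, assuming a dict of path tuples stands in for a tree: the data never changes shape in transit; only the granularity at which a component's function is applied varies with the access type.

```python
# Illustrative only: 'access' mimics item/list/tree parameter access.
tree = {(0,): [1, 2], (1,): [3]}   # the tree itself is always what travels

def solve(tree, access, fn):
    if access == "tree":
        return fn(tree)                               # one call, whole tree
    if access == "list":
        return {p: fn(br) for p, br in tree.items()}  # one call per branch
    # item access: one call per element
    return {p: [fn(x) for x in br] for p, br in tree.items()}
```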

I guess I'm not entirely clear on this yet.

I was recently thinking about this as well (I guess it's the GH2 moods and fantasies).

Under the hood the DataTree is actually a dictionary... why not actually make it a tree? Then we would have a recursive class which can (but doesn't have to) store data, and which additionally has children of the same type. Let's call this class a Node.

Where this approach would differ is in slightly more permissive behaviour. Say we have 3 curves, each divided into 10 points. The division component outputs ONE node which has 3 children, each of which has 10 children. (We only pass this ONE item around, eventually parsing it to text with some "unravel" component... at least that's my first thought.)

Now say we want to scale those points. If we let the data propagate down the tree, we could assign either 1 scaling factor, or 3 values, or 30 values. There are no data paths, so there is nothing to match: the Node itself organizes the incoming data within its structure (by propagating it up or down depending on the context).

This kind of behaviour is currently not possible with native components; IMHO it would make things much more intuitive... and even if you write a script that can propagate data up and down the DataTree, it eventually becomes CPU expensive.

(Yes, it's more or less a model tree which can also pass data within its own structure... or an XML document.)

PS. One Node can store only one non-collection value.
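A rough sketch of such a Node in plain Python, with broadcast rules invented for illustration (1 factor for all leaves, or one per child, or one per leaf); none of this is an existing API:

```python
# Illustrative only: class and method names are made up for this sketch.
class Node:
    def __init__(self, value=None, children=None):
        self.value = value            # at most one non-collection value
        self.children = children or []

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# 3 curves divided into 10 points each: one root, 3 children, 30 leaves,
# and only this ONE object gets passed around
root = Node(children=[
    Node(children=[Node(value=(c, i)) for i in range(10)])
    for c in range(3)])

def scale(node, factors):
    # propagate 1, len(children) or len(leaves) factors down the tree
    leaves = node.leaves()
    if len(factors) == 1:
        factors = factors * len(leaves)        # broadcast one value to all
    elif len(factors) == len(node.children):   # one per child: spread down
        factors = [f for f, c in zip(factors, node.children)
                   for _ in c.leaves()]
    for leaf, f in zip(leaves, factors):
        leaf.value = (leaf.value, f)           # stands in for "scaled point"
```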

If I understand Daniel's suggestion, it's a very interesting one. To me the PRIMARY limitation of data trees is the "fragility" of reusing code with unknown upstream structures. Given a set of functionality in a cluster (that operates happily on a single item):

point (input) -> circle -> divide curve -> circles -> divide curves -> flatten -> interpolate curve (output)

The problem here, under current cluster behavior, is the "flatten": if I supply two points I still get one curve out. I can substitute "Shift Paths" or "Trim Tree" and all will be well, but this is an easy mistake to make, and sometimes these approaches can "shift" a flat path out of existence.
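A toy reproduction of that pitfall, with strings standing in for geometry (the function and helper names are made up; this is not actual cluster internals):

```python
# Toy model of the cluster: point -> circle -> divide -> flatten -> interpolate.
def cluster(points):
    # per input point: a circle divided into 4 "points", one branch each
    tree = {(i,): [f"p{i}_{k}" for k in range(4)] for i, _ in enumerate(points)}
    # the internal 'flatten' merges every branch into a single one...
    flat = {(0,): [x for br in tree.values() for x in br]}
    # ...so the interpolation sees one list and returns one curve, always
    return {path: ["curve_through(" + ",".join(br) + ")"]
            for path, br in flat.items()}

one_point = cluster(["A"])        # one branch, one curve: as intended
two_points = cluster(["A", "B"])  # still one branch, one curve: the bug
```

Shifting paths down one level instead of flattening would keep the two point sets in separate branches and yield two curves.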

I would tweak Daniel's suggestion slightly: rather than having this operate on ANY param, it would work for cluster inputs only. You could explicitly set the "access level" of cluster inputs (so that, regardless of input structure, when I go to edit a cluster with item access, I am only confronted with one item), such that the cluster innards behave like a single "solve instance." I think you'd get around a lot of this headache.

The downside to this approach is that clusters would behave differently from the same components unclustered, but perhaps they could behave as they do currently UNLESS the user is explicit about access levels. It might also be tricky to build the logic that establishes what the appropriate "tree" for the output should be, but you managed that logic for components (item/list/tree inputs and outputs) and I think the same approach could be used in clusters.

The problem is basically user error when creating clusters. I would prefer an approach that nudges users into being less prone to these mistakes, rather than adding special cases because everyone is making the same mistake.

In this case, for example, it would be better if the 'flatten' toggle on parameters only flattened one path level down (which is the real opposite of the 'graft' toggle). That way users would hopefully be more likely to make local changes to the path structure rather than global ones.
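A sketch of the difference, again with path tuples standing in for paths: a one-level flatten really is the inverse of graft, while a global flatten collapses everything into one branch and cannot be undone.

```python
# Hypothetical illustration: trees as {path-tuple: list} dicts.
def graft(tree):
    # graft: each item gets its own trailing path index
    return {p + (i,): [x] for p, br in tree.items() for i, x in enumerate(br)}

def flatten_one(tree):
    # flatten one level: drop the last path index, merging sibling branches
    out = {}
    for path, branch in sorted(tree.items()):
        out.setdefault(path[:-1], []).extend(branch)
    return out

t = {(0, 0): ["a", "b"], (0, 1): ["c"], (1, 0): ["d"]}
# flatten_one(t) merges only siblings: {(0,): ['a','b','c'], (1,): ['d']}
# flatten_one(graft(t)) gives back t exactly, which a global flatten
# (everything into {(0,): ['a','b','c','d']}) cannot do
```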

"The downside to this approach is that clusters would behave differently to the same components unclustered [...]"

That, in the end, was why I decided to not add any special iteration options to cluster inputs. It would be reasonably trivial to populate cluster inputs piecemeal, solve their internal network in a loop and harvest the outputs progressively, but the idea that a cluster would behave differently when exploded seemed like it would cause no end of trouble.

A potential solution would be to have specific components that slice up data in a customised way, that might benefit not just cluster developers and it wouldn't require additional UI. I have to say I can't quite imagine what such components might look like though...

Yeah Andrew, this is an example of what I was saying. Of course all these problems are solvable; one can add logic to use different procedures depending on the data structure.

But I wonder: why not allow the user to control how a definition is executed? Then you realize that, in reality, there is no global control over definitions... they have no identity of their own. What we have now works, but studying the implications of this could lead to big improvements.

Maybe allowing UserObjects to work more like components (e.g. re-running the solve instance when needed) is the fastest solution. Perhaps allowing the access type of parameter components to be set would not work; I do not know. What is clear to me is that this should have a simpler solution than what we do now. I think that simplifying the complex is the engine of development.

And on the other hand, it would be very useful to give a definition an identity, as a proper object. I want to be able to call definitions from the canvas (like components, by typing their name) without adding them as UserObjects or using the Brick Box plugin. Or to automate the execution of an algorithm so it produces different results automatically. Or to save the states of the parameters in a more powerful way. Or to study bottlenecks, potential improvements, data flow, or the flow of the data structure quickly... I think there is ground for digging here.

Suggestion: switch Grasshopper 2 to a simple list of lists array system with a dirt simple way to visualize the arrays. Just like Python. Just like Basic.

"Mr. Ruttenatschow, cut down this tree!"

