Grasshopper

algorithmic modeling for Rhino

Performing an operation on each branch of a tree with all branches of another tree...

Hi all,

Can't decide if I'm making this more difficult than it really is or what. The title gives the basic description: I have Tree A with a certain number of branches, which I feed into a component, and Tree B with a different number of branches, fed into the same component. I'd like the output to be the result of the component operating on each item in Tree A against all items in Tree B, i.e. all possible combinations of paths. Ideally the output data structure would be {A_branchcount;B_branchcount}. See attached picture and GH definition. Hopefully that was clear--let me know if not.
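To make the intent concrete, here's a rough sketch in plain Python of the pairing I'm after (the trees are just stood in for by dicts keyed by path tuples, and combine is a placeholder for whatever the component actually does):

```python
def combine(a_item, b_item):
    # placeholder for whatever operation the component performs
    return (a_item, b_item)

tree_a = {(0,): ["a0", "a1"], (1,): ["a2", "a3"]}    # Tree A: 2 branches
tree_b = {(0,): ["b0"], (1,): ["b1"], (2,): ["b2"]}  # Tree B: 3 branches

result = {}
for (i,), a_items in tree_a.items():
    for (j,), b_items in tree_b.items():
        # branch {i} of A against branch {j} of B, all item combinations
        result[(i, j)] = [combine(a, b) for a in a_items for b in b_items]

# result now has 2 x 3 = 6 branches with paths {i;j}
for path, items in sorted(result.items()):
    print(path, items)
```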

Any ideas? Seems like a basic data structure thing but I'm stumped. Thanks in advance.

Austin

Replies to This Discussion

Like this? (not sure that this is what you want, mind)

Oops: wrong multiplier connected

Well, I don't follow what you're doing here 100%, but the results aren't quite what I'm looking for. I'll try to explain again and maybe that will help. I have 19 sets of points (each set has 300 points in it). I also have 69 curves. For each set of points, I would like to find the closest points on each of the curves to the original set (300 points). Thus, for ONE of the original 19 sets of points I would have (300 original points) x (69 curves) = 20,700 closest points (and, since I have 19 of these sets, I'd have 20,700 x 19 = 393,300 total points). Is that way too much data to handle? Hopefully this was helpful. I feel like I'm having difficulty explaining it clearly, sorry...
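In GHPython terms, the per-set loop I have in mind would look roughly like this (just a sketch meant to run inside a GHPython component; point_sets and curves are assumed inputs holding the 19 point lists and the 69 curve ids, and rhinoscriptsyntax's CurveClosestPoint/EvaluateCurve do the projection):

```python
import rhinoscriptsyntax as rs

# Sketch only, intended for a GHPython component.
# Assumed inputs: point_sets = 19 lists of 300 points, curves = 69 curve ids.
closest = {}  # keyed by (set_index, curve_index), i.e. the {i;j} branches I'm after
for i, pts in enumerate(point_sets):
    for j, crv in enumerate(curves):
        branch = []
        for pt in pts:
            t = rs.CurveClosestPoint(crv, pt)     # parameter of the closest point on the curve
            branch.append(rs.EvaluateCurve(crv, t))
        closest[(i, j)] = branch                  # 300 closest points per branch

# 19 sets x 69 curves x 300 points = 393,300 points in total
```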

Hmm .. the "not sure" is added for good reason(s), he he.

What I'm doing is selecting some random number of curves (not "just" 19) out of the initial collection and then ... but wait ... this is NOT what you want, so forget it.

Apparently some "mods" are required ASAP ... but 350K points appear a bit ridiculous. I think that you should provide some sketch (hopefully using LESS points: what about 10? he he) describing what you are really after (i.e. the expected result in terms of some geometry or other): most probably there are ways to do it WITHOUT 350K points (or 1M).

Haha the "not sure" is totally fine, I appreciate the attempt!

I agree 350k is pretty ridiculous and, actually, this could probably be accomplished using 100 sample points or even fewer rather than 300, so that would help things substantially. Here I'm basically trying to use Grasshopper to do some curve analysis (actually analyzing the L2 error norm of an original curve--one of the original sets of 300 points--versus the 69 curve possibilities I'm comparing it to). However, I was actually going to be interfacing GH with Matlab for this project to begin with, so I have a work-around in mind that will totally work; no worries! Just seeing if I was overlooking something simple :)
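For what it's worth, the error measure itself is the easy part once the closest points exist; roughly this (plain Python/numpy, with made-up arrays standing in for matched samples of the original curve and a candidate curve, and RMS deviation as the discrete stand-in for the L2 norm):

```python
import numpy as np

# Hypothetical matched samples: points on the original curve and their
# closest points on one candidate curve (shape (n, 3) each).
original_pts = np.random.rand(100, 3)
candidate_pts = original_pts + 0.01 * np.random.rand(100, 3)

deviation = np.linalg.norm(original_pts - candidate_pts, axis=1)  # per-sample distance
l2_error = np.sqrt(np.mean(deviation ** 2))                       # RMS ~ discrete L2 error norm
print(l2_error)
```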

Like so? By flattening the curve list and grafting the point list, you ensure that each point is evaluated against every curve. If you'd like to restructure the data tree to re-orient its organisation in favor of the curves, you can use the path mapper...(it works easily here because you don't have complex paths for your curve list. It would get decidedly trickier if you had, say, an {a;b} structure for your curves as well as your points).
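In Path Mapper terms that re-orientation is just the lexical mapping {A;B} -> {B;A}; as a plain Python sketch (with the tree again stood in for by a dict keyed by path tuples), it amounts to swapping the path indices:

```python
# Equivalent of the Path Mapper mapping {A;B} -> {B;A}:
# swap the two path indices, leave each branch's items untouched.
tree = {(0, 3): ["p0", "p1"], (1, 3): ["p2"], (0, 7): ["p3"]}

remapped = {(b, a): items for (a, b), items in tree.items()}

print(remapped)  # {(3, 0): ['p0', 'p1'], (3, 1): ['p2'], (7, 0): ['p3']}
```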

Ah yes, that's it! I tried this, but since the data isn't immediately in the form I expected, I thought it wasn't doing what I wanted it to. Path mapper should take care of that. Not so bad in terms of evaluation time either, actually. Thanks David!
