Hi all,
Suppose I have 20 curves, and I have 2000 points.
Then I use the CrvCP component to measure the distance from each point to each curve.
This results in 40,000 distance values, which is computationally very expensive.
In this case the curve data is grafted for cross-referencing.
My goal is an efficient way to select any scattered points in space that are very close (e.g. distance < 0.01) to each curve. Then I need to use those distances for something else.
Is there a more efficient method than cross-referencing every point against every curve?
Thank you very much in advance.
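For reference, the brute-force cross-reference described above looks like this in plain Python. This is only a sketch: curves are approximated as polylines of 2D tuples rather than real Rhino curves, and `dist_point_polyline` is a stand-in for the CrvCP component.

```python
import math

def dist_point_polyline(p, poly):
    """Exact distance from point p to a polyline of 2D tuples."""
    best = float("inf")
    for a, b in zip(poly, poly[1:]):
        dx, dy = b[0] - a[0], b[1] - a[1]
        L2 = dx * dx + dy * dy
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / L2))
        best = min(best, math.hypot(p[0] - a[0] - t * dx, p[1] - a[1] - t * dy))
    return best

# 20 horizontal lines and 2000 scattered points (made-up stand-in data)
curves = [[(0.0, float(k)), (10.0, float(k))] for k in range(20)]
points = [(i * 0.005, (i % 40) * 0.25) for i in range(2000)]

# 20 x 2000 = 40,000 distance evaluations -- the expensive part
distances = [[dist_point_polyline(p, c) for p in points] for c in curves]
near = [[p for p, d in zip(points, row) if d < 0.01] for row in distances]
```

Every entry of `distances` is computed whether or not the point is anywhere near the curve, which is exactly the cost the question is asking to avoid.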
Hi Kim,
Thank you very much for the suggestion.
But the Pull component still performs 6,000 distance computations for 6 curves and 1,000 points.
I was wondering if there is a way to avoid that many computations for this kind of problem.
The method I used with the CrvCP component also performs 6,000 computations in this case.
So what we did is pretty much the same: use the Equality and Cull components to check whether the points are close to each curve, then pick those points.
Or maybe these are the only ways?
Thank you again.
So, do you think there is some way for the curves to know which point is nearest without measuring the distance to the points at all?
In other words, you'd want the curves to have eyes and be able to notice which point is the nearest? Then they'd need some sort of special capability of perception.
If that's possible, please let me know.
Hi Kim,
I was thinking: suppose we have a set of curves {A, B, C, D} that are spaced apart (e.g. by 10 units) and do not intersect each other, plus 1,000 points.
Say 200 points are very close to curve A (distance < 0.01 unit) or lie on it; then curves B, C and D only need the leftover 800 points for cross-referencing. Maybe 300 points are very close to curve B, so curves C and D only need to check distances against 500 points. This might improve the computation time a bit and avoid the full 4,000 distance computations. But I was wondering if there is an even better way than this method.
Sorry for the confusion.
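The elimination scheme described above can be sketched in plain Python. This is just an illustration: curves are polylines of 2D tuples, and `assign_and_cull` is a hypothetical helper, not a Grasshopper component or an Anemone loop.

```python
import math

def dist_point_polyline(p, poly):
    """Exact distance from point p to a polyline of 2D tuples."""
    best = float("inf")
    for a, b in zip(poly, poly[1:]):
        dx, dy = b[0] - a[0], b[1] - a[1]
        L2 = dx * dx + dy * dy
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / L2))
        best = min(best, math.hypot(p[0] - a[0] - t * dx, p[1] - a[1] - t * dy))
    return best

def assign_and_cull(curves, points, tol=0.01):
    """For each curve in turn, keep the points within tol and remove
    them from the pool, so each later curve tests fewer points."""
    pool = list(points)
    result = {}
    for name, poly in curves:
        near, rest = [], []
        for p in pool:
            (near if dist_point_polyline(p, poly) < tol else rest).append(p)
        result[name] = near
        pool = rest  # leftover points go to the next curve
    return result
```

With the numbers above (curve A removing 200 points, B another 300), curve D would test only 500 points instead of 1,000. Note the worst case is unchanged: if few points are near any curve, every curve still tests almost the whole pool.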
Your assumption is a rather special case, and you'll need an iterative process that eliminates previously selected points and tests the leftover ones curve by curve, using a scripted loop or a plugin like Anemone.
Hi guys!
Have a look at this attempt.
I "subsampled" the curves into points; I thought it would ease the CPU work.
Instead of CrvCP I used the CP component: each point of the 2,000-point cloud only has to check whether there is a sampled point within the chosen distance, instead of searching along a whole curve.
CrvCP is "too" accurate... maybe what I've done is just a rough, inaccurate, noob version of CrvCP.
But it works, and it seems to be about 3x faster.
For better accuracy, the subsampling distance/density should probably be the 0.01 distance divided by sin 60°... I didn't have your data to test on...
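The subsampling trick can be sketched in plain Python with a hash grid, so each cloud point only inspects nearby cells instead of every sample. This is an illustration, not the actual CP component; the `tol + step/2` lookup radius is my own conservative stand-in for the /sin 60° correction above.

```python
import math
from collections import defaultdict

def sample_polyline(poly, step):
    """Points spaced roughly `step` apart along a polyline of 2D tuples."""
    samples = [poly[0]]
    for a, b in zip(poly, poly[1:]):
        n = max(1, int(math.ceil(math.hypot(b[0]-a[0], b[1]-a[1]) / step)))
        samples += [(a[0] + (b[0]-a[0]) * i / n, a[1] + (b[1]-a[1]) * i / n)
                    for i in range(1, n + 1)]
    return samples

def build_grid(samples, cell):
    """Hash each sample into a square cell for cheap neighbour lookup."""
    grid = defaultdict(list)
    for s in samples:
        grid[(int(s[0] // cell), int(s[1] // cell))].append(s)
    return grid

def near_any_sample(p, grid, cell, radius):
    """True if some curve sample lies within `radius` of p.
    Requires cell >= radius so checking the 3x3 neighbourhood suffices."""
    cx, cy = int(p[0] // cell), int(p[1] // cell)
    r2 = radius * radius
    return any((p[0]-s[0])**2 + (p[1]-s[1])**2 <= r2
               for i in (-1, 0, 1) for j in (-1, 0, 1)
               for s in grid.get((cx + i, cy + j), ()))
```

A point within tol of the curve can still be up to step/2 away from the nearest sample along it, so the lookup radius has to be loosened to roughly tol + step/2; any point passing this coarse test can then be handed to CrvCP for the exact distance.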
Switch the toggle to see the difference; slide the slider to move the starting curves and restart the update waves through the definition.
Maybe I got it all wrong... done in a rush, sorry.
Also, you could do it recursively.
First do a very coarse sampling (fewer points > faster), creating n clouds of points (subsets of the starting cloud).
Then, with every curve having its own cloud, do the accurate "search and dispatch"...
Do more steps if the starting cloud is massive.
Here.
3 Steps:
1 - Distance: make subsets/clouds from the main cloud, one for each curve
2 - CP: refine/shrink the clouds using subsampled points from the curves
3 - CrvCP
In step 2 the slider should probably be optimized depending on the cloud size.
Step 2 could be repeated (increasing the slider value at each step), or removed entirely if the cloud is small enough.
In this case, with 5,000 points, 1-2-3 seems as fast as 1-3...
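The 3-step idea can be sketched as a coarse-to-fine filter in plain Python. This is an illustration under stated assumptions: polylines stand in for curves, `sample_polyline` for CP, `dist_point_polyline` for CrvCP, and the `coarse`/`fine` step sizes play the role of the sliders.

```python
import math

def dist_point_polyline(p, poly):
    """Exact distance from p to a polyline (the CrvCP stand-in)."""
    best = float("inf")
    for a, b in zip(poly, poly[1:]):
        dx, dy = b[0] - a[0], b[1] - a[1]
        L2 = dx * dx + dy * dy
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((p[0]-a[0])*dx + (p[1]-a[1])*dy) / L2))
        best = min(best, math.hypot(p[0] - a[0] - t * dx, p[1] - a[1] - t * dy))
    return best

def sample_polyline(poly, step):
    """Points spaced roughly `step` apart along the polyline (the CP stand-in)."""
    out = [poly[0]]
    for a, b in zip(poly, poly[1:]):
        n = max(1, int(math.ceil(math.hypot(b[0]-a[0], b[1]-a[1]) / step)))
        out += [(a[0] + (b[0]-a[0]) * i / n, a[1] + (b[1]-a[1]) * i / n)
                for i in range(1, n + 1)]
    return out

def near_samples(points, samples, radius):
    """Keep only the points within `radius` of some sample."""
    r2 = radius * radius
    return [p for p in points
            if any((p[0]-s[0])**2 + (p[1]-s[1])**2 <= r2 for s in samples)]

def coarse_to_fine(curves, points, tol=0.01, coarse=2.0, fine=0.1):
    """Step 1: coarse samples, generous radius -> one candidate cloud per curve.
    Step 2: finer samples, tighter radius -> shrink each cloud.
    Step 3: exact distance only on the survivors."""
    result = {}
    for name, poly in curves:
        cloud = near_samples(points, sample_polyline(poly, coarse), tol + coarse / 2)
        cloud = near_samples(cloud, sample_polyline(poly, fine), tol + fine / 2)
        result[name] = [p for p in cloud if dist_point_polyline(p, poly) < tol]
    return result
```

Each radius is loosened by half the sampling step so no point within tol of the curve can be culled early; whether the middle step pays off depends on how much it shrinks the cloud, which matches the observation that 1-2-3 can end up no faster than 1-3.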
Bye