Grasshopper

algorithmic modeling for Rhino

This has to be the most amateurish, ugliest piece of code I have ever written

Using 'Partition List' to feed a long list of lines (11,000) into the "Remove Duplicates" component in chunks reduced run time from 5.5 hours to 1.1 minutes. Given that, I'll take the ugly code.

The same technique worked around the "Region Union" bug on a long list (700) of curves to merge: dividing the list into chunks of 5 curves, merging each chunk, then merging 50 of those results, and so on, kept "Region Union" from getting indigestion and let the operation complete in a short amount of time (3 minutes).
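For anyone who wants to try the same trick outside Grasshopper, here is a minimal sketch of the chunking idea in plain Python. The `same` predicate and the chunk size of 5 are assumptions standing in for the "Remove Duplicates" component's tolerance test and the 'Partition List' size; the final un-partitioned pass catches duplicates that landed in different chunks.

```python
def remove_duplicates(items, same):
    # Naive all-pairs dedup, standing in for the GH "Remove Duplicates"
    # component: keep an item only if no already-kept item matches it.
    unique = []
    for item in items:
        if not any(same(item, kept) for kept in unique):
            unique.append(item)
    return unique

def chunked_dedup(items, same, chunk_size=5):
    # Dedup each chunk separately (cheap), then run one final pass over
    # the survivors so duplicates split across chunks are still caught.
    survivors = []
    for i in range(0, len(items), chunk_size):
        survivors.extend(remove_duplicates(items[i:i + chunk_size], same))
    return remove_duplicates(survivors, same)
```

With exact equality as the predicate, `chunked_dedup([1, 2, 1, 3, 2, 4], lambda a, b: a == b)` returns `[1, 2, 3, 4]`; in Grasshopper the predicate would be a tolerance comparison of curve geometry instead.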

Hope it helps others; use some of the time saved on a run to laugh at how ugly the code is.


Replies to This Discussion

To remove duplicate curves, you need to compare every curve with every other, which is O(n²) time in the naive case. What you are doing sounds like you are sacrificing some comparisons (and I think repeating others), which can leave duplicated curves behind. Perhaps the speed improvement comes from memory effects of operating on less data at a time (in subcollections). Or maybe I have not found the logic behind this; in that case I would like to know. Great if it works for your collection, but I don't see why it is a generic solution.
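To put rough numbers on the cost argument, assuming a naive all-pairs dedup: n items need n(n-1)/2 comparisons, while deduplicating inside chunks of k needs only (n/k) · k(k-1)/2. The list size below is from the post; the chunk size of 5 is an assumption, and a final pass over the chunk survivors is still required for correctness, so the real saving depends on how many duplicates the chunks remove.

```python
def pair_count(n):
    # Number of unordered pairs among n items: n choose 2.
    return n * (n - 1) // 2

n, k = 11000, 5                        # list size from the post; chunk size assumed
full = pair_count(n)                   # one big all-pairs pass
in_chunks = (n // k) * pair_count(k)   # comparisons done inside the chunks only
# full is 60,494,500 vs 22,000 inside the chunks; the final pass over
# the survivors makes up the difference when few items are duplicates.
```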

Hi Daniel, I actually came up with this one after struggling with a tree based on one of your definitions, using your variable offset definition on a large tree set (up to 900 random points). It could be due to memory issues, but I did see the run time go from, no kidding, 5-1/2 hours down to 1.5 minutes. Since this logic simply removes duplicate curves in groups, then runs the removal again to catch the remaining duplicates, it should not be missing any curves. I haven't tested it extensively, but results were the same for a straight duplicate removal (5.5 hours) and the 'page swapped' method outlined above (1.5 minutes). So called 'page swapped' because it resembles one of the old methods Intel and others used in the 1970s to manage memory on slow processors.

This method also works around the "Region Union" bug, again when producing an outline curve from your tree definition. Not only was it fast (45 seconds), it actually worked, on a large curve set (3,000+ curves).

Is it a generic method? I don't know. It worked well in these two cases though, and it is both two orders of magnitude faster and embarrassingly idiot-simple; so simple that I wasn't sure I should show it, as I don't want the group to think I have the skills of an 8-year-old (that used to be an insult; with today's kids, it might be a compliment).

I am interested to hear if it works in other cases.

Oh! I left out the last step in the sequence: the final component should be a repeat of the "Remove Duplicates" component without partitioning the set, so that no duplicate curves are missed. The same goes for a "Region Union" sequence.

By the way, your scripts, tools and examples are fantastic! - I and others have used them as starting points for various projects.

Here is the full sequence:


Ok, then yes, but still, consider the next case: we have 20 curves, and curves 0 and 19 are the same. You make 4 sublists of 5 curves and use RemDupCrvs on each. Indices 0 and 19 will never be in the same branch (so neither gets deleted, unless at the end you flatten everything and run the component again), whereas the comparison of curves 0 and 1 has been recomputed as many times as you have repeated the process. But I do not know the reason for the increase in speed, actually.
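The counterexample reads directly as code. This is a plain-Python sketch with exact equality standing in for the curve comparison: items 0 and 19 are equal, the chunk size is 5, so they never share a chunk and both survive the chunk-only pass; only the final flattened pass removes the duplicate.

```python
def dedup(items):
    # Order-preserving dedup by equality.
    out = []
    for x in items:
        if x not in out:
            out.append(x)
    return out

items = list(range(20))
items[19] = items[0]            # indices 0 and 19 are "the same curve"

chunks = [items[i:i + 5] for i in range(0, 20, 5)]
partial = [x for c in chunks for x in dedup(c)]   # still 20 items: no chunk
                                                  # contained both copies
final = dedup(partial)                            # flattened pass: 19 items
```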
