Grasshopper

algorithmic modeling for Rhino

Hi all,

I'm working on a very complex scene in Grasshopper, and my system, with 16 GB of RAM, often freezes.
I'm looking for a way to distribute Grasshopper's calculations across several computers, or a solution to reduce calculation times and memory usage.

Do you have any suggestions about this problem?
Thanks

Luca


Replies to This Discussion

Are you absolutely certain it is because of a lack of RAM? What does the system profiler say about how much RAM you have left?

Do you have an SSD as your system drive? The pagefile is written to the hard drive, and if that is an old one, then it will be very slow.

To reduce calculation times, use the Profiler in Grasshopper that shows you the calculation times for each component.

Grasshopper cannot split the calculations among multiple machines. It does not even use more than one core. David had a very long explanation about this and it makes sense. So the only thing that will help you speed up calculation is to get a newer CPU with a higher clock speed. What you are looking for is "single-thread performance". Have a look here: http://www.cpubenchmark.net/singleThread.html

But most of the time, optimizing the calculation itself brings the biggest benefits. There are always several ways to achieve something. I work on really large calculations too, and some components can go from needing 10 seconds to less than 1 second just by using a different logic.

So what part of your calculation is taking the most time?

Hope that helps. Best, Armin.

Indeed, the problem is not RAM. Right now Grasshopper is calculating a big, complex scene and only 6 GB of my RAM is in use, leaving 10 GB free. So the problem is not RAM.

The part of my calculation that takes the most time is increasing the number of points in a Populate 3D (Pop3D) component.

This can take up to 40 minutes every time I modify the number of points in the Pop3D. I need to generate 20,000 points in Pop3D, and my system can't get through the calculation.

I'll now read your link about "single-thread performance" to learn more about this.
Thanks

Can you help me select the best CPU for what I'm looking for?

Luca

Hi Luca,

you won't find a significantly better 'CPU solution' for this, whatever that means. The problem is that you're using an algorithm (Populate3D) which just doesn't scale well to large sets. Its 'Big-O' runtime prohibits using it for large numbers of points.

Perhaps using a different algorithm will work better for you. What is it you're trying to do with your 20k points?
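To illustrate David's scaling point, here is a rough plain-Python sketch (not Populate3D's actual code, just an assumption about how even-distribution samplers typically work): each new point must be distance-checked against all accepted points, so the work grows roughly quadratically, while plain uniform sampling is linear.

```python
import random

def populate_like(n, min_dist, tries=30):
    """Rejection sampling with a minimum-distance check (a guess at what
    a Populate3D-style even distribution costs). Each candidate point is
    compared against every accepted point, so runtime grows roughly O(n^2)."""
    pts = []
    while len(pts) < n:
        for _ in range(tries):
            p = (random.random(), random.random(), random.random())
            # accept only if far enough from every existing point
            if all((p[0]-q[0])**2 + (p[1]-q[1])**2 + (p[2]-q[2])**2
                   >= min_dist**2 for q in pts):
                pts.append(p)
                break
        else:
            break  # give up if the space is too crowded to place more points
    return pts

def uniform(n):
    """Plain uniform sampling: O(n), no pairwise checks at all."""
    return [(random.random(), random.random(), random.random())
            for _ in range(n)]
```

At 20,000 points the quadratic version does hundreds of millions of distance checks, which is consistent with the 40-minute recalculation times described above.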

I want to make a particularly complex reticular system in which a certain number of points must all be connected together with lines in a random fashion.
What is a valid, more efficient alternative to Populate3D?
Thanks
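Since the goal is random connections anyway, the even spacing Populate3D provides may not be needed at all. A minimal sketch of that idea in plain Python (the function names are hypothetical; in a GhPython component the tuples would become Rhino points and lines):

```python
import random

def random_points(n, size=100.0):
    """Generate n random points in a size x size x size box. This is O(n),
    with no distance checks, so 20,000 points is near-instant."""
    return [(random.uniform(0.0, size),
             random.uniform(0.0, size),
             random.uniform(0.0, size)) for _ in range(n)]

def random_lines(points, n_lines):
    """Connect n_lines random pairs of distinct points."""
    return [tuple(random.sample(points, 2)) for _ in range(n_lines)]

pts = random_points(20000)       # the 20k points mentioned in the thread
lines = random_lines(pts, 5000)  # 5,000 is an arbitrary example count
```

The trade-off is that plain uniform sampling can produce clusters and gaps, whereas Populate3D spaces points evenly; for a random reticular structure, that unevenness may be acceptable or even desirable.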
