Grasshopper

algorithmic modeling for Rhino

Grasshopper (and indeed Rhino) can be performance-critical, since both potentially deal with large amounts of data and computation. Although we aim to make our software runnable on low-end, over-the-counter computers, you may still run into serious performance issues. We have no strict recommendations or requirements, but here are the basic rules when it comes to picking hardware for Rhino and Grasshopper:

  1. Memory is key. If you deal with lots of data you need lots of memory to store it in. Since Windows itself (and any other applications that are running) require a lot of memory as well, you should make sure that you have sufficient RAM. Once you run out of RAM, Windows starts to use the hard-disk as virtual memory and when that happens you can say goodbye to performance. If you're running 64-bit Windows and a 64-bit version of Rhino, then there's really no upper limit to the amount of RAM you could install. I recommend getting at least 8GB of high speed RAM, but if you have money for more, go for it.
  2. Graphics cards are important for 3D display, but not much else. Unless you are running software that specifically uses the GPU for computations (a lot of modern render engines, for example), the only purpose of a video card is to quickly display pixels on the screen. Be sure to get a fairly high-end card from a trusted manufacturer (basically ATI or NVidia). Do not, under any circumstances, settle for an Intel graphics card.
  3. Processors are tricky, so pay attention. It is important that you get as much bang for your buck as possible, since computational speed is often the bottleneck. But remember that Grasshopper and (most of) Rhino are single-threaded applications* and therefore do not benefit from multiple cores. Do not be bamboozled by advertised processor speeds, as those speeds may be given as a sum total over all cores. That is, an 8-core processor with a total clock rate of 6GHz gives you only 6/8 = 0.75GHz per core. If all you care about is Rhino and Grasshopper, you'd be better off with a dual core @ 2GHz or even a single core @ 1.8GHz.
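The per-core arithmetic in point 3 is easy to sketch; the figures below are the illustrative ones from the post, not real CPU specs:

```python
# Worst-case reading of a spec sheet: the advertised clock rate is a
# sum over all cores (illustrative figures from the post above).
total_ghz = 6.0
cores = 8
per_core_ghz = total_ghz / cores
print(per_core_ghz)  # 0.75

# For a single-threaded app like Grasshopper, only per-core speed matters,
# so a plain dual core at 2GHz per core comes out ahead:
dual_core_ghz = 2.0
print(dual_core_ghz > per_core_ghz)  # True
```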

Some further points to take into account:

  • The Grasshopper GUI is drawn using GDI+, which is not hardware accelerated. Grasshopper frame rates therefore depend mostly on processor speed.
  • Grasshopper 3D previews are drawn in either OpenGL or GDI, which are hardware accelerated. A better graphics card will improve performance here.
  • Grasshopper does not have heavy traffic to and from the disk. HD access speed is rarely a bottleneck.
  • Grasshopper requires .NET 3.5 when running on Rhino 4, and .NET 4.0 or later when running on Rhino 5. Be sure your version of Windows supports these.
  • Rhino and Grasshopper sometimes run on virtual machines such as Parallels and VMware, and sometimes they don't. Even when they do, performance is typically pretty bad. Robert McNeel & Associates does not support these environments.

This may change in the future, but not the foreseeable future.


Replies to This Discussion

Thank you, David. This information is specific to what I was currently interested in, and much appreciated.

Walt Lampert

Nice to read this. Just one curiosity: what about a hardware-accelerated interface? OpenGL could open new possibilities (and close and complicate others, of course): 3D interface capabilities, faster video and image decoding on the canvas, complex effects applied to elements in the future (DOF to focus attention on nodes... I know it's very fancy and stupid, but I'm only dreaming...), 3D and multidirectional connections of elements (OK, OK... we talked about this some time ago and it's not an option... but... dreaming of the future again... :P)...

It's possible, of course. I began with GDI+ because I knew how to use it; I still can't write OGL code today. It would be quite a significant job, of course, so until the current system clearly becomes inadequate I think I'll stick with GDI+.

--

David Rutten

david@mcneel.com

Poprad, Slovakia

Hi David,

Will there be a multi-threaded Rhino in the future? I just came across the topic because of recurring performance issues regarding Grasshopper and components like boxmorph. My hardware is not the worst.

Greetings

ante

I just found your comment from April regarding the same topic:

http://www.grasshopper3d.com/forum/topics/performance-of-grasshopper

But how do they accomplish stuff like VRay scatter with a million objects?
There must be a way :)

By the way, did you manage to get your zoomable shaded dots into Prezi, and wouldn't that be a solution to other performance issues within Grasshopper (with the help of e.g. OpenSceneGraph)?

greetings

Rhino is already partially multi-threaded: some portions of the runtime code use multiple processors*. As time goes by we'll probably add multi-threading to more and more algorithms. For Rhino 5 we tried to at least make all our algorithms thread-safe, so they can all be called from multiple threads; this is the first step towards multi-threading.

But multi-threading is not just something you switch on or off, it's an approach. Let's take the meshing of Breps for example. Let's assume that at some point one or more breps are added to the document. The wireframes of these breps can be drawn immediately, but the shading meshes need to be calculated first. How do we go about doing this? Allow me to enumerate some obvious solutions:

  1. We put everything on hold and compute all meshes, one at a time. Then, when we're done, we yield control back to the Rhino window so that key presses and mouse events can once again be processed. This is the simplest of all solutions, and also the worst from the user's point of view.
  2. We allow the views to be redrawn and mouse events and key presses to be handled, but we perform the meshing on a background thread. I.e. whatever processor cycles are left over from regular use are now put to work computing meshes. Once we're done computing these meshes we can start drawing the shaded breps. This is a lot better, as it doesn't block the UI, but it also means that for a while (potentially a very long time) our breps will not be shaded in the viewport. This approach is already a lot harder from a programming perspective, because you now have multiple threads all with access to the same breps in memory, and you need to make sure they don't perform conflicting operations. Rhino already does this (and has done so for a long time) in a lot of commands; otherwise you wouldn't be able to abort meshing/intersections/booleans etc. with an Escape press.
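The background-thread approach in option 2 can be sketched in Python. Everything here is a hypothetical stand-in for the real Rhino internals: `mesh_brep` fakes the expensive meshing call, and the abort event plays the role of an Escape press cancelling outstanding work:

```python
import queue
import threading

def mesh_brep(brep):
    # Hypothetical stand-in for the real (expensive) meshing routine.
    return "mesh-of-" + brep

def mesh_worker(todo, done, abort):
    # Runs on a background thread; the UI thread stays responsive and can
    # set `abort` (e.g. on an Escape press) to cancel the remaining work.
    while not abort.is_set():
        try:
            brep = todo.get_nowait()
        except queue.Empty:
            break
        done.put((brep, mesh_brep(brep)))

todo, done = queue.Queue(), queue.Queue()
abort = threading.Event()
for b in ["brep-a", "brep-b", "brep-c"]:
    todo.put(b)

worker = threading.Thread(target=mesh_worker, args=(todo, done, abort))
worker.start()
worker.join()  # a real UI would poll `done` each frame instead of blocking

meshes = dict(done.get() for _ in range(done.qsize()))
print(meshes["brep-a"])  # mesh-of-brep-a
```

The key point from the post is visible in the structure: the meshing results trickle in while the rest of the program keeps running, and both threads touch shared state (the queues), which is exactly where the thread-safety work comes in.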

So we can compute the meshes on the UI-thread or on a background thread. How about using our multiple cores to speed up the process? Again, there are several ways in which this can be achieved:

  1. Say we have a quad-core machine, i.e. four processors at our disposal. We could choose to assign the meshing of the first brep to the first processor, the second brep to the second processor, the third brep to the third processor and so on. Once a processor is done with the meshing of a specific brep, we'll give it the next brep to mesh until we're done meshing all the breps. This is a good solution when multiple breps need to be meshed at once, but it doesn't help at all if we only need to compute the mesh for a single brep, which is of course a very common case in Rhino.
  2. To go a level deeper, we need to start adding multi-threading to the mesher itself. Let's say that the mesher is set up in such a way that it will assign each face of the brep to a new core, then -once all faces have been meshed- it will stitch together the partial meshes into a single large mesh. Now we've sped up the meshing of breps with multiple faces, but not individual surfaces.
  3. We can of course go deeper still. Perhaps there is some operation that is repeated over and over during the meshing of a single face. We could also choose to multi-thread this operation, thus speeding up the meshing of all surfaces and breps.
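Option 1 above, one brep per core with each worker pulling the next unmeshed brep as it finishes, maps directly onto a thread pool. Again, `mesh_brep` is a hypothetical stand-in, not an actual Rhino SDK call:

```python
from concurrent.futures import ThreadPoolExecutor

def mesh_brep(brep):
    # Hypothetical stand-in for the expensive per-brep meshing call.
    return "mesh-of-" + brep

breps = ["brep-%d" % i for i in range(10)]

# Four workers ~ four cores: the pool hands each idle worker the next
# brep in the list until all breps are meshed.
with ThreadPoolExecutor(max_workers=4) as pool:
    meshes = list(pool.map(mesh_brep, breps))

print(meshes[0])  # mesh-of-brep-0
```

As the post notes, this layout only helps when several breps need meshing at once; a single brep still occupies just one worker.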

All of the above approaches are possible, some are very difficult, some are actually not possible if we're not allowed to break the SDK. A further problem is that there's overhead involved with multi-threading. Very few operations will actually become 4 times faster if you distribute the work across 4 cores. Often one core will simply take longer than the other 3, often the partial results need to be aggregated which takes additional cycles and/or memory. What this means is that if you were to apply all of the above methods (multi-thread the meshing of individual faces, multi-thread the meshing of breps with multiple faces and multi-thread the meshing of multiple breps) you're probably worse off than you were before. 
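The "very few operations become 4 times faster on 4 cores" observation is essentially Amdahl's law: the serial part (stitching partial meshes, aggregating results, thread overhead) caps the achievable speedup. A quick worked example, with a parallel fraction chosen purely for illustration:

```python
def amdahl_speedup(parallel_fraction, cores):
    # Amdahl's law: overall speedup when only `parallel_fraction` of the
    # work can be spread across `cores`; the rest stays serial.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even if 90% of the meshing parallelizes perfectly, 4 cores give ~3.1x:
print(round(amdahl_speedup(0.9, 4), 2))      # 3.08
# ...and adding cores forever still tops out at 10x:
print(round(amdahl_speedup(0.9, 10**9), 2))  # 10.0
```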

--

David Rutten

david@mcneel.com

Poprad, Slovakia

* an example would be the z-sorting of objects in viewport prior to repainting, which is a step performed on every redraw as far as I know.

Hi David,

I'm impressed by the insight you are giving. So I can imagine what is going on in the black box.
So what do you think about implementing trimmed NURBS in the scene-graph method, considering the release of the MIC Xeon Phi? Is this the way for Grasshopper to implement hardware acceleration intelligently? http://hss.ulb.uni-bonn.de/2008/1614/1614.pdf

I hope I am not pondering too much :)

Hi! I'm an interior design student in Dubai. I am fairly new to the whole Rhino/Grasshopper scene. I am planning to buy a new laptop for this, and I must say I found this thread very helpful! I would just like you to explain further the 32-bit vs. 64-bit criteria: which one would you say is best for Rhino and Grasshopper? For about a month now I have been using Rhino on a 32-bit Sony Vaio with 4 GB of RAM, which I purchased 4 years back. It kept crashing when I was using the paneling tools and said that the memory was full. I would like to know what went wrong...

Any help will be much appreciated!

Thank you. 

You should buy the 64-bit one, plus as much RAM as is possible/affordable. It's not only useful for Rhino: e.g. 3ds Max can take any amount of RAM for rendering (otherwise you'll have to render in regions), and Photoshop can also use quite a lot of RAM.

If you want to know what went wrong: you simply used up all available RAM, and the paneling tools wanted more.

Hmm, I see your point... Thanks, I'll keep that in mind when I buy my PC.

Just so you know, a 32-bit system is actually limited in the amount of RAM it can use at any time... it's basically capped at around 3 GB, so anything you might have above that is completely useless in Rhino/GH, which can be crippling. Mateusz is right: get the 64-bit version, and get as much memory as possible.
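The cap comes straight from pointer width: a 32-bit process can address at most 2^32 bytes, and Windows reserves part of that address space for itself, hence the roughly 3 GB usable in practice. The arithmetic:

```python
# Address space of a 32-bit vs. 64-bit process, in GB:
gb = 1024 ** 3
print(2 ** 32 // gb)  # 4           -> hard 4 GB ceiling for any 32-bit process
print(2 ** 64 // gb)  # 17179869184 -> effectively unlimited for 64-bit
```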

© 2021 Created by Scott Davidson.