Grasshopper

algorithmic modeling for Rhino

Which CPU would be faster for Grasshopper calculations?

http://cpuboss.com/cpus/Intel-Core-i7-6700HQ-vs-Intel-Core-i7-2600

These are the two computers that I own. Which aspects of the CPU are most important for calculation speed?


Replies to This Discussion

Clock speed of individual cores. The number of cores doesn't matter for Grasshopper1.

If you're buying for the future, then the number of threads/logical processors becomes a factor. Rhino6 will already support a fair amount of multi-threading and Grasshopper2 will as well. But until that happy day comes, the clock speed per core is the only relevant property. The i7 2600 seems to have the higher speed.

Do note that a single physical core may contain two or even more logical processors. For example, I have an i7-6700K (4GHz), which is officially a quad-core chip; however, I see eight threads in my task manager because each core contains two logical processors:
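As an aside, you can check the logical-processor count from a script. A minimal sketch using Python's standard library (note that `os.cpu_count()` reports logical processors, not physical cores):

```python
import os

# os.cpu_count() counts logical processors (threads), so a quad-core
# CPU with Hyper-Threading will typically report 8 here.
logical = os.cpu_count()
print(logical)
```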

What about options like the "parallel" toggle in Ladybug? I remember it stating that it can use multiple threads, right? So it also depends on certain components at the moment.

Btw, while I have you here David: there is a bug where, if you are working on one Grasshopper file and then open it again in another instance of Grasshopper, it prompts with the autosave recovery function. If you recover the autosave and don't save the file when the second instance is closed, then a crash of the first instance leaves you with only the original file and no autosave. And it's not in the autosave folder either.

So it also depends on certain components at the moment.

That will always be the case. Code either is or isn't multi-threadable, and it is up to the developer in question to either use it or not. Even in GH2 there will be bits that do not support MT.

I'll look into the double-open-bug, not sure I can really fix it though without rewriting a huge amount of the file handling code. 

An option to disable autosave deletion would be awesome, for those of us who love to push Grasshopper to its limits and crash it every few steps! Or, actually, an option to keep the last x autosaves for the same file.

Oh, and I was wondering: on 64-bit, when Grasshopper uses all available RAM for one calculation, does the rest of the calculation use hard-drive paging? And is the calculation time then bottlenecked by the hard-drive speed? My understanding might have a lot of fallacies, though.

Python multithreading is especially easy and effective, but you must learn RhinoCommon calls too, since it bogs down if you invoke node-in-code Grasshopper components. It uses 100% of the CPU very effectively. I do have to add an index tag to each result, though, so I can later re-order the output to match the list sequence of the input.
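The index-tag pattern described above can be sketched with Python's standard `concurrent.futures` module (the names and the toy `slow_work` function here are illustrative, not the poster's actual script):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def slow_work(item):
    # Stand-in for a slow RhinoCommon call; here just a toy computation.
    return item * item

def parallel_map(items):
    ordered = [None] * len(items)
    with ThreadPoolExecutor() as pool:
        # Tag each future with the index of its input item...
        futures = {pool.submit(slow_work, item): i
                   for i, item in enumerate(items)}
        # ...so results that finish in arbitrary order can be
        # written back to the slot matching the input sequence.
        for fut in as_completed(futures):
            ordered[futures[fut]] = fut.result()
    return ordered
```

Without the tags, collecting results in completion order would scramble the output list relative to the input.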

As a side-note, I was just stress-testing some Grasshopper2 bitmap operations (specifically, Gaussian blurring) to see how the multi-threading holds up. The code takes about 50 seconds to blur two hundred 500×500 images using blurring radii smoothly ranging from 2 to 200 pixels, i.e. about 650 milliseconds for a blurring radius of 200 pixels, and ~10 milliseconds for a radius of 2 pixels.

During the process, my CPU load varies between 80% and 90%, but that includes all the stuff the system is doing (always a few percent).

Hey David, I am back from a short break, doing some Levenshtein distance work, and was wondering whether it is MT-capable in GH2? And if so, would I be able to transfer internalised components to GH2?

And would it be possible to code a Levenshtein distance component that stops calculating the distance after a chosen threshold (x), effectively cutting down on calculation time? So let's say I have a string with 600 characters: if the distance reaches, say, x = 10, it jumps to the next string and only reports a result of 10. I bet it's possible, but would it actually cut down on calculation time, or would the check for whether it has reached x itself cost computing resources?

Probably possible, but is Levenshtein distance slow? Only last week I rewrote the string distance methods in GH2 core (I implemented the Hamming and Damerau-Levenshtein metrics as well), but I did not include a cutoff option.

Checking two numbers for larger-than-ness is pretty fast, so my guess is it would speed things up. However if the speedup means the algorithm now takes 15 microseconds to complete instead of the 25 microseconds it would otherwise have taken, is it really worth it?
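The cutoff idea can be sketched in Python (this is an illustrative sketch, not the attached C# file). The standard dynamic-programming table is filled row by row, and since no cell can be smaller than the minimum of the row it derives from, the whole computation can bail out once an entire row reaches the cap:

```python
def levenshtein_capped(a, b, cap):
    """Levenshtein distance between a and b, clamped at cap:
    returns cap as soon as the distance is guaranteed to reach it."""
    # The length difference alone already forces that many edits.
    if abs(len(a) - len(b)) >= cap:
        return cap
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        # Every later cell is at least the minimum of this row, so
        # once the whole row reaches cap the answer cannot beat it.
        if min(curr) >= cap:
            return cap
        prev = curr
    return min(prev[-1], cap)
```

For mostly-dissimilar strings the early exit triggers after only a few rows, which is where the speedup discussed below comes from.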

Well, right now each of my strings has 600 characters, there are 10,000 strings in each branch, and there are 40 branches. And I want to check one branch's contents against the contents of the rest of the branches, 1:39. Quickly enough the numbers rack up. So by setting the cutoff value at 10 I could decrease the calculation time up to 60 times, and possibly we are talking 1 day vs 60 days here. Well, 60× is the best-case scenario; possibly halve that.
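A back-of-envelope sketch of that estimate (my assumption here is that with the cutoff, most dissimilar pairs abandon the 600-row DP table after roughly 10 rows):

```python
strings_per_branch = 10_000
branches = 40
chars = 600
cutoff = 10

# One branch compared against the other 39 branches, all pairs:
pairs = strings_per_branch * (branches - 1) * strings_per_branch

# Full Levenshtein fills a 600x600 table per pair; with the cutoff,
# dissimilar pairs can bail out after about `cutoff` rows.
full_cells = pairs * chars * chars
capped_cells = pairs * cutoff * chars
speedup = full_cells // capped_cells  # chars / cutoff
```

This is where the "60 times" best case comes from: 600 rows shrunk to ~10. Pairs that really are similar still fill most of the table, which is why the realistic gain is smaller.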

I attached a file with a C# component which contains a dumbed down version of my string distance algorithm. It doesn't allow for string comparisons and it treats each char as a comparable element.

I think I added a maximum distance threshold test, but I have not proven the mathematical correctness of it.

Either way, perhaps you can use this to speed up your process.

Attachments:

© 2024   Created by Scott Davidson.   Powered by
