Grasshopper

algorithmic modeling for Rhino

Having a difficult time scaling up with Kangaroo and some curve/brep components

I often have a difficult time when I try to enter a production phase as the complexity of a definition grows. Rhino + Grasshopper frequently uses 14 GB of RAM, especially since I have been using Kangaroo simulations. Recently I had to walk away for about 45 minutes just for a Kangaroo simulation to reset.

I feel scalability and performance gains with Grasshopper are a difficult issue to talk about, unlike with some GPU rendering programs. I understand it's really complex, but it's hard not knowing how long tweaking one little parameter will take... I am sure there are always better and smarter ways to model what I am doing, but sometimes brute-force computing is simply necessary.

Well, I guess it was simulating a fair amount. It had gravity, lots of springs, pull-to-mesh and so on, and it was behaving like a melting sheet of cheese on a specially designed grill. Let's see: a sheet of 15 x 15 quads; then in the second simulation (the one that took 45 minutes to reset on an i7-4700MQ Haswell quad core with 16 GB of DDR3 RAM) each of those 15 x 15 tiles was subdivided into 8 x 8 quads, except they were divided into triangles, so that would be 15 x 15 x (64 x 2). Oh, and there were 8 sheets, so that comes to 230,400 triangles. Really not that many by today's standards, I think, when I see Softimage doing physics simulations of millions of particles. The simulation itself ran quite well; I was able to run a simulation at least once a minute.
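As a quick sanity check, those counts multiply out like this in plain Python:

    # sanity check of the triangle count quoted above
    base_quads = 15 * 15         # quads in the base sheet
    tris_per_quad = 8 * 8 * 2    # each quad subdivided 8 x 8, then split into triangles
    sheets = 8                   # number of sheets in the simulation
    print(base_quads * tris_per_quad * sheets)   # 230400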

My general experience is that with a modelling approach that starts from simple geometric principles, once it is applied to a large number of objects, debugging becomes a significant task - making sure data branches are organized well and so forth. I feel that identifying the bottleneck, and being able to see it, is really important.

Is it possible to have a resource monitor of currently running operations? Something really simple and low-res, like a pie chart or a partition-tree type of visualization. I am not knowledgeable enough about programming to tell how much resource it would take to keep a monitor of this type running. The millisecond labels are really great, though.
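In the meantime, a rough way to get per-operation numbers is to wrap the suspect call in a GhPython component and report elapsed time and process memory around it. A minimal sketch, where heavy_operation is only a placeholder and x / a are the component's default input and output:

    # minimal sketch for a GhPython component: time a block of work and report the
    # Rhino process working-set memory around it; `heavy_operation` is a placeholder
    # for whatever call you suspect is the bottleneck, `x` is the component's input
    import time
    import System.Diagnostics as sd

    def heavy_operation(data):
        return data   # placeholder

    proc = sd.Process.GetCurrentProcess()
    mem_before = proc.WorkingSet64 / (1024.0 * 1024.0)   # MB

    t0 = time.time()
    result = heavy_operation(x)
    elapsed_ms = (time.time() - t0) * 1000.0

    proc.Refresh()
    mem_after = proc.WorkingSet64 / (1024.0 * 1024.0)

    a = "%.1f ms, working set %.0f -> %.0f MB" % (elapsed_ms, mem_before, mem_after)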

A debug on/off mode would be really great. vvvv has this kind of resource monitor showing how much resource certain nodes are using. I don't know if Softimage ICE has this, but from the tutorials I haven't seen a lot of waiting with ICE. I think the ICE tree shows a progress bar going from one frame to another, although I am not 100 percent sure.

In general, what can I do if I just need a lot more brute-force computation for a task? Do I need to take coding more seriously and get ready to turn my definitions into multithreaded Python code if I want to work with high-poly geometry in Grasshopper?

[Attached screenshot: 2014-02-07 07.21.50]


Replies to This Discussion

Hi Youngjae,

Wow - 45 minutes to reset is really bad, even for 230k triangles!

Would you be willing to send me the definition so I can try to figure out what is going wrong here, or which part is causing the bottleneck? Or at least the numbers of each type of force, and the complexity of any collision volumes.

The Reset part of Kangaroo in particular is still very un-optimized, and I think I can improve at least this part relatively easily. For the actual simulation I'm also working on getting more of the force calcs to run in parallel, so it can make use of multiple cores.

There are several things to understand about GH 1.0

a) Grasshopper is single-threaded. That means that of your 8 virtual HT cores only one will be used and the rest will idle. Some heavy components such as Kangaroo might internally divide heavy loads across multiple cores, though.

b) Grasshopper will create a complete copy of your data every time it is passed to another component. So with only the subdivision component and one mesh param after Kangaroo for easier display, your 230k triangles have already tripled, not taking into account any geometry before the SubD.

To answer your last question: yes, if you don't want GH to copy your stuff numerous times and want to leverage multithreading, you will have to keep everything inside your own component.
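For what that can look like in practice, here is a minimal GhPython sketch using the ghpythonlib.parallel helper that ships with recent GhPython builds to map a read-only query over a list on multiple threads. The input names pts and mesh are just placeholders for the component's own inputs, and anything that writes to the document should stay out of the worker function.

    # sketch: spread independent per-item work across cores inside one GhPython
    # component; assumes inputs named `pts` (points) and `mesh` (a Mesh), and that
    # the per-item work is read-only and touches no shared state
    import ghpythonlib.parallel

    def pull_one(pt):
        # read-only closest-point query; generally safe from worker threads
        return mesh.ClosestPoint(pt)

    # map pull_one over every point on multiple threads; results come back in order
    a = ghpythonlib.parallel.run(pull_one, pts, False)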

b) Grasshopper will create a complete copy of your data every time it is passed to another component...

As I understand it, it is not a complete copy but a pointer, unless there is some sort of transformation, in which case a new entity is created.

Correct, data is duplicated only when it is about to be changed. So the same instance of - say - a brep may be shared amongst many components. Operations like sorting, culling and tree splitting also do not change the data itself, only how the data is organised in a data tree, so these operations do not duplicate the data either.

--

David Rutten

david@mcneel.com
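As a rough analogy of that copy-on-write idea (plain Python, not Grasshopper's actual internals): everything hands around the same underlying object, and a private duplicate is made only at the moment something is about to change it.

    # conceptual copy-on-write wrapper - an analogy only, not GH's implementation
    import copy

    class SharedData(object):
        def __init__(self, payload):
            self._payload = payload

        def read(self):
            # sorting, culling, re-organising into trees: no duplication needed
            return self._payload

        def modified(self, change):
            # only when the data is about to change do we pay for a duplicate
            duplicate = copy.deepcopy(self._payload)
            change(duplicate)
            return SharedData(duplicate)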

You mention that a brep may be shared. I am using breps as well as meshes. I am using breps, especially Surface Split, to create a forward-kinematics-like control rig onto which a sheet of spring-loaded mesh gets attracted.

It's a way of simulating and studying a sheet material weighing around 45 kilos. I am trying to minimize the overall surface change throughout the process.

Is there any difference between working with brep components and mesh components in terms of memory use? (Although it's not an issue anymore, because I added 16 GB more.)

If I have some time, I would like to make a survey page posting benchmarks for components handling growing numbers of items, to see if the computing time scales linearly. The ones that are especially causing delays are Join Curves and Surface Split. Thank you, David, for the info!
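If you do put that survey together, a small RhinoCommon harness along these lines (run from GhPython or the Rhino Python editor) would record how Curve.JoinCurves behaves as the segment count grows; the zig-zag generator is only there to make the test self-contained.

    # rough benchmark sketch: does Curve.JoinCurves scale linearly with input size?
    import time
    import Rhino.Geometry as rg

    def make_segments(n):
        # n connected line segments forming a zig-zag
        pts = [rg.Point3d(i, i % 2, 0) for i in range(n + 1)]
        return [rg.LineCurve(pts[i], pts[i + 1]) for i in range(n)]

    for n in [1000, 2000, 4000, 8000, 16000]:
        segs = make_segments(n)
        t0 = time.time()
        joined = rg.Curve.JoinCurves(segs)
        dt = (time.time() - t0) * 1000.0
        print("%6d segments -> %d curve(s), %.1f ms" % (n, len(joined), dt))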

Is there any difference between working with brep components and mesh components in terms of memory use?

That may not be the right sort of question. Meshes can be both very low memory and very high memory, depending on their accuracy. However it is unlikely that memory considerations are relevant when you are dealing with a small number of objects. 

The real question is what your algorithm works best with. Can it handle both meshes and nurbs and polysurfaces?

--

David Rutten

david@mcneel.com

I can say that as far as Kangaroo is concerned, the functions CollideMesh and PullToMesh are much faster than their corresponding brep versions.

Indeed, meshes tend to be vastly quicker for things like ray intersections and collisions. Breps, especially if they have curved and/or trimmed faces, take a long time to solve.

--

David Rutten

david@mcneel.com
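To make that concrete, here is a hedged sketch of the kind of comparison involved: pulling the same set of points to a brep via Brep.ClosestPoint and to a meshed copy of it via Mesh.ClosestPoint. It assumes a GhPython component with inputs named brep and pts.

    # sketch: compare "pull to brep" with "pull to mesh" for the same shape
    import time
    import Rhino.Geometry as rg

    # mesh the brep once up front and join the per-face meshes into one target
    mesh = rg.Mesh()
    for m in rg.Mesh.CreateFromBrep(brep, rg.MeshingParameters.Default):
        mesh.Append(m)

    t0 = time.time()
    on_brep = [brep.ClosestPoint(p) for p in pts]
    brep_ms = (time.time() - t0) * 1000.0

    t0 = time.time()
    on_mesh = [mesh.ClosestPoint(p) for p in pts]
    mesh_ms = (time.time() - t0) * 1000.0

    a = on_brep
    b = on_mesh
    print("brep: %.1f ms, mesh: %.1f ms" % (brep_ms, mesh_ms))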

Sorry, I forgot to include a video link.

This is a general approach:

https://vimeo.com/87455161

I shouldn't try to turn this into a case-by-case mesh-vs-brep thread, although that already seems a little more specific than the thread title...

And here I have a generic application of this approach to surfaces created from a two-dimensional "sketch" of a kd-tree built from random floats, extruded in the Z direction. I did it using only Grasshopper components for now.

But then I have to analyze the surface curvature, because any tangent circle on the surface should have less than 5 cm diameter. The mesh I want to analyze has around 400k faces at the moment. When I intersect this mesh with a plane every 1 cm or so, the mesh intersection works in no time, but then I never get to the end of joining all the little lines into curves.
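One thing worth trying for that sectioning step: RhinoCommon's mesh/plane intersection hands back whole polylines per section, so the little segments never need to be joined at all. A minimal sketch, assuming a GhPython input named mesh and a model in centimetres (the 1 cm spacing and world-Z direction are only illustrative):

    # sketch: section a mesh with planes every 1 cm and get whole polylines back,
    # skipping the separate "join little lines into curves" step entirely
    import Rhino.Geometry as rg
    import Rhino.Geometry.Intersect as ri

    bbox = mesh.GetBoundingBox(True)
    step = 1.0   # 1 cm, assuming the model is in centimetres

    sections = []
    z = bbox.Min.Z
    while z <= bbox.Max.Z:
        plane = rg.Plane(rg.Point3d(0, 0, z), rg.Vector3d.ZAxis)
        polylines = ri.Intersection.MeshPlane(mesh, plane)
        if polylines:
            # each polyline can become a curve for downstream analysis
            sections.extend([pl.ToNurbsCurve() for pl in polylines])
        z += step

    a = sections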

It's the curves of the surface I need, so that I can interpolate the points using the Discontinuity component, Evaluate Curve and Tangent Circle, and then cull the circles by diameter to see where the surface design could damage the material or cause cosmetic defects.
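For the tangent-circle check itself, the osculating circle at a curve parameter follows directly from the curvature vector: its diameter is 2 / |k|. A hedged sketch of culling by that diameter, assuming the section curves come in as a GhPython input named curves and the limit is 5 cm; the sample count is arbitrary:

    # sketch: sample each section curve, get the osculating ("tangent") circle
    # diameter from the curvature vector, and keep the spots that exceed 5 cm
    limit = 5.0       # cm
    samples = 100     # parameters per curve, arbitrary

    flagged = []
    for crv in curves:
        params = crv.DivideByCount(samples, True) or []
        for t in params:
            k = crv.CurvatureAt(t)        # curvature vector, |k| = 1 / radius
            if k.Length > 1e-9:
                diameter = 2.0 / k.Length
                if diameter > limit:
                    flagged.append(crv.PointAt(t))

    a = flagged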

I was hoping to use either Galapagos or Octopus to minimize the number of tangent circles that exceed 5 cm on the surfaces, once I have recorded a Kangaroo simulation run, rather than having to run and rerun Kangaroo for each surface individually to simulate and analyze the mesh outputs.

When I design the support structure for the panels, which depends on the shape of each panel, I also want to see the total material length of the support structures and their efficiency before I make any changes to the initial control brep (shown in green, blue, pink and white).

For me it's important to show how the initial floats can be altered (by swarm behavior, or some other algorithm) and have the entire process reset, analyze and record data. Because the sheet material is formable but NOT FLEXIBLE and can take on many possible shapes, I wanted to create a definition robust enough for many iterations, so that I can navigate between the possibilities.
