generative modeling for Rhino
I've been using NumPy matrices in a Python component in Grasshopper to speed up some relatively heavy data analysis.
When the component runs, I can see Rhino's memory allocation increasing as it holds all this data in memory. After a certain threshold is exceeded (around 1 GB), it stops increasing, Rhino hangs, as does Grasshopper, and I get a popup entitled "Server Busy" with the message "This action cannot be completed because the other program is busy. Choose 'Switch To' to activate the busy program and correct the problem." All I can do then is end the process from Task Manager.
Does anyone recognise this issue? I don't think it's down to running out of memory, as I have 24 GB available. Could it be because I'm running 32-bit Rhino (so I can use NumPy) on a 64-bit machine? I also don't think it's so much to do with the way Grasshopper holds data, as the same error occurs when I run scripts in the Rhino environment. (In that regard, perhaps I should be posting on the RhinoPython forum instead?)
Thanks for posting this here, it's great. Yes, chances are high that you are running out of addressable memory. 32-bit Rhino has to live with the usual 32-bit limits: a 32-bit process only gets around 2 GB of usable address space, regardless of how much physical RAM the machine has.
You could try importing the Rhino module and running this statement:
Rhino.Runtime.HostUtils.DisplayOleAlerts(False)  # turns off the Server Busy message in Rhino
but it is probable that the underlying error will still be there.
Are there maybe ways to diminish memory usage to avoid the problem altogether?
Can you post a sample?
Thanks for your response. I have tried turning off alerts, but you're right, the problem persists without the messagebox.
I could diminish memory usage somewhat, although I'm using almost all the data I create. What I can do is split the analysis into chunks (I'm doing an annual environmental analysis, so I could work things out month by month, say, and only keep the results I need). However this throws up problems too. The issue now boils down to this:
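The chunked approach described above could be sketched like this in plain Python (the `analyse_month` function and its dummy hourly values are hypothetical stand-ins for the real environmental analysis; the point is keeping only the reduced per-month result rather than the full arrays):

```python
# Hypothetical sketch: run an annual analysis month by month,
# keeping only a small summary per month instead of the full data.
def analyse_month(month):
    # stand-in for the real per-month environmental analysis
    hours = [float(h) for h in range(720)]   # dummy hourly values
    return sum(hours) / len(hours)           # keep only the reduced result

# 'results' ends up holding 12 numbers rather than 12 large arrays
results = [analyse_month(m) for m in range(1, 13)]
```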
If I run the following in Rhino (i.e. not using Grasshopper)...
a = numpy.zeros(10000000)
...I have no problem. I don't reach the limit of addressable memory. But if I do the following...
for i in range(10):
    a = numpy.zeros(10000000)
...I run out of memory, even though you wouldn't expect usage to grow, as 'a' is rebound each time and the old array should be freed. It seems that isn't happening, as I hit the memory limit and crash Rhino. It looks as though something's going wrong with the garbage collection?
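The numbers are consistent with that: `numpy.zeros` defaults to float64, i.e. 8 bytes per element, so each array in the loop is roughly 76 MB, and ten uncollected copies come to roughly 763 MB, which is enough to push a 32-bit process towards its address-space ceiling. A quick back-of-the-envelope check:

```python
elements = 10000000            # size passed to numpy.zeros
bytes_per_element = 8          # numpy.zeros defaults to float64
mb_per_array = elements * bytes_per_element / 2**20
total_mb = 10 * mb_per_array   # ten loop iterations, none collected
# mb_per_array is about 76.3 MB; total_mb is about 763 MB
```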
Since posting, I noticed this document on Enthought's release page: http://www.enthought.com/repo/.iron/NumPySciPyforDotNet.pdf ... which on page 7 mentions a memory error IronPython can hit when arrays are created and discarded quickly. This looks like the problem I'm hitting, though I'm struggling to get around it. Re-writing my code to use the while-loop trick isn't practical, though I'm curious about the code that "exists in NumpyDotNet which will trigger a garbage collection run and wait for the finaliser queue to empty." Sounds like what I need, but I don't really know how to access what they're referring to - could you help me out?
Yes, this looks like a NumPy-on-IronPython problem I hadn't yet encountered, but I'll try to write this off the top of my head. Tomorrow I can take a closer look:
import System
System.GC.Collect()  # collect the revived and finalized elements
System.GC.WaitForPendingFinalizers()  # block until the finalizer queue has emptied
I think the combination of the two should work, but I haven't tested it,
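Placed inside the loop, the pattern would look something like the sketch below. This version uses CPython's `gc` module so it runs outside Rhino; in Rhino's IronPython you would instead call `System.GC.Collect()` followed by `System.GC.WaitForPendingFinalizers()` at the marked line (the `run_chunks` helper is just an illustrative wrapper):

```python
import gc

def run_chunks(n):
    for i in range(n):
        a = [0.0] * 1000000   # stand-in for numpy.zeros(10000000)
        # ... do the per-chunk analysis with a ...
        del a                 # drop the reference before collecting
        gc.collect()          # CPython; in Rhino's IronPython use
                              # System.GC.Collect() and then
                              # System.GC.WaitForPendingFinalizers()
    return n

run_chunks(10)
```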
please let me know how it goes,
This works great, thanks for your help!