I have been running into a lot of file size issues lately (Grasshopper file size, not Rhino file size). I was working on a large file that had gotten up to 50MB, which seemed ridiculous, as most GH files are very small. I don't have any internalized data in the file either. There is a lot of surface splitting and clusters in the file, which is very slow going, but I didn't think that kind of thing affected the actual GH file size (I don't know the inner workings of GH, so I could be completely wrong).
I have been working on breaking the file into chunks using flux, and just ran into a very weird behavior. My file was at about 6MB (big but manageable) and I added a flux node to send something out. As soon as I saved, the file jumped to 120MB! That's bigger than the original file I was breaking up! The files are too large to post here, but I'm just wondering if there is anything that can be done to reduce filesize in general?
It's flux's fault!
To elaborate - flux maintains a copy of allllll the data you pass through it, saved in the GH file, so that it can compare it against the latest version on the web and decide if there's been any update. This is, I think, a pretty critical oversight on flux's part, and one I have complained about before: https://community.flux.io/questions/3356/flux-serializing-large-amo...
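To make the problem concrete, here's a hypothetical sketch in plain Python (not actual Flux plugin code) of the difference between caching the full payload for change detection, as Flux appears to do, versus storing only a fixed-size digest that would support the same "has anything changed?" comparison:

```python
import hashlib
import pickle

def full_copy_cache(data):
    """Store a complete serialized copy of the data.
    Anything cached this way ends up inside the saved file."""
    return pickle.dumps(data)

def digest_cache(data):
    """Store only a fixed-size hash of the data.
    Enough to detect a change, but always 32 bytes regardless of payload size."""
    return hashlib.sha256(pickle.dumps(data)).digest()

payload = list(range(100_000))  # stand-in for heavy geometry data

print(len(full_copy_cache(payload)))  # hundreds of kilobytes
print(len(digest_cache(payload)))     # 32
```

The cached copy grows linearly with the data passed through, which is why a 6MB definition can balloon to 120MB on save, while a digest-based comparison would add a constant few bytes per node.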
Yea that appears to be the consensus! The flux nodes seem to behave like internalized data, so they store all of the information IN the GH file. It kind of defeats the purpose of what I was trying to do in the first place...
if you're just passing data from GH to GH, use Dave Stasiuk's "Pack/Unpack" components from TreeSloth.
Oh nice, I've never used that before, that's promising! So that method won't save the data in the GH file?
exactly - saves geometry or data to an external file. it's a great way to "split up" your gh defs
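The pack-to-disk / unpack-from-disk idea can be sketched in plain Python (for illustration only; the actual TreeSloth components use their own file format and Grasshopper data types):

```python
import pickle
from pathlib import Path

def pack(data, path):
    """Serialize data to an external file on disk,
    so the definition that produced it stays small."""
    Path(path).write_bytes(pickle.dumps(data))

def unpack(path):
    """Read the externalized data back in a downstream definition."""
    return pickle.loads(Path(path).read_bytes())

# Upstream definition writes the shared data out...
pack({"points": [(0, 0), (1, 2)]}, "shared_data.bin")

# ...and a separate downstream definition reads it back.
print(unpack("shared_data.bin"))
```

Because the data lives on disk rather than inside either definition, both GH files stay small no matter how heavy the geometry is.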
Oh man, this is a life saver. Exactly what I was looking for. Thanks!