l console app that gets triggered after all the image tiles are exported and which will stitch them all together.
It runs outside the Rhino+Grasshopper process so it has at least 2GB of clean memory to operate in. If you're running on a 32-bit OS and the uncompressed image is larger than 2GB, this will still fail and you'll have to patch them by hand.
The intermediate images get written to the Temp directory, so they'll stick around until Windows deems them stale.
I also adjusted the bit depth: if the output format is jpg or bmp I now use 24 bits per pixel; png still uses 32 bits per pixel. Maybe this will solve the Photoshop loading problems. I cannot test this myself, as my install of Photoshop consistently crashes on startup.
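The stitching step can be sketched roughly like this (a minimal illustration only: plain Python lists stand in for bitmap scanlines, and the tile-grid layout is an assumption; the actual console app works on exported image files and its code is not shown in the post):

```python
# Minimal sketch of stitching a grid of image tiles into one image.
# Tiles are represented as 2D lists of pixel values (rows of columns);
# a real implementation would read and write bitmap files instead.

def stitch_tiles(tile_grid):
    """tile_grid[r][c] is the tile at grid row r, column c.
    All tiles in a grid row share a height; all tiles in a
    grid column share a width."""
    output = []
    for tile_row in tile_grid:
        tile_height = len(tile_row[0])
        for y in range(tile_height):
            # Concatenate the y-th scanline of every tile in this row.
            scanline = []
            for tile in tile_row:
                scanline.extend(tile[y])
            output.append(scanline)
    return output

# Two 2x2 tiles side by side become one 2x4 image.
left  = [[1, 1],
         [1, 1]]
right = [[2, 2],
         [2, 2]]
stitched = stitch_tiles([[left, right]])
# stitched == [[1, 1, 2, 2], [1, 1, 2, 2]]
```

Because each scanline is assembled one tile-row at a time, only a single output row of tiles ever needs to be held in memory at once, which is the point of doing the stitch in a separate process.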
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
d fly with a Porsche flat six).
2. Added a double (nested) Anemone thing (and the Mateusz version) and some comments.
3. Added a stupid "arm maker" cluster ... primitive/ugly/pointless - see one prototype attached about how to do it (you'll need a top feature driven CAD app for this - notice the Teflon low friction ring).
4. In order to "adapt" the cluster arm you need some "stretch" capability (orient, scale et al. are just the 1st step). Of course putting the cluster into the 2*loop is the art of the pointless (Mateusz misinterpreted my bitter comments as regards the "slow" thing; I had absolutely no intention of recreating the arm "live").
be the Force (the dark option) with you all.…
th a graphic editor (GH) hosted in a CAD app that has primitive assembly/component capabilities and/or feature-driven ops (Rhino). Did I mention that Rhino is a surface modeler? (meaning the obvious).
3. Imagine a "seed" collection of assemblies related with various membrane components made in SW. Say: geometry (prior solid ops) and parameters (the feature-driven part of the equation, in most cases managed with some RDBMS). You should port these to GH (a variety of ways exist for that) and create the bare minimum of "solids" in GH as instance definitions. There are two main reasons to do that: (a) effectively communicating back on an assembly/component schema (via STEP) and (b) achieving manageable collections when in GH. These are critical for clash detection (when outlining some topology in GH, therefore NEVER work just with "curves") and "variation" control of some sort (up to a point). Of course for high-class designs (where the devil hides in the details) this is NOT the best imaginable solution ... you'll need CATIA for such an integrated (all-in-one) procedure. On the other hand many could (wrongly) argue that CATIA is expensive (a rather naive argument if a company has a certain turnover).
4. So, in general I would strongly suggest using instance definitions of items in some sort of "intermediate state" of detail (a task that is 100% not doable without code) structured in such a way (classic assembly/component MCAD mentality blah, blah) that SW could benefit from a possibly modified "base topology" and proceed by finishing variations of the given assembly (feature-driven stuff as usual).
5. Then export (STEP 214) back portions of the assemblies (and parameters used) to R/GH if and when this is required (for instance for BIM disciplines ... but Rhino is not a BIM app, nor will it ever be).
6. If you are familiar with code matters ... start thinking about the whole puzzle that way. If not, my advice is to find someone to design such a "procedure" (say, an "app") using solely code, but this is not a task for the inexperienced by any means.
best, Peter…
basis).
2. Rhino does not have a proper object display capability (objects per layer per view basis and/or per "collections" per view).
3. TSplines does NOT have any on-the-fly coordinate system definition capability (making "edit" a pointless waste of time). A small example of what this means as regards view navigation matters: imagine hovering along a myriad of 3d objects. If you choose/opt for it, the moment that you touch an element (one that could define a vector), it instantly becomes the working plane Z axis (a very common capability in top MCAD apps). Not the same as a SpaceNavigator controller mind (far from it).
If these 3 were available > rebuilding anything with TSplines could be a joy (and very fast: about 2 minutes for your mesh)
Get this as well - Load Rhino file first attached in my previous reply (just for fun: not for your case, but we could do an extra WOW MERO spaceframe out of this paranoid M mesh).
BTW: Exo W is "tricky"…
explore the basic principles of Grasshopper in Rhino 5 to develop algorithms for surfaces that respond to data generated by devices and applications such as: iPhone/iPad/iPod, Android, GPS, Kinect, etc.
You will need to bring your laptop with Rhino and Grasshopper installed.
Rhino: http://download.rhino3d.com/rhino/4.0/evaluation/download/
Grasshopper: http://download.rhino3d.com/Grasshopper/1.0/wip/download/
Limited seats
info@dimensiontallerdigital.com
$4,000.00…
> most probably > adios Amigos.
3. WP Loop VS ... > see above
4. Daniel VS ... > see above.
There are other dedicated apps for handling huge amounts of data (using very fast ball-pivot algorithms to deal with the gazillions of points).…
1) Get plenty of RAM. 32-bit Windows can assign 2GB of RAM per process, so if you have lots of RAM, you can run Rhino+Grasshopper in memory all the way. I'd say get at least 4GB, and preferably 8GB. If you have a 64-bit machine, then it pays off to go even higher than that.
2) Get fast RAM. Memory access is the main bottleneck in many applications, so the faster the RAM the faster most apps will work.
3) Get a fast processor, rather than lots of slow processors. Only a few apps out there can truly use Multi-Threading (Rhino and Grasshopper cannot). These days, CPU manufacturers try and dress up multi-core CPUs as the next best thing. It is not. It is a lie. Until software can truly run on multiple cores there is no benefit to this. If rendering is a big part of your job, then it does pay off to have a multi-core machine though.
4) Get a good graphics card. I've always preferred NVidia over ATI, but there are many good ATI cards as well. You can go for a gaming card (they're cheaper), but note that these are optimised for drawing triangles. If you get a professional card, it will draw lines and curves much faster.
--
David Rutten
david@mcneel.com
Robert McNeel & Associates…
on) ... the only way to do something meaningful/realistic is to follow Bentley Systems' way: they had 3 rendering engines (all highly problematic and archaic), a bunch of highly paid "gurus" to "develop" the dead fish, and an export-to-Maxwell capability as well (Maxwell is very slow and has no chance VS Nexus, see below). PS: "Gurus" had no idea about Quest3D and the likes.
At the time, I was near to some permanent ban (he he) from all Bentley Forums due to my acid writings about how stupid these methods were. In fact I openly proposed to Bentley (to Ray Bentley to be exact) to fire all "gurus" involved ... and follow the outsource path.
Finally Ray (he's very smart) did the right thing: after an agreement with Luxology ... now Microstation (the core product) uses the Nexus engine (as found in Modo). This means that the Nexus is fully integrated across the whole vertical suite of BIM AEC Bentley apps the likes of AECOSim (that includes Generative Components as well).
And as everyone knows THIS is the real McCoy (US movie industry is behind that thing).
Additionally Modo has the best GUI known to mankind (US movie ... blah blah) and astonishingly innovative thinking (US movie ... blah blah).
…
ents instead of code ... it could yield a nightmare of components (and a myriad of parameters). For real-life designs I would never attempt to do this without code.
2. A certain experience with Kangaroo (or some other min-surf thing, since using K on these may well be the killing-a-mosquito-with-a-bazooka thing). That said, I'm a great admirer of Daniel's work. But on the other hand, why not?
3. A "certain" experience with trusses/space frames.
4. A "certain" experience with instance definitions (that's not doable with GH components).
5. Years of experience with parametric feature driven MCAD apps - Image35 (NX/CATIA) for designing the real-life parts (that have NOTHING to do with "abstract" concepts).
In total I would say that a similar "app" with code (excluding the min surf/mesh thing) would require 6-10 full days of work (or even more).
BTW: https://www.google.com/url?q=http://www.grasshopper3d.com/forum/top...…
ess more memory on 64 bits. So you can load larger files and generate more data.
Every time you store something in memory it has to be stored at a specific location. We call this location an address. The first thing you store can be stored at address 0*. If that thing requires a total of 18 bytes, then addresses 0 through 17 are used up. The next thing you store can then be stored at address 18. And so on and so forth. At some point you run out of addresses and when that happens there is no more room to store any new data and there is thus nothing more that your app can do and that's usually when Windows shoots the application in the head and buries the remains behind the chemical sheds.
The total number of unique addresses that can be represented by a 32-bit integer is 4,294,967,296 (4 GigaBytes' worth). However Windows only allows a 32-bit app to access 2GB of that, or potentially 3GB if a special switch is set. A 64-bit application is allowed to use 64-bit integers to identify memory addresses, which means the address space now spans 18,446,744,073,709,551,616 bytes (about 18.45 ExaBytes). Basically, as long as you have RAM to back you up, a 64-bit application will not run out of memory. Of course it may still become prohibitively slow, as a lot of data requires a lot of computation and 64-bitness does absolutely nothing to make things go faster.
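The arithmetic above is easy to check directly (an illustrative sketch, not part of the original post; the 2GB/3GB figures are the default Windows user-mode limits mentioned above):

```python
# Address-space sizes for 32-bit and 64-bit pointers.
addresses_32 = 2 ** 32  # 4,294,967,296 addressable bytes (4 GB)
addresses_64 = 2 ** 64  # 18,446,744,073,709,551,616 addressable bytes

print(addresses_32)              # 4294967296
print(addresses_64)              # 18446744073709551616
print(addresses_64 / 10 ** 18)   # ~18.45 (exabytes)

# Windows caps a 32-bit process at 2 GB of that space by default
# (3 GB with the special switch); a 64-bit process is limited in
# practice only by installed RAM and OS policy.
usable_32_default = 2 * 2 ** 30  # 2 GB in bytes
usable_32_switch  = 3 * 2 ** 30  # 3 GB in bytes
print(usable_32_default, usable_32_switch)
```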
--
David Rutten
david@mcneel.com
Vienna, Austria
* Not true in reality, Windows will already use up some of the available memory just to load the application.…
Added by David Rutten at 1:39pm on November 2, 2012