ts in extreme aliasing effects that carry into the 3D realm as regular steps along what should be smooth surfaces.
After sleeping on it, I realized I hadn't yet tried fast Unary Force on fine quad meshes from the standard Grasshopper meshing system that includes the meshing options component.
Bingo! It's fast now. Workable. I don't need super fine meshing since I'm not running from aliasing. I can still use rather fine local meshes since Unary Force lets Kangaroo do a simple thing just in the Z direction rather than a full 3D force.
After only a minute or so of Kangaroo initialization that slows the interface, each of a dozen needed cycles takes half a second, FOR THE ENTIRE GRAPHIC.
I just set the timer to 1 second so I can move around the interface, and I double click the Windows taskbar timer shut-off to enjoy the result.
WHILE RUNNING VIA TIMER, IF I CHANGE A SPRING/FORCE SETTING IT SUFFERS NO DELAY AT ALL AND JUST ALTERS THE OUTPUT OVER TIME. I can change Unary Force from 20 to 100 and immediately see the bigger areas balloon like crazy:
It's fast enough overall to play with, yet the individual steps are slow enough that it's fun to watch the hysteresis as it overshoots back from 100 to 20 Unary Force, going concave in the middle of bulges then back to more shallow hills.
A force of 1000 is a bit disturbing. I wonder: can I tamp it down with greater spring strength, or will that just give me the same result as before?
Looks like it's the same: only the ratio matters. Makes sense, I guess. At one point it blew up, though. Hitting the reset button...a minute later it blows up again. It just doesn't like huge numbers, so I don't see an advantage in playing with bombs; the high mesh strength is pulling the mesh apart.
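The ratio observation can be sanity-checked with a toy one-dimensional spring model. This is only a sketch of the intuition, not Kangaroo's actual solver: at rest a linear spring's deflection is force divided by stiffness, so scaling both by the same factor changes nothing.

```python
def equilibrium_deflection(unary_force, spring_strength):
    """Toy 1D model: a point load balanced by a linear spring.

    At rest the spring force k*z cancels the applied force F,
    so the deflection is simply F / k (Hooke's law).
    """
    return unary_force / spring_strength

# Scaling force and stiffness together leaves the deflection unchanged,
# which is why cranking both up behaves the same: only the ratio matters.
low = equilibrium_deflection(20, 1.0)
high = equilibrium_deflection(1000, 50.0)
print(low == high)  # True
```

The dynamics differ, of course: larger absolute numbers mean bigger per-step corrections, which is presumably why the huge-number runs blow up even though the equilibrium shape should be the same.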
With low Unary Force and moderate mesh tension, you get flat tops, as if the overall force on the mesh, fighting its anchored edge vertices, is enough to displace it while the surface itself is too stiff to care about local gravity.
Then you have less flat areas as you increase Unary Force:
Weird, there *is* some sort of absolute effect, not just a relative one, between Unary Force and spring stiffness, since now I'm getting flat tops even at the extreme:
Oh wait, strike that: I may be seeing just a single step with the timer off, subject to hysteresis. With the timer back on, it can sit there a minute, not locked up but just idling, until you see the Display > Widgets > Profiler time start cycling up to near-half-minute numbers. That makes you want to hit the reset button, and indeed that locks the interface for another initialization. And yes, it was merely hysteresis, not an equilibrium result. My former flat tops may have been due to that too, thanks to my use of the Windows taskbar timer disabler. The lesson is that you can obtain different results by using a long timer setting and stopping it before it equilibrates.
This script is a keeper, fast and fun after the relatively mild Kangaroo initialization period is over.
The uniform, mostly-quad meshing is all done in Grasshopper too, from any flat surface with holes, especially surfaces created by tracing images of shapes with potrace.
Could I switch to hex meshes from triangular meshes to do the same thing with fewer vertices?
Are there other forces I can add to smooth the bulging? Letting things bulge is not so bad if you then just scale down the result in Z afterwards (though perhaps the same result could be had with lesser force):
Also, can this same thing be done with possibly faster Kangaroo 2?…
Added by Nik Willmore at 10:02pm on February 21, 2016
bi-directional link, the link is unidirectional (downflow only), because of the use of proxies.
Matrix transforms and persistent constraints: I don't think this is true. The parts can have mates to other parts that preserve geometric relationships like 'coincident', 'aligned', etc. These are essentially bi-directional. GH's algorithmic approach does not do relationships in the same flexible way: in GH, the 'relationship' has to be part of the generation method, which depends on the creation sequence, e.g. draw line 2 perpendicularly from the end point of line 1. If you are thinking about parts or assemblies sharing or referencing parameters as part of the regen process, this is also possible. iLogic does this, and adds scripting. So does Catia. Inventor/iLogic can also access Excel and have all the parameter processing done centrally, if required.
Consequently, scripting the placement of components is irrelevant in GH, unless you decide that each component needs to be contained in its own separate file.
I wouldn't be too hasty here. Yes, you are right about compartmentalisation. I think this needs to happen with GH in order to deal with scalability and everyday interoperability requirements. Confining projects to one script is not sustainable. MCAD apps have been doing this for ages with 'Relational Modeling'.

The Adaptive Components placement example illustrates that it is beneficial to be able to script some 'hints' that can be used on placement of the component. Say your component requires points as inputs; then it should be able to find the nearest points to the cursor as it moves around. I think Aish's D# / DesignScript demoed this kind of behaviour a few years ago. Similarly, Modo's Toolpipe reminds me how a lot of UI-based transactions can be captured as scripts (macro recorder etc.). Allowing this input to be mixed in and/or extended by GH will, I think, yield a lot of 'modeling efficiency' around the edges. This is (mis)using GH as a user-programmable 'jig' for placing and manipulating 'dumb' elements in Rhino. It may even give the 'dumb' elements a bit more 'intelligence' by leaving behind embedded attributes, like links to particular construction planes.

Even if we confine ourselves to scripting: GH is a visual or graphic programming interface, and a lot of 'insert and connect' tasks can be done more easily using graphic methods. If we need to select certain vertices on a mesh as inputs for, say, a facade panel, it's going to be quicker to do this 'graphically' (like the AC example) than ferreting out the relevant indices in the data tree. The 'facade panel' script would then have some coding to filter/prompt the user as to what inputs were acceptable, and so on.
This also brings up the point that generating components and assemblies in MCAD is not as straightforward. In iParts and iAssemblies, each configuration needs to be generated as a "child" (the individual file needs to be created for each child) before those children can be used elsewhere.
Not sure what you mean here. If the iParts are built up using sketches/profiles or other more rudimentary features (like Revit's profile/face etc. family templates), then reuse should be fairly straightforward. I suppose you could make it like GH scripting if you cut and paste or include script snippets that generate the desired Inventor features.
One of the reasons why the distributed file approach makes perfect sense in MCAD, is that in industry you deal with a finite set of objects. Generative tools are usually not a requirement. Most mechanical engineers, product engineers and machinists would never have any use for that.
I don't think this is true. Look at the automotive body design apps, which are mostly Catia based. All of the body parts are pretty much 'generative', generated from splines in a procedural way using very similar approaches to GH. Or sheet metal design. It's not always about configuration of off-the-shelf items like bolts. And the constraints manager is available to arbitrate which bit of script fires first, so your mundane workaday associative dimensions etc. can update without getting run over by the DAG(s) :-)
…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the radiance and E+ batch files to multiple computers. Perhaps a “round-robin” approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that but hope that it is something that can be executed within grasshopper, perhaps a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same location. It will likely run slower than it would on the C:drive, but those losses are acceptable if we can get parallelization to work.
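The imagined round-robin dispatch could be sketched in a custom GH Python component along these lines. This is only a sketch: the batch-file names and node names are hypothetical placeholders, and the actual remote-execution mechanism (a shared drop folder, PsExec, or similar) is a separate problem it does not solve.

```python
import itertools

def assign_round_robin(batch_files, nodes):
    """Pair each iteration's batch file with the next node in rotation."""
    rotation = itertools.cycle(nodes)  # endlessly repeats the node list
    return [(batch, next(rotation)) for batch in batch_files]

# Hypothetical iteration batch files and network node names:
runs = ["iter_%03d.bat" % i for i in range(5)]
assignments = assign_round_robin(runs, ["NODE-A", "NODE-B"])
# iter_000 -> NODE-A, iter_001 -> NODE-B, iter_002 -> NODE-A, ...
```

Each (batch, node) pair could then be handed to whatever transport the network allows, e.g. copying the batch file into a watched folder on that node.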
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, material, etc.
Generate radiance/ E+ batch files for all iterations
Calculate
Calc separate runs of Radiance/E+ in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to a single location on the server
Post Processing
Run a GH script from a single computer. Translate .ill files or .idf files into custom metrics or graphics (DA, ASE, %shade down, net solar gain, etc.)
Collect final data in a single location (Excel document) to be read by Design Explorer or Pollination.
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
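The consolidation step at the end could be as simple as sweeping the per-iteration result files on the server into one table. This is a sketch under an assumed naming convention (one small CSV per iteration, named `iter_*.csv`); the real metric values would come from the .ill/.idf post-processing described above.

```python
import csv
import glob
import os

def consolidate_results(result_dir, out_csv):
    """Merge per-iteration result CSVs into a single table for
    Design Explorer / Pollination, tagging each row with its source file."""
    rows = []
    for path in sorted(glob.glob(os.path.join(result_dir, "iter_*.csv"))):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                row["iteration"] = os.path.basename(path)
                rows.append(row)
    if rows:
        with open(out_csv, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=sorted(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)
    return len(rows)  # number of iterations consolidated
```

Note this assumes plain Python 3 run outside Grasshopper (e.g. as the final post-processing script); inside GH's IronPython 2.7 the file-handling calls would need minor changes.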
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
what they really mean by that, as in what buttons to push, so I assume it's a Windows Path entry?
2.) Modify PATH
Add the install location on the path, this is usually: C:\Program File\IronPython 2.7
But on 64-bit Windows systems it is: C:\Program File (x86)\IronPython 2.7
As a check, open a Windows command prompt and go to a directory (which is not the above) and type:
> ipy -V
PythonContext 2.7.0.40 on .NET 4.0.30319.225
Tutorial on setting a Windows environment variable (PATH):
http://www.computerhope.com/issues/ch000549.htm
But this fails to point out that PATH already contains many entries separated by semicolons, so if I merely add a new variable called "path" I will likely destroy existing program function. There's no info on how to just tack on another entry, and the Windows 7 edit box doesn't even show the whole collection, only one item (!). So I copied the existing path into a text editor to see the whole collection, and added the C:\Program Files (x86)\IronPython 2.7 entry after an added semicolon, correcting for an Enthought page typo that drops the 's' at the end of "Program Files". I also checked the other entries; many pointed to old, missing directories, so I deleted those.
...and the test fails: "ipy" is not recognized as a command, even though the path now shows up when typing "path" in the Windows CMD window (that is, if I copy it all by right-clicking and paste it into a text editor to really view it). I can run it from the source directory just fine.
The rabbit hole was indeed deep. Using the Task Manager (Ctrl-Alt-Delete) to kill Explorer, restarting "Explorer" via Run in the menu, and restarting the Windows CMD window, however, worked. I can now invoke IronPython ("ipy") via the command line from any directory. For the path I edited PATH in the System Variables, not the User Variables. No, you don't have to type that whole crazy line above just to test the path variable; just type "ipy" (and Ctrl-Z to quit IronPython) in the CMD window invoked by typing "cmd" into the Start menu search box.
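For scripts that need the same append-with-separator logic without touching the system-wide setting, PATH can be amended per-process. A sketch only: the IronPython directory is the one from the Enthought instructions, and on Windows os.pathsep is the semicolon discussed above.

```python
import os

def append_to_path(directory):
    """Append a directory to this process's PATH, skipping duplicates,
    using the platform's separator (';' on Windows, ':' elsewhere)."""
    entries = os.environ.get("PATH", "").split(os.pathsep)
    if directory not in entries:
        entries.append(directory)
        os.environ["PATH"] = os.pathsep.join(entries)
    return os.environ["PATH"]

append_to_path(r"C:\Program Files (x86)\IronPython 2.7")
# Child processes launched from here (e.g. a subprocess calling ipy)
# inherit the amended PATH; the system-wide setting is untouched.
```

This sidesteps the Explorer-restart dance entirely when you only need the path for one scripted session rather than permanently.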
From the CMD line this step did work fine:
3.) ironpkg
Bootstrap ironpkg, which is a package install manager for binary (egg based) Python packages. Download ironpkg-1.0.0.py and type:
> ipy ironpkg-1.0.0.py --install
Now the ironpkg command should be available:
> ironpkg -h
(some useful help text is displayed here)
But of course Step 4 fails, giving pages of what seem to be error messages:
C:\Users\Nik>ironpkg scipy
Traceback (most recent call last):
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\enstaller\utils.
py", line 92, in write_data_from_url
File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 126, in urlo
pen
File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 397, in open
File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 509, in http
_response
...
Why can't I just download Numpy as a normal file and thus also have it easy for other users to install it when they use my scripts? This is just crazy and lazy. The Enthought developer has turned this into a computer game, with a missing registration link and then the last step spits out errors with utterly no information on how to fix it manually.
This Step 4 error is covered here:
http://discourse.mcneel.com/t/trying-to-import-numpy-in-rhino-python-but-im-getting-this-error-cannot-import-multiarray-from-numpy-core/12912/16…
Added by Nik Willmore at 2:36pm on October 11, 2015
ou will see all of the available components on a ribbon at once so there is no need to keep clicking drop down menus.
It's all about discoverability with GH. What if you're a beginner and don't know about the Create Facility (double-click the canvas)? How can you find Extr?
Even if you hover over every component or use the drop down lists you will not see the name Extr appear anywhere.
Sure, it makes sense that Extr is short for Extrude, but it's also the Nick Name of the Extrude to Point component,
so you can easily miss the fact that one has a Distance input versus a Point input.
I think I made the move to Icons around the move from version 0.5 to 0.6, possibly before. I initially thought I would go back to text because I loved the monochromatic look, but I soon realised that Icons were the way forward. The greatest benefit is speed: you don't need to digest and decipher every component name (which is written 90 degrees to the norm).
I'm not saying you should move to Icons forthwith but at least consider that once you have a better knowledge and understanding of GH, Icons will set you free.
My top ten tips that I would highly recommend to anyone wanting to better themselves with GH:
1) Turn on Draw Icons
2) Turn on Draw Fancy Wires
3) Turn on Obscure Components
4) Use the Create Facility like a command line, e.g. "Slider=-1<0.75<2" or "Shiftlist=-1"
5) Use Component Aliases to customise your use of the Create Facility, e.g. giving the Point XYZ component an alias of XYZ will bring it up as the first option in the Create Facility, ahead of the other possibilities.
6) Try to answer other people's questions even if they're not relevant to your own area. Solving a problem outside your comfort zone and posting your results is very rewarding, and it also lets you see the other approaches that get posted in a new light.
7) Take the time to understand Data/Path structures.
8) Buy a second monitor - nothing compares to screen real estate when working in Grasshopper.
9) Read Rajaa Issa's Essential Mathematics
10) Pick a panel in a tab on the ribbon and get to know every component inside and out and then move on. Start with the Sets Tab > List Panel…
16-20 / PUEBLA JULY 23-27
This workshop is intended primarily for architects and designers interested in learning parametric and generative design applied to the generation and rationalization of complex geometries for their implementation in different design processes. The course will cover basic concepts and methodology to address many design issues through the development of algorithmic tools via a visual programming language and the development of digital fabrication schemes. Rhinoceros 3D and Grasshopper will be used as our modeling tools and V-Ray as our rendering engine. Monday to Friday, 10am to 2pm and 4pm to 8pm (40 hrs).
No previous knowledge of Rhinoceros 3D or programming required, CAD background desirable.
Students: 4,000 MXN Professionals: 5,000 MXN Info: workshop@3dmetrica.com 044 55 28790084 www.3dmetrica.com
www.facebook.com/3dmetrica
SUMMER WORKSHOP: PARAMETRIC ARCHITECTURE, GENERATIVE DESIGN, RHINO + GRASSHOPPER + V-RAY
MEXICO TOUR 2012
MEXICALI JUNE 25-29 / MEXICO CITY JULY 2-6 / MORELIA JULY 9-13 / GUADALAJARA JULY 16-20 / PUEBLA JULY 23-27
…
and 3d rapid prototyping using state-of-the-art material simulation and optimisation. Participants will be guided through methods of advanced structural analysis and evolutionary algorithms implemented in Grasshopper, Karamba and Octopus in a 5-day workshop taught by Robert Vierlinger and Matthew Tam within the premises of the Academy of Fine Arts & Design in Bratislava, Slovakia. The workshop will cover the basics of setting up a Karamba definition and more advanced form-finding techniques with beams and shells, through to preparing files for 3d printing and 2d documentation. For Grasshopper newcomers there is a preparatory crash course on 20 July 2015 taught by Ján Pernecký. The workshop will be held entirely in English.
VENUE
Academy of Fine Arts and Design in Bratislava: VŠVU / AFAD, Hviezdoslavovo námestie 18, Bratislava, Slovakia, Room 135
PRICING
Early bird Student (until Jun 30, 2015): €320
Early bird Professional (until Jun 30, 2015): €380
Regular Student (from Jun 30, 2015): €400
Regular Professional (from Jun 30, 2015): €475
The fee covers only the tuition. Travel expenses, accommodation and food are to be covered by the participants.
SCHEDULE
Day 1: Lecture - Karamba in Projects from Competition to Construction; Introduction to Karamba - setting up a basic Karamba model; Shells & Beams - understanding the impact of load on geometries; Beams - cross-section optimization, load path emergence
Day 2: Extraction and visualization of data from Karamba; Complex Geometry - processing of free forms for Karamba; Force Flow - understanding and visualizing results on shells; 3d Printing - preparing geometries for rapid prototyping
Day 3: Lecture - Form Finding in Karamba; Isler Shells - hanging forms with Karamba; Shells - shape optimisation with Galapagos; Trusses - topology optimization with Galapagos; Columns - positioning with Galapagos; multiobjective optimisation strategies with Octopus
Day 4: Frequency analysis & non-linear analysis with Karamba; Extraction and Visualization Part 2; BIS - Building Information Systems with Karamba
Day 5: Participants' examples and topics; reviewing 3d print studies; large complex models; reviewing learnt techniques and strategies; concluding lecture (public)
PARTNERS
rese arch, Academy of Fine Arts and Design…
by nodes, lets you harness the power of programming without necessarily having advanced skills.
With Grasshopper you will gain access to the secrets of generative modeling, a new design language that is changing the world of design, from jewelry all the way to architecture.
During the course you will learn how the program works and apply it to the creation of complex objects that can be 3D printed or rendered. The course lasts 30 hours, and a McNeel certificate is issued at the end.
The Program
The course explains the basic concepts of parametric and generative modeling. Specifically:
Interface and commands
Parameters and components
Interoperation with Rhinoceros
Parametrization tools
Data combination
Data trees
Surface creation through paneling algorithms
Attractor theory
Mesh tools
Creating Clusters
During the course, practical exercises will be offered in each student's preferred field of application.
The Instructor
Antonino Marsala is a McNeel certified trainer with over 11 years of experience in 3D modeling. In addition to training, he works with jewelry and architecture firms to put generative modeling principles into practice on real-world cases.
FAQ
How much does the course cost?
The course costs €500.00 + VAT, payable in a single installment. A discount may be applied for group registrations.
What should I bring to the event, and what should I leave at home?
The organizers will provide computers with the software already installed. If you prefer to bring your own computer, we will provide a 90-day trial version of Rhinoceros and Grasshopper.
Where can I contact the organizer with any questions?
antonio@mandarinoblu.com
334 24 20 203
Is my registration or ticket transferable?
Yes, provided the change is communicated within 48 hours of the course start date.
…