complex the models are. If we are running multi-room E+ studies, those will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast; the trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a “round-robin” approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps through a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
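To make the round-robin idea concrete, here is a rough Python sketch of how a dispatcher script might assign iteration batch files to nodes. The node names, the shared queue folders, and the convention that each node polls its own folder are all assumptions on my part, not an existing GH feature:

import itertools
import shutil
from pathlib import Path

# Hypothetical worker nodes, each watching its own queue folder on the server.
NODES = ["node01", "node02", "node03"]
QUEUE_ROOT = Path(r"\\server\daylight\queues")

def dispatch(batch_files):
    """Assign each iteration's batch file to the next node in rotation."""
    for node, batch in zip(itertools.cycle(NODES), batch_files):
        target = QUEUE_ROOT / node
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(batch, target)  # the node picks it up and runs it
        print(batch.name, "->", node)

dispatch(sorted(Path(r"C:\sim\batches").glob("iter_*.bat")))

Each worker would then only need a small watcher script (or a scheduled task) that runs whatever batch files land in its queue folder.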
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to run automatically in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, materials, etc.
Generate Radiance/E+ batch files for all iterations
Calculate
Run separate Radiance/E+ simulations in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to a single location on the server
Post Processing
Run a GH script from a single computer to translate .ill files or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.) - a sketch of the DA step follows this list
Collect the final data in a single location (an Excel document) to be read by Design Explorer or Pollination.
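As a sketch of what that post-processing step might look like, here is a hedged Python example that computes a spatial mean Daylight Autonomy per iteration and appends it to a shared CSV. The .ill layout assumed here (one row per hour, one illuminance column per sensor, no header) is a simplification; real Daysim/Honeybee files vary, and all the paths are hypothetical:

import csv
from pathlib import Path

THRESHOLD_LUX = 300  # a common DA threshold; project-specific in practice

def daylight_autonomy(ill_path):
    """Spatial mean DA, assuming one row per hour and one column per sensor."""
    with open(ill_path) as f:
        rows = [[float(v) for v in line.split()] for line in f if line.strip()]
    n_hours, n_sensors = len(rows), len(rows[0])
    # Per sensor: fraction of hours at or above the threshold.
    da = [sum(r[i] >= THRESHOLD_LUX for r in rows) / n_hours
          for i in range(n_sensors)]
    return sum(da) / n_sensors

with open(r"\\server\daylight\results.csv", "a", newline="") as out:
    writer = csv.writer(out)
    for ill in sorted(Path(r"\\server\daylight\runs").glob("iter_*.ill")):
        writer.writerow([ill.stem, round(daylight_autonomy(ill), 3)])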
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of Radiance and E+ on separate computers. I don’t see the incentive to split individual runs across multiple processors, because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix-based calculations (3-5 phase methods) are greatest when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Would that let parallel computation also benefit from reusing pre-calculated information?
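To make the question concrete, here is the kind of setup I have in mind (a rough Python sketch; all paths and file names are hypothetical). It assumes the daylight matrix and sky matrix live once on the server and only the interior changes per iteration; dctimestep is the standard Radiance tool for the V*T*D*s multiplication:

import subprocess
from pathlib import Path

SHARED = Path(r"\\server\radiance\shared")
DMX = SHARED / "facade.dmx"   # daylight matrix, computed once and shared
SKY = SHARED / "annual.smx"   # annual sky matrix, computed once and shared

def run_iteration(run_dir):
    """3-phase result = V * T * D * s; only V and T change per iteration."""
    vmx = run_dir / "room.vmx"     # this iteration's view matrix
    tmx = run_dir / "glazing.xml"  # this iteration's BSDF (T matrix)
    with open(run_dir / "results.dat", "w") as out:
        subprocess.run(["dctimestep", str(vmx), str(tmx), str(DMX), str(SKY)],
                       stdout=out, check=True)

# Each cluster node would call run_iteration() on its own folder, so the
# expensive D and s matrices are computed once and merely read in parallel.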
Clustering computers and GPU-based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn’t know what that would be. Do you know what Penn State uses? You mentioned it is a text-only, Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I’m concerned that the high-quality GPUs required might limit our ability to implement it at a large scale within our office. Does it still work well on standard GPUs? The computer-cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
, and it is the zone where the themes of threshold, gradient, variation, and catastrophe appear most clearly, and where limit and transition conditions can be explored by turning them into interesting formal and architectural configurations. The workshop aims to investigate the strategies through which transition conditions between ecosystems manifest themselves, both in spatial terms (from the territorial scale to the scale of components) and in terms of temporal evolution or cyclicity (critical conditions such as the night/day cycles of desert zones). The theme of elegance concerns the way the system produces a harmonically articulated and differentiated field of phenotypes from the genotype, through a process of "aesthetics of forces" guided by the digital tool.
The theme will be unfolded over the days of the workshop, developing theoretical and technical aspects of the generative parametric approach, with particular attention to design strategies based on endogenous characteristics (internal constraints of the system) and exogenous ones (environmental factors), with the aim of stimulating the exploration of innovative systemic solutions.
The number of participants is capped at 16 in order to offer productive tutoring and an effective learning experience to each attendee.
Themes:
Theory
. transition
. elegance
. efficiency
. redundancy
. sensitivity
. ornament
. space
Technique
. data: management, manipulation, visualization
. generating geometry from data
. parametric logics applied to design
. genotype/phenotypes
. attractors, drivers, and modulation techniques
Details:
Instructors: Giulio Piacentino - McNeel (GH intro), Alessio Erioli + Andrea Graziano - Co-de-iT (GH & design tutors).
Basic experience with Rhino modeling is required.
Computers with trial versions of the software preinstalled will be available; participants may, at their discretion, use their own notebooks.
Registration fee (max 16 places): € 400 + VAT - the fee does not include board and lodging
Location: Pentacom - Via Petroni 18/4, Bologna
Hours: 10.00-18.00.
Info and registration:
www.co-de-it.com
andrea@co-de-it.com…
designers, artists of various media, landscape designers, students.
Schedule_ 9.00-18.00 (1 hour lunch break). 16 hours_2 days of 8 hours each.
Description_ The basic level of Grasshopper serves as an introduction to Grasshopper, the parametric plugin for Rhino 3d. Participants will be exposed to beginner/intermediate workflows and design strategies for PARAMETRIC MODELING. The focus will be on data-flow, visualization, and analysis techniques that can provide a solid basis for future research and development.
The lessons will combine a theoretical part with a practical part developing exercises based on elements of contemporary design and architecture.
Registration_ generativef@gmail.com
+info_Grasshopper Workshop_Basic Level
Organizes_generativeflow.com
Who_ The instructors will be Marco Bonucci & Fernando Rial
___________________________________________
When?_ 27/28 October 2012 (Saturday and Sunday)
Where?_ AD Comunicazione. Via di Sant'Anna, 3, Roma. (Centro Storico)
Schedule_ 9:00 to 18:00 (1 hour lunch break). 2 days_16 hours_8 h/day
Who is the target Audience?_Architects, Engineers, Industrial Designers, Interior Designers, Product Designers, Artists of various media, Landscapers.
Abstract_ The basic level of Grasshopper serves as an introduction to Grasshopper, the parametric plugin for Rhino 3d. Participants will be exposed to beginner / intermediate workflows and design strategies for PARAMETRIC MODELING. The focus will be on techniques of data flow, visualization and analysis that will provide a solid basis for future research and development.
Registration_ generativef@gmail.com
+ info_Grasshopper Workshop_Basic Level
Organizes_generativeflow.com
Who_ The instructors will be Marco Bonucci & Fernando Rial
…
Sizes like 0.6m, 0.8m, 0.9m and 1.2m are the most “common”. In cases where mechanical floors are a must (hospitals, for instance), a 2.4/2.4 split is quite handy (habitable/mechanical per floor). You can try 1.8/2.7 as well (floor/habitable), since a 1.8 floor thickness can host HVAC and a decent W-truss depth. Also, 1.6/2.4 (floor/habitable) is used in small buildings. NOTE: see next.
3. Don't forget to include the corrugated metal deck height + concrete screed height + raised floor height: for the latter, allow something like 0.3m (modules + adjustable mounts + free space for electrical services [boxes etc.]). A worked sum follows this list.
4. As regards exteriors, Laurent Buzon is a close friend of mine. Contact him directly on my behalf:
http://www.buzonuk.com/
http://www.google.gr/url?sa=t&rct=j&q=&esrc=s&sourc...
5. LBS structural ability and “monolithic” floor behavior (humans don't like vibrating habitable spaces) are not the same animal.…
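To make points 2-3 concrete, here is a quick worked sum (every number below is an illustrative assumption, not a rule):

# Every value in metres; all are illustrative assumptions, not rules.
corrugated_deck = 0.15  # corrugated metal + concrete screed (assumed)
raised_floor    = 0.30  # modules + adjustable mounts + services void
truss_zone      = 1.35  # W-truss depth left inside a 1.8 m floor zone
habitable       = 2.70  # clear habitable height

floor_zone = corrugated_deck + raised_floor + truss_zone   # = 1.80 m
print("floor-to-floor =", floor_zone + habitable, "m")     # = 4.50 m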
“Honeybee_EnergyPlus Window Shade Generator” component.
3. The SolveAdj component has an input to set the BC for interior surfaces.
If you want to set them to adiabatic, you can use the setToAdiabatic component.
4. For natural ventilation, Chris has provided extensive answers, including this one.
If the component doesn't work, then you need to download the files manually from GitHub and replace the old userObjects. You have to do this separately for Ladybug and Honeybee, which can be painful. Is there any way to change the firewall settings?
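In case it helps, here is a rough Python sketch of that manual fallback. The repo URLs, the userObjects folder layout inside the archives, and the Windows destination path are assumptions based on the legacy projects, so adjust before relying on it:

import io
import os
import urllib.request
import zipfile

# Assumed legacy repo archives and the default Windows UserObjects path.
REPOS = [
    "https://github.com/mostaphaRoudsari/ladybug/archive/master.zip",
    "https://github.com/mostaphaRoudsari/honeybee/archive/master.zip",
]
DEST = os.path.join(os.getenv("APPDATA"), "Grasshopper", "UserObjects")

for url in REPOS:
    with urllib.request.urlopen(url) as resp:
        archive = zipfile.ZipFile(io.BytesIO(resp.read()))
    for name in archive.namelist():
        # Copy every .ghuser found under the repo's userObjects folder,
        # overwriting the old component definitions.
        if "userObjects/" in name and name.endswith(".ghuser"):
            with open(os.path.join(DEST, os.path.basename(name)), "wb") as f:
                f.write(archive.read(name))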
…
Parametric. With Grasshopper you can manage projects that are complex in both form and organization with a single tool, from object design to architectural space to urban-scale organization. Grasshopper is open-source software, continuously updated by its users - soon you too could contribute to its development!
Saturday 11 MAY 2013
Duration: 6 hours, from 10:00 to 17:00
Venue: STUDIO REMODESIGN (via dei Marsi n° 41)
To book, call: 3498381249
or send an email to: contact@ivoambrosi.it
visit the website: www.ivoambrosi.it…
whilst settings concern only the currently selected instance.
For instance, assume that you are in the Bermuda Shorts business and you want various ideas for a new ad campaign:
Or assume that the 4 horsemen want you to quickly present some concept proposals related to a terminal event they have in mind:
…
still quite rough.
I went through your attached log, but it seems to be a successful run; perhaps the error log wasn't attached. In any case, I believe we have identified this issue. The goal of the update fvSchemes component was to apply schemes to finalized meshes automatically. While this is useful for new users, it is also a dangerous thing to do in CFD studies.
The component works by relating mesh quality to mesh non-orthogonality, which the checkMesh component reports. While non-orthogonality is one of the important criteria of mesh quality, it presents difficulties on some kinds of meshes, especially simple cases like the ones BF has been meshing so far.
The example case of simple box buildings in a wind tunnel above, for instance, will appear to be a good-quality case even for the lowest cell-count meshes, simply because it is an orthogonal geometry. That means checkMesh will probably report low values (an empty blockMesh of 10m blocks has a non-orthogonality of 0), which in turn means that higher-order schemes might be paired with genuinely low-quality meshes. This, I believe, is what is causing the problems.
I posted a possible solution here: https://github.com/mostaphaRoudsari/Butterfly/issues/57. The idea is that Butterfly provides additional options to users, enabling them to choose between first-order (faster and more robust, but lower-quality) and second-order (slower and less robust, but more accurate) schemes depending on mesh quality, stage of assessment, etc. For a mesh quality like the one above, a first-order scheme might be the better option. To test this I am attaching an fvSchemes file you can use by replacing yours in the /system folder of the case.
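For reference, something along these lines could also be scripted. The dictionary below is NOT the attached file, just a hedged example of a conservative first-order scheme set following common steady-state RANS practice:

import os

FIRST_ORDER_FVSCHEMES = """\
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      fvSchemes;
}

ddtSchemes      { default steadyState; }
gradSchemes     { default cellLimited Gauss linear 1; }

divSchemes
{
    default          none;
    div(phi,U)       bounded Gauss upwind;      // first-order, robust
    div(phi,k)       bounded Gauss upwind;
    div(phi,epsilon) bounded Gauss upwind;
    div((nuEff*dev(T(grad(U))))) Gauss linear;
}

laplacianSchemes     { default Gauss linear limited 0.333; }
interpolationSchemes { default linear; }
snGradSchemes        { default limited 0.333; }
"""

def write_first_order_schemes(case_dir):
    """Overwrite <case>/system/fvSchemes with the first-order set above."""
    with open(os.path.join(case_dir, "system", "fvSchemes"), "w") as f:
        f.write(FIRST_ORDER_FVSCHEMES)

write_first_order_schemes(r"C:\butterfly\my_case")  # hypothetical case path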
As a note, however, I would like to stress that there is only so much a tool like Butterfly can provide in this area. Meshing is a complicated and demanding part of the process, involving a lot of trial and error. Sometimes the problem is just the mesh and not the solution options (GIGO holds true in CFD as well). It does, however, get easier with experience. The safe advice is the simplest: when changing solution options doesn't help, refine the mesh and run again.
Kind regards,
Theodore.…
Refinement component at first, possibly using MeshMachine instead, which is slow but gives far fewer triangles, plus adaptive meshing for tight curves. Neither is easy to adjust on a deadline!
Then you have to sneak up on workable settings using only a few lines, or Grasshopper will freeze, perhaps indefinitely, on 200 lines with extreme settings - especially the CS (Cube Size) setting, which can blow up into a huge number if your scale is large.
Cocoon gives lots of nearly flat split quad faces so I quadrangulated those for fun:
Or MeshMachine can refine the mesh to make it efficient:
Whereas the Cocoon Refine component will merely return an equally fine mesh with more equilateral triangles, but no serious remeshing to get rid of the many tiny triangles where they are not needed? Actually, it does seem to remesh as well:
David said he used some of Daniel's MeshMachine code in there.…