catch-all phrases that pick up all of the rest of it: you have inputs, algorithms, and outputs.
It sounds like you don't necessarily want to wade into the miasma of academic reflection on the terminology, but in case you're willing to hold your nose and brave it there is some fairly interesting stuff out there. Nick's point about computation versus computerization is pretty reflective of a common mode of thinking about it. AD published a book edited by Sean Ahlquist and Achim Menges a couple years back called "Computational Design Thinking" and their introduction lays out a compelling argument for the distinction. Likewise Philip Galanter's paper "What is Generative Art? Complexity Theory as a Context for Art Theory" is a good read.
I mean, ultimately it's about semantics. If you're worried about "real, accurate meaning" the reality is you're going to have to justify the boundaries of your own definition one way or another. David is rather a wry literalist who I suspect enjoys taking the piss out of academics, particularly when they're all puffed up (and really, every event is a form of computation if you want to go there). But usage counts, and there's a growing body of work staking claim to these terms, so it's better to know how and why if you even want to ask the question.…
Added by David Stasiuk at 2:01pm on November 28, 2013
* Excellent knowledge of Rhino and Adobe Creative Suite (Photoshop, Illustrator, InDesign)
* Experience in Revit is preferred
* Knowledge of Grasshopper is a plus
* Excellent communication and expression skills; fluency in spoken and written English is a must
* Good team player
Please apply with the official LAVA application manager and refer to this ad:
https://lava.poolarserver.com/quicklink/pageApplicationUpload.aspx
…
t the elements I used CreateHBsrfs and I added "Adiabatic" in the EPBC input. Since the EnergyPlus results weren't what I expected, I checked the .idf file and discovered that none of the elements is adiabatic. Furthermore, the simulation doesn't use the materials I set up for the non-adiabatic walls.
I also tried MakeAdiabatic, MakeAdiabaticbyname and MakeAdiabaticbytype. With the first and the second the problem stays the same. With MakeAdiabaticbytype, if I change one wall type in CreateHBsrfs it still keeps the same type in EnergyPlus, so all the walls become adiabatic. Is there something I can do? I attach the GH file. Thanks in advance, Lisa…
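In case it helps while the component behavior gets sorted out, one workaround is to patch the exported .idf before running EnergyPlus. This is only a sketch, not part of Honeybee, and it assumes the standard BuildingSurface:Detailed field order (name, surface type, construction, zone, outside boundary condition, boundary object, sun exposure, wind exposure); comments and layout inside the rewritten objects are not preserved:

```python
# Sketch: force chosen surfaces in an EnergyPlus .idf to Adiabatic by
# rewriting the Outside Boundary Condition fields directly.
import re

def make_adiabatic(idf_text, surface_names):
    """Return idf_text with the named BuildingSurface:Detailed objects
    set to an adiabatic outside boundary condition."""
    out = []
    for obj in idf_text.split(";"):
        # strip "!- ..." comments, then split the object into bare fields
        fields = [re.sub(r"!.*", "", f).strip() for f in obj.split(",")]
        if (len(fields) > 8
                and fields[0] == "BuildingSurface:Detailed"
                and fields[1] in surface_names):
            fields[5] = "Adiabatic"   # Outside Boundary Condition
            fields[6] = ""            # no boundary object for adiabatic
            fields[7] = "NoSun"       # Sun Exposure
            fields[8] = "NoWind"      # Wind Exposure
            out.append("\n" + ",\n    ".join(fields))
        else:
            out.append(obj)
    return ";".join(out)
```

Running the simulation on the patched file should then show the surfaces as adiabatic regardless of what the component wrote.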
The type of recipe appears to be related to the problem, because the error goes away when I connect the component to a different recipe.
A screenshot of the complete error message is in the attachment.
Error text:
0. Annual climate-based analysis
1. The component is checking ad, as, ar and aa values. This is just to make sure that the results are accurate enough.
2. Good to go!
3. Current working directory is set to: c:\ladybug\unnamed\annualSimulation\
4. Rotating the scene for 41 degrees
5. Runtime error (TypeErrorException): unsupported operand type(s) for +=: 'str' and 'bool'
6. Traceback: line 6509, in transform, "<string>" line 1665, in writeRADAndMaterialFiles, "<string>" line 193, in main, "<string>" line 258, in script
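For what it's worth, a TypeError like that usually means a boolean value reached a place where the script builds a string with +=. A minimal illustration of the failure mode (this is not Ladybug's actual code; the variable names are made up):

```python
# A bool arriving where the writer expects a string makes += fail,
# because str cannot be concatenated with bool.
status = "Rotating the scene for 41 degrees\n"
rotate_north = True                # a bool where a string is expected

try:
    status += rotate_north         # raises TypeError
except TypeError:
    status += str(rotate_north)    # explicit conversion is the usual fix
```

So the likely culprit is a boolean input (or an internal flag) being concatenated without a str() cast somewhere in writeRADAndMaterialFiles.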
Many thanks in advance…
ALISTICO. Each module will take place over two days, and you can choose to attend both modules or just one or the other.
This course teaches new parametric modelling techniques using Grasshopper, the revolutionary plug-in for Rhinoceros. Grasshopper brings out the full qualities and potential of NURBS modelling while partly setting aside the classic Rhinoceros interface, which is replaced by a drop-down menu collecting the nodes used to compose solving algorithms.
The Grasshopper plug-in shows how the language of the computer is becoming a real design tool.
GRASSHOPPER - BASIC - 8 hours
09/05/2013, from 10.00 to 19.00
The first part of the course teaches methods for expressing algorithms explicitly, applied to basic exercises useful for understanding the software. These hours will cover, step by step, the following topics:
Breaking algorithms down into parameters and components;
Data types compatible with Grasshopper, and combining them to create minimal definitions;
Mathematical and logical functions;
Data flow, lists and culling filters;
Constructing curves and surfaces and transforming them.
Pre-registration deadline for Grasshopper - BASIC: 06/05
GRASSHOPPER - ADVANCED - 8 hours
10/05/2013, from 10.00 to 19.00
The second part of the course specializes the tool, tackling complex editing and transformations of surfaces:
Working with subdivision surfaces;
Spatial tessellation of double-curvature surfaces;
Managing variable parameters to design definitions aimed at controlling movement;
Devising algorithms for moving from the digital model to the physical model using the slicing technique.
Pre-registration deadline for Grasshopper - ADVANCED: 07/05
Target audience
The course is aimed at all university students and professionals with a good knowledge of NURBS modelling techniques.
Prerequisites
Participants must bring their own laptop with a fully working copy of Rhinoceros. At the end of the course, participants receive a certificate of attendance for a qualified Rhinoceros course, certified by McNeel, the developer of the software, and also valid when applying for university course credits.
Course instructor
The course will be taught by a qualified instructor, an expert in architectural and design drawing and representation:
Michele Calvano | architect, PhD in architectural representation, specialized in mathematical (NURBS) modelling and parametric modelling.
ART (Authorized Rhino Trainer) instructor - [see CV]
…
Daily courses (basic level) dedicated to 4 different topics: Rhinoceros - 8 February; Grasshopper - 16 February; RhinoCAM - 8 March; 3D printing - 9 March
Tutors: Amleto Picerno Ceraso, Francesca Viglione, Gianpiero Picerno Ceraso.
. Arduino for interaction (basic-intermediate level) - 15, 16 March. The workshop starts from the basics of Arduino programming and builds up to interaction between a physical object and an informational input. Tutor: Gianpiero Picerno Ceraso
. Grasshopper advanced: "Complex surface" (intermediate level) - 18, 19, 20 March. The workshop aims at developing complex surfaces that respond to information coming from the environment. It starts from the fundamentals of Grasshopper and works toward the possible realization of an object using digital fabrication techniques. Tutor: Amleto Picerno Ceraso. NB: basic knowledge of Grasshopper is required
. Emotional design (advanced level) - 23, 24, 25 March. The workshop will focus on acquiring, recording and manipulating such data/emotions with Grasshopper, and on using them to control the design parameters of specific objects, which, being customized with the user's specific emotions, become instances and tactile memories of precise experiences. Tutor: Andrea Graziano. NB: basic knowledge of Grasshopper is required
. Fabricated fashion (advanced level) - 26, 27, 28, 29, 30 March. The workshop focuses on digital design techniques applied to fashion. Tutors: Luis and Elizabeth Fraguada. NB: basic knowledge of Grasshopper is required
. Blender (advanced level) - 16, 17, 18 May. Tutor: Andrea Graziano
. Interaction design: Arduino + Grasshopper (intermediate level) - 2, 3, 4 May. The course aims to investigate processes of interaction between people and the environments they live in through responsive design. NB: basic knowledge of Grasshopper and Arduino is required. Tutors: Amleto Picerno Ceraso of the Mediterranean FabLab and Antonio Grillo of FabLab Napoli.
Info on costs: http://www.medaarch.com/2765-il-nuovo-calendario-attivita-firmato-medaarch/
…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a “round-robin” approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
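The round-robin assignment itself is simple to script. Here is a minimal sketch: it deals each iteration's batch file to the next node in turn and writes one launcher script per node into the shared folder. Node names and paths are placeholders, and actually starting the launcher on each node remotely (PsExec, Task Scheduler, or cluster software) is the part IT would still need to supply:

```python
# Round-robin sketch: distribute per-iteration batch files across a pool
# of nodes, then write one runAll_<node>.bat per node on a shared folder.
import itertools
import os

def assign_round_robin(batch_files, nodes):
    """Map each node name to the list of batch files it should run."""
    plan = {n: [] for n in nodes}
    # cycle() deals batch files out one at a time: node A, B, C, A, B, ...
    for node, bat in zip(itertools.cycle(nodes), batch_files):
        plan[node].append(bat)
    return plan

def write_launchers(plan, out_dir):
    """Write one launcher .bat per node that calls its batches in series."""
    os.makedirs(out_dir, exist_ok=True)
    for node, bats in plan.items():
        path = os.path.join(out_dir, "runAll_%s.bat" % node)
        with open(path, "w") as f:
            f.write("\n".join('call "%s"' % b for b in bats))
```

Each node then only has to run its own runAll script, and all Radiance/E+ output lands in whatever results directory the batch files point at.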
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, material, etc.
Generate radiance/ E+ batch files for all iterations
Calculate
Calc separate runs of Radiance/E+ in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to single server location on server
Post Processing
Run a GH script from a single computer. Translate .ill files or .idf files into custom metrics or graphics (DA, ASE, %shade down, net solar gain, etc.)
Collect final data in single location (excel document) to be read by Design Explorer or Pollination.
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
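The final consolidation step could be as simple as writing one row per iteration to a shared CSV. A sketch, assuming Design Explorer's "in:"/"out:" column-prefix convention for inputs and outputs (the metric names here are placeholders):

```python
# Sketch: collect per-iteration results into a single CSV that a tool
# like Design Explorer can read ("in:" = input parameter, "out:" = metric).
import csv

def consolidate(results, csv_path):
    """results: list of dicts, e.g. {"in:wwr": 0.4, "out:DA": 61.2}."""
    fieldnames = sorted(results[0].keys())
    with open(csv_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(results)
```

The single-computer post-processing script would call this once after translating all the .ill/.idf results into metrics.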
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but that he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I’m concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…