bi-directional link, the link is unidirectional (downflow only), because of the use of proxies.
Matrix transforms and persistent constraints: I don't think this is true. The parts can have mates to other parts that preserve geometric relationships like 'coincident', 'aligned', etc. These are essentially bi-directional. GH's algorithmic approach does not handle relationships in the same flexible way: in GH, the 'relationship' has to be part of the generation method and depends on the creation sequence, e.g. draw line 2 perpendicular to line 1 from its end point. If you are thinking about parts or assemblies sharing or referencing parameters as part of the regen process, this is also possible: iLogic does this, and adds scripting. So does Catia. Inventor/iLogic can also access Excel and have all the parameter processing done centrally, if required.
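To make the contrast concrete, here is a minimal RhinoPython sketch of that sequence-dependent 'relationship' (dimensions and names are illustrative): line 2 is re-derived from line 1 every time the script runs, so editing line 2 can never drive line 1 the way a bi-directional mate could.

```python
import rhinoscriptsyntax as rs

# Line 1 is the "parent"; everything about line 2 is derived from it.
line1 = rs.AddLine((0, 0, 0), (10, 0, 0))
end = rs.CurveEndPoint(line1)
param = rs.CurveClosestPoint(line1, end)
tangent = rs.CurveTangent(line1, param)

# Rotate the tangent 90 degrees about the world Z axis to get a
# perpendicular direction, then draw line 2 from line 1's end point.
perp = rs.VectorRotate(tangent, 90.0, (0, 0, 1))
line2 = rs.AddLine(end, rs.PointAdd(end, rs.VectorScale(perp, 5.0)))
```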
Consequently, scripting the placement of components is irrelevant in GH, unless you decide that each component needs to be contained in its own separate file.
I wouldn't be too hasty here. Yes, you are right about compartmentalisation. I think this needs to happen with GH in order to deal with scalability and everyday interoperability requirements. Confining projects to one script is not sustainable. MCAD apps have been doing this for ages with 'Relational Modeling'.

The Adaptive Components placement example illustrates that it is beneficial to be able to script some 'hints' that can be used when placing a component. Say your component requires points as inputs; it should then be able to find the points nearest to the cursor as it moves around. I think Aish's D#/DesignScript demonstrated this kind of behaviour a few years ago. Similarly, Modo's Toolpipe reminds me how many UI-based transactions can be captured as scripts (macro recorder etc.). Allowing this input to be mixed in and/or extended by GH will, I think, yield a lot of 'modeling efficiency' around the edges. This is (mis)using GH as a user-programmable 'jig' for placing/manipulating 'dumb' elements in Rhino. It may even give the 'dumb' elements a bit more 'intelligence' by leaving behind embedded attributes, like links to particular construction planes.

Even if we confine ourselves to scripting, GH is a visual or graphic programming interface, and a lot of 'insert and connect' tasks can be done more easily using graphic methods. If we need to select certain vertices on a mesh as inputs for, say, a facade panel, it is going to be quicker to do this 'graphically' (like the AC example) than by ferreting out the relevant indices in the data tree. The 'facade panel' script would then have some code to filter inputs and prompt the user about which inputs were acceptable, and so on.
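As a sketch of the kind of placement 'hint' described above (a hypothetical helper, not any existing DesignScript or Adaptive Components API), a scripted component that takes mesh vertices as inputs could rank them by distance to the cursor:

```python
import rhinoscriptsyntax as rs

def nearest_vertices(mesh_id, cursor_pt, count=4):
    """Return the `count` mesh vertices closest to a cursor location --
    candidate snap points a component could offer during placement."""
    verts = rs.MeshVertices(mesh_id)
    ranked = sorted(verts, key=lambda v: rs.Distance(cursor_pt, v))
    return ranked[:count]
```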
This also brings up the point that generating components and assemblies in MCAD is not as straightforward. With iParts and iAssemblies, each configuration needs to be generated as a "child" (an individual file must be created for each child) before those children can be used elsewhere.
Not sure what you mean here. If the iParts are built up using sketches/profiles or other more rudimentary features (like Revit's profile/face family templates), then reuse should be fairly straightforward. I suppose you could make it like GH scripting if you cut and paste or include script snippets that generate the desired Inventor features.
One of the reasons why the distributed-file approach makes perfect sense in MCAD is that in industry you deal with a finite set of objects. Generative tools are usually not a requirement; most mechanical engineers, product engineers and machinists would never have any use for them.
I don't think this is true. Look at the automotive body design apps, which are mostly Catia-based. All of the body parts are pretty much 'generative', produced from splines in a procedural way using approaches very similar to GH's. Or sheet metal design. It's not always about configuring off-the-shelf items like bolts. And the constraints manager is available to arbitrate which bit of script fires first, so your mundane workaday associative dimensions etc. can update without getting run over by the DAG(s) :-)
…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
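For what it's worth, the round-robin assignment itself is trivial to script; the hard part is the actual remote execution. A minimal Python sketch of the dealing-out step (node names and file names are hypothetical):

```python
from itertools import cycle

# Hypothetical machine names and batch files -- in practice these would be
# hostnames/UNC paths and the Radiance/E+ .bat files generated by GH.
nodes = ["node01", "node02", "node03"]
batch_files = ["iter_%03d.bat" % i for i in range(20)]

# Round-robin: deal the batch files out to the nodes like cards.
assignments = {node: [] for node in nodes}
for node, bat in zip(cycle(nodes), batch_files):
    assignments[node].append(bat)
```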
I'm concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn't take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to run automatically in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So it seems to me that we may want to split up the GH automation as follows:
Initiate
- Parametrically generate geometry
- Assign input values, materials, etc.
- Generate Radiance/E+ batch files for all iterations
Calculate
- Run separate Radiance/E+ simulations in parallel via network clusters; each run is a unique iteration
- Save all temp files to a single location on the server
Post-processing
- Run a GH script from a single computer to translate .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.); a sketch of the DA step follows below
- Collect the final data in a single location (an Excel document) to be read by Design Explorer or Pollination
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
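For reference, the DA step in the post-processing phase is conceptually simple. A rough sketch, assuming a Daysim-style .ill layout (one row per timestep, month/day/hour in the first three columns, one illuminance value per sensor after that) and ignoring the occupancy schedule a real DA calculation would apply:

```python
def daylight_autonomy(ill_path, threshold=300.0):
    """Fraction of timesteps at which each sensor meets `threshold` lux."""
    hits, total = None, 0
    with open(ill_path) as f:
        for line in f:
            if not line.strip():
                continue
            # Skip the month/day/hour columns; the rest are sensor values.
            values = [float(v) for v in line.split()[3:]]
            if hits is None:
                hits = [0] * len(values)
            for i, lux in enumerate(values):
                if lux >= threshold:
                    hits[i] += 1
            total += 1
    return [h / float(total) for h in hits]
```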
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix-based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer-cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
cover most of Rhino's features, including the most advanced surface-creation commands.
Structure: The lessons will systematically cover topics including the user interface, commands, and the creation and editing of curves, surfaces and solids.
Expected outcomes: After this course, the student should be able to:
• Navigate the Rhino interface with ease.
• Recognize when free-form or precision modeling is required.
• Create and edit curves, surfaces and solids, including complex ones.
• Use modeling aids for precision.
• Produce simple renderings to visualize Rhino models.
Audience: This course is aimed at designers and students who want to learn the concepts and features of the Rhinoceros modeling software effectively. The lessons will be delivered by a McNeel-qualified ART instructor who is an expert in NURBS modeling.
Prerequisites: The course requires Windows skills and a passion and willingness for modeling; previous modeling experience, even with other software, is useful but not essential.
Certificate: At the end of the course, a certificate of attendance for a McNeel-qualified course will be issued, also valid for obtaining university credits.
Location: Lessons will be held at Via dei Valeri 1 int.9, 00184 ROMA
Pre-registration: To guarantee enrollment numbers, pre-registration is required by sending an email to 4planstudio@gmail.com containing the following:
First name:
Last name:
Home address:
Email:
Phone:
Pre-registration must be completed by 30/11. The tutor will then send a confirmation email with the enrollment procedure.
Enrollment fee
The course has the following enrollment fees:
students: 400 Euro (a copy of the current academic year's payment receipt must be provided);
non-students: 470 Euro. Fees include VAT.
Info
For further information, the following contacts are available:
Course director: arch. Michele Calvano
Info email: 4planstudio@gmail.com
tel: 340 3476330
…
ly fabricated interventions and interactive electronic performance art installations in Barra Funda. Along with other experts, these tutors will teach how to use and apply new design technologies, notably Rhino and Grasshopper (and numerous plug-ins including GECO, Galapagos, Kangaroo and RhinoCam); Arduino and Processing; and the use of laser-cutters, rapid-prototype machines and CNC routers and mills.
Alan Dempsey of NEX was selected in 2010 by the Centre for European Architecture/Chicago Athenaeum as one of the 40 most significant architects in the EU under 40. In 2008 he was selected by the British Council as one of the six most significant Design Entrepreneurs. He previously worked with Future Systems, OCEAN and Homa Farjadi. Alan was an AA Unit Tutor and is Director of the AA Independent's Group (www.independentsgroup.net), which facilitates research into the use of computational design and fabrication. Alan has lectured, exhibited and been published worldwide. His work has received a number of awards, including a LEAF award for Spencer Dock Bridge and a D&AD pencil for the [C]space DRL 10 Pavilion.
Robert Stuart Smith of Kokkugia is a Studio Course Master at the AA DRL. Robert previously worked for Lab Architecture Studio and Nicholas Grimshaw & Partners. He focuses on self-organisational systems and developmental growth, pursuing polyvalent and environmentally responsive affect. He leads consulting for Cecil Balmond on non-linear algorithmic design research. Kokkugia has projects in the USA, UK and Mexico, and is exhibited and published internationally.
Iván Ivanoff is an artist, programmer, and researcher. He searches for new forms of communication for the society of the future and directs various media labs worldwide. He founded the artistic collaborative i2off.org+r3nder.net, which develops multi-media and interactive projects, and Estado Lateral Media Lab, to investigate and develop new technologies.
The Barra Funda district of São Paulo was once characterised by a mix of small industrial, commercial and residential programmes, but, as economic policies have favoured larger production industries, numerous companies have abandoned the area. In response, the workshop proposes creating new types of smaller industries that mix consumption and production, manifested through micro-manufacturing interventions that can co-exist alongside retail and housing. Computational design and digital fabrication could be used to help create these new micro-industries, which in turn will help empower local craftsmen to produce and sell directly to consumers through micro-manufacturing located in small urban workshops.
The workshop will tap into the emergent gallery scene of Barra Funda and local initiatives that use computational technology to introduce a new cultural and economic impetus. The workshop is part of the International Festival of Electronic Language (FILE), an exhibition of interactive electronic technology, and will bring these electronic technologies out of the gallery, collaborating with local manufacturers, artists, and activists, with the goal of disseminating high-tech yet low-cost, small-scale fabrication systems to promote this new micro-industrial movement. The workshop is open to architecture and design students and professionals worldwide.…
ermediate concepts essential for a proper understanding of the Rhinoceros software.
The course will take place on the following days:
Monday 07/10/2013, from 9:30 to 13:30
Tuesday 08/10/2013, from 9:30 to 13:30
Monday 09/10/2013, from 9:30 to 13:30
Tuesday 15/10/2013, from 9:30 to 13:30
Pre-registration deadline for Rhinoceros StartUP: 04/10
Contents
- Presentation and explanation of the interface
- In-depth use of the basic 2D commands for managing the project document
- Free-form theory
- Modeling simple architecture to perform simple and complex Boolean operations (addition, subtraction, intersection)
- Presentation and explanation of double-curvature surfaces and their panelization
- Editing commands, trimmed surfaces and blends between surfaces
- Analysis of surface curvature, tangency and position
- Layout and construction of two-dimensional drawings from three-dimensional models
- Modeling complex architecture
Audience
The course is aimed at university students, professionals, and also those with no previous 3D modeling experience.
At the end of the course, a certificate of attendance for a Rhinoceros course qualified and certified by the developer, McNeel, will be issued, also valid for requesting university credits.
Course instructor
The course will be taught by a qualified instructor with recognized university experience, an expert in architectural and design drawing and representation, and a McNeel instructor:
Michele Calvano | architect, PhD in architectural representation, specialized in mathematical (NURBS) and parametric modeling.
ART (Authorized Rhino Trainer) instructor - [see CV]
Info
For further information on teaching matters, the following contacts are available:
Course director and instructor: arch. Michele Calvano
Info email: parametricart@gmail.com
cell: 340 3476330
…
ssible and easy to use. The course starts from the basics of Arduino programming and works up to the interaction between a physical object and an informational input. Tutor: Gianpiero Picerno Ceraso
Program: Day 1: Introduction to physical computing, digital and analog inputs, the basics of the programming language, applied examples: LEDs, buttons, photoresistors, servo motor, temperature and flex sensors, motion sensors, potentiometers.
Day 2: Arduino Ethernet, using a relay for high loads, accelerometer, introduction to Processing, interaction between Arduino and Processing, introduction to Grasshopper and Firefly and their interaction with Arduino.
Course hours: 10:00 – 13:00 and 14:00 – 17:00 (lunch break 13:00 – 14:00). Cost: 150€ + VAT. Deadline: 13 March. Minimum number of participants: 3.
To register, write to info@medaarch.com specifying your first name, last name, email, phone number and the name of the course you are interested in. After sending the pre-registration form, participants will receive an email with all payment details.
To follow the Arduino cluster you need to install the Arduino 1.0.5 software from the following link: http://arduino.cc/en/Main/Software#.Ux3hQj95MYE, taking care to download the version for your operating system (Windows 32 or 64 bit, or Mac OS).
Software needed only for part of the course: Processing 2.1.1 https://processing.org/download/?processing
Rhino 5 http://www.rhino3d.com/it/download, Grasshopper for Rhino 5 http://www.grasshopper3d.com/page/download-1, Firefly http://fireflyexperiments.com/
The cluster is part of a packed calendar of training activities organized by Medaarch for the 2013-2014 year.…
Series", is the most popular parametric modeling course in Italy, now in its ninth consecutive year. Plug it will give participants real command of the most advanced digital modeling techniques, delving into algorithmic and parametric modeling methodologies in the fields of architecture and product design. The course is aimed at students and professionals in architectural design, design, fashion and jewelry with minimal experience in two-dimensional CAD drafting (acquired on any software platform), and will consist of theoretical lectures and guided exercises.
_
FORM FINDING STRATEGIES | Intermediate Level | Environmental analysis and shape optimization
Form Finding Strategies is the second step of the three-stage "AAD Workshop Series" training path. The workshop explores the generation of efficient forms in response to external influences and to the intrinsic characteristics of the material itself. Environmental analysis (solar, thermal and acoustic inputs) and FEM structural analysis/optimization will be the main methodologies used to achieve the form-finding objectives. Numerous plug-ins will be introduced, including Weaverbird, Kangaroo, Geco/Ecotect, Ladybug and Millipede. The course is aimed at students and professionals with a basic knowledge of Rhino and Grasshopper.
_
PERSPECTIVES | Advanced Level | Python coding and advanced algorithmic modeling
The new Perspectives course, offered for the first time in 2019 (and the final step of the three-stage "AAD Workshop Series" training path), will introduce students to Python programming and its integration with Grasshopper. Advanced iteration-based form-generation techniques will also be explored. Among the main plug-ins used: GhPython, Anemone, Hoopsnake, Plankton, MeshMachine, Pufferfish. Conceived as an innovative workshop on the future prospects and challenges of computational design, it is aimed at students and professionals with experience in algorithmic modeling with Grasshopper.
INFO AND REGISTRATION
…
use I don't agree with the practice of using site EUI as a metric to evaluate the thermodynamic performance, environmental impact, or monetary value of a building. I disagree with this practice for the same reason that there are no "totalThermalLoad" and "thermalLoadBalance" outputs for simulations run with full HVAC. I can summarize these reasons in the following way:
When we run a simulation with ideal air loads, the heating/cooling values we get are THERMAL ENERGY that is directly added to or removed from the zone. In this way, we can draw a rough parallel between these two types of energy, since they are generally of a similar type and quality. As such, I am ok with adding them together to get a total thermal load or subtracting them to get a sense of the thermal load balance.
However, when we run a simulation with full HVAC, the heating/cooling values that we get are usually HEATING FUEL ENERGY and ELECTRICITY respectively. Fuel energy and electricity are fundamentally two different types and qualities of energy. To cite the second law of thermodynamics, the exergy (or the capacity to do work) of electricity is much greater than that of fuel. This is evident in the fact that, to produce a given unit of electricity, I often have to burn at least 3 units of fuel energy (and this can be much more for inefficient plants). With each step in a power plant - making steam, turning a turbine, turning a generator - there are significant energy losses. This difference in exergy is also evident in the fact that there are so many more things that I can do directly with a unit of electricity than I can do with the same unit of fuel energy. I can use electricity to directly refrigerate, produce light, or power a motor just as easily as I can use it to cook, produce hot water, or heat a space. While I can cook, make hot water, or heat a space directly with fuel energy, refrigeration and lighting are much more difficult. For this reason, I do not feel comfortable adding electricity and fuel together, either in a totalThermalLoad output or in a site EUI metric.
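A toy example of why the addition bothers me, assuming the often-cited rough factor of ~3 units of fuel per unit of delivered electricity (actual factors vary by grid):

```python
elec, fuel = 50.0, 100.0        # annual site energy use, kWh/m2
site_eui = elec + fuel          # 150 -- treats the two as interchangeable
source_eui = elec * 3.0 + fuel  # 250 -- weights electricity by its fuel cost
```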
Still, the use of site EUI has become so ingrained in the industry that I have to acknowledge it and at least show users how it's calculated. In my view, it's an ad-hoc metric that was invented to deal with a previously limited amount of information on energy sources.
Instead of using site EUI, I would recommend using the following metrics depending on what you are trying to evaluate:
Utility Cost / Square Meter - to measure the monetary value of a building to an owner or user
Kg CO2 / Square Meter - to measure the environmental and climatic impact of a building
Emergy / Square Meter - to measure the overall thermodynamic performance of a building
The first two are actually fairly easy to calculate these days, just by researching your site's utility rates or grid energy mixture and multiplying the building's electricity and fuel use by their respective rates. I will soon add some capabilities to Honeybee to make it even easier for you to get these values from your EPW file and from databases of utility rates/grid mixtures. Emergy is much harder to calculate, as you have to trace all of your energy sources all the way back to the sun, but there are a number of experts at work to make this calculation possible (probably in the next few years we may have much easier ways to calculate it).
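A minimal sketch of that multiplication, with placeholder rates and emission factors (the real numbers must come from your utility and your grid's actual energy mixture):

```python
# Placeholder rates and emission factors -- illustrative assumptions only.
ELEC_RATE = 0.12   # $/kWh of electricity (hypothetical)
FUEL_RATE = 0.04   # $/kWh of fuel (hypothetical)
ELEC_CO2 = 0.45    # kg CO2 per kWh of electricity (grid-mix dependent)
FUEL_CO2 = 0.18    # kg CO2 per kWh of fuel burned on site

def cost_and_carbon_per_m2(elec_kwh, fuel_kwh, floor_area_m2):
    """Utility cost / m2 and kg CO2 / m2 from annual site energy use."""
    cost = elec_kwh * ELEC_RATE + fuel_kwh * FUEL_RATE
    co2 = elec_kwh * ELEC_CO2 + fuel_kwh * FUEL_CO2
    return cost / floor_area_m2, co2 / floor_area_m2
```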
Hope this helps explain the current setup.
-Chris…
he example file to this file so you can give it a try with any version of Honeybee that you're already using. The only requirement is to have OpenStudio installed, as the component uses OpenStudio libraries to parse gbXML files. If you're using the latest version available on GitHub, the component is also available under the WIP tab.
Why?
The main purpose of developing this component is to save time and effort when importing Revit models for energy and daylight analysis. It bothers me to see a lot of smart people spend a lot of time coming up with solutions just to get geometry from Revit into Honeybee for analysis. This component does not solve all the issues, but it is a first step forward. In an ideal world, the future version of Honeybee, which works under both DynamoBIM and Grasshopper, should address this issue, but that can take some time to be fully ready!
How?
To use this component, export your Revit model as gbXML and then use the file path to load the file into Grasshopper. There are several resources available online on how to prepare the analytical model in Revit and export the gbXML file. Here is an image of importing the Revit 2017 sample model using the default settings. As you can see, the imported model will only be as good as the original gbXML file from Revit.
What can be improved?
Well, there are several items that can be improved, and they are mostly not on us. To get things started, here are what I think are the 3 main shortcomings and my thoughts on how they can be addressed in the future. Feel free to add what you think needs to be added to this list in the comments section.
1. Revit analytical models, and as a result gbXML files, are by design not intended to be clean. Watch this presentation from Autodesk University to see the logic behind this approach, which, in short, is that it doesn't matter for a large-scale, early-stage energy model. Well, this will be quite a problem for the studies you can do with Honeybee, including but not limited to daylight and comfort analysis.
The best solution that I can think of, until Autodesk fixes their exporter, is to use Revit Rooms and Spaces and generate a clean model from scratch. We have already tried this approach in Revit, but since the Revit API doesn't provide access to Room openings, we had a very hard time getting it to work.
That's why I opened an idea on Revit Ideas to get around this issue. With your support we already have 81 votes, but it hasn't been enough to make them consider the idea for an official review. If you haven't voted already and you think this would be a helpful feature, take a moment and vote so we can have it implemented at some point in the future.
2. There is no way (that I know of) to export only part of the model. The way gbXML export is set up in Revit, the whole model is exported at once. As a result, if you have a huge model with 100 rooms and you want to get just one of them into Honeybee using this component, you have to export the whole model, which can take some time, and then import it all back into Grasshopper. To partially address this issue, I added an input to the component that lets you supply a list of names for the rooms you want loaded into Grasshopper. You can use the name of the room/space in Revit as an input for the component; the sketch below shows one way to pull those names out of the gbXML file.
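If you want to see which names are available without opening Revit, the gbXML file itself is plain XML. A small sketch, assuming the standard gbXML schema namespace and that each Space element carries a Name child (which is how Revit typically writes it):

```python
import xml.etree.ElementTree as ET

GBXML_NS = {"gb": "http://www.gbxml.org/schema"}

def list_space_names(gbxml_path):
    """Return the space names in a gbXML file, to feed the component's
    room-name filter input."""
    root = ET.parse(gbxml_path).getroot()
    names = []
    for space in root.iter("{http://www.gbxml.org/schema}Space"):
        name = space.find("gb:Name", GBXML_NS)
        if name is not None and name.text:
            names.append(name.text)
    return names
```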
3. The component doesn't import adjacencies, loads, schedules, or HVAC systems. I wasn't able to export a gbXML file from Revit with any of this data except the adjacencies, but even if you can, the component currently only imports geometry and constructions. I hope we get the API access described in item 1 so that we don't have to use the XML-file approach at all, but if that takes a very long time, then we will add these features to the component.
Happy 2017!
Mostapha…