whole design intent, but this is what Inventor is good at. The way it packages bits of 'scripted' components into 'little models' that can be stored and re-assembled is central to the way MCAD works.
The Inventor model shown is almost 5 years old. We don't model like that any more; however, it still gives a good idea of general MCAD modeling approaches.
iParts is useful in certain situations, and it could have been useful in the above model; its usefulness is largely a function of the number of variants/configurations.
So much is scripted in GH, maybe it should also be possible to script/define/constrain/assist the placement/gluing of the results?
...
Starting point: I think we are talking at cross purposes. AFAIK, the solving sequence of GH's scripted components is fixed. It won't do circular dependencies... without a fight. The inter-component dependencies are not 'managed' the way constraint solvers manage them in MCAD apps.
Components and assemblies are individual files in MCAD.
Placement of these within assemblies in MCAD is a product of matrix transforms and persistent constraints. There is no bi-directional link; the link is unidirectional (downflow only) because of the use of proxies.
Consequently, scripting the placement of components is irrelevant in GH, unless you decide that each component needs to be contained in its own separate file.
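For what it's worth, within a single GH definition the equivalent of "placement" is just a transform applied per instance rather than a persistent constraint. A minimal GHPython sketch (the input names part_geo, source_plane and target_planes are hypothetical, not part of any existing definition):

```python
# Minimal GHPython sketch: "placing" a component inside one GH definition
# is a transform per instance, not a constraint that gets maintained.
# Assumed inputs (hypothetical names):
#   part_geo      - the component geometry (e.g. a Brep), modelled at a local origin
#   source_plane  - the reference plane the part was modelled on
#   target_planes - list of planes where instances should end up
import Rhino.Geometry as rg

placed = []
for target in target_planes:
    xform = rg.Transform.PlaneToPlane(source_plane, target)
    instance = part_geo.Duplicate()   # keep the original untouched
    instance.Transform(xform)
    placed.append(instance)

a = placed  # GH output: if the target planes move upstream, the instances simply re-solve
```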
This also brings up the point that generating components and assemblies in MCAD is not as straightforward. With iParts and iAssemblies, each configuration needs to be generated as a "child" (an individual file has to be created for each child) before those children can be used elsewhere.
You can see the dilemma: if you generate 100 parts and then realize you only need 20, you've created 80 extra parts you have no need for, generating wasteful data that may cause file-management issues later on.
GH remains in a transient world, and when you decide to bake geometry (if you need to at all), you can do that in one Rhino file, and save it as the state of the design at that given moment. Very convenient for design, though unacceptable for most non-digital manufacturing methods, which greatly limits Rhino's use for manufacturing unless you combine it with an MCAD app.
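For illustration, a minimal GHPython sketch of that bake-as-a-snapshot step (the breps and bake inputs and the object name are assumptions, not a prescribed workflow):

```python
# Bake the current state of the definition into the open Rhino file.
# Assumes a 'breps' input (list of Breps to bake) and a 'bake' boolean toggle.
import Rhino

if bake:
    doc = Rhino.RhinoDoc.ActiveDoc
    attr = Rhino.DocObjects.ObjectAttributes()
    attr.Name = "design_state"           # hypothetical snapshot label
    for b in breps:
        doc.Objects.AddBrep(b, attr)     # bake into the active Rhino document
    doc.Views.Redraw()
```

Saving that Rhino file then captures the design at that moment, which is the "snapshot" behaviour described above.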
One of the reasons why the distributed file approach makes perfect sense in MCAD, is that in industry you deal with a finite set of objects. Generative tools are usually not a requirement. Most mechanical engineers, product engineers and machinists would never have any use for that.
The other thing that MCAD apps like Inventor have is the 'structured' interface that presents all that setting-out information (the coordinate systems, work planes, parameters etc.) concisely in the 'history tree'. This translates into user speed. GH's canvas is a bit more freeform. I suppose the info is all there and linked, so a bit of re-jigging is easy. Also, see how T-Flex can even embed sliders and other parameter input boxes into the model itself. Pretty handy/fast to understand, which also means more speed.
True. As long as you keep the browser pane/specification tree organized and easy to query.
:)
Would love to understand what you did by sketching.
I'll start by showing what was done years ago in the Inventor model, and then share with you what I did in GH, but in another post.
Let's use one of the beams as an example:
We can isolate this component for clarity.
Notice that I've highlighted the sectional sketch with dimensions, and the point of reference, which is in relation to the CL of the column which the beam bears on. The orientation and location of the beam is already set by underlying geometry.
Here's a perspective view of the same:
The extent of the beam was also driven by reference geometry, 2 planes offset from the beam's XY plane, driven by parameters from another underlying file which serves as a parameter container:
Reference axes and points are present for all other components, here are some of them:
It starts getting cluttered if you see the reference planes as well:
As I mentioned earlier, over time we've found better ways to define and associate geometry and parameters and to manage design change, improving the efficiency of parametric models. But this model is a fair representation of a basic modeling approach, and since an Inventor-GH comparison is like comparing apples and oranges anyway, this model can be used to understand the differences and similarities, for those interested.
I haven't even gotten to your latest post yet, I will eventually.…
Added by Santiago Diaz at 10:36am on February 26, 2011
ive collaborative environment.
TYPE: Course module and Workshop
The event is open for anybody interested from all the fields of design, including: architecture, interior design, furniture design, product design, fashion design, scenography, and engineering.
1. COURSE MODULE (20-23 April 2014) - optional
+ type: 3-day intensive course covering basic knowledge of parametric design (LEVEL 1)
+ software: Rhinoceros & Grasshopper
+ plugins: Kangaroo, Weaver Bird, Lunch box, Ghowl, Geco
+ achievements:
- becoming acquainted with the components & the concept of Generative Design
- understanding the strategies in Algorithmic Design
- how to easily insert simple mathematical equations into the project to gain more control
- how to utilize the proper plugins with respect to the nature of the project
- interacting with different analysis platforms such as Ecotect & remote controller
- solving several exercises at different scales (2D-3D) during each phase of the workshop
2. WORKSHOP (23-27 April 2014)
A 5 day Design-Based Research Workshop exploring new techniques in Digital Architecture/Fabrication, with a specific focus on the use of generative systems and parametric modeling as tools for creative expression.
Our ultimate goal is to increase the efficiency of utilizing digital tools in parallel with the geometric performance of the primitive design agent.
+ + CONCEPT
Fashion and Architecture are both based on basic life necessities – clothing and shelter.
However, they are also forms of self-expression – for both creators and consumers.
Both fashion and architecture affect our emotional being in many ways.
The agenda of this workshop is to investigate the overlap between these two areas of design, art & fashion.
Fashion and architecture express ideas of personal, social and cultural identity, reflecting the concerns of the user and the ambition of the age. Their relationship is a symbiotic one and throughout history, clothing and buildings have echoed each other in form and appearance. This only seems natural as they not only share the primary function of providing shelter and protection for the body, but also because they both create space and volume out of flat, two-dimensional materials.
While they have much in common, they are also intrinsically different: both address the human scale, but the proportions, sizes and shapes differ enormously.
+ + + OBJECTIVES
So far, architects have used techniques such as folding and bending to create space, structural roofs and other structural shapes.
The agenda of this workshop goes further, investigating algorithmic thinking through generative tools integrated into design.
The challenge is to create a bridge between these two areas of design, architecture and fashion, which perform at two opposite scales.
+ + + + TECHNICAL BRIEF
In the early stages physical models and low-tech strategies will be used, allowing the participants to gain a greater understanding of materials, fabrication and assembly methods as well as simple, yet pragmatic structural solutions.
Later in the workshop these strategies will be digitized and elaborated using software tools such as Rhinoceros and the algorithmic plug-in Grasshopper.…
ort and export from the images below and also from the HELP file of DB in the attachments (Page 71: Importing Geometric Data; Page 78-80: Import 3-D CAD Data). In their HELP file, they mention "importing geometric data".
However, regarding the input of schedules, loads, constructions, etc., DB normally uses "Components" and "Templates" (Page 29: Templates And Components; Page 591: Templates; Page 533: Components). "Templates" are databases of typical generic data, including Activity templates, Construction templates, Glazing templates, Facade templates, HVAC templates, Location templates, etc. "Components" are databases of individual data items (e.g. a construction type, material, window pane).
Both "Component " and "Template" are allowed to be imported and exported by using "Import / Export library data" command (.ddf format - DB Database File; Page 734: Import Components/Templates, Export Components/Templates). DB also allows us to build up our own libraries of templates and components (Page 731: Library Management; Page 733: Template Library Management).
In order to import both geometric information and other information related to schedules, loads, constructions, etc. from GH to DB, we came up with the following two ways:
1. GH(HB+GB) --> gbXML (both geometric and "Component" and "Template" information) --> DB
This is the way we prefer most. We did see information related to schedules, loads and constructions encoded in the gbXML file generated by GB, but we still do not know why DB did not take this information (I also mentioned this in Q6 within the gh file). We assume this might be because the gbXML file we create encodes the schedules based on a different template/schema than the one DB expects. We have also posted this question to the DB forum for help.
(http://www.designbuilder.co.uk/component/option,com_forum/Itemid,25/page,viewtopic/p,13755/#13755)
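As a first debugging step (independent of DB), a small plain-Python check like the one below could confirm whether the gbXML that GB writes actually contains Construction/Schedule elements at all. The file path is hypothetical and the element names follow the gbXML schema as I understand it, so please adjust as needed:

```python
# Sanity check: count non-geometric gbXML elements in the exported file.
import xml.etree.ElementTree as ET

path = r"C:\temp\model.gbxml"                       # hypothetical path to the GB export
ns = {"gb": "http://www.gbxml.org/schema"}          # standard gbXML namespace

root = ET.parse(path).getroot()
for tag in ("Construction", "Material", "Schedule", "WeekSchedule", "DaySchedule"):
    found = root.findall(".//gb:" + tag, ns)
    print("{}: {} element(s)".format(tag, len(found)))
```

If those counts are zero, the problem is on the export side; if they are non-zero, DB is ignoring data that is present in the file.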
2. GH(HB+GB) --> gbXML (geometric information only) + .ddf ("Component" and "Template" information only) --> DB
If the first way doesn't work and DB only takes geometric information from the gbXML, then we might think of the other way - generating the .ddf files from GH(HB+GB) to pass the schedule, load and construction information to DB.
I was wondering if it is feasible for HB and GB to have this function? And what is your suggestion to achieve this?
In addition, we notice that DB can export XML files (not gbXML), so we are trying to figure out if DB also accepts / reads the XML file. If so, we might be able to convert the gbXML (with both geometric and schedule information) to XML. What do you think about that?
Thank you again for all your help!
Best,
Ding
Attachments: DB import, DB export, Template libraries, Component libraries
…
thing that MicroStation does (or doesn't). The eternal debate between us is that they focus on the so-called BIM aspect of things (and obviously on interoperability matters - that said, IFC2*4 is "implemented" in certain Bentley verticals like BA and others) whilst I'm after assembly/component puzzles (and on that matter ... MS ... hmm ... to put it politely, is not exactly CATIA and/or NX, he he).
On the other hand, this paranoid obsession with Level/Layer-driven CAD (I hate it) draws a thick red line between CAD and MCAD - because the most intelligent importer can't emulate the way that Siemens NX/CATIA classifies objects - and without control, power means nothing.
On the other hand, MicroStation V9 (...soon) has interesting scripting capabilities (think Modo rather than Generative Components) ... meaning that Grasshopper could work there in a rather nice way. I think I must talk to Ray about that (he recently ditched the ancient legacy MS render engine in favor of the Luxology/Nexus engine). Ray is still negative about buying Act3D, mind (hope that you know the mother of visual scripting - the Quest3D VR thing).
On the other hand - within the broad AEC aspect - things these days are different (especially in fast developing countries the likes of UAE, Saudi Arabia, certain ex USSR "democracies" etc etc). Studies are outsourced even at Preliminary Design stage to various sub-contractors (they undertake the Study completion per discipline as well). This means that N separate groups doing M aspects of the whole ... meaning entropy^(N*M) - that's chaos in plain English.
With this in mind I'm quite (a lot) skeptical about the practical meaning of the whole exchange thing in AEC - at least with regard to the countries mentioned (not to mention that several portions of a modern AEC project are made via MCAD apps - chaos^chaos).
I'll be back with more focused issues on that matter.
But the big question is: Grasshopper or Generative Components? Well... let's talk serious SS bikes instead: think a Ducati 1198 and a BMW S1000RR (I have them both): which is "best"? The thing is that not always the best bunny is the fastest bunny, and not always the fastest bunny is the best bunny.
Cheers,
Peter
…
ty to work in a new and exciting space, where design, art, technology and fashion meet.
If you guys are looking for a full- or part-time job, or know an expert who is, we're happy to meet with him/her. We're located in the Lower East Side, New York.
What the person will be doing:
- Provide technical vision for product and infrastructure features
- Work with Marketing/Product Management to enhance the user experience
- Develop (with our team) our e-commerce customization platform
- Manage our real time 3D modeling platform
- Mentor 3D modelers and developers, define and document development methods, and share best practices
- Review and recommend improvements to product architecture
What we require:
- BA/BS/BARCH degree or CS/EE/Engineering degree preferred
- Extensive 3D modeling, Rhino and Grasshopper experience
- Experience building online computer games
- Experience creating natural and fractal patterns and forms in 3D
- UV texture mapping (bitmap/texture mapping)
- Experience managing a development team in projects with tight schedules
- Architecture, programming, scripting, media or fashion industry experience preferred
- Experience implementing web interfaces using XHTML, CSS, Javascript, and AJAX
- Experience in recommendation engines and algorithms
- Interest in working in an early stage fast-paced environment…
edit 29/04/14 - Here is a new collection of more than 80 example files, organized by category:
KangarooExamples.zip
This zip is the most up to date collection of examples at the moment, and collects t
ers and researchers, programmers and artists, professionals and academics who come together for 4 days of intense collaboration, development, and design.
The sg2012 Workshop will be organised around Clusters. Clusters are hubs of expertise. They comprise people, knowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 Challenge Material Intensities.
Applicants to the sg2012 Workshop will select their preferred cluster from the following:
Beyond Mechanics
Micro Synergetics
Composite Territories
Ceramics 2.0
Material Conflicts
Transgranular Perspiration
Reactive Acoustic Environments
Form Follows Flow
Bioresponsive Building Envelopes
Gridshell Digital Tectonics
More information about the Workshop and Clusters can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=116&Itemid=131
The application process will close on January 15th, 2012.
Full Fee $1500
Reduced Fee $750
Scholarship Fee $350
Fees include attendance to both the workshop and conference from March 19th-24th.
Reduced Fee and Scholarships are available only for Academics, Students and Young Practitioners, and are awarded during a competitive peer review process.
sg2012 takes place from 19-24 March 2012 at EMPAC (http://empac.rpi.edu/) and is hosted by Rensselaer Polytechnic Institute in Troy, upstate New York USA. The Workshop and Conference will be a gathering of the global community of innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four day Workshop 19-22 March, and a public conference beginning with Talkshop 23 March, followed by a Symposium 24 March. The event follows the format of the highly successful preceding events sg2010 Barcelona and sg2011 Copenhagen.
sg2012 Challenge Material Intensities
Simulation, Energy, Environment
Imagine the design space of architecture was no longer at the scale of rooms, walls and atria, but that of cells, grains and vapour droplets. Rather than the flow of people, services, or construction schedules, the focus becomes the flow of light, vapour, molecular vibrations and growth schedules: design from the inside out.
The sg2012 challenge, Material Intensities, is intended to dissolve our notion of the built environment as inert constructions enclosing physically sealed spaces. Spaces and boundaries are abundant with vibration, fluctuating intensities, shifting gradients and flows. The materials that define them are in a continual state of becoming: a dance of energy and information. Material potential is defined by multiple properties: acoustical, chemical, electrical, environmental, magnetic, manufacturing, mechanical, optical, radiological, sensorial, and thermal.
The challenge for sg2012 Material Intensities is to consider material economy when creating environments, micro-climates and contexts congenial for social interaction, activities and organisation. This challenge calls for design innovation and dialogue between disciplines and responsibilities.
sg2010 Working Prototypes strove to emancipate digital design from the hard drive by moving from the virtual to the actual in wrestling with the tangible world of physical fabrication. sg2011 Building the Invisible focused on informing digital design with real world data. sg2012 Material Intensities strives to energise our digital prototypes and infuse them with material behaviour. They have the potential to become rich simulations informed by the material dynamics, chemical composition, energy flows, force fields and environmental conditions that feed back into the design process.
More information can be found at http://www.smartgeometry.org
Follow us on Twitter at http://twitter.com/smartgeometry…
Added by Shane Burger at 12:29pm on December 13, 2011
Introduction to Grasshopper Videos by David Rutten.
Wondering how to get started with Grasshopper? Look no further. Spend some time with the creator of Grasshopper, David Rutten, to learn the
es has guided me in a - what I once thought - specific path within architecture, but recent discoveries (like the Grasshopper community etc.) have taught me that the field of digital and parametric architecture is, so to speak, alive and kicking. This is also the main subject I would like to write my thesis about. It is, however, mainly the subject and defining its boundaries – what do I really want to explore and research? – that is the most difficult factor at this time.
A concrete idea is non-existent, and my current visions will probably be redirected when I have a first meeting with the promotors in February. Moreover, I know it is not possible to write a thesis at the institute in Antwerp on just any subject in the world of digital architecture. Understandably so: it's a small world that does not always result in realised projects, but rather in impressive imagery. At this moment, however, I am thinking of two possible research fields to focus on.
In a first option, the focus might lie on how digital design tools can be used to bring a certain aspect of interactivity to building facades. Such interactivity can occur both in the design phase and throughout the use of the building. In the first scenario, in which the interactivity occurs during design, I would focus on how the designer can shape a building's outer appearance as a function of environmental parameters: obstacles, elements that block sunlight from entering the building, visually important landmarks, etc. It should be noted, however, that the focus will mostly lie on the design element, and less on energy efficiency and sustainability. Tools to be researched would include Grasshopper, Rhino Scripting, Processing and ParaCloud.
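Purely as an illustration of the kind of rule I have in mind (a toy sketch, not any specific tool's API), a per-panel opening ratio could be driven by how directly each facade panel faces an assumed sun vector; all names and values below are hypothetical:

```python
# Toy example: smaller openings where a panel faces the sun more directly.
def opening_ratio(normal, sun, min_open=0.1, max_open=0.9):
    """normal and sun are (x, y, z) tuples; returns an opening ratio in [min_open, max_open]."""
    nx, ny, nz = normal
    sx, sy, sz = sun
    dot = nx * sx + ny * sy + nz * sz
    exposure = max(0.0, dot)             # 0 = facing away from the sun, ~1 = facing it
    return max_open - (max_open - min_open) * exposure

sun_vector = (0.5, -0.7, 0.5)            # example sun direction
panel_normals = [(0, -1, 0), (1, 0, 0), (0, 1, 0)]
print([round(opening_ratio(n, sun_vector), 2) for n in panel_normals])
```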
A second possible approach could be categorized under both Swarm Intelligence and Generative Design and might study how the aforementioned digital techniques might be implemented in the new urbanism. We notably see more (innovative) interventions in which the design and planning is heavily influenced by movement patterns and morphogenetic parameters and functions. Based on the outcome of these scripted techniques, designers tend to work towards a proposal which answers a certain urbanistic issue.
All additional insights, guidelines, tips, comments are more than welcome in order to help me define the scope of my thesis subject. I must admit I am pretty new to this digital design world (it is not actively promoted at my home university, but it is promoted at the university where I am studying for one year now) and thus have limited experience at the time of writing.
Please also feel free to check out the blog post concerning this topic, which is a little more elaborate: http://nielswouters.be/thesis-digital-design-english/
Thanks for all your help!
…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a “round-robin” approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
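To make the round-robin idea concrete, here is a rough plain-Python sketch (node names, file names and the shared folder are assumptions); each node would then pull its assigned batch files from the server and run them locally:

```python
# Round-robin assignment of per-iteration batch files to compute nodes.
import itertools

nodes = ["NODE-01", "NODE-02", "NODE-03"]                    # available machines (hypothetical)
batch_files = ["iter_{:03d}.bat".format(i) for i in range(100)]  # one batch file per iteration

assignments = {n: [] for n in nodes}
for node, bat in zip(itertools.cycle(nodes), batch_files):   # cycle through nodes until all handed out
    assignments[node].append(bat)

for node, bats in assignments.items():
    print("{} -> {} runs".format(node, len(bats)))
    # each node would read its list from the shared server location and execute the batch files
```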
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
1. Initiate
- Parametrically generate geometry
- Assign input values, materials, etc.
- Generate Radiance/E+ batch files for all iterations
2. Calculate
- Run separate Radiance/E+ calculations in parallel via network clusters; each run is a unique iteration
- Save all temp files to a single location on the server
3. Post-processing
- Run a GH script from a single computer to translate .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.)
- Collect final data in a single location (Excel document) to be read by Design Explorer or Pollination
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
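As a sketch of that final "collect data in a single location" step, something like the following plain-Python script could merge per-iteration summary CSVs from the shared folder into one table for Design Explorer or Pollination. The folder layout and column names are assumptions, not Honeybee output:

```python
# Merge per-iteration summary CSVs into one combined table.
import csv
import glob
import os

results_dir = r"\\server\share\iterations"                 # hypothetical shared location
out_path = os.path.join(results_dir, "combined_results.csv")

rows = []
for summary in sorted(glob.glob(os.path.join(results_dir, "iter_*", "summary.csv"))):
    with open(summary, newline="") as f:
        for row in csv.DictReader(f):
            # tag each row with the iteration folder it came from
            row["iteration"] = os.path.basename(os.path.dirname(summary))
            rows.append(row)

if rows:
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=sorted(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```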
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-end GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…