Daily courses (basic level) dedicated to 4 different topics: Rhinoceros - February 8; Grasshopper - February 16; RhinoCAM - March 8; 3D printing - March 9
tutors: Amleto Picerno Ceraso, Francesca Viglione, Gianpiero Picerno Ceraso.
. Arduino for interaction (basic-intermediate level) - March 15, 16. The workshop starts from the basics of Arduino programming and works up to the interaction between a physical object and an information input. tutor: Gianpiero Picerno Ceraso
. Grasshopper advanced: "Complex surface" (intermediate level) - March 18, 19, 20. The workshop aims at developing complex surfaces that respond to information coming from the environment. The course starts from the basics of Grasshopper and works up to the possible fabrication of an object using digital fabrication techniques. tutor: Amleto Picerno Ceraso. NB: basic knowledge of Grasshopper is required
. Emotional design (advanced level) - March 23, 24, 25. The workshop will focus on the acquisition, recording and manipulation of such data/emotions through Grasshopper, and on their use to control the design parameters of specific objects, which, being customized with the user's specific emotions, will thus become instances and tactile memories of precise experiences. tutor: Andrea Graziano. NB: basic knowledge of Grasshopper is required
. Fabricated fashion (advanced level) - March 26, 27, 28, 29, 30. The workshop focuses on digital design techniques applied to fashion. tutors: Luis and Elizabeth Fraguada. NB: basic knowledge of Grasshopper is required
. Blender (advanced level) - May 16, 17, 18. tutor: Andrea Graziano
. Interaction design: Arduino + Grasshopper (intermediate level) - May 2, 3, 4. The course aims to investigate interaction processes between people and the environments they live in through responsive design. NB: basic knowledge of Grasshopper and Arduino is required. tutors: Amleto Picerno Ceraso of the Mediterranean FabLab and Antonio Grillo of FabLab Napoli.
Info on costs: http://www.medaarch.com/2765-il-nuovo-calendario-attivita-firmato-medaarch/
…
any moment in the design of a 3D model, and the model readapts without the need to redraw the altered area.
Another key characteristic of parametric work is that it allows us to automate work and design processes. This means that, with simple processes, we can generate complex geometries that are always justified by the parameters we define; in a certain sense, this removes arbitrariness from the design and arms us with arguments when making project decisions. In addition, textures and patterns can be generated randomly or made to vary according to attractors.
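As a rough illustration of the attractor idea mentioned above (my own sketch, not part of the workshop material), the following Python snippet builds a grid of circles whose radii shrink with distance to an assumed attractor point; the coordinates and radius range are arbitrary.

```python
# Hypothetical attractor-driven pattern: a grid of circle centres whose
# radius decreases with distance to an attractor point.
import math

attractor = (5.0, 5.0)               # assumed attractor position
max_radius, min_radius = 0.45, 0.05  # assumed radius range
max_d = math.hypot(10.0, 10.0)       # largest possible distance on the grid

circles = []
for i in range(11):
    for j in range(11):
        x, y = float(i), float(j)
        d = math.hypot(x - attractor[0], y - attractor[1])
        # remap distance to a radius: close to the attractor -> larger circle
        r = max_radius - (max_radius - min_radius) * (d / max_d)
        circles.append(((x, y), r))

print(circles[:3])  # centre/radius pairs a CAD tool could draw
```

The same distance-remapping logic, built from native components or inside a scripting component, is essentially how attractor-driven patterns are typically set up in Grasshopper.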
After completing this workshop, the student will be able to develop their own grammars, with the confidence that comes from understanding the basic programming concepts on which the whole Grasshopper workflow rests.
Grasshopper opens up a whole world of possibilities in design and digital fabrication.
WHO IT IS FOR
The workshop is aimed at students and professionals of architecture, interior design, engineering, product design, industrial design and, in general, creative profiles and artistic disciplines who want to get started in the world of parametric design.
Prior knowledge of Rhinoceros (basic level) is recommended, since some concepts will be useful for following the workshop more easily.
…
mple problem.
Imagine you're dividing a space (100m²) into two rooms, one of which (room A) should be 40m², the other (room B) 60m². Now it follows that the sum of both rooms must always add up to 100m². And if you make one room smaller by 5m², the other one gets bigger by 5m².
The simplest expression that would convert room areas into a fitness value is, I think:
Abs(A - 40) + Abs(B - 60)
or, in English, the sum total of the discrepancies between the actual areas and the desired areas.
If the rooms are both 50m² we get a fitness of:
Abs(50-40) + Abs(50-60) = 20
If room A equals 10m² and room B equals 90m², we get:
Abs(10-40) + Abs(90-60) = 60
If both rooms are exactly right, we get:
Abs(40-40) + Abs(60-60) = 0
So the point here is to minimize fitness, and once the fitness has reached zero we know we're home free.
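For readers who prefer code, here is a minimal Python sketch of that linear fitness function (my own illustration; the 40m² and 60m² targets follow the example above):

```python
# Linear fitness: sum of absolute discrepancies between actual and desired areas.
def fitness_linear(a, b, target_a=40.0, target_b=60.0):
    return abs(a - target_a) + abs(b - target_b)

print(fitness_linear(50, 50))  # 20
print(fitness_linear(10, 90))  # 60
print(fitness_linear(40, 60))  # 0 -> home free
```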
But this is a very straightforward case. What if we're trying to optimize a problem, while knowing there's no way on Earth we'll be able to solve all constraints? This is after all what Evolutionary solvers are good at. So what if the problem is not as clear cut?
This time try to imagine we want every room to be 50m², but all the rooms are too small. Let's write down three cases like before:
(Room A = 30m², Room B = 40m²)
Abs(30 - 50) + Abs(40 - 50) = 30
(Room A = 35m², Room B = 35m²)
Abs(35 - 50) + Abs(35 - 50) = 30
(Room A = 25m², Room B = 45m²)
Abs(25 - 50) + Abs(45 - 50) = 30
Holy Crap! They're all the same! Well this is no good, it's like three bald men fighting over a comb. Even though all solutions fail to meet constraints, they certainly shouldn't all be equally fit. Let's assume for the time being we'd rather have both rooms fail to meet demands in equal amounts instead of one room being ok-ish and the other being way off. How can we add this assumption to the fitness function?
Basically we need to exaggerate large departures from the ideal and trivialize small departures. Our naive fitness function was linear; our new and improved fitness function must be non-linear. The simplest non-linear function is the parabola (x²). So let's see where that gets us.
Abs(30 - 50)² + Abs(40 - 50)² = 500
Abs(35 - 50)² + Abs(35 - 50)² = 450
Abs(25 - 50)² + Abs(45 - 50)² = 650
Phew... The case where both rooms fail to meet demands equally has the lowest value (and thus the highest fitness), whereas the most extreme discrepancy has the highest value (and thus the lowest fitness).
This approach is called Least Squares fitting and it's one of the most common fitting algorithms in statistics.
Whether you decide to weigh your competing factors equally or differently, and whether you decide to treat deviations linearly or non-linearly is entirely up to you. It requires you have a decent understanding of the problem at hand and also a decent understanding of the mathematical behaviour of the fitness function.
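As a sketch only, the squared (least-squares) variant might look like this in Python; the optional weights are my addition, to show one way of weighing competing factors differently, and are not part of the example above.

```python
# Least-squares style fitness: weighted sum of squared deviations (lower is fitter).
def fitness_squared(rooms, targets, weights=None):
    weights = weights or [1.0] * len(rooms)
    return sum(w * (r - t) ** 2 for r, t, w in zip(rooms, targets, weights))

print(fitness_squared([30, 40], [50, 50]))  # 500
print(fitness_squared([35, 35], [50, 50]))  # 450  <- fittest of the three
print(fitness_squared([25, 45], [50, 50]))  # 650
```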
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 6:16am on February 25, 2011
about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
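To make that concrete, here is a toy sketch (my own, with made-up names and values): a plain Python function stands in for a cluster, and a small test exercises it across a cross-section of input states instead of the single state it was drawn for.

```python
# Stand-in for a clustered definition's output: panels along a length.
def panel_count(length, spacing):
    if spacing <= 0:
        raise ValueError("spacing must be positive")
    return int(length // spacing)

def test_panel_count():
    # multiple input states, not just the one the definition was sketched from
    for length in (1.0, 10.0, 250.0):
        for spacing in (0.5, 1.0, 3.0):
            n = panel_count(length, spacing)
            assert n >= 0
            assert n * spacing <= length  # panels must fit within the length

test_panel_count()
print("all input states passed")
```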
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen which compile your project while you type it. I know that threading within components and at the RhinoCommon level is a freaking hard problem that has been discussed at length already (although when it's finished, 5-10 years from now, it will be very cool).
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver were a little more atomic, like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file could be exploited (not within components, but rather solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
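Conceptually (this is only my illustration, not how Grasshopper's solver actually works), the idea might look something like the Python sketch below: the definition is treated as a dependency graph of hypothetical components, and any component whose upstream results are already available gets solved, with independent branches running in parallel.

```python
# Conceptual job-server sketch: solve a component graph level by level,
# running non-interdependent components in parallel.
from concurrent.futures import ThreadPoolExecutor

# hypothetical definition: component -> upstream components it depends on
dependencies = {
    "slider": [],
    "curveA": ["slider"],
    "curveB": ["slider"],
    "loftA": ["curveA"],
    "loftB": ["curveB"],
    "merge": ["loftA", "loftB"],
}

def solve(name):
    return f"result of {name}"  # stand-in for actually computing a component

results = {}
pending = dict(dependencies)
with ThreadPoolExecutor() as pool:
    while pending:
        # everything whose upstream results exist can be solved simultaneously
        ready = [c for c, deps in pending.items() if all(d in results for d in deps)]
        for comp, res in zip(ready, pool.map(solve, ready)):
            results[comp] = res
        for comp in ready:
            del pending[comp]

print(results)
```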
I was imagining a couple of scenarios:
a) Writing a parallel module: the solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point, you could connect up to some of the already-calculated parameters and write something in parallel to the main trunk, which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze out a section of downstream code and make modifications so you can observe the effects more quickly, then unfreeze a bit more and repeat, etc., until you're done, and then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just digest this need? Since the vast majority of the time that GH spends solving is on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if it took 15 seconds or 15 hours for the final run.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK; from a user's point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: the simple percentage version for unpredictable code and, for those of us able to calculate the time taken by our algorithm based on the number of input parameters, a countdown in seconds or minutes or whatever.
I think a good place to start with these sort of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
isseminated at the firms I've worked at:
1. Always write your scripts as though someone else is going to have to use and debug them without any instruction from you. This is kind of an overall governing principle that drives a lot of the other best practices.
2. Structure your definitions left to right. This way it is clear what is dependent on what, what executes in what order, and makes it easy to use the "Moses" tool (alt-click and drag on the canvas to spread components apart) to insert intermediate functionality to an already-existing definition.
3. For a given functional group (a set of components that does a well-defined thing) keep all the data going in to the group as labeled parameters on the left, and everything going out of the group as labeled parameters on the right. This is the intent of my "Best Practicizer" tool in Metahopper (now a menu item instead of a component). In essence, you're treating each group as though it's about to be clustered - you've defined what it does, what its inputs are, and what its outputs are. This makes troubleshooting much easier - if something is going wrong, you can easily isolate which group is causing the trouble by looking at inputs and outputs.
3b. If you're grabbing some value or data (e.g. "STEP COUNT") many times from elsewhere in the definition, don't make a bunch of long wires that connect all the way back to the original source - grab that data into ONE labeled parameter and then connect all your inputs that need it to that - makes for one long wire instead of 20.
4. Annotate, annotate, annotate. Label your params (and if you're in icon mode, switch them manually to text). Label your groups. Use scribbles to mark larger regions of functionality. Use panels for "instructions" wherever it might not be clear how someone is supposed to use your tools.
Avoid "wireless" (Hidden Wires) connections. If you MUST use them, make sure you create params at both ends with matching names so it's clear what the data represents and where it comes from.
6. Cluster where possible. It's extremely helpful to isolate functional groups into clusters - it makes debugging faster and easier, since you don't have to wait for the whole definition to recompute when making small edits to the inside of a cluster, and it sets you up well to create code that can be re-used later on. However, don't take whole definitions and cluster them. As a rule of thumb, if a cluster has more than ~10 inputs, it should probably be broken into multiple clusters. There is a slight performance impact when clustering, because unlike an un-clustered group of components, which only executes the parts of the definition where something has changed, any time ANY input to a cluster changes, the WHOLE cluster re-computes. Because of this, a cluster shouldn't generally wrap any groups of components that are not related / don't connect with each other.
7. Color code your groups. Many firms develop a standard around group coloring so that it's easy to understand what parts of a definition are doing what kind of task. For instance, at Woods Bagot where I work, we have different colors for component groups that highlight inputs, outputs, rhino references, baking, and visualization. You may find that a different set is useful to you, but having a consistent standard can improve legibility.
That's my 2c. At the end of the day, everyone works a little bit differently, and that's unavoidable (and not even a bad thing!). As long as you keep #1 in mind, all the rest will follow.
…
nowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 Challenge Material Intensities. The Call for Clusters is now open to proposals which respond in innovative ways to this year's challenge.
Deadline: September 19 2011
More information can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=129&Itemid=146
sg2012 takes place from 19-24 March 2012 at EMPAC (http://empac.rpi.edu/) and is hosted by Rensselaer Polytechnic Institute in Troy, upstate New York USA. The Workshop and Conference will be a gathering of the global community of innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four day Workshop 19-22 March, and a public conference beginning with Talkshop 23 March, followed by a Symposium 24 March. The event follows the format of the highly successful preceding events sg2010 Barcelona and sg2011 Copenhagen.
sg2012 Challenge Material Intensities
Simulation, Energy, Environment
Imagine the design space of architecture was no longer at the scale of rooms, walls and atria, but that of cells, grains and vapour droplets. Rather than the flow of people, services, or construction schedules, the focus becomes the flow of light, vapour, molecular vibrations and growth schedules: design from the inside out.
The sg2012 challenge, Material Intensities, is intended to dissolve our notion of the built environment as inert constructions enclosing physically sealed spaces. Spaces and boundaries are abundant with vibration, fluctuating intensities, shifting gradients and flows. The materials that define them are in a continual state of becoming: a dance of energy and information.
Material potential is defined by multiple properties: acoustical, chemical, electrical, environmental, magnetic, manufacturing, mechanical, optical, radiological, sensorial, and thermal. The challenge for sg2012 Material Intensities is to consider material economy when creating environments, micro-climates and contexts congenial for social interaction, activities and organisation. This challenge calls for design innovation and dialogue between disciplines and responsibilities.
sg2010 Working Prototypes strove to emancipate digital design from the hard drive by moving from the virtual to the actual in wrestling with the tangible world of physical fabrication. sg2011 Building the Invisible focused on informing digital design with real world data. sg2012 Material Intensities strives to energise our digital prototypes and infuse them with material behaviour. They have the potential to become rich simulations informed by the material dynamics, chemical composition, energy flows, force fields and environmental conditions that feed back into the design process.
More information can be found at http://www.smartgeometry.org…
t. So here we go!
1. Honeybee is brown and not yellow [stupid!]...
As you probably remember, the Honeybee logo was initially yellow because of my ignorance about honeybees. With the help of our honeybee expert, Michalina, the color is now corrected. I promised her I would update everyone about this. Below are photos of her working on the Honeybee logo and the results of her study.
If you think I'm exaggerating by calling her a honeybee expert, you had better watch this video:
Thank you, Michalina, for the great work! :) I corrected the colors; no more yellow. The only yellow arrows represent sun rays, not the honeybee!
2. Yellow or brown, W[here]TH is Honeybee?
I know. It has been a long time since I posted the initial video, and it is not fun at all to wait this long. Here is the good news: if you are following the Facebook page, you probably know that the daylighting components are almost ready.
A couple of friends from the Grasshopper and RADIANCE communities have been helping me test and debug the components. I still think/hope to release the daylighting components at some point in January, before Ladybug turns one year old.
There have been multiple changes. I finally feel that the current version of Honeybee is simple enough for non-expert users to start running initial studies and flexible enough for advanced users to run advanced studies. I will post a video soon and walk you through different components.
I think I still need more time to modify the energy simulation components, so they are not going to be part of the next release. Unfortunately, there are so many ways to set up and run a wrong energy simulation, and I really don't want to add one more GIGO app to the world of simulation; we already have enough of that. Moreover, I'm still not quite happy with the workflow. Please bear with me for a few more months and then we can all celebrate!
I recently tested the idea of connecting Grasshopper to OpenStudio through the OpenStudio API, and it worked. If nothing else, I really want to release the EnergyPlus components so I can concentrate on Grasshopper > OpenStudio development, which I personally think is the best approach.
3. What about wind analysis?
I have been asked multiple times whether Ladybug will have a component for wind studies. The short answer is YES! I have been working with the EFRI-PULSE project over the last year to develop a free and open-source, web-based CFD simulation platform for outdoor analysis.
We have made very good progress so far. Our rockstar Stefan recently presented the results of the work at the American Physical Society's 66th annual DFD meeting, and the results look pretty convincing in comparison to measured data. Here is an image from the presentation; all the credit goes to Stefan Gracik and the EFRI-PULSE project.
The project will go live at some point next year, and after that I will release Butterfly, which will let you prepare the model for the CFD simulation and send it to the EFRI-PULSE project. I haven't tried to run the simulations locally yet, but I'm considering that as a further development. Here is how the component and the logo look right now.
4. Teaching resources
It has been almost 11 months since the first public release of Ladybug. I know that I didn't do a good job of providing enough tutorials and teaching materials, and I know that I won't be able to put something comprehensive together soon.
Fortunately, Ladybug has been flying in multiple schools during the last year. Several design, engineering and consulting firms are using it, and it has been taught in several workshops. When I checked with many of you, almost everyone told me they would be happy to share their teaching materials, so I started the teaching resources page. Please share your materials on the page; they can be in any format and any language. Thanks in advance!
I hope you enjoyed/are enjoying/will enjoy the longest night of the year. Happy Yalda!
Cheers,
-Mostapha
…
DP ($$$ aside), GC, and Grasshopper. Arthur's original question is very important and the exact question (and hopefully answer) I was hoping to find on a forum.
“How to take intelligent 3D parametric generative design models (scripting, etc.) into 2D documents?" Or, deliver the 3D design for evaluation, bid, construction, etc.
I am intrigued by Jon's comments in the same thread and would like to know how I can learn more about the process (and pitfalls) of turning over 3D digital generative models to a contractor/fabricator.
Are there any established industry guidelines I could use as a reference to guide our firm through this type of uncharted territory?
Arthur's question is very reminiscent of 10 years ago, when I was frustrated with the amount of time spent on the development of a 3D model design (physical and/or virtual) only to have to wipe the table clean and start the process all over again in 2D in order to document the project for delivery. From this I jumped headfirst into BIM and Revit, vowing never to go back to unintelligent 2D line work. I am now working on Bentley software (v8i: MicroStation and Bentley Architecture), with the access and desire to venture into Generative Components. I am very intrigued by Rhino/Grasshopper, primarily because of the apparent ease of use and the available resources assisting in the learning process - something not really available with Bentley.
In hindsight, as I do my software research, I think the current use of Revit and BA (Bentley Architecture) is more of a "bridge" between the past (decades of digital 2D work, i.e. AutoCAD) and where hopefully we all will be someday in the near future (100% 3D modeling, i.e. Digital Project??). Without having the experience, it would appear that DP/CATIA (PLM software) are closer to this than any other type of software. As complicated as the industry standards are for the automobile and airline industries, I feel we (the architectural industry and others) are heading in a similar direction toward total understanding (PLM/Evidence Based Design) of a design (a whole other topic). If anything, I think the market will begin to demand it sooner or later.
Gehry (DP) article NY Times:
http://www.nytimes.com/2009/02/11/business/11gehry.html
I know these types of broad discussions (software vs. software) can be blown out of proportion on forums, but I would like to take the pulse of those who are already in the trenches (using Grasshopper, CATIA, Digital Project, Generative Components, others??) and hear your thoughts. Just as valuable would be other threads and industry articles/reviews of 3D parametric generative design software.
Thanks,
Boyd…
he picture (4).
Previously, I had a problem generating the intersections between the two directions of the beams, but a colleague helped me by extending the beams, so there is no longer a problem with the intersection lines. However, this solution generated a curl (5) at the highest vertex of the geometry, which I ignored, intending to repair it before printing; perhaps this is related to my problem with distributing the beams properly. Only when there are 19 beams does the problem not appear, but I still cannot distribute them properly.
(attached images 1-5)
I tried to present it as simply as possible by removing or marking parts of my code in the GHX file.
Thank you in advance for your help
…