make quad mesh usable with Kangaroo and with limited inputs parameters in order to simulate funicular structures like "Vaulted Willow" or "Pleated Inflation" from Marc Fornes and the Verymany.
Here is a first attempt script.
As inputs there are :
Lines_in, just lines, no duplicates; they may have Z values, but the algorithm works on a flat (XY plane) representation.
Tolerance is used to glue lines when points are closer than tolerance
Width is the half width of the “roads” going through the network
Angle is the shape of the ends of the roads, 0° means flat end, 180° a totally rounded end
Deviation is the shift that generates spikes or enables pleated geometry
N_u is the number of subdivisions along the “roads”, image above with 3 subdivisions on the roads
N_v is the number of subdivisions across the “roads”
Zbool: if false everything stays flat; if true the mesh is in 3D, which works best with Angle = 180° or -180°
For the outputs there is the topology of the network (like Sandbox).
As outputs, geometry is placed on a data tree; each branch represents a path on the road network (3 paths in the image above), which are output as Breps.
Adding a diagonal there are now 4 paths so 4 branches
The mesh M goes with F which are fixed points, anchor in Kangaroo.
U and V are lines in data trees; these will be used as springs in Kangaroo (U shown above).
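To illustrate how the Tolerance input can glue the line network together, here is a hypothetical Python sketch (not the actual script; the function names and the greedy clustering are my own, made up for illustration):

```python
# Hypothetical sketch: glue line endpoints that lie within `tolerance` of each
# other, mirroring the Tolerance input described above. Lines are
# ((x1, y1), (x2, y2)) tuples; a simple greedy cluster merge.

def glue_points(points, tolerance):
    """Map each point to a representative; merge points closer than tolerance."""
    reps = []     # cluster representatives
    mapping = []  # index into reps for each input point
    for p in points:
        for i, r in enumerate(reps):
            if (p[0] - r[0]) ** 2 + (p[1] - r[1]) ** 2 <= tolerance ** 2:
                mapping.append(i)
                break
        else:
            reps.append(p)
            mapping.append(len(reps) - 1)
    return reps, mapping

def glue_lines(lines, tolerance):
    """Return a shared, glued vertex list plus lines as index pairs into it."""
    pts = [pt for ln in lines for pt in ln]
    reps, mapping = glue_points(pts, tolerance)
    edges = [(mapping[2 * i], mapping[2 * i + 1]) for i in range(len(lines))]
    return reps, edges

lines = [((0.0, 0.0), (1.0, 0.0)), ((1.001, 0.0), (1.0, 1.0))]
reps, edges = glue_lines(lines, 0.01)
print(len(reps))  # 3 vertices: the two near-coincident endpoints are merged
print(edges)      # [(0, 1), (1, 2)] -- the two lines now share vertex 1
```

With the shared vertex indices in hand, the network topology (which lines meet at which node) falls out directly, which is what the topology output above reports.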
This script could also be used to draw a sort of road network, like here: https://codequotidien.wordpress.com/2013/03/22/hemfunction/
But the primary purpose is to do that.
…
Grasshopper contains a VB.net and C# component. These components
allow you to run your own custom code within Grasshopper.
Understanding how to make even simple code components can be very
useful in Grasshopper.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs that I've seen which compile your project while you type it. I know that threading within components and at the rhinocommon-level is a freaking hard problem that has been discussed at length already. (although when, 5-10 years from now, it's finished it will be very cool)
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver was a little more atomic and behaved like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file could be exploited (not within components, but rather solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
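The "list of jobs with rigidly defined order" idea above can be sketched as scheduling a dependency graph: components whose upstream data is already computed form a wave, and each wave can run concurrently. This is a conceptual toy, not Grasshopper's solver; the component names and `deps` graph are invented:

```python
# Conceptual sketch of solving a GH definition as a DAG of jobs, running
# non-interdependent branches in parallel. The graph below is made up.
from concurrent.futures import ThreadPoolExecutor

deps = {  # component -> upstream components whose output it needs
    "slider": [],
    "curve_a": ["slider"],
    "curve_b": ["slider"],
    "loft": ["curve_a", "curve_b"],
}

def solve(definition):
    done, waves = set(), []
    while len(done) < len(definition):
        # every component whose upstream data is already computed
        ready = [c for c in definition
                 if c not in done and all(d in done for d in definition[c])]
        with ThreadPoolExecutor() as pool:
            # curve_a and curve_b land in the same wave and run concurrently
            list(pool.map(lambda c: None, ready))  # placeholder "compute"
        done.update(ready)
        waves.append(sorted(ready))
    return waves

print(solve(deps))  # [['slider'], ['curve_a', 'curve_b'], ['loft']]
```

The wave structure also shows where the "partially solved" state discussed below comes from: after the first wave finishes, everything downstream of it alone is already valid data you could start wiring into.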
I was imagining a couple of scenarios:
a) Writing a parallel module: solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point you could connect up to some of the already calculated parameters and write something in parallel to the main trunk which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze out a bit of a section of downstream code, make modifications so you can observe the effects more quickly, unfreeze a bit more, and repeat etc. etc. until you're done, then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just absorb this need? Since the vast majority of the time that GH is solving is on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if it took 15 seconds or 15 hours for the final run.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
that's ok, from a user point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: the simple percentage version for unpredictable code and, for those of us able to calculate the time taken for our algorithm based on the number of input parameters, a count down in seconds or minutes or whatever.
I think a good place to start with these sort of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
: ----------------------------------------------------------------------------------------------
1)
Hi Clemens, I've analysed a plate structure using Karamba and wanted to do a convergence analysis on results computed as a function of the number of elements.
Now, strictly looking at the result magnitudes of internal energy (IE) and maximum displacement (w_max), it's acceptable that their relative deviations are very small. But I cannot explain the tendencies of their graphs. From what I know, FEM should always compute underestimated results when compared to analytical solutions, so I don't understand why both the IE and w_max seem to be decreasing for an increasing number of elements.
But my main concern is the behaviour of the peak moment: it seems to be simply hill climbing until suddenly a singularity kicks in. I initially wanted to use the peak moment as a fitness value for optimisation, but with this behaviour I don't think that would make sense. I've attached my GH file as well.
It would be much appreciated if you could enlighten me on these subjects. Cheers Daniel Andersen
2)
Hi Daniel,
I could not run your definition because I don't have all the plug-ins installed that you use.
You are basically right that the displacement should increase with a finer mesh. However the result of the shell analysis also depends on the shape of the triangles (well formed vs. very distorted). In order to test this, I think it would be interesting to use a very simple example (e.g. rectangular plate with one column) where you can easily control mesh generation. Would you like to start a discussion on this in the karamba group at http://www.grasshopper3d.com/group/karamba?
It is not a good idea to use the bending moment at a singularity for optimization because the result will be heavily mesh dependent. Also real columns do have a certain diameter and modeling them as point supports introduces an error.
Best,
Clemens
3)
oh, and by the way!
Here's some relevant literature on handling peak moments: https://books.google.dk/books?id=-5TvNxnVMmgC&pg=PA219&lpg=PA219&dq=blaauwendraad+plates+and+fem&source=bl&ots=SdDcwnrSA1&sig=6HulPmKNIhqKx4_rGxitteMC4CU&hl=da&sa=X&ved=0CDEQ6AEwA2oVChMIg66k0LPaxgIVgY1yCh1KPAeY#v=onepage&q=chapter%2014&f=false (Blaauwendraad, J., 2010. Plates and FEM : Surprises and Pitfalls, see Chapter 14) It would be great if a feature dealing with peak moments could be incorporated in Karamba. In my work, I ended up exporting my models to Robot in order to verify the moment values. Best, Daniel
4)
Hi Daniel,
thank you for your reply and the link to Blaauwendraad's excellent book!
At some point I hope to include material nonlinearity in Karamba which will help in dealing with stress singularities.
If you want you could open a discussion with a title like 'moment peaks in shells at point-supports'. Then we could copy and paste the text of our conversation into it.
Best,
Clemens
----------------------------------------------------------------------------------------------…
can divide a surface with any imaginable pattern. 3. Here I provide a way to do it via Lunchbox ... it works, but it is fixed, so we need to play with data trees in order to create the appropriate pattern for each case. 4. The other component is a C# assembly that does many things besides dividing any collection of points with numerous patterns (see the ANDRE model I made for you). 5. You need to decompose a polysurface into pieces in order to work on the subdivisions. 6. I am also giving another definition that could act as a tutorial on how to process sets of points via standard GH components and classic methods.
Let me know if any of this seems unclear to you: if so, I could write a definition using classic GH components - but you would lose the division-pattern variations.
best, Peter
…
I/O Modifiers can be applied from the right-click context menu of either a component's input or output parameters. With the exception of <Principal> and <Degrees>, they work exactly like their corresponding Grasshopper Component. When an I/O Modifier is applied to a parameter, a visual Tag (icon) is displayed. If you hover over a Tag, a tooltip will show what it is and what it does.
The full list of these Tags:
1) Principal
An input with the Principal Icon is designated the principal input of a component for the purposes of path assignment.
For example:
2) Reverse
The Reverse I/O Modifier will reverse the order of a list (or lists in a multiple path structure)
3) Flatten
The Flatten I/O Modifier will reduce a multi-path tree down to a single list on the {0} path
4) Graft
The Graft I/O Modifier will create a new branch for each individual item in a list (or lists)
5) Simplify
The Simplify I/O Modifier will remove the overlap shared amongst all branches. [Note that a single branch does not share any overlap with anything else.]
6) Degrees
The Degrees Input Modifier indicates that the numbers received are actually measured in Degrees rather than Radians. Think of it as a preference setting for each angle input on a Grasshopper Component, stating that you prefer to work in Degrees. There is no Output option, as this is only available on Angle Inputs.
7) Expression
The Expression I/O Modifier allows you to change the input value by evaluating an expression such as -x/2, which will halve the input and negate it. If you hover over the Tag, a tooltip will be displayed with the expression. Since the release of GH version 0.9.0068, all I/O Expression Modifiers use "x" instead of the nickname of the parameter.
8) Reparameterize
The Reparameterize I/O Modifier will only work on lines, curves and surfaces forcing the domains of all geometry to the [0.0 to 1.0] range.
9) Invert
The Invert Input Modifier works in a similar way to a Not Gate in Boolean Logic negating the input. A good example of when to use this is on [Cull Pattern] where you wish to invert the logic to get the opposite results. There is no Output option as this is only available on Boolean Inputs.
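The tree modifiers above can be illustrated with a rough Python model, where a DataTree is represented as a dict mapping path tuples to item lists. This mirrors the described behaviour only; it is not Grasshopper's actual DataTree API, and the Simplify sketch in particular is a simplification (it ignores branches whose simplified paths would collide):

```python
# Rough Python model of Flatten, Graft and Simplify on a DataTree represented
# as {path_tuple: [items]}. Illustrative only, not the Grasshopper API.

def flatten(tree):
    """Flatten: merge every branch into a single list on the {0} path."""
    out = []
    for path in sorted(tree):
        out.extend(tree[path])
    return {(0,): out}

def graft(tree):
    """Graft: create a new branch for each individual item in every list."""
    return {path + (i,): [item]
            for path in tree for i, item in enumerate(tree[path])}

def simplify(tree):
    """Simplify: drop path indices whose value is shared by all branches."""
    paths = list(tree)
    if len(paths) < 2:
        return dict(tree)  # a single branch shares no overlap with anything
    n = min(len(p) for p in paths)
    shared = {i for i in range(n) if len({p[i] for p in paths}) == 1}
    return {tuple(v for i, v in enumerate(p) if i not in shared) or (0,): tree[p]
            for p in paths}

t = {(0, 0): ['a', 'b'], (0, 1): ['c']}
print(flatten(t))   # {(0,): ['a', 'b', 'c']}
print(graft(t))     # {(0, 0, 0): ['a'], (0, 0, 1): ['b'], (0, 1, 0): ['c']}
print(simplify(t))  # {(0,): ['a', 'b'], (1,): ['c']}
```

Note how Simplify strips the leading 0 that both paths {0;0} and {0;1} share, leaving {0} and {1}, while Flatten discards the path structure entirely.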
…
ing the maps to the broader community.
At the moment, there are just a few known issues left that I have to fix for complex geometric cases but they should run smoothly for most energy models that you generate with Honeybee. Within the next month, I will be clearing up these last issues and, by the end of the month, there will be an updated youtube tutorial playlist on the comfort tools and how to use them.
In the meantime, there's an updated example file (http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Indoor_Microclimate_Map) and I wanted to get you all excited with some images and animations coming out of the design part of my thesis. I also wanted to post some documentation of all of the previous research that has made these climate maps possible and give out some much deserved thanks. To begin, this image gives you a sense of how the thermal maps are made by integrating several streams of data from EnergyPlus:
(https://drive.google.com/file/d/0Bz2PwDvkjovJaTMtWDRHMExvLUk/view?usp=sharing)
To get you excited, this youtube playlist has a whole bunch of time-lapse thermal animations that a lot of you should enjoy:
https://www.youtube.com/playlist?list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
To give a brief summary of what you are looking at in the playlist, there are two proposed designs for completely passive co-habitation spaces in New York and Los Angeles.
These diagrams explain the Los Angeles design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJM0JkM0tLZ1kxUmc/view?usp=sharing)
And this video gives you an idea of how it thermally performs:
These diagrams explain the New York design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJS1BZVVZiTWF4MXM/view?usp=sharing)
And this video shows you the thermal performance:
Now to credit all of the awesome people that have made the creation of these thermal maps possible:
1) As any HB user knows, the open source engines and libraries under the hood of HB are EnergyPlus and OpenStudio, and the incredible thermal richness of these maps would not have been possible without these DoE teams creating such a robust modeler, so a big credit is definitely due to them.
2) Many of the initial ideas for these thermal maps come from an MIT Masters thesis that was completed a few years ago by Amanda Webb called "cMap". Even though these cMaps were only taking into account surface temperature from E+, it was the viewing of her radiant temperature maps that initially touched-off the series of events that led to my thesis so a great credit is due to her. You can find her thesis here (http://dspace.mit.edu/handle/1721.1/72870).
3) Since the thesis of A. Webb, there were two key developments that made the high resolution of the current maps believable as a good approximation of the actual thermal environment of a building. The first is a PhD thesis by Alejandra Menchaca (also conducted here at MIT) that developed a computationally fast way of estimating sub-zone air temperature stratification. The method, which works simply by weighing the heat gain in a room against the incoming airflow, was validated by many CFD simulations over the course of Alejandra's thesis. You can find her final thesis document here (http://dspace.mit.edu/handle/1721.1/74907).
4) The other main development since the A. Webb thesis that made the radiant map much more accurate is a fast means of estimating the radiant temperature increase felt by an occupant sitting in the sun. This method was developed by some awesome scientists at the UC Berkeley Center for the Built Environment (CBE), including Tyler Hoyt, who has been particularly helpful to me by supporting the CBE's Github page. The original paper on this fast means of estimating the solar temperature delta can be found here (http://escholarship.org/uc/item/89m1h2dg), although they should have an official publication in a journal soon.
5) The ASHRAE comfort models under the hood of LB+HB all are derived from the javascript of the CBE comfort tool (http://smap.cbe.berkeley.edu/comforttool). A huge chunk of credit definitely goes to this group and I encourage any other researchers who are getting deep into comfort to check the code resources on their github page (https://github.com/CenterForTheBuiltEnvironment/comfort_tool).
6) And, last but not least, a huge share of credit is due to Mostapha and all members of the LB+HB community. It is because of the resources and help that Mostapha initially gave me that I learned how to code in the first place, and the knowledge of a community that would use the things I developed was, by far, the biggest motivation throughout this thesis and all of my LB efforts.
Thank you all and stay awesome,
-Chris…
present Digital Process: Generative Design Technologies Workshop, a specialized workshop that will take place in 4 of the most important cities of the Mexican republic [Puebla] [Mexico DF] [Guadalajara] [Leon] in January and February 2012. http://gendesigntech.wordpress.com/
Aimed mainly at architects, industrial designers, interior designers, urban planners, digital artists, students and design-related professionals, this workshop aims to give participants the knowledge and technological resources to develop the elements of a project, from conception through to its application, in a complete way. Supported by a powerful and flexible set of platforms, participants will learn to generate, analyse and rationalize complex morphologies, organic free forms and advanced computational algorithms, as well as to produce photorealistic visualizations applicable to diverse design projects. Over 5 days of intense work, exploration and feedback, participants will be guided in developing a more dynamic workflow that will let them exploit the full potential of the tools and strengthen their skills, aptitudes and capacities.
Instructors: Leonardo Nuevo Arenas [Complex Geometry], José Eduardo Sánchez [DesignNest], Daniel Camiro/Luis de la Parra [Chido Studio]
http://issuu.com/chidostudiodiseno/docs/digprowork
See the program here: http://gendesigntech.wordpress.com/program/
To register, please visit: http://gendesigntech.wordpress.com/registro…
Project Professor Francisco Arqués Soler, an expert in the subject; once the generative decisions of the project have been explored and programmed, the group will serve as a laboratory to investigate mutations of it.
http://dpa-etsam.com/iam/iam-cursos
https://www.facebook.com/iamadridETSAM?fref=ts
We will work in the visual programming platform Grasshopper: we will crack open the principles of its structure (via Millipede), explore bioclimatic conditions (via Ladybug+Honeybee), and navigate form-finding processes and generative variations (via Galapagos and Octopus) and their conversion to BIM (via Chameleon, Lyrebird and VisualARQ, since on the final day we will program the export of the prototype to Revit).
NOTE: the course will take place in the second half of this OCTOBER, although the CALENDAR is still open, to be decided collectively by the wonderful small group of chosen ones who sign up in the end; it is usually TUESDAYS AND THURSDAYS, 16:00 TO 18:00.
Official Certificate of Expert in Visual Programming from the UPM, with the corresponding credits (in this case 2.5).
iAM | Instituto de Arquitectura de Madrid <iamadrid.arquitectura@upm.es> +34 91 336 6537 / 6589…