: ----------------------------------------------------------------------------------------------
1)
Hi Clemens, I've analysed a plate structure using Karamba and wanted to do a convergence analysis on the results, computed as a function of the number of elements.
Now, strictly looking at the result magnitudes of internal energy (IE) and maximum displacement (w_max), it is acceptable that their relative deviations are very small. But I cannot explain the tendencies of their graphs. From what I know, FEM should always compute underestimated results when compared to analytical solutions, so I don't understand why both IE and w_max seem to be decreasing for an increasing number of elements.
But my main concern is the behaviour of the peak moment: it seems to simply keep climbing until suddenly a singularity kicks in. I initially wanted to use the peak moment as a fitness value for optimisation, but with this behaviour I don't think that would make sense. I've attached my GH file as well.
It would be much appreciated if you could enlighten me on these subjects. Cheers, Daniel Andersen
2)
Hi Daniel,
I could not run your definition because I do not have all the plug-ins installed that you use.
You are basically right that the displacement should increase with a finer mesh. However, the result of the shell analysis also depends on the shape of the triangles (well formed vs. very distorted). To test this, I think it would be interesting to use a very simple example (e.g. a rectangular plate with one column) where you can easily control the mesh generation. Would you like to start a discussion on this in the Karamba group at http://www.grasshopper3d.com/group/karamba?
It is not a good idea to use the bending moment at a singularity for optimization because the result will be heavily mesh dependent. Also, real columns do have a certain diameter, and modeling them as point supports introduces an error.
Best,
Clemens
3)
oh, and by the way!
Here's some relevant literature on handling peak moments: https://books.google.dk/books?id=-5TvNxnVMmgC&pg=PA219&lpg=PA219&dq=blaauwendraad+plates+and+fem&source=bl&ots=SdDcwnrSA1&sig=6HulPmKNIhqKx4_rGxitteMC4CU&hl=da&sa=X&ved=0CDEQ6AEwA2oVChMIg66k0LPaxgIVgY1yCh1KPAeY#v=onepage&q=chapter%2014&f=false (Blaauwendraad, J., 2010. Plates and FEM: Surprises and Pitfalls, see Chapter 14). It would be great if a feature dealing with peak moments could be incorporated into Karamba. In my work, I ended up exporting my models to Robot in order to verify the moment values. Best, Daniel
4)
Hi Daniel,
thank you for your reply and the link to Blaauwendraad's excellent book!
At some point I hope to include material nonlinearity in Karamba, which will help in dealing with stress singularities.
If you want you could open a discussion with a title like 'moment peaks in shells at point-supports'. Then we could copy and paste the text of our conversation into it.
Best,
Clemens
----------------------------------------------------------------------------------------------…
lly it should not make much of a difference - random number generation is not affected, and neither is mutation. crossover is a bit more tricky: I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb, K., Agrawal, R. B.: Simulated Binary Crossover for Continuous Search Space, IITK/ME/SMD-94027, Convenor, Technical Reports, Indian Institute of Technology, Kanpur, India, November 1994.
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and the processing of Holland's schemata in continuous search space. Although a number of real-coded GAs have been developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs.
Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that real-coded GAs with the SBX operator are able to perform as good as or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg,
and in most cases the performance of the real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions. In this paper, a real-coded crossover operator has been developed based on the search characteristics of the single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover. On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent in binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with the SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions, and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better on difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum
point are not known a priori and where there are multiple optima, of which one is global. Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of the search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions. These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, real-coded GAs with SBX are one step ahead of binary-coded GAs in terms of achieving a convergence proof for GAs. With the direct probabilistic relationship between children and parent points used in this paper, cues from classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between classical optimization methods and GAs is on the horizon.
In short, according to the authors, my SBX operator using real gene values is as good as the older operators designed for discrete searches, and better in continuous searches. as far as i know, SBX has meanwhile become a standard general-purpose crossover operator.
But:
- there might be better ones out there that i just haven't seen yet. please tell me.
- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. there is also the elite management for MOEAs, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. but octopus is meant to stay general for the moment - maybe there's a way to provide an interface so you can code those things yourself..!?
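as a concrete illustration, here is a minimal Python sketch of the SBX operator from the paper quoted above ("SBX-20" refers to a distribution index eta of 20; the function name and structure are my own illustration, not Octopus's actual code):

```python
import random

def sbx_crossover(p1, p2, eta=20.0):
    """Simulated Binary Crossover (Deb & Agrawal, 1994).
    For each gene a spread factor beta is sampled so that the two
    children straddle the parents with a distribution similar to what
    single-point crossover produces in binary-coded GAs; a larger eta
    keeps the children closer to the parents (SBX-20 -> eta = 20)."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        # the children are symmetric around the parents' per-gene mean
        c1.append(0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2))
        c2.append(0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2))
    return c1, c2
```

one useful property to check: for every gene, c1[i] + c2[i] equals p1[i] + p2[i], i.e. the crossover never shifts the parents' centre of mass, it only controls the spread.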
2) elite size = SPEA-2 archive size, yes. the ratio depends on your convergence behaviour, i would say. i usually start off with at least half the size of the population, but mostly the same size (as it is hard-coded in the new version, i just realize) is big enough.
4) the non-dominated front is always put into the archive first. if the archive size is exceeded, the least important individuals (according to the truncation strategy in SPEA-2) are removed one by one until the size is reached. if the front is smaller than the archive, the fittest dominated individuals are put into the elite; the latter happens at the beginning of a run, when the front hasn't been discovered well yet.
3) yes it is. this is a custom implementation i figured out myself. however, i'm close to having the HypE algorithm working in the new version, which natively has the ability to articulate preference relations on sets of solutions.
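the environmental selection described in point 4 can be sketched roughly as follows (this is my loose paraphrase of the SPEA-2 scheme, using a nearest-neighbour distance as the truncation criterion; it is not the actual Octopus source):

```python
import math

def update_archive(front, dominated, size):
    """front: objective vectors of the current non-dominated individuals.
    dominated: (fitness, objective_vector) pairs, lower fitness = better.
    Returns an archive of exactly `size` objective vectors."""
    archive = list(front)
    # oversized: truncate one by one, removing whoever sits in the
    # densest region (smallest distance to its nearest neighbour)
    while len(archive) > size:
        def nn_dist(i):
            return min(math.dist(archive[i], archive[j])
                       for j in range(len(archive)) if j != i)
        archive.pop(min(range(len(archive)), key=nn_dist))
    # undersized (early in the run): top up with the fittest dominated ones
    if len(archive) < size:
        extra = sorted(dominated)[:size - len(archive)]
        archive.extend(obj for _, obj in extra)
    return archive
```

note that the truncation deliberately keeps the extreme points of the front: they are always far from their neighbours, so the crowded middle gets thinned out first.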
…
ive collaborative environment.
TYPE : Course module and Workshop
The event is open for anybody interested from all the fields of design, including: architecture, interior design, furniture design, product design, fashion design, scenography, and engineering.
1. COURSE MODULE (20-23 April 2014) - optional
+ type: 3-day intensive course covering basic knowledge of parametric design (LEVEL 1)
+ software: Rhinoceros & Grasshopper
+ plugins: Kangaroo, Weaverbird, LunchBox, gHowl, Geco
+ achievements:
- becoming acquainted with the components & the concept of Generative Design
- understanding the strategies in Algorithmic Design
- how to easily insert simple mathematical equations into the project to gain more control
- how to choose the proper plugins with respect to the nature of the project
- interacting with different analysis platforms such as Ecotect & remote controllers
- solving several exercises at different scales (2D-3D) during each phase of the workshop
2. WORKSHOP (23-27 April 2014)
A 5 day Design-Based Research Workshop exploring new techniques in Digital Architecture/Fabrication, with a specific focus on the use of generative systems and parametric modeling as tools for creative expression.
Our ultimate goal is to increase the efficiency of utilizing digital tools in parallel with the geometric performance of the primitive design agent.
+ + CONCEPT
Fashion and Architecture are both based on basic life necessities – clothing and shelter.
However, they are also forms of self-expression – for both creators and consumers.
Both fashion and architecture affect our emotional being in many ways.
The agenda of this workshop is to investigate the overlap between these two areas of design and art: fashion and architecture.
Fashion and architecture express ideas of personal, social and cultural identity, reflecting the concerns of the user and the ambition of the age. Their relationship is a symbiotic one and throughout history, clothing and buildings have echoed each other in form and appearance. This only seems natural as they not only share the primary function of providing shelter and protection for the body, but also because they both create space and volume out of flat, two-dimensional materials.
While they have much in common, they are also intrinsically different: both address the human scale, but their proportions, sizes and shapes differ enormously.
+ + + OBJECTIVES
So far, architects have been using techniques such as folding and bending to create space, structural roofs, or other structural forms.
The agenda of this workshop goes further, investigating algorithmic thinking through generative tools integrated in design.
The challenge is to create a bridge connecting these two areas of design, architecture and fashion, which perform at two opposite scales.
+ + + + TECHNICAL BRIEF
In the early stages physical models and low-tech strategies will be used, allowing the participants to gain a greater understanding of materials, fabrication and assembly methods as well as simple, yet pragmatic structural solutions.
Later in the workshop these strategies will be digitalized and elaborated using software visualizing tools such as Rhinoceros and the algorithmic plug-in Grasshopper.…
Design Studio Professor Francisco Arqués Soler, an expert on the subject; once the generative decisions of the project have been explored and programmed, the group will serve as a laboratory to investigate mutations of it.
http://dpa-etsam.com/iam/iam-cursos
https://www.facebook.com/iamadridETSAM?fref=ts
We will work in the visual programming platform Grasshopper: we will crack open the principles of its structure (with Millipede), explore bioclimatic conditions (with Ladybug + Honeybee), and navigate form-finding processes and generative variations (with Galapagos and Octopus) and their conversion to BIM (with Chameleon, Lyrebird and VisualARQ), since on the final day we will program the export of the prototype to Revit.
NOTE: the course will take place in the second half of this OCTOBER, although the CALENDAR is still open and will be decided collectively by the wonderful small group of chosen ones who end up signing up; it is usually TUESDAYS AND THURSDAYS, FROM 16:00 TO 18:00.
Official UPM Expert Degree in Visual Programming, with the corresponding credits (in this case 2.5).
iAM | Instituto de Arquitectura de Madrid <iamadrid.arquitectura@upm.es> +34 91 336 6537 / 6589…
erona, on 1, 2 and 3 December 2016.
Visual comfort and the management of daylight in relation to energy savings are becoming increasingly relevant for innovative building design. For example, the new LEED 4 protocol awards credits for daylighting simulations and confirms the importance of design aspects that "connect occupants with the outdoors, reinforce circadian rhythms, and reduce electric lighting energy consumption by introducing natural light into the spaces". Without lighting-simulation software it is not possible to obtain quality results. Radiance is a validated piece of software, used both in research and by practitioners, and is among the most accurate for the professional simulation of natural and artificial light. It has no limits on geometric complexity and lends itself to integration into other calculation software and graphical interfaces. The latter simplify the scripting procedures. The main and most versatile of these will be covered in the course (DIVA4Rhino and Ladybug + Honeybee, plug-ins for Grasshopper and Rhinoceros 3D).
The course is aimed at designers and researchers who want to acquire practical tools for simulation with Radiance, in order to develop and verify the solutions best suited to their own needs. Theory and hands-on sessions are planned, with examples and exercises covering the topics in a demonstrative, interactive way.
Applications for enrolment must be submitted by 16 November 2016.
The brochure with the course contents and all further information is available at this link.
The course is sponsored by Glas Müller.…
etra -UNESCO world heritage sites. The course includes technical software tutorials in cutting-edge software, a design component, and guest lecture series.
The Visiting School is an extraordinary travel, learning and networking experience, as well as a credential for graduate studies and career opportunities. The course is focused on speculating architecture for MARS; the Wadi Rum desert is reminiscent of Martian landscapes and is where Ridley Scott's 'The Martian', starring Matt Damon, was filmed. The course will focus on novel techniques in design and fabrication at multiple scales: material, architectural and urban.
Contact jordan@aaschool.ac.uk or visit www.aavsjo.com for registration and details. Accommodation is available at Antika Hotel in shared twin rooms with other participants at a discounted rate of $300 for the entire course, including breakfast and wifi (June 23 to July 4). The hotel is in Jabal Amman, within walking distance of our venue.
Instructors:
Kais Al-Rawi
Julia Koerner
Marie Boltenstern
Mazen AlAli
Barry Wark
Andreas Körner
Guest Speakers:
Rob Mueller, NASA | Swampworks
Julia Koerner, JK Design | UCLA AUD
Full Guests to be announced.
Past Keynote Speakers:
Ross Lovegrove
Ben Aranda
Mark Foster Gage
…
. BIM and Parametric.
Posts and files over at Design By Many:
http://www.designbymany.com/content/model-pattern-american-cement-building
I am equally comfortable on both of these platforms, and built the same parameters into each model. My modeling experience was very similar to Santiago's. The Revit model took 4 hours to build, while the GH definition took 16 hours. Time invested is certainly not the only metric worth comparing; however, it is a good demonstration of the immediacy with which modifications can be made to the component system when a parameter adjustment is not satisfactory.
With credit to Andrew Kudless for his process work on Manifold, I have adapted a similar workflow tracing diagram to the two models:
My general observation is that both tool sets approach the same problem, namely providing a structured relationship between components and wholes, but from opposing directions: BIM excels at compartmentalizing individual components, while parametric modelers like GH excel at global, system-wide manipulations.
In the case of the American Cement Building, modeling the cast component seems to have fit the box of 'the whole being reducible to its parts' the best. Although I anticipated Revit having more trouble with the surface generation, I found it to be more flexible on all accounts. Building up the component in a Pattern Based Curtain System family, the direct interaction with the rig (specifying control point work planes and offsets) kept the network of interactions accessible and editable throughout the build process. This family was then applied to a curtain panel grid, which itself could be flexed in proportion and cell count.
With the GH build I originally intended to utilize data trees for parallel component construction, so that changes to the base grid would affect offset normals and the like. However, after I had spent three hours constructing one parametric rail curve, I was unable to keep track of the parallel data structure, and reverted to building a singular component. While GH certainly has the capacity to handle this task, I have found personally that the user does not.…
between the two. A simple example would be if you plug Integer data into a Text parameter. It's perfectly possible to create a piece of text which represents the integer. I.e. the value 18 becomes the text "18".
It's also possible to convert a floating point number to text, although in that case the conversion is not lossless, as the text only shows a limited number of decimals, thus rounding the actual numeric value.
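To make the lossless/lossy distinction concrete, here is a small Python illustration (Python merely stands in for what the conversion does conceptually; I am not asserting the exact number of decimals Grasshopper keeps):

```python
# integer -> text is lossless: the round trip recovers the value exactly
n = 18
assert int(str(n)) == n

# float -> text with a limited number of decimals is lossy:
# the displayed text rounds the actual numeric value
x = 1.0 / 3.0
text = format(x, ".4f")   # "0.3333" - the tail of the value is cut off
assert float(text) != x   # the round trip no longer matches the original
```
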
In your specific case here, you have connected a Curve parameter output with the Loft Options input. Loft options are about the type of loft, whether or not to rebuild/refit the resulting loft surface and -if so- what sort of tolerance to use.
If you look at the tooltips of the input parameter for the Loft component, you'll see that the first one takes all the section curves and the second one takes the options to be used to make the loft. You'll have to put all your curves into the first input:
This can be accomplished by holding SHIFT while making the second connection.
However, this will generate a new problem. Loft operates on a list of curves, and for each list of curves you provide it will try to create a single loft. But if you merge the two curve streams, you'll sometimes get lists of 4 curves, which is probably not what you want.
At any rate, Loft is probably not what you want in the first place, as an offset curve (especially one with kinks) will result in incredibly messy lofts. I'd recommend Boundary Surface as an alternative, but that will generate trimmed surfaces, which may not be acceptable for you.
Now then, on to the Offset failure. Curve offsetting is a planar operation. By default, the plane in which Offset works is the world XY plane. Your curves are all perpendicular to the world XY plane, so that is already problematic. The fix would be easy (plug the curves also into the Offset P input), were it not that one of your section curves is wonky. This is probably either due to a bug in the Rhino Brep|Plane intersector or it's a problem with the input Brep. Either way, I could not get one of the curves to offset correctly, no matter what I tried.
In the end I solved it by using Loose Offset, which also means that the loft works much better because both the interior and the exterior curve have identical topology (see attached). Do note that Loose Offset does not guarantee an offset accurate to within document tolerance, it only moves the control-points.
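The idea behind a loose offset — moving only the control points rather than constructing an exact offset — can be sketched for a simple closed polygon (this is a generic illustration of the concept, not Grasshopper's implementation; it assumes a roughly convex polygon without degenerate spikes):

```python
import math

def loose_offset(points, distance):
    """'Loose' offset of a closed 2D polygon: move each vertex along the
    bisector of its two edge normals instead of offsetting every segment
    exactly. The result keeps the original topology (same vertex count),
    but the offset distance is only approximate - sharp corners end up
    closer than an exact offset would place them."""
    n = len(points)

    def edge_normal(a, b):
        # unit normal of edge a->b (edge direction rotated 90 degrees)
        dx, dy = b[0] - a[0], b[1] - a[1]
        length = math.hypot(dx, dy)
        return dy / length, -dx / length

    out = []
    for i in range(n):
        prev, cur, nxt = points[i - 1], points[i], points[(i + 1) % n]
        n1 = edge_normal(prev, cur)
        n2 = edge_normal(cur, nxt)
        mx, my = n1[0] + n2[0], n1[1] + n2[1]
        length = math.hypot(mx, my)  # assumes adjacent normals never cancel
        out.append((cur[0] + distance * mx / length,
                    cur[1] + distance * my / length))
    return out
```

Because each vertex moves exactly `distance` along its bisector, a 90-degree corner of an exact offset would actually sit `distance * sqrt(2)` away — which is precisely the tolerance trade-off described above.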
--
David Rutten
david@mcneel.com…
ly planes instead of lines, so there is no equivalently elegant and orderly branching structure in there made from lines. You only get the mostly triangulated truss which is much tighter, shown here in blue in the 2D version:
If you only sparsely populate those truss points, you don't get as much triangulation, and you do get more of a natural bone look, but you lose the orderly branching that I was so excited about in 2D. Also, since hexagons pack 2D space perfectly, the 2D case does create a lot of good areas of hexagons, but in 3D there is no similarly symmetrical space-filling object except a cube, and cubes are not what Voronoi emulates at all. If the 2D case branches with three lines per vertex, then the 3D case could ideally branch with four lines per vertex, just like the atomic structure of diamond. I was hoping for that, naively, but am now discouraged. A surface-adaptive diamondoid lattice is a long way off, it seems. Without the Voronoi relaxation cycles, just distorting an existing lattice merged to the surface as needed won't even out well.
Diamond is also a very specific structure, not amenable to fractal-like branching, so I'm not even sure what the 3D equivalent of such branching is, or whether there is an orderly system. "Branching" is the wrong concept anyway, since the struts both branch and join together again, forming cells. Pure branching that ends at the surface is not coming out of Voronoi.
http://www.grasshopper3d.com/photo/stochastic-fractal
Here I have created a superior surface-adaptive 3D Voronoi by using my 2D approach of moving the vertices already near the surface a lot and leaving the deeper ones mostly alone, so I no longer get a blank hole in the interior but still get lots of surface density:
…
Added by Nik Willmore at 2:01am on August 16, 2015