lly it should not make much of a difference: random number generation is not affected, and neither is mutation. Crossover is a bit trickier. I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb K., Agrawal R. B.: Simulated Binary Crossover for Continuous Search Space. IITK/ME/SMD-94027, Technical Report, Indian Institute of Technology, Kanpur, India, November 1994.
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs have been developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs.
Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that real-coded GAs with the SBX operator are able to perform as good as or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin, and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of the real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions
In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover. On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent with binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with an SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions, and similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better on difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.
Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions. These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, real-coded GAs with SBX are one step ahead of binary-coded GAs in terms of achieving a convergence proof for GAs. With the direct probabilistic relationship between children and parent points used in this paper, cues from classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between classical optimization methods and GAs is on the horizon.
In short, according to the authors, the SBX operator I use with real gene values is as good as the older operators specially designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general-purpose crossover operator.
But:
- there might be better ones out there that I just haven't seen yet. Please tell me if you know of any.
- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for MOEAs, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment. Maybe there's a way to provide an interface so you can code those things yourself..!?
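For reference, the core of SBX on one pair of genes is only a few lines. This is a minimal sketch of the operator as described in the paper; the function name and per-gene formulation are mine, and Octopus's actual implementation may differ (e.g. in how it handles variable bounds and per-gene crossover probability):

```python
import random

def sbx_crossover(p1, p2, eta=20.0):
    """Simulated Binary Crossover (Deb & Agrawal, 1994) for one pair of
    real-valued genes. eta is the distribution index; eta = 20 gives the
    SBX-20 behaviour mentioned above (children stay close to the parents)."""
    u = random.random()
    if u <= 0.5:
        # contracting spread: children between the parents
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        # expanding spread: children outside the parents
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    # Children are symmetric about the parents' mean, spread by beta.
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2
```

The children always average to the parents' mean; a larger eta concentrates them near the parents, which is why SBX mimics single-point crossover on tightly coded binary strings.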
2) elite size = SPEA-2 archive size, yes. The ratio depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size as the population (which, I just realize, is hard-coded in the new version) is big enough.
4) the non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (per the truncation strategy in SPEA-2) are removed one by one until the size is reached. If the front is smaller than the archive, the fittest dominated individuals are put into the elite as well. The latter happens at the beginning of a run, when the front hasn't been discovered well yet.
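That environmental-selection step can be sketched in a few lines of Python. This is my paraphrase of the SPEA-2 scheme described above, not Octopus's actual code: the density measure here is a simplified nearest-neighbour distance (SPEA-2 proper uses the k-th nearest neighbour), and individuals are plain tuples of objective values to be minimized:

```python
def dominates(a, b):
    """Pareto dominance for minimization objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nearest_gap(ind, pool):
    """Distance to the closest other individual; small = crowded."""
    return min(sum((x - y) ** 2 for x, y in zip(ind, other)) ** 0.5
               for other in pool if other is not ind)

def update_archive(population, archive, size):
    """One environmental-selection step in the SPEA-2 style described above."""
    pool = population + archive
    # the non-dominated front always goes into the archive first
    front = [a for a in pool if not any(dominates(b, a) for b in pool)]
    elite = list(front)
    while len(elite) > size:
        # archive overfull: drop the most crowded member, one by one
        elite.remove(min(elite, key=lambda ind: nearest_gap(ind, elite)))
    if len(elite) < size:
        # front too small: top up with the fittest dominated individuals
        rest = sorted((a for a in pool if a not in front),
                      key=lambda a: sum(dominates(b, a) for b in pool))
        elite += rest[:size - len(elite)]
    return elite
```

For example, `update_archive([(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)], [], 2)` keeps `(1.0, 1.0)` from the front and tops up with `(2.0, 2.0)`, the least-dominated of the rest.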
3) yes it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively supports articulating preference relations on sets of solutions.
…
presenting Digital Process: Generative Design Technologies Workshop; a specialized workshop to be held in 4 of the most important cities of the Mexican republic [Puebla] [Mexico DF] [Guadalajara] [Leon] in January and February 2012. http://gendesigntech.wordpress.com/
Aimed primarily at architects, industrial designers, interior designers, urban planners, digital artists, students and design professionals, this workshop's objective is to give participants the knowledge and technological resources to develop the elements of a project from conception through to its complete application. Supported by a powerful and flexible set of platforms, participants will learn to generate, analyze and rationalize complex morphologies, free organic forms and advanced computational algorithms, as well as to produce photorealistic visualizations applicable to a wide range of design projects. Over 5 days of intense work, exploration and feedback, participants will be guided in developing a more dynamic workflow that will let them exploit the full potential of the tools and strengthen their skills, aptitudes and capabilities.
Instructors: Leonardo Nuevo Arenas [Complex Geometry], José Eduardo Sánchez [DesignNest], Daniel Camiro / Luis de la Parra [Chido Studio]
http://issuu.com/chidostudiodiseno/docs/digprowork
See the program here: http://gendesigntech.wordpress.com/program/
To register, please visit: http://gendesigntech.wordpress.com/registro…
Design studio professor Francisco Arqués Soler, an expert in the field; once the project's generative decisions have been explored and programmed, the group will serve as a laboratory for investigating mutations of it.
http://dpa-etsam.com/iam/iam-cursos
https://www.facebook.com/iamadridETSAM?fref=ts
We will work in the visual programming platform Grasshopper: we will crack open the principles of its structure (with Millipede), explore bioclimatic conditions (with Ladybug + Honeybee), and move through form-finding processes and generative variations (with Galapagos and Octopus) and their conversion to BIM (with Chamaleon, Lyrebird and Visualarq), since on the final day we will program the export of the prototype to Revit.
NOTE: the course will take place in the second half of this OCTOBER; the TIMETABLE is still open, to be decided assembly-style by the wonderful small group of the chosen who end up signing up, although it is usually TUESDAYS AND THURSDAYS, 16:00 TO 18:00.
Official UPM certificate of Expert in Visual Programming, plus the corresponding credits (in this case 2.5).
iAM | Instituto de Arquitectura de Madrid <iamadrid.arquitectura@upm.es> +34 91 336 6537 / 6589…
erona, on 01, 02 and 03 December 2016.
Visual comfort and the management of natural light, in relation to energy savings, are becoming ever more relevant for innovative building design. For example, the new LEED 4 protocol awards credits for daylighting simulations and confirms the importance of design aspects that "connect occupants with the outdoor space, reinforce circadian rhythms, and reduce the use of electrical energy for artificial lighting by introducing natural light into the spaces". Without simulation software it is not possible to obtain quality lighting results. Radiance is a validated piece of software, used both in research and by practicing designers, and is among the most accurate tools for the professional simulation of natural and artificial light. It has no limits on geometric complexity and lends itself to being integrated into other calculation software and graphical interfaces. The latter make the scripting procedures easier; the main and most versatile ones will be covered in the course (DIVA4Rhino and Ladybug+Honeybee, plug-ins for Grasshopper and Rhinoceros 3D).
The course is aimed at designers and researchers who want to acquire practical tools for simulation with Radiance, in order to develop and verify the solutions best suited to their own needs. It includes theory lessons and hands-on sessions with examples and exercises, covering the topics in a demonstrative and interactive way.
Applications for enrolment must be submitted by 16 November 2016.
The brochure with the course contents and all further information is available at this link.
The course is sponsored by Glas Müller.…
etra - UNESCO World Heritage Sites. The course includes technical tutorials in cutting-edge software, a design component, and a guest lecture series.
The Visiting School is an extraordinary travel, learning and networking experience, as well as a credential for graduate studies and career opportunities. The course is focused on speculative architecture for MARS; the Wadi Rum desert, where Ridley Scott's 'The Martian' starring Matt Damon was filmed, is reminiscent of Martian landscapes. The course will focus on novel techniques in design and fabrication at multiple scales: material, architectural and urban.
Contact jordan@aaschool.ac.uk or visit www.aavsjo.com for registration and details. Accommodation is available at the Antika Hotel in shared twin rooms with other participants at a discounted rate of $300 for the entire course, including breakfast and wifi (June 23 to July 4). The hotel is in Jabal Amman, within walking distance of our venue.
Instructors:
Kais Al-Rawi
Julia Koerner
Marie Boltenstern
Mazen AlAli
Barry Wark
Andreas Körner
Guest Speakers:
Rob Mueller, NASA | Swampworks
Julia Koerner, JK Design | UCLA AUD
Full Guests to be announced.
Past Keynote Speakers:
Ross Lovegrove
Ben Aranda
Mark Foster Gage
…
. BIM and Parametric.
Posts and files over at Design By Many:
http://www.designbymany.com/content/model-pattern-american-cement-building
I am equally comfortable on both of these platforms, and built the same parameters into each model. My modeling experience was very similar to Santiago's: the Revit model took 4 hours to build, while the GH definition took 16 hours. Time invested is certainly not the only metric worth comparing; it is, however, a good indication of how immediately modifications can be made to the component system when a parameter adjustment alone is not satisfactory.
With credit to Andrew Kudless for his process work on Manifold, I have adapted a similar workflow tracing diagram to the two models:
My general observation is that both tool sets approach the same problem, namely providing a structured relationship between components and wholes, but from opposing directions. BIM excels at compartmentalizing individual components, while parametric modelers like GH excel at global, system-wide manipulations.
In the case of the American Cement Building, modeling the cast component fit the box of 'the whole being reducible to its parts' best. Although I anticipated Revit having more trouble with the surface generation, I found it to be more flexible on all accounts. Building up the component in a Pattern Based Curtain System family, the direct interaction with the rig (specifying control-point work planes and offsets) kept the network of interactions accessible and editable throughout the build process. This family was then applied to a curtain panel grid, which itself could be flexed in proportion and cell count.
With the GH build I originally intended to use data trees for parallel component construction, so that changes to the base grid would affect offset normals and the like. However, after I had spent three hours constructing one parametric rail curve, I was unable to keep track of the parallel data structure and reverted to building a singular component. While GH certainly has the capacity to handle this task, I have found personally that the user does not.…
between the two. A simple example would be if you plug Integer data into a Text parameter. It's perfectly possible to create a piece of text which represents the integer. I.e. the value 18 becomes the text "18".
It's also possible to convert a floating point number to text, although in that case the conversion is not lossless, as the text only shows a limited number of decimals, thus rounding the actual numeric value.
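The same lossless/lossy distinction is easy to reproduce in plain Python; the six-decimal format below is just an illustrative display choice, not Grasshopper's actual setting:

```python
# Integer -> text is lossless: the text fully represents the value.
assert str(18) == "18"

# Float -> text with a limited number of decimals is lossy: the text
# rounds the value, so converting back does not recover the original.
value = 0.123456789
text = f"{value:.6f}"
assert text == "0.123457"     # rounded at the sixth decimal
assert float(text) != value   # precision was lost in the round-trip
```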
In your specific case here, you have connected a Curve parameter output with the Loft Options input. Loft options are about the type of loft, whether or not to rebuild/refit the resulting loft surface and -if so- what sort of tolerance to use.
If you look at the tooltips of the input parameter for the Loft component, you'll see that the first one takes all the section curves and the second one takes the options to be used to make the loft. You'll have to put all your curves into the first input:
This can be accomplished by holding SHIFT while making the second connection.
However this will generate a new problem. Loft operates on a list of curves, and for each list of curves you provide it will try to create a single loft. But if you merge the two curve streams, you'll sometimes get lists of 4 curves, this is probably not what you want.
At any rate, Loft is probably not what you want in the first place, as an offset curve (especially a curve with kinks) will result in incredibly messy lofts. I'd recommend Boundary Surface as an alternative, but that will generate trimmed surfaces, which may not be acceptable for you.
Now then, on to the Offset failure. Curve offsetting is a planar operation. By default, the plane in which Offset works is the world XY plane. Your curves are all perpendicular to the world XY plane, so that is already problematic. The fix would be easy (plug the curves also into the Offset P input), were it not that one of your section curves is wonky. This is probably either due to a bug in the Rhino Brep|Plane intersector or it's a problem with the input Brep. Either way, I could not get one of the curves to offset correctly, no matter what I tried.
In the end I solved it by using Loose Offset, which also means that the loft works much better because both the interior and the exterior curve have identical topology (see attached). Do note that Loose Offset does not guarantee an offset accurate to within document tolerance, it only moves the control-points.
--
David Rutten
david@mcneel.com…
ly planes instead of lines, so there is no equivalently elegant and orderly branching structure in there made from lines. You only get the mostly triangulated truss which is much tighter, shown here in blue in the 2D version:
If you only sparsely populate those truss points, you don't have as much triangulation and you do get more of a natural bone look, but you lose the orderly branching that I was so excited about in 2D. Also, since hexagons pack 2D space perfectly, the 2D case does create a lot of good areas of hexagons, but in 3D there is no similarly symmetrical space-filling object except a cube, and cubes are not what Voronoi emulates at all. If the 2D case branches with three lines per vertex, then the 3D case could ideally branch with 4 lines per vertex, just like the atomic structure of diamond. I was hoping for that, naively, but am now discouraged. A surface-adaptive diamondoid lattice is a long way off, it seems. Without the Voronoi relaxation cycles, just distorting an existing lattice and somehow merging it to the surface locally won't even out well.
Diamond is also a very specific structure, not amenable to fractal-like branching, so I'm not even sure what the 3D equivalent of such branching is, or whether there is an orderly system. "Branching" is the wrong concept anyway, since these structures both branch and join together again, forming cells. Pure branching that ends at the surface is not going to come out of Voronoi.
http://www.grasshopper3d.com/photo/stochastic-fractal
Here I have created a better surface-adaptive 3D Voronoi by using my 2D strategy of mostly moving the vertices already near the surface and largely leaving the deeper ones alone, so I no longer get a blank hole in the interior but still get plenty of surface density:
…
Added by Nik Willmore at 2:01am on August 16, 2015
I guess I'd try creating a mesh from those points. Tetgen only accepts a mesh.
However, there are advanced flags that could be changed by editing the Python code, which is fairly straightforward as far as Python goes.
http://wias-berlin.de/software/tetgen/1.5/doc/manual/manual.pdf
There's a way to add new points (the -i flag), indeed, but it doesn't override the existing ones, and it adds tetrahedron points anywhere within the volume. It seems this requires its own separate .node file?
There's also a way to specify region attributes, (-A and -a) that I don't yet understand, as to whether it requires its own file or is somehow part of a full mesh input file alternative to the normal STL file that Tetgen reads. I'm creating an STL file from Python to make the script work and that's the only file I'm creating for Tetgen, so far.
5.2.2 .poly files
A .poly file is a B-Rep description of a piecewise linear complex (PLC) containing some additional information. It consists of four parts.
Part 4 - region attributes list
The optional fourth section lists regional attributes (to be assigned to all tetrahedra in a region) and regional constraints on the maximum tetrahedron volume. TetGen will read this section only if the -A switch is used or the -a switch without a number is invoked. Regional attributes and volume constraints are propagated in the same manner as holes.
One line: <# of regions>
Following lines list the regions, one per line:
<region #> <x> <y> <z> <region attribute> <region volume constraint>
...
If two values are written on a line after the x, y and z coordinates, the former is taken as a regional attribute (applied only if the -A switch is selected) and the latter as a regional volume constraint (applied only if the -a switch is selected). It is possible to specify just one value after the coordinates; it can serve as both an attribute and a volume constraint, depending on the choice of switches. A negative maximum volume constraint makes it possible to use the -A and -a switches without imposing a volume constraint in this specific region.
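Reading Part 4 literally, a minimal region section for one region seeded at (0.5, 0.5, 0.5), with regional attribute 1 and a maximum tetrahedron volume of 0.01, would look something like this (my untested reading of the manual; the coordinates and numbers are made up, and TetGen would need to be run with -A and/or -a for them to take effect):

```
# Part 4 - region attributes list
1
1  0.5 0.5 0.5  1  0.01
```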
Yeah, the manual sucks. I'm confused even about what the workflow is, and which files are outputs versus extra input files that Tetgen reads.
I basically have no idea what any of this means. What's the workflow for specifying a region's target tetrahedron maximum volume, and is that even possible?…