about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that stayed valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
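A minimal sketch of what I mean by a "unit test for a cluster": sweep a cross-section of input parameters and flag the combinations where the definition fails. The parameter names and ranges are made up, and evaluate() is a hypothetical stand-in for running the cluster with those inputs.

import itertools

widths  = [0.5, 1.0, 2.0, 5.0]
heights = [0.5, 1.0, 3.0]
counts  = [3, 7, 20]

failures = []
for w, h, n in itertools.product(widths, heights, counts):
    try:
        result = evaluate(width=w, height=h, count=n)   # hypothetical cluster call
        assert result is not None                       # or any stricter check you like
    except Exception as e:
        failures.append(((w, h, n), e))

print("{0} failing parameter combinations".format(len(failures)))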
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen which compile your project while you type it. I know that threading within components and at the RhinoCommon level is a freaking hard problem that has been discussed at length already (although when it's finished, 5-10 years from now, it will be very cool).
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver were a little more atomic and worked like a server? A GH file is just a list of jobs to do, with the order of the jobs and the information needed to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file could be exploited (not within components, but rather by solving non-interdependent branches of components simultaneously). That type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
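Just to make that concrete, here's a conceptual sketch in Python (not Grasshopper's actual solver): treat the definition as a dependency graph of jobs and let non-interdependent branches run at the same time. solve(job) is a hypothetical callback that computes one component.

from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def solve_graph(depends_on, solve):
    """depends_on: {job: set of upstream jobs}; solve(job): computes one job."""
    remaining = set(depends_on)
    done = set()
    running = {}
    with ThreadPoolExecutor() as pool:
        while remaining or running:
            # launch every job whose upstream jobs have already been solved
            ready = [j for j in remaining if depends_on[j] <= done]
            if not ready and not running:
                raise ValueError("cyclic dependency in the definition")
            for j in ready:
                remaining.discard(j)
                running[pool.submit(solve, j)] = j
            finished, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in finished:
                fut.result()                 # re-raise any component error
                done.add(running.pop(fut))
    return done

# e.g. two independent branches B and C, both depending on A, joined by D:
# solve_graph({"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}}, my_solve)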
I was imagining a couple of scenarios:
a) Writing a parallel module: solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point you could connect up to some of the already calculated parameters and write something in parallel to the main trunk which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze a chunk of downstream code, make modifications so you can observe the effects more quickly, unfreeze a bit more, and repeat until you're done - then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just absorb this need? Since the vast majority of the time GH spends solving is on files under active development rather than on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if the final run took 15 seconds or 15 hours.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK. From a user point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: the simple percentage version for unpredictable code and, for those of us able to calculate the time our algorithm takes from the number of input parameters, a countdown in seconds or minutes or whatever.
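For what it's worth, a rough sketch of that "percentage tick" from inside a long-running GhPython component today would be pushing a percentage into the component's Message label, something like the following. Note that while the solver still blocks the UI the repaint may only show up between solutions; 'items' and the per-step work are placeholders.

items = range(1000)
steps = len(items)
for i, item in enumerate(items):
    # process(item)                              # hypothetical per-step work goes here
    if steps and i % 50 == 0:
        ghenv.Component.Message = "{0}%".format(int(100.0 * i / steps))
ghenv.Component.Message = None                   # clear the label when finished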
I think a good place to start with these sort of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
----------------------------------------------------------------------------------------------
1)
Hi Clemens, I've analysed a plate structure using Karamba and wanted to do a convergence analysis of the results computed as a function of the number of elements.
Now, when looking strictly at the result magnitudes of internal energy (IE) and maximum displacement (w_max), it's acceptable that their relative deviations are very small. But I cannot explain the trends of their graphs. From what I know, FEM should always compute underestimated results when compared to analytical solutions, so I don't understand why both the IE and w_max seem to be decreasing as the number of elements increases.
But my main concern is the behaviour of the peak moment: it seems to simply keep climbing until suddenly a singularity kicks in. I initially wanted to use the peak moment as a fitness value for optimisation, but with this behaviour I don't think that would make sense. I've attached my GH file as well.
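For reference, this is roughly the refinement loop behind my graphs, with analyze(n) as a hypothetical stand-in for re-meshing the plate with n elements and running the Karamba solve; it is assumed to return the internal energy, maximum displacement and peak bending moment.

element_counts = [50, 100, 200, 400, 800, 1600]
results = []
for n in element_counts:
    ie, w_max, m_peak = analyze(n)    # hypothetical remesh + solve step
    results.append((n, ie, w_max, m_peak))

for n, ie, w_max, m_peak in results:
    print("{0:>5} elements  IE={1:.4g}  w_max={2:.4g}  M_peak={3:.4g}".format(
        n, ie, w_max, m_peak))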
It would be much appreciated if you could enlighten me on these subjects. Cheers, Daniel Andersen
2)
Hi Daniel,
I could not run your definition because I do not have all the plug-ins you use installed.
You are basically right that the displacement should increase with a finer mesh. However, the result of the shell analysis also depends on the shape of the triangles (well-formed vs. very distorted). In order to test this, I think it would be interesting to use a very simple example (e.g. a rectangular plate with one column) where you can easily control mesh generation. Would you like to start a discussion on this in the karamba group at http://www.grasshopper3d.com/group/karamba?
It is not a good idea to use the bending moment at a singularity for optimization because the result will be heavily mesh dependent. Also real columns do have a certain diameter and modeling them as point supports introduces an error.
Best,
Clemens
3)
oh, and by the way!
Here's some relevant literature on handling peak moments: https://books.google.dk/books?id=-5TvNxnVMmgC&pg=PA219&lpg=PA219&dq=blaauwendraad+plates+and+fem&source=bl&ots=SdDcwnrSA1&sig=6HulPmKNIhqKx4_rGxitteMC4CU&hl=da&sa=X&ved=0CDEQ6AEwA2oVChMIg66k0LPaxgIVgY1yCh1KPAeY#v=onepage&q=chapter%2014&f=false (Blaauwendraad, J., 2010. Plates and FEM: Surprises and Pitfalls; see Chapter 14). It would be great if a feature for dealing with peak moments could be incorporated into Karamba. In my work, I ended up exporting my models to Robot in order to verify the moment values. Best, Daniel
4)
Hi Daniel,
Thank you for your reply and the link to Blaauwendraad's excellent book!
At some point I hope to include material nonlinearity in Karamba, which will help in dealing with stress singularities.
If you want you could open a discussion with a title like 'moment peaks in shells at point-supports'. Then we could copy and paste the text of our conversation into it.
Best,
Clemens
----------------------------------------------------------------------------------------------…
present Digital Process: Generative Design Technologies Workshop; a specialized workshop that will take place in 4 of the most important cities of the Mexican Republic [Puebla] [Mexico DF] [Guadalajara] [Leon] in January and February 2012. http://gendesigntech.wordpress.com/
Aimed primarily at architects, industrial designers, interior designers, urban planners, digital artists, students and design-related professionals, this workshop aims to give participants the knowledge and technological resources to develop the elements of a project completely, from conception through to application.
Drawing on a powerful and flexible set of platforms, participants will learn to generate, analyse and rationalise complex morphologies, free organic forms and advanced computational algorithms, as well as to produce photorealistic visualisations applicable to a wide range of design projects.
Over 5 days of intense work, exploration and feedback, participants will be guided towards a more dynamic workflow that lets them exploit the full potential of the tools and strengthen their skills, aptitudes and capabilities.
Instructors:
Leonardo Nuevo Arenas [Complex Geometry]
José Eduardo Sánchez [DesignNest]
Daniel Camiro/Luis de la Parra [Chido Studio]
http://issuu.com/chidostudiodiseno/docs/digprowork
See the programme here: http://gendesigntech.wordpress.com/program/
To register, please visit: http://gendesigntech.wordpress.com/registro…
Design Studio Professor Francisco Arqués Soler, an expert in the subject; once the generative decisions of the project have been explored and programmed, the group will serve as a laboratory for investigating mutations of it.
http://dpa-etsam.com/iam/iam-cursos
https://www.facebook.com/iamadridETSAM?fref=ts
We will work on the Grasshopper visual programming platform: we will break open the principles of its structure (using Millipede), explore bioclimatic conditions (using Ladybug+Honeybee), and navigate form-finding processes and generative variations (using Galapagos and Octopus) and their conversion to BIM (using Chamaleon, Lyrebird and Visualarq, since on the final day we will program the export of the prototype to Revit).
NOTE: the course will take place in the second half of this OCTOBER, although the SCHEDULE is still open and will be settled collectively by the wonderful small group of chosen ones who end up signing up; it is usually TUESDAYS AND THURSDAYS, FROM 16:00 TO 18:00.
Official UPM certificate of Expert in Visual Programming, with the corresponding credits (in this case 2.5).
iAM | Instituto de Arquitectura de Madrid <iamadrid.arquitectura@upm.es> +34 91 336 6537 / 6589…
erona, on 01, 02 and 03 December 2016.
Visual comfort and the management of natural light in relation to energy saving are becoming increasingly important for innovative building design. For example, the new LEED 4 protocol awards credits for daylighting simulations and confirms the importance of design aspects that "connect occupants with the outdoors, reinforce circadian rhythms, and reduce electrical energy consumption for artificial lighting by introducing natural light into spaces". Without lighting-simulation software it is not possible to obtain high-quality results. Radiance is a validated piece of software, used both in research and by practising designers, and is among the most accurate tools for professional simulation of natural and artificial light. It has no limits on geometric complexity and is well suited to integration with other calculation software and graphical interfaces. The latter simplify the scripting procedures; the main and most versatile ones will be covered in the course (DIVA4Rhino and Ladybug+Honeybee, plug-ins for Grasshopper and Rhinoceros 3D).
The course is aimed at designers and researchers who want to acquire practical tools for simulation with Radiance, in order to develop and verify the solutions best suited to their own needs. Theory and practice sessions are planned, with examples and exercises covering the topics in a demonstrative and interactive way.
Applications for enrolment must be submitted by 16 November 2016.
The brochure with the course contents and all further information is available at this link
The course is sponsored by Glas Müller.…
etra - UNESCO world heritage sites. The course includes technical tutorials in cutting-edge software, a design component, and a guest lecture series.
The Visiting School is an extraordinary travel, learning and networking experience, as well as a credential for graduate studies and career opportunities. The course is focused on speculative architecture for MARS; the Wadi Rum desert is reminiscent of Martian landscapes and is where Ridley Scott's 'The Martian', starring Matt Damon, was filmed. The course will focus on novel techniques in design and fabrication at multiple scales: material, architectural and urban.
Contact jordan@aaschool.ac.uk or visit www.aavsjo.com for registration and details. Accommodation is available at the Antika Hotel in shared twin rooms with other participants at a discounted rate of $300 for the entire course, including breakfast and wifi (June 23 to July 4). The hotel is in Jabal Amman, within walking distance of our venue.
Instructors:
Kais Al-Rawi
Julia Koerner
Marie Boltenstern
Mazen AlAli
Barry Wark
Andreas Körner
Guest Speakers:
Rob Mueller, NASA | Swampworks
Julia Koerner, JK Design | UCLA AUD
Full Guests to be announced.
Past Keynote Speakers:
Ross Lovegrove
Ben Aranda
Mark Foster Gage
…
. BIM and Parametric.
Posts and files over at Design By Many:
http://www.designbymany.com/content/model-pattern-american-cement-building
I am equally comfortable on both of these platforms, and built the same parameters into each model. My modeling experience was very similar to that of Santiago. The Revit model took 4 hours to build, while the GH definition took 16 hours to build. Time invested is certainly not the only metric to compare; however, it is a good demonstration of the immediacy with which modifications can be made to the component system when a parameter adjustment is not satisfactory.
With credit to Andrew Kudless for his process work on Manifold, I have adapted a similar workflow tracing diagram to the two models:
My general observation is that both tool sets approach the same problem, namely providing a structured relationship between components and wholes, but from opposing directions. BIM excels at compartmentalizing individual components, while parametric modelers like GH excel at global, system-wide manipulations.
In the case of the American Cement Building, modeling the cast component fit best into the box of 'the whole being reducible to its parts'. Although I anticipated Revit having more trouble with the surface generation, I found it to be more flexible in all respects. Building up the component in a Pattern Based Curtain System family, the direct interaction with the rig (specifying control point work planes and offsets) kept the network of interactions accessible and editable throughout the build process. This family was then applied to a curtain panel grid, which itself could be flexed in proportion and cell count.
With the GH build I originally intended to use data trees for parallel component construction, so that changes to the base grid would affect offset normals and the like. However, after I had spent three hours constructing one parametric rail curve, I was unable to keep track of the parallel data structure any longer and reverted to building a singular component. While GH certainly has the capacity to handle this task, I have found personally that the user does not.…
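A minimal GhPython sketch of the kind of parallel data-tree construction I gave up on: one branch per cell of the base grid, so a change to the grid propagates to every component at once. 'base_points' (a list of lists of points) and the fixed offset below are stand-ins for the real rail/offset logic.

from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path
import Rhino.Geometry as rg

rails = DataTree[rg.Point3d]()
for i, cell in enumerate(base_points):                 # one branch per grid cell
    path = GH_Path(i)
    for pt in cell:
        rails.Add(pt + rg.Vector3d(0, 0, 1.0), path)   # stand-in for the offset normals

a = rails   # tree output of the GhPython component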
between the two. A simple example would be if you plug Integer data into a Text parameter. It's perfectly possible to create a piece of text which represents the integer. I.e. the value 18 becomes the text "18".
It's also possible to convert a floating point number to text, although in that case the conversion is not lossless, as the text only shows a limited number of decimals, thus rounding the actual numeric value.
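A plain Python illustration of the lossless vs. lossy conversions described above (Grasshopper's own casting rules are not reproduced here, this is just the idea):

i = 18
print(str(i))                # "18" - converting back gives exactly 18

x = 1.0 / 3.0
s = "{0:.6f}".format(x)      # "0.333333" - the text only keeps a few decimals
print(float(s) == x)         # False: the numeric value was rounded, so not lossless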
In your specific case here, you have connected a Curve parameter output with the Loft Options input. Loft options are about the type of loft, whether or not to rebuild/refit the resulting loft surface and -if so- what sort of tolerance to use.
If you look at the tooltips of the input parameter for the Loft component, you'll see that the first one takes all the section curves and the second one takes the options to be used to make the loft. You'll have to put all your curves into the first input:
This can be accomplished by holding SHIFT while making the second connection.
However, this will generate a new problem. Loft operates on a list of curves, and for each list of curves you provide it will try to create a single loft. But if you merge the two curve streams, you'll sometimes get lists of 4 curves, which is probably not what you want.
At any rate, Loft is probably not what you want in the first place, as an offset curve (especially one with kinks) will result in incredibly messy lofts. I'd recommend Boundary Surface as an alternative, but that will generate trimmed surfaces, which may not be acceptable for you.
Now then, on to the Offset failure. Curve offsetting is a planar operation. By default, the plane in which Offset works is the world XY plane. Your curves are all perpendicular to the world XY plane, so that is already problematic. The fix would be easy (plug the curves also into the Offset P input), were it not that one of your section curves is wonky. This is probably either due to a bug in the Rhino Brep|Plane intersector or it's a problem with the input Brep. Either way, I could not get one of the curves to offset correctly, no matter what I tried.
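In RhinoCommon terms, the "Offset P" fix amounts to something like the following GhPython sketch: offset each section curve in its own plane instead of the default world XY plane. 'curves' and 'distance' are assumed inputs, and, as noted above, one wonky section curve may still refuse to offset no matter what.

import Rhino
import Rhino.Geometry as rg

tol = Rhino.RhinoDoc.ActiveDoc.ModelAbsoluteTolerance
offsets = []
for crv in curves:
    ok, plane = crv.TryGetPlane()          # the plane the planar section curve lies in
    if not ok:
        offsets.append(None)               # non-planar / wonky curve: skip it
        continue
    res = crv.Offset(plane, distance, tol, rg.CurveOffsetCornerStyle.Sharp)
    offsets.append(res[0] if res else None)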
In the end I solved it by using Loose Offset, which also means that the loft works much better because both the interior and the exterior curve have identical topology (see attached). Do note that Loose Offset does not guarantee an offset accurate to within document tolerance, it only moves the control-points.
--
David Rutten
david@mcneel.com…
ly planes instead of lines, so there is no equivalently elegant and orderly branching structure in there made from lines. You only get the mostly triangulated truss which is much tighter, shown here in blue in the 2D version:
If you only sparsely populate those truss points, you don't have as much triangulation and you do get more of a natural bone look, but you lose the orderly branching that I was so excited about in 2D. Also, since hexagons pack 2D space perfectly, the 2D case does create a lot of good areas of hexagons, but in 3D there is no similarly symmetrical space-filling object except a cube, and cubes are not at all what Voronoi emulates. If the 2D case branches with three lines per vertex, then the 3D case could ideally branch with four lines per vertex, just like the atomic structure of diamond. I was hoping for that, naively, but am now discouraged. A surface-adaptive diamondoid lattice is a long way off, it seems. Without the Voronoi relaxation cycles, just distorting an existing lattice that has somehow been merged to the surface, locally where needed, won't even out well.
Diamond is also a very specific structure, not amenable to fractal-like branching, so I'm not even sure what the 3D equivalent of such branching is, or whether there is an orderly system at all. "Branching" is the wrong concept anyway, since these structures both branch and join together again, forming cells. Pure branching that ends at the surface is not coming out of Voronoi.
http://www.grasshopper3d.com/photo/stochastic-fractal
Here I have created a superior surface-adaptive 3D Voronoi by using my 2D strategy of moving the vertices near the surface a lot while mostly leaving the deeper ones alone, so I no longer get a blank hole in the interior but I do get plenty of surface density:
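Sketched in code, the weighting idea is roughly this (RhinoCommon / GhPython; 'points', 'brep' and the relax_vector() callback are hypothetical placeholders): points near the skin move a lot, deep interior points barely move at all.

import math
import Rhino.Geometry as rg

def adaptive_move(points, brep, relax_vector, falloff=5.0):
    moved = []
    for pt in points:
        cp = brep.ClosestPoint(pt)                  # nearest point on the boundary surface
        w = math.exp(-pt.DistanceTo(cp) / falloff)  # ~1 near the skin, ~0 deep inside
        moved.append(pt + relax_vector(pt) * w)     # scale each relaxation step by w
    return moved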
…
Added by Nik Willmore at 2:01am on August 16, 2015
I guess I'd try creating a mesh from those points. Tetgen only accepts a mesh.
However, there are advanced flags that could be changed by editing the Python code, which is fairly straightforward as far as Python goes.
http://wias-berlin.de/software/tetgen/1.5/doc/manual/manual.pdf
There's a way to add new points (the -i flag), indeed, but that doesn't override the existing ones, and it adds tetrahedron points anywhere within the volume. This requires its own separate .node file, it seems?
There's also a way to specify region attributes (-A and -a) that I don't yet understand, as to whether it requires its own file or is somehow part of a full mesh input file that replaces the normal STL file Tetgen reads. I'm creating an STL file from Python to make the script work, and that's the only file I'm creating for Tetgen so far.
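For reference, the STL I feed Tetgen is just a minimal ASCII export along these lines (a sketch; 'mesh' is assumed to be an already triangulated Rhino.Geometry.Mesh):

def write_stl(mesh, path):
    mesh.FaceNormals.ComputeFaceNormals()
    with open(path, "w") as f:
        f.write("solid tetgen_input\n")
        for i, face in enumerate(mesh.Faces):
            n = mesh.FaceNormals[i]
            f.write("  facet normal {0} {1} {2}\n".format(n.X, n.Y, n.Z))
            f.write("    outer loop\n")
            for vi in (face.A, face.B, face.C):      # triangulated mesh, so D is ignored
                v = mesh.Vertices[vi]
                f.write("      vertex {0} {1} {2}\n".format(v.X, v.Y, v.Z))
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid tetgen_input\n")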
5.2.2 .poly files
A .poly file is a B-Rep description of a piecewise linear complex (PLC) containing some additional information. It consists of four parts.
Part 4 - region attributes list
The optional fourth section lists regional attributes (to be assigned to all tetrahedra in a region) and regional constraints on the maximum tetrahedron volume. TetGen will read this section only if the -A switch is used or the -a switch without a number is invoked. Regional attributes and volume constraints are propagated in the same manner as holes.
One line: <# of region>
Following lines list # of region attributes:
<region #> <x> <y> <z> <region number> <region attribute>
...
If two values are written on a line after the x, y and z coordinate, the former is assumed to be a regional attribute (but will only be applied if the -A switch is selected), and the latter is assumed to be a regional volume constraint (but will only be applied if the -a switch is selected). It is possible to specify just one value after the coordinates. It can serve as both an attribute and a volume constraint, depending on the choice of switches. A negative maximum volume constraint allows to use the -A and the -a switches without imposing a volume constraint in this specific region.
Yeah, the manual sucks. I'm confused about even what the workflow is, and about which are output files versus extra input files that Tetgen reads.
I basically have no idea what any of this means. What's the workflow for specifying a region's target tetrahedron maximum volume, and is that even possible?…
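My current guess at the workflow, sketched in Python: append a Part-4 region section to the .poly file (a seed point inside the region, an attribute, and a maximum tetrahedron volume) and run tetgen with -A and -a. The file name, seed point and volume below are made up, and the earlier parts of the .poly (nodes, facets, holes) are assumed to already be in the file.

import subprocess

regions = [
    # (x, y, z, region_attribute, max_tet_volume) - one seed point per region
    (0.0, 0.0, 0.5, 1, 0.01),
]

with open("model.poly", "a") as f:
    f.write("{0}\n".format(len(regions)))              # Part 4: number of regions
    for i, (x, y, z, attr, vol) in enumerate(regions, 1):
        f.write("{0} {1} {2} {3} {4} {5}\n".format(i, x, y, z, attr, vol))

# -p: read the PLC from model.poly, -q: quality meshing,
# -A: apply the region attributes, -a: apply the per-region volume constraints
subprocess.call(["tetgen", "-pqAa", "model.poly"])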