…digital fabrication in laser cutting, CNC cutting, 3D printing, and parametric modeling.
This third workshop teaches the fundamentals of parametric modeling and some basics of digital manufacturing.
INCOMING STUDENT PROFILE:
Designers, architects, and artists with Rhinoceros experience who are interested in starting parametric modeling with Grasshopper for basic digital fabrication.
OUTGOING STUDENT PROFILE:
Students will finish with the knowledge and criteria to develop parts or projects using digital fabrication, improving and speeding up their workflows, as well as the fundamental criteria of parametric-generative modeling.
Parametric modeling workshop with Grasshopper
Interface
Data management
Volatile data
Persistent data
Ranges and domains
Attractors
Lists and Cull
Modeling by Layer Object
Basic analyses
Curve connection
Surfaces
Surface analysis
Basic paneling
Linking with Excel
Generative modeling
Dates: February 8 to March 1
Days: Saturdays
Hours: 10 am to 3 pm
Sessions: 4 sessions of 5 hours each
Duration: 20 hours
Price: $3,000.00…
stributes structural supports for a uniformly loaded domain, using e.g. the internal energy of the loaded domain as fitness. Here the uniformly loaded domain is represented by the trimmed surface. My genomes are the support positions (green crosses), which are restricted to a set of predefined grid points. I'm currently using an (i,j)-coordinate indexing for these grid points (illustrated in the viewport just below), as opposed to a sequential, "one-dimensional" numbering (illustrated in the viewport further down).
(i,j)-indexing system
Alternative, sequential indexing system
The support positions are computed by two gene pools: one governing the i-index, Gene List {i}, and one governing the j-index, Gene List {j}, of each support. The value of slider 0 in Gene List {i} is paired with the value of slider 0 in Gene List {j}, etc., and the number of sliders corresponds to the number of supports. The screenshot below depicts the slider constellation corresponding to the support distribution depicted above. Unfortunately, the j-index represented in the sliders needs remapping, as the number of j-indices varies for each i-index (horizontal row of grid points). With the current setup I have 12^6 x 9^6 = 1.6 x 10^12 different genomes. If I were to use the sequential, "one-dimensional" numbering, I would use only one gene pool with sliders ranging from 0 to 76, meaning that remapping could be avoided and there would be only 76^6 = 1.9 x 10^11 different genomes.
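To make the two encodings concrete, here is a minimal plain-Python sketch of the idea (the grid data and row lengths below are dummy placeholders, not my actual definition):

```python
# Dummy grid: 12 horizontal rows with varying numbers of points,
# standing in for the grid points on the trimmed surface.
rows = [[(i, j) for j in range(9 - (i % 3))] for i in range(12)]

def support_from_ij(i_gene, j_gene):
    """(i,j) scheme: two sliders per support. The j slider is normalized
    to [0,1] and must be remapped per row, because each row holds a
    different number of grid points."""
    i = int(round(i_gene))
    j = int(round(j_gene * (len(rows[i]) - 1)))  # the remapping step
    return rows[i][j]

# Sequential "one-dimensional" numbering: one slider per support.
flat = [pt for row in rows for pt in row]

def support_from_sequential(k_gene):
    """No remapping, but neighbouring IDs can sit far apart on the
    surface wherever the numbering jumps to the next row."""
    return flat[int(round(k_gene))]

print(support_from_ij(3, 0.5), support_from_sequential(20))
```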
So, my current genome setup causes a bunch of issues related to the Evolutionary Solver:
Remapping: Changing one of the j-index sliders will not necessarily change the related support position, yet it will still cause another genome to be calculated by the solver. (This problem could be eliminated by using the sequential, "one-dimensional" numbering.)
Switching slider values around: If the values of, e.g., slider 0 were switched with the values of slider 5, this would again yield a new genome but an identical solution. (This problem cannot be eliminated by using the sequential, "one-dimensional" numbering.)
Coincident support positions: Two or more supports may be located at the same position. (This problem cannot be eliminated by using the sequential, "one-dimensional" numbering either; see the sketch below.)
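One way to at least detect the last two peculiarities outside the solver is to reduce each genome to a canonical form, e.g. for caching fitness values. A minimal sketch, assuming the supports have already been resolved to grid points (the `evaluate` callback is a placeholder for the actual structural analysis):

```python
def canonical_key(supports):
    """Map a genome to a canonical key: sorting removes the
    order-dependence of slider swaps, and the set collapses
    coincident support positions."""
    return tuple(sorted(set(supports)))

fitness_cache = {}

def cached_fitness(supports, evaluate):
    """Evaluate a genome only once per distinct solution."""
    key = canonical_key(supports)
    if key not in fitness_cache:
        fitness_cache[key] = evaluate(key)
    return fitness_cache[key]

# Example: two permuted, partly coincident genomes share one evaluation.
f = lambda key: len(key)  # placeholder fitness
print(cached_fitness([(0, 1), (2, 3), (2, 3)], f))
print(cached_fitness([(2, 3), (0, 1), (0, 1)], f))
```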
I find it impossible to imagine the fictive "fitness landscape" of this problem, not only because of the multidimensional genome but just as much because of these listed, intertwined peculiarities. I've tried running the Simulated Annealing Solver as well, but my experience is that the Evolutionary Solver yields better results. To my awareness, the solver uses some kind of topographical proximity search. This is why I think the solving process itself benefits more from analysing the (i,j)-index system, in which neighbouring grid points hold more uniform topographical information, than the sequential, "one-dimensional" numbering, which might have big ID-numbering gaps between neighbours. Have I understood this correctly?
Cheers…
ack to .ghx?
This is in relation to a discussion I've been having with David Rutten & Scott Davidson about GH consuming memory in a relatively large GH definition (~7 MB). I think what I've learned from this is that one should limit the size of the GH file, or put some incremental stops in the definition to limit the length of the calculations it runs at once. Is this a valid conclusion?
The GH file we're talking about is 7 MB & the Rhino file is about 120 MB, but when working w/ the GH def. I try to keep only about 2 curves turned on.
Here's a summary of the discussion:
Hi Mike,

Thanks for sending it over. I've been fiddling with the file for about 10 minutes and it climbed from 1.7 GB to 1.9 GB, but then I've been switching previews on, which means more meshes get calculated, so you'd expect a higher memory consumption. It is possible we're leaking memory, but if you're working for hours on end, memory fragmentation might also explain part of the increase. Basically, memory gets fragmented just like disks get fragmented after prolonged use; the difference is that memory cannot be defragmented unless you restart the application and allow it to start with a clean slate. I'll try and find any leaks we may have missed in the past.

Goodwill,
David
──────────── David Rutten
On 09/03/2011 06:19, Mike Calvino wrote:
Thanks very much David for the quick response. I've attached the files zipped. I can't figure out what's doing it. After working in the file for a while, the memory usage in the Windows Task Manager climbs . . . it got to 1.57+ GB before I exited GH & Rhino5Wip, let it dissipate, then restarted & worked for a while before it did it again. It probably takes 4 or 5 hours before it gets that high. That's the highest it's gotten, & that only happened while I was working in a Rhino file that had all of the elements baked into it - turned off at least, but it still climbed to 1.57+ GB. It seems to climb when you work in the file & move around in both the GH def. & the Rhino file. If you turn on a few of the Extr components at the right end of the "StandareRibExtuder" groups, you can watch the MemUsage go up fast, but when you turn them off, it does not go down. Maybe I need to figure out how to do the definition with fewer components; I'm sure that's part of it, but I must confess I think I'm still early on in the learning curve.

I really hope that this is not operator error on my part & I apologize up front if it is. I have done a disk cleanup, and I have tried excluding .3dm & .ghx files from my NOD32 antivirus - no change. I hope you can find something. Let me know if you have any trouble with the files. See if you find anything & please let me know . . . thanks!

Cheers!
--Mike Calvino
Calvino Architecture Studio, inc.
www.calvinodesign.com
…
are invisible in the picture.
So what you see is a common band that has lost all the characteristics of the original, in order to protect the process.
We also did an "invisible setting" prototype which has built-in flexibility.
If you are in the jewelry industry you will know what I mean: it is close to a miracle.
It's a shame I cannot share details, which is why I am planning my next major work on something at least 10 times more complex than this.
It will be for my own business and for the jewelry industry as well.
I hate to tease people and then not be able to produce anything more than an image.
But I thought it would be better than nothing, at least for jewelry designers, so they can see that there are more and more users, that complexity is not something to shy away from, and that the time spent is worth it, because the returns on production are far larger than on special orders; this is why GH is useful.
We can usually design a piece of jewelry in less than 1 hour, in which case GH is not really worth the time.
But for production with so many variables (finger sizes controlling most of the outcome, together with stone sizes, etc.), GH is a MUST!
I really appreciate everyone's comments and suspicions and I understand why.
99% of the people out there do not really understand the complexity of jewelry at the industrial level. It's not just the form but the post-production that's the killer.
This industry is still a hybrid of technology and art, and due to the lack of old-school pros we unfortunately face very lousy and unpredictable execution in post-production (after the casting process). This leaves you with a design process and intention that require a lot of control over every possible variant of the object.
One wrong design aspect is multiplied thousands of times at the benches (for every single piece) = bad profits!
It sounds more serious than it is, but very few companies are willing to do so (delivering a good product vs. low quality; this also happens because the consumer is no longer aware of the difference. So those who do keep up quality do it only out of integrity, third-party QA, or just pride).
This is why GH is invaluable, and why that def looks out of proportion for such a (visually) simple band.
It is because there are dozens and dozens of variables affecting everything else. In fact, it is not even complete as it stands; it covers only the most critical variables rather than everything.
Sorry for the long replies. I am an instructor and have been a professional jeweler by trade since I was very young, and I love to teach, so I overflow with explanations... and components :)).
Next time it will be "in the open" as they say...…
merely automates finding clear intersections between pairs of objects, then splits the objects along those intersection *curves*, deletes the trims, joins the remains, and cycles on. But within the confusing Rhino settings tolerance value, wherever surfaces actually just sort of come close together, there *is* *no* clear intersection curve. So it bugs out and stops working EVERY time you try more than a dozen or two spheres.
Some software can do this by switching to volumetric pixels (voxels). The $9K-$30K Geomagic Freeform is an example. It also fails sometimes, often due to memory issues, as you can imagine, since it needs to fill all the inner space of each sphere definition with 3D pixels.
Materialise Magics, for $16K, can often handle such Booleans well. It will take a seeming lifetime to figure out such (often pirated) software kludges, though.
One thing you can try though is to simply drape a mesh or NURBS plane onto the top of your spheres.
There's a well known *reason* your Booleans are failing, and nobody here has yet even hinted at it:
The main reason is that the Rhino/Grasshopper developers don't care about the human element. The math exists to make this work very fast, every time. It just has to join things *right*, incorporating human knowledge of kissing surfaces, instead of acting stupidly, like some pocket calculator. But that would involve hacks that make 99% of complex Booleans work instead of 10%, and we can't have that, since it would be SLOWER for the other 1% that happen to have no nearly kissing or exactly kissing surfaces.
You could also use the new Cocoon plugin to create a surface *around* your structures, with a given radius of extension beyond the spheres, then offset that surface back by the same radius. That is 100% robust, but won't offer quite as sharp intersections; they come out more rounded, like most everybody wants anyway.
You can *test* Boolean failures by running a Grasshopper intersection command to see the intersection curves, then zooming in to see how bad many of them are: knotted, twisted, or, very often, with gaps.
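A minimal GHPython sketch of that test, assuming the sphere Breps come in through a list input `breps` (the tolerance value is a placeholder; `Intersection.BrepBrep` is the standard RhinoCommon call):

```python
import Rhino.Geometry as rg

tol = 0.001  # placeholder; use your document's absolute tolerance
curves_out = []
bad_pairs = []

# Intersect every pair of Breps and flag pairs whose intersection
# curves come back open (knotted/gappy curves are what break Booleans).
for i in range(len(breps)):
    for j in range(i + 1, len(breps)):
        ok, curves, points = rg.Intersect.Intersection.BrepBrep(
            breps[i], breps[j], tol)
        if not ok or not curves:
            continue  # no clear intersection (separate or merely kissing)
        for c in curves:
            curves_out.append(c)
            if not c.IsClosed:
                bad_pairs.append((i, j))

a = curves_out  # preview these and zoom in on the flagged pairs
b = bad_pairs
```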
It's a math problem nobody at McNeel wants to solve, sorry.
Just write a check for $25K and spend six months taking notes, like I did, and you can finally merge your simple spheres.…
Added by Nik Willmore at 6:33pm on October 20, 2015
Loop'. The fun part of the slower version is that you can see what it's doing while it's running. 'Fast Loop' gives no indication that it's working, so you want to test it with small numbers and be sure it's coded properly before bumping the iteration count up.
The GH profiler running the slow version showed between 1 and 1.5 seconds per loop, but the reality was more like ~10 seconds per loop toward the end of an 11 X 11 grid, or ~20 minutes total. It's easier to be patient because you know it's working.
The 'Fast Loop' finished the same grid in 1.6 minutes! An impressive improvement. I've been running it on a 30 X 30 grid (900 points) for ~23 minutes so far and see nothing yet. Not the ~12 minutes I had hoped for... Now 36 minutes on this loop for 900 points... hope it's not stuck. Not fast! Later - DONE!! Profiler says 59 minutes for 900 points but it was more like an hour and twenty minutes total. It succeeded, I have a single 'Closed Brep' from 900 extruded rings, baked to Rhino.
Another strategy to explore would be doing 'SUnion' on a smaller grid using the Anemone loop, then replicating the result by moving it as needed to form a larger grid, and running the copies through another 'SUnion' loop. I went ahead and implemented that while waiting. It works and is fast! Started with 3 X 3 and ran the result again as 5 X 5 (9 X 25 = 225 total) in barely ~70 seconds!? Trying 36 X 36 now... 1,296 points appears to have succeeded in less than ten minutes! Though it seems to take quite a while after the loop ends before control is restored to GH/Rhino. I'll let you do your own experiments and benchmarks; a scripted sketch of the same batching idea follows below.
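For anyone who prefers scripting over Anemone, here is a hypothetical GHPython sketch of the same batch-then-merge idea (`x` stands for the list of input Breps; the chunk size and tolerance are placeholders):

```python
import Rhino.Geometry as rg

def chunked_union(breps, chunk=9, tol=0.001):
    """Union small batches first, then union the batch results,
    instead of feeding all 900 breps to one solid-union call."""
    while len(breps) > 1:
        merged = []
        for k in range(0, len(breps), chunk):
            batch = breps[k:k + chunk]
            result = rg.Brep.CreateBooleanUnion(batch, tol)
            # CreateBooleanUnion returns None on failure; keep the
            # batch unmerged so a later pass can retry it.
            merged.extend(list(result) if result else batch)
        if len(merged) >= len(breps):
            break  # no progress, bail out instead of looping forever
        breps = merged
    return breps

a = chunked_union(x)
```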
I encapsulated the loop in a cluster called 'suLoop' (blue groups).
Internal of 'suLoop' cluster:
…
Added by Joseph Oster at 11:14pm on March 22, 2017
nd helpful for me, and I always wanted to know and explore it. I used Galapagos for solving some task, and now I'm writing an article about what I'm doing. I have several questions regarding the algorithm's steps you mentioned (I hope you can answer):
In your explanation you described several options for some parts of the algorithm (how to do the coupling, mutating, etc.). Can you please explain in more detail the parameters, or at least the methods, you used for Galapagos?
To be precise:
What is the population size at the beginning?
5.a) Did you use isotropic, exclusive, or biased coupling? If exclusive, what percentage? If biased, what is the 'vector of weights', or however you implemented that?
5.b) For the implementation, do you have some Gaussian with a peak at the 'inbreeding factor' (which is some number in [0,1], where 0 represents 'incestuous' and 1 represents 'zoophilic', or the opposite)?
5.c) Did you interpolate the values by averaging (i.e. equal weights) or by using preference weights according to fitness?
5.d) I see what you said about the number of sliders; I want to be sure I understand: the mutation here just picks some percentage of the genes (what percentage did you use?) and changes the child's value to a random number in the range of the slider? (A sketch of what I understand follows below.)
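Here is the generic scheme I have in mind, as plain Python; since Galapagos is closed-source, this is only my assumption of the general idea, not its actual code:

```python
import random

# A genome is one normalized value in [0,1] per slider.
def crossover(mum, dad, bias=0.5):
    """Interpolate two parent genomes; `bias` could be fixed (equal
    weights) or derived from relative fitness (question 5.c)."""
    return [m * bias + d * (1.0 - bias) for m, d in zip(mum, dad)]

def mutate(genome, rate=0.1):
    """Re-randomize roughly `rate` of the genes, each within its
    slider's normalized range (question 5.d)."""
    return [random.random() if random.random() < rate else g
            for g in genome]

child = mutate(crossover([0.2, 0.8, 0.5], [0.6, 0.1, 0.9]))
print(child)
```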
Can I change the percentage of individuals from G[n] that are allowed (you said the default is 10%)?
What is the default for this? Is it the first one reached?
Can I specify the maximum number of iterations? Can I specify the number of generations? Can I specify a fitness value to stop at?
Maybe I missed some parameters, but I have seen Galapagos as a "black box". Maybe I missed that it can be adjusted (in the latter case, I would like to know the default values).
I guess it is not open-source code (right?) and maybe you don't want to share it publicly. I would be glad, if possible, to know a bit more at least about the methods, so I can describe them when writing my article, please :) You can also answer me here: naama.glauber@gmail.com…
Added by Naama Glauber at 10:08am on November 14, 2018
ly fabricated interventions and interactive electronic performance art installations in Barra Funda. Along with other experts, these tutors will teach how to use and apply new design technologies, notably Rhino and Grasshopper (and numerous plug-ins including GECO, Galapagos, Kangaroo and RhinoCam); Arduino and Processing; and the use of laser-cutters, rapid-prototype machines and CNC routers and mills.
Alan Dempsey of NEX was selected in 2010 by the Centre for European Architecture/Chicago Athenaeum as one of the 40 most significant architects in the EU under 40. In 2008 he was selected by the British Council as one of the six most significant Design Entrepreneurs. He previously worked with Future Systems, OCEAN and Homa Farjadi. Alan was an AA Unit Tutor and is Director of the AA Independent's Group (www.independentsgroup.net), which facilitates research into the use of computational design and fabrication. Alan has lectured, exhibited and been published worldwide. His work has received a number of awards, including a LEAF award for Spencer Dock Bridge and a D&AD pencil for the [C]space DRL 10 Pavilion.
Robert Stuart Smith of Kokkugia is a Studio Course Master at the AA DRL. Robert previously worked for Lab Architecture Studio and Nicholas Grimshaw & Partners. He focuses on self-organisational systems and developmental growth, pursuing polyvalent and environmentally responsive affect. He leads consultancy for Cecil Balmond on non-linear algorithmic design research. Kokkugia has projects in the USA, UK and Mexico, and is exhibited and published internationally.
Iván Ivanoff is an artist, programmer, and researcher. He searches for new forms of communication for the society of the future and is the director of different Media Labs worldwide. He founded the artistic collaborative i2off.org+r3nder.net, which develops multi-media and interactive projects, and Estado Lateral Media Lab to investigate and develop new technologies.
The Barra Funda district of São Paulo was once characterised by a mix of small industrial, commercial and residential programmes, but, as economic policies have favoured larger production industries, numerous companies have abandoned the area. In response, the workshop proposes the creation of new types of smaller industries offering a mix of both consumption and production, manifested through micro-manufacturing interventions that can co-exist alongside retail and housing. Computational design and digital fabrication could be used to help create these new micro-industries, which in turn will help empower local craftsmen to produce and sell directly to consumers through micro-manufacturing located in small urban workshops.
The workshop will tap into the emergent gallery scene of Barra Funda and local initiatives that use computational technology to introduce a new cultural and economic impetus. The workshop is part of the International Festival of Electronic Language (FILE), an exhibition of interactive electronic technology, and will take these electronic technologies out of the gallery, collaborating with local manufacturers, artists and activists, with the goal of disseminating high-tech yet low-cost and small-scale fabrication systems to promote this new micro-industrial movement. The workshop is open to architecture and design students and professionals worldwide.…
quired)
// Agenda
Parametric design, in the history of architecture, has defined many rules for current designers and future practitioners to follow. One of the strongest aspects prominent in this style is 'geometry'. Arguably, there is nothing new about geometry and aesthetics forming the most prominent aspect of any style or era: the language of any style, in the long history of architecture, is visually defined by geometry or shape, beyond the principles that define the core of the style. In the distinguishable style of parametric architecture, geometry has played and continues to play an integral role. And around this fairly young style, many strings of myths and false notions have grown.
The workshop aims to provide a detailed insight into 'parametric design' and the logics embedded behind it through a series of design explorations using the Rhinoceros & Grasshopper platforms, along with an understanding of data-driven fabrication strategies. An insight into Computational Design and its subsets of Parametric Design, Algorithmic Design, Generative Design and Evolutionary Design will be provided through presentations, technical sessions & studio work, with the highlighted agenda of carrying data into hands-on fabrication of a parametrically generated design. A strong focus will be placed on 'geometry' and 'matter'.
Day 1 Topics / Agenda
Rhinoceros 3D GUI and basic use
Installing Grasshopper & plug-ins
Grasshopper GUI
Basic logic, components, parameters, inputs, numbers, simple geometry, referenced geometry, locally defined geometry, baking, etc.
Lists & Data Tree: management, manipulation, visualization, etc.
Design Experimentations with Geometry & Data
Understanding Data for Manual Fabrication
Day 2 Topics / Agenda
Design Experimentations with Geometry, Form, Matter
Data for effective numbering and strategizing during Manual Fabrication
Collaborative effort for Hands-on ‘making’ process
Analysis & Evaluation of Fabricated Geometry
Documentation
// Tutor(s): Sushant Verma (Architect / Computational Designer / Educator)
…
1. From the Thermal Comfort Indices component, Comfort Index 11 (TCI-11):
MRT = f(Ta, Tground, Rprim, e)
with:
- Ta = dry bulb temperature coming from the ImportEPW component
- Tground = f(Ta, N), where N comes from the totalSkyCover input. Tground influences the long-wave radiation emitted by the ground in the MRT calculation.
- Rprim = solar radiation absorbed by a nude man = f(Kglob, hS1, ac)
- ac = the clothingAlbedo in % (bodyCharacteristics input)
- I can't find any definition of Kglob and hS1 in the code. Could you please tell me what those values refer to? Probably the globalHorizontalRadiation, but how?
- e = vapour pressure calculated from Ta and the relative humidity input
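For reference, here is how that vapour pressure step could look with a standard Magnus-type saturation formula (plain Python; whether Ladybug's code uses exactly these coefficients is an assumption on my part):

```python
import math

def vapour_pressure_hpa(ta_c, rh_pct):
    """Vapour pressure e [hPa] from dry bulb temperature Ta [degC] and
    relative humidity RH [%], via a Magnus-type saturation formula
    (one common variant; Ladybug's exact coefficients may differ)."""
    e_sat = 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))  # saturation vp
    return rh_pct / 100.0 * e_sat

# ParisOrly example hour (Ta = 6.5 degC, rh = 100%): e equals saturation.
print(vapour_pressure_hpa(6.5, 100.0))
```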
Do you agree that in this case the MRT does not depend on these inputs: location, meanRadiantTemperature, dewPointTemperature and wind speed? And that it does not depend on the other bodyCharacteristics either, like bodyPosture, age, sex, met, activityDuration...?
Is the MRT calculated by the TCI-11 method the mean radiant temperature of a vector pointing vertically, with a sky view factor of 100% (for the ParisOrly EPW)?
2. From the SolarAdjustedTemperature component (which seems to be used more than TCI-11 for the UTCI calculation examples on Hydra):
In contrast to TCI-11, this component distinguishes diffuse and direct radiation and contextualizes the calculation thanks to the _ContextShading input, right? It can also be applied to a mannequin thanks to the CumSkyMatrix, and thus evaluate the non-uniformity of radiation exposure.
This component seems not to consider the influence of vapour pressure on the result --> is it then more precise to feed the MRT output (from the TCI) into the meanRadTemperature input of SolarAdjustedTemperature?
The default groundReflectivity is set to 0.25 --> is groundReflectivity taken into account in the Tground or MRT calculation of the TCI component? If yes, what groundReflectivity is assumed?
Does the default clothing albedo of 37% (TCI-11 bodyCharacteristics) correspond to a clothing absorptivity of 63%?
If the CumSkyMatrix input is not supplied, I get 9 results for the mannequin --> where do those points/results come from?
If the CumSkyMatrix input is supplied, I suppose the 482 results correspond to a calculation method similar to the radiation analysis component, averaged over the analysis period. Right? But I don't understand why the mannequin is composed of 481 faces while meshFaceResult gives 482 results.
Finally, what is the link between the mesh results, the solarAdjustedMRT and the effective radiant field? Is there a paper with a detailed explanation of the method?
3. Here are some results for the ParisOrly EnergyPlus weather data; the Grasshopper definition is attached. There is no shading in this simulation, and the MRT coming from the Thermal Comfort Indices component is very different from the solar-adjusted MRT. Why such a big difference, and which of the results should be plugged into the UTCI calculation component?
Results for ParisOrly.epw, M,D,H: 1,1,12
Ta: 6.5°C
rh: 100%
globalHorizontalRadiation: 54 Wh/m2
totalSkyCover: 10
MRT (TCI-11): 1.2°C

Case 1: _CumSkyMtxOrDirNormRad = directNormalRadiation: 0 Wh/m2
diffuseHorizontalRad: 54 Wh/m2
_meanRadTemp = Ta
solarAdjustedMRT: 10.64°C
MRTDelta: 4.14°C

Case 2: _CumSkyMtxOrDirNormRad = CumulativeSkyMtx
diffuseHorizontalRad: 54 Wh/m2
_meanRadTemp = Ta
solarAdjustedMRT: 10.47°C
MRTDelta: 3.97°C

Case 3: _CumSkyMtxOrDirNormRad = CumulativeSkyMtx
diffuseHorizontalRad: 54 Wh/m2
_meanRadTemp = MRT (TCI-11)
solarAdjustedMRT: 5.17°C
MRTDelta: 3.97°C
Thanks a lot for your help.
Regards,
Aymeric
…