ld work.
For example, suppose there's a grid shell and I've got a number of control points (say 3) that can move up and down. Depending on the control points, I get forms that are structurally good and some that are bad.
In my office we've got a GH component which converts the geometry into structural members, solves for the structural forces and so on through an external software called Sofistik, and then returns some values to GH, for example maximum bending moments (like Karamba).
Now I want to create an optimization component to minimize, e.g., the bending moments in the given geometry.
Let me describe how the component should work.
Say I have three control points that can only move in the z-direction:
P1(0,0,Z1), P2(10,0,Z2), P3(5,5,Z3)
Everything therefore depends on Z1 to Z3, each with a range between 0 and 10, for example.
First I want to generate some (between 9 and 15) random particles, where one particle consists of these 3 different Z values.
So, for example, the first particle is Part1 = [Z1=10, Z2=5, Z3=7], the second is Part2 = [Z1=7, Z2=1, Z3=9], and so on.
I created these start particles in a cluster (see attached file). I also tried this in C#, but thought it would be easier in GH.
Once I've got the start particles, I want to put out the first particle and evaluate the target value in GH using its Z values. For that I had to take the first branch and graft it (see the earlier discussion).
Afterwards I want to save the target value that belongs to the first start particle. Then I put out the second start particle, evaluate its target value, and store it, and so on, until the last start particle has its target value assigned.
Then each particle is paired with its target value, e.g. Part1: t=0.9, Part2: t=1.8, ...
Then I want to define neighborhoods, based on the count of expected local minima. Each neighborhood has to include at least 3 particles, and the particles in a neighborhood have to be close to each other.
E.g. if there are 12 particles and I want to look for 3 local minima, I need 3 or 4 neighborhoods. I would take 3 neighborhoods, because the more particles in one neighborhood, the better. So the neighborhood count would be N = min(floor(particleCount / 3), N_minima).
How to define these neighborhoods I don't know yet. I think it has to be based on the distance between the particles: e.g. Part1 at (9,9,9) and Part2 at (9,9,8) are next to each other, but Part3 at (1,1,2) is far away.
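A minimal Python sketch (outside of GH) of the neighborhood idea described above: the count follows N = min(floor(count/3), expected number of local minima), and the grouping uses farthest-point seeding plus nearest-seed assignment. The function names and the seeding strategy are my own illustration, not a standard PSO recipe.

```python
import math

def neighborhood_count(particles, n_minima):
    # N = min(floor(particleCount / 3), expected number of local minima)
    return min(len(particles) // 3, n_minima)

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def group_neighborhoods(particles, n_minima):
    n = neighborhood_count(particles, n_minima)
    # Farthest-point seeding: pick n seeds that are spread out,
    # then assign every particle to its nearest seed.
    seeds = [particles[0]]
    while len(seeds) < n:
        seeds.append(max(particles,
                         key=lambda p: min(distance(p, s) for s in seeds)))
    groups = [[] for _ in seeds]
    for p in particles:
        nearest = min(range(n), key=lambda i: distance(p, seeds[i]))
        groups[nearest].append(p)
    return groups

particles = [(9, 9, 9), (9, 9, 8), (8, 9, 9), (1, 1, 2), (1, 2, 2), (2, 1, 1)]
print(group_neighborhoods(particles, 2))
```

With the example coordinates from the post, the two clusters around (9,9,9) and (1,1,2) end up in separate neighborhoods.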
Each start particle is then set as Partx_localBest, and in each neighborhood the best of these local bests is Part_NyBest (the best is the one with the smallest target value).
Loop:
Now I want to create new particles. These particles don't change their Z values randomly; they change them depending on Part_NyBest and Partx_localBest. Therefore a new velocity has to be evaluated:
v_Partx_new = 0.792 * v_Partx_old + 1.5 * random(0,1) * (Partx_localBest - Partx) + 1.5 * random(0,1) * (Part_NyBest - Partx)
The new particle is then Partx_new = Partx + v_Partx_new.
Partx_new replaces Partx and is sent to the output. Then the target value of Part1 has to be caught, after which Part2 can be put out and its target value caught, and so on.
Then Partx_localBest has to be updated by comparing it (and its target value) with the new Partx (and its target value): if the target value of the new Partx is smaller than that of Partx_localBest, then Partx_localBest becomes the new Partx. This has to be done for each Partx. Afterwards the same is done for each neighborhood's best (the best of all Partx_localBest in one neighborhood).
End the loop when the velocity gets small.
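For reference, the loop described above is essentially the standard PSO update. Below is a compact, self-contained Python sketch using the coefficients from the post (inertia 0.792, both acceleration factors 1.5). For brevity it uses a single global best instead of the per-neighborhood bests, and all names are illustrative, not from any GH component.

```python
import random

def pso_minimize(target, bounds, n_particles=12, inertia=0.792, c1=1.5, c2=1.5,
                 v_stop=1e-3, max_iter=200):
    """Global-best PSO sketch; `target` maps a position list to a scalar."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle local bests
    pbest_val = [target(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(max_iter):
        for i in range(n_particles):
            for d in range(dim):
                # v_new = inertia*v_old + c1*rand*(localBest - p) + c2*rand*(globalBest - p)
                vel[i][d] = (inertia * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                # move and clamp to the allowed Z range
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = target(pos[i])
            if val < pbest_val[i]:                   # update local best
                pbest_val[i], pbest[i] = val, pos[i][:]
        gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
        if max(abs(v) for row in vel for v in row) < v_stop:
            break                                    # "endloop if velocity gets small"
    return gbest, target(gbest)

best, best_val = pso_minimize(lambda p: sum((z - 4.0) ** 2 for z in p),
                              bounds=[(0, 10)] * 3)
```

Swapping the global best for a per-neighborhood best (the lbest variant) only changes which `gbest` each particle reads in the velocity update; the rest of the loop stays the same.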
Output all Part_NyBest.
Output all target values of the Part_NyBests.
So the inputs have to be:
Start particles, if they are given through the attached cluster
The target-value definition, like in the attached GH file from David Rutten that I found in the discussions
Count of neighborhoods
And the outputs:
The particle currently put out for evaluation
All Part_NyBest
All target values of the Part_NyBests
I hope I didn't forget anything, and I hope it isn't too garbled. Sorry for my bad English, by the way ;-)
For more explanation of how PSO works in other programs, I've attached a workflow script (is that what it's called?). I think for GH it should be changed a little, as I tried to describe in my explanations.
So if you can help me with some parts, or if you have any advice, that would be great; otherwise, thank you anyway!
Thankfully there's no word limit in the discussions :-D
Best, Heiko
…
Introduzione a Grasshopper", the first manual on Grasshopper.
The PLUG IT courses were born from the desire to promote the new digital technologies that support design and to share the know-how gained through research, collaboration with leading architecture offices, and international publications.
The course will introduce the basics of Grasshopper, exploring the methodologies of parametric design and the techniques of algorithmic modelling for generating complex forms. It is aimed at students and professionals with minimal 3D-modelling experience and will consist of theoretical lessons and hands-on exercises.
Topics covered:
- Introduction to parametric design: theory, examples, case studies
- Grasshopper: basic concepts, algorithmic logic, graphical interface
- Fundamentals: components, connections, data flow
- Mathematical and logical functions, series, data management
- Analysis and definition of curves and surfaces
- Definition of grids and complex patterns
- Geometric transformations, paneling
- Attractors, image sampler
- Data trees: managing complex data
- Digital fabrication: theory and examples
- Nesting: decomposing three-dimensional objects into planar sections for CNC machines
A certificate will be issued at the end of the course.
Further info and the full programme at: www.arturotedeschi.com and www.edizionilepenseur.it…
Rhino core functionality with the least amount of overhead.
Steve started developing a .NET SDK a few years back, which allowed languages such as C# and VB.NET to also write plug-ins for Rhino. The first iteration of this .NET SDK was called Rhino_DotNET and it was a 1:1 translation of the C++ SDK: basically, every class and every method was wrapped automatically by a Python script. Rhino_DotNET was a mixed-mode assembly, meaning it was both C++ and .NET at the same time. Although this sounds like an ideal way to bridge the gap between a native C++ SDK and .NET development languages, Steve eventually decided it was more trouble than it was worth.
The second iteration of the .NET SDK is called RhinoCommon and it is lovingly hand-crafted by the hard-working, indigenous people of Seattle (and, to a small extent, Poprad). It is very different from Rhino_DotNET. All the classes and methods have been redesigned one by one to provide a more idiomatic .NET experience. A lot of C++ code has been replicated in C# to reduce overhead in critical parts. It is no longer mixed-mode.
RhinoCommon consists of two dlls, one created using C++ the other using C#. The C++ dll (rh_common.dll) exposes the core C++ SDK functionality via a large collection of easily P/Invokable functions. The C# dll (RhinoCommon.dll) defines the .NET SDK as we see it.
So, yes; using RhinoCommon (and Rhino_DotNET) will be slower than using the core SDK directly. However, the overhead is roughly the same for every function call, which means that functions that take a long time to compute have negligible relative overhead. An overhead of 2 nanoseconds on a function that itself takes 4 nanoseconds is quite severe, while an overhead of 2 nanoseconds on a function that takes 12 milliseconds is not worth considering.
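The arithmetic behind that last sentence is easy to check; a throwaway Python snippet, using just the numbers from the paragraph above:

```python
overhead = 2e-9              # 2 ns of per-call wrapper overhead
for work in (4e-9, 12e-3):   # a 4 ns call vs a 12 ms call
    relative = overhead / work
    print(f"{relative:.6%} relative overhead on a {work:.0e} s call")
```

The fixed 2 ns costs 50% extra on the 4 ns call but well under a millionth of the 12 ms call.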
Finally, Grasshopper is a pure .NET plug-in, and if you wish to write plug-ins for Grasshopper you will have to use a .NET development language.
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
evel in which each final branch contains a list of one number from each list in all its variations with the other two lists.
12
AB
xy
Becomes eight possible combinations:
1Ax
1Ay
1Bx
1By
2Ax
2Ay
2Bx
2By
Either I could immediately break into 8 branches, or branch twice: from 2 items to 4 items, then from those 4 items to 8 final items. I keep trying grafting with all manner of tree components and *never* obtain a simple dual-branching fractal tree structure. I barely even need a tree, actually, but I'd prefer each final branch to contain a list I can pull each individual value out of, rather than dealing with string extraction. This is all to eventually plug all these variations into a parametric mesh model that currently uses three sliders, plus a Python script to bake them all as OBJ files.
Crucially, I also need to obtain the numbers themselves to use as part of my multiply-exported OBJ files. So far I can only get a single range to export as a series of OBJ files automatically, not the whole three-list array of them.
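The eight combinations described above are just a Cartesian product, so in a GH Python component (or outside GH entirely) this can be done without any tree gymnastics. A small sketch; the `model_...obj` filename pattern is purely illustrative, not from the original post:

```python
from itertools import product

lists = [["1", "2"], ["A", "B"], ["x", "y"]]

# One list per final "branch": each value stays individually
# addressable, so no string extraction is needed.
combos = [list(c) for c in product(*lists)]   # 2 * 2 * 2 = 8 combinations

# A filename for each exported OBJ, built from the actual values:
names = ["model_{}_{}_{}.obj".format(*c) for c in combos]
print(combos[0], names[0])  # ['1', 'A', 'x'] model_1_A_x.obj
```

`product` varies the last list fastest, which reproduces the 1Ax, 1Ay, 1Bx, ... ordering listed above.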
…
step-sizes. It starts out with large jumps, then as it cools the jumps get smaller and smaller as does the likelihood of a retrograde jump being accepted as a valid new state.
Most fitness landscapes have more than one dimension, and therefore a 'jump' could involve any number of dimensions between 1 and N, where N is the dimensionality of the landscape. The Drift Rate setting (which may well be poorly named) controls the odds that a jump includes an additional dimension. All jumps must be at least one-dimensional, but 25% of them (on average) will include another dimension. 25% of those will include a third dimension, 25% of those a fourth, and so on until the dimensionality of the landscape has been reached. Here's a list for 1000 jumps:
Drift Rate: 25%
1D jumps: 750
2D jumps: 187
3D jumps: 47
4D jumps: 12
5D jumps: 3
6D jumps: 1
A good question to ask would be: "Why would you want a jump to include more than one dimension?" The answer is that the more genes are related, the higher the chances that a multi-dimensional jump will yield an improvement. It's not difficult to imagine that you cannot improve your current state by only modifying a single gene; sometimes you need to change two in unison in order to reach a better solution. If your genes are highly related (which is bad practice to begin with) then you may need to adjust the Drift Rate to a higher value.
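The 1000-jump table above is easy to reproduce. A quick Python simulation; the function name and structure are my own, but the rule is the one described: always at least one dimension, each further dimension added with probability equal to the Drift Rate:

```python
import random

def jump_dimensions(drift_rate=0.25, max_dim=10):
    """Every jump is at least 1-D; each extra dimension is added with
    probability `drift_rate`, up to the landscape dimensionality."""
    d = 1
    while d < max_dim and random.random() < drift_rate:
        d += 1
    return d

random.seed(1)
counts = {}
for _ in range(1000):
    d = jump_dimensions()
    counts[d] = counts.get(d, 0) + 1
print(counts)  # roughly {1: 750, 2: 187, 3: 47, ...}
```

Each row of the table is about a quarter of the one above it, matching the geometric fall-off of a 25% Drift Rate.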
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 11:09am on April 17, 2012
eedback. To go down the list:
Theodore, The PMV indoor comfort analysis is unfortunately much more computationally intensive than the adaptive comfort analysis. In my experience, an annual adaptive comfort map analysis of 3 zones took about 12 minutes to run in parallel on my system. Running the same analysis with PMV took about an hour. I would suggest either running the PMV case with analysis periods of typical/extreme weeks (use the "Import Stat" component to get these from the weather file) or run an annual analysis with the Adaptive comfort map. Alternatively, you could just let the PMV run overnight and I am confident that you should have something by morning.
Grasshope, The text looks like that because your Rhino model tolerance is not fine enough to capture all of the details of the text. Type "Units" into the Rhino command bar and drop your tolerance down to a smaller value. Then re-compute or re-open the GH file.
Oleksii, my initial reaction is to say that you can set up GH files with LB+HB components that allow you to do all of those things, but the way that you have phrased the questions is a little vague (especially the last one there). I would recommend checking out this tutorial playlist that shows you how to set up an energy model with HB, which should help address the first two questions (https://www.youtube.com/playlist?list=PLruLh1AdY-SgW4uDtNSMLeiUmA8YXEHT_). I am still having trouble understanding what you mean by the last one, but maybe the comfort tutorials might be in the vein of what you are looking for (https://www.youtube.com/playlist?list=PLruLh1AdY-Sho45_D4BV1HKcIz7oVmZ8v). If you want to re-phrase the questions more specifically, please post them as a discussion.
Thank you all,
-Chris…
analysis with Honeybee. Here is the tentative outline:
09:00 - 09:30
What is Honeybee, Introduction to daylighting simulation
09:30 - 11:00
Geometry preparation workflows, Radiance materials
11:00 - 11:10
Break
11:10 - 12:30
Sky types, Run your first simulation
12:30 - 13:30
Lunch
13:30 - 15:00
Daylighting analysis types, Result visualization, Getting started with annual daylight
15:00 - 15:15
Break
15:15 - 16:00
Annual daylight analysis and Results interpretation
Check MEBD page for more information including the registration link: http://www.mebd-penndesign.info/Honeybee-MEBD-Workshop-PennDesign
Please feel free to forward this to anyone of interest.
Cheers,
Mostapha
PS: Thank you all for the kind comments and emails for the Ladybug workshop. We recorded the workshop and are in the process of figuring out how to share it with the public. I will send an update once it is uploaded.
…
is to reduce the gap between the built environment and digital technologies by seamlessly integrating design and fabrication. Among the benefits: efficient use of production resources, material-specific design concepts, outcome optimization, and durability.
Jointly organized by FabLab Poliba and Polytechnic University of Bari, Self Made Architecture 03 aims to help students to develop new skills and tools on 3D Modeling, Advanced Parametric Modeling, Structural and Daylighting Optimization and Digital Fabrication.
The tools we’ll use:
#Rhinoceros3D #Grasshopper3D #Kangaroo #Ladybug #Honeybee #Cura #BigRepOne
The students will be involved in morning lectures and hands-on workshops in the afternoon, with a Do-It-Yourself and Do-It-Together approach. They will be asked to work on group projects and take part in the final phase of a temporary architecture installation.
More info:
Days: 2nd July 2018 to 7th July 2018
Location: Italy > Puglia > Bitonto
Language: English
Students: 27 international students
Credits: 2 ECTS
Benefits: Fully funded summer school. Free application, free accommodation with B&B and meals included, free enrollment to FabLab Poliba
Eligibility criteria: students and graduates of architecture, design and engineering
Apply: www.poliba.it/didattica/sma03
Deadline: 31st May 2018 at 12:00 (noon)
Contacts: info@fablabpoliba.org
Scientific Coordinator: Prof. Nicola Parisi…