d of interpenetrating surfaces somewhere:
Now all links (except for a possible single leftover ball at the very end of an odd-numbered ball series) are four balls long, including the jostled ones. Without that step, those items simply don't appear in the output, leaving gaps far too big to ignore, and eventually leaving huge gaps at later stages of segment doubling:
So if I turn the jostling multiplication factor way down it should work imperceptibly:
Ta-dah! The jostling strategy WORKS! Granted, only in this special case where I know I'm dealing with adjacent pairs of worms along a curve, not generic objects arranged in space by some artist.
Now I just need to wrap the multiple Python script components I'm stringing together into one script.
How long does the full 2400 balls take, finally? It took 12 Python scripts that merge pairs to achieve this breakdown: 2400 -> 1200 -> 600 -> 300 -> 150 -> 75 -> 38 -> 19 -> 9 -> 5 -> 3 -> 2 -> 1. Time was 2 minutes 50 seconds. Since 1200 balls took 1 minute 20 seconds, doubling the count should take roughly twice that (about 2:40), so there is some extra struggle for 2X as many balls, but only about ten seconds of it.
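To make the staging concrete, here is a rough, self-contained Python sketch of the pairwise merge-with-jostle idea described above. The names (jostle, union, merge_stage) are illustrative only; the real scripts would call Rhino's boolean/metaball union instead of the stand-in used here:

```python
import random

def jostle(ball, factor=1e-4):
    # nudge a ball's centre by a tiny random vector so adjacent surfaces
    # never coincide exactly (coincident surfaces are what break the merge)
    (x, y, z), r = ball
    return ((x + random.uniform(-factor, factor),
             y + random.uniform(-factor, factor),
             z + random.uniform(-factor, factor)), r)

def union(a, b):
    # stand-in for the real boolean/metaball union: here it just pools the
    # balls of both groups so the staging logic can be followed end to end
    return a + b

def merge_stage(groups, factor=1e-4):
    # one pass: merge neighbours in pairs, halving the count (2400 -> 1200 -> ...)
    merged = []
    for i in range(0, len(groups) - 1, 2):
        nudged = [jostle(b, factor) for b in groups[i + 1]]
        merged.append(union(groups[i], nudged))
    if len(groups) % 2:          # odd count: carry the final leftover group forward
        merged.append(groups[-1])
    return merged

def merge_all(groups, factor=1e-4):
    while len(groups) > 1:
        groups = merge_stage(groups, factor)
    return groups[0]

# 2400 single-ball groups strung along a line -> one fully merged group
balls = [[((i * 1.0, 0.0, 0.0), 0.6)] for i in range(2400)]
print(len(merge_all(balls)))   # 2400
```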
…
Added by Nik Willmore at 9:06pm on February 17, 2016
rested in specializing in the field of Computational design.
The workshop will help participants understand how Grasshopper facilitates the design process, allowing one to generate, automate and manipulate data.
To Register:
http://goo.gl/forms/gvUTyZihVK
Workshop Structure:
Day 01: 16 August 2018
Introduction to Computational Processes in Architecture
Understanding Grasshopper and its relation to Rhino3D
Working with fields and Grids (Supplementary readings for Architectural theory)
Spatial Concepts using Data
Day 02: 17 August 2018
Understanding Data in Grasshopper - LISTS
Managing Data in Grasshopper (Supplementary reading)
Experimentation on Massing and Architectural Forms
Day 03: 18 August 2018
Understanding Data in Grasshopper – Trees
Surface Logics (Supplementary reading)
Design Exercise and Prototyping
Day 04: 20 August 2018
Architectural Skins
Day 05: 21 August 2018
MasterClass Project
Introduction to various types of Digital Fabrication
Prototyping of works during the Workshops
Basic knowledge of Rhino 5 is required to be able to take this training.
CERTIFICATION: All participants will receive a Workshop certificate from an Authorized Rhino Trainer.
3D Printing: Prototyping of works during the Workshops
Workshop Tutor:
Kavitha M, an Architect, Computational Designer and 3D Printing Specialist, is also the co-founder of INTO Design Research and will head the Computational Process in Architecture using Grasshopper workshop. She graduated from the Städelschule Architecture Class with a Masters in Advanced Architecture Design and has been researching teaching methodologies for digital tools and their influence on design thinking.…
The workshop will take place in the city of Salerno (Italy) and will last 11 days structured into 3 intensive weekends: July 13-14-15 (mapping); July 19-20-21-22 (parametric design); July 26-27-28-29 (digital fabrication).
The third edition of the digitalMed Workshop is structured as a design laboratory. Participants will learn the challenging process of producing ideas, projects and research analyses, developed through specific software and through concepts that emerge from the use of mapping, parametric design and digital fabrication.
Goals and Objectives:
We aim to convey the theoretical and technical knowledge needed to approach parametric and generative design and digital fabrication, from data collection and management, to the way this data informs the geometry, to the fabrication of prototypes.
Participants will also have the opportunity to practice the new knowledge gained in the design laboratory through project work.
Project Theme:
"Urban Field" Identify, study and analyze the system of public spaces in the urban area of the city of Salerno.
Connection, mutation, generation and evolution are the themes to be followed in project work.
Brief Description of Topics:
- Mapping. Our reality, in all its forms, will be studied through concepts from the theory of Complex Systems. The techniques used to study real events and places will serve for the management, manipulation and visualization of data and information. These will form the basis for the project and for the data-driven geometry developed during the second phase of the workshop.
- Parametric Design. Introduction to Rhino and Grasshopper. Specifically, we will explain the concepts behind parametric design software and how they function. Through these tools we will arrive at systems of mathematical and/or geometrical relationships that can generate and govern the patterns, shapes and objects that inform the final design.
- Digital Fabrication. In this phase, the workshop participants are organized into working groups. Participants have access to the materials and conceptual apparatus that will take them directly to the fabrication of the project geometries, using CAD/CAM interface software and digital fabrication machines.
The DigitalMed workshop is organized by Nomad AREA (Academy of Research & Training in topics of Contemporary Architecture), in collaboration with the City of Salerno, the Order of Architects of the Province of Salerno and the National Institute of Architecture In/Arch - Campania.
Interested parties may download the Notice of Competition at www.digitalmedworkshop.com and complete the pre-registration no later than July 10th, 2012.
PRESS OFFICE
Dr. Francesca Luciano
328 61 20 830
fra_luciano@libero.it
For information or subscriptions:
e-mail: info@digitalmedworkshop.com - tel: 089 463126 - 3391542980…
ly one (cost of the structural material in my case) and penalize the individuals that do not satisfy the structural verification by multiplying the cost for that iteration by a factor of 10. This seems to work really well; in fact I obtained a convergence of the results in a specific area and number of beams.
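For reference, a minimal sketch of that penalised objective in Python (the function name and the structural check are placeholders, not the actual definition):

```python
PENALTY_FACTOR = 10.0   # multiplier applied to the cost of invalid individuals

def penalised_cost(material_cost, passes_verification):
    # Galapagos minimises this value: valid individuals compete on raw cost,
    # invalid ones are pushed away by the factor-10 penalty described above
    return material_cost if passes_verification else material_cost * PENALTY_FACTOR

# e.g. a cheap but failing design ends up "costing" more than a valid one
print(penalised_cost(120.0, False))  # 1200.0
print(penalised_cost(150.0, True))   # 150.0
```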
Now I have to modify something, because the thickness of the insole tends to the minimum of the range (only because it's the most expensive material in my case), even though the structural verification is satisfied with the maximum height of the beams.
I'm expecting an insole thickness of about 20-30 cm and a beam height less than the maximum. I increased the minimum of the insole thickness range to 20 cm, but I hope the solution will tend to a larger value.
Do you have any suggestions in this case?
Your post was really helpful, thank you so much again for the perfect explanation!
Leonardo…
ght on why this is, and some ideas I have for how to improve things going forward.
MeshMachine grew out of some scripts I started developing over 3 years ago (described here), originally just with the aim of achieving approximately equal edge lengths on a smooth closed triangulated mesh.
As time went on, I kept adding things, such as ways of keeping boundaries and sharp edges fixed, different ways of controlling edge lengths that vary across the surface, and different ways of pulling to surfaces.
I was also still experimenting with different rules for the core remeshing operations, such as valence driven vs angle driven edge flips.
All of these things meant many variables in the script. I wanted to share the work so others could play with it, but not really knowing exactly what people might use it for made it difficult to simplify the interface, so I just exposed most of these variables I was using (actually there were originally even more, but I felt a component with 20+ inputs was excessive, and combined some of them and fixed others to default values).
I've never been happy with that component, but some people want a component that you can just feed a surface and get a mesh with 'nice' triangles, without too much fuss or needing to know anything about how it works, while other people want to be able to vary the density based on proximity to the border, and curvature, and attractor points and see the intermediate results, and model minimal surfaces without pulling to any underlying surface, and...
Since then I did the rewrite from Kangaroo to Kangaroo2, and through that process, and the associated conversations with Steve Baer, David Rutten and Will Pearson, my ideas about how to structure libraries and make cleaner, more flexible Grasshopper components changed. Much of this centres around using interfaces (in the specific programming sense, not to be confused with UI), because they allow separating code into multiple components while still allowing parts of it to be edited within Grasshopper and other parts in a proper IDE (because I find the GH code editor is not conducive to writing large amounts of well-structured object-oriented code).
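A rough sketch of what that separation can look like (the real libraries are C# and these type names are purely illustrative; Python is used here just for brevity): the core loop depends only on a small interface, so individual rules can live in a compiled library, in a script component, or be swapped freely.

```python
from abc import ABC, abstractmethod

class IRemeshRule(ABC):
    # the only thing the core remeshing loop needs to know about a rule
    @abstractmethod
    def apply(self, mesh):
        """Return the mesh after one pass of this rule."""

class FlipByValence(IRemeshRule):
    def apply(self, mesh):
        # ...flip edges to even out vertex valences...
        return mesh

class FlipByAngle(IRemeshRule):
    def apply(self, mesh):
        # ...flip edges to improve triangle angles...
        return mesh

def remesh(mesh, rules, iterations=10):
    # depends only on the interface, so the loop and the rules can be
    # developed, edited and recombined independently of each other
    for _ in range(iterations):
        for rule in rules:
            mesh = rule.apply(mesh)
    return mesh
```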
Towards the end of last year, Dave Stasiuk and Anders Deleuran invited me and Will Pearson over to CITA for a few days of mesh and physics coding and beer drinking. During this time I made the first steps to restructuring MeshMachine to be more modular and interface based like Kangaroo2, instead of one giant script. One of the main motivations for doing this was to make it easier to combine the K2 physics library with the remeshing. However, at the time I hadn't yet released K2, so it didn't make sense to post examples that used those libraries. After the launch of K2, this restructured MeshMachine development has been a bit on the back-burner, but this discussion and Dave Stasiuk's work with Cocoon is inspiring me to pick it up again.
Seeing how you are combining the Cocoon and MeshMachine, and how Dave is also using interfaces in his recent work suggests to me it might be possible to integrate them more smoothly...
…
ing ways to leverage simulation results from Ladybug and inform the design of building envelopes with benefits that can be modeled. Given that 20 percent of the cost of a project typically goes to the facades, and maybe half of that goes to the openings, there is good reason to question how to materialize that 10 percent, which can result in a 10-30 percent difference in total energy consumption.
I think ideally radiation analysis, natural ventilation and daylight analysis on floors should all inform opening sizes and placements, as well as the building sections at large. However, natural ventilation seems to be the most complicated one because it couples airflow and thermodynamics. I have a definition set up so that I can batch simulations for radiation analysis and daylight analysis, but natural ventilation is the missing link. So for what I am doing now, I will select a handful of designs that seem to work best based on the two available analyses and convert all the geometry into CAD files so that I can run them in an evaluation copy of Autodesk Simulation CFD. For now I can do this in two stages.
But for the future, given the possibility of actually having that as a Grasshopper feature, which would be lovely, I want to understand the science behind it and share some links.
(http://www.wbdg.org/resources/naturalventilation.php) In this link the author outlines quite a few general principles and variables to consider for naturally ventilated buildings.
For example, how stack effect works.
Qstack = Cd * A * [2 g h (Ti - To) / Ti]^(1/2), where:
Qstack = volume ventilation rate (m³/s)
Cd = 0.65, a discharge coefficient
A = free area of inlet opening (m²), which equals the area of the outlet opening
g = 9.8 m/s², the acceleration due to gravity
h = vertical distance between inlet and outlet midpoints (m)
Ti = average temperature of indoor air (K); note that 27 °C = 300 K
To = average temperature of outdoor air (K)
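As a quick sanity check, that formula is easy to evaluate directly (a small Python sketch; only valid while Ti > To, i.e. while the indoor air is warmer than outdoors):

```python
import math

def stack_ventilation_rate(A, h, Ti, To, Cd=0.65, g=9.8):
    # Qstack = Cd * A * sqrt(2 * g * h * (Ti - To) / Ti), in m^3/s
    return Cd * A * math.sqrt(2.0 * g * h * (Ti - To) / Ti)

# 1 m^2 openings, 3 m between inlet and outlet midpoints, 300 K inside, 290 K outside
print(stack_ventilation_rate(A=1.0, h=3.0, Ti=300.0, To=290.0))  # ~0.91 m^3/s
```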
The thing about natural ventilation is that not only do the sizes and positioning of the openings on the facade facing the predominant wind matter, but so do the openings on the other side. The vertical distance between the inlets and outlets also needs to be taken into account. The author suggests that naturally ventilated buildings should be no wider than 45 feet.
This PDF presentation discusses CFD for natural ventilation and illustrates why it is not easy:
http://isites.harvard.edu/fs/docs/icb.topic882838.files/L17.6205Airflow-Modeling_Ibarra.pdf
This PDF briefly outlines the approach taken by DesignBuilder:
http://isites.harvard.edu/fs/docs/icb.topic472869.files/DesignBuilder%20Simulation%20Training_HSD.pdf
Lastly, a wide spectrum of environmental analysis work by e3lab:
http://www.e3lab.org/research
http://www.e3lab.org/green-buildings
If I make progress on a way to tie the three analyses together (radiation, daylight and natural ventilation), I won't forget to post it in this thread.
Thanks.…
about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen which compile your project while you type. I know that threading within components and at the RhinoCommon level is a freaking hard problem that has been discussed at length already (although when, 5-10 years from now, it's finished, it will be very cool).
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver was a little more atomic and acted like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file can be exploited (not within components, but rather solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
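Purely as a thought experiment, the "job list with dependencies" idea might look something like this in plain Python (nothing to do with the actual GH solver internals; run_jobs and its arguments are made up):

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_jobs(jobs, depends_on, compute):
    # jobs: component ids; depends_on[j]: set of ids whose results j needs;
    # compute(j, inputs): does j's work and returns its result.
    # Independent branches of the graph run in parallel; assumes a DAG,
    # which a Grasshopper definition always is.
    results, pending, remaining = {}, {}, set(jobs)
    with ThreadPoolExecutor() as pool:
        while remaining or pending:
            ready = [j for j in remaining if depends_on[j] <= set(results)]
            for j in ready:
                remaining.remove(j)
                inputs = {d: results[d] for d in depends_on[j]}
                pending[pool.submit(compute, j, inputs)] = j
            if not pending:
                raise ValueError("cyclic dependencies")
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                results[pending.pop(fut)] = fut.result()
    return results

# two independent branches feeding one final "component"
deps = {"a1": set(), "a2": {"a1"}, "b1": set(), "b2": {"b1"}, "out": {"a2", "b2"}}
print(run_jobs(list(deps), deps, lambda j, ins: sum(ins.values()) + 1))
```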
I was imagining a couple of scenarios:
a) Writing a parallel module: solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point you could connect up to some of the already calculated parameters and write something in parallel to the main trunk which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze a section of downstream code and make modifications so you can observe the effects more quickly, then unfreeze a bit more and repeat, etc., until you're done, and then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just absorb this need? Since the vast majority of the time that GH is solving is spent on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if the final run took 15 seconds or 15 hours.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK; from a user's point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: the simple percentage version for unpredictable code and, for those of us able to calculate the time taken by our algorithm based on the number of input parameters, a countdown in seconds or minutes or whatever.
I think a good place to start with these sort of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
rring to the above image)
A [cm2]: Area of Section
Ay [cm2]: effective Vy shear Area
Az [cm2]: effective Vz shear Area
Iy [cm4]: Second Moment of Area (strong axis)
Wy [cm3]: Elastic Modulus, upper (strong axis)
Wy [cm3]: Elastic Modulus, lower (strong axis)
Wply [cm3]: Plastic Modulus (strong axis)
i_y [cm]: Radius of Gyration (strong axis)
Iz [cm4]: Second Moment of Area (weak axis)
Wz [cm3]: Elastic Modulus (weak axis)
Wplz [cm3]: Plastic Modulus (weak axis)
i_z [cm]: Radius of Gyration (weak axis)
I have a very similar table which I could import into the Karamba table, but I also have i_v or i_u values (radius of inertia), for instance.
Dimension | Mass [kg/m] | Area [mm2] | Axis | Ix [mm4] | Wpx [mm3] | ix [mm] | Axis | Iy [mm4] | Wpy [mm3] | iy [mm] | Axis | Iv [mm4] | Wpv [mm3] | iv [mm] | Width [mm] | Thickness [mm] | Radius R [mm]
L 20x3 | 0.89 | 113 | x-x | 4,000 | 290 | 5.9 | y-y | 4,000 | 290 | 5.9 | v-v | 1,700 | 200 | 3.9 | 20 | 3 | 4
L 20x4 | 1.15 | 146 | x-x | 5,000 | 360 | 5.8 | y-y | 5,000 | 360 | 5.8 | v-v | 2,200 | 240 | 3.8 | 20 | 4 | 4
L 25x3 | 1.12 | 143 | x-x | 8,200 | 460 | 7.6 | y-y | 8,200 | 460 | 7.6 | v-v | 3,400 | 330 | 4.9 | 25 | 3 | 4
L 25x4 | 1.46 | 186 | x-x | 10,300 | 590 | 7.4 | y-y | 10,300 | 590 | 7.4 | v-v | 4,300 | 400 | 4.8 | 25 | 4 | 4
L 30x3 | 1.37 | 175 | x-x | 14,600 | 680 | 9.1 | y-y | 14,600 | 680 | 9.1 | v-v | 6,100 | 510 | 5.9 | 30 | 3 | 5
L 30x4 | 1.79 | 228 | x-x | 18,400 | 870 | 9.0 | y-y | 18,400 | 870 | 9.0 | v-v | 7,700 | 620 | 5.8 | 30 | 4 | 5
L 36x3 | 1.66 | 211 | x-x | 25,800 | 990 | 11.1 | y-y | 25,800 | 990 | 11.1 | v-v | 10,700 | 760 | 7.1 | 36 | 3 | 5
L 36x4 | 2.16 | 276 | x-x | 32,900 | 1,280 | 10.9 | y-y | 32,900 | 1,280 | 10.9 | v-v | 13,700 | 930 | 7.0 | 36 | 4 | 5
L 36x5 | 2.65 | 338 | x-x | 39,500 | 1,560 | 10.8 | y-y | 39,500 | 1,560 | 10.8 | v-v | 16,500 | 1,090 | 7.0 | 36 | 5 | 5
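For completeness, the radius-of-inertia columns in the table above are simply i = sqrt(I / A); a quick check in Python against the L 20x3 row reproduces the tabulated iv:

```python
import math

def radius_of_gyration(I_mm4, A_mm2):
    # i = sqrt(I / A): how the ix / iy / iv columns above relate to the
    # second moments of area and the cross-section area
    return math.sqrt(I_mm4 / A_mm2)

# L 20x3 row: A = 113 mm2, Iv = 1,700 mm4 -> iv should be about 3.9 mm
print(round(radius_of_gyration(1700, 113), 1))  # 3.9
```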
I have diagonals (bracings) which can buckle about these "non-regular" axes too, and they do. If I could add those values, then in the Karamba model I could assign specific buckling scenarios. I can see another challenge at the ModifyElement component: I will not be able to choose the buckling lengths in these directions.
Do you think this functionality can be added in the short term, or should I try to find another way to model these members?
Br, Balazs
…
isseminated at the firms I've worked at:
1. Always write your scripts as though someone else is going to have to use and debug them without any instruction from you. This is kind of an overall governing principle that drives a lot of the other best practices.
2. Structure your definitions left to right. This way it is clear what depends on what and what executes in what order, and it makes it easy to use the "Moses" tool (alt-click and drag on the canvas to spread components apart) to insert intermediate functionality into an already-existing definition.
3. For a given functional group (a set of components that does a well-defined thing), keep all the data going into the group as labeled parameters on the left, and everything going out of the group as labeled parameters on the right. This is the intent of my "Best Practicizer" tool in Metahopper (now a menu item instead of a component). In essence, you're treating each group as though it's about to be clustered: you've defined what it does, what its inputs are, and what its outputs are. This makes troubleshooting much easier; if something is going wrong, you can easily isolate which group is causing the trouble by looking at inputs and outputs.
3b. If you're grabbing some value or data (e.g. "STEP COUNT") many times from elsewhere in the definition, don't make a bunch of long wires that connect all the way back to the original source. Grab that data into ONE labeled parameter and then connect all the inputs that need it to that; it makes for one long wire instead of 20.
4. Annotate, annotate, annotate. Label your params (and if you're in icon mode, switch them manually to text). Label your groups. Use scribbles to mark larger regions of functionality. Use panels for "instructions" wherever it might not be clear how someone is supposed to use your tools.
Avoid "wireless" (Hidden Wires) connections. If you MUST use them, make sure you create params at both ends with matching names so it's clear what the data represents and where it comes from.
6. Cluster where possible. It's extremely helpful to isolate functional groups into clusters: it makes debugging faster and easier, since you don't have to wait for the whole definition to recompute when making small edits inside a cluster, and it sets you up well to create code that can be re-used later on. However, don't take whole definitions and cluster them. As a rule of thumb, if a cluster has more than ~10 inputs, it should probably be broken into multiple clusters. There is a slight performance impact when clustering, because unlike an un-clustered group of components, which only re-executes the parts of the definition where something has changed, any time ANY input to a cluster changes, the WHOLE cluster re-computes. Because of this, a cluster generally shouldn't wrap groups of components that are not related / don't connect with each other.
7. Color code your groups. Many firms develop a standard around group coloring so that it's easy to understand what parts of a definition are doing what kind of task. For instance, at Woods Bagot where I work, we have different colors for component groups that highlight inputs, outputs, Rhino references, baking, and visualization. You may find that a different set is useful to you, but having a consistent standard can improve legibility.
That's my 2c. At the end of the day, everyone works a little bit differently, and that's unavoidable (and not even a bad thing!). As long as you keep #1 in mind, all the rest will follow.
…