ally to describe a process of repeating objects in a self-similar way. Simply stated, the definition of a recursive function includes the function itself. Fractals are among the canonical examples of recursion in mathematics and programming. A loop can simply apply the same operation to every element of a list, but it becomes an iterative feedback loop when the results of one step feed into the calculation of the next. In design research, controlling recursion becomes a strategy for defining new forms and spaces.
BRIEF
In this workshop we will explore iterative strategies through parametric design. The main tools for the course will be Grasshopper and its add-on Anemone, a simple but effective plug-in that enables loops in Grasshopper in a simple, linear way. We will explore several strategies such as iterative growth, L-systems, fractals, recursive subdivision and more. The course will focus on how those methods can affect three-dimensional geometries, generating unexpected conformations.
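The kind of feedback loop Anemone provides can be sketched in plain Python: the output of each step becomes the input of the next. The growth rule below (shrink and rotate a segment) is purely illustrative, not taken from the course material:

```python
import math

# Sketch of an iterative feedback loop, as Anemone enables in Grasshopper:
# the output of one step becomes the input of the next.
# The growth rule (scale by 0.8, turn by a fixed angle) is illustrative only.

def grow(segment, scale=0.8, angle=math.radians(25)):
    """Take a segment (length, heading) and return the next one."""
    length, heading = segment
    return (length * scale, heading + angle)

def iterate(start, steps):
    """Feed each result back into the rule, collecting every state."""
    states = [start]
    for _ in range(steps):
        states.append(grow(states[-1]))
    return states

states = iterate((10.0, 0.0), 5)  # start with length 10, heading 0
# states[-1][0] is 10 * 0.8**5: each step's length depends on the previous one
```

The point is the feedback: a plain loop over a list applies the rule to fixed inputs, while here each state exists only because of the state before it.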
TOPICS
intro to Rhino
intro to Grasshopper
advanced Grasshopper
data management
intro to loops
cellular automata
L-systems
agent-based modelling
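As a taste of one of the topics above, an L-system is just repeated string rewriting; a minimal sketch using Lindenmayer's textbook algae rules:

```python
# Lindenmayer's original "algae" L-system: axiom "A",
# rules A -> AB, B -> A. Each generation rewrites every
# symbol of the previous one, so growth is recursive.
RULES = {"A": "AB", "B": "A"}

def rewrite(axiom, generations):
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(c, c) for c in s)
    return s

print(rewrite("A", 5))  # ABAABABAABAAB
```

In a design context the same mechanism drives turtle-interpreted branching structures; only the rules and the geometric interpretation of the symbols change.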
SCHEDULE
Day 1 / Friday 16:00
Tour of the Green Fab Lab
Basics of 3D modeling in Rhinoceros
Basics of Grasshopper
Open lecture by Jan Pernecky, founder of rese arch
Day 2 / Saturday 10:00–18:00
Recursive iterative methods
Advanced topics of looping
Day 3 / Sunday 10:00–18:00
Recursive iterative methods
Final presentation session
REQUIREMENTS
The workshop is open to all participants; no previous knowledge of Rhinoceros or Grasshopper is required (although introductory knowledge is welcome). Participants should bring their own laptop with the software pre-installed. The software package needed has no additional cost for the participants (Rhino can be downloaded as an evaluation version; Grasshopper and the plug-ins are free). Since the software is updated frequently, a download link to the version used in the workshop will be sent to the participants a few days before the workshop.…
Added by Aldo Sollazzo at 11:10am on October 6, 2015
There is often a bit of a misconception about the differences between 'mass-spring' models and FEA. Although the method of solving is different, as I do not form a global stiffness matrix, the elements themselves and the calculation of stresses in them can be effectively the same, and based on standard real material properties and sections.
Using nodes with only 3 degrees of freedom as Kangaroo does currently, axial stresses can be calculated (a spring being a very simple finite element), and bending without torsion (following the approach described in this paper), accounting for Young's modulus and sectional area. I had been focused for a while on more geometrical optimization, but recently have been looking again at clarifying the real world units and numerical values used by Kangaroo for structural purposes.
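As a minimal sketch of that point (with illustrative steel-like values, not Kangaroo's actual code): a spring treated as a simple finite element gets its axial stiffness directly from Young's modulus, sectional area and rest length, so force and stress fall out of the extension:

```python
# A spring as a very simple finite element: axial stiffness k = E*A / L,
# so force = k * extension and stress = force / A.
# Material and section values are illustrative (a steel rod, SI units).
E = 210e9   # Young's modulus in Pa
A = 1e-4    # cross-sectional area in m^2 (1 cm^2)
L = 2.0     # rest length in m

def axial(extension):
    """Return (axial force in N, axial stress in Pa) for a given extension in m."""
    k = E * A / L           # spring stiffness in N/m
    force = k * extension   # axial force in N
    stress = force / A      # axial stress in Pa
    return force, stress

force, stress = axial(0.001)  # 1 mm extension
print(force)   # 10500.0 (N)
print(stress)  # 105000000.0 (Pa, i.e. 105 MPa)
```

Nothing here is specific to dynamic relaxation versus matrix methods; the element-level relation is the same, which is the point being made above.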
Several other ways of modelling beam/plate/volume type elements using combinations of springs are commonly used in game/animation physics, and these can indeed be difficult to link to accurate quantitative behaviour, which has perhaps helped form the impression of mass-spring models as inaccurate, but it need not be so.
The approach can also be extended to 6dof nodes, in which case it becomes possible to include torsion, anisotropic bending etc, and to base these on more standard engineering formulations for beams and other elements.
In fact I've recently worked on some software together with Gennaro Senatore and Charlie Banthorpe for Expedition Workshed that implements such 6dof elements together with large displacements, realtime interaction, and options to output bending moment/shear/torsion graphically. This is browser based (you can try it here), rather than Grasshopper but I'm currently working on bringing the same approach into Kangaroo.
Maintaining interactive speeds while avoiding numerical instabilities does pose its challenges with these methods, and for many conventional structures where the displacements are small and interaction is less important I think conventional FEA will continue to be more efficient for some time, but I do believe the approaches will eventually converge.
Thinking about it - although they are very useful techniques, continuum mechanics and infinitesimal displacements are both just useful abstractions, and no less 'artificial' than mass-spring models (and I think infinitesimal displacements are particularly counter-intuitive - real things have to move to generate stresses).
Anyway, I'm always very interested in exploring collaborations and sharing of ideas about these approaches, and would love to hear any more thoughts from the Karamba team about this...
best,
Daniel…
user to understand. RhinoScript is generally more straightforward and easier to use. You can think of it as a translation of RhinoCommon so that you don't have to write all the technical stuff.
In your first line you've written "import rhinoscriptsyntax as rs". To see the methods you can call from this library, go to the Help menu and choose 'Help for Rhinoscript'. It will show you a searchable window of all the options you have. This is much easier for new users to learn than looking through the RhinoCommon SDK.
If you search the help file for 'BoundingBox' you'll get the screen capture below:
At the bottom you can see an example of how to use it. In your case you would replace the following lines:
2/ boxA=brepA.GetBoundingBox((0,0,0,)) --> boxA = rs.BoundingBox(brepA)
3/ boxB=brepB.GetBoundingBox((0,0,0,)) --> boxB = rs.BoundingBox(brepB)
The script you have written uses elements of both RhinoScript and RhinoCommonSDK. I would suggest you might start just using RhinoScript. See below, I have re-written the first 8 lines of your script using just RhinoScript:
import rhinoscriptsyntax as rs
# Get BoundingBox from breps.
BoundingBoxA = rs.BoundingBox(brepA)  # Returns a list of eight corner points.
BoundingBoxB = rs.BoundingBox(brepB)
# Get centre point of the RhinoScript BoundingBox (which is a list of eight points).
boxA = rs.AddBox(BoundingBoxA)  # Generate box from corner points
ptA = rs.SurfaceVolumeCentroid(boxA)  # Get volumetric centroid of box
boxB = rs.AddBox(BoundingBoxB)
ptB = rs.SurfaceVolumeCentroid(boxB)
For reference, the following will achieve the same thing using RhinoCommon: fewer lines, but more technical. There are a few other quirks as well; for example, you have to explicitly tell the Python component what kind of object 'brepA' is. See below for an example of the same script in RhinoCommon:
import Rhino as rh
centerPtA = brepA.GetBoundingBox(rh.Geometry.Plane.WorldXY).Center
centerPtB = brepB.GetBoundingBox(rh.Geometry.Plane.WorldXY).Center
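As a sanity check outside Rhino: the centre point both approaches return for a box is simply the average of its eight corner points. A plain-Python sketch (the corner list here is a hypothetical stand-in for what rs.BoundingBox would return):

```python
# The centre of a bounding box is the average of its eight corner points.
def box_center(corners):
    """Average a list of (x, y, z) tuples component-wise."""
    n = len(corners)
    return tuple(sum(p[i] for p in corners) / n for i in range(3))

# Corners of a unit cube, standing in for a rs.BoundingBox result.
unit_cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
             (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(box_center(unit_cube))  # (0.5, 0.5, 0.5)
```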
I'm not sure what you are trying to achieve overall and your loop doesn't make a lot of sense to me but I hope that clarifies some of the differences between the two libraries you can use.
Regards,
M…
out of the practice walls.
Anyway, here's a hint. Hard top-bottom clustering is used (the process is STOPPED at the first level of clustering for clarity, meaning flat clustering):
Cubes are abstract centroids of spaces (say: "rooms"):
Then clusters are made; Prox K-Means is used here (other methods are also available). The cyans are the abstract representation of the flat clustering (for clarity). The decision on the number of clusters is quite complex and is based on criteria that are not used here (adjacency matrices et al.):
Then this:
And finally that:
Obviously the real thing works recursively (kinda like a fractal algo) on the clusters and stops when the predefined number of nodes is reached (say 2 or 3). Then the "flat" red connector shown actually connects (bottom to top) child to child AND child to parent clusters, etc.
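For anyone curious, the flat K-Means step described above can be sketched in a few lines of plain Python (2D points and naive seeding for brevity; the recursive descent into sub-clusters is left out):

```python
import math

def kmeans(points, k, iterations=20):
    """Plain K-Means: assign each point to its nearest centroid,
    then move each centroid to the mean of its members."""
    centroids = list(points[:k])  # naive seeding: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups of "room" centroids:
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(points, 2)
```

The recursive version would simply call kmeans again on each resulting cluster until the node-count criterion is met.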
BTW: GH has a component (called QuadTree) that - I guess - works "kinda" like a K-Means clustering algo, but the fact that GH is acyclic by nature means that you should use Anemone if the above is attempted via the component way (not my way, anyway).
more soon
…
he process. The last one is there because fixing it would cause another problem, which we feel is more serious. Solutions may well be forthcoming in the future though.
1. Grasshopper curves and points are drawn more towards the camera than they really are. This is a conscious decision. Often Rhino geometry and Grasshopper geometry exist in the same place. If we drew the Grasshopper preview in place, there would be no telling whether you'd see the Rhino curve or the Grasshopper curve. We feel it's important that you always see the Grasshopper curve on top, which is why we draw all curves and points slightly towards the camera. However, we don't do this for meshes. This results in something akin to the image below. The eye represents the location of the viewport camera, the shaded box represents the actual location of the geometry, and the thick black lines represent the edges of the geometry moved towards the camera. As you can see, the red lines will be visible even though they should be behind the shaded box. This effect can get very strong when the camera is close to some geometry relative to the size of the bounding box of all geometry.
2. Wires behind the camera are sometimes visible. This is a bug I don't know how to solve. We'll get around to it eventually. When an object is behind the camera the display transform sometimes makes it visible in front of the camera in some weird inverted perspective mode.
3. Meshes are not z-sorted prior to display. The order in which they are drawn is not back-to-front but fairly arbitrary, which means a transparent mesh may appear to punch a hole in the mesh behind it. If this is annoying you to no end, you can use Ctrl+F on the Grasshopper components that contain the meshes that are punching holes and then press F5 to recompute. The draw order should now be different. Of course, sometimes it will only 'fix' it for a specific camera angle.
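For reference, the back-to-front ordering described in point 3 is the classic painter's algorithm. A minimal sketch, sorting by each mesh's centroid distance from the camera (a simplification, since a single distance per mesh cannot resolve interleaving geometry, which is part of why z-sorting is not free):

```python
import math

def painters_order(meshes, camera):
    """Sort meshes back-to-front by centroid distance from the camera.
    'meshes' is a list of (name, centroid) pairs. Farthest first, so
    transparent surfaces composite correctly over what is behind them."""
    return sorted(meshes, key=lambda m: -math.dist(m[1], camera))

meshes = [("near", (0, 0, 1)), ("far", (0, 0, 10)), ("mid", (0, 0, 5))]
order = painters_order(meshes, camera=(0, 0, 0))
print([name for name, _ in order])  # ['far', 'mid', 'near']
```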
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
that aren't relevant anymore, or if there are any I missed, please let me know. Maybe we can get a list like this in a better place as well.
Thank you.
Right Mouse - When wiring, plugs wire into multiple inputs.
Shift+Click - Pick component aggregate.
Shift+Click - Place component aggregate.
Alt+Left Click - Split canvas tool.
Ctrl+Q - Preview toggle.
Ctrl+E - Enable toggle.
Ctrl+Left - Navigate upstream.
Ctrl+Right - Navigate downstream.
Ctrl+M - Mesh edge display toggle.
Ctrl+1 - No preview.
Ctrl+2 - Wireframe preview.
Ctrl+3 - Shaded preview.
Ctrl+Alt+Shift+Click - Save image of canvas.
Ctrl+Alt and Shift+Ctrl+Alt - Highlight components on the canvas and component palette.
Ctrl+Shift - Rewire component input/output.
Double Click - Find/Search.
Alt+Drag - Copy component on canvas.
Ctrl+Tab - Document cycling.
Ctrl+Shift+P - Preferences.
Ctrl+N - New file.
Ctrl+O - Open file.
Ctrl+S - Save file.
Ctrl+Shift+S - Save as.
Ctrl+Alt+S - Save backup.
Ctrl+W - Close open document.
Ctrl+Z - Undo.
Ctrl+Y - Redo.
Ctrl+X - Cut.
Ctrl+C - Copy.
Ctrl+V - Paste.
Ctrl+Alt+V - Paste in place.
Ctrl+Shift+V - Paste in center.
Ctrl+A - Select all.
Ctrl+D - Deselect.
Ctrl+Shift+I - Invert selection.
Ctrl+Shift+A - Grow selection.
Ctrl+Shift+Left Arrow - Grow upstream.
Ctrl+Shift+Right Arrow - Grow downstream.
Ctrl+Left Arrow - Shift upstream.
Ctrl+Right Arrow - Shift downstream.
Ctrl+G - Group selection.
F3 - Find.
F4 - Create.
F5 - Recompute.
Ctrl+B - Send to back.
Ctrl+F - Bring to front.
Ctrl+Shift+B - Move backwards.
Ctrl+Shift+F - Move forwards.
Insert - Bake selected.
Ctrl+Q - Toggle preview.
Ctrl+E - Toggle enabled selected.
…
thing about how to use Grasshopper to break up the unit modules as a parameterization. Is there any Grasshopper master who could help me? The result I want should look cool and be easy to build.
Reference:
https://vimeo.com/98518748 Video link
http://www.designboom.com/architecture/robotically-fabricated-landesgartenschau-exhibition-hall-06-25-2014/
The question:
1. In the video, how is the parametric form processed? (Kangaroo?)
2. After making everything on the round surface, how do you change the tangent circles into flat polygons?
3. Finally, how do you link every unit module?
…
faces (marked in RED, GREEN and BLUE) are shared by two zones. The small zones (in GRAY) attached to the big hall are not taken into consideration in the research.
In this case, inclined surfaces are included as a simplification of the grandstand for spectators.
As the image shows, I have some questions:
1 - What should be the correct "surface type" of the inclined surfaces? Floor? Wall? Or something else?
(Actually, I have tried both floors and walls, but the warnings shown below were received. The weird thing is that they had been assigned as floors and walls respectively, as checked with the "decomposeByType" component.)
2 - Can I ignore these warnings? If not, why not, and how can I deal with it - how can I assign the inclined surfaces the proper "type"?
3 - How can I get "interiorWalls"?
Although the small zones attached to the large hall are not considered in the research, I still need to assign the shared walls as interior walls (and set their EPBC to "Adiabatic"), right? But, having checked the walls with the "decomposeByType" component, all I got were "walls" instead of "interiorWalls". How can I get the "interiorWalls" I want?
Btw, due to the complexity of the geometry (e.g. the inclined surfaces), I formed the thermal zone of the sports hall surface by surface using the "createHBSrfs" component, as shown in the images above. Do you think this is a proper way to do it in my case?
Any help will be much appreciated!
Ding
…
vided with U and V into line segments (I'd prefer to use that method instead of a rectangular grid). These segments in the U direction would then be rotated around the V line segments, with a min value of 0 and a max value of 90 degrees, according to an attractor (I'd like it to be an image sampler in the end, but for now I'm trying with a point/multiple points). These lines would then be lofted.
I post the definition below
Here are my problems (i marked them in definition):
1) I managed to get a U-direction line in every second row; I don't know how to get the lines in between (I tried shifting the list, but didn't manage to get the right result).
2) The harder part - I'd like to measure the distance between the attractor point and the bottom point of each line. Then I would like to translate it into a rotation this way (distances used just as an example): distance 0-20 - angle of rotation 0 degrees; distance 20-30 - angle of rotation 1-90 degrees. I have no idea how to put this into the definition. I also have problems remapping numbers to the 0-90 range.
3) I'd like to do this later, but I'd also like to use a black-and-white image sampler, with white - 0 angle of rotation, black - 90 degrees. I never experimented with the image sampler and would be grateful for some advice on how to transfer colors into degree values.
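Not a ready definition, but points 2 and 3 both reduce to a linear remap with clamping, like Grasshopper's Remap Numbers component. A plain-Python sketch using the example thresholds from the post (the function names are just for illustration):

```python
def remap(value, src_min, src_max, dst_min, dst_max):
    """Linear remap with clamping: map [src_min, src_max] onto
    [dst_min, dst_max], pinning values outside the source range."""
    t = (value - src_min) / (src_max - src_min)
    t = max(0.0, min(1.0, t))
    return dst_min + t * (dst_max - dst_min)

# Point 2: distances 0-20 give 0 degrees; 20-30 ramps up to 90 degrees.
def distance_to_angle(d):
    return remap(d, 20.0, 30.0, 0.0, 90.0)

# Point 3: image-sampler brightness in 0..1, white (1.0) -> 0 deg, black (0.0) -> 90 deg.
def brightness_to_angle(b):
    return remap(b, 1.0, 0.0, 0.0, 90.0)

print(distance_to_angle(25))    # 45.0
print(brightness_to_angle(0.0)) # 90.0
```

In Grasshopper the same thing is a Remap Numbers component fed by Distance (or the image sampler's brightness channel), with the target domain set to 0 to 90.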
I politely ask you to help me (especially with the first two points I mentioned). I'm not asking for a ready definition - I would like to understand where my mistakes are.
Below I post a picture of something similar (although I'm trying to rotate by edge, not by center line).
Pardon my English, and thank you for your time and help.
Enjoy your weekend.…
// <Custom additional code>
Bob[] b = new Bob[] {new Bob(1), new Bob(2), new Bob(3)};
class Bob{....
}
//But how to make something like this in a loop?
// <Custom additional code>
Bob[] b = new Bob[10];
for(int i = 0; i < b.Length; i++){
b[i] = new Bob(i);
}…