we're actually using PET sheets for our flexures. We try to design so that the flexures don't go through more than +/- 30 degrees of deflection. If the angular deflection is kept small, the lifetime can be on the order of 1,000,000 cycles.
As for the design process (item 2), ideally the designer would be able to use a simple 3D CAD tool to design a model of a robot, and the geometry would be represented by dimensioning the individual parts in the model. Maybe there should be some parametric primitive kinematic building blocks like four-bar linkages, box frames, etc. that a user could build a robot up from. But the key functionality the tool needs to provide is for the designer to be able to visualize how the robot will move once it's fabricated. This could mean observing (or plotting) the motion of a leg, a wing, or a series of body segments. Ideally, the tool would then generate an unfolding of the design. How this would work is still very vague - maybe the user would assist in the unfolding, maybe there would be an optimization routine that computes optimal unfoldings based on criteria like minimal waste or fewest pieces (I would *not* constrain the problem to construction from a single monolithic piece as in origami). The biggest problem we have right now is that our design process is totally divorced from fabrication. Even if we went to the trouble of extruding individual thin plates in Solidworks and creating an assembly for visualizing the kinematics of a mechanism, that particular representation doesn't transfer easily to the fabrication process because it's essentially monolithic.
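As an example of what I mean by a kinematic building block: a four-bar reduces to circle-circle intersection, so even a rough C# sketch like the one below (all names are placeholders, nothing to do with any existing tool) is enough to plot how a joint moves. Sweeping crankAngle and recording the returned point traces the coupler motion before anything is fabricated.

    using System;
    using Rhino.Geometry;

    // Ground pivots A, D; link lengths: crank a (A-B), coupler b (B-C), rocker c (C-D).
    // Returns joint C for a given crank angle, or null if the linkage can't close there.
    Point3d? SolveFourBar(Point3d A, Point3d D, double a, double b, double c, double crankAngle)
    {
      Point3d B = A + a * new Vector3d(Math.Cos(crankAngle), Math.Sin(crankAngle), 0);
      Vector3d BD = D - B;
      double d = BD.Length;
      if (d > b + c || d < Math.Abs(b - c)) return null; // circles around B and D don't meet
      double along = (b * b - c * c + d * d) / (2 * d);  // distance from B toward D
      double h = Math.Sqrt(Math.Max(0.0, b * b - along * along));
      BD.Unitize();
      Vector3d perp = new Vector3d(-BD.Y, BD.X, 0);      // 90-degree rotation in the plane
      return B + along * BD + h * perp;                  // one of the two assembly branches
    }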
Item 3: The 2D drawing is simply a drawing done manually in Solidworks. There are different layers for flexure cuts, outline cuts, and potentially any cuts to be made in the plastic flexure layer. Depending on the robot, there may be many separate pieces for different parts and linkages in a single robot. For example, the drawing for a robot containing a four-bar linkage may have the linkage laid out as a physically separate piece consisting of five rigid links connected by four flexure hinges. During assembly, the designer would then fold up that linkage and insert it into the robot wherever it's supposed to go. If you're curious you can see some sample 2D drawings for older designs here: http://robotics.eecs.berkeley.edu/~ronf/Prototype/ under the "Example Structures" heading.
I noticed Kangaroo seems to be a popular choice for physical simulations. I don't really even need to include forces like bending resistance - I'm happy to allow the design tool to approximate flexures as pin joint-type hinges. Once the design is unfolded, the details of how to cut the flexures could be worked out in a post-processing step. I wouldn't expect the tool to be able to realistically simulate the bending of the hinges.
I'm going to have to dig a lot deeper into understanding Grasshopper and Kangaroo. I only just got started with Grasshopper today by following the folding plate tutorial on wa11ace.com.au. …
now.
This V4 can sense if you feed it your points and uses these instead of p1,p2,p3 (it's a prelude for V5, which uses DataTrees of points, making any surface subdivision a reality). Do the following: sample a triad of your points (NOT internalized) and feed the C#. Then ... start dragging these Rhino points around (the C# responds accordingly). See any difference?
The topology:
Well, the whole fractal logic (in this case) is to have 3 pts on hand (call them p1,p2,p3 : red, green, blue) and then project the "right" one, say p3, onto the Line (p1,p2) > do this > do that ... blah blah.
But ... what p3? That's the 1M question. Here, for instance, the right p3 (blue) is (by accident) the 3rd point entered (the recursive "projection" logic is obvious):
but if you drag the points around a bit: p3 is now different (the C# does this by synchronously sorting the triangle angles per point VS the points). Numbers are used to indicate that "switch": (0 for the new p1, 1 for the new p2, 2 for the new p3 ... etc). Compare with the initial points (red = ex p1, green = ex p2, blue = ex p3).
and again different:
The 1M question:
In fractal thinking the big thing is when to stop: I could obviously control that with a counter ... but here the requirement is the tile min size (reached within an unpredictable number of recursions): this is what the stop logic used here does.
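BTW: a naive toy version of that stop logic (NOT the V4 - no angle sorting, no DataTrees; names like Subdivide/minEdge are just for the demo). Drop the vertex opposite the longest edge onto that edge and recurse until the longest edge is under the tile min size:

    using System;
    using System.Collections.Generic;
    using Rhino.Geometry;

    List<Polyline> tiles = new List<Polyline>();

    void Subdivide(Point3d p1, Point3d p2, Point3d p3, double minEdge)
    {
      double e12 = p1.DistanceTo(p2), e23 = p2.DistanceTo(p3), e31 = p3.DistanceTo(p1);
      double longest = Math.Max(e12, Math.Max(e23, e31));
      if (longest < minEdge) // stop: tile is small enough
      {
        tiles.Add(new Polyline(new[] { p1, p2, p3, p1 }));
        return;
      }
      // relabel so (p1,p2) is the longest edge: the projection of p3 then
      // falls strictly inside it and the recursion must terminate
      if (longest == e23) { Point3d t = p1; p1 = p2; p2 = p3; p3 = t; }
      else if (longest == e31) { Point3d t = p2; p2 = p1; p1 = p3; p3 = t; }
      Point3d q = new Line(p1, p2).ClosestPoint(p3, true); // project p3 onto Line(p1,p2)
      Subdivide(p1, q, p3, minEdge); // recurse on the two sub-triangles
      Subdivide(q, p2, p3, minEdge);
    }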
The 1B question:
So ... implementing fractal logic (against DataTrees of points) in a parametric environment ... raises a lot of questions: each time the size of the start triad varies ... whilst the stop condition stays constant: meaning that with a little bit of "good" luck you can reach an incredibly high number of tiles (computer out of memory > Adios Amigos).
Obviously I'm taking all possibilities into account, especially big projects > big facades > millions (or zillions) of tiles > Armageddon > ....
more soon
…
ng in Grasshopper?
As a general recommendation for developers in Grasshopper who are writing a performance-sensitive part of their library (please note: often the performance-sensitive part is very limited): write it in C#, or maybe even C, or maybe even assembly :). Of course, the closer to the machine you are, the easier it is to harness all the minimal optimizations. However, there is always a compromise between "getting things done" and "making them best", and this boundary is not very easy to find, right?
If you want significant speed improvements for numerical calculations, I would at least recommend developing a compiled C# component using Visual Studio or SharpDevelop. The reason is: in order to provide the line number of possible errors, Grasshopper compiles C# scripts in debug mode! They will be much less optimized than they could be. This does not preclude keeping the project open-source, if that is one of your goals.
Regarding the actual list:
1) Yes, the implied loop will probably be slower than a simple for loop. This is because Grasshopper has to keep track of more things than the ones you would consider with your knowledge of your very special case. However, a factor of 10 is simply not acceptable and is likely a symptom of something else. In fact, I think I remember fixing a bug around that in Rhino WIP. However, it appears to still be slower there as well. I've added a bugtracking item here.
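For example (a trivial sketch; values and A are just the script component's own parameter names, with the input set to List Access), doing the loop yourself instead of letting Grasshopper run the component once per item:

    // sum of squares over the whole list in one solve
    double sum = 0.0;
    for (int i = 0; i < values.Count; i++)
      sum += values[i] * values[i]; // any per-item work goes here
    A = sum;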
2) If you are able to do all the casts that are involved yourself, the way Grasshopper does them, then write your code that way. For example, if you supply a curve to an input with a number hint, Grasshopper computes the length of the curve. Somewhere there has to be an "if" that checks whether the input is a curve (or some similar construct). This aid for designers is what slows down the hinted input.
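Conceptually, the hint does something like this per item (my sketch, not the actual Grasshopper source):

    using System;

    double AsNumber(object input)
    {
      if (input is double d) return d;
      if (input is Rhino.Geometry.Curve crv) return crv.GetLength(); // curve -> its length
      throw new ArgumentException("no cast to a number available");
    }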
3) Grasshopper has to keep side effects at bay. For example, suppose components B and C are both connected to outputs of A. If you edit data in component B, and that data came from A, you of course expect that data to be unchanged in C. This means that, even for lists of numbers, Grasshopper has to perform a deep copy of the output for each input. Otherwise, what happens if B sorts the list and C finds the index of the smallest number? This could be improved if GH components had some way of flagging themselves as non-data-mutating (constant). If you supply special types that Grasshopper has no way of copying, things will likely speed up. But be aware of possibly very annoying side effects creeping in if data is not immutable. Another option is performing the copy "optimally", just where you need it, because you know where your data is used. This is not information that is available to GH at present.
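A self-contained illustration of the side effect the deep copy prevents (names are mine):

    using System.Collections.Generic;
    using System.Linq;

    var outputOfA = new List<double> { 3.0, 1.0, 2.0 };
    var inputOfB = outputOfA; // no copy: B aliases A's data
    var inputOfC = outputOfA; // C aliases the same list
    inputOfB.Sort();          // B "edits" its input in place...
    int smallest = inputOfC.IndexOf(inputOfC.Min()); // ...and C silently sees the sorted list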
Does this help?
Thanks again for your input,
Giulio
--
Giulio Piacentino
for Robert McNeel & Associates
giulio@mcneel.com…
onents to the latest version and, as you can see, everything works fine:
Over the next week, I am going to be adding several new capabilities to the Adaptive model in LB+HB that are not an official part of the ASHRAE or ISO standards but are endorsed by the experts and researchers who have helped build the standards. Mostapha, I will be sure to have the component give a comment any time these un-standardized methods are used, and I will be clear that I have made them a part of LB because I have found these insights from new research to be particularly helpful to design processes for passive architecture. Also, I think many of us recognize that both ASHRAE and ISO were initially founded to produce standards for conditioned or refrigerated spaces and that, understandably, they have been slower to address naturally conditioned buildings. Among the features that I will be adding in:
1) You will have the option of using either the American ASHRAE adaptive model or the ISO EN-15251 model (see the CBE's comfort tool for a visual of the differences - http://comfort.cbe.berkeley.edu/).
2) In addition to a different comfort polygon, the European standard also uses a "running mean" outdoor temperature instead of the average monthly outdoor temperature. This "running mean" is computed by looking at the daily average temperatures over roughly the last week and weighting each of them by how recent it is. This makes more sense to me than the ASHRAE method and addresses the issue that you bring up, Alejandro. Needless to say, the updated adaptive model will allow you to use either a running mean or an average monthly temperature with either the American or the European polygon.
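For those who want the details, the simplified version of the running mean, as I read EN 15251, weights the last seven daily means like this (a sketch; the component's exact code may differ, and RunningMean is just my name for it):

    // [0] = yesterday ... [6] = seven days ago
    double RunningMean(double[] dailyMeans)
    {
      double[] w = { 1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2 };
      double sum = 0.0;
      for (int i = 0; i < 7; i++) sum += w[i] * dailyMeans[i];
      return sum / 3.8; // 3.8 = sum of the weights
    }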
3) The WIP adaptive chart currently has an option for a "levelOfConditioning". This input allows you to make use of research that was conducted alongside the initial development of the adaptive model, which showed that the findings did not contradict the PMV model when people were surveyed in fully conditioned buildings. This parallel research ended up producing a different correlation between outdoor and desired indoor temperatures, with a much shallower slope than the official adaptive model for fully naturally-ventilated buildings. The levelOfConditioning lets you make a custom correlation for full natural ventilation, full conditioning, or (presumably) somewhere in between for a mixed-mode building. This levelOfConditioning will become an official input for all LB components using the adaptive model (not just the chart, as at the moment).
At the end of all of this, I will put together a new video series on Adaptive comfort so that we are all on the same page about how to use the model.
-Chris…
Defines enumerated values for all implemented corner styles in curve offsets.
Namespace: Rhino.Geometry
Assembly: RhinoCommon (in RhinoCommon.dll) Version: 5.1.30000.12 (5.0.20693.0)
Syntax
C#
public enum CurveOffsetCornerStyle
Visual Basic
Public Enumeration CurveOffsetCornerStyle
Members

Member name  Value  Description
None         0      The default value.
Sharp        1      Offsets and extends curves with a straight line until they intersect.
Round        2      Offsets and fillets curves with an arc of radius equal to the offset distance.
Smooth       3      Offsets and connects curves with a smooth (G1 continuity) curve.
Chamfer      4      Offsets and connects curves with a straight line between their endpoints.
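Examples
C#

    // Usage sketch: offset a closed rectangle in the world XY plane with
    // rounded corners (the tolerance value here is arbitrary).
    using Rhino.Geometry;

    Curve profile = new Polyline(new[] {
      new Point3d(0, 0, 0), new Point3d(10, 0, 0), new Point3d(10, 5, 0),
      new Point3d(0, 5, 0), new Point3d(0, 0, 0) }).ToNurbsCurve();

    Curve[] offsets = profile.Offset(Plane.WorldXY, 1.0, 0.01, CurveOffsetCornerStyle.Round);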
…
... er ... hmm ... I would strongly suggest Plan B:
How to get the gist of C# in just 123 (+1) easy steps (I've already posted that 3-4 times if memory serves well):
Step 0: get rid of the computer (press the OFF button), buy some cigars:
Step 1: get the cookies
The bible PlanA: C# In depth (Jon Skeet).
The bible PlanB: C# Step by step (John Sharp).
The bible PlanC: C# 5.0/6.0 (J/B Albahari) > my favorite
The reference: C# Language specs ECMA-334
The candidates:
C# Fundamentals (Nakov/Kolev & Co)
C# Head First (Stellman/Greene)
C# Language (Jones)
Step 2: read the cookies (computer OFF)
Step 3: re-read the cookies (computer OFF)
...
Step 120: re-read the cookies (computer OFF)
Step 121: turn ON computer
Step 122: do something
Step 123: shut down computer permanently, forget all that
May The Force (the Dark Option) be with you.…
reaky thing consisting of triangulated "modules" (i.e. an assembly out of this, this and that) where the exterior edges ARE always under tension (= SS 304/316 cables OR nylon) and the interior ones MAY be under compression (= steel, aluminum, wood, carbon) OR ... some of them may be under tension. Bastardized T trusses deviate a bit from theory ... but who cares? (not me anyway). T trusses have many variants (but as the greatest ever said: Less is More).
2. Large scale T for AEC is the art of the pointless since it costs around the GNP of Nigeria. Here are some indicative components from a module of a multi-adjustable TX system costing (the module) ~ the price of my Panigale (Google that):
The above was mailed to a friend who has MIT (yes, that MIT: the top dog) in his sights ... therefore he needs some appropriate "credentials", he he.
3. The distance that separates the above from the demo TDT node provided is around 666.666 miles - but we don't care: we are after Art, not some testimony to vanity.
4. On purpose I've used a smallish ring to give you a clear indication of constraint numero uno in truss design: CLASH matters.
5. You'll need:
(a) A decision related with the tensioners (classic Norseman + SS cables or nylon machined thingies?).
(b) A machinist who can do elementary stuff (like the adapters) and can weld this to that (the "ring" for instance). His abilities need rate only 1 on a scale of 100. If the fella has a computer (not a CRAY) and knows what a 3dPDF is (hmm) ... well ... use that way to communicate with him PRIOR to designing anything: he must agree on the parts BEFORE the whole is attempted (as a design in GH or in some other app).
(c) A carpenter with a wood lathe for the obvious. BTW: BEFORE doing any TDT attempt > ask the carpenter about the available wood strut sizes. Contrary to popular belief, DO NOT varnish the wood (use exterior alkyd/oil stains from some top maker like the notorious US company PPG).
http://www.ppgpaints.com/products/paints-stains-data-sheets
(d) Good quality cigars (and espresso) plus some classic music (ZZTop, PFloyd, Cure, Stones, U2 etc etc) during the assembly.
(e) Faith to the Dark Side (see my avatar).
May the Force (the dark option) be with you.…
l, you can find examples of parametric design using LB/HB, specifically the HB component pollinator workflows.
In these examples, a GH component (data recorder) is used to locally store either input parameters or output values of different model configurations and transmit them to pollinator. I can imagine, depending on how your facade is made parametric in GH, that you could save those input parameters (e.g. angle of surfaces or height of extrusion) and output variables for each iteration (e.g. annual shading).
This is a search process through the design space. I do think that if you set up the model this way, it would be OK that the components in the PV workflow reset after each iteration, as the results would be saved. There is even a really good visualization platform Mostapha has shared to go along with pollinator.
You can find examples of these workflows in the forum; simply search for pollinator. I have one that I shared somewhere as well; although it does only rudimentary things, it should help.
This design space approach is a bit different from the optimization approach utilizing components like Galapagos. It gives you an idea of the space of possible designs and allows you to compare alternatives. Plus, it usually allows me to avoid all these issues of losing results between components in the workflow.
I also find it very handy and much more efficient than simply letting a component optimize everything for me. However, the number of iterations can increase almost exponentially (or is it geometrically? I am always bad at this) with the range and number of your input parameters: for example, four parameters with ten values each already gives 10^4 = 10,000 iterations. So, if each square on the wall has more than a couple of values for each of a few input parameters, I would expect this to take a long time. Thankfully, the components in the workflow will let you know exactly how many iterations there will be.
If this method is interesting to you and you follow it, I would suggest a few things to hasten the process, like utilizing only the squares above and to the sides of the PV panel (since the others won't really affect shading), selecting just 2 or 3 characteristic angles for the extrusions, and perhaps approximating energy production through annual shading numbers (since I imagine they have an almost linear relationship).
I do hope that I have understood what you want to do and the above information helps. I'm sure Djordje will give much better feedback on the specifics of the PV workflow. I will try to keep this page saved so that I can send over the example once I'm back at work in the middle of next week.
Good luck!
Kind regards,
Theodore.
…
st variety of papers (mostly related to LIDAR airborne-sampled clouds) ... but ... hmm ... no code (other than some "abstract" algos that may (or may not) work). Reason? That's a very hot cake these days: from reverse engineering to DARPA-funded future defense systems and up to cruise-missile pattern-recognition algos.
The solution (obviously doable only via code) is the so-called flat hard clustering ... where points are sampled into clusters based on the coPlanarity "rule". For large amounts, recursive octTree subdivisions (an oriented box divided into 8 "partitions") are used and then pts are processed in parallel (and then clusters are re-evaluated in order to "absorb" other clusters with the same plane A,B,C,D vars etc etc).
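Strictly as a teaser, the "rule" in its most naive greedy form (no octTrees, no parallel stuff, and order-sensitive - that's exactly what the re-evaluation phase is for; names are mine):

    using System;
    using System.Collections.Generic;
    using Rhino.Geometry;

    // A point joins the first cluster whose fitted plane lies within tol;
    // otherwise it seeds a new cluster. Planes are refitted as clusters grow.
    List<List<Point3d>> FlatClusters(List<Point3d> pts, double tol)
    {
      var clusters = new List<List<Point3d>>();
      var planes = new List<Plane>();
      foreach (Point3d p in pts)
      {
        int hit = -1;
        for (int i = 0; i < clusters.Count; i++)
        {
          if (clusters[i].Count < 3) { hit = i; break; } // any 3 pts are coplanar
          if (Math.Abs(planes[i].DistanceTo(p)) < tol) { hit = i; break; }
        }
        if (hit < 0) { clusters.Add(new List<Point3d>()); planes.Add(Plane.Unset); hit = clusters.Count - 1; }
        clusters[hit].Add(p);
        if (clusters[hit].Count >= 3 &&
            Plane.FitPlaneToPoints(clusters[hit], out Plane fit) == PlaneFitResult.Success)
          planes[hit] = fit; // refit the cluster plane as it grows
      }
      return clusters;
    }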
See what's happening in a very carefully made test point collection:
3.7 ms and the "ideal" clustering (7 search loops VS the max 42M theoretical threshold):
Depending on the pts "preparation" ... considerably more time/search loops are required ... and ... well ... also "valid" clusters (4 points and up) are made:
So "ideally" speaking in your case:
1. Mesh face center points (or alternatively: mesh vertices) are sampled into a pts collection.
2. Hard flat coPlanarity clustering is attempted, yielding pts/planes in equivalent DataTrees.
3. Planar Breps are made with respect to the planes (like the black things captured above) and sampled, say, into a breps List.
4. The method Brep[] solids = Brep.CreateSolid(breps); is used to attempt to create your desired "engulfing" brep. Mind you, this method is very slow (other waaaay faster approaches are also available).
…
ace
4 : Waterplane inertia
5 : "Finesse" f = WLL/(V^1/3) _ f stands for "frog" also, I think you use WLL^3/V in the English speaking world. I could have used WLL directly because the hull is set at the desired displacement before analysis - no use to test at random displacement indeed!
---First try on a 40' cruiser with jittery control points and bad volume distribution - a 30-second job, to a tee.
Red dots are original points, white mesh is current surface control polygon.
INPUT
After just a few steps I was able to find good candidates. Check the video.
OUTPUT
---Second try on an Open 60' hull. This was a Rhino modeling exercise for students, so it's already clean and realistic, and I gave it the same objective values as the original to see if it would land on its feet. In this case I locked the edge points.
I was amazed to see that it works!
INPUT
OUTPUT
See the nice diagram! All points closest to the wet surface axis have good CP and position. On the left, very round hulls with poor stability (purple) and short WLL (big); on the right, wide hulls with higher water resistance (cyan) and longer WLL (small).
This is only one generation. With a few more steps it zooms in where the two sides meet. The two surfaces actually cross, and I very quickly found an individual dominant on both the wet surface and stability fronts.
I LIKE IT :)))
========
…