an almost planar tissue (your case) can cause a variety of issues, up to an un-doable state (metal parts/components grow in size as well, for no reason). See the forces estimated by FF below.
2. Therefore I strongly suggest considering Plan B: (a) mastermind a secondary "anchor" capability in order to achieve a far more stable system, (b) use a mount design that can support this (and comply with your attractor concept). Here's a variable custom mount system (mostly machined AND not cast) that is suitable for the scope (Rhino reads the stp file OK ... but makes a colossally big file - thus I attach the original here).
3. At first sight lots of things in this system appear "odd". For instance: is it stable? Why are these double cables used? How far can it be adjusted? (That's a classic case for feature-driven parametric design - not doable with Rhino.)
4. This concept (only the strut axes exported) is tested in FORMFINDER and some other far more complex membrane apps that I use quite often (not RhinoMembrane). Here is what FF tells us:
Observe a different kind of "stress" when this is converted to radial type:
5. If you insert the stp file into the Rhino file provided (exactly as exported from FORMFINDER - no mods of mine of any kind) you'll see what goes where (and why). That way the usage of double cables is rather obvious (and a lot of other things - for instance the way that the struts achieve "equilibrium"; see the slots in the base mount plate).
6. If this approach is worth considering, your definition requires some serious rethinking (far simpler/more manageable, with the drawback that the real parts are "static": they can adjust only as far as this particular solution allows them to - controlling them parametrically is clearly impossible with the current state of R/GH capabilities).
All in all: this case works because the cables pull the anchor points downwards and the struts push them upwards.
more in a while
…
s mostly related to panelization. Panelization means many things, for instance (1.1) designing an aluminum facade system (most common case: "hinged" extrusion profiles that contain opaque or transparent materials - the "facets"), (1.2) designing the insulation and final "coating" in roofs, (1.3) ... (1.n) continue ad infinitum.
2. Let's stick to the least understood (and least glamorous) part: topic (1.2). The best core material for the job is FOAMGLAS:
http://www.foamglas.co.uk/building/applications/
3. Most ignorants in our trade believe that the main point/task of thermal insulation is the U-value thing. But in fact Dew Point (DP) management is the most important of them all (DP = the critical temperature at which the relative humidity reaches saturation). Thus we arrive at the compact "roof" (or some compact "part" of the AEC thing) matter: (3.1) Dew point INSIDE the thermal insulation, (3.2) no thermal bridges, (3.3) no air from the application medium (say plywood, corrugated/flat sheets, special Foamglas Px panels etc etc) up to the waterproofing membrane(s) (say 2 layers of SBS bituminous membranes). Here's the most typical case of them all (special tapered inserts not shown - notice the cladding fixing method without perforating the sheets; no other insulating material can do that):
4. The above image brings us directly to Kangaroo matters (if we add the "liquid" thing, meaning no linear geometry around). By "liquid" I mean that our working surface is no longer "flat":
In particular we must: (4.1) test if the corrugated sheets can follow the curvature (they can, up to a point - see the curvature sketch below), (4.2) test if the FOAMGLAS panels (straight "boxes") can safely AND FULLY adhere to the medium without spending the GNP of Nigeria to do it (*), (4.3) test if the VM Zinc (or Kalzip) cladding systems can cut the mustard - they are more flexible than the corrugated sheets (and can be tapered on the fly, Germans are very innovative on that matter) ... but ... well ... you understand where the issue is, I do hope.
(*) you can use 85/25 bitumen (cheap and a nightmare to apply) or PC500 (very expensive and easy to apply). Obviously some mechanical fixing is required as well.
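On the 4.1/4.3 side, a minimal sketch of the curvature check mentioned above: sample the surface on a grid and compare the tightest principal radius of curvature against the sheet's minimum bend radius (minBendRadius is a placeholder - the real value comes from the sheet supplier):

using Rhino.Geometry;

bool SheetCanFollow(Surface srf, double minBendRadius, int samples = 20)
{
    Interval u = srf.Domain(0), v = srf.Domain(1);
    for (int i = 0; i <= samples; i++)
    {
        for (int j = 0; j <= samples; j++)
        {
            SurfaceCurvature sc = srf.CurvatureAt(u.ParameterAt(i / (double) samples),
                                                  v.ParameterAt(j / (double) samples));
            if (sc == null) continue;   // evaluation failed (singularity etc.), skip

            // Kappa(0) / Kappa(1) are the principal curvatures; radius = 1 / |kappa|
            for (int k = 0; k < 2; k++)
            {
                double kappa = System.Math.Abs(sc.Kappa(k));
                if (kappa > 1e-12 && 1.0 / kappa < minBendRadius) return false;
            }
        }
    }
    return true;
}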
And what is the most important test of them all? Well ... the 4.2 thing, what else?
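As an aside on the DP definition in point 3, here's a minimal sketch of the usual Magnus approximation for the dew point, in case anyone wants a quick number to test a build-up against (the constants are the common textbook ones, nothing project-specific):

double DewPoint(double tempC, double relHumidityPercent)
{
    const double b = 17.62, c = 243.12;   // Magnus constants for deg C
    double gamma = System.Math.Log(relHumidityPercent / 100.0) + b * tempC / (c + tempC);
    return c * gamma / (b - gamma);       // dew point temperature in deg C
}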
more soon.
…
now.
This V4 can sense if you feed it with your own points and uses these instead of p1, p2, p3 (it's a prelude to V5, which uses DataTrees of points, making any surface subdivision a reality). Do the following: sample a triad of your points (NOT internalized) and feed them to the C#. Then ... start dragging these Rhino points around (the C# responds accordingly). See any difference?
The topology:
Well, the whole fractal logic (in this case) is to have 3 pts on hand (call them p1, p2, p3: red, green, blue) and then project the "right" one, say p3, onto the Line (p1, p2) > do this > do that ... blah blah.
But ... which p3? That's the 1M question. Here, for instance, the right p3 (blue) is (by accident) the 3rd point entered (the recursive "projection" logic is obvious):
But if you drag the points around a bit, p3 is now different (the C# does this by synchronously sorting the triangle angles per point against the points). Numbers are used to indicate that "shift": 0 for the new p1, 1 for the new p2, 2 for the new p3 ... etc. Compare with the initial points (red = ex p1, green = ex p2, blue = ex p3).
and again different:
The 1M question:
In fractal thinking the big thing is when to stop: I could obviously control that with a counter ... but here the requirement is a minimum tile size (within an unpredictable number of recursions): this is what the stop logic used here does.
The 1B question:
So ... implementing fractal logic (against DataTrees of points) in a parametric environment ... raises a lot of questions: because each time the size of the start triad varies ... whilst the stop condition is constant: meaning that with a little bit of "good" luck you can reach an incredibly high number of tiles (computer out of memory > Adios Amigos).
Obviously I'm talking about having all possibilities in mind, and especially big projects > big facades > millions (or zillions) of tiles > Armageddon > ....
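For anyone who wants to test the stop logic in isolation, here's a stripped-down sketch of the projection recursion (the angle sorting that picks the "right" p3 is omitted, the min-size test is just the tile perimeter, and maxDepth is my extra seatbelt against the out-of-memory scenario - none of this is the actual C# discussed above):

using System.Collections.Generic;
using Rhino.Geometry;

void Subdivide(Point3d p1, Point3d p2, Point3d p3,
               double minSize, int depth, int maxDepth, List<Polyline> tiles)
{
    Polyline tile = new Polyline(new[] { p1, p2, p3, p1 });
    if (tile.Length < 3.0 * minSize || depth >= maxDepth)   // stop: tile small enough (or seatbelt hit)
    {
        tiles.Add(tile);
        return;
    }

    Line baseLine = new Line(p1, p2);
    Point3d q = baseLine.ClosestPoint(p3, true);            // project p3 onto Line(p1, p2)

    Subdivide(p1, q, p3, minSize, depth + 1, maxDepth, tiles);   // recurse on the two halves
    Subdivide(q, p2, p3, minSize, depth + 1, maxDepth, tiles);
}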
more soon
…
ng in Grasshopper?
As a general recommendation for developers in Grasshopper who are writing a part of their library which is performance-sensitive (please note: often the performance-sensitive part is very limited), I suggest writing it in C#, or maybe even C, or maybe even assembly :). Of course, the closer to the machine you are, the easier it will be to harness all the minimal optimizations. However, there is always a compromise between "getting things done" and "making them best", and this boundary is not very easy to catch, right?
If you want significant speed improvements for numerical calculations, I would at least recommend developing in C# in a compiled component, using Visual Studio or SharpDevelop. The reason is: in order to provide the line numbers of possible errors, Grasshopper compiles C# scripts in debug mode! They will be much less optimized than what is possible even with today's technology. This does not preclude keeping the project open-source, if that is one of your goals.
Regarding the actual list:
1) Yes, the implied loop will probably be slower than just a simple for loop. This is because Grasshopper code has to keep track of more things than the ones you could be considering with your knowledge of your very special case. However, a factor of 10 is simply not acceptable and is likely a symptom of something else. In fact, I think I remember fixing a bug around that in Rhino WIP. However, it appears to still be slower there as well. I've added a bugtracking item here.
2) If you are able to do all the casts that are involved, and do them as Grasshopper does, please write your code that way. For example, if you supply a curve to an input with a number hint, Grasshopper computes the length of the curve. There has to be an "if" that checks whether the input is a curve somewhere (or some similar construct) - see the first sketch after this list. This aid for designers is what slows down the hinted input.
3) Grasshopper has to keep side effects at bay. For example, components B and C are both connected to outputs of A. If you edit data in component B, and that data came from A, you of course expect that data to be unchanged in C. This means that, even for lists of numbers, Grasshopper has to perform a deep copy of the output for each input. Otherwise, what happens if B sorts the list and C finds the index of the smallest number (see the second sketch after this list)? This could be improved if GH components had some way of flagging themselves as non-data-mutating (constant). The fact that, by supplying special types, Grasshopper has no way of performing copies will likely speed things up. But be aware of possibly very annoying side effects creeping in if the data is not immutable. Another option is performing the copy "optimally", just where you need it, because you know where your data is used. This is not information that is available to GH at present.
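On (2), a minimal sketch of the kind of branching a number hint has to perform (the method name and the fallback cases are mine, not Grasshopper's internals):

using Rhino.Geometry;

double AsNumber(object input)
{
    if (input is double) return (double) input;               // already a number: cheap path
    if (input is int)    return (int) input;                  // integer: just widen
    if (input is Curve)  return ((Curve) input).GetLength();  // curve supplied: take its length
    throw new System.InvalidCastException("no sensible conversion to a number");
}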
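On (3), a tiny sketch of why that per-input copy matters (plain .NET lists, nothing GH-specific):

using System;
using System.Collections.Generic;

class CopySideEffects
{
    static void Main()
    {
        var fromA = new List<double> { 5.0, 1.0, 3.0 };   // data produced by "A"

        var forB = new List<double>(fromA);               // copy handed to "B"
        var forC = new List<double>(fromA);               // copy handed to "C"

        forB.Sort();                                      // "B" sorts its data in place
        Console.WriteLine(forC.IndexOf(1.0));             // "C" still sees A's order: prints 1

        // Without the two copies, B and C would share one list and the
        // index of the smallest number seen by C would silently become 0.
    }
}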
Does this help?
Thanks again for your input,
Giulio
--
Giulio Piacentino
for Robert McNeel & Associates
giulio@mcneel.com
…
our students could taste first hand the Apocalypse (and/or the brave new world and/or the animal farm - depending on your point of view, he he).
2. First ... take a break and spend some time playing with the def attached. Of course it comes straight from the Dark Side (no components of any kind). But you know what? ... sooner or later your students must obey the Dark Side ... or they'll go extinct (the future is galloping, you know ... I mean, a few years from now anyone not speaking some programming language > Homo heidelbergensis).
3. This thingy attached works in 2 modes: (a) design a(ny) pattern (on a "flat" Plane.WorldXY) or (b) apply a(ny) pattern (to any given surface List).
Start from here (diamond pattern, like the one used by you):
Apply random Z noise (other pattern used):
Or use surfaces (to make frames or their content among other things):
Note: Although the def attached MAY appear off-topic ... there's a reason (other than using any pattern you like) that I provide this to you: because that way we can totally control nodes, edges and "facets" and therefore extract any plane imaginable and therefore place/manage any imaginable profile.
Note: Of course, using the make-frames capability (and extruding these BrepFaces AT ONCE on both sides) we could obtain "autonomous" [monocoque, so to speak] modular load-bearing "panels" ready for assembly (instead of beams + nodes + plates + cats + dogs + why??) ... but this is not exactly what you've asked ... he he.
more soon…
ints. Anyway this is made for AEC purposes (wavy roofs/envelopes and the like) and is classified as internal (but I could provide a "light" version).
To give you a very rough idea: the C# first rebuilds any input list of nurbs > then samples the control points into a tree > then excludes (or not) the "peripheral" points (case: surfaces closed in U/V) > then "picks" some of them according to a rather vast variety of options (~30) > then modifies these either individually (that's only possible with code and it's a bit tricky) or via any collection of push/pull attractors or randomly or ... > then "joins" the 2 sets together (modified + unmodified) > and finally makes the new nurbs. Only 456 lines of code, that one.
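To make the "modifies these via attractors" step a bit more tangible, here's a minimal sketch of that single step for one surface (the falloff rule and the names are mine, not the internal component's):

using Rhino.Geometry;

NurbsSurface PullControlPoints(NurbsSurface srf, Point3d attractor, double pull)
{
    NurbsSurface result = (NurbsSurface) srf.Duplicate();
    for (int u = 0; u < result.Points.CountU; u++)
    {
        for (int v = 0; v < result.Points.CountV; v++)
        {
            ControlPoint cp = result.Points.GetControlPoint(u, v);
            Vector3d toAttractor = attractor - cp.Location;
            double falloff = 1.0 / (1.0 + toAttractor.Length);            // crude distance falloff
            Point3d moved = cp.Location + toAttractor * (pull * falloff);
            result.Points.SetControlPoint(u, v, new ControlPoint(moved, cp.Weight));
        }
    }
    return result;
}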
With regard to the Dark Side: C# would be my recommendation (P is à la mode, mind) for a vast variety of reasons (less than 10% of them are GH related).
If you decide to cross the Rubicon:
How to go to hell (and stay there) in just 123 easy steps:
Step 1: get the cookies
The bible PlanA: C# In depth (Jon Skeet).
The bible PlanB: C# Step by step (John Sharp).
The bible PlanC: C# 5.0 (J/B Albahari) > my favorite
The reference: C# Language specs ECMA-334
The candidates:
C# Fundamentals (Nakov/Kolev & Co)
C# Head First (Stellman/Greene)
C# Language (Jones)
Step 2: read the cookies (computer OFF)
Step 3: re-read the cookies (computer OFF)
...
Step 121: open computer
Step 122: get the 30 steps to heaven (i.e. hell)
Step 123: shut down computer > change occupation/planet
May The Force (the Dark Option) be with you.
…
onents to the latest version and, as you can see, everything works fine:
Over the next week, I am going to be adding several new capabilities to the Adaptive model in LB+HB that are not an official part of the ASHRAE or ISO standards but are endorsed by the experts and researchers who have helped build the standards. Mostapha, I will be sure to have the component give a comment any time that these un-standardized methods are used, and I will be clear that I have made them a part of LB because I have found these insights from new research to be particularly helpful to design processes for passive architecture. Also, I think many of us recognize that both ASHRAE and ISO were initially founded to produce standards for conditioned or refrigerated spaces and that, understandably, they . Among the features that I will be adding in:
1) You will have the option of using either the American ASHRAE adaptive model or the ISO EN-15251 model (see the CBE's comfort tool for a visual of the differences - http://comfort.cbe.berkeley.edu/).
2) In addition to a different comfort polygon, the European standard also uses a "running mean" outdoor temperature instead of the average monthly outdoor temperature. This "running mean" is computed by looking at the average temperatures over the last week, weighting each of the daily average temperatures by how recent it is (see the sketch after this list). This makes more sense to me than the ASHRAE method and addresses the issue that you bring up, Alejandro. Needless to say, the updated adaptive model will allow you to use either a running mean or the average monthly temperature with either the American or the European polygon.
3) The WIP adaptive chart currently has an option for a "levelOfConditioning". This input allows you to make use of research that was conducted alongside the initial development of the adaptive model, which showed that the findings did not contradict the PMV model when people were surveyed in fully conditioned buildings. This parallel research ended up producing a different correlation between the outdoor and desired indoor temperatures, and this correlation had a much shallower slope than the official adaptive model for fully naturally-ventilated buildings. The levelOfConditioning allows you to make a custom correlation for full natural ventilation, full conditioning or (presumably) somewhere in between for a mixed-mode building. This levelOfConditioning will become an official input for all LB components using the adaptive model (not just the chart, as at the moment).
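For those curious about the weighting, here's a minimal sketch of the exponentially weighted running mean idea behind point 2 (alpha = 0.8 is the value the European standard suggests; the seeding at the start of the series is my simplification):

using System.Collections.Generic;

static double RunningMeanOutdoorTemp(IReadOnlyList<double> pastDailyMeans, double alpha = 0.8)
{
    // pastDailyMeans is ordered oldest-first and ends with yesterday's daily mean;
    // the return value is the running mean outdoor temperature to use for today.
    double rm = pastDailyMeans[0];                             // crude seed for the series start
    for (int i = 1; i < pastDailyMeans.Count; i++)
        rm = (1.0 - alpha) * pastDailyMeans[i] + alpha * rm;   // recency-weighted update
    return rm;
}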
At the end of all of this, I will put together a new video series on Adaptive comfort so that we are all on the same page about how to use the model.
-Chris…
Defines enumerated values for all implemented corner styles in curve offsets.
Namespace: Rhino.Geometry
Assembly: RhinoCommon (in RhinoCommon.dll) Version: 5.1.30000.12 (5.0.20693.0)
Syntax
C#
public enum CurveOffsetCornerStyle
Visual Basic
Public Enumeration CurveOffsetCornerStyle
Members

Member name   Value   Description
None          0       The default value.
Sharp         1       Offsets and extends curves with a straight line until they intersect.
Round         2       Offsets and fillets curves with an arc of radius equal to the offset distance.
Smooth        3       Offsets and connects curves with a smooth (G1 continuity) curve.
Chamfer       4       Offsets and connects curves with a straight line between their endpoints.
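A hedged usage sketch, using the Curve.Offset overload that takes a plane, a distance, a tolerance and this corner style:

using Rhino.Geometry;

Curve[] OffsetSharp(Curve crv, double distance, double tolerance)
{
    // Offset in the world XY plane; Sharp extends the offset pieces until they intersect.
    return crv.Offset(Plane.WorldXY, distance, tolerance, CurveOffsetCornerStyle.Sharp);
}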
…
ach object has a "Source" property (layer, parent, object) - my fix causes it to look at this source property in order to determine where to draw the plot width value from. I was already doing this for color and material, but had neglected to do it for plot width.
2. The "Print Preview" viewport display option is calling the "PrintDisplay" command in Rhino, which you will notice takes a "Thickness" value - this is the conversion factor between plot weights/print widths (in mm) and the number of pixels in absolute screen width. As you note, this is a relative and not an absolute width in model units, so it does not change when you zoom. In most design applications it would be quite strange to specify the print widths of your geometry in absolute units - e.g. setting your lines to be 50 ft thick. In illustrator you are always working in "Paper Space" whereas in Rhino you have to be aware of the differences between Model Space and Paper Space (or Layout Space in Rhino terminology.)
My lineweight preview component operates on the basis of pixels - if you tell it "2" it will display a 2px-wide line irrespective of your zoom. The 4x conversion ratio you note is purely a function of the setting of your PrintDisplay command in Rhino.
3. The good news is my custom preview component ALSO supports "Absolute" lineweights in world-space units - so they create a line that gets fatter when you zoom in and thinner when you zoom out (though it can't get thinner than a pixel, naturally). Set the "Absolute" toggle (the 4th option) on the component - I think it will create the "Illustrator-like" behavior you're looking for, without having to create surfaces from your lines.
4. The dynamic pipeline component updates when the by-object plot weight changes. It does not update when the layer-level plot weight changes. In the end I have had to make some judgment calls about what kinds of changes should trigger a component refresh: too sensitive, and a definition could be forced to recompute unnecessarily on every little change; too insensitive, and you require too many forced refreshes.
In general I have focused on triggering updates from object-level attribute changes (where they conceptually represent data about THIS OBJECT) and NOT from layer-level attribute changes (where they conceptually represent data about a category). The Layer Table is the component that is designed to report changes to layer-level settings - and with "Auto Update" enabled on this component, it will in fact trigger an update on layer-level attribute changes.
With this approach, you may have to match up your geometry to the layers it belongs to, and then use the layer table component to retrieve the plot weight settings. The definition shown below is an example of how to do this. It assumes you are using layer-level plot weights.
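For reference, a minimal sketch of resolving the plot weight that actually applies to an object, following the "Source" property from point 1 (RhinoCommon exposes it as PlotWeightSource; the parent case is left out for brevity):

using Rhino;
using Rhino.DocObjects;

double EffectivePlotWeight(RhinoDoc doc, RhinoObject obj)
{
    ObjectAttributes att = obj.Attributes;
    if (att.PlotWeightSource == ObjectPlotWeightSource.PlotWeightFromObject)
        return att.PlotWeight;                         // width set per object (in mm)
    return doc.Layers[att.LayerIndex].PlotWeight;      // otherwise inherit from the layer
}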
…
st variety of papers (mostly related to LIDAR airborne-sampled clouds) ... but ... hmm ... no code (other than some "abstract" algos that may (or may not) work). Reason? That's a very hot cake these days: from reverse engineering to DARPA-funded future defense systems and up to cruise missile pattern-recognition algos.
The solution (obviously doable only via code) is so-called flat hard clustering ... where points are sampled into clusters based on the coplanarity "rule". For large amounts, recursive octree subdivisions (an oriented box divided into 8 "partitions") are used and then pts are processed in parallel (and then clusters are re-evaluated in order to "absorb" other clusters with the same plane A,B,C,D vars etc etc).
See what's happening in a very carefully made test point collection:
3.7 ms and the "ideal" clustering (7 search loops VS the max 42M theoretical threshold):
Depending on the pts "preparation" ... considerably more time/search loops are required ... and ... well ... also "valid" clusters (4 points and up) made:
So "ideally" speaking in your case:
1. Mesh face center points (or alternatively: mesh vertices) are sampled into a pts collection.
2. Hard flat coplanarity clustering is attempted, yielding pts/planes in equivalent DataTrees (see the sketch after this list).
3. Planar Breps are made with respect to the planes (like the black things captured above) and sampled, say, into a breps List.
4. The method Brep[] solids = Brep.CreateSolid(breps); is used to attempt to create your desired "engulfing" brep. This method is very slow, mind (other waaaay faster approaches are also available).
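A naive sketch of step 2 for the mesh-face-centers route (angleTol is in radians; no octrees, no parallelism, no cluster merging - just enough to test the idea, not the real thing):

using System;
using System.Collections.Generic;
using Rhino.Geometry;

List<List<int>> ClusterFacesByPlane(Mesh mesh, double angleTol, double distTol)
{
    mesh.FaceNormals.ComputeFaceNormals();

    var clusters = new List<List<int>>();   // face indices per cluster
    var normals  = new List<Vector3d>();    // representative unit normal per cluster
    var offsets  = new List<double>();      // representative plane offset (the D in Ax + By + Cz + D = 0)

    for (int f = 0; f < mesh.Faces.Count; f++)
    {
        var fn = mesh.FaceNormals[f];
        var n = new Vector3d(fn.X, fn.Y, fn.Z);
        n.Unitize();
        Point3d ctr = mesh.Faces.GetFaceCenter(f);
        double d = -(n.X * ctr.X + n.Y * ctr.Y + n.Z * ctr.Z);

        int hit = -1;
        for (int c = 0; c < clusters.Count; c++)
        {
            if (Vector3d.VectorAngle(n, normals[c]) < angleTol && Math.Abs(d - offsets[c]) < distTol)
            {
                hit = c;
                break;
            }
        }

        if (hit < 0) { clusters.Add(new List<int> { f }); normals.Add(n); offsets.Add(d); }
        else clusters[hit].Add(f);
    }
    return clusters;
}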
…