as mine but couldn't manage to make it work.
The following script works in the Python editor in Rhino, but not in GhPython.
Note that I have imported a library, and it seems to import correctly in GhPython as well.
--------------------------------------------------------------------------------------------------
from khepri.rhino import *

def iterate_quads(f, ptss):
    return [[f(p0, p1, p2, p3)
             for p0, p1, p2, p3
             in zip(pts0, pts1, pts1[1:], pts0[1:])]
            for pts0, pts1
            in zip(ptss, ptss[1:])]

def iterate_hexagono(pts, n, v):
    return iterate_quads(lambda p0, p1, p2, p3: hexagono_quad(p0, p1, p2, p3, n, v), pts)

def hexagono_quad(p0, p1, p2, p3, n, v):
    def chapa(pts):
        return intersection(extrusion(line(pts), 280),
                            shape_from_ref(v.copy_ref(v.realize()._ref)))
        #return extrusion(line(pts), -40)
    topo = (intermediate_loc(p3, p2) + vx(distance(p3, p2)/4 * n),
            intermediate_loc(p3, p2) - vx(distance(p3, p2)/4 * n))
    base = (intermediate_loc(p0, p1) + vx(distance(p0, p1)/4 * n),
            intermediate_loc(p0, p1) - vx(distance(p0, p1)/4 * n))
    lateral_esq = (intermediate_loc(p3, p0),
                   intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n))
    lateral_dir = (intermediate_loc(p2, p1),
                   intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n))
    conex_1 = (intermediate_loc(p3, p2) - vx(distance(p3, p2)/4 * n),
               intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n))
    conex_2 = (intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n),
               intermediate_loc(p0, p1) - vx(distance(p0, p1)/4 * n))
    conex_3 = (intermediate_loc(p0, p1) + vx(distance(p0, p1)/4 * n),
               intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n))
    conex_4 = (intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n),
               intermediate_loc(p3, p2) + vx(distance(p3, p2)/4 * n))
    return (chapa(topo), chapa(base), chapa(lateral_esq), chapa(lateral_dir),
            chapa(conex_1), chapa(conex_2), chapa(conex_3), chapa(conex_4))

s = prompt_shape("Escolha superficie")
v = prompt_shape("Escolha solido")
iterate_hexagono(map_surface_division(lambda p: p, s, 5, 15), 0.5, v)
---------------------------------------------------------------------------------------------------
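For what it's worth, the iterate_quads helper is plain Python and can be sanity-checked outside Rhino with coordinate tuples standing in for points (the quad function below just collects its corners; nothing here is khepri-specific):

```python
def iterate_quads(f, ptss):
    # Same list logic as in the script above: pair consecutive rows,
    # then pair consecutive points along each pair of rows.
    return [[f(p0, p1, p2, p3)
             for p0, p1, p2, p3
             in zip(pts0, pts1, pts1[1:], pts0[1:])]
            for pts0, pts1
            in zip(ptss, ptss[1:])]

# A 3x3 grid of fake "points" yields a 2x2 grid of quads.
grid = [[(i, j) for j in range(3)] for i in range(3)]
quads = iterate_quads(lambda *corners: corners, grid)
```

So the pure list logic is fine; the failure must be in how GhPython hands over the surface and solid.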
I imported the geometry from another CAD application; the idea is to select the surface and the solid, then iterate a pattern over the surface, constrained inside the solid as an internal structure.
The problem is that the surface comes in with the u, v and normal directions all weird from the other software, so I wanted to pass it through Grasshopper to get more control and also to perform other computations in Gh on the GhPython output. Sorry, maybe I'm overcomplicating. All I want is the Gh inputs working in GhPython.
I'll attach the Gh definition. I need help with the GhPython component; the rest is just me fooling around.
When I try to run the script in GhPython I get:
Runtime error (MissingMemberException): 'NurbsSurface' object has no attribute 'realize'
Traceback:
line 39, in map_surface_division, "<string>"
I'm also attaching the module I've imported.
Any help would be much appreciated, and sorry about my English.
Thanks!
…
cess informing the user the network is incomplete.
I've been thinking for a while about reading in these blobs of incomprehensible data in an attempt to maintain them through an open/save cycle, but I'll never be able to get this process watertight.
2) When you release components, you should try to make sure they are backward compatible with previous releases. For example, if you decide to change the number of inputs/outputs or the type of inputs/outputs, this might well break file IO. What you should do in those cases is:
- Copy-paste the old component source code and change the ComponentGuid property. In essence, you make a different component which will have the changes.
- Change the Exposure property on the old component to be GH_Exposure.hidden. This will hide the component from the interface.
This basically means that when people open a file that uses the old-style component, they'll get the old-style component. If people instantiate the component anew, they'll get the new one.
Grasshopper and its default GHA assemblies feature dozens upon dozens of these hidden components; sometimes there are as many as 4 old-style versions of a single component out there.
3) If you want to store additional data in the ghx file for a specific component, you'll need to override the Read() and Write() methods. Something like this:
Public Overrides Function Write(ByVal writer As GH_IO.Serialization.GH_IWriter) As Boolean
    writer.SetBoolean("MySpecialBooleanValue", m_myBoolean)
    writer.SetString("MySpecialStringData", m_myString)
    Return MyBase.Write(writer)
End Function

and

Public Overrides Function Read(ByVal reader As GH_IO.Serialization.GH_IReader) As Boolean
    m_myBoolean = False 'Default state
    m_myString = String.Empty 'Default state
    reader.TryGetBoolean("MySpecialBooleanValue", m_myBoolean)
    reader.TryGetString("MySpecialStringData", m_myString)
    Return MyBase.Read(reader)
End Function
It is usually possible to make the Reading process smart enough to handle backwards compatibility. You can ask the reader object whether or not a certain value exists and you can then decide whether you can safely use old or new reading logic. So any changes to this part probably don't require you to create a duplicate component and hide the old one.
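The same defaults-first pattern is language-independent. A sketch in Python (hypothetical key names and a plain dict standing in for the GH_IO reader, just to show the control flow):

```python
def read_settings(stored):
    # Start from defaults, then overwrite with whatever the file actually
    # contains; old files simply lack the newer keys, so defaults survive.
    settings = {"MySpecialBooleanValue": False, "MySpecialStringData": ""}
    for key in settings:
        if key in stored:
            settings[key] = stored[key]
    return settings
```

An old-style file deserializes cleanly because every missing value falls back to its default, which is exactly what the TryGet* calls above achieve.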
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 2:34am on February 26, 2011
s mostly related with panelization. Panelization means many things, for instance (1.1) designing an aluminum facade system (most common case: "hinged" extrusion profiles that contain opaque or transparent materials - the "facets"), (1.2) designing insulation and final "coating" in roofs, (1.3) ... (1.n) continue ad infinitum.
2. Let's stick to the least understood (and less glamorous) part : topic (1.2). The best core material for the core job is FOAMGLAS:
http://www.foamglas.co.uk/building/applications/
3. Most ignorants in our trade believe that the main point/task of a thermal insulation is the U thing. But in fact Dew Point (DP) management is the most important of them all (DP = the critical temperature at which relative humidity reaches saturation). Thus we arrive at the compact "roof" (or some compact "part" of the AEC thing) matter: (3.1) Dew point INSIDE the thermal insulation, (3.2) no thermal bridges, (3.3) no air from the application medium (say plywood, corrugated/flat sheets, special Foamglas Px panels etc etc) up to the waterproofing membrane(s) (say 2 layers of SBS bituminous membranes). Here's the most typical case of them all (special tapered inserts not shown - notice the cladding fixing method without perforating the sheets; no other insulating material can do that):
4. The above image brings us directly to Kangaroo matters (if we add the "liquid" thing meaning no linear geometry around). By "liquid" I mean that our working surface is no more "flat":
In particular we must: (4.1) test if the corrugated sheets can follow the curvature (they can, up to a point), (4.2) test if the FOAMGLAS panels (straight "boxes") can safely AND FULLY adhere to the medium without spending the GNP of Nigeria to do it (*), (4.3) test if the VM Zinc (or Kalzip) cladding systems can cut the mustard - they are more flexible than the corrugated sheets (and can be tapered on-the-fly; Germans are very innovative on that matter) ... but ... well ... you understand where the issue is, I do hope.
(*) you can use 85/25 bitumen (cheap and a nightmare to apply) or PC500 (very expensive and easy to apply). Obviously some mechanical fixing is required as well.
And what is the most important test of them all? Well ... the 4.2 thing, what else?
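For the curious: the Dew Point mentioned in (3) can be estimated with the Magnus approximation. This is a generic engineering estimate, not anything project-specific; the coefficients below are one common choice:

```python
import math

def dew_point_c(temp_c, rel_humidity):
    # Magnus approximation; rel_humidity is a fraction in (0, 1].
    # Coefficients a, b are a widely used pair for the 0..60 C range.
    a, b = 17.62, 243.12
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# At 100% RH the dew point equals the air temperature;
# at 20 C / 50% RH it drops to roughly 9.3 C.
```

Keeping that 9.3 C isotherm inside the insulation layer, rather than at a plywood or sheet interface, is the whole point of the compact roof build-up.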
more soon.
…
now.
This V4 can sense if you feed it your own points and uses these instead of p1, p2, p3 (it's a prelude to V5, which uses DataTrees of points, making any surface subdivision a reality). Do the following: sample a triad of your points (NOT internalized) and feed the C#. Then ... start dragging these Rhino points around (the C# responds accordingly). See any difference?
The topology:
Well, the whole fractal logic (in this case) is to have 3 pts on hand (call them p1,p2,p3 : red, green, blue) and then project the "right" one, say, p3 to the Line (p1,p2) > do this > do that ... blah blah.
But ... which p3? That's the 1M question. Here, for instance, the right p3 (blue) is (by accident) the 3rd point entered (the recursive "projection" logic is obvious):
but if you drag the points around a bit, p3 is now different (the C# does this by synchronously sorting the triangle angles per point VS points). Numbers are used to indicate that swap: (0 for the new p1, 1 for the new p2, 2 for the new p3 ... etc). Compare with the initial points (red = ex p1, green = ex p2, blue = ex p3).
and again different:
The 1M question:
In fractal thinking the big thing is knowing when to stop: I could obviously control that with a counter ... but here the requirement is a minimum tile size (within an unpredictable number of recursions): this is what the stop logic used here does.
The 1B question:
So ... implementing fractal logic (against DataTrees of points) in a parametric environment raises a lot of questions: each time the size of the starting triad varies ... whilst the stop condition is constant: meaning that with a little bit of "good" luck you can reach an incredibly high number of tiles (computer out of memory > Adios Amigos).
Obviously I'm taking all possibilities into account, especially big projects > big facades > millions (or zillions) of tiles > Armageddon > ....
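To illustrate the stop logic (min tile size instead of a recursion counter), here is a minimal sketch with midpoint triangle subdivision. Not the actual C#, just the control flow:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def subdivide(tri, min_edge, out):
    # Stop when the largest edge drops below the target tile size;
    # the recursion depth is whatever it takes to get there.
    a, b, c = tri
    if max(dist(a, b), dist(b, c), dist(c, a)) < min_edge:
        out.append(tri)
        return
    ab = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    bc = ((b[0] + c[0]) / 2, (b[1] + c[1]) / 2)
    ca = ((c[0] + a[0]) / 2, (c[1] + a[1]) / 2)
    for t in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        subdivide(t, min_edge, out)

tiles = []
subdivide(((0, 0), (1, 0), (0, 1)), 0.3, tiles)
```

Halving min_edge roughly quadruples the tile count: that is the memory blow-up described above when a big start triad meets a small, constant stop condition.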
more soon
…
ng in Grasshopper?
My general recommendation for developers in Grasshopper who are writing a performance-sensitive part of their library (please note: often the performance-sensitive part is very limited) is to write it in C#, or maybe even C, or maybe even assembly :). Of course, the closer to the machine you are, the easier it will be to harness all the minimal optimizations. However, there is always a compromise between "getting things done" and "making them best", and this boundary is not very easy to catch, right?
If you want significant speed improvements for numerical calculations, I would at least recommend developing with C# in a compiled component using Visual Studio or SharpDevelop. The reason is: in order to provide the line number of possible errors, Grasshopper compiles C# scripts in debug mode! Debug builds are much less optimized than what is possible even with today's technology. This does not preclude keeping the project open-source, if that is one of your goals.
Regarding the actual list:
1) Yes, the implied loop will probably be slower than a simple for loop. This is because Grasshopper code has to keep track of more things than the ones you need to consider with your knowledge of your very special case. However, a factor of 10 is simply not acceptable and is likely a symptom of something else. In fact, I think I remember fixing a bug around that in Rhino WIP. However, it appears to be still slower there as well. I've added a bugtracking item here.
2) If you are able to do all the casts that are involved, and do them as Grasshopper does, then write the code that way. For example, if you supply a curve to an input with a number hint, Grasshopper computes the length of the curve. Somewhere there has to be an "if" that checks whether the input is a curve (or some similar construct). This aid for designers is what slows down hinted inputs.
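The branching that a type hint forces can be sketched like this (a hypothetical helper, not Grasshopper's actual code path; GetLength is the RhinoCommon curve-length method):

```python
def coerce_to_number(value):
    # Every extra accepted type is another test on the hot path:
    # this is the per-item cost the "number" hint pays for convenience.
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        return float(value)
    if hasattr(value, "GetLength"):  # curve-like input: use its length
        return float(value.GetLength())
    raise TypeError("cannot coerce %r to a number" % (value,))

class FakeCurve:
    """Stand-in for a RhinoCommon curve, for illustration only."""
    def GetLength(self):
        return 12.5
```

If your own code knows its inputs are always plain numbers, it can skip all of these checks, which is exactly the speed-up Giulio describes.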
3) Grasshopper has to keep side effects at bay. For example, components B and C are both connected to outputs of A. If you edit data in component B, and that data came from A, you of course expect that data to be unchanged in C. This means that, even for lists of numbers, Grasshopper has to perform a deep copy of the output for each input. Otherwise, what happens if B sorts the list and C finds the index of the smallest number? This could be improved if GH components had some way of flagging themselves as non-data-mutating (constant). Supplying special types that Grasshopper has no way of copying will likely speed things up, but be aware of possibly very annoying side effects creeping in if data is not immutable. Another option is performing the copy "optimally", just where you need it, because you know where your data is used. This is not information that is available to GH at present.
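The side effect being guarded against is easy to reproduce in plain Python, with the same B-sorts/C-searches shape as the example above:

```python
import copy

source = [3, 1, 2]                        # output of component A
shared = source                           # B gets a reference, not a copy
shared.sort()                             # B sorts in place...
bad_index = source.index(min(source))     # ...so C finds the minimum at 0

source = [3, 1, 2]
isolated = copy.deepcopy(source)          # per-input deep copy, as GH does
isolated.sort()                           # B works on its own copy
good_index = source.index(min(source))    # C still sees the original order
```

Without the copy, C reports index 0 purely because B already sorted the list; with it, C correctly reports index 1. That correctness guarantee is what the per-input deep copy buys, at a cost.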
Does this help?
Thanks again for your input,
Giulio
--
Giulio Piacentino
for Robert McNeel & Associates
giulio@mcneel.com…
our students could taste first hand the Apocalypse (and/or the brave new world and/or the animal farm - depending on your point of view, he he).
2. First ... take a break and spend some time playing with the def attached. Of course it comes straight from the Dark Side (no components of any kind). But you know what? ... sooner or later your students must obey the Dark Side ... or they'll go extinct (the future gallops, you know ... I mean, in a few years from now anyone not speaking some programming language > Homo heidelbergensis).
3. This thingy attached works in 2 modes: (a) design a(ny) pattern (in a "flat" Plane.WorldXY) or (b) apply a(ny) pattern (in any given surface List).
Start from here (diamond pattern, like the one used by you):
Apply random Z noise (other pattern used):
Or use surfaces (to make frames or their content among other things):
Note: Although this def attached MAY appear off-topic ... there's a reason (other than using any pattern you like) that I provide this to you: because that way we can totally control nodes, edges and "facets" and therefore extract any plane imaginable and therefore place/manage any imaginable profile.
Note: Of course using the make frames capability (and extruding these BrepFaces AT ONCE both sides) we could obtain "autonomous" [monocoque, so to speak] modular load bearing "panels" ready for assembly (instead of beams + nodes + plates + cats + dogs + why??) ... but this is not exactly what you've asked ... he he.
more soon…
ints. Anyway this is made for AEC purposes (wavy roofs/envelopes and the likes) and is classified as internal (but I could provide a "light" version).
To give you a very rough idea: the C# rebuilds first any input list of nurbs > then samples the control points in a tree > then excludes (or not) the "peripheral" points (case: closed in U/V surfaces) > then "picks" some of them according to a rather vast variety of options (~30) > then modifies these either individually (that's only possible with code and it's a bit tricky) or via any collection of push/pull attractors or randomly or ... > then "joins" the 2 sets together (modified + unmodified) > and finally does the new nurbs. Only 456 lines of code, that one.
With regard to the Dark Side: C# would be my recommendation (Python is à la mode, mind) for a vast variety of reasons (less than 10% of them are GH related).
If you decide to cross the Rubicon:
How to go to hell (and stay there) in just 123 easy steps:
Step 1: get the cookies
The bible, Plan A: C# in Depth (Jon Skeet).
The bible, Plan B: C# Step by Step (John Sharp).
The bible, Plan C: C# 5.0 in a Nutshell (J/B Albahari) > my favorite
The reference: C# Language specs ECMA-334
The candidates:
C# Fundamentals (Nakov/Kolev & Co)
Head First C# (Stellman/Greene)
C# Language (Jones)
Step 2: read the cookies (computer OFF)
Step 3: re-read the cookies (computer OFF)
...
Step 121: open computer
Step 122: get the 30 steps to heaven (i.e. hell)
Step 123: shut down computer > change occupation/planet
May The Force (the Dark Option) be with you.
…
onents to the latest version and, as you can see, everything works fine:
Over the next week, I am going to be adding in several new capabilities to the Adaptive model in LB+HB that are not an official part of ASHRAE or ISO standards but they are endorsed by the experts and researchers who have helped build the standards. Mostapha, I will be sure to have the component give a comment any time that these un-standardized methods are used and I will be clear that I have made them a part of LB because I have found these insights from new research to be particularly helpful to design processes for passive architecture. Also, I think many of us recognize that both ASHRAE and ISO were initially founded to produce standards for conditioned or refrigerated spaces and that, understandably, they . Among the features that I will be adding in:
1) You will have the option of using either the American ASHRAE adaptive model or the ISO EN-15251 model (see the CBE's comfort tool for a visual of the differences - http://comfort.cbe.berkeley.edu/).
2) In addition to a different comfort polygon, the European standard also uses a "running mean" outdoor temperature instead of the average monthly outdoor temperature. This "running mean" is computed by looking at the average temperatures over the last week, weighting each daily average by how recent it is. This makes more sense to me than the ASHRAE method and addresses the issue that you bring up, Alejandro. Needless to say, the updated adaptive model will allow you to use either a running mean or an average monthly temperature with either the American or the European polygon.
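For reference, the simplified form of the EN 15251 running mean applies a fixed series of declining weights to the previous seven daily means. A sketch (the weight series below is my reading of the standard's simplified equation; verify against the standard before relying on it):

```python
def running_mean_outdoor_temp(daily_means):
    # daily_means[0] is yesterday's mean, daily_means[1] the day before, etc.
    # Simplified EN 15251 series: more recent days weigh more.
    weights = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2]
    if len(daily_means) < len(weights):
        raise ValueError("need the last 7 daily mean temperatures")
    total = sum(w * t for w, t in zip(weights, daily_means))
    return total / sum(weights)   # sum(weights) == 3.8
```

A single warm day only nudges the result, which is why the running mean tracks acclimatization more smoothly than a monthly average.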
3) The WIP adaptive chart currently has an option for a "levelOfConditioning". This input lets you make use of research that was conducted alongside the initial development of the adaptive model, which showed that the findings did not contradict the PMV model when people were surveyed in fully conditioned buildings. This parallel research ended up producing a different correlation between the outdoor and desired indoor temperatures, one with a much shallower slope than the official adaptive model for fully naturally ventilated buildings. The levelOfConditioning lets you build a custom correlation for full natural ventilation, full conditioning or (presumably) somewhere in between for a mixed-mode building. This levelOfConditioning will become an official input for all LB components using the adaptive model (not just the chart, as it is at the moment).
At the end of all of this, I will put together a new video series on Adaptive comfort so that we are all on the same page about how to use the model.
-Chris…
Defines enumerated values for all implemented corner styles in curve offsets.
Namespace: Rhino.Geometry
Assembly: RhinoCommon (in RhinoCommon.dll) Version: 5.1.30000.12 (5.0.20693.0)
Syntax
C#
public enum CurveOffsetCornerStyle
Visual Basic
Public Enumeration CurveOffsetCornerStyle
Members
None (0): The default value.
Sharp (1): Offsets and extends curves with a straight line until they intersect.
Round (2): Offsets and fillets curves with an arc of radius equal to the offset distance.
Smooth (3): Offsets and connects curves with a smooth (G1 continuity) curve.
Chamfer (4): Offsets and connects curves with a straight line between their endpoints.
…
ach object has a "Source" property (layer, parent, object) - my fix causes it to look at this source property in order to determine where to draw the plot width value from. I was already doing this for color and material, but had neglected to do it for plot width.
2. The "Print Preview" viewport display option is calling the "PrintDisplay" command in Rhino, which you will notice takes a "Thickness" value - this is the conversion factor between plot weights/print widths (in mm) and the number of pixels in absolute screen width. As you note, this is a relative and not an absolute width in model units, so it does not change when you zoom. In most design applications it would be quite strange to specify the print widths of your geometry in absolute units - e.g. setting your lines to be 50 ft thick. In Illustrator you are always working in "Paper Space", whereas in Rhino you have to be aware of the differences between Model Space and Paper Space (or Layout Space, in Rhino terminology).
My lineweight preview component operates on the basis of pixels - if you tell it "2" it will display a 2px-wide line irrespective of your zoom. The 4x conversion ratio you note is purely a function of the setting of your PrintDisplay command in Rhino.
3. The good news is my custom preview component ALSO supports "Absolute" lineweights in world-space units - so they create a line that gets fatter when you zoom in and thinner when you zoom out (though it can't get thinner than a pixel, naturally). Set the "Absolute" toggle (the 4th option) on the component - I think it will create the "Illustrator-like" behavior you're looking for, without having to create surfaces from your lines.
4. The dynamic pipeline component updates when the by-object plot weight changes. It does not update when the layer-level plot weight changes. In the end I have had to make some judgment calls about what kinds of changes should trigger a component refresh: too sensitive, and a definition could be forced to recompute unnecessarily on every little change; too insensitive, and you require too many forced refreshes.
In general I have focused on triggering updates from object-level attribute changes (Where they conceptually represent data about THIS OBJECT) and NOT from layer-level attribute changes (Where they conceptually represent data about a category). The Layer Table is the component that is designed to report changes to layer-level settings - and with "Auto Update" enabled on this component, it will in fact trigger an update on layer-level attribute changes.
With this approach, you may have to match up your geometry to the layers it belongs to, and then use the layer table component to retrieve the plot weight settings. The definition shown below is an example of how to do this. It assumes you are using layer-level plot weights.
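The matching step amounts to a plain lookup: join each object to its layer name, then pull the weight from the layer table. A sketch with hypothetical layer names and weights:

```python
# Layer Table output: layer name -> plot weight in mm (hypothetical values).
layer_weights = {"Walls": 0.50, "Glazing": 0.18, "Annotations": 0.13}

# Each piece of geometry carries the name of the layer it lives on.
objects = [("slab edge", "Walls"),
           ("mullion", "Glazing"),
           ("grid line", "Annotations")]

# Match geometry to its layer to retrieve the weight used for the preview.
preview_weights = [layer_weights[layer] for _, layer in objects]
```

With "Auto Update" enabled on the Layer Table component, a change to any layer-level plot weight re-runs this lookup and refreshes the preview.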
…