me work I was doing on DP on GH. Here are my conclusions:
- As Rhino is not a constraint-based modeller, assembly design without plugins (RhinoWorks or similar) is simply not possible. So as long as constraints are not present in Rhino: no constraints, no AEC.
- The list management that GH offers is 10,000 times more efficient and user-friendly, so a good step would be to link all the list-management tools to a GH-like interface. In fact, for all operations that do not concern assembly (wireframe generation, for example), GH is way ahead in terms of speed, unless you're dealing with geodesic curves, parallels on a surface, or possibly boolean operations, which are a real weakness of Rhino in terms of precision and stability. You can also build impressively synchronised attribute datatrees quite easily in GH, which you can then synchronise via Excel with a massive CATIA-based product without problems. It can easily save you a few days of work.
- Rhino cannot reference geometry without actually loading it into memory, so today you will not be able to work on a product bigger than 2 GB (maybe 3) in Rhino in any way, even in Rhino V5 64-bit with 16 GB of RAM. Together with the constraint issue, I really think this is Rhino's second major weakness.
- As Jon said, I think Rhino has to be understood as a sketch-oriented application for construction (this is not pejorative; it's what I personally prefer), in the sense that its usefulness lies in exploring design possibilities, which you can of course link afterwards with whatever you want; but too many basic options are missing from Rhino for it to be really viable for AEC. I personally don't want to see geometrical sets appear in Rhino; they are absolutely useless considering Grasshopper's evolution towards clusters, for example.
After that, in purely technical terms I would say that:
1) Possible, partially already working --> clusters (waiting for updates)/nested definitions + SQL for attribute management across several working definitions.
2) --> I think there are two ideas here: a) exporting some dead geometry into a tree structure of files (this can be done quite easily with LocalCode, but it will remain dead). You can also create a definition based on dead geometry and update that geometry using the geometry cache. Of course, if this geometry is automatically exported via LocalCode from a preceding definition, then when you update the upper definitions the modification propagates through your whole model. Personally, I think it is best not to do this in Rhino. b) Otherwise, it is just synchronisation of public attributes attached to existing parts/products, as I described previously.
3) Geometry cache. You can also auto-loop your file by loading/unloading the input geometry of your definition with LocalCode and some VB.
But maybe I am wrong on some points of course.
Best,
Thibault.
…
now.
This V4 can sense if you feed it your points and uses these instead of p1, p2, p3 (it's a prelude to V5, which uses DataTrees of points, making any surface subdivision a reality). Do the following: sample a triad of your points (NOT internalized) and feed the C#. Then ... start dragging these Rhino points around (the C# responds accordingly). See any difference?
The topology:
Well, the whole fractal logic (in this case) is to have 3 pts on hand (call them p1, p2, p3: red, green, blue) and then project the "right" one, say p3, onto the Line (p1, p2) > do this > do that ... blah blah.
But ... which p3? That's the 1M question. Here, for instance, the right p3 (blue) is (by accident) the 3rd point entered (the "projection" recursive logic is obvious):
But if you drag the points around a bit, p3 is now different (the C# handles this by synchronously sorting the triangle angles per point VS points). Numbers are used to indicate that shift (0 for the new p1, 1 for the new p2, 2 for the new p3 ... etc). Compare with the initial points (red = ex p1, green = ex p2, blue = ex p3).
and again different:
The 1M question:
In fractal thinking the big thing is when to stop: I could obviously control that with a counter ... but here the requirement is the minimum tile size (within an unpredictable number of recursions): this is what the stop logic does.
The 1B question:
So ... implementing fractal logic (against DataTrees of points) in a parametric environment ... raises a lot of questions: each time, the size of the start triad varies ... whilst the stop condition is constant, meaning that with a little bit of "good" luck you can reach an incredibly high number of tiles (computer out of memory > Adios Amigos).
Obviously I'm trying to keep all possibilities in mind, especially big projects > big facades > millions (or zillions) of tiles > Armageddon > ....
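A minimal, hypothetical sketch of that stop logic in plain Python (flat 2D points instead of DataTrees; the subdivision rule here is a placeholder longest-edge bisection, NOT the actual projection logic): stop on minimum tile size, with a recursion-depth cap as a safety valve against the out-of-memory scenario described above.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def subdivide(tri, min_size, depth=0, max_depth=64):
    """Recursively split a triangle until its longest edge <= min_size.
    max_depth is the safety valve: whatever the start triad size,
    recursion is bounded even if min_size is unreachable."""
    a, b, c = tri
    # reorder vertices so the longest edge is (a, b)
    edges = sorted([(dist(a, b), (a, b, c)),
                    (dist(b, c), (b, c, a)),
                    (dist(c, a), (c, a, b))], reverse=True)
    size, (a, b, c) = edges[0]
    if size <= min_size or depth >= max_depth:
        return [(a, b, c)]
    m = midpoint(a, b)  # placeholder rule: bisect the longest edge
    return (subdivide((a, m, c), min_size, depth + 1, max_depth)
            + subdivide((m, b, c), min_size, depth + 1, max_depth))

tiles = subdivide(((0.0, 0.0), (1.0, 0.0), (0.0, 1.0)), 0.3)
```

The point is the pairing of the two exit conditions: min tile size drives the design intent, and the depth cap guarantees termination.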
more soon
…
ng in Grasshopper?
My general recommendation for developers who are writing a performance-sensitive part of their Grasshopper library (please note: the performance-sensitive part is often very limited) is to write it in C#, or maybe even C, or maybe even assembly :). Of course, the closer to the machine you are, the easier it is to harness all the minimal optimizations. However, there is always a compromise between "getting things done" and "making them best", and this boundary is not very easy to catch, right?
If you want significant speed improvements for numerical calculations, I would at least recommend developing in C# in a compiled component using Visual Studio or SharpDevelop. The reason is: in order to provide the line number of possible errors, Grasshopper compiles C# scripts in debug mode! They will be much less optimized than what is possible with today's technology. This does not preclude keeping the project open-source, if that is one of your goals.
Regarding the actual list:
1) Yes, the implied loop will probably be slower than a simple for loop. This is because Grasshopper code has to keep track of more things than the ones you would be considering with your knowledge of your very special case. However, a factor of 10 is simply not acceptable and is likely a symptom of something else. In fact, I think I remember fixing a bug around that in the Rhino WIP. However, it appears to still be slower there as well. I've added a bugtracking item here.
2) If you are able to do all the casts that are involved, and do them as Grasshopper does, then by all means write the code that way. For example, if you supply a curve to an input with a number hint, Grasshopper computes the length of the curve. Somewhere there has to be an "if" that checks whether the input is a curve (or some similar construct). This aid for designers is what slows down the hinted input.
3) Grasshopper has to keep side effects at bay. For example, suppose components B and C are both connected to outputs of A. If you edit data in component B, and that data came from A, you of course expect the data to be unchanged in C. This means that, even for lists of numbers, Grasshopper has to perform a deep copy of the output for each input. Otherwise, what happens if B sorts the list and C finds the index of the smallest number? This could be improved if GH components had some way of flagging themselves as non-data-mutating (constant). Supplying special types that Grasshopper has no way of copying will likely speed things up, but be aware of possibly very annoying side effects creeping in if the data is not immutable. Another option is performing the copy "optimally", just where you need it, because you know where your data is used. This is not information that is available to GH at present.
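The side-effect hazard in point 3 can be sketched in a few lines of Python (a stand-in for illustration, not actual GH internals):

```python
import copy

# Hypothetical stand-in for component A's output feeding both B and C.
a_output = [3, 1, 2]

# Without a copy, B's in-place sort would corrupt what C sees:
b_input = a_output          # shared reference -- dangerous
b_input.sort()
# a_output is now [1, 2, 3]; C would silently see mutated data.

# What a deep copy per input guarantees instead:
a_output = [3, 1, 2]
b_input = copy.deepcopy(a_output)
c_input = copy.deepcopy(a_output)
b_input.sort()                                # B sorts its own copy
smallest_index = c_input.index(min(c_input))  # C still sees [3, 1, 2]
```

Here B's sort no longer affects C, which is exactly the isolation the per-input deep copy buys, at the cost described above.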
Does this help?
Thanks again for your input,
Giulio
--
Giulio Piacentino
for Robert McNeel & Associates
giulio@mcneel.com
…
onents to the latest version and, as you can see, everything works fine:
Over the next week, I am going to be adding several new capabilities to the Adaptive model in LB+HB that are not an official part of the ASHRAE or ISO standards but are endorsed by the experts and researchers who helped build those standards. Mostapha, I will be sure to have the component give a comment any time these non-standardized methods are used, and I will be clear that I have made them a part of LB because I have found these insights from new research to be particularly helpful to design processes for passive architecture. Also, I think many of us recognize that both ASHRAE and ISO were initially founded to produce standards for conditioned or refrigerated spaces and that, understandably, they …. Among the features that I will be adding:
1) You will have the option of using either the American ASHRAE adaptive model or the ISO EN-15251 model (see the CBE's comfort tool for a visual of the differences - http://comfort.cbe.berkeley.edu/).
2) In addition to a different comfort polygon, the European standard also uses a "running mean" outdoor temperature instead of the average monthly outdoor temperature. This "running mean" is computed from the average temperatures over the last week, weighting each daily average temperature by how recent it is. This makes more sense to me than the ASHRAE method and addresses the issue that you bring up, Alejandro. Needless to say, the updated adaptive model will allow you to use either a running mean or an average monthly temperature with either the American or the European polygon.
3) The WIP adaptive chart currently has an option for a "levelOfConditioning". This input allows you to make use of research that was conducted alongside the initial development of the adaptive model, which showed that the findings did not contradict the PMV model when people were surveyed in fully conditioned buildings. This parallel research ended up producing a different correlation between outdoor and desired indoor temperatures, and this correlation had a much shallower slope than the official adaptive model for fully naturally ventilated buildings. The levelOfConditioning lets you make a custom correlation for full natural ventilation, full conditioning or (presumably) somewhere in between for a mixed-mode building. This levelOfConditioning will become an official input for all LB components using the adaptive model (not just the chart, as at the moment).
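For reference, the "running mean" described in (2) can be sketched with the simplified 7-day weighted formula from EN 15251 (please verify against the standard itself before relying on this):

```python
def running_mean_outdoor_temp(daily_means):
    """EN 15251 simplified running mean of outdoor temperature.

    daily_means: the last 7 daily mean temperatures,
    most recent day first: [T(d-1), T(d-2), ..., T(d-7)].
    More recent days get larger weights; the divisor 3.8
    is the sum of the weights."""
    weights = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2]
    return sum(w * t for w, t in zip(weights, daily_means)) / sum(weights)

# A warm spell yesterday pulls the running mean up more than the
# same warm day a week ago:
recent_warm = running_mean_outdoor_temp([30, 20, 20, 20, 20, 20, 20])
old_warm = running_mean_outdoor_temp([20, 20, 20, 20, 20, 20, 30])
```

The standard also defines a recursive exponential form with α = 0.8; the weighted version above is its common 7-day approximation.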
At the end of all of this, I will put together a new video series on Adaptive comfort so that we are all on the same page about how to use the model.
-Chris…
2_Radiation.gh template, reorganized it a bit for a June 21 7am-7pm study (everything in orange bubbles below)… However, I realized that the template was only the Sky Dome Visualization tool.
So I took a wild guess and grabbed the Ladybug_Radiation_Analysis component, using it to produce a mesh on the floor of an office building (everything in the white bubble below).
1) Is this the correct component to give me relative values of sunshade behavior? (Later I’ll connect my sunshade geometry into the Context input to block incoming daylight)
2) Is it incorrect to let the grid size be anything except 1m square? (I ask because the individual, total and average Radiation result output values swing wildly when I change the grid size.)
My Best,
Rob…
you working on a PV system which will power a domestic hot water boiler?
To answer your questions:

1) Each Grasshopper component (ghpython being one of them) uses Grasshopper's data-matching algorithm. This algorithm takes care of complex issues which may arise from combining lists with single items, data trees with different numbers of items per branch, and so on. I think there is a way of introducing a call to other processor threads per inputted surface, but this would be a very difficult job, as it would require writing a custom data-matching algorithm. I do not think I am up to that task. Instead, I tried to introduce multithreading only in the final part of the PVsurface component, one of its time-consuming parts: the calculation of sun angles, solar radiation and AC/DC power output. I attached the test file below, but sadly it didn't go well: the multithreaded version mostly runs in the same time as the regular version. I do not think I am qualified enough to say why that is, but it may have something to do with the kind of function the multithreading is applied to: the code is supposed to run a few separate functions a couple of thousand times and work with a couple of lists. From my experience, multithreading works best when a single list or two are supplied to a single function. I may be wrong on this. I am very sorry to say that I cannot implement this feature.

2) I am not aware of any open-source PV module database having been released. But one can always download the data for specific modules from producers' websites; it can then easily be transferred to a .csv or other text file. The Ladybug Photovoltaics components are based on NREL's PVWatts model. In comparison with other commercial software applications, PVWatts offers a more generalized system model, with some of the values and characteristics being assumed or embedded. The Fuentes empirical thermal model we are currently using follows the same logic: it generalizes the module characteristics.
Only the following characteristics are editable: module efficiency, temperature coefficient and module mount type. It may be possible to replace Fuentes with some other, less generalized 5-parameter thermal model, but as an architect I would definitely need help with this.
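To illustrate the per-call overhead issue from point 1, here is a hedged, Rhino-independent Python sketch (the function and the numbers are made up for illustration): submitting tiny tasks one by one to a thread pool pays overhead per call, while chunking the list amortizes it. Note that CPython's GIL also limits gains for pure-Python math (IronPython, which ghpython runs on, has no GIL, but the per-call overhead remains):

```python
from concurrent.futures import ThreadPoolExecutor
import math

def sun_angle_calc(x):
    # stand-in for the per-timestep solar geometry work
    return math.sin(x) * math.cos(x)

values = [i * 0.001 for i in range(10000)]

# Naive per-item submission: pool overhead per tiny task can
# swamp the actual work, giving no speedup.
with ThreadPoolExecutor() as pool:
    per_item = list(pool.map(sun_angle_calc, values))

# Chunked: each worker gets a large slice, so the per-call
# overhead is paid once per chunk instead of once per value.
def calc_chunk(chunk):
    return [sun_angle_calc(v) for v in chunk]

n = 4
chunks = [values[i::n] for i in range(n)]
with ThreadPoolExecutor(max_workers=n) as pool:
    chunked = [r for part in pool.map(calc_chunk, chunks) for r in part]
```

Both runs produce the same set of results; whether the chunked version is actually faster depends on how heavy each call is, which matches the observation above that thin per-item functions see little benefit.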
Sorry if my reply did not fulfill your expectations, and thank you for the kind words!…
Private Sub RunScript(ByVal x As Object, ByVal y As Object, ByVal z As Object, ByRef A As Object, ByRef B As Object)
  ' Build a stack of rotated ellipses and collect their centers
  Dim c_list As New List(Of NurbsCurve)
  Dim p_list As New List(Of Point3d)
  For i As Int32 = 0 To z
    Dim c As New Ellipse(Plane.WorldXY, i / 2, i)
    Dim g As NurbsCurve = c.ToNurbsCurve()
    g.Rotate(y * i, New Vector3d(0, 1, 0), New Point3d(x, 0, 0))
    c_list.Add(g)
    ' workaround: get the center via the bounding box (see question below)
    Dim e As BoundingBox = g.GetBoundingBox(False)
    p_list.Add(e.Center)
  Next
  Dim pl As New Polyline(p_list)
  A = c_list
  B = pl
End Sub
Doing the same thing was easier in the old version, because now I can't get the center of an ellipse directly.
How can I do it?
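Incidentally, the bounding-box step shouldn't be needed: in RhinoCommon an Ellipse carries its defining plane, so its center is available directly as Ellipse.Plane.Origin. A tiny Rhino-free Python stand-in of that idea (the classes below are mock-ups for illustration, not RhinoCommon itself):

```python
from dataclasses import dataclass

@dataclass
class Plane:
    origin: tuple  # mock-up of Rhino.Geometry.Plane.Origin

@dataclass
class Ellipse:
    """Mock-up mirroring RhinoCommon's Ellipse(Plane, radius1, radius2)."""
    plane: Plane
    r1: float
    r2: float

    @property
    def center(self):
        # RhinoCommon equivalent: ellipse.Plane.Origin
        return self.plane.origin

# ellipse centered at (3, 0, 0) -- no bounding box required
e = Ellipse(Plane((3.0, 0.0, 0.0)), 2.0, 4.0)
```

In the VB script above, that would mean storing `c.Plane.Origin` before converting to a NurbsCurve, instead of computing a bounding box.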
thanks
ceason
…
ry grateful.
Well, let me give some explanation of my problem:
First of all, I was working on making a double-helix building, which I've solved by lofting a transformed helix; but now I want to apply some transformations to the helix, to make it less boring.
I want to transform the helix by making something like this: displacements on the helix to let the sun get into the building.
Well, I've tried the ways that I know, but I don't know if there is a better or simpler way of doing it.
I've attached my working files, where I explain where I got stuck.
I've tried these ways:
1) Selecting the part of the helix which I want to transform.
The selection I made was rough, I think; the best way to select the part I want to move is probably by turns, but I got lost going that way.
When I get these points and move them, dividing the original helix into a few points seems to work, but the result only looks like a helix; it isn't one, because I've interpolated through too few points.
If I divide it into lots of points, the "transformation" isn't what I expected; I think I have to move these points with a gradient, to get a nicely deformed helix.
This way is in 1.ghx.
2) The second way I've tried is with Map to Surface and Morph Box, but here I don't get the transformation that I want, or maybe I'm doing something wrong.
This way is in test1.ghx.
In the attached Rhino file there is a "transformed helix", made in Rhino, that I want to obtain in GH; it's the blue one. It is the result I want to get: the displacement in the XY plane, and after that transformation the helix continues.
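One possible approach to the "gradient move" from way 1, as a plain-Python sketch (all the numbers and the Gaussian falloff here are illustrative assumptions, not the attached definition): divide the helix into many points, then displace the points near the chosen region with a smooth falloff, so the interpolated curve stays helix-like at both ends instead of kinking.

```python
import math

def helix_points(turns=6, pts_per_turn=40, radius=5.0, pitch=3.0):
    """Sample points along a circular helix."""
    pts = []
    for i in range(turns * pts_per_turn + 1):
        t = 2 * math.pi * i / pts_per_turn
        pts.append([radius * math.cos(t),
                    radius * math.sin(t),
                    pitch * t / (2 * math.pi)])
    return pts

def displace_with_falloff(pts, center_index, offset=(4.0, 0.0, 0.0), spread=20.0):
    """Push points near center_index by `offset`, scaled by a Gaussian
    falloff: full displacement at the center, fading to ~0 at the ends."""
    out = []
    for i, p in enumerate(pts):
        w = math.exp(-((i - center_index) / spread) ** 2)
        out.append([p[0] + w * offset[0],
                    p[1] + w * offset[1],
                    p[2] + w * offset[2]])
    return out

pts = helix_points()
moved = displace_with_falloff(pts, center_index=len(pts) // 2)
```

In GH this maps to Divide Curve > a graph-mapper-style weight per point > Move > Interpolate, which avoids the hard seam you get when moving a block of points uniformly.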
1.ghx
test1.ghx
test1.3dm
Thanks for your attention.
…
hole. Currently I control it through PREVIEW, in the Solid Geometry or Solid Difference component. In practice, the procedure of generating this hole is not needed if the number of holes = 0, or Yes/No (it appears or not). Question: how to use a Boolean Toggle (optionally):
1) to control the component's PREVIEW state (On/Off), or
2) better for me: to start or stop the procedure that creates the hole (holes = 0, or false = no hole needed).
Hope you will help.
regards
Slawek…