an run. GH2 still uses the Rhino SDK for the geometry functionality, so curve offsets, meshing, brep intersections etc. will run exactly as fast as they do now.
However, even in the absence of a working version of GH2 that can be profiled, we can still discuss some of the major aspects of performance:
Preview display. Each GH solution involves a redraw of all the Rhino viewports at the end (unless preview is switched off, which I imagine is exceedingly rare). For simple GH files, the viewport redraw takes far more time than the solution itself. Rhino6 has a completely rewritten display pipeline using more modern APIs, so we should see a speed-up here in the future, be it in GH1, GH2 or GHx.
Canvas display. Each GH solution involves a redraw of the Grasshopper canvas. If the canvas shows a lot of bitmaps or intricate geometry (lots of text, dense graphs, etc.) this can take a significant amount of time. GH2 will use Eto instead of GDI+ as a UI platform. Eto can be both faster and slower than GDI+, depending on what's being drawn. It is particularly fast when drawing images, not so much when drawing lots of lines. There is a little room for improvement here and I intend to take full advantage of that.
Preview meshing. Grasshopper uses the standard Rhino mesher to generate preview meshes. If a GH file generates lots of breps, a large amount of time will be required to create the preview meshes. The new display improvements in Rhino6 will allow us to get away with previewing some types of geometry without the need to mesh them first, and I imagine some effort will be spent in the near future to improve the Rhino mesher as well.
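For reference, this is roughly what preview meshing looks like through RhinoCommon; a minimal sketch (the helper name and the choice of meshing parameters are mine), not Grasshopper's actual internal code:
using Rhino.Geometry;
Mesh PreviewMesh(Brep brep)
{
  // The standard Rhino mesher returns one mesh per brep face; merge them
  // into a single mesh for display. Parameters trade speed for fidelity.
  Mesh[] pieces = Mesh.CreateFromBrep(brep, MeshingParameters.Default);
  if (pieces == null) return null;
  Mesh preview = new Mesh();
  foreach (Mesh piece in pieces)
    preview.Append(piece);
  return preview;
}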
Data casting. Most component code operates on standard framework and RhinoCommon types (bool, int, string, Point3d, Curve, Brep, ...), however Grasshopper stores and transfers data wrapped up in IGH_Goo derived types. This means that every time a component 'does its thing', data needs to be converted from one type into another, and then back again. This involves type-checking and often type instantiation. This stuff is fast, but it's overhead nonetheless and can consume a significant number of processor cycles when there's lots of data. GH2 no longer does this; it stores and transfers the types directly as they are. There will still be some overhead left, but hopefully a lot less.
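To make that overhead concrete, here's a sketch of the kind of wrap/unwrap round-trip GH1 performs per item (GH_Number is a real goo type; the loop itself is only an illustration, not GH1's actual code):
using Grasshopper.Kernel.Types;
double[] raw = new double[1000000];
// GH1-style: every double gets boxed into an IGH_Goo wrapper...
GH_Number[] goo = new GH_Number[raw.Length];
for (int i = 0; i < raw.Length; i++)
  goo[i] = new GH_Number(raw[i]);
// ...and unwrapped again before component code can use it.
double sum = 0.0;
for (int i = 0; i < goo.Length; i++)
  sum += goo[i].Value;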
Computation. GH1 is a single-threaded application. When a component operates on a large collection of data, iterations run one after the other. GH2 will be parallel, meaning components will be invoked on multiple threads, each thread focusing only on part of the data. Then all the results need to be merged back into a single data tree. On my 8-core machine (4 physical cores, each with 2 logical cores) I've been getting performance speed-ups of 4~6 times when using my multi-threading code. I wish it were 8, but clearly there is some overhead involved here as well. This will not help to speed up a single very complicated solid boolean operation, but if you're offsetting 800 curves, then each thread can be assigned 100 curves and the total time will be set by whichever thread takes the longest.
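The curve-offset example might look something like this with .NET's Parallel class; a sketch of the idea (method name and tolerance are my assumptions, and each thread only ever touches its own curves), not GH2's actual scheduler:
using Rhino.Geometry;
using System.Threading.Tasks;
Curve[] OffsetAll(Curve[] curves, Plane plane, double distance)
{
  Curve[] result = new Curve[curves.Length];
  // Each index is handled by exactly one thread, so no locking is needed;
  // total time is set by whichever slice of curves takes the longest.
  Parallel.For(0, curves.Length, i =>
  {
    Curve[] offsets = curves[i].Offset(plane, distance, 0.01, CurveOffsetCornerStyle.Sharp);
    result[i] = (offsets != null && offsets.Length > 0) ? offsets[0] : null;
  });
  return result;
}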
Algorithms. If a specific component is slow, there may be things we can do to speed it up. Either improve the Rhino SDK, or improve the GH code. Depends on the component in question.
When all's said and done, I'd love to see a 10x speed increase for GH2 over GH1 for simplish stuff, and I shall get very cross if it's anything less than 5x.…
option, after downloading check if the .ghuser files are blocked (right click -> "Properties" and select "Unblock"). Then paste them into File->Special Folders->User Object Folder. You can download the example files from here. They act in a similar way to the Ladybug Photovoltaics components: we pick a surface, and get an answer to the question: "How much thermal energy, for a certain number of persons, can my roof or building facade generate if I populate it with Solar Water Heating collectors?" This information can then be used to cover domestic hot water, space heating or space cooling loads:
Components enable setting specific details of the system, or using simplified ones. They cover analysis of the domestic hot water load, the final performance of the SWH system, its embodied energy, energy value, consumption, emissions... and finding the optimal system and storage size. By Dr. Chengchu Yan and Djordje Spasic, with invaluable support from Dr. William Beckman, Dr. Jason M. Keith, Jeff Maguire, Nicolas DiOrio, Niraj Palsule, Sargon George Ishaya and Craig Christensen. Hope you will enjoy using the components!
References:
1) Calculation of delivered energy: Solar Engineering of Thermal Processes, J. Duffie, W. Beckman, John Wiley and Sons, 4th ed., 2013. Technical Manual for the SAM Solar Water Heating Model, N. DiOrio, C. Christensen, J. Burch, A. Dobos, NREL, 2014. A simplified method for optimal design of solar water heating systems based on life-cycle energy analysis, Yan, Wang, Ma, Shi, Renewable Energy journal, Vol 74, Feb 2015.
2) Domestic hot water load: Modeling patterns of hot water use in households, Ernest Orlando Lawrence Berkeley National Laboratory; Lutz, Liu, McMahon, Dunham, Shown, McGrue; Nov 1996. ASHRAE 2003 Applications Handbook (SI), Chapter 49, Service water heating
3) Mains water temperature: Residential alternative calculation method reference manual, California Energy Commission, June 2013. Development of an Energy Savings Benchmark for All Residential End-Uses, NREL, August 2004. Solar water heating project analysis chapter, Minister of Natural Resources Canada, 2004.
4) Pipe diameters and pump power: Planning & Installing Solar Thermal Systems, Earthscan, 2nd edition
5) Sun position and POA irradiance: the same as for Ladybug Photovoltaics (sun position by Michalsky (1988), diffuse irradiance by Perez (1990), ground-reflected irradiance by Liu and Jordan (1963)).
6) Optimal system and storage tank size: A simplified method for optimal design of solar water heating systems based on life-cycle energy analysis, Renewable Energy journal, Yan, Wang, Ma, Shi, Vol 74, Feb 2015.…
hopper) and high-definition visualizations (V-Ray), exploring the scientific innovations that support the platform's philosophical ideas.
SESSIONS: 5 sessions of 8 hours (40 hours total)
E-MAIL: educacion@chconsultores.net
REGISTRATION: (55) 56 62 57 93
TECHNICAL INFO: 044 (55) 31 22 71 83
INSTRUCTORS: Past experience working at Gehry Technologies; participated in studios with Eric Owen Moss and Tom Wiscombe at SCI-Arc (Southern California Institute of Architecture).
Day 1: Introduction to MAYA tools, 3D exercise start.
Day 2: Continue 3D exercise.
Day 3: Original 3D architecture design.
Day 4: Grasshopper optional application on 3D architecture design.
Day 5: V-Ray Application on 3D architecture design.
30 DAY TRIAL SOFTWARE DOWNLOAD:
MAYA 2012: http://www.autodesk.com/products/autodesk-maya/free-trial
RHINO 4: http://s3.amazonaws.com/files.na.mcneel.com/rhino/4.0/2011-02-11/eval/rh40eval_en_20110211.exe
3DS MAX 2010: http://www.autodesk.com/products/autodesk-3ds-max/free-trial
VRAY FOR 3DS MAX: http://www.vray.com/vray_for_3ds_max/demo/thankyou.shtml#thankyou
PHOTOSHOP and ILLUSTRATOR: https://creative.adobe.com/apps?trial=PHSP&promoid=JZXPS
www.helenico.edu.mx
www.scifi-architecture.com/#!workshops/c1wua
LIKE US ON: www.facebook.com/scifiarchitecture
…
ails.
A few words about the mesh... (see Image_01)
I took a flat four-point NURBS surface as input (very simple; it defines the total area of my pavilion) and some points (which define the contact with the ground).
Then I extracted a grid of points from the NURBS surface (Surface_Util_Divide Surface) and compared them with the control points, in order to associate each grid point with its own attractor (Vector_Point_Closest Point).
Then I moved the points down. I used the inverted distance from each point to its attractor as the amplitude of the movement vector, so that the nearer a point is to its control point, the more intense its movement. During this operation I passed the list of distances through a graph mapper (Params_Special_Graph Mapper), in order to shape my canopy in a very intuitive and interactive way.
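In RhinoCommon terms, that attractor step might be sketched like this (the method and its names are mine, and the 1/(1+d) remap is just a stand-in for the graph mapper):
using System.Collections.Generic;
using Rhino.Geometry;
List<Point3d> AttractorGrid(Surface srf, List<Point3d> anchors, int count, double maxDrop)
{
  List<Point3d> moved = new List<Point3d>();
  Interval u = srf.Domain(0);
  Interval v = srf.Domain(1);
  for (int i = 0; i <= count; i++)
  {
    for (int j = 0; j <= count; j++)
    {
      // Grid point on the surface (Surface_Util_Divide Surface equivalent).
      Point3d p = srf.PointAt(u.ParameterAt(i / (double)count), v.ParameterAt(j / (double)count));
      // Distance to the closest attractor (Vector_Point_Closest Point equivalent).
      double d = double.MaxValue;
      foreach (Point3d a in anchors)
        d = System.Math.Min(d, p.DistanceTo(a));
      // Inverted distance: the nearer the attractor, the bigger the drop.
      moved.Add(p - Vector3d.ZAxis * (maxDrop / (1.0 + d)));
    }
  }
  return moved;
}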
At the end of the process I asked GH for a simple Delaunay mesh (Mesh_Triangulation_Delaunay Mesh). It's a very cool command, I believe!!!
Ok, now a few words about the component, its design and its repetition/adaptation to the mesh...
(see Image_02)
I took the mesh and extracted its components first and its face information second. Then I selected and separated the vertices (1st, 2nd, 3rd) of each triangular face into three well-defined lists.
Then I re-created the triangles' edges. Please pay attention: it's not the same if you use the edge output of the Delaunay component, because here we need a juxtaposition of edges where the triangles touch each other.
After this work I joined the edges and found their centroid. At the same time I found the midpoint of each edge.
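A sketch of that decomposition in RhinoCommon (the helper name is mine): for each triangular face, pull out its three vertices, its centroid and its edge midpoints:
using Rhino.Geometry;
void DecomposeFace(Mesh mesh, int face, out Point3d a, out Point3d b, out Point3d c, out Point3d centroid, out Point3d[] midpoints)
{
  MeshFace f = mesh.Faces[face];
  // Mesh vertices are stored as single-precision Point3f; promote to Point3d.
  Point3f pa = mesh.Vertices[f.A];
  Point3f pb = mesh.Vertices[f.B];
  Point3f pc = mesh.Vertices[f.C];
  a = new Point3d(pa.X, pa.Y, pa.Z);
  b = new Point3d(pb.X, pb.Y, pb.Z);
  c = new Point3d(pc.X, pc.Y, pc.Z);
  // Centroid of the triangle and the midpoint of each of its three edges.
  centroid = (a + b + c) / 3.0;
  midpoints = new Point3d[] { (a + b) / 2.0, (b + c) / 2.0, (c + a) / 2.0 };
}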
Now the component... (see Image_03)
It's a little bit longer to describe; I'll try to be concise.
Substantially it is a loft from a curve to a point, repeated three times for each triangle (Surface_Freeform_Extrude Point). The point is an elevation of the centroid of the triangle (you can choose whether the extrusion has a single height or is related to an attractor; in my case it was fixed). The curve is a combination of things. There's an arch that starts on the edge (with an offset from the corner) and terminates on the same edge (on the other side, obviously). During its generation the arch passes through a third point, which belongs to another segment. This segment connects the midpoint of the original edge (of the base triangle) with the centroid. The result is a kind of polyline with two segments and an arch. If you go back to the image of the component that I posted, you'll probably understand what I'm saying better than from this description.
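One "petal" of the component might look roughly like this in RhinoCommon (the names and the 0.5 factor are my assumptions, and the corner offsets and the two straight segments are left out for brevity):
using Rhino.Geometry;
Brep Petal(Point3d edgeStart, Point3d edgeEnd, Point3d centroid, double height)
{
  // The third arc point sits part-way between the edge midpoint and the centroid.
  Point3d mid = (edgeStart + edgeEnd) / 2.0;
  Point3d through = mid + (centroid - mid) * 0.5;
  Curve arc = new ArcCurve(new Arc(edgeStart, through, edgeEnd));
  // Loft the curve to a raised apex (Surface_Freeform_Extrude Point equivalent).
  Point3d apex = centroid + Vector3d.ZAxis * height;
  Brep[] loft = Brep.CreateFromLoft(new Curve[] { arc }, Point3d.Unset, apex, LoftType.Normal, false);
  return (loft != null && loft.Length > 0) ? loft[0] : null;
}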
The posit…
ou mean by 'Activate Direct Rhino Modifying'. Perhaps you could expand?
I like the idea of mixing and matching script and 'direct' modeling. There seem to be a lot of potential platforms for this:
1. Implicit History: Is there a way for GH to read the direct modifications (with History activated) and translate them into a component (or cluster of components)? IH seems to record the UI events and the associated elements. GH would need to write as well as read the IH info, in order to preserve as much flexibility downstream as possible. You mentioned Houdini. Houdini seems to record all 'implicit' or direct mods, done via the CAD mouse-based UI, in its network graph. Maybe this should be captured in the IH cluster/component mentioned above.
2. RhinoParametrics: RP has done a lot of work to intercept and translate Rhino commands into its version of Implicit History. It seems to be centred on points, which makes sense as so much of the traditional 'dumb' way of inputting CAD info is based on mouse clicks on screen (points) predicated by commands, active locks, workplanes etc.
3. Gumball: Rubberduck's use of the new Gumball tool to capture 'direct' modeling inputs through the Gumball points to a good source for capturing this kind of input, and is related to the 'macro recorder' approach taken by RP and IH.
4. The new Geom Cache component seems to be able to preserve a lot of info about the baked object. There may even be a way to read tagged info generated both by GH (baked with the "reference" object) and externally to GH (by IH, the Gumball or even third-party apps like RP).
Would be interesting to know what kind of info is 'preserved'. Houdini seems to have a pretty consistent approach to geometric data, which seems to allow parallel NURBS/SubD/mesh versions of the geometry. It also seems to have a coherent hierarchical approach to vertices/edges/loops/faces etc. that allows the subelements to be arbitrarily grouped for 'direct' modeling, and still be part of a procedural script.
I guess the polygon/mesh approach to geometry lends itself to this. If all the procedural commands/components understand mesh geometry in vertex, edge or face format, then combining direct and script modeling is doable in a transparent way?
In your example above, the Geom Cache node 'flattens' the object to dumb geometry which is manipulated using Rhino, then used as a Reference object in the next section of the graph. I guess there is nothing to stop the follow-on components reading the preceding graph for parameters, for additional intelligence?
Does GH 'get' or 'put' parameter data?
…
sophy though, I have a rudimentary grasp of the Ancient Greeks and modern schools of thought such as Existentialism and Pragmatism, but there is certainly no depth in my understanding. However here the same rule applies. You can quote philosophy all you want, but unless you understand that which you're channelling you can be -at best- accidentally correct.
According to you, these are all vital characteristics:
Aesthetic judgement
Intuition about spatial effectiveness
Knowledge of construction materials & assembly systems
Consideration of performance-driven design properties
Mad synthesizing skillz
[1] and [2] are pretty much worthless, especially when we're dealing with students. Aesthetic judgement is not something that can be wrong or right. You can hone your aesthetic skills but you cannot cultivate better tastes. Intuition is also problematic; it's basically a stand-in for argumentation. Saying "these buildings have to be 20 meters apart because of wind/sound/human perception/human psychology/light/shadow/etc. etc." is a far stronger statement than "these buildings have to be 20 meters apart because of my feelings". Who are you to be trusted? If you have a long and distinguished career backing you up, maybe your opinions carry some weight, but until that point you'd better be prepared to justify your decisions with cold hard logic and data.
[3] is certainly important for certain jobs in construction, but it can be argued that implementation details are not necessarily central to a design. One can design a good computer interface without having to be able to program, and certainly without being familiar with all the idiosyncrasies of a particular programming language. Conversely, one can design an excellent space without knowing exactly how strong certain atomic bonds are. If what you design is physically impossible, then obviously something has to change, but it doesn't mean that the design as an abstract idea was bad. Of course on the other hand one can argue that designing impossible things is not doing anyone any favours. I'm not exactly certain where I stand on this issue, probably comfortably in the middle; YES, students need to learn about what can be built in the physical world, but NO, that is not part of design training.
I'm not quite sure what [4] means.
[5] is true for a lot of professions, not just Architects. I would concede that architects probably have more to take into account than most designers and that it is indeed an important skill to have.
I would say that -especially for students, who have little experience- an incredibly important skill is being able to ask yourself "why am I doing this?" about pretty much every decision you make. Basically you need to get very comfortable applying the Socratic method to everything you do.
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 11:03am on August 14, 2013
ctor. I do not dispose of any IGH_Goo instances, mostly because I have no idea when an instance is truly no longer needed. If any of your fields need to be disposed, you may have to implement a destructor, but I have no experience with this.
2) should I pass those classes to other parameters by DA(0, MotherClass.Duplicate?) or it is already there by GH_Goo ?
IGH_Goo is not duplicated by default. If you use DA.GetData() and ask for IGH_Goo types, you'll get a reference to the existing instance. Thus, if you take in an instance of your type, modify it and output it, you should duplicate it yourself. But you only need to do this if you change the state of an instance.
// Retrieve the shared instance from the input parameter.
MyGooType data = null;
if (!DA.GetData(0, ref data)) return;
// Duplicate before mutating, so the upstream instance stays untouched.
data = data.Duplicate() as MyGooType;
data.Property = newValue;
DA.SetData(0, data);
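For completeness, a minimal sketch of what such a goo wrapper might look like (MotherClass is the hypothetical type from your question, and its DeepCopy() method is an assumption):
using Grasshopper.Kernel.Types;
public class MyGooType : GH_Goo<MotherClass>
{
  public MyGooType() { }
  public MyGooType(MotherClass value) { Value = value; }
  public override bool IsValid { get { return Value != null; } }
  public override string TypeName { get { return "MotherClass"; } }
  public override string TypeDescription { get { return "A wrapper around MotherClass"; } }
  public override IGH_Goo Duplicate()
  {
    // Deep-copy the payload so downstream edits don't leak upstream.
    // DeepCopy() is assumed to exist on MotherClass.
    return new MyGooType(Value == null ? null : Value.DeepCopy());
  }
  public override string ToString()
  {
    return Value == null ? "Null MotherClass" : Value.ToString();
  }
}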
3) should I create ChildClass and MotherClass in SolveInstance, or create them once as a component's field and then change their properties and pass them to DA (as duplicates?)....
It's almost always better to use variables with the lowest possible scope. So method variables are preferred to class variables, class variables are preferred to static variables.
4) if I create those classes in SolveInstance, is it necessary to Dispose them there ?
NO! Do not dispose of instances that are passed on to output parameters. Disposing objects typically makes them invalid, so if you share instances with anyone else, you should not dispose them or the other code may well crash. However I don't think your types need to be disposable so this is a moot point now.
In general, if you're dealing with disposable types, and the instances aren't shared, then you dispose them as quickly as possible. But if they are shared it's a lot more complicated.
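A small illustration of that rule (StringWriter is just a stand-in for whatever disposable type you're actually using):
using (var writer = new System.IO.StringWriter())
{
  writer.Write("scratch work");
  string output = writer.ToString();
  // 'writer' is released here, immediately after its last use;
  // 'output' is the data we keep, and it is not disposable.
}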
5) finally - maybe it would be better if MotherClass inherits the ChildClass ?
Maybe. Not necessarily. Depends on the classes. …
Added by David Rutten at 12:08pm on December 31, 2014
ight be able to provide more insight). Whenever you run a new simulation in Radiance, it is not always necessary to re-write all of the initial simulation files from scratch. These initial simulation files include both a .rad geometry file as well as a separate .pts file that contains the test point locations. If all that you are changing in a given parametric run is the location of the test points (as in your case), it is not necessary to re-write (or re-interpret) the entire .rad geometry file. My guess is that there is some type of check for this built into either the code Mostapha wrote or the Radiance functions that Mostapha is calling. As such, it seems that the .rad geometry file is not being completely re-written (or re-interpreted by Radiance) when all that you change is the test points, and this actually seems to be saving you an extra 10 seconds each time you run the component without changing the materials or the building geometry. Other times (like when you plug in custom radParameters), it seems that it re-writes (or re-interprets) the .rad geometry file from scratch, since this file is probably affected by customized rad parameters.
So far, if this explanation holds, it seems like there would be no concern on your end, but I also recognize that the difference between these long and short simulations is giving you radiation results that are ever so slightly different from each other (by my estimate, they differ by about 0.2%). Compared to the other types of assumptions that the Radiance model is making, though, these are mere rounding errors that probably originate from the number of decimal places in the vertices of the .rad geometry file. Rather than worrying about whether your simulations produce matching rounding errors, I would encourage you to instead contemplate how well your Radiance results match reality, given all of the assumptions that you are making about the climate (with the epw file for a "typical" year) and the number of light bounces in the Radiance simulation. To give you an example, I ran your model with a higher-quality simulation type (3 ambient bounces) and this gives results that differ by 1.1% from the original simulation you were running with only 2 ambient bounces (practically an order of magnitude larger than 0.2%).
To address your unease I will say that, for a long time, I also felt uneasy any time that I encountered something that seemed unpredictable in software that I was using. Once I started coding my own stuff, though, I realized quickly that unpredictable behavior is an unavoidable aspect of all software. There is always a tradeoff between accurate results and the time it takes to get them, which produces a multitude of possible ways to arrive at a solution. Add into this complex situation the fact that you might have an almost infinite number of possible inputs to a given set of code.
Because of the unpredictable multitude of cases, there is no application that is completely free from limitations and assumptions. In this light, what ends up being more important than the actual calculation method used is the social infrastructure that is in place to help understand what is being run under the hood, hence why both Radiance and Honeybee are open source and why we try to build a robust community of support through forums like this one!
-Chris…
size component supported only ground PV panels and angled roof PV panels.
Download the newest PV SWH system size component from here (click on "View Raw" to download it, then move the downloaded .ghuser file to File->Special Folders->User Objects Folder, and confirm overwriting the previously located one).
Just a few opinions on the project you are currently working on:
This kind of fixed, non-transparent (overhang) PV panels attached to a building facade is very convenient for locations at higher latitudes. The reason is that fixed overhang PV panels are dimensioned according to the sun position at the summer solstice, and elevation angles at the summer solstice are lower at higher-latitude locations than at lower-latitude ones. Due to Incheon's low latitude (37N), you will get a rather short length of PV panels*: less than 10 centimeters (0.097 meters in the attached .gh file below). As you have mentioned, Galapagos needs to be used too.
I will mention some of the good and bad ways in which this issue could be somewhat avoided:
1) Increasing the vertical distance between PV panels (PV panels appear above every second window).
2) Increasing the tilt angle. This will also increase the length of the PV panels, but will decrease the final annual AC energy output. An example of this solution has been applied at the FKI building in Seoul (latitude: 37N). I already did some tests (with tilt angles: 40, 45, 55) and this does not seem like a good solution, though.
3) Shrinking the "sun window" by using the minimalSpacingPeriod_ input. In photovoltaics, a planner is supposed to keep the 9h to 15h part of the sun window free of any obstructions. If you decrease the "sun window" to 10h to 14h, the length of your PV panels will increase. You can experiment a little bit with this (set your minimalSpacingPeriod_ to 21st of June, 10 to 14 hours). In general, shrinking the sun window at the summer solstice is not a good principle during planning.
4) Using tracking PV panels instead of fixed ones. But Ladybug Photovoltaics components do not support this kind of PV system; they only support fixed ones.
I would personally go with the first option. You can also experiment with the second and third ones. Comment back if you have any other questions.
-----------------------
* By "length of the PV panels" I mean the tiltedArrayHeight_ input of the PV SWH system size component.…