But not just any gum tree. The angophora, no less:
Why? Because I like nature, that's why. Every time I see new designs, especially architectural designs, it worries me that the natural environment is being taken over. Not just that, but even the new materials used in all product designs have to come from nature as well [read: mines].
So. People are forgetting that we still need trees, and I believe that if someone sees a beautiful [read: established] tree in their architectural plans, they are going to be much more likely to build around it rather than cut it down. That alone would no doubt increase the value of the house.
My thinking is that current tree models suck. They look unnatural, and I think I know why: they're not random or organic enough, and they're not detailed enough. That's basically my 'rationale' for this project. Just look at how different all of these tree trunks are!
So I am not being paid for this project. It's a personal project of mine. I'm just worried about the trunk shape for now — I'll worry about all the leaves... when I get to that.
I am a Grasshopper beginner. Please keep that in mind. I am also fairly hopeless at traditional programming, but I find the visual approach of Grasshopper much easier to grasp. So, unfortunately, I have gotten stuck and need some help, even just a clue, as to how to proceed.
That said, here is my current progress:
About a year ago, I started modelling with straight trunks using pipe sections, to see if I could get a very basic "tree" shape and to see if I could join the segments together. Yes, it works, but it looks hopeless, as you can imagine. Then I stopped for a long while. Now I'm back at it, hoping to improve a lot more.
I have already made one basic vertical NURBS curve with tangents at either end as the main "trunk".
I tried creating two ellipses, one at each end of the main trunk/curve, and lofting between them, but the loft ignored the main curve/rail. So it ended up being an elliptical trunk with straight sides, which of course still didn't look right.
Then I divided the first main curve up into a number of segments. I think that is a better approach.
I have taken the parameters of the curve at each division point (probably the tangent, though I am unsure of the exact parameter) and used that to form a basic angled plane at each segment/division.
I have been able to draw ellipses at each segment and rotate them onto the plane.
I was going to loft it together later on. A curved loft with elliptical cross-sections looks much better than a straight pipe does, but it still looks too unnatural.
I quickly realised that tree trunks are not elliptical, but rather, shaped more like 'kidneys'.
The next step was to create three or more points on each of those planes (spaced fairly evenly around the ellipse so as not to create a really funky/unwanted shape).
Maybe it would be better to model with a triangle or other polygon instead of an ellipse. I haven't got that far yet... because here is where I am getting stuck.
I managed to find a way of getting three roughly 'triangular' points along each of those ellipses.
I also managed to create three NURBS curves in the Z direction which intersected those three points, a bit like three seams down the side of the tree trunk, but I couldn't figure out how to loft it all together.
I think it was the wrong approach anyway... I'd rather try to create a bunch of NURBS curves at each of the XY planes so as to get more control over the shape.
What I am trying to do now is create three roughly triangle-spaced points on a basic ellipse, through which I can then draw a simple NURBS curve (think of it as a cross-section of the trunk).
I would then like to add some XY-only randomness to the positions of those points. Not Z randomness, otherwise the trunk is going to get messed/kinked up. That's probably very important.
Then I would like to loft those NURBS curves at each XY plane together, forming the basic tree trunk, which also tapers based on some other variable (a non-linear factor, not simply distance from the ground plane; perhaps something else?).
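To make that concrete, here is a minimal GHPython sketch of the pipeline I have in mind. This is a sketch only: it assumes a GHPython component with inputs crv (the trunk curve), n (number of sections), r (base radius) and seed (random seed), and the taper exponent and jitter range are just guesses to tweak.

    import math
    import random
    import Rhino.Geometry as rg

    random.seed(seed)
    params = crv.DivideByCount(n, True)              # n segments -> n+1 parameters
    sections = []
    for i, t in enumerate(params):
        ok, plane = crv.PerpendicularFrameAt(t)      # plane normal to the trunk curve
        if not ok:
            continue
        taper = (1.0 - 0.7 * float(i) / n) ** 1.5    # non-linear taper; never quite zero
        pts = []
        for k in range(3):                           # three roughly 'triangular' points
            ang = k * 2.0 * math.pi / 3.0
            rr = r * taper * random.uniform(0.8, 1.2)   # XY-only jitter, never in Z
            pts.append(plane.PointAt(rr * math.cos(ang), rr * math.sin(ang)))
        pts.append(pts[0])                           # close the section (kinked seam; fine for a sketch)
        sections.append(rg.Curve.CreateInterpolatedCurve(pts, 3))
    # CreateFromLoft returns a list of breps; output it as the trunk
    trunk = rg.Brep.CreateFromLoft(sections, rg.Point3d.Unset, rg.Point3d.Unset,
                                   rg.LoftType.Normal, False)

The two key points are the perpendicular frames (so the sections stay normal to the rail) and the fact that the jitter lives entirely in each frame's XY plane.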
I have attached the GH file.
I am also open to suggestions if you have a better way of solving the problem. I would like to retain control over a lot of factors, such as the number of branches, spacing, average branch length, etc. My main constraints are that the entire thing has to be somewhat random and non-linear.
…
You will see a list of potential matches, sorted from most relevant to least relevant:
Some components and objects support initialisation codes, which means you can assign certain values directly from the popup box. You can do this by adding an equals symbol after the name and then the value you wish to assign. For example, the [Curve Offset] component allows you to specify the offset distance via the popup box by typing =5 after the offset command:
However the popup box also supports a set of special formats that allow you to create specific objects without even typing their names. As of 0.9.0077 (which hasn't been released yet at the time of writing) you can use the following shortcuts to create special objects. In the notation below, optional parts of a format are surrounded by square brackets and hashes (#) indicate numeric values. So #,#[,#] means: at least two numeric values separated by a comma, with an optional second comma and third number.
A complete list of special formats (not all of these are supported yet in 0.9.0076):
"∙∙∙ If the format starts with a double quote, then the entire contents (minus any other double quotes) will be placed into a Text Panel.
//∙∙∙ If the format starts with two forward slashes, then the entire contents will be placed in a Text Panel.
~∙∙∙ If the format starts with a tilde, then the entire contents will be placed in a Scribble object.
#,#[,#] If the format contains two or three numerics separated by commas, a Point parameter will be created with the specified coordinates.
+[#] If the format starts with a plus symbol followed by a numeric, then an Addition component will be created.
-[#] If the format starts with a minus symbol followed by a numeric, then a Subtraction component will be created.
*[#] If the format starts with an asterisk symbol followed by a numeric, then a Multiplication component will be created.
/[#] If the format starts with a forward slash symbol followed by a numeric, then a Division component will be created.
\[#] If the format starts with a backward slash symbol followed by a numeric, then an Integer Division component will be created.
%[#] If the format starts with a percent symbol followed by a numeric, then a Modulus component will be created.
&[∙∙∙] If the format starts with an ampersand symbol, then a Concatenation component will be created.
=[∙∙∙] If the format starts with an equals symbol, then an Equality component will be created.
<[*] If the format starts with a smaller than symbol, then a Smaller Than component will be created.
>[*] If the format starts with a larger than symbol, then a Larger Than component will be created.
[# *] Pi If the format contains the text "Pi" with an optional multiplication factor, then a Pi component will be created.
# If the format can be evaluated as a single numeric value, then a Slider will be created with the specified initial value and sensible™ lower and upper limits.
#<# If the format contains two numerics separated by a smaller than symbol, a Slider with the specified limits will be created. The initial slider value will be equal to the lower limit.
#<#<# If the format contains three numerics separated by a smaller than symbol, a Slider with the specified limits will be created. The initial slider value will be the value in the middle.
#..# If the format contains two numerics separated by two or more consecutive dots, a Slider with the specified limits will be created. The initial slider value will be equal to the lower limit.
#..#..# If the format contains three numerics separated by two or more consecutive dots, a Slider with the specified limits will be created. The initial slider value will be the value in the middle.
#/#/[#] If the format contains two or three numerics separated by forward slashes, a Calendar object will be created. The order of value is day/month/year. If year is omitted then the current year is used. Note that a second slash is required because #/# is interpreted as a number and thus results in a Slider.
#:#[:#] [am/pm] If the format contains at least two numerics separated by a colon, a Clock object is created. Seconds are optional, as are am/pm suffixes.
f([...[,...[,...]]]) [= *] If the format starts with a lower case f followed by an opening bracket, an Expression component is created. A list of comma separated arguments can be provided as inputs, and anything after the optional equals symbol becomes the expression string.
Note that decimal places will be harvested from formats that indicate sliders. I.e. the format 0..2..10 is not the same as 0..2..10.00, as the former will create an integer slider from zero to ten whereas the latter will create a floating point slider with two decimal places from zero to ten.…
Added by David Rutten at 3:24pm on February 18, 2013
NURBS cup surface, and boy oh boy did it ever work more uniformly than using 3D orb cutters on a 3D cup. Different sized spheres return the *same* hex grid, only less and less raised up as the spheres get very large.
My first question was whether these are different in character or just in Z scaling. So I rescaled them all to the same Z thickness, after extracting only the relief structure via Boolean union and splitting... and they are only *slightly* different in character, which means mere Z re-scaling of a single moderate-ball-size relief is an appropriate cheat that avoids slowly re-making each relief Z scale via Boolean union with different sized balls.
The one on the right is a very shallow relief scaled up to the same Z thickness as the pure sphere one on the left. And really, we will be mostly scaling *down* from a thicker master surface so that will attenuate any weirdness in the curvature. Indeed, I see no difference, so it makes sense to only archive the thickest one so we can control the full range of thicknesses, all the way to nearly flat bulbs. Here is the thickest one, just before the balls lose holes between them, scaled down compared to a shallow one made with huge balls to start with:
Now we just use Rhino Flow Along Surface or the Grasshopper Jackalope plug-in Sporf to morph this flat system onto our lathe form.
With Rhino history for the Flow Along Surface step I can rescale the original in Z and wait twenty seconds to see the update:
There are sad edge artifacts that will require some strategy to retain or later delete a whole row:
Maybe add more geometry to later delete or make a solid to hold stuff together?
So vastly decreasing the cell count and changing grid direction to match your cup:
The edges came out fine on this one, happily. The isocurve count has been increased by the Flow Along Surface command:
It can't be filleted yet, since the seam where the cup NURBS surface has a joint now leaves feathery edges, so I went back, duplicated the border of the flat array, and offset and lofted it to make a protecting surface:
But that gave crazy artifacts:
I'm just going to use symmetry to fill in the joint with good faces that are not having to be joined as two halves. I had to turn my Rhino units tolerance down from a silly 0.0001 to 0.01 units to get a good re-join, but it still won't fillet without leaving holes.
SO LET'S FILLET THE FLAT THING. Same problem but a bit faster, and actually repairable manually. Rhino 5 is buggy as hell with core commands, damn it. This is not world class behavior.
Let's try it in Rhino 6 WIP, our great hope of the future: nope, the same. I had to simply manually copy the missing pieces from where it did work, which at least is easy to do in flatland. Now I get a cup:
This can *all* be done quickly in Rhino without Grasshopper, and Rhino affords you fast cage editing of the original flat array that Grasshopper cannot yet do. You just need to use Analyze Direction to be able to swap UV directions of the source or target and flip the source surface to achieve concave vs. convex patterns.
Grasshopper doesn't even have a fillet (multiple) edges component so there's not a lot of advantage to having some super slow parametric system via Grasshopper. It's not like you'll be able to see the changes fast enough to tweak a design.…
This was a reasonably effective workflow for the purposes of solving the initial problem. (In reviewing this post, it seems a bit lengthy, but hopefully it's of use to others.)
Link to Illustrator script example: https://forums.adobe.com/thread/508138
Portion I used: this applies to the entire Illustrator document. I am using Illustrator CC 64-bit and it worked okay. Tested a few times; it failed once, but a restart of Illustrator fixed it.
    var v_selection = app.activeDocument.pathItems;
    SwapFillStroke(v_selection);

    function SwapFillStroke(objSel) {
        for (var k = 0; k < objSel.length; k++) {
            var subSel = objSel[k];
            var c_fill = subSel.fillColor;      // remember the current fill colour
            var c_stroke = subSel.strokeColor;  // ...and the current stroke colour
            subSel.fillColor = c_stroke;        // old stroke becomes the new fill
            if (!subSel.stroked) subSel.stroked = true;  // ensure a stroke exists before recolouring it
            subSel.strokeColor = c_fill;        // old fill becomes the new stroke
        }
    }
    redraw();
My goal was to export colored geometry (analysis meshes, for example) from Rhino and get it into Illustrator with solid fills.
If you want to know how meshes are colored in Rhino, there are many explanations here on the forum; a quick search will get you more detailed information.
Short version: export your lines from Rhino to Illustrator and run the script listed above to make the stroke color the fill color. (In Illustrator, Shift+X will swap the fill and stroke colors on individual objects, but it does not work on multiple objects... hence the need for the script.)
Detailed Version:
In my case, I had 2 case studies I was working with:
1 - Wind rose meshes generated from Ladybug/Honeybee
2 - A mesh terrain that was colored by pre-set slope values.
NOTE: There are a few plugins to bake objects with color. I used Human tools (Bake Geometry and JustifiedText3D): http://www.grasshopper3d.com/group/human (lots of other great stuff in there too!)
I had two types of geometry (2 different definitions):
1 - An analysis mesh (HoneyBee/LadyBug)
2 - Lines generated from mesh faces (mesh terrain/slope values)
Export results as a DXF and choose "do not explode". (These were my settings.)
DXF seemed to produce the most consistent results.
(You could export/save as an AI file and just open it in Illustrator, but that seemed to give inconsistent results with the script.)
Open DXF in Illustrator:
Apply script in Illustrator:
In the terrain example, there are only 5 colors, so selecting by color in Illustrator is very easy. In the results from Honeybee/Ladybug (or any analysis process, I imagine), the default colors are created with a much wider range of values. I presume the legend is then created by an average of those values within a range. My point is that, with analysis results, selecting objects by color in Illustrator is probably not a very effective workflow.
I only tested this on my instances of Rhino and Illustrator; mileage may vary.
In summation: at this point, it seems that the best way to get colored mesh faces into Illustrator is to export the meshes (which really ends up being the mesh face edges... curves), bring them into Illustrator, and run a quick script to swap the colors. Once that is complete, you can select ALL the objects and change the stroke color/weight at once.…
e a fundamental failure on my part. On the other hand, Grasshopper isn't supposed to be on a par with most other 3D programs. It is emphatically not meant for manual/direct modelling. If you would normally tackle a problem by drawing geometry by hand, Grasshopper is not (and should never be advertised as) a good alternative.
I get that. That’s why that 3D shape I’m trying to apply the voronoi to was done in NX. I do wonder where the GUI metaphor GH uses comes from. It reminds me of LabVIEW.
"What in other programs is a dialog box, is 8 or 10 components strung together in grasshopper. The wisdom for this I often hear among the grasshopper community is that this allows for parametric design."
Grasshopper ships with about 1000 components (rounded to the nearest power of ten). I'm adding more all the time, either because new functionality has been exposed in the Rhino SDK or because a certain component makes a lot of sense to a lot of people. Adding pre-canned components that do the same as '8 or 10 components strung together' for the heck of it will balloon the total number of components everyone has to deal with. If you find yourself using the same 8 to 10 components together all the time, then please mention it on this forum. A lot of the currently existing components have been added because someone asked for it.
It’s not the primary components that catalyzed this thought but rather the secondary components. I was toying with a component today (twist from jackalope) that made use of three toggle components. The things they controlled are checkboxes in other apps.
Take a look at this jpg. Ignore differences; I did 'em quickly. GH required 19 components to do what SW did with 4 commands. Note the difference in screen real estate.
As an aside, I really hate SolidWorks (SW). But going forward, I’ll use it as an example because it’s what most people are familiar with.
"[...] has a far cleaner and more intuitive interface. So does SolidWorks, Inventor, CATIA, NX, and a bunch of others."
Again, GH was not designed to be an alternative to these sorts of modellers. I don't like referring to GH as 'parametric', as that term has been co-opted by relational modellers; I prefer to use 'algorithmic' instead. The idea behind parametric seems to be that one models by hand, but every click exists within a context, and when the context changes the software figures out where to move the click to. The idea behind algorithmic is that you don't model by hand.
I agree, and disagree. I believe parametric applies equally to GH AND SW, NX, and so forth, while algorithmic is unique to GH (and GC and Dynamo I think). Thus I understand why you prefer the term. I too tend to not like referring to GH as a parametric modeler for the same reason.
But I think it oversimplifies things to say parametric modelers move the clicks. SW tracks clicks the same way GH does; GH holds that information in geometry components while SW holds it in a feature in the feature tree. In both GH and SW, edits to the base geometry will drive a recalculation, but more commonly it's an edit to input data, be it equations or just plain numbers, that drives a recalculation.
I understand the difference in these programs. What brought me to GH is that it can create a visual dialog that standard modelers can't. But as I've grown more comfortable with it, I've come to realize that the GUI of GH and the GUI of other parametric modelers, while looking completely different, are surprisingly interchangeable. Do not misconstrue this as suggesting that GH should replace its GUI with SW's. I'm not. I refrain from suggesting anything specific. I only suggest that you allow yourself to think radically.
This is not to say there is no value in the parametric approach. Obviously it is a winning strategy and many people love to use it. We have considered adding some features to GH that would make manual modelling less of a chore and we would still very much like to do so. However this is such a large chunk of work that we have to be very careful about investing the time. Before I start down this road I want to make sure that the choice I'm making is not 'lame-ass algorithmic modeller with some lame-ass parametrics tacked on' vs. 'kick-ass algorithmic modeller with no parametrics tacked on'.
Given a choice, I'd pick kick-ass algorithmic modeller with no parametrics tacked on.
2. Visual Programming.
I'm not exactly sure I understand your grievance here, but I suspect I agree. The visual part is front and centre at the moment and it should remain there. However we need to improve upon it and at the same time give programmers more tools to achieve what they want.
I'll admit, this is a bit tough to explain. As I've re-read my own comment, I think it was partly a precursor to the context sensitivity point and touched upon other stated points.
This now touches upon my own ignorance about GH’s target market. Are you moving toward a highly specialized tool for programmers and/or mathematicians, or is the intent to create a tool that most designers can master? If it’s the former, rock on. You’re doing great. If it’s the latter, I’m one of the more technically sophisticated designers I know and I’m lost most of the time when using GH.
GH allows the same freedom as a command-line editor: you can do whatever you like, and it'll either work or not. And you won't know why it works or doesn't until you start becoming a bit of an expert and can actually decipher the gibberish in a panel component. I often feel GH has the ease of use of DOS with a badass video card in front.
Please indulge my bit of storytelling. Early 3D modelers, CATIA, Unigraphics, and Pro-Engineer, were unbelievably difficult to use. Yet no one ever complained. The pain of entry was immense. But once you made it past the pain threshold, the salary you could command was very well worth it. And the fewer the people who knew how to use it, the more money you could demand. So in a sense, their lack of usability was a desirable feature among those who’d figured it out.
Then SolidWorks came along. It could only do a fraction of what the others did, but it was a fraction of the cost, it did most of what you needed, and anyone could figure it out. There was even a manual on how to use it. (Craziness!) Within a few short years, the big three all had to change their names (V5, NX, and Wildfire (now Creo)) and change the way they do things. All are now significantly easier to use.
I can tell that the amount of development time that's gone into GH is immense, and I believe the functionality is genius. I also believe its ease of use could be greatly improved.
Having re-read my original comments, I think it sounded a bit snotty. For that I apologize.
3. Context sensitivity.
"There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them."
Unfortunately it's not as simple as that. Whether or not a conversion between two data types makes sense is often dependent on the actual values. If you plug a list of curves into a Line component, none of them may be convertible. Should I therefore not allow this connection to be made? What if there is a single curve that could be converted to a line? What if you want to make the connection now, but only later plan to add some convertible curves to the data? And if you made the connection back when it was valid, but now it's no longer valid, wouldn't it be weird if there was an existing connection you couldn't make again?
I've started work on GH2 and one of the first things I'm writing now is the new data-conversion logic. The goal [...] is to not just try and convert type A into type B, but to include information about what sort of conversion was needed (straightforward, exotic, far-fetched, etc.) and information regarding why that type was assigned.
You are right that under some conditions, we can be sure that a conversion will always fail. For example connecting a Boolean output with a Curve input. But even there my preferred solution is to tell people why that doesn't make sense rather than not allowing it in the first place.
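To make that tiering concrete, here is a toy Python sketch of the idea (illustrative only, not GH2 code; the tiers and the cast table are invented for the example):

    # Toy sketch of tiered type conversion; not GH2 code.
    def convert(value, target):
        """Try to convert value to the target type; report how far-fetched it was."""
        if isinstance(value, target):
            return value, "straightforward"
        casts = {
            (bool, float): (float, "exotic"),       # True -> 1.0
            (float, int):  (round, "exotic"),       # lossy rounding
            (str, float):  (float, "far-fetched"),  # only works for numeric text
        }
        key = (type(value), target)
        if key in casts:
            fn, quality = casts[key]
            try:
                return target(fn(value)), quality
            except ValueError:
                return None, "failed"
        return None, "failed"

    print(convert(2.7, int))      # (3, 'exotic')
    print(convert("1.5", float))  # (1.5, 'far-fetched')
    print(convert("abc", float))  # (None, 'failed')

The point is the second return value: the consumer can decide how suspicious to be of the data, rather than the connection simply being forbidden.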
You bring up both interesting points and limits to my understanding of coding. I’ve reached the point in my learning of GH where I’m just getting into figuring out the sets tab (and so far I’m not doing too well). I often find myself wondering “Is all of this manual conditioning of the data really necessary? Doesn’t most software perform this kind of stuff invisibly?” I’d love to be right and see it go away, but I could easily be wrong. I’ve been wrong before.
5. Components.
"Give components a little “+” or a drawer on the bottom or something that by clicking, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to r-click on each thing on a component to do all of the settings."
I was thinking that just zooming in on a component would eventually provide easier ways to access settings and data.
I kinda like this. It’s a continuation of what you’re currently doing with things like the panel component.
"Could some of these items disappear if they are contextually inappropriate or gray out if they're unlikely?"
It's almost impossible for me to know whether these things are 'unlikely' in any given situation. There are probably some cases where a suggestion along the lines of "Hey, this component is about to run 40,524 times. It seems like it would make sense to Graft the 'P' input." would be useful.
6. Integration.
"Why isn't it just live geometry?"
This is an unfortunate side-effect of the way the Rhino SDK was designed. Pumping all my geometry through the Rhino document would severely impact performance and memory usage. It also complicates the matter to an almost impossible degree as any command and plugin running in Rhino now has access to 'my' geometry.
"Maybe add more Rhino functionality to GH. GH has no 3D offset."
That's the plan moving forward. A lot of algorithms in Rhino (Make2D, FilletEdge, Shelling, BlendSrf, the list goes on) are not available as part of the public SDK. The Rhino development team is going to try and rectify this for Rhino6 and beyond. As soon as these functions become available I'll start adding them to GH (provided they make sense of course).
On the whole I agree that integration needs a lot of work, and it's work that has to happen on both sides of the aisle.
You work for McNeel yet you seem to speak of them as a separate entity. Is this to say that there are technical reasons GH can only access things through the Rhino SDK? I’d think you would have complete access to all Rhino API’s. I hope it’s not a fiefdom issue, but it happens.
7. Documentation.
Absolutely. Development for GH1 has slowed because I'm now working on GH2. We decided that GH1 is 'feature complete', basically to avoid feature creep. GH2 is a ground-up rewrite so it will take a long time until something is ready for testing. During this time, minor additions and of course bug fixes will be available for GH1, but on a much lower frequency.
Documentation is woefully inadequate at present. The primer is being updated (and the new version looks great), but for GH2 we're planning a completely new help system. People have been hired to provide the content. With a bit of luck and a lot of work this will be one of the main selling points of GH2.
That begs a question I have to ask: when is GH 1.0 scheduled to launch? And if you need another person to proofread the current draft of the new primer, I'm happy to help:
patrick@girgen.com
I can’t believe wikipedia has an entry for feature creep. And I can’t believe you included it. It made me giggle. Thanks.
8. 2D-ness.
"I know you'll disagree completely, but I'm sticking to this. How else could an omission like offsetsurf happen?"
I don't fully disagree. A lot of geometry is either flat or happens inside surfaces. The reason there's no shelling (I'm assuming that's what you meant, there are two Offset Surface components in GH) is because (a) it's a very new feature in Rhino and doesn't work too well yet and (b) as a result of that isn't available to plugins.
I believe it’s been helpful for me to have figured this out. I recently completed a GH course at a local Community College and have done a bunch of online tutorials. The first real project I decided to tackle has turned out to be one of the more difficult things to try. It’s the source of the questions I posted. (Thanks for pointing out that they were posted in the wrong spot. I re-posted to the discussions board.)
I just can't seem to figure out how to turn the voronoi into legitimate geometry. I've seen this exact question posted a few times, but it's never been successfully answered. What I'm showing here is far more angular than I'm hoping for. The mesh is too fine for Weaverbird to have much of an effect, and I haven't cracked re-meshing. Btw, in product design, meshes are to be avoided like the plague, so embracing them remains difficult.
As for OffsetSrf: in Rhino, if you apply it to a solid body, it executes on all sides, creating another neatly trimmed body that's either larger or smaller than the original. This is how every other app I know of works. GH's Offset Surface creates a bunch of unjoined faces spaced away from the original brep. A common technique for 3D voronois (yes, I hit the voronoi overuse easter egg) is to find the center of each cell and scale the cells about that center. If you think about it, this creates a different distance from the face of the scaled cell to the face of the original cell for every face. As I've mentioned, this project is giving me serious headaches.
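As a side note, the scaling step itself is only a few lines in a GHPython component. A sketch, assuming cells is a list of closed Voronoi-cell breps and factor is something like 0.9 (and note that, as described above, this is not a constant-distance offset):

    import Rhino.Geometry as rg

    shrunk = []
    for cell in cells:
        vmp = rg.VolumeMassProperties.Compute(cell)       # needs a closed brep
        if vmp is None:
            continue
        xform = rg.Transform.Scale(vmp.Centroid, factor)  # uniform scale about the centroid
        dup = cell.DuplicateBrep()
        dup.Transform(xform)
        shrunk.append(dup)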
Don't get me wrong, I appreciate the feedback, I really do, but I want to be honest and open about my own plans and where they might conflict with your wishes. Grasshopper is being used far beyond the boundaries of what we expected and it's clear that there are major shortcomings that must be addressed before too long. We didn't get it right with the first version, I don't expect we'll get it completely right with the second version but if we can improve upon the -say- five biggest drawbacks (performance, documentation, organisation, plugin management and no mac version) I'll be a happy puppy.
--
David Rutten
Thank you for taking the time to reply, David. Often it feels like posting such things is sending them into the empty ether; I'm very glad that this was not the case.
And thank you for all of the work you've put into GH. If you found any of my input overly harsh or ill-mannered, I apologise. It was not my intent. I'm generally not the ranting sort. If I hadn't intended to provide possibly useful input, I wouldn't have written.
Cheers
Patrick Girgen
Ps. Any pointers on how to get a bit further on the above project would be greatly appreciated.
…
lly it should not make much of a difference: random number generation is not affected, and mutation also is not. Crossover is a bit more tricky; I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb, K. and Agrawal, R. B.: Simulated Binary Crossover for Continuous Search Space. Technical Report IITK/ME/SMD-94027, Indian Institute of Technology, Kanpur, India, November 1994.
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs have been developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that the real-coded GAs with the SBX operator are able to perform as good as or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions
In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover.
On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent with binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with an SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions, and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better in difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.
Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions.
These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, the real-coded GAs with SBX are one step ahead of the binary-coded GAs in terms of achieving a convergence proof for GAs. With a direct probabilistic relationship between children and parent points used in this paper, cues from the classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between the classical optimization methods and GAs is on the horizon.
In short, according to the authors, the SBX operator using real gene values is as good as older operators specially designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general crossover operator.
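For the curious, the core of SBX for a single real-valued gene is tiny. A minimal Python sketch following Deb & Agrawal; eta = 20 gives the SBX-20 mentioned above:

    import random

    def sbx(x1, x2, eta=20.0):
        # The spread factor beta is drawn so that the children's spread around
        # the parents mimics the statistics of single-point binary crossover.
        u = random.random()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
        c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
        return c1, c2

A large eta keeps children close to their parents (exploitation); a small eta spreads them out (exploration).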
But:
- There might be better ones out there I just haven't seen yet. Please tell me.
- Besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for the MOEA, which is AT LEAST as important as the breeding itself.
- Depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment; maybe there's a way to provide an interface so you can code those things yourself..!?
2) Elite size = SPEA-2 archive size, yes. The rate depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size (as it is hard-coded in the new version, I just realized) is big enough.
4) The non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (per SPEA-2's truncation strategy) are removed one by one until the size is reached. If the front is smaller than the archive, the fittest dominated individuals are put into the elite; the latter happens at the beginning of a run, when the front hasn't been discovered well yet. In toy form, the logic looks like the sketch below.
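A toy Python sketch of that archive update (simplified: objective tuples to be minimised, and truncation by a plain nearest-neighbour distance rather than SPEA-2's k-th-nearest scheme):

    def dominates(a, b):
        # Minimisation: a dominates b if it is no worse everywhere and better somewhere.
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def update_archive(population, size):
        front = [p for p in population if not any(dominates(q, p) for q in population)]
        while len(front) > size:
            # truncate the most crowded individual, one by one
            def nn_dist(p):
                return min(sum((x - y) ** 2 for x, y in zip(p, q))
                           for q in front if q is not p)
            front.remove(min(front, key=nn_dist))
        if len(front) < size:
            # top up with the fittest dominated individuals (fewest dominators here)
            rest = [p for p in population if p not in front]
            rest.sort(key=lambda p: sum(dominates(q, p) for q in population))
            front += rest[:size - len(front)]
        return front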
3) Yes, it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively has the ability to articulate preference relations on sets of solutions.
…
Analysis Tools (LAT). Our plugin has come a long way in the last 4 years and, while the legacy version will still include some small updates and contributions, we are confident in saying that the changes will be far fewer and the plugin more stable in the following months as we switch gears into the LAT effort. I can say personally that (save for a couple of small capabilities) I have made it through my list of critical features and I will hereafter be working on making these features cross-platform, cleanly-implemented, and well-documented in the new Ladybug Analysis Tools software package. As always, you can download the new release from Food4Rhino. Make sure to remove the older version of Ladybug and Honeybee and update your scripts.
The majority of changes in this release represent "icing on the cake" after a long, multi-year effort to connect to the major open source engines and datasets. So, without further ado, here is the list of the new capabilities added with this release:
LADYBUG
Stereographic Sky Projections - Thanks to several code contributions from Byron Mardas, all Ladybug sky visualizations now support stereographic projections! Such projections are useful for understanding the hemispherical visualizations in a 2D format and they also make it easier to overlay different sky datasets on top of one another. Check here for an example file showing the sun path overlaid with helpful/harmful parts of the sky and see here for an example file using shading masks representing strategies (like an overhang) on top of the helpful / harmful portions of the sun path.
Wind Rose Upgrades - Devang Chauhan has added several new features to the Ladybug wind rose including both visual and numerical outputs of average wind velocity and frequency for each petal of the rose. Not only does this enhance the usefulness of the rose but it also paves the way for the use of the wind rose to set up CFD simulations once Butterfly is released in the near future. The new features of the wind rose can be seen in this hydra example file.
Complete Set of Local Thermal Discomfort Models - After the last release included components to evaluate radiant asymmetry discomfort (which can be modeled using these example files: 1, 2), today’s release completes Ladybug’s suite of local discomfort models from ASHRAE and the ISO by adding components to account for discomfort from cold draft. Specifically, two draft models have been added for different types of situations. The first is an older model published by P.O. Fanger, which was developed through experiments where subjects had cold air blown on the back of their neck (the most sensitive part of the body to draft). While this is useful for understanding a worst-case scenario, it can greatly overestimate the discomfort for cases of draft at ankle level - a more common occurrence that typically results from the tendency of cold air to sink. For this situation, a second draft discomfort model has been included, which is specifically meant to forecast ankle draft discomfort. The model is currently undergoing review for integration into ASHRAE-55 and a publication outlining the derivation of this model can be found here:
Liu, S., Schiavon, S., Kabanshi, A. and Nazaroff, W. (2016), Predicted Percentage Dissatisfied with Ankle Draft. Indoor Air. Accepted Author Manuscript. doi:10.1111/ina.12364 (http://escholarship.org/uc/item/9076254n).
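For orientation, the older Fanger draught model mentioned above is the draught rating of ISO 7730. A sketch in Python of the general formula (not necessarily the exact code inside the Ladybug component):

    def draught_rate(t_air, vel, turb=40.0):
        # DR = percentage of people dissatisfied by draught.
        # t_air: local air temperature in C, vel: local mean air speed in m/s,
        # turb: turbulence intensity in % (around 40 is typical indoors).
        v = max(vel, 0.05)                 # the model floors velocity at 0.05 m/s
        dr = (34.0 - t_air) * (v - 0.05) ** 0.62 * (0.37 * v * turb + 3.14)
        return min(dr, 100.0)              # capped at 100 %

    print(draught_rate(20.0, 0.3))  # roughly 45 %: a noticeably drafty condition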
Special thanks is due to Shichao Liu, Toby Cheung and Stefano Schiavon for sharing the model and the results of their study with the development team. The integration of draft models completes the full integration of ASHRAE-55 and EN-15251 with Ladybug. Now, you can rest assured that, if there is a certain thermal comfort standard that you need to fulfill for a given project, you can model it with the ‘bug!
Window-Based Draft Model - With the integration of draft models, the first question that one might ask is “how should these models be applied to typical design cases?” While the (soon-to-be-released) Butterfly plugin for OpenFOAM should open up a Pandora’s box of possible situations, this release of Ladybug includes a simplified downdraft model from cold vertical surfaces, which helps model several typical cases of draft discomfort. The model has been validated across several papers:
Heiselberg, P. (1994). Draught Risk From Cold Vertical Surfaces. Building and Environment, Vol 29, No. 3, 297-301
Manz, H. and Frank, T. (2003). Analysis of Thermal Comfort near Cold Vertical Surfaces by Means of Computational Fluid Dynamics. Indoor Built Environment. 13: 233-242
It has been built into the “Ladybug_Downdraft Velocity” component and has been included in an example file illustrating discomfort from cold windows in winter. The example is intended to show when glazing ratio and window U-Values are small enough to eliminate perimeter heating - a practice that is aesthetically unpleasing, costly to maintain and wasteful in its energy use.
Operative Temperature on the Psychrometric Chart - This is a feature that should have been added a long time ago, but we are finally happy to say that the Ladybug_Psychrometric Chart can draw a comfort polygon assuming that the air temperature and radiant temperature are the same value (a.k.a. an operative temperature psychrometric chart). This operative temperature chart is the format needed to use the ASHRAE-55 graphical method and is generally a better representation of the range of comfort in cases where one does not intend to hold the radiant temperature constant. This operative temperature capability is now the default on the component, but you can, of course, still bring back the older comfort polygon by simply plugging in a value for meanRadiantTemperature_.
Contour Map Visualizations - Using the same inputs as the Ladybug_Recolor Mesh component, the new Ladybug_Contour Mesh component allows you to generate contoured color graphics from the results of any analysis. Now you can maximize the use of your high-resolution studies with contours that highlight thresholds and gradients!
Image Texture Mapping for Colored Meshes - Antonello DiNunzio has added the very useful Ladybug_Texture Maker component, which allows you to bake Ladybug colored meshes with image texture maps (as opposed to the classic method that used colored vertices). This enables the creation of transparent Ladybug meshes, making it even easier to overlay Ladybug graphics with one another and with Rhino geometry:
This component also adds the ability to render Ladybug + Honeybee meshes with other rendering programs like V-Ray and 3ds Max. So you can produce Ladybug graphics like this!
Finally, image-mapped textures are also the format required for gaming and Virtual Reality software like Unity and Augmented Reality programs like Augment. So now you can export your Ladybug meshes all of the way to the virtual world!
Rhino Sun Component - If you have ever had to set up the sun for a rendering plugin and wished that you could just take your Ladybug sun and use that, then you are in luck! Byron Mardas has contributed a component that lets you set the Rhino sun based on your EPW location data, your north direction (if different from the Y-Axis) and any time of day that you want. Not only does this make it easier to coordinate the Rhino sun with your Ladybug visualizations, but you can also use it for real time shadow previews by setting your Rhino view to “Rendered” and scrolling through a slider.
Rendered Ladybug Animations - With both the image texture mapping and the Rhino sun components released, your first thought might be “it would be great if I could use this all in a rendered animation!” Thankfully, Ladybug has added a new component to help you here. The Ladybug_Render View component works in essentially the same way as the Capture View component, allowing you to make a series of images as you animate through a slider. The major benefit here is that it works with both Rhino Render and V-Ray so that animations like this can be produced effortlessly:
Cone of Vision Added - Antonello Di Nunzio has added a component that allows you to visualize various cones of vision in order to help inform your view studies. You can fine tune parameters to include just text-readable or full peripheral vision and use the resulting view cone to constrict the results of your “Ladybug_View Analysis” studies.
Terrain WIP Components Released as the Gismo Plugin - Our friend Djordje has released Gismo, a new plugin for GIS environmental analysis. As a result, the following 5 terrain components: Horizon Angles, Flow Paths, Terrain Shading Mask, Terrain Generator 2, and Terrain Analysis, have been removed from Ladybug+Honeybee's WIP section and added to Gismo.
HONEYBEE
Search, Select, and Import the Hundreds of Outputs from EnergyPlus/OpenStudio - Many of the power users in our community know that EnergyPlus is capable of writing several hundred different outputs from the simulation (well beyond what the basic Honeybee result readers can import). While Honeybee has always allowed one to request these outputs by adding them to the simulationOutputs_ of the component, there has not been an official workflow for searching through all of the possible outputs or importing their specific results… until now! We have added the "Honeybee_Read Result Dictionary" component, which allows you to parse the Result Data Dictionary (or .rdd file) that EnergyPlus outputs during every run of a given model. This allows you to see all of the outputs that are available for the model, and you can even search through this list to find a particular output that you are interested in. Once you find what you are looking for, simply copy the text output from the component into a panel and plug this into simulationOutputs_. Then you can use the "Honeybee_Read EP Custom Result" component to bring your custom results into GH after rerunning the simulation. The example file of an evaporative cooling tower shows how to use the workflow to request and import the energy removed by the tower.
OpenStudio HVAC System Sizing Results - After the full integration of HVAC in the last release, we realized that a number of people wanted to run EnergyPlus models simply to evaluate the size of the Heating/Cooling system in the model (obtained from the EnergyPlus autosize calculation that is run at the start of every simulation). Such a sizing calculation can be a great way to quantify the anticipated savings from a given strategy (like shading) on the size/cost of the building’s HVAC system. To get the results of the sizing calculation, all that one needs to do is connect the output eioFile from the OpenStudio component to the Honeybee_Read HVAC Sizing component. The outputs will indicate the peak heating/cooling loads of each zone (in Watts) as well as the size of each piece of HVAC equipment in the model. The next time that you are on a project that is about to value-engineer out an exterior shading system, use the workflow in the following example file to show that the client will probably end up paying for it with a more expensive HVAC system: Quantifying HVAC Sizing Impact of Shade.
Improved Memory Usage When Building Large Energy Models - As we take the capabilities of Honeybee to larger and larger models, many of us have begun to run up against a particular limitation of our machines: memory. After upgrading our machines to have 32 GBs of RAM, there was only one way left to alleviate the problem: restructure some of the code. Honeybee now uses an enhanced approach that ensures all the previous iterations of Honeybee objects will be removed from the memory once there is a change. In any case, the considerations of memory are definitely something that we intend to improve with the future Honeybee[+] plugin.
Workflow to Import gbXML Files - While GrizzlyBear has been around for several years, enabling us to export Honeybee zones to gbXML, we have gone for quite some time without a workflow to import gbXML files to Honeybee. The new Honeybee_gbXML to Honeybee component addresses this and establishes an easier path to import models from Revit into honeybee. You can read more about the component in this post.
Window Frame Capabilities Added to OpenStudio - After the implementation of LBNL THERM / WINDOW capabilities in the last two releases, there was one final bridge to build in the Honeybee workflow - fully connecting LBNL WINDOW to Honeybee’s OpenStudio workflow. This release of Honeybee will now write all FrameAndDivider objects exported from LBNL WINDOW glazing systems into the energy simulation, enabling you to account for the frame’s thermal bridging effects. As long as the construction is brought in with the Honeybee_Import WINDOW IDF Report component, the frames associated with the construction will be assigned to all windows that have the construction. Finally, it is worth noting that the current Honeybee will also write all glass spectral data as well as gas (or gas mixture) materials into the simulation. This means that essentially all properties of any IDF export that one makes from LBNL WINDOW can be factored into the OpenStudio energy simulation (with the only exception being BSDF materials).
OpenStudio Daylight Sensors Added - In our previous releases of Honeybee, the only means of correctly accounting for daylight sensors in an energy simulation was to run an annual daylight simulation and use the resulting schedules for the lighting in the energy simulation. However, this can take a lot of time and work to set up and run, particularly if the daylight control (at the end of the day) will be driven by just one sensor per room. Now we have added another option, which uses OpenStudio/EnergyPlus's built-in daylight controls. You can assign just a point and an illuminance target on the "Set Zone Thresholds" component, and the lighting will be automatically adjusted over the course of the simulation. It should also be noted that the addition of daylight sensors has coincided with the addition of blind/shade control based on glare: the same sensor point used for daylighting can be used to drive dynamic shades in the energy simulation based on the glare experienced at that point. This example file shows how to set up daylight controls on the EnergyPlus model and check the lighting power results to see the effect.
Better Defaults for Natural Ventilation - After many good people wrote to me informing me that Honeybee overestimates natural ventilation airflow, and I wrote back showing the way that I intended natural ventilation to be set up with the component, it dawned on me that I had selected some poor component defaults. Accordingly, this release includes a window-based natural ventilation option on the Set EP Airflow component that corrects for some of the common issues that I have seen. Insect screens are included by default, and the component runs a general check to see if wind-driven cross ventilation is possible before auto-assigning it. The component will err on the side of more conservative, lower airflow rates unless the user overrides the defaults. Finally, it's worth noting that none of these changes have affected the freedom of the Custom WindAndStack option on the component. The new defaults can be viewed in this example file.
CFD Results Can be Plugged into Microclimate Maps - In preparation for the (very soon) release of the Butterfly that connects to the OpenFOAM CFD platform, we just wanted to note that all of the microclimate map recipes can now take an input of a csv file with a matrix of CFD results for wind speed. For the time being, we have used these to produce very high-accuracy, high resolution maps of outdoor comfort. There will be more to follow soon!
We should also note that, in the last release, I mentioned that we would be phasing out the EnergyPlus component so that all efforts are focused on the OpenStudio component. While I reiterate that all of the features of the EnergyPlus component are available in the OpenStudio component, and I encourage everyone to use the OpenStudio component in order to take advantage of its HVAC capabilities, I have come to realize that many prefer to use the EnergyPlus component out of habit and have not yet had the time to understand why the OpenStudio component is an improvement over it. As a result, we have decided to leave the EnergyPlus component in place for the time being so that everyone has more time to make this transition. The future Ladybug Analysis Tools platform will only interact with EnergyPlus through OpenStudio, so it is recommended that everyone use the OpenStudio component; the two components in the current Honeybee plugin will serve as an educational resource for understanding our path moving forward with OpenStudio.
Lastly, it is with great pleasure that we welcome Devang Chauhan and Byron Mardas to the developer team! As mentioned previously Devang has contributed several updates to the Ladybug Wind Rose in addition to finding and solving a multitude of bugs in other components. Byron has contributed code that has enabled the previously-mentioned stereographic sky projections along with a better method for running the Ladybug Sky Mask. Finally, Byron has contributed the Rhino Sun component, which allows you to coordinate your Rhino renders with your Ladybug data. Welcome to the Ladybug team, gentlemen!
As always let us know your comments and suggestions. Cheers!
Ladybug Analysis Tools Development Team…
tal at food4Rhino:
http://www.food4rhino.com/project/ladybug-Honeybee?ufh
Before addressing the changes in the software itself, we would like to announce the start of two new resources that have been added to help everyone learn and share knowledge across our community.
NEW RESOURCES
GH Example File Sharing - After recognizing how important example files are for sharing knowledge and capabilities in our community, we have initiated a github-based platform for sharing Grasshopper definitions called Hydra: https://hydrashare.github.io/hydra/index.html
While the database holds a little over 50 files at the moment, it is hoped that this will become THE forum where much of our collective knowledge is exchanged and shared into the future. As you can see by clicking on any of the examples, you are now able to get a high-res visual of both the Rhino scene and the GH canvas without having to download files to your machine. Furthermore, the search functionality through the database enables you to quickly and easily see all that our community has contributed on certain subjects (just by searching "shade" or "wind", for example).
In addition to other files that have been contributed, you can find all of the original Ladybug examples here: https://hydrashare.github.io/hydra/index.html?keywords=LBExampleFiles
And all of the original Honeybee examples here: https://hydrashare.github.io/hydra/index.html?keywords=HBExampleFiles
LB+HB Documentation - While our historical practice of including all documentation within component descriptions may have sufficed up until this point, we have since recognized that an online database of all this documentation would be helpful. Now you can search for key terms throughout the entire documentation of the project in the beautiful online databases created by Mostapha:
https://www.gitbook.com/book/mostapharoudsari/ladybug-primer/details
https://www.gitbook.com/book/mostapharoudsari/honeybee-primer/details
And now, onto the major changes and enhancements in the software:
LADYBUG
Photovoltaics Components - Based on original code from NREL's PVWatts (http://pvwatts.nrel.gov), Djordje Spasic and Jason Sensibaugh have built a set of 5 components that produce detailed estimates of the electricity generated when Rhino/Grasshopper surfaces are populated with photovoltaic (PV) modules. The components allow definition of losses and shading, finding optimal tilt and orientation angles, and analysing the performance, energy value, consumption and emissions of the PV system.
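For the curious, the heart of a PVWatts-style calculation is a short relation: DC output scales with plane-of-array irradiance and degrades with cell temperature. The sketch below shows only that core formula; the actual components layer losses, shading and the rest on top of it, and the temperature coefficient used here is an assumed typical value rather than the components' default.

Module PvWattsSketch
    ' Core PVWatts DC relation: Pdc = (Ipoa / 1000) * Pdc0 * (1 + gamma * (Tcell - 25))
    Function DcPower(poaIrradiance As Double, ratedDcWatts As Double,
                     cellTempC As Double,
                     Optional gammaPerDegC As Double = -0.005) As Double
        Return (poaIrradiance / 1000.0) * ratedDcWatts *
               (1.0 + gammaPerDegC * (cellTempC - 25.0))
    End Function

    Sub Main()
        ' A 4 kW (DC) array at 800 W/m2 plane-of-array irradiance and a
        ' 45 C cell temperature yields roughly 2.9 kW.
        Console.WriteLine(DcPower(800, 4000, 45).ToString("F0") & " W")
    End Sub
End Module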
Enhanced Solar Envelope - Boris Plotnikov has contributed a solar envelope component that is not only much faster and more stable than the previous component but also allows you to input the geometries of buildings for which you would like to ensure solar access. This enables customization of the solar envelope to specific urban contexts in a manner that the previous envelope did not. The component also features a "solar access" option that draws the envelope above which a given site receives sun from a set of sun vectors. An example file can be found here: http://hydrashare.github.io/hydra/viewer?owner=boris-p&fork=hydra&id=SolarEnvelope
Adaptive Comfort Chart - To assist with understanding the variations of the adaptive comfort model, an Adaptive Comfort Chart component has been added that functions in a manner similar to the psychrometric chart for the PMV model. In addition to offering a visualization of the adaptive standard itself, the chart is particularly helpful for displaying the results of energy simulations in relation to the comfort polygon. The chart is based on the UC Berkeley Center for the Built Environment's Comfort Tool (http://comfort.cbe.berkeley.edu/) (https://github.com/CenterForTheBuiltEnvironment/comfort_tool). An example file can be found here: http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Adaptive_Comfort_Chart
Full Support for US + European Thermal Comfort Standards - Ladybug now supports the ability to model any of the variations of the Adaptive/PMV models for both the US (ASHRAE) and European (ISO) standards. This includes varying thresholds for the percentage of people dissatisfied (PPD), varying thresholds for humidity ratios, the ability to use either monthly-average or daily-running-mean outdoor temperatures in the adaptive model, and even some functions that are not yet part of these standards but are referenced widely in thermal comfort research. Such widely referenced functions include the application of the adaptive model's method to conditioned or mixed-mode buildings, as well as its application to times of the year that ASHRAE and the ISO consider too cold for an adaptive standard. All of these variations can be accessed through the new "PMV Comfort Parameters" and "Adaptive Comfort Parameters" components. As a final nod to dual support for US/European standards, it is now possible to view the psychrometric chart as a Mollier i,x diagram.
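As a flavour of what the two standards compute, the sketch below implements their published neutral-temperature lines and the ASHRAE 80%-acceptability band. The components themselves also handle the running-mean calculation, applicability limits and all of the variations listed above, so take this only as the central relation.

Module AdaptiveComfortSketch
    ' ASHRAE-55 adaptive neutral temperature from the prevailing mean
    ' outdoor temperature (valid roughly between 10 C and 33.5 C).
    Function NeutralTempAshrae(prevailingOutdoorC As Double) As Double
        Return 0.31 * prevailingOutdoorC + 17.8
    End Function

    ' EN-15251 adaptive neutral temperature from the running mean.
    Function NeutralTempEN(runningMeanC As Double) As Double
        Return 0.33 * runningMeanC + 18.8
    End Function

    Sub Main()
        Dim tOut As Double = 22.0
        Dim neutral = NeutralTempAshrae(tOut)
        ' The ASHRAE 80% acceptability band is +/- 3.5 C around the neutral line.
        Console.WriteLine($"Neutral: {neutral:F1} C, " &
                          $"80% range: {neutral - 3.5:F1} to {neutral + 3.5:F1} C")
    End Sub
End Module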
EPWMap - After years of struggling with the text-based indexing of the DOE's epw file database, it is now possible to search for weather files using a map interface and search bar, thanks to Mostapha's recent web interface built with D3 and Google Maps (http://mostapharoudsari.github.io/epwmap/). From here on out, the Ladybug "Download EPW" component will direct you to this interface.
"RunItAll" Released as "Fly" - In preparation for future features that will assist with exploring large multidimensional design spaces, this release of Ladybug includes a component by the name of "Fly" that is meant to run through all of the combinations of a given set of sliders. Those who follow this forum closely might recognize it as a reincarnated version of a component called "RunItAll" that appeared in some older example files. You can find an example file here: http://hydrashare.github.io/hydra/viewer?owner=mostaphaRoudsari&fork=hydra_1&id=Parametric_Daylight_Analysis
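Conceptually, "Fly" treats the sliders like the wheels of an odometer and visits every combination of their values once. A Grasshopper-free sketch of that iteration pattern (with made-up slider values) looks like this:

Module FlySketch
    Sub Main()
        ' Illustrative slider value lists; Fly reads these from the actual
        ' sliders on the canvas.
        Dim sliders As Double()() = {
            New Double() {0, 1, 2},
            New Double() {10, 20},
            New Double() {0.5, 0.6}
        }
        Dim idx(sliders.Length - 1) As Integer ' current position per slider
        Do
            ' Report the current combination (3 x 2 x 2 = 12 in total).
            Dim combo(sliders.Length - 1) As Double
            For i = 0 To sliders.Length - 1
                combo(i) = sliders(i)(idx(i))
            Next
            Console.WriteLine(String.Join(", ", combo))
            ' Advance the rightmost "wheel", carrying leftwards on overflow.
            Dim pos As Integer = sliders.Length - 1
            Do While pos >= 0
                idx(pos) += 1
                If idx(pos) < sliders(pos).Length Then Exit Do
                idx(pos) = 0
                pos -= 1
            Loop
            If pos < 0 Then Exit Sub ' every combination has been visited
        Loop
    End Sub
End Module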
Shade Benefit Evaluator Validated + Published - After a long process of testing, the key functions within the comfort and energy shade benefit evaluator components have been validated against several comparable software packages and energy modeling tools. A paper on this validation, published at the SimAUD conference, can be found here: https://www.dropbox.com/s/tvdj6d2giswurew/SIMAUD_Paper12.pdf?dl=0. Special recognition goes to Panagiotis Samaras, who ran many of these intensive tests for his thesis. Along with this validation, a few more variables have been exposed to allow more freedom in running the shade benefit functions, including the use of higher sky resolutions and multiple shade benefit test regions for a given shade.
Color Gradient Library - After realizing that several of us wanted quick access to common color gradients that we frequently plug into the Legend Parameters component, we have added a component called "Color Gradient Library" to do just this. An image displaying all of these gradients can be found here: https://github.com/mostaphaRoudsari/ladybug/blob/master/resources/gradients.jpg
And an example file showing how to use the library can be found here: http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Color_Library
If you feel that a common gradient is missing, feel free to start a discussion on our GH group and we should be able to include it soon.
Solar Time Available - The Ladybug Sunpath now includes an option to display solar time, which many find more intuitive and easier to work with when designing with solar geometry. Solar time is also useful for minimizing the east-vs-west bias that can develop in sunlight-hour studies, without having to generate sun vectors at very small timesteps.
Monthly/Daily Totals for Hourly Data - The Ladybug "Average Data" component now includes the ability to total the values for months and days (as opposed to simply averaging them). This is particularly useful when you want to get monthly or daily values of total energy, or to visualize these totals on the monthly bar chart.
Increased IP Functionality - This release of Ladybug includes several more features that assist with converting data for an IP audience, including the ability to view an IP psychrometric or adaptive chart by plugging in temperature values in Fahrenheit, as well as a number of new converter components for the following: Wh to BTU, R-Value SI to R-Value IP, m/s to mph, and Liters to Gallons. Note that Honeybee is still largely SI (requiring your Rhino model to be in meters to run energy simulations).
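The factors behind those four converters are the standard ones, collected here in a small sketch for reference:

Module IpConversionSketch
    ' Standard SI -> IP conversion factors (rounded to six significant figures).
    Function WhToBtu(wh As Double) As Double
        Return wh * 3.412142                ' 1 Wh = 3.412142 BTU
    End Function
    Function RValueSiToIp(rSi As Double) As Double
        Return rSi * 5.678263               ' m2-K/W -> h-ft2-F/BTU
    End Function
    Function MsToMph(ms As Double) As Double
        Return ms * 2.236936                ' 1 m/s = 2.236936 mph
    End Function
    Function LitersToGallons(liters As Double) As Double
        Return liters * 0.264172            ' US gallons
    End Function

    Sub Main()
        Console.WriteLine(WhToBtu(1000).ToString("F0") & " BTU") ' ~3412 BTU
    End Sub
End Module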
Mesh-to-Hatch and Future BakeIt Plans - Given that the current BakeIt_ option has only been implemented on a few components with relatively minimal use, it has been decided that future implementations of BakeIt_ will provide not just a means of recording parametric results in the Rhino scene but will also support a full pathway to vector-based programs (like Illustrator and Inkscape). As such, BakeIt_ will place text in the Rhino scene as actual editable text (not meshes) and colored meshes will be output as groups of colored hatches (so that they appear as color-filled polygons in vector-based programs). In order to give those interested in this future capability a chance to experiment at the present, a “Mesh-To-Hatch” component has been added to the Extra tab.
HONEYBEE
Fully Functional Microclimate Maps - Finally, after a long and arduous thesis followed by a couple of months of bug-fixing, Chris Mackey is pleased to announce that the ability to produce high-resolution temperature maps from EnergyPlus results is complete. Together, these maps account for four key variables that produce microclimatic diversity in and around buildings: MRT variation from different surface temperatures, solar radiation shining directly on occupants, average air temperature diversity, and air temperature stratification. In addition to using these four variables to produce high-resolution visuals of temperature, it is also possible to produce maps of thermal comfort using any of the three primary thermal comfort models in Ladybug (PMV, Adaptive, and Outdoor (UTCI)). Support currently exists for both indoor and outdoor conditions and, while the temperature values and indoor comfort values currently produced are highly accurate, the outdoor wind speeds are calculated using the simplified assumptions of EnergyPlus; these will be revised to account more accurately for the effects of wind on outdoor comfort in the next stable release. The whole workflow is broken down into eight components that can all be found under the 9 | Energy | Energy tab. For some videos showing time-lapse thermal renderings made with these tools, see this playlist: https://www.youtube.com/playlist?list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
For the full 150-page documentation of the tools produced for Chris's thesis, see this link: https://www.dropbox.com/s/k4r4rd279y4td9n/Mackey_Thesis.pdf?dl=0
Finally, if you want to dive in and produce some comfort maps yourself, you can find an example file for indoor maps here: http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Indoor_Microclimate_Map
And an example file for outdoor maps here: http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Outdoor_Microclimate_Map
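To give a sense of how the radiant variables fold into a single temperature, a common low-air-speed simplification is the operative temperature, the average of air and mean radiant temperature. The actual components go much further (computing MRT from surface temperatures and treating solar and stratification explicitly), so take this only as the basic relation:

Module OperativeTempSketch
    ' At low air speeds (below roughly 0.2 m/s), operative temperature is
    ' commonly approximated as the mean of air and mean radiant temperature.
    Function OperativeTemp(airTempC As Double, mrtC As Double) As Double
        Return (airTempC + mrtC) / 2.0
    End Function

    Sub Main()
        ' A point near a cold window: 21 C air but 16 C MRT feels like 18.5 C.
        Console.WriteLine(OperativeTemp(21, 16) & " C")
    End Sub
End Module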
Thermal Autonomy / Thermal Comfort Percent - In addition to the new thermal mapping capabilities, this release includes the ability to use these maps to calculate a series of spatial thermal comfort metrics that are meant to mirror the metrics currently used to evaluate daylight (daylight autonomy, UDI, etc.). Specifically, these metrics are the following:
Thermal Comfort Percent - The percentage of occupied time that a given point in space is thermally comfortable.
Thermal Autonomy - The percentage of occupied time that a given point in space is thermally comfortable without the addition of any heating or cooling energy.
Overheated Hours - The percentage of occupied time when a given point in space is too hot to be thermally comfortable.
Underheated Hours - The percentage of occupied time when a given point in space is too cold to be thermally comfortable.
All of these metrics can be accessed through the "Thermal Autonomy Analysis" component and you can find an example file here: http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Comfort_Autonomy
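In code terms, each of these metrics is a simple ratio over the occupied hours at a point. Here is a hedged sketch of Thermal Comfort Percent, assuming per-hour comfort flags and an occupancy schedule have already been extracted from the map results:

Module ComfortMetricSketch
    ' Thermal Comfort Percent at one analysis point: the share of occupied
    ' hours (out of e.g. 8760) during which the point is comfortable.
    Function ThermalComfortPercent(comfortable As Boolean(),
                                   occupied As Boolean()) As Double
        Dim occupiedHours As Integer = 0
        Dim comfortableHours As Integer = 0
        For h As Integer = 0 To comfortable.Length - 1
            If occupied(h) Then
                occupiedHours += 1
                If comfortable(h) Then comfortableHours += 1
            End If
        Next
        If occupiedHours = 0 Then Return 0.0
        Return 100.0 * comfortableHours / occupiedHours
    End Function

    Sub Main()
        ' Toy example: comfortable in 3 of 4 occupied hours -> 75%.
        Dim comf = {True, True, False, True, True}
        Dim occ = {True, True, True, False, True}
        Console.WriteLine(ThermalComfortPercent(comf, occ) & "%")
    End Sub
End Module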
Energy Balance Visualizations - In order to help understand the flow of energy through Honeybee energy models, it is now possible to completely reconstruct the energy balance calculation of EnergyPlus from the energy simulation results. This is facilitated by the new EnergyPlus "Construct Energy Balance" component and some new features added to the monthly bar chart. See here for an example: http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Energy_Balance
More Geometry Control for Glazing - In order to make it faster to assign several different types of glazing geometries to your energy models, the "AddHBGlz" component can now be used to add glazing surfaces to HBzones (not just HBsurfaces). Furthermore, the "Glazing Based on Ratio" component now contains several more inputs that enable you to customize window geometry on orthogonal surfaces, including the ability to set the horizontal distance between windows and the ability to split windows vertically into a lower view window and a higher daylight window.
Earth Tube Capability - Thanks to the efforts of Anton Szilasi, it is now possible to assign earth tubes to your energy models in order to test the potential of this powerful passive strategy. See here for an example file: http://hydrashare.github.io/hydra/viewer?owner=antonszilasi&fork=hydra&id=HB_EarthTube
North Input For Annual Daylight - After the toil of having to rotate your model any time you wanted to run an annual daylight analysis, we are happy to announce that the annual daylight recipe now contains a working “North” input.
Honeybee Object Transforms - After realizing that many of us wanted to construct energy models of multi-story buildings by duplicating and moving zones, this capability is now easily facilitated with a set of three components to duplicate and transform your HBObjects. Specifically, this includes a component to move (translate) your HBObject, one to mirror (reflect) it, and one to rotate it. Using these components ensures that any properties assigned to the original HBObject will be present in the transformed HBObject, allowing you to build large energy models very quickly. The three components can currently be found under the WIP tab.
And finally, it is with great pleasure that we welcome Boris Plotnikov to the team. As mentioned in the above release notes, Boris has added a highly advanced solar envelope component to the project.
As always let us know your comments and suggestions.
Enjoy!
Ladybug+Honeybee development team
…
r." I'm sorry to hear that, I take the interface and ease-of-use rather seriously so this sounds like a fundamental failure on my part. On the other hand, Grasshopper isn't supposed to be on a par with most other 3D programs. It is emphatically not meant for manual/direct modelling. If you would normally tackle a problem by drawing geometry by hand, Grasshopper is not (and should never be advertised as) a good alternative."What in other programs is a dialog box, is 8 or 10 components strung together in grasshopper. The wisdom for this I often hear among the grasshopper community is that this allows for parametric design."Grasshopper ships with about 1000 components (rounded to the nearest power of ten). I'm adding more all the time, either because new functionality has been exposed in the Rhino SDK or because a certain component makes a lot of sense to a lot of people. Adding pre-canned components that do the same as '8 or 10 components strung together' for the heck of it will balloon the total number of components everyone has to deal with. If you find yourself using the same 8 to 10 components together all the time, then please mention it on this forum. A lot of the currently existing components have been added because someone asked for it."[...] has a far cleaner and more intuitive interface. So does SolidWorks, Inventor, CATIA, NX, and a bunch of others."Again, GH was not designed to be an alternative to these sort of modellers. I don't like referring to GH as 'parameteric' as that term has been co-opted by relational modellers. I prefer to use 'algorithmic' instead. The idea behind parameteric seems to be that one models by hand, but every click exists within a context, and when the context changes the software figures out where to move the click to. The idea behind algorithmic is that you don't model by hand.This is not to say there is no value in the parametric approach. Obviously it is a winning strategy and many people love to use it. We have considered adding some features to GH that would make manual modelling less of a chore and we would still very much like to do so. However this is such a large chunk of work that we have to be very careful about investing the time. Before I start down this road I want to make sure that the choice I'm making is not 'lame-ass algorithmic modeller with some lame-ass parametrics tacked on' vs. 'kick-ass algorithmic modeller with no parametrics tacked on'.
Visual Programming.
I'm not exactly sure I understand your grievance here, but I suspect I agree. The visual part is front and centre at the moment and it should remain there. However, we need to improve upon it and at the same time give programmers more tools to achieve what they want.
Context sensitivity.
"There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them."

Unfortunately it's not as simple as that. Whether or not a conversion between two data types makes sense is often dependent on the actual values. If you plug a list of curves into a Line component, none of them may be convertible. Should I therefore not allow this connection to be made? What if there is a single curve that could be converted to a line? What if you want to make the connection now, but only later plan to add some convertible curves to the data? What if you made the connection back when it was valid, but it is no longer valid now; wouldn't it be weird if there was a connection you couldn't make again?

I've started work on GH2, and one of the first things I'm writing is the new data-conversion logic. The goal this time around is to not just try to convert type A into type B, but to include information about what sort of conversion was needed (straightforward, exotic, far-fetched, etc.) and why that type was assigned.

You are right that under some conditions we can be sure that a conversion will always fail, for example connecting a Boolean output to a Curve input. But even there, my preferred solution is to tell people why that doesn't make sense rather than to disallow the connection in the first place.
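Purely as an illustration of that graded-conversion idea (these names are invented for this sketch and are not GH2 API), a conversion result that carries both the grade and the reasoning might look something like this:

' Illustrative sketch only; names are hypothetical.
Public Enum ConversionGrade
    Straightforward ' e.g. Integer -> Number
    Exotic          ' e.g. Curve -> Line, when the curve happens to be linear
    FarFetched      ' technically possible but rarely what was meant
    Impossible      ' e.g. Boolean -> Curve
End Enum

Public Class ConversionResult(Of T)
    Public ReadOnly Value As T
    Public ReadOnly Grade As ConversionGrade
    Public ReadOnly Reason As String ' why this grade/type was assigned

    Public Sub New(value As T, grade As ConversionGrade, reason As String)
        Me.Value = value
        Me.Grade = grade
        Me.Reason = reason
    End Sub
End Class

Module Demo
    Sub Main()
        Dim r As New ConversionResult(Of Double)(
            1.0, ConversionGrade.Straightforward, "Integer 1 widened to Number 1.0")
        Console.WriteLine($"{r.Value} ({r.Grade}): {r.Reason}")
    End Sub
End Module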
Sliders.
"I think they should be optional."

They are optional.

"The 'N' should turn into the number if set."

What if you assign more than one integer? I think I'd rather see a component with inputs 'N', 'P' and 'X' than '5', '8' and '35.7', but I concede that is a personal preference.

"But if I plug it into something that'll only accept a 1, a 2, or a 3, that slider should self-set accordingly."

Agreed.
Components.
"Give components a little '+' or a drawer on the bottom or something that, by clicking, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to right-click on each thing on a component to do all of the settings."

I was thinking that just zooming in on a component would eventually provide easier ways to access settings and data.

"Could some of these items disappear if they are contextually inappropriate, or gray out if they're unlikely?"

It's almost impossible for me to know whether these things are 'unlikely' in any given situation. There are probably some cases where a suggestion along the lines of "Hey, this component is about to run 40,524 times. It seems like it would make sense to Graft the 'P' input." would be useful.
Integration.
"Why isn't it just live geometry?"

This is an unfortunate side-effect of the way the Rhino SDK was designed. Pumping all my geometry through the Rhino document would severely impact performance and memory usage. It also complicates the matter to an almost impossible degree, as any command and plugin running in Rhino would then have access to 'my' geometry.

"Maybe add more Rhino functionality to GH. GH has no 3D offset."

That's the plan moving forward. A lot of algorithms in Rhino (Make2D, FilletEdge, Shelling, BlendSrf; the list goes on) are not available as part of the public SDK. The Rhino development team is going to try to rectify this for Rhino6 and beyond. As soon as these functions become available, I'll start adding them to GH (provided they make sense, of course).

On the whole I agree that integration needs a lot of work, and it's work that has to happen on both sides of the aisle.
Documentation.
Absolutely. Development for GH1 has slowed because I'm now working on GH2. We decided that GH1 is 'feature complete', basically to avoid feature creep. GH2 is a ground-up rewrite, so it will take a long time until something is ready for testing. During this time, minor additions and of course bug fixes will be available for GH1, but on a much lower frequency.

Documentation is woefully inadequate at present. The primer is being updated (and the new version looks great), but for GH2 we're planning a completely new help system. People have been hired to provide the content. With a bit of luck and a lot of work, this will be one of the main selling points of GH2.
2D-ness.
"I know you'll disagree completely, but I'm sticking to this. How else could an omission like offsetsurf happen?"

I don't fully disagree. A lot of geometry is either flat or happens inside surfaces. The reason there's no shelling (I'm assuming that's what you meant; there are two Offset Surface components in GH) is that (a) it's a very new feature in Rhino and doesn't work too well yet, and (b) as a result of that, it isn't available to plugins.
Organisation.
Agreed. We need to come up with better ways to organise, document, version, share and simplify GH files. The GH1 UI is OK for small projects (fewer than 100 components) but can't handle more complexity.
Don't get me wrong, I appreciate the feedback, I really do, but I want to be honest and open about my own plans and where they might conflict with your wishes. Grasshopper is being used far beyond the boundaries of what we expected and it's clear that there are major shortcomings that must be addressed before too long. We didn't get it right with the first version, I don't expect we'll get it completely right with the second version but if we can improve upon the -say- five biggest drawbacks (performance, documentation, organisation, plugin management and no mac version) I'll be a happy puppy.
--
David Rutten
david@mcneel.com…
Protected Overrides Sub RegisterInputParams(ByVal pManager As Grasshopper.Kernel.GH_Component.GH_InputParamManager)
pManager.AddTextParameter(
"Name", "N", "String", GH_ParamAccess.item)
pManager.AddPointParameter(
"Point", "P", "Point3d", GH_ParamAccess.item)
pManager.AddGenericParameter(
"Local Axis", "LA", "Null/Surface/Plane", GH_ParamAccess.item)
pManager.AddGenericParameter(
"Support", "S", "I_Model_Support", GH_ParamAccess.item)
pManager.AddGenericParameter(
"PointLoad", "PL", "List (of I_Model_PointLoad)", GH_ParamAccess.list)
pManager.AddGenericParameter(
"Group", "G", "List (of I_Model_Group)", GH_ParamAccess.list)
' The local axis, support, point load and group inputs may be left
' unconnected, so mark them as optional.
pManager(2).Optional = True
pManager(3).Optional = True
pManager(4).Optional = True
pManager(5).Optional = True
End Sub
Protected Overrides Sub RegisterOutputParams(ByVal pManager As Grasshopper.Kernel.GH_Component.GH_OutputParamManager)
pManager.AddGenericParameter(
"Node", "N", "I_Model_Node",GH_ParamAccess.item)
End Sub
Protected Overrides Sub SolveInstance(ByVal DA As Grasshopper.Kernel.IGH_DataAccess)
Dim inName As String = Nothing
Dim inP As Point3d = Nothing
Dim inLA As Plane = Nothing
Dim inS As I_Model.I_Model_NodeSupport = Nothing
Dim inPL As New List(Of I_Model.I_Model_PointLoad)
Dim inG As New List(Of I_Model.I_Model_Group)
Dim outNode As I_Model.I_Model_Node
' Name and Point are required inputs; abort quietly if either is missing.
If Not DA.GetData(0, inName) Then Return
If Not DA.GetData(1, inP) Then Return
' Build the node from the required inputs.
Dim IM_P As New I_Model_Node(inP, Nothing, inName)
' Apply the optional inputs only when data was actually supplied.
If DA.GetData(2, inLA) Then IM_P.SetLocalAxis(inLA)
If DA.GetData(3, inS) Then IM_P.SetSupport(inS)
' List-access parameters are read with GetDataList rather than GetData.
If DA.GetDataList(4, inPL) Then
For Each PL As I_Model_PointLoad In inPL
IM_P.AddPointLoad(PL)
Next
End If
If DA.GetDataList(5, inG) Then
For Each G As I_Model_Group In inG
IM_P.AddToGroup(G)
Next
End If
outNode = IM_P
DA.SetData(0, outNode)
End Sub
…
Added by Daniel Bosia at 9:22am on January 11, 2013