…and using the BScale and BDistance is creating havoc somehow too. I've simplified first, using the Kangaroo Frames component along with setting internal iterations to make MeshMachine act like a normal component, and releasing the FixC and FixV inputs. The FixV didn't make any sense anyway. I've also set Pull to 0 to speed things up during testing, since much less calculation is involved in just letting the meshes collapse, prevented from disappearing altogether by using a mere 15 iterations.
Also, your breps are open, which allows much more chaos and then collapse, though they did manage to close themselves at times. Here are closed breps with a full 45 iterations:
So now that it's working, let's re-Fix the curves. Now the problem arises that an extra seam line running along the cylinder is getting fixed too, stopping the mesh from pulling tight under tension wherever a vertex happens to land near that line:
So let's grab only the naked edge curves instead:
And what happens if we lose the end caps, now that we don't have an extra line skewing the result?:
There are no real curvature differences, since it's not a curvy brep, so Adapt at its full setting of 1 has little to do. Now, what do the BScale and BDist do? Nothing! Why? Your scale is out of whack: the cylinders are 99 mm high but the falloff maximum is only about 5, so let's make the falloff 25 instead. I must restore the end caps, though, or the meshes collapse away for some reason, and Rhino freezes for a minute or so the first time I try it:
It's a start.
If I intersect the cylinders, nothing changes, since they are being treated as separate runs. MeshMachine outputs a sequence of two meshes, though, because Frames is set to the bare minimum of 2 needed to get it to work, so I filter out the original run, which is just the unmodified initial mesh it creates.
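(If you'd rather do that filtering in code than with a List Item component, here is a minimal C# script-component sketch; the input and output names are my own, not anything MeshMachine prescribes:)
// GH C# script component. Input "frames": List Access, type hint Mesh.
// Output "finalMesh". The first frame is the unmodified initial mesh,
// so we keep only the last one, the relaxed result.
private void RunScript(List<Mesh> frames, ref object finalMesh)
{
    if (frames != null && frames.Count > 0)
        finalMesh = frames[frames.Count - 1];
}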
The lesson so far is that closed meshes are much less prone to the collapses and glitches that lead to screw-ups.
A Boolean union of the cylinders is where it gets more fun, shown here with and without the fixed curves, which seem to define boundaries too where really there are just polysurface edges:
…
…external loads acting on the structure. But if gravity is zero and I change the material or the cross section (while the external loads are unchanged), then the values of the section forces change as well. The difference is not much, but it is there. How is this explained?
I made two files to address the gravity problem.
The first test file is based on a simply supported beam. Here, whether gravity is on or not, you don't see any change in the values of the section forces of the load case with the external load (LC1) when you look at the lists output by the section force component, even if the material or the cross section is changed.
And this is how it should be, because these lists give the values for each load case, and the values of the load case with the external load don't take gravity (and hence the material and cross section) into account.
The second file is a smaller definition of the main file I have been working on.
It uses beams and springs, corresponding to a deployable scissor structure.
To use this file, you have to use the corresponding Rhino file. You have to set the curves (in the Grasshopper file, indicated with the arrows). In the first set of curves (GH) you set the curves of layer 1 (RH), the red ones; please select them clockwise. In the second set of curves (GH) you set the curves of layer 2 (RH), the purple ones, again selecting them clockwise.
Normally the Karamba output should appear now.
In this file, the values of the section forces of the load case with the external loads (LC1) (the output list of the section force component) change when you change the material (steel or aluminium) and the cross section.
And this is not correct, because these values should be independent of gravity, whether gravity is on or not, right?
Is it because I am using springs (and their related cross section and material)?
Thanks for the help!
Best
Lara…
…way to use a sine wavelength along a curve with the Grasshopper script below. [Update: done, thank you TOM].
I am trying to figure out a way to reverse the sine wavelength.
Current problems:
1.) Reversing the sine wavelength along the curve, and providing a graph input to allow for different wave profiles.
I.e., I want to have a sine wavelength at the base and, 3200 mm above, a wavelength of a different sort. Current file (the setup can be applied to any curve):
Brick%20problem.gh
2.) Contouring when lofted to allow for HFrame placement
I need to be able to apply the script to curves, and to adjust a series of points and the multipliers that control the width the curve extrudes in the y axis, away from the curve at a perpendicular angle; see the sketch below.
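(To make the perpendicular-offset idea concrete, here is a minimal RhinoCommon sketch of my own; the names and the linear blend between a base amplitude and a second amplitude at 3200 mm are assumptions, not the current script:)
// Offset points perpendicular to a curve by a sine wave whose amplitude
// blends from ampBase at z = 0 to ampTop at z = zTop (e.g. 3200 mm).
// The same blend could drive the wavelength instead, for problem 1.
using System;
using System.Collections.Generic;
using Rhino.Geometry;

public static class SineOffset
{
    public static List<Point3d> Offsets(Curve curve, int count, double waveLength,
                                        double ampBase, double ampTop, double zTop)
    {
        var points = new List<Point3d>();
        double[] ts = curve.DivideByCount(count, true);
        if (ts == null) return points;
        foreach (double t in ts)
        {
            Plane frame;
            if (!curve.PerpendicularFrameAt(t, out frame)) continue;

            // Blend the amplitude linearly with height.
            double f = Math.Max(0.0, Math.Min(1.0, frame.Origin.Z / zTop));
            double amp = ampBase + f * (ampTop - ampBase);

            // Arc length along the curve drives the sine phase.
            double s = curve.GetLength(new Interval(curve.Domain.Min, t));
            points.Add(frame.Origin + frame.XAxis * (amp * Math.Sin(2.0 * Math.PI * s / waveLength)));
        }
        return points;
    }
}
The resulting points can then be interpolated into the offset curve before lofting.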
Current: (took a long time but this is where I am now)
OLD:
Fixing:
Achieved:
Also, is there a way to do this without Series components, by linking to surfaces already in Rhino3d?
The sine wavelength is shown below, with the intention of using it for the brick wall in the picture after.
I've been following Nick Senske's work on YouTube for those wondering.
I hope to achieve something similar to the pictures attached:
http://www.zja.nl/image/2014/10/15/2112_500_325.jpg(mediaclass-default.f996b08cf5abfa43b3b03133a89ec231272756a9).jpg
http://www.zja.nl/en/page/2311/parametric-design-for-brick-surfaces
Edits: removal of unneeded content and grammar fixes; update of pictures and progress. Thanks to Tom for his assistance. …
Added by Iain McQuin at 9:52pm on October 18, 2016
…longest list, shortest list, cross reference) in cases where the number of iterations has to be kept track of and used in the code.
I am developing a number of components that instantiate objects of a custom class type (which I define), and add the object's parameters to a database for use in other applications. In almost all cases my classes have an ID parameter, which I typically set in the SolveInstance method using an int variable which is incremented after each object's instantiation. I'd like to be able to access or keep track of the number of iterations without having to register my params as GH_ParamAccess.tree, and looping over one of the input trees.
I've included a simple example to illustrate the point. I define a simple class that takes two numbers as inputs, adds them, and includes an ID attribute:
public class AddTwoNumbers
{
    // Basic addition parameters
    public string ID = "not set";
    public double A;
    public double B;
    public double C;

    // Addition method
    public void Add()
    {
        C = A + B;
    }

    // Constructor
    public AddTwoNumbers(double a, double b)
    {
        A = a;
        B = b;
        this.Add();
    }
}
If I do not register my param access as .tree, I am not able to increment the ID each time through the SolveInstance. Here are my SolveInstance and RegisterInputParams methods using GH's iteration:
protected override void RegisterInputParams(GH_Component.GH_InputParamManager pManager)
{
    pManager.Register_DoubleParam("Double A", "A", "First number to add.");
    pManager.Register_DoubleParam("Double B", "B", "Second number to add.");
}

protected override void SolveInstance(IGH_DataAccess DA)
{
    // Local variables to catch incoming data
    double myA = 0.0;
    double myB = 0.0;

    // Assign incoming data to local variables
    if (!DA.GetData(0, ref myA)) { return; }
    if (!DA.GetData(1, ref myB)) { return; }

    // Instantiate a new AddTwoNumbers object
    AddTwoNumbers myAdd = new AddTwoNumbers(myA, myB);

    // Give this object an ID
    // ??? how do we assign a numerical ID if we don't have access to an iterator?

    // Add the object to a static list in this namespace - not implemented here for simplicity's sake

    // A string to report all of myAdd's parameters
    string myParams = myAdd.ID + ", " + myAdd.A.ToString() + ", " + myAdd.B.ToString() + ", " + myAdd.C.ToString();

    // Set output data
    DA.SetData(0, myParams);
}
and a screenshot of the component in action:
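(One hedged note on the ??? above, since I haven't verified it in this project: if I read the SDK right, IGH_DataAccess exposes an Iteration property holding the zero-based index of the current SolveInstance call, which would give exactly the counter I'm after without tree access:)
// Possible answer to the ??? above, assuming IGH_DataAccess.Iteration
// really does report the index of the current SolveInstance pass:
myAdd.ID = DA.Iteration.ToString();
Since that property would reset with every new solution, the IDs would only be unique within a single solve; uniqueness across solutions would still need a separate scheme.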
If I do register my params as .tree, I am able to increment an ID variable each time through my nested for loops, but I'd have to do a lot of work to account for all of the possible input scenarios (an item and a list, two lists of different lengths, a list and a tree, etc.). Here are my SolveInstance and RegisterInputParams methods, minus all of the defense, and a screenshot of this component:
protected override void RegisterInputParams(GH_Component.GH_InputParamManager pManager)
{
    pManager.Register_DoubleParam("Double A", "A", "First number to add.", GH_ParamAccess.tree);
    pManager.Register_DoubleParam("Double B", "B", "Second number to add.", GH_ParamAccess.tree);
}

protected override void SolveInstance(IGH_DataAccess DA)
{
    // Local variables to catch incoming data
    GH_Structure<GH_Number> myATree = new GH_Structure<GH_Number>();
    GH_Structure<GH_Number> myBTree = new GH_Structure<GH_Number>();

    // An output data tree
    DataTree<string> myOutTree = new DataTree<string>();

    // An iteration counter
    int IteratorCounter = 0;

    // Pass incoming data into local variables
    if (!DA.GetDataTree(0, out myATree)) { return; }
    if (!DA.GetDataTree(1, out myBTree)) { return; }

    // Pick a data tree to loop over - since we don't know which tree is larger, we'd have to
    // do a bunch of defensive programming here. In this case we'll just loop over the A tree.
    for (int i = 0; i < myATree.Branches.Count; i++)
    {
        // Set the output path equal to the incoming path
        GH_Path myOutPath = new GH_Path(myATree.Paths[i]);

        // Loop over each item in the branch
        for (int j = 0; j < myATree.Branches[i].Count; j++)
        {
            // Instantiate a new AddTwoNumbers object. Here we assume the B tree has an
            // identical structure... of course this might not be the case. Again, we could
            // do a bunch of defense here, but we'd like to be able to use GH's iteration.
            AddTwoNumbers myAdd = new AddTwoNumbers(myATree.Branches[i][j].Value, myBTree.Branches[i][j].Value);

            // Give this object an ID. Since we have access to a counter that is incremented
            // each time through the loop, we know we'll never get a duplicate ID.
            myAdd.ID = IteratorCounter.ToString();

            // Add the object to a static list in this namespace - not implemented here for simplicity's sake

            // A string to report all of myAdd's parameters
            string myParams = myAdd.ID + ", " + myAdd.A.ToString() + ", " + myAdd.B.ToString() + ", " + myAdd.C.ToString();

            // Add the string to the out tree
            myOutTree.Add(myParams, myOutPath);

            // Increment the counter
            IteratorCounter++;
        }
    }

    // Set output data (this line and the closing brace were omitted from the original snippet)
    DA.SetDataTree(0, myOutTree);
}
I think this is a specific breed of a more general question: what are the scenarios under which GH's built-in iteration will not suffice? When do you need to specify the GH_ParamAccess and take control of the iteration yourself? Another example: sometimes you need to access and use branch paths in a component's code ... can you do this without setting the access to .tree?
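(On the branch-path question, another hedged note: I believe IGH_DataAccess also offers ParameterTargetPath and ParameterTargetIndex methods for exactly this, though I haven't tested them here:)
// Unverified sketch: ask which branch and item index of input 0 the
// current iteration is drawing its data from, without tree access.
GH_Path currentPath = DA.ParameterTargetPath(0);
int currentIndex = DA.ParameterTargetIndex(0);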
Any help would be greatly appreciated. Thanks!
PS - I included a .zip of my visual studio 2010 project, in case anyone would like to take a closer look.
…
…of over 150 annotated Grasshopper definitions and representations of their potential, for beginning to advanced Grasshopper users. The site brings together ongoing parametric research across the fields of Architecture, Landscape Architecture, Fabrication and Structure.
Summer 2014 Additions Include:
…
…complex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the radiance and E+ batch files to multiple computers. Perhaps a “round-robin” approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that but hope that it is something that can be executed within grasshopper, perhaps a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same location. It will likely run slower than it would on the C:drive, but those losses are acceptable if we can get parallelization to work.
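(To make the round-robin idea concrete, a minimal C# sketch of my own; the node names and the idea of handing each node a list of .bat paths are assumptions, not a worked-out implementation:)
// Round-robin sketch: iteration i goes to node i mod N, so the batch
// files spread evenly across whatever machines are in the cluster list.
using System.Collections.Generic;

public static class RoundRobin
{
    public static Dictionary<string, List<string>> Assign(IList<string> batchFiles, IList<string> nodes)
    {
        var plan = new Dictionary<string, List<string>>();
        foreach (string node in nodes)
            plan[node] = new List<string>();

        for (int i = 0; i < batchFiles.Count; i++)
            plan[nodes[i % nodes.Count]].Add(batchFiles[i]);

        return plan;
    }
}
Each node would then simply execute its assigned .bat files, with all of them writing results to the shared server folder described above.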
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, material, etc.
Generate radiance/ E+ batch files for all iterations
Calculate
Calc separate runs of Radiance/E+ in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to a single location on the server
Post Processing
Run a GH script from a single computer. Translate .ill files or .idf files into custom metrics or graphics (DA, ASE, %shade down, net solar gain, etc.)
Collect final data in a single location (an Excel document) to be read by Design Explorer or Pollination.
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only, Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would enable luminance-based glare metrics, especially annual glare metrics, to be more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
…this before here: http://spacesymmetrystructure.wordpress.com/2011/05/18/pseudo-physical-materials/)
If we want to design repetitive structures, we might want to be able to assign periodic boundary conditions to some structure to enforce translation symmetry. For instance, a long row of connected arches or vaults, which we want to be identical for ease of fabrication.
We could simulate this by adding many identical vaults in a row, and as we added more, the ones near the middle would get closer and closer to being identical. But they would never quite reach the point of being truly identical, even as we added hundreds of copies, and this would be very inefficient for large simulations:
One way around this is to take some points on one side of our structure and lock them to some points on the other side using the new TranslationLock component. As far as the physics engine is concerned, each pair of points linked in this way is then actually just one point. It is as though the space itself has been wrapped around to join one side with the other.
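To put the same idea in symbols (my summary of the above, not new physics): if $t$ is the fixed translation vector, each locked pair $(A, B)$ behaves as a single particle whose two images satisfy

\[ x_B = x_A + t, \qquad F_{merged} = F_A + F_B \]

so any force applied to both images accumulates on the one merged particle, which is exactly why some loads need halving, as noted below.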
Anyone who has played the game Portal will be familiar with a version of this concept (or, for the older ones among you, Asteroids).
Translation locks can be applied in any direction, and combined with any of the other forces. However, a few things to bear in mind:
-Be careful not to double up forces unintentionally. For instance, if you are adding a gravity load to the nodes of a catenary arch and you want an equal load on every point, add only half the load to the locked particles, because when joined together these loads get combined (or, equivalently, you could add the full load to just one particle of the pair).
Similarly for springs - if you are smoothing a periodic tensile mesh using springs, be careful not to add the forces of the boundary springs twice.
-If you are using this for structural form-finding, remember that the space we inhabit in the real world doesn't have these periodic boundary conditions (at least not at everyday scales!), so when you build it you will need to provide appropriate balancing forces at the ends.
-For forces which act on more than 2 particles, such as bending or Laplacian smoothing, you need to lock an appropriate number of particles on one side to those on the other side. Sometimes this may require adding 'ghost vertices'.
For example, here we model a periodic elastica curve:
This is achieved by applying a translation lock to the pairs shown by the red and blue arrows.
(note that the particle at the end of the blue arrow is 1 segment beyond the end of the curve)
One possible use of this tool would be the form-finding of periodic minimal surfaces (following the example of the great Surface Evolver by Ken Brakke). His site has many more great examples of these:
Generating these surfaces in a way that they remain minimal across the boundary would be very difficult without this periodic constraint.
Perhaps more interesting from a design perspective is the possibility to move beyond pure mathematical surfaces, and generate more free-form repeating units, but still preserving continuity across the boundaries, something like the work of Erwin Hauer:
…
…me of course! So I'll try to be as clear as possible. I have two problems.
The form we have is a lampshade. I would like to close the shape at the top so it can cling to a bulb socket. For this, I want to keep a planar surface (the same surface as the top of the basic shape, without distortion but reduced in scale), then connect the modified form (with the attractor points) to this surface. However, this surface must be divided into triangles so the faces stay flat (because the shade is made of paper and cut out of planar sheets). I managed to do it, but in a very complicated, non-automatic way, by taking each point one by one through List Item components. Do you know a way to do it automatically, so that it still works even if we increase the number of facets of the form?
I also have a problem with attractor points in 2 different places, used to distort the basic shape and create the holes.
I wish I could create as many attractor points as I want in my program, but it is limited. Do you think it is possible to group all the attractor points in a single component (point) to make this automatic? In my program I have managed to use several (3) attractor points to distort the basic shape using Dispatch, but if I wanted, for example, 24 attractor points, I would have to create 24 copies of that part of the program, which is quite disturbing! For the holes, the problem is exactly the same. Do you have any ideas? (If you have time.) A sketch of the closest-attractor idea is below.
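(A minimal C# sketch with my own naming, just to illustrate the principle: instead of one Dispatch branch per attractor, feed a single list of attractor points and, for each vertex, let the distance to the closest attractor drive the deformation. It works for 3 or 24 attractors alike:)
// Weight = 1 at an attractor, fading to 0 at the falloff radius.
using System;
using System.Collections.Generic;
using Rhino.Geometry;

public static class Attractors
{
    public static double Weight(Point3d vertex, IList<Point3d> attractors, double falloff)
    {
        // Find the distance to the nearest attractor point.
        double min = double.MaxValue;
        foreach (Point3d a in attractors)
            min = Math.Min(min, vertex.DistanceTo(a));

        // Map distance to a 0..1 weight.
        return Math.Max(0.0, 1.0 - min / falloff);
    }
}
The same weight list could drive both the distortion and the hole sizes, so one point parameter holding all the attractors replaces the per-attractor branches.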
Thanks a lot. Ines
…
…on common tasks like updating GH definitions, viewing images on the GH canvas, and augmenting existing study types. Most of the improvements to Honeybee have been in the making for a while and are just getting into the spotlight with this release. Notably, a number of improvements have been made to support large-scale full-building energy models, including fixes to memory issues with large models, better components for splitting building masses into zones, and the ability to store HBZones in external files. Additionally, the THERM workflows have gotten a boost and these simulations can now be run directly from the Grasshopper canvas.
As always, you can download the new release from Food4Rhino. Make sure to remove the older version of Ladybug and Honeybee before you do so, and update your scripts. So, without further ado, here is the list of the new capabilities added with this release:
LADYBUG
Better Method for Updating Old Grasshopper Files - As many of you have come to realize, Ladybug + Honeybee is updated on a fairly regular basis, with a stable release roughly every 6 months and a github version that never ceases to improve itself on a weekly basis. For this reason, we realize that updating old Grasshopper definitions to use recent components is a challenge for many of us. While we’ve had some methods for this in the past, there were always hiccups, particularly when it came to components that had new inputs/outputs since the previous version. Accordingly, Mostapha has added a new “Ladybug_Update File” component that will automatically update any Grasshopper definition to be synchronized with the version of Ladybug + Honeybee that is currently in your toolbar (that is, the components in your userobjects folder). If a component has new inputs/outputs since the time you built the definition, it will be automatically circled in red in your GH definition and a newer version of the component will be automatically added right next to it:
While you still have to do some manual connecting of inputs to the newer component in this case, it should be much faster than our older methods and will hopefully help your old definitions survive long into the future!
EPWmap Now includes OneBuilding Files - Mostapha has added a number of new features to the EPWmap web interface that the “Download Ladybug” component connects to. Among the improvements are a color wheel that quickly shows you how hot, cold, and comfortable a given climate is and, perhaps more importantly, there is now support for EPW files sourced from OneBuilding. With the addition of many more weather files, you should now be able to use Ladybug with ease for more locations across the planet. We should also note that the “Open EPW and STAT” component that downloads/unzips files from a URL now supports OneBuilding URLs.
New Image Viewer Component - Mingbo Peng has graced Ladybug with a fantastic new “Image Viewer” component that takes a given image file on one’s machine and displays it on the Grasshopper canvas. It also enables one to pull color data off of the image with ease by simply clicking on the pixel of the image one is interested in. This new component is useful for a wide variety of cases, including the viewing of screenshots after they have been taken with the “Ladybug_Capture View” or “Ladybug_Render View” components. However, many of you will likely recognize it as most immediately useful in workflows involving image-based Honeybee Daylight (Radiance) simulations. This is particularly true as Mingbo has built in the capability to read many image file types, including PNG, JPEG, GIF, TIFF and the High Dynamic Range (.HDR) image files that Radiance outputs:
The following video gives a quick overview of the Image Viewer’s capabilities:
The new component can be found under the Ladybug_Extra tab and I think I speak for us all in saying thank you Mingbo for this great component!
New Sun Shades Calculator Released Under WIP - After over a year of software development and nearly a career's worth of geometric math development, a joint effort between Abraham Yezioro and Antonello Di Nunzio has produced a new sun shade design component that can be described as nothing short of “magical.” Based on a similar principle to the current “Ladybug_Shading Designer,” the new component takes an input of sun vectors and produces shade geometries that can block the vectors. However, in comparison to the shading designer, the range of shade options available in this new component is truly staggering, ranging from classic overhangs, louvers and fins to pergolas and custom shade surfaces. Perhaps more importantly, the calculation methods used by this new component are faster and more reliable. It can currently be found under the WIP section of Ladybug and it will continue to evolve in new versions of Ladybug.
Renewables Components Now Support Sandia and CEC Photovoltaic Modules - Polishing off his many contributions to the “Renewables” section of Ladybug, Djordje Spasic has added support for a couple more ways of defining photovoltaic modules for renewables estimation. Specifically, the Ladybug WIP section now includes components to import modules defined to the California Energy Commission (CEC) and Sandia Labs specifications.
HONEYBEE
Support for OpenStudio 2.x - A few months ago, the National Renewable Energy Lab (NREL) released a stable version of OpenStudio 2, which included a number of improvements in stability and available features. This stable release of Honeybee is built to work with the new version of OpenStudio and, in the coming months, Honeybee will be adding a few more capabilities to its OpenStudio workflows to support v2.x’s new features. Most notable among these will be support for OpenStudio measures. Measures are short scripts, written in Ruby using OpenStudio’s SDK, that quickly edit and change OpenStudio models. They are fundamental to the vision of OpenStudio as a flexible energy modeling interface and to Honeybee’s goals of being a collaborative interface between the architectural and engineering industries. Stay tuned for the next release for many of these new capabilities!
Critical Memory Issue Fixed for Large Energy Models - A number of you wonderful members of our community have been aware of computer memory issues with large Honeybee models for some time (examples: 1, 2, 3, 4). Namely, a model larger than 50 zones could quickly eat up 16 GB of memory and change Honeybee from a fast-flying insect to something more reminiscent of a snail. We are happy to say that, after a much longer time than it should have taken us, we finally identified and fixed the issue. In this version of Honeybee, such large models can now be created using less than 2% of the memory and time previously required. Thanks to all of you who made us aware of this, and hopefully you will now reap the rewards of your struggle.
Split Building Mass Component Getting a Makeover - Many of you veteran Ladybug users will recognize Saeran Vasanthakumar as one of the original contributors to Ladybug, who added components for solar fans and envelopes years ago. Now he’s back with new components to split a building mass into zones that are truly revolutionary in their speed and methodology. Saeran has divided the new capabilities into two components (one for floor-by-floor subdivision and another for core-perimeter subdivision), and they both can be found under the WIP section of this release. In this WIP version, core-perimeter thermal zones can only be generated for convex and very simple concave geometries. Most concave geometries and geometries with holes (or courtyards) in them will fail. However, it can handle even very complex convex geometries with speed and ease. You can expect the component to start accommodating concave/courtyard geometries very soon.
Load / Dump HB Objects to File - Keeping in line with the support of large, full building energy models, this release includes full support for two components that can dump and load any HBObjects to a standalone file. All information about HBzones can go into this file including custom constructions, schedules, loads, natural ventilation, shading devices, etc. You can then send the resulting .HB file to someone else and they can load up the same exact zones in another definition. This also makes it possible to have one Grasshopper file for generating the zones and running the simulation and another GH definition to import results and color zones/surfaces with those results, make energy balance graphics, etc.
Write ViewFactorInfo to File - After many of you asked for it, the _viewFactorInfo that is output from the “Honeybee_View Factor” component can now be written out to an external file using the same Load / Dump HB Objects components cited above. For those of you who have worked with the comfort map workflows, you probably already know that calculating these view factors is one of the most time consuming portions of building a microclimate map. Having to re-run this calculation each time you want to open up the Grasshopper script is a nuisance and, thanks to this new capability, you should only have to run it once and then store your results in an external .HB file.
Transform Honeybee Components Modified for Large Model Creation - Many large buildings today are made up of copies of the same rooms repeated over and over again across multiple floors, or along a street, etc. Accordingly, one can imagine that the fastest way to create a full building energy model of such buildings is to simply move and copy the same zones several times. This is what a new set of edits to the Honeybee Transform components is aimed at supporting by allowing one to build a custom set of zones, translate them several times with a Honeybee_Transform component, then solve adjacencies on all zones to make a complete energy model.
Central Plants Available on HVAC Systems - While Honeybee has historically supported the assigning of separate HVAC systems to different groups of zones, each HVAC was always an entirely new system from the ground up. So a building with separate VAV systems for each floor would be modeled with a different chiller and boiler for each floor. While this can be the case sometimes, it is more common to have only one chiller and boiler per building but separate air systems for each floor. The new ‘centralPlant_’ options on the Honeybee coolingDetails and heatingDetails enable you to create this HVAC structure by making a single boiler and chiller for any HVAC systems that have this option toggled on. Furthermore, in the case of VRF systems, you can also centralize the ventilation system, using the grouping of zones around a given HVAC to assign which zone terminals are connected to a given heat pump.
More HVAC Templates Added - As the profession continues to push the industry standard towards lower-energy HVAC systems, Honeybee intends to keep up. In this release, we have included a few more templates for modeling advanced HVAC systems, including Radiant Ceilings, Radiant Heated Floors + VAV Cooling, and two Ground Source Heat Pump (GSHP) systems. Variable Refrigerant Flow (VRF) systems have also gotten a large boost, as it is now possible to model these systems with more efficient water-source loops. The next release will include the ability to model central ground source systems that use hydronics for heating and cooling delivery.
Run THERM Simulations Directly from Grasshopper - Anyone who has used the THERM workflow in the past likely realized that, while Honeybee can write the THERM file, you would still have to open the model in THERM yourself and hit “Simulate” to get results. Now that LBNL has started a transition to becoming more open, they have graciously allowed free access for everyone to run THERM from a command line. What this means for Honeybee is that you no longer need to open THERM at all in order to get results, and you can now work entirely in Rhino/Grasshopper. This also opens up the possibility of long parametric runs with THERM models, since you can now automatically run simulations and collect results as you animate sliders, use Galapagos, etc. A special thanks is due to the LBNL team for exposing this feature, including Stephen Selkowitz, Christian Kohler, Charlie Curcija, Eleanor Lee, and Robin Mitchell.
All Options Exposed for THERM Boundary Conditions - To finish off the full implementation of THERM in Honeybee, a final component has been added called “Honeybee_Custom Radiant Environment.” This component completes the access to all boundary condition options that THERM offers, including separate radiant and air temperatures, different view factor models, and the specification of additional heat flux (which is typically used to account for solar radiation).
Improvements to Schedule-Generating Components - Many of you who have watched the Honeybee energy modeling video tutorials have likely gotten in the habit of using CSV schedules for everything. While this is definitely one valid way to work, it is not always the most efficient, since simple schedules can be specified much more cleanly to EnergyPlus/OpenStudio, and the use of CSVs can also make it difficult to share your energy models (since you have to send the CSV files along with the models themselves). This release adds two new schedule components that should take care of a lot of cases where CSV schedules were unnecessary. The new “Constant Schedule” component allows you to quickly make a schedule that is set at a single value or a set of constantly repeating 24-hour values. The second component allows you to create “Seasonal Schedules” by connecting “week schedules” from the other schedule components along with the analysis periods in which these week schedules operate. Together, these will hopefully make our schedule-generating habits a bit better as a community.
Lastly, many of you may know Mingbo Peng as the current maintainer of the Design Explorer web interface and the Colibri components under TTToolbox. Both of these tools have been revolutionary in enabling “brute force” studies of design spaces (aka. Grasshopper scripts where one runs all combinations of a set of sliders). Now, Mingbo has graced Ladybug with the aforementioned image viewer component and it is with pride that we welcome Mingbo Peng to the development team!
As always, let us know your comments and suggestions. Cheers!
The Ladybug Tools Development Team
…
…GH, same as using the Sweep2 command in Rhino.
The one on the right is what I got so far (the output smooths out the kink of the original rails). Basically I am just following the method provided by the SDK sample: http://wiki.mcneel.com/developer/sdksamples/sweep2 .
The following is the function I copied and use directly from the SDK sample. Using this function, I can generate the sweep surface on the right, but what I want is the one in the middle, with the kinked edges. Can anyone show me how and where to modify the settings? I guess some sweep arguments need to be changed? I have tried a couple, such as m_simplify, m_bSimpleSweep, m_bSameHeight, m_rebuild_count... but still cannot find the right combination for this function to output the sweep surface I want. Any suggestions or help would be very appreciated. Thanks for your help and time on this.
'Sweep2 function'----------------
Sub Sweep2(ByVal Rail1 As IOnCurve, _
           ByVal Rail2 As IOnCurve, _
           ByVal sCurves As List(Of IOnCurve), _
           ByRef Sweep2_Breps As List(Of OnBrep))

    'Define a new class that contains sweep2 arguments
    Dim args As New MArgsRhinoSweep2

    'Set the 2 rails
    Dim Edge1 As New MRhinoPolyEdge
    Dim Edge2 As New MRhinoPolyEdge
    Edge1.Append(Rail1.DuplicateCurve())
    Edge2.Append(Rail2.DuplicateCurve())

    'Add rails to sweep arguments
    args.m_rail_curves(0) = Edge1
    args.m_rail_curves(1) = Edge2
    args.m_bClosed = False

    Dim section_curves As New List(Of OnCurve)

    'Loop through sections to set the rail parameter at each shape curve
    For Each Section As IOnCurve In sCurves
        Dim sCurve As OnCurve = Section.DuplicateCurve()
        section_curves.Add(sCurve)

        'Find where this shape meets rail 1 (try its ends, then its midpoint)
        Dim t0 As Double = 0
        If Not Edge1.GetClosestPoint(sCurve.PointAtStart(), t0) Then
            If Not Edge1.GetClosestPoint(sCurve.PointAtEnd(), t0) Then
                Dim s As Double = 0
                sCurve.GetNormalizedArcLengthPoint(0.5, s)
                Edge1.GetClosestPoint(sCurve.PointAt(s), t0)
            End If
        End If
        args.m_rail_params(0).Append(t0)

        'Find where this shape meets rail 2
        Dim t1 As Double = 0
        If Not Edge2.GetClosestPoint(sCurve.PointAtStart(), t1) Then
            If Not Edge2.GetClosestPoint(sCurve.PointAtEnd(), t1) Then
                Dim s As Double = 0
                sCurve.GetNormalizedArcLengthPoint(0.5, s)
                Edge2.GetClosestPoint(sCurve.PointAt(s), t1)
            End If
        End If
        args.m_rail_params(1).Append(t1)
    Next

    'Set shapes
    args.m_shape_curves = section_curves.ToArray

    'Set the rest of the parameters
    args.m_simplify = 0
    args.m_bSimpleSweep = False
    args.m_bSameHeight = False
    args.m_rebuild_count = -1 'Sample point count for rebuilding shapes
    args.m_refit_tolerance = RMA.Rhino.RhUtil.RhinoApp.ActiveDoc.AbsoluteTolerance()
    args.m_sweep_tolerance = RMA.Rhino.RhUtil.RhinoApp.ActiveDoc.AbsoluteTolerance()
    args.m_angle_tolerance = RMA.Rhino.RhUtil.RhinoApp.ActiveDoc.AngleToleranceRadians()

    Dim sBreps() As OnBrep = Nothing
    If (RhUtil.RhinoSweep2(args, sBreps)) Then
        For Each b As OnBrep In sBreps
            Sweep2_Breps.Add(b)
        Next
    End If

    Return
End Sub
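In the meantime, one workaround I may try (a hedged sketch, not a verified fix, written against the newer RhinoCommon API for brevity): split each rail into its kink-bounded segments, sweep the matching segment pairs separately, and join the results, so each individual sweep stays smooth while the kink survives at the seams.
// Hedged sketch: assumes the two rails split into the same number of
// segments in matching order, and that one cross-section can be reused
// for every pair (in practice the shape may need transporting to each
// segment's start).
using System.Collections.Generic;
using Rhino.Geometry;

public static class KinkPreservingSweep
{
    public static Brep[] Sweep(Curve rail1, Curve rail2, Curve shape, double tol)
    {
        // DuplicateSegments breaks a polycurve into its segments at the kinks.
        Curve[] segs1 = rail1.DuplicateSegments();
        Curve[] segs2 = rail2.DuplicateSegments();

        var pieces = new List<Brep>();
        var sweep = new SweepTwoRail();
        for (int i = 0; i < segs1.Length && i < segs2.Length; i++)
        {
            // Each segment pair gets its own smooth sweep.
            Brep[] swept = sweep.PerformSweep(segs1[i], segs2[i], shape);
            if (swept != null)
                pieces.AddRange(swept);
        }

        // Joining keeps one brep while the kinked edges remain.
        return Brep.JoinBreps(pieces, tol);
    }
}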
…