rk and I will just clarify some of the details. I will also note that I do not know what the shadings output of the decomposeByType component is supposed to do as Mostapha put it there a long time ago and I was not sure what his intentions were. Going down a list of clarification points:
1) You are right that you should either connect the shade breps to the EPContext component or plug the HBObjWShades into the RunSimulation component (never do both). However, connecting the breps to the EPContext component is undesirable for two reasons: it will make the simulation run much longer, and the EnergyPlus calculation will not account for the surface temperatures of the blinds (it will assume they are at the same temperature as the outdoor air, which is very wrong in a lot of cases). When you connect the HBObjWShades to run the simulation, EnergyPlus treats the blinds as abstract objects defined only by a numeric parameterization and not as actual geometry. This lets the calculation run fast and is still enough of a description for E+ to calculate the temperature of the blinds, thereby accounting for the heat that can be re-radiated from the blinds to the indoors when they get hot in the sun. This more desirable way of running the blinds is how I imagined the component being used most of the time; I mostly included the shadeBreps so that you have a visual of what you are simulating.
2) When you use the more desirable HBObjWShades to approximate your blinds, you should use the blindsSchedule input to tell E+ when the shades are pulled (this is instead of the transSchedule on the EPContext component).
3) The zoneData inputs on the EPWindowShade component are meant to help in an entirely different workflow, which evaluates shade benefit based on energy simulation results. I apologize if it seems confusing to have two major uses of the component in one but we have so many Honeybee components right now and I did not want to make 2 separate ones when they seemed so similar. See this example file to see how to do energy shade benefit (https://app.box.com/s/oi64zoj5u1slz494grzhgmdkx7yie9jo).
Ok. I think that clears up everything that I know. Now to the part that I do not. As I said, Mostapha put the shadings output there a long time ago and I do not know what his intentions were. Abraham, as you know, I am about to do a major revision of the EPWindowShade component to make it compatible with OpenStudio, include drapes/generic shades in addition to blinds, and also figure out how to do electrochromic glazing. I can easily include all of the visualized shades as output from the decomposeByType component when I do this. However, I do not want to interfere with other intentions Mostapha had when he put the output in. If he could confirm that this is in line with his vision for the shadings output, I will implement it soon.
-Chris
…
size component supported only ground PV panels and angled roof PV panels.
Download the newest PV SWH system size component from here (click on "View Raw" to download it, then move the downloaded .ghuser file to File->Special Folders->User Objects Folder and confirm overwriting the previously located one).
Just a few opinions on the project you are currently working on:
This kind of fixed, non-transparent (overhang) PV panel attached to a building facade is very convenient for locations at higher latitudes. The reason is that fixed overhang PV panels are dimensioned according to the sun position at summer solstice, and solstice elevation angles at higher-latitude locations are lower than those at lower-latitude locations. Due to Incheon's low latitude (37), you will get a rather short length of PV panel*: less than 10 centimeters (0.097 meters in the attached .gh file below). As you have mentioned, Galapagos needs to be used too.
I will just mention some of the good and bad ways in which the upper issue could be somewhat avoided:
1) Increase the vertical distance between PV panels (PV panels appear above every second window).
2) Increase the tilt angle. This will also increase the length of the PV panels, but will decrease the final annual AC energy output. An example of this solution has been applied at the FKI building in Seoul (latitude: 37N). I already did some tests (with tilt angles: 40, 45, 55) and this does not seem like a good solution, though.
3) Shrink the "sun window" by using the minimalSpacingPeriod_ input. In photovoltaics, a planner is supposed to keep the 9h to 15h part of the sun window free of any obstructions. If you decrease the "sun window" to 10h to 14h, the length of your PV panels will increase. You can experiment a little with this (set your minimalSpacingPeriod_ to the 21st of June, 10 to 14 hours). In general, shrinking the sun window on summer solstice is not a good principle during planning.
4) Use tracking PV panels, not fixed ones. But Ladybug Photovoltaics components do not support this kind of PV system; they only support fixed ones.
I would personally go with the first option.
You can also experiment with the second and third ones. Comment back if you have any other questions.
-----------------------
* By "length of the PV panels" I mean the tiltedArrayHeight_ input of the PV SWH system size component.…
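The solstice geometry behind the short panel length can be checked with a few lines. This is a minimal sketch in plain Python using the textbook solar-noon approximation (altitude = 90 - latitude + 23.45 for northern mid-latitudes) and a flat overhang, not the actual Ladybug Photovoltaics calculation; the 0.4 m vertical gap is a hypothetical input.

```python
import math

def noon_altitude_summer_solstice(latitude_deg):
    """Approximate solar altitude (degrees) at solar noon on June 21."""
    return 90.0 - latitude_deg + 23.45

def overhang_depth(vertical_gap_m, latitude_deg):
    """Horizontal overhang depth whose solstice-noon shadow just spans
    vertical_gap_m below it (simple flat-overhang geometry)."""
    alt = math.radians(noon_altitude_summer_solstice(latitude_deg))
    return vertical_gap_m / math.tan(alt)

# Incheon, latitude 37 N: a high solstice sun (~76.5 deg) means even a
# shallow overhang casts a long shadow, hence very short panels.
depth = overhang_depth(0.4, 37.0)  # roughly 0.1 m for a 0.4 m gap
```

This also illustrates option 1 above: the depth scales linearly with the vertical gap, so doubling the distance between panel rows doubles the allowable panel length.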
On the other hand ... well ... we can pretend that this could be some sort of add-on dedicated for broken pieces, (and nerves if loops = a big number) he he.
Anyway:
1. If you enable the history (the yellow things) you can watch the recursion working: get a donor box and "slice" it in 2 (either via an "orthogonal" plane [the fast boxes] or a random one [the slow breps]). Then get each one and repeat until the desired "depth" of "slices" is achieved (the loops, that is). Pure recursion in terms of programming (a function does something, yields results and then calls itself to further process each result).
Double click on the C# to see the code (but don't change anything). For the record this is the function that does the main job (spot the fact that if it's not terminated it calls itself [last line]):
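The recursion in point 1 can also be sketched outside Rhino. Below is a minimal plain-Python analogue (not the actual C# in the definition): axis-aligned boxes as (min corner, max corner) tuples, sliced by a random "orthogonal" plane, recursing to the desired depth.

```python
import random

def slice_box(box, depth, rng=random):
    """box = (min_corner, max_corner) tuples; returns 2**depth pieces."""
    if depth == 0:
        return [box]
    lo, hi = box
    axis = rng.randrange(3)                     # random "orthogonal" plane
    t = lo[axis] + (hi[axis] - lo[axis]) * rng.uniform(0.3, 0.7)
    cut_hi = tuple(t if i == axis else hi[i] for i in range(3))
    cut_lo = tuple(t if i == axis else lo[i] for i in range(3))
    # pure recursion: do something, yield results, call yourself on each
    return (slice_box((lo, cut_hi), depth - 1, rng) +
            slice_box((cut_lo, hi), depth - 1, rng))

pieces = slice_box(((0, 0, 0), (10, 10, 10)), depth=3)  # 2**3 = 8 pieces
```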
2. The x, xy, xyz options restrict the random plane (actually in the boxes case there's another technique used [Intervals], but never mind). For instance (case random breps) the slicing plane is defined at the brep center and using a random direction:
Vector3d dir = new Vector3d(rand.NextDouble() * 2 - 1, rand.NextDouble() * 2 - 1, rand.NextDouble() * 2 - 1); // Random.NextDouble() takes no arguments, so remap [0, 1) to [-1, 1)
If the 3rd value is 0 then the plane's YAxis is parallel to Plane.WorldXY.ZAxis.
3. Now if the "slicing" thing was a random polyline at a random plane the pieces could be far more "elaborated" (and/or "naturally looking") ... but the thing with programming is to know(?) where/when to stop.
4. This approach could use any donor Brep (a blob for instance) or a Brep List. Notify if you want to add such an option.
5. Added some lines more for an option that allows to sample the pieces (due to the last loop) in an automated flat "layout" (it's a bit more complex than it appears on first sight).
6. The x,y restriction mode now affects the random slices as well. See what I mean:
and the same restriction using boxes:
Truth is that all that freaky stuff could be helpful for you if you had serious plans to learn C# (not something achievable without pain and tears aplenty).
best…
e point in each pair that has the lowest Z value (then later the highest Z)... The problem is the intersections are not returned sorted by Z; sometimes the lower point is first in the list, sometimes last. So I need to sort those pairs of points by Z value. I noticed the sort points component does not have any inputs for sort criteria... RhinoScript's SortPoints allows you to sort by:
blnOrder
Optional. Number. The component sort order, where:

  Value        Component Sort Order
  0 (default)  X, Y, Z
  1            X, Z, Y
  2            Y, X, Z
  3            Y, Z, X
  4            Z, X, Y
  5            Z, Y, X
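Until something like that exists as a component, the table above can be emulated in a GhPython script with a sort key. A minimal sketch in plain Python (points as (x, y, z) tuples, nothing Rhino-specific assumed):

```python
# Map each blnOrder value from the RhinoScript table to a permutation
# of the coordinate indices (0 = X, 1 = Y, 2 = Z).
ORDERS = {
    0: (0, 1, 2),  # X, Y, Z (default)
    1: (0, 2, 1),  # X, Z, Y
    2: (1, 0, 2),  # Y, X, Z
    3: (1, 2, 0),  # Y, Z, X
    4: (2, 0, 1),  # Z, X, Y
    5: (2, 1, 0),  # Z, Y, X
}

def sort_points(points, order=0):
    """Sort (x, y, z) tuples by the chosen coordinate priority."""
    i, j, k = ORDERS[order]
    return sorted(points, key=lambda p: (p[i], p[j], p[k]))

# e.g. order=4 sorts by Z first, so the lower intersection point of
# each pair comes out first
pair = [(3.0, 1.0, 5.0), (3.0, 1.0, 2.0)]
low_first = sort_points(pair, order=4)
```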
Will we get something like this in GH? For now I think I can manage to analyze the Z for each and re-order the points, but a more comprehensive point sorting tool might be nice... no? Or did I miss something obvious? --Thx, --Mitch…
works joyfully if you want to change parameters and generate screen captures and are planning to do a lot of them. You can of course generate the file name dynamically, referring to the parameters you gave to the script, so that you have meaningful file names.
The example below will generate two captures at J:\Temp\001_top.jpg and J:\Temp\001_front.jpg, both at 600x600 px in ghosted mode.
The instructions are as follows (you will see them if you open the VB code by double-clicking):
' Note1: The script is actually calling Rhino commands.
' Note2: Remember you have to draw something that is selectable for the script to function. The script uses _SelAll then _Zoom _Selected
' Note3: After you toggle blnSave to True, a new viewport will pop up; be patient while Rhino works, and wait for that viewport to disappear before clicking on anything.
' Note4: The component is not stable if you mouse-click anywhere while the saving process is running. A stray move may crash your program, so save your RH and GH files before using this component.
' FileName : String Input = Supply with the path and file name without ".jpg" extension : e.g.: "C:\Temp\001" (Without the quotes)
' blnSave : Boolean Input = Saves when toggled to True (Remember to toggle back to False after use, otherwise the script will re-run itself during the next update)
' Resolution_width : Integer Input = Resolution for the captured image
' Resolution_height : Integer Input = well...
' TopYea : Boolean Input = Toggles if the Top View is captured (Default is False if not connected)
' FrontYea : Boolean Input = Toggles if the Front View is captured (Default is False if not connected)
' ...Yea : Boolean Input = Toggles if the corresponding View is captured (Default is False if not connected)
' DisplayMode : Integer Input(0-4) = Sets the display Mode 0:Shaded 1:Wireframe 2:Rendered 3:Ghosted 4:XRay Default:Shaded
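The "generate the file name dynamically" idea mentioned above can be sketched in a few lines of plain Python. The parameter names (r, n) and folder are hypothetical; the resulting string (without the ".jpg" extension) would be fed into the FileName input of the component.

```python
def capture_name(folder, index, view, **params):
    """Build a meaningful capture file name from script parameters,
    e.g. J:\Temp\n12_r2.5_001_top (extension added by the component)."""
    # sort parameters so the name is stable across runs
    tag = "_".join("{}{}".format(k, v) for k, v in sorted(params.items()))
    return "{}\\{}_{:03d}_{}".format(folder, tag, index, view.lower())

name = capture_name("J:\\Temp", 1, "Top", r=2.5, n=12)
```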
I remember I took some code from somewhere but I forgot exactly the source (if someone could remind me, I would love to cite it). I rewrote most of it, though. But the attribution header in the code is still there, and now it is a bit interesting to see the family tree:
'////// Marc Hoppermann ///////////tweaked by Damien Almor ///////rewritten for curves by to]///////adapted by u]...www.utos.blogspot.com ///readapted by Victor Leung @ www.dreamationworks.com
Visit my blog if you have time: www.dreamationworks.com…
nition. Using RenderAnimation component from http://www.giuliopiacentino.com/grasshopper-tools/, I could do all of above except for the Toon material part.
I found a post regarding the same matter ( http://www.grasshopper3d.com/forum/topics/how-to-add-materials-to-material-table ), but since I am not very familiar with scripts, this is what I think his definition does. Correct me if I am wrong.
Since Rhino Vray only supports a Toon environment per material (unlike Max Vray, which has a global override feature),
1. import the toon material from the Rhino material editor
2. add colors to the toon material and make as many new colored toon materials as needed
3. import those new materials back into Grasshopper
4. match them with the designated geometries and render (the RenderAnimation component by Giulio does this job). Here is the final work he did: http://vimeo.com/34728433
Grasshopper + Vray from Marc Syp on Vimeo.
I am using Rhino 4 with Vray 1.5.
I have uploaded my definition, a simple one that transforms box height along with color as the frame advances. The definition works, but the toon effect is not there.…
case for sure (started by Giorgio a couple of days before). I've got involved because I explore ways to "relax" shapes on nurbs (say, patterns created by Lunchbox or "manually") without using any kind of mesh (more explanations soon).
Here are 5 test cases (the SDK appears not to have some "thicken surface" thing ... thus the algo that finds the "whole" shapes is rather naive) VS 2 Kangaroo "methods", and the why-bother (he he) option as well.
If the goal is to "fit" these shapes within the nurbs ... does it work so far? No, I'm afraid (it appears that the "springs" used are not the proper ones - or [Kangaroo1 option] the lines that pull should have originated from valence-2 points only)
Tricky points:
1. Internalize appears to have a variety of serious issues (see Input inside definition) - load the Rhino file first (but even so ...).
2. Pull to surface is deactivated - it is not the issue here (and it's very slow).
3. Since Starling/WB alter the "curves - points" related order,
the issue here (Pull points to curves) is to correspond apples to apples:
and that's what Anemone does:
From chaos:
to order:
This means that prior to activating Kangaroo you should double-click the Anemone start component in order to "sort" the curves properly.
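The "apples to apples" matching can be sketched without Rhino at all. A minimal plain-Python version (hypothetical data, curves as point lists): after the reshuffle, each pulled point is re-paired with the curve whose centroid is nearest to it.

```python
def centroid(curve_pts):
    """Average of a curve's control/sample points."""
    n = float(len(curve_pts))
    return tuple(sum(p[i] for p in curve_pts) / n for i in range(3))

def dist2(a, b):
    """Squared distance (enough for nearest-neighbour comparison)."""
    return sum((a[i] - b[i]) ** 2 for i in range(3))

def match_points_to_curves(points, curves):
    """For each point, the index of the curve with the nearest centroid."""
    cents = [centroid(c) for c in curves]
    return [min(range(len(curves)), key=lambda i: dist2(p, cents[i]))
            for p in points]

curves = [[(0, 0, 0), (1, 0, 0)], [(10, 0, 0), (11, 0, 0)]]
points = [(10.2, 0, 0), (0.4, 0, 0)]
# match_points_to_curves(points, curves) -> [1, 0]
```

Centroid matching is naive (it can mismatch on tightly packed curves), which is presumably why the actual definition leans on Anemone's loop to sort things robustly.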
But ... fact is that the results are pathetic:
more soon
best, Peter…
-Vehicle elements (3D objects and a component for custom vehicles; models from Google Warehouse)
-Traffic Velocity Graphs, drawn on every trajectory curve (allow custom graphs drawn)
-Traffic regulation elements (such as Traffic Lights and Stop Signals) and traffic density
-Particle Systems on trajectory curves, just to manage the traffic regulations and avoid collisions based on security distances
-Traffic Vehicle Animation Modes (Dots, Bounding Boxes or complex Meshes with attributes for final rendering (Giulio Piacentino's Render Animation))
-Vehicle Lights and Vehicle Sights, to make visual studies
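The particle-system item above (avoiding collisions based on security distances) is essentially a car-following rule. A minimal sketch in plain Python, with assumed parameters and the trajectory reduced to a 1D curve parameter - not the Roadrunner implementation:

```python
def step(positions, speeds, dt=1.0, security=5.0):
    """Advance vehicles along a trajectory parameter; positions are
    sorted ascending, so the last vehicle is the leader. Each follower
    is clamped to stay a security distance behind the vehicle ahead."""
    new = list(positions)
    for i in range(len(positions) - 1, -1, -1):
        advanced = positions[i] + speeds[i] * dt
        if i < len(positions) - 1:
            # never close within the security distance of the
            # (already updated) vehicle ahead
            advanced = min(advanced, new[i + 1] - security)
        new[i] = advanced
    return new

# a fast follower catches up to the leader but stops 5 units short
state = step([0.0, 20.0], [30.0, 2.0])  # leader moves to 22, follower to 17
```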
Team:
-Sergio del Castillo Tello (Doctor No, lead programmer)
-Everyone who wants to be involved and support these tools
The development of Roadrunner is planned to take place within a Research Group Program at ETSAM (University of Architecture in Madrid). This forum group was created just to test the interest of the community while we keep on developing (it is still being tested); we will probably share the whole thing in the future. Cheers!
Traffic Cluster Scheme
Traffic Elements
Traffic Urban Systems
Vehicle Elements
Roadrunner - overview
Roadrunner 0 Basics
Roadrunner 1 Modes
Roadrunner 2 Elements
Roadrunner 3 Urban Systems…
can divide a surface with any imaginable pattern. 3. Here I provide a way to do it via Lunchbox ... it works, but it is fixed, so we need to play with data trees in order to create the appropriate pattern per case. 4. The other component is a C# that does many things besides dividing any collection of points with many patterns (see the other pattern I made for you). 5. You must decompose a polysurface into pieces in order to work on the subdivisions. 6. I am also giving another definition that could act as a tutorial on how to treat point sets via standard GH components and classic methods.
Let me know if any of this seems blurry to you: if so, I could write a definition using classic GH components - but you would lose the division-pattern variations.
best, Peter
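Point 6 (treating point sets with classic methods) starts from something like this. A minimal plain-Python sketch, nothing Rhino-specific assumed: a UV grid of points on the unit square, and the extraction of quad panels from it - the basic step before swapping in fancier division patterns.

```python
def uv_grid(nu, nv):
    """nu x nv points on the unit square (z = 0), as nested lists."""
    return [[(u / float(nu - 1), v / float(nv - 1), 0.0)
             for v in range(nv)] for u in range(nu)]

def quad_panels(grid):
    """One panel (4 corner points) per grid cell."""
    panels = []
    for i in range(len(grid) - 1):
        for j in range(len(grid[0]) - 1):
            panels.append((grid[i][j], grid[i + 1][j],
                           grid[i + 1][j + 1], grid[i][j + 1]))
    return panels

panels = quad_panels(uv_grid(4, 3))  # 3 x 2 = 6 quads
```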
…
about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen which compile your project while you type. I know that threading within components and at the RhinoCommon level is a freaking hard problem that has been discussed at length already. (Although when, 5-10 years from now, it's finished, it will be very cool.)
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver were a little more atomic, like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism built into a given GH file could be exploited (not within components, but rather solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
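The "non-interdependent branches" idea can be made concrete with a toy scheduler. A conceptual sketch in plain Python (invented data model, not GH internals): a definition as a dependency map, where we compute the earliest "wave" each job could run in - jobs sharing a wave have no dependency between them and could be solved in parallel.

```python
def solve_waves(deps):
    """deps: {job: [jobs it depends on]}. Returns {job: wave index};
    jobs in the same wave are mutually independent."""
    wave = {}
    def rank(job):
        if job not in wave:
            # a job runs one wave after its slowest dependency;
            # jobs with no dependencies run in wave 0
            wave[job] = 1 + max([rank(d) for d in deps[job]] or [-1])
        return wave[job]
    for job in deps:
        rank(job)
    return wave

deps = {"slider": [], "circle": ["slider"], "extrude": ["circle"],
        "text": ["slider"]}
# "circle" and "text" land in the same wave: independent branches
waves = solve_waves(deps)
```

A real scheduler would also need cycle detection and cancellation of stale jobs when the user edits the file, but the wave structure is the part that exposes the branch-level parallelism described above.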
I was imagining a couple of scenarios:
a) Writing a parallel module: solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point you could connect up to some of the already calculated parameters and write something in parallel to the main trunk which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze a bit of a section of downstream code and make modifications so you can observe the effects more quickly; unfreeze a bit more and repeat, etc., until you're done, and then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just absorb this need? Since the vast majority of the time that GH is solving is on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if it took 15 seconds or 15 hours for the final run.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK; from a user's point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: the simple percentage version for unpredictable code and, for those of us able to calculate the time taken by our algorithm based on the number of input parameters, a countdown in seconds or minutes or whatever.
I think a good place to start with these sort of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013