…
- Area of the blue line: min. 80% of the rectangle a x b
- Max. height h of the top point: h_max = a
- Min. volume between the rectangle a x b and the membrane: 500 m³
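Stated formally (one reading of the constraints above, assuming the membrane spans the rectangle $a \times b$ and $h$ is measured from its plane):

$$A_{\text{membrane}} \ge 0.8\,ab, \qquad h \le h_{\max} = a, \qquad V_{\text{enclosed}} \ge 500\ \text{m}^3$$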
Can anyone help me?…
…complex the models are. If we are running multi-room E+ studies, those will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast; the trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
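To make the round-robin idea concrete, here is a minimal sketch in plain Python, assuming a hypothetical list of node share paths (NODES) and a folder of pre-generated batch files (BATCH_DIR); the agent that actually executes whatever lands on each node is left out:

```python
# Minimal sketch: hand one batch file to each node in round-robin order.
# NODES and BATCH_DIR are placeholders; adapt to your own network setup.
import itertools
import os
import shutil

NODES = [r"\\node01\renders", r"\\node02\renders", r"\\node03\renders"]
BATCH_DIR = r"C:\sim\batch"

batch_files = sorted(f for f in os.listdir(BATCH_DIR) if f.endswith(".bat"))
for bat, node in zip(batch_files, itertools.cycle(NODES)):
    # Copy the iteration's batch file into the node's watched folder;
    # a small agent on each node would run whatever appears here.
    shutil.copy(os.path.join(BATCH_DIR, bat), node)
    print(bat, "->", node)
```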
I'm concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA from the .ill files after the Radiance run. This doesn't take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to run automatically in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
- Parametrically generate geometry.
- Assign input values, materials, etc.
- Generate Radiance/E+ batch files for all iterations.

Calculate
- Run separate Radiance/E+ simulations in parallel via network clusters; each run is a unique iteration.
- Save all temp files to a single location on the server.

Post Processing
- Run a GH script from a single computer to translate the .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.); a rough sketch of the DA step follows after this list.
- Collect the final data in a single location (an Excel document) to be read by Design Explorer or Pollination.
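As a concrete example of that DA step, here is a minimal sketch, assuming a Daysim-style .ill file (one row per timestep; the first three columns are month, day, and hour; the remaining columns are illuminance in lux, one per sensor point) and an assumed 300 lux threshold over 8:00-18:00 occupancy:

```python
# Minimal DA sketch under the file-format assumptions stated above.
THRESHOLD = 300.0  # lux; assumed target illuminance

def daylight_autonomy(ill_path, occupied=(8, 18)):
    counts, total = None, 0
    with open(ill_path) as f:
        for line in f:
            vals = line.split()
            if line.startswith("#") or len(vals) < 4:
                continue  # skip headers and blank lines
            hour = float(vals[2])
            if not (occupied[0] <= hour < occupied[1]):
                continue  # only occupied timesteps count
            lux = [float(v) for v in vals[3:]]
            if counts is None:
                counts = [0] * len(lux)
            counts = [c + (v >= THRESHOLD) for c, v in zip(counts, lux)]
            total += 1
    # DA (%) per sensor point: share of occupied hours at/above threshold.
    return [100.0 * c / total for c in counts] if total else []
```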
The above workflow avoids having to parallelize GH. The consequence is that we can't parallelize any post-processing routines. This may be easier to implement in the short term, but in the long term we should try to parallelize everything.
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of Radiance and E+ on separate computers. I don't see the incentive to split individual runs across multiple processors, because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of matrix-based calculations (3-5 phase methods) are greatest when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Would that let parallel computation also benefit from reusing pre-calculated information?
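For reference, the three-phase method factors point illuminance as

$$\mathbf{i} = \mathbf{V}\,\mathbf{T}\,\mathbf{D}\,\mathbf{s}$$

where the view matrix $\mathbf{V}$ and daylight matrix $\mathbf{D}$ depend only on geometry, $\mathbf{T}$ on the fenestration state, and $\mathbf{s}$ on the sky. Since $\mathbf{V}$ and $\mathbf{D}$ are precomputed and stored as files, it seems plausible (though untested here) that parallel runs on different CPUs could read the same stored matrices.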
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only, Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-end GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer-cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
…You will see all of the available components on the ribbon at once, so there is no need to keep clicking drop-down menus.
It's all about discoverability with GH. What if you're a beginner and don't know about the Create Facility (double-click the canvas)? How can you find Extr?
Even if you hover over every component or use the drop-down lists, you will not see the name Extr appear anywhere.
Sure, it makes sense that Extr is short for Extrude, but it's also the nickname of the Extrude to Point component.
So you can easily miss the fact that one has a Distance input versus a Point input.
I think I made the move to Icons around the move from version 0.5 to 0.6, possibly before. I initially thought that I would go back to text because I loved the monochromatic look of the text, but I soon realised that Icons were the way forward. The greatest benefit is speed: you don't need to digest and decipher every component's name (which is written 90 degrees to the norm).
I'm not saying you should move to Icons forthwith but at least consider that once you have a better knowledge and understanding of GH, Icons will set you free.
My top ten tips, which I would highly recommend to anyone wanting to better themselves with GH:
1) Turn on Draw Icons
2) Turn on Draw Fancy Wires
3) Turn on Obscure Components
4) Use the Create Facility like a Command Line, e.g. "Slider=-1<0.75<2" or "Shiftlist=-1".
5) Use Component Aliases to customise your use of the Create Facility, e.g. giving the Point XYZ component an alias of XYZ brings it up as the first option in the Create Facility, ahead of the other possibilities.
6) Try to answer other people's questions, even if they're not relevant to your own area. Solving a problem outside your comfort zone and posting your results is very rewarding, and it also lets you see the other approaches that get posted in a new light.
7) Take the time to understand Data/Path structures.
8) Buy a second monitor - nothing compares to screen real estate when working in Grasshopper.
9) Read Rajaa Issa's Essential Mathematics
10) Pick a panel in a tab on the ribbon and get to know every component inside and out and then move on. Start with the Sets Tab > List Panel…
…s and robotic fabrication technologies in constructing them. It features a range of work from well-known progressive practices, such as Zaha Hadid Architects, Greg Lynn Form, UN Studio, Contemporary Architectural Practice and Evan Douglis Studio, together with emerging experimental practices, such as SPAN, Biothing, Kokkugia, Rubedo and Synthesis, along with some talented emerging Chinese architects, such as Archi-Union Studio and HHD_Fun, and student work from leading schools of architecture, including the AA, Harvard GSD, MIT, RMIT, UPenn, Columbia GSAPP, DIA, USC, CAFA and Tongji. The exhibition also includes work from the AAC DigitalFUTURE collaborative workshop between Tongji University and USC. The exhibition is curated by Neil Leach (USC) and Philip Yuan (Tongji), and designed by Kris Mun (USC). It is open weekdays until 15 September. Image: 'Digital Merzbau', designed by SCUT students Lin Rungu and Zhang Mei, tutored by Neil Leach (USC).
…
…on Air.
Curated by Gil Akos, Evan Greenberg, and Ronnie Parsons, these lectures aim to interrogate three main lines of inquiry: material systems, natural systems, and machinic systems. Each esteemed presenter will discuss how designers can approach problems through the lens of Embedded Intelligence in practice, research, and academia. There will be an in-person audience at the Architectural Association in London, and a recording of the series will be available on-demand through the AA's online lecture video catalog.
The first lecture, Biological Intelligence, will take place on February 3 and will feature The Living's David Benjamin. Winner of the MoMA PS1 Young Architects Program, Benjamin has created paradigm-shifting projects such as Living Light, an interactive canopy in Seoul that reacts to air quality, and Amphibious Architecture, a project which expresses pollution levels in the Hudson River.
Future lectures will be given by Michael Weinstock of the Architectural Association and Skylar Tibbits of the Self-Assembly Lab at MIT.
…
…trying to download your examples, but it sends me to the code instead (I am only able to download the Rhino files but not the GH files). I just installed the plug-in and have been playing with the vortex component, but I don't have enough control yet. I would like to have water-velocity continuity along the river and generate a vortex where the field finds an obstacle, such as a pier attached to the river bank.
1. I am thinking of having two curves (the river banks) as input to generate the vector field.
2. Different curves (polygons) along the river, attached to the river bank, that create the vortices (these could also be defined by the centre of the actual pier as a point with a certain radius of action).
3. Finally, the vortex should diminish along the z axis (the vortex is bigger at the water surface), like a tornado.
I would like to be able to set points that create or modify the vector field by positioning these vortices, and the position should also relate to strength (the closer to the river bank, the bigger the force of rotation); a rough sketch of this idea follows below.
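Not a plug-in answer, but here is a rough sketch of the decaying-vortex idea in plain Python; all names and both decay functions are assumptions to be tuned, not anything from the plug-in:

```python
import math

def vortex_vector(px, py, pz, cx, cy, strength, radius, depth_decay=0.3):
    """One vortex's contribution to the field at point (px, py, pz).

    The vortex axis is vertical through (cx, cy); strength fades linearly
    with distance from the axis and exponentially with depth below the
    water surface (pz measured downward from z = 0).
    """
    dx, dy = px - cx, py - cy
    d = math.hypot(dx, dy)
    if d < 1e-9 or d > radius:
        return (0.0, 0.0, 0.0)  # outside this vortex's radius of action
    tx, ty = -dy / d, dx / d    # tangential (rotational) direction
    s = strength * (1.0 - d / radius) * math.exp(-depth_decay * abs(pz))
    return (s * tx, s * ty, 0.0)

# The full field would sum a downstream "river flow" vector with the
# contributions of every vortex, scaling strength by distance to the bank.
```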
I would appreciate it if you could point me to some related tutorials or suggest a workflow.
many thanks! …
…nually.
Now that I see how short and easy the code is, I want to propose a wish list from the "AA SED programme" so that later students will be able to use your Honeybee tool more intensively.
First of all, I want to clarify what the pressures are when we specify infiltration; that was still unclear to me as a beginner. Is it m³/(m²·s) at 50 Pa or at the actual pressure? If it is at the actual pressure, does that mean we should specify the context somehow, by inputting coefficients, by the actual Brep context, or from some CFD? What do we do? What do you typically do?
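For what it's worth, the two readings differ by a power-law correction. Here is a minimal sketch of the usual conversion; the exponent n ≈ 0.65 is a typical assumed value, and this says nothing about which convention Honeybee actually expects:

```python
# Illustrative only: convert an infiltration rate measured at 50 Pa to
# another pressure difference via the power law Q = C * dP**n.
def flow_at_pressure(q50, dp, n=0.65):
    """q50: flow at 50 Pa (e.g. m3/(m2*s)); dp: target pressure in Pa."""
    return q50 * (dp / 50.0) ** n

print(flow_at_pressure(0.0003, 4.0))  # flow at an assumed 4 Pa design pressure
```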
Secondly, I found an IDF example that works with material substitution in the EnergyPlus example folders. I think this is something like what Chris was trying to propose. The code seems short. Can we expect this feature of material replacement according to a schedule to appear later?
Other passive elements, like a Trombe wall for instance, would be appreciated as well.
I see you are now focused more on high-tech/light-tech tools, but don't forget about low-tech vernacular strategies.
Many thanks again.
…