mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast; the trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module (a rough sketch of the idea is below). I think GH can set a directory for Radiance and E+ to save all final files to. We can point this at a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
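To make the round-robin idea concrete, here is a minimal sketch, not a tested implementation, that distributes iteration batch files across network nodes by copying each .bat into a node's watched queue folder. The node names, folder layout, and the idea of a watcher on each node that executes whatever lands in its queue are all assumptions:

// Hypothetical round-robin dispatcher. Assumes each node runs a watcher
// that executes any .bat file appearing in its \queue share.
using System;
using System.IO;

class RoundRobinDispatcher
{
    static void Main()
    {
        string[] nodes = { @"\\NODE01\queue", @"\\NODE02\queue", @"\\NODE03\queue" };
        string[] batchFiles = Directory.GetFiles(@"C:\sim\iterations", "*.bat");

        for (int i = 0; i < batchFiles.Length; i++)
        {
            // i % nodes.Length cycles 0,1,2,0,1,2,... over the node list,
            // so iterations are spread evenly until all are assigned.
            string target = Path.Combine(nodes[i % nodes.Length],
                                         Path.GetFileName(batchFiles[i]));
            File.Copy(batchFiles[i], target, true);
        }
    }
}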
I'm concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA only after the .ill files have been generated. This doesn't take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to run automatically in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, materials, etc.
Generate Radiance/E+ batch files for all iterations
Calculate
Calculate separate runs of Radiance/E+ in parallel via network clusters; each run will be a unique iteration.
Save all temp files to a single location on the server
Post Processing
Run a GH script from a single computer. Translate .ill files or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.); a sketch of the DA step follows this list.
Collect final data in a single location (an Excel document) to be read by Design Explorer or Pollination.
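Since the DA step comes up above, here is a hedged sketch of what it involves, assuming a Daysim-style .ill layout (month, day, hour, then one illuminance column per sensor) and that the file covers only occupied hours; the parsing would need adapting to the actual file format:

// Daylight Autonomy per sensor: the fraction of (occupied) hours in which
// the sensor meets a target illuminance. File layout is an assumption.
using System;
using System.IO;
using System.Linq;

class DaylightAutonomy
{
    static double[] Compute(string illFile, double targetLux = 300.0)
    {
        var rows = File.ReadLines(illFile)
            .Where(l => !l.StartsWith("#"))                  // skip comment lines
            .Select(l => l.Split(new[] { ' ', '\t' },
                    StringSplitOptions.RemoveEmptyEntries)
                    .Select(double.Parse).ToArray())
            .Where(r => r.Length > 3)                        // month, day, hour + data
            .ToList();

        int sensors = rows[0].Length - 3;
        var hoursAbove = new double[sensors];
        foreach (var r in rows)
            for (int s = 0; s < sensors; s++)
                if (r[3 + s] >= targetLux) hoursAbove[s]++;

        // DA = hours at or above target / total hours in the file
        return hoursAbove.Select(h => h / rows.Count).ToArray();
    }

    static void Main(string[] args)
    {
        foreach (double da in Compute(args[0]))
            Console.WriteLine(da);
    }
}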
The above workflow avoids having to parallelize GH. The consequence is that we can't parallelize any of the post-processing routines. That may be easier to implement in the short term, but in the long term we should try to parallelize everything.
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the benefits of matrix-based calculations (the 3- to 5-phase methods) are greatest when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Would that let parallel computation also benefit from reusing pre-calculated information?
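For context on why that reuse seems plausible: as I understand it, the three-phase method computes sensor illuminance as a chain of matrix products,

E = V · T · D · s

where V is the view matrix (sensor points to window), T the BSDF transmission matrix of the fenestration, D the daylight matrix (window to sky), and s the sky vector. V and D depend only on static geometry, so iterations that share that geometry could in principle load them from a common location and swap only T or s.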
Clustering computers and GPU-based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but that he didn't know what it would be. Do you know what Penn State uses? You mentioned it is a text-only, Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer-cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
even (0, 2, 4) then that means the point either never hit it, or went in and out again, meaning it's outside. If it hits an odd number of times, then it must have come from within originally.
The method implements this approach using the mesh bounding box, then striking a polyline from your test point along a vector defined by the upper-right corner of the bounding box plus a vector of (100,100,100). In the case of your failing points, the ray strikes an edge very precisely, which gets counted as 2 hits instead of the 1 it should be, so the test returns false:
Your best bet is probably to roll your own implementation that tests multiple vectors:
// Assumes the usual GH C# script component usings: System.Collections.Generic,
// Rhino.Geometry, Grasshopper, and Grasshopper.Kernel.Data.
private void RunScript(List<Point3d> P, Mesh M, ref object A, ref object B, ref object C) {
  BoundingBox bb = M.GetBoundingBox(false);
  List<bool> inside = new List<bool>();
  // Trees collecting the raw hit points per test point, for inspection.
  DataTree<Point3d> checkA = new DataTree<Point3d>();
  DataTree<Point3d> checkB = new DataTree<Point3d>();
  for (int i = 0; i < P.Count; i++) {
    // Two rays (as polylines) from the test point to well past the bounding
    // box, in slightly different directions so a degenerate edge hit on one
    // is unlikely to repeat on the other.
    Polyline a = new Polyline(); Polyline b = new Polyline();
    a.Add(P[i]); b.Add(P[i]);
    a.Add(bb.Max + new Vector3d(100, 100, 100));
    b.Add(bb.Max + new Vector3d(100, 150, 150));
    int[] fa; int[] fb;
    Point3d[] xa = Rhino.Geometry.Intersect.Intersection.MeshPolyline(M, new PolylineCurve(a), out fa);
    Point3d[] xb = Rhino.Geometry.Intersect.Intersection.MeshPolyline(M, new PolylineCurve(b), out fb);
    // An odd crossing count on either ray means the point is inside.
    inside.Add(xa.Length % 2 == 1 || xb.Length % 2 == 1);
    checkA.AddRange(xa, new GH_Path(i));
    checkB.AddRange(xb, new GH_Path(i));
  }
  A = inside;
  B = checkA;
  C = checkB;
}
…
Added by David Stasiuk at 10:20am on October 10, 2017
ino Mc Neel, author of "Architettura Parametrica - Introduzione a Grasshopper", the first manual on Grasshopper. The PLUG IT courses were born from the desire to promote new digital technologies that support design and to share the know-how gained through research, collaboration with leading architecture firms, and international publications. The course introduces the basics of Grasshopper, covering parametric design methodologies and algorithmic modeling techniques for generating complex forms. It is aimed at students and professionals with at least minimal 3D modeling experience and consists of theoretical lessons and exercises. Topics covered:
- Introduction to parametric design: theory, examples, case studies
- Grasshopper: basic concepts, algorithmic logic, graphical interface
- Fundamentals: components, connections, data flow
- Mathematical and logical functions, series, data management
- Analysis and definition of curves and surfaces
- Definition of grids and complex patterns
- Geometric transformations, paneling
- Attractors, image sampler
- Data trees: managing complex data
- Digital fabrication: theory and examples
- Nesting: decomposing three-dimensional objects into planar sections for CNC machines
A certificate will be issued at the end. INFO AND BOOKING: http://www.arturotedeschi.com/wordpress/?p=2888…
ky.exe did not accept the -p parameter and made an empty sky.cal file.
----
Edit: solved the run problem; Honeybee did not download OpenStudioMasterTemplate.idf.
Get it here: https://github.com/mostaphaRoudsari/Honeybee/issues/119
Now I get an empty HDR:
C:\ladybug\prox\imageBasedSimulation>rpict -i -t 10 -vtv -vp 245.129 -226.458 20.405 -vd -0.549 0.656 -0.518 -vu -0.332 0.397 0.855 -vh 42.862 -vv 26.991 -vl 0 -vs 0 -vl 0 -x 800 -y 600 -af prox_RAD_Perspective.amb -ps 8 -pt 0.15 -pj 0.6 -dj 0 -ds 0.5 -dt 0.5 -dc 0.25 -dr 0 -dp 64 -st 0.85 -ab 2 -ad 1024 -as 175 -ar 150 -aa 0.200 -lr 4 -lw 0.050 -av 0 0 0 prox_RAD.oct 1>prox_RAD_Perspective.unf
rpict: 0 rays, 0.00% after 0.0000 hours
rpict: skybright`c__ladybug_skylib_cumulativeSkies_SINGAPORE_SGP_SINGAPORE_SGP_1: undefined variable
rpict: 1020 rays, 4.91% after 0.0000 hours
----
Hi friends,
I'm trying to get a cumulative-sky image-based metric to run and ran into an issue with the image-based metrics component. It throws:
Runtime error (KeyNotFoundException): honeybee_materialLib
Traceback:
  line 768, in main, "<string>"
  line 1442, in script
I guess this is some sort of setup issue on my end, or I messed up the definition? Any help is appreciated.
Thanks,
Max
…
Grasshopper. So, I once made an attempt to bind MS SQL Server in order to freeze definitions at certain states, to avoid managing baked objects in Rhino and also to retain whole results without using the GH state manager, which rebuilds everything.
But at that time GH's VB.Net component didn't properly read referenced DLLs, and I had forgotten about it since then.
At first, I was surprised by Slingshot's extensive interface: I still had in mind my own old project, a tool that would have acted at the level of Rhino's geometry objects, auto-creating the needed tables.
The db would have consisted of a main table holding each object's ID and name, plus related tables containing the information belonging to the main objects.
For example, a Brep is made of such-and-such underlying objects, each passed to its respective table according to the GH object definition layout (just the way they are written in the XML schema).
Then, in the db, you could query an object by name and retrieve the whole object or its underlying objects (e.g. at the bounding-curve level, or the point level for a Brep).
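A minimal sketch of that schema idea, assuming MS SQL Server; the table and column names are purely illustrative and not Slingshot's actual layout:

// Hypothetical schema: a main table of object IDs/names plus a related
// table for underlying geometry, here just points kept in order.
using System;
using System.Data.SqlClient; // .NET Framework ADO.NET client

class SchemaSketch
{
    static void Main()
    {
        // Connection string is an assumption; point it at your own server.
        string connStr = @"Server=.\SQLEXPRESS;Database=GhObjects;Integrated Security=true";
        string[] ddl =
        {
            @"CREATE TABLE MainObjects (
                  Id   INT IDENTITY PRIMARY KEY,
                  Name NVARCHAR(64) NOT NULL,
                  Kind NVARCHAR(32) NOT NULL)", // e.g. 'Brep', 'Curve'
            @"CREATE TABLE ObjectPoints (
                  ObjectId INT REFERENCES MainObjects(Id),
                  SeqNo    INT NOT NULL,
                  X FLOAT, Y FLOAT, Z FLOAT)"
        };
        using (var con = new SqlConnection(connStr))
        {
            con.Open();
            foreach (string stmt in ddl)
                using (var cmd = new SqlCommand(stmt, con))
                    cmd.ExecuteNonQuery();
        }
        // Retrieval by name then walks from MainObjects to the related
        // tables, e.g. ... FROM ObjectPoints WHERE ObjectId = @id ORDER BY SeqNo.
    }
}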
With Slingshot, I made a few attempts to cheat GH with BLOB data fields, but there was no way to get a whole object back. It seems that GH simply provides an object.ToString() ... and GH is definitely not designed to produce persistence outside of Rhino. If I have some spare time, I will try to extract
As for points and colors, I am now simply using a single CHAR(asLargeAsNeeded...) field, since GH parses a String into a Point (or Vector or Color) at any such component input.
I do so because it needs less room to display on the canvas...
Whatever I wrote above, I really like your design, as it is open to relational interactions between... whatever you need or dream of!
One last thing: GH can't open the definition file "Genome_DB_Template.gh" that I downloaded from your site: http://slingshot-dev.wikidot.com/database-genome. I was expecting to learn a lot from your very smart work! (I am running GH 08.00.13 and Slingshot 0.7.2.0.)
Slingshot is running great, open to any use... Thanks again.
Best,
Stan
…
y in English. ○ Presenter
Robert (Bob) McNeel (McNeel & Associates founder)
Robert (Bob) McNeel is the founder and president of Robert McNeel & Associates (RMA). Founded in 1978, RMA originally focused on developing accounting software for accounting, architecture, engineering, and other personal-services firms. Within a few years, RMA expanded its services to include selling and supporting microprocessor-based engineering and design software, including AutoCAD. By 1985, the main focus of the business had shifted to AutoCAD sales, service, training, and software development. Bob McNeel grew up in the mountains of southern Washington State on a subsistence dairy farm. To pay for college, he worked in construction as a carpenter, welder, and cement finisher. Bob has a BA in Accounting from Washington State University. Prior to founding McNeel & Associates, he was a practicing Certified Public Accountant and the comptroller for a large construction company in Spokane.
Andrés González (Rhino FabLab director)
Andrés has been a software trainer and developer since the 1980s. He has developed applications for diverse design markets, as well as training materials for different CAD and design software, including the training-materials community www.Rhino3D.TV. Andrés has been working with the Rhino team since the very early stages. He is now the head of the McNeel Southeast US & Latin American Division. He is the worldwide director of the digital fabrication community RhinoFabLab (www.RhinoFabLab.com), the Generative Jewelry & Fashion Design community GJD3D (www.GJD3d.com), and the Generative Furniture Design community GFD3D (www.GFD3d.com).
1981-1985: University of North Carolina at Charlotte, NC, USA. B.S., Bachelor of Science in Engineering
…
Added by Yusuke Oono at 9:28pm on October 16, 2013
ey provide all the means for what I am trying to achieve.
What I need is to get a fast (as possible) evaluation of passive solar heat gain from a certain facade. I know my building can cool to a certain degree (let's say 80 W/m2; let's ignore other internal gains for now), and I want to be sure my facade is not letting excessive amounts of heat into the room/building. Normally I would run a full-blown simulation to count my overheating hours and thereby evaluate the facade. To speed up the process, the idea is simply to estimate overheating hours in a faster way. What I am thinking is that excessive gains might be estimated by counting high-intensity irradiation patches in a "critical sky component" (or whatever such a thing would be called) that surpass my sensible cooling load. My hope is that the count for any facade visible to those sky patches would track the number of overheating hours quite closely, if properly calibrated against a simulated model. However, I have no idea right now whether this can be done.
Why do this? Speed, convenience, and whole-building thermal analyses.
@Chris and @Abraham: The critical sky component is made with LB's radiation component, filtering the beam components with the highest effect from a yearly EPW file.
@Chris: Conductive heat gains are also important, especially if the facade is badly insulated, so the next step is to filter the outdoor temperature in parallel with that critical sky component, then do a static heat-transfer analysis and combine it with the effect of direct sun. Again, no idea if it works.
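To make the screening idea concrete, here is a minimal sketch under stated assumptions (an hourly facade-irradiance array, a single SHGC for the glazing, and the 80 W/m2 cooling budget from above); it simply counts hours where transmitted solar gain exceeds the cooling capacity:

// Hedged sketch: count candidate overheating hours for a facade.
// All numbers and the array layout are illustrative assumptions.
using System;
using System.Linq;

class OverheatScreen
{
    // irradianceWm2: hourly irradiance on the facade (e.g. 8760 values)
    // shgc: solar heat gain coefficient of the glazing
    // coolingWm2: sensible cooling capacity, e.g. 80 W/m2
    static int CountOverheatedHours(double[] irradianceWm2, double shgc, double coolingWm2)
    {
        return irradianceWm2.Count(i => i * shgc > coolingWm2);
    }

    static void Main()
    {
        var rnd = new Random(1);
        double[] hourly = Enumerable.Range(0, 8760)
                                    .Select(_ => rnd.NextDouble() * 600).ToArray();
        Console.WriteLine(CountOverheatedHours(hourly, 0.35, 80.0));
    }
}

Calibrating this against a full simulation, as described above, would then be a matter of adjusting the SHGC or the cutoff until the counted hours track the simulated overheating hours.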
Hope it makes sense. I'm a little embarrassed I drew you into this little experiment; it was not at all the point of the discussion. But now that we are into it, I'd like to know what you think. If it works, it's kinda neat, at least I think it is.
/K…
pts organized in a data tree without losing the data structure, to create a folding surface as per the attached image.
1. Replace items (to create a gradient), like the weight-culling example:
Path {0}: replace all indices with a new value (a)
Path {1}: replace 90% of indices with a new value (a)
Path {2}: replace 80% of indices ...
2. Decrease the value (a) in relation to the path number
3. Then replace the above item values as follows: for even path numbers {0, 2, ...}, replace items with a negative number
I did not find an easy way to create a data tree inside GH that would achieve the above.
Points 2 & 3 are easy, but I could not find a simple solution for point 1.
At the moment the only way I have found is to create the list manually in Excel and import/export it, or to create a list of indices for each path.
Any hint appreciated. (A rough sketch of one possible approach follows below.)
Might we need to wait for the number slider or path mapper to accept input or notation?
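One possible approach, sketched as a GH C# script component (with the T input set to tree access). The replacement fraction per path (100%, 90%, 80%, ...), the decay of the value a, and the choice to replace the first n items rather than random ones are all assumptions read off the description above:

// Rough sketch, not a tested component. Inputs: DataTree<double> T,
// double a; output A. Standard GH script usings are assumed
// (System, System.Collections.Generic, Grasshopper, Grasshopper.Kernel.Data).
private void RunScript(DataTree<double> T, double a, ref object A)
{
  DataTree<double> result = new DataTree<double>();
  for (int p = 0; p < T.BranchCount; p++)
  {
    List<double> branch = new List<double>(T.Branch(p));
    double fraction = Math.Max(0.0, 1.0 - 0.1 * p); // {0}=100%, {1}=90%, ...
    double value = a / (p + 1);                     // 2. value decreases per path
    if (p % 2 == 0) value = -value;                 // 3. negative on even paths
    int n = (int)Math.Round(fraction * branch.Count);
    for (int k = 0; k < n; k++)
      branch[k] = value;                            // 1. replace n items
    result.AddRange(branch, new GH_Path(p));
  }
  A = result;
}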
best
Stephane
…
rld.wolfram.com/EnnepersMinimalSurface.html
When I type the equations for x, y, z it gives a syntax error, so I obviously do not understand how to construct an expression (screen capture attached).
Any help/explanation of using this function would be greatly appreciated.
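For reference, one common parametrization of Enneper's minimal surface from that page (sign and scaling conventions vary between sources) is:

x(u,v) = u - u^3/3 + u*v^2
y(u,v) = v - v^3/3 + v*u^2
z(u,v) = u^2 - v^2

Note that the GH Expression component needs each equation entered separately, with u and v declared as input variables, and multiplication written explicitly; implicit products like "uv^2" instead of "u*v^2" are one common cause of the syntax error.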
thanks so much
Capture.JPG…