mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a “round-robin” approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps through a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
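A minimal sketch of the round-robin idea, assuming a custom GH code module could run something like plain Python; the node names and batch-file names below are hypothetical placeholders:

```python
# Round-robin assignment of per-iteration batch files to compute nodes.
# Node names and batch-file names are hypothetical placeholders.
from itertools import cycle

nodes = ["NODE-01", "NODE-02", "NODE-03"]
batch_files = ["iter_{:03d}.bat".format(i) for i in range(8)]

assignments = {}  # node -> list of batch files it should run
node_cycle = cycle(nodes)
for bat in batch_files:
    assignments.setdefault(next(node_cycle), []).append(bat)

for node, bats in assignments.items():
    print(node, bats)
```

Actually launching each batch file on its assigned node (e.g. via a shared queue folder that each node polls) is a separate step this sketch doesn't cover.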
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, material, etc.
Generate Radiance/E+ batch files for all iterations
Calculate
Calculate separate runs of Radiance/E+ in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to a single location on the server
Post Processing
Run a GH script from a single computer. Translate the .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.)
Collect the final data in a single location (an Excel document) to be read by Design Explorer or Pollination.
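The consolidation step above can be sketched in Python; the input/output names and values here are placeholders, and the "in:"/"out:" header prefixes are the convention Design Explorer expects for distinguishing inputs from result metrics (worth verifying against the current Design Explorer docs):

```python
# Consolidate per-iteration results into one flat CSV for Design Explorer.
# The parameter names, metric names, and values are hypothetical placeholders;
# in practice they would be parsed from the saved .ill/.idf result folders.
import csv

iterations = [
    {"in:WWR": 0.3, "in:ShadeDepth": 0.5, "out:sDA": 62.0, "out:ASE": 8.0},
    {"in:WWR": 0.5, "in:ShadeDepth": 0.3, "out:sDA": 74.0, "out:ASE": 15.0},
]

with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(iterations[0].keys()))
    writer.writeheader()
    writer.writerows(iterations)
```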
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only, Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer-cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
Parametrica. With Grasshopper you can manage projects that are complex in form and organization with a single tool, from object design, to architectural space, to urban planning. Grasshopper is open-source software, continuously updated by its users; soon you too could contribute to its development! Saturday, May 11, 2013. Duration: 6 hours, from 10:00 to 17:00, at STUDIO REMODESIGN (via dei marsi n° 41). To book, call 3498381249, or send an email to contact@ivoambrosi.it. Visit the site: www.ivoambrosi.it…
hilst settings concern only the currently selected instance.
For instance, assume that you are in the Bermuda Shorts business and you want various ideas for a new ad campaign:
Or assume that the 4 horsemen want you to quickly present some concept proposals related to a terminal event they have in mind:
…
le with you.
I am trying to implement the minimal-path algorithm of the Steiner tree in Python. The approach would be as follows. First, I need to create a cube of any dimension.
Then I need to specify one origin point, say A, and one destination point, say B.
Now, for these points A and B, I need to create a machine-generated network that will automatically route from A to B.
The angle will be constant, i.e. 120°, the length can be a variable, and the nodes will be triangular (Steiner tree); using these constraints it will create a network.
Next, I should iterate the program so that I can specify further points, say A1 and B1, and so on. The program will contain a limit constraint at which it exits the iteration loop and starts a new loop, forming the network.
In this way I will get a dense network of 120° branches.
The branching gets denser the moment I add more source and destination points.
There can be 100 iterations to get from A to B, but the algorithm chooses the one following the minimal path.
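As a starting point, the "choose the minimal path among many candidate routes" part can be sketched with a plain Dijkstra search in Python. The toy graph below is hypothetical, and generating the actual 120° Steiner branching (placing the Steiner points) would need additional geometric construction on top of this:

```python
# Dijkstra's shortest path on a weighted graph: among all routes from a
# source to a destination, return the one with minimal total length.
import heapq

def shortest_path(graph, start, goal):
    """graph: {node: [(neighbor, length), ...]} -> (total_length, path)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (dist + length, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical toy network between A and B with two intermediate nodes.
graph = {
    "A": [("S1", 1.0), ("S2", 2.5)],
    "S1": [("S2", 1.0), ("B", 3.0)],
    "S2": [("B", 1.0)],
}
print(shortest_path(graph, "A", "B"))  # -> (3.0, ['A', 'S1', 'S2', 'B'])
```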
I would be highly thankful if you could share the Python syntax and Grasshopper definition (see the attached Capture.JPG) for this.
Thank you for your time in advance
I would be highly grateful if you help me through
warm regards
Arya
Attachments: 12.gif, Shortest%20path%20algorithm.gh
Grasshopper. So, I once made an attempt to bind MS SQL Server in order to get frozen definitions at certain states, to avoid managing baked objects in Rhino, and also to be able to retain whole results without using the GH state manager, which rebuilds everything.
But at that time GH's VB.Net component didn't properly read referenced DLLs, and I had forgotten about it since then.
At first, I was surprised by Slingshot's extensive interface: I still had in mind my own old project, a tool that would have acted at the level of Rhino's geometry objects, auto-creating the needed tables.
The db would have consisted of a main table holding the objects' ID and name, and related tables containing the necessary information about the main objects.
For example, a Brep is made of such-and-such underlying objects, passed to their respective tables, according to the GH object definition layout (just the way they are written in the XML schema).
Then, on the db, query an object by name and retrieve the whole object or its underlying objects (e.g. at the bounding-curves level, or the points level for a Brep).
With Slingshot, I made a few attempts to cheat GH with BLOB data fields, but there was no way to get a whole object. It seems that GH simply provides an object.toString ... and GH is definitely not designed to provide persistence outside of Rhino. If I have some spare time, I will try to extract
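A minimal sketch of that layout, using SQLite for brevity; the table and column names, and the idea of storing a curve as ordered points, are illustrative assumptions, not Slingshot's actual schema:

```python
# Main table holds object ID and name; a related table holds the underlying
# geometry (here, the ordered points of a curve). Schema is illustrative only.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE objects (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE points (
    object_id INTEGER REFERENCES objects(id),
    idx INTEGER, x REAL, y REAL, z REAL)""")

cur.execute("INSERT INTO objects (id, name) VALUES (1, 'crv_01')")
cur.executemany(
    "INSERT INTO points VALUES (1, ?, ?, ?, ?)",
    [(0, 0.0, 0.0, 0.0), (1, 5.0, 0.0, 0.0), (2, 5.0, 5.0, 0.0)])

# Query an object by name and retrieve its underlying points.
rows = cur.execute("""SELECT p.x, p.y, p.z FROM points p
    JOIN objects o ON o.id = p.object_id
    WHERE o.name = 'crv_01' ORDER BY p.idx""").fetchall()
print(rows)
```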
About points and colors, I am now simply using a single field of CHAR(as large as needed...), since GH parses a String into every Point (or Vector or Color) input of any component.
I do so because it takes less to display on the canvas...
Whatever I wrote before, I really like your conception, as opened to relational interactions between ...whatever you need or dream of !
One last thing: GH can't open the definition file "Genome_DB_Template.gh" that I've downloaded from your site: http://slingshot-dev.wikidot.com/database-genome. I was expecting to learn a lot from your very smart stuff! (I am running GH 08.00.13 and Slingshot 0.7.2.0)
Slingshot is running great, opened to any use...Thanks again.
Best,
Stan
…
…in the Grasshopper window, at the top, alongside the other component sets such as 'Params', 'Maths', etc.
It is an experiment that tries to broaden, in some way, the scope of use of Grasshopper.
As we know, Grasshopper was born to enable the parametric use of Rhino. Grasshopper definitions let you record the steps needed to build objects, and also vary the data used by the definition, for example geometric objects, lengths, angles, etc.
When we change the values used by the definition, Grasshopper automatically recalculates everything and shows us a preview of the result.
At that point, if the result is satisfactory, we can tell Grasshopper to insert the objects in question into the Rhino document, so that we see them appear in the viewports as true Rhino objects.
This way of working has been a great success among Rhino users, making it much easier to build objects when you need to proceed by trial and error, checking the result before settling on the final form.
Grasshopper's success, however, has also shown how convenient it is to define construction procedures graphically, and more generally to drive Rhino through components, for example the sliders, which I suppose all of us would like to have available even when using Rhino in the classic way, via buttons and commands.
So over time more and more Grasshopper add-ons have appeared that perform specific operations, or use Grasshopper in areas beyond the original concept of 'programmable History'. Following this trend, edoc tries to build components that operate directly on Rhino objects, i.e. the curves, surfaces, layers, etc. belonging to the Rhino document we are working on. The idea is to make Grasshopper's convenient user interface available for operations that are usually carried out in the traditional way, with buttons and commands, or via scripts.
As already said, it is an experiment. Components are born, die, and change very often, in the attempt to understand what may be useful and what may or may not work.
Bug reports, suggestions, remarks, etc. are welcome.
If some kind soul would translate this presentation, we'll put up an equestrian monument in their honor!
thanks, and sorry
gg
…
I've used the distance from the center of each face to a fixed point to determine the offset value for that face.
Try this for now:
- enable the component that is disabled at the start;
- the curve offset component doesn't work well; tomorrow I'll see if I can create a better one;
- bake the resulting brep and convert it into a mesh in Rhino;
- for the thickness, do a solid offset of the mesh in Rhino as the last step; it just works better.…
n account of the position of the sun and weather cannot be expressed in terms of a single set of luminous intensity values (which is what IES files do).
With regards to your example files, I agree with Chris. The primary reason for the low illuminance levels is that the light bounces are getting lost in the tube. Have you checked with the manufacturer/distributor whether the IES file should be located inside the tube rather than flush with the ceiling? Physically modelling such tubes in lighting software like Radiance (which is what HB uses) or AGI32 is a fairly expensive proposition. This is one of the reasons why manufacturers provide photometric data for such devices (however simplistic that data might be).
The candela multiplier increases or decreases the luminous intensity values, so it will have a direct impact on the calculation. The primary reason for having that input was to enable users to do some testing with different lamp types and environmental factors such as dirt depreciation. You need not change it for your simulation. Assuming that the IES file is inside the tube, in order to make this calculation work inside HB you'd have to crank up the calculation settings to a very high level (start with -ab 10 -ad 4096).
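For reference, a tiny helper for assembling those ambient settings into the flag string a Radiance call (rtrace/rpict) would receive; this is an illustrative sketch, not Honeybee's API, and the -aa/-ar defaults are placeholder values to tune per scene:

```python
# Assemble the high-quality ambient settings suggested above ("-ab 10
# -ad 4096") into a Radiance flag string. Helper and the -aa/-ar defaults
# are illustrative placeholders, not Honeybee's actual API or defaults.
def ambient_flags(ab=10, ad=4096, aa=0.1, ar=256):
    """Format ambient bounces/divisions/accuracy/resolution as CLI flags."""
    parts = {"-ab": ab, "-ad": ad, "-aa": aa, "-ar": ar}
    return " ".join("{} {}".format(k, v) for k, v in parts.items())

print(ambient_flags())  # -ab 10 -ad 4096 -aa 0.1 -ar 256
```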
Finally, due to shortcomings in the annual simulation software (Daysim), IES files will not work directly with annual calculations. However, there is a fairly easy workaround for that issue. In case you are planning to run annual calculations with IES files, please let us know here.
Sarith…
me research involving shades and solar radiation, and I need the sun's path through the entire year to fully optimize the design. So far I've been able to simulate what I want by having my shading devices follow a mock solar orbit around them. What I need now is to use a model that simulates solar paths, use it as an attractor point, and have my shading surfaces follow it, pretty much like what I am doing right now (or so I think)
Here's where my questions come around:
I remember finding somewhere on the internet a definition that simulates the sun's path through the year; I think I can find it again and use it for my purposes. I could just run the GH definition, bake the geometry, upload it to Ecotect, run the analysis to get the data, keep working from that, then feed the geometry back to Ecotect, ad nauseam. However, I think that is a very slow process.
Is there a way I can run an Ecotect plug-in of sorts within GH, so that I can get my data IN Grasshopper and model accordingly?
Does that make sense?
Thanks a lot for any input.…
Added by Antonio Tamez at 3:40am on October 24, 2011
s for the sunlight hours analysis.
I'm producing BRE Annual Probable Sunlight Hours calculations and so to match the BRE approach, I'm using 100 sun vectors, each representing 1% of probable sunlight hours. I could use the Sunpath and Analysis Period components to produce sun positions for the whole year, but this gives results that do not fully reflect the BRE methodology - which is important here. I'm detailing this just to clarify that this isn't a full annual calc of 8760 hours for 350 surfaces.
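The bookkeeping behind the 100-vector approach is simple: each unobstructed sun vector contributes 1% of annual probable sunlight hours. A sketch, with the obstruction test stubbed out as hypothetical results:

```python
# APSH bookkeeping for the 100-vector BRE method described above: each
# unobstructed sun vector contributes 1% of probable sunlight hours.
# The obstruction test against the scene geometry is stubbed here as a
# list of hypothetical booleans; a real version would ray-test each vector.
unobstructed = [True] * 63 + [False] * 37  # hypothetical results, 100 vectors

apsh = 100.0 * sum(unobstructed) / len(unobstructed)
print(apsh)  # 63.0 -> 63% of annual probable sunlight hours
```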
Anyway, when I run the calc it takes about an hour, but the Sunlight Hours component itself reports a calculation time of 3 seconds! Does this mean the rest of the time is spent prepping the brep geometry? If so, is there a reason why this is so much slower than using a view-of-sky recipe and exporting to Radiance? For the same project, I completed a view-of-sky calculation which, based on the number of test points and the -ad setting, traced about 5.25 billion rays, so I understand why that took an hour.
Any thoughts as to why the sunlight hours calc seems to take so long?
thanks
Nick
…