st sampled into data trees (if not, we must "add" them "manually" == code: get this item from Rhino and put it into the collection).
2. Then we must perform some kind of selection(s) on a per-item basis, and THAT is in 99% of cases "manual" (== code), or on a per-global basis (hard or soft clusters et al. == code). If clusters are hierarchical and some kind of dendrogram is required ... this obviously means ... er ... more code.
3. To do item 2 we use some kind of input by means of sliders (say, pairs of two: for branches and items), and therefore their values MAY cause slider-control issues (== code). For instance: IF this slider yields event x > do this and that to some other sliders.
4. Then perform the "histogram" required and obviously treat it as just a variant (i.e. a possible solution out of a given collection, which is variable), meaning ways to "store" it into parameter(s) (as persistent data). This also requires code.
In a nutshell (and oversimplified): given a collection of "shapes", pick some, make the histogram, store the result (or do something with it and store the outcome as well), recall some other variant for any reason, modify it, store it ... and then repeat until the end of time (or worse: until you are out of espresso).
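To give a taste of points 1-3, a rough sketch of a C# scripting component that picks one item per branch/item slider pair could look like the following (names hypothetical; the real demo is another animal):

// Sketch: pick one "shape" from a DataTree via two sliders (branch, item).
// Indices are clamped so stray slider values can't throw (the point 3 issue).
private void RunScript(DataTree<Brep> shapes, int branch, int item, ref object A)
{
  if (shapes.BranchCount == 0) return; // nothing sampled into the tree yet
  branch = Math.Max(0, Math.Min(branch, shapes.BranchCount - 1));
  List<Brep> twig = shapes.Branch(branch);
  if (twig.Count == 0) return;
  item = Math.Max(0, Math.Min(item, twig.Count - 1));
  A = twig[item]; // the picked shape; persisting variants is more code still
}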
As I said: NOT a task for a novice AND NOT a task for someone not familiar with code matters (but I guess that you qualify in both areas, he he).
I do this type of thing day in, day out (but for real-life AEC purposes), therefore I could make a "simple demo" (add some "" more) but ... well ... you are warned, he he.
But in case you take the wrong decision (you are warned), we must use Skype a bit.…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast; the trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps with a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
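For what it's worth, a minimal sketch of the round-robin idea (node share paths and folders are hypothetical; this is the concept, not a tested implementation):

// Sketch: round-robin assignment of iteration batch files to network nodes.
// Assumes each node polls its share for .bat files to execute.
using System;
using System.IO;

class RoundRobinDispatch
{
  static void Main()
  {
    string[] nodes = { @"\\node01\jobs", @"\\node02\jobs", @"\\node03\jobs" }; // hypothetical shares
    string[] batchFiles = Directory.GetFiles(@"C:\sim\batch", "*.bat");        // one file per iteration

    for (int i = 0; i < batchFiles.Length; i++)
    {
      // Iteration i goes to node (i mod nodeCount).
      string target = Path.Combine(nodes[i % nodes.Length], Path.GetFileName(batchFiles[i]));
      File.Copy(batchFiles[i], target, true);
    }
    Console.WriteLine("Dispatched {0} batch files to {1} nodes.", batchFiles.Length, nodes.Length);
  }
}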
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, material, etc.
Generate radiance/ E+ batch files for all iterations
Calculate
Calc separate runs of Radiance/E+ in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to a single location on the server
Post Processing
Run a GH script from a single computer. Translate .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.) - see the DA sketch after this section.
Collect final data in a single location (Excel document) to be read by Design Explorer or Pollination.
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
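To make the post-processing step concrete, here is a rough sketch of computing DA from an .ill file, assuming the Daysim-style layout of month, day, hour followed by one illuminance value per sensor per row (path, threshold, and names are placeholders; a real run would filter to the occupancy schedule before counting hours):

// Sketch: Daylight Autonomy per sensor from a Daysim-style .ill file.
using System;
using System.IO;
using System.Linq;

class DaFromIll
{
  static void Main()
  {
    const double threshold = 300.0; // lux target, e.g. DA300
    double[] hits = null;
    int hours = 0;

    foreach (string line in File.ReadLines(@"C:\sim\results\iteration_001.ill"))
    {
      double[] vals = line.Split(new[] { ' ', '\t' },
        StringSplitOptions.RemoveEmptyEntries).Select(double.Parse).ToArray();
      double[] ill = vals.Skip(3).ToArray(); // drop month/day/hour columns
      if (hits == null) hits = new double[ill.Length];
      for (int s = 0; s < ill.Length; s++)
        if (ill[s] >= threshold) hits[s] += 1;
      hours++;
    }

    if (hits == null) return; // empty file
    for (int s = 0; s < hits.Length; s++)
      Console.WriteLine("sensor {0}: DA = {1:P1}", s, hits[s] / hours); // fraction of hours >= threshold
  }
}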
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that optimized matrix-based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Would that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer-cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
even (0, 2, 4) then that means the point either never hit it, or went in and out again, meaning it's outside. If it hits an odd number of times, then it must have come from within originally.
The method implements this approach using the mesh bounding box, striking a polyline from your test point along a vector defined by the upper-right corner of the bounding box + a vector of (100,100,100). In the case of your failing points, the ray strikes an edge very precisely, which gets counted as 2 hits instead of 1 (as it should be), so the test wrongly returns false:
Your best bet is probably to roll your own implementation that tests multiple vectors:
private void RunScript(List<Point3d> P, Mesh M, ref object A, ref object B, ref object C)
{
  BoundingBox bb = M.GetBoundingBox(false);
  List<bool> inside = new List<bool>();
  // Trees to inspect the raw intersection points per test point.
  DataTree<Point3d> checkA = new DataTree<Point3d>();
  DataTree<Point3d> checkB = new DataTree<Point3d>();
  for (int i = 0; i < P.Count; i++)
  {
    // Two rays per point along different vectors, so one edge-grazing hit cannot poison the parity count.
    Polyline a = new Polyline();
    Polyline b = new Polyline();
    a.Add(P[i]);
    b.Add(P[i]);
    a.Add(bb.Max + new Vector3d(100, 100, 100));
    b.Add(bb.Max + new Vector3d(100, 150, 150));
    int[] fa, fb;
    Point3d[] xa = Rhino.Geometry.Intersect.Intersection.MeshPolyline(M, new PolylineCurve(a), out fa);
    Point3d[] xb = Rhino.Geometry.Intersect.Intersection.MeshPolyline(M, new PolylineCurve(b), out fb);
    // An odd crossing count on either ray means the point is inside.
    inside.Add(xa.Length % 2 == 1 || xb.Length % 2 == 1);
    checkA.AddRange(xa, new GH_Path(i));
    checkB.AddRange(xb, new GH_Path(i));
  }
  A = inside;
  B = checkA; C = checkB;
}
…
Added by David Stasiuk at 10:20am on October 10, 2017
till quite rough.
I went through your attached log, but it seems to be a successful run; perhaps the error log wasn't attached. In any case, I believe we have identified the issue. The goal of the "update fvSchemes" component was to apply schemes to finalized meshes in an automatic way. While this is useful for new users, it is also a dangerous thing to do in CFD studies.
The component works by relating mesh quality to mesh non-orthogonality, which the checkMesh component reports. While non-orthogonality is one of the important criteria of mesh quality, it presents difficulties on some kinds of meshes, especially simple cases like the ones BF has been meshing so far.
The example case of simple box buildings in a wind tunnel above, for instance, will appear as a good-quality case even for the lowest of cell-count meshes, simply because the geometry is orthogonal. That means checkMesh will probably report low values (imagine that an empty blockMesh of 10 m blocks has a non-orthogonality of 0), which in turn means that higher-order schemes might be paired with actually low-quality meshes. This, I believe, is what is causing the problems.
I posted a possible solution here: https://github.com/mostaphaRoudsari/Butterfly/issues/57. The idea is that Butterfly provides additional options to users, enabling them to choose between first-order (faster and more robust, but less accurate) and second-order (slower and less robust, but more accurate) schemes depending on mesh quality, stage of assessment, etc. In cases like the above, a first-order scheme might be the better option for this mesh quality. To test this, I am attaching an fvSchemes file you can use by replacing yours in the /system folder of the case.
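For reference, a minimal first-order fvSchemes of the kind I mean looks something like this (the exact div entries depend on the turbulence model; treat this as a sketch rather than the attached file itself):

ddtSchemes           { default steadyState; }
gradSchemes          { default cellLimited Gauss linear 1; }
divSchemes
{
    default                       none;
    div(phi,U)                    bounded Gauss upwind;
    div(phi,k)                    bounded Gauss upwind;
    div(phi,epsilon)              bounded Gauss upwind;
    div((nuEff*dev2(T(grad(U))))) Gauss linear;
}
laplacianSchemes     { default Gauss linear limited 0.333; }
interpolationSchemes { default linear; }
snGradSchemes        { default limited 0.333; }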
As a note, however, I would like to stress that there is only so much that a tool like Butterfly can provide in this area. Meshing is a complicated and demanding part of the process, involving a lot of trial and error. Sometimes the problem is just the mesh and not the solution options (GIGO holds true in CFD as well). It does, however, get easier with experience. The safe advice is the simplest one: when changing solution options doesn't help, refine the mesh and run again.
Kind regards,
Theodore.…
basic and organic modeling principles in Rhinoceros are taught. In Grasshopper, the principles of parametrization, panelization, and analysis are studied, as well as the digital fabrication process for laser-cutting and CNC machinery.
ONE single advance payment: $5,000.00
Deferred payments: $5,500.00*
*Reserve your place with a 50% deposit
Monday to Friday, 10 am to 6 pm
July 23 to 27, 2012
DURATION: 40 HOURS
SESSIONS: 5 OF 8 HOURS
or info@dimensiontallerdigital.com
For information: 55 (50 16 0634), Mayri Gallegos (or mobile 55 28 85 24 73)
Includes material for digital cutting.…
Dim val As Integer = 1
Dim lst As New List(Of Integer)
For i As Integer = 0 To 9
val *= 2
lst.Add(val)
Next
Since val is a ValueType, when we assign it to the list we actually put a copy of val into the list. Thus, the list contains the following memory layout:
[0] = 2
[1] = 4
[2] = 8
[3] = 16
[4] = 32
[5] = 64
[6] = 128
[7] = 256
[8] = 512
[9] = 1024
Now let's assume we do the same, but with OnLines:
Dim ln As New OnLine(A, B)
Dim lst As New List(Of OnLine)
For i As Integer = 0 To 9
ln.Transform(xform)
lst.Add(ln)
Next
When we declare ln on the first line, it is assigned an address in memory - say, "24 Bell Ave." Then we modify that one line over and over, and keep adding the same address to lst. Thus, the memory layout of lst is now:
[0] = "24 Bell Ave."
[1] = "24 Bell Ave."
[2] = "24 Bell Ave."
[3] = "24 Bell Ave."
[4] = "24 Bell Ave."
[5] = "24 Bell Ave."
[6] = "24 Bell Ave."
[7] = "24 Bell Ave."
[8] = "24 Bell Ave."
[9] = "24 Bell Ave."
To do this properly, we need to create a unique line for every element in lst:
Dim lst As New List(Of OnLine)
For i As Integer = 0 To 9
Dim ln As New OnLine(A, B)
ln.Transform(xform)
lst.Add(ln)
Next
Now, ln is constructed not just once, but on every iteration of the loop. And every time it is constructed, a new piece of memory is reserved for it and a new address is created. So now the list memory layout is:
[0] = "24 Bell Ave."
[1] = "12 Pike St."
[2] = "377 The Pines"
[3] = "3670 Woodland Park Ave."
[4] = "99 Zoo Ln."
[5] = "13a District Rd."
[6] = "2 Penny Lane"
[7] = "10 Broadway"
[8] = "225 Franklin Ave."
[9] = "420 Paper St."
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 6:26am on September 9, 2010
precise) that unfortunately has more than one member of staff. This means that I pay the bills (unfortunate to the max). The practice is vertical, meaning no Structural/HVAC etc. services.
2. AEC Projects are made by teams. Period.
3. Teams are organized with some sort of hierarchy. Period.
4. On each team there's always one leader. Teams can be grouped into team groups - call them clusters (kinda like a List of List of ...).
5. All cluster leaders report to the supreme human being (yours truly). Leaders' heads are always at my disposal (it's fun to decapitate someone: I do this every Monday).
6. AEC projects are made with 1% idea(s) and 99% of what we call "sludge" (this is not my job: I'm the One, he he).
7. You can't steer any boat if you don't know each @@$#@ nut and bolt. In the past there was a naive approach on that matter (it ruined automotive companies, potato chip makers, software vendors, political systems, secret service agencies ... etc. etc.).
8. Efficiency is above all (even above tax-free cash).
9. You can't do ANY real-life AEC thing with what GH has to offer (nor is Rhino an AEC BIM app - it never will be). You simply use GH as a supplement to Generative Components (and/or as a stand-alone because it's good fun). There's nothing that GH does (I'm speaking solely of AEC, as always) that can't be done with Generative Components.
10. I've done so far 257 projects (a "bit" bigger than a house, he he). Let's say about 51,427 drawings (master, master details, details) and 78,956 lines of text (specs, cost estimates, space schedules, supplier lists, contracts, cats and 1 dog).
If you combine all the above you'll have the answer (i.e. why I use solely - if possible - code and not GH components). If you can't combine them I'm sorry.
PS: C# is the absolute standard (never judge a language as a "stand-alone" thingy).
best, Peter (Prince of Cynics)
…
he past Architecture was the art of sketching: some "idea" with pencils/crayons + vellum paper (or with some computer) > then "others" trying to make it happen. This is generally known as the top-to-bottom approach. Naive and dangerous (for the reputation/reception/acceptance of Architects/Architecture) to the max.
2. These days we work both ways: whilst some work on some "idea" (call it: the "assembly"), others (in sync mode) resolve the nuts and bolts of that "idea" - up to 1:1 level of detail (call them the "components"). This is the bottom-to-top approach. Make it your way: NEVER proceed with something unless EVERY bit of it is well addressed (with at least 3-5 ways).
3. The emergence of the parametric (GH, Generative Components, Dynamo) in AEC (an approach well known in the MCAD world for many years, mind) made things ... worse: the tremendous topology-exploitation capabilities blinded people's minds, and they are completely sucked up by the forest, forgetting/bypassing the critical fact that there's no forest without trees.
4. That's expected: it's in human nature to follow/admire the bling/glam and omit/skip the humble. It's the easy way, you know, he he.
5. The tremendous growth of countries the likes of UAE/China/Russia made AEC things ... even worse: lots of cash available > make us some encomium to Vanity, forget Modesty. You can replace "Vanity" with "New Frontiers" ... if you like fooling yourself.
Some Academics are not capable of understanding all that: if they could, they would potentially operate in the field (where the color pink is rarely used) and not in fishbowl(s). Some Academics believe that an "idea" is 99% of the whole whilst in fact it is less than 1%. But on the other hand anyone can do Architecture (even Architects, he he).
That said (Vanity crisis), do you want some other "component" options for this case of yours? (starting with the "some dollars more" option and ending with the mortgage-the-house/sell-wife+kids option).
take care (and kill them all)…
s (and God knows how many in the next case); that's why (other than the colossal amount of time - for no reason - required to create them ... try to bake them and measure the file size).
3. Most non-pros believe that the thing that matters most in engineering is the geometry. Nothing could be further from the truth. It's about 5% of it (complex real-life cases etc. etc. - but this one is very simple geometry-wise and not that simple with regard to the whole "ideal" AND effective strategy required).
4. So I've included in the attached Rhino file a small portion of your frames as input for the second C#: CAREFULLY study what it does and, most importantly, why: it gives you a clear indication of why you should attack this on an assembly/component basis by using instance definitions INSTEAD of recreating 14K++ "solids". The difference in performance is COLOSSAL, not to mention the baked Rhino file size.
5. Using instances is IMPOSSIBLE without code (as is the case in 99% of real-life engineering tasks) - see the sketch after this list.
6. Geometry was never an issue on that one (it's 5% of the whole puzzle at most, no matter what requirements you may have).
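To illustrate the instance-definition point (a rough sketch only - names and the one-Brep "component" are hypothetical, and the attached C# does considerably more):

// Sketch: define the component geometry ONCE, then place transformed instances,
// instead of recreating thousands of solids. Names/geometry are hypothetical.
private void RunScript(Brep component, List<Plane> frames, ref object A)
{
  RhinoDoc doc = RhinoDoc.ActiveDoc;

  // One definition holds the real geometry (Add returns -1 if, e.g., the name is taken).
  int defIndex = doc.InstanceDefinitions.Add(
    "frameComponent", "demo definition", Point3d.Origin, new GeometryBase[] { component });
  if (defIndex < 0) return;

  // Each placement is just a transform: tiny memory and baked-file footprint.
  List<Guid> ids = new List<Guid>();
  foreach (Plane frame in frames)
  {
    Transform xform = Transform.PlaneToPlane(Plane.WorldXY, frame);
    ids.Add(doc.Objects.AddInstanceObject(defIndex, xform));
  }
  A = ids;
}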
Bad news:
1. Zoom Extents doesn't work after importing your data (maybe an NVidia Quadro K4200 driver issue - who knows?): use the saved views stored in the file.
So ... the choice is yours, best, Lord of Darkness…
ponents, among other functionalities, is significantly widening the relevance of the toolset.
Meanwhile, having used the tools for some time now and gone through the forum, in my opinion a few critical system controls are still missing - unless I'm missing some understanding.
In order to really make the hourly energy analysis valuable in early massing studies etc., the treatment of indoor climate needs to be more detailed. The HVAC capacities, max. air rate, and min. inlet temperature should be within comfortable ranges and hard-sized by user input to reduce internal draft problems. If these are not considered, I find that the analysis could demonstrate good energy behaviour and reasonable operative temperatures but in reality produce a bad indoor environment - and when "rectified" at a later stage, the energy consumption will increase.
I would like to know how it is possible in HB to set up an HVAC system with these ventilation controls and an "unlimited" convective/radiant heating system, and how to deal with the issues mentioned below. The input parameters exist in the components, but I can't seem to get the right system behaviour.
In the attached file I have gone through 4 scenarios, each with separate issues in setting up the system (as no template apparently supports the combined setup, the heating system is simulated using an inlet temperature of 99 degrees); a sketch of the limits I mean follows the list:
HVAC system: "ideal air loads" - issue: no hard-sized air rate, no cooling supply air temperature
HVAC system: "VAV w. reheat" - issue: no regulation of air rate, no use of the input heat supply temperature in heating mode
HVAC system: "idealairloadsystem" using "additionstrings" -> issue with duplicate zone names
HVAC system: "idealairloadsystem" using "additionstrings" on multiple zones -> issue with duplicate zone names
Thanks a lot!
Jon…