I've used the distance from the center of each face to a fixed point to determine the offset value for that face.
Try it like this:
- enable the component that is disabled at the start;
- the Offset Curve component doesn't work perfectly; tomorrow I'll see if I can build a better one;
- bake the resulting brep and convert it into a mesh in Rhino;
- for the thickness, do a solid offset of the mesh in Rhino as the last step; it just works better.…
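The attractor logic described above (offset driven by distance to a fixed point) can be sketched in plain Python. This is a minimal sketch only: the function name, coordinates, and remap range are illustrative assumptions, not part of the original definition.

```python
import math

def offset_from_attractor(face_center, attractor, d_min, d_max, off_min, off_max):
    """Map the distance from a face center to a fixed attractor point
    onto an offset value: faces near the attractor get off_min, far ones off_max."""
    d = math.dist(face_center, attractor)
    # Clamp the distance into the expected range, then remap linearly.
    t = (min(max(d, d_min), d_max) - d_min) / (d_max - d_min)
    return off_min + t * (off_max - off_min)

# A face center 5 units from the attractor, with distances remapped from
# the 0-10 range onto offsets of 0.1-1.0 -> halfway, i.e. about 0.55.
print(offset_from_attractor((3.0, 4.0, 0.0), (0.0, 0.0, 0.0), 0.0, 10.0, 0.1, 1.0))
```

Inside Grasshopper, this linear remap is roughly what the Remap Numbers component does when fed the face-center-to-point distances.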
me research involving shades and solar radiation, and I need the sun's path through the entire year to fully optimize the design. So far I've been able to simulate what I want by having my shades follow a mock solar orbit around them. What I need now is to use a model that simulates solar paths, use it as an attractor point, and have my shading surfaces follow it, much like what I am doing right now (or so I think).
Here's where my questions come around:
I remember finding somewhere on the internet a definition that simulates the sun's path through the year; I think I can find it again and use it for my purposes. I could just run the GH definition, bake the geometry, upload it to Ecotect, run the analysis to get the data, keep working from that, then feed the geometry back to Ecotect, ad nauseam. However, I think that is a very slow process.
Is there a way I can run an Ecotect plug-in of sorts within GH, so that I can get my data IN Grasshopper and model accordingly?
Does that make sense?
Thanks a lot for any input.…
Added by Antonio Tamez at 3:40am on October 24, 2011
s for the sunlight hours analysis.
I'm producing BRE Annual Probable Sunlight Hours calculations, so to match the BRE approach I'm using 100 sun vectors, each representing 1% of probable sunlight hours. I could use the Sunpath and Analysis Period components to produce sun positions for the whole year, but this gives results that do not fully reflect the BRE methodology, which is important here. I'm detailing this just to clarify that this isn't a full annual calc of 8,760 hours for 350 surfaces.
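For context, the BRE-style counting described above reduces to a simple percentage. Here is a plain-Python sketch (the obstruction test is a stand-in stub, and the 1486-hour annual figure is an assumed example value, not taken from this thread):

```python
def probable_sunlight_hours(is_blocked, sun_vectors, annual_probable_hours):
    """BRE method: each of the sun vectors represents 1% of annual probable
    sunlight hours, so APSH = (unblocked vectors / total vectors) * annual hours."""
    unblocked = sum(1 for v in sun_vectors if not is_blocked(v))
    return unblocked / len(sun_vectors) * annual_probable_hours

# Toy example: 100 vectors, 25 of them blocked by an obstruction stub.
vectors = list(range(100))
blocked = set(range(25))
result = probable_sunlight_hours(lambda v: v in blocked, vectors, 1486.0)
print(result)  # 75% of the assumed annual probable hours
```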
Anyway, when I run the calc it takes about an hour, but the Sunlight Hours component itself reports a calculation time of 3 seconds! Does this mean that the rest of the time is spent prepping the brep geometry? If so, is there a reason why this is so much slower than using a view-of-sky recipe and exporting to Radiance? For the same project I completed a view-of-sky calculation, and based on the number of test points and the -ad setting it was computing about 5.25 billion rays, so I understand why that took an hour.
Any thoughts as to why the sunlight hours calc seems to take so long?
thanks
Nick
…
s topology gets pretty bad for use in CAD programs, since they are "in and out" at the same time and generate naked edges and non-manifold edges. The problem itself appears when I make an offset of the surfaces, which creates "bad objects" in Rhino. I'm using Mantis, a plugin for the Mathematica software, and a definition based on the Math Surfaces script from http://www.co-de-it.com/wordpress/code/grasshopper-code. Both give me errors. I have tried to merge with the normals flipped in the same model, but the error continues. If I do a split in Rhino, there is no problem creating a solid offset, but it's a completely different story if I make a Mirror. Can you help me with this complicated issue? Thank you.…
on 2: I think the reason to draw a fitness landscape is to highlight graphically the presence of local minima, even in a simple optimisation problem. In architectural terms, this means getting an idea of how many sub-optimal solutions there are in a problem, which helps while exploring conceptual design proposals.
Have a look at this very basic example (which I published with two colleagues on "Shell Structures for Architecture", chapter 18): a shell footbridge (24m x 4m footprint), which is generated by two parabolic section curves (the two apex heights are the two design variables). The maximum displacement of the structure under gravity load and self-weight is the objective function. Simple example, but several local minima and interesting shell forms (image below).
@AB,
The expression used by David on the Number of Samples input is a simple "x+1". By grafting the Divide Curve output, he got 81*81 lengths (6,561 values). You have to make sure that number is divisible by the number of samples. The second expression, used on the Length output, is only a scaling factor (my guess) to control the height of the fitness landscape drawing.
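The sample arithmetic above can be double-checked in a few lines of plain Python (this only verifies the counts; it does not reproduce the GH components):

```python
# Divide Curve with N segments returns N+1 division points per curve,
# which is why the Number of Samples input carries the "x+1" expression.
segments = 80
samples_per_curve = segments + 1   # the "x+1" expression -> 81
curves = 81                        # grafted Divide Curve output: 81 branches
total_values = curves * samples_per_curve
print(samples_per_curve, total_values)  # 81 6561

# The total must divide evenly by the number of samples so the fitness
# landscape can be laid out as a rectangular grid of rows.
assert total_values % samples_per_curve == 0
```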
Cheers…
ing illuminance and limiting exposure (lux hours). Hours with direct solar irradiance are likely to exceed the limiting illuminance thresholds, which range from 50 to 200 lux (as per Table 3.4 in CIE 157:2004). It makes sense to consider direct illuminance (an ab=0 simulation in Honeybee) separately from a normal illuminance calculation.
Assuming that the museum exhibits have low to high responsivity to light, an ideal solution would minimize direct sunlight. For daylight from the sky and reflected light, it might be enough to keep the illuminance levels below the recommended thresholds and then sum up lux-hours.
Daysim, the annual daylighting engine used by Honeybee and DIVA, is not very accurate for direct-sun calculations. You will get more accurate results if you run your analysis with Radiance directly.
Instead of considering the horizontal illuminance grids, one can create grids that correspond to the dimensions of the exhibits and then average those values. I think single points, as shown in your gh file, might not suffice. Calculating lux-hours is by far the simplest part of such a simulation: it only requires averaging those points, extracting them into an array, and then summing up that array.…
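The lux-hours bookkeeping described above can be sketched in plain Python. This is a minimal sketch with made-up illuminance values; real values would come out of the Radiance run.

```python
def lux_hours(hourly_grids, limit=None):
    """For each hour, average the exhibit grid's illuminance values;
    optionally skip hours whose average exceeds a limiting-illuminance
    threshold; sum the remaining averages into lux-hours (1 h per entry)."""
    total = 0.0
    for grid in hourly_grids:
        avg = sum(grid) / len(grid)
        if limit is None or avg <= limit:
            total += avg  # each grid represents one hour of exposure
    return total

# Toy example: three hourly results over a 4-point exhibit grid.
hours = [[100, 120, 110, 90], [300, 280, 310, 290], [50, 60, 55, 45]]
print(lux_hours(hours))       # 105 + 295 + 52.5 = 452.5 lux-hours
print(lux_hours(hours, 200))  # the 295-lux hour is excluded -> 157.5
```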
precise) that unfortunately has more than one staff member. This means that I pay the bills (unfortunate to the max). The practice is vertical, meaning no Structural/HVAC etc. services.
2. AEC Projects are made by teams. Period.
3. Teams are organized with some sort of hierarchy. Period.
4. On each team there's always one leader. Teams can be grouped into group teams - call them clusters (kinda like a List of List of ...).
5. All cluster leaders report to the supreme human being (yours truly). Leaders' heads are always at my disposal (it's fun to decapitate someone: I do this every Monday).
6. AEC projects are made with 1% idea(s) and 99% of what we call "sludge" (this is not my job: I'm the One, he he).
7. You can't steer any boat if you don't know each @@$#@ nut and bolt. In the past there was a naive approach to that matter (it ruined automotive companies, potato chip makers, software vendors, political systems, secret service agencies ... etc etc).
8. Efficiency is above all (even above tax-free cash).
9. You can't do ANY real-life AEC thing with what GH has to offer (nor is Rhino an AEC BIM app - it never will be). You simply use GH as a supplement to Generative Components (and/or standalone, because it's good fun). There's nothing that GH does (I'm speaking solely of AEC, as always) that can't be done with Generative Components.
10. So far I've done 257 projects (a "bit" bigger than a house, he he). Let's say about 51,427 drawings (master, master details, details) and 78,956 lines of text (specs, cost estimations, space schedules, supplier lists, contracts, cats and 1 dog).
If you combine all the above you'll have the answer (i.e. why I use solely - if possible - code and not GH components). If you can't combine them I'm sorry.
PS: C# is the absolute standard (never judge a language as a "stand-alone" thingy).
best, Peter (Prince of Cynics)
…
he past Architecture was the art of sketching: some "idea" with pencils/crayons + vellum paper (or with some computer) > then "others" trying to make this happen. This, in general, is known as the top-to-bottom approach. Naive and dangerous (for the reputation/reception/acceptance of Architects/Architecture) to the max.
2. These days we work both ways: whilst some work on some "idea" (call it the "assembly"), others (in sync mode) resolve the bits and nuts of that "idea" - up to a 1:1 level of detail (call them the "components"). This is the bottom-to-top approach. Make this your way: NEVER proceed with something unless EVERY bit of that something is well addressed (in at least 3-5 ways).
3. The emergence of the parametric (GH, Generative Components, Dynamo) in AEC (an approach well known in the MCAD world for many years, mind) made things ... worse: the tremendous topology-exploration capabilities blinded people's minds, and they are completely sucked in by the forest, forgetting/bypassing the critical fact that there's no forest without trees.
4. That's expected: it is in human nature to follow/admire the blink/glam and omit/skip the humble. It's the easy way, you know, he he.
5. The tremendous growth of countries the likes of the UAE/China/Russia made AEC things ... even worse: lots of cash available > write us some encomium to Vanity, forget Modesty. You can replace "Vanity" with "New Frontiers" ... if you like fooling yourself.
Some Academics are not capable of understanding all that: if they could, they would potentially operate in the field (where the pink color is rarely used) and not in fishbowl(s). Some Academics believe that an "idea" is 99% of the whole, whilst actually it is less than 1%. But on the other hand, anyone can do Architecture (even Architects, he he).
That said (Vanity crisis), do you want some other "component" options for this case of yours? (starting with the "some dollars more" option and ending with the mortgage-the-house/sell-wife+kids option).
take care (and kill them all)…
s (and God knows how many in the next case); that's why (other than the colossal amount of time (for no reason) required to create them ... try to bake them and measure the file size).
3. Most non-pros believe that the thing that matters most in engineering is the geometry. Nothing could be further from the truth. It's about 5% of it (in complex real-life cases etc etc - and this one is very simple geometry-wise, yet not that simple with regard to the whole "ideal" AND effective strategy required).
4. So I've included in the attached Rhino file a small portion of your frames as input for the second C#: CAREFULLY study what it does and, most importantly, why: it gives you a clear indication of why you should attack this on an assembly/component basis by using instance definitions INSTEAD of recreating 14K++ "solids". The difference in performance is COLOSSAL, not to mention the baked Rhino file size.
5. Using instances is IMPOSSIBLE without code (as is the case in 99% of real-life engineering tasks).
6. Geometry was never an issue in this one (it's at most 5% of the whole puzzle, no matter what requirements you may have).
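The performance argument for instance definitions can be illustrated with a toy size comparison in plain Python (the vertex count is an illustrative assumption, not Rhino's actual per-object footprint):

```python
# Baking 14,000 explicit "solids" stores the full geometry 14,000 times.
# An instance definition stores the geometry once, plus a 4x4 transform
# (16 numbers) per placement.
verts_per_solid = 500     # assumed vertex count for one frame solid
floats_per_vertex = 3     # x, y, z
placements = 14_000

explicit = placements * verts_per_solid * floats_per_vertex
instanced = verts_per_solid * floats_per_vertex + placements * 16

print(explicit, instanced, round(explicit / instanced))  # roughly 93x smaller
```

The exact ratio depends on the real geometry, but the point stands: the instanced representation grows with the number of placements times a constant-size transform, not times the full solid.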
Bad news:
1. Zoom Extents doesn't work after importing your data (maybe an NVidia Quadro K4200 driver issue - who knows?): use the saved views stored in the file.
So ...the choice is yours, best, Lord of Darkness…