ey provide all the means to what I try to achieve.
What I need is a fast (as possible) evaluation of passive heat/solar gain from a certain facade. I know my building can cool to a certain degree (let's say 80 W/m2 - for now let's forget other internal gains) and I want to be sure my facade is not letting excessive amounts of heat into the room/building. Normally I would run a full-blown simulation to count my overheating hours and thereby evaluate my facade. To speed up the process, the idea is simply to estimate overheating hours in a faster way. So what I am thinking is that excessive gains may be estimated by counting high-intensity irradiation patches in a critical sky component (or whatever such a thing would be called) that surpass my sensible cooling load. My hope is that the exposure of any facade visible to those sky patches would correlate closely with the number of overheating hours, if properly calibrated against a simulated model. However, I have no idea right now if this can be done.
Why do this? Speed, convenience, whole building thermal analyses.
@Chris and @Abraham The critical sky component is made with LB's radiation component, filtering the beam components with the highest effect from a yearly EPW file.
@Chris Conductive heat gains are also important, especially if the facade is badly insulated, so the next step is to filter the outdoor temperature in parallel with that critical sky component, do a static heat transfer analysis, and combine that with the effect of direct sun. Again, no idea if it works.
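To make the threshold idea concrete, here is a minimal C# script-component sketch (illustrative only, not a calibrated method): given hourly irradiance on the facade and hourly outdoor temperature, it counts the hours where a rough estimate of solar plus conductive gain exceeds the sensible cooling capacity. All the names (irradiance, tOut) and the SHGC / U-value / set-point numbers are assumptions.

// Count "critical" hours where estimated facade gains exceed the sensible cooling capacity.
// Assumed inputs: irradiance = hourly irradiance on the facade in W/m2 (e.g. from an LB
// radiation study), tOut = hourly outdoor dry-bulb temperature in C (from the EPW), both List<double>.
double shgc = 0.35;        // assumed solar heat gain coefficient of the glazing
double uValue = 1.2;       // assumed facade U-value in W/m2K
double tRoom = 25.0;       // assumed indoor set point in C
double coolingCap = 80.0;  // sensible cooling capacity in W/m2, as in the post
int criticalHours = 0;
for (int t = 0; t < irradiance.Count; t++)
{
  double solarGain = shgc * irradiance[t];            // transmitted solar gain, W/m2 of facade
  double conductiveGain = uValue * (tOut[t] - tRoom); // static conduction, W/m2 of facade
  // for simplicity this assumes 1 m2 of facade per m2 of floor; scale by the actual ratio
  if (solarGain + conductiveGain > coolingCap) criticalHours++;
}
A = criticalHours;  // output of the C# script component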
Hope it makes sense. I am a little embarrassed I drew you into this little experiment; this was not at all the point of the discussion. But now that we are into it, I would like to know what you think. If it works it's kinda neat, at least I think it is.
/K…
pts organized in a data tree without losing the data structure, to create a folding surface as per the attached image.
1. Replace items (to create a gradient) / Like the weight culling example.
Path {0} replace all indexes with a new value (a)
Path {1} replace 90% indexes with a new value (a)
Path {2} replace 80% indexes ...
2. Decrease value (a) in relation to path number
3. After the replacement above: for even path numbers {0,2,...} replace the items with a negative number
I did not find an easy way to create a data tree that would achieve the above inside GH.
Points 2 & 3 are easy but I could not find a simple solution for point 1.
At the moment the only way I found is to create the list manually in Excel and import/export it, or to create a list of indices for each path.
Any hint appreciated.
Maybe we need to wait for the number slider or path mapper to accept input or notation?
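In case a scripted route is acceptable, here is a minimal C# script-component sketch of one possible approach (all names are illustrative; x would be a DataTree<double> input with tree access, a the starting replacement value, and branch i is assumed to correspond to path {i}):

// For each branch: replace a shrinking fraction of items (100%, 90%, 80%, ...)
// with a value that decreases per path and is negated on even paths.
DataTree<double> result = new DataTree<double>();
for (int i = 0; i < x.BranchCount; i++)
{
  GH_Path path = x.Path(i);
  List<double> branch = new List<double>(x.Branch(i));
  double fraction = Math.Max(0.0, 1.0 - 0.1 * i);          // 100%, 90%, 80%, ... per path
  int howMany = (int)Math.Round(fraction * branch.Count);  // items to replace in this branch
  double value = a / (i + 1);                               // one possible decay with path number
  if (i % 2 == 0) value = -value;                           // even paths get a negative value
  for (int j = 0; j < howMany; j++) branch[j] = value;
  result.AddRange(branch, path);
}
A = result;  // same tree structure, replaced values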
best
Stephane
…
rld.wolfram.com/EnnepersMinimalSurface.html
When I type the equations for x, y, z it gives a syntax error, so I obviously do not understand how to construct an expression (screen capture attached).
Any help/explanation of using this function would be greatly appreciated
thanks so much
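For reference, one common parametrization of Enneper's surface (sign conventions and scaling vary between sources) can also be evaluated directly in a C# script component instead of fighting the expression syntax; the ranges and grid size below are just illustrative choices:

// One common parametrization of Enneper's minimal surface:
//   x = u - u^3/3 + u*v^2,  y = v - v^3/3 + v*u^2,  z = u^2 - v^2
// evaluated on a grid of u,v in [-2, 2].
var pts = new List<Point3d>();
int n = 40;
for (int i = 0; i <= n; i++)
{
  double u = -2.0 + 4.0 * i / n;
  for (int j = 0; j <= n; j++)
  {
    double v = -2.0 + 4.0 * j / n;
    pts.Add(new Point3d(u - u * u * u / 3.0 + u * v * v,
                        v - v * v * v / 3.0 + v * u * u,
                        u * u - v * v));
  }
}
A = pts;  // e.g. feed these into a Surface From Points component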
Capture.JPG…
precise) that unfortunately has more than one staff member. This means that I pay the bills (unfortunate to the max). The practice is vertical, meaning no Structural/HVAC etc. services.
2. AEC Projects are made by teams. Period.
3. Teams are organized with some sort of hierarchy. Period.
4. On each team there's always one leader. Teams can be assembled into groups of teams - call them clusters (kinda like a List of List of ...)
5. All cluster leaders report to the supreme human being (yours truly). Leaders' heads are always at my disposal (it's fun to decapitate someone: I do this every Monday).
6. AEC projects are made with 1% idea(s) and 99% of what we call "sludge" (this is not my job: I'm the One , he he).
7. You can't steer any boat if you don't know each @@$#@ nut and bolt. In the past there was a naive approach to that matter (it ruined automotive companies, potato chip makers, software vendors, political systems, secret service agencies ... etc etc).
8. Efficiency is above all (even above tax-free cash).
9. You can't do ANY AEC real-life thing with what GH has to offer (nor is Rhino an AEC BIM app - it never will be). You simply use GH as a supplement to Generative Components (and/or as a stand-alone because it's good fun). There's nothing that GH does (I'm speaking solely for AEC as always) that can't be done with Generative Components.
10. I've done so far 257 projects (a "bit" bigger than a house, he he). Let's say about 51427 drawings (master, master details, details) and 78956 lines of text (specs, cost estimations, space schedules, supplier lists, contracts, cats and 1 dog).
If you combine all the above you'll have the answer (i.e. why I use solely - if possible - code and not GH components). If you can't combine them I'm sorry.
PS: C# is the absolute standard (never judge a language as a "stand-alone" thingy).
best, Peter (Prince of Cynics)
…
he past Architecture was the art of sketching: some "idea" with pencils/crayons + vellum paper (or with some computer) > then "others" trying to make this happen. This, in general, is known as the top-to-bottom approach. Naive and dangerous (for the reputation/reception/acceptance of Architects/Architecture) to the max.
2. These days we work both ways: whilst some work on some "idea" (call it the "assembly"), others (in sync mode) resolve the bits and nuts of that "idea" - up to a 1:1 level of detail (call them the "components"). This is the bottom-to-top approach. Make this your way: NEVER proceed with something unless EVERY bit of that something is well addressed (in at least 3-5 ways).
3. The emergence of parametric tools (GH, Generative Components, Dynamo) in AEC (an approach well known in the MCAD world many years ago, mind) made things ... worse: the tremendous topology-exploitation capabilities blinded people's minds and they are completely sucked up by the forest, forgetting/bypassing the critical fact that there's no forest without trees.
4. That's expected: it is in human nature to follow/admire the bling/glam and omit/skip the humble. It's the easy way, you know, he he.
5. The tremendous growth of countries the likes of UAE/China/Russia made AEC things ... even worse: lots of cash available > make us some encomium to Vanity, forget Modesty. You can replace "Vanity" with "New Frontiers" ... if you like fooling yourself.
Some Academics are not capable of understanding all that: if they were, they would probably operate in the field (where the pink color is rarely used) and not in fishbowl(s). Some Academics believe that an "idea" is 99% of the whole whilst actually it is less than 1%. But on the other hand anyone can do Architecture (even Architects, he he).
That said (Vanity crisis), do you want some other "component" options for this case of yours? (starting with "some" dollars more and ending with the mortgage-the-house/sell-wife+kids option).
take care (and kill them all)…
s (and God knows how many in the next case) that's why (other than the colossal amount of time (for no reason) required for creating them ... try to bake them and measure the file size).
3. Most non-pros believe that the thing that matters most in engineering is the geometry. Nothing could be further from the truth. It's about 5% of it (complex real-life cases etc etc - but this one is very simple geometry-wise and not that simple with regard to the whole "ideal" AND effective strategy required).
4. So I've included in the attached Rhino file a small portion of your frames as input for the second C#: CAREFULLY study what it does and, most importantly, why: it gives you a clear indication of why you should attack this on an assembly/component basis using instance definitions INSTEAD of recreating 14++ K "solids". The difference in performance is COLOSSAL, not to mention the baked Rhino file size.
5. Using instances is IMPOSSIBLE without code (as is the case in 99% of real-life engineering tasks); see the sketch right after this list.
6. Geometry was never an issue on that one (it's 5% of the whole puzzle at most, no matter what requirements you may have).
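A bare-bones RhinoCommon sketch of the instance idea (not the attached C#; frameGeometry and placements are assumed placeholders): define the repeating frame geometry once as an instance definition, then add lightweight transformed instances instead of thousands of duplicated solids.

// Sketch only: define the module once, then place block instances (RhinoCommon).
var doc = Rhino.RhinoDoc.ActiveDoc;
// frameGeometry: IEnumerable<GeometryBase> with the breps/meshes of ONE frame module (assumed input)
int defIndex = doc.InstanceDefinitions.Add("FrameModule", "one frame, defined once",
                                           Rhino.Geometry.Point3d.Origin, frameGeometry);
// placements: one Rhino.Geometry.Transform per frame location/orientation (assumed input)
foreach (var xform in placements)
  doc.Objects.AddInstanceObject(defIndex, xform);   // a block instance, not a new solid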
Bad news:
1. Zoom Extents doesn't work after importing your data (maybe an NVidia Quadro K4200 driver issue - who knows?): use the stored saved views.
So ...the choice is yours, best, Lord of Darkness…
ponents, among other functionalities, is significantly widening the relevance of the toolset.
Meanwhile, having used the tools for some time now and gone through the forum, in my opinion a few critical system controls are still missing - unless I'm missing some understanding.
In order to really make the hourly energy analysis valuable in early massing studies etc., the consideration of indoor climate needs to be more detailed. The HVAC capacities, max. air rate and min. inlet temperature should be within comfortable ranges and hard-sized by user input to reduce internal draft problems. If this is not considered, I find that the analysis could demonstrate good energy behavior and a reasonable operative temperature but in reality cause a bad indoor environment - and when "rectified" at a later stage the energy consumption will increase.
I would like to know how it is possible in HB to set up an HVAC system with these ventilation controls and an "unlimited" convective/radiant heating system, and how to deal with the issues mentioned below. The input parameters exist in the components, but I can't seem to get the right system behaviour. (A rough sketch of the kind of ideal-loads override I have in mind follows the scenario list below.)
In the attached file I have gone through 4 scenarios, each with separate issues in setting up the system (as no template apparently supports the combined setup, the heating system is simulated using an inlet temperature of 99 degrees).
HVACSystem: "ideal air loads" - Issue: no hardsized airrate, no cooling supply air temperature
HVACSystem: "VAV w. reheat" - Issue: no regulation of airrate, no use of input heat supply temperature in heating mode
HVACSystem: "idealairloadsystem" using "additionstrings" -> issue with duplicate zone names
HVACSystem: "idealairloadsystem" using "additionstrings" on multiple zones -> issue with duplicate zone names
Thanks a lot!
Jon…
ace Syntax." eCAADe 2013 18 (2013): 357.
http://www.sss9.or.kr/paperpdf/mmd/sss9_2013_ref048_p.pdf
The measure Entropy is newer. I hereby explain it (from my PhD dissertation):
Entropy values, as described in (Hillier & Hanson, The Social Logic of Space, 1984) and specified in (Turner, A., "Depthmap: A Program to Perform Visibility Graph Analysis", 2007), intuitively describe the difficulty of getting to other spaces from a certain space. In other words, the higher the entropy value, the more difficult it is to reach other spaces from that space, and vice-versa. We compute the spatial entropy $s_v$ of the node $v$ using the point depth set:

$s_v = -\sum_{d=1}^{d_{\max}} p_d \log_2 p_d$   (11)

"The term $d_{\max}$ is the maximum depth from vertex $v$ and $p_d$ is the frequency of point depth $d$ from the vertex" (ibid). Technically, we compute it using the function below, which itself uses some outputs and by-products from previous calculations:
Algorithm 4: Entropy Computation
// Inputs: Depths   - List<List<int>>, point depth from each node to every other node
//         DepthMap - DataTree<int> with one branch per (node, depth) pair, holding
//                    the vertices found at that depth from that node
int V = Depths.Count;                         // |V| : number of vertices in the graph
var Entropies = new List<double>();
for (int node = 0; node < V; node++)
{
  double S_node = 0.0;
  int maxDepth = Depths[node].Max();
  for (int depth = 1; depth <= maxDepth; depth++)
  {
    var atDepth = DepthMap.Branch(new GH_Path(node, depth));
    int howManyAtD = (atDepth == null) ? 0 : atDepth.Count;  // vertices at this depth
    double frequency = (double)howManyAtD / V;               // p_d in equation (11)
    if (frequency > 0)
      S_node -= frequency * Math.Log(frequency, 2);          // accumulate -p_d * log2(p_d)
  }
  Entropies.Add(S_node);
}
…
t the maximum potential with the bridge BIM+PARAMETRIC DESIGN ;D
During this Intense Week, we will learn about the power of Rhino + Grasshopper + ArchiCAD with Professional and Useful examples for our Normal Working day :D
You will get Advanced Library Files + Personal Web + Knowledge and Skills to start using this incredible Methodology ;D
Also, the week includes Lectures from different Experts sharing their Computational Working Experiences ;D And Jam Sessions! opening the door to 5 interesting topics to research, learn and experiment with together :D
2020 is your YEAR ;D !!!
Complete details and registration……
thought that architects' love for drawing comes from the necessity of translating abstract ideas into built 3D reality, and the technology behind that 2D representation had not evolved much until a few decades ago. Our teachers come from those times: times when computers were trying to find their place in the world of representing reality. Imagine those people, who had always drawn with pencils, adapting to these new tools... some became fans of the new methods, others just kept the old-fashioned workflow (like Andrew said in the article, Schumacher VS Graves).
We were born (at least Andrew and me :P) in the 80's with the first video games and computers (I still remember my old x286 with 1Mb RAM, a 20Mb HD and that MS-DOS interface)... New technology was natural for us... But there is a big difference between traditional drawing and new computer-aided tools: the learning curve. To draw you only need to take a pen and put it on paper (that interface is easily understood by children), but traditional computational tools (new touch interfaces are out of this group) are based on a complex logic and environment that is not easy for some people to understand.
In the workshops I'm teaching, I try to put all those tools (new and old) in my students' hands and motivate them to mix and use them together (Andrew knows a little bit about that :P). Why not make a line sketch with GH, then print it and render it with some markers? The last step could be to scan the result and enhance it in Photoshop, adding textures, vegetation, some background... There are no rules, only a bunch of tools to explore and use to develop your ideas, evolve them and finally represent them.
I bet on touch interfaces (with some augmented-reality sauce) like that one, which will be able to blend both worlds, analog and digital, offering the fluidity and natural interaction that Graves misses in digital tools. And our generation, attached to these "not natural" interfaces, will need to change its mind and adapt to the new and amazing interfaces that our children will love.
Just to complete:
<iframe width="560" height="315" src="http://www.youtube.com/embed/aXV-yaFmQNk" frameborder="0" allowfullscreen></iframe>…
Added by Ángel Linares at 5:40pm on September 10, 2012