try now to integrate Geco into an interdisciplinary architectural engineering studio, hoping we can show you some nice applications of your tool. I'll keep you updated and will send the details by e-mail now. Here is the file (you're very welcome to share it). It most probably contains trivial errors on my part; thanks for helping and for any tips! Gr. Michela
FILE:
Ok, right, I see the outputs update correctly. The origin of the problems must be some other mistake of mine:
- Incident radiation: I am not sure I understand what is going on: why do I get so many 'not a number' results? (The Galapagos report is full of NaNs.)
Bio-Diversity: 0.887 Genome[0], Fitness=NaN, Genes [89% · 44%] { Record: Too many fitness values supplied } ...
Genome[7], Fitness=NaN, Genes [74%] { Record: No fitness value was supplied } ....
Genome[9], Fitness=NaN, Genes [37% · 11%] { Record: Genome was mutated to avoid collision Record: Too many fitness values supplied }
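For what it's worth, records like "Too many fitness values supplied" and "No fitness value was supplied" usually mean the solver's fitness input received a list of numbers (or nothing at all) instead of exactly one finite number per genome. A minimal sketch, assuming you collapse several objective values into one weighted score before handing it to the solver (the function name and weights are illustrative, not part of Galapagos):

```python
import math

def scalar_fitness(objectives, weights=None):
    """Collapse a list of objective values into one fitness number.

    GA solvers like Galapagos expect exactly one finite number per
    genome; feeding a list or a NaN produces records like
    'Too many fitness values supplied' or 'No fitness value was supplied'.
    """
    if not objectives:
        raise ValueError("no objective values supplied")
    if weights is None:
        weights = [1.0] * len(objectives)
    total = 0.0
    for value, weight in zip(objectives, weights):
        if value is None or math.isnan(value):
            # Penalize failed evaluations instead of passing NaN on.
            return float("-inf")
        total += weight * value
    return total

# Two objectives reduced to a single number for the solver.
print(scalar_fitness([0.89, 0.44], [0.7, 0.3]))
```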
- Daylight calculations: the geometry accumulates without deleting the previous models. As a consequence, the results barely change after a few variations (so the outputs get updated but do not vary). In the current daylight definition, the first object imported is the one the grid has to fit; its setting makes it delete all the other objects during import, while the other objects delete nothing when imported. When running loops (manual or GA) that vary parameters, the whole geometry does not get deleted - so I guess the loop does not pass back through the deleting step, but only imports the geometry that was varied by the parameters, using the settings of that import component alone? I will try again after changing the order of the operations, but if you have specific tips, let me know.
THANKS!
…
at 0.85m above the floor.
I copy-paste from Appendix E, "Rights to Light", of the book (Paul Littlefair, Site Layout Planning for Daylight and Sunlight: A Good Practice, BRE Press, p. 60), which is the primary guide for evaluating the impact of new construction on the Rights to Light of the existing adjacent buildings:
"The accepted way of calculating the loss of light is to compute the sky factor at a series of points on the working plane. In dwellings, the working plane height is usually taken to be 0.85 m (two feet nine inches). The sky factor is the ratio of the illuminance directly received from a uniform sky at the point indoors, to the illuminance outdoors under an unobstructed hemisphere of this sky. No allowance is made for glass losses or light blocked by glazed bars and (usually) window frames; nor is reflected light included, either from interior surfaces or obstructions outside. Thus the sky factor is not the same as the CIE daylight factor (see Appendix C). The sky factor is often calculated using a Waldram diagram, but this is a different Waldram diagram to Figure B1 in Appendix B, which should not be used for this purpose."
Though I couldn't find the specific Waldram diagram for this case in the references, I assume contemporary analytical tools to calculate it should exist.
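Indeed, the quoted definition lends itself to a numerical estimate without a Waldram diagram: for a uniform sky the sky factor is the cosine-weighted fraction of the hemisphere from which the sky is directly visible, and with cosine-weighted sampling that reduces to the visible-sample ratio. A rough Monte-Carlo sketch, where `is_sky_visible` is a hypothetical occlusion test that a real implementation would answer by ray-casting against the obstructing geometry:

```python
import math
import random

def sky_factor(is_sky_visible, samples=10000, rng=random):
    """Monte-Carlo estimate of the sky factor at an indoor point.

    For a uniform sky, the sky factor is the cosine-weighted fraction of
    the hemisphere from which sky is directly visible (no glass losses,
    no reflections, per the BRE definition). Cosine-weighted sampling
    makes the estimator simply visible_samples / total_samples.
    """
    visible = 0
    for _ in range(samples):
        # Cosine-weighted direction on the upper hemisphere (z is up).
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(u1)
        phi = 2.0 * math.pi * u2
        direction = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
        if is_sky_visible(direction):
            visible += 1
    return visible / samples

# Fully unobstructed point: the estimate should be exactly 1.0.
print(sky_factor(lambda d: True, samples=1000))
```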
I used your Vertical Sky Component process and culled the mesh faces lower than 0.2%, but I believe that, because of the type of radiance analysis, as you explained before (a stochastic method), it doesn't create one continuous edge, as you can see in the attached image.
Thanks,
Dimitris…
button to generate such complicated and unruled geometry. Seriously, if you don't understand a geometry, how can you solve the structural needs and the bloody fabrication? Giant rapid-prototyping machines don't exist!
In an era where resources and energy are getting scarce, I don't understand this trend of fancy, nonsensical organic-looking buildings. They just look organic to our human perception. Nature builds things with defined physical and biochemical rules, and that is why things look like that when they grow. You should study Frei Otto's publications from the '80s.. the IL publications. They used physical models to generate physical structures that would be built in the physical world. Computers and software are dangerous as we detach from reality.
We put all this effort into generating these fancy forms, but no brain is put into structural optimization or energy efficiency (for instance in relation to the sun, or other natural elements).
IT technology moves faster than the time we have to reflect on it (I'm not talking about the techniques).
As Frei Otto told me personally in our last discussion (talking about philosophy and architecture): "We have to define the OPEN QUESTIONS. Once these questions are defined, you'll get answers."
I think we are getting to a question here: "How do we use this technology to solve problems in architecture?" And before that: "What are the real problems in architecture?"
Maybe David should make a component for that? For instance, a button that could solve the lodging and infrastructure problems of the millions of people living in the slums of Mumbai...
What about that Krish Raj?…
igner called Christophe Barreau.
http://www.christophe-barreau.fr/
We design sail catamarans from 40' to 80' and occasionally some other stuff.
One may know it's quite an uncertain activity, so I find myself tacking upwind on other seas from time to time, such as product design and jewelry. I also have side projects with mates involving hi-fi or RC planes.
As for "static" architecture, I've had a couple of experiences working on large "complex" buildings. Sadly, French architects are not very familiar with BIM, parametrics or even precise 3d modeling, so I've been hired to introduce GH into the workflow.
I'm an un-authorized rhino trainer, sorry to say, but I just love teaching and meeting new faces, although I'm not as devoted as Danny ;)
I've been using GH for both modeling and analysis for about three years now, and I daresay I've become pretty good at it... I'm not a geek at all, but it's just so useful, and sometimes it's really worth it €€€!…
ke 20 samples per day, 50 days out of the year for 1000 samples) from each panel and calculate the % of occlusion. Allow that % to be the % "open" of each panel. Design the opening in each panel to be something cool and proportional. Profit.
You could even break it down by a finite number of available panel types (say 0%, 20%, 40%, 60%, 80% open) and create an efficient production run. All of these things can be parameterized to allow for more samples or more panel types as needed, or based on your calculation limits.
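The finite-catalogue idea above boils down to snapping each sampled occlusion fraction to the nearest available panel type. A minimal sketch, where the panel catalogue and sample values are illustrative assumptions:

```python
def quantize_openness(occlusion_fraction, panel_types=(0.0, 0.2, 0.4, 0.6, 0.8)):
    """Snap a sampled occlusion fraction to the nearest available panel type.

    `panel_types` is the hypothetical finite catalogue of openings
    (0%..80% open). Fewer distinct types means a more efficient
    production run at the cost of some precision per panel.
    """
    return min(panel_types, key=lambda t: abs(t - occlusion_fraction))

# Four hypothetical per-panel occlusion samples mapped to the catalogue.
samples = [0.07, 0.33, 0.52, 0.81]
print([quantize_openness(s) for s in samples])  # → [0.0, 0.4, 0.6, 0.8]
```

Adding more panel types is just a matter of passing a denser tuple, which keeps the trade-off between fabrication variety and fidelity explicit.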
The only exception would be proper environmental analysis, say, if you were trying to reduce solar gain in summer and allow for it in winter. You would want to split this calculation between when you need to be gaining heat and when you want to be shading, then extrapolate the percentage between the two. You may even need a gradient of heat gain through fall/spring. The possibilities depend on how much you know about the mechanical requirements of the area/building.
That would be my approach. If I have more time tonight I will try and put something together on this, as it's been something I'd like to have in my back pocket....
Edit: You would also need to analyze the angle of incidence as it could have an effect on the amount of solar gain.....…
oks like all your GH components are disabled? I just tried baking the cone from my earlier code and using that but can't see anything at all.
OH! You had 'Display | Shaded Preview' disabled - why? Now I see that you have 80 X 55 'SFrames', which will be VERY SLOW. I never understood why you abandoned 'PopGeo'? But that many points will be extremely slow either way. I won't wait that long.
You're making this way too hard for me, bobbi.
I said early on that it's best to work with a very low count until everything works properly. Solid unions are one of the ragged edges of Grasshopper; slow and prone to failure, depending on the complexity of the geometry (co-planar surfaces, etc.).
Good luck!
P.S. I can see two problems here:
The surface normals point in instead of out.
You didn't 'Cap Holes' on the lofted tubes, so they aren't solid "Closed Breps".
I have no clue what you're doing. Do you? :)…
k on forum?
or
B) install from a networked location?
Second question.
If you download from link do you:
A) read the post because you want to see what changes have occurred?
or
B) ignore the post as you are too excited to get the latest version up and running?
Third question.
When confronted by a demanding LOL cat telling you to update software do you:
A) nod approvingly and think "I must do that"?
or
B) freak out and get a sudden urge to eat cheeseburgers?
In all seriousness, question three can be omitted.
EDIT: 80 views and only two posters! (thank you Simone and Luis).
I am actually interested in the results
SOLUTION TO DLL ERROR: install this …
Added by Danny Boyes at 3:32am on October 25, 2011
of Space, 1984) and specified in (Turner A., "Depthmap: A Program to Perform Visibility Graph Analysis", 2007), intuitively describes the difficulty of getting to other spaces from a certain space. In other words, the higher the entropy value, the more difficult it is to reach other spaces from that space, and vice-versa. We compute the spatial entropy s_i of a node v_i using its point depth set:
s_i = -Σ_{d=1}^{d_max} p_d · log₂(p_d)    (11)
"The term d_max is the maximum depth from vertex v_i and p_d is the frequency of point depth *d* from the vertex" (ibid). Technically, we compute it using the function below, which itself uses some outputs and by-products from previous calculations:
Algorithm 4: Entropy Computation
Given the graph G = (V, E) as adjacency lists, Depths as List of List of Integer, DepthMap as DataTree of Integer
Initialize Entropies as List of Double of size |V|
For node as Integer in range [0, |V|)
    Double S_node = 0
    For depth as Integer in range [1, Depths[node].Max()]
        Integer How_Many_of_D = DepthMap.Branch[(node, depth)].Count
        Double frequency = How_Many_of_D / |V|    // real (not integer) division
        If frequency > 0 Then                     // log2(0) is undefined; empty depths contribute nothing
            S_node = S_node - frequency * Math.Log(frequency, 2)
        End If
    Next
    Entropies[node] = S_node
Next
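Algorithm 4 can be sketched compactly in Python. This is an illustrative translation, not the production code: the DataTree of depth counts is modeled as a plain per-node dictionary mapping each depth d to the number of nodes at that depth from the vertex.

```python
import math

def point_depth_entropy(depth_counts, total_nodes):
    """Shannon entropy of the point-depth distribution from one vertex.

    depth_counts: mapping depth d -> number of nodes at that depth from
                  the vertex (the per-node slice of DepthMap).
    total_nodes:  |V|, used to turn counts into frequencies p_d.
    Implements s_i = -sum_d p_d * log2(p_d), skipping empty depths.
    """
    s = 0.0
    for d, count in depth_counts.items():
        frequency = count / total_nodes
        if frequency > 0.0:  # log2(0) is undefined; zero terms contribute nothing
            s -= frequency * math.log2(frequency)
    return s

def all_entropies(depth_map, total_nodes):
    """Entropy of every node, mirroring the outer loop of Algorithm 4."""
    return [point_depth_entropy(depth_map[node], total_nodes)
            for node in sorted(depth_map)]

# Tiny example: a path graph a-b-c seen from end node a
# (one node at depth 1, one at depth 2, |V| = 3).
print(point_depth_entropy({1: 1, 2: 1}, total_nodes=3))
```

A vertex whose reachable nodes all sit at one depth gets entropy 0 (a maximally ordered depth distribution), which matches the intuition in the text.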
…
troduction to its parametric modeling plugin, Grasshopper.
With this kind of tool we can think of forms beyond boxes when designing, because we will be able to control very complex geometries with total rigor.
In the following video, we can see an example made during a course previously taught in Madrid by the instructor, Francisco Tabanera, in which an interpretation of BIG's project for the National Library of Kazakhstan is carried out.
<a title="Interpretation of the National Library of Kazakhstan, by BIG" href="http://www.youtube.com/watch?v=YLldO-SxgPw" target="_blank"></a>
Throughout the course, various examples will be worked through which all attendees will be able to complete, since no prior knowledge is required to follow it.
The course will take place at the Arquitecton offices in Barcelona, with the following schedule:
SCHEDULE
Saturday, March 1
From 9:30 to 13:30.
Saturday, March 1
From 15:30 to 19:30.
The course is limited to a maximum of 9 students, so that each of them gets the most out of it.
The course costs €90. Students and the unemployed receive a 10% discount. You can secure a place with an initial payment of €25 as a reservation.
Sign up here…