…to express my gratitude. I've been experimenting with your definitions (and still am), but let me extend my question.
What I'm actually trying to achieve is to recreate another project by Andrew Kudless, the Spore Lamp (I mentioned the Chrysalis at the beginning only because of the animation, which wasn't included in the Spore Lamp presentation).
Basically, the Spore Lamp looks to me like a preliminary study for the Chrysalis III project (I think it's a similar approach).
Andrew stated on his site that he used Kangaroo for this project, so in my opinion the Spore Lamp consists either of a relaxed 3D Voronoi diagram (B-rep/B-rep intersection) on a sphere which was then planarized, or, more likely, a sort of relaxed facet dome.
The trick is to:
1. obtain a nicely balanced Voronoi-ish diagram (or facet dome cells)
2. keep each cell/polyline planar (or force it to be planar with Kangaroo) in order to move, scale, and loft them later on.
Here is what I have so far. (files: matsys spore lamp attempt)
That's the closest appearance I have gotten so far (a simple move, scale, and loft of the facet dome cells, with the amount of each transformation proportional to a power of the initial cell area: bigger cell = bigger opening, etc.), with no relaxation of the diagram. But it's obviously not the same thing as the Matsys design.
Here are some of my attempts at facet dome relaxation, but it's certainly still not the right approach, and most importantly I don't know how to keep or force the cells to be planar after the relaxation.
1. pulling vertices to a sphere, no anchor points. That obviously doesn't make much sense, but relaxation without anchor points initially gives a pattern that is closer to what I am looking for. (files: relaxation 01)
2. pulling vertices to a sphere - two faces of the initial facet dome anchored (files: relaxation 02)
3. pulling vertices to the initial geometry (facet dome) no anchor points (files: relaxation 03)
The cell pattern of the lamp kinda looks like this:
you can find it here: http://www.grasshopper3d.com/forum/topics/kangaroo-0-095-released?g...
It was done with Plankton (without the "gradient increase" appearance, of course). But in fact, not quite: I took a look at Daniel Piker's Plankton example files, and it's not the same thing. Also, the cells aren't planar...
The last problem: during my relaxation attempts, the biggest initial cells became enormous, which is not the case in Andrew Kudless's elegant project that I'd like to recreate.
So to sum up:
Goal no. 1: Obtain an elegant Voronoi/facet dome cell pattern on a sphere (or an ellipsoid surface, whatever).
Goal no. 2: Keep the cells planar in order to be able to move, scale, and loft them later, obtain those pyramidal forms, and assemble the thing easily.
Any ideas? Or maybe there's a completely different approach to that?…
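In the meantime, here is the simplest planarization fallback I can think of, independent of Kangaroo: after the relaxation, project each cell's vertices onto the cell's least-squares best-fit plane. A minimal Python sketch of the idea, assuming plain numpy arrays of cell vertices (not actual Kangaroo goal code):

    import numpy as np

    def planarize_cell(points):
        """Project a cell's vertices onto their least-squares best-fit plane.
        points: (n, 3) array-like of the cell's polyline vertices."""
        pts = np.asarray(points, dtype=float)
        centroid = pts.mean(axis=0)
        # The plane normal is the right singular vector of the smallest singular value.
        _, _, vt = np.linalg.svd(pts - centroid)
        normal = vt[2]
        # Remove each vertex's out-of-plane component.
        offsets = (pts - centroid) @ normal
        return pts - np.outer(offsets, normal)

    # Example: a slightly non-planar quad becomes exactly planar.
    cell = [(0, 0, 0.02), (1, 0, -0.01), (1, 1, 0.03), (0, 1, 0.0)]
    print(np.round(planarize_cell(cell), 4))

Run once after relaxation (or alternated with a few extra relaxation steps so the mesh can absorb the correction), each cell becomes exactly planar and can then be moved, scaled, and lofted safely.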
les automatically at the right angle to form the cap of an icosahedron.
To complete the full icosahedron, we consider just the six points we already know (the five pentagon vertices and the raised pyramid tip), reorient one of the vertices using a three-point transformation so that it has exactly the same relationship between vertices, just one stage beyond our little cap pyramid, and then do a five-fold polar array:
I used a password-protected cluster I ran into on the forum somewhere to reproduce Rhino's 3-point orient command:
A final 3-point orientation transforms the original pyramid tip down to the bottom:
Now we can create a convex hull which gives an icosahedron mesh:
So that's how you build an icosahedron in Rhino from scratch, using only rather long-winded Grasshopper.
Now we use the Weaverbird plug-in to subdivide the faces, project the vertices out onto a sphere by finding their closest points on the sphere, and then recreate a convex hull to make a geodesic dome mesh:
Subdividing two times works fine, but three times blows up the convex hull, so I'll just make do with the subdivision step and leave out projecting back to a sphere, since the algorithm already gives a nice spherical result that you can see inside this disaster:
Now you know what a standard geodesic dome is: just an icosahedron with its faces divided into smaller triangles and projected out to a sphere.
Actually, the mere subdivision is just a bit blobby instead of a sphere, damn it, so I'll have to topologically recreate the mesh after projecting the points back onto our sphere after all.
Using a subdivision plug-in may be slightly throwing the perfect result off, so manually creating subdivision points on each mesh face may be in order, keeping them flat against each icosahedron face:
You can also start with the two other triangulated Platonic solids, but those give less regular triangles:
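For what it's worth, the same icosahedron, subdivide, project-to-sphere pipeline is compact in plain Python. A minimal numpy sketch (my own illustration, using golden-ratio vertex coordinates rather than the Grasshopper construction above):

    import numpy as np

    def icosahedron():
        """Unit icosahedron: 12 vertices from the golden ratio, 20 faces."""
        p = (1 + 5 ** 0.5) / 2
        v = [(-1, p, 0), (1, p, 0), (-1, -p, 0), (1, -p, 0),
             (0, -1, p), (0, 1, p), (0, -1, -p), (0, 1, -p),
             (p, 0, -1), (p, 0, 1), (-p, 0, -1), (-p, 0, 1)]
        v = np.array(v) / np.linalg.norm(v[0])   # normalize onto the unit sphere
        f = [(0, 11, 5), (0, 5, 1), (0, 1, 7), (0, 7, 10), (0, 10, 11),
             (1, 5, 9), (5, 11, 4), (11, 10, 2), (10, 7, 6), (7, 1, 8),
             (3, 9, 4), (3, 4, 2), (3, 2, 6), (3, 6, 8), (3, 8, 9),
             (4, 9, 5), (2, 4, 11), (6, 2, 10), (8, 6, 7), (9, 8, 1)]
        return v, f

    def subdivide_project(verts, faces, levels=2):
        """Split each triangle into 4 and push new vertices onto the unit sphere."""
        verts = list(map(tuple, verts))
        for _ in range(levels):
            index, new_faces = {}, []
            def midpoint(a, b):
                key = tuple(sorted((a, b)))
                if key not in index:
                    m = (np.array(verts[a]) + np.array(verts[b])) / 2
                    verts.append(tuple(m / np.linalg.norm(m)))   # project to sphere
                    index[key] = len(verts) - 1
                return index[key]
            for a, b, c in faces:
                ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
                new_faces += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
            faces = new_faces
        return np.array(verts), faces

    v, f = icosahedron()
    v, f = subdivide_project(v, f, levels=2)
    print(len(v), "vertices,", len(f), "faces")   # 2 levels: 162 vertices, 320 faces

Each subdivision level splits every triangle into four and pushes the new midpoints onto the sphere, so the topology never needs a convex hull pass.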
…
ree..
The First-End List component cannot manage branches across every dimension..
The "Smart T8" component was developed to manage multi-dimensional data trees with a first-end algorithm.
It works with a path index location..
"-1" (or any negative number) means the item location..
"0" means the location of the last path index..
Positive numbers mean path index locations counted from the back..
----
Now look at this example.. a simple 3-dimensional grid of boxes..
In the data tree {0;0;i;j} (k):
"k" is the item index.. the Y direction..
"j" is the last path index.. the X direction..
and "i" is the level.. the Z direction..
----
When index < 0 (i.e. "-1" or any negative number)
"Smart T8" performs like the First-End Item component..
It selects the first item in each list and outputs them to "F"..
and in this example.. they are the boxes with the same Y coordinate (=0)..
In the image below..
F (red), M (transparent green), and E (blue) are classified by Y coordinate..
----
When index = 0
"Smart T8" focuses on the last path index..
It selects first list of every {0; 0; i; *} set of lists.. (i.e. every levels)
In this example.. they are boxes with same X coordinate(=0)..
because the last path means X grid..
In the below image..
F(Red) M(Transparent Green) E(Blue) are classified by X coordinates..
----
When index = 1
"Smart T8" focuses on the third path index.. (i.e. 1 step from the back)
It selects first list of every {0; 0; *; j} set of lists..
Actually in this case.. they are first levels of every YZ planes..
In this example.. they are boxes with same Z coordinate(=0)..
because "Smart T8" manages levels now (index=1)..
In the below image..
F(Red) M(Transparent Green) E(Blue) are classified by levels..
----
When index > 1.. (i.e. a meaningless or out-of-range index..)
it performs like the First-End List component..
It selects only the first and last list of all the lists..
----
The "Smart T8" component works with 3 or more dimensional data tree well..
Please control the focusing index and enjoy it.. :)
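To make the first-end idea concrete outside Grasshopper, here is a hedged plain-Python sketch of the same split on a nested list, where the index argument mirrors the component's behaviour (-1 = items, 0 = last path index, 1 = one step back..). This is only an illustration of the concept, not the component's actual code:

    def first_mid_end(tree, index):
        """Split a nested list along one axis into first (F), middle (M), and
        end (E) parts. index = -1 splits the items of the innermost lists,
        index = 0 the innermost lists themselves, index = 1 one level further
        out, mirroring how "Smart T8" counts path indices from the back."""
        def levels(node):
            return 1 + levels(node[0]) if isinstance(node, list) else 0
        def split(node, depth):
            if depth == 0:
                return node[0], node[1:-1], node[-1]
            f, m, e = zip(*(split(child, depth - 1) for child in node))
            return list(f), list(m), list(e)
        # Clamp out-of-range indices to the outermost split, which reproduces
        # the component's First-End List fallback for index > 1.
        depth = max(levels(tree) - 1 - (index + 1), 0)
        return split(tree, depth)

    # A 2 x 2 x 3 tree, i.e. tree[i][j][k] ~ {0;0;i;j} (k):
    tree = [[["a", "b", "c"], ["d", "e", "f"]],
            [["g", "h", "i"], ["j", "k", "l"]]]
    F, M, E = first_mid_end(tree, -1)
    print(F)   # [['a', 'd'], ['g', 'j']] - first item of every innermost list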
…
…a problem with SSL. Any ideas? I am using the following code:
    import json, httplib

    connection = httplib.HTTPSConnection('api.parse.com', 443)
    connection.connect()
    connection.request('GET', '/1/classes/MY-CLASS', '', {
        "X-Parse-Application-Id": "MY-APP-ID",
        "X-Parse-REST-API-Key": "MY-REST-API-KEY"
    })
    result = json.loads(connection.getresponse().read())
    print result
I get the following messages:
Runtime error (IOException): Authentication failed because the remote party has closed the transport stream.
Traceback:
  line 280, in do_handshake, "C:\Program Files\Rhinoceros 5.0 (64-bit)\Plug-ins\IronPython\Lib\ssl.py"
  line 120, in __init__, "C:\Program Files\Rhinoceros 5.0 (64-bit)\Plug-ins\IronPython\Lib\ssl.py"
  line 336, in wrap_socket, "C:\Program Files\Rhinoceros 5.0 (64-bit)\Plug-ins\IronPython\Lib\ssl.py"
  line 1156, in connect, "C:\Program Files\Rhinoceros 5.0 (64-bit)\Plug-ins\IronPython\Lib\httplib.py"
  line 3, in script
Any help would be greatly appreciated! Thanks in advance!
-Zach…
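One hedged guess, in case the server has dropped the TLS versions that IronPython's ssl module offers: skip httplib/ssl entirely and let the .NET HttpWebRequest stack negotiate the connection instead. A sketch of that idea in IronPython (an assumption, not a confirmed fix; the endpoint and headers are simply the ones from the snippet above):

    import json
    from System.Net import WebRequest
    from System.IO import StreamReader

    # Build the same request through .NET instead of IronPython's httplib/ssl.
    request = WebRequest.Create('https://api.parse.com/1/classes/MY-CLASS')
    request.Method = 'GET'
    request.Headers.Add('X-Parse-Application-Id', 'MY-APP-ID')
    request.Headers.Add('X-Parse-REST-API-Key', 'MY-REST-API-KEY')

    response = request.GetResponse()
    body = StreamReader(response.GetResponseStream()).ReadToEnd()
    print json.loads(body)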
…The Sky View Factor (SVF) is a concept defined from the discussion of radiation exchange between urban surfaces and the sky in urban heat island research (see Oke's literature list below). It is affected by the proportion of sky visible from a given calculation point on a surface (vertical or horizontal) as a result of the obstruction by urban geometry, but it is not entirely determined by the solid angle subtended by the visible sky patch(es).
So, I think using a "geometric" approach to approximate the Sky View Factor is not correct. The Sky View Factor calculation should be based on the first principle defining the concept: radiation exchange between an urban surface and the sky hemisphere:
(image extracted from Johnson & Watson, 1984)
Therefore, I always refer to the following "theoretical" Sky View Factors, calculated at the centre of an infinitely long street canyon with different height-to-width ratios in Oke's original paper (1981), as the ultimate benchmark for validating different methods of calculating SVF:
So, I agree with Compagnon (2004) on the method he used to calculate SVF: a simple radiation (or illuminance) simulation using a uniform sky.
The following images are the results of a workflow I built in the procedural modeling software Houdini (using its Python library) according to this principle, calling Radiance to do the simulation and calculation. The SVF values calculated for different canyon H/W ratios (shown at the bottom of each image) are very close to the values in Oke's paper.
H/W=0.25, SVF=0.895
H/W=1, SVF=0.447
H/W=2, SVF=0.246
It seems that the Sky View Factor calculated by the viewAnalysis component in Ladybug is not aligned with Oke's result for a given H/W ratio (GH file attached):
According to the definition shown in this component, I assume the value calculated is the percentage of visible sky, which is a geometric calculation (shooting evenly distributed rays from the sensor point to the sky and computing the ratio of rays not blocked by urban geometry?), i.e. the solid angle subtended by the visible sky patches, and it is not aligned with the original radiation exchange definition of the Sky View Factor.
I'd suggest calling this geometrically calculated ratio of visible sky the "Sky Exposure Factor", which is true to its definition and method of calculation (see the paper on the Sky Exposure Factor below), so as to avoid confusion with the Sky View Factor based on radiation exchange as discussed in the urban climate literature.
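The gap between the two quantities is easy to demonstrate numerically. Below is a minimal numpy sketch (my own illustration, not Ladybug's or Radiance's actual code) that estimates both at the floor centre of an infinitely long canyon by Monte Carlo, and compares them against the analytic infinite-canyon result SVF = cos(arctan(2H/W)) = 1/sqrt(1 + (2H/W)^2), which reproduces Oke's tabulated values:

    import numpy as np

    def canyon_sky_factors(h_over_w, n=1_000_000, seed=1):
        """Monte Carlo estimates at the floor centre of an infinitely long
        street canyon: (a) the radiative Sky View Factor (cosine-weighted)
        and (b) the plain solid-angle fraction of visible sky."""
        rng = np.random.default_rng(seed)
        phi = rng.uniform(0.0, 2.0 * np.pi, n)    # azimuth; 0 = along the canyon axis
        cos_t = rng.uniform(0.0, 1.0, n)          # cos(zenith): uniform over the hemisphere
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        # A ray escapes iff it clears the nearer wall (height H at offset W/2):
        # cos(zenith) >= (2H/W) * sin(zenith) * |sin(azimuth)|
        visible = cos_t >= 2.0 * h_over_w * sin_t * np.abs(np.sin(phi))
        svf = 2.0 * np.mean(cos_t * visible)      # (1/pi) * integral of cos(zenith) over visible sky
        sky_fraction = np.mean(visible)           # unweighted visible solid angle / (2*pi)
        return svf, sky_fraction

    for hw in (0.25, 1.0, 2.0):
        analytic = 1.0 / np.sqrt(1.0 + (2.0 * hw) ** 2)   # cos(arctan(2H/W))
        svf, frac = canyon_sky_factors(hw)
        print("H/W=%.2f  analytic SVF=%.3f  MC SVF=%.3f  sky fraction=%.3f"
              % (hw, analytic, svf, frac))

The cosine-weighted estimate converges to the analytic (Oke) values, while the plain solid-angle fraction comes out consistently lower for deeper canyons, which is exactly the kind of discrepancy described above.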
Appreciate your comments and advice!
References:
SVF: definition based on first principle
Oke, T. R. (1981). Canyon geometry and the nocturnal urban heat island: comparison of scale model and field observations. Journal of Climatology, 1(3), 237-254.
Oke, T. R. (1987). Boundary layer climates (2nd ed.). London; New York: Methuen.
Johnson, G. T., & Watson, I. D. (1984). The determination of view-factors in urban canyons. Journal of Climate and Applied Meteorology, 23, 329-335.
Watson, I. D., & Johnson, G. T. (1987). Graphical estimation of sky view-factors in urban environments. International Journal of Climatology, 7(2), 193-197. doi: 10.1002/joc.3370070210
Papers on SVF calculation:
Brown, M. J., Grimmond, S., & Ratti, C. (2001). Comparison of Methodologies for Computing Sky View Factor in Urban Environments. Los Alamos, New Mexico, USA: Los Alamos National Laboratory.
SVF calculation based on first principle:
Compagnon, R. (2004). Solar and daylight availability in the urban fabric. Energy and Buildings, 36(4), 321-328.
paper on Sky Exposure Factor:
Zhang, J., Heng, C. K., Malone-Lee, L. C., Hii, D. J. C., Janssen, P., Leung, K. S., & Tan, B. K. (2012). Evaluating environmental implications of density: A comparative case study on the relationship between density, urban block typology and sky exposure. Automation in Construction, 22, 90-101. doi: 10.1016/j.autcon.2011.06.011
…
…It's possible to do this with the current tools or a bit of scripting, since the Flickr API allows you to make requests in a REST format, but utilizing the Flickr.net API library makes it much simpler.
First and foremost, you need a Flickr API key...do you have one of those?
A great way to get to know the Flickr API is with the API Explorer. Here is a link to the page for the flickr.photos.search method explorer: http://www.flickr.com/services/api/explore/flickr.photos.search
The cool thing about this page is that it generates the REST HTTP call towards the bottom. So, here is what I did:
1. Grab the coordinates of the bounding box, per the Flickr API request:
bbox (Optional)
A comma-delimited list of 4 values defining the Bounding Box of the area that will be searched. The 4 values represent the bottom-left corner of the box and the top-right corner, minimum_longitude, minimum_latitude, maximum_longitude, maximum_latitude. Longitude has a range of -180 to 180 , latitude of -90 to 90. Defaults to -180, -90, 180, 90 if not specified. Unlike standard photo queries, geo (or bounding box) queries will only return 250 results per page. Geo queries require some sort of limiting agent in order to prevent the database from crying. This is basically like the check against "parameterless searches" for queries without a geo component. A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future).
So, I went to Google Earth, picked a city (London, UK) and dropped two pins:
This gave me two locations, which I can put into the Explorer Page next to the bbox option. Here is what I put for these two points: -0.155941,51.496768,-0.116783,51.511431
2. Check has_geo
3. In extras, type in geo
4. Make the call!
You will see a list of responses in XML format; these responses are from the first page. Geolocated photos are limited to 250 per page, so you will have to grab them page by page.
If you want to add more options (minimum upload date, maximum upload date, etc.), you can do this as well.
The best is at the bottom, you get the full http call for this: http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=ffd44f601393a46e86aa3a5f8a013360&bbox=-0.155941%2C51.496768%2C-0.116783%2C51.511431&has_geo=&extras=geo&format=rest&api_sig=b42330e5d1523bd5fe60c2ad43acde99
Notice this call has some other api key, you should eventually replace this with your own.
You could copy and paste this into a browser and you will get the results with the latitude and longitude:
So this is really all you need to know to do this through GH. Since gHowl has an XML parser component that can access files on the web, you should be able to feed the same HTTP call into that component.
Eventually, we get a response, and we need to grab the lat and lon data. With gHowl we can map these to XYZ coordinates and generate the heatmap... this is just a linear mapping:
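If you prefer to test the pipeline outside Grasshopper first, here is a hedged stand-alone Python sketch of the same idea: call flickr.photos.search with a bbox, parse the XML, and linearly remap lat/lon into model XY. The key is a placeholder, and the latitude/longitude attributes assume the documented extras=geo behaviour:

    import urllib.request
    import xml.etree.ElementTree as ET

    API_KEY = "YOUR_FLICKR_API_KEY"   # placeholder: use your own key
    BBOX = "-0.155941,51.496768,-0.116783,51.511431"   # min_lon,min_lat,max_lon,max_lat

    url = ("https://api.flickr.com/services/rest/?method=flickr.photos.search"
           "&api_key=" + API_KEY + "&bbox=" + BBOX + "&has_geo=1&extras=geo&format=rest")
    root = ET.fromstring(urllib.request.urlopen(url).read())

    def remap(v, lo, hi, new_lo=0.0, new_hi=100.0):
        """Linearly map v from [lo, hi] to [new_lo, new_hi]."""
        return new_lo + (v - lo) * (new_hi - new_lo) / (hi - lo)

    min_lon, min_lat, max_lon, max_lat = (float(x) for x in BBOX.split(","))
    points = [(remap(float(p.get("longitude")), min_lon, max_lon),
               remap(float(p.get("latitude")), min_lat, max_lat))
              for p in root.iter("photo")]
    print(len(points), "geotagged photos on page 1")

From there the remapped points can feed whatever heatmap logic you like; gHowl's XML parser does the equivalent of the ET calls above inside GH.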
Attached are both the Rhino file and the Grasshopper file, as well as the image underlay.
I am working on a series of components that makes this more straightforward, but for now, this should get you started.
…
…as one element.
Thank you
Comment by karamba on October 7, 2014 at 11:27pm
Hello Patricio, divide the beams in such a way that each boundary vertex of the shell becomes an endpoint of a beam segment.
Best, Clemens
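(A hedged GhPython sketch of that splitting step, in case it helps anyone reading along: it uses RhinoCommon's naked-vertex query to split a boundary curve at every shell boundary vertex. The function name and tolerance are illustrative, not Karamba API:)

    import Rhino.Geometry as rg

    def split_at_naked_vertices(curve, mesh, tol=0.001):
        """Split a boundary curve at every naked (boundary) vertex of the
        shell mesh, so each shell boundary vertex becomes a beam endpoint."""
        naked = mesh.GetNakedEdgePointStatus()   # True for vertices on the mesh boundary
        params = []
        for i, is_naked in enumerate(naked):
            if not is_naked:
                continue
            pt = rg.Point3d(mesh.Vertices[i])
            ok, t = curve.ClosestPoint(pt)
            if ok and curve.PointAt(t).DistanceTo(pt) < tol:
                params.append(t)
        pieces = curve.Split(params)
        return list(pieces) if pieces else [curve]

Feeding the resulting segments to Karamba as individual beams makes every shell boundary node a shared node between beam and shell.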
Comment by Llordella Patricio on October 8, 2014 at 8:30am
Hi Clemens,
I did what you suggested, but now the 'Assemble' component doesn't work properly. Could you please tell me how to fix it? Thanks in advance, Patricio
8-10-14losa%20cadena.gh
Comment by karamba on October 8, 2014 at 11:59am
Hi Patricio, if you flatten the 'Elem'-input at the 'Assemble'-component, the definition works. The triangular shell elements have linear displacement interpolation, whereas the beam deflections are exact. In order to get correct results you should refine the shell mesh.
Best, Clemens
Comment by Llordella Patricio on October 9, 2014 at 8:35am
Hello, I succeeded in creating the mesh for the slab and built the beam segments, but the deformations are not what I expected, because the beam deforms in the same way as the slab.
Thanks for the help
PS: maybe I'm using the program for a type of structure that is not the most appropriate, judging from the examples of other structures. But this is the type of structure that is taught to students.
best regards
Patricio
9-10-14%20Example%201.gh
Comment by karamba on October 9, 2014 at 10:46am
You could use the 'Mesh Edges' component to retrieve the naked edges and turn them into beams - see the attached file: 91014Example1_cp.gh
Best regards,
Clemens
Comment by Llordella Patricio on October 15, 2014 at 3:41pm
Dear Clemens,
I was doing a rough estimate of the deformation, and I cannot achieve the same result with Karamba. When I make a rough estimate for the beams, Karamba's result and mine are very similar; I think the problem is where I connect the shell, because there the results are not similar.
I sent the GH file and an image of the calculation.
The structure is concrete. The result I get is 0.58 cm.
Thank you, Patricio
15-10-14%20Example.gh
Comment by karamba yesterday
Dear Patricio,
try to increase the number of shell elements. As mentioned in the manual, they are linear elements. A mesh that is too coarse leads to a response which is stiffer than that of the real structure.
Best,
Clemens
…
…can be obtained for curved NURBS surfaces as well as unconventional window configurations".
And I also noticed the following information from the optional input in the runEnergySimulation component:
"meshSettings_: Optional mesh settings for your geometry from any one of the native Grasshopper mesh setting components. These will be used to change the meshing of curved surfaces before they are run through EnergyPlus (note that meshing of curved surfaces is done since Energyplus is not able to calculate heat flow through non-planar surfaces). Default Grasshopper meshing is used if nothing is input here but you may want to decrease your calculation time by changing it to Coarse or increase your curvature definition (and calculation time) by making it finer".
1) My case is a one-story, rectangular-plan large hall (40m x 70m x 25m) with a curved roof. The roof surface is part of a standard sphere, and the walls and floor are all planar (each wall has one curved edge, as shown in the image).
For testing, I threw the original curved roof surface into the daylight and energy simulations without making customized meshes, because I assumed that it would be automatically converted to meshes by Honeybee. Am I right? As shown in the image, how can I reduce the number of meshes in a proper way? (See the sketch after these questions for the kind of manual meshing I have in mind.) Must two connected surfaces (i.e. wall and roof) be STRICTLY/SEAMLESSLY connected, or not (considering the different mesh divisions of the respective surfaces)? Is a connection tolerance allowed?
2) But when I run the annual daylight simulation for this case, it gives me a lot of warnings: "oconv: warning - zero area for polygon". Is that normal, and how can I avoid it? Does the daylight simulation allow "curved NURBS surfaces"?
3) Moreover, when I run an energy simulation for this case, it takes an extremely long time, so long that I have not even gotten results out of one simulation. I guessed it might be a problem caused by the curved roof surface (or the automatic meshing?), but I don't have experience converting a curved NURBS/spherical surface into correct meshes that can be recognized by Honeybee simulations (daylight and energy) in a proper way.
4) The large window on the wall was generated by the "_glzRatio" input. But the automatically generated wall meshes around this window are just too fine, which might largely increase simulation time. Is there a proper way to get rid of this? (Considering that the size, shape, and position of the window have a large influence on the daylight distribution in the building, it is worth keeping the size, shape, and position of the window as they would be in reality.)
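As a concrete example of the "customized meshing" I mean in question 1: a hedged GhPython sketch, assuming RhinoCommon's MeshingParameters is the right control over what Honeybee receives (roof_brep here is a hypothetical input holding the curved roof):

    import Rhino.Geometry as rg

    # Start from Rhino's coarse preset and cap the grid density so the
    # spherical roof does not explode into thousands of faces.
    mp = rg.MeshingParameters.Coarse
    mp.GridMaxCount = 16          # upper bound on initial grid quads per face
    mp.SimplePlanes = True        # keep the planar walls as single faces

    meshes = rg.Mesh.CreateFromBrep(roof_brep, mp)  # roof_brep: hypothetical input
    roof_mesh = rg.Mesh()
    if meshes:
        for m in meshes:
            roof_mesh.Append(m)   # join the per-face meshes into one mesh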
In sum, considering all of the above, could you please provide me with some suggestions/tutorials/links that might be helpful for dealing with "curved NURBS surfaces" in Honeybee simulations?
Thank you all in advance!
Best,
Ding
…
[tail of the source-tree listing, truncated in extraction: … j / 1 / c / e / h / k / 2 / f]
-------------------------------------------------------------------------------------------------
To these...

{0;0}  {0;1}
0:  a    i
------------------------------------------------------------
{0;0}  {0;1}  {0;2}
0:  b    g    j
1:  c    h    k
------------------------------------------------------------
{0;0}
0:  d
1:  e
2:  f
------------------------------------------------------------
Thanx…