hat since we create a list of materials and assign them to surfaces/volumes, the next step could be to have a Life Cycle Analysis (LCA) and financial assessment produced.
The most common way to produce an LCA in a form that is widely used and easily communicated is as an Environmental Product Declaration (EPD) following ISO 14025:2006. Like every form of LCA, EPDs raise a number of questions regarding their boundaries and the accuracy of the results, especially if we include the factor of location. In comparison with other LCA practices, though, EPDs have to follow Product Category Rules (defining the boundaries of the study), which can be reviewed by external parties if the EPD is to go public. Apart from that, EPD results reflect each stage of the life cycle of a product, including potential benefits from reuse or recycling. Finally, if you have a system - for example a building - you can add up the EPDs of the different subcomponents forming the building and get a final EPD for the building itself - the point where I think HB's functionality is fully aligned.
The financial assessment can easily be carried out if one has the prices of the materials used. Finally, the environmental indicators of the EPDs (LCI, LCIA) can be translated into Shadow Costs (Shadow Costs for Environmental Indicators here) and added to the final financial assessment as an option.
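The kind of aggregation described above - summing per-material EPD stage results into a building-level figure and monetising an indicator as a shadow cost - can be sketched in a few lines of Python. Everything here is illustrative: the materials, stage values, quantities and the shadow-cost rate are all made-up assumptions, not real EPD data.

```python
# Sketch: aggregate per-material EPD stage results into a whole-building
# figure and translate an environmental indicator into a shadow cost.
# All numbers, material names and the rate below are illustrative assumptions.

# Global Warming Potential (kg CO2-eq per kg of material) per EPD life-cycle
# stage: A1-A3 production, C3-C4 end of life, D benefits beyond the boundary.
epds = {
    "steel_profile": {"A1-A3": 1.13, "C3-C4": 0.02, "D": -0.45},
    "concrete_slab": {"A1-A3": 0.12, "C3-C4": 0.01, "D": -0.02},
}
quantities_kg = {"steel_profile": 12000, "concrete_slab": 85000}

# Building-level EPD: sum each stage over all sub-components.
building_epd = {}
for material, stages in epds.items():
    for stage, value in stages.items():
        building_epd[stage] = building_epd.get(stage, 0.0) + value * quantities_kg[material]

total_gwp = sum(building_epd.values())

# Shadow cost: monetise the indicator with an assumed rate (EUR per kg CO2-eq).
SHADOW_COST_EUR_PER_KG_CO2 = 0.05  # illustrative rate, not an official figure
shadow_cost = total_gwp * SHADOW_COST_EUR_PER_KG_CO2

print(f"Building GWP: {total_gwp:.0f} kg CO2-eq, shadow cost: {shadow_cost:.0f} EUR")
```

In a real implementation the per-stage values would come from an EPD database and the shadow-cost rates from a published table of monetisation factors per indicator.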
I developed a similar plug-in (in C#) for Grasshopper for my master's thesis last year. The project focused on the comparison between constructing conventionally and constructing with Design for Deconstruction practices in steel buildings. The idea was to compare the two cases based on their environmental and financial performance. Along the way I also included options for transportation of the material, shadow cost, embodied energy and carbon assessment, and more. The final outcome can be visualised in Rhino's viewports and exported to Excel sheets. The plug-in is connected to a local database with EPD data for steel profiles. The same scheme, though, can be followed for any type of material if we have the right database to connect it to!
Please have a look if interested at the report here! And let me know if you have any questions!
Please note that the report includes 3+ chapters dedicated to design for deconstruction practices etc. that are irrelevant to the topic but maybe interesting to read :)
Also if someone is interested in the report I can always send it to you.
(I will upload a video -runthrough of the plug-in later this week)
I would be very interested to have these capabilities in LB and HB, and I am happy to help realise them!
Thanks
Tasos
…
egin working on a design, we first have to systematically examine the resources and restrictions which, on the one hand, make every design project possible and, on the other hand, also define and delimit it. Knowing what we have to work with enables us to explore its boundaries and at the same time to venture beyond those boundaries. This is our studio's sphere of action; our projects emerge as a critical reflection of the discipline of architecture, in its essence, on fundamental concepts, their general form, and their underlying media and processes. The goal of our work is to master a variety of forms of the architectural repertoire of the 20th century, but especially to develop and expand this repertoire, as has been happening in the past 20 years. The goal of this workshop is to introduce a series of these techniques and areas of expertise and to apply this knowledge to a given site in Timisoara.
GUESTS:
STUDIO ZAHA HADID VIENNA: http://www1.uni-ak.ac.at/architektur/ https://www.facebook.com/StudioHadidVienna
Ass. Dipl.-Ing. MArch. AA Dist. Robert NEUMAYR-BEELITZ - lecturer/critic - http://www.unsquare.at/
AProf. Mag.arch. Mag.theol. Johannes TRAUPMANN - critic - http://www.pxt.at/
Univ.-Ass. Dipl.-Ing. Jens Erik MEHLAN - critic - http://moh-architecture.com/
Univ.Stud.Ass. Daniel BOLOJAN - tutor - Grasshopper - http://nonstandardstudio.wordpress.com/
Univ.Stud.Ass. Bogdan ZAHA - tutor - Maya - http://bogdanzaha.tumblr.com/
LOCAL:
Prof.Dr.Arh.Urb.Conf. Florin MACHEDON - critic (BUC)
more information on https://encodedfields.wordpress.com/…
r Material Science and Ligaproduction.
The exhibition started on May 12th and will be presented until August 19th 2012.
What is the meaning of »modular«? Essentially, everything in the world consists of a combination of elements, thus, of modules. As the basic building block of the elements, an atom forms the smallest unit in a structure's totality. It is part of a whole, serving as a model for decoding and making comprehensible complex systems. In many disciplines, for instance in music, the sequence of smallest common units derives from an ordering principle, a rhythmic spacing, and from an aesthetic whose modular structure has both regular and irregular proportions.
In architecture, the module and modular construction have been governing principles for thousands of years. Primates use twigs as construction components for their dwellings, similar to the more familiar birds' nests. During the course of biological and cultural evolution, refined methods of connecting components have been developed. Increasingly sophisticated construction techniques have evolved parallel to the tools, construction equipment and weapons available, as well as to the construction materials and support systems that were chanced upon or invented.
Ever since the earliest settlements thousands of years ago, the module has defined construction. Its dimensions, production and assembly have developed from preindustrial craft techniques to the construction of buildings, arising with the invention of the steam engine and leading into the Industrial Era. The first computer in the 1930s marked another technological leap. So what possibilities does the computer offer today's architects for design and construction?
While industrial manufacturing methods still require a critical amount of similar elements for mass production, the use of computers increasingly facilitates construction based on customized production of short-run elements with individual formats and complex geometries. At least that's the theory. Computer-controlled machines and robots cut and stack structural components according to drawings - i.e. data sets - developed by designers and producers. Thanks to these technologies, architecture in the digital age is experiencing an evolution in construction and modules. The pioneers in this area are the projects developed at academic parametric design research units.
This exhibition features various examples from the development of digital technologies, presented in their historical context and categorized according to material: wood, stone, concrete, metal and synthetics. The »Housing Modules« excursion presents a selection of special urban planning systems as a series of space modules.
The historical modules each represent a paradigm shift in the evolution of an individual material. Since modules offer a tremendous wealth of opportunity, this section does not attempt to deliver the full picture: rather it intends to serve as an inspiration for further exploration.
In keeping with the Architectural Particles theme, the exhibition's architecture consists of a modular system of tetrahedrons and octahedrons. The resulting crystalline shapes highlight the connection to nature while recalling modular construction systems from various architectural eras.…
rk for Rhino, this is a first go at a very simple tool to get an idea of how fast different computers are at performing the sort of calculations used in Kangaroo, with the aim of informing those buying or upgrading their machines.
If you could take a couple of minutes to download and run this definition (after closing other running applications), then post here the result and your PC specs, hopefully we can start building a basic picture of what effect different hardware really has on the speed Kangaroo runs.
Most of the information can be found in the System page of Control Panel.
RAM speed can be checked in your BIOS, or with a tool like CPU-Z (note that the reported frequency from this should be doubled to get the actual RAM speed rating - e.g. if the frequency is 800MHz you should write DDR3-1600. It's confusing, I know - see some discussion of this here), or by searching online for the specs of your PC model number.
This definition is purely testing the speed of the internal physics calculation, not display, so graphics cards are irrelevant.
For now this is just to get a single general measure of overall Kangaroo speed, but it might also be interesting later to run a variety of tests to see how the speed varies with the size and complexity of simulation.
Of course a way of benchmarking general Grasshopper performance would be very nice to have as well, but would involve a lot more variables, and I'd be interested if anyone has ideas about how that could work.
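One common approach to this kind of benchmarking (taking the median of several repeated runs so a single background-process hiccup doesn't skew the result) can be sketched in plain Python. This is only an illustration of the idea - the workload function below is a made-up stand-in, not an actual Kangaroo or Grasshopper calculation:

```python
import time

def benchmark(workload, repeats=5):
    """Run `workload` several times and report the median wall-clock time,
    which is less sensitive to background noise than a single run."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    times.sort()
    return times[len(times) // 2]  # median of the repeats

def dummy_particle_step(n=20000):
    # Stand-in workload: a naive spring-like smoothing pass over n values,
    # loosely mimicking the shape of a physics-solver iteration.
    positions = [float(i) for i in range(n)]
    for i in range(1, n - 1):
        positions[i] += 0.5 * (positions[i - 1] + positions[i + 1] - 2 * positions[i])

median_seconds = benchmark(dummy_particle_step)
print(f"median time: {median_seconds * 1000:.2f} ms")
```

A fuller harness would vary the simulation size and count as well, which is exactly the "variety of tests" idea mentioned above.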
Note - I posted a couple of versions of this earlier with various errors that were causing incorrect results. If you downloaded the earlier KangaMark01.gh or KangaMark02.gh file, please disregard that and any results from it and use the one posted here below:…
edit 29/04/14 - Here is a new collection of more than 80 example files, organized by category:
KangarooExamples.zip
This zip is the most up to date collection of examples at the moment, and collects t
ur setup. Can you say what sensor you are using? Are you using an Arduino to write this ascii information to the serial port? If so, there may be some formatting code for the string that you'll need to do to get the Read component to function properly. I see that you were able to open the port and Start reading... so my first thought is that the data is formatted correctly....
All of the read components look for a specific character (in this case two characters) to indicate when it has reached the end of the line being read and should spit out the data. In this case, Firefly uses the Carriage Return (\r) and Line Feed (\n) to know when it has reached the end of the line. In Arduino, these are automatically added to any line if you use the Serial.println("blah, blah, blah"); command. Notice, this is different from the Serial.print("nothing to see here"); command. This doesn't mean that you can't still use the regular print command... it's just that you need to use the println command to indicate when you've reached the end of the line. Let's take a look at a simple example.
void setup() {
  Serial.begin(9600);
}

void loop() {
  int sensorValue = analogRead(A0);
  Serial.print("The value of the sensor is: ");
  Serial.println(sensorValue);
  delay(20); // important to wait some small time so you aren't sending just a ton of info over to GH, which will cause it to crash :(
}
The first print statement prints a string to the serial port... and the next one adds the current sensor value... and THEN adds the carriage return and line feed to start a new line. The nice thing about using these together is that you can concatenate any type of data you want. If you were to upload this sketch, you should see a sentence being printed to the serial port that says "The value of the sensor is: 512". I made up the number, but you get the idea. Notice, I also had to include a delay function. You don't always need this (there are other ways to go about this), but the important thing to note is that the loop cycle on the Arduino can run really fast. I mean... really fast. So, you won't want to send so much data over to GH, because this could flood the string buffer in the Read component and cause it to crash (eventually). It's a good idea to add some small time interval just to slow it down a bit. I should say that I've optimized the refresh rate in the next release so it's significantly faster... so hopefully this won't be as big of a problem... but hopefully that helps some.
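The line-ending logic described above can be illustrated outside of Firefly with a few lines of plain Python that accumulate incoming bytes and only emit text once a "\r\n" terminator arrives. This is just a sketch of the idea, not Firefly's actual source code:

```python
# Sketch of how a serial reader can detect complete lines: bytes arrive in
# arbitrary chunks, and only text terminated by "\r\n" (which is what
# Serial.println appends) is emitted as a finished line.

class LineBuffer:
    def __init__(self):
        self._buffer = b""

    def feed(self, chunk):
        """Append raw bytes and return every complete CRLF-terminated line."""
        self._buffer += chunk
        lines = []
        while b"\r\n" in self._buffer:
            line, self._buffer = self._buffer.split(b"\r\n", 1)
            lines.append(line.decode("ascii"))
        return lines

buf = LineBuffer()
print(buf.feed(b"The value of the sensor is: 5"))  # no CRLF yet -> []
print(buf.feed(b"12\r\nThe value of the sen"))     # first full line emitted
```

This is why data sent with Serial.print alone never "arrives": without the terminator, the reader keeps buffering and never knows the line is finished.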
Now... Why are you writing data to a sensor? Sensors by default are considered inputs... so I'm quite confused as to why you would want to send data back. (If you are... then you need some way to handle the string data being sent from GH... this is the whole reason we built the Firefly firmata... it sets up the two-way protocol so you don't have to deal with all of that mess... If you're going to read and write, you're better off just uploading the firmata and using the Uno Read and Write components.) Also, I'm not very familiar with Hyperterm or Advanced Serial Port Terminal... but I will say that you could get COM conflicts if you're trying to open the port with different tools. Anyway, I hope some of this helps you get up and running.
Cheers,
Andy
…
cremental release is available for download. It fixes several bugs reported in the 0.9.0005 & 0.9.0006 versions. To wit:
Computer mice with smooth scrolling would not zoom well, this is fixed.
Previewable parameters with a lot of consecutive null items would crash, this is fixed.
Identical GHA files would collide during the loading process, this is handled.
GHA files with identical names would collide during the loading process, this is handled.
Solver Undo setting was not persistent, this is fixed.
Widget ZUI Zoom setting was not persistent, this is fixed.
Markov Widget Corner setting was not persistent, this is fixed.
Markov Widget Suggestion Count setting was not persistent, this is fixed.
Drag and Drop on Document and Template preview materials wasn't recorded, this is fixed.
AssignDataToParameter() COM-Access method was broken, this is fixed.
Geometry and Generic parameters with persistent data would not deserialize correctly, this is fixed.
Operator shortcuts via the Canvas popup instantiation menu no longer assigned data to the second parameter, this is fixed.
Cull Duplicates component did not always show the correct label upon deserialization, this is fixed.
Legacy VB/C# components would not correctly deserialize List access on input parameters, this is fixed.
Cloud Display component would still display old sprites on disconnect, this is fixed.
Minor changes to a document would trigger lengthy preview cache updates, slowing Grasshopper down. This is fixed.
Sphere 4Pt did not work correctly, this is fixed.
Failed data conversions in parameters would result in missing entries, this is fixed.
Text Tag components (2D & 3D) would not bake via the component menu, this is fixed.
There are also some new features:
Added Jump object for quickly navigating across a Canvas (Params.Util dropdown).
Added Relative Differences component which is basically the inverse of Mass Addition (Math.Operators dropdown).
Added tooltip wiggle controls to the Preferences window, Interface section.
'Draw Full Names' now also attempts to change the display of existing components, but only in the active document.
Drag+Dropping GHA, GHPY and GHUSER files onto the canvas now puts the original file into the bin.
Replaced Set Union component with a new one that has variable input parameters.
Replaced Set Intersection component with a new one that has variable input parameters.
Replaced And and Ternary And components with a single new one that has variable input parameters.
Replaced Or and Ternary Or components with a single new one that has variable input parameters.
Replaced Concatenate component with a new one that has variable input parameters.
Concatenate component now has a segment join option available via the component menu.
Added Digit options to the Transform Matrix Display object.
Integer parameters which represent options now have more informative context menus.
--
David Rutten
david@mcneel.com
Poprad, Slovakia
…
Added by David Rutten at 11:06am on September 14, 2012
it seems that was it. Now all is working fine!
Glad that it worked! But I am still a bit worried. Gismo components only modify the gdal-data/osmconf.ini file and no other MapWinGIS file, so your MapWinGIS installation files should not be compromised. The fact that you did not get the "COM CLSID" error message when running the "Gismo Gismo" component suggests that MapWinGIS has been properly installed. So I wonder if the cause of the permanent "invalid shapes" warning again has something to do with your system not allowing MapWinGIS to properly edit the osmconf.ini. Maybe this problem will appear again and again, and reinstalling MapWinGIS every time can be somewhat bothersome.
- About the terrain generation: is it possible to have the texture from Google or another provider mapped onto the terrain surface from the Gismo component? (Same as using the Ladybug terrain generator, in fact.) I tried to use the image extracted by the Ladybug component and then applied it to the Gismo terrain, but the texture is rotated by 90°.
The issue with the rotation can be solved by swapping/reversing the U,V directions of the terrain surface. A slightly more important issue is that the terrain surface generated with the Gismo "Terrain Generator" component might have a somewhat smaller radius than what the radius_ input required. This stems from the fact that the terrain data first needs to be downloaded in a geographic coordinate system and then projected. Some projection issues may occur at the very edges of the projected terrain, so I had to slightly cut away the very edges of the terrain, which results in the actual terrain diameters being slightly shorter in both directions. This means that if you apply the same satellite image from the Ladybug "Terrain Generator" component to the Gismo "Terrain Generator" component, the results may not be the same.
I attached below a Python component which tries to solve this issue by extending the edges of the Gismo "Terrain Generator" terrain, and then cutting them with a cuboid of the exact dimensions of the radius_ input. Have in mind that this extension of the original terrain at its edges is not a correct representation of the actual terrain in that location, but rather just an extension of the isoparametric curves of the terrain surface. So basically: some 0 to 10% (0 to 10 percent of the width and length) of the terrain around all four edges is not the actual terrain for that location, but rather just its extension.
The Python component is located at the very right of the definition attached below.
Also, if you would like to use the satellite images from Ladybug "Terrain Generator" component along with "OSM shapes", sometimes you may find slight differences in position of the shapes. This is due to openstreetmap data not being based on Google Maps (that's what Ladybug "Terrain Generator" component is using), but rather on Bing, MapQuest and a few others.
- About the requiredKeys_ input of OSM shapes: I understand what you mean and your advice, but in most cases where I use it, the component was working fine even without that input. I think it's better to extract all tags, values and keys of the selected area instead of searching for specific ones, as I try to find all data related to what I want afterwards, isn't it? Also to check which keys are present in the area.
Indeed, you are correct. I thought you were trying to only create a terrain, 3d buildings and maybe find some school or similar 3d building for these two locations. The recommendation I mentioned previously is due to shapefiles having a limit (2044) on how many keys they can contain. This requires further testing on some big-city locations with maybe larger radii, which I haven't performed due to my poor PC configuration. But in theory, I imagine it may happen that a downloaded .osm file has more than 2044 keys. In that case the shapefile will only record 2044 of them and disregard the others. That was my point. But again, 2044 is a lot of keys, and I haven't checked this much in practice. For example, when I set the radius_ to 1000 meters and use your "3 Rue de Bretonvilliers Paris" location, I get around 350-something keys, which is way below the 2044.
Another reason why one should use the requiredKeys_ input is to make the Gismo OSM components run quicker: for example, the above-mentioned 350-something keys will result in 350 values for each branch of the "OSM shapes" component's "values" output. Which means that if you have 10,000 shapes, the "OSM shapes" component will have 10,000 branches with 350 items on each branch (values). This can make all Gismo OSM components very heavy and significantly prolong the calculation process. With the requiredKeys_ input you may end up with only a couple of tens of items per branch.
Sorry for the long reply.…
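The data-size argument above can be sketched with plain Python dictionaries. This is not Gismo's actual code, and the tags and keys below are invented for illustration; it just shows why restricting the key list shrinks the per-branch item count proportionally:

```python
# Sketch: why requiredKeys_ shrinks the output. Each OSM shape carries one
# value slot for every key found in the area; restricting the keys cuts the
# per-branch item count. Tags/keys here are illustrative, not real OSM data.

all_keys = ["building", "height", "amenity", "name", "roof:shape"]
shapes = [
    {"building": "yes", "height": "21", "name": "School A"},
    {"building": "yes", "amenity": "cafe"},
]

def values_tree(shapes, keys):
    # One branch per shape, one item per key (None when the shape lacks the
    # tag), mirroring the branch/item layout of the "values" output.
    return [[shape.get(key) for key in keys] for shape in shapes]

full = values_tree(shapes, all_keys)
filtered = values_tree(shapes, ["building", "height"])

print(len(full[0]), "items per branch without requiredKeys_")   # 5
print(len(filtered[0]), "items per branch with requiredKeys_")  # 2
```

With hundreds of keys and thousands of shapes, the difference between the two layouts is exactly the performance gap described above.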
Added by djordje to Gismo at 8:57am on June 11, 2017
rtical Sky Component (VSC), and now Sky Exposure Factor (SEF). For everyone else following this post, this discussion has been ongoing in these other threads:
http://www.grasshopper3d.com/forum/topics/sky-view-factor-vs-vertical-sky-component?groupUrl=ladybug&xg_source=msg_com_gr_forum&groupId=2985220%3AGroup%3A658987&id=2985220%3ATopic%3A1377260&page=1#comments
https://github.com/mostaphaRoudsari/ladybug/issues/230
Grasshope, you have gone right to Oke, the grandfather of urban climatology, whose papers I have read several times, and yet somehow I always missed the finer details of the sky view calculation. From his definition, I had always thought of Sky View Factor as a purely solid-angle or "view factor" calculation in the sense of Mean Radiant Temperature. However, the numbers and formulas that you give here clearly show that Oke meant that this metric for quantifying and understanding the urban heat island must refer back to the urban surfaces and their orientation in relation to the sky. It cannot simply be the view from points in space.
To clarify the distinction in simple geometric terms: The key difference is that Sky Exposure refers to the sky seen by a point in space while Sky View refers to that seen by a surface. Both of them involve the calculation of either projected rays or solid angles to the sky (since they both are "view" calculations). However, while Sky Exposure treats each patch of the sky with relatively equal weight, Sky View weights these patches by their area after being projected into the plane of the surface being evaluated. In other words, the sky view calculation for a horizontal surface would give more importance to the sky patches that are directly overhead than those near the horizon, because these overhead patches are "in front" of the surface (as opposed to on the side).
To express this difference in the trigonometric terms you cite here:
Wall View = 0.5 (sin²θ + cos θ − 1) / cos θ
Wall Exposure = θ / π
In both cases:
θ = tan⁻¹(H / 0.5W)  (this is the solid-angle or ray-tracing calculation)
SkyViewOrExposure = 1 − 2 (WallViewOrExposure)
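As a quick sanity check of the algebra above, here is a small plain-Python sketch (not Ladybug code; the canyon dimensions are made up) confirming that the wall-view expression reduces to (1 − cos θ)/2 and that both sky metrics follow from 1 − 2·wall:

```python
import math

def wall_view(theta):
    # 0.5 * (sin^2(theta) + cos(theta) - 1) / cos(theta),
    # which simplifies algebraically to (1 - cos(theta)) / 2.
    return 0.5 * (math.sin(theta) ** 2 + math.cos(theta) - 1) / math.cos(theta)

def wall_exposure(theta):
    # Fraction of the hemisphere's angular sweep occupied by the wall.
    return theta / math.pi

def sky_from_wall(wall):
    # Symmetric canyon with two walls, so sky = 1 - 2 * wall.
    return 1 - 2 * wall

# Canyon of height H and width W; theta = atan(H / (0.5 * W)).
H, W = 10.0, 20.0
theta = math.atan(H / (0.5 * W))  # 45 degrees for this H/W

sky_view = sky_from_wall(wall_view(theta))
sky_exposure = sky_from_wall(wall_exposure(theta))
print(f"theta = {math.degrees(theta):.1f} deg")
print(f"sky view = {sky_view:.3f}, sky exposure = {sky_exposure:.3f}")
# sky view = 0.707, sky exposure = 0.500
```

Note that the sky view result comes out equal to cos θ, matching the canyon-floor formula in Oke's papers, while sky exposure is simply the unweighted angular fraction.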
To put this in simpler terms for the View Analysis component, all that I actually have to do to convert sky exposure to sky view is multiply each of the traced view rays by 2cos(ϕ), where ϕ is the angle between the surface normal and the given view ray being traced.
I have done this by adding this line of code () and I have verified that I get the values from Oke’s paper that you cite above, Grasshope. Accordingly, the View Analysis component now has the option to compute either Sky Exposure or Sky View. You can see this happening in this new example file:
http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Sky_Exposure,_Sky_View,_and_Sky_Component&slide=0&scale=1&offset=0,0
To (once and for all!) clearly define the difference between the three metrics at the top of my reply and to explain how to calculate each with Ladybug Honeybee:
Sky Exposure Factor - The percentage of the overlying hemispherical sky that is directly visible from a given POINT or set of POINTS. This is equivalent to a geometric solid angle calculation or ray-tracing calculation from points. It is useful for evaluating one's general visual connection to the sky at a given point and should be applied to cases where direct views to the sky are the parameter in question.
Sky exposure is calculated with the Ladybug_View Analysis component like so:
Sky View Factor – The percentage of the overlying hemispherical sky that is directly visible from a given SURFACE or set of SURFACES. While Sky Exposure treats each patch of the sky with relatively equal weight, Sky View weights these patches by their area projected into the plane of the surface being evaluated. In other words, Sky View for a horizontal surface would give more importance to the sky patches that are overhead and less to those near the horizon. Sky View is an important factor in modelling the urban heat island, since the inability of warm urban surfaces to radiate heat to a cool night sky is one of the largest contributors to the heat island effect.
Sky View is calculated with either the Ladybug_View Analysis component, like so:
Or with the Honeybee_Vertical Sky Component Recipe like so:
Sky Component - The portion of the daylight factor (at a surface indoors) contributed by luminance from the sky, excluding direct sunlight. This is essentially the same as Sky View Factor but it often incorporates a sky condition that is not uniform, such as a cloudy sky or sky that is more indicative of diffuse sky light. Another way of conceiving of this metric is a Daylight Factor calculation without any light bounces. It is useful for understanding the direct daylight contribution of diffuse skylight and, although many consider it an older (and perhaps outdated) daylight metric, it is still required by some codes and standards.
Sky Component can be calculated with the Honeybee_Vertical Sky Component Recipe like so:
In addition to the added capability in the view analysis component, I have revised the component description to include the definitions above. I have also corrected the Hydra example file in which I cite sky view as an urban heat island metric to use the new formula:
http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Sky_View_in_an_Urban_Canyon&slide=1&scale=1&offset=0,0
Finally, all of this discussion has made me realize that the Vertical Sky Component recipe for Honeybee might not always be evaluating VERTICAL sky. The sky component might be vertical, horizontal, or in any direction that the input test surface is placed and pts vectors are oriented. Accordingly, Mostapha, I think that we should change the name of the component to simply be “Sky Component” instead of “Vertical Sky Component”. Please let me know if you agree.
Thanks again, Grasshope, for all of the great work! All of this never would have made sense without your research.
-Chris…