n fact) according to a vast variety of "modes" PLUS the required clash detection (ALWAYS via trigonometry). In plain English: outline any collection of Breps and "apply" a truss that is topologically sound (planarization in the case of quads etc. is an added constraint). PLUS outline/solve what comes "next" after that truss (like those planar glazing "add-on" brackets of yours [the ones that need redesign, he he], or some roofing/facade skin system [secondary supports, corrugated sheet metal, insulation, final cladding, dogs and cats])
2. Imagine doing this in real life (nothing to do with "abstract" formations of "lines" or "shapes" or whatever). This means primarily adopting a BIM umbrella: in plain English AECOSim, Revit or Allplan (I'm a Bentley man so I use AECOSim + Generative Components). This also means using "in parallel" a top MCAD app for 1:1 details, FEA/FEM and the vast paraphernalia required for real-life studies destined for real-life projects (made with real-life money by real-life people). My choice: CATIA/Siemens NX.
3. What to send to Microstation (if not using Generative Components, that is) and/or CATIA? In what "state"? To do what exactly? For instance, even if you could design this feature-driven tensile membrane anchor custom node in Rhino (you can't), it could be 100% useless in CATIA:
4. Imagine masterminding ways to send them nested instance definitions of ... er ... a coordinate system (all you need). In plain English: since it is utterly pointless to send them nested blocks that can't be parametrically controlled (variations/modifications/PLM management/BOM/specs etc. etc.)... send them simply the "instructions" to place coordinate systems of components that ARE parametrically designed within Microstation and/or CATIA (the classic feature-driven design approach, blah blah). So GH solves the topology et al (working on data imported via, say, Excel sheets related to sizes of components etc. etc.) and sends to Microstation simply this (a myriad of "this", actually):
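To make that idea concrete, here is a minimal sketch (plain Python 3, outside GH) of what such placement "instructions" could look like: one row per component instance, carrying a component name plus an origin and two axis vectors. The component names, sample coordinates and CSV layout are all hypothetical, purely for illustration of the approach.

```python
# Minimal sketch: serialize coordinate-system placements for downstream
# parametric placement (e.g. in MicroStation/CATIA). All data is hypothetical.
import csv

placements = [
    # (component_name, origin, x_axis, y_axis)
    ("anchor_node_T1", (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)),
    ("anchor_node_T1", (4.0, 0.0, 3.2), (0.8, 0.6, 0.0), (-0.6, 0.8, 0.0)),
]

with open("placements.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["component", "ox", "oy", "oz",
                     "xx", "xy", "xz", "yx", "yy", "yz"])
    for name, o, x, y in placements:
        # The Z axis is implied (X cross Y) and can be recomputed by the receiver.
        writer.writerow([name, *o, *x, *y])
```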
I do hope that the gist of the "method" (the ONLY way to invite GH to the party) is clear.
best, Peter…
Let me tell you what I had to do and how I did it.
I have the following situation: an urban context with a square plot 40m x 40m surrounded by buildings.
If I extrude the plot I get 4 surfaces, and I need to calculate the minimum daily quantity of direct sunlight hours each test point receives in the period from the 22nd of April to the 22nd of August. For example, for the test point at index 21 of the surface with index 1 (I am just making these numbers up), the minimum is on the 27th of April, when the test point receives 8 hours of direct sunlight (also invented for the sake of the example); all the other days it receives more. So the values I have to find are these minimums for all the test points. Now, how to calculate these minimum quantities is a different issue from the topic of this post, and actually I managed it.
Continuing with the explanation of what I had to do... so I have only the initial plot, which generates 4 surfaces; then I want to test smaller plots generated by offsetting the original one by 4 m, and the corresponding 4 surfaces for each smaller plot.
So in this case I think I cannot use your suggestion, because the objects don't exist yet.
I managed it by creating a loop with Anemone. The loop generates an offset counter starting from 0 and running to 4, which I multiply by 4 to obtain the offsets 0, 4, 8, 12 and 16. Then, as you also suggested, I record the result each time with the Data Recorder and create a different branch for each result, with the index coming from the loop (0, 1, 2, 3 and 4), using the Flatten component.
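For what it's worth, here is a minimal sketch (plain Python, outside GH) of the bookkeeping described above: take the per-point minimum across all days, then store one result list per offset under its own branch index. The hours matrix is invented sample data; the actual sunlight-hours analysis is not shown.

```python
# hours[d][p] = direct sunlight hours that test point p receives on day d
# (22 April .. 22 August). Invented sample data: 3 days x 3 points.
hours = [
    [8.0, 3.5, 6.0],   # day 0
    [9.0, 2.0, 6.5],   # day 1
    [8.5, 4.0, 5.0],   # day 2
]

# The value to keep for each test point is its worst (minimum) day.
minima = [min(day[p] for day in hours) for p in range(len(hours[0]))]
print(minima)  # [8.0, 2.0, 5.0]

# One result list per loop iteration, keyed by the loop counter, mirrors
# the Anemone + Data Recorder branches (counter 0..4 -> offsets 0,4,8,12,16).
results = {}
for i in range(5):
    offset = i * 4       # metres; the analysis for this offset is not shown
    results[i] = minima  # stand-in for the minima list of the i-th plot
```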
In this image you can see all the surfaces saved in the way described above, with the test points that receive a minimum of 2.5 hours or more per day of direct sunlight in the period from the 22nd of April to the 22nd of August in white, and the test points that receive less in dark gray.
The main point of this discussion is just this: instead of using the tricky way I used (or the one you suggest) to analyze 20 geometries separately (because they shade each other; in this case 20, but there could be many more), it would be good if it were possible to input all the geometries at the same time, have them not shade each other, and so get all the results directly in one run and in a simpler way.
Francesco
…
requires four weather data inputs: air temperature (_dryBulbTemperature), relative humidity (relativeHumidity_), wind speed at 1.1 meters from the ground (windSpeed_) and mean radiant temperature (meanRadiantTemperature_). You can add values to the first three inputs from the Ladybug "Import Epw" component. For the last one (meanRadiantTemperature_), you can add it from Ladybug's "Outdoor Solar Adjusted Temperature Calculator" component, or let the "Thermal Comfort Indices" component calculate it. The two use different methods to calculate the final values.
I attached an example file below with the second option. For more precise calculations you can use Honeybee and Chris' microclimate maps. An icing on the cake for the end: one of the Ladybug developers yesterday released a set of Ladybug components for modelling in the ENVI-met application. ENVI-met is cutting-edge microclimate software, which can be downloaded for free. It opens up a number of advanced new analyses in the outdoor domain which couldn't have been done with the current Ladybug+Honeybee tools. So you can perform the simulation in the free ENVI-met 4 software, and then add the mean radiant temperature values from the ENVI-met simulation to the "Thermal Comfort Indices" component. Here is an example file. If you would like to go with this last approach, then it would be best to post a question about it in this topic.
1) You can make a polygonized tree. I haven't subtracted the trunk from the crown, but I guess it makes sense that it can be done.
2) In most solar-related simulations, a default albedo value of 0.2 is used. This corresponds to the average albedo of materials surrounding an urban or countryside location (concrete, grass, gravel, sand, asphalt...). However, the presence of snow can magnify the average albedo value several times. The "Sunpath Shading" component's albedo_ input has the ability to account for albedo due to the presence of snow if nothing is added to it (to the albedo_ input). As you are performing the analysis of PET in a horizontal plane, this will not affect your calculations.
3) Most thermal comfort indices require performing the analysis at 1.1 meters above the ground, which is considered the height of a standing person's center of gravity. The same goes for the PET index. So you are correct: you should place the analysis grid at 1.1 meters above the ground before adding it to the "Sunpath Shading" component.
It is worth mentioning that the "Thermal Comfort Indices" component used in this topic's PET_on_Grid2.gh and PET_on_Grid3.gh files is from last year, and is much slower than the newest one (VER 0.0.64 MAR 18 2017) used in the example attached below. Just a reminder, in case you have been using the older version of this component. Let me know if I misunderstood some of your questions, or if I missed answering some of them.
EDIT: sorry for posting a double reply. When I posted it the first time, only the links were visible, with no text. Something has been wrong with the Grasshopper Ning forum for the last couple of months.…
search for the residential type and surprisingly there are none. This may be so, but I'm surprised.
The location in the example is the Financial District of Manhattan. I assume there might not be too many purely residential buildings there. If you increase the radius to 300 meters it will find one. The OSMobject "Residential building" will look for mostly purely residential buildings, for example those in Chinatown or the Lower East Side. However, most of the time a building might be multi-purpose: shops on the ground floor, offices above, and residential apartments above them. Users sometimes avoid tagging these kinds of buildings in detail, and may just tag them with "building"="yes" rather than the type of the building (for example: "building"="multiuse"). This may be why you don't get many residential buildings. I guess the only solution to this issue is to add these tags yourself; then Gismo will instantly make use of them. I mentioned previously that I would create a couple of video tutorials, but I never seem to find enough time. I apologize for that. The process is actually quite simple.
Here is a small step-by-step tutorial on how to do that. It should take you about 2 minutes to tag your building and use that tag in Gismo.
Also office buildings. I imagine this is not up to you, but it can be kind of disappointing. I wanted, for example, to do some Ladybug analysis only on residential or office buildings ... pity.
"Office building" has not been added to "OSMobjects" dropdown list. I have just added it.However, whenever some sort of object is not defined in "OSMobjects" dropdown list, one can use the _requiredKey and requiredValues_ inputs of the "OSM tag" component:
I just tried looking for office buildings at the same location we have in the create_legend_example.gh file and it found 3 of them. There should probably be more, but it may be that nobody tagged the others with "building"="office".
The legend is nice, though I think it is not completely synchronized with the LegendBakeParameters: you need to provide a point for the legendPlane input and another for the titleOriginPt output of CreateLegend.
Unlike Ladybug, Gismo treats the title and the legend separately, so the legend's color bar has its own starting point (plane) while the title has its own. I sometimes found myself puzzled in Ladybug as to why this wasn't possible. Or did I misunderstand you?…
Added by djordje to Gismo at 12:33pm on May 8, 2017
hope this number will grow in the future. Currently available features are:
1) Creation of a 2d or 3d context for any kind of building-related analysis: automatically generate the 2d/3d surrounding buildings for the location where you would like to perform visibility, solar radiation, CFD or any other type of analysis. You need some other plugin for the last three, like Ladybug; Gismo only creates the context, i.e. the surroundings! The "automatic generation" process also includes creation of the local topography (terrain) along with the buildings.
2) Identification of certain 2d or 3d elements in the created context. For example: selection of all hotels, parks, hospitals, restaurants, residential buildings etc.
3) Performing direct terrain analysis (hillshading, slope, ruggedness, roughness, water flow...)
4) Creation of terrain shading masks and horizon files for further solar and photovoltaics analysis.
Gismo would be very grateful for any suggestions, improvements, bug reports and testing in the following period. In case you are willing to provide any of these, the requirements, installation steps and .gh example files can be found here, here and here.
Thank you in advance !!…
Added by djordje to Gismo at 9:10am on January 29, 2017
can divide a surface with any imaginable pattern. 3. Here I provide a way to do it via Lunchbox ... it works, but it is fixed, so we need to play with data trees in order to create the appropriate pattern per case. 4. The other component is a C# one that does many things besides dividing any collection of points with numerous patterns (see the OTHER pattern I made for you). 5. You must decompose a polysurface into pieces in order to work on the subdivisions. 6. I also give another definition that could act as a tutorial on how to process point sets via standard GH components and classic methods.
Let me know if any of this seems blurry to you: if so, I could write a definition using classic GH components - but you would lose the division pattern variations.
best, Peter
…
and export the geometry out to VVVV to render it LIVE! RawRRRR. In this case, the digital audio workstation is Ableton Live, a leading industry standard in contemporary music production.
the good news is that VVVV and Ableton Live Lite are both free.
https://www.ableton.com/en/products/live-lite/
I am not trying to use an iPad as a controller for Grasshopper. I want to work with a timeline (similar to Maya, Ableton or any other DAW (digital audio workstation)) inside Grasshopper in an intuitive way. Currently there is no way that I know of to SEQUENCE your definition the way you want.
No more cumbersome export/import workflows... I don't need hyperrealistic renderings most of the time. So much time invested in googling the right way to import, export... mesh settings... this workflow works for some, for some not... that workflow works if... and still you cannot render it live, nor change the sequence of instructions WHILE THE VIDEO is playing. And I think no one wants to present the Rhinoceros viewport. BUT the vvvv viewport is different: it is used for VJing and for many custom audio-visual installations for events, done professionally. You can see an example of how sound and visuals come together in this post, using only VVVV and Ableton. http://vvvv.org/documentation/meso-amstel-pulse
I propose a NEW method: make a definition, wire it to Ableton, draw in some MIDI notes, and see it through VVVV LIVE while you sequence the animation the WAY YOU WANT IT TO BE SEEN DURING YOUR PRESENTATION FROM THE BEGINNING. Make a whole set of sequences in Ableton, go back and change some notes in Ableton, and the whole sequence will change RIGHT IN FRONT of you. Yes, you can just add some sound anywhere in the process, or take the sound waves (square, saw, whatever), or take the audio and influence geometric parameters using custom patches via VVVV. I cannot even begin to tell you how sophisticated digital audio sound design technology has become in the last ten years... this is just one example, which isn't even that advanced by today's standards in sound design (and the famous producers would say it's not about the tools at all). http://www.youtube.com/watch?v=Iwz32bEgV8o
I just want to point out that Grasshopper shares the same interface paradigm with VVVV (1998) and Max for Live, a plugin inside Ableton. AudioMulch is yet another one that shares this interface of plugging components into each other and allows users to create their own sound instruments. VVVV is built on VB, I believe.
so the current wish list is ...
1) Grasshopper receives a sequence of commands from Ableton - DONE
thanks to Sebastian's OSCglue vvvv patch and this one: http://vvvv.org/contribution/vvvv-and-grasshopper-demo-with-ghowl-udp
after this is done, it's a matter of trimming and splitting the incoming string.
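As a rough illustration, here is a sketch (plain Python) of that trimming and splitting, assuming the UDP/OSC bridge delivers each message as plain text with an address followed by values (the exact message format is an assumption):

```python
# Parse a text message like "/ableton/track1/cc 64 0.37" into usable parts.
msg = "/ableton/track1/cc 64 0.37"

parts = msg.strip().split()
address = parts[0]           # the OSC-style address
cc_number = int(parts[1])    # MIDI CC number, 0..127
cc_value = float(parts[2])   # value sent by the patch

print(address, cc_number, cc_value)
```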
2) translate numeric oscillation from ableton to change GH values
The video below shows what the control interface for both values (numbers) and MIDI notes looks like.
https://vimeo.com/19743303
3) MIDI note in = toggle GH component (this one could be tricky)
For this... I am thinking it would be great if it were possible to make a "MIDI learn" function in Grasshopper, where one can DROP IN A COMPONENT LIKE GALAPAGOS OR TIMER and assign the component to a signal in, in this case a MIDI note. There are 128 MIDI notes in total (http://www.midimountain.com/midi/midi_note_numbers.html), and this is only for one channel. There are infinite channels in Ableton; I usually use 16.
I have already figured out a way to send a string into Grasshopper from Ableton Live. But the problem is how to make Grasshopper listen, not just take it in: interpret MIDI note and CC value changes (usually running from 0 to 127) and perform certain actions.
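One way to picture such a "MIDI learn" mapping: remap the 7-bit CC value to a parameter's own range, and let a note-on flip a toggle. A minimal sketch (plain Python) with hypothetical parameter names; the actual GH component wiring is not shown:

```python
def cc_to_range(cc_value, lo, hi):
    """Remap a 7-bit MIDI CC value (0..127) to the range [lo, hi]."""
    return lo + (hi - lo) * (cc_value / 127.0)

bindings = {"slider_radius": (0.0, 50.0)}   # parameter -> (min, max)
toggles = {"preview_bake": False}           # note-driven on/off states

def on_cc(name, cc_value):
    """A CC message drives the bound parameter's value."""
    lo, hi = bindings[name]
    return cc_to_range(cc_value, lo, hi)

def on_note(name):
    """Each note-on flips the bound toggle."""
    toggles[name] = not toggles[name]
    return toggles[name]

print(on_cc("slider_radius", 64))   # ~25.2
print(on_note("preview_bake"))      # True
```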
Basically what I am trying to achieve is this: some time passes, then a parameter is set to change from value 0 to 50, for example. Then some time passes again, then another parameter gets "previewed", then baked. I have seen some examples of HoopSnake, but I couldn't tell whether you can really control the values in a clear x/y graph where x is time and y is the value; yet this would be considered a basic feature of modulation and automation in music production. NVM, it's been DONE by Mr Heumann. https://vimeo.com/39730831
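That x/y (time/value) graph is exactly a DAW automation envelope. A minimal sketch (plain Python) of linearly interpolated keyframes, with invented sample keys:

```python
keys = [(0.0, 0.0), (2.0, 0.0), (5.0, 50.0), (8.0, 50.0)]  # (time, value)

def envelope(t, keys):
    """Return the automated value at time t (linear interpolation)."""
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)
    return keys[-1][1]

print(envelope(3.5, keys))  # 25.0: halfway along the 2s..5s ramp from 0 to 50
```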
4) send points, lines, surfaces and meshes back out to VVVV
5) render it using VVVV and play with the enormous collection of components in VVVV... it's been around since 1998, for the sake of awesomeness.
This kind of digital operation-hardware connection is usually what's done in digital music production solutions. I did look into MIDI controller - Grasshopper work, and I know it's been done, but that has the obvious limitation of not being precise, as it only takes values from 0 to 127. I am thinking that MIDI can still be useful here, because I can program a very precise and complex sequence with ease from music production software like Ableton Live.
This is ongoing design research for a performative exhibition due in Bochum, Germany, this January. I will post the definition if I get somewhere. A good place to start for me is the nesting sliders by Monique: http://www.grasshopper3d.com/forum/topics/nesting-sliders
…
ers can be applied from the right-click context menu of either a component's input or output parameters. With the exception of <Principal> and <Degrees> they work exactly like their corresponding Grasshopper component. When an I/O Modifier is applied to a parameter, a visual Tag (icon) is displayed. If you hover over a Tag, a tooltip will be displayed showing what it is and what it does.
The full list of these Tags:
1) Principal
An input with the Principal Icon is designated the principal input of a component for the purposes of path assignment.
For example:
2) Reverse
The Reverse I/O Modifier will reverse the order of a list (or lists in a multiple path structure)
3) Flatten
The Flatten I/O Modifier will reduce a multi-path tree down to a single list on the {0} path
4) Graft
The Graft I/O Modifier will create a new branch for each individual item in a list (or lists)
5) Simplify
The Simplify I/O Modifier will remove the path overlap shared amongst all branches. [Note that a single branch does not share any overlap with anything else.] (A short sketch of what Flatten, Graft and Simplify do to a data tree follows this list.)
6) Degrees
The Degrees Input Modifier indicates that the numbers received are actually measured in Degrees rather than Radians. Think of it as a preference setting for each angle input on a Grasshopper component, stating that you prefer to work in Degrees. There is no output option, as this is only available on angle inputs.
7) Expression
The Expression I/O Modifier allows you to change the input value by evaluating an expression such as -x/2, which will halve the input and make it negative. If you hover over the Tag, a tooltip will be displayed with the expression. Since the release of GH version 0.9.0068, all I/O Expression Modifiers use "x" instead of the nickname of the parameter.
8) Reparameterize
The Reparameterize I/O Modifier only works on lines, curves and surfaces, forcing the domains of all geometry into the [0.0 to 1.0] range.
9) Invert
The Invert Input Modifier works in a similar way to a NOT gate in Boolean logic, negating the input. A good example of when to use this is on [Cull Pattern], where you wish to invert the logic to get the opposite results. There is no output option, as this is only available on Boolean inputs.
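To ground the three tree modifiers above, here is a toy model (plain Python) where paths are tuples and branches are lists. It mimics the semantics described in this list, not Grasshopper's real DataTree API:

```python
tree = {(0, 0): ["a", "b"], (0, 1): ["c"], (0, 2): ["d", "e"]}

def flatten(tree):
    """Flatten: merge every branch into a single list on the {0} path."""
    return {(0,): [x for path in sorted(tree) for x in tree[path]]}

def graft(tree):
    """Graft: give each item its own new branch below its current path."""
    return {path + (i,): [item]
            for path in tree for i, item in enumerate(tree[path])}

def simplify(tree):
    """Simplify: drop path elements shared by ALL branches.
    A single branch shares no overlap, so it is left untouched."""
    paths = list(tree)
    if len(paths) < 2:
        return dict(tree)
    shared = [i for i in range(min(len(p) for p in paths))
              if len({p[i] for p in paths}) == 1]
    return {tuple(v for i, v in enumerate(p) if i not in shared): tree[p]
            for p in paths}

print(flatten(tree))   # {(0,): ['a', 'b', 'c', 'd', 'e']}
print(graft(tree))     # {(0, 0, 0): ['a'], (0, 0, 1): ['b'], ...}
print(simplify(tree))  # {(0,): ['a', 'b'], (1,): ['c'], (2,): ['d', 'e']}
```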
…
ing the maps to the broader community.
At the moment, there are just a few known issues left that I have to fix for complex geometric cases but they should run smoothly for most energy models that you generate with Honeybee. Within the next month, I will be clearing up these last issues and, by the end of the month, there will be an updated youtube tutorial playlist on the comfort tools and how to use them.
In the meantime, there's an updated example file (http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Indoor_Microclimate_Map) and I wanted to get you all excited with some images and animations coming out of the design part of my thesis. I also wanted to post some documentation of all of the previous research that has made these climate maps possible and give out some much-deserved thanks. To begin, this image gives you a sense of how the thermal maps are made by integrating several streams of data from EnergyPlus:
(https://drive.google.com/file/d/0Bz2PwDvkjovJaTMtWDRHMExvLUk/view?usp=sharing)
To get you excited, this youtube playlist has a whole bunch of time-lapse thermal animations that a lot of you should enjoy:
https://www.youtube.com/playlist?list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
To give a brief summary of what you are looking at in the playlist, there are two proposed designs for completely passive co-habitation spaces in New York and Los Angeles.
These diagrams explain the Los Angeles design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJM0JkM0tLZ1kxUmc/view?usp=sharing)
And this video gives you an idea of how it thermally performs:
These diagrams explain the New York design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJS1BZVVZiTWF4MXM/view?usp=sharing)
And this video shows you the thermal performance:
Now to credit all of the awesome people that have made the creation of these thermal maps possible:
1) As any HB user knows, the open-source engines and libraries under the hood of HB are EnergyPlus and OpenStudio, and the incredible thermal richness of these maps would not have been possible without these DoE teams creating such a robust modeler, so a big credit is definitely due to them.
2) Many of the initial ideas for these thermal maps come from an MIT Masters thesis completed a few years ago by Amanda Webb, called "cMap". Even though these cMaps only took into account surface temperature from E+, it was the viewing of her radiant temperature maps that initially touched off the series of events that led to my thesis, so a great credit is due to her. You can find her thesis here (http://dspace.mit.edu/handle/1721.1/72870).
3) Since A. Webb's thesis, there have been two key developments that made the high resolution of the current maps believable as a good approximation of the actual thermal environment of a building. The first is a PhD thesis by Alejandra Menchaca (also conducted here at MIT) that developed a computationally fast way of estimating sub-zone air temperature stratification. The method, which works simply by weighing the heat gain in a room against the incoming airflow, was validated by many CFD simulations over the course of Alejandra's thesis. You can find her final thesis document here (http://dspace.mit.edu/handle/1721.1/74907).
4) The other main development since A. Webb's thesis that made the radiant map much more accurate is a fast means of estimating the radiant temperature increase felt by an occupant sitting in the sun. This method was developed by some awesome scientists at the UC Berkeley Center for the Built Environment (CBE), including Tyler Hoyt, who has been particularly helpful to me by supporting the CBE's GitHub page. The original paper on this fast means of estimating the solar temperature delta can be found here (http://escholarship.org/uc/item/89m1h2dg), although they should have an official publication in a journal soon.
5) The ASHRAE comfort models under the hood of LB+HB are all derived from the JavaScript of the CBE comfort tool (http://smap.cbe.berkeley.edu/comforttool). A huge chunk of credit definitely goes to this group, and I encourage any other researchers who are getting deep into comfort to check the code resources on their GitHub page (https://github.com/CenterForTheBuiltEnvironment/comfort_tool).
6) And, last but not least, a huge share of credit is due to Mostapha and all members of the LB+HB community. It is because of the resources and help that Mostapha initially gave me that I learned how to code in the first place, and the knowledge of a community that would use the things I developed was, by far, the biggest motivation throughout this thesis and all of my LB efforts.
Thank you all and stay awesome,
-Chris…