as mine but couldn't manage to make it work.
The following script works in Rhino's Python editor, but not in GhPython.
Note that I have imported a library, and it seems to import fine in GhPython.
--------------------------------------------------------------------------------------------------
from khepri.rhino import *

# Apply f to every quad of neighboring points in the grid of points ptss.
def iterate_quads(f, ptss):
    return [[f(p0, p1, p2, p3)
             for p0, p1, p2, p3
             in zip(pts0, pts1, pts1[1:], pts0[1:])]
            for pts0, pts1
            in zip(ptss, ptss[1:])]

# Build one hexagonal cell per quad of the surface division.
def iterate_hexagono(pts, n, v):
    return iterate_quads(lambda p0, p1, p2, p3: hexagono_quad(p0, p1, p2, p3, n, v), pts)

def hexagono_quad(p0, p1, p2, p3, n, v):
    # Extrude a line through pts and keep only the part inside the solid v.
    def chapa(pts):
        return intersection(extrusion(line(pts), 280),
                            shape_from_ref(v.copy_ref(v.realize()._ref)))
        #return extrusion(line(pts), -40)
    # Endpoints of the top, bottom, side, and connector segments of the
    # hexagonal cell inscribed in the quad p0, p1, p2, p3.
    topo = (intermediate_loc(p3, p2) + vx(distance(p3, p2)/4 * n),
            intermediate_loc(p3, p2) - vx(distance(p3, p2)/4 * n))
    base = (intermediate_loc(p0, p1) + vx(distance(p0, p1)/4 * n),
            intermediate_loc(p0, p1) - vx(distance(p0, p1)/4 * n))
    lateral_esq = (intermediate_loc(p3, p0),
                   intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n))
    lateral_dir = (intermediate_loc(p2, p1),
                   intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n))
    conex_1 = (intermediate_loc(p3, p2) - vx(distance(p3, p2)/4 * n),
               intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n))
    conex_2 = (intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n),
               intermediate_loc(p0, p1) - vx(distance(p0, p1)/4 * n))
    conex_3 = (intermediate_loc(p0, p1) + vx(distance(p0, p1)/4 * n),
               intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n))
    conex_4 = (intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n),
               intermediate_loc(p3, p2) + vx(distance(p3, p2)/4 * n))
    return (chapa(topo), chapa(base), chapa(lateral_esq), chapa(lateral_dir),
            chapa(conex_1), chapa(conex_2), chapa(conex_3), chapa(conex_4))

s = prompt_shape("Escolha superficie")
v = prompt_shape("Escolha solido")
iterate_hexagono(map_surface_division(lambda p: p, s, 5, 15), 0.5, v)
---------------------------------------------------------------------------------------------------
I imported the geometry from another CAD program; I then select the surface and the solid to perform a pattern iteration on the surface, constrained inside the solid as an internal structure.
The problem is that the surface comes from the other software with weird u, v, and normal directions, so I wanted to pass it through Grasshopper to get more control, and also to perform other computations in GH on the GhPython output. Sorry, maybe I'm overcomplicating. All I want is the GH inputs working in GhPython.
I'll attach the GH definition. I need help with the GhPython component; the rest is just me fooling around.
When I try to run the script in GhPython I get:
Runtime error (MissingMemberException): 'NurbsSurface' object has no attribute 'realize'
Traceback:
line 39, in map_surface_division, "<string>"
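From what I can tell, the error means that in GhPython the s input arrives as a plain Rhino.Geometry.NurbsSurface, not as the khepri shape wrapper that prompt_shape returns inside Rhino, so it has no realize method. Here is a minimal sketch of one bridge I'm considering, assuming (and this is only an assumption about the module's API) that shape_from_ref accepts a Rhino object id:
--------------------------------------------------------------------------------------------------
import Rhino
import scriptcontext as sc

# GhPython inputs are raw RhinoCommon geometry; bake the surface into
# the Rhino document so there is a reference for khepri to wrap.
ghdoc = sc.doc                       # keep a handle on the GH document
sc.doc = Rhino.RhinoDoc.ActiveDoc    # switch to the Rhino document
surface_id = sc.doc.Objects.AddSurface(s)  # s = the GhPython surface input
sc.doc = ghdoc                       # switch back so GH keeps working

# Hypothetical: only valid if khepri's shape_from_ref takes an object id.
surface_shape = shape_from_ref(surface_id)
--------------------------------------------------------------------------------------------------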
I'm also attaching the module I've imported.
Any help will be much appreciated, and sorry about my English.
Thanks!
…
can divide a surface with any pattern imaginable. 3. Here I provide a way to do it via Lunchbox ... it works, but it is fixed, so we need to play with data trees in order to create the appropriate pattern for each case. 4. The other component is a C# one that does many things besides dividing any collection of points with many patterns (see the ANDRE pattern I made for you). 5. You need to explode a polysurface into pieces in order to work on the subdivisions. 6. I'm also including another definition that could act as a tutorial on how to process sets of points via standard GH components and classic methods.
Let me know if any of this seems fuzzy to you: if so, I could write a definition using classic GH components, but you would lose the division-pattern variations.
best, Peter
…
and export the geometry out to VVVV to render it LIVE! RawRRRR. In this case, paired with a digital audio workstation: Ableton Live, a leading industry standard in contemporary music production.
The good news is that VVVV and Ableton Live Lite are both free.
https://www.ableton.com/en/products/live-lite/
I am not trying to use an iPad as a controller for Grasshopper. I want to work with a timeline (similar to Maya, Ableton, or any other DAW (digital audio workstation)) inside Grasshopper in an intuitive way. Currently there is no way of SEQUENCING your definition the way you want to see it, that I know of.
No more cumbersome export/import workflows... I don't need hyperrealistic renderings most of the time. So much time invested in googling the right way to import, export ... mesh settings... this workflow works for some, for some not... that workflow works if... and still you cannot render it live nor change the sequence of instructions WHILE THE VIDEO is playing. And I think no one wants to present the Rhinoceros viewport. BUT the VVVV viewport is different: it is used for VJing and many custom audiovisual installations for events, done professionally. You can see an example of how sound and visuals come together in this post, using only VVVV and Ableton. http://vvvv.org/documentation/meso-amstel-pulse
I propose a NEW method: make a definition, wire it to Ableton, draw in some MIDI notes, and see it through VVVV LIVE while you sequence the animation the WAY YOU WANT IT TO BE SEEN DURING YOUR PRESENTATION FROM THE BEGINNING. Make a whole set of sequences in Ableton, go back and change some notes in Ableton, and the whole sequence will change RIGHT IN FRONT of you. Yes, you can just add some sound anywhere in the process. Or take the sound waves (square, saw, whatever), or take the audio and influence geometric parameters using custom patches via VVVV. I cannot even begin to tell you how sophisticated digital audio sound design technology has gotten in the last ten years... this is just one example, which isn't even that advanced by today's standards in sound design (and the famous producers would say it's not about the tools at all): http://www.youtube.com/watch?v=Iwz32bEgV8o
I just want to point out that Grasshopper shares the same interface paradigm with VVVV (1998) and Max for Live, a plugin inside Ableton. AudioMulch is yet another one that shares this interface of plugging components into each other and allows users to create their own sound instruments. VVVV is built on VB, I believe.
So the current wish list is...
1) Grasshopper receives a sequence of commands from Ableton DONE
thanks to Sebastian's OSCglue vvvv patch and this one: http://vvvv.org/contribution/vvvv-and-grasshopper-demo-with-ghowl-udp
After this is done, it's a matter of trimming and splitting the incoming string.
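For example, here is a minimal GhPython sketch of that trimming and splitting, assuming the message arrives as one space-separated string like "/note 60 127" (the exact format depends on the patch, so this layout is just an assumption):
--------------------------------------------------------------------------------------------------
# msg would be the string input fed by the gHowl UDP component;
# the "/note 60 127" layout is assumed for illustration only.
msg = "/note 60 127"

parts = msg.strip().split()   # trim whitespace, split on spaces
address = parts[0]            # e.g. "/note"
note = int(parts[1])          # MIDI note number (0-127)
velocity = int(parts[2])      # velocity or CC value (0-127)
--------------------------------------------------------------------------------------------------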
2) translate numeric oscillation from Ableton into changing GH values
The video below shows what the control interface for both values (numbers) and MIDI notes looks like.
https://vimeo.com/19743303
3) MIDI note in = toggle a GH component (this one could be tricky)
For this... I am thinking it would be great if it were possible to make a "MIDI learn" function in Grasshopper, where one can DROP IN A COMPONENT LIKE GALAPAGOS OR TIMER and assign the component to an incoming signal, in this case a MIDI note. There are 128 MIDI notes in total (http://www.midimountain.com/midi/midi_note_numbers.html), and this is only for one channel. There are infinite channels in Ableton; I usually use 16.
I have already figured out a way to send strings into Grasshopper from Ableton Live. But the problem is how to get Grasshopper to listen, not just take the data in: to interpret MIDI note and CC value changes (which run from 0 to 127) and perform certain actions.
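As a rough sketch, the kind of "listening" I imagine is a dispatch table mapping note numbers to actions; the function names here are made up purely for illustration:
--------------------------------------------------------------------------------------------------
# Hypothetical actions; in practice these would set sliders, toggle
# previews, or trigger baking inside the GH document.
def set_parameter(value):
    print("set parameter to", value)

def bake_geometry(value):
    print("bake!")

actions = {60: set_parameter, 61: bake_geometry}  # MIDI note -> action

def handle_message(note, value):
    if note in actions:           # ignore notes that were not "learned"
        actions[note](value)

handle_message(60, 50)  # note 60 arrives carrying value 50
--------------------------------------------------------------------------------------------------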
Basically, what I am trying to achieve is this: some time passes, then a parameter is set to change from value 0 to 50, for example. Then some time passes again, then another parameter becomes "previewed", then baked. I have seen some examples of HoopSnake, but I couldn't tell whether you can really control the values in a clear x and y graph where x is time and y is the value. This would be considered a basic feature of modulation and automation in music production. Never mind, it's been DONE by Mr. Heumann: https://vimeo.com/39730831
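That x-is-time, y-is-value graph is just a breakpoint envelope, and the math is plain linear interpolation; a tiny sketch of the idea (pure Python, names and numbers illustrative):
--------------------------------------------------------------------------------------------------
# Automation envelope: breakpoints are (time, value) pairs, linearly
# interpolated, exactly like an automation lane in a DAW.
def envelope(breakpoints, t):
    for (t0, v0), (t1, v1) in zip(breakpoints, breakpoints[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    # before the first or after the last breakpoint, hold the end value
    return breakpoints[0][1] if t < breakpoints[0][0] else breakpoints[-1][1]

points = [(0.0, 0.0), (4.0, 50.0), (8.0, 50.0)]  # ramp to 50, then hold
print(envelope(points, 2.0))  # -> 25.0
--------------------------------------------------------------------------------------------------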
4) send points, lines, surfaces and meshes back out to VVVV
5) render it using VVVV and play with the enormous collection of components in VVVV... it's been around since 1998, for the sake of awesomeness.
This kind of digital operation-to-hardware connection is usually what's done in digital music production solutions. I did look into MIDI controller - Grasshopper work, and I know it's been done, but that has the obvious limitation of not being precise, since it only takes values from 0 to 127. I am thinking that MIDI can be useful for this because I can program very precise and complex sequences with ease in music production software like Ableton Live.
This is ongoing design research for a performative exhibition due in Bochum, Germany, this January. I will post the definition if I get somewhere. A good place to start for me is the nesting sliders by Monique: http://www.grasshopper3d.com/forum/topics/nesting-sliders
…
ing the maps to the broader community.
At the moment, there are just a few known issues left that I have to fix for complex geometric cases, but the maps should run smoothly for most energy models that you generate with Honeybee. Within the next month I will be clearing up these last issues, and by the end of the month there will be an updated YouTube tutorial playlist on the comfort tools and how to use them.
In the meantime, there's an updated example file (http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Indoor_Microclimate_Map), and I wanted to get you all excited with some images and animations coming out of the design part of my thesis. I also wanted to post some documentation of all of the previous research that has made these climate maps possible and give out some much-deserved thanks. To begin, this image gives you a sense of how the thermal maps are made by integrating several streams of data from EnergyPlus:
(https://drive.google.com/file/d/0Bz2PwDvkjovJaTMtWDRHMExvLUk/view?usp=sharing)
To get you excited, this youtube playlist has a whole bunch of time-lapse thermal animations that a lot of you should enjoy:
https://www.youtube.com/playlist?list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
To give a brief summary of what you are looking at in the playlist, there are two proposed designs for completely passive co-habitation spaces in New York and Los Angeles.
These diagrams explain the Los Angeles design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJM0JkM0tLZ1kxUmc/view?usp=sharing)
And this video gives you an idea of how it thermally performs:
These diagrams explain the New York design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJS1BZVVZiTWF4MXM/view?usp=sharing)
And this video shows you the thermal performance:
Now to credit all of the awesome people that have made the creation of these thermal maps possible:
1) As any HB user knows, the open-source engines and libraries under the hood of HB are EnergyPlus and OpenStudio, and the incredible thermal richness of these maps would not have been possible without the DoE teams creating such a robust modeler, so a big credit is definitely due to them.
2) Many of the initial ideas for these thermal maps come from an MIT Masters thesis completed a few years ago by Amanda Webb, called "cMap". Even though these cMaps only took into account surface temperature from E+, it was the viewing of her radiant temperature maps that initially touched off the series of events that led to my thesis, so a great credit is due to her. You can find her thesis here (http://dspace.mit.edu/handle/1721.1/72870).
3) Since A. Webb's thesis, there have been two key developments that made the high resolution of the current maps believable as a good approximation of the actual thermal environment of a building. The first is a PhD thesis by Alejandra Menchaca (also conducted here at MIT) that developed a computationally fast way of estimating sub-zone air temperature stratification. The method, which works simply by weighing the heat gain in a room against the incoming airflow, was validated by many CFD simulations over the course of Alejandra's thesis. You can find her final thesis document here (http://dspace.mit.edu/handle/1721.1/74907).
4) The other main development since A. Webb's thesis that made the radiant map much more accurate is a fast means of estimating the radiant temperature increase felt by an occupant sitting in the sun. This method was developed by some awesome scientists at the UC Berkeley Center for the Built Environment (CBE), including Tyler Hoyt, who has been particularly helpful to me by supporting the CBE's GitHub page. The original paper on this fast means of estimating the solar temperature delta can be found here (http://escholarship.org/uc/item/89m1h2dg), although they should have an official publication in a journal soon.
5) The ASHRAE comfort models under the hood of LB+HB all are derived from the javascript of the CBE comfort tool (http://smap.cbe.berkeley.edu/comforttool). A huge chunk of credit definitely goes to this group and I encourage any other researchers who are getting deep into comfort to check the code resources on their github page (https://github.com/CenterForTheBuiltEnvironment/comfort_tool).
6) And, last but not least, a huge share of credit is due to Mostapha and all members of the LB+HB community. It is because of the resources and help that Mostapha initially gave me that I learned how to code in the first place, and the knowledge of a community that would use the things I developed was, by far, the biggest motivation throughout this thesis and all of my LB efforts.
Thank you all and stay awesome,
-Chris…
ll geometry.
The difference with programs like Inventor is that they are made for production, regardless of the fabrication method. I won't go into detail about that, and instead focus on the modeling process.
In this little model, the starting point is actually a bit obvious: the foundation.
The only contents in the 3dm file are 27 lines. These indicate the location of each footing, and the direction of the tilt of each column. Everything else is defined in GH with the use of numbers as input parameters.
Needless to say, instead of those lines you could generate lines and control the number of columns and panels, and hence establish their layout, with any algorithmic or non-algorithmic criteria you please. That marks a major difference between GH and Inventor.
You can generate geometry with Inventor via scripting/customization (beyond iLogic), with transient graphics for visual feedback similar to GH's default red previews. However, Inventor's modeling functions are not set up to input and output data trees. I won't go into detail on that, but suffice it to say that the data-tree associativity of GH was the first major difference I noticed. I've used other apps with node-diagram interfaces, like Digital Fusion for non-linear video editing, since the late 90s, so the canvas did not call my attention when I first started using GH.
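To make the data-tree point concrete, here is a minimal GhPython sketch (with made-up elevation values) of the structure GH components pass around natively, one branch per column:
--------------------------------------------------------------------------------------------------
# Build a Grasshopper data tree: one branch per column, each branch
# holding that column's connection elevations (values are illustrative).
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

elevations = [[0.0, 3.2, 6.4], [0.0, 3.5, 7.0]]  # hypothetical numbers
tree = DataTree[float]()
for i, branch in enumerate(elevations):
    tree.AddRange(branch, GH_Path(i))  # one column's data per path
--------------------------------------------------------------------------------------------------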
Anyways, here's a screen capture of the foundational lines:
In the first group of components, the centerlines of the rear columns are modeled:
And the locations in elevation for connection points are set. Those elevations were just numbers I copied from Excel, but you can obviously control that any way you please. I was just trying to model this quickly.
The same was done for the front columns:
The above, believe it or not, took me the first 5 hours to get.
Here's a screen capture of what the model and definition looked like after 4 hours, not much:
If you're interested, next post I can get into the sketching part you mentioned, which can be a bit cumbersome with GH, but not terribly so.
I wouldn't say that using GH to do this little model was cumbersome, it just needed some thinking at the beginning. You do similar initial thinking when working with a feature-based modeler.…
Added by Santiago Diaz at 12:44am on February 24, 2011
Hi,
I have a similar problem. I tried the solution with Rhino 4 SR9 and Rhino 5 (64-bit); it works fine with the first, but the second doesn't work and gives me an error. Thanks in advance for your reply.
ns, which have a certain distance from the edge of the slab. I have a circuit, but the problem is the missing plane there, and also a solid. I need help relatively quickly. About my setup: I work with Rhino 5 and Grasshopper version 28/09/2012, build 0.9.0014. Here, schematically: the black bar should be the flat slab, and gray the solid on top of it.
…
chitects, Asymptote Architecture, Mario Bellini Architects and others to design the paneling systems.
Get a quick introduction to Rhino and Grasshopper.
Learn how to digitally reconstruct data from 3D scanners and even from regular photographs.
Experience how to print 3D models using state-of-the-art machines.
Get the opportunity to perform basic energy and performance analysis of your designs.
All this will be provided in a comprehensive 5-day workshop taught by international experts in the field as well as local researchers.
Organized by AUC (American University in Cairo) and GMVS (Geometric Modeling and Visualization Center)
…
Added by Zaghloul4d at 6:48pm on December 22, 2010
hours.
The data for contextualizing the façade will be:
Vehicles (ISD: input social data)
People (ISD: input social data)
Adjacent buildings (UI: urban input)
Sun (radiation and illumination) (EFI: energetic flow input)
Solar and thermal energy creation (ECI: energetic contribution input)
Specific objectives:
Each attendee will generate a façade contextual to those 5 inputs.
They will get to know the Grasshopper platform.
They will understand the concepts of generative design.
They will use the concepts of object-oriented programming (OOP).
They will generate renders and physical models of the façade (digital fabrication).
Costs: $3,250 students; $4,180 graduate students and professors; $4,830 professionals
Aulas VI, room 6205, ITESM CEM
Information: (55)-34449396 mexdf@krfr.org bioarchitecturestudio@gmail.com
For more information, visit us at:
Workshop >Fachadas Contextuales< (Contextual Façades) KRFR|SEED International Research Network OR/gan
http://www.bioarchitecturestudio.wordpress.com
…