to incorporating math and geometry in computational design education, Paneling Tools
Marlo Ransdell, PhD, Creative Director at FSU, Digital Fabrication in Design Research and Education
Andy Payne, LIFT architects | Harvard GSD | FireFly
Jay H Song, Chair, Jewelry School of Design, Jewelry as Personal Expression, Extra+Ordinary@Jewelry.com
Pei-Jung (P.J.) Chen, Professor of Jewelry, SCAD
Gustavo Fontana, designer/co-founder nimbistand, Design, Develop, and Sell Products on Your Own
Joe Anand, CEO MecSoft Corporation, RhinoCAM
Julian Ossa, Chair, Industrial Design Director, Design: A Life Option at Full Steam!, UPB
Minche Mena, SHINE Architecture, Principal
J. Alstan Jakubiec, Daylighting and Environmental Performance in Architectural Design Solemma, LLC
Carlos Garnier R&D Director / Jaime Cadena – General Director, Plug Design, www.plugdesign.com.mx
Mario Nakov, www.chaosgroup.com [ V-Ray ]
Andres Gonzalez, RhinoFabStudio
Workshops:
o) Paneling Tools
o) RhinoCAM
o) Rhinology in Design, for Jewelry
o) Footwear
o) V-Ray: Jewelry Design
o) V-Ray: Architects and Industrial Designers
o) FireFly
o) J. Alstan Jakubiec, DIVA
The cost for each workshop or for the lectures is US$95.
To register:
WORKSHOPS April 2 - RHINO DAY
WORKSHOPS April 3 - RHINO DAY
REGISTRATION RHINO DAY
NOTE: All students and faculty members who register for this event will receive a Rhino 5 Educational License at the event.
…
east make all our algorithms thread-safe, so they can all be called from multiple threads; this is the first step towards multi-threading.
But multi-threading is not just something you switch on or off, it's an approach. Let's take the meshing of Breps for example. Let's assume that at some point one or more breps are added to the document. The wireframes of these breps can be drawn immediately, but the shading meshes need to be calculated first. How do we go about doing this? Allow me to enumerate some obvious solutions:
We put everything on hold and compute all meshes, one at a time. Then, when we're done, we yield control back to the Rhino window so that key presses and mouse events can once again be processed. This is the simplest of all solutions and also the worst from the user's point of view.
We allow the views to be redrawn and mouse events and key presses to be handled, but we perform the meshing in a background thread. I.e. whatever processor cycles are left over from regular use are now put to work on computing meshes. Once we're done computing these meshes we can start drawing the shaded breps. This is a lot better as it doesn't block the UI, but it also means that for a while (potentially a very long time) our breps will not be shaded in the viewport. This approach is already a lot harder from a programming perspective because you now have multiple threads, all with access to the same Breps in memory, and you need to make sure that they don't start to perform conflicting operations. Rhino already does this (and has been doing so for a long time) on a lot of commands, otherwise you wouldn't be able to abort meshing/intersections/booleans etc. with an Escape press.
So we can compute the meshes on the UI-thread or on a background thread. How about using our multiple cores to speed up the process? Again, there are several ways in which this can be achieved:
Say we have a quad-core machine, i.e. four processors at our disposal. We could choose to assign the meshing of the first brep to the first processor, the second brep to the second processor, the third brep to the third processor and so on. Once a processor is done with the meshing of a specific brep, we'll give it the next brep to mesh until we're done meshing all the breps. This is a good solution when multiple breps need to be meshed at once, but it doesn't help at all if we only need to compute the mesh for a single brep, which is of course a very common case in Rhino.
To go a level deeper, we need to start adding multi-threading to the mesher itself. Let's say that the mesher is set up in such a way that it will assign each face of the brep to a new core, then -once all faces have been meshed- it will stitch together the partial meshes into a single large mesh. Now we've sped up the meshing of breps with multiple faces, but not individual surfaces.
We can of course go deeper still. Perhaps there is some operation that is repeated over and over during the meshing of a single face. We could also choose to multi-thread this operation, thus speeding up the meshing of all surfaces and breps.
All of the above approaches are possible, some are very difficult, some are actually not possible if we're not allowed to break the SDK. A further problem is that there's overhead involved with multi-threading. Very few operations will actually become 4 times faster if you distribute the work across 4 cores. Often one core will simply take longer than the other 3, often the partial results need to be aggregated which takes additional cycles and/or memory. What this means is that if you were to apply all of the above methods (multi-thread the meshing of individual faces, multi-thread the meshing of breps with multiple faces and multi-thread the meshing of multiple breps) you're probably worse off than you were before.
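To make the coarsest of these strategies concrete, here is a minimal Python sketch of per-object parallelism. It assumes a hypothetical, thread-safe mesh_brep() routine rather than the actual Rhino SDK:

```python
# Sketch of per-object parallelism: each brep is an independent job and
# an idle worker picks up the next one. In CPython a thread pool only
# helps if mesh_brep releases the GIL (e.g. native code); otherwise a
# ProcessPoolExecutor would be the analogue.
from concurrent.futures import ThreadPoolExecutor

def mesh_brep(brep):
    # hypothetical stand-in for a real, thread-safe meshing routine
    raise NotImplementedError

def mesh_all(breps, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map distributes the jobs but preserves the input order
        return list(pool.map(mesh_brep, breps))
```

As noted above, this only pays off when several breps need meshing at once; a single brep still ends up on one core.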
--
David Rutten
david@mcneel.com
Poprad, Slovakia
* an example would be the z-sorting of objects in viewport prior to repainting, which is a step performed on every redraw as far as I know.…
e current data should be turned back into the original data, is that right? There are 3 cases.
How do these components affect performance if I flatten or graft the output and then connect it to other components?
{0}
{0;0}
{0;0;0}
{0;0;0;0}
{0;0;0;0;0}
{0;0;0;0;0;0}…
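As an aside, paths like the ones listed above can be reproduced in a GhPython script. A minimal sketch using the Grasshopper SDK's DataTree, showing how grafting pushes items one level deeper while flattening collapses everything onto a single path:

```python
# build paths like the ones listed above
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

tree = DataTree[object]()
tree.Add("a", GH_Path(0))        # item on path {0}
tree.Add("b", GH_Path(0, 0))     # grafted one level deeper: {0;0}
tree.Add("c", GH_Path(0, 0, 0))  # and so on: {0;0;0}

flat = tree.AllData()  # flattening collapses all branches into one list
```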
with this machine.
As Jason says, Rhino and Grasshopper are mainly single-threaded, so I prioritized single core speed and got an i7 4790k, which comfortably overclocks to 4.7GHz (with a decent air cooler, but no fancy liquid cooling).
The Kangaroo2 solver is actually multi-threaded now, but the difference this makes is not as great as you might imagine. Using 4 cores is certainly nowhere near 4 times faster, because although parts of the calculation are easily parallelized, everything still needs to be recombined at each iteration, and this is usually the bottleneck. I think there is still room for some improvement in how it is multi-threaded, but I wouldn't hold your breath for any massive changes on this front soon.
I'd be interested to know how the performance scales with the Xeon chips (more cores, significantly more expensive, but relatively low clock speeds). At the time I made the guess that they weren't worth it, but it would be good to really test this out.
RAM is relatively cheap these days, so I went with 32GB of it at 2133MHz. It does seem that the speed of the RAM matters, as enabling XMP in the BIOS (to make it run above the default 1333) seemed to make a noticeable difference.
Graphics-wise, my personal feeling is that the gaming-oriented GTX cards offer better value than the much more expensive 'professional' Quadro range. I have read that the hardware between the two has historically been very similar or even identical, despite the Quadros being several times the price, with the difference being mainly in the drivers. There are some threads on discourse.mcneel.com about this, and it seems that recent GTX cards like the 970 do very well in Holomark (the Rhino performance benchmarking tool).
I got a GTX 770 (this was just before the 900 series came out), which is probably way overkill just for Rhino/Grasshopper, as they don't use the GPU for more than display (though some of the render plugins do, and I think for those more CUDA cores is what matters, so there GTX is probably still better value).
Probably swapping this for a much cheaper card wouldn't make much difference to Rhino/GH performance anyway (though if you want to use the PC for other stuff like gaming or virtual reality it does).
I don't have much experience with AMD cards, so can't comment on how they compare to Nvidia.
Eventually I do hope to make Kangaroo run the physics on the GPU, and potentially this does have a big speed impact. Nvidia recently released some impressive demos of their FLEX engine, which really fly with a decent graphics card. That is very much game-physics, and not suitable for most of the things Kangaroo is used for, but theoretically Kangaroo could also be adapted to use CUDA (or OpenCL), though it involves a lot of big changes, and I don't have a timeline for this yet.
In the much shorter term there are some things in the pipeline that should speed up Kangaroo for certain things like collisions between large numbers of objects, just by using some different algorithms.
Altogether my machine was still well under €2K, and I've been really happy with it. That said, the difference in performance between this and my 4-year-old €700 i5 laptop is actually not that huge in day-to-day Grasshopper usage. It does seem that there is a strong case of diminishing returns when buying a PC. I'd hazard a guess that even spending 3 times this amount (as another thread on this forum was discussing recently) you'd be hard pushed to get anything that made a really significant difference to the experience of using it, and if you really want to spend more money, you would be better off just upgrading more frequently (and getting a nice monitor or two).
Anyway, a long ramble, I hope some of it is useful. As I said, I'm no hardware expert, and would be interested to hear different opinions.
I also think it would be nice to make a simple benchmarking tool for Kangaroo and have people run it on their various machines and report back the results (as with Holomark), to help others make informed decisions on these things. I'll try to put something together for this soon.
…
onents (radiation, sunlight-hours and view analysis) which let you study the effect of the orientation of your building on the analysis result. When you come to a question like "what is the orientation at which the building receives the most/least radiation?", that is probably the right time to use this component.
HOW?
I'll try to explain the steps using a simple example. Here are my design geometries. The building in the center is the building to be designed and the rest of the buildings are context. I want to see the effect of orientation on the amount of radiation on the test building's surfaces from the start of Oct. to the end of Feb. for Chicago.
First I need to set up the normal radiation analysis and run it for the building as it is right now. [I'm not going to explain how to set this up since you can find it in the sample file (download the sample file from here).]
Now I need to set up the parameters for orientation study using orientationStudyPar component. You can find it under the Extra tab:
At minimum I need to input the divisionAngle and the totalAngle, and set runTheStudy to True. In this case I put 45 for the divisionAngle and 180 for the totalAngle, which means I want the study to be run for angles 0, 45, 90, 135 and 180.
[Note 1: The totalAngle should be divisible by the divisionAngle; see the quick check after these notes.]
[Note 2: If you don't provide any point for the basePoint, the component will use the center of the geometry as the center of the rotation.]
[Note 3: You can also rotate the context with the geometry! Normally you don't get to change the context to make your design work, but if you get lucky, the rotateContext input is for you! Set it to True. The default is False.]
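As a quick sanity check on the angle inputs (plain Python; the variable names are mine, not the component's):

```python
# how the tested orientations follow from the two inputs above
division_angle = 45
total_angle = 180
assert total_angle % division_angle == 0  # Note 1: must divide evenly
angles = list(range(0, total_angle + 1, division_angle))
print(angles)  # [0, 45, 90, 135, 180]
```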
You're all set for the orientation study, just connect the orientationStudyPar output to OrientationStudyP input in the component and wait for the result!
The component will run the study for all the orientations and preview the latest geometry. To see the result, just grab a Quick Graph and connect it to totalRadiation. As you can see in the graph, 135 is the orientation at which I receive the maximum radiation. Dang!
If you want to see all the result geometries, set bakeIt to True, and the result will be baked under LadyBug > RadiationStudy > [projectname]. The layer name starts with a number, which is the totalRadiation.
Mostapha…
doing this with the current tools or a bit of scripting since the Flickr API allows you to make requests in a REST format, but utilizing the Flickr.net API library makes it much simpler.
First and foremost, you need a Flickr API key...do you have one of those?
A great way to get to know the Flickr API is with the API Explorer. Here is a link to the page for the flickr.photos.search method explorer: http://www.flickr.com/services/api/explore/flickr.photos.search
The cool thing about this page is that it generates the REST Http call towards the bottom. So, here is what I did:
1. Grab the coordinates of the bounding box per Flickr API request:
bbox (Optional)
A comma-delimited list of 4 values defining the Bounding Box of the area that will be searched. The 4 values represent the bottom-left corner of the box and the top-right corner, minimum_longitude, minimum_latitude, maximum_longitude, maximum_latitude. Longitude has a range of -180 to 180 , latitude of -90 to 90. Defaults to -180, -90, 180, 90 if not specified. Unlike standard photo queries, geo (or bounding box) queries will only return 250 results per page. Geo queries require some sort of limiting agent in order to prevent the database from crying. This is basically like the check against "parameterless searches" for queries without a geo component. A tag, for instance, is considered a limiting agent as are user defined min_date_taken and min_date_upload parameters — If no limiting factor is passed we return only photos added in the last 12 hours (though we may extend the limit in the future).
So, I went to Google Earth, picked a city (London, UK) and dropped two pins:
This gave me two locations, which I can put into the Explorer Page next to the bbox option. Here is what I put for these two points: -0.155941,51.496768,-0.116783,51.511431
2. Check has_geo
3. In extras, type in geo
4. Make the call!
You will see a list of responses in XML format; these responses will be from the first page. Geolocated photos are limited to 250 per page, so you will have to grab them page by page.
If you want to add more options (minimum upload date, maximum upload date, etc.) you can do this as well.
The best is at the bottom, you get the full http call for this: http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=ffd44f601393a46e86aa3a5f8a013360&bbox=-0.155941%2C51.496768%2C-0.116783%2C51.511431&has_geo=&extras=geo&format=rest&api_sig=b42330e5d1523bd5fe60c2ad43acde99
Notice this call has someone else's API key; you should eventually replace it with your own.
You could copy and paste this into a browser and you will get the results with the latitude and longitude:
So this is really what you need to know to do this through GH. Since gHowl has an XML parser component that can access files on the web, you should be able to use the same http call into this component.
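If you prefer to script it, here is a hedged sketch of the same call in Python (using requests and ElementTree; substitute your own API key):

```python
# call flickr.photos.search and pull each photo's latitude/longitude
import requests
import xml.etree.ElementTree as ET

params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",  # replace with your own key
    "bbox": "-0.155941,51.496768,-0.116783,51.511431",
    "has_geo": "1",
    "extras": "geo",
    "format": "rest",
}
resp = requests.get("https://api.flickr.com/services/rest/", params=params)
root = ET.fromstring(resp.content)
coords = [(float(p.get("latitude")), float(p.get("longitude")))
          for p in root.iter("photo")]
```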
Eventually, we get a response, and we need to grab the lat and lon data. With gHowl we can map these to xyz coordinates, and generate the heatmap...this is just a linear mapping:
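The mapping step is just a per-coordinate remap from the bbox range to your model's XY range. A sketch, reusing coords from the snippet above (the 0 to 100 output range is an arbitrary choice):

```python
# remap a value from one interval to another (the linear mapping above)
def remap(v, lo, hi, new_lo=0.0, new_hi=100.0):
    return new_lo + (v - lo) / (hi - lo) * (new_hi - new_lo)

# (lat, lon) pairs from the bbox become model-space XYZ points, z = 0
points = [(remap(lon, -0.155941, -0.116783),
           remap(lat, 51.496768, 51.511431), 0.0)
          for lat, lon in coords]
```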
Attached are both the Rhino file and the Grasshopper file, as well as the image underlay.
I am working on a series of components that makes this more straightforward, but for now, this should get you started.
…
and export the geometry out to VVVV to render it LIVE! RawRRRR. In this case, the sequencer is Ableton Live, a digital audio workstation that is a leading industry standard in contemporary music production.
The good news is that VVVV and Ableton Live Lite are both free.
https://www.ableton.com/en/products/live-lite/
I am not trying to use an iPad as a controller for Grasshopper. I want to work with a timeline (similar to Maya, Ableton, or any other DAW (digital audio workstation)) inside Grasshopper in an intuitive way. Currently there is no way that I know of to SEQUENCE your definition the way you want it to be seen.
No more cumbersome export/import workflows... I don't need hyperrealistic renderings most of the time. So much time is invested in googling the right way to import, export... mesh settings... this workflow works for some, for some not... that workflow works if... and still you cannot render it live, nor change the sequence of instructions WHILE THE VIDEO is played. And I think no one wants to present the Rhinoceros viewport. BUT the VVVV viewport is different. It is used for VJing and for many custom audio-visual installations at events, done professionally. You can see an example of how sound and visuals come together in this post, using only VVVV and Ableton: http://vvvv.org/documentation/meso-amstel-pulse
I propose a NEW method: make a definition, wire it to Ableton, draw in some MIDI notes, and see it through VVVV LIVE while you sequence the animation THE WAY YOU WANT IT TO BE SEEN DURING YOUR PRESENTATION FROM THE BEGINNING. Make a whole set of sequences in Ableton, go back, change some notes in Ableton, and the whole sequence will change RIGHT IN FRONT of you. Yes, you can just add some sound anywhere in the process, or take the sound waves (square, saw, whatever), or take the audio and influence geometric parameters using custom patches via VVVV. I cannot even begin to tell you how sophisticated digital audio sound design technology has become in the last ten years... this is just one example, which isn't even that advanced by today's standards in sound design (and the famous producers would say it's not about the tools at all): http://www.youtube.com/watch?v=Iwz32bEgV8o
I just want to point out that Grasshopper shares the same interface with VVVV (1998) and Max for Live, a plugin inside Ableton. AudioMulch is yet another one that shares this interface of plugging components into each other and allows users to create their own sound instruments. VVVV is built on VB, I believe.
So the current wish list is...
1) Grasshopper receives a sequence of commands from Ableton - DONE
Thanks to Sebastian's OSCglue vvvv patch and this one: http://vvvv.org/contribution/vvvv-and-grasshopper-demo-with-ghowl-udp
After this is done, it's a matter of trimming and splitting the incoming string, as sketched below.
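A minimal sketch of that trimming/splitting step in Python; the "/note 60 127" message format is my assumption, not a fixed gHowl/OSCglue convention:

```python
# split an incoming OSC-style string into an address and integer arguments
def parse_message(msg):
    parts = msg.strip().split()
    return parts[0], [int(v) for v in parts[1:]]

address, args = parse_message("/note 60 127")  # hypothetical message
if address == "/note":
    note, velocity = args  # e.g. note 60 at full velocity
```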
2) Translate numeric oscillation from Ableton into GH value changes.
The video below shows what the control interface for both values (numbers) and MIDI notes looks like.
https://vimeo.com/19743303
3) MIDI note in = toggle GH component (this one could be tricky)
For this... I am thinking it would be great if it were possible to make a "MIDI learn" function in Grasshopper, where one can DROP IN A COMPONENT LIKE GALAPAGOS OR TIMER and assign the component to an incoming signal, in this case a MIDI note. There are 128 MIDI notes in total (http://www.midimountain.com/midi/midi_note_numbers.html), and this is only for one channel. There are infinite channels in Ableton; I usually use 16.
I have already figured out a way to send strings into Grasshopper from Ableton Live. But the problem is how to get Grasshopper to listen, not just take the data in: to interpret MIDI note and CC value changes (which usually run from 0 to 127) and perform certain actions.
Basically what I am trying to achieve is this: some time passes, then a parameter is set to change from value 0 to 50, for example. Then some time passes again, then another parameter becomes "previewed", then baked. I have seen some examples of HoopSnake, but I couldn't tell whether you can really control the values in a clear x and y graph, where x is time and y is the value; this would be considered a basic feature of modulation and automation in music production. Never mind, it's been DONE by Mr. Heumann: https://vimeo.com/39730831 (a rough sketch of this kind of keyframe automation follows below).
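For what it's worth, a minimal plain-Python sketch of that x/y automation idea, linear interpolation between keyframes (the function name and keyframe format are mine, not Heumann's):

```python
# evaluate a piecewise-linear automation lane at time t
def automation(keys, t):
    # keys: sorted (time, value) pairs, like a DAW automation curve
    if t <= keys[0][0]:
        return keys[0][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return keys[-1][1]

print(automation([(0.0, 0.0), (4.0, 50.0)], 2.0))  # 25.0: halfway from 0 to 50
```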
4) send points, lines, surfaces and meshes back out to VVVV
5) Render it using VVVV and play with the enormous collection of components in VVVV... it's been around since 1998, for the sake of awesomeness.
This kind of digital operation-to-hardware connection is usually what's done in digital music production solutions. I did look into MIDI controller - Grasshopper work, and I know it's been done, but that has the obvious limitation of not being precise, since it only takes values from 0 to 127. I am thinking that MIDI can be useful here because then I can program a very precise and complex sequence with ease from music production software like Ableton Live.
This is ongoing design research for a performative exhibition due in Bochum, Germany, this January. I will post the definition if I get somewhere. A good place to start for me is the nesting sliders by Monique: http://www.grasshopper3d.com/forum/topics/nesting-sliders
…
as one element.
Thank you
Comment by karamba on October 7, 2014 at 11:27pm
Hello Patricio, divide the beams in such a way that each boundary vertex of the shell becomes an endpoint of a beam segment.
Best, Clemens
Comment by Llordella Patricio on October 8, 2014 at 8:30am
Hi Clemens,
I did what you suggested, but now the assemble element doesn't work properly. Could you please tell me how to fix it? Thanks in advance, Patricio
8-10-14losa%20cadena.gh
Comment by karamba on October 8, 2014 at 11:59am
Hi Patricio, if you flatten the 'Elem'-input at the 'Assemble'-component the definition works. The triangular shell elements have linear displacement interpolations whereas the beam deflections are exact. In order to get correct results you should refine the shell mesh.
Best, Clemens
Comment by Llordella Patricio on October 9, 2014 at 8:35am
Hello, I succeeded in creating the mesh for the slab and built the beam segments, but the deformations are not what I expected, because the beam deforms together with the slab.
Thanks for the help
PS: Maybe I'm using the program for a type of structure that is not the most appropriate, judging from the examples of other structures I saw. But this is the type of structure that students are taught.
best regards
Patricio
9-10-14%20Example%201.gh
Comment by karamba on October 9, 2014 at 10:46am
You could use the 'Mesh Edges'-component to retrieve the naked edges and turn them into beams - see attached file: 91014Example1_cp.gh
Best regards,
Clemens
Comment by Llordella Patricio on October 15, 2014 at 3:41pm
Dear Clemens,
I was doing a rough estimate of the deformation, and I cannot achieve the same result with Karamba. For the beams, my rough estimate and Karamba's result are very similar; I think the problem is when I connect the shell, because there the results are not similar.
I am sending the GH file and an image of the calculation.
The structure is concrete. The result I get is 0.58 cm.
Thank you, Patricio
15-10-14%20Example.gh
Comment by karamba yesterday
Dear Patricio,
try to increase the number of shell elements. As mentioned in the manual they are linear elements. A mesh that is too coarse leads to a response which is stiffer than the real structure.
Best,
Clemens
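A practical way to follow this advice is a small convergence study: rerun the model with successively finer meshes and stop once the result stops changing. A plain-Python sketch of that check (not Karamba's API; the numbers are made-up placeholders):

```python
# linear shell elements converge from the stiff side as the mesh is
# refined, so refine until successive results agree within a tolerance
def converged(displacements, tol=0.01):
    # displacements: max displacement from successively finer meshes
    if len(displacements) < 2:
        return False
    a, b = displacements[-2], displacements[-1]
    return abs(b - a) <= tol * abs(b)

print(converged([0.42, 0.53, 0.575, 0.58]))  # True: last refinement barely moved
```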
…
n be obtained for curved NURBS surfaces as well as unconventional window configurations".
And I also noticed the following information from the optional input of the runEnergySimulation component.
"meshSettings_: Optional mesh settings for your geometry from any one of the native Grasshopper mesh setting components. These will be used to change the meshing of curved surfaces before they are run through EnergyPlus (note that meshing of curved surfaces is done since Energyplus is not able to calculate heat flow through non-planar surfaces). Default Grasshopper meshing is used if nothing is input here but you may want to decrease your calculation time by changing it to Coarse or increase your curvature definition (and calculation time) by making it finer".
1) My case is a one-story, rectangular-plan large hall (40m x 70m x 25m) with a curved roof. The roof surface is part of a standard sphere, and the walls and floor are all planar (each wall has one curved edge, as shown in the image).
For testing, I threw the original curved roof surface into the daylight and energy simulations without making customized meshes, because I assumed that it might be automatically converted to meshes by Honeybee. Am I right? As shown in the image, how can I reduce the number of meshes in a proper way (see the meshing sketch after these questions)? Must two connected surfaces (i.e. wall and roof) be STRICTLY/SEAMLESSLY connected or not (considering the different mesh divisions of the respective surfaces)? Is a connection tolerance allowed?
2) But when I run the annual daylight simulation for this case, it gives me a lot of warnings: "oconv: warning - zero area for polygon". Is that normal, and how do I avoid it? Does the daylight simulation allow "curved NURBS surfaces"?
3) Moreover, when I run an energy simulation for this case, it takes an extremely long time, so long that I did not even get results out of one simulation. I guessed it might be a problem caused by the curved roof surface (or the automatic meshing?), but I don't have experience converting a curved NURBS/spherical surface into correct meshes that can be recognized by Honeybee simulations (daylight and energy) in a proper way.
4) The large window on the wall was generated by the "_glzRatio" input. But the automatically generated wall meshes around this window are just too "fine", which might greatly increase simulation time. Is there a proper way to get rid of this? (Considering that the size, shape and position of the window have a large influence on the daylight distribution in the building, it is worth keeping them as they would be in reality.)
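One possible way to take manual control of the meshing from question 1: a GhPython sketch using RhinoCommon's MeshingParameters, where roof_brep and the edge-length value are placeholders to tune:

```python
# coarsen the meshing of a curved surface before handing it to simulation
import Rhino.Geometry as rg

mp = rg.MeshingParameters.Coarse   # start from Rhino's coarse preset
mp.MaximumEdgeLength = 5.0         # cap element size in model units; tune per case
meshes = rg.Mesh.CreateFromBrep(roof_brep, mp)  # roof_brep: your curved roof
```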
In sum, considering all of the above, could you please give me some suggestions/tutorials/links that might be helpful for dealing with "curved NURBS surfaces" in Honeybee simulations?
Thank you all in advance!
Best,
Ding
…