me work I was doing on DP on GH. Here are my conclusions:
- As Rhino is not a constraint-based modeller, assembly design without plugins (RhinoWorks or similar) is simply not possible. So as long as constraints are not present in Rhino... no constraints, no AEC.
- The list management that GH offers is ten thousand times more efficient and user-friendly, so a good move would be to link all the list-management tools to a GH-like interface. In fact, for all operations that do not concern assembly (wireframe generation, for example), GH is way ahead in terms of speed, provided you are not dealing with geodesic curves, parallels on surfaces or possibly boolean operations, which are a real weakness of Rhino in terms of precision and stability. You can also build impressive synchronised attribute datatrees quite easily in GH, and then synchronise them via Excel with a massive CATIA-based product without problems; it can easily save you a few days of work (see the sketch after this list).
- Rhino cannot pre-compute geometry without actually loading it, so today you will not be able to work on a product bigger than 2 GB (maybe 3) in Rhino in any way, even in Rhino 5 64-bit with 16 GB of RAM. Together with the constraint issue, I really think this is Rhino's second weak point.
- As Jon said, I think Rhino has to be understood as a sketch-oriented application for construction (this is not pejorative; it is what I personally prefer), in the sense that its usefulness lies in allowing the exploration of design possibilities, which you can of course link afterwards with whatever you want; but too many basic options are missing from Rhino for it to be really viable for AEC. I personally don't want to see geometrical sets appear in Rhino; they would be absolutely useless considering Grasshopper's evolution towards clusters, for example.
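As an aside on that Excel round-trip, here is a minimal plain-Python sketch of the idea: attributes are flattened to a CSV table that Excel (or a CATIA-side macro) can edit and hand back. The column names, file name and part ids are hypothetical.

import csv

# Hypothetical flat attribute table: one row per part, keyed by part_id.
def export_attributes(parts, path="attributes.csv"):
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["part_id", "material", "thickness"])
        writer.writeheader()
        writer.writerows(parts)

# Merge the externally edited table back in, keyed on part_id.
def import_attributes(parts, path="attributes.csv"):
    with open(path, newline="") as f:
        edited = {row["part_id"]: row for row in csv.DictReader(f)}
    for part in parts:
        part.update(edited.get(part["part_id"], {}))
    return parts

parts = [{"part_id": "P001", "material": "steel", "thickness": "3.0"}]
export_attributes(parts)          # hand the CSV to Excel / CATIA
parts = import_attributes(parts)  # pull the edited values back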
After that, in purely technical terms I would say that:
1) Possible, partially already working --> clusters (waiting for updates)/nested definitions, plus SQL for attribute management across several working definitions.
2) --> I think there are two ideas here: a) Exporting some dead geometry into a tree of files. This can be done quite easily with LocalCode, but the geometry will remain dead. You can also create a definition based on dead geometry and update that geometry using the geometry cache; if this geometry is automatically exported via LocalCode from a preceding definition, then updating the upper definitions propagates the modification through your whole model. Personally, I think it is best not to do this in Rhino. b) Otherwise, it is just the synchronisation of public attributes attached to existing parts/products, as I described previously.
3) Geometry cache. You can also auto-loop your file by loading/unloading your definition's input geometry with LocalCode and some VB.
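To illustrate point 3, here is a language-agnostic sketch in Python of that load/process/unload loop; the chunk file naming and the process_chunk stub are hypothetical, and in a real setup the load/save steps would be LocalCode exports or scripted Rhino imports rather than plain file reads.

import glob

# Pattern only: instead of loading a >2 GB model at once, iterate over
# pre-exported chunk files, load one, process it, save the result and
# free the memory before the next pass.
def process_chunk(geometry):
    return geometry  # placeholder for the real per-chunk work

for chunk_path in sorted(glob.glob("export/chunk_*.dat")):
    with open(chunk_path, "rb") as f:
        geometry = f.read()            # load the input geometry
    result = process_chunk(geometry)   # per-chunk computation
    with open(chunk_path + ".out", "wb") as f:
        f.write(result)                # persist the result
    del geometry, result               # unload before the next chunk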
But maybe I am wrong on some points, of course.
Best,
Thibault.
…
ation production and consumption that represent our physical world in numbers and complex networks. The advent of computational systems has helped not only in developing data production but also in transmitting data between different disciplines, including architecture, through fields of numbers and codes. Historically, numbers and proportions played a vital role in architectural production; now, the complex flow of data is opening unexpected territories for architects. Data Flow is an advanced computational design workshop that focuses on capturing, processing and utilizing real-time data from the surrounding environment by means of physical computing and parametric design tools, enabling participants to develop informed design solutions that adapt to the environment. The workshop's knowledge objective is to reconsider abstract data as a design opportunity by developing the quantitative flow of data into a qualitative design approach.
/// Application
To apply, please follow this link to fill in the application form: https://docs.google.com/forms/d/1xzKn-cZzfvu24ktTNP1ElGBAufdryfLNCXvpheucrS8/viewform
/// Fees*
1700 EGP for students / 2000 EGP for graduates and young professionals
* 20% discount for early registration and payment before the 22nd of August 2014
More info on the workshop webpage: http://www.encodestudio.net/#!dataflow/cslb…
closer". 2 ends means a kind of "terminal" (massif east/ hollow west) SS 316/304 stuff that east has the threads and west is pressed around the cable. Classic structural analysis dictates the forces AND then (if the things are NOT commercially available) comes FEA that validates the nuts and the bits of any bespoke/custom system (if bits they can't sustain the forces > change country ASAP > Brazil + plastic surgery is highly recommended).
Spam on:
Wait a minute: WHAT are you after? Designing some WOW truss, or computing its forces? Because these two are different animals treated by different disciplines: the Architect designs something and the Structural Engineer (in parallel) evaluates that something ... whilst the idiot (the Architect) does some other variant (since the first was crap).
In the old days that "I design" + "you compute" combo was a bit of a token ... since the "I-re-design" part was out of the question. But these days it's not nuclear science, provided you can mastermind a fully parametric system that is adaptable enough to whatever the structural department could possibly dictate (the attached ^@$%$@ thing is provided as an "indication" of these freaky systems).
Spam off.
2. That thing shown is not tensegrity in the classic sense (i.e. simplex, W, X-truss etc etc), where the outer boundaries of a given module DON'T carry any member (cable or "thin" solid extrusion) that is NOT under tension. For instance a simplex module IS "pure" tensegrity since ... blah, blah. But on your thing the upper members are under compression ... blah, blah.
3. That brings us to the 1M question: pure tensegrity (in the Name of Science) or a "bastardized" one (in the Name of Something)? If the latter ... why bother and not use a classic MERO KK system that costs 10 times less? (or carbon MERO [almost thin air], or a membrane, or synthetic goat skin, or solidified air (C)(tm)). …
th (60° max in Paris), but the problem still arises for the angle theta (for the south, but also for the other orientations). For the diffuse radiation, this difference should be 10%, as you noticed.
2) I have run some simulations and tried to analyse the weather file used. You can find my results in the attached Excel file. Some simulations take the glazing into account, while others just determine the "occultation factor" of the shading device, to which I then apply the solar factor of the window. I found a noticeable difference between "_shading_1" and "_Focc_1", for example, where we should have found similar values ... ? Something strange seems to happen when the rays pass through the glass to reach the analysis points. Facing those results, I still have trouble drawing conclusions. I also determined the diffuse part of the radiation for each day from the weather file used; it may help to understand ... If you have any suggestion to explain those results, please let me know.
3) Another point attracts my attention:
The horizontal infrared radiation intensity in the weather file is quite high and constant. I'm wondering whether HB takes this component of the radiation into account, since it represents about 50% of the solar energy?!
http://bigladdersoftware.com/epx/docs/8-3/auxiliary-programs/energyplus-weather-file-epw-data-dictionary.html#field-horizontal-infrared-radiation-intensity
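For reference, here is a minimal Python sketch of how one could pull that column out of the weather file to check its magnitude; it assumes the standard EPW layout (8 header lines, then hourly records with Horizontal Infrared Radiation Intensity as 0-based field 12, in Wh/m2), and the file name is hypothetical.

import csv

# EPW files: 8 header lines, then one comma-separated record per hour.
# 0-based field 12 should be Horizontal Infrared Radiation Intensity.
with open("paris.epw", newline="") as f:
    rows = list(csv.reader(f))[8:]

ir = [float(r[12]) for r in rows if len(r) > 12]
print("min %.0f / max %.0f / mean %.0f Wh/m2" % (min(ir), max(ir), sum(ir) / len(ir)))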
I am continuing my research into what is going on under the hood (reading documents on the Radiance and Daysim calculations) and will let you know about the progress of my searches.
Thank you again for your support!
Regards,
Severine
…
r availability on each orientation.
But to make things (hopefully! :-) ) clearer, I attach a simplified version of my analysis using the same single surface to run the three different cases. I assume the direction of the surface is now the same, yet the results are still different. The top case in Rhino corresponds to the top case on the GH canvas, the lower one in Rhino to the lower one in GH.
I expected some difference between the runs... but the cases differ by 100%, not the 10% that would be reasonable.
Case 1: 158 W/m2 for the "only test point" option
Case 2: 314 W/m2 for the "test point + pts Vectors" case
Case 3: 282 W/m2 for the Ladybug option
The analysis is made for day 1, hour 12, and the solar radiation conditions are:
Direct: 125 W/m2
Diffuse: 164 W/m2
Global Horiz: 207 W/m2
The interesting thing is that the three cases run for the horizontal surface give the same results.
Moreover, if the material reflectance is changed to 1, the results are very similar, but the values are higher than the sum of direct + diffuse, as in case 2:
125 + 164 = 289, yet the results give 314.
(The diffuse radiation is obviously calculated for a horizontal surface in the weather file, and the analysed surface is vertical, so the percentage of diffuse radiation that the surface receives should be even less.)
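As a sanity check on those numbers, here is a minimal Python sketch of the simple isotropic sky model for the irradiance on a tilted surface (not necessarily the model Ladybug/Radiance uses, and ground-reflected radiation is omitted); the 45-degree incidence angle is a made-up assumption.

import math

def tilted_irradiance(direct_normal, diffuse_horizontal, incidence_deg, tilt_deg):
    # Isotropic sky: beam projected by the incidence angle, diffuse
    # scaled by the sky view factor of the tilted plane.
    beam = direct_normal * max(0.0, math.cos(math.radians(incidence_deg)))
    sky_view = (1.0 + math.cos(math.radians(tilt_deg))) / 2.0
    return beam + diffuse_horizontal * sky_view

# A vertical surface (tilt 90) sees only half the sky dome, so with
# Direct 125 and Diffuse 164 the result stays well below 289:
print(tilted_irradiance(125.0, 164.0, 45.0, 90.0))  # about 170 W/m2

On this simple model, case 2's 314 W/m2 would exceed even the horizontal sum of direct and diffuse, which supports the suspicion that something is off in that case.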
I hope I've been clearer, and sorry if you have already answered my question, but I am not understanding the results. (I'm not a GH pro-user, but I'm quite familiar with analysis and this kind of stuff.)
Thank you again
filippo
…
Rhino Trainer), Davide Lombardi, Maurizio Arturo Degni
EarlyBird rate for those who register by 28 March 2015
INFO: http://www.arturotedeschi.com/wordpress/?project=form-finding-strategies-avanzato
Interactive physical simulation, integrated into parametric modelling, makes it possible to investigate new, optimised formal solutions for architecture and design. The workshop will explore the strategies and main techniques of FORM FINDING using the KANGAROO physics engine integrated with structural analysis plugins (MILLIPEDE and KARAMBA). The techniques will be applied at different scales: from architecture (modelling of compression-only surfaces and roofs) to product design, where digital simulation will be integrated with refinement techniques (WEAVERBIRD). The workshop is aimed at students and professionals with basic knowledge of algorithmic modelling with Grasshopper.
The programme will cover the methodologies and tools for identifying optimised structural solutions (e.g. compression-only surfaces) through an extensive discussion of case studies (Ponte sul Basento, the British Museum roof) and the application of digital techniques based on gravitational form-finding and FEM (Finite Element Method) analysis. In the second part of the course, students will study innovative optimisation techniques (Evolutionary Structural Optimization and Extended Evolutionary Structural Optimization) based on removing redundant material from a given geometry, characterised by a specific set of constraints and subjected to a specific load condition.
…
t know if it's common knowledge, but in the PD of jewelry, for large-scale production, the options are in the dozens if not the hundreds, as in a 3-stone ring (that's my next quest, and believe me, it is extremely complex and elaborate); if you do not draw the line somewhere, you could end up with a definition 10 times as big. I could make a list of the preliminary factors and you would begin to understand at least the one presented here, which looks simple but is not.
If you are a real jeweler and know how many details (interdependent with each other) are needed to cover unpredictable factors and lousy tolerances, then you'll begin to grasp what an overextended territory this covers.
For example: once you get to a certain stone size, the prongs need to change, but not the bezels; and the bite for setting can go for technical integrity up to a point, because beyond that the look is not appropriate.
If the metal is platinum you can leave some things as they are, but the interconnections for metal flow have to change in some areas; not so if it is gold.
Some stone counts may not fit a particular finger size without going too high or too low, so the bezels need to compensate for this in thickness and in the visual relationship between them, so that when I input a different finger size GH knows what to do based on many more factors, etc. etc.
The fact that all the geometry is in GH accounts for so many more components.
All this needs to work across the definition, so that if I say this is the stone size I want, all the prongs will need to move apart to get the right bite, but with a diameter that is not out of proportion; otherwise the stones need to move slightly apart automatically. It's endless.
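Purely to illustrate the kind of interdependent rule chain described above, here is a toy Python sketch; every threshold and formula in it is invented for illustration and is not actual jewelry practice.

# Toy rule chain: stone size drives prong diameter, the diameter is
# clamped for proportion, and if the clamp bites the stones spread
# slightly apart instead. All numbers are made up.
def solve_setting(stone_mm, finger_size):
    prong_mm = 0.18 * stone_mm                     # assumed bite-driven size
    max_prong = 0.12 * stone_mm + 0.4              # assumed proportion limit
    gap_mm = 0.25                                  # assumed base spacing
    if prong_mm > max_prong:
        prong_mm = max_prong
        gap_mm += 0.1                              # compensate by spreading
    bezel_mm = max(0.8, 0.5 + 0.05 * finger_size)  # thickness compensation
    return {"prong": prong_mm, "gap": gap_mm, "bezel": bezel_mm}

print(solve_setting(stone_mm=6.0, finger_size=54))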
For this reason we needed to define the market expectations (and have all the controls for those in GH) and leave the eccentric cases to manual manipulation.
Grasshopper is a hell of a tool for transferring my 40 years of jewelry making (since I was a little boy :),
but I think I am using maybe 20% of its power.
We used SolidThinking because of the construction tree, but there is nothing like Rhino and GH combined!
I wish I were free to share this definition in order to learn from the advanced minds here, but this time I can't. The next one will be mine (intellectual and technical property) and I can't wait to see how others will take it to the next level.
That's the best way to learn.…
ngle list is identified by a unique path. For example {0}, {0;0;0} and {0;3;0} are all different paths. When data from multiple sources is merged (as in the [V] input of your polyline component), the various paths are also merged. Thus, the point in the first panel at path {0;0;0} will be put in the same list as the point in the third panel at {0;0;0}, as well as the point in the fourth panel at {0;0;0}, and the polyline component will create a polyline through those three points. The second panel contains data with a different path format (only two numbers), and these points will not be merged with anything else because their paths are unique. However, a polyline cannot be made through a single point, which is probably why the component is orange.
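Here is a small plain-Python sketch of that merging behaviour, modelling a path as a tuple and a tree as a dict from path to list of items; the point coordinates are made up.

# A data tree modelled as {path_tuple: [items]}; merging trees appends
# items that share the same path into the same list.
def merge(*trees):
    out = {}
    for tree in trees:
        for path, items in tree.items():
            out.setdefault(path, []).extend(items)
    return out

p1 = {(0, 0, 0): [(0, 0, 0)]}   # first panel
p2 = {(0, 5): [(1, 1, 0)]}      # second panel, different path format
p3 = {(0, 0, 0): [(2, 0, 0)]}   # third panel
p4 = {(0, 0, 0): [(3, 1, 0)]}   # fourth panel

print(merge(p1, p2, p3, p4))
# {(0, 0, 0): three points -> a valid polyline; (0, 5): one point -> fails}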
I cannot fix your file because you didn't upload it, but here's some general advice:
Don't put panels in between source and target components. Panels convert the data into text, and this text will then be converted back into whatever type is required on the right. Sometimes this works fine (for example with booleans or integers), sometimes it won't work at all (for example with curves, meshes or breps), and sometimes it will work poorly (for example with points and vectors). The reason it works poorly is that the panel rounds the coordinates to 6 decimal places, because this makes for easier viewing; when points are recreated from the text, the remaining decimals are lost. It's fine to use panels to inspect data, but inserting them in between source and target components is rarely a good idea.
If you have data that exists in multiple lists but you want to put it all into a single list, you should use a Flatten component.
If you have data in various lists that you want to merge into a single tree (tree = list of lists), but you want to keep all the lists separate, you can use the Entwine component.
You should flatten all your individual point lists, then use Entwine to put them all together and finally plug the result of Entwine into the Polyline V input.…
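Continuing the dict-based model above, here is a sketch of what Flatten and Entwine do in that recipe; it mimics the component behaviour and is not Grasshopper code.

# Flatten: collapse every branch of a tree {path: [items]} into one list.
def flatten(tree):
    out = []
    for path in sorted(tree):
        out.extend(tree[path])
    return {(0,): out}

# Entwine: keep each input as its own separate branch of a new tree.
def entwine(*trees):
    return {(0, i, 0): flatten(t)[(0,)] for i, t in enumerate(trees)}

a = {(0, 0, 0): [(0, 0, 0), (1, 0, 0)]}   # one point list
b = {(0, 5): [(0, 1, 0), (1, 1, 0)]}      # another, different path format
print(entwine(flatten(a), flatten(b)))
# -> {(0, 0, 0): [...a...], (0, 1, 0): [...b...]}: lists kept separate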
Added by David Rutten at 3:04pm on September 9, 2016
I went with 3 blocks:
Create a block:
Define the color:
I create a randomized list (several methods are possible; here is Jitter):
With the Anemone plugin, create a loop to move the objects:
Result:
Analysis of the result:
Animate the random seed slider (0 to 10):
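As a rough idea of the randomized-list step in code, here is a minimal Python sketch of a seeded jitter (shuffle), similar in spirit to GH's Jitter component; the block names are made up.

import random

# Seeded shuffle: the same seed always gives the same "random" order,
# which is what animating a seed slider from 0 to 10 steps through.
def jitter(items, seed):
    rng = random.Random(seed)
    out = list(items)
    rng.shuffle(out)
    return out

blocks = ["block_A", "block_B", "block_C"]
for seed in range(11):   # the seed slider, 0 to 10
    print(seed, jitter(blocks, seed))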
…
Added by Rémy Maurcot at 3:24am on November 27, 2014
o, at the Eurac and TIS premises, on 21, 22 and 23 May 2015.
The integrated design process is recognised as a method for achieving the high levels of quality required of buildings today. With this approach, visual comfort and the management of natural daylight in relation to energy savings become increasingly relevant. Indeed, the new LEED v4 protocol awards ad hoc credits and confirms the importance of daylighting design to "connect occupants with the outdoors, reinforce circadian rhythms, and reduce the use of electrical lighting by introducing daylight into the space".
Robust design requires effective simulation tools, and Radiance is recognised as one of the software packages capable of providing reliable results. Radiance is used both in research and among practitioners, and is among the most accurate tools for the professional simulation of natural and artificial light. It has no limits on geometric complexity and is suited to integration with other calculation software and graphical interfaces. The main and most versatile of these (DIVA4Rhino, plug-ins for Grasshopper and Rhinoceros3D), which greatly simplify the scripting procedures, will be the subject of the course.
The course is aimed at designers and researchers who want to acquire practical tools for simulation with Radiance, in order to develop and verify the solutions best suited to their needs. Theory and practice sessions are planned, with examples and exercises covering the topics in a demonstrative and interactive way.
The course is accredited with 15 credits by the Ordine degli Architetti.
Applications must be submitted by 27 April 2015.
Download the brochure with all the information: Corso Radiance - EURAC.pdf
The course is sponsored by Pellinindustrie.…