parametric and generative design through Grasshopper, a visual-programming plug-in for Rhinoceros 3D (one of the most widespread NURBS modelers for architecture and design). The workshop aims to manage and develop the relationship between information and geometry by working on envelope systems under specific conditions. The discretization of surfaces (NURBS or mesh panelization), the modeling of geometries through information (whether coming from environmental analyses, maps, or databases), and the extraction and management of that information all require an understanding of data structures in order to fully control the process that runs from design to construction. Participants will learn how to build and develop parametric data structures to inform 'data-driven' geometries, and how to extract from such models the information relevant to the construction process.
Module 2 – The workshop, aimed at promoting new digital technologies supporting design and fabrication, will explore the integration of design and prototyping through 3D printing of ceramic material, in order to understand both the behavior of the material and the constraints and opportunities offered by these technologies. Using Grasshopper and a numerically controlled machine, participants will learn how to generate models parametrically and how to create the code for their prototyping (G-code created directly in Grasshopper). The workshop will thus give participants the chance to test their digital work directly by printing it, so as to understand how the information articulated through these design tools produces specific morphological and aesthetic effects.…
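The "G-code created directly in Grasshopper" step can be sketched in plain Python. This is a minimal, hypothetical sketch of the idea only — the workshop's actual definition is not shown here, and the function name, feed rate, and extrusion factor are my own assumptions:

```python
def points_to_gcode(layers, feed_rate=1200, extrude_per_mm=0.05):
    """Turn a list of layers (each a list of (x, y, z) points) into
    simple G-code move lines, roughly as a paste/clay printer expects.
    extrude_per_mm is an assumed material-flow factor, not a real spec."""
    lines = ["G21 ; units: mm", "G90 ; absolute positioning"]
    e = 0.0   # cumulative extrusion value
    last = None
    for layer in layers:
        for x, y, z in layer:
            if last is not None:
                dx, dy, dz = x - last[0], y - last[1], z - last[2]
                e += (dx * dx + dy * dy + dz * dz) ** 0.5 * extrude_per_mm
            lines.append("G1 X%.3f Y%.3f Z%.3f E%.4f F%d"
                         % (x, y, z, e, feed_rate))
            last = (x, y, z)
    return lines
```

Inside Grasshopper the point lists would come from the parametric model (e.g. contoured toolpath curves), and the resulting text lines would be written to a file for the machine.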
t are my daily tools.
Perhaps Luis can verify this: I assume that to use C# from Grasshopper, the C# code must be packaged in a .DLL - Luis, is that so?
If Grasshopper can use C# .DLLs, read on, otherwise the rest of this is low-value text.
And even if Grasshopper can use C# .DLLs, be warned that I expect some re-coding will be needed to get this TUIO code to do anything with Grasshopper.
You will need Visual Studio 2008 or C# Express 2008 (Express versions are free. Visual Studio is not free).
Download and unzip the TUIO source code. In C# Express open the .sln file. The code must have been written in C# 2005, because C# 2008 will ask to convert. Just click the usual buttons to convert. On my machine the conversion was painless and error-free.
Visual Studio lets people organize sets of related code projects into solutions (.sln files represent solutions). After conversion of the TUIO solution you will notice the solution contains three related C# projects: TUIO_DEMO, TUIO_DUMP and TUIO_LIB.
And now we get to the indications that some re-coding will be needed. TUIO_DEMO builds to a Windows application .EXE, so I would ignore that piece. TUIO_LIB builds to a .DLL file, so you will likely keep that. The problem is TUIO_DUMP, which builds to a console .EXE file.
If Grasshopper requires DLL files, then what you will want to build is one new DLL file. This one new DLL will contain all the source for TUIO_LIB, likely without requiring any changes. Good.
The bad news is that TUIO_LIB has defined its own client interface, and the best way to deal with that is probably to understand and edit the code in TUIO_DUMP. The goals of the TUIO_DUMP edits will be 1) to support the interface that TUIO_LIB defines (and requires), and 2) to get the TUIO data up into Grasshopper somehow. As indicated above, I don't know exactly how Grasshopper interfaces with C# code; I just assume, based on Luis' response, that there is some reasonable way.
So, to build that one new DLL you would combine all the unedited TUIO_LIB C# source files and the edited TUIO_DUMP C# source files into a single new C# project. That new project would be a C# library project (also called a DLL project).
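The listener-interface idea is language-neutral, so here is a minimal Python analogue of what the edited TUIO_DUMP code would do: implement the callbacks the library invokes, and buffer the incoming cursor data so a host (such as a Grasshopper component) can poll it. The method names are modeled loosely on TUIO's listener callbacks and are assumptions, not the exact C# signatures:

```python
class CursorBuffer:
    """Minimal analogue of a TUIO listener: receives add/update/remove
    callbacks and keeps the latest cursor positions for a host to poll."""

    def __init__(self):
        self.cursors = {}  # session id -> (x, y)

    # Callback names modeled on TUIO's listener interface (assumed).
    def add_tuio_cursor(self, session_id, x, y):
        self.cursors[session_id] = (x, y)

    def update_tuio_cursor(self, session_id, x, y):
        self.cursors[session_id] = (x, y)

    def remove_tuio_cursor(self, session_id):
        self.cursors.pop(session_id, None)

    def snapshot(self):
        """What the Grasshopper side would read on each solution pass."""
        return dict(self.cursors)
```

The real work in the C# version would be registering an object like this with the TUIO client in TUIO_LIB, then exposing `snapshot()`-style data to Grasshopper.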
Hopefully I'm wrong and this will take less work than I have outlined.
Good luck in any case :-)…
Added by Bruce Ramsey at 11:53am on March 12, 2010
So it is either point-in-time or annual. I can't set a period? This month to that month, etc.?
Yes and no. Yes, because you can get the results that you mentioned; but no, because if you are running an annual analysis, the study will run for the whole year. You can use the components afterward to read the results for a specific month or each time step, but you can't set it up beforehand.
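Reading one month out of annual results afterwards amounts to slicing the 8760 hourly values. A minimal sketch of the indexing idea (this is not Ladybug's actual component, just an illustration; it assumes a standard no-leap-day weather-file year):

```python
DAYS_IN_MONTH = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]

def month_slice(hourly_values, month):
    """Return the hourly values for one month (1-12) out of an
    8760-value annual list (365 days x 24 hours, no leap day)."""
    start = sum(DAYS_IN_MONTH[:month - 1]) * 24
    end = start + DAYS_IN_MONTH[month - 1] * 24
    return hourly_values[start:end]
```

A month-to-month period would then just be the concatenation of the relevant slices.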
In the _gridSize input of the Ladybug_Radiation_Analysis component, you say "This value should be smaller than the smallest dimension of the test geometry for meaningful results". But what if we want to run it for individual panels (attached)? I set the grid size to something larger than the panels size, in order to just test the center of each panel. Do you think there would be any mistakes with the results there?
As long as the surfaces are not overshadowed, it will give you the same results. Your attached example is correct.
If there is no context in the context_ input of the Ladybug_Radiation_Analysis, the analysis doesn't run. So I've been using an insignificantly small geometry as fake context to simulate the "no context" situation. Do you see any issues with what I am doing? Have you considered changing the component to run even with no context?
Are you sure about this? The component should run with no context. Can you please check again and send me an example where you can't run the analysis with no context? That could be a bug.
(In the previous comment) Regarding "004_gridBasedAnalysis.gh": when switching from type 0 to 2 at the GRD, you are right that the outputs in the component are correct, as shown in the images. But the analysis type remains 1 in "glz_.6_TVis_.4.typ", so the units in the legend always read Wh/m2, even after changing the target folder. Maybe something to do with overwriting the file? (It could be my system, but I doubt it since I have admin privileges.)
This might be a bug too. I will check this later and let you know. I will also reply to the other discussion later tomorrow. Thanks for bringing up these sorts of questions.
Mostapha…
plane at which it is defined, not just the ground plane. So technically you can apply it at a higher level to guarantee solar exposure for the facades of surrounding buildings (for passive solar gain) although this obstructs solar access at ground level.
You're right that by limiting the south side, you're limiting the passive solar energy potential for the building mass within the SolarEnvelope. But I think this gets at a broader point about the use of this tool in the context of building energy performance: it's only one strategy amongst many, and many of these strategies will contradict each other.
Here are a couple of examples of conflicts relevant to your problem, off the top of my head: (1) In order to maximise passive solar gain, it would be ideal to concentrate built massing on the north side of the lot (increasing solar exposure on the south face) - as you suggest - but that would then obstruct solar exposure to the south face of the building immediately north of your site. (2) Increasing the surface area exposed to passive solar gain and daylighting can reduce building energy, but it can also increase building energy due to surface heat loss or increased air-conditioning loads. If air-conditioning loads are high, it would be better to do the opposite of the solar envelope: use the neighbouring massing to obstruct heat gain.
So the interaction between overshadowing, passive or active energy gain, built massing, glazing ratios and building energy is complicated. A couple of things you need to address: what is the dominant load in your buildings - is it heating (as in residential) or lighting (as in offices)? You may require different energy strategies for each. If the building is multi-use, you have an opportunity to combine different strategies. I believe commercial ground-floor programs, for example, usually don't require daylighting or passive solar gain, as their primary energy load is air conditioning. As I mentioned above, you can therefore raise the SolarEnvelope, ignoring the ground plane and instead focusing solar exposure on neighbouring facades.
Keep in mind that at a certain point (as I believe the paper Chris linked mentions) SolarEnvelopes constrain density to a point where it can be detrimental to broader urban goals. So if you want higher densities I wouldn't use the SolarEnvelope so 'strictly'.
s…
lding floor plan. This is done by minimizing the number of areas with less than a certain amount of lux. To simulate the daylight level or illuminance of the floor plan, DIVA was used.
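That objective can be expressed as a single number for Galapagos to minimize: the fraction of analysis points below a target illuminance. A hypothetical sketch of the fitness calculation (the function name and the 300 lux threshold are my assumptions, not DIVA's):

```python
def underlit_fraction(illuminance, threshold_lux=300.0):
    """Fraction of analysis points whose illuminance falls below a
    target level. Galapagos would minimize this single fitness value."""
    if not illuminance:
        raise ValueError("no illuminance values supplied")
    below = sum(1 for lux in illuminance if lux < threshold_lux)
    return below / len(illuminance)
```

In the definition, the DIVA grid results would feed the list, and the returned fraction would be wired to the Galapagos fitness input.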
Below is what my grasshopper definition looks like:
1. In Galapagos, what is the difference between the Evolutionary Solver and Simulated Annealing Solver?
(a) When I used the Evolutionary Solver, somehow there weren't any results showing, although I let the simulation run for quite some time. I could see that the solver was running, as the DIVA simulation pop-ups would appear, but in Galapagos everything remained blank. Could anyone tell me why this happens? This is what it looks like:
(b) When I used the Simulated Annealing Solver, results would appear for a while and then suddenly there would be an error and the whole process would stop. The graph would disappear; the results remain visible, but there are some problems retrieving the information. See below:
2. Also, is there any way I could save the results/combinations achieved in Galapagos?
Any help is much appreciated!!!
THANK YOU!…
, please let me know.
Also, I'd like to preface this request with a great deal of gratitude and thanks for creating/working on this software. So - thank you!
1) Variable parameter insertion for clusters
Right now, if you have a cluster and want to add an input/output from the parent document, you have to open the cluster add an input/output, close the cluster, and hook up the cluster. This is all good and well, but if the cluster takes a few seconds to execute, adding/removing clusters becomes pretty cumbersome.
Instead, it would be great if the variable parameter capability were added to clusters (à la the C#/VB/Python components). I'm imagining the functionality being:
You zoom into the cluster, and click on the 'plus'/'minus' buttons, which adds/removes an input/output (a generic input param) within the cluster, in an arbitrary location. You can then hook parameters up, and then double-click to enter the cluster. After you find the input components, you can continue hooking them up to the rest of your cluster.
This would speed up the ability to work fluidly with clusters.
2) De-clustering functionality
As with coding, sometimes a cluster makes sense; sometimes it doesn't. Sometimes the wrong aspects of an operation have been clustered, and it makes sense to undo the entire thing. But when this happens, you have to copy-paste from within the cluster and manually hook all of the wires back together, which is a great annoyance.
It would help greatly to be able to right-click on a cluster, select 'decluster', and have the cluster un-clustered, with all of the connections reconnected in their original positions. By making the 'cluster' operation work both ways, I think this would also really enhance the cluster workflow.
This would also solve the problem of adding things into a cluster. Normally, you have to copy components, paste them into the cluster, and re-link them within, which is a simple operation, yet takes a minute or so. With this functionality, you'd be able to decluster, select the declustered components + new components, and cluster again -- quickly adding components/items to a cluster.
Thanks for hearing me out!
Best,
Dan…
Added by Dan Taeyoung at 3:41pm on January 10, 2014
ce which should have perforated holes in it.
I have made the surface from several sine curves that have been lofted together to simulate rippling water. Then I add some holes to the lofted facade using a "Curve Analysis Holes" definition from Co-de-iT. There is a problem, though: this definition from Co-de-iT only places holes/circles, which are surfaces, on the lofted surface; it doesn't make actual holes/perforations in the loft...
Is there a way to transform the surface circles from co-de-it to cut holes in the lofted surface? I have tried two things to make this happen.
1. In GH I extruded the surface holes from Co-de-iT and then used "BBX" and "SurfaceSplit" so that the extruded holes (now pipes) could cut into the lofted surface facade, and then baked only the lofted surface with the holes in it. But it does not seem to work, even though I thought this was the way to go.
2. The other way I tried to cut the holes into the lofted facade surface was manually in Rhino. After baking out the lofted surface as an object, and the holes from Co-de-iT as a separate grouped object, I can manually extrude the holes on "both sides" of the lofted surface and then run BooleanSplit to cut the holes away from the lofted surface. The problem is that doing the operation manually in Rhino takes a lot of RAM, otherwise it crashes. I have 16 GB of RAM with an additional 16 GB of virtual memory, and it still crashes while calculating the Boolean operation.
The loft is my own definition and the holes come from Co-de-iT's "curvature analysis pattern" definition. I will keep trying to solve this in the meantime, as I am eager to figure it out. Please have a look too!
Many thanks,
/Sweden
…
es which you can see below in my mesh repair report I ran on the mesh after baking it.
This is a bad mesh.
Here is what is wrong with this mesh:
- Mesh has 2 non-manifold edges. <<------ because of the duplicate face
- Mesh has 1 duplicate face.
- Skipping face direction check because of positive non-manifold edge count.
General information about this mesh:
- Mesh does not have any degenerate faces.
- Mesh does not have any extremely short edges.
- Mesh does not have any naked edges.
- Mesh does not have any self-intersecting faces.
- Mesh does not have any disjoint pieces.
- Mesh does not have any unused vertices.
Continuing the repair process does get rid of the duplicate face. I also realized that Rhino 5 has a command called "ExtractDuplicateMeshFaces" which works quite nicely.
However, this "method" is not currently available in RhinoCommon. So I just wonder who I can ask to add it? It seems it would make sense to be in RhinoCommon, considering we have methods available for each of the other tests run by mesh repair. The reason I am interested in this command is that it seems to work very fast.
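For reference, the duplicate-face test itself is simple to sketch outside RhinoCommon. This is a minimal, order-insensitive check on faces given as vertex-index tuples - an illustration of what "duplicate face" means here, not the actual ExtractDuplicateMeshFaces implementation:

```python
def find_duplicate_faces(faces):
    """Return the indices of faces that repeat an earlier face's vertex
    set, ignoring winding order (so (0, 1, 2) duplicates (2, 1, 0))."""
    seen = {}        # sorted vertex tuple -> first face index
    duplicates = []
    for i, face in enumerate(faces):
        key = tuple(sorted(face))
        if key in seen:
            duplicates.append(i)
        else:
            seen[key] = i
    return duplicates
```

On a real mesh one would then extract or delete the faces at those indices, which also removes the non-manifold edges the report complained about.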
Thanks
…
Added by Michael Pryor at 12:53am on December 11, 2014
region edges...)
Examples from other posts suggest using the SurfaceSplit component to generate regions; however, this wouldn't be suitable due to the original unclean edges.
Two other possible solutions:
Option 1: The IsoVist Ray component in conjunction with ConvexHull works well, except it becomes painfully slow when multiple regions are required.
Preferred Option 2: PullPoints component gathers closest surrounding points around the centre point allowing a ConvexHull operation to generate the boundary.
The second option is currently the most promising; however, I'm running into a problem I don't quite understand...
When the PullPoint component searches for the closest surrounding points...
it appears to be missing out certain shared points, generating inaccuracies with the final region...
Can anyone explain why certain points are being excluded from the search and whether there's a way of resolving the problem...?
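One possible cause (an assumption on my part, not a confirmed explanation of the PullPoint component's logic) is floating-point tolerance: a shared point sitting exactly on the search boundary can fall just outside a strict distance test. A minimal sketch of gathering points within a radius, with and without a tolerance:

```python
def points_within(center, points, radius, tol=0.0):
    """Gather 2D points whose distance from center is <= radius + tol.
    With tol=0, points sitting exactly on the boundary can be lost
    to floating-point rounding in the distance computation."""
    cx, cy = center
    kept = []
    for x, y in points:
        d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
        if d <= radius + tol:
            kept.append((x, y))
    return kept
```

If this is indeed what is happening, slightly enlarging the search distance (or flattening and deduplicating the shared points first) should recover the missing boundary points.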
Many Thanks...
…
Added by Dean Foskett at 4:45pm on December 13, 2014