any point in the design of a 3D model, and the model readapts without the altered area having to be redrawn.
Another key feature of parametric work is that it lets us automate work and design processes. This means that, with simple procedures, we can generate complex geometry that is always justified by parameters we define ourselves; to some extent this removes arbitrariness from the design and arms us with arguments for project decisions. It also lets us generate textures and patterns randomly, or varying according to attractors.
After completing this workshop, students will be able to develop their own grammars, with the confidence that comes from understanding the basic programming concepts on which Grasshopper's whole way of working rests.
Grasshopper opens up a whole world of possibilities in design and digital fabrication.
WHO IT IS FOR
The workshop is aimed at students and professionals in architecture, interior design, engineering, product design and industrial design, and more generally at creative profiles and artistic disciplines looking for an introduction to the world of parametric design.
Prior knowledge of Rhinoceros (basic level) is recommended, as some concepts will help you follow the workshop more easily.
…
mple problem.
Imagine you're dividing a space (100m²) into two rooms, one of which (room A) should be 40m² and the other (room B) 60m². It follows that the two room areas must always add up to 100m², and if you make one room smaller by 5m², the other one gets bigger by 5m².
The simplest expression that would convert room areas into a fitness value is, I think:
Abs(A - 40) + Abs(B - 60)
or, in English, the sum total of the discrepancies between the actual areas and the desired areas.
If the rooms are both 50m² we get a fitness of:
Abs(50-40) + Abs(50-60) = 20
If room A equals 10m² and room B equals 90m², we get:
Abs(10-40) + Abs(90-60) = 60
If both rooms are exactly right, we get:
Abs(40-40) + Abs(60-60) = 0
So the point here is to minimize fitness, and once the fitness has reached zero we know we're home free.
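To make the arithmetic above explicit, here is a minimal Python sketch of that linear fitness function (the function name and the 40/60 defaults are just illustrative; this is not Galapagos code):

# Minimal sketch: fitness as the sum of absolute deviations from the targets.
def fitness(a, b, target_a=40.0, target_b=60.0):
    return abs(a - target_a) + abs(b - target_b)

print(fitness(50, 50))  # 20 -> both rooms off by 10
print(fitness(10, 90))  # 60 -> both rooms off by 30
print(fitness(40, 60))  # 0  -> both rooms exactly right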
But this is a very straightforward case. What if we're trying to optimize a problem, while knowing there's no way on Earth we'll be able to solve all constraints? This is after all what Evolutionary solvers are good at. So what if the problem is not as clear cut?
This time, imagine the total available area is only 70m² but we still want every room to be 50m², so both rooms are bound to come up short. Let's write down three cases like before:
(Room A = 30m², Room B = 40m²)
Abs(30 - 50) + Abs(40 - 50) = 30
(Room A = 35m², Room B = 35m²)
Abs(35 - 50) + Abs(35 - 50) = 30
(Room A = 25m², Room B = 45m²)
Abs(25 - 50) + Abs(45 - 50) = 30
Holy Crap! They're all the same! Well this is no good, it's like three bald men fighting over a comb. Even though all solutions fail to meet constraints, they certainly shouldn't all be equally fit. Let's assume for the time being we'd rather have both rooms fail to meet demands in equal amounts instead of one room being ok-ish and the other being way off. How can we add this assumption to the fitness function?
Basically we need to exaggerate large departures from the ideal and trivialize small departures. Our naive fitness function was linear; our new and improved fitness function must be non-linear. The simplest non-linear function is the parabola (x²), so let's see where that gets us.
Abs(30 - 50)² + Abs(40 - 50)² = 500
Abs(35 - 50)² + Abs(35 - 50)² = 450
Abs(25 - 50)² + Abs(45 - 50)² = 650
Phew... The case where both rooms fail to meet demands equally has the lowest value (and thus the highest fitness), whereas the most extreme discrepancy has the highest value (and thus the lowest fitness).
This approach is called Least Squares fitting and it's one of the most common fitting algorithms in statistics.
Whether you decide to weigh your competing factors equally or differently, and whether you decide to treat deviations linearly or non-linearly is entirely up to you. It requires you have a decent understanding of the problem at hand and also a decent understanding of the mathematical behaviour of the fitness function.
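To make the squared version concrete, here is a small Python sketch; the function name, the list-based signature and the optional per-factor weights are my own assumptions for illustration, not anything Galapagos prescribes:

# Illustrative least-squares fitness: squaring exaggerates large deviations
# and trivializes small ones; weights let you value competing factors differently.
def squared_fitness(areas, targets, weights=None):
    if weights is None:
        weights = [1.0] * len(areas)
    return sum(w * (a - t) ** 2 for a, t, w in zip(areas, targets, weights))

# The three "too small" cases from above, both targets 50 m2:
print(squared_fitness([30, 40], [50, 50]))  # 500
print(squared_fitness([35, 35], [50, 50]))  # 450 -> fittest of the three
print(squared_fitness([25, 45], [50, 50]))  # 650 -> least fit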
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 6:16am on February 25, 2011
about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen that compile your project while you type it. I know that threading within components and at the RhinoCommon level is a freaking hard problem that has been discussed at length already (although when it's finished, 5-10 years from now, it will be very cool).
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver were a little more atomic, like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info needed to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, to exploit at least the parallelism that might be built into a given GH file (not within components, but rather solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
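Purely to make that idea concrete, here is a rough Python sketch of that kind of scheduling. It is conceptual only: every name in it is invented and it says nothing about how Grasshopper's solver actually works. Components whose upstream dependencies are already solved get handed to a thread pool together, so non-interdependent branches run side by side.

# Conceptual sketch only: solve the "jobs" of a definition so that
# components that don't depend on each other run at the same time.
from concurrent.futures import ThreadPoolExecutor

def solve_definition(components, upstream, solve_one):
    # components: list of component ids
    # upstream:   id -> set of ids that must be solved first
    # solve_one:  function that computes one component and returns its output
    done, results = set(), {}
    with ThreadPoolExecutor() as pool:
        while len(done) < len(components):
            # Everything whose upstream components are already solved is ready.
            ready = [c for c in components
                     if c not in done and upstream[c] <= done]
            if not ready:
                raise ValueError("cyclic dependency in definition")
            # Independent branches of the definition are solved side by side.
            for c, out in zip(ready, pool.map(solve_one, ready)):
                results[c] = out
                done.add(c)
    return results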
I was imagining a couple of scenarios:
a) Writing a parallel module: solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point you could connect up to some of the already calculated parameters and write something in parallel to the main trunk which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different points along a section of code. Sure, you could freeze that section of downstream code and make modifications so you can observe the effects more quickly, unfreeze a bit more, repeat until you're done, and then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just absorb this need? Since the vast majority of the time GH spends solving is on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if it took 15 seconds or 15 hours for the final run.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK; from a user's point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: a simple percentage version for unpredictable code and, for those of us able to calculate the time our algorithm will take from the number of input parameters, a countdown in seconds or minutes or whatever.
I think a good place to start with these sorts of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
isseminated at the firms I've worked at:
1. Always write your scripts as though someone else is going to have to use and debug them without any instruction from you. This is kind of an overall governing principle that drives a lot of the other best practices.
2. Structure your definitions left to right. This way it is clear what is dependent on what and what executes in what order, and it makes it easy to use the "Moses" tool (alt-click and drag on the canvas to spread components apart) to insert intermediate functionality into an already-existing definition.
3. For a given functional group (a set of components that does a well-defined thing), keep all the data going into the group as labeled parameters on the left, and everything going out of the group as labeled parameters on the right. This is the intent of my "Best Practicizer" tool in Metahopper (now a menu item instead of a component). In essence, you're treating each group as though it's about to be clustered: you've defined what it does, what its inputs are, and what its outputs are. This makes troubleshooting much easier - if something is going wrong, you can easily isolate which group is causing the trouble by looking at inputs and outputs.
3b. If you're grabbing some value or data (e.g. "STEP COUNT") many times from elsewhere in the definition, don't make a bunch of long wires that connect all the way back to the original source - grab that data into ONE labeled parameter and then connect all the inputs that need it to that. It makes for one long wire instead of 20.
4. Annotate, annotate, annotate. Label your params (and if you're in icon mode, switch them manually to text). Label your groups. Use scribbles to mark larger regions of functionality. Use panels for "instructions" wherever it might not be clear how someone is supposed to use your tools.
5. Avoid "wireless" (Hidden Wires) connections. If you MUST use them, make sure you create params at both ends with matching names so it's clear what the data represents and where it comes from.
6. Cluster where possible. It's extremely helpful to isolate functional groups into clusters - it makes debugging faster and easier, since you don't have to wait for the whole definition to recompute when making small edits to the inside of a cluster, and it sets you up well to create code that can be re-used later on. However, don't take whole definitions and cluster them. As a rule of thumb, if a cluster has more than ~10 inputs, it should probably be broken into multiple clusters. There is a slight performance impact when clustering, because unlike an un-clustered group of components, which only re-executes the parts of the definition where something has changed, any time ANY input to a cluster changes, the WHOLE cluster re-computes. Because of this, a cluster generally shouldn't wrap groups of components that are not related / don't connect with each other.
7. Color-code your groups. Many firms develop a standard around group coloring so that it's easy to understand what parts of a definition are doing what kind of task. For instance, at Woods Bagot where I work, we have different colors for component groups that highlight inputs, outputs, Rhino references, baking, and visualization. You may find that a different set is useful to you, but having a consistent standard can improve legibility.

That's my 2c. At the end of the day, everyone works a little bit differently, and that's unavoidable (and not even a bad thing!). As long as you keep #1 in mind, all the rest will follow.
…
):
import rhinoscriptsyntax as rs

# start and end points of the segment to divide
start = rs.AddPoint(0, 0, 0)
end = rs.AddPoint([10, 0, 0])

divide = 20

# vector from start to end, divided into one step per subdivision
vec = rs.VectorCreate(end, start)
vec = rs.VectorDivide(vec, divide)

# copy the start point along the scaled vector to collect the circle centers
centerList = list()
for i in range(divide):
    newVec = rs.VectorScale(vec, i)
    centerList.append(rs.CopyObject(start, newVec))

# add a circle of radius 2.0 at each center
for center in centerList:
    rs.AddCircle(center, 2.0)

Any help appreciated, thanks,
roy…
Added by roy orengo at 4:03pm on November 15, 2016