the end of the workshop
Student performance objectives:
- Understanding some basic concepts of Grasshopper, such as mathematical functions, geometry, etc.
- Creating a simple parametric design system.
---------------------------------------------------
Schedule:
Deadline for registration: April 02, 2013
Workshop starts: Thursday, April 02, 2013 - 5:30 pm
The workshop consists of 10 lectures; each lecture lasts 3 hours.
3 lectures per week
---------------------------------------------------
Fees:
600 L.E.
To reserve a place, please fill in the registration form below. We only have a few places available.
---------------------------------------------------
Prerequisites:
- Basic knowledge of any 3d modeling software (SketchUp, 3ds Max, Rhino, Maya, etc.) is required to attend the workshop.
---------------------------------------------------
Registration Form:
https://docs.google.com/forms/d/1W5CptB7FyU2d37_aqtSaBN_sxPqj7491HUN_NFgGyg8/viewform
---------------------------------------------------
Previous workshops:
https://www.facebook.com/events/469048376477647/
https://www.facebook.com/media/set/?set=a.548388031851299.1073741826.470747186282051&type=1
https://www.facebook.com/events/178326265647678/…
th the most crucial and imposing challenges that Mexico City faces and the ways in which architecture and urbanism can shape the metropolis at different scales. In this sense the programme sees the city as a laboratory where the virtual and experimental tradition of the Architectural Association finds fertile and concrete ground for the application of its methodology in Mexico.
“Manufactured Landscapes/Manufactured Urbanities” explores the metropolitan condition understood as a process manufactured by and for human beings. Hence the traditionally opposed concepts of the artificial versus nature are replaced under the premise that nature does not exist: nature is not natural but naturalised, and the artificial is not an external or imposed construct but is manufactured intrinsically.
With this as a starting point, the programme will study two instances of Mexico City’s “Manufactured Landscapes/Manufactured Urbanities”: the ravines in the west of Mexico City, the last bastion of the existing “Nature” and crucial to the viability of the city; and social housing, the fundamental construct of the “artificial” habitat in the metropolis’s urban tissue. These “Manufactured Landscapes/Manufactured Urbanities”, and the ways in which they are designed, produced, reinvented and regenerated, show a vast spectrum representative of the crucial urban conditions to be addressed; they therefore pose an enormous urban and architectural challenge to confront with contemporary design methodologies.
To tackle the complexities of the “Manufactured Landscapes/Manufactured Urbanities”, the programme will immerse students and staff in a 10-day intensive workshop within a multidisciplinary environment, where national and international experts from various fields will enrich their proposals. Students will work in teams at the architectural and/or urban scale and will critically assess the impact of their interventions across multiple scales.
A backbone of lectures, talks and seminars, including local and international speakers, is designed to broaden and reflect on the relevance and importance of the topic for Mexico City. Finally, a public exhibition of students’ work will be held at the Centro Cultural de España in autumn 2013.
…
of curved surfaces, its fabrication methods are still a challenge, especially at the level of complex surfaces and large-scale assemblies. Mathematical surfaces in general, and hyperbolic paraboloids (Hypars) in particular, embed never-ending opportunities for planar construction techniques. The ancient surface of the Hypar is one of the most interesting mathematical forms for architects, from quantitative structural optimization to qualitative ornament. Hypars have been used extensively in the works of avant-garde architects, including Gaudí, Le Corbusier's Philips Pavilion, the shell structures of Félix Candela and the tensile structures of Frei Otto, as well as by many contemporary architects.

The Hypars workshop aims to develop computational design techniques for complex organizations of hyperbolic surfaces, from the structural to the ornamental scale, with respect to planar fabrication methods. The mathematical and geometrical qualities of ruled surfaces will be explored in parallel with the material and assembly logic of planar elements for a 1:1 prototype of an outdoor canopy in Alexandria. The geometrical properties of Hypars will be coded on the platform of Rhino and Grasshopper, while the physical prototypes will be in wood and paper, which offer an integrated and intuitive understanding of complex geometries and physical relationships. The workshop objective is to reconsider materials and fabrication as a design tool for architects.

/// Application
To apply, please follow this link to fill in the application form: https://docs.google.com/forms/d/1S2-7YNifUing8SVX3Iz9ArrgQgjIk77w9jzG70sIHv0/viewform

/// Fees*
1700 EGP for students / 2000 EGP for graduates and young professionals
* 20% discount for early registration and payment before the 22nd of August 2014

More info on the workshop webpage: http://www.encodestudio.net/#!hypar/co9p…
you prefer) skin of a future-to-be truss (of quad type). If we divide these surfaces the classic way, we end up with a dense grid near the inner limits (no good - useless) and a sparse grid near the exterior limits (no good - stupid).
So the ideal solution could be a transition schema from sparse (in) to dense (out). Obviously with no mesh around (useless for further work). Kinda like a fractal logic, so to speak.
But...as I'm getting(?) more familiar(???) with GH...er...hmm...read my notes inside the file. In a nutshell, it's hard to imagine doing complex real-life AEC work with GH in the current state of things/capabilities: problematic clusters, one Canvas, one-way control (from Canvas to Viewports), no real-life visual tree explorer/manager, no bake management of any kind, no preview management of any kind, etc etc etc.
I mean (back to the original definition) that I spend more time trying to preview portions of the whole solution (10% completed - imagine what's next...he he) and/or to figure out what belongs where... than adding new logic.
It's the law of entropy that is always proportional .... etc etc, I guess.
Keep in mind that this (as a typical AEC thing) is not predefined: it's not like applying a Voronoi grid to a blob...you work by trial and error, you add stuff, you see results, you change approach, go back again, etc etc - meaning that the number of "help" items present in a given solution is probably ten times the number of real (i.e. final) things. This is also a big problem in Microstation (blame that stupid old-times Level way of organizing things, forget unrealistic Cells and Planet Utopia Refs - at least with regard to the current state of things).
It's quite explainable why CATIA dominates the Plant market segment (and has serious plans to invade more humble AEC segments as well).
best, Peter…
ial command:
create: Divide Curve, Voronoi, Area, Circle
If there are multiple instances of a single component, then you can assign them IDs (following Ángel's suggestion) using square brackets:
create: Divide Curve, Circle[1], Circle[2]
You can use numbers or words, whatever you want, to identify a component.
Parameters are written in parentheses, in front if they are inputs, trailing if they are outputs:
Voronoi(C) --> (G)Area
This will conflict somewhat with components which already use parentheses in their name, but we can simply consider the first or last parenthesis pair to indicate the parameter. In other words, the ambiguity can be resolved because all alternative interpretations are invalid.
K didn't like my usage of an inverse arrow ( <-- ) to assign properties, and I didn't like her suggestion of a different inverse arrow ( <== ). The equals symbol seems to be a halfway decent alternative, even though K still doesn't like it:
Voronoi = Preview:Off
All sorts of properties can be assigned using this notation, including name, position, enabledness etc.
We haven't decided on a good way to assign initial properties quickly. Your first suggestion [Slider=60] may work in conjunction with the create statement, but it is somewhat awkward. I suppose the logical way for this to work is to simply type:
slider = 0..10..50
using the shorthand notation for creating a new object (by mentioning it out of the blue) and immediately assigning a property to it.
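Pulling the fragments above together, a small script in the proposed notation might read like this (purely illustrative; the syntax is still under discussion at this point):

create: Divide Curve, Circle[1], Circle[2]
Voronoi(C) --> (G)Area
Voronoi = Preview:Off
slider = 0..10..50

The first line creates components (with bracketed IDs for the duplicates), the second wires the Voronoi output C into the Area input G, the third assigns a property, and the last creates a slider and assigns its values in one statement.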
Does this approach violate some of the goals you had in mind originally?
--
David Rutten
david@mcneel.com…
grid size 3 = 2.7 mins
grid size 2 = ??? memory peaks and Rhino freezes.
However, now that I have switched the unit of the Rhino file to feet,
grid size 3 = 18 mins,
which makes sense, I suppose, since the analysis has to work with a smaller tolerance.
The image below is what I got after 18 mins. I think the fact that I joined the individual units with a solid union may also make it take longer? You can see the mesh triangulation not only around the corners of the masses but also in between different units (if you look at the top level you will see).
oh, and I also have very little disk space left.
I would like to share the file, but right now it's a big mess and has a lot of stuff that is unrelated to this particular memory issue, like Revit interoperability and urban modelling. Also, the definition is set up so that it needs an Excel file that feeds what you see in the lower left corner, the wing mass scales. In order to compare design studies I am animating the Index of List component that feeds the different scales of the wings and the widths of the floor plates you see. You can see it in my video here. I will try to clean it up a bit when I get a chance, but it seems like grid size 3 might work as a starting point.
When I get around to extracting values from the mesh vertices and actually applying different facade designs driven by the parameters, I will know better what grid size might be necessary.
…
w number. If the script is slow, you can also double-click a number slider to access a panel that lets you slide a value without invoking a recalculation.
You don't need most of the inputs, which are for controlling the transition to the borders of open meshes. No, there's no manual beyond right-click help.
FixC and FixV fix, and thus retain, open borders mostly, or sharp creases, and there is art in them, meaning tricks you just have to blunder into or search for.
Flip is an alternative remeshing strategy worth changing from 0 to 1 to see the effect.
MeshMachine only gives you a nice, even, curvature-adaptive mesh (an Adapt setting of 0.8 or so is more reliable than 1); it does not thicken mesh wires into struts.
The struts are currently individual capped mesh cylinders. You could also use very slow NURBS cylinders. They may, or more likely may not, successfully Boolean union together in Rhino. Their diameter is set in the Mesh Pipe component.
There are other plug-ins for thickening the wires of a mesh: Exoskeleton, Intralattice and my favorite, the somewhat tweaky Cocoon marching cubes, which is nevertheless very robust. I sometimes run its overly fine mesh result through MeshMachine to make it regular and adaptive, since the Cocoon refine component is hard to control; I mostly enter 1s into most of its inputs.
If you turn on menu item Display > Canvas Widgets > Profiler and zoom in close enough on the canvas, you'll see timer readouts of how long each component took in a solution. That's how I can see that the pipes are the slow part, so I'd normally right-click to disable that part of the chain early on, and right-click to turn on preview for the earlier mesh step, before I make the pipes. The MeshMachine step takes only 2 seconds, and that's with Iter (internal iterations) at 10 instead of a workable 5.
Also turn on Display > Preview Mesh Edges to see the actual MeshMachine mesh.
…
ng is deciding how and where to store your data. If you're writing textual code using any one of a huge number of programming languages, there are a lot of different options, each with its own benefits and drawbacks. Sometimes you just need to store a single data point. At other times you may need a list of exactly one hundred data points. At other times still, circumstances may demand a list of a variable number of data points.
In programming jargon, lists and arrays are typically used to store an ordered collection of data points, where each item is directly accessible. Bags and hash sets are examples of unordered data storage. These storage mechanisms do not have a concept of which data comes first and which next, but they are much better at searching the data set for specific values. Stacks and queues are ordered data structures where only the youngest or oldest data points are accessible respectively. These are popular structures for code designed to create and execute schedules. Linked lists are chains of consecutive data points, where each point knows only about its direct neighbours. As a result, it's a lot of work to find the one-millionth point in a linked list, but it's incredibly efficient to insert or remove points from the middle of the chain. Dictionaries store data in the form of key-value pairs, allowing one to index complicated data points using simple lookup codes.
The above is just a small sampling of popular data storage mechanisms; there are many, many others, from multidimensional arrays to SQL databases, from read-only collections to concurrent k-d trees. It takes a fair amount of knowledge and practice to be able to navigate this bewildering sea of options and pick the best suited storage mechanism for any particular problem. We did not wish to confront our users with this plethora of programmatic principles, and instead decided to offer only a single data storage mechanism.*
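Before moving on, a rough Python illustration of the jargon above (none of this is how Grasshopper works internally; it only grounds the terms):

from collections import deque

items = [3.1, 4.1, 5.9]                 # list: ordered, any item directly accessible
first = items[0]                        # -> 3.1
unique = {3.1, 4.1, 5.9}                # hash set: unordered, but fast to search
found = 4.1 in unique                   # -> True
queue = deque([3.1, 4.1, 5.9])          # queue: only the oldest item is accessible
oldest = queue.popleft()                # -> 3.1
lookup = {"pi": 3.14159, "e": 2.71828}  # dictionary: key-value pairs
pi = lookup["pi"]                       # -> 3.14159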
Data storage in Grasshopper
In order to see what mechanism would be optimal for Grasshopper, it is necessary to first list the different possible ways in which components may wish to access and store data, and also how families of data points flow through a Grasshopper network, often acquiring more complexity over time.
A lot of components operate on individual values and also output individual values as results. This is the simplest category; let's call it 1:1 (pronounced "one to one", indicating a mapping from single inputs to single outputs). Two examples of 1:1 components are Subtraction and Construct Point. Subtraction takes two arguments on the left (A and B), and outputs the difference (A-B) to the right. Even when the component is called upon to calculate the difference between two collections of 12 million values each, at any one time it only cares about three values: A, B, and the difference between the two. Similarly, Construct Point takes three separate numbers as input arguments and combines them to form a single xyz point.
Another common category of components creates lists of data from single input values. We'll refer to these components as 1:N. Range and Divide Curve are oft-used examples in this category. Range takes a single numeric domain and a single integer, but it outputs a list of numbers that divide the domain into the specified number of steps. Similarly, Divide Curve requires a single curve and a division count, but it outputs several lists of data, where the length of each list is a function of the division count.
The opposite behaviour also occurs. Common N:1 components are Polyline and Loft, which consume a list of points or curves respectively, yet output only a single curve or surface.
Lastly (in the list category), N:N components are also available. A fair number of components operate on lists of data and also output lists of data. Sort and Reverse List are examples of N:N components you will almost certainly encounter when using Grasshopper. It is true that N:N components mostly fall into the data management category, in the sense that they are mostly employed to change the way data is stored, rather than to create entirely new data, but they are common and important nonetheless.
A rare few components are even more complex than 1:N, N:1, or N:N, in that they are not content to operate on or output single lists of data points. The Divide Surface and Square Grid components want to output not just lists of points, but several lists of points, each of which represents a single row or column in a grid. We can refer to these components as 1:N' or N':1 or N:N' or ... depending on how the inputs and outputs are defined.
The above listing of data mapping categories encapsulates all components that ship with Grasshopper, though it does not necessarily minister to all imaginable mappings. However, in the spirit of getting on with the software, it was decided that a data structure which could handle individual values, lists of values, and lists of lists of values would solve at least 99% of the then existing problems, and was thus considered to be a 'good thing'.
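A loose Python paraphrase of these categories (the component names are Grasshopper's, but the function bodies here are simplified stand-ins, not the actual implementations):

# 1:1  - single values in, a single value out (like Subtraction)
def subtract(a, b):
    return a - b

# 1:N  - single values in, a list out (like Range)
def divide_domain(start, end, steps):
    size = (end - start) / steps
    return [start + i * size for i in range(steps + 1)]

# N:1  - a list in, a single value out (like Polyline, reduced to a length sum)
def total_length(segment_lengths):
    return sum(segment_lengths)

# 1:N' - single values in, a list of lists out (like Square Grid)
def square_grid(rows, columns):
    return [[(r, c) for c in range(columns)] for r in range(rows)]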
Data storage as the outcome of a process
If the problems of 1:N' mappings only occurred in those few components to do with grids, it would probably not warrant support for lists-of-lists in the core data structure. However, 1:N' or N:N' mappings can be the result of the concatenation of two or more 1:N components. Consider the following case: A collection of three polysurfaces (a box, a capped cylinder, and a triangular prism) is imported from Rhino into Grasshopper. The shapes are all exploded into their separate faces, resulting in 6 faces for the box, 3 for the cylinder, and 5 for the prism. Across each face, a collection of isocurves is drawn, resembling a hatching. Ultimately, each isocurve is divided into equally spaced points.
This is not an unreasonably elaborate case, but it already shows how shockingly quickly layers of complexity are introduced into the data as it flows from the left to the right side of the network.
It's no good ending up with a single huge list containing all the points. The data structure we use must be detailed enough to allow us to select from it any logical subset. This means that the ultimate data structure must contain a record of all the mappings that were applied from start to finish. It must be possible to select all the points that are associated with the second polysurface, but not the first or third. It must also be possible to select all points that are associated with the first face of each polysurface, but not any subsequent faces. Or a selection which includes only the fourth point of each division and no others.
The only way such selection sets can be defined is if the data structure contains a record of the "history" of each data point. That is, for every point we must be able to figure out which original shape it came from (the box, the cylinder or the prism), which of the exploded faces it is associated with, which isocurve on that face was involved, and the index of the point within the curve division family.
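As a sketch of what such a history record makes possible, here is the example above modelled in Python, with tuples standing in for paths (the face counts come from the example; the isocurve and point counts are made up for illustration):

# Paths are tuples recording (shape, face, curve); each maps to a list of points.
face_counts = [6, 3, 5]  # box, cylinder, prism
tree = {}
for shape, faces in enumerate(face_counts):
    for face in range(faces):
        for curve in range(4):                       # say, 4 isocurves per face
            tree[(shape, face, curve)] = ["pt%d" % i for i in range(10)]

# All points associated with the second polysurface (zero-based index 1):
cylinder_pts = {p: v for p, v in tree.items() if p[0] == 1}

# All points on the first face of each polysurface:
first_face_pts = {p: v for p, v in tree.items() if p[1] == 0}

# Only the fourth point of each division, and no others:
fourth_pts = [v[3] for v in tree.values()]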
A flexible mechanism for variable history records.
The storage constraints mentioned so far (to wit, the requirement of storing individual values, lists of values, and lists of lists of values), combined with the relational constraints (to wit, the ability to measure the relatedness of various lists within the entire collection), led us to Data Trees. The data structure we chose is certainly not the only imaginable solution to this problem, and due to its terse notation it can appear fairly obtuse to the untrained eye. However, since data trees only employ non-negative integers to identify both lists and items within lists, the structure is very amenable to simple arithmetic operations, which makes it very pliable from an algorithmic point of view.
A data tree is an ordered collection of lists. Each list is associated with a path, which serves as the identifier of that list. This means that two lists in the same tree cannot have the same path. A path is a collection of one or more non-negative integers. Path notation employs curly brackets and semi-colons as separators. The simplest path contains only the number zero and is written as: {0}. More complicated paths containing more elements are written as: {2;4;6}. Just as a path identifies a list within the tree, an index identifies a data point within a list. An index is always a single, non-negative integer. Indices are written inside square brackets and appended to path notation, in order to fully identify a single piece of data within an entire data tree: {2;4;6}[10].
Since both path elements and indices are zero-based (we start counting at zero, not one), there is a slight disconnect between the ordinality and the cardinality of numbers within data trees. The first element equals index 0, the second element can be found at index 1, the third element maps to index 2, and so on and so forth. This means that the "Eleventh point of the seventh isocurve of the fifth face of the third polysurface" will be written as {2;4;6}[10]. The first path element corresponds with the oldest mapping that occurred within the file, and each subsequent element represents a more recent operation. In this sense the path elements can be likened to taxonomic identifiers. The species {Animalia;Mammalia;Hominidae;Homo} and {Animalia;Mammalia;Hominidae;Pan} are more closely related to each other than to {Animalia;Mammalia;Cervidae;Rangifer}** because they share more codes at the start of their classification. Similarly, the paths {2;4;4} and {2;4;6} are more closely related to each other than they are to {2;3;5}.
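A small Python sketch of this relatedness measure and of the {path}[index] addressing, again with tuples standing in for paths:

def shared_prefix(path_a, path_b):
    # Count how many leading elements two paths have in common.
    count = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        count += 1
    return count

print(shared_prefix((2, 4, 4), (2, 4, 6)))  # -> 2: closely related
print(shared_prefix((2, 4, 4), (2, 3, 5)))  # -> 1: more distantly related

# {2;4;6}[10]: the eleventh item of the list stored under path {2;4;6}.
tree = {(2, 4, 6): ["pt%d" % i for i in range(20)]}
print(tree[(2, 4, 6)][10])                  # -> pt10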
The messy reality of data trees.
Although you may agree with me that in theory the data tree approach is solid, you may still get frustrated at the rate at which data trees grow more complex. Often Grasshopper will choose to add additional elements to the paths in a tree where none is in fact needed, resulting in paths that all share a lot of zeroes in certain places. For example, a data tree might contain the paths:
{0;0;0;0;0}
{0;0;0;0;1}
{0;0;0;0;2}
{0;0;0;0;3}
{0;0;1;0;0}
{0;0;1;0;1}
{0;0;1;0;2}
{0;0;1;0;3}
instead of the far more economical:
{0;0}
{0;1}
{0;2}
{0;3}
{1;0}
{1;1}
{1;2}
{1;3}
The reason all these zeroes are added is because we value consistency over economy. It doesn't matter whether a component actually outputs more than one list; if the component belongs to the 1:N, 1:N', or N:N' groups, it will always add an extra integer to all the paths, because some day in the future, when the inputs change, it may need that extra integer to keep its lists untangled. We feel it's bad behaviour for the topology of a data tree to be subject to the topical values in that tree. Any component which relies on a specific topology will no longer work when that topology changes, and that should happen as seldom as possible.
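This is also why stripping the shared elements afterwards is safe and mechanical. Grasshopper's own path simplification has its own rules, but the principle can be sketched in a few lines of Python:

def simplify(paths):
    # Keep only the positions at which the paths actually differ.
    varying = [i for i in range(len(paths[0]))
               if len(set(p[i] for p in paths)) > 1]
    return [tuple(p[i] for i in varying) for p in paths]

paths = [(0,0,0,0,0), (0,0,0,0,1), (0,0,0,0,2), (0,0,0,0,3),
         (0,0,1,0,0), (0,0,1,0,1), (0,0,1,0,2), (0,0,1,0,3)]
print(simplify(paths))
# -> [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0), (1, 1), (1, 2), (1, 3)]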
Conclusion
Although data trees can be difficult to work with and probably cause more confusion than any other part of Grasshopper, they seem to work well in the majority of cases and we haven't been able to come up with a better solution. That's not to say we never will, but data trees are here to stay for the foreseeable future.
* This is not something we hit on immediately. The very first versions of Grasshopper only allowed for the storage of a single data point per parameter, making operations like [Loft] or [Divide Curve] impossible. Later versions allowed for a single list per parameter, which was still insufficient for all but the most simple algorithms.
** I'm skipping a lot of taxonomic classifications here to keep it simple.…
Added by David Rutten at 2:22pm on January 20, 2015
onsidered period.
Even if the end of July is not the best period for an adaptive comfort analysis in the Mediterranean climate (it's just a pretest to define a LB model), I want to refine the Adaptive Comfort chart (AC) by replacing the external air temperature data imported from the .epw file with monitored data, as reported here below:
The monitored external air temperatures are in this form (green panel below):
I have used the comfortPar component to set the following parameters:
Adaptive chart as defined by EN 15251
90% of occupants comfortable
the prevailing outdoor temperature from a weighted running mean of the last week
fully conditioned space (even if it is not properly in line with the AC, as already discussed)
The question is this: can the AC component correctly apply the code below if there is only a list of external temperature data for a restricted period (with no indication of the limits of that period), and not for an entire year?
else: #Calculate a running mean temperature.
    alpha = 0.8
    divisor = 1 + alpha + math.pow(alpha,2) + math.pow(alpha,3) + math.pow(alpha,4) + math.pow(alpha,5)
    dividend = ((sum(_prevailingOutdoorTemp[-24:-1] + [_prevailingOutdoorTemp[-1]])/24) +
                (alpha*(sum(_prevailingOutdoorTemp[-48:-24])/24)) +
                (math.pow(alpha,2)*(sum(_prevailingOutdoorTemp[-72:-48])/24)) +
                (math.pow(alpha,3)*(sum(_prevailingOutdoorTemp[-96:-72])/24)) +
                (math.pow(alpha,4)*(sum(_prevailingOutdoorTemp[-120:-96])/24)) +
                (math.pow(alpha,5)*(sum(_prevailingOutdoorTemp[-144:-120])/24)))
    startingTemp = dividend/divisor
    if startingTemp < 10:
        coldTimes.append(0)
    outdoorTemp = _prevailingOutdoorTemp[7:]
    startingMean = sum(outdoorTemp[:24])/24
    dailyRunMeans = [startingTemp]
    dailyMeans = [startingMean]
    prevailTemp.extend(duplicateData([startingTemp], 24))
    startHour = 24
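For what it's worth, here is my reading of what that snippet computes (a restatement of the code above, not Ladybug documentation): the dividend/divisor pair evaluates the EN 15251 weighted running mean of the daily mean outdoor temperatures of the six preceding days,

\theta_{rm} = \frac{\theta_{ed-1} + \alpha\,\theta_{ed-2} + \alpha^{2}\theta_{ed-3} + \alpha^{3}\theta_{ed-4} + \alpha^{4}\theta_{ed-5} + \alpha^{5}\theta_{ed-6}}{1 + \alpha + \alpha^{2} + \alpha^{3} + \alpha^{4} + \alpha^{5}}, \qquad \alpha = 0.8,

where \theta_{ed-i} is the 24-hour mean of the i-th day before. Since the oldest slice is _prevailingOutdoorTemp[-144:-120], the list presumably needs at least 144 hourly values (six full days) before the period of interest; with a shorter list the negative slices would pick up the wrong hours, which seems directly relevant to the restricted-period question.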
…
three categories, each one corresponding to a different shapeType_ input:
- polygons (shapeType_ = 0): anything consisting of closed polygons: buildings, grass areas, forests, lakes, etc.
- polylines (shapeType_ = 1): non-closed polylines such as streets, roads, highways, rivers, canals, train tracks ...
- points (shapeType_ = 2): any point features, like trees, building entrances, benches, junctions between roads ... store locations: restaurants, bars, pharmacies, post offices ...
So basically, when you run the "OSM shapes" component with shapeType_ = 2, you will get a lot of points. If you would like to get only 3d trees, you run the "OSM 3D" component and it will create 3d trees from only those points which are in fact trees. You can also check which points are trees by looking at the exact location on openstreetmap.org. For example:
Or use the "OSM Search" component, which will identify all trees among the points, regardless of whether 3d trees can be created or not. However, when it comes to 3d trees there is a catch:
Sometimes the geometry which Gismo streams from OpenStreetMap.org does not contain a "height" key, or it does contain it but the value for that key is missing. OpenStreetMap is a freely editable map database, so anyone with internet access and a free registered account on openstreetmap.org can add features (like trees) to the map database. However, regular people sometimes do not have the height-measuring devices needed for specific objects such as trees. So the "OSM 3D" component will generate 3d trees from only those tree points which contain a valid "height" key. A small workaround, though, is to input a domain (range) into the randomHeightRange_ input of the "OSM 3D" component (for example: "5 to 10"):
This will create the remaining 3d trees, the ones which do not have a defined height, by randomizing their height. The randomHeightRange_ input can also be applied to 3d buildings, and it is definitely something I need to write a separate article on.
In the end, it may be that nobody has mapped the trees in the area you are looking at.
Once you map a tree on openstreetmap.org, it will instantly be available to you or any other user of Gismo. I will be adding some tutorials in the future on how this can be done, but probably not in the next couple of weeks.
Let me know if any of this helps, or if I completely misunderstood your issue.…
Added by djordje to Gismo at 3:52am on February 8, 2017