of computational design and a comprehensive knowledge of cutting-edge technologies in the fields of parametric architecture, robotics, digital manufacturing and 3D printing for the construction industry. The program is a part-time executive format (one week per month over one year) designed for a selected group of architects, engineers, designers and digital artists. DESIGN by DATA is made up of courses, fabrication and prototyping workshops, conferences, digital talks and networking events. The program takes place in multiple locations in Paris and is a true opportunity to enter the international ecosystem of architectural innovation. A coworking membership and full fab lab access to digital manufacturing machines are included in the program.
..........
PROGRAM
A) ART AND CULTURE IN THE DIGITAL TURN Digital Culture and Liquid Spaces (Yasmine Abbas)
Smart cities and Collective Intelligence (Domenico Di Siena)
Art, Technology and the Creative Process (Eric Vernhes)
Advanced Mesh Modelling and Data Bodies (Andrea Graziano)
Agent Based Tectonics for Architecture (Alessio Erioli)
B) COMPUTATIONAL DESIGN: COMPLEX GEOMETRIES AND OPTIMISATION
Management and Design of Complex Geometries (Olivier Baverel)
Conceptual Structural Design (Romain Mesnil)
Algorithmic Optimization (Cyril Douthe)
Performance-driven Design (Sébastien Perrault)
C) ADDITIVE MANUFACTURING AND ROBOTIC FABRICATION
Robotics and Industrial Fabrication (Thibault Schwartz)
3d Printing and Material Science (Justin Dirrenberger)
Drones and Aerial Robotics for Environmental Design and Architecture (Aldo Sollazzo)
Digital Prototyping and Final Project Fabrication (Minh Man Nguyen)
APPLY FOR SEPTEMBER 2016
http://www.enpc.fr/design-by-data…
I've chosen to dive into Grasshopper. I'm about 6 months in. If some of my comments are completely off, please take that to mean that a feature is too inaccessible to a newish user rather than that it's just missing, as I may have stated.
One of my primary pain points is this: things that can be done in other programs are invariably easier in those other programs. This is a big enough issue that I doubt there's an easy solution that an armchair QB like myself can offer up.
The interface:
I've used a lot of 3D programs. I've never encountered one as difficult as Grasshopper. What in other programs is a dialog box is 8 or 10 components strung together in Grasshopper. The wisdom I often hear for this among the Grasshopper community is that it allows for parametric design. Yet PTC (Parametric Technology Corp.) has been making parametric design software since 1985 and has a far cleaner and more intuitive interface. So do SolidWorks, Inventor, CATIA, NX, and a bunch of others.
In the early 2000s, when parametric design software was all the rage, McNeel stated quite strongly that Rhino would remain a direct modeler and would not become a parametric modeler. Trends come. Trends go. And the industry has been swinging back to direct modeling. So McNeel's decision was probably OK. But I have to wonder if part of McNeel's reluctance to incorporate some of the tried and proven ideas of other parametric packages doesn't have roots in that earlier declaration not to incorporate parametrics.
A Visual Programming Language:
I read a lot about the awesomeness and flexibility of Grasshopper being a visual programming language. Let's be clear: this is DOS-era speak. I believe GH should continue to have the ability to be extended and massaged with code, as most design programs do. But as long as this is front and center, GH will remain out of reach for the average designer.
Context sensitivity:
There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them.
Sliders:
I hate sliders. I understand them, but I hate 'em. I think they should be optional. Yeah, I know I can right-click on the N of a component and set the integer. It's a pain, and it gives no feedback. The "N" should turn into the number if set. And sliders should be context sensitive. I like that the name of a slider changes when I plug it into something. But if I plug it into something that'll only accept a 1, a 2, or a 3, that slider should self-set accordingly. I shouldn't be able to plug in a "50" and have everything after it turn red.
Components:
Give components a little "+" or a drawer on the bottom or something that, when clicked, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to right-click on each thing on a component to do all of the settings.
And this item I’m guessing on. I’m not yet good enough at GH to know if this may have adverse effects. Reverse, Flatten, Graft, etc.; could these be context sensitive? Could some of these items disappear if they are contextually inappropriate or gray out if they're unlikely?
Tighter integration with Rhino:
I'm not entirely certain what this would look like. Currently my workflow entails baking, making a few Rhino edits, and reinserting into GH. I question the whole baking thing, btw. Why isn't it just live geometry? That's how other parametric apps work. Maybe add more Rhino functionality to GH. GH has no 3D offset. I have to bake, OffsetSrf, and reinsert the geometry. I'm currently looking at the "Geometry Cache" and "Geometry Pipeline" components to see if they help. But I haven't been able to figure it out. Which leads me to:
Update all of the documentation:
I'm guessing this is an in-process thing and you're working toward rolling GH from 0.9.00075 to 1.0. GH was being updated nearly weekly earlier this year. Then it suddenly stopped. If we're talking weeks before a full release, so be it. But if we're looking at something longer, a documentation update would help a lot. Geometry Cache and Geometry Pipeline's help still reads "This is the autogenerated help topic for this object. Developers: override the HtmlHelp_Source() function in the base class to provide custom help." This does not help. And the Grasshopper Primer 2nd Ed. was written for GH 0.60007.
Grasshopper is fundamentally a 2D program:
I know you'll disagree completely, but I'm sticking to this. How else could an omission like OffsetSrf happen? Pretty much every 3D program in existence has this. I'm sure I can probably figure out how to deconstruct the breps, join the curves, loft, trim, and so forth. But does writing an algorithm to do what all other 3D programs do with a dialog box seem reasonable? I'm sure if you go command by command you'll find a ton of such things.
If you look at the vast majority of things done in GH, you'll note that they're mostly either flat or a fundamentally 2D pattern on a warped surface.
I've been working on a part that is a 3D voronoi trimmed to a 3D model. I've been trying to turn the trimmed voronoi into legitimate geometry for over a month without success.
http://www.grasshopper3d.com/profiles/blogs/question-voronoi-3d-continued
I’ve researched it enough to have found many others have had the exact same problem and have not solved it. It’s really not that conceptually difficult. But GH lacks the tools.
Make screen organization easier:
I have a touch of OCD, and I like my GH layout to flow neatly. Allow input/output nodes to be re-ordered. This will allow a reduction in crossed wires. Make the wire positions a bit more editable. I sometimes use a geometry component as a wire anchor to clean things up. Being able to grab a wire and pull it out of the way would be kinda nice.
I think GH has some awesome abilities. I also think accessing those abilities could be significantly easier.
~p…
"Meshed": I assume that meant converting surfaces with MeshUV/DeMesh? From your screenshots that's a substantial number of vertices, and therefore lines to draw; well worth it, though, judging from the results! I agree with your answer to 3) that a more automatic solution is required.
1) By mesh, I should have said produce a surface – then convert surface to mesh – followed by de-mesh to get access to vertices etc.
You can reduce the resolution of points if you need to, depending on your hardware. The more points you use, the harder it is to compute a solution; however, the more points you use, the more accurate your interpolated surface. You need to find your own balance between speed and accuracy.
- That's great news; equalizing vertex numbers is exactly what I need to do, since my Blend surface "keyframes" by nature will likely have unequal point counts. However: a) when using default Rhino surfaces, your intriguing def. started working for me only after I replaced your "custom" Domain (VB/Python? let me know) with Deconstruct Domain. It then connected each surface's vertices but did not produce an intermediate surface or points. b) when using my identical Blend surfaces in your def. with the Deconstruct Domain and Merge components, it then produced intermediate vertices. See the def. screenshots, or I can send the defs if you like. I'll also produce the second, non-identical Blend surface keyframe to test in your def.
2) I am not sure what you mean by my 'custom domain'. Are you referring to the definition in my second post, or the post I sent for David to look at? Perhaps you can circle the component and upload a screenshot so I know what you are referring to? Your second screenshot appears to have worked OK.
- Agreed. 6) Does or will your latest def. contain more automated vertex correspondence and line creation?
3) No, I moved away from morphing surfaces and moved my solution to generating surfaces based on point data. This cut out the requirement for me to generate the surface to begin with and allows very automatic production of surfaces from data out of excel. Perhaps this would also be a good solution for you? You could:
Move your point data to excel, by exporting the x, y, z of your vertices for each surface.
Use excel as your information repository then write a definition to interpolate between your start and end points from excel.
This is basically what I have done now, as I have 1700 different ‘surface’ snap shots from the data I am working with.
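A minimal sketch of this export-and-interpolate idea in plain Python: the two point grids below are made-up stand-ins for vertex snapshots stored in Excel (which you would really read with a spreadsheet library or from CSV), and `lerp_points` is a hypothetical helper, not a GH component.

```python
def lerp_points(start_pts, end_pts, t):
    """Linearly interpolate between two equally sized lists of (x, y, z) points."""
    if len(start_pts) != len(end_pts):
        raise ValueError("point counts must match to interpolate")
    return [
        tuple(a + (b - a) * t for a, b in zip(p0, p1))
        for p0, p1 in zip(start_pts, end_pts)
    ]

# Two made-up 'keyframe' snapshots of the same 2x2 vertex grid:
start = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
end   = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (0.0, 1.0, 1.0), (1.0, 1.0, 1.0)]

# t = 0.5 gives the surface vertices halfway between the two snapshots
halfway = lerp_points(start, end, 0.5)
print(halfway[0])
```

Driving t with a slider from 0 to 1 then gives the morph-between-keyframes behaviour described above, provided both snapshots have equal point counts.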
- Perhaps I missed something, but after using Brep Join on my polysurface, SDivide still saw it as subsurfaces instead of a single surface.
4) Sorry, perhaps I should have tried that; I didn't get as far as trying to subdivide. There should be a way to re-create it as one surface if necessary… I will try to find out when I have time.
5) How many sets of surfaces are you trying to merge through? It is also possible to morph from 1 to 2, 2 to 3, 3 to 4 …… x-1 to x by using a slider which calculates the range and picks the correct two surfaces to morph. If you need more info, let me know and I will write something.
- That sounds perfect, especially since the sets of surfaces will be as nearly unlimited as the feature film they're modeled from. Yes, I'd love to learn more info/defs on this subject, thanks.
Sounds to me like you might be better off taking the Excel read-and-interpolate route? If you have nearly unlimited surfaces, then they must be generated from some other data source, yes?
Let me know your thoughts, if you would like to discuss anything I am happy to make myself available on skype at some stage to talk you through some of this stuff.
Cheers
Lyndon
EDIT: I have uploaded a video, which shows a surface generated using excel data - which basically loops between 'snapshots in time' to give you an idea of whether this would suit your needs.
https://www.youtube.com/watch?v=f9XAne9byQc&feature=youtu.be
…
1. From the Thermal Comfort Indices component, Comfort Index 11 (TCI-11): MRT = f(Ta, Tground, Rprim, e)
with:
- Ta = DryBulbTemperature, coming from the ImportEPW component
- Tground = f(Ta, N), where N comes from the totalSkyCover input. Tground influences the long-wave radiation emitted by the ground in the MRT calculation.
- Rprim, defined as the solar radiation absorbed by a nude man = f(Kglob, hS1, ac)
- ac is the clothingAlbedo in % (bodyCharacteristics input)
- I can't find any definition in the code of Kglob and hS1. Could you tell me please what those values are referenced to? Probably the globalHorizontalRadiation, but how?
- e = vapour pressure, calculated from Ta and the Relative Humidity input
Do you agree that in this case the MRT does not depend on these inputs: location, meanRadiantTemperature, dewPointTemperature and wind speed? Nor does it depend on the other bodyCharacteristics like bodyPosture, age, sex, met, activityDuration...?
Is the MRT calculated by the TCI-11 method the mean radiant temperature for a vector pointing vertically with a sky view factor of 100%?
For ParisOrly epw:
2. From the SolarAdjustedTemperature component (that seems to be more used for the UTCI calculation examples on Hydra compared to TCI-11).
In contrast to TCI-11, this component distinguishes diffuse and direct radiation and contextualizes the calculation thanks to the _ContextShading input, right? It can also be applied to a mannequin thanks to the CumSkyMatrix, and thus evaluate the inhomogeneity of radiation exposure. This component seems not to consider the influence of vapour pressure on the result; is it then more precise to put the MRT output (from the TCI) as an input to meanRadTemperature for SolarAdjustedTemperature? The default groundReflectivity is set to 0.25; is groundReflectivity taken into account in the Tground or MRT calculation in the TCI component? If yes, what is the hypothesised groundReflectivity? Does the default clothing albedo of 37% (TCI-11 bodyCharacteristics) correspond to a clothing absorptivity of 63%?
If the CumSkyMatrix input is not supplied, I get 9 results for the mannequin --> where are those points/results coming from?
If the CumSkyMatrix input is supplied, I suppose the calculation of the 482 results corresponds to a calculation method similar to the radiation analysis component, averaged over the analysis period. Right? But I don't understand why the mannequin is composed of 481 faces while meshFaceResult gives 482 results.
Finally, what is the link between the mesh results, the solarAdjustedMRT and the effective radiant field? Is there a paper with a detailed explanation of the method?
3. Here are some results for the ParisOrly EnergyPlus weather data. You can find the Grasshopper definition attached. There is no shading in this simulation, and the result coming from the Thermal Comfort Indices component for MRT is very different from the solar-adjusted MRT. Why such a big difference, and which of the results should be plugged into the UTCI calculation component?
Results for ParisOrly.epw
M,D,H: 1,1,12
Ta: 6.5°C
rh: 100%
globalHorizontalRadiation: 54 Wh/m2
totalSkyCover: 10
MRT (TCI-11): 1.2°C

_CumSkyMtxOrDirNormRad = directNormalRadiation: 0 Wh/m2
diffuseHorizontalRad: 54 Wh/m2
_meanRadTemp = Ta
solarAdjustedMRT: 10.64°C
MRTDelta: 4.14°C

_CumSkyMtxOrDirNormRad = CumulativeSkyMtx
diffuseHorizontalRad: 54 Wh/m2
_meanRadTemp = Ta
solarAdjustedMRT: 10.47°C
MRTDelta: 3.97°C

_CumSkyMtxOrDirNormRad = CumulativeSkyMtx
diffuseHorizontalRad: 54 Wh/m2
_meanRadTemp = MRT (TCI-11)
solarAdjustedMRT: 5.17°C
MRTDelta: 3.97°C
Thanks a lot for your help.
Regards,
Aymeric
…
can work in any node of a given hierarchy tree (loaded in your work session) by making the node "active". "Nodes" can be other things as well (like workplane, clip definitions etc).
Why do that weird thing? Well, think of any design as being "flat" > meaning that all objects are placed in a single file (and in a single layer). Not that good > although the items are present, you can barely handle them (because power is nothing without control, he he).
Let's go one step further: we can start classifying objects in "groups" (like a directories/files organization in any OS). This means, in MCAD speak, creating assemblies (a void thing, kinda like a directory) that contain components/entities (kinda like files).
Several steps further we end up with severely nested "arrangements" of entities (an assembly could be parent of something and child of something else).
For instance, the logical classification of a "geodetic" (so to speak) structure like this could be rather obvious: a 40,000 m2 "hangar" defining some thematic park.
I mean : a void master that owns 4 equal void segment sets that own 4 "legs" that own various geodesic structural members + cables + membranes + you name it etc etc.
Each "leg" owns the concrete base (Shared) and a rather complex set of objects.
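As a rough illustration only (the names and counts below are made up to match the hangar example, and this is not any MCAD package's actual data model), the nested assembly/component arrangement with a Shared entity can be sketched as a simple tree:

```python
# A toy assembly tree: assemblies are void parents, components are leaves,
# and a single Shared node is referenced by several parents.

class Node:
    def __init__(self, name, shared=False):
        self.name = name
        self.shared = shared      # Shared entities appear under several parents
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def count(self):
        """Total number of tree nodes in this subtree (Shared nodes counted per use)."""
        return 1 + sum(c.count() for c in self.children)

master = Node("hangar master")                 # the void master
base = Node("concrete base", shared=True)      # one Shared definition, reused
for s in range(4):
    segment = master.add(Node("segment %d" % s))   # 4 equal void segment sets
    for l in range(4):
        leg = segment.add(Node("leg %d" % l))      # each owns 4 "legs"
        leg.add(base)                              # the same Shared base everywhere
        leg.add(Node("structural members"))        # plus cables, membranes, etc.

print(master.count())
```

Because every leg points at the same `base` object, editing it once would propagate to all 16 uses, which is the point of Shared entities.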
Notice that some tensile membrane "fixture" combos (see above) act as perimeter light fixtures as well, meaning that the membrane tension plate could be a child of a void "light" parent... or could be a "stand-alone" assembly, etc etc.
These arrangements can be internal (belonging in, say, an x node within the current active file) or external (belonging in a y node within another file). If they deal with the same (topologically speaking) object, they define clusters of Shared entities (or variations), where only the view transformation matrix changes (in the simple scenario, he he). For instance, the disk shown above is a Shared Assembly that owns the bolts, the plates, the tension member, etc etc. Selective Instancing allows modifying some attributes without affecting the topology (i.e. the geometry).
The whole (terrible) mess is controlled by some tree-like "dialog" (in CATIA it's "transparent") called the Structure Browser. By controlled I mean (1) display/display mode with regard to any tree member combo/selection set (assembly and/or component) in any View, (2) clip state control, (3) active status (for modifications/variations), (4) workplane control, (5) drag-and-drop ownership control, (6) ....
Now...what if I would chan…
this occasion, but it could be converted for DT in no time). Requires some minutes more as regards ... some things, but the usual update is due in some days.
Bad news: it's C#
Good news: User's Manual :
1. That thing (the C#, not me), after sorting the panels (their order was chaotic) in a "sequential way", so to speak, allows you to start the massacre by locating a focus of interest (and the user-controllable +/- Range derived from it).
2. The Range is variable (obviously) and takes care not to exceed the indices of the panel list (OK, that's elementary).
3. If you click the right button (sadistic Q: where is it? he he), things are deleted and a new, constantly self-updating list is your new List. Thus the massacre of panels is totally controllable. An autoZoom thing is also included (free of charge, but it's a bit nerve-racking). The zoom factor is variable as well.
4. Then you move over (via the index slider) and start the massacre again. Notice the change of Range.
5. If you turn begin to false (initialization) and then begin to true > you start all over again.
6. The other C# thing allows you to increment the index slider in a rather more convenient way. It's a bit weird: it uses delegates (a delegate is an object that knows how to call a method) and events (an event is a construct that exposes just the subset of delegate features required for the broadcaster/subscriber model, but don't ask what this means, he he) in order to talk to your slider (with a defined NickName) and perform the required value control.
NOTE: without realizing it you've just (indirectly) asked one of the most important questions ever exposed in this Noble Forum. I hear you: what question? Well ... wait some days for the mother of all threads: "Total control in collections on a per-Item basis"
may the Force (the dark option) be with you (and me)
best, Peter…
corners of the last surface in the Brep; however, only the corners of the bounding box of the surface are generated)
It seems rs.SurfacePoints only returns the control points of a surface rather than the actual corners of the surface. Can you advise if there's a way to do it?
Thank you!
Code:
import rhinoscriptsyntax as rs

all_parts = rs.ExplodePolysurfaces(brep)
centers = []
vectors = []
lines = []
vertices = []
cnt = 0
for part in all_parts:
    # area centroid of each face, used as the anchor point for its normal
    center, err = rs.SurfaceAreaCentroid(part)
    centers.append(center)
    #rs.AddText(str(cnt), center)
    uv = rs.SurfaceClosestPoint(part, center)
    vector = rs.SurfaceNormal(part, uv)
    vectors.append(vector)
    N_start = center
    N_end = rs.VectorAdd(center, vector)
    line = rs.AddLine(N_start, N_end)
    lines.append(line)
    #vertices = rs.SurfacePoints(part)
    vertices = rs.SurfaceEditPoints(part)
    cnt += 1
#C = centers
#N = vectors
#L = lines
V = vertices
#todo:
#explore the surface methods in rhinoscript.surface...
#import rhinoscript.surface.…
Added by Grasshope at 10:34pm on September 15, 2015
the effective projected area of the structure (EPA_s)
When the Karamba cross-section optimiser finds the most weight-efficient section for the given load, the actual EPA_s has only just been calculated. The issue here is that one of the main loads on a lattice tower is the wind pressure. So the wind load is based on the EPA_s, which we can get only after optimising the sections. In other words, one of our inputs is based on an output of the design.
This can be solved (for instance) with recursive looping. The recursive looping works as per the following steps:
start the loop with an arbitrary wind pressure value (in this case a 400 N/m linear load on the members)
run the analysis
optimise the members
get the effective area of the members
send them back in the loop for wind load generation
repeat from step 2.
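The steps above amount to a fixed-point iteration. Here is a toy Python sketch of the same control flow; both functions are invented stand-ins (the real definition uses Karamba's optimiser and Anemone's loop components), so the numbers mean nothing beyond showing the convergence behaviour:

```python
def optimise_sections(wind_load):
    # Made-up relation: heavier wind loads demand larger sections,
    # hence a larger effective projected area (EPA_s, in m2).
    return 10.0 + 0.01 * wind_load

def wind_load_from_area(epa):
    # Made-up relation: the line load on the members scales with EPA_s.
    return 30.0 * epa  # N/m

load = 400.0      # step 1: arbitrary starting wind load (N/m)
history = []
for _ in range(6):                     # 6 loops were found sufficient in the post
    epa = optimise_sections(load)      # steps 2-4: analyse, optimise, read EPA_s
    load = wind_load_from_area(epa)    # step 5: regenerate the wind load
    history.append(load)               # step 6: repeat

# After a few iterations the load settles to the fixed point of the two relations.
print(history)
```

With these made-up coefficients the load converges geometrically, mirroring the observation that after 4-5 loops the total weight stops changing.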
In the Anemone container we can define the number of loops (S). I found it sufficient to apply 6 loops in this case. After 4-5 loops, no further optimisation is observed in the total weight of the structure.
One more thing is important to mention. Since I am using Galapagos as the evolutionary solver, I did not want to run the loops manually. For an automated optimisation, one of the Galapagos inputs has been used as a trigger for the Anemone loop. Which is a useful trick ;) (I tried HoopSnake before, but without success.)
Since my definition is part of a proprietary solution, I'm unable to share the .gh file. But I hope it still can give some hint what can be done by these components.
…
e existing wires.
2) The capsule display is very similar to the first graph, but instead of drawing a line connecting relative y-values for each slider, each slider gets assigned a colour (from dark red to yellow) based on its relative position. It allows you to see whether two genomes are similar or not without taking up too many y pixels.
3) This is a tricky one to explain. Every genome in a single species has the same 'dimensionality'. For example, if there are only two sliders you can say that the entire genome space for the species is 2-dimensional. For every possible combination of these two sliders, there is a fitness value (or a height) on this two dimensional plane. If your genome consists of 6 sliders, then we're talking about a 6-dimensional space.
As you probably know, distances between points are computed with the same formula regardless of the dimensionality of those points; Pythagoras' method works for any pair of points with the same number of dimensions. So even though I cannot display a 6-dimensional genome space on a two-dimensional computer screen, I can compute the distances between all the genomes in a species/generation. This then gives me a matrix with the distances from every genome to every other genome. I translate this distance matrix to a node-spring particle system and solve that system in two dimensions, which ultimately results in the point-scatter graph you see on the screen.
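The distance-matrix step is easy to sketch in plain Python (the node-spring relaxation that flattens the matrix to 2D is omitted, and the genomes here are made-up 6-dimensional slider vectors):

```python
import math

def distance(g1, g2):
    """Euclidean distance between two genomes of equal dimensionality."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(g1, g2)))

genomes = [
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # three 6-dimensional genomes
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
]

# Distance from every genome to every other genome:
matrix = [[distance(a, b) for b in genomes] for a in genomes]
print(matrix[0][1])  # 1.0
```

The resulting symmetric matrix is exactly the input the node-spring system needs: one spring per genome pair, with rest length equal to the N-dimensional distance.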
The axes of this 2D representation of the ND distances are meaningless. The absolute position of the points inside this grid are governed partly by chance. However the relative positions are meaningful in that they convey which genomes are similar and which ones are different. Points which appear close together represent similar genomes, points which appear far apart represent different genomes.
Basically it becomes very simple to see the entire collection of genomes and get a feel for how varied the set is. You can often even see sub-species appear as distinct clusters of points.
4) For every generation, I display the fittest genome (upper boundary of yellow area), the worst genome (lower boundary of yellow area), average genome fitness (the thick red line) and the standard deviation of the fitness distribution in both directions (the orange area). Everything below the average is hatched.
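Computed per generation, those four quantities are just the min, max, mean and standard deviation of the fitness values; a tiny Python sketch with made-up fitnesses:

```python
import statistics

# Made-up fitness values for one generation of genomes.
fitness = [3.0, 5.0, 7.0, 9.0, 6.0]

best = max(fitness)                   # upper boundary of the yellow area
worst = min(fitness)                  # lower boundary of the yellow area
mean = statistics.fmean(fitness)      # the thick red line
sd = statistics.pstdev(fitness)       # half-width of the orange band

print(best, worst, mean, sd)
```

Plotting these four values for each successive generation reproduces the layered fitness graph described above.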
Have you seen the Blog entry about galapagos?
--
David Rutten
david@mcneel.com
Seattle, WA…
Added by David Rutten at 1:37pm on November 26, 2010