to incorporating math and geometry in computational design education, Paneling Tools
Marlo Ransdell, PhD, Creative Director at FSU, Digital Fabrication in Design Research and Education
Andy Payne, LIFT architects | Harvard GSD | FireFly
Jay H Song, Chair, Jewelry School of Design, Jewelry as Personal Expression, Extra+Ordinary@Jewelry.com
Pei-Jung (P.J.) Chen, Professor of Jewelry, SCAD
Gustavo Fontana, designer/co-founder nimbistand, Design, Develop, and Market Products on Your Own
Joe Anand, CEO MecSoft Corporation, RhinoCAM
Julian Ossa, Chair, Industrial Design Director, Design – A Life Choice at Full Steam!, UPB
Minche Mena, SHINE Architecture, Principal
J. Alstan Jakubiec, Daylighting and Environmental Performance in Architectural Design, Solemma, LLC
Carlos Garnier R&D Director / Jaime Cadena – General Director, Plug Design, www.plugdesign.com.mx
Mario Nakov, www.chaosgroup.com [ V-Ray ]
Andres Gonzalez, RhinoFabStudio
Workshops:
o) Paneling Tools
o) RhinoCAM
o) Rhinology in Design, for Jewelry
o) Footwear
o) V-Ray: Jewelry Design
o) V-Ray: Architects and Industrial Designers
o) FireFly
o) J. Alstan Jakubiec, DIVA
The cost for each workshop or lecture is US$95.
To register:
WORKSHOPS April 2 - RHINO DAY
WORKSHOPS April 3 - RHINO DAY
REGISTRATION RHINO DAY
NOTE: All students and faculty members who register for this event will receive a Rhino 5 Educational License at the event.
…
ity...? How to define these parameters and simulate them? How to simulate and evaluate form? How to work with the Evolutionary Solver inside Grasshopper3d? How to evaluate the resulting data and choose the fittest geometry? How to optimize geometry to increase the overall energy efficiency of the project?
»»» Rhinoceros 5 + Grasshopper 3D & Sub-Plugins *required grasshopper plugins: Elk, LadyBug + Honeybee, Mesh edit (uto tools), Mesh+, Weaverbird, Human, TT Toolbox, Lunchbox, Horster tools, Exoskeleton & Cytoskeleton
>>>Please download and install Rhino + GH3D & Sub-Plugins before workshop start!<<<
with Igor Mitrić
DIGITAL FABRICATION BASICS - 3D SCANNING AND 3D PRINTING
The workshop will provide an overview of the current state of 3D scanning and 3D printing technologies, focusing on affordable and practical devices for research and the development of new design projects. Attendees will use a 3D scanner to generate a 3D model in virtual space, remodel it, and prepare it for a 3D printer.
with Roberto Vdović
OPTIONAL FIELD TRIP - ON SITE ENERGY MEASUREMENTS (19.6.2015)
with Benedikt Borišič and Veronika Madritsch
Participants will receive CERTIFICATES of knowledge acquired at the workshop for each section. Participation is FREE! The number of places is LIMITED!
MORE INFORMATION AND APPLICATION WWW.LIVECONST.EU
WORKING SCRIPTS
Day 1 City.gh
Day 2 Lady Bug.gh
Day 3 Galapagos
Record Galapagos.gh
Day 3_First Half.gh
Day 3 Second Half.gh
Tower from any Curve.gh
…
o I can apply your color gradient code (not shown but in GH file, off screen) after the Z sort:170316_SpheresStandardizer_2017Mar16b.gh
The fact that sphere 'Volume' is required a second time, after 'Pull' to wires, reminds me of a similar issue we dealt with last week: http://www.grasshopper3d.com/forum/topics/trimming-points-pulled-fr...
Seems to me that 'Pull Point' has a serious defect that requires extraordinary effort and/or kludgy code to remedy. If you don't graft the curves, 'Pull' returns each point pulled to its nearest curve - exactly what you want, except without knowing which curve pulled it?
In this code (above), you are using 'Pull D (Distance)', 'Smaller' with an arbitrary value as 'B' and 'Cull' to associate the closest curve with each point. In the other thread, I ended up creating brep cylinders around the curves to get the correct result. Ridiculous!!
I've spent a lot of time trying and utterly failing to find a truly proper solution. Is there one? (see "AHA!!!!" below!)
Searching the forum, I quickly found a couple old posts referring to the same problem:
pull point (bug?) May 27, 2009 http://www.grasshopper3d.com/forum/topics/pull-point-bug
Small request April 18, 2013 http://www.grasshopper3d.com/forum/topics/small-request
=========================
AHA!!!! I had given up and was about to post the above when I finally solved it. Created a cluster called 'PullT' that does the job, sorting by 'D (Distance)'. Here's the cluster:
And here's how it's used: 170316_SpheresStandardizer_2017Mar16c.gh
Notice that 'PullT' emits a cull pattern ('Pc') that can be used on related data to structure it into the same tree pattern - 'Volume (V)' in this case, so it's only used once. Could do the same with the original mesh spheres if there was reason to do so.
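For readers without the GH file, the job 'PullT' performs can be sketched in plain Python (a simplified 2D stand-in using polylines - hypothetical code, not the actual cluster): pull every point to its globally nearest curve, group the pulled points per curve, and emit a per-curve cull pattern so related data can be restructured into the same tree.

```python
import math

def closest_point_on_segment(p, a, b):
    """Project point p onto segment ab; return (point, distance)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    q = (ax + t * dx, ay + t * dy)
    return q, math.dist(p, q)

def pull_t(points, curves):
    """For each curve (a polyline, i.e. a list of vertices), collect the
    points whose globally nearest curve it is. Returns (pulled points per
    curve, cull pattern per curve)."""
    pulled = [[] for _ in curves]
    cull = [[] for _ in curves]
    for p in points:
        # Find the (curve index, pulled point, distance) with minimal distance.
        best = min(
            ((ci, *closest_point_on_segment(p, *seg))
             for ci, curve in enumerate(curves)
             for seg in zip(curve, curve[1:])),
            key=lambda t: t[2])
        winner = best[0]
        for ci in range(len(curves)):
            cull[ci].append(ci == winner)
            if ci == winner:
                pulled[ci].append(best[1])
    return pulled, cull
```

The cull pattern is the key design choice: it can be applied to any related list (volumes, original spheres) to restructure it into the same tree without re-running the pull.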
I've tested it on last week's code in the other thread and it seems to work fine; will post it there shortly.…
essarily architectural. As you can guess from the tone of my previous response, I finished with school and had a hard time finding a job that focused on the technologies I dealt with all through undergrad and grad. During grad school I was working with ASGvis (the makers of V-Ray) so I got exposed to the software side of things, both on the support/management side and the development side. Now I'm off on my own doing development projects like RhinoHair, a few others, and some custom plugins for clients. Not necessarily what I thought I'd be doing after grad school, but I'm certainly enjoying it more than the "standard" practice of architecture.
I definitely understand "creating" a program. I did both my undergrad and grad at Catholic U here in DC, and although there was some groundwork laid in regards to fabrication, I was one of only two or three students spearheading a lot of the scripting/GH/parametric stuff and some of the topics that go along with them (algorithmic design, adaptive systems, advanced geometry). One thing that was incredibly helpful for me was to pair up with the most advanced and forward-thinking professor(s) that you can and take their studios and electives, and/or help out with their research. I was lucky enough to pair with a professor who had been at MIT and really encouraged me to explore my interests and sharpen my technical skills.
It might also be a good idea to stick your head in some other departments, probably the math and engineering ones, or even biology and economics if there are some forward thinking professors. Talk to some people and get a different perspective on things. When I went to the ACADIA conference in 2008 it really opened my eyes to some of the potential influence from those different arenas.
Fabrication-wise, I'd really try to focus more on milling (3-axis is fairly standard, 5-axis if you can get access) than 3d printing. Printing is a lot of fun, but ultimately we're not printing buildings (yet), so some of the milling processes will be much more valuable. If your school doesn't have those kinds of facilities on campus (either in the Arch dept or engineering or something), then contact a local fabricator and see if you can work together somehow. You'd be surprised at how many fabricators are interested in talking to architects.…
Added by Damien Alomar at 3:13pm on February 8, 2010
ll truss creation. The reason that this is interesting is the drilling axis calculations. Anyway I'll mail you the original with some comments of mine with regard the problematic areas (or I think so). I confess that I haven't spent time to find the reason for the missing truss members (note: occurs rather "randomly" - meaning that in some surfaces the logic works fine).
2. The good news is that the definition shown below works on any surface (so far, that is) and makes all the required truss members.
3. But personally I'm after truss creation between 2 nurbs surfaces. OK, this sounds rather trivial... but it's not, depending on the given shape/curvature. For instance, applying the classic tessellation logic (meaning the same u/v values across the whole surf) to surfaces that greatly vary in curvature and size (per edge) makes a rather unrealistic truss solution, with high tube density in places where you don't need it at all.
A fact that guides us towards Evolute tools relaxation logic (so to speak), easy to say not so easy to do.
Anyway both definitions mailed.
n5 is not ready yet (it attempts a variable density truss deployment - well, at least in Planet Utopia, he he)
…
something (C# or components) that does a planar periodic nurbs - any shape imaginable in fact (shown: a humble "figure of 8").
2. Imagine a capability (C# only: sorry) to create a "guide" (indicative/intermediate) surface. Basically: patch the nurbs from step 1 against a variety of user controlled curves/points/cats/dogs/you name it.
3. Imagine doing this U/v quad mesh thingy (we can fill the "gaps" [C# only: sorry] with the base boundary easily - especially when triangulating the mesh - but better work as shown):
4. Imagine calling the cavalry (Kangaroo) and instructing to do ... things on that "normalized" mesh.
5. What things? Well ... like equalize edges, "inflate", planarize the quads (extra WOW stuff that one), pull it against the "guide" surface [from step 2] or some other weird ideas of mine.
this is what V2 does (WIP).
more soon
…
question above.
What I want now is to run the optimisation process in Grasshopper by clicking one button. As I described in the pdf file, I will make the parametric model in GH; define loads, prestress, nodes, elements, material, property, and section size in GH [Write]; and send this information to FEA. Make a component "analyse in FEA". Then read the analysis results, like node displacement (stiffness check), normal force in each element (unit check), and total self-weight (fitness), from FEA to GH [Read]. Apart from these, I need C# to make a loop for my optimisation procedure and for the unit check [later steps].
2) About programming.
Last week I tried to make some components in GH about math and lists. However, I am stuck in my self-study of C#, with no progress on writing GH information to FEA. About the V1B you sent me: I understand the essential part of how it works to make the rectangular grid and anchors, but got confused by LexerOps and U V (I know they are intervals; I need to read it another time to absorb how they work and why the code looks like that). I've made an appointment with my university supervisor for this Wednesday to get a start.
3) About system type (you asked in our former discussion)
First is the one I sent you in email. I choose this type as a start point. The other options are Buckminster Fuller tensegrity dome , David Geiger tensegrity dome and tensegrity truss system (see in attach jpg file).
These pics you posted are very nice connections, but I do not understand why you are giving me these pics at this moment. And do you have any advice about my next step?
Best,
Price never Surrender.…
Added by Zhengyu Xue at 5:27am on November 30, 2015
s,
renderanimation.ghx - posted by Vicente Soler,
unfortunately there is no modetools for gh7, and the custom components (legacy) of the others do not work.
2.)
some time ago David posted this picture
( http://www.grasshopper3d.com/photo/metaballs-the-naughty-way?context=user)
using meshfrompoints.
I just wanted to know if the new meshUtil meshfrompoints is based on this rhinoUtil one, though you have
to give the u,v direction - and if not, whether it would be possible to access the command via a VB component.
3.)
I was also wondering if I could use the custom components to interact with 3rd-party applications - T-Splines -
to get some polymodel commands like PolyExtrude into GH, because currently I am mostly using
degree-1 nurbs commands, for instance lofting a square around a degree-1 polyline and then converting to mesh and smoothing. However, Rhino mostly creates trimmed surfaces, so I already need workarounds to create clean meshes. Joints I could only solve using qhull-convexhull from un didi (http://dimitrie.wordpress.com/), which is now also not working anymore.
i would appreciate any help
especially on question 1
thx
in advance
ChristopH.Her
…
ng is deciding how and where to store your data. If you're writing textual code using any one of a huge number of programming languages, there are a lot of different options, each with its own benefits and drawbacks. Sometimes you just need to store a single data point. At other times you may need a list of exactly one hundred data points. At still other times, circumstances may demand a list of a variable number of data points.
In programming jargon, lists and arrays are typically used to store an ordered collection of data points, where each item is directly accessible. Bags and hash sets are examples of unordered data storage. These storage mechanisms do not have a concept of which data comes first and which next, but they are much better at searching the data set for specific values. Stacks and queues are ordered data structures where only the youngest or oldest data points are accessible respectively. These are popular structures for code designed to create and execute schedules. Linked lists are chains of consecutive data points, where each point knows only about its direct neighbours. As a result, it's a lot of work to find the one-millionth point in a linked list, but it's incredibly efficient to insert or remove points from the middle of the chain. Dictionaries store data in the form of key-value pairs, allowing one to index complicated data points using simple lookup codes.
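These storage mechanisms are easy to demonstrate in, say, Python (used here purely as an illustration of the jargon above):

```python
from collections import deque

# Ordered, index-accessible storage: lists.
samples = [3.2, 1.5, 4.8]
assert samples[1] == 1.5  # each item is directly accessible by position

# Unordered storage optimized for membership tests: sets.
seen = {3.2, 1.5, 4.8}
assert 4.8 in seen  # no ordering, but fast searching for specific values

# Stack (only the youngest accessible) and queue (only the oldest).
stack, queue = [], deque()
stack.append("a"); stack.append("b")
queue.append("a"); queue.append("b")
assert stack.pop() == "b"      # last in, first out
assert queue.popleft() == "a"  # first in, first out

# Key-value pairs: dictionaries index complicated data by simple lookup codes.
lookup = {"cube": 6, "cylinder": 3, "prism": 5}
assert lookup["cylinder"] == 3
```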
The above is just a small sampling of popular data storage mechanisms; there are many, many others. From multidimensional arrays to SQL databases. From read-only collections to concurrent k-d trees. It takes a fair amount of knowledge and practice to be able to navigate this bewildering sea of options and pick the best suited storage mechanism for any particular problem. We did not wish to confront our users with this plethora of programmatic principles, and instead decided to offer only a single data storage mechanism.*
Data storage in Grasshopper
In order to see what mechanism would be optimal for Grasshopper, it is necessary to first list the different possible ways in which components may wish to access and store data, and also how families of data points flow through a Grasshopper network, often acquiring more complexity over time.
A lot of components operate on individual values and also output individual values as results. This is the simplest category, let's call it 1:1 (pronounced as "one to one", indicating a mapping from single inputs to single outputs). Two examples of 1:1 components are Subtraction and Construct Point. Subtraction takes two arguments on the left (A and B), and outputs the difference (A-B) to the right. Even when the component is called upon to calculate the difference between two collections of 12 million values each, at any one time it only cares about three values: A, B and the difference between the two. Similarly, Construct Point takes three separate numbers as input arguments and combines them to form a single xyz point.
Another common category of components create lists of data from single input values. We'll refer to these components as 1:N. Range and Divide Curve are oft used examples in this category. Range takes a single numeric domain and a single integer, but it outputs a list of numbers that divide the domain into the specified number of steps. Similarly, Divide Curve requires a single curve and a division count, but it outputs several lists of data, where the length of each list is a function of the division count.
The opposite behaviour also occurs. Common N:1 components are Polyline and Loft, both of which consume a list of points and curves respectively, yet output only a single curve or surface.
Lastly (in the list category), N:N components are also available. A fair number of components operate on lists of data and also output lists of data. Sort and Reverse List are examples of N:N components you will almost certainly encounter when using Grasshopper. It is true that N:N components mostly fall into the data management category, in the sense that they are mostly employed to change the way data is stored, rather than to create entirely new data, but they are common and important nonetheless.
A rare few components are even more complex than 1:N, N:1, or N:N, in that they are not content to operate on or output single lists of data points. The Divide Surface and Square Grid components want to output not just lists of points, but several lists of points, each of which represents a single row or column in a grid. We can refer to these components as 1:N' or N':1 or N:N' or ... depending on how the inputs and outputs are defined.
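The mapping categories above can be caricatured with small Python stand-ins (hypothetical functions, not Grasshopper code - the geometry is faked with tuples):

```python
# 1:1 - single values in, single value out (like Subtraction).
def subtract(a, b):
    return a - b

# 1:N - a single input produces a whole list (like Range).
def value_range(domain, steps):
    lo, hi = domain
    return [lo + (hi - lo) * i / steps for i in range(steps + 1)]

# N:1 - a list collapses to a single value (like Polyline from points).
def polyline(points):
    return tuple(points)  # a stand-in for a single curve object

# 1:N' - a single input produces a list of lists (like Square Grid),
# where each inner list represents one row of the grid.
def square_grid(size):
    return [[(x, y) for x in range(size)] for y in range(size)]
```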
The above listing of data mapping categories encapsulates all components that ship with Grasshopper, though they do not necessarily minister to all imaginable mappings. However, in the spirit of getting on with the software, it was decided that a data structure that could handle individual values, lists of values, and lists of lists of values would solve at least 99% of the then-existing problems and was thus considered to be a 'good thing'.
Data storage as the outcome of a process
If the problems of 1:N' mappings only occurred in those few components to do with grids, it would probably not warrant support for lists-of-lists in the core data structure. However, 1:N' or N:N' mappings can be the result of the concatenation of two or more 1:N components. Consider the following case: A collection of three polysurfaces (a box, a capped cylinder, and a triangular prism) is imported from Rhino into Grasshopper. The shapes are all exploded into their separate faces, resulting in 6 faces for the box, 3 for the cylinder, and 5 for the prism. Across each face, a collection of isocurves is drawn, resembling a hatching. Ultimately, each isocurve is divided into equally spaced points.
This is not an unreasonably elaborate case, but it already shows how shockingly quickly layers of complexity are introduced into the data as it flows from the left to the right side of the network.
It's no good ending up with a single huge list containing all the points. The data structure we use must be detailed enough to allow us to select from it any logical subset. This means that the ultimate data structure must contain a record of all the mappings that were applied from start to finish. It must be possible to select all the points that are associated with the second polysurface, but not the first or third. It must also be possible to select all points that are associated with the first face of each polysurface, but not any subsequent faces. Or a selection which includes only the fourth point of each division and no others.
The only way such selection sets can be defined, is if the data structure contains a record of the "history" of each data point. I.e. for every point we must be able to figure out which original shape it came from (the cube, the cylinder or the prism), which of the exploded faces it is associated with, which isocurve on that face was involved and the index of the point within the curve division family.
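Such history-aware selection can be sketched in plain Python (hypothetical names, not Grasshopper's internals): represent the tree as a dictionary mapping paths (tuples of integers, one element per mapping) to lists, then filter by any path position.

```python
# A data tree for the exploded-polysurface example: each path records
# (shape index, face index, isocurve index); each list holds the points.
tree = {
    (0, 0, 0): ["p0", "p1"],  # box, first face, first isocurve
    (0, 1, 0): ["p2", "p3"],  # box, second face, first isocurve
    (1, 0, 0): ["p4", "p5"],  # cylinder, first face, first isocurve
    (2, 0, 1): ["p6", "p7"],  # prism, first face, second isocurve
}

def select(tree, **criteria):
    """Keep only the lists whose path matches the given constraints,
    e.g. shape=1 keeps paths whose first element is 1."""
    positions = {"shape": 0, "face": 1, "isocurve": 2}
    return {path: items for path, items in tree.items()
            if all(path[positions[k]] == v for k, v in criteria.items())}
```

For example, `select(tree, shape=1)` returns only the cylinder's points, and `select(tree, face=0)` returns the points on the first face of every shape - exactly the kind of logical subsets described above.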
A flexible mechanism for variable history records.
The storage constraints mentioned so far (to wit, the requirement of storing individual values, lists of values, and lists of lists of values), combined with the relational constraints (to wit, the ability to measure the relatedness of various lists within the entire collection) lead us to Data Trees. The data structure we chose is certainly not the only imaginable solution to this problem, and due to its terse notation can appear fairly obtuse to the untrained eye. However since data trees only employ non-negative integers to identify both lists and items within lists, the structure is very amenable to simple arithmetic operations, which makes the structure very pliable from an algorithmic point of view.
A data tree is an ordered collection of lists. Each list is associated with a path, which serves as the identifier of that list. This means that two lists in the same tree cannot have the same path. A path is a collection of one or more non-negative integers. Path notation employs curly brackets and semi-colons as separators. The simplest path contains only the number zero and is written as: {0}. More complicated paths containing more elements are written as: {2;4;6}. Just as a path identifies a list within the tree, an index identifies a data point within a list. An index is always a single, non-negative integer. Indices are written inside square brackets and appended to path notation, in order to fully identify a single piece of data within an entire data tree: {2;4;6}[10].
Since both path elements and indices are zero-based (we start counting at zero, not one), there is a slight disconnect between the ordinality and the cardinality of numbers within data trees. The first element equals index 0, the second element can be found at index 1, the third element maps to index 2, and so on and so forth. This means that the "Eleventh point of the seventh isocurve of the fifth face of the third polysurface" will be written as {2;4;6}[10]. The first path element corresponds with the oldest mapping that occurred within the file, and each subsequent element represents a more recent operation. In this sense the path elements can be likened to taxonomic identifiers. The species {Animalia;Mammalia;Hominidae;Homo} and {Animalia;Mammalia;Hominidae;Pan} are more closely related to each other than to {Animalia;Mammalia;Cervidae;Rangifer}** because they share more codes at the start of their classification. Similarly, the paths {2;4;4} and {2;4;6} are more closely related to each other than they are to {2;3;5}.
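Because paths are just sequences of non-negative integers, the relatedness measure described above reduces to counting shared leading elements - easy to sketch in plain Python (a hypothetical illustration, not Grasshopper's internals):

```python
def relatedness(path_a, path_b):
    """Count the leading path elements two paths share; a higher count
    means more closely related, exactly like shared taxonomic ranks."""
    shared = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        shared += 1
    return shared

# {2;4;6}[10] - the eleventh item of the list at path {2;4;6},
# everything zero-based:
tree = {(2, 4, 6): list(range(20))}
item = tree[(2, 4, 6)][10]
```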
The messy reality of data trees.
Although you may agree with me that in theory the data tree approach is solid, you may still get frustrated at the rate at which data trees grow more complex. Often Grasshopper will choose to add additional elements to the paths in a tree where none in fact is needed, resulting in paths that all share a lot of zeroes in certain places. For example a data tree might contain the paths:
{0;0;0;0;0}
{0;0;0;0;1}
{0;0;0;0;2}
{0;0;0;0;3}
{0;0;1;0;0}
{0;0;1;0;1}
{0;0;1;0;2}
{0;0;1;0;3}
instead of the far more economical:
{0;0}
{0;1}
{0;2}
{0;3}
{1;0}
{1;1}
{1;2}
{1;3}
The reason all these zeroes are added is because we value consistency over economy. It doesn't matter whether a component actually outputs more than one list; if the component belongs to the 1:N, 1:N', or N:N' groups, it will always add an extra integer to all the paths, because some day in the future, when the inputs change, it may need that extra integer to keep its lists untangled. We feel it's bad behaviour for the topology of a data tree to be subject to the topical values in that tree. Any component which relies on a specific topology will no longer work when that topology changes, and that should happen as seldom as possible.
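Collapsing the padded paths back to the economical form amounts to dropping every path element that is identical across the whole tree - a small Python sketch (hypothetical code, loosely mirroring what path simplification in Grasshopper does):

```python
def simplify_paths(paths):
    """Drop path elements that are identical across every path in the
    tree, so {0;0;1;0;3}-style padded paths collapse to only their
    distinguishing elements."""
    if len(paths) < 2:
        return paths
    # Keep a position only if at least two paths differ there.
    keep = [i for i in range(len(paths[0]))
            if len({p[i] for p in paths}) > 1]
    return [tuple(p[i] for i in keep) for p in paths]
```

Applied to the eight padded paths listed above, this yields exactly the economical {0;0} … {1;3} listing.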
Conclusion
Although data trees can be difficult to work with and probably cause more confusion than any other part of Grasshopper, they seem to work well in the majority of cases and we haven't been able to come up with a better solution. That's not to say we never will, but data trees are here to stay for the foreseeable future.
* This is not something we hit on immediately. The very first versions of Grasshopper only allowed for the storage of a single data point per parameter, making operations like [Loft] or [Divide Curve] impossible. Later versions allowed for a single list per parameter, which was still insufficient for all but the most simple algorithms.
** I'm skipping a lot of taxonometric classifications here to keep it simple.…
Added by David Rutten at 2:22pm on January 20, 2015
It seems that was it. Now everything is working fine!
Glad that it worked! But I am still a bit worried. Gismo components only modify the gdal-data/osmconf.ini file and no other MapWinGIS file. So your MapWinGIS installation files should not be compromised. The fact that you did not get the "COM CLSID" error message when running the "Gismo Gismo" component suggests that MapWinGIS has been properly installed. So I wonder if the cause of the permanent "invalid shapes" warning again has something to do with your system not allowing MapWinGIS to properly edit the osmconf.ini. Maybe this problem will appear again and again, and reinstalling MapWinGIS every time can be somewhat bothersome.
- About the terrain generation: is it possible to have the texture from google or another provider mapped onto the terrain surface from the gismo component? (Same as using the ladybug terrain generator, in fact.) I tried to use the image extracted by the ladybug component and then apply it to the gismo terrain, but the texture is rotated by 90°.
The issue with the rotation can be solved by swapping/reversing the U,V directions of the terrain surface. A slightly more important issue is that the terrain surface generated with the Gismo "Terrain Generator" component might have a somewhat smaller radius than what the radius_ input required. This stems from the fact that the terrain data first needs to be downloaded in a geographic coordinate system, and then projected. Some projection issues may occur at the very edges of the projected terrain, so I had to slightly cut out the very edges of the terrain, which results in the actual terrain diameters being slightly shorter in both directions. This means that if you apply the same satellite image from the Ladybug "Terrain Generator" component to the Gismo "Terrain Generator" component, the results may not be the same. I attached below a python component which tries to solve this issue by extending the edges of the Gismo "Terrain Generator" terrain, and then cutting them with a cuboid of the exact dimensions of the radius_ input. Have in mind that this extension of the original terrain at its edges is not a correct representation of the actual terrain in that location, but rather just an extension of the isoparametric curves of the terrain surface. So basically: some 0 to 10% (0 to 10 percent of the width and length) of the terrain around all four edges is not the actual terrain for that location, but rather just its extension. The python component is located at the very right of the definition attached below.
Also, if you would like to use the satellite images from Ladybug "Terrain Generator" component along with "OSM shapes", sometimes you may find slight differences in position of the shapes. This is due to openstreetmap data not being based on Google Maps (that's what Ladybug "Terrain Generator" component is using), but rather on Bing, MapQuest and a few others.
- About the requiredKeys_ input of OSM shapes, I understand what you mean and your advice, but in most cases I use it, the component was working fine even without input. I think it's better to extract all tags, values and keys of the selected area, instead of searching for specific ones as I try to find all data related to what I want after, isn't it ? To check what keys are present on the area also.
Indeed, you are correct. I thought you were trying to only create a terrain and 3d buildings, and maybe find some school or similar 3d building, for these two locations. The recommendation I mentioned previously is due to shapefiles having a limit (2044) on how many keys they can contain. This requires further testing on some big-city locations with maybe larger radii, which I haven't performed due to my poor PC configuration. But in theory, I imagine it may happen that a downloaded .osm file has more than 2044 keys. In that case the shapefile will only record 2044 of them and disregard the others. That was my point. But again, 2044 is a lot of keys, and I haven't checked this much in practice. For example, when I set the radius_ to 1000 meters and use your "3 Rue de Bretonvilliers Paris" location, I get around 350-something keys, which is way below 2044. Another reason why one should use the requiredKeys_ input is to make the Gismo OSM components run quicker: for example, the above-mentioned 350-something keys will result in 350 values for each branch of the "OSM shapes" component's "values" output. Which means if you have 10,000 shapes, the "OSM shapes" component will have 10,000 branches with 350 items on each branch (values). This can make all Gismo OSM components very heavy and significantly lengthen the calculation process. With the requiredKeys_ input you may end up with only a couple of tens of items per branch. Sorry for the long reply.…
Added by djordje to Gismo at 8:57am on June 11, 2017