mations we use a STANDARD thingy (Plane.WorldXY) VS any other plane (that's what the Orient does). This applies to blocks/cats/dogs/anything: meaning that if anyone in the present or the future uses such a "component", he knows the origin (especially if other CAD apps are used in parallel).
2. NEVER EVER make a thing (i.e. the profile) oriented "off center" (i.e. with off-center domain start/end values for x/y). If you want to do that, treat the destination plane accordingly. That way you build up a mentality where the "source" is standard - so to speak.
3. RHS (but also HEB/HEA/IPN/IPE, blah blah) fillets are related to thickness (in real life) ... therefore when you offset (always inwards: meaning negative values for counter-clockwise closed curves) ... take that simple fact into consideration.
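The offset convention in point 3 can be sketched like so (a minimal sketch in plain Python with hypothetical helper names, standing in for the RhinoCommon curve-orientation and offset calls): detect whether a closed profile is counter-clockwise via the shoelace signed area, then pick the offset sign so it always goes inwards.

```python
# Sketch (hypothetical, plain Python in lieu of RhinoCommon): keep the profile
# defined on a standard WorldXY-like frame and pick the inward offset sign
# from the curve's orientation, instead of guessing per case.

def signed_area(pts):
    """Shoelace formula: positive for counter-clockwise closed 2D polygons."""
    a = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return a / 2.0

def inward_offset_sign(pts):
    """Negative offset for CCW closed curves (per the convention above),
    positive for CW ones."""
    return -1.0 if signed_area(pts) > 0 else 1.0

# a CCW unit square centered at the origin (the "standard" source profile)
square = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
print(signed_area(square))          # → 1.0 (CCW)
print(inward_offset_sign(square))   # → -1.0: offset inwards
```

The profile itself stays centered on its WorldXY-like frame; any placement is then a plane-to-plane transform onto the destination plane, per point 2.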
…
ou mean by 'Activate Direct Rhino Modifying'. Perhaps you could expand?
I like the idea of mixing and matching script and 'direct' modeling. There seem to be a lot of potential platforms for this:
1. Implicit History: Is there a way for GH to read the direct modifications (with History activated) and translate them into a component (or cluster of components)? IH seems to record the UI events and the associated elements. GH would need to write as well as read the IH info, in order to preserve as much flexibility downstream as possible. You mentioned Houdini. H seems to record all 'implicit' or direct mods, done via the CAD mouse-based UI, in its network graph. Maybe this should be captured in the IH cluster/component mentioned above.
2. RhinoParametrics: RP has done a lot of work to intercept and translate Rhino commands into its version of Implicit History. It seems to be centred on points, which makes sense as so much of the traditional 'dumb' way of inputting CAD info is based on mouse clicks on screen (points) predicated by commands, active locks, workplanes etc.
3. Gumball: Rubberduck's use of the new Gumball tool to capture 'direct' modeling inputs thru the Gumball points to a good source for capturing this kind of input, one related to the 'macro recorder' approach taken by RP and IH.
4. The new Geom Cache component seems to be able to preserve a lot of info about the baked object. There may even be a way to read tagged info generated both by GH (baked with the "reference" object) and external to GH (by IH, the Gumball or even third-party apps like RP).
Would be interesting to know what kind of info is 'preserved'. Houdini seems to have a pretty consistent approach to geometric data that allows parallel NURBS/SubD/mesh versions of the geometry. It also seems to have a coherent hierarchical approach to vertices/edges/loops/faces etc. that allows the subelements to be arbitrarily grouped for 'direct' modeling, and still be part of a procedural script.
I guess the polygon/mesh approach to geometry lends itself to this. If all the procedural commands/components understand mesh geometry in vertex, edge or face format, then combining direct and script modeling is doable in a transparent way?
In your example above, the Geo Cache node 'flattens' the object to dumb geometry, which is manipulated using Rhino, then used as a Reference object in the next section of the graph. I guess there is nothing to stop the follow-on components reading the preceding graph for parameters, for additional intelligence?
Does GH 'get' or 'put' parameter data?
…
file. A TSpline-made thing, in fact.
2. This atroci ... er ... hmm ... I mean unspeakable beauty uses an exo-skeletal load-bearing structure, hence it is THAT big (BTW: apparently nobody knows what a thermal bridge is, nor thermal expansion, nor vapor condensation ... but these are "minor" details these holy blob days, he he).
3. 2 means that some nodes of that "grid" MUST "meet" floors in order to support them and (hopefully) withstand some seismic forces. BTW: a Richter scale 9 (for an hour) is all this building actually needs (that's acid "humor").
4. The "smarter" way to do this is to spread "some" (i.e. a lot of) random points (Note: David's algo yields "evenly-spaced" points, within the limits of the possible) on the guide blob (a polysurface in fact).
5. Then ... you need some algo that tests proximity AND "adjusts" the Z in order to have some node points "co-planar" (in Z) with the floors.
6. Then you triangulate all that stuff (the points, that is) using some decent Ball Pivot Algorithm (NOT Delaunay) and you get a triangulated mesh that "engulfs" the guide blob. If you want some quads (as shown), this is also possible.
7. So you have edges ... i.e. polylines (per mesh face), and if you offset them ... you have "drilling" profiles that you must use against a second, "thickened" guide blob for creating a continuously smooth exo-skeletal LBS (as shown). Of course Rhino (being a surface modeller) could require years to do this solid difference op (or an eternity).
8. Rounding the "lips" of that LBS Brep is out of the question with Rhino or GH (but it can be done very easily using other apps). Then you must "split" the Brep (into modules? into nodes + "rods"? you tell me) in order to make it in real life (what about forgetting all that?, he he).
9. Then, there's the glazing thingy that is made via quads, meaning planarity. This is achievable with Kangaroo2 but is a bit tricky.
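Step 5 above can be sketched like this (a hedged plain-Python sketch with hypothetical names, no Rhino SDK involved): snap each node point's Z to the nearest floor level, but only when it lies within a tolerance, so the rest of the "grid" stays untouched.

```python
def snap_z_to_floors(points, floor_levels, tol):
    """For each (x, y, z) point, if z lies within tol of a floor level,
    force it to that level so the node becomes co-planar with the slab;
    otherwise leave the point as-is."""
    out = []
    for x, y, z in points:
        nearest = min(floor_levels, key=lambda f: abs(f - z))
        out.append((x, y, nearest) if abs(nearest - z) <= tol else (x, y, z))
    return out

# sample data: three candidate nodes, four floor levels, 0.5 tolerance
pts = [(0, 0, 3.1), (1, 2, 5.0), (4, 1, 7.4)]
floors = [0.0, 3.0, 6.0, 9.0]
print(snap_z_to_floors(pts, floors, 0.5))
# → [(0, 0, 3.0), (1, 2, 5.0), (4, 1, 7.4)]  (only the first node snaps)
```

In a real definition the snapped points would then feed the triangulation in step 6; the "proximity" part (merging nodes that land too close to each other) would be a separate pass.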
Moral: WHAT a gigantic pile of worms is this thread of yours...
more soon.
…
r.
Jon has already done some very interesting stuff with regard to decomposing matters using the IFC schema (I'm not a strong admirer of any schema policy, mind - for a variety of reasons).
Now the chaotic case:
1. This is deliberately fuzzy, faulty and chaotic in order to indicate the need (at least IMHO) for a next step with regard to handling and visualizing data trees (on a per individual data item basis, not on a per branch basis).
2. Why could this future Tree Manager thing boost GH up to an unseen level? Exploit the PDF attached - use Saved views and/or the Model Tree "decomposer" (the file is greatly reduced in detail - only 1 out of 5 floors shown, no envelope stuff, stripped of everything actually etc etc etc). Among a variety of things, observe that there are transformations that are "selectively" applied whilst various components remain intact (in other words: invite existing "static" objects into the smart chaos) - this means that we need far better control over the series (of various types of data) that outline the solution of similar things.
3. What could/should such a "visual" Tree Manager do? Could it function within the existing "one Canvas for all things" environment? Do we need N "sub-canvases" (kinda like the Views in any CAD app these days) to handle and visualize complex tree operations? Do we need control on a per data item basis? Do we need a re-mapper of a totally different kind? Do we need a Bake Manager? Do we need a Scenario (parameter combos stored etc) Manager?
Let the debate begin
Best, Peter
…
to carry out without them. We will go through these plugins, learning how they work, their main features and advantages, playing with practical exercises.
We will highlight key concepts in advanced design, architecture and engineering: topology, form-finding, structural optimization, fractals, loops, genetic and repetitive algorithms...
Also, we will see how to capture nice views and designs from your scripting, with correct export options, animations...
This course consists of online live sessions (18 hours), using our platform online.controlmad.com
STRUCTURE:
- Interactive flexible geometry
- Generative design
- Reaction diffusion
- Geometry from DNA parameters
- Generative path visualization
- Growth simulation by sub-D
- Generating and genetic algorithms
- Visualization techniques
Main plug-ins shown:
> Kangaroo: The most famous and downloaded app for Grasshopper (it is built into the current Grasshopper for Rhino 6). It is a live physics engine for interactive simulation, optimization and form-finding directly within Grasshopper
> Galapagos: available in the current Grasshopper build, it is a platform for the application of Evolutionary Algorithms to a wide variety of problems by non-programmers
> Biomorpher: Interactive Evolutionary Algorithms (IEAs) help designers explore the wide combinatorial space of parametric models without always knowing where they are headed.
> Anemone: works using repetitive algorithms to create loops or sequential structures like those seen in fractals.
Dates: July 10,11,17 and 18 (total 4 days)
Registration deadline: Monday, July 5th
Timetable: Saturday and Sunday 9:30am - 2pm (Madrid Time Zone, CEST)…
Added by Diego Cuevas at 3:40am on September 11, 2018
ee. That said, these things (masterminded by a certain David R) are not bad at all ... but if you write code that is "supposedly" transferable (kinda) to other CAD apps ... well ... I would strongly recommend the other classic nested C# collections.
2. The HLP method is one out of many: for instance, for a better approximation of the required fitted plane we can use the divide Curve method etc etc.
3. GH components use (in most cases) methods exposed in the Rhino SDK > get the thingy and start digging into the rabbit hole. Of course David did some other components as well that use "less" classic SDK methods (if at all).
4. HLP is a classic approach to counting the beans in NURBS curves. Of course I could use PolyCurves and recursive explosion blah, blah ... but here we are not after segments (at least at present). On the other hand, if that was a Faceted Dome (planar Polylines) ... well, getting the nodes that way could be overkill (this means business for V2).
5. Mastermind some plane orientation policies in order to finish(?) the @$%@$ thing. For instance: given Plane plane, define a Plane.WorldXY at plane.Origin and section these 2 > then get the cross product (sectionVector, plane.ZAxis) for the new orientedPlane Y axis etc etc (this presupposes that any plane's Z axis points "outwards": use the Dot Product and a center point as apex etc etc).…
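The cross-product step in point 5 can be sketched with bare vector math (a sketch in plain Python, not the RhinoCommon Intersect call; the sample plane here is an assumption for illustration): the direction of the section line between two planes is the cross product of their normals, and crossing that with the plane's Z gives a candidate Y axis for the oriented plane.

```python
def cross(a, b):
    """Cross product of two 3D vectors given as (x, y, z) tuples."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# the section (intersection) line direction between the given plane and a
# WorldXY-parallel plane at its origin = cross of the two plane normals
world_z = (0.0, 0.0, 1.0)
plane_z = (0.0, 1.0, 0.0)              # sample plane whose Z points along world Y
section_dir = cross(world_z, plane_z)  # → (-1.0, 0.0, 0.0)

# candidate Y axis for the oriented plane (still right-handed with plane_z)
new_y = cross(section_dir, plane_z)    # → (0.0, 0.0, -1.0)
print(section_dir, new_y)
```

As the text notes, this presupposes a consistent "outwards" Z on every input plane (checked via a dot product against a vector from some apex/center point), otherwise the resulting Y axes flip from plane to plane.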
and...how to bake meaningful assembly/component-type structures for the rest of the tedious work required > you know what I mean > the ugly part of our business > documentation, drawings, BOM, tech stuff etc etc etc.
For instance, let's focus on the planar glazing support items: absolutely no need to make them via any smart app since there are plenty of them around in the market (unless you are I.M. Pei and you are doing that exceptional Pyramid wonder thing).
But...the goal is...hmm...to create some kind of "smart" (kinda, he he) solution where components (the "baked" ones, so to speak) are structured in such a way that further work (via conventional CAD apps) is easily managed. To speak in Rhino dialect: nested Blocks and/or nested Refs. Like having components in GH that could manage nested Block/Ref stuff (but I guess you can do it rather easily via VB).
Back to that ugly truss: it's obvious that this is a nested collection of "repetitions" (should I call them iterations?): meaning that a void top node owns a module truss that owns 2 supportive sub-trusses that are made of some pipes that own connecting items that own the planar glazing items etc etc etc.
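That nested "owns" relation is just a parent/child tree; a minimal sketch (plain Python, hypothetical names - the real thing would map onto nested Blocks/Refs or Cells) shows why walking it recursively is all you need for things like a BOM count:

```python
# Sketch: one node per assembly/component, children = the things it "owns".

class Node:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def count(self):
        """Total number of components in this (sub-)assembly, recursively."""
        return 1 + sum(c.count() for c in self.children)

# the truss described above: top node > module truss > 2 sub-trusses > pipes
truss = Node("top_node", [
    Node("module_truss", [
        Node("sub_truss", [Node("pipe"), Node("pipe")]),
        Node("sub_truss", [Node("pipe"), Node("pipe")]),
    ])
])
print(truss.count())  # → 8
```

The connecting items and glazing panels would simply hang as further children off the pipes, and any per-level operation (baking, tagging, 2d extraction) becomes a traversal of this tree.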
With regard to the "own" thing: imagine a CAD file that is simply a container/placeholder for some individual entities (called Models). These Models can be "linked" to others (in a nested parent/child relation). Links can be external or internal. They can be either References or Cells or Shared Cells. This is the way Microstation classifies/handles "entities" (a bit primitive, mind, but nobody's perfect - for the real thing see CATIA/NX).
Back to that ugly truss: obviously this structure (actually the assembly/component combo related to the given solution) has to be transferred into classic 2d extractions (say: plans, elevations, sections et al). This is done via a weird thing called Dynamic Views/live markers in Microstation (you define Clip planes in 3d space that manage 2d extraction content in something called a Drawing Model that controls other weird things called Sheet Models, all these live-linked etc etc).
To make things spicier...these 2d extractions can be viewed as master detail directives: from which 1:1 classic details are made (that is: you apply more Dynamic Views and live markers and life goes on - red pepper extra strong Russian vodka is a must when you do that type of work).
This is where Rhino is out of its depth (but to be fair: it's not designed for this type of work) and also this is where Microstation has no competition, at least for AEC purposes (but to be fair: it is designed for this type of work).
Of course Autodesk...well, expect soon the Gen Comp equivalent for Revit...a fact that complicates things (for Bentley) a bit, given the Revit mania in the AEC world.
Moral: intelligence is good but it's only the tip of the iceberg. …
igned by this software may be terrible, this is how the future is being shaped, so an understanding of the technology is important.
http://bimandintegrateddesign.com/2014/10/24/googles-bim-busting-app-for-design-and-construction/
https://vimeo.com/107291814
-Projects are due May 8th at the WAAC Final Gallery (I think at 5:30 PM). You will have your board(s) pinned up and your physical model complete underneath. The location is still being worked out, so I will let you know when I know. After the physical submission, a digital submission is required as well. There should be at minimum -
A board with the discussed drawings and images below, named LastName_FirstName_FinalProject.pdf
A photo of your physical model (if not included on the board), named
LastName_FirstName_FinalModel.pdf.
These should be posted on the dropbox sometime before the last day of the semester. Your project will not be graded if you do not physically submit on May 8th and digitally submit sometime before the semester is over.
-Project brief is below
Project Brief: Up until now, you have been using grasshopper to develop, analyze, and fabricate architectural ideas in a very controlled format. The final project is a chance to combine this knowledge with your own design intent and aspirations. The project will use specific deliverables to spur growth, but also allow for you, the designer, to do what you please within the following boundaries.
Requirements:
- Open project
- Must be a design project
- Story of what you are designing and why you are using grasshopper - specific design intent
- Must have a physical scale model
- Must have a 24” x 36” board - made in Adobe InDesign or Photoshop
- Grasshopper definition image
- 1 artistic rendering - any format - with scale figures
- 5 iterations of your project must be presented
- 1 diagram to visually describe your project
- Text describing the project
- Process drawings - photos/sketches/models/other iterations
- This is the bare minimum - to have an excellent project, one must go above and beyond these requirements
- Talk to me if you have out-of-the-box ideas about presenting / teams / etc...
That is all, there are no assignments due this week, just keep working on those projects. I am available for help during the week, just email or post in the forum. USE THE GRASSHOPPER FORUM IF YOU ARE STUCK. There are many people on here that are way smarter than I that can help you.
See you all next week!…
hich are managed code with ease if you know C#...
You have much better support of the .NET framework.
You can use Attributes .... I have no idea if this is possible in Python but I do not think so... and you can have a meta layer in programming because of this.
C# is strongly typed ... which is much better than dynamic because you will have better control in big projects. And you can still be dynamic or use ExpandoObject. (I never used this because I did not find any reason to do so.)
You can use lambda sugar so you can program functionally ... you can program declaratively ... whatever you like... and even better, aspect-oriented if you like ... and even better, make this side-effect free... and and and, by the way, once you have learned C# you are closer to C and C++ ... something the industry likes, and you can do stuff closer to the machine.
The main commercial slogan for Python is that Python has only one way of doing something, which keeps the programmer on the right track. I think this is a quite arrogant and stupid attitude. If you program in one style or another, there is a reason for it.
The main thing is: do not believe what your university tutors say only because they find it hip. They are mostly idiots. Experiment by yourself. And C# is a very good start. You will have a low- and high-level language at the same time. Only if you want to do iPhone apps or web stuff should you think of something else.…
load path and what is a realistic and buildable structure vs one designed ad-hoc that looks cool.
No, GH isn't geared to parametric constraint-based modelling like CATIA. Then again, neither is CATIA a graphical algorithm editor. You are comparing apples with oranges.
Having used the CATIA API a lot, I understand the differences.
I disagree as regards the deployment of CATIA. It is very much still an "ivory tower" tool, and knowledge of its use in AEC is very small and not shared widely. It will continue to be so until it becomes accessible to a wider audience, much like Rhino and GH. It has also stagnated now for over 10 years, and with the release of V6, I don't see it ever catching up. The business model of Rhino is better in that it encourages hobbyists to push the tool instead of waiting for a gargantuan software developer to make changes.
With constraint-based parametric modelling, and parametric modelling per se, the naming convention and ordering of data is key. I agree this is where more work is needed in GH.
Interesting job ad you shared; it shows how little the person advertising understands parametric modelling. Understanding means nothing unless you have applied it.
To add: by work flow, what I implied is that we have an interoperable work flow to go from Excel to SAP to Rhino to Tekla. All of that can potentially be set up on the canvas, on the fly, with the use of plug-ins. You can't do that with GC/CATIA or anything else. They don't provide a medium to define work flows.…