NOW > https://designbydata.org/apply/
Design by Data provides attendees with a cross-disciplinary culture of computational design and a comprehensive knowledge of cutting-edge technologies in the fields of parametric architecture, robotics, digital manufacturing and 3D printing for the construction industry.
The program is run by the prestigious École des Ponts ParisTech and designed for a select group of architects, engineers and designers, offering a variety of courses, fabrication and prototyping workshops, conferences, digital talks and networking events.
The program is a 12-month executive part-time course (one week per month) comprising 350 hours of teaching plus a one-year research project. Outside the one week of teaching per month, candidates can keep up their professional activities or develop their research and projects using the school's coworking and fablab facilities; a full-time membership giving access to the school's digital fabrication resources is included in the program fee.
For details and applications please visit >https://designbydata.org/
…
be on the same project, but you get the idea. I see the real difference between a firm like mine and a small boutique as this: most projects from those cutting-edge firms are truly focused on "Form" with a big "F", meaning more than what it looks like: form that might be influenced by any number of attributes (We Work4Her is one of my favorite examples of that sort of rigor). While we pride ourselves on our design skill, our Knowledge is what we trade on. That means breadth of knowledge and the depth that we can bring to projects. I don't think either is the "correct" way, but they do bring a different set of requirements to the fold. The attraction of something that's harder to design in, like Revit, is that it provides us opportunities, not solutions, which could ultimately take the form of a "BLM" sort of Building Lifecycle Management system, akin to the PLM systems you see in manufacturing.
It's really the intelligence that you can get from Revit that we are after, and the efficiency with which it can be gained. It's certainly not the design tool for everyone, but we are trying to get teams to work in a more knowledgeable way earlier and earlier in the design process, with whatever tool they choose.
This is why I push teams to use tools like Grasshopper instead of just Rhino: the definitions themselves can be instructive beyond just seeing the end result, and you can do some "automatic" reporting from Grasshopper, like FAR and floor area and so on.
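For what it's worth, that sort of reporting doesn't need much. A minimal GhPython sketch along these lines would keep floor area and FAR live as the model changes; 'floors' (a list of floor Breps) and 'site' (the site boundary surface) are hypothetical component inputs, not anything from Greg's setup.

```python
import Rhino.Geometry as rg

def area(geo):
    # area of a Brep/Surface via RhinoCommon's mass properties
    amp = rg.AreaMassProperties.Compute(geo)
    return amp.Area if amp else 0.0

floor_area = sum(area(f) for f in floors)   # total built floor area
site_area = area(site)                      # site/parcel area
FAR = floor_area / site_area if site_area > 0 else None
```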
I think the challenge is that firms like mine don't want files, we want systems; we don't want to know about one project, we want to know about all our projects. Last year that was 1.5 projects for every person in the firm. This need leads to a natural tendency for us, and for vendors, to collect solutions.
I have been waiting for a real Rhino BIM product for years (VisualARQ is good for a big house, in my mind), but I don't think that's McNeel's focus, so it is left to those whose focus it is. I don't think that's an issue, I just think it would be better if more products were competing in the same space: Autodesk in McNeel's, McNeel in Autodesk's, and so on.
Not a ton of insight here, just more of a perspective.
Greg…
curves A and B.
For each point pA on curve A,
you need the corresponding tangent vector tA on curve A, and the lists of "cone" vectors pB(j)-pA and tangent vectors tB(j) on curve B. So you have three vectors: tA, tB(j) and AB(j).
These three vectors define a parallelepiped that varies along j.
The 3D determinant of the three vectors above gives you the volume of this parallelepiped. When 3dDet = 0 it means it's flat: the vectors are coplanar. That's what we're looking for.
So you just need to plot the curve 3dDet = f(pB), still for each point on A (pB is the parameter on curve B here).
Graphically solve these curves to find the zeros and feed the resulting parameter back into curve B. Draw the line, done.
You can manage double solutions or cusps directly on the plot by using closest point and >= conditions to kill unwanted results.
I do it twice, from crv A to crv B and from crv B to crv A, to make sure I catch the start and end generatrices each time.
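For anyone who wants to try this without building the plot, here is a rough GhPython/RhinoCommon sketch of the same triple-product test; instead of solving the plotted curve graphically it just scans curve B for sign changes of the determinant and interpolates the zero. The inputs crvA, crvB (the two curves) and nA, nB (sample counts) are assumptions for the example, not part of the original definition.

```python
import Rhino.Geometry as rg

def det3(u, v, w):
    # scalar triple product u . (v x w) = signed volume of the parallelepiped
    return u * rg.Vector3d.CrossProduct(v, w)

lines = []
domA, domB = crvA.Domain, crvB.Domain
for i in range(nA + 1):
    ta = domA.ParameterAt(i / float(nA))
    pA = crvA.PointAt(ta)
    vA = crvA.TangentAt(ta)
    prev_t, prev_d = None, None
    for j in range(nB + 1):
        tb = domB.ParameterAt(j / float(nB))
        pB = crvB.PointAt(tb)
        vB = crvB.TangentAt(tb)
        d = det3(vA, vB, pB - pA)          # 0 when tA, tB and AB are coplanar
        if prev_d is not None and prev_d * d < 0.0:
            # sign change: linearly interpolate the zero, then draw the generatrix
            tz = prev_t + (tb - prev_t) * prev_d / (prev_d - d)
            lines.append(rg.Line(pA, crvB.PointAt(tz)))
        prev_t, prev_d = tb, d

a = lines   # component output: candidate generatrix lines
```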
The videos you posted are interesting. I don't understand how it works with just 2 sliders to tune the curves.
…
ed already). Why not do some "structure" (i.e. assembly/component) related tools that could "classify" GH-baked things into nested blocks... that... er... could be exported, say, to Microstation/CATIA/NX as 3D Shared Cells/Components? (the rest could be history, he he).
That way GH/Rhino could be considered viable "companion" apps to the big AEC boys (because intelligence without proper structures = nothing). I add CATIA to the AEC segment because of the plant market segment.
VBN (Very Bad News) section: Er... the higher the tolerance, the less pipe is shown: for instance NO bottom pipe any more (meaning that God, Himself, definitely doesn't approve of PLines/Pipes on these ugly trusses of mine, simple as that). But the good news is that your stuff works (though one needs segments here, kinda like a straight Loft). Anyway this truss (in its modular version) is already nearing completion; I'll post great(????) things(?) soon(???).
Next Project : Marry Caroline of Monaco (or something equivalent) > quit this $##%##^ business > ride NCR Ducatis (in pink) and some Ferrari Italia (in purple).
Be The Force (dark option) with us
…
e it would of course be amazing if these could be displayed in a Rhino window / baked as objects...). I use the BarGraph as a histogram constantly for exploring the data I generate as I'm designing; in fact the graph components are among the ones I use most often at the 'end' of my design process. It would be nice to add titles to the graph/bargraph and labels to the axes, as well as the feature requests Marc points out above.
Also wondered if the 'MD slider' would soon have a 3D option similar to the colour picker? Would be useful.
Of course many other graph types would come in quite handy (I often export my data to Excel in order to visualise it better): a 2D scatter plot with the tree structure indicating different data sets, and therefore different colour/point types on the graph (Excel-style), would be useful. Of course these could be created as Grasshopper objects and displayed in the viewport, but I find the work needed to get to a presentable output this way is often too much, and it's faster for me to just look at the data in Excel.

Also, in the Rhino viewport you often want to be visualising the end result of your definition (i.e. geometry), not zooming somewhere else or fiddling around to display a graph of values at the same time. I could imagine an 'output' control panel being quite handy, where you drag and lock in the various text panels / graphs / etc. which are useful to you and tell you information about your design as you vary the input parameters. This could possibly sit outside of GH and maybe be docked to one side of the Rhino viewport.
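(As a side note on the Excel round trip: something as small as the GhPython sketch below gets the data out to a .csv that Excel can chart. 'values' and 'path' here are hypothetical component inputs I've made up for the example, not existing Grasshopper parameters.)

```python
import csv

# write the component's 'values' list to a .csv file at 'path'
with open(path, "wb") as f:          # "wb" because GhPython runs on IronPython 2.7
    writer = csv.writer(f)
    writer.writerow(["index", "value"])
    for i, v in enumerate(values):
        writer.writerow([i, v])
```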
Any thoughts? Of course some of these requests are asking Grasshopper to expand a bit more into the 'data display/interpretation' space; however I think this is extremely important, as with each design I create there is almost always associated data which tells me about its performance in some way or another, and viewing that / illustrating it to clients in a quick and friendly way is key. Of course what is there already is most impressive and useful!
Cheers
Luke…
ers optimally in a rectangle or square area, even adjust for sloped ceilings, stay a fixed distance from any obstruction you define (even entire layers can be obstructions), and can show you the resulting coverage area for each sprinkler or all of them as a whole. It also completely takes care of automatic pipe routing, hangers, reports, etc.
The main flaw is that it only allows you to specify one fixed distance from all obstructions; in reality each obstruction needs to be kept at a different distance. It's also limited to auto-calcing rectangles, so you have to divide complex rooms into multiple rectangles. It still works out in the end, but not as well as it could, and definitely not for complex obstructions like placing sprinklers inside webbed steel members.
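For the rectangular auto-calc part, the underlying layout math is simple enough to sketch. This is not the vendor's algorithm, just a rough Python illustration of even spacing under a maximum-spacing rule; the function name and the wall_clearance parameter are inventions for the example.

```python
import math

def sprinkler_grid(length, width, max_spacing, wall_clearance=0.0):
    """Rough sketch: fill a rectangular room with an even grid of heads so
    that no head-to-head spacing exceeds max_spacing; the grid is centred
    inside a band set wall_clearance in from the walls."""
    usable_l = float(length) - 2.0 * wall_clearance
    usable_w = float(width) - 2.0 * wall_clearance
    nx = int(math.ceil(usable_l / max_spacing))   # columns needed
    ny = int(math.ceil(usable_w / max_spacing))   # rows needed
    sx, sy = usable_l / nx, usable_w / ny         # actual even spacings
    pts = []
    for i in range(nx):
        for j in range(ny):
            x = wall_clearance + sx * (i + 0.5)   # half a spacing in from the band edge
            y = wall_clearance + sy * (j + 0.5)
            pts.append((x, y))
    return pts

# e.g. sprinkler_grid(40, 28, 15) gives a 3 x 2 grid of heads for a 40 x 28 room
```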
In my opinion the only need for a human in the entire process (theoretically) is when you have to go into the physical realm to collect information the computer hasn't been given yet - such as when you don't know the dimensions of something, or something was left out or implied, etc. But if you had a completely detailed, realistic 3D model, with every little thing defined and every applicable rule or algorithm encoded, then theoretically sprinkler systems could be designed entirely by computer. The problem with this idea is that programmers have always been limited to starting from scratch, no one releases their source code for this type of thing, and programming is still stuck in the incoherent world of computer languages. GH and other 'visual programming' or logic systems seem to make more things like this possible, and that's what I've been searching for.
Inventor iLogic, CATIA Knowledge Advisor / Optimizer, Rhino/GH, and the Intergraph/Alias ISOGEN-related stuff is all I have found so far, but I'm sure there is something out there that can do this. Sprinkler design is a relatively small field which, even though in my opinion it's more complex than HVAC or plumbing, has so many rules that less is left open to 'human judgement'; almost nothing, except sprinklers being centred in the tile and the pipe not hanging down too low, is open to the owner's ideas of aesthetics or what looks good. That equates to a computer being the best designer, again, imho.…
thought that architects' love for drawing comes from the need to translate abstract ideas into built 3D reality, and the technology behind that 2D representation had not evolved much until a few decades ago. Our teachers come from those times: times when computers were trying to find their place in the world of representing reality. Try to imagine people who had always drawn with pencils adapting to these new tools... some became fans of the new methods, others just kept the old-fashioned workflow (like Andrew said in the article, Schumacher vs. Graves).
We were born (at least Andrew and me :P) in the '80s, with the first video games and computers (I still remember my old x286 with 1 MB of RAM, a 20 MB hard drive and that MS-DOS interface)... New technology was natural for us... But there is a big difference between traditional drawing and the new computer-aided tools: the learning curve. To draw you only need to take a pen and put it on paper (an interface children understand easily), but traditional computational tools (new touch interfaces are outside this group) are based on a complex logic and environment that is not easy for some people to understand.
In the workshops I'm teaching in, I try to put all those tools (new and old) in my students' hands and motivate them to mix and use them together (Andrew knows a little bit about that :P). Why not make a line sketch with GH, then print it and render it with markers? The last step could be to scan the result and enhance it in Photoshop, adding textures, vegetation, some background... There are no rules, only a bunch of tools to explore and use to develop your ideas, evolve them and finally represent them.
I'm betting on touch interfaces (with some augmented reality sauce) like that one being able to blend both worlds, analog and digital, offering the fluidity and natural interaction that Graves misses in digital tools. And our generation, attached to these "unnatural" interfaces, will need to change its mind and adapt to that new and amazing interface that our children will love.
Just to complete the picture:
http://www.youtube.com/embed/aXV-yaFmQNk …
Added by Ángel Linares at 5:40pm on September 10, 2012
it into points on that surface. From each point, draw a line and then divide that line into points. The end result is a 3D cube of points made from multiple rows and columns in all directions.
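Something like this GhPython sketch reproduces that construction (not the attached file itself; 'srf', the division counts 'u', 'v', 'w' and the line length 'h' are assumed inputs, and I'm assuming the lines are drawn along the surface normal):

```python
import Rhino.Geometry as rg

pts = []
du, dv = srf.Domain(0), srf.Domain(1)
for i in range(u + 1):
    for j in range(v + 1):
        s = du.ParameterAt(i / float(u))
        t = dv.ParameterAt(j / float(v))
        p = srf.PointAt(s, t)                    # point on the surface
        n = srf.NormalAt(s, t)                   # direction for the line (assumption)
        line = rg.Line(p, p + n * h)             # line drawn off the surface at this point
        for k in range(w + 1):
            pts.append(line.PointAt(k / float(w)))   # divide the line into points

a = pts   # component output: the 3D "cube" of points
```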
The file is attached so you can have a play for yourself.
The first example shows, as you say, a very nonsensical path structure resulting from such a simple problem. Why on earth does it need so many zeros at the front? {0;0;......
If you change the first slider from 1 to 2, things become a little clearer.
Because we are now supplying two sets of surfaces on the same path, the second zero {0;0... starts to make sense, as we can see that when it refers to the second surface it becomes {0;1;...
If you then change the Multi-Branch Toggle to True the first zero starts to make sense.
So with 1 surface on different original Paths {0} and {1} we get the first zero meaning something.
Have a go yourself by changing the settings in any of the purple circles and see what happens. You'll find that the additional levels of path structure are present for the "what if" scenarios. But there has to be consistency, because I don't want to create a definition with the intention of having multiple possibilities and then find that the structure suddenly changes just because I added complexity.
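If it helps to see the mechanics, here is a tiny GhPython sketch (not Danny's definition) that builds the same kind of nesting by hand: one branch per {0; surface; row} path, so adding a second surface is exactly what turns {0;0;...} into {0;1;...}. The branch counts are arbitrary example numbers.

```python
from Grasshopper import DataTree
from Grasshopper.Kernel.Data import GH_Path

tree = DataTree[object]()
for s in range(2):              # two input surfaces
    for r in range(3):          # three rows of points on each surface
        path = GH_Path(0, s, r) # path reads as {0; surface; row}
        for c in range(4):      # four points per row
            tree.Add((s, r, c), path)

a = tree   # component output: the nested data tree
```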
I hope this helps
Danny
…
If Boolean unions were more robust in Rhino/Grasshopper, you could make local extrusions of each surface after rounding the corners somehow, possibly using T-Splines, fillets, or a mesh you smooth, then just union them and smooth the result to remove artifacts. But Booleans will not work here, since coincident surfaces alone already screw that up, let alone a huge collection of widely varying geometry that guarantees poor intersection curves here and there, enough to ruin the whole Boolean union of either NURBS or a mesh.
So you need an inventive strategy to locally thicken each surface in a way that does not depend on the overall geometry, and then some way to wrap that in a single surface or mesh. As stated, your problem, with pure exact NURBS, is not as well defined mathematically as our fuzzy-logic human brains may imagine it is. If you merely translate a copy of the polysurface straight up, you get zero width for the near-vertical side surfaces, yet if you extrude each surface in the direction of its normal, you get little overlap at the joints between orthogonal surfaces.
If we could loft solids (closed NURBS surfaces), it would be easier, perhaps, to just work on extracted wires turned into hot dogs.
What works here, for now, is Weaverbird Thicken, acting on a mesh, which I also have access to manually in Rhino as wbThicken, though there is no thickness control outside of Grasshopper, so I have to run it repeatedly to get substantial thickness. But Grasshopper affords two out of three algorithm options that work, and a thickness setting that preserves your original kinky surface by only thickening on one side.
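If you want to stay in plain RhinoCommon rather than Weaverbird, a one-sided mesh offset gets you something similar. This is not wbThicken's algorithm, just a minimal GhPython sketch under the assumption that 'mesh' is the meshed polysurface and 't' the desired thickness.

```python
import Rhino.Geometry as rg

mesh.Normals.ComputeNormals()      # offset direction comes from the vertex normals
thick = mesh.Offset(t, True)       # True = solidify: stitch side walls so the result is a closed slab
a = thick if (thick and thick.IsClosed) else None   # output only if the offset produced a closed mesh
```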
One of your small surfaces isn't flat, which doesn't matter for meshes but may complicate a NURBS strategy.
…