thought that architects' love for drawing comes from the necessity of translating abstract ideas into built 3D reality, and the technology behind that 2D representation had not evolved much until a few decades ago. Our teachers come from those times: times when computers were trying to find their place in the world of representing reality. Imagine people who had always drawn with pencils adapting to these new tools... some became fans of the new methods, others just kept the old-fashioned workflow (as Andrew said in the article, Schumacher vs. Graves).
We were born (at least Andrew and me :P) in the 80s with the first video games and computers (I still remember my old x286 with 1 MB of RAM, a 20 MB hard drive and that MS-DOS interface)... New technology was natural for us... But there is a big difference between traditional drawing and the new computer-aided tools: the learning curve. To draw you only need to take a pen and put it to paper (an interface children understand easily), but traditional computational tools (new touch interfaces aside) are based on a complex logic and environment that is not easy for some people to understand.
In the workshops where I teach, I try to put all those tools (new and old) in my students' hands and motivate them to mix and use them together (Andrew knows a little bit about that :P). Why not make a line sketch with GH, then print it and render it with markers? The last step could be to scan the result and enhance it in Photoshop, adding textures, vegetation, some background... There are no rules, only a bunch of tools to explore and use to develop your ideas, evolve them and finally represent them.
I bet on touch interfaces (with some augmented-reality sauce), like the one below, as the thing that will be able to blend both worlds, analog and digital, offering the fluidity and natural interaction that Graves misses in digital tools. And our generation, attached to these "unnatural" interfaces, will need to change its mind and adapt to the new and amazing interfaces that our children will love.
Just to round it off:
http://www.youtube.com/embed/aXV-yaFmQNk
…
Added by Ángel Linares at 5:40pm on September 10, 2012
e volume, the yellow line above.
This volume, shown in green in the image above, was then intersected with the Brep volume of the chair and the lattice.
After that I used Cocoon. Here are the parameters I used for the Brep and the curve; the Brep was offset.
The model is 80 units high and the cell size is 0.2, so there are roughly 400 divisions in Z. If the grid were cubic, that would give 400³ = 64 million cells. In my view it is important to choose the cell size well so as not to end up with hundreds of millions of cells; here, about 6 million cells was workable. The general rule with Cocoon is always to test on small objects first.
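As a rough illustration of that sizing step, here is a minimal RhinoCommon-style sketch (mine, not from the original post) that estimates the cell count for a bounding box and cell size before committing to a heavy Cocoon run:

    // Hedged sketch: estimate the number of marching-cubes cells up front.
    using Rhino.Geometry;

    long EstimateCellCount(BoundingBox box, double cellSize)
    {
        Vector3d d = box.Diagonal;
        long nx = (long)System.Math.Ceiling(d.X / cellSize);
        long ny = (long)System.Math.Ceiling(d.Y / cellSize);
        long nz = (long)System.Math.Ceiling(d.Z / cellSize);
        // e.g. a model 80 units high at cell size 0.2 gives 400 divisions in Z
        return nx * ny * nz;
    }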
A close view of the mesh. The edge length is 0.1 units; there are about 6 million triangles.
…
We invite participants of the eCAADe 2015 conference to propose workshops on their current research. The workshops will be held on the 14th and the 15th of September…
are invisible in the picture.
So what you see is a common band that has lost all the characteristics of the original, in order to protect the process.
We also did an "invisible setting" prototype which has built-in flexibility.
If you are in the jewelry industry you will know what I mean: it is close to a miracle.
It's a shame I cannot share details, and this is why I am planning my next major work on something at least 10 times more complex than this.
It will be for my own business and for the jewelry industry as well.
I hate to tease people and then not be able to produce anything more than an image.
But I thought it would be better than nothing, at least for jewelry designers, so they can see that there are more and more users, that complexity is not something to shy away from, and that the time spent is worth it, because the returns on production are far larger than on special orders. This is why GH is useful.
For a one-off we can usually design a piece of jewelry in less than an hour, so there GH is not really worth the time.
But for production, with so many variables (finger sizes controlling most of the outcome, together with stone sizes, etc.), GH is a MUST!
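To make that concrete, here is a hypothetical sketch (mine, not the poster's definition; the ring-size formula is only an approximate chart fit) of how a couple of those variables can drive a plain band in RhinoCommon:

    // Hypothetical sketch: a plain band driven by finger size and band
    // thickness, the kind of variables described above. Not the actual def.
    using Rhino.Geometry;

    static class BandSketch
    {
        // Approximate US ring size -> inner diameter in mm (chart fit).
        static double InnerDiameterMm(double usSize) => 11.63 + 0.8128 * usSize;

        // Build a plain band by piping a circle around the finger.
        static Brep Band(double usSize, double bandThicknessMm)
        {
            double railRadius = InnerDiameterMm(usSize) / 2.0 + bandThicknessMm / 2.0;
            Curve rail = new Circle(Plane.WorldXY, railRadius).ToNurbsCurve();
            Brep[] pipes = Brep.CreatePipe(rail, bandThicknessMm / 2.0,
                false, PipeCapMode.Round, true, 0.001, 0.01);
            return pipes.Length > 0 ? pipes[0] : null;
        }
    }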
I really appreciate everyone's comments and suspicions and I understand why.
99% of the people out there do not really understand the complexity of jewelry at the industrial level. It's not just the form; the post-production is the killer.
This industry is still a hybrid of technology and art, and due to the lack of old-school pros we unfortunately face very sloppy and unpredictable execution in post-production (after the casting process). That leaves you needing a design process, and a design intent, with a lot of control over every possible variant of the object.
One wrong design aspect is multiplied thousands of times at the benches (once for every single piece) = bad profits!
It sounds more serious than it is, but very few companies are willing to do this (deliver a good product rather than a low-quality one; this happens partly because the consumer is no longer aware of the difference, so whoever does keep up quality does it only out of integrity, third-party QA or plain pride).
This is why GH is invaluable, and why that def looks out of proportion for such a (visually) simple band.
It is because there are dozens and dozens of variables affecting everything else. In fact, as it stands the definition is not even complete; it covers only the most critical variables.
Sorry for the long replies. I have been an instructor and a professional jeweler by trade since I was very young, and I love to teach, so I overflow with explanations... and components :)).
Next time it will be "in the open" as they say...…
Arguments:
1. You are targeting CATIA, aren't you? (not exactly tomorrow, but... soon) and/or SolidWorks (hello C#, haven't we met before?).
2. You MUST deal with nested block instances instead of what you are trying to do right now (I'm talking about the real MERO things, not abstract lines and points). This is not doable with GH components, I'm afraid (but it's rather easy with code).
3. You MUST deal with an RDBMS in order to keep track of what's going on in your company, per project, per case, per designer (who sells that bolt? what's his cat's name? is he a reliable supplier? what am I doing with my life? ... that sort of "queries"). At this point CATIA is 1% CAD things and 99% PLM stuff (Product Lifecycle Management). We do want that, since the 21st century is up and running, don't we?
I hear you: but these are 3 arguments... indeed, but... hey, who's counting? he he.
Method:
A. The attached def has a very simple C# that gets the mesh points and makes a nice U/V-style collection of points (a DataTree, in plain English); a sketch of this step follows the list.
B. Then we go to that umbrella-sticks thingy: we can calculate anything (the thing already does "some") with your collections of divided points (made the right way, he he) VS a given node: you said (on Skype) that you want to calculate angles with these nodes (from 2 to 6 connections each) in mind, which is obvious since you are doing real-life MERO things (an angle sketch is below the list).
C. Then we could calculate the appropriate Planes for PlaneToPlane transformations: take a nested instance definition (the red things you showed me yesterday) authored at 0,0,0 (Plane.WorldXY) and place it onto every Plane in the collection related to every node (clash detection is an obvious must; a placement sketch is below).
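Not the author's attached C#, but a minimal sketch of what step A might look like, assuming the mesh vertices are ordered row by row in an nU x nV grid (names are mine):

    // Hedged sketch: pack grid-ordered mesh vertices into a U/V-style
    // DataTree, one branch per V row (assumes nU * nV vertices, row by row).
    using Grasshopper;
    using Grasshopper.Kernel.Data;
    using Rhino.Geometry;

    DataTree<Point3d> MeshToUVTree(Mesh mesh, int nU, int nV)
    {
        Point3d[] pts = mesh.Vertices.ToPoint3dArray();
        var tree = new DataTree<Point3d>();
        for (int v = 0; v < nV; v++)
        {
            var path = new GH_Path(v); // one branch per row
            for (int u = 0; u < nU; u++)
                tree.Add(pts[v * nU + u], path);
        }
        return tree;
    }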
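For step B, a hedged sketch (mine, assuming each node simply stores the endpoints of its 2 to 6 struts):

    // Angles between every pair of struts meeting at a node, in radians.
    using System.Collections.Generic;
    using Rhino.Geometry;

    List<double> NodeAngles(Point3d node, IList<Point3d> strutEnds)
    {
        var angles = new List<double>();
        for (int i = 0; i < strutEnds.Count - 1; i++)
            for (int j = i + 1; j < strutEnds.Count; j++)
            {
                Vector3d a = strutEnds[i] - node;
                Vector3d b = strutEnds[j] - node;
                angles.Add(Vector3d.VectorAngle(a, b));
            }
        return angles;
    }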
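And for step C, a minimal RhinoCommon placement sketch (names are mine; clash detection left out):

    // Orient a block definition authored at Plane.WorldXY onto each node
    // plane via a PlaneToPlane transform.
    using System.Collections.Generic;
    using Rhino;
    using Rhino.Geometry;

    void PlaceNodeBlocks(RhinoDoc doc, int blockDefIndex, IEnumerable<Plane> nodePlanes)
    {
        foreach (Plane target in nodePlanes)
        {
            Transform xform = Transform.PlaneToPlane(Plane.WorldXY, target);
            doc.Objects.AddInstanceObject(blockDefIndex, xform);
        }
    }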
Case resolved and closed: now, what about that vodka?
More on Skype
…
merely automates finding clear intersections between pairs of objects, then splits the objects along those intersection *curves*, deletes the trims, joins what remains, and cycles on. But wherever surfaces merely come close together, within the confusing Rhino settings tolerance value, there *is* *no* clear intersection curve. So it bugs out and stops working EVERY time you try more than a dozen or two spheres.
Some software can do this by switching to volumetric pixels (voxels). The $9K-$30K Geomagic Freeform is an example. It also fails sometimes, often due to memory issues, as you can imagine, since it needs to fill the entire inner space of each sphere definition with 3D pixels.
Materialise Magics, for $16K, can often handle such Booleans well. It would take a seeming lifetime to figure out such (often pirated) software kludges, though.
One thing you can try, though, is simply draping a mesh or NURBS plane onto the top of your spheres.
There's a well-known *reason* your Booleans are failing, and nobody here has even hinted at it:
The main reason is that the Rhino/Grasshopper developers don't care about the human element. The math exists to make this work very fast, every time. It just has to join things *right*, incorporating human knowledge of kissing surfaces, instead of acting stupidly, like some pocket calculator. But that would involve hacks that make 99% of complex Booleans work instead of 10%, and we can't have that, since it would be SLOWER for the other 1% that happen to have no kissing or nearly kissing surfaces.
You could also use the new Cocoon plugin to build a surface *around* your structures, extended by a given radius beyond the spheres, and then offset that surface back by the same radius. That is 100% robust, but won't give quite as sharp intersections; they come out more rounded, like most everybody wants anyway.
You can *test* for Boolean failures by running a Grasshopper intersection component to see the intersection curves, then zooming in to see how bad many of them are: knotted, twisted, or even riddled with gaps.
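The same check can be scripted; here is a minimal RhinoCommon sketch (mine, not from the thread) that flags the suspect intersections:

    // Hedged sketch: compute the raw Brep/Brep intersection curves a Boolean
    // would rely on, and flag open (gappy) ones as likely failure points.
    using Rhino.Geometry;
    using Rhino.Geometry.Intersect;

    bool HasCleanIntersection(Brep a, Brep b, double tolerance)
    {
        Curve[] curves;
        Point3d[] points;
        if (!Intersection.BrepBrep(a, b, tolerance, out curves, out points))
            return false;
        foreach (Curve c in curves)
            if (!c.IsClosed) // open intersection curves = gaps
                return false;
        return curves.Length > 0;
    }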
It's a math problem nobody at McNeel wants to solve, sorry.
Just write a check for $25K and spend six months taking notes, like I did, and you can finally merge your simple spheres.…
Added by Nik Willmore at 6:33pm on October 20, 2015
knowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 Challenge Material Intensities. The Call for Clusters is now open to proposals which respond in innovative ways to this year's challenge.
Deadline: September 19, 2011
More information can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=129&Itemid=146
sg2012 takes place from 19-24 March 2012 at EMPAC (http://empac.rpi.edu/) and is hosted by Rensselaer Polytechnic Institute in Troy, upstate New York USA. The Workshop and Conference will be a gathering of the global community of innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four day Workshop 19-22 March, and a public conference beginning with Talkshop 23 March, followed by a Symposium 24 March. The event follows the format of the highly successful preceding events sg2010 Barcelona and sg2011 Copenhagen.
sg2012 Challenge: Material Intensities
Simulation, Energy, Environment
Imagine the design space of architecture was no longer at the scale of rooms, walls and atria, but that of cells, grains and vapour droplets. Rather than the flow of people, services, or construction schedules, the focus becomes the flow of light, vapour, molecular vibrations and growth schedules: design from the inside out.
The sg2012 challenge, Material Intensities, is intended to dissolve our notion of the built environment as inert constructions enclosing physically sealed spaces. Spaces and boundaries are abundant with vibration, fluctuating intensities, shifting gradients and flows. The materials that define them are in a continual state of becoming: a dance of energy and information.

Material potential is defined by multiple properties: acoustical, chemical, electrical, environmental, magnetic, manufacturing, mechanical, optical, radiological, sensorial, and thermal. The challenge for sg2012 Material Intensities is to consider material economy when creating environments, micro-climates and contexts congenial for social interaction, activities and organisation. This challenge calls for design innovation and dialogue between disciplines and responsibilities.

sg2010 Working Prototypes strove to emancipate digital design from the hard drive by moving from the virtual to the actual, wrestling with the tangible world of physical fabrication. sg2011 Building the Invisible focused on informing digital design with real-world data. sg2012 Material Intensities strives to energise our digital prototypes and infuse them with material behaviour. They have the potential to become rich simulations informed by the material dynamics, chemical composition, energy flows, force fields and environmental conditions that feed back into the design process.
More information can be found at http://www.smartgeometry.org…
lts.
In the visualization, points are an interesting option; it's a matter of aesthetics, I guess. I go with surfaces :) You can also try selecting Filters -> Slice (you can also find it in the icons above the Pipeline Browser); in the Slice options below the pipeline, press Z Normal and, for the Z coordinate, enter a height relevant to the buildings (e.g. 1.75 m, a typical human scale). That will show you the flow around the buildings at that height. Experiment with other normals and values, and keep playing with the filters; there are some cool things in there. You can also check out the mailing list and the extensive ParaView documentation.
Concerning the errors: my apologies, I have only just downloaded your case.
It appears that the decomposeParDict is not included in the system folder. I am not sure if this is because BF does not go through the whole workflow yet, or an omission on our side. Please feel free to add it on GitHub; I will also note it down and pass it to Mostapha to check. In the meantime, please find attached a VERY detailed decomposeParDict file. I took the liberty of setting it to 4 processors (the numberOfSubdomains value) and also selected (that is, uncommented) the scotch decomposition method. It's the easiest method to use since it is automatic and doesn't require any further input on how the domain is decomposed in the x, y, z directions (which the other methods would require you to set in the attached file).
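For reference, a minimal decomposeParDict along those lines might look like this (example values, not the exact attached file):

    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  4;      // run on 4 processors

    method              scotch; // automatic, no per-axis inputs needed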
Now, the different folders created are simply snapshots of the solution at specific timesteps. To control how often the solver saves, change the writeInterval number in the controlDict file; an example is shown below. You can change almost all of these values on the fly, while OF is running.
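For example, the relevant controlDict entries (example values) are:

    writeControl    timeStep;   // write every N solver steps
    writeInterval   100;        // N = 100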
Finally, concerning the other paraFoam errors: it seems paraFoam is somehow reading the initial condition names instead of the actual results from the solution files, and it doesn't like that.
Does this happen only when you open the case (i.e. at time 0) or also when you move to another timestep?
Also, are you using paraFoam, ParaView, or the paraFoam -builtin method?
The extension of the file seems to be .foam, which means you are probably using the built-in reader. That might be the issue, but I'm not sure.
Can you try running ParaView, navigating to your case folder, opening the .foam file, and seeing if there is still an error?
Also, if it isn't too much trouble, can you zip one of the time folders and attach it here? I'd like to take a look at what's inside and check it against the error report.
Once again thanks for testing!
Kind regards,
Theodore.…