thought that architects' love for drawing comes from the need to translate abstract ideas into built 3D reality, and the technology behind that 2D representation had not evolved much until a few decades ago. Our teachers come from those times: times when computers were trying to find their place in the world of representing reality. Just imagine people who had always drawn with pencils adapting to these new tools... some became fans of the new methods, others just kept the old-fashioned workflow (as Andrew said in the article, Schumacher vs Graves).
We were born (at least Andrew and me :P) in the '80s, with the first video games and computers (I still remember my old x286 with 1 MB of RAM, a 20 MB HD and that MS-DOS interface)... New technology was natural for us... But there is a big difference between traditional drawing and the new computer-aided tools: the learning curve. To draw, you only need to take a pen and put it on paper (an interface children understand easily), but traditional computational tools (new touch interfaces are outside this group) are based on a complex logic and environment that is not easy for some people to understand.
In the workshops where I teach, I try to put all these tools (new and old) in my students' hands and motivate them to mix and use them together (Andrew knows a little bit about that :P). Why not make a line sketch with GH, then print it and render it with some markers? The last step could be to scan the result and enhance it in Photoshop, adding textures, vegetation, some background... There are no rules, only a bunch of tools to explore and use to develop your ideas, evolve them and finally represent them.
I'm betting on touch interfaces (with some augmented-reality sauce) like that one to be able to blend both worlds, analog and digital, offering the fluidity and natural interaction that Graves misses in digital tools. And our generation, attached to these "not natural" interfaces, will need to change its mind and adapt to the new and amazing interfaces that our children will love.
Just to round things off:
Video: http://www.youtube.com/embed/aXV-yaFmQNk…
Added by Ángel Linares at 5:40pm on September 10, 2012
e volume. The yellow line above.
This volume, green in the image above.
So with this there was an intersection between the Brep volume of the chair and the lattice.
After that I used Cocoon. Here are the parameters I used for the Brep and the curve; the Brep was offset.
The model is 80 units high and the cell size is 0.2, so there are roughly 400 divisions in Z. With cubic cells, this model comes to about 6.4 million cells. In my view it is important to choose the cell size well so that you don't end up with hundreds of millions of cells; here 6 million was usable. The general rule with Cocoon is always to test it on small objects first.
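As a quick sanity check before meshing, you can estimate the cell count from the bounding box and the cell size; a minimal Python sketch (the 80 x 40 x 20 box is an assumed example, not the actual chair's dimensions):

    # Estimate the number of Cocoon cells from bounding-box dimensions
    # (model units) and the cell size; the dimensions are assumptions.
    def cocoon_cell_count(dx, dy, dz, cell_size):
        return int((dx / cell_size) * (dy / cell_size) * (dz / cell_size))

    print(cocoon_cell_count(80, 40, 20, 0.2))  # 400 * 200 * 100 = 8,000,000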
A close view of the mesh. The edge length is 0.1 units; there are 6 million triangles.
…
r "virtual partitions" as follows:
What I mean "air walls" here, is derived from the description of the E+ documentation with the header of "Air wall, Open air connection between zones". (Page 17, http://apps1.eere.energy.gov/buildings/energyplus/pdfs/tips_and_tricks_using_energyplus.pdf)
As I understand, the term "air wall" used in E+ here refers to a description of something like "boundary condition" between adjacent interzone heat transfer surfaces, but not a kind of "construction or material" (like air space resistance or air gaps within a wall/double glazing window).
The main purpose of introducing the "air wall", is to simulate or approximate the airflow/convection/natural ventilation effect between multiple thermal zones which are connected by a large opening.
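For context, one E+-level approximation I have seen for this (my reading of the tips-and-tricks document above, so treat it as an assumption) pairs the shared surface with ZoneMixing objects so that air is actually exchanged between the zones. A minimal IDF sketch for one direction of flow, with placeholder zone names, schedule and flow rate:

    ZoneMixing,
        AirWall_AtoB,          !- Name (placeholder)
        Zone_A,                !- Zone Name (receiving zone; assumption)
        AlwaysOn,              !- Schedule Name (assumption)
        Flow/Zone,             !- Design Flow Rate Calculation Method
        0.5,                   !- Design Flow Rate {m3/s} (placeholder)
        ,                      !- Flow Rate per Zone Floor Area
        ,                      !- Flow Rate per Person
        ,                      !- Air Changes per Hour
        Zone_B;                !- Source Zone Name

A second, mirrored ZoneMixing object would handle the reverse flow.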
In my previous tests, using HBzones and GB, I managed to create a gbXML file that imports successfully into DB (without assigning any constructions within HB). The adjacency condition is recognized automatically by DB, even when I do not use the "Solve adjacencies" component in HB: shared surfaces between multiple thermal zones are recognized automatically by DB as "internal - partition" (which are standard partitions, not virtual partitions).
In order to create/approximate a "virtual partition", I need to manually draw a "hole" in the standard partition surface (fig. 1 & 2). Again, the reason we want to use "virtual partitions" (or "air walls") is that they allow airflow between multiple thermal zones connected by large openings, so we can get a different temperature for each of the subdivided thermal zones that compose a large thermal zone.
My question is whether there is a possible way to simulate/approximate this kind of "virtual partition" (or "air wall") in HBzones or in GB. If so, I would like to test whether DB recognizes it or not. Ideally, there should be no need for any manual operations (like drawing a "hole" in the standard partition surface) in DB, because of an automatic optimization loop.
Thank you!
Best,
Ding
fig.1
fig.2
…
d the fact that one pipe goes out and one goes in, that the surface normal direction is opposite for the two surfaces? Based on an earlier thread, you should know why by now. The two curves have opposite directions (again!); see the white arrows using Rhino 'Analyze | Direction'?
As before, you can fix that by flipping one curve to match the other. HOWEVER, you connected your curves directly to the 'Divide' components instead of using 'Crv' geometry params - bad form. And as before, you "fixed it" by reversing the list of starting points ('S' input to 'BiArc'). Better like this - 'Crv' params are internalized, no need for Rhino file:
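In GHPython terms, that direction fix is a one-liner; a minimal sketch (assuming 'a' and 'b' are two Curve inputs on the component):

    import Rhino.Geometry as rg

    # If the two curves run in opposite directions, reverse a copy of one
    # so that lofts and point divisions line up.
    if not rg.Curve.DoDirectionsMatch(a, b):
        b = b.DuplicateCurve()
        b.Reverse()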
Well, well! That didn't fix the opposite surface normals after all! Trust me, though, using geometry params and being conscious about matching curve directions is "best practice". But I haven't lofted 'BiArc' curves for a while, it's late and I want to move on. OH! I just noticed that you reversed the 'Z' direction for one half of the 'BiArc' - that explains it:
Moving on... You've basically got it, though I would do it differently - same result, like this:
I haven't really explained surface normal vectors - can you figure it out from here? One more little wrinkle (Normal_2017Mar17b.gh):
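If you want to poke at the normals yourself, here is a GHPython sketch (assuming a 'srf' Surface input); evaluating at the middle of the domain is usually enough to see which way a surface faces:

    # Evaluate the surface normal at the centre of the UV domain.
    u = srf.Domain(0).Mid
    v = srf.Domain(1).Mid
    n = srf.NormalAt(u, v)  # unit vector; flip the surface to reverse it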
…
Added by Joseph Oster at 12:03am on March 18, 2017
ported to Rhino and "set" in Grasshopper, I trim both surfaces from their rectangular bases so that when SDivide is used it creates and distributes the same number of points on each surface. But here are the problems:
a) If I use the "trimmed" surfaces with SrfGrid, it errors with the warning: "A point in the grid is null. Fitting operation aborted." I'd learned this was caused by nulls replacing position data items where the rectangular grid (surface base) was trimmed away. So I used Clean Tree, which worked, removing all the nulls, then Shift Paths / Flip Matrix to create line-endpoint pairs for Polyline / Evaluate Curve. I flattened the last Flip Matrix, placing all the data items in one source for SrfGrid, as in the working Untrim / CopyTrim definition.
b) This time, SrfGrid errored with: "The UCount value is not valid for this amount of points". So I substituted a numeric slider (value 356) in the Addition B param and tested its range until a valid UCount was found. Then SrfGrid fitted a surface through the points, BUT
d) those SrfGrid surfaces are extremely deformed, even though the points preceding them from Evaluate Curve are accurate. SEE def: "3b-RGH_SurfaceBlend.gh". AND
a2) if I use Untrim with CopyTrim, then SrfGrid works; but since the Joker's limbs WILL be in different surface positions, the blends for the Arm (for example) rise from its relatively FLAT position on the untrimmed Source surface to the Arm on the Target surface, rather than morphing from the corresponding Arm position on the Source surface. See def: "4-RGH_SurfaceBlend.gh".
So please let me know:
1) how to produce accurate surfaces from SrfGrid in def "3b-RGH_SurfaceBlend.gh" (NOTE: BOTH these defs contain two identical, "internalized" surfaces, but if def 3b can be made to work, it will also work with dissimilar surfaces);
2) which component to use, or how else to determine the correct UCount value for a specified number of points (e.g. 155), re: the SrfGrid error "The UCount value is not valid for this amount of points";
3) how else to force SrfGrid to work with trimmed surfaces? AND
4) how to force the inter-surface point-blend correspondence polylines (PLine) to connect between correctly corresponding positions (limbs) on the surfaces?
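My guess, from the error text, is that SrfGrid needs the flat point list to split into complete rows, i.e. a valid UCount must divide the point count evenly; a quick Python sketch of that check (an assumption about SrfGrid's requirement, not a documented rule), using the 155-point case from question 2:

    # List the UCount values that divide a given point count evenly,
    # assuming SrfGrid expects points = UCount x VCount.
    def valid_u_counts(n_points):
        return [u for u in range(2, n_points) if n_points % u == 0]

    print(valid_u_counts(155))  # 155 = 5 x 31 -> [5, 31]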
Really appreciate all help, definitions and the kind generosity common to this knowledgeable membership,
Cheers!,
Jeff…
lts.
In the visualization, points are an interesting option; it's a matter of aesthetics I guess, I go with surfaces :) Also, you can try selecting Filters -> Slice (you can also find it in the icons above the pipeline viewer); in the Slice options below the pipeline, press "Z Normal" and for the Z coordinate enter some height relevant to the buildings (e.g. 1.75 m, a typical human scale). That will show you the flow around the buildings at that height. Experiment with selecting other normals and values. Keep playing with the filters, there are some cool things in there. Also, you can check out the mailing list and the extensive ParaView documentation.
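The same slice can also be scripted in pvpython if you prefer; a rough sketch (assuming your case is already loaded as 'reader'):

    from paraview.simple import *

    # Slice with a Z-normal plane at 1.75 m, as described above.
    slice1 = Slice(Input=reader)
    slice1.SliceType = 'Plane'
    slice1.SliceType.Normal = [0.0, 0.0, 1.0]
    slice1.SliceType.Origin = [0.0, 0.0, 1.75]
    Show(slice1)
    Render()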
Concerning the errors, I apologize; I only just downloaded your case.
It appears that the decomposeParDict is not included in the system folder. I am not sure if this is because BF has not gone through the whole workflow yet, or an omission on our side. Please feel free to add it on GitHub; I will also note it down and pass it to Mostapha to check. In the meantime, please find attached a VERY detailed decomposeParDict file. I took the liberty of setting it to 4 processors (the numberOfSubdomains value) and also selected (that is, uncommented) the scotch decomposition method. It's the easiest method to use, since it is automatic and doesn't require any more inputs on how the domain is decomposed in the x, y, z directions (which would otherwise require you to change values in the attached file).
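For anyone following along, the two settings mentioned above sit at the top level of that dictionary; a minimal sketch of a decomposeParDict (4 subdomains is just an example value):

    FoamFile
    {
        version     2.0;
        format      ascii;
        class       dictionary;
        object      decomposeParDict;
    }

    numberOfSubdomains  4;      // match your processor count

    method              scotch; // automatic; no per-axis settings needed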
Now, the different folders created are simply snapshots of the current solution at specific timesteps. To control how often the solver saves, change the writeInterval value in the controlDict file. You can also change almost all of these values on the fly, while OF is running.
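The relevant controlDict lines look something like this (illustrative values, with writeControl set to timeStep so the interval counts solver steps):

    writeControl    timeStep;   // write a snapshot every N timesteps
    writeInterval   100;        // N = 100 here, as an example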
Finally, concerning the other errors from paraFoam: it seems that somehow paraFoam is reading the initial condition names instead of the actual results from the solution files, and it doesn't like that.
Does this happen only when you open the case (i.e. at time 0), or does it also happen when you move to another timestep?
Also, are you using paraFoam, paraview, or the paraFoam -builtin method?
The extension of the paraFoam file seems to be .foam, which means you are probably using the built-in viewer. That might be the issue, but I'm not sure.
Can you try running paraview, navigate to your case folder, open the .foam file and see if there is still an error?
Also, if it isn't much trouble can you zip one of the time folders and attach it here? I'd like to take a look at what's inside to check against what the error report says.
Once again thanks for testing!
Kind regards,
Theodore.…
to parametric design and algorithmic modelling techniques for the generation of complex forms
___________________________________________________________________________________
venue:
Sala meeting, Hotel Mercure Milano Centro, Piazza Oberdan 12 – 20129 MILANO
Registration deadline: 12 November 2011 – 3.00 pm
___________________________________________________________________________________
info and bookings:
Le Penseur (training coordination)
info@lepenseur.it
081 564 21 84
347 548 71 78
participation fees and programme (PDF format)
further information on the PLUG > IT courses
___________________________________________________________________________________
COURSE PROGRAMME
DAY_01
10.00 – 10.30: workshop presentation
10.30 – 11.30: introduction to parametric design: theory, examples, case studies
11.30 – 13.00: Grasshopper: basic concepts, algorithmic logic, graphical interface
13.00 – 14.00: break | lunch
14.00 – 16.00: fundamentals: components, connections, data flow
16.00 – 18.00: exercise
DAY_02
10.00 – 12.00: mathematical and logical functions, series, data management
12.00 – 15.00: analysis and definition of curves and surfaces
DAY_03
10.00 – 12.00: definition of grids and complex patterns
12.00 – 13.00: geometric transformations, paneling
13.00 – 14.00: break | lunch
14.00 – 16.00: exercise
16.00 – 18.00: attractors, image sampler
DAY_04
10.00 – 13.00: data trees: managing complex data
13.00 – 14.00: break | lunch
14.00 – 15.00: digital fabrication: theory and examples
15.00 – 18.00: nesting: breaking three-dimensional objects down into sections and laying them out on cutting sheets for CNC numerically controlled machines…
rce of power.
A fortified emplacement for heavy guns.
Synonyms
accumulator
And use component:
com·po·nent
/kəmˈpōnənt/
Noun
A part or element of a larger whole, esp. a part of a machine or vehicle.
Adjective
Constituting part of a larger whole; constituent.
Synonyms
noun.
constituent - element - ingredient - part
adjective.
constituent - constitutive
…
n to finding a concave contour polyline (which is in general what you need). In your case, each contour section contains a series of points whose order you do not know, and you need to sort them so that connecting them gives you the contour. This is fairly easy when the contour is convex: basically, you find the average point, calculate the vectors from the average to the points, and sort the vectors by angle; sorting the points by the same angle gives you the right order for the contour. It is, however, generally impossible to solve uniquely when the contour is concave. (PS: convex means that, for ANY 2 points inside the figure, the straight line connecting them doesn't intersect the border curve, e.g. circles, ellipses, rectangles, triangles; concave shapes are a star, a crescent moon, an arrow, a boomerang, etc.)
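That convex-case recipe is short enough to sketch in Python (assuming each point is an (x, y) tuple within one section):

    import math

    # Order the points of a convex contour by angle around their average
    # point; connecting them in this order traces the perimeter.
    def order_convex_contour(pts):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        return sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))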
The problem goes like this: given a generic list of points:
Each of these configurations for a perimeter equally fits the above:
Laurent already went for another possible solution, the stochastic approach (subdividing the connecting lines); I slightly adjusted a few things in his solution:
Namely, I added a rounding option to adjust for some weird tolerance issues (some points that should have been at Y=80 were at Y=79.99998 or so), and a more straightforward way to group the points by section plane using sets logic. This, coupled with the alpha shape, gives quite a good approach. The results are still very coarse, but that depends on the sampling resolution of the field (i.e. the number of height sections in which you calculate the metaballs) and on the sampling length of the connecting lines.
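The rounding-plus-sets step amounts to snapping a coordinate before using it as a group key; a small sketch (assuming (x, y, z) tuples with sections taken along Z; swap the index if your sections run along another axis):

    from collections import defaultdict

    # Group points by section plane, rounding the section coordinate so
    # that 79.99998 and 80.0 land in the same section.
    def group_by_section(pts, digits=3):
        sections = defaultdict(list)
        for p in pts:
            sections[round(p[2], digits)].append(p)
        return sections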
Definition attached.…
Diffraction: I left it as it is.
About the unusual issues with the Image Source component: is it something strange? I still have the same issues whenever I connect any Integer component (single or multiple) to the "reflection order" of the Image Source component, to the "image source order" of the Ray Tracing component, and again when I connect the "Direct sound data" output of the Direct Sound component to the Energy Time Curve.
Am I doing something wrong with the Integer component? I already used it in the first parts, to set the "grasshopper layers" in the "Scene" component, and there it works. Should I start with a new file?
For the multi-objective optimization, thank you for all the suggestions. Yes, I read the PhD thesis of Tomas Mendez and the article "EDT, C80 and G Driven Auditorium design", and others as well. Thanks to all these articles, I decided where to focus my thesis.
I understand the potential of multi-objective optimization, and the problems I could run into without using it. Actually, at the beginning of my thesis I tried to get in contact with the Politecnico di Torino, but it was not easy because I'm not a Politecnico student.
Here at the University of Florence (Building Engineering) there isn't a department or anyone who is already familiar with this field of study, so, as you can imagine, to design my thesis I have to rely on online resources. So far, my professor has suggested that I begin with a nonlinear global optimization tool like Galapagos, and only afterwards look at the multi-objective one. This way, step by step, if something doesn't work it is easier to understand why and where it is going wrong: whether the problems are due to the settings of the programs (because we are not experienced with them), or whether there is an error in the simulations or in the algorithm, etc.
Do you think this is a good way to go on?
Thank you very much,
Kind Regards
Giulia
…