about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
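One way to attack the "valid across a cross-section of input parameters" problem is a brute-force parameter sweep: sample each input, run the definition (stood in for here by a plain Python function) over the whole cross product, and check a set of invariants. This is a hedged sketch of the testing idea only; nothing here is GH-specific, and all names are mine:

```python
import itertools

def sweep_test(definition, param_ranges, invariants):
    """Run a definition (a plain function here) over the cross product of
    sample values and check that every invariant holds for every combination.
    Returns a list of (params, invariant_name) failures; empty means all passed."""
    failures = []
    names = list(param_ranges)
    for combo in itertools.product(*(param_ranges[n] for n in names)):
        params = dict(zip(names, combo))
        result = definition(**params)
        for name, check in invariants.items():
            if not check(result, params):
                failures.append((params, name))
    return failures
```

For example, a "definition" computing a rectangle's area should stay positive for every sampled width/height combination; a definition written against a single sketch would fail such a sweep as soon as an unexpected combination breaks it.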
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen which compile your project as you type. I know that threading within components and at the RhinoCommon level is a freakishly hard problem that has been discussed at length already. (Although when, 5-10 years from now, it's finished, it will be very cool.)
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver were a little more atomic, like a server? A GH file is just a list of jobs, with the order of the jobs and the information needed to do them rigidly defined - right? The UI could pass the solver work to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in practice - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file could be exploited (not within components, but by solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to from separating the UI and the solver (at least for most of the definitions I write).
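Conceptually, the "solver as a job server" idea amounts to walking the component dependency graph and dispatching every component whose upstream results are already available, so non-interdependent branches run at the same time. A minimal sketch of that scheduling idea - all names are hypothetical and bear no relation to GH's actual internals:

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor

def solve_parallel(components, upstream, compute):
    """components: list of component ids.
    upstream: id -> list of ids it depends on.
    compute: fn(id, inputs_dict) -> result.
    Dispatches each 'wave' of ready components together, so independent
    branches of the graph are solved concurrently."""
    downstream = defaultdict(list)
    pending = {}
    for c in components:
        pending[c] = len(upstream.get(c, []))
        for u in upstream.get(c, []):
            downstream[u].append(c)
    results = {}
    ready = [c for c in components if pending[c] == 0]
    with ThreadPoolExecutor() as pool:
        while ready:
            # submit every component whose inputs are all computed
            futures = {c: pool.submit(compute, c,
                        {u: results[u] for u in upstream.get(c, [])})
                       for c in ready}
            ready = []
            for c, f in futures.items():
                results[c] = f.result()
                # a finished component may unlock its downstream components
                for d in downstream[c]:
                    pending[d] -= 1
                    if pending[d] == 0:
                        ready.append(d)
    return results
```

A diamond graph (A feeding B and C, both feeding D) illustrates the point: B and C have no dependency on each other, so they are submitted in the same wave and can run simultaneously.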
I was imagining a couple of scenarios:
a) Writing a parallel module: the solver starts chewing away - you see it working - you know it's done 1/3 of the work - and if you have something to do at that point, you can connect up to some of the already-calculated parameters and write something in parallel to the main trunk, which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze a section of downstream code and make modifications so you can observe the effects more quickly, then unfreeze a bit more and repeat, etc., until you're done, and finally unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just absorb this need? Since the vast majority of the time that GH is solving is spent on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter whether the final run took 15 seconds or 15 hours.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK. From a user's point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is merely slow and has not, in fact, crashed. Maybe there could be two modes of display: a simple percentage version for unpredictable code and, for those of us able to calculate the time our algorithm takes from the number of input parameters, a countdown in seconds or minutes or whatever.
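The two display modes suggested here can be illustrated with a toy function that generates the progress strings a long-running component might show: a bare percentage when the runtime is unpredictable, and a countdown when the per-step cost is known. This is purely hypothetical - GH components expose no such API:

```python
def progress_messages(steps, seconds_per_step=None):
    """Progress strings for a long-running task.
    With seconds_per_step unknown (None), show only a percentage;
    when it is known, append an estimated-time-remaining countdown."""
    out = []
    for i in range(1, steps + 1):
        pct = 100 * i // steps
        if seconds_per_step is None:
            out.append(f"{pct}%")
        else:
            remaining = seconds_per_step * (steps - i)
            out.append(f"{pct}% (~{remaining:.0f}s left)")
    return out
```

Even the percentage-only mode answers the user's real question ("is it crashed or just slow?"), which is why knowing progress in steps, rather than in time, is already useful.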
I think a good place to start with this sort of problem is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
ack to .ghx?
This is in relation to a discussion I've been having with David Rutten & Scott Davidson about GH consuming memory in a relatively large GH definition (~. I think what I've learned from this is that one should limit the size of the GH file, or put some incremental stops in the definition to limit the length of calculations that it runs at once. Is this a valid conclusion?
The GH file we're talking about is 7 MB & the Rhino file is about 120 MB, but when working with the GH def. I try to keep only about 2 curves turned on.
Here's a summary of the discussion:
Hi Mike,

Thanks for sending it over. I've been fiddling with the file for about 10 minutes and it climbed from 1.7 GB to 1.9 GB, but then I've been switching previews on, which means more meshes get calculated, so you'd expect higher memory consumption. It is possible we're leaking memory, but if you're working for hours on end, memory fragmentation might also explain part of the increase. Basically, memory gets fragmented just like disks get fragmented after prolonged use; the difference is that memory cannot be defragmented unless you restart the application and allow it to start with a clean slate. I'll try and find any leaks we may have missed in the past.

Goodwill,
David
──────────── David Rutten
On 09/03/2011 06:19, Mike Calvino wrote:
Thanks very much, David, for the quick response. I've attached the files, zipped. I can't figure out what's doing it. After working in the file for a while, the memory usage in the Windows Task Manager climbs . . . it got to 1.57+ GB before I exited GH & the Rhino 5 WIP, let it dissipate, then restarted & worked for a while before it did it again. It probably takes 4 or 5 hours before it gets that high. That's the highest it's gotten, and that only happened while I was working in a Rhino file that had all of the elements baked into it - turned off, at least, but it still climbed to 1.57+ GB. It seems to climb when you work in the file & move around in both the GH def. & the Rhino file. If you turn on a few of the Extr components at the right end of the "StandareRibExtuder" groups, you can watch the MemUsage go up, but when you turn them off, it does not go down - it goes up fast at this point. Maybe I need to figure out how to do the definition with fewer components; I'm sure that's part of it, but I must confess I think I'm still early on the learning curve.

I really hope this is not operator error on my part, and I apologize up front if it is. I have done a disk cleanup, and I have tried excluding .3dm & .ghx files from my NOD32 antivirus - no change. I hope you can find something. Let me know if you have any trouble with the files. See if you find anything & please let me know . . . thanks!

Cheers!
--
Mike Calvino
Calvino Architecture Studio, inc.
www.calvinodesign.com
…
he "return" is commented out as shown below?
After restarting Rhino and Grasshopper, I opened the outdoors_airflow demo file, and the first step of creating the case file is ok:
Then the blockMesh component gives the following error; it seems I have to manually start OF first:
So, as the error message suggested, I opened OF via Start_OF.bat:
Then, coming back to the blockMesh component, it can now be executed while the OF command-line window is also open:
... and blockMesh finished successfully:
... so I proceeded to run snappyHexMesh, checkMesh and update fvScheme:
... but at the simpleFoam component, I got the error again:
The warning message is:
1. Solution exception: --> OpenFOAM command Failed!
#0 Foam::error::printStack(Foam::Ostream&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#1 Foam::sigFpe::sigHandler(int) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#2 ? in "/lib64/libc.so.6"
#3 double Foam::sumProd<double>(Foam::UList<double> const&, Foam::UList<double> const&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#4 Foam::PCG::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#5 Foam::GAMGSolver::solveCoarsestLevel(Foam::Field<double>&, Foam::Field<double> const&) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#6 Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#7 Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libOpenFOAM.so"
#8 Foam::fvMatrix<double>::solveSegregated(Foam::dictionary const&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/lib/libfiniteVolume.so"
#9 Foam::fvMatrix<double>::solve(Foam::dictionary const&) in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
#10 Foam::fvMatrix<double>::solve() in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
#11 ? in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
#12 __libc_start_main in "/lib64/libc.so.6"
#13 ? in "/opt/OpenFOAM/OpenFOAM-v1606+/platforms/linux64GccDPInt32Opt/bin/simpleFoam"
... and the command lines in the readMe! output are pretty long; they are saved in the text file attached here.
So, my questions are:
1. Why do I have to manually start OF before I can use the blockMesh component? Shouldn't Butterfly start OF automatically?
2. What might be the cause of the failed simpleFoam run at the end?
Hope you can kindly advise! Thank you!
- Ji
…
os3D + Grasshopper3D + Maya + Advanced Plugins & Demo Toolkits]
// Level
Basic, Intermediate & Advanced
(Previous parametric design knowledge not obligatory - Studio is adaptive to basic & advanced users)
// Agenda
The workshop aims to provide a detailed insight into 'parametric design' and the logic embedded behind it through a series of design explorations using the Rhinoceros & Grasshopper platforms, along with an understanding of data-driven design strategies. An insight into Computational Design and its subsets of Parametric Design, Algorithmic Design, Generative Design and Evolutionary Design will be provided through presentations, technical sessions & studio work. Studio work will focus on the modulation of geometry and iterative form using Parametric Design methods, leading to explorations of spatial geometries that can be articulated as architectural constructs or abstract artistic interventions. There will be a demonstration of Fluid Form Modelling using Autodesk Maya on Day 03.
The first batch of the workshop in London took place in January 2020, with an exploratory learning output for 20+ participants who travelled from different parts of the world to be part of the 3-day workshop titled Parametric Modulations. V2.0 is the evolved workshop for the second batch, with new toolkit add-ons plus demonstrative tools & examples for participants' future use, to give a deeper understanding of Computational & Parametric Design.
// Methodology
The 3-day studio / workshop shall focus on inculcating the following aspects as part of the curriculum:
Computational Design Techniques & Parametric Design
Data, Mathematics & Geometry
Geometry Rationalization
Iterative Form Development
Digital simulation of forces
Environmental Analysis (Tool-kits & Example files for future use)
Collaborative Design Exercises to understand the application of the learnt tools
Documentation and Presentation
Hands-on Demonstration of Maya (Polygonal Mesh Modelling)
The workshop is suitable for beginners as well as intermediate and advanced users of these tools, and very helpful for anyone planning to start a Masters in the UK, as this 3-day workshop would serve as a bootcamp to kick-start anyone's journey in Computational Design.
…
design, construction and understanding of our environment.
BIM is putting at the disposal of designers and managers true databases that can be generated, connected and edited parametrically, providing a solid layer of reality to the generative design and computation exercises that are the object of study at Algomad, the seminar that seeks to popularize programming and parametrization in the design and experience of our built environment.
After a hiatus in 2015, Algomad returns with the aim of demonstrating how a computational vision of BIM is an opportunity to improve the way engineers, architects, construction companies and building and infrastructure operators work, building a bridge between the most advanced digital design techniques and the reality of construction.
Algomad 2016 will take place in the centre of Madrid, at IE School of Architecture and Design, IE University, on 3, 4 and 5 November 2016, and will comprise 4 workshops as well as talks by first-rate experts.
Structure of Algomad 2016
Algomad 2016 is structured around three main thematic areas:
BIM, as the overall methodology specific to the construction sector.
Computation, encompassing the application of programming and parametrization to the design of buildings and infrastructure.
Reality, as the working framework, always seeking to solve real problems through the two points above.
Target audience
Architects, technical architects, engineers and, in general, academics, final-year students and professionals from the real-estate and construction world who share an interest in the digitalization of our sector. A minimum level in the use of BIM and parametrization tools is expected. Algomad will provide additional free training in the basic tools to be used in the workshops to ensure adequate performance.…
itects are at the hub of a number of different specialties, and their work affects many different people. It's not like an architect is a painter, whose work may offend or upset the occasional viewer. As an architect you have a responsibility to produce quality work. How can anybody trust you with this responsibility if you're taking a purely artistic approach? What guarantees do you have that your client's money won't be spent on a poorly designed project if you can provide no rationale for why your design is the way it is?
2. What is any sense in purely architectural discourse?
I don't get it. Discourse is there to flesh out problems and agree on solutions. It might not always accomplish that, but what's the difference between talking about architecture as opposed to any other topic?
3. strictly looked, can be determined sense generally in a purely architectural discourse?
I'm sorry I don't understand.
4. What is purely architectural discourse?
I imagine it's having a discussion where you only talk about architecture?
5. What is Funktionalismus or Rationalismus without philosophical support?
Functionalism and Rationalism are ideologies. Some would even call them methodologies. They are inherently philosophical things as they are nothing more than a collection of ideas and views. As a society we've decided that a certain level of rationalism is a good thing. The Enlightenment continued this trend after the Dark Age hiatus and it quickly led to a large number of very tangible benefits for almost everyone.
I'm not arguing for or against Functionalism as an architectural style. I'm asking for a measure of rationalism in our academic process.
6. Would not be the pure functional fulfilment empty ?
Let's find out. In the meantime I'll settle for a little functionalism.
7. Would be not a critical position on the promise of purely rational algorithms applied?
Algorithms and algorithmic design are rational in the sense that they do not allow for ambiguity. But that doesn't make them rational in the real-world sense. These are not the same kind of 'rational'. I can make an algorithm that produces total nonsense, but does so completely reliably. I can also use an algorithm in a setting for which it wasn't intended, thus invalidating the results.
This is actually the crux of the problem. Which algorithms does one use to solve a problem and what data do they require? If you can't answer this question or if you do not understand the algorithms you are using (at least on a superficial level) then I'd say you have no business using them.
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 12:48pm on August 19, 2013
an almost planar tissue (your case) can cause a variety of issues, up to an un-doable state (metal parts/components grow in size as well for no reason). See the forces estimated by FF below.
2. Therefore, I strongly suggest considering Plan B: (a) mastermind a secondary "anchor" capability in order to achieve a far more stable system; (b) use a mount design that can support this (and comply with your attractor concept). Here's a variable custom mount system (mostly machined AND not cast) that is suitable for the scope (Rhino reads the stp file OK ... but makes a colossally big file - thus I attach the original here).
3. At first sight, lots of things in this system appear "odd". For instance: is it stable? Why are these double cables used? How far can it be adjusted? (That's a classic case for feature-driven parametric design - not doable with Rhino.)
4. This concept (strut axis exported only) was tested in FORMFINDER and some other far more complex membrane apps that I use quite often (not RhinoMembrane). Here's what FF tells us:
Observe a different kind of "stress" when this is converted to radial type:
5. If you insert the stp file into the Rhino file provided (exactly as exported from FORMFINDER - no mods of mine of any kind), you'll see what goes where (and why). That way the use of double cables is rather obvious (as are a lot of other things - for instance the way the struts achieve "equilibrium"; see the slots in the base mount plate).
6. If this approach is worth considering, your definition requires some serious rethinking (far simpler/more manageable, with the drawback that the real parts are "static": they can adjust only as far as this particular solution allows them to - controlling them parametrically is clearly impossible with the current state of R/GH capabilities).
All in all: this case works because the cables push the anchor points downwards and the struts push them upwards.
more in a while
…
reaky thing consisting of triangulated "modules" (i.e. an assembly out of this, this and that) where the exterior edges ARE always under tension (= SS 304/316 cables OR nylon) and the interior ones MAY be under compression (= steel, aluminum, wood, carbon) OR ... some of them ... may be under tension. Bastardized T trusses deviate a bit from theory ... but who cares? (Not me, anyway.) T trusses have many variants (but as the greatest ever said: Less is More).
2. Large-scale T for AEC is the art of the pointless, since it costs around the GNP of Nigeria. Here are some indicative components from a module of a multi-adjustable TX system, the module costing ~ the price of my Panigale (Google that):
The above was mailed to a friend who has MIT (yes, that MIT: the top dog) in sight ... therefore he needs some appropriate "credentials", he he.
3. The distance that separates the above with the demo TDT node provided is around 666.666 miles - but we don't care: we are after Art not some testimony to vanity.
4. On purpose, I've used a smallish ring to give you a clear indication of constraint numero uno in truss design: CLASH matters.
5. You'll need:
(a) A decision related to the tensioners (classic Norseman + SS cables, or nylon machined thingies?).
(b) A machinist who can do elementary stuff (like the adapters) and can weld this to that (the "ring", for instance). His abilities must be 1 on a scale of 100. If the fella has a computer (not a CRAY) and knows what 3dPDF is (hmm) ... well ... use that to communicate with him PRIOR to designing anything: he must agree on the parts BEFORE the whole is attempted (as a design in GH or some other app).
(c) A carpenter with a wood lathe for the obvious. BTW: BEFORE doing any TDT attempt, ask the carpenter about the available wood strut sizes. Contrary to popular belief, DO NOT varnish the wood (use exterior alkyd/oil stains from some top maker like the notorious US company PPG).
http://www.ppgpaints.com/products/paints-stains-data-sheets
(d) Good quality cigars (and espresso) plus some classic music (ZZTop, PFloyd, Cure, Stones, U2 etc etc) during the assembly.
(e) Faith in the Dark Side (see my avatar).
May the Force (the dark option) be with you.…
toward which the veins will grow, and we have another set of points 'N' which are the ones forming the vein pattern.
1. For each 's' in S, we find the nearest 'n' in N. That 'n' is going to "move".
2. For each 'n' that moves, we make a vector pointing at each of the 's' it moves toward.
3. We compute the mean of all the vectors from step 2, move 'n' by that vector, and add it to V.
4. If some 's' is very close to some 'n', that 's' is removed.
5. The process repeats.
This is for forming open venations without auto-growth (like the following image, made with Visual Basic).
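The open-venation steps described above (essentially a "space colonization" scheme) can be sketched in a few lines. Everything here is my own illustrative variant, not the poster's Visual Basic code; in particular, I grow a new child node per step (keeping the vein segments) rather than moving 'n' in place, and all parameter names and values are assumptions:

```python
import math

def venation(sources, nodes, step=0.5, kill_dist=0.6, iterations=50):
    """Open venation by space colonization in 2D.
    sources: attractor points S; nodes: growing vein points N (x, y tuples).
    Returns a list of (parent, child) vein segments."""
    sources = list(sources)
    nodes = list(nodes)
    veins = []
    for _ in range(iterations):
        if not sources:
            break
        # step 1: each source picks the nearest node; collect its "pulls"
        pulls = {}
        for s in sources:
            i = min(range(len(nodes)),
                    key=lambda k: (nodes[k][0] - s[0])**2 + (nodes[k][1] - s[1])**2)
            pulls.setdefault(i, []).append(s)
        # steps 2-3: grow each pulled node along the mean direction
        new_nodes = []
        for i, targets in pulls.items():
            x, y = nodes[i]
            dx = sum(t[0] - x for t in targets)
            dy = sum(t[1] - y for t in targets)
            d = math.hypot(dx, dy) or 1.0
            child = (x + step * dx / d, y + step * dy / d)
            veins.append((nodes[i], child))
            new_nodes.append(child)
        nodes.extend(new_nodes)
        # step 4: remove sources that a vein node has reached
        sources = [s for s in sources
                   if min(math.hypot(n[0] - s[0], n[1] - s[1]) for n in nodes) > kill_dist]
        # step 5: repeat
    return veins
```

With a single source and a single seed node, the vein simply marches toward the source and stops once it gets within the kill distance.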
For closed venations (the reticulated ones that form something like cells, as in your image), steps 1 and 4 are different, and I couldn't tell you how to do it. That PDF explains a method using Delaunay, but it is very slow, and GH doesn't have that algorithm in 3D (so the pattern could only be done in 2D), which is why I'm looking for other routes; I've only managed to get this far:
It's more complicated than it seems.
However, if you'll settle for less, there are many ways to create roots and similar patterns, with ShortestWalk, Anemone, etc... There are examples on this forum.
If you really want to achieve that pattern, you should learn to program, because adding different radii to the veins requires the veins to have topology, and that gets far too complicated from within GH. For their "Hyphae", Nervous System used C++ with the CGAL library, a very powerful library of 3D algorithms.
…
d work exactly like the physical model. In the model, we have a curved surface which can be analysed into squares. These squares are filled with two kinds of units which are connected with each other and create a grid that follows this curved surface.
We have managed to analyse this curved surface into a planar surface consisting of squares, and we painted the squares with colours to represent the kind of unit that "fills" each square. So now, in Rhino, I have built the curved surface that I want to be filled with the two types of units.
I also have the planar surface built in GH, with the squares split into two lists, one for each kind of unit. Because these units are membranes, I used Kangaroo to make them act like membranes.
I hope I described the problem clearly. The point is to keep the dimensions of the units the same and make it work in Kangaroo. Do you have anything in mind that I should look up, or any advice? Thank you in advance, and I'm sorry for the extended description.
*Pic 1: the curved surface that has to be filled with the units
*Pic 2: The binary system that shows which square is occupied by which unit
Blue=2 , Red=1, White= Blank
*Pic 3: unit 1
*Pic 4: unit 2
*Pic 5: a point of view of the physical model (not the final curve at the surface)
…