It seems that the X and Z directions are graded, and that the grading in the X direction is not symmetrical (which is fine if X is the wind direction). If you didn't set this up yourself, we would need to check our default settings, but if memory serves they apply no grading.
As a side note, I suggest keeping your refinement levels, whether global or set per boundary, equal (the optimal case) or at least close in value. The (2,10) you supplied could create sudden cell-size changes, which is not optimal, even though the 10 probably isn't going to be used. Settings like (2,3) or (2,4) would be fine; if you want to go higher, raise both numbers.
The update fvSchemes functionality is an option added by BF itself, not a command within OF, which is why OF won't recognize it. Essentially, it applies appropriate schemes to your model according to the mesh quality you have achieved. When we get BF going with the new version, this will be done automatically for you.
Finally, the error you are seeing is also eliminated through the use of BF, which takes care of the cleanup and folder setup for you before running the case. For now, to do it manually, follow these steps:
1. Delete the /constant/polyMesh folder.
2. Move the polyMesh folder from the /2 folder into /constant, so it becomes the new /constant/polyMesh.
3. Delete the /1 and /2 folders.
4. Rename the /0.org folder to /0 (if that was not already the case).
4a. Run the case in serial mode with the simpleFoam command.
or
4b. Run the case in parallel with:
decomposePar
mpirun -np x simpleFoam -parallel
where x is the number of processors set in your decomposeParDict, found in the /system folder and usually set to 4. You can change the number of processors simply by editing that number near the beginning of the dictionary. To avoid problems with other processor counts, switch to the scotch method in the same dictionary.
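Put together as a shell session (assuming the case root is the current directory and the mesh ended up in the /2 time folder, as in the steps above):

rm -rf constant/polyMesh            # step 1
mv 2/polyMesh constant/             # step 2
rm -rf 1 2                          # step 3
mv 0.org 0                          # step 4, only if the folder is still named 0.org
simpleFoam                          # 4a: serial run
decomposePar                        # 4b: or decompose and run in parallel instead...
mpirun -np 4 simpleFoam -parallel   # ...here on 4 processors

In system/decomposeParDict the two relevant entries are numberOfSubdomains 4; and method scotch;.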
P.S.: Another, easier but dirty (to my sensibilities) way is to simply paste the /0 folder inside the /2 folder. Provided the boundaries and boundary conditions are set up correctly, this should run. This works for the serial route.
I hope this helps! When BF is up and running soon, most of this will be hidden in the background, but the brute-force approach you are taking now is also quite educational.
Let me know if you have any other questions or issues! Thanks for testing!
Kind regards,
Theodore.
…
a duration of 150 hours divided into four modules, starting on 22 March 2011 and ending in the second week of June, with sessions on Tuesdays and Thursdays from 18:00 to 22:00 and some Saturdays from 10:00 to 14:00.
The central theme of the diploma course is the integral use of digital tools in the design process, built on the theoretical basis of the phenomenon of emergence (understood as obtaining complex results from the interaction of simple elements governed by rules of low sophistication).
The program concentrates on the practical application of the resulting theoretical reflections through the use of generative digital tools, mainly Grasshopper (a parametric modeling plug-in for Rhinoceros).
We will have two international collaborators. The first is a member of LaN (Live Architecture Network), who will teach a course on advanced Grasshopper programming aimed at producing a built object, emphasizing the transition between the virtual, the analog, and the physical. The second is Jalal el Ali, Master in Architecture from the Architectural Association, head of the Generative Geometry Unit at Buro Happold and currently a project leader at Zaha Hadid Architects, who will give an intensive course on digital tools and digital production, teaching processes he has applied at his firm. Jalal will also deliver a keynote lecture.
The program promotes the use of new technologies and the integration of production processes from the conception of a design, applying the theoretical knowledge to a physical object in the fabrication laboratory of the Universidad Iberoamericana.
…
ively and creatively solve today’s product development challenges.
Our Rhino3D Foundations for Industrial Design class provides an in-depth look at 2D and 3D tools and methods in Rhino3D, a NURBS surface modeling software. In this class, we will systematically work through Rhino3D's core features, using them to model the various components of a consumer product. Over the course of 3 days, we'll cover foundational topics including the Rhino interface and navigation, Rhino3D object types and properties, creating and editing 2D and 3D geometry, procedural modeling, automation, transforming geometry, Rhino modeling best practices, freeform vs. precision modeling, and exporting geometry.
You’ll take away the following:
Navigate the Rhino modeling environment
Create, edit, and modify curves, surfaces, and solids
Model precisely using coordinate input and object snaps
Use transformation and universal deformation tools
Apply best practices for layer management and model annotation
Download the course one-pager. Need more information? Connect with us.
This class is ideal for:
Industrial designers who are new to Rhino3D and want to learn its concepts and technical features in an instructor-led environment.
For groups of 10 or more, contact Mode Lab at hello@modelab.is
Interested in additional training options?
https://www.modelab.is/upcoming-computational-design-events…
lly it should not make much of a difference: random number generation is not affected, and neither is mutation. Crossover is a bit trickier; I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb, K. and Agrawal, R. B.: Simulated Binary Crossover for Continuous Search Space. Report IITK/ME/SMD-94027, Convenor, Technical Reports, Indian Institute of Technology, Kanpur, India, November 1994.
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs have been developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that the real-coded GAs with the SBX operator are able to perform as good or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions

In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover.

On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent in binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with the SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions, and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better in difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.

Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions.

These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, real-coded GAs with SBX are one step ahead of binary-coded GAs in terms of achieving a convergence proof for GAs. With a direct probabilistic relationship between children and parent points used in this paper, cues from the classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between the classical optimization methods and GAs is on the horizon.
In short, according to the authors, the SBX operator using real gene values is as good as the older operators specially designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general-purpose crossover operator.
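To make that concrete, here is a minimal C# sketch of SBX on real-valued genomes. The class and method names are illustrative, not the Octopus source; eta is the distribution index, so SBX-20 means eta = 20.

using System;

static class Sbx
{
    static readonly Random Rng = new Random();

    // Create two children from two parents, gene by gene.
    public static double[][] Crossover(double[] p1, double[] p2, double eta)
    {
        var c1 = new double[p1.Length];
        var c2 = new double[p1.Length];
        for (int i = 0; i < p1.Length; i++)
        {
            double u = Rng.NextDouble();
            // Sample the spread factor beta from the SBX density:
            //   p(beta) = 0.5(eta+1)beta^eta        for beta <= 1
            //   p(beta) = 0.5(eta+1)/beta^(eta+2)   for beta >  1
            double beta = u <= 0.5
                ? Math.Pow(2.0 * u, 1.0 / (eta + 1.0))
                : Math.Pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (eta + 1.0));
            // Children are spread around the parents by the factor beta.
            c1[i] = 0.5 * ((1.0 + beta) * p1[i] + (1.0 - beta) * p2[i]);
            c2[i] = 0.5 * ((1.0 - beta) * p1[i] + (1.0 + beta) * p2[i]);
        }
        return new[] { c1, c2 };
    }
}

With eta = 20, the sampled spread factor stays close to 1 most of the time, so children land near their parents, which is what makes SBX mimic the search behaviour of single-point binary crossover.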
But:
- there might be better ones out there that I just haven't seen yet; please tell me.
- crossover is just one part of the breeding pipeline, alongside tournament selection and mutation. There is also the elite management of the MOEA, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment; maybe there's a way to offer an interface so you can code those things yourself!?
2) elite size = SPEA-2 archive size, yes. The ratio depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size (which is hard-coded in the new version, I just realized) is big enough.
4) the non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (per the truncation strategy in SPEA-2) are removed one by one until the size is reached. If the front is smaller than the archive, the fittest dominated individuals are put into the elite; the latter happens at the beginning of a run, when the front hasn't been discovered well yet. A sketch of this update follows below.
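A self-contained C# sketch of that update, under stated simplifications: the Individual type, the scalar fitness, and the nearest-neighbour crowding measure are mine for illustration, not the Octopus source.

using System;
using System.Collections.Generic;
using System.Linq;

class Individual
{
    public double[] Objectives;   // objective values, to be minimized
    public double Fitness;        // scalar fitness, lower is better
}

static class EliteArchive
{
    // Strict Pareto dominance for minimization.
    static bool Dominates(Individual a, Individual b) =>
        a.Objectives.Zip(b.Objectives, (x, y) => x <= y).All(ok => ok) &&
        a.Objectives.Zip(b.Objectives, (x, y) => x < y).Any(ok => ok);

    // Squared distance to the nearest neighbour; small = crowded.
    static double NearestDistance(Individual a, List<Individual> set) =>
        set.Where(b => b != a)
           .Min(b => a.Objectives.Zip(b.Objectives, (x, y) => (x - y) * (x - y)).Sum());

    public static List<Individual> Update(List<Individual> pool, int archiveSize)
    {
        // 1. The non-dominated front always goes into the archive first.
        var next = pool.Where(a => !pool.Any(b => Dominates(b, a))).ToList();

        // 2. Front too big: truncate the most crowded individuals one by one.
        while (next.Count > archiveSize)
            next.Remove(next.OrderBy(a => NearestDistance(a, next)).First());

        // 3. Front too small (early in the run): fill up with the fittest
        //    dominated individuals.
        if (next.Count < archiveSize)
            next.AddRange(pool.Except(next)
                              .OrderBy(a => a.Fitness)
                              .Take(archiveSize - next.Count));
        return next;
    }
}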
3) yes it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively offers the possibility to articulate preference relations on sets of solutions.
…
ight be able to provide more insight). When you run a new simulation in Radiance, it is not always necessary to re-write all of the initial simulation files from scratch. These initial simulation files include both a .rad geometry file and a separate .pts file that contains the test-point locations. If all that you are changing in a given parametric run is the locations of the test points (as in your case), it is not necessary to re-write (or re-interpret) the entire .rad geometry file. My guess is that there is some type of check for this built into either the code Mostapha wrote or the Radiance functions that Mostapha is calling. As such, it seems that the .rad geometry file is not being completely re-written (or re-interpreted by Radiance) when all that you change is the test points, and this actually seems to be saving you an extra 10 seconds each time you run the component without changing the materials or the building geometry. Other times (like when you plug in custom radParameters), it seems that the .rad geometry file is re-written (or re-interpreted) from scratch, since this file is probably affected by customized rad parameters.
So far, if this explanation holds, it seems there would be no cause for concern on your end, but I also recognize that the difference between these long and short simulations is giving you radiation results that are ever so slightly different from each other (by my estimate, they differ by about 0.2%). Compared to the other types of assumptions that the Radiance model is making, though, these are mere rounding errors that probably originate from the number of decimal places in the vertices of the .rad geometry file. Rather than worrying about whether your simulations produce matching rounding errors, I would encourage you to instead contemplate how well your Radiance results match reality, given all of the assumptions that you are making about the climate (with the epw file for a "typical" year) and the number of light bounces in the Radiance simulation. To give you an example, I ran your model with a higher-quality simulation type (3 ambient bounces) and this gives results that differ by 1.1% from the original simulation that you were running with only 2 ambient bounces (practically an order of magnitude larger than 0.2%).
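For reference, the number of ambient bounces is Radiance's -ab flag. A hedged example of the kind of string you might pass through a custom radParameters input (the values are purely illustrative, not recommendations):

-ab 3 -ad 1000 -as 20 -ar 300 -aa 0.1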
To address your unease I will say that, for a long time, I also felt uneasy any time that I encountered something that seemed unpredictable in software that I was using. Once I started coding my own stuff, though, I realized quickly that unpredictable behavior is an unavoidable aspect of all software. There is always a tradeoff between accurate results and the time it takes to get them, which produces a multitude of possible ways to arrive at a solution. Add into this complex situation the fact that you might have an almost infinite number of possible inputs to a given set of code.
Because of the unpredictable multitude of cases, there is no application that is completely free from limitations and assumptions. In this light, what ends up being more important than the actual calculation method used is the social infrastructure that is in place to help understand what is being run under the hood, hence why both Radiance and Honeybee are open source and why we try to build a robust community of support through forums like this one!
-Chris…
r." I'm sorry to hear that, I take the interface and ease-of-use rather seriously so this sounds like a fundamental failure on my part. On the other hand, Grasshopper isn't supposed to be on a par with most other 3D programs. It is emphatically not meant for manual/direct modelling. If you would normally tackle a problem by drawing geometry by hand, Grasshopper is not (and should never be advertised as) a good alternative."What in other programs is a dialog box, is 8 or 10 components strung together in grasshopper. The wisdom for this I often hear among the grasshopper community is that this allows for parametric design."Grasshopper ships with about 1000 components (rounded to the nearest power of ten). I'm adding more all the time, either because new functionality has been exposed in the Rhino SDK or because a certain component makes a lot of sense to a lot of people. Adding pre-canned components that do the same as '8 or 10 components strung together' for the heck of it will balloon the total number of components everyone has to deal with. If you find yourself using the same 8 to 10 components together all the time, then please mention it on this forum. A lot of the currently existing components have been added because someone asked for it."[...] has a far cleaner and more intuitive interface. So does SolidWorks, Inventor, CATIA, NX, and a bunch of others."Again, GH was not designed to be an alternative to these sort of modellers. I don't like referring to GH as 'parameteric' as that term has been co-opted by relational modellers. I prefer to use 'algorithmic' instead. The idea behind parameteric seems to be that one models by hand, but every click exists within a context, and when the context changes the software figures out where to move the click to. The idea behind algorithmic is that you don't model by hand.This is not to say there is no value in the parametric approach. Obviously it is a winning strategy and many people love to use it. We have considered adding some features to GH that would make manual modelling less of a chore and we would still very much like to do so. However this is such a large chunk of work that we have to be very careful about investing the time. Before I start down this road I want to make sure that the choice I'm making is not 'lame-ass algorithmic modeller with some lame-ass parametrics tacked on' vs. 'kick-ass algorithmic modeller with no parametrics tacked on'.
Visual Programming.

I'm not exactly sure I understand your grievance here, but I suspect I agree. The visual part is front and centre at the moment and it should remain there. However, we need to improve upon it and at the same time give programmers more tools to achieve what they want.
Context sensitivity.

"There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them."

Unfortunately it's not as simple as that. Whether or not a conversion between two data types makes sense is often dependent on the actual values. If you plug a list of curves into a Line component, none of them may be convertible. Should I therefore not allow this connection to be made? What if there is a single curve that could be converted to a line? What if you want to make the connection now, but only later plan to add some convertible curves to the data? What if you made the connection back when it was valid, but now it's no longer valid; wouldn't it be weird if there was a connection you couldn't make again?

I've started work on GH2, and one of the first things I'm writing now is the new data-conversion logic. The goal this time around is to not just try to convert type A into type B, but to include information about what sort of conversion was needed (straightforward, exotic, far-fetched, etc.) and information regarding why that type was assigned.

You are right that under some conditions we can be sure that a conversion will always fail, for example connecting a Boolean output to a Curve input. But even there my preferred solution is to tell people why that doesn't make sense rather than not allowing it in the first place.
Sliders.

"I think they should be optional."

They are optional.

"The “N” should turn into the number if set."

What if you assign more than one integer? I think I'd rather see a component with inputs 'N', 'P' and 'X' than one with '5', '8' and '35.7', but I concede that is a personal preference.

"But if I plug it into something that'll only accept a 1, a 2, or a 3, that slider should self set accordingly."

Agreed.
Components.

"Give components a little “+” or a drawer on the bottom or something that, by clicking, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to right-click on each thing on a component to do all of the settings."

I was thinking that just zooming in on a component would eventually provide easier ways to access settings and data.

"Could some of these items disappear if they are contextually inappropriate or gray out if they're unlikely?"

It's almost impossible for me to know whether these things are 'unlikely' in any given situation. There are probably some cases where a suggestion along the lines of "Hey, this component is about to run 40,524 times. It seems like it would make sense to Graft the 'P' input." would be useful.
Integration.

"Why isn't it just live geometry?"

This is an unfortunate side-effect of the way the Rhino SDK was designed. Pumping all my geometry through the Rhino document would severely impact performance and memory usage. It would also complicate the matter to an almost impossible degree, as any command and plugin running in Rhino would then have access to 'my' geometry.

"Maybe add more Rhino functionality to GH. GH has no 3D offset."

That's the plan moving forward. A lot of algorithms in Rhino (Make2D, FilletEdge, Shelling, BlendSrf; the list goes on) are not available as part of the public SDK. The Rhino development team is going to try to rectify this for Rhino6 and beyond. As soon as these functions become available, I'll start adding them to GH (provided they make sense, of course).

On the whole I agree that integration needs a lot of work, and it's work that has to happen on both sides of the aisle.
Documentation.

Absolutely. Development of GH1 has slowed because I'm now working on GH2. We decided that GH1 is 'feature complete', basically to avoid feature creep. GH2 is a ground-up rewrite, so it will take a long time until something is ready for testing. During this time, minor additions and of course bug fixes will still come to GH1, but at a much lower frequency.

Documentation is woefully inadequate at present. The primer is being updated (and the new version looks great), but for GH2 we're planning a completely new help system. People have been hired to provide the content. With a bit of luck and a lot of work, this will be one of the main selling points of GH2.
2D-ness.

"I know you'll disagree completely, but I'm sticking to this. How else could an omission like offsetsurf happen?"

I don't fully disagree. A lot of geometry is either flat or happens inside surfaces. The reason there's no shelling (I'm assuming that's what you meant; there are two Offset Surface components in GH) is that (a) it's a very new feature in Rhino and doesn't work too well yet, and (b) as a result it isn't available to plugins.
Organisation.

Agreed. We need to come up with better ways to organise, document, version, share and simplify GH files. The GH1 UI is fine for small projects (<100 components) but can't handle more complexity.
Don't get me wrong, I appreciate the feedback, I really do, but I want to be honest and open about my own plans and where they might conflict with your wishes. Grasshopper is being used far beyond the boundaries of what we expected, and it's clear that there are major shortcomings that must be addressed before too long. We didn't get it right with the first version, and I don't expect we'll get it completely right with the second version, but if we can improve upon the, say, five biggest drawbacks (performance, documentation, organisation, plugin management and no Mac version), I'll be a happy puppy.
--
David Rutten
david@mcneel.com…
So it's not true that Bounds.X is only a getter. However it does behave as though it is. This is because RectangleF is a Value Type instead of a Reference Type. When you assign a variable of one value type to another variable of the same type, you always assign a copy of the first value. So when you request the Bounds from an attributes class, what you get is a copy of the actual bounds. Changing the X on this copy would be a useless operation which is why Visual Studio catches this mistake.
Let's assume that Dog is a class (a reference type) and it has a get/set property for fur type. Then, if I type:
Dog A = new Dog();
A.Coat = CoatType.Long;   // CoatType is an assumed enum with Long and Short values
Dog B = A;                // A and B now refer to the same Dog instance
B.Coat = CoatType.Short;
At the end of these lines, both A and B have a short coat, because the act of assigning A to B (line 3) means that both A and B now point to the same instance of Dog in memory. In effect, A and B are the same. If Dog were a struct (a value type), then at the end of this code A and B would have different coats, because assigning A to B means creating a copy of A. Any changes made to B will not affect A.
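To see the copy semantics in isolation, here is a minimal console sketch using RectangleF itself, the type in question:

using System;
using System.Drawing;

class CopySemanticsDemo
{
    static void Main()
    {
        RectangleF a = new RectangleF(0f, 0f, 10f, 10f);
        RectangleF b = a;          // value type: b is an independent copy of a
        b.X += 10f;                // modifies the copy only
        Console.WriteLine(a.X);    // 0  -- the original is unchanged
        Console.WriteLine(b.X);    // 10
    }
}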
The one place where this causes annoying situations is exactly where you ran into it. If a property returns a value type then it's typically not useful to call properties and methods on that returned data, as it would only affect the copy of the actual data instead of the original data. That's why, if you want to change the Bounds of an attribute, you need code like this:
RectangleF box = Bounds;  // copy the struct value out
box.X += 10;              // modify the copy
Bounds = box;             // assign the modified copy back
On to the second problem, which is that doing it this way won't help you one bit. Laying out a component is a difficult job and the size of the Bounds depends on many things:
The display mode of the component (icon or text).
The size of the text (depending on which Font to use).
The maximum number of input and output parameters.
The maximum width of the longest input/output parameter name.
The maximum number of state icons to draw on the input/output parameters.
Changing the Bounds after the layout has occurred will basically just invalidate the parameter layout, resulting in parameter names and grips being drawn in the wrong places.
If you want to affect the size of the Bounds for a GH_Component class, you're going to have to dive in and do the laying out yourself. As mentioned before, this is not trivial.
There are static methods on GH_ComponentAttributes which are helpful when doing this, have a look at:
LayoutComponentBox()
LayoutInputParams()
LayoutOutputParams()
LayoutBounds()
Unfortunately they are undocumented.
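If you do go down that road, the usual starting point is a custom attributes class that lets the base class do the full layout and then adjusts it. A hedged sketch follows; the extra width is arbitrary, and as noted above, touching Bounds without re-laying-out the parameters can leave grips misplaced, so verify the details against the SDK version you target.

using System.Drawing;
using Grasshopper.Kernel;
using Grasshopper.Kernel.Attributes;

public class MyComponentAttributes : GH_ComponentAttributes
{
    public MyComponentAttributes(GH_Component owner) : base(owner) { }

    protected override void Layout()
    {
        base.Layout();            // perform the default layout first

        // Copy-modify-assign, as discussed above.
        RectangleF box = Bounds;
        box.Width += 20f;
        Bounds = box;
    }
}

// Hook it up in your component:
// public override void CreateAttributes()
// {
//     m_attributes = new MyComponentAttributes(this);
// }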
--
David Rutten
david@mcneel.com…
o, at the Eurac premises and the TIS, on 21, 22 and 23 May 2015.
The integrated design process is recognized as the method for achieving the high levels of quality demanded of buildings today. With this approach, visual comfort and the management of natural daylight in relation to energy savings become increasingly relevant. Indeed, the new LEED v4 protocol awards dedicated credits and confirms the importance of daylighting design to "connect occupants with the outdoors, reinforce circadian rhythms, and reduce the use of electrical lighting by introducing daylight into the spaces".
Robust design requires effective simulation tools, and Radiance is recognized as one of the software packages capable of providing reliable results. Radiance is used both in research and among practitioners, and it is among the most accurate tools for the professional simulation of natural and artificial light. It has no limits on geometric complexity and lends itself to integration with other calculation software and graphical interfaces. The main and most versatile of these (DIVA4Rhino, plug-ins for Grasshopper and Rhinoceros3D), which greatly ease the scripting procedures, will be covered in the course.
The course is aimed at designers and researchers who want to acquire practical tools for simulation with Radiance, in order to develop and verify the solutions best suited to their needs. It includes theory and hands-on lessons, with examples and exercises covering the topics in a demonstrative and interactive way.
The course is recognized for 15 credits by the Ordine degli Architetti (the Italian chamber of architects).
Applications for enrollment must be submitted by 27 April 2015.
Download the brochure with all the information: Corso Radiance - EURAC.pdf
The course is sponsored by Pellinindustrie.…