ll geometry.
The difference with programs like Inventor is that they are made for production, regardless of the fabrication method. I won't go into detail about that, and instead focus on the modeling process.
In this little model, the starting point is fairly obvious: the foundation.
The only contents in the 3dm file are 27 lines. These indicate the location of each footing, and the direction of the tilt of each column. Everything else is defined in GH with the use of numbers as input parameters.
Needless to say, instead of referencing those lines you could generate them in GH and control the number of columns and panels, and hence their layout, with any algorithmic or non-algorithmic criteria you please. That marks a major difference between GH and Inventor.
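For example, a minimal GHPython sketch of that idea might look like this (count, spacing, tilt and height are assumed slider inputs; the real definition reads the lines from the 3dm instead):

import Rhino.Geometry as rg

# Generate one tilted centerline per column instead of referencing
# lines from the 3dm file. All input names are hypothetical sliders.
lines = []
for i in range(int(count)):
    base = rg.Point3d(i * spacing, 0.0, 0.0)            # footing location
    top = rg.Point3d(i * spacing + tilt, 0.0, height)   # tilted column top
    lines.append(rg.Line(base, top))

a = lines  # component output: one centerline per column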
You can generate geometry with Inventor via scripting/customization (beyond iLogic), with transient graphics for visual feedback similar to GH's red default previews. However, Inventor's modeling functions are not set up to input and output data trees. I won't go into detail on that, but suffice it to say that the data tree associativity of GH was the first major difference I noticed. I've used other apps with node diagram interfaces, like Digital Fusion for non-linear video editing, since the late 90's, so the canvas itself didn't catch my attention when I first started using GH.
Anyways, here's a screen capture of the foundation lines:
In the first group of components, the centerlines of the rear columns are modeled:
And the locations in elevation for connection points are set. Those elevations were just numbers I copied from Excel, but you can obviously control that any way you please. I was just trying to model this quickly.
The same was done for the front columns:
The above, believe it or not, took me the first 5 hours to get.
Here's a screen capture of what the model and definition looked like after 4 hours, not much:
If you're interested, in the next post I can get into the sketching part you mentioned, which can seem a bit cumbersome in GH but isn't really.
I wouldn't say that using GH to do this little model was cumbersome; it just needed some thinking at the beginning. You do similar initial thinking when working with a feature-based modeler.…
Added by Santiago Diaz at 12:44am on February 24, 2011
ences, so not terribly important in the end. After all, it's not really worth going through a lot of trouble to get a 15% speed increase; 15% faster than slow is still pretty slow.
Also, processor speed has pretty much peaked these past few years; there have been no significant increases lately. Instead, manufacturers have started putting more cores on each processor, which is something GH unfortunately cannot take advantage of.
Multi-threading (very high on the list for GH2) brings with it a promise of full core utilisation (minus the inevitable overhead for aggregating computed results), but there are some problems that may end up being significant. Here's a non-exhaustive list:
It's not possible to modify the UI from a non-UI thread. This is probably not that big a deal for Grasshopper components, especially since we can make methods such as Rhino.RhinoApp.WriteLine() thread safe.
Not all methods used by component code are necessarily thread safe. There used to be a lot of stuff in the Rhino SDK that simply wouldn't work correctly, or would crash, if the same method was run more than once simultaneously. The Rhino core team has been working hard to remedy this problem, and I'm confident we can fix any problems that still come up, though it may take some time. If components rely on other code libraries, the problem may not be solvable at all. So we need to make sure multi-threading is an optional property of components.
There's overhead involved in multi-threading; it's especially difficult to get a good performance gain when dealing with lots of very fast operations. The overhead in these cases can actually make things perform slower.
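As a rough illustration, outside GH, of how per-task overhead can swamp cheap work (plain Python; exact numbers will vary):

import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    # The work is so cheap that scheduling it on a thread costs more
    # than simply doing it.
    return x * x

items = list(range(100000))

t0 = time.time()
serial = [tiny_task(x) for x in items]
t1 = time.time()

with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(tiny_task, items))
t2 = time.time()

print("serial:   %.3fs" % (t1 - t0))
print("threaded: %.3fs" % (t2 - t1))  # often slower here: overhead dominates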
There's the question of at what level multi-threading should be implemented. Obviously the lower the better, but that means a lot of extra work, complicated patterns of responsibility, and a lot of communication between different developers.
There's the question of how the interface should behave during solutions. If all the computation is happening in a thread, the interface can stay 'live'. So what should it look like if a solution takes -say- 5 seconds to complete? Should you be able to see the waves of data streaming through the network, turning components and wires grey and orange like strobe lights? What happens if you modify a slider during a solution? The simple answer is to abort the current solution and start a new one with the new slider value. But as you slowly drag the slider from left to right, you end up computing 400 partial solutions and never getting a final answer, even though you could have computed 2 full solutions in the same time and given better feedback. Does the preview geometry in the Rhino viewports flicker in and out of existence as solutions cascade through the network?
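One conceivable approach to the slider problem - and this is only a sketch, not how GH actually works - is to coalesce rapid input changes so a new solution only starts once the input settles:

import threading

class DebouncedSolver(object):
    """Only start a new solution once the input has been stable for
    `delay` seconds, so dragging a slider doesn't trigger hundreds of
    partial solutions. A sketch of one option, not GH's actual scheme."""

    def __init__(self, solve, delay=0.1):
        self._solve = solve   # callable that computes a full solution
        self._delay = delay
        self._timer = None

    def input_changed(self, value):
        # Cancel any pending solution and schedule a fresh one.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self._delay, self._solve, (value,))
        self._timer.start()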
…
and where the decimal place should be.
The reason it only shows the first 5 digits of a number like 1,000,000 is that anything smaller than 100 is considered insignificant when talking about 1 million. Think of it like this: if 1 million represents an Olympic-size swimming pool, then 10 would represent the volume of a full tank of petrol for an average family car. You would have to stand there for an extremely long time to fill up the pool from a petrol pump.
It's important to know that these insignificant digits are still there for the purpose of calculations but are just not being displayed.
There are times when you may want to display these numbers in a format that makes more sense; on these occasions we can use the Format() function.
Format() Function
For versions BEFORE 0.9.0001, the VB Format function is available through the Expression components found on the Math Tab > Script Panel.
Either by using the F input* or the Expression Editor found on the Context Menu, you can apply a format mask to the x input.
* except FxN
Anatomy of the formatting function above:
Format(..............................) <-- VB function
Format("........................."....) <-- Display String
Format("{0....................}"....) <-- Place Holder for first variable
Format("{0:0.000000000}"...) <-- Format Mask for 9 decimal places
Format("{0:0.000000000}", x) <-- Variable
This can be applied to points and their components:
For versions AFTER 0.9.0001 there is a dedicated Format component, or you can use the Expression component's successor, Evaluate.
For more information on the tags used in the Format function, see these links:
Standard formatting tags
Custom formatting tags
WARNING:
If you format a number to be displayed in this way it becomes a string, and the complete real number will no longer be available for calculations. Always use the input of the Format function, not its output, for any further calculations.…
l operations. Aside from its geopolitical position and commercial significance, Thessaloniki has for many centuries been the military and administrative hub of the region, and beyond this the transportation link between Europe and the Levant. A series of design studies will be put forward to rethink the way the city environment in Thessaloniki has been affecting its population according to changing needs, and to visualize such urban shifts on a more hyper-specific, contextualized construction model. Throughout the investigations on the research agenda, current trends in the habits of architectural practice will be revisited.
Innovative urban interventions informed by bottom-up rules extracted from existing city conditions will form the major focus of the design proposals. Design teams will work with simulation tools and digital fabrication methods throughout the design research phase. The design brief will initially be explored through the combinatorial use of different computational design tools. Methods of connecting form-finding methods with form-making techniques will be investigated. Various manufacturing techniques enabling a hands-on experience of the diverse range of digital fabrication systems will form the starting point for the physical tests. Finally, the design and fabrication of a one-to-one scale pavilion will unify the goals of the programme.
Prominent features of the programme / skills developed:
- Participants will be part of an active learning environment where the student-to-tutor ratio (5:1) allows for personalized tutorials and debates.
- The toolset of AA Thessaloniki includes Autodesk Maya, Rhinoceros, Grasshopper and Arduino.
- Participants will have access to digital fabrication tools such as a 3-axis CNC router, a laser-cutter, and a 3D printer.
- Design seminars and lecture series will support the key objectives of the programme, disseminating knowledge on new design anatomies, including machinic control, computational space and complexity in systems, as well as innovative urban design approaches.
Eligibility: The workshop is open to architecture and design students and professionals worldwide.
Accreditation: Participants receive the AA Visiting School Certificate upon completion of the Programme.
Fees: The AA Visiting School requires a fee of £600 per participant, which includes a £60 Visiting membership fee. The deadline for applications is 15 October 2015. No portfolio or CV is required.
Discount options are available. Please contact the AA Visiting School Coordinator for more details.
Online application link:
https://www.aaschool.ac.uk/STUDY/ONLINEAPPLICATION/visitingApplication.php?schoolID=316
Programme Director:
Alexandros Kallegias (AA Greece VS Director): alexandros.Kallegias@aaschool.ac.uk…
es at the beginning. But as I make changes to the input (or just hit the recompute button) the time it takes to execute increases. This has happened to me with other scripts I've written with the python component. Why does this happen? And how do I fix it? Does python hold onto data from one execution to the next? The only solution I have found is to relaunch Rhino. Even if I copy the component into a fresh grasshopper canvas, the computation time does not return to original.
The images below illustrate the time increase. I simply hit the recompute button between each pass. All inputs remain the same the whole time. There are 6400 curves being projected. I will say that with fewer curves, the increase in time is nonexistent or imperceptible. (I have 24 GB RAM and it did not even reach 50% usage during the tests.)
My python code:
import ghpythonlib.components as ghcomp
import ghpythonlib.parallel

def project(tempc):
    # Project one set of curves onto brep B along direction D.
    tempresult = ghcomp.Project(tempc, B, D)
    return tempresult

a = ghpythonlib.parallel.run(project, C, True)
I have attached the GH file with the inputs internalized if anyone wants to try for themselves.
Pass 1= 444ms
Pass 5= 610ms
Pass 10= 908ms
Pass 15= 1.2s
Pass 20= 1.4s
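For comparison, here is a sketch that skips ghcomp and calls RhinoCommon directly (assuming, as in the file, C is a list of curves, B a Brep and D a Vector3d). I don't know whether it sidesteps the slowdown, but it removes the component-wrapping overhead from the equation:

import Rhino

tol = Rhino.RhinoDoc.ActiveDoc.ModelAbsoluteTolerance

a = []
for crv in C:
    # Curve.ProjectToBrep returns an array of projected curves (possibly empty).
    projected = Rhino.Geometry.Curve.ProjectToBrep(crv, B, D, tol)
    if projected:
        a.extend(projected)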
…
Added by Lawrence Yun at 3:19pm on December 10, 2014
lly it should not make much of a difference - random number generation is not affected, and neither is mutation. Crossover is a bit more tricky: I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb K., Agrawal R. B.: Simulated Binary Crossover for Continuous Search Space, IITK/ME/SMD-94027, Convenor, Technical Reports, Indian Institute of Technology, Kanpur, India, November 1994
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs are developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that the real-coded GAs with the SBX operator are able to perform as good or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions
In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover.
On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent with binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with a SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions, and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better in difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.
Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of the search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions.
These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, the real-coded GAs with SBX are one step ahead of the binary-coded GAs in terms of achieving a convergence proof for GAs. With a direct probabilistic relationship between children and parent points used in this paper, cues from the classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between the classical optimization methods and GAs is on the horizon.
In short, according to the authors, the SBX operator using real gene values is as good as older operators specially designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general-purpose crossover operator.
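For anyone curious, the textbook SBX operator for a single real-valued gene is only a few lines; here is a minimal Python sketch of the published formula (not necessarily Octopus's exact code):

import random

def sbx_crossover(x1, x2, eta=20.0):
    """Simulated Binary Crossover (SBX) on one real-valued gene.
    eta is the distribution index; eta=20 gives the SBX-20 mentioned above."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    # Children are centered on the parents; a large eta keeps them close by.
    c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return c1, c2

print(sbx_crossover(0.2, 0.8))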
But:
- there might be better ones out there I just haven't seen yet. Please tell me.
- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for MOEAs, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment - maybe there's a way for an interface to let you code those things yourself..!?
2) elite size = SPEA-2 archive size, yes. The rate depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size (as it is hard-coded in the new version, I just realize) is big enough.
4) the non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (per the truncation strategy in SPEA-2) are removed one by one until the size is reached. If it is smaller, the fittest dominated individuals are put into the elite; the latter happens at the beginning of the run, when the front hasn't been discovered well yet.
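In pseudo-Python, that elite update looks roughly like this (names are just for illustration; importance() stands in for SPEA-2's truncation criterion):

def update_archive(front, dominated, size, importance):
    # The non-dominated front always enters the archive first.
    archive = list(front)
    if len(archive) > size:
        # Truncate the least important individuals one by one.
        while len(archive) > size:
            archive.remove(min(archive, key=importance))
    else:
        # Top up with the fittest dominated individuals (typically early
        # in a run, when the front is still small).
        extras = sorted(dominated, key=importance, reverse=True)
        archive.extend(extras[:size - len(archive)])
    return archive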
3) yes it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively has the ability to articulate preference relations on sets of solutions.
…
to present Digital Process: Generative Design Technologies Workshop, a specialized workshop to be held in 4 of the most important cities of Mexico [Puebla] [Mexico DF] [Guadalajara] [Leon] in January and February 2012. http://gendesigntech.wordpress.com/
Aimed primarily at architects, industrial designers, interior designers, urban planners, digital artists, students and design professionals, this workshop aims to provide participants with the knowledge and technological resources to develop the elements of a project completely, from conception to application. Supported by a powerful and flexible set of platforms, participants will learn to generate, analyze and rationalize complex morphologies, free organic forms and advanced computational algorithms, as well as to produce photorealistic visualizations applicable to a variety of design projects. Over 5 days of intense work, exploration and feedback, participants will be guided in developing a more dynamic workflow that will allow them to exploit the full potential of the tools and strengthen their skills, aptitudes and capabilities. Instructors: Leonardo Nuevo Arenas [Complex Geometry], José Eduardo Sánchez [DesignNest], Daniel Camiro/Luis de la Parra [Chido Studio]. http://issuu.com/chidostudiodiseno/docs/digprowork See the programme here: http://gendesigntech.wordpress.com/program/ To register please visit: http://gendesigntech.wordpress.com/registro
…
ing results and I think it is based on the assumption of small displacements. That’s why I want to try with LaDeform.
But in doing this I ran into some problems. I tried to experiment with it on the small examples that are provided with Karamba:
1. LaDeform in load-controlled behavior
I know Karamba has mainly been created for form-finding and not for properly precise calculations, but I'd like to evaluate the deformations of my structure under certain loads (load-controlled). One is told to leave MaxDisp at its default value (-1).
[Rhino view for deflection of the rope]
In this example, derived from a Karamba example (Large_Deformation_Rope.gh), the program shows different ways to get approximately equal maximum deflection. But, digging into it, I realized the Load Multiplier for gravity differs from one model to another (-3.237 for Analyze Th1 and -134 for LaDeform). So what is the point of the example if the quite similar deflection shapes are not obtained under the same loadings? (The loadings are indeed quite different.)
Doesn’t it show on the contrary that LaDeform algorithm does not work properly, if you need to change the load multiplier?
The Grasshopper file is shown below.
2. MaxDisp
When I use the model in "max disp" mode, I control the deformation, but how can I get the value of the virtual force exerted (which I don't know, because the displacement is now what is imposed)? What is its link with the imposed deflection?
Otherwise I can’t figure how to use it with displacement-controlled loading
3. Iterative process
As it seems impossible to use the LaDeform process directly, I tried to test it by iterations, as you recommend on the forum, saying that it is equivalent to an iterative Analyze Th1 process.
I tried to reproduce this loading, but the result is not very encouraging, as you can see. The Rhino file shows the progressive loading, with the corresponding Grasshopper files, in which I (the loop is sketched after this list):
- disassemble the model,
- get the previous deformed model
- put in another part of the load,
- re-assemble and then calculate it on the previous deformed shape.
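In pseudo-Python, the loop I am trying to build looks like this (all helpers are stand-ins for the Karamba components involved):

def apply_increment(model, part):   # stand-in: re-add part of the load
    return model

def analyze_th1(model):             # stand-in: small-displacement solve
    return model

def carry_deformation(model):       # stand-in: deformed geometry as new input
    return model

model = "assembled model"           # placeholder for the initial model
n_steps = 5
for i in range(n_steps):
    model = apply_increment(model, 1.0 / n_steps)  # next load increment
    model = analyze_th1(model)                     # solve on current shape
    model = carry_deformation(model)               # deformed shape carries over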
Do you have any idea why the answer is not the same? (LaDeform seems to give about 5 times less deflection for the same loadings, and even when controlling it by displacements, the shapes do not match what the principle of the algorithm would suggest.)
[Rhino view for iterative process]
First step by Analyze Th1, and result by LaDeform
4. Analyze Th1 after LaDeform?
Some Karamba tutorials show that an analysis with Analyze Th1 is sometimes made immediately after a calculation in large deformations. What is the reason for this? It sometimes seems to change the result considerably. What is the point of such an operation? Would it mean that LaDeform is not trustworthy?
My question is then: is there a way to make the use of LaDeform for purposes other than form-finding affordable and coherent? If I am mistaken in using it, where?
Thank you very much for your help,
…