Grasshopper

algorithmic modeling for Rhino

Hi, 

I have had a play with Octopus (still with the version for GH 0.9.0014) and have a few questions / points of confirmation:

  1. When I look at the genome of a solution in the plot in Octopus, it shows values that appear to be a percentage of the possible range of values of the referenced slider. I wanted to use discrete values to reduce the search space... and had set my sliders as such, but it seems that the genome values displayed in Octopus are not discrete, even for the initial generation. Why is that? Does Octopus work with continuous real values that are rounded to discrete integer values?
  2. Just to confirm: is the 'elite' the same as the 'external archive' described in the original paper from Zitzler, Laumanns and Thiele? I know this is very specific to each optimisation problem... but as a starting point, does anyone have a rule-of-thumb suggestion for the ratio of elite size to population size?
  3. Is the 'weight for preferred individual' parameter in 'advanced settings' the weight given to user-preferred solutions?
  4. I am not sure I understand the difference between displaying the 'non-dominated front' and the 'archive'. Doesn't the archive include all non-dominated solutions?

Thank you.... apologies for the long list.

Fred


Replies to This Discussion

Hi!

1) True, Octopus works with real values that get rounded. There are a lot more special tricks for combinatorial problems; maybe I'll look into that once I've worked through the current priorities! But basically it should not make much of a difference: random number generation is not affected, and neither is mutation. Crossover is a bit more tricky. I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:

Deb K., Agrawal R. B.: Simulated Binary Crossover for Continuous Search Space. Technical Report IITK/ME/SMD-94027, Indian Institute of Technology, Kanpur, India, November 1994.

Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs are developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that the real-coded GAs with the SBX operator are able to perform as good or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.

7. Conclusions
In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover.

On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent with binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with a SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better in difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.

Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of search to continue towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that the real-coded GAs with SBX work similar to the binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions.

These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, the real-coded GAs with SBX are one step ahead of the binary-coded GAs in terms of achieving a convergence proof for GAs. With a direct probabilistic relationship between children and parent points used in this paper, cues from the classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between the classical optimization methods and GAs is on the horizon.

In short, according to the authors, the SBX operator I use with real gene values performs as well as older operators specifically designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general-purpose crossover operator.
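To give an idea of what that means in practice, here is a minimal Python sketch of SBX on a single real-valued gene, followed by rounding the result back onto a discrete slider grid. This is the textbook SBX formula with a distribution index of 20 (SBX-20), not Octopus's actual code, and the `snap_to_slider` helper is just an illustrative assumption about how a continuous gene could map back to a stepped slider.

```python
import random

def sbx_crossover(p1, p2, lo, hi, eta=20.0):
    """Simulated Binary Crossover (Deb & Agrawal 1994) for one real-valued gene.

    eta is the distribution index; eta = 20 gives the 'SBX-20' behaviour
    mentioned above (children tend to stay close to their parents)."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    # keep both children inside the slider's range
    clamp = lambda x: min(max(x, lo), hi)
    return clamp(c1), clamp(c2)

def snap_to_slider(x, lo, step):
    """Round a continuous gene value back onto a discrete slider grid."""
    return lo + round((x - lo) / step) * step

# two parent gene values on a slider running from 0 to 10 with step 1
c1, c2 = sbx_crossover(3.0, 7.0, lo=0.0, hi=10.0)
print(snap_to_slider(c1, 0.0, 1.0), snap_to_slider(c2, 0.0, 1.0))
```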

But:

- there might be better ones out there that I just haven't seen yet. Please tell me.

- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for the MOEA, which is AT LEAST as important as the breeding itself.

- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment; maybe there's a way to provide an interface so you can code those things yourself..!?

2) Elite size = SPEA-2 archive size, yes. The ratio depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size as the population (which, I just realise, is hard-coded in the new version) is big enough.

4) The non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (following SPEA-2's truncation strategy) are removed one by one until the size is reached. If the front is smaller than the archive, the fittest dominated individuals are put into the elite as well; the latter happens at the beginning of the run, when the front hasn't been discovered well yet.
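Roughly, that archive update could be outlined like this in Python. The `dominates`, `fitness` and `density` callables are placeholders for the real SPEA-2 operators (fitness assignment and nearest-neighbour truncation), so treat this as a sketch of the logic described above, not Octopus's implementation.

```python
def update_archive(population, archive_size, dominates, fitness, density):
    """Sketch of a SPEA-2 style elite/archive update.

    dominates(a, b)        -> True if a Pareto-dominates b
    fitness(ind)           -> scalar fitness, lower is better (placeholder)
    density(ind, archive)  -> crowding estimate used for truncation (placeholder)"""
    # 1. the non-dominated front always goes into the archive first
    front = [a for a in population
             if not any(dominates(b, a) for b in population if b is not a)]
    archive = list(front)

    # 2. if the archive overflows, drop the least important (most crowded)
    #    individuals one by one until the target size is reached
    while len(archive) > archive_size:
        worst = max(archive, key=lambda ind: density(ind, archive))
        archive.remove(worst)

    # 3. if it is still too small, pad it with the fittest dominated
    #    individuals (this mostly happens early in the run)
    if len(archive) < archive_size:
        dominated = sorted((a for a in population if a not in front), key=fitness)
        archive.extend(dominated[:archive_size - len(archive)])

    return archive
```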

3) Yes, it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively supports articulating preference relations on sets of solutions.

Thanks Robert. Very useful explanations... Fred

look here

Hi Robert,

I am right in the middle of trying to achieve convergence...

Would you also have any useful references for how to set the mutation rate, the tournament parameter and even the population size... as a starting point?

I also noticed that the solutions in the archive seem to be re-evaluated against the optimisation criteria at each generation (this is in the version for GH 0.9.0014). I understand why they would need to be re-ranked together with the new generation, but considering that they will have already been evaluated before being stored in the archive, do they need to be re-evaluated? Wouldn't we significantly reduce run times if we could avoid re-evaluating the archive each time?

Thanks!

Fred

A few additional questions:

What does a mutation rate of 0.15 mean? Does it mean the probability of a mutation occurring on a given gene is 15% at each generation?

Similarly, what does a 'tournament size' of 2 mean? Does it mean selection by tournament between pairs of solutions?

And a crossover rate of 0.90?

Finally, has the optimisation process changed in version 0.2, or is it mostly to do with the user interface and compatibility with GH 0.9.0056?

Thank you. 

Fred
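For what it's worth, these parameters are usually read along the following lines in a generic evolutionary loop. This is only a hedged Python sketch of the common convention, not a confirmation of how Octopus interprets them internally; all names here are illustrative.

```python
import random

def tournament_select(population, fitness, tournament_size=2):
    """'Tournament size 2': pick 2 individuals at random, keep the fitter one."""
    contestants = random.sample(population, tournament_size)
    return min(contestants, key=fitness)   # assuming lower fitness is better

def mutate(genome, lows, highs, mutation_rate=0.15):
    """'Mutation rate 0.15' read as: each gene has a 15% chance of being
    perturbed when an offspring is created."""
    return [random.uniform(lo, hi) if random.random() < mutation_rate else g
            for g, lo, hi in zip(genome, lows, highs)]

def maybe_crossover(p1, p2, crossover_fn, crossover_rate=0.90):
    """'Crossover rate 0.90' read as: 90% of parent pairs are recombined,
    the remaining 10% are copied through unchanged."""
    if random.random() < crossover_rate:
        return crossover_fn(p1, p2)
    return list(p1), list(p2)
```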

Just to share some findings along the way... This paper gives some hints on how to set the mutation rate.

Laumanns, M., E. Zitzler, and L. Thiele (2001). On the effects of archiving, elitism, and density based selection in evolutionary multi-objective optimization. In E. Zitzler, K. Deb, L. Thiele, C. A. C. Coello, and D. Corne (Eds.), Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO 2001), Volume 1993 of Lecture Notes in Computer Science, Berlin, pp. 181-196. Springer-Verlag.

The original paper on SPEA2 is also useful.

http://www.kddresearch.org/Courses/Spring-2007/CIS830/Handouts/P8.pdf

Regarding the re-evaluation: I thought this was in every version, but I will check! Of course this would be a mistake..

OK - let me know.

I have developed a couple of GH components to bypass the evaluation when a solution has already been tested... it seems to work fine so far, and runs much faster.
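In case it helps anyone doing the same, the core idea of such a bypass can be sketched in a few lines of Python: cache the objective values per genome and only call the expensive evaluation on genomes that have not been seen before. The `evaluate` callable and the rounding precision are placeholders for your own setup, not part of Octopus.

```python
# minimal fitness cache: skip re-evaluating genomes we have already seen
_cache = {}

def cached_evaluate(genome, evaluate, digits=9):
    """Return stored objective values when this genome was evaluated before.

    genome   : list of slider values
    evaluate : your (expensive) objective function, e.g. a run of the GH definition
    digits   : rounding used to build a stable dictionary key"""
    key = tuple(round(g, digits) for g in genome)
    if key not in _cache:
        _cache[key] = evaluate(genome)   # only computed on the first hit
    return _cache[key]
```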

 

Okay, I am back developing (I hope so at least), and just looked it up: in the new version the individuals are not evaluated again if they have already been computed.

For the old version for GH 0.9.0014 you are right; it seems I did not have that implemented back then.

OK thanks for confirming
