Grasshopper: algorithmic modeling for Rhino

# 2 Simulated Annealing Runs

Testing the new Simulated Annealing algorithm in Galapagos. This video contains two consecutive runs: the first colonizes the highest point, the second homes in on the second-highest point. Every point sampled by the solver is recorded and displayed as well.

Views: 712


Comment by David Rutten on August 16, 2011 at 5:31pm

Sorry guys, haven't been paying attention.

Arthur: It's a completely different solver but it will work with the exact same inputs. So once you set up all the sliders and fitnesses, you can choose to run either the Evolutionary solver or the Annealing solver. Or both.

Daniel: by "second highest" I meant the other local maximum. As in "K2 is the second highest mountain in the Himalayas", rather than the snowflake just next to the topmost flake on Everest. I.e. Fred was right.

Fred: "Because SA is able to go backwards, it can't be stuck in local extrema, then you are certain to find the best better best solution." Sort of. The solver starts out with a 'high temperature'. During this phase the solution is allowed to move downhill a lot; this is basically the phase in which we hope to avoid local maxima. As the temperature cools, it becomes less and less likely that a downhill step will be accepted (though the possibility never goes to zero); it is during this stage that the summit of the global maximum should be reached. These two phases are not clearly delineated; cooling is a continuous process.

This video shows the first successful run; I've since moved on. The two graphs show some of the basic stochastic distributions in the solver. They're not really all that informative; they just give you an instant feeling for how far along a certain run is. I've now got a Gaussian distribution showing lattice jump magnitude and a Cauchy cumulative distribution showing acceptance probabilities.

You don't need to understand these concepts to use the solver; all you need to do is press the button.

One final note, it's not all that important that the solver can go downhill. It makes it somewhat less likely that it gets stuck, but you need several runs to get a reliable answer anyway. A more important feature is that I start new runs in 'empty' regions in solution-space. You can imagine a single run as a series of connected points in the search-space. After a while the search-space is criss-crossed with filaments, each recording a single annealing track. I start new tracks in areas that have the fewest possible filaments. This means I'm actually pretty good at avoiding solution attraction basins that have already been sampled.
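David's "fewest filaments" restart idea could be sketched as a maximin rule. The names and details below are my guess at the behavior he describes, not Galapagos' actual code: draw a pool of random candidates and start the new track at the one whose nearest previously sampled point is farthest away.

```python
import random

def pick_restart(sampled, candidates=200, dims=2):
    """Pick a new start point in the least-sampled region of [0, 1]^dims.

    Crude stand-in for the solver's heuristic: among random candidates,
    keep the one whose nearest previously sampled point is farthest
    away (a maximin rule)."""
    def nearest_dist2(p):
        # squared distance from p to the closest already-sampled point
        return min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in sampled)

    pool = [tuple(random.random() for _ in range(dims)) for _ in range(candidates)]
    return max(pool, key=nearest_dist2)
```

With all previous samples clustered near one corner, the chosen restart lands far away from them, which is the "empty region" behavior described above.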

Comment by Fred Becquelin on August 16, 2011 at 10:01am

Well don't quote me, I really don't know what I'm talking about!

Because SA is able to go backwards, it can't be stuck in local extrema, then you are certain to find the best better best solution.

My guess (?) is that David added some kind of clustering and sorting on the results - see this comment.

I hope I'm not spoiling...

Comment by Daniel Hambleton on August 16, 2011 at 7:09am

Thanks Fred.

It must mean that of all the local maxima, choose the second highest.

Comment by Robert Vier on August 16, 2011 at 3:00am

The S.A. algorithm varies from Hill-Climbing in its decision of when to replace S, the original candidate solution, with R, its newly tweaked child. Specifically: if R is better than S, we'll always replace S with R as usual. But if R is worse than S, we may still replace S with R with a certain probability P(t, R, S):

P(t, R, S) = e^([Quality(R) − Quality(S)] / t)

where t >= 0. That is, the algorithm sometimes goes down hills. This equation is interesting in two ways. Note that the exponent is negative because R is worse than S. First, if R is much worse than S, the exponent is a large negative number, and so the probability is close to 0. If R is very close to S, the probability is close to 1. Thus if R isn't much worse than S, we'll still select R with a reasonable probability.

Second, we have a tunable parameter t. If t is close to 0, the exponent is again a large negative number, and so the probability is close to 0. If t is high, the probability is close to 1. The idea is to initially set t to a high number, which causes the algorithm to move to every newly-created solution regardless of how good it is; we're doing a random walk in the space. Then t decreases slowly, eventually to 0, at which point the algorithm is doing nothing more than plain Hill-Climbing.
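The acceptance rule in the excerpt condenses to a few lines. This is a generic sketch (the function and variable names are mine, not from Galapagos or the book):

```python
import math
import random

def accept(quality_r, quality_s, t):
    """Decide whether tweaked child R replaces candidate solution S.

    Improvements are always accepted; a worse R is accepted with
    probability P(t, R, S) = e^((Quality(R) - Quality(S)) / t)."""
    if quality_r >= quality_s:
        return True           # uphill (or flat) moves are always taken
    if t <= 0:
        return False          # at t = 0 this is plain Hill-Climbing
    return random.random() < math.exp((quality_r - quality_s) / t)
```

At high t the exponent is near zero, so almost every move is accepted (the random-walk phase); as t shrinks, the downhill acceptance probability decays toward zero.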

The rate at which we decrease t is called the algorithm’s schedule. The longer we stretch out
the schedule, the longer the algorithm resembles a random walk and the more exploration it does.
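Putting the acceptance rule and a schedule together on a toy 1-D landscape might look like this. The geometric cooling rule and all parameter values are my assumptions; the excerpt doesn't prescribe a particular schedule:

```python
import math
import random

def quality(x):
    # toy landscape: global maximum of 1 at x = 0, lower local maxima nearby
    return math.cos(3 * x) * math.exp(-x * x / 10)

def anneal(steps=5000, t0=1.0, cooling=0.999, step_size=0.5):
    s = random.uniform(-5, 5)                # initial candidate solution S
    best = s
    t = t0
    for _ in range(steps):
        r = s + random.gauss(0, step_size)   # tweaked child R
        dq = quality(r) - quality(s)
        # always go uphill; go downhill with probability e^(dq / t)
        if dq >= 0 or random.random() < math.exp(dq / t):
            s = r
        if quality(s) > quality(best):
            best = s
        t *= cooling                         # stretch this out for more exploration
    return best
```

Geometric cooling never reaches t = 0 exactly, so the exponential stays well-defined; the run simply behaves ever more like Hill-Climbing as t shrinks.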

Simulated Annealing gets its name from annealing, a process of cooling molten metal. If you let metal cool rapidly, its atoms aren't given a chance to settle into a tight lattice and are frozen in a random configuration, resulting in brittle metal. If we decrease the temperature very slowly, the atoms are given enough time to settle into a strong crystal. Not surprisingly, t means temperature.

[from 'Essentials of Metaheuristics' by Sean Luke]

Comment by Fred Becquelin on August 16, 2011 at 2:00am

I understand annealing is aware of local maxima, so it would solve for the second maximum... Am I right?
Comment by Daniel Hambleton on August 15, 2011 at 6:28pm

Cool!

What do you mean by "second highest"? On a smooth surface, shouldn't the second highest point be really close to the highest point? Or are you searching a discrete sampling of the surface?

Comment by taz on August 15, 2011 at 2:17pm

I would hazard a guess that the green and turquoise graphs are u, v coordinates... set up for a fixed number of search iterations...
Comment by Arthur Mamou-Mani on August 15, 2011 at 8:00am

Thanks David!

I read the Annealing article but didn't really get it in relation to Galapagos:

Are the green and turquoise graphs showing genomes for two different fitnesses?

Are the sampled points satisfying these two fitnesses?

Is SA similar to GA? Did I get it completely wrong?

Thanks again for all of this, it is so exciting!

Arthur

Comment by Daniel Piker on August 15, 2011 at 12:32am

Brilliant!
