Grasshopper

algorithmic modeling for Rhino

Comment by Ludovica Tomarchio on January 21, 2020 at 2:32am

Hi John,
Thanks for your amazing work. I would like to apply SOMs to sort a series of images based on a combination of their RGB or HSV values. I am new to GH and I'm struggling with how to format the inputs and how to read the outputs. Could you help me with the attached file? Kind regards,
Ludovica

Attachment: Kohonen_Images.gh
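One way to set this up, offered as a sketch rather than a fix to the attached definition: reduce each image to its mean colour so that every image becomes a single low-dimensional input vector, then let the SOM place similar vectors on neighbouring map nodes. The sketch below runs outside Grasshopper and assumes Pillow is available; the file names are hypothetical placeholders.

    # Reduce each image to one mean-colour vector (a hedged sketch, not the
    # attached definition's actual input format). File names are hypothetical.
    import colorsys
    from PIL import Image

    paths = ["img_01.png", "img_02.png", "img_03.png"]  # hypothetical files

    vectors = []
    for p in paths:
        img = Image.open(p).convert("RGB").resize((32, 32))  # downsample for speed
        pixels = list(img.getdata())
        n = float(len(pixels))
        r = sum(px[0] for px in pixels) / (255.0 * n)  # mean channels, scaled to 0..1
        g = sum(px[1] for px in pixels) / (255.0 * n)
        b = sum(px[2] for px in pixels) / (255.0 * n)
        h, s, v = colorsys.rgb_to_hsv(r, g, b)  # or keep [r, g, b] directly
        vectors.append([h, s, v])  # one 3D input per image, ready for the SOM

Each entry of 'vectors' is then one input to the map, and reading the trained map back out amounts to asking which node each image's vector wins.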

Comment by Theodoros Galanos on March 3, 2017 at 12:13am

Also, did you get numpy to work in GH? :D

Comment by Theodoros Galanos on March 3, 2017 at 12:12am

Hi John,

Thank you for sharing; this is beautiful work! The idea is very close to a convolutional network; perhaps a convolutional autoencoder would be interesting to compare with this.

I wonder if I could have a moment of your time in the future to discuss ANNs and GH, mostly to share my thoughts on how I plan to introduce them.

Kind regards,

Theodore.

Comment by John Harding on March 18, 2016 at 4:49pm

A random seed input has now been added to help you tune your maps. Please note there was a small bug when using the decay rates, which has now been corrected. Thank you, Max, for bringing it to my attention.

Both example files have been updated to suit this new release (0.1.3). I'll put the source on git and do some proper version control in due course.

Comment by Max Marschall on March 15, 2016 at 12:01pm

Hi John,

I've been trying to push the number of inputs, so far with limited success...

I'm thinking that in order to achieve that, I might need to control the

- initial neighborhood radius and

- number of iterations (what is the current criterion for convergence?)

Does that make sense? Would it be possible to make those explicit?

Cheers,

Max
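The component's stopping rule isn't documented in this thread, but a common convergence criterion for SOMs is to watch the quantization error (the mean distance from each input to its best-matching node) and stop once it plateaus. A self-contained sketch, using a deliberately crude stand-in for the real update step:

    import numpy as np

    def quantization_error(inputs, nodes):
        # Mean Euclidean distance from each input to its best-matching node.
        dists = np.linalg.norm(inputs[:, None, :] - nodes[None, :, :], axis=2)
        return dists.min(axis=1).mean()

    # Toy setup: 200 random 3D inputs, an 8x8 map flattened to 64 nodes.
    rng = np.random.default_rng(0)
    inputs = rng.random((200, 3))
    nodes = rng.random((64, 3))

    prev, tol = np.inf, 1e-5
    for epoch in range(1000):
        # Crude placeholder update: nudge each node toward the mean of the
        # inputs it currently wins (standing in for the real SOM step).
        winners = np.linalg.norm(
            inputs[:, None, :] - nodes[None, :, :], axis=2).argmin(axis=1)
        for k in range(len(nodes)):
            won = inputs[winners == k]
            if len(won):
                nodes[k] += 0.1 * (won.mean(axis=0) - nodes[k])
        err = quantization_error(inputs, nodes)
        if prev - err < tol:  # error has plateaued: treat as converged
            break
        prev = err
    print("stopped after", epoch + 1, "epochs, QE =", round(err, 4))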

Comment by djordje on March 4, 2016 at 12:28pm

Wonderful work John!!
Thank you for sharing it.

Comment by John Harding on March 4, 2016 at 6:40am

Learning exponential decay constants now exposed for tweaking:

  • WinLearnDecay: decay constant for the winner's learning rate.
  • LearnDecay: decay constant for the neighbouring nodes' learning rate.
  • NeighDecay: decay constant for the size of the neighbourhood influence (starts at half the map size). Note: this size is also now given as the output 'Neigh'.
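Reading these as standard exponential decay constants, the schedules presumably take the following shape; the starting values and decay constants below are illustrative assumptions, not values read from the component's source.

    import math

    def schedules(t, map_size=8,
                  win_learn0=0.95, win_learn_decay=0.01,  # winner rate + decay
                  learn0=0.90, learn_decay=0.01,          # neighbour rate + decay
                  neigh_decay=0.05):                      # radius decay
        # t is the current iteration; all constants are illustrative defaults.
        win_learn = win_learn0 * math.exp(-win_learn_decay * t)
        learn = learn0 * math.exp(-learn_decay * t)
        # Neighbourhood radius starts at half the map size, as noted above.
        neigh = (map_size / 2.0) * math.exp(-neigh_decay * t)
        return win_learn, learn, neigh

    print(schedules(0))    # initial rates and radius
    print(schedules(100))  # everything has cooled down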

Comment by John Harding on February 22, 2016 at 4:28am

Hi Max,

Setting the size of the map relative to the number of inputs seems to be a bit of a dark art. In most examples you'll see around 5 to 10 times as many map nodes as there are inputs, but this also depends on the learning rates.
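(Worked through, that rule of thumb would pair, say, 20 inputs with roughly 100 to 200 map nodes, i.e. a grid somewhere between 10x10 and 14x14.)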

In this example a map has been trained with 200 inputs on an 8x8 map; the learning rates had to be quite low to get this to work. So in theory the approach you have adopted should be achievable, but you need more control over the learning decay rates.

So the issue at the moment here lies with how much I make explicit, for instance the following parameters are not exposed but probably should be:

  1. Initial Euclidean radius of influence when a winning node is 'fired'
  2. Decay rate of this radius
  3. Decay rate of 'winLearn' multiplier
  4. Decay rate of 'learn' multiplier

The decay rates essentially cool the map down to some kind of equilibrium. Obviously these really need to be parameters you can modify... particularly the last two. I'll get onto this now.
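To make those four parameters concrete, here is a minimal sketch of a single SOM update step in the shape described above: the winning node is 'fired', nodes inside a shrinking Euclidean radius are pulled toward the input, and the winner and its neighbours use separate, decaying learning rates. All names and constants are illustrative, not taken from the component.

    import numpy as np

    def som_step(x, nodes, grid, t,
                 radius0=4.0, radius_decay=0.05,   # parameters 1 and 2
                 win_learn0=0.95, win_decay=0.01,  # parameter 3
                 learn0=0.90, learn_decay=0.01):   # parameter 4
        # x: one input vector; nodes: (N, d) weights; grid: (N, 2) map positions.
        radius = radius0 * np.exp(-radius_decay * t)
        win_lr = win_learn0 * np.exp(-win_decay * t)
        lr = learn0 * np.exp(-learn_decay * t)

        win = np.linalg.norm(nodes - x, axis=1).argmin()  # winning node 'fires'
        d = np.linalg.norm(grid - grid[win], axis=1)      # distances on the map
        hood = (d <= radius) & (np.arange(len(nodes)) != win)

        nodes[win] += win_lr * (x - nodes[win])   # pull the winner hardest
        nodes[hood] += lr * (x - nodes[hood])     # then its neighbourhood

    # Usage: an 8x8 map of 3D weights trained on 200 random inputs.
    rng = np.random.default_rng(1)
    grid = np.array([(i, j) for i in range(8) for j in range(8)], dtype=float)
    nodes = rng.random((64, 3))
    for t, x in enumerate(rng.random((200, 3))):
        som_step(x, nodes, grid, t)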

By the way, a very good tutorial that goes through the process step by step is located here.

John.

Comment by Max Marschall on February 19, 2016 at 9:04am

Hi again,

That definitely cleared things up. I guess I was using the Kohonen Map for the wrong purpose: imagine taking your original, random map (t=0) and simply reorganizing all of those different little boxes (without changing their appearance) into a distribution that looks a bit more like t=100. I was trying to visualize a parameter space in a way that doesn't have a fractal look, the purpose being to better identify the "neighbors" of a variation.

That would probably mean defining as many inputs as there are points in the map. I realize of course the difficulty of such a task, but could you please write something about the limits?

Here is a little test I just did. Defining too many inputs causes the algorithm to fail. If I turn down the learning rates it seems to work better; however, at a certain point it can't seem to include all the inputs.

In any case a great and useful tool, thanks for sharing!

Cheers,

Max

Comment by John Harding on February 19, 2016 at 6:04am

Hi Max,

Seems a bit strange that it's not representing the domain very well, but I suspect you might need a higher learning rate (winlearn and learn).

See the attached 4-dimensional example, which seems to handle 4 inputs quite well with learning rates of 0.95 and 0.9 respectively. Perhaps you can compare this to your example.

[Images attached: graph and output.]

Thanks for trying out the component and let me know how you get on.

Using SOMs to visualise the design space for high-dimensional models has a lot of potential.

Best wishes,

John.
