Grasshopper

algorithmic modeling for Rhino

Hi,

I have a custom GH_Param(Of GH_Number) with list access when it is used as an input and item access when used as an output. For this question, only the input case matters.

I need to keep a value associated with each piece of input data, not one per source but one per individual value. A list kept as a member field of the parameter could store them, although perhaps a GH_Structure would be better?

The problem is controlling all the associated processes (adding, removing, or changing sources) while keeping my list of weights/values synchronized. For example, if I remove the source with index 2, all the weights that came from source index 2 should also be removed from my list. Sometimes the same source changes the amount of data it supplies, so the amount of weights must change as well.

So what's the best way to do this? Is there any way to avoid overriding every process (add, remove, change...) separately? Is there a smarter approach?

There are many methods that seem related [AddVolatileDataList(GH_Path, IEnumerable), CollectVolatileData_Custom(), CollectVolatileData_FromSources(), OnObjectChanged(GH_ObjectEventType), OnVolatileDataCollected()], so I need some guidance.

I hope that makes sense. Thank you!


Replies to This Discussion

This is very difficult. GH_Structure doesn't allow you to store meta-data for entries, and if you create a custom data type which has room for your meta-data, that data type will either not work with other Grasshopper components, or your meta-data will disappear as soon as your number+data passes through any other component.

You could keep a class-level dictionary/list of associated data (don't make it global or static though), since you can uniquely identify whether two GH_Numbers are the same instance, not just whether they have the same value. You can use ReferenceEquals() to test for instance identity between two reference types. (GH_Number is a reference type, i.e. a class, whereas double is a value type, i.e. a struct.)
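To illustrate the instance-vs-value distinction David describes: the Python analogue of .NET's ReferenceEquals() is the `is` operator. This is a language-neutral sketch; the GHNumber class below is a hypothetical stand-in for GH_Number, not the real SDK type.

```python
class GHNumber:
    """Hypothetical stand-in for Grasshopper's GH_Number: a reference
    type (a class) wrapping a double."""
    def __init__(self, value):
        self.value = value

a = GHNumber(1.0)
b = GHNumber(1.0)
c = a

print(a.value == b.value)  # True: same value
print(a is b)              # False: distinct instances (ReferenceEquals -> false)
print(a is c)              # True: same instance (ReferenceEquals -> true)
```

Two GH_Numbers holding the same double are therefore still distinguishable as separate entries, which is what makes per-instance meta-data possible.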

I don't fully understand what you're after though. I don't know who is responsible for assigning this meta-data, nor where it is used.

Thanks for the reply David.


Let me elaborate a bit more. This meta-data only has to be accessible by the component that owns the parameter and by certain other special components (outside the native Grasshopper solution cycle, something like Anemone). These numbers are weight values, used in the most common strategy in learning algorithms, such as artificial neural networks. The initial weights are generated randomly. Given an architecture or connection diagram between neurons, and after training the network, these weights are where the knowledge is stored. So, topology (and specific configuration) + weights = deep learning.


These values are not transported; they are specific to each parameter, and there must be one weight for each input number. Each component (a neuron, in my case) has a special parameter that receives a list of numbers and carries this additional meta-data. Components only return a number (no weight). They will only be connected to components of the same type (other neurons) or to numerical emitters (sliders, Range, etc.).
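For context on what each such neuron computes with its weights, here is a generic weighted-sum neuron sketched in Python, one weight per input value. The function name and the sigmoid activation are my assumptions for illustration, not details from the post.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """A single neuron: weighted sum of the inputs (exactly one weight
    per input value) squashed through a sigmoid activation."""
    if len(inputs) != len(weights):
        raise ValueError("need exactly one weight per input value")
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))  # sigmoid maps any sum into (0, 1)
```

The length check is the invariant the whole question is about: the weight list must always match the input list.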


So my problem reduces to this: when I add a source to that parameter, the same number of weights should be added as there are input values; when I remove a source, the associated weights should be removed. In practice, the amount of input values will only change at the start, while the network architecture is being configured. In that phase, keeping object references is not essential; what matters is always having the same number of weights as input values, because until the network is ready to be trained the weights can take any random value. And once the network is being trained, the amount of input values should not change.
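The resynchronization rule described above (one weight per input value; removed sources drop their weights; new or grown sources get random initial weights) can be sketched like this, with the actual Grasshopper source/parameter plumbing omitted. The function and argument names are hypothetical.

```python
import random

def resync_weights(weights_per_source, counts_per_source):
    """Rebuild per-source weight lists so source i has exactly
    counts_per_source[i] weights:
      - sources past the end of the new count list are dropped entirely
      - a shrunk source keeps the leading portion of its old weights
      - a grown (or brand-new) source is padded with random initial weights
    """
    rebuilt = []
    for i, count in enumerate(counts_per_source):
        old = weights_per_source[i] if i < len(weights_per_source) else []
        branch = old[:count]
        branch += [random.uniform(-1.0, 1.0) for _ in range(count - len(branch))]
        rebuilt.append(branch)
    return rebuilt
```

Calling this once per solution pass keeps the weight counts in lockstep with the input counts without having to override each add/remove/change event separately.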

It must be difficult to follow all this, excuse me. I hope I have laid enough groundwork for you to advise me. If you have more questions, please ask.

And when the network is to be trained, the amount of input values should not be changed.

How about the order? Is that likely to change?

No.

Normally these networks are trained using a database. In the case of supervised training, it consists of lists of ordered input numbers, each paired with a desired output. Training consists of adjusting the weights (using, for example, the derivative of the error with respect to each weight at that neuron's local output) so that the network's obtained output approaches the desired output. But that order cannot be changed, since each input number represents a feature that will be learned. Therefore the order of the weights must not change.
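The weight adjustment described here is essentially the delta rule. A minimal sketch for a single linear neuron, assuming a squared-error loss (the function name and learning rate are my assumptions):

```python
def train_step(inputs, weights, target, rate=0.1):
    """One supervised update of a single linear neuron's weights: step
    each weight against the gradient of the squared error."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = output - target
    # d((error^2)/2)/dw_i = error * x_i, so move each w_i the other way
    return [w - rate * error * x for w, x in zip(weights, inputs)]
```

Note that the update for weight i uses input i, which is why the pairing (and hence the order) of weights and inputs must stay fixed throughout training.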

What do you advise me?

I would start by keeping a DataTree<double> on your component. Every time BeforeSolveInstance() is called you make sure that your data tree has the same topology as the data inside the relevant input parameter. If not, you rebuild.

If you want to persist weight values during changes to the data topology, you may also need to keep an additional dictionary which allows you to more permanently cache weight/goo associations. But this is something that can be added later if you want.
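A language-neutral sketch of the rebuild David suggests, modelling the DataTree as a dict from paths (tuples standing in for GH_Path) to branches. Existing weights are preserved wherever the topology overlaps; new slots get random initial values. This is an illustrative sketch of the idea, not GH SDK code.

```python
import random

def sync_weight_tree(weight_tree, data_tree):
    """Return a weight tree whose topology (paths and branch lengths)
    matches data_tree, reusing existing weights where path and index
    overlap, and filling any new slots with random initial values.
    Paths that disappeared from the data are simply dropped."""
    synced = {}
    for path, items in data_tree.items():
        old = weight_tree.get(path, [])
        branch = old[:len(items)]
        branch += [random.uniform(-1.0, 1.0) for _ in range(len(items) - len(branch))]
        synced[path] = branch
    return synced
```

In the component itself this check would run once per solve (e.g. from BeforeSolveInstance()), comparing against the volatile data of the input parameter.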

Thank you very much David!


I was using OnVolatileDataCollected() instead of BeforeSolveInstance(). That's equally valid, right?

I'm not sure why you advise me to use a dictionary. Can a reference type (like GH_Number) be used as a key, so that dictionary entries are keyed by instance?

I just tried it and it works! That sounds like a very good choice :) Thanks!
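For reference: a .NET Dictionary keys by reference for a class that doesn't override Equals/GetHashCode. Python objects also hash by identity by default, but an explicit wrapper makes the intent visible. An illustrative sketch, not GH SDK code; Num is a hypothetical stand-in for GH_Number.

```python
class IdentityKey:
    """Wraps any object so a dict keys it by identity (like .NET
    reference equality) rather than by value equality."""
    __slots__ = ("obj",)

    def __init__(self, obj):
        self.obj = obj

    def __hash__(self):
        return id(self.obj)

    def __eq__(self, other):
        return isinstance(other, IdentityKey) and self.obj is other.obj


class Num:
    """Hypothetical stand-in for GH_Number."""
    def __init__(self, value):
        self.value = value


a, b = Num(1.0), Num(1.0)       # equal values, distinct instances
weights = {IdentityKey(a): 0.3, IdentityKey(b): 0.7}
print(len(weights))             # 2: each instance gets its own entry
print(weights[IdentityKey(a)])  # 0.3
```

This is what lets two inputs with the same numeric value carry different weights.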

Ah yes, well, OnVolatileDataCollected is part of the parameter, BeforeSolveInstance is part of the component. So use one or the other depending on where your code is located.
