available yet on this front.
Here's a basic breakdown:
1. Galapagos populates the first generation (G[0]) with random individuals. Basically the sliders are all set at random values.
2. Now we step into the generic evolutionary loop; G[0] becomes G[n], as the loop is the same for all generations.
3. For each individual in G[n] the fitness is computed. This is the most time-consuming operation in the solver.
4. The individuals in G[n] must populate G[n+1]. There are two ways in which this can happen:
- Individuals 'survive' the generation gap and are present in both G[n] and G[n+1]
- Individuals mate to produce offspring that populate G[n+1]
Often, fit individuals will use both routes.
5. Creating offspring is a complex procedure and there are many factors that affect it.
5a. Coupling: this step involves picking individuals from G[n] to form mating couples. Individuals can be picked isotropically (i.e. everyone has an equal chance of being picked, regardless of fitness), exclusively (i.e. only the fittest X% are allowed to mate, but within that group everyone is equally likely to mate) or biased (i.e. the fitter an individual, the higher the chance it finds a mate, but everybody has a chance).
5b. Mate selection: once an individual has been picked for mating (step 5a), he/she needs to find a mate in G[n]. Instead of picking another fit individual, mate selection happens based on genetic distance: individuals could prefer very similar mates, very different mates, or something in between. This is controlled by the "Inbreeding factor" in Galapagos. A high inbreeding factor will result in 'incestuous' couples, a low factor will result in 'zoophilic' couples. Neither extreme is healthy.
5c. Coalescence: once a couple has been formed, offspring needs to be generated. Basically, coalescence defines how the genomes of mommy and daddy are combined to produce little Johnny. The closest analogy with biological coalescence is crossover, where P out of Q genes are inherited from mom and (Q - P) genes are inherited from dad. In Galapagos these genes are always consecutive; thus if the genome consists of 5 genes, the first 3 come from mom and the last 2 come from dad, or the first 1 comes from mom and the last 4 come from dad. The number of genes per parent is random. Genes can also be interpolated (there is no analogy for this in biological evolution): since a single gene in Galapagos is nothing more than a slider position, it is quite easy to average the positions for mom and dad. Finally, genes can be created via preference blending, which is very similar to interpolation except that the blending is weighted by the relative fitness of both parents.
5d. Mutations: once the offspring genome has been created in step 5c, mutations are applied. Mutations are random events that affect gene values in random ways. Although the Galapagos engine supports several kinds of mutations, in Grasshopper it only makes sense to allow point mutations, as it is not possible to grow or shrink the number of sliders.
6. Finally, a new generation is populated and solved for fitness. There is an optional final step which can ensure that fit individuals do not get lost in the process. The "Maintain High Fitness" value controls what percentage of individuals from G[n] are allowed to displace individuals in G[n+1], provided they are fitter. By default this percentage is 10, which basically means that the 10% fittest individuals in G[n] are compared to the 10% lamest individuals in G[n+1], and if grandpa is indeed fitter, he's allowed to bump junior off the list.
7. This process (step 2 - step 6) repeats until the maximum number of generations has been reached, until no progress has been made for a specified number of generations or until a specific fitness value has been reached.
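The whole loop (steps 1 through 7) can be sketched in a few dozen lines of Python. This is an illustrative toy, not Galapagos source code: the fitness function, the slider count, and all parameter values are made up, and only the biased coupling and crossover coalescence variants are shown.

```python
import random

GENES = 5            # number of sliders
POP = 20             # individuals per generation
MAINTAIN = 0.10      # "Maintain High Fitness" fraction
MUTATION_RATE = 0.05
MAX_GENERATIONS = 50

def fitness(genome):
    # Hypothetical fitness: how close the slider values sum to 4.0.
    return -abs(sum(genome) - 4.0)

def distance(a, b):
    # Genetic distance, used for mate selection (step 5b).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def pick_biased(pop, scores):
    # Biased coupling (step 5a): fitter individuals are more likely
    # to be picked, but everybody has a chance.
    ranked = sorted(range(len(pop)), key=lambda i: scores[i])
    weights = [r + 1 for r in range(len(pop))]  # rank-based weights
    return pop[random.choices(ranked, weights=weights)[0]]

def pick_mate(first, pop, inbreeding=0.5):
    # Mate selection (step 5b): prefer genetically similar partners
    # when the inbreeding factor is high, dissimilar when it is low.
    others = [g for g in pop if g is not first]
    others.sort(key=lambda g: distance(first, g))
    return others[int((1.0 - inbreeding) * (len(others) - 1))]

def coalesce(mom, dad):
    # Crossover coalescence (step 5c): the first P genes come from
    # mom, the remaining genes from dad; P is random.
    p = random.randint(1, GENES - 1)
    return mom[:p] + dad[p:]

def mutate(genome):
    # Point mutations (step 5d): randomly nudge slider positions.
    return [min(1.0, max(0.0, g + random.gauss(0, 0.1)))
            if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    # Step 1: G[0] is filled with random slider settings.
    pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
    for gen in range(MAX_GENERATIONS):
        scores = [fitness(g) for g in pop]           # step 3
        nxt = []
        while len(nxt) < POP:                        # steps 4-5
            mom = pick_biased(pop, scores)
            dad = pick_mate(mom, pop)
            nxt.append(mutate(coalesce(mom, dad)))
        # Step 6: the fittest 10% of G[n] may displace the lamest
        # 10% of G[n+1] when they are actually fitter.
        n_keep = max(1, int(MAINTAIN * POP))
        old_best = sorted(pop, key=fitness, reverse=True)[:n_keep]
        nxt.sort(key=fitness)
        for i, elder in enumerate(old_best):
            if fitness(elder) > fitness(nxt[i]):
                nxt[i] = elder
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

A real solver would add the termination tests from step 7 (stagnation counter, target fitness) instead of always running the maximum number of generations.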
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
t file** - a PLY file with just x, y, z locations. I got it from a 3D scanner. Here is how the first few lines of the file look:

```
ply
format ascii 1.0
comment VCGLIB generated
element vertex 6183
property float x
property float y
property float z
end_header
-32.3271 -43.9859 11.5124
-32.0631 -43.983 11.4945
12.9266 -44.4913 28.2031
13.1701 -44.4918 28.2568
13.4138 -44.4892 28.2531
13.6581 -44.4834 28.1941
13.9012 -44.4851 28.2684
...
```

In case you need the data - please email me at **nisha.m234@gmail.com**.

**Algorithm:** I am trying to find principal curvatures for extracting the ridges and valleys. The steps I am following are:

1. Take a point x.
2. Find its k nearest neighbors. I used k from 3 to 20.
3. Average the k nearest neighbors => gives (_x, _y, _z).
4. Compute the covariance matrix.
5. Take the eigenvalues and eigenvectors of this covariance matrix.
6. Get u, v and n from the eigenvectors: u corresponds to the largest eigenvalue, v to the 2nd largest, and n to the smallest.
7. Transform each point (x, y, z) into the local frame:

```
[ui]   [u]   [x - _x]
[vi] = [v] x [y - _y]
[ni]   [n]   [z - _z]
```

8. For each i of the k nearest neighbors, solve for a, b and c with least squares:

```
[n1 ]   [u1*u1  u1*v1  v1*v1] [a]
[n2 ] = [u2*u2  u2*v2  v2*v2] [b]
[...]   [ ...    ...    ... ] [c]
[nk ]   [uk*uk  uk*vk  vk*vk]
```

9. This gives me a, b and c.
10. Compute the eigenvalues of the matrix

```
[a b]
[b c]
```

11. This gives me 2 eigenvalues: one is Kmin and the other Kmax.

**My Problem:** The output is nowhere close to finding the correct ridges and valleys. I am totally stuck and frustrated. I am not sure where exactly I am getting it wrong. I think the normals are not computed correctly, but I am not sure. I am very new to graphics programming, so this math (normals, shaders) goes way over my head. Any help will be appreciated.
**PLEASE PLEASE HELP!!**

**Resources:** I am using Visual Studio 2010 + Eigen Library + ANN Library.

**Other Options used:** I tried MeshLab. I used ball-pivoting remeshing in MeshLab and then applied the polkadot3d shader. It correctly identifies the ridges and valleys, but I am not able to code it.

**My Function:**

```cpp
// the function outputs to a ply file
void getEigen()
{
    int nPts;                    // actual number of data points
    ANNpointArray dataPts;       // data points
    ANNpoint queryPt;            // query point
    ANNidxArray nnIdx;           // near neighbor indices
    ANNdistArray dists;          // near neighbor distances
    ANNkd_tree* kdTree;          // search structure

    // for k = 25 and eps = 2, seems to get a few ridges
    queryPt = annAllocPt(dim);           // allocate query point
    dataPts = annAllocPts(maxPts, dim);  // allocate data points
    nnIdx = new ANNidx[k];               // allocate near neighbor indices
    dists = new ANNdist[k];              // allocate near neighbor dists

    nPts = 0;                            // read data points

    ifstream dataStream;
    dataStream.open(inputFile, ios::in); // open data file
    dataIn = &dataStream;

    ifstream queryStream;
    queryStream.open("input/query.pts", ios::in); // open query file
    queryIn = &queryStream;

    while (nPts < maxPts && readPt(*dataIn, dataPts[nPts]))
        nPts++;

    kdTree = new ANNkd_tree( // build search structure
        dataPts,             // the data points
        nPts,                // number of points
        dim);                // dimension of space

    while (readPt(*queryIn, queryPt)) // read query points
    {
        kdTree->annkSearch( // search
            queryPt,        // query point
            k,              // number of near neighbors
            nnIdx,          // nearest neighbors (returned)
            dists,          // distance (returned)
            eps);           // error bound

        double x = queryPt[0];
        double y = queryPt[1];
        double z = queryPt[2];

        double _x = 0.0;
        double _y = 0.0;
        double _z = 0.0;

#pragma region Compute covariance matrix
        for (int i = 0; i < k; i++)
        {
            _x += dataPts[nnIdx[i]][0];
            _y += dataPts[nnIdx[i]][1];
            _z += dataPts[nnIdx[i]][2];
        }
        _x = _x/k;
        _y = _y/k;
        _z = _z/k;

        double A[3][3] = {0,0,0,0,0,0,0,0,0};
        for (int i = 0; i < k; i++)
        {
            double X = dataPts[nnIdx[i]][0];
            double Y = dataPts[nnIdx[i]][1];
            double Z = dataPts[nnIdx[i]][2];

            A[0][0] += (X-_x) * (X-_x);
            A[0][1] += (X-_x) * (Y-_y);
            A[0][2] += (X-_x) * (Z-_z);
            A[1][0] += (Y-_y) * (X-_x);
            A[1][1] += (Y-_y) * (Y-_y);
            A[1][2] += (Y-_y) * (Z-_z);
            A[2][0] += (Z-_z) * (X-_x);
            A[2][1] += (Z-_z) * (Y-_y);
            A[2][2] += (Z-_z) * (Z-_z);
        }

        MatrixXd C(3,3);
        C << A[0][0]/k, A[0][1]/k, A[0][2]/k,
             A[1][0]/k, A[1][1]/k, A[1][2]/k,
             A[2][0]/k, A[2][1]/k, A[2][2]/k;
#pragma endregion

        EigenSolver<MatrixXd> es(C);
        MatrixXd Eval = es.eigenvalues().real().asDiagonal();
        MatrixXd Evec = es.eigenvectors().real();

        MatrixXd u,v,n;
        double a = Eval.row(0).col(0).value();
        double b = Eval.row(1).col(1).value();
        double c = Eval.row(2).col(2).value();

#pragma region SET U V N
        if (a>b && a>c)
        {
            u = Evec.row(0);
            if (b>c) { v = Eval.row(1); n = Eval.row(2); }
            else     { v = Eval.row(2); n = Eval.row(1); }
        }
        else if (b>a && b>c)
        {
            u = Evec.row(1);
            if (a>c) { v = Eval.row(0); n = Eval.row(2); }
            else     { v = Eval.row(2); n = Eval.row(0); }
        }
        else
        {
            u = Eval.row(2);
            if (a>b) { v = Eval.row(0); n = Eval.row(1); }
            else     { v = Eval.row(1); n = Eval.row(0); }
        }
#pragma endregion

        MatrixXd O(3,3);
        O << u, v, n;

        MatrixXd UV(k,3);
        VectorXd N(k,1);
        for (int i = 0; i < k; i++)
        {
            double x = dataPts[nnIdx[i]][0];
            double y = dataPts[nnIdx[i]][1];
            double z = dataPts[nnIdx[i]][2];

            MatrixXd X(3,1);
            X << x-_x, y-_y, z-_z;

            MatrixXd T = O * X;
            double ui = T.row(0).col(0).value();
            double vi = T.row(1).col(0).value();
            double ni = T.row(2).col(0).value();

            UV.row(i) << ui * ui, ui * vi, vi * vi;
            N.row(i) << ni;
        }

        Vector3d S = UV.colPivHouseholderQr().solve(N);

        MatrixXd II(2,2);
        II << S.row(0).value(), S.row(1).value(),
              S.row(1).value(), S.row(2).value();

        EigenSolver<MatrixXd> es2(II);
        MatrixXd Eval2 = es2.eigenvalues().real().asDiagonal();
        MatrixXd Evec2 = es2.eigenvectors().real();

        double kmin, kmax;
        if (Eval2.row(0).col(0).value() < Eval2.row(1).col(1).value())
        {
            kmin = Eval2.row(0).col(0).value();
            kmax = Eval2.row(1).col(1).value();
        }
        else
        {
            kmax = Eval2.row(0).col(0).value();
            kmin = Eval2.row(1).col(1).value();
        }

        double thresh = 0.0020078;
        if (kmin < thresh && kmax > thresh)
            cout << x << " " << y << " " << z << " 255 0 0" << endl;
        else
            cout << x << " " << y << " " << z << " 255 255 255" << endl;
    }

    delete [] nnIdx;
    delete [] dists;
    delete kdTree;
    annClose();
}
```

Thanks,
NISHA…
o, I'm not sure it's always solvable, given a particular pair of curves defining the loft.
I came up with a looping solution using hoopsnake. It's a bit clunky and complex; it's very possible that I overlooked a simpler way to solve the problem. I've clustered the contents of the repeating step for clarity, but the inside is a bit of a mess. Here's a summary of the approach:
1. Take two curved edges
2. from the start points of the two edges, calculate the maximum possible extent of a valid loft on each curve
3. within that range, calculate a series of points along each edge
4. find all possible pairings of points between the two sides (the straight edge of the next division)
5. calculate the lofts between each possible edge pairing
6. test all lofts for the desired condition (one vertex per quadrant)
7. sort successful lofts by area, grab the largest one
8. output the original curves minus the portions now included in the loft
This process is repeated until it is no longer able to find a solution. Sometimes it seems to be able to traverse the entire pair of curves, and other times it fails in the middle.
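The repeating step can be sketched roughly like this in Python. The curves are reduced to parameter ranges in [0, 1], and the loft validity test and area measure are made-up stand-ins; in the actual definition these are Grasshopper loft/area components driven by HoopSnake.

```python
def sketch_greedy_lofts(n_samples=10):
    a, b = 0.0, 0.0            # current start parameter on each edge
    lofts = []
    while a < 1.0 and b < 1.0:
        best = None
        # Steps 3-7: try all pairings of sample points within the
        # remaining range and keep the largest valid loft.
        for i in range(1, n_samples + 1):
            for j in range(1, n_samples + 1):
                ta = a + (1.0 - a) * i / n_samples
                tb = b + (1.0 - b) * j / n_samples
                if not loft_is_valid(a, ta, b, tb):
                    continue
                area = loft_area(a, ta, b, tb)
                if best is None or area > best[0]:
                    best = (area, ta, tb)
        if best is None:
            break               # no valid loft left: stop mid-way
        lofts.append(best)
        # Step 8: drop the portions of both curves now consumed.
        _, a, b = best
    return lofts

# Stand-in predicates; the real versions would test "one vertex per
# quadrant" and measure area on the actual loft surface.
def loft_is_valid(a0, a1, b0, b1):
    return abs((a1 - a0) - (b1 - b0)) < 0.3

def loft_area(a0, a1, b0, b1):
    return (a1 - a0) * (b1 - b0)

segments = sketch_greedy_lofts()
```

The greedy choice (largest area first) is also why the sketch, like the definition, can fail in the middle: an early large loft can leave an unpairable remainder.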
Hope this explanation is reasonably clear... I'm happy to answer questions if you have any.
…
hat software I use: MicroStation, AECOsim (BIM), Catia/Siemens NX, Generative Components (slow, faulty, almost dead), Quest3D (boy! this is from planet Zorg) and GH (good fun but NOT for "strict" AEC matters). I use numerous other stuff for fun as well.
4. Do I use Modo for other than fun? No ... because I hate subdivision modeling (but this doesn't stop me from admiring a stunning product made by the best out there).
5. Am I a Bentley man? Yes (and no).
6. Does Rhino need a "top" rendering thing and/or Nexus? No.
7. Does Rhino need Quest3D? Yes.
8. Can Rhino do AEC things? No (but can act in a third violin role).
9. Should Rhino target AEC? Yes (requires a lot of money, mind).
10. Should GH target AEC? Yes (see n9).
PS: still renderings are previous century stuff: there's the next century going on now (from what I'm told, he he). Do people know that? No (as usual).
best, Peter…
ride random, the workflow is fucking hard. I understand the topology/function scheme, but it is difficult to handle. Could you, when you can, publish some more examples about this?
3. Would it not be possible (and easy) to add a component that blocks the use of a parameter? That is, in the ingredients, inserting a component on the inputs that will not be used (and not considered by the topology).
4. Would it be useful to add a couple of components that act as magnets, to raise the probability of a connection between two components? That is, between the output of one component in the ingredients and the input of another.
5. Would it be possible to add groups of connected components to the ingredients, rather than individual components? It's a nuisance having to deduce the map of connections individually. It would be nice to have the option of using a piece of a definition as a component in itself.
6. Why did you implement this topology/function format? Did you consider doing it with the tree-path format? (But I do not really know what I'm talking about.)
7. Can you establish a list of intervals for the slider domains, rather than a single possible one?
It was cool to get into this; although I have seen just a little of your thesis, some of my neurons have exploded from overheating. Thank you. What do you currently do?…
cause I am such a beginner... I'll just do a quick paraphrase to make sure I'm up to speed? Or sorta jogging along at a reasonable pace...
1. When you say "holes" you literally mean perforations in the surface... right? And these are coded as "loops"
2. So for each hole, each is pushed around by the unary z (or I suppose any force...?) and is locked up to some extent by an anchor (which can be stretchy).
3. What is the max span? You mean the max distance between points? Sorry... lots of dumb questions here.
4. Noted.
5. Code bites hard. I'm pretty sure. You are too awesome. I can be a happy bunny... just how do you make this code?
6. No worries about real life. This is completely digital. It could be an art project... maybe a visual social network... idk... doesn't need to be built... just visualized.
7. You are my hero. I'm working through it all now. Maybe if I get super smart super fast I will process all of this and be a kangaroo whiz. One can dream... At any rate thanks for your patience and your competence. Totally the coolest.…
"sheet" body Breps (i.e. trimmed stuff == BrepFaces), divide them proportionally to the u/v domains using DeDom2Num: get lists of U1/V1 values, say a uList and a vList. In order to do that:
3. Find the min of each List, say uMin, vMin.
4. Your U/V division integer values (per surface) are:
(int)(U * uList[i] / uMin); (int)(V * vList[i] / vMin);
5. Using the pts and vectors trees from the divide surface component ... add to each div pt the corresponding normal vector multiplied by some min/max random value.
6. Create a NURBS surface using the newly created "distorted" pts (control points or through points).
7. Trim the surfaces against the potential Inner/Outer loops (per brep). Trim requires solid cutters, mind.
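Steps 2-4 boil down to a couple of lines; here is a small Python sketch where the domain lengths are made-up stand-ins for the DeDom2Num output.

```python
U, V = 10, 8                      # base U/V division values
uList = [4.0, 6.0, 2.0]           # u-domain length per surface (stand-ins)
vList = [3.0, 9.0, 3.0]           # v-domain length per surface (stand-ins)

uMin, vMin = min(uList), min(vList)
divisions = [(int(U * u / uMin), int(V * v / vMin))
             for u, v in zip(uList, vList)]
# The smallest face gets (U, V); larger faces get proportionally more.
```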
…
ding on the topography of your location you will probably end up with a 10 000 meter mask radius.

2) Again you would plug all the geometry (those blocks) into the context_ input. Depending on the topography of the location, you will probably end up with a mask radius higher than 10 000 meters. But in this case the default value of 0 for the minVisibilityRadius_ input needs to be increased, so that the topography near the location gets excluded - which is exactly what minVisibilityRadius_ is there for.

To my knowledge there is no paper which describes the exact minVisibilityRadius_ value to use. The ShadeUp plugin, for example, defaults to 50 meters of minVisibilityRadius_ and 50 000 meters of maxVisibilityRadius_ for objects tens of meters in diameter.

Something similar can be applied to our minVisibilityRadius_ input. For example: for relatively flat surroundings, make minVisibilityRadius_ at least 3 times larger than the contextRadius output; for hillier surroundings this value can be increased (6 or 7 times the contextRadius). So if the contextRadius is 600 meters, minVisibilityRadius_ can be 3.6 kilometers, and so on.

Let me know if this answers your questions.…
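The rule of thumb above can be condensed into a tiny sketch (the function name is made up for illustration; the factors are the ones suggested in the text):

```python
def suggest_min_visibility_radius(context_radius, hilly=False):
    # Flat surroundings: at least 3x the contextRadius output.
    # Hilly surroundings: 6x (or more) the contextRadius output.
    factor = 6.0 if hilly else 3.0
    return factor * context_radius

flat = suggest_min_visibility_radius(600.0)          # meters
hilly = suggest_min_visibility_radius(600.0, True)   # meters
```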
for you. After using this library you won’t use any other.
3. wxPython. A GUI toolkit for Python. I have primarily used it in place of Tkinter. You will really love it.
4. Pillow. A friendly fork of PIL (Python Imaging Library). It is more user friendly than PIL and is a must have for anyone who works with images.
5. SQLAlchemy. A database library. Many love it and many hate it. The choice is yours.
6. BeautifulSoup. I know it's slow, but this XML and HTML parsing library is very useful for beginners.
7. Twisted. The most important tool for any network application developer. It has a very beautiful api and is used by a lot of famous python developers.
8. NumPy. How could we leave out this very important library? It provides advanced math functionality for Python.
9. SciPy. When we talk about NumPy, we have to talk about SciPy. It is a library of algorithms and mathematical tools for Python and has caused many scientists to switch from Ruby to Python.
10. matplotlib. A numerical plotting library. It is very useful for any data scientist or data analyst.
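As a quick taste of the NumPy style mentioned above - vectorized math applied to whole arrays instead of Python loops:

```python
import numpy as np

# Five evenly spaced samples over one full period of sine.
samples = np.linspace(0.0, 2.0 * np.pi, 5)
values = np.sin(samples)    # applied element-wise, no explicit loop
```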
try
3. check intersection of curve with geometry.
4. select range of curve(s) intersecting with geometry.
5. for each curve plot distance between points of intersection (find the mid of intersection along curve).
6. add control point to curve at mid-intersection point.
7. plot mid point of geometry and mid-intersection point as a vector.
8. move control point at mid-intersection point away from mid point of geometry until intersection = 0.
I think I know how to go about doing most of these, but I am having issues with 3-6. Any ideas?
BCX gives me the instances of intersection, but does not seem to select the ranges of the curves that are intersecting.
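As a rough 2D illustration of steps 3-8, with a circle standing in for the geometry and a single line segment standing in for the curve - all helper names here are made up, and in Grasshopper the intersections themselves would come from BCX:

```python
import math

def circle_hits(p0, p1, center, radius):
    # Step 3: intersection parameters of segment p0->p1 with a circle.
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    fx, fy = p0[0] - center[0], p0[1] - center[1]
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    root = math.sqrt(disc)
    return [t for t in ((-b - root) / (2 * a), (-b + root) / (2 * a))
            if 0.0 <= t <= 1.0]

# A horizontal segment crossing a unit circle at the origin.
p0, p1 = (-2.0, 0.0), (2.0, 0.0)
center, radius = (0.0, 0.0), 1.0
ts = circle_hits(p0, p1, center, radius)

# Steps 4-6: the intersecting parameter range and its midpoint.
t_mid = (min(ts) + max(ts)) / 2.0
mid = (p0[0] + t_mid * (p1[0] - p0[0]),
       p0[1] + t_mid * (p1[1] - p0[1]))

# Steps 7-8: push the mid point away from the geometry's center
# until it clears the circle.
vx, vy = mid[0] - center[0], mid[1] - center[1]
length = math.hypot(vx, vy)
if length < 1e-9:
    vx, vy, length = 0.0, 1.0, 1.0   # degenerate: pick any direction
moved = (center[0] + vx / length * radius * 1.01,
         center[1] + vy / length * radius * 1.01)
```

Moving the control point, rather than a single sample point, would repeat this test-and-push until the intersection count drops to zero, as in step 8.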
Thanks in advance.
…