For another project I have been working with metaball scalar fields, and I thought it would be fun to have a go at meshing them using Paul Bourke's marching tetrahedra methods. I know that Daniel Piker has done a little of this before more generally, and that there is an excellent implementation of the marching cubes algorithm as a part of Millipede...this is really just more specifically set up for generating metaball meshes.
On its own, the marching tetrahedra algorithm as outlined by Paul Bourke produces some odd results for metaballs. The points that drive the mesh are assigned along the edges of the tetrahedra according to a linear interpolation of the iso value: if your target iso value is, say, 0.29, and the sample points at each end of a tetrahedron edge return values of 0.2 and 0.3, the point will be located along that edge at 10% of its length away from the point with the 0.3 sample. In general this works fine, except that (from what I can tell at least...could be wrong here) the metaball field itself isn't really linear, and the tetrahedra used for sampling are carved from cubes such that the lengths of their edges differ significantly...so a few spikes and dips show up here and there. Sampling resolution has a lot to do with these results as well. I address this issue here by embedding laplacian smoothing as a part of the process, and also by an additional field value sampling run for each mesh vertex after the initial mesh has been constructed.
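For concreteness, the edge placement described above is a single linear interpolation along the edge. A minimal Python sketch (the function name and tuple-based point representation are mine, not the definition's code):

```python
def interp_vertex(iso, p1, p2, v1, v2):
    """Place the isosurface crossing on the edge p1->p2 by linear
    interpolation of the sampled field values v1 and v2."""
    t = (iso - v1) / (v2 - v1)  # fraction of the way from p1 to p2
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))
```

With samples 0.2 and 0.3 and an iso value of 0.29, t comes out to 0.9, so the returned point sits 90% of the way toward the 0.3 end of the edge, i.e. 10% of the edge length away from it, as in the example above.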
So the inputs are your metaball centers (as a list), their radii (also as a list; the radius is the distance at which a ball's contribution to the isosurface value falls to 0), the iso level threshold, the grid resolution for sampling (here a lower number produces a higher resolution...basically it's the rough size of a sample cell in the x, y and z directions), and the number of smoothing passes run after the mesh is generated. The outputs include the metaball meshes, the bounding box that's used, and the centers of the sample cells (these last two are superfluous but perhaps interesting). I'd suggest starting with a low resolution (a higher res value) for sketching, as it works pretty quickly...then, if you want, you can increase the sampling density (and incur the resulting slowdown, which is roughly cubic, since the cell count grows with the cube of the sampling density).
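For anyone wanting to reproduce the field itself: the post doesn't spell out the falloff function, but one common choice that matches the description (a contribution of exactly 0 at the ball's radius) is the Wyvill-style soft-object falloff. A minimal sketch under that assumption (the function name and signature are mine):

```python
def field_value(p, centers, radii):
    """Sum of metaball contributions at point p, using the Wyvill-style
    falloff (1 - r^2/R^2)^3: it is 1 at a ball's center and falls
    smoothly to 0 at distance R (and stays 0 beyond). This matches the
    stated behaviour but is only one possible choice of falloff."""
    total = 0.0
    for (cx, cy, cz), R in zip(centers, radii):
        r2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2 + (p[2] - cz) ** 2
        if r2 < R * R:  # outside the radius the contribution is zero
            total += (1.0 - r2 / (R * R)) ** 3
    return total
```

Because the falloff has compact support, sample cells farther than every ball's radius can be skipped entirely, which is one easy way to prune the sampling grid.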
I've used a couple of tricks to reduce the number of sampled points (nothing so sophisticated as an octree subdivision), but there are definite inefficiencies too: points shared between sampling cells have their field values calculated multiple times. There's plenty of room for improvement...this is just a rough pass, really. But I figured some people might have some fun with it; the issue of making metaball meshes comes up every now and again, and it seemed useful to have a quick way of building (relatively) clean meshes lying about.
Also, probably lots of bugs everywhere...feel free to point them out...and the code's pretty ugly, but poke around, take it, do what you will with it.
You can also check out Cytoskeleton (it's within the Exoskeleton component group: http://www.grasshopper3d.com/group/exoskeleton). It has a good way to create a thickened mesh from the wireframe of another mesh, and you can have it work on the dual of the other mesh as well...
You want to get a "Mesh Dual" Weaverbird has it...
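For what it's worth, the dual construction itself is easy to sketch: each face of the original mesh becomes a dual vertex (its centroid), and each interior vertex becomes a dual face built from the centroids of the faces around it. Since a typical interior vertex of a triangle mesh touches six faces, the dual comes out hexagon-dominant, which is why it answers the question below. A minimal Python sketch of the vertex half (function name mine; the ordering of faces around each vertex, which Weaverbird's Mesh Dual handles for you, is omitted):

```python
def mesh_dual_vertices(vertices, faces):
    """One dual vertex per original face: the face centroid.
    vertices: list of (x, y, z) tuples; faces: lists of vertex indices.
    (A full dual would also walk the ring of faces around each original
    vertex to build the dual faces; that step is omitted here.)"""
    dual = []
    for face in faces:
        corners = [vertices[i] for i in face]
        n = len(corners)
        dual.append(tuple(sum(c[k] for c in corners) / n for k in range(3)))
    return dual
```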
Thank you David! One more question: how did you get your mesh in hexagons? Mine is always triangular.
Hi Victor. You may want to check out the Cocoon plug-in I've developed, which advances this metaball meshing strategy quite a bit further.
But as to the patterning, this project by TheVeryMany appears to use some sort of structural analysis on the shell to erode portions of the mesh from the final structure (it's actually very difficult to know with their projects, as they don't disseminate their work beyond images/built objects...no papers on their methods are out there). You could use a plug-in like Millipede to get an understanding of the shell's structural properties.
But for sure, it's not a trivial thing to make this type of structure...
Hi Rodion- sorry I missed your post on Wednesday! I would really stay away from MeshFromPoints...it's super heavy and totally unnecessary, as the marching tetrahedrons algorithm organizes all of your points perfectly for creating your own mesh faces to begin with. You really don't need to build surfaces at all, or get points...
I don't use Python, so I can't describe the syntax there, but the C# syntax for creating a mesh from those points would be something like this:
First, create your final mesh object:
Mesh MyFullMesh = new Mesh();
Then, run your marching tetrahedrons algorithm, and for each Triangle output (which should give you three points) do this:
Mesh NewTriangle = new Mesh();
NewTriangle.Vertices.Add(TrianglePoint0);
NewTriangle.Vertices.Add(TrianglePoint1);
NewTriangle.Vertices.Add(TrianglePoint2);
NewTriangle.Faces.AddFace(0,1,2);
MyFullMesh.Append(NewTriangle);
Then, once all the triangles have been appended (i.e. after the loop), weld the duplicate vertices and rebuild the vertex normals:
MyFullMesh.Vertices.CombineIdentical(true, true);
MyFullMesh.Normals.ComputeNormals();
Sorry that it's late...hope it helps!
Hello Again,
So I managed to do the first two points. Apparently, in Weaverbird there is a Laplacian smoothing component. So here are my results:
This is fine for me already, I guess. The only thing is that the meshing (done in the 2nd result) is done via a Rhino plugin: MeshFromPoints. I don't know yet how to access it in Grasshopper or in IronPython, so that I'd have a script where I don't need to do anything manually...
Thank You David for Your help !
Hey David,
I didn't have time to work on that code, but I'm going to get to it in the next couple of days.
I wanted to clarify what I have to do for the points You suggested:
1) build up a full mesh rather than a bunch of surfaces, and use RhinoCommon to combine identical vertices and rebuild the vertex normals
As the output of the code I have is the set of points generated for the triangular faces, I can just use these points with a plugin called MeshFromPoints, which generates a mesh from the points you give it. However, the problem of the bad mesh is still there.
As You may also observe, yes, there are probably duplicated points; but there are also points that are merely very close to each other, and these produce the irregularity of the mesh.
Now, from what You have written, I can delete the duplicates, OK, but what do You mean by rebuilding the vertex normals? Could You please explain that in detail?
2) run a couple rounds of laplacian smoothing on the mesh to better distribute your vertices (for each vertex, make it equal in location to the average of its neighbours)
Here I also have a question. OK, I can find the average of the neighboring points by math, but how do I determine, within the script, which points are the neighbors?
3) create a line normal to each vertex, roughly the length of your sampling grid cell, and test its endpoints against your scalar field formula, and then do one final linear interpolation between those two points for your vertex
Here it's again not clear to me what to do. Could You please explain?
Hi Rodion-
I think you're confusing marching cubes with marching tetrahedrons...you're right that marching cubes has 256 cases (2^8, one bit per cube corner), but the lookup you're using for marching tetrahedrons is correct. A tetrahedron has only 4 corners, so there are just 16 cases (2^4), and you're actually doing it right here (scroll down on this page and you'll see a separate subsection all about marching tetrahedrons: http://paulbourke.net/geometry/polygonise/). The benefit of using marching tetrahedrons is exactly this: the number of possible "cuts" through a tetrahedron is dramatically smaller than through a cube.
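The case counts are easy to check: the classification assigns one bit per corner sample, so a tetrahedron's 4 corners give 2^4 = 16 possible indices, while a cube's 8 corners give 2^8 = 256. A standalone sketch of the index computation (my own function, not the thread's code):

```python
def tetra_case_index(values, iso):
    """Bitmask classifying a tetrahedron's 4 corner samples against the
    iso level -- one bit per corner, so the index runs 0..15 (16 cases).
    The identical scheme on a cube's 8 corners yields 0..255 (256 cases)."""
    index = 0
    for bit, v in enumerate(values):
        if v < iso:
            index |= 1 << bit
    return index
```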
However, I have also found, as you're seeing, that the linear interpolation creates some odd distortions (which is why I later went ahead and did the marching cubes implementation). Some of this comes from the density of the sampling grid: the denser the grid, the fewer the distortions.
What I would suggest, if you want a (relatively) quick way to improve this outcome:
1) build up a full mesh rather than a bunch of surfaces, and use RhinoCommon to combine identical vertices and rebuild the vertex normals
2) run a couple rounds of laplacian smoothing on the mesh to better distribute your vertices (for each vertex, make it equal in location to the average of its neighbours)
3) create a line normal to each vertex, roughly the length of your sampling grid cell, and test its endpoints against your scalar field formula, and then do one final linear interpolation between those two points for your vertex.
This should give you a smoother mesh for sure.
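Steps 2 and 3 can be sketched in a few lines of Python (a standalone illustration with my own function names, not RhinoCommon code):

```python
def laplacian_smooth(vertices, faces, passes=1):
    """Step 2: move each vertex to the average of its edge-connected
    neighbours. vertices: list of (x, y, z) tuples; faces: list of
    vertex-index triples. All updates in a pass use the old positions."""
    # Build vertex adjacency from the face list.
    neighbours = {i: set() for i in range(len(vertices))}
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    for _ in range(passes):
        new = []
        for i, p in enumerate(vertices):
            ns = neighbours[i]
            if not ns:
                new.append(p)  # isolated vertex: leave it where it is
                continue
            new.append(tuple(sum(vertices[j][k] for j in ns) / len(ns)
                             for k in range(3)))
        vertices = new
    return vertices

def refine_vertex(p, normal, field, iso, step):
    """Step 3: sample the scalar field a short distance either side of
    the vertex along its (unit) normal, then linearly interpolate
    between the two samples to re-locate the vertex on the isosurface."""
    a = tuple(p[k] - step * normal[k] for k in range(3))
    b = tuple(p[k] + step * normal[k] for k in range(3))
    va, vb = field(a), field(b)
    if va == vb:
        return p  # field is flat along the normal; leave the vertex
    t = (iso - va) / (vb - va)
    return tuple(a[k] + t * (b[k] - a[k]) for k in range(3))
```

The smoothing pass redistributes the vertices but drags them off the isosurface slightly; the per-vertex refinement then pulls each one back onto it with one more field interpolation along its normal.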
But good work getting this far!
Dear David,
With the help of my friend, who is a much more experienced programmer than I am, and after some tweaking and playing around with the output, we managed to generate the metaballs. So yes, the output of that code is points, generated by the marching cubes algorithm.
This is the picture. I didn't apply any meshing; I just created separate surfaces between each set of 3 points defining a triangle, to visualize the result in a better way.
As You can see, the pattern of points is rather distorted in certain places. Could You please help with solving this problem?
In the code, which is discussed earlier, there is a comment and some lines:
def PolygoniseTri(g, iso, v0, v1, v2, v3):
    triangles = []
    # Determine which of the 16 cases we have given which vertices
    # are above or below the isosurface
    triindex = 0
    if g.val[v0] < iso: triindex |= 1
    if g.val[v1] < iso: triindex |= 2
    if g.val[v2] < iso: triindex |= 4
    if g.val[v3] < iso: triindex |= 8
    return trianglefs[triindex](g, iso, v0, v1, v2, v3)
But according to Paul Bourke's article, there should be 256 cases of triangular intersections:
cubeindex = 0;
if (grid.val[0] < isolevel) cubeindex |= 1;
if (grid.val[1] < isolevel) cubeindex |= 2;
if (grid.val[2] < isolevel) cubeindex |= 4;
if (grid.val[3] < isolevel) cubeindex |= 8;
if (grid.val[4] < isolevel) cubeindex |= 16;
if (grid.val[5] < isolevel) cubeindex |= 32;
if (grid.val[6] < isolevel) cubeindex |= 64;
if (grid.val[7] < isolevel) cubeindex |= 128;
Is this the reason, or is it because of something else?
This is rather important, because it is an important aesthetic aspect of our Final Project.
Thank You in advance