tten on the initial configuration): this makes the analysis a bit tricky. In Finite Element programs, this is usually solved by an iterative method (modified Riks method), which is unfortunately not implemented in Karamba. There are other form-finding techniques, used for gridshells:
Dynamic relaxation with kinetic or viscous damping. I used viscous damping and an implicit integration scheme (Bathe's method) for the form-finding of gridshells in this paper. For kinetic damping, you can look here. It was first used for beams by Sigrid Adriaenssens
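To make the idea concrete, here is a rough sketch of dynamic relaxation with viscous damping for a hanging cable, using a simple explicit integration loop rather than the implicit Bathe scheme the paper used. All numbers (stiffness, mass, damping, time step) are made up for the illustration, not engineering values:

```python
# Minimal dynamic-relaxation sketch: a cable pinned at both ends settles
# under gravity. Explicit time stepping with a viscous damping factor;
# the paper referenced above uses a more robust implicit scheme (Bathe).
# All parameter values here are illustrative only.

def relax_cable(n=11, span=10.0, k=100.0, mass=1.0, g=9.81,
                damping=0.98, dt=0.01, steps=20000):
    rest = span / (n - 1)                       # unstretched segment length
    pos = [[i * rest, 0.0] for i in range(n)]   # start from a straight line
    vel = [[0.0, 0.0] for _ in range(n)]
    for _ in range(steps):
        force = [[0.0, -mass * g] for _ in range(n)]   # gravity on each node
        for i in range(n - 1):                  # axial spring forces
            dx = pos[i + 1][0] - pos[i][0]
            dy = pos[i + 1][1] - pos[i][1]
            length = (dx * dx + dy * dy) ** 0.5
            f = k * (length - rest) / length    # tension per unit of (dx, dy)
            force[i][0] += f * dx; force[i][1] += f * dy
            force[i + 1][0] -= f * dx; force[i + 1][1] -= f * dy
        for i in range(1, n - 1):               # end nodes stay pinned
            for c in range(2):
                vel[i][c] = damping * (vel[i][c] + dt * force[i][c] / mass)
                pos[i][c] += dt * vel[i][c]
    return pos
```

The viscous damping simply scales velocities down every step until the kinetic energy is gone and the cable hangs in its equilibrium (catenary-like) shape; kinetic damping would instead zero all velocities whenever the total kinetic energy peaks.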
You can also look at Sina Nabei's PhD on the form-finding of twisted beams, as well as the thesis of Frederic Tayeb (in French) and some papers in the links further below.
The main question remains which mechanical model you are using: a beam model (with torsion and bending) or a shell model? In terms of solvers, Kangaroo2 is powerful (although you don't have access to real engineering values, like Young's modulus), but there is no beam element with 4 or 6 degrees of freedom per node... Likewise, I'm not sure that shell elements (with bending) are implemented in Kangaroo2.
If you look for references of research on deployable structures for shading, you can look at the research at ITKE, but also a joint research effort between Princeton and l'Ecole des Ponts ParisTech.
http://thinkshell.fr/deployable-structures/
http://thinkshell.fr/form-finding-of-twisted-beams/
I hope this helps you...
Romain…
accept untrimmed surfaces, only Open Brep, but sometimes, seemingly out of the blue, the Open Brep changes into an Untrimmed Surface and vice versa. I've already checked the unit tolerance in Rhino; that made no difference. Any ideas about where I could be going wrong?
--
Second issue is that some of the geometries should 'curl' outside their grid boundaries. I need to be able to play with the grid size while the geometry maintains its position.
Also, the first set of these geometries (bottom of image) should translate as a flat surface, but points 1 and 3 tend to stick to the 2nd Grid, creating openings on the side. How could I fix that?
--
Third issue is that the geometries seem to be a little 'squished' along their plane normal (right up to where the 2nd Grid offsets). I tried adding a number slider between the [z-vector] in the ptCoordinates and the [translation vector] in the Move component, but that isn't working. Ideally I wouldn't need to control the offset distance; the shapes would retain their proportions automatically. Any ideas?
Thanks so much in advance! :)
…
_b2 texfunc WoodGrain_tex
6 xgrain_dx ygrain_dx zgrain_dx woodtex.cal -s 0.01
0
1 0.075
WoodGrain_tex plastic WoodGrain_NonColor2
0
0
5 0.364 0.187 0.072 0.006 0.0
This creates the texture (on the table) below:
Is it possible for me to use a multi-modifier material like this in Honeybee ?
Thanks,
Sarith
(Update: I figured out a hack to do this in MSH2RAD but I still don't know if it is possible to add this to the Honeybee Library).…
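Independent of the Honeybee library question, a multi-primitive Radiance material is just the two primitives concatenated as text, so it can be assembled with a small helper before writing to a .rad file. This is a hypothetical sketch (the function name is invented, and since the first modifier was truncated in the post, "void" is substituted purely as an assumption):

```python
def radiance_primitive(modifier, ptype, identifier, strings, ints, reals):
    """Format one Radiance primitive: the 'modifier type identifier' line,
    then the counted string / integer / real argument lines."""
    lines = ["%s %s %s" % (modifier, ptype, identifier),
             " ".join([str(len(strings))] + strings),
             " ".join([str(len(ints))] + [str(i) for i in ints]),
             " ".join([str(len(reals))] + ["%g" % r for r in reals])]
    return "\n".join(lines)

# The texfunc primitive modifies the plastic primitive below it; the
# "void" modifier on the first line is a guess, since the original was cut off.
material = "\n".join([
    radiance_primitive("void", "texfunc", "WoodGrain_tex",
                       ["xgrain_dx", "ygrain_dx", "zgrain_dx",
                        "woodtex.cal", "-s", "0.01"], [], [0.075]),
    radiance_primitive("WoodGrain_tex", "plastic", "WoodGrain_NonColor2",
                       [], [], [0.364, 0.187, 0.072, 0.006, 0.0]),
])
```

The key point of the format is that the second primitive names the first as its modifier, which is what chains them into one material.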
ashes... but it made the event.OldState even more volatile (with a great number of fields throwing exceptions in the debugger, and crashing Rhino if accessed).
2) The problem was largely solved in the components that modify the Rhino document, not in the layer-events listener itself. The trick that prevented crashes (so far, fingers crossed) was moving "layer.CommitChanges()" out of SolveInstance and into the AfterSolveInstance override. After verifying the changes with testing, Andrew also updated his Human components to use this pattern.
3) After updating the layer modification components, I moved the ExpireSolution back into SolveInstance, and event data is much more reliable again. That said, there are a great number of instances where some event.OldState properties throw exceptions (particularly the IsValid property, ironically) -- but my code uses a hack to determine whether a state is corrupted without accessing an invalid property: corrupted OldStates generally have a layer index that falls outside the bounds of the document's layer table (e.g., '1562342' or '-12546'). My components check for an invalid layer index before accessing any OldState properties.
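The guard described above boils down to a bounds check. A standalone sketch (in a real component the layer count would come from the document's layer table; here it is just an integer so the check can be tested on its own):

```python
# Treat a layer event's OldState as corrupted when its layer index falls
# outside the document's layer table; only a valid index makes it safe to
# read the other OldState properties.

def old_state_looks_valid(layer_index, layer_count):
    """Return False for wildly out-of-range indices like 1562342 or -12546,
    which in practice signal a corrupted OldState."""
    return 0 <= layer_index < layer_count
```

Checking this cheap integer property first avoids ever touching the properties (like IsValid) that throw on corrupted states.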
As a side note, I am a bit concerned about this code in Rhino 6, as I just ran across this forum topic, where Steve Baer notes that in the Rhino 6 WIP he has changed the paradigm so that CommitChanges executes immediately when any modification is made to a layer. I am concerned that this will cause the problems to resurface if I don't refactor my components entirely to execute their business logic outside of SolveInstance.
I will try to log some data regarding the corrupted events but I'm not sure if it is a GH issue or a Rhino issue, so I'll post both here and in the Rhino Forum.…
s levels of detail by subdividing a 6-sided cube mesh and projecting its vertices according to a referenced height map. This is one of the standard conventions for building full-size planets. At the lowest level (0) the mesh planet is made of 6 pieces (each 32x32 resolution). The next level down (1) is made of 24 pieces: 6 × 4 = 24. Level (2) is 96 quads, and so on. The script will generate each quad at its subdivision level and compare edge vertices to neighboring quads. It will then make sure any shared vertices are in fact at the same projected vector. This ensures a planet quad with edge vertices that match.
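The quad counts per level follow directly from the quadtree subdivision: each level splits every quad into four, so a sketch of the bookkeeping is just:

```python
# Quad counts for a 6-faced quad-sphere: each LOD step splits every quad
# into 4, so level L has 6 * 4**L quads. With 32x32 vertices per quad,
# the vertex budget per level follows the same geometric growth.

def quads_at_level(level):
    return 6 * 4 ** level

counts = [quads_at_level(l) for l in range(4)]   # [6, 24, 96, 384]
vertices_per_quad = 32 * 32                      # 1024, as in the post
```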
The problem comes in texturing each quad.
If I build the quad as a nurb surface from points I can place the texture easily because each surface UV maps squarely to my texture map (which is also square).
If I build the quad as a mesh I cannot just apply the square texture to the mesh UVs, because when you unwrap the UVs from a mesh they will not unwrap like a nurb surface's UVs. Therefore, to get the correct mapping I would have to manipulate each UV back into an evenly aligned array (which is 1024 points at a 32x32 UV resolution). Maya and Blender have 'relax UV' and 'align UV' functions, but they don't do the trick, and manual corrections are out of the question. So why not skip the mesh method and use the nurb method?
I did this and there is a trade-off. The nurb will accept the material texture I want with no other work on my end, but when I export the object as an .obj Rhino creates its own mesh to describe the nurb (with various unsatisfactory setting options). This works great up to a point, because at some level the interpreted mesh will have vertices that do not match at the edges, i.e. visible seams in the mesh. The picture below is the nearly seamless planet at LOD(1), made of 24 quads, each with 32x32 vertex resolution and a 512x512 jpg texture, running in Unity3d 5. It works, but at close range there are seams. This will be resolved simply by having the next LOD(x) instantiate before getting close enough to see the seam, but at core nerd level I want the seamless mesh.
So, I can make the seamless mesh but I cannot realistically texture-map it. I can also make the nurb surface from points and texture it, at the expense of the edge vertices matching. I am at the split in the road, but I want to have my cake and eat it too. Thoughts, comments, trolls...?
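One note on the mesh route: the evenly aligned UV array mentioned above does not have to be recovered by relaxing the unwrapped UVs; for a regular quad it can simply be generated as a normalized grid and assigned to the mesh vertices in the same order. A plain-Python sketch (no Rhino API, function name invented):

```python
# Generate the evenly aligned UV array for an n x n vertex quad:
# a square texture then maps onto the quad with no relax/align step.

def aligned_uvs(n=32):
    """Return n*n (u, v) pairs laid out as an even grid over [0, 1] x [0, 1],
    row by row, matching a regular grid of mesh vertices."""
    return [(i / (n - 1), j / (n - 1)) for j in range(n) for i in range(n)]

uvs = aligned_uvs(32)   # 1024 coordinates for a 32x32 vertex quad
```

This works because the quad's vertices are generated procedurally in grid order, so the UV-to-vertex correspondence is known without any unwrapping.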
Thanks for reading =)
Footnote: for you pros, I am not using seamless noise across the map; I am using Grasshopper to sew up my otherwise imperfect edges.
Other programs in the pipeline:
-WorldMachine 2
-Wilbur
-Photoshop
-Unity3d…
ll the wires gone. Now you'll have to hook them up again. This is the sort of thing that gets annoying after the third time.
Problem #2. The solution in Grasshopper iterates over all objects and solves each one in turn. Of course a lot of objects depend on other objects, so it usually becomes a cascade of objects solving themselves for the benefit of other objects. It is therefore very difficult to predict the order in which objects are solved.
The root iteration is the same as the display order (back to front), but the topology of the network complicates matters greatly. If you change the topology of the network during a solution, you might end up with whole portions of the network not being solved at all, or, worse, a conflict in the topology checker that makes Grasshopper think a network is self-referencing.
Problem #3. Grasshopper caches all manner of things about a network that can be recomputed from basic principles, but take a long** time to do so. If you start to expire caches where I don't expect you to, we'll either run into null reference errors, or stale cache data, or invalid cache data. Problems like these are very difficult to track down.
Problem #4. File IO. Components get deserialized from ghx files based on their default layout. In your case, we need to solve a component before we know how many outputs it needs. This cannot be done until the file has been completely deserialized. It's a chicken and egg problem, which will result in missing wires every time you open a file.
If you want to have a flexible number of outputs on your component, I'd suggest you add a menu item to your component that will change the output list when clicked, then causes a recompute. This way you won't mess with the network topology during solutions and people don't get their connections pulled out from under them. You'll need to do quite a few things to make this work, but I'll be happy to help you out there:
- Adding menu items and menu click handlers
- Properly removing parameters from a component
- Properly adding parameters to a component
- Recording undo for parameter changes
- Writing custom settings to a ghx archive
- Reading custom settings from a ghx archive and making sure your component is compatible with the ghx layout before it tries to deserialize itself.
--
David Rutten
david@mcneel.com
Poprad, Slovakia
* This sort of thing has cropped up before and it has always been due to human error.
** 2ms or more…
Added by David Rutten at 9:28am on October 19, 2010
strictly with code (BTW: did you cross the Rubicon?).
1. See this: Imagine a curve (say a "rail") that is divided N times and then circles are created with random radii. Circle control points (9, that is) are sampled (obviously) into a DataTree where branches are the rail divisions. Let's call the control points: "start" seed points.
2. Imagine a capability ... that stores all these (the original "seed" control points) into a "parameter", and then, each time a change occurs to them (varying the x/y on a per-point, per-branch, per-plane basis [the plane provides the Z]), stores the "modified point" into the parameter at the same index as the old one (meaning the old one is "deleted") ... and then some other code takes that data, makes curves and lofts them. Reset means: sample the original "seed" points into that "parameter" again. Closing and reopening the definition has no effect: the lofted stuff is derived from the (internalized, so to speak) modified points in the "parameter".
3. A variety of "automation" is available: for instance, as you jump from branch to branch and from item to item, the value of the selected point is inquired and the sliders that control the new x/y are "set" to 0,0 (meaning no change - yet). There's no "store" mode: it works automatically as soon as you modify points or hit the reset button.
4. This does that (only achievable with code):
5. Obviously points can be replaced with anything ... and thus ... we can individually modify items in collections ... and forget forever attractor points and all that (OK, where appropriate, he he).
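The "parameter" behaviour in steps 1-3 can be sketched outside Grasshopper as a plain store keyed by (branch, item): writing a modified point replaces the old one at the same index, and reset re-copies the seeds. A pure-Python illustration (class and method names are invented):

```python
class SeedPointStore:
    """Holds 'seed' points per (branch, item) index; edits overwrite the
    point at the same index, and reset() restores the original seeds."""

    def __init__(self, seeds):
        self._seeds = {k: tuple(v) for k, v in seeds.items()}
        self.points = dict(self._seeds)          # working copy

    def modify(self, branch, item, dx, dy, z):
        x, y, _ = self.points[(branch, item)]
        # store the modified point at the same index, "deleting" the old one
        self.points[(branch, item)] = (x + dx, y + dy, z)

    def reset(self):
        self.points = dict(self._seeds)          # back to the seed points

# Toy seed data: 2 rail divisions (branches) x 9 circle control points each.
seeds = {(b, i): (float(i), float(b), 0.0) for b in range(2) for i in range(9)}
store = SeedPointStore(seeds)
store.modify(0, 6, 0.5, -0.25, 1.0)              # nudge point 6 of branch 0
```

In Grasshopper this survives closing and reopening the definition because the store is internalized in the parameter; the dict here only stands in for that persistence.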
I'll post 30 similar examples soon in the forthcoming mother of all threads: "GH goes (at last) interactive". Watch this space.
BTW: study the "animation" where points with index 6 are "sequentially" modified. I've added some delay in order to give you time to get the gist of the whole thingy.
best, Lord of Darkness
…
4 explode the text
5 select the exploded text, which are now curves, and the border from step 2 and use the planarsrf command again
6 make your surface using the two curves at top and bottom and a section. Use the sweep2 command
7 select your negative text surfaces and use the flowalongsrf command
maybe the scale of the text can be edited by the size of the surface or of the text but I bet you can figure that out! good luck!…
l design.
2/ Optimization
2.1/ in prefabrication
2.2/ combinatorial
2.3/ approach comparisons (i.e. deterministic vs stochastic)
2.4/ share your research
2.5/ ... etc. the list goes on and on
3/ Share your design rationale and how computation fits in
4/ Need help with this problem...
5/ Challenges and workshops announcements
6/ CD News
7/ Share computational design projects under construction or built (akin to skyscrapercity)
8/ and so many other categories and sub-categories...
Just my first thoughts. That breakdown of optimization is just an example. Maybe 'sections' is an old-school way of seeing things; I just wanted to share some thoughts on the kind of content I look forward to seeing. Topics can be pragmatic but also theoretical, and the forum should allow folks to share their projects and research. Some categories are specific, others broad. I suppose I'm interested in community building with regards to computational design. I think SmartGeometry attempted to accomplish this at some point in the past, to some degree. However their focus appears to be on the workshops and challenges.
I recall the silly flame wars that the CG industry had 20 years ago (lame). I'd avoid that, even if it meant forbidding the mention of any specific software in certain areas or in the entire forum. That would be tricky, but the endless flame wars and silly comparisons were such a waste of everyone's time in CG.
Without dwelling on this too much yet, I think that the software specific questions belong in software specific forums. If we already had a common language for computational design, you'd just need to add the right description as a meta-tag to any Dynamo or Grasshopper forum post, and you'd be able to find analogous solutions in either forum effortlessly, right?
The Dynamo and Grasshopper forums lack design-centric content. The emphasis is generally on the tools and workflow. Computational design is hybrid in essence, it involves both design and computer programming (be it visual or textual). We could really use a forum for knowledge exchange where the expectation is that both are discussed with equal status.
I disagree that such a forum ought to exclude professional programmers. It should include professional programmers who have an interest in design, and also professional designers who have an interest in computer programming, and everyone in between, and enthusiasts, and artists who are curious about algorithms as a creative medium, and academics, and students, and so on. As long as there is rich content and activity on design as well, not only the computational bit, the crowd will be diverse and we'll all have more to learn from one another.
s not imported in the workflow, there is no problem for the intersectMass component, no matter how we change the random seed to generate different groups of breps with overlapping surfaces:
2. However, once we add the mass2Zone component into the workflow, we have this problem as posted here.
3. As you pointed out, the warning is given when "Hzones == True", which only happens when one of the objects in _bldgMassesBefore is recognized as a zone already stored in the HB Hive. So, I added one line after line 257 to print out the zone name:
4. It seems that for the cases where the intersectMass component shows a warning, a zone name is printed:
5. Whereas for the cases where the intersectMass component shows no warning, no zone name is printed:
6. Even if zone creation is turned off, a zone name is still printed for the cases that intersectMass is unable to process:
So, I agree with you that this might be related to the HoneyBeeHive which stores honeybee zones.
Maybe whenever upstream geometries connected to either intersectMass or Masses2Zones change, the HoneyBeeHive has to be cleaned up and the zone objects inside recreated.
Hope Chris and Mostapha can kindly take a look and advise if this is the source of the issue.
Thanks.
- Ji…