avy. The images I included were only a fraction of the actual cloud.
This (the point cloud and mesh attempt) is all somewhat exploratory. Ideally, in Rhino I would have an accurate representation of the natural area, then reference in a footbridge that I also modeled in Rhino, and then create a scene-by-scene fly-through animation.
I found the MeshLab resources and I'm currently working through them. They are yielding smoother results with Poisson Disk Sampling and the Ball Pivoting Algorithm.
I have several versions of the same point cloud scan, some denser than others.
The image shown has 1.4 million points and, again, is a small fraction.
The least dense complete cloud of my main area of focus is 5.9 million points.
The next densest is 13.4 million points, which includes the entire Tonto Natural Bridge and the canyons on either end.
I have an even denser one at 73 million points of that same area.
Finally, I have the full resolution of the same area, which has to be north of 200 million points. (I was never able to explode that one fully in Rhino.)
The mesh I am working on in MeshLab is from the 5.9-million-point cloud, which I Poisson Disk Sampled down to about 1/6th of that. It still looks like it has enough resolution.
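For scale, the Poisson Disk Sampling step boils down to keeping only points that sit a minimum distance apart. A toy Python sketch of the idea (plain greedy dart-throwing on a made-up jittered grid, not the MeshLab implementation, which uses spatial acceleration structures to cope with millions of points):

```python
import math
import random

def poisson_disk_sample(points, radius):
    """Greedy dart-throwing: keep a point only if it is at least
    `radius` away from every point already kept."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= radius for q in kept):
            kept.append(p)
    return kept

# Toy stand-in for a dense scan: a jittered 3D grid of points.
random.seed(0)
cloud = [(x + random.random(), y + random.random(), 0.0)
         for x in range(20) for y in range(20)]

sampled = poisson_disk_sample(cloud, radius=2.0)
print(len(cloud), "->", len(sampled))
```

This naive version is O(n²) and only illustrative; for a 5.9-million-point cloud the real filter's spatial hashing is what makes it tractable.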
-Danny
…
Added by Daniel Gault at 10:11am on March 16, 2017
ing illuminance and limiting exposure (lux hours). Hours with direct solar irradiance are likely to exceed the limiting illuminance thresholds, which range from 200 down to 50 lux (as per Table 3.4 in CIE 157:2004). It makes sense to consider direct illuminance (an ab=0 simulation in Honeybee) separately from a normal illuminance calculation.
Assuming that the museum exhibits have low to high responsivity to light, an ideal solution would minimize direct sunlight. For daylight from the sky and reflected light, it might be enough to keep the illuminance levels below the recommended thresholds and then sum up lux-hours.
Daysim, the annual daylighting engine used by Honeybee and DIVA, is not very accurate for direct-sun calculations. You will get more accurate results if you run your analysis with Radiance directly.
Instead of considering the horizontal illuminance grids, one can create grids that correspond to the dimensions of the exhibits and then average those values. I think single points, as shown in your gh file, might not suffice. Calculating lux-hours is by far the simplest part of such a simulation: it only requires averaging these points, extracting them into an array and then summing up that array.…
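The averaging-and-summing step at the end can be sketched in a few lines of Python (the hourly values, the grid size and the 200 lux limit are all made-up placeholders):

```python
# Hourly illuminance results (lux) for 3 grid points on one exhibit,
# one row per hour; the numbers are made-up placeholders.
hourly_grids = [
    [150.0, 180.0, 165.0],
    [210.0, 240.0, 225.0],   # this hour exceeds a 200 lux limit
    [90.0, 100.0, 95.0],
]

LIMIT_LUX = 200.0  # limiting illuminance, e.g. from CIE 157:2004 Table 3.4

# Average the grid points for each hour, then sum into lux-hours.
hourly_avg = [sum(row) / len(row) for row in hourly_grids]
lux_hours = sum(hourly_avg)
over_limit = [h for h, avg in enumerate(hourly_avg) if avg > LIMIT_LUX]

print("lux-hours:", lux_hours)        # 485.0
print("hours over limit:", over_limit)  # [1]
```

In practice each row would come from the annual Radiance/Daysim results for that exhibit's grid rather than being typed in.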
which is nice actually :-) But I had 2 problems that make everything just impossible to use: DIRECTION OF STEPS:
The motor can only run in one direction (for example clockwise), and when it comes time to switch... not possible.
Details: When I put a value in the V input of HD.SM, for example 100 with a slider, I get 100 steps and it works. Then I write 200, still working. But when I put a value that is less than the last, higher value, for example 180 in my case, the motor goes on endlessly to ever higher amounts, I mean 201, 202, 203... 1000, etc. Don't understand why :-/ SPEED: The speed is really, really slow.
Details: The Firefly stepper motor component uses the same library as Heteroduino, which is AccelStepper.h, but still runs way faster. I tried lots of values in the S input (speed) but it doesn't change the speed that much, and sometimes it even changes the steps.
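A guess at the direction problem (an assumption, I don't know the component's internals): AccelStepper's moveTo() takes an absolute target position, not a step count, and it steps backwards when the new target is below the current position. A minimal Python model of that behavior shows why 180 after 200 should mean 20 steps back, not endless forward motion:

```python
class TinyStepper:
    """Minimal model of AccelStepper's absolute positioning:
    move_to() sets a target; run() takes one step toward it."""
    def __init__(self):
        self.position = 0
        self.target = 0

    def move_to(self, absolute):
        self.target = absolute

    def run(self):
        if self.position < self.target:
            self.position += 1   # one step clockwise
        elif self.position > self.target:
            self.position -= 1   # one step counter-clockwise

    def run_to_target(self):
        while self.position != self.target:
            self.run()

m = TinyStepper()
m.move_to(200)
m.run_to_target()        # forward to 200
m.move_to(180)
m.run_to_target()        # 20 steps BACK, not forward forever
print(m.position)        # 180
```

If a component instead derives the step count from the new value without handling the sign, a lower value could indeed send the motor forward forever, which would match your symptom; but again, that is speculation about the component's code.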
If there is another place where I have to post this, let me know also and I'll do it :-)
See you soon and hope some people are interested in the same problem ^^thanks :-)…
freeform Mesh Solid???? Because that's how it really goes in nature!
My try:
Step 1) Build the facade skeleton from the outlines intersected between the Voronoi cells and the skin of the skyscraper, then use some borrowed GH definitions to loft those outlines.
Step 2) Use some other borrowed GH definitions to loft the whole 3D Voronoi skeleton, bake the result, and then use "Mesh Boolean" in Rhino to get the inner 3D skeleton for the skyscraper.
Step 3) Combine them. It's not perfect, but not too bad.
This is the result.
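For what it's worth, the 3D Voronoi cells behind steps 1 and 2 are, by definition, just nearest-seed regions, so points sampled on the skyscraper skin can be binned per cell with nothing more than a distance test. A toy Python sketch (all coordinates are made up):

```python
import math

def voronoi_cell_index(point, seeds):
    """Index of the seed whose Voronoi cell contains `point`
    (i.e. the nearest seed)."""
    return min(range(len(seeds)), key=lambda i: math.dist(point, seeds[i]))

# Toy seeds and a few sample points on a "skin".
seeds = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (5.0, 10.0, 0.0)]
samples = [(1.0, 1.0, 0.0), (9.0, 1.0, 0.0), (5.0, 9.0, 0.0)]

cells = [voronoi_cell_index(p, seeds) for p in samples]
print(cells)  # [0, 1, 2]
```

The GH components do the hard part (building the actual cell faces), but this is the test they are implicitly evaluating.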
Problem: When I try to make the point cloud denser, the "Mesh Boolean" always fails, because those skeleton meshes are too complicated to split.
any ideas??
Thanks!!!
…
of a hack to push it to an Android device, and you can't use labels, which is a very bad point!
...
I won't buy an iPhone!
The other is Control OSC. It looks rougher, but it has a lot of advantages for me.
+ Game of Life included!
+ you can use and update labels :))
+ Has a nice multi-touch widget not featured in TouchOSC
+ You can script the interface using JavaScript manipulation in GH, stream it to your Dropbox and update it in one "tap", as follows
Does anyone have experience with scripting interfaces for this software? I'm stuck already. I know nothing of JavaScript to begin with. As you can see, I managed to format the labels, but I could not find a way to change the OSC message; it stays untouched.
Just in case someone knows better, here are my "objects" (I said that right?). The userXXX are replaced in GH.
{ "name":"userName", "type":"Slider",
  "x":(xPadding + .11), "y": yPadding, "width":.82, "height":.082,
  "color":"userColor", "min":userMin, "max":userMax,
  "ontouchmove": "var roundedvalue = this.value.toFixed(userFix); LbluserName2.changeValue(roundedvalue)",
  "onvaluechange": "oscManager.sendOSC('/userName', 'f', this.value.toFixed(userFix))",
},
{ "name":"LbluserName1", "type":"Label",
  "x":xPadding, "y": yPadding, "width":.1, "height":.05,
  "color":"userColor", "value": "userName"
},
{ "name":"LbluserName2", "type":"Label",
  "x":xPadding, "y": (yPadding + 0.05), "width":.1, "height":.05,
  "address":"/userName", "color":"userColor", "value": 0
},…
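The GH-side substitution of the userXXX placeholders can be mimicked with plain string replacement in Python (the placeholder names come from the template above; the template here is shortened and the concrete values are made up):

```python
# Shortened widget template using the same userXXX placeholders as above.
template = ('{ "name":"userName", "type":"Slider", '
            '"min":userMin, "max":userMax, "color":"userColor" }')

# Hypothetical values that GH would supply per widget.
values = {"userName": "fader1", "userMin": "0", "userMax": "127",
          "userColor": "#aa3333"}

interface = template
for placeholder, value in values.items():
    interface = interface.replace(placeholder, value)

print(interface)
# { "name":"fader1", "type":"Slider", "min":0, "max":127, "color":"#aa3333" }
```

One thing to watch: order matters when one placeholder is a substring of another name. In the full template, LbluserName2 contains userName, so replacing userName rewrites the label name too, which is presumably the intent here.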
ill crash.
Example: offset surface, input "S" 100 surfaces grafted, "D" 2 values > total 200 surfaces as output
Now in the source link we replace the 2 values with 100 new values, and because we are changing the logic of the definition we would need to remove the graft on "S" to have just 100 surfaces as output, but it is too late! GH is now calculating 10000 new surfaces! (noob behavior, yes, but hey, it happens)
Solutions (maybe):
1- GH multiplies only the tree lengths and/or complexity to find out how many outputs there will be. It would show "We are going to calculate 10000 new surfaces, do you want to continue?" (if no, the component is disabled).
2- GH starts computing some of the first solutions, checks the time needed to do so, then estimates the approximate total time and pops up "This component will require roughly x minutes to complete, do you want to continue?"
This should also be done for the whole definition: sometimes the output of a component (in danger when given new inputs) is linked to another dangerous component! (and usually to more and more of them)
In some cases, just removing or adding a single graft/flatten somewhere completely crashes GH, because millions of solutions would have to be calculated.
Calculating the total number of outputs should be much faster than calculating the outputs themselves. A small IF to stop things when it's too much could come in handy.
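Solution 1 boils down to multiplying item counts before computing anything. A rough Python sketch of the check, using the Offset Surface example above (the threshold, the list-of-lists tree representation and the worst-case pairing rule are all assumptions, not how GH actually matches data):

```python
WARN_THRESHOLD = 5000  # made-up limit before asking the user

def predicted_output_count(*input_trees):
    """Worst-case number of solutions for a component that pairs
    every item of one input with every item of the others."""
    total = 1
    for tree in input_trees:
        total *= sum(len(branch) for branch in tree)
    return total

# The Offset Surface example: 100 grafted surfaces x 100 distances.
surfaces = [[s] for s in range(100)]   # grafted: 100 branches of 1 item
distances = [list(range(100))]         # one branch of 100 items

n = predicted_output_count(surfaces, distances)
print(n, "outputs;", "ask user" if n > WARN_THRESHOLD else "just compute")
```

The point is exactly the one made above: this count is a couple of multiplications, so it costs nothing compared to actually producing 10000 surfaces.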
(tell me if I've explained myself badly :P )…
nd what you can do to make it more attractive to get answers.
The chances of getting an answer in a single unit of time are proportional to the number of people who can answer that question. If you ask a very obscure question it may take quite a while for someone knowledgeable enough to stumble upon it.
The chances of getting a good answer are proportional to the quality of the question. Phrasing, spelling, proper use of caps and punctuation, descriptive title and so on and so forth. If you have a file pertaining to the question and you are allowed to post it, do so. If you have an image/pdf/link that pertains to your question, include it.
Questions that allow for prompt answers will be dealt with quicker than questions which are vague, ambiguous, open-ended or large in scope. E.g. "How do I make a facade like this in Grasshopper?" will probably not be answered, as it would be far too much work.
If you use Google Translate to post a question in English, also be sure to include the question in a language you are fluent in. Google Translate is a pretty terrible way of communicating and phrases usually get mangled beyond recognition.
Before posting a problem, reduce it to the smallest workable (but still problematic) subset. E.g. don't post a 200-component file if a 5-component file can be used instead.
…
if I have multiple inputs only the first output geometry will change color and the rest will only be displayed as wireframe.
How can I solve this?
List<Brep> supportGeometry = new List<Brep>();
protected override void RegisterInputParams(GH_Component.GH_InputParamManager pManager)
{
    pManager.AddGenericParameter("Position", "Pos/SN", "Input PointCoordinates", GH_ParamAccess.item);
    pManager.AddGenericParameter("NodeRestraint", "NR", "SupportRestraint", GH_ParamAccess.item);
}
protected override void RegisterOutputParams(GH_Component.GH_OutputParamManager pManager)
{
    pManager.AddGenericParameter("Support", "SN", "Support", GH_ParamAccess.item);
    pManager.AddGenericParameter("Sphere", "S", "Resulting sphere", GH_ParamAccess.list);
}
protected override void SolveInstance(IGH_DataAccess DA) {
...
// create sphere and cone
Sphere sphere = new Sphere(supportPlane, scale);
Cone cone = new Cone(conePlane, num2, scale);
supportGeometry.Clear();
supportGeometry.Add(sphere.ToBrep());
supportGeometry.Add(cone.ToRevSurface().ToBrep());
DA.SetDataList(1, supportGeometry);
}
public override void DrawViewportMeshes(IGH_PreviewArgs args)
{
    Rhino.Display.DisplayMaterial material = new Rhino.Display.DisplayMaterial(System.Drawing.Color.FromArgb(0, 0, 200), 0.5);
    material.Transparency = 0.5;
    for (int i = 0; i < supportGeometry.Count; i++)
    {
        args.Display.DrawBrepShaded(supportGeometry[i], material);
    }
}
…
ng on the screen isn't rendered to a third of the size it should be, Windows has a scaling function under display settings, which I currently have set at 200%. For most programs, this works well, but in Grasshopper, this is what I get...
Remember that this is not a big screen! Only a small laptop...
What seems to be happening is that fonts are scaling up correctly. However, everything else is remaining at 96ppi, meaning all of the menus and buttons are tiny! Also, because the fonts are scaling up, the fonts now look enormous relative to the components.
This is rather irritating, as at a normal viewing distance things like the icons are almost indistinguishable.
Normally, I would accept that I was the one being weird for having bought a laptop with a high-resolution screen, but shopping in PC World, it seems most mid- and high-end laptops now have high-resolution screens. I'm aware that there is no quick fix for this, as it likely needs a substantial rewrite of the interface, but given current development of GH2, this might be something to bear in mind if it's not too late...…
Added by James Ramsden at 11:01am on February 5, 2016