make sure I add this information to groundTerrain_ inputs in the next few days.
So if you are using the "Gismo Terrain Generator" component (formerly the "Ladybug Terrain Generator 2" component), only the following types are allowed for the groundTerrain_ input:
type_ = 2 (surface with rectangular edges)
type_ = 3 (surface with circular edges)
If you are using the "Ladybug Terrain Generator" component, then only:
type_ = 1 (surface with rectangular edges)
is allowed.
As for the terrain not being colored when it is created as a surface, you can additionally analyse it with the "Terrain Analysis" component using the Elevation analysis type. It can even be colored for rendering afterwards by using the "OSM Render Mesh" component. Check the attached file below.
Have in mind that in urban areas the "Ladybug Terrain Generator" component produces much more precise terrain than the "Gismo Terrain Generator" component. On the other hand, the latter can generate much larger terrain areas (up to 10 000 km², at least in theory).
The reason the component might still work even though a terrain mesh has been added to the groundTerrain_ input is probably that when the groundTerrain_ input fails to convert a mesh to a brep, the input ends up equal to None. The component then treats groundTerrain_ as empty and runs as if nothing had been added to it (the buildings are laid down on a flat plane with 0,0,0 as the plane origin).
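The silent fallback described above can be sketched in plain Python. This is a hedged illustration, not Gismo's actual code: `brep_try_convert` and the dictionary "geometry" stand in for Rhino's real mesh-to-brep conversion and geometry types.

```python
# Hypothetical stand-in for Rhino's mesh-to-brep conversion: meshes
# (and anything else that isn't already a brep) silently yield None.
def brep_try_convert(ground_input):
    if ground_input is not None and ground_input.get("kind") == "brep":
        return ground_input
    return None

def lay_down_buildings(ground_terrain_):
    converted = brep_try_convert(ground_terrain_)
    if converted is None:
        # Treated exactly like an empty input: flat plane at the origin.
        return {"base": "flat plane", "origin": (0, 0, 0)}
    return {"base": "terrain brep", "origin": "projected onto terrain"}

print(lay_down_buildings({"kind": "mesh"})["base"])   # mesh falls through to "flat plane"
print(lay_down_buildings({"kind": "brep"})["base"])   # brep is used as the terrain
```

So a mesh plugged into groundTerrain_ doesn't raise an error; it just quietly produces the same result as leaving the input empty.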
Thank you once again for all the testing you are doing!!! It really makes Gismo a better plugin!!…
Added by djordje to Gismo at 12:45pm on February 8, 2017
ng the "kaleidocycle" as a facade component, and I need to be able to move it through its entire "rotation" in 3D space to understand where and how it is moving.
http://www.youtube.com/watch?v=4owFczeqqMQ
This is what it is doing, in general. There are 2 sets of 3 hinges, rotated 180 degrees, making up a hexagonal form.
Here is a Rhino model of the form. I used the trigonometric properties of the isosceles triangle to make this model very accurate (63.333, 53.333, 63.333 angles), and now I need to describe the movement.
It is TOUGH. I think I have it and then it just throws me for a loop (no pun intended).
I have a ghx model set up so that it can go through part of the cycle, but the in-between states are incorrect, so it's not valid; still, it shows how something like this could work. The trick is that it rotates on multiple axes at different times, and it's very, very tricky to figure out what it is rotating around and when.
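For the multi-axis rotation problem, one building block that helps is rotating a point about an arbitrary hinge line (axis point + direction), which is what each triangle does about its shared edge. Below is a hedged, Rhino-free sketch using Rodrigues' rotation formula; the function name and the chaining idea are illustrative, not taken from the original ghx model.

```python
import math

def rotate_about_axis(p, axis_pt, axis_dir, angle):
    """Rodrigues rotation of point p about the line through axis_pt
    with direction axis_dir, by `angle` radians."""
    length = math.sqrt(sum(c * c for c in axis_dir))
    k = tuple(c / length for c in axis_dir)            # unit axis
    v = tuple(pc - ac for pc, ac in zip(p, axis_pt))   # p relative to axis point
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    kxv = (k[1] * v[2] - k[2] * v[1],                  # k x v
           k[2] * v[0] - k[0] * v[2],
           k[0] * v[1] - k[1] * v[0])
    kdv = sum(kc * vc for kc, vc in zip(k, v))         # k . v
    rot = tuple(v[i] * cos_a + kxv[i] * sin_a + k[i] * kdv * (1 - cos_a)
                for i in range(3))
    return tuple(rc + ac for rc, ac in zip(rot, axis_pt))

# Sanity check: 90 degrees about the Z axis sends (1,0,0) to roughly (0,1,0).
p = rotate_about_axis((1, 0, 0), (0, 0, 0), (0, 0, 1), math.pi / 2)
print(p)
```

For the kaleidocycle, each hinge axis is itself carried along by the previous rotations, so the chain has to be evaluated in order: rotate about hinge A, recompute hinge B's moved position, then rotate about that, and so on.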
If anyone has any ideas or insight, please let me know. I am working on this in my master's studies, and I'm pretty screwed if I can't figure this out in Grasshopper!
Also, please find attached a research article concerning this form. I haven't been able to apply their geometric findings yet, but it shows the form can be described mathematically.
THANK YOU!!!!
benjamin
…
ts connectors and slots that allow CNC machining the facets and connectors for assembly.
https://www.youtube.com/watch?v=34OvgflJEmI
We developed this construction methodology earlier this year while working on a large scale parametric structure for Midburn, the Israeli Burning Man. While doing so I used grasshopper to generate the facets for the geometry, while a friend on the team (Matan Zohar) wrote a javascript app that translated the mesh into connectors and slots for CNC manufacturing. You can see more about the project here:
http://www.shlomimir.com/triped/
I wrote this component as an exercise in learning rhinoscript and Python, with the purpose of bringing the functionality into the Grasshopper workflow. It's now at the point where it works for triangle and square welded meshes, outputting the connectors and slots as an unorganized list.
Questions and To Do List
1. I'm new to object-oriented coding and functions, and basically wrote the whole thing as a series of conditional loops with two-dimensional arrays holding the data. I'm planning to restructure this better and would love any tips.
2. Right now the connectors and slots are output on the input mesh itself in 3D; I'm planning to set this up laid out on one plane to organize for cutting. I was wondering if there are any existing tools for this or if I need to do it manually.
3. Labeling connectors and slots. Is there any way to output text from Python that can later be baked into Rhino for labeling?…
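On question 1, one common restructuring is to replace the parallel 2D arrays with small classes that keep each connector's data (label, faces, slots) together. The sketch below is only a suggestion under assumed names; `Connector`, `Slot`, and `build_connectors` are invented for illustration and are not part of the original script. The `label` field also gives you a string you can push out of a GHPython output and bake as text for question 3.

```python
class Slot:
    """One slot cut into a connector, keyed by the shared mesh edge."""
    def __init__(self, edge_index, width):
        self.edge_index = edge_index
        self.width = width

class Connector:
    """A connector joining two mesh faces, carrying its own slots and label."""
    def __init__(self, label, face_indices):
        self.label = label              # e.g. "C00" -- bakeable as a text tag
        self.face_indices = face_indices
        self.slots = []

    def add_slot(self, slot):
        self.slots.append(slot)

def build_connectors(shared_edges, slot_width=3.0):
    """shared_edges: list of (face_a, face_b) index pairs that share an edge."""
    connectors = []
    for i, (fa, fb) in enumerate(shared_edges):
        c = Connector("C%02d" % i, (fa, fb))
        c.add_slot(Slot(i, slot_width))  # one slot per shared edge in this sketch
        connectors.append(c)
    return connectors

cs = build_connectors([(0, 1), (1, 2)])
print([c.label for c in cs])  # ['C00', 'C01']
```

The win over 2D arrays is that each connector carries everything about itself, so laying out, labeling, and exporting become loops over objects instead of index bookkeeping across several arrays.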
tly light vehicles such as bicycles and variations thereof. Although frame design is mostly of a structural nature, there are a number of elements that interact mechanically. Also, as you may be aware, bicycle and high grade tubing is not of constant section, so the shelling method in FEA is out of the question. Even so, because the joint needs to be modeled very accurately, that means different geometry and properties for the welded area, the heat-affected area and the base material; so a simpler FEA package may not suffice.
I don't know Karamba extensively, rather superficially actually, but I'm under the impression it mostly deals with beam analysis. Please correct me if I am under the wrong impression. I must say it would be very nice to have a complete FEA package inside GH!
Typical workflow for me would be to model everything in Solidworks, and then export to Ansys Mechanical. Although Ansys needs to read every input and naturally remesh back again, integration within Solidworks, Catia, Inventor, Creo, Solidthinking... and the sort, works reasonably well.
Now, I don't remember Ansys having a Rhinoceros plugin so that you could bridge the two together, but maybe I should go check again.
3) Great work with that fractal tree. It's nice to know it is a possibility at least. I have tried Apophysis and others, but to my knowledge there's not an application that could deliver 3D fractal designs in a way that you could further manipulate with conventional modelling techniques, maybe apply textures and render, or export to CAM, 3D printing... etc.
P.S.: I have tried all the apps mentioned above and then some more. All of them have serious limitations when it comes to parametric design. For complex models they crash plenty upon rebuilding... a number of time-consuming errors appear, and the general workflow isn't very efficient for purely parametric work. Speaking for myself, I'd rather spend the time on a definition that gives me full control and can then generate a new result within seconds, than model everything very quickly and then take a long time with each new result.
(Thanks for the replies and sorry for the long text, you asked to elaborate).…
le] demo):
1. A transformation matrix is a 4x4 collection of 16 values that "deform" 3D things according to the values in its cells. The orthodox way is to lay out the cells left to right and top to bottom. Rhino does the opposite (why?), hence we need the transpose method.
2. Since "translate" and "perspective" are "symmetrical" the transpose boolean toggle (within the C#) "flips" rows with columns ... so we get perspective or move.
3. When in perspective "mode", the vanishing points are computed internally within a min/max limit (per X/Y/Z axis), thus avoiding the usual havoc with "extreme" perspective angles (a very common glitch in pretty much every CAD app - CATIA excluded). Vanishing points (and limits) are oriented with respect to the pos/neg value of a given control slider.
Note: slider values are percentages between min/max (mode: perspective) and/or actual values*100 (mode: move).
4. In order to start mastering the whole thing, don't change anything: just play with these 4 sliders selected:
5. The 123 sardine cans challenge: even with DeusExMachine = true (see inside the C#: that one redirects the transformation per BrepFace and then joins the breps instead of applying it on a per-brep basis)... odd things (and/or invalid breps) occur... so what is required in order to make things work 100%?
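The "translate/perspective symmetry" in points 1-2 can be shown with plain 4x4 matrices, no Rhino needed. This is a hedged sketch assuming the column-vector convention (translation in the last column, perspective terms in the last row); the helper names are invented for the example.

```python
def transpose(m):
    """Flip rows with columns of a 4x4 matrix (list of lists)."""
    return [[m[r][c] for r in range(4)] for c in range(4)]

def translate(tx, ty, tz):
    """Column-vector convention: translation lives in the last column."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

T = translate(5, -2, 3)
P = transpose(T)  # the translation column becomes the bottom (perspective) row

print(T[0][3], T[1][3], T[2][3])   # translate entries: last column of T
print(P[3][0], P[3][1], P[3][2])   # same values, now in the perspective row of P
```

So the single transpose toggle really does switch the same three slider values between acting as a move and acting as perspective terms, which is the "symmetry" the C# exploits.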
he, he
best, Lord of Darkness …
printers.
How I want to communicate this: The depth of transparent cubes is relative to the brightness of a picture (low depth = bright, high depth = dark). Then I assign each cube as red or blue depending on the RGB values of the cube column's corresponding pixel - this is where I'm stuck.
What I've done: I have one image sampler containing a greyscale version of my image, which outputs the brightness measurements. This is made into lines, which are divided to create the points from which the cubes are created. (I have had to invert the image in Photoshop, as brightness gives black a low value when I need a high one, and vice versa.)
What I want to do next: In the second image sampler I have an image which has a Red to Blue gradient applied to it. I want to group my cubes into reds and blues depending on the colour values in this image (so they could eventually be saved as a "blue" and "red" stl to be 3D printed).
So columns that correspond to a blue part of the image will contain a completely blue stack of cubes, and the same with red. But where there's a combination of blue and red values I need a combination of blue and red cubes mixed together. I was hoping to do this by turning the RGB values into some kind of ratio that will help assign each cube to a group, but I'm struggling.
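One way to sketch that ratio idea in plain Python (hedged: the function and the sample pixel values are invented for illustration, and in Grasshopper you would feed it the second image sampler's R and B channels per column):

```python
def split_column(r, b, cube_count):
    """Given a pixel's red and blue channel values (0-255) and the number
    of cubes in that column, return (n_red, n_blue) cube counts."""
    total = r + b
    # Red's share of the red+blue mix; a pixel with neither splits evenly.
    red_share = r / float(total) if total else 0.5
    n_red = int(round(red_share * cube_count))
    return n_red, cube_count - n_red

print(split_column(255, 0, 10))    # fully red pixel  -> (10, 0)
print(split_column(0, 255, 10))    # fully blue pixel -> (0, 10)
print(split_column(128, 128, 10))  # mixed pixel      -> (5, 5)
```

With each column's counts in hand, the first n_red cubes of a stack go to the "red" group and the rest to the "blue" group (or shuffled, if the mix should be interleaved), and the two groups can be baked to separate STLs.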
Would love any thoughts on resolving my problem, even if it's only for part of it! This was quite hard to explain so let me know if there's anything that needs clarifying.
Thanks…
onstrates the following:
1. The definition's functionality employing HumanUI for the custom user interface.
2. The evaluation of the definition's ability to handle different point cloud data sets.
3. Video reports with the definition's results, animating subsequent per deviation step frames.
This definition calculates best-fitting plane deviations. The number of manually set parameters has been minimized to two: the facade selection per World UCS axis, and the search width. This defines a box which is used to crop protruding architectural details that do not contribute to the analysis, while also ensuring that large deformations are included in the calculation.
For the automated creation of the vertical and horizontal sections, the analyzed cloud is clustered according to a user-defined number of 2D grid cells. The deviations corresponding to each cell are averaged in mean and median mode.
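The clustering-and-averaging step can be sketched as follows. This is a hedged, simplified stand-in for the Volvox-based definition: points are assumed to be already projected to (u, v) coordinates on the facade plane, and the function name and sample data are invented.

```python
import statistics

def cluster_deviations(points, deviations, nx, ny, w, h):
    """Bin (u, v) facade points in [0, w) x [0, h) into an nx-by-ny grid
    and return, per occupied cell, the (mean, median) of its deviations."""
    cells = {}
    for (u, v), d in zip(points, deviations):
        key = (int(u / w * nx), int(v / h * ny))
        cells.setdefault(key, []).append(d)
    return {k: (statistics.mean(ds), statistics.median(ds))
            for k, ds in cells.items()}

stats = cluster_deviations(
    points=[(0.1, 0.1), (0.2, 0.3), (5.0, 5.0)],
    deviations=[1.0, 3.0, 10.0],
    nx=2, ny=2, w=10.0, h=10.0)
print(stats[(0, 0)])  # (2.0, 2.0): mean and median of the two points in cell (0, 0)
```

Reporting both mean and median per cell is useful because the median is robust to the occasional outlier point (scanner noise) while the mean reflects the overall deformation of the cell.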
The process is displayed mostly in real time, with some speed-up in parts; overly long calculations have been omitted during video editing. The setup is responsive: benchmarks show that switching between dense point cloud data sets and facades is quite quick (6.5-7.5M points, 25-45 deviation steps, 44x22 clusters), and updates are calculated in acceptable time (3-6 minutes).
I would like to thank Heumann A. and Zwierzycki M., who provided direct support with HumanUI and Volvox, and Grasshopper3d forum users Maher S. and Segeren P., who contributed Rhino viewport manipulation scripts.
More on Volvox:
http://papers.cumincad.org/cgi-bin/works/Show?_id=ecaade2016_171&sort=DEFAULT&search=ecaade%20volvox&hits=2629
http://www.food4rhino.com/app/volvox
http://duraark.eu/
HumanUI:
http://www.food4rhino.com/app/human-ui?page=1&ufh=&etx=…
in App store.
2. Modelo now supports VR! Check out this video:
3. We've added a specular option in the rendering settings, so now your design can be rendered a little bit shinier.
4. There is also a "filters" option in this panel, with which you can get some interesting image post-processing effects. We are expanding this filter library, so if you have any suggestions, please let us know.
5. This one is very important and has been requested by our customers for a long time. Now when you upload a model, you can grab the reviews (3D comments, screenshots, sketches) from your previously uploaded model! This is really convenient if you use Modelo for your design review/presentation, because you don't have to recreate the same 3D anchor views every time you make a change to your design.
6. Also, our developer API is almost ready, which means if anyone is interested in developing a grasshopper plugin that works with Modelo, they can!
There are many other updates and bug fixes that I won't list here, so definitely stay subscribed to our newsletter. Modelo is striving to grow into a more comprehensive platform! If you have any good ideas about our platform, please do not hesitate to let me know!
Here is our Youtube channel: https://www.youtube.com/channel/UCufBShhLtUQepsit9ilI-AA
Cheers
Qi…
Added by Suqi to Modelo at 1:24pm on October 18, 2016
up before you can produce a nice render. If you are using vray for Rhino you need to first learn how to set up (as an architect) a nice solar daylight system with an environment, which is actually very easy. (1 - set up sun lighting, 2 - set up the environment, 3 - choose correct settings, such as activating indirect illumination.)
However, since sketchup is the perfect draft tool for architectural design, it happens to have an environment with daylight defined already when you open an empty file. Vray for sketchup knows how to use all these settings so the only thing you need to do is to hit render. Apart from that you need to learn some simple material settings, which you find here: http://www.vray.com/vray_for_sketchup/manual/, the same manual for rhino here: http://www.vray.com/vray_for_rhino/manual/
The advantage of using vray for sketchup rather than for rhino (although if you can handle vray for one program it's exactly the same for the other) is that you can easily import models from 3D Warehouse. Sketchup is an excellent render set-up platform, except it's only 32-bit, so a too complex scene will simply not render. Rhino 64-bit will handle this better.
Conclusion: learn vray; whatever you learn can be applied to sketchup, rhino and 3ds max. Sketchup is probably a tool you already use, and vray for sketchup will render with correct settings by default. Later, when you take it to the next step, you can go on and learn vray 2.0 for 3dsmax.
Personally I like using Luxology render engine that comes with Microstation, simply because I handle it better and Microstation is the best tool for architects in my opinion. However Vray is similar but more powerful.…
Added by Martin Hedin at 4:11pm on October 21, 2011
something in 3d, explode it to single surfaces, reference it to GH in proper order -manually- then unfold it with gh).
To make it really elegant you could try to make some "topology language" - have you seen this talk by Robert Lang http://www.ted.com/talks/lang/en/robert_lang_folds_way_new_origami.... ?
You can always make only a few parametric types of structures - like a leg, a hand etc. (this is much easier than Mr. Lang's approach) - which can change their sizes while the topology stays the same.
Besides - your sandwich looks really good; I made something similar before.... Have you tried thin PE (polyethylene) sheets? It's similar to PP (polypropylene) but a little bit softer. PP is commonly used as a tic tac box cap ( http://www.absolutelynarcissism.co/wp-content/uploads/2011/09/Tic-T... ) and some say it can fold/unfold about 1,000,000 times. It would really simplify the whole production (just one CNC router needed to obtain the full structure). Of course, bending it will require prefabrication to look like e.g. http://www.grasshopper3d.com/video/the-swarm-2012 by Mr. Wieland Schmidt.
To clear things up :
1. It certainly can be done with Rhino/GH.
2. You should write some more on how it should all work (what you provide as geometry).
3. You should also provide some more info on how the 2D drawing looks now.
EDIT: I forgot about kinematics - use Kangaroo. It now has forces like bending resistance etc.
…