Hi Mostapha et al
I need to do a view analysis from a floorplate to a specific building but am having some issues.
Geometry - I have offset the finished floor surface 1650mm to be at eye level.
Context - I have a mesh of the geometry to analyse. This is a city model.
View points - These are the vertices of the Sydney Harbour Bridge mesh.
The problem is that I cannot use a surface as the input to '_geometry'. What I need is a surface which isn't self-occluding. If I extrude the surface to create a brep, the top surface will be coloured accurately, while the bottom surface will be blocked and shown as all 0. Controlling the normal of the surface and meshing it doesn't work either, because sometimes the view points will be above and other times below. Basically, the '_geometry' isn't really geometry but a series of eye positions. Is that clear?
Any ideas how to solve? View type doesn't really help because I want to focus on just one building (Sydney Harbour Bridge) rather than the entire city context geometry.
This is something that I had been struggling with as well when I was adding different viewTypes to this component a couple of months ago. Some of the viewTypes go hand-in-hand with having the geometry block the view (like skyView) but many others do not. While I originally thought that I should just set different "geometry blocking" values based on viewType, your post here has convinced me that this really needs to be exposed as an input on the component. As such, I just added the ability to change whether the _geometry blocks the view or not:
I also changed the default for the viewPoints analysis to not have the _geometry block the view since this is usually what I want to do as well.
See attached file for the new component.
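For intuition, here is a minimal plain-Python sketch (not the Ladybug source) of how a "geometry blocks view" toggle changes a visibility count. The sphere occluders and all names are illustrative only:

```python
def _ray_hits_sphere(origin, target, center, radius):
    """True if the segment origin->target passes through the sphere."""
    ox, oy, oz = origin; tx, ty, tz = target; cx, cy, cz = center
    dx, dy, dz = tx - ox, ty - oy, tz - oz
    fx, fy, fz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (fx * dx + fy * dy + fz * dz)
    c = fx * fx + fy * fy + fz * fz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return False  # ray line misses the sphere entirely
    root = disc ** 0.5
    # check both intersection parameters against the segment bounds
    return any(0 < t < 1 for t in ((-b - root) / (2 * a), (-b + root) / (2 * a)))

def visible_fraction(test_pt, view_pts, occluders, geometry_blocks=True):
    """Fraction of view points visible from test_pt.

    occluders is a list of (center, radius) spheres standing in for
    context geometry; when geometry_blocks is False they are ignored,
    mirroring the new component input.
    """
    seen = 0
    for vp in view_pts:
        blocked = geometry_blocks and any(
            _ray_hits_sphere(test_pt, vp, c, r) for c, r in occluders)
        if not blocked:
            seen += 1
    return seen / float(len(view_pts))
```

With the toggle off, the same test point sees every view point; with it on, any point whose sight line passes through an occluder is dropped.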
Thanks Chris that's fantastic. You guys are so speedy!
As an extension to this, I have another problem which I'm sure you have already come across. How do you generate view points? For example, if I have the Sydney Opera House that I want to evaluate, I could mesh it, extract the vertices and use these as the input. If I don't include the building as context, it essentially becomes transparent, making all points visible, which is not correct. However, if I do include the building as context, the maximum points visible will only be 50%, as half will be on the back face. So again, this skews the results because it is impossible to see those points in the first place. So what I think is needed is a cull of all view points so that only those that are capable of being viewed are analysed. If all of these can be seen, then the result is 100%. Is there a Ladybug component for this already, or do we need to generate this ourselves?
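One way to script that cull is a back-face test: a vertex can only possibly be seen if its normal points toward the observer region, i.e. the dot product of the normal with the vector from the vertex to the observers is positive. A hedged plain-Python sketch, assuming you have exported vertex positions and normals (e.g. from a Deconstruct Mesh component); the names are illustrative:

```python
def cull_back_vertices(vertices, normals, observer_centroid):
    """Keep only vertices whose normal points toward the observers.

    vertices and normals are parallel lists of (x, y, z) tuples;
    observer_centroid is a representative point for the viewing area.
    """
    ox, oy, oz = observer_centroid
    visible = []
    for (vx, vy, vz), (nx, ny, nz) in zip(vertices, normals):
        # vector from the vertex toward the observer region
        tx, ty, tz = ox - vx, oy - vy, oz - vz
        # positive dot product -> normal faces the observers -> keep it
        if nx * tx + ny * ty + nz * tz > 0:
            visible.append((vx, vy, vz))
    return visible
```

Using the centroid of the viewing floorplate is a simplification; for a long floorplate you might test against several sample points and keep a vertex if any of them pass.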
Also, clearly the meshing of the brep is important if we are extracting its vertices. I notice that Ladybug has a neat feature: when it generates the analysis mesh, the faces are roughly the size of the grid size input. This is significantly better than GH's native Mesh Brep component. Where is this function located - in Ladybug_Ladybug or the View Analysis component? I would like to expose it to help solve the above issues.
Writing a small script with native GH components to cull vertices to just one side is one way to do it. Turning the geometry into a 2D surface (like the example in my last post) is another. Personally, I find myself using the following method a lot: I just take the highest value of the view study (typically around 50%) and use it to re-scale all of the results. I can then use the recolor mesh component to have the results display with the correct legend like so:
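That re-scaling step is just a division by the study's maximum; a small illustrative sketch (the function name is mine, not a Ladybug component):

```python
def rescale_view_results(results):
    """Re-scale view percentages so the best-performing point reads as 100.

    If the maximum achievable visibility is ~50% (half the view points
    sit on the back face), dividing by the peak restores a 0-100 range
    that can be fed to the Recolor Mesh component with a matching legend.
    """
    peak = max(results)
    if peak == 0:
        return list(results)  # nothing visible anywhere; leave as-is
    return [100.0 * r / peak for r in results]
```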
The meshing of the brep happens with just a few lines of code inside Ladybug_Ladybug:
import Rhino as rc
aspectRatio = 1
meshParam = rc.Geometry.MeshingParameters.Default
# apply the component's grid size input as an edge-length target
meshParam.MinimumEdgeLength = gridSize
meshParam.MaximumEdgeLength = aspectRatio * gridSize
meshParam.GridAspectRatio = aspectRatio
mesh = rc.Geometry.Mesh.CreateFromBrep(inputBrep, meshParam)  # returns an array of meshes, one per brep face
You should be able to get the same functionality with native Grasshopper components by using the "Custom Mesh Settings" component like so:
Hope this helps,