e= -3
e = -4
Setting exposure values is a "post-processing" step applied to improve the display of the image on a device or in a photograph, so your analysis will still be physically based. I think towards the end of his video Mostapha talks about how the exposure values can be set through Honeybee....
…
by anything else (like walls, ceiling etc.). So, specifying ambient parameters in the simulation does not serve any purpose.
2. Get rid of source subdivision by setting -ds to 0. This will make Radiance treat your source as a point source (which seems to be a valid assumption in your case).
3. Finally, you can cut down the time even more by magnifying the candela value of a single light fixture instead of using the 9 fixtures that you have right now. This is a hack, but will work because that way, even the deterministic part of the calculation will feature only 1 variable instead of 9. You can make this work by increasing the candelaMultiplier of that one source by 9.
I have attached a gh file (which shows the parameter options) as an example. The points I have made above will mostly work for electric lighting only. Daylighting needs a different kind of optimization because we are dealing with parallel rays from the sun rather than a source with a directional distribution like electric lights.
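To make the parameter changes above concrete, here is a minimal sketch in plain Python of how those options might be assembled into a Radiance call. The choice of rpict as the renderer and the octree name "scene.oct" are placeholder assumptions, not taken from the attached gh file:

```python
# Sketch: assembling the Radiance options suggested above for a fast,
# direct-only electric-lighting run.
def radiance_options(direct_only=True, point_sources=True):
    opts = []
    if direct_only:
        opts += ["-ab", "0"]   # skip the ambient (inter-reflection) calculation
    if point_sources:
        opts += ["-ds", "0"]   # no source subdivision: sources act as points
    return opts

# Tip 3: one fixture standing in for nine identical ones.
candela_multiplier = 9

cmd = ["rpict"] + radiance_options() + ["scene.oct"]
```

In practice Honeybee builds this command for you; the sketch only shows which flags the tips above correspond to.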
The image below is base-case. Takes the maximum time, using default settings.
This one is with direct-only calculations.
This is the case where a single fixture was used and the candela multiplier was used to magnify its brightness 9 times. It took the least amount of time....but as you can see, there is a perceptible difference between this one and the base-case.
…
he time to work with it.
The project is about facade strips that twist along their height. The top angle is
parallel to the facade and the bottom is twisted by a maximum of 90 degrees, but the strips
should twist differently to achieve a more dynamic look.
First I tried to achieve this by calculating the rotation angle from the distance between points of the grid and a single point.
Then I tried to add some more attractor points and used the distance to the divided surface (the circles are just there to control the area of effect):
I lofted it manually.
The result is a bit annoying because the points that affect the angle are always visible:
I tried to solve this by drawing a line and dividing it to get points along the bottom of the geometry. The result is not working properly:
Anyway,
There must be a better/smoother way to achieve this. I would like to drive the twist of the surfaces by the distance to a spline, but I'm just lost. Can you help me, please?
The problems I'm encountering:
0- distance from spline to grid to drive the angle
1- list of x/y coordinates and the angle of rotation for each point of the grid
2- export points to Excel
3- lofting lines in one direction only (x1, x2, x3...)
4- reducing the list data to 2 decimals (0.00)
5- maybe converting the angle from radians to degrees
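Points 0, 4 and 5 can be sketched in plain Python without any Rhino dependency. In a real definition the distance `d` would come from something like Curve.ClosestPoint against the attractor spline; the falloff radius `d_max` and the linear mapping are assumptions for illustration:

```python
import math

# Sketch of the distance-to-angle mapping: points on the spline get the
# full twist, points d_max or further away get none.
def twist_angle(d, d_max, max_angle_deg=90.0):
    t = max(0.0, min(1.0, 1.0 - d / d_max))  # normalize and clamp to 0..1
    return round(max_angle_deg * t, 2)        # point 4: two decimals

def rad_to_deg(a):
    """Point 5: radians to degrees."""
    return math.degrees(a)
```

Each grid point would then be rotated by `twist_angle(distance_to_spline, d_max)` before lofting.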
thx…
he process. The last one is there because fixing it would cause another problem, which we feel is more serious. Solutions may well be forthcoming in the future though.
1. Grasshopper curves and points are drawn slightly closer to the camera than they really are. This is a conscious decision. Often Rhino geometry and Grasshopper geometry exist in the same place. If we drew the Grasshopper preview in place, there would be no telling whether you'd see the Rhino curve or the Grasshopper curve. We feel it's important that you always see the Grasshopper curve on top, which is why we draw all curves and points slightly towards the camera. However, we don't do this for meshes. This results in something akin to the image below. The eye represents the location of the viewport camera, the shaded box represents the actual location of the geometry, and the thick black lines represent the edges of the geometry moved towards the camera. As you can see, the red lines will be visible even though they should be behind the shaded box. This effect can get very strong when the camera is close to some geometry relative to the size of the bounding box of all geometry.
2. Wires behind the camera are sometimes visible. This is a bug I don't know how to solve. We'll get around to it eventually. When an object is behind the camera the display transform sometimes makes it visible in front of the camera in some weird inverted perspective mode.
3. Meshes are not z-sorted prior to display. This means that the order in which they are drawn is not back-to-front, but fairly arbitrary. This means that a transparent mesh may appear to punch a hole in the mesh behind it. If this is annoying you to no end, you can use Ctrl+F on the Grasshopper components that contain the meshes that are punching holes and then press F5 to recompute. The draw order should now be different. Of course sometimes it will only 'fix' it for a specific camera angle.
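For readers unfamiliar with z-sorting, here is a sketch of the painter's-algorithm ordering that point 3 says Grasshopper skips. This is not Grasshopper's actual display code; meshes are reduced to centroid points purely for illustration:

```python
import math

# Back-to-front ordering: draw the farthest mesh first so that nearer
# transparent meshes are painted over it correctly.
def back_to_front(centroids, camera):
    return sorted(centroids, key=lambda c: math.dist(c, camera), reverse=True)

order = back_to_front([(0, 0, 1), (0, 0, 5), (0, 0, 3)], camera=(0, 0, 10))
```

Without this sort, the draw order is arbitrary, which is what produces the "punched hole" effect with transparent meshes.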
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
try to optimise the form; maybe I will use Rhino Python. I will try to continue this script to pass through a potential-flow method, which will modify the hull shape and find the optimum shape. For now there is a problem: OpenFOAM works only on Linux, and I am looking for another free alternative for CFD, just to investigate the resistance quickly, so I can deform and modify the shape. Later, with the deformed shape, I will do 3 passes of deform, compare, deform with a fast potential solver. The last two shapes I like could then pass through a viscous solver so I can compare difference, cost, performance and production. Does somebody know such a free potential-flow code for Windows I could try? After getting a free CFD code, the IDEA and question will be how to connect the outputs of the CFD with deformations in Rhino: taking the PressureXYZ vector fields from the CFD as a table with the coordinates of each point and the amount of deformation in each direction, and bringing them into Rhino (in scalar form, in a table with the position of each control point, the amount of deformation and the direction). Then Rhino nudge commands could be used to deform the body: deforming the hull using control points, or control curves with soft-body deformation. Maybe there will be a dense cage where the highest value in the table is, soft-deforming the rest of the points proportionally. There should be two limiting lines (which the user will specify in top view) up to which the deformation can be done, for the cylindrical part, so the curvature stays smooth with G2-G3 continuity up to the points where the optimised stern and bow meet the untouched cylindrical part. At the end there will be an optimised hull form with the lowest pressure (= resistance) under some constraints (e.g. the limiting lines for the untouched cylindrical part) for the current LBD and displacement, with the possibility to get a lines plan. Later, the beginning shape can be connected with tables of empirical or statistical formulas from the rules for a specific ship.
If somebody wants to join this quest, they are more than welcome.
Here is an example picture which will show me where to dig and where to add material for the deformation of the body: …
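The proposed CFD-to-Rhino handoff amounts to reading a table of control-point coordinates plus a deformation vector per point. Here is a minimal sketch of that step in plain Python; the column names and the simple scale factor are assumptions, not the format of an actual OpenFOAM export:

```python
import csv
import io

# Sketch: parse a table of (point, deformation vector) pairs. In Rhino,
# each pair would drive a control-point nudge / soft-body move.
def read_deformations(csv_text, scale=1.0):
    rows = csv.DictReader(io.StringIO(csv_text))
    moves = []
    for r in rows:
        p = (float(r["x"]), float(r["y"]), float(r["z"]))
        d = (scale * float(r["dx"]), scale * float(r["dy"]), scale * float(r["dz"]))
        moves.append((p, d))
    return moves
```

The `scale` parameter stands in for whatever mapping is chosen between pressure and displacement; that mapping is the real open question in the idea above.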
r ideal surface so they add up where lots of points or lines cluster and create rather unintuitive bulges from a 3D modeler's perspective, here done with Millipede's Geometry Wrapper:
I've learned to do marching tetrahedra or cubes in Python to create the surface as needed from an implicit (f(x,y,z) = 0) mathematical equation based on raw trigonometry, but am not yet sure how to define an equation for Rhino user-created input items like this, or find a way to make marching cubes accept such input, let alone one that doesn't treat each geometry item as an electric charge with so little decay.
This would afford an old-school "organic" modeling paradigm that T-Splines replaced, but the T-Splines pipe command can't do nearby lines right either; it just makes overlapping junk. Metaballs on points and lines are not as elegant, in that there is a real "dumb clay" aspect to the result that affords little natural structure beyond just smoothing, but still, if it works at all that beats T-Splines, and then I can feed the crude mesh result into Kangaroo MeshMachine for surface-tension relaxation that will add elegant form to it.
I need both quick hacks and some help on how to deeply approach the mathematics of the required isosurface, now that I can think in Python better than ever.
I got a hint the other day here, about using a different power of fall-off but am not sure how to do the overall task mathematically:
"and just as with point based potentials, one can use different power laws for the distance, function, resulting it different amounts of rounding at the junctions. Below is with a 1/d^3 law for comparision with the above 1/d" - Daniel Piker
http://www.grasshopper3d.com/forum/topics/meshes?commentId=2985220%3AComment%3A1324050
He also included this link about bulging:
http://paulbourke.net/geometry/implicitsurf/
Am I supposed to create an actual implicit equation for my assigned points and lines and use that with marching cubes to surface it? If so, how do I define that equation, at all, and then how to control bulging too?
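One way to read the hint: yes, the implicit equation is just a sum of falloff contributions, one per input item, and marching cubes then extracts the level set f(q) − threshold = 0. Here is a hedged sketch in plain Python of such a field built from points and line segments, with the power `p` as the free parameter that (per Daniel Piker's note) controls bulging at junctions; the epsilon guard and the default p=3 are assumptions for illustration:

```python
import math

def dist_point_segment(q, a, b):
    """Distance from point q to the segment a-b (all 3-tuples)."""
    ab = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    aq = (q[0] - a[0], q[1] - a[1], q[2] - a[2])
    ab2 = ab[0]**2 + ab[1]**2 + ab[2]**2
    t = 0.0 if ab2 == 0 else max(0.0, min(1.0,
        (aq[0]*ab[0] + aq[1]*ab[1] + aq[2]*ab[2]) / ab2))
    closest = (a[0] + t*ab[0], a[1] + t*ab[1], a[2] + t*ab[2])
    return math.dist(q, closest)

def field(q, points, segments, p=3, eps=1e-9):
    """Implicit field: f(q) = sum of 1/d^p over all source points and segments."""
    f = sum(1.0 / (math.dist(q, pt) + eps)**p for pt in points)
    f += sum(1.0 / (dist_point_segment(q, a, b) + eps)**p for a, b in segments)
    return f
```

Marching cubes would then sample `field` on a grid and extract the isosurface at a chosen threshold; raising `p` makes the influence decay faster, which reduces the unwanted bulging where items cluster.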
…
// <Custom additional code>
Bob[] b = new Bob[] {new Bob(1), new Bob(2), new Bob(3)};
class Bob{....
}
// But how to make something like this in a loop?
// <Custom additional code>
Bob[] b = new Bob[10];
for(int i = 0; i < b.Length; i++){
  b[i] = new Bob(i);
}…
housing for an LED PCB. The object is a parametric series of discs with an opening inside, made up of a mirrored curve [drawn in Rhino, mirrored in GH].
It is made up of N discs, whose spacing can be varied through the distance between the circular outlines using a divided curve [a straight line in GH]. The length of the object can be varied using a length parameter, and the shape using a graph mapper.
I've chosen to cap the end two discs by creating two sets of outlines. One set has the central aperture cutout for the PCB, whereas the other set is a trimmed circle [achieved using the "trim box" layer profile in Rhino]
I then cull the outer two curves from one array, and the inner N-2 curves from the inner array.
The final outcome I am after is to create the housing as both an STL and a 2d template for laser cutting. This is a learning exercise for me as well as a cool project.
I had it working OK, but then I adjusted the profile for the PCB and joined it, and now it is giving me some grief. I am sure the answer is obvious. The problem is that the PCB profile is made up of 3 polycurves, whereas the disc profile is one planar curve. I have no problem using the flatten option so that only N sets of curves come out of the "Join Curves" component. However, when I cull the curves, the planar curves making up the exterior edge cull fine, but the interior curves [the joined PCB profile] cull in a different [irrational?] order to what I would expect. If I connect the single planar curve to the cull section, it works fine, but the joined-line section just won't play.
In the instance uploaded, N = 10, based on spacings. Index-wise it appears that 0 and 9 are diagonal to what I'd expect, although if you fiddle with the values they go all over the place.
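One common way to tame an unpredictable order after Join Curves is to impose a deterministic sort before culling, e.g. by curve centroid, so that Cull Index always removes the items you expect. A sketch of the idea (centroids are plain (x, y) tuples standing in for Rhino curves; the angular sort key is one possible choice, not from the attached file):

```python
import math

# Sort counter-clockwise around an origin, then by distance from it,
# so the list order no longer depends on how Join Curves happened to
# emit the curves.
def sort_key(centroid, origin=(0.0, 0.0)):
    dx, dy = centroid[0] - origin[0], centroid[1] - origin[1]
    return (math.atan2(dy, dx), math.hypot(dx, dy))

centroids = [(1, 0), (-1, 0), (0, 1), (0, -1)]
ordered = sorted(centroids, key=sort_key)
```

In Grasshopper the same effect is usually achieved with Sort List fed by a per-curve key (e.g. centroid coordinates from Area).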
Can someone please help me and explain what I did wrong? Files are attached... I have screen grabbed the relevant section, but it is grouped in red and labelled as "problem child" :)
Many thanks for your help, sorry if this looks like a clusterf**k, first time for everything... any advice very much appreciated, not just relating to my problem.
All the best
Nick…
rtical Sky Component (VSC), and now Sky Exposure Factor (SEF). For everyone else following this post, this discussion has been ongoing in these other threads:
http://www.grasshopper3d.com/forum/topics/sky-view-factor-vs-vertical-sky-component?groupUrl=ladybug&xg_source=msg_com_gr_forum&groupId=2985220%3AGroup%3A658987&id=2985220%3ATopic%3A1377260&page=1#comments
https://github.com/mostaphaRoudsari/ladybug/issues/230
Grasshope, you have gone right to Oke, the grandfather of urban climatology, whose papers I have read several times, and yet somehow I always missed the finer details of the sky view calculation. From his definition, I had always thought of Sky View Factor as a purely solid-angle or "view factor" calculation in the sense of Mean Radiant Temperature. However, the numbers and formulas that you give here clearly show that Oke meant that this metric for quantifying and understanding urban heat island must refer back to the urban surfaces and their orientation in relation to the sky. It cannot simply be the view from points in space.
To clarify the distinction in simple geometric terms: the key difference is that Sky Exposure refers to the sky seen by a point in space, while Sky View refers to that seen by a surface. Both of them involve the calculation of either projected rays or solid angles to the sky (since they both are “view” calculations). However, while Sky Exposure treats each patch of the sky with relatively equal weight, Sky View weights these patches by their area after being projected into the plane of the surface being evaluated. In other words, the sky view calculation for a horizontal surface would give more importance to the sky patches that are directly overhead than those near the horizon, because these overhead patches are “in front” of the surface (as opposed to on the side).
To express this difference in the trigonometric terms you cite here:
Wall View = 0.5 (sin²θ + cos θ − 1) / cos θ
Wall Exposure = θ / π
In both cases:
θ = tan⁻¹(H / 0.5W)  ** This is the solid angle or ray-tracing calculation
SkyViewOrExposure = 1 − 2 (WallViewOrExposure)
To put this in simpler terms for the View Analysis component, all that I actually have to do to convert sky exposure to sky view is multiply each of the traced view rays by 2cos(ϕ), where ϕ is the angle between the surface normal and the given view ray being traced.
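The 2cos(ϕ) weighting can be checked with a quick numerical sketch. This is not Ladybug's actual code; the hemisphere sampling scheme and the all-visible sky are assumptions chosen so that both metrics should come out to 1 for an unobstructed horizontal surface:

```python
import math
import random

random.seed(1)

def sample_hemisphere(n):
    """Directions uniformly distributed (by solid angle) over the upper hemisphere."""
    dirs = []
    for _ in range(n):
        z = random.random()                 # cos(phi) for a vertical normal
        theta = 2 * math.pi * random.random()
        s = math.sin(math.acos(z))
        dirs.append((s * math.cos(theta), s * math.sin(theta), z))
    return dirs

def sky_metrics(dirs, visible):
    """visible[i] is 1 if ray i reaches the sky, else 0 (surface normal = +Z)."""
    n = len(dirs)
    exposure = sum(visible) / n                                   # unweighted
    view = sum(2 * d[2] * v for d, v in zip(dirs, visible)) / n   # 2cos(phi) weight
    return exposure, view
```

Because cos(ϕ) averages to 1/2 over a uniformly sampled hemisphere, the factor of 2 keeps an unobstructed sky view at 1, matching the sky exposure value.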
I have done this by adding a line of code, and I have verified that I get the values from Oke’s paper that you cite above, Grasshope. Accordingly, the View Analysis component now has the option to compute either Sky Exposure or Sky View. You can see this happening in this new example file:
http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Sky_Exposure,_Sky_View,_and_Sky_Component&slide=0&scale=1&offset=0,0
To (once and for all!) clearly define the difference between the three metrics at the top of my reply and to explain how to calculate each with Ladybug Honeybee:
Sky Exposure Factor - The percentage of the overlying hemispherical sky that is directly visible from a given POINT or set of POINTS. This is equivalent to a geometric solid angle calculation or ray-tracing calculation from points. It is useful for evaluating one's general visual connection to the sky at a given point and should be applied to cases where direct views to the sky are the parameter in question.
Sky exposure is calculated with the Ladybug_View Analysis component like so:
Sky View Factor – The percentage of the overlying hemispherical sky that is directly visible from a given SURFACE or set of SURFACES. While Sky Exposure treats each patch of the sky with relatively equal weight, Sky View weights these patches by their area projected into the plane of the surface being evaluated. In other words, Sky View for a horizontal surface would give more importance to the sky patches that are overhead and less to those near the horizon. Sky View is an important factor in modelling urban heat island, since the inability of warm urban surfaces to radiate heat to a cool night sky is one of the largest contributors to the heat island effect.
Sky View is calculated with either the Ladybug_View Analysis component like so:
Or with the Honeybee_Vertical Sky Component Recipe like so:
Sky Component - The portion of the daylight factor (at a surface indoors) contributed by luminance from the sky, excluding direct sunlight. This is essentially the same as Sky View Factor but it often incorporates a sky condition that is not uniform, such as a cloudy sky or sky that is more indicative of diffuse sky light. Another way of conceiving of this metric is a Daylight Factor calculation without any light bounces. It is useful for understanding the direct daylight contribution of diffuse skylight and, although many consider it an older (and perhaps outdated) daylight metric, it is still required by some codes and standards.
Sky Component can be calculated with the Honeybee_Vertical Sky Component Recipe like so:
In addition to the added capability in the view analysis component, I have revised the component description to include the definitions above. I have also corrected the Hydra example file in which I cite sky view as an urban heat island metric to use the new formula:
http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Sky_View_in_an_Urban_Canyon&slide=1&scale=1&offset=0,0
Finally, all of this discussion has made me realize that the Vertical Sky Component recipe for Honeybee might not always be evaluating VERTICAL sky. The sky component might be vertical, horizontal, or in any direction, depending on where the input test surface is placed and how the pts vectors are oriented. Accordingly, Mostapha, I think that we should change the name of the component to simply be “Sky Component” instead of “Vertical Sky Component”. Please let me know if you agree.
Thanks again, Grasshope, for all of the great work! All of this never would have made sense without your research.
-Chris…
greatly appreciate it!!
You can write the number of each question with your answer next to it, for example:
1) a
2) c
3) a) Washington University in St. Louis
4) 2 weeks (1week+1week shipping)
5) 130
6) b
7) b
The survey questions are as follows:

1) Did you 3D print before?
a. Yes, for a school project
b. Yes, for a personal project

2) Print size
a. Between 2 & 6 cubic inches
b. Between 6 & 12 cubic inches
c. Between 12 & 20 cubic inches
d. Please specify if otherwise: ____ cubic inches

3) Where did you print your object?
a. School
b. Outside school: _________________

4) How long did it take to print?
a. ___ days
b. ___ weeks

5) How much did it cost (in dollars)?
a. Between 20 & 50
b. Between 50 & 80
c. Between 80 & 120
d. Please specify if otherwise: _____ dollars

6) Do you think the price was expensive?
a. Not at all
b. A little bit expensive
c. Very expensive

7) Were you satisfied with the printed object?
a. Yes, it was a great print without problems
b. Not bad, some issues
c. I was not satisfied, very bad quality
Thank you very much to all!!
PS: If you did many 3D prints, you can post multiple answers.
Wassef…