iple strands. If a curve is closed, it simply means that the start and end points are in the same location; the curve is not really a closed loop without a start or end.
When you define a curve in this way it becomes possible to say that each curve has a 'domain' in which it exists. The start of the domain represents the start of the curve and the end of the domain the end of the curve. A domain is a numeric range, say {0.0 to 12.67}. The curve is undefined for numbers outside of the domain. The numbers inside the domain are called 'parameters' and are usually represented with the symbol t. It's important to realize that parameters have little or nothing to do with length. The domain of a curve may well go from zero to the length of the entire curve, but that doesn't mean that when you evaluate the curve at t=1.5 you'll get a point 1.5 units along the curve. 'Parameter density' is not guaranteed to be constant, meaning that if you walk along the curve at fixed parameter intervals, your speed in 3D space will not be constant.
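You can see non-constant parameter density in plain Python with a quadratic Bézier curve standing in for any parametric curve (the names here are illustrative, not the Grasshopper or Rhino API):

```python
# Evaluate a quadratic Bezier curve at parameter t (domain 0.0 to 1.0).
# Stepping t at fixed intervals does NOT produce evenly spaced points.

def bezier2(p0, p1, p2, t):
    """Point on a quadratic Bezier at parameter t."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Control points chosen so the parameterization is strongly non-uniform.
p0, p1, p2 = (0.0, 0.0), (9.0, 0.0), (10.0, 1.0)
ts = [i / 10.0 for i in range(11)]
pts = [bezier2(p0, p1, p2, t) for t in ts]
steps = [dist(pts[i], pts[i + 1]) for i in range(10)]
print(steps)  # equal t-steps, but unequal distances in space
```

The printed chord lengths shrink dramatically toward the end of the curve even though the parameter steps are all 0.1, which is exactly what "parameter density is not constant" means.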
Curve parameters are usually closely related to the mathematical definition of a particular type of curve. For Lines the parameters tend to go from zero (start) to one (end) regardless of the actual length of the line. Circles tend to have parameter domains from zero to 2*Pi as circles are described using Sine and Cosine functions. Nurbs curves are defined by a set of consecutive polynomials and the parameters are related to the variables in those polynomials.
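To make those two conventions concrete, here is a plain-Python sketch (illustrative only, not the Rhino SDK):

```python
import math

def eval_line(a, b, t):
    """Line from a to b, domain {0.0 to 1.0} regardless of actual length."""
    return tuple(a[i] + t * (b[i] - a[i]) for i in range(len(a)))

def eval_circle(center, radius, t):
    """Circle, domain {0.0 to 2*Pi}: described with cosine and sine."""
    return (center[0] + radius * math.cos(t),
            center[1] + radius * math.sin(t))

print(eval_line((0, 0), (0, 80), 0.5))    # midpoint, even for an 80-unit line
print(eval_circle((0, 0), 5.0, math.pi))  # halfway around the circle
```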
Long story short, parameters are the only way in which you can measure/sample/evaluate a curve at a specific location. You can measure the location of a curve at a parameter, but also the tangent vector and the curvature etc. etc. It is possible to compute what parameter represents the curve at a certain distance from the start-point, but that computation can be quite complicated and expensive (from a processor point of view), which is why most curve evaluation components use curve parameters as input.
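One common way to do that distance-to-parameter computation is to sample the curve densely, accumulate chord lengths, and interpolate. A self-contained sketch (the half-circle `evaluate` is just a stand-in for any curve; for a circle the parameter happens to track arc length, but the machinery works for any parameterization):

```python
import math

def evaluate(t):
    """Sample curve: a half circle of radius 10, domain {0 to pi}."""
    return (10.0 * math.cos(t), 10.0 * math.sin(t))

def param_at_length(target, domain=(0.0, math.pi), samples=2048):
    """Approximate the parameter at a given arc length from the start."""
    t0, t1 = domain
    total = 0.0
    prev = evaluate(t0)
    for i in range(1, samples + 1):
        t = t0 + (t1 - t0) * i / samples
        pt = evaluate(t)
        seg = math.dist(prev, pt)
        if total + seg >= target:
            # Linear interpolation inside the last segment.
            f = (target - total) / seg
            return t0 + (t1 - t0) * (i - 1 + f) / samples
        total += seg
        prev = pt
    return t1  # target exceeds the curve length

# A quarter of the half-circle's length (total length = 10*pi).
t = param_at_length(10.0 * math.pi / 4.0)
print(t)  # close to pi/4 here, since circle parameters track arc length
```

The loop over thousands of samples is the "expensive" part the post alludes to, which is why evaluation components prefer to take parameters directly.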
So when you use the Evaluate Curve component and you provide a parameter outside of the curve domain, the component will complain. If you provide a parameter inside the domain then you'll get the location and tangent vector of the curve at that parameter.
You never pick parameters by hand btw; they are almost always the result of a previous step in the algorithm.
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 10:53am on February 19, 2013
e no reply was given. If something is not clear, please do not hesitate to comment back; that way some other user might see your reply and post a solution.
A more detailed continuation of the reply mentioned above would be:
Find the Z coordinate of the area centroid of each of your curtain wall panels.
Then, by using the "windProfileHeight_" and/or "distBetweenVec_" inputs of the Ladybug Wind boundary profile component, find the wind speed (m/s) at each of the Z coordinate values from the previous step.
Convert those wind speed values into pressure (N/m2): p = (1.25*(windSpeed^2))/2.
Derive the point load force from the pressure (Newtons):
F = p * A (A being an area of a curtain wall panel in m2)
Fnode = F/4 ("disposing" the force F to each of the four support nodes of your curtain wall panel).
Apply all these Fnode forces to your panels which are oriented roughly normal to the wind direction. These will be your wind pressure forces. On the opposite side of the building, we can use a rule of thumb and apply the same Fnode forces divided by 2. These are your wind suction forces.
On lateral panel nodes (lateral with respect to the wind direction) you can apply forces with magnitudes of Fnode*(1 to 1.5). These are lateral suction forces.
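The arithmetic in the steps above can be sketched in plain Python (the function name is illustrative; 1.25 is the air density in kg/m3 used in the pressure formula from the post):

```python
def node_force(wind_speed, panel_area, nodes=4):
    """Point load per support node of a curtain wall panel.

    wind_speed in m/s, panel_area in m2; returns Newtons.
    p = (1.25 * v^2) / 2 is the dynamic pressure (air density 1.25 kg/m3).
    """
    p = (1.25 * wind_speed ** 2) / 2.0  # pressure in N/m2
    F = p * panel_area                  # total force on the panel in N
    return F / nodes                    # "disposed" to each support node

# Example: 20 m/s wind on a 2 m2 panel.
Fnode = node_force(20.0, 2.0)
print(Fnode)        # windward (pressure) node force
print(Fnode / 2.0)  # leeward (suction) rule of thumb from the post
```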
Bear in mind that wind load in structural engineering is a complicated subject. Most codes and equations currently in use come from very simplified 2D or 3D (cylinder, box) models. In reality, one would need to use CFD software for detailed analysis (Ansys, NX, SolidWorks...) or, if the project is actually going to be built, wind tunnel tests.…
nition. Using the RenderAnimation component from http://www.giuliopiacentino.com/grasshopper-tools/, I could do all of the above except for the Toon material part.
I have found a post regarding the same matter ( http://www.grasshopper3d.com/forum/topics/how-to-add-materials-to-material-table ), but since I am not very familiar with scripts, this is what I think his definition does. Correct me if I am wrong.
Since Rhino Vray only supports a Toon environment per material (unlike Max Vray, which has a global override feature),
1. import toon material from Rhino material editor
2. add colors to the toon material and make new toon materials with color (as many as needed)
3. import those new materials back into Grasshopper
4. match them with the designated geometries and render. (The RenderAnimation component by Giulio does this job.) Here is the final work he did: http://vimeo.com/34728433
Grasshopper + Vray from Marc Syp on Vimeo.
I am using rhino 4 with vray 1.5.
I have uploaded my definition, a simple one that transforms box height along with color as the frame advances. The definition works but the toon effect is not there.…
3d voronoi from which I extract a cell for each point. After this I need to compute the intersection between each cell and the surface.
I tried two approaches: one with "Brep|Brep intersection" and then "Curve splitting", and another with "Split Brep".
The first solution (below) works perfectly, but the "Curve splitting" phase can be bloody slow! Sometimes it requires more than 1.5 minutes.
The "Split Brep" solution (below) is much faster, but for each cell it returns TWO surfaces: the one inside the Voronoi cell and the one outside. These are inside a tree with a branch for each cutting surface. The problem is that inside the branches, sometimes the first element is the inner surface and sometimes it's the outer.
Is there a way to use "Split Brep" and throw away the outer surfaces?
Do you suggest any other method?
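One possible way to automate the "throw away the outer piece" test: Voronoi cells are convex, so a piece lies inside its cell exactly when the piece's centroid satisfies all of the cell's face-plane inequalities. A plain-Python sketch of that test (illustrative, not Grasshopper components; in practice you would get the centroid from something like the Area component):

```python
def inside_convex_cell(point, planes, tol=1e-9):
    """True if point is inside a convex cell given as (normal, offset) planes.

    Each plane is (n, d) with the interior satisfying n . x <= d
    (normals pointing outward).
    """
    for n, d in planes:
        if sum(n[i] * point[i] for i in range(3)) > d + tol:
            return False
    return True

# Unit-cube cell: six outward-facing planes.
cube = [((1, 0, 0), 1), ((-1, 0, 0), 0),
        ((0, 1, 0), 1), ((0, -1, 0), 0),
        ((0, 0, 1), 1), ((0, 0, -1), 0)]

inner_centroid = (0.5, 0.5, 0.5)  # centroid of the piece inside the cell
outer_centroid = (1.5, 0.5, 0.5)  # centroid of the piece outside the cell
print(inside_convex_cell(inner_centroid, cube))  # True
print(inside_convex_cell(outer_centroid, cube))  # False
```

Filtering each branch with a test like this would let you keep only the inner surface regardless of its position in the branch.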
Thank you very much, and sorry for my terminology: I've been using GH for 2 days :/…
r graphics get saved as 24x24 pixel images before they are put into the grasshopper application, which means the icons look like crap when you zoom in. This is the aforementioned problem that needs to be addressed in GH2. There have historically been two approaches to this issue:
Provide pixel images with several sizes.
Render vector graphics directly.
Option 1 is common for apps that do not have variable levels of zoom, such as Windows Explorer. When Explorer shows file icons it shows them in 16x16, 32x32, 48x48, 96x96 or, these days, various HUGE sizes. As a result, *.ico files allow you to put in different images for all these target sizes. Since Grasshopper has variable zoom levels, this is not an ideal solution. Also, it requires a lot more work per icon.
Option 2 is becoming more and more popular as increased graphics speed now allows for the real-time rendering of vector graphics. Yet you still need a renderer that knows how to draw vector geometry crisply at small sizes. All vector renderers I know of just interpolate the geometry linearly, and if a line happens to end up 'between pixels' it's just fuzzy.
I don't have hard and fast rules for the icons, but I try to adhere to at least these:
Keep a border of 2 pixels free around the icon content. So basically only use the inner 20x20 pixels rather than the 24x24 you're allowed. This is needed because the drop shadow needs to go there.
Only draw silhouette edges around shapes, not inner creases. Typically a 1-pixel line will do. I prefer to use a dark version of the fill colour rather than black for edges.
Loose curves can be drawn in 1 or 2 pixel thicknesses, depending on how important the curve is.
Try to avoid text in your icons (not always possible).
Stick to 1 colour family per icon, preferably per icon family. You can add highlights with another colour if you must, but too many hues make an icon hard to read (for example, the [Voronoi] icon has red, green and blue and it's a bit of a mess; on the other hand, [Colour Wheel] has the full spectrum and seems to work quite well...).
Very roughly speaking, if there's both black and red geometry in an icon, it means the red is component input and the black is component output.
Drop shadows are pixel effects, applied to the 24x24 image. They have a blurring radius of 2 pixels, a horizontal offset of 1 pixel to the right, a vertical offset of 1 pixel to the bottom and they are 65% black.
When you use high-contrast shapes (for example black edges on a light background) the anti-aliasing provided by vector renderers such as Xara or Illustrator won't be enough to make it look smooth. I'd recommend avoiding high contrast if at all possible, but if that's not possible then draw a 1-pixel line around the dark bits in 95% transparent black. This effectively extends the anti-aliasing range from 1.5 to 2.5 pixels and helps make things look smoother.
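The drop-shadow numbers above can be sketched as a pixel operation in plain Python. This uses a crude box blur as a stand-in for the 2-pixel Gaussian blur, so it is purely illustrative of the recipe (blur the icon's alpha, offset 1 pixel right and down, 65% black), not the actual Grasshopper code:

```python
def box_blur(alpha, radius):
    """One box-blur pass over a 2D alpha grid (rough Gaussian stand-in)."""
    h, w = len(alpha), len(alpha[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        total += alpha[yy][xx]
                        count += 1
            out[y][x] = total / count
    return out

def drop_shadow(alpha, radius=2, dx=1, dy=1, strength=0.65):
    """Shadow layer: blurred icon alpha, offset right/down, 65% black."""
    blurred = box_blur(alpha, radius)
    h, w = len(alpha), len(alpha[0])
    shadow = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                shadow[y][x] = strength * blurred[sy][sx]
    return shadow

# A 24x24 icon with an opaque 8x8 square in the middle.
icon = [[1.0 if 8 <= x < 16 and 8 <= y < 16 else 0.0 for x in range(24)]
        for y in range(24)]
shadow = drop_shadow(icon)
print(shadow[17][17])  # soft shadow just below-right of the square's edge
```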
--
David Rutten
david@mcneel.com…
0.1 Webinar introduction
0.2 Installation of Ladybug for Grasshopper (+Rhino)
0.3 Getting started with Ladybug for Grasshopper (+Rhino)
0.4 Introduction to Environmental Design Analysis - process and methodology
_STEP 1 CLIMATE ANALYSIS (NO MODEL)
1.0 Introduction to Climate Analysis
1.1 Finding and importing weather data file
1.2 Sun Path
1.3 Temperature chart
1.4 Humidity chart
1.5 Wind Rose
1.6 Comfort Analysis based on weather data
1.7 Psychrometric Chart
1.8 Bioclimatic Chart
1.9 Customizing Analysis Period and Charts
_STEP 2A ANALYSIS OF EXISTING URBAN SPACES (WITH MODEL)
2a.0 Introduction to Analysis of existing Urban Spaces
2a.1 Import Context models from Rhino
2a.2 Radiation Rose
2a.3 Solar Fan / Envelope
_STEP 2B ANALYSIS OF NEW URBAN SPACES / DEVELOPMENT (WITH MODEL)
2b.0 Introduction to Analysis of new Urban Spaces
2b.1 Import new Urban Buildings and/or Elements from Rhino
2b.2 Parametric Grasshopper models
2b.3 Radiation Rose
-------------------
DANIEL NIELSEN
The Danish architect Daniel Nielsen has broad experience with Architectural Sustainability and the integration of parametric 3D modeling and simulation tools into the process. He has worked on projects at various scales - from buildings to planning - and has been involved in research and education programs at The Royal Danish Academy of Fine Arts and the Technical University of Denmark.…
grid size 3 = 2.7 mins
grid size 2 = ??? memory peaks and rhino freezes.
However, now that I have switched the unit of the Rhino file to feet,
now grid size 3 = 18 mins,
which makes sense, I suppose, since the analysis will have to work with a smaller tolerance.
The image below is what I got after 18 mins. I think the fact that I have joined the individual units with a solid union may also make it take longer. You can see the mesh triangulation not only around the corners of the masses but also in between different units (if you look at the top level you will see it).
oh, and I also have very little disk space left.
I would like to share the file, but right now it's a big mess and has a lot of stuff that is unrelated to this particular memory issue, like Revit interoperability and urban modelling. The definition is also set up so that it needs an Excel file that feeds what you see in the lower left corner, the wing mass scales. In order to compare design studies I am animating the Index of List component that feeds the different scales of the wings and the widths of the floor plates you see. You can see it in my video here. I will try to clean it up a bit when I get a chance, but it seems like grid size 3 might work as a starting point.
When I get around to extracting values from the mesh vertices and actually applying different facade designs driven by the parameters, I will know better what grid size might be necessary.
…
good value so 4 times would get you there.
Ok, I understood. You have to consider that the mesh displayed is generated by the genTestPoints component, where the grid size is effectively 0.5m:
In the pressure plot you shared above it seems like your cell size varies; are you using any gradient options in the blockMesh step? In this example there'd be no need, as all areas are equally important.
The pressure plot varies because I have modified the cell grid size of the meshParams component to (0.25, 0.25, 0.25):
Now, I have re-uploaded the basic values of (1,1,1) and modified the cell size in the blockmesh component as you suggested in the previous post:
By default, this component takes the length/5 as the cell size. So, in this case, there would be a cell size of about (1.5, 1.1, 0.6). Is that correct? But if I display the mesh with the load mesh component connected to the case output of the snappyhexmesh component, I find this:
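For what it's worth, that length/5 default reads like this in plain Python. The domain dimensions here are hypothetical, back-computed from the (1.5, 1.1, 0.6) figures, and the function name is illustrative:

```python
def default_cell_size(domain_dims, divisions=5):
    """Default cell size: each bounding-box length divided by 5."""
    return tuple(d / divisions for d in domain_dims)

# Hypothetical domain of 7.5 x 5.5 x 3.0 units.
print(default_cell_size((7.5, 5.5, 3.0)))  # (1.5, 1.1, 0.6)
```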
With a cell size of (0.25, 0.25, 0.25), as modified in the screenshot of the blockmesh component reported above, the mesh is extremely dense:
The residuals decrease:
with a clearer image.
Regards,
Francesco…