opper is all these values "recognizing" as similar/same.
I got a list of results (n) with the following values:
0. -3.2584e-9 1. -4.4992e-9 2. -6.7220e-9 3. -4.5154e-9 4. -4.3325e-9 5. -2.2496e-9 6. -2.2385e-9 7. -6.7525e-9 8. -4.5154e-9
Even though most of these values (maybe all of them) should "go" into the second group:
(10^(-9) ≤ n) and (n < 10^(-4))
Grasshopper recognizes all of them as members of the first group:
n ≤ 10^(-4)
I am aware that such very small values are unusual, and maybe Grasshopper is not made for them. But is there any way this can be done?
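For what it's worth, the grouping logic can be checked outside Grasshopper. Below is a minimal sketch in plain Python, assuming the thresholds are meant to apply to the magnitude of n (the measured values are tiny negative numbers, so comparing abs(n) is an assumption, not something stated above):

```python
# Sketch (plain Python, not a Grasshopper component): grouping values by
# magnitude with explicit thresholds. The 1e-9 / 1e-4 bounds mirror the
# groups described above; comparing abs(n) is an assumption.
values = [-3.2584e-9, -4.4992e-9, -6.7220e-9, -4.5154e-9,
          -4.3325e-9, -2.2496e-9, -2.2385e-9, -6.7525e-9, -4.5154e-9]

def group(n):
    m = abs(n)
    if m < 1e-9:
        return 0          # effectively zero
    elif m < 1e-4:
        return 1          # the "second group" above
    else:
        return 2          # large values

groups = [group(v) for v in values]   # every value lands in group 1
```

If this gives the expected grouping while the definition does not, the culprit is likely the sign (or the document tolerance) rather than the comparison itself.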
Take a look:
Thank you.…
I kept adding new text every day until now... and now I have to change almost all the text I typed, but... it's made of curves!
So I was wondering if anyone has ever had a similar problem solved by a GH definition.
In case no-one has ever had similar trouble (I think you are all smarter than me :P), how would you proceed to create such a definition, given that all the text has the same dimension and font?
I would:
a) create a set with every possible character-curve in that font
b) create an identical set with the same characters as text
c) compare this set with every given text-curve in the drawing (issue: the number 8 is made of 3 different curves .___. same as the letter B... A has 2, as do D, R, O, P, p and so on...)
d) list items from set 'b' using the pattern I get from 'c'
e) eventually (this is a moonshot within the moonshot) concatenate the characters from 'd' based on the proximity of the different character-curves (to get "ABC" as a whole text, instead of "A", "B" and "C" as separate instances)
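The steps a)-d) above can be sketched in plain Python, assuming each character's curves have already been reduced to a simple feature tuple (here: curve count and bounding-box aspect ratio; both the feature choice and all values are hypothetical):

```python
# Hypothetical sketch of steps (a)-(d): a font alphabet reduced to
# feature tuples (number of curves, width/height ratio), matched by
# nearest feature distance. Real glyph comparison would need more
# robust features, e.g. curve lengths or area ratios.
reference = {                      # steps (a)+(b): alphabet as features
    "A": (2, 0.9),
    "B": (3, 0.7),
    "8": (3, 0.6),
    "O": (2, 1.0),
}

def match(features):               # step (c): nearest reference glyph
    def distance(ref):
        curves, ratio = reference[ref]
        return (abs(curves - features[0]) * 10   # curve count dominates
                + abs(ratio - features[1]))
    return min(reference, key=distance)

# step (d): map each unknown text-curve to a character
unknown = [(3, 0.58), (2, 0.92)]
text = [match(f) for f in unknown]   # -> ["8", "A"]
```

Step e) would then sort the matched characters by the x-coordinate of their bounding boxes and join neighbours that sit within one character-width of each other.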
It sounds kind of challenging!
...maybe I'd better start re-writing the text NOW, as it could EASILY take me a couple of days to get this working... :)…
Let me tell you what I had to do and how I did it.
I have the following situation: an urban context with a square plot of 40 m x 40 m surrounded by buildings.
If I extrude the plot I get 4 surfaces, and I need to calculate the minimum daily quantity of direct sunlight hours each test point receives in the period from the 22nd of April to the 22nd of August. For example, for the test point at index 21 of the surface with index 1 (I am just making these numbers up), the minimum is on the 27th of April, when the test point receives 8 hours of direct sunlight (also invented for the sake of the example); on all the other days it receives more. So the values I have to find are these minimums for all the test points. How to calculate these minimums is a different issue from the topic of this post, and I actually managed it.
Continuing with the explanation of what I had to do... I have only the initial plot, which generates 4 surfaces; then I want to test smaller plots generated by 4 m offsets of the original one, and the 4 surfaces belonging to each smaller plot.
So in this case I think I cannot use your suggestion, because the objects don't exist yet.
I managed to create a loop with Anemone. The loop generates an index running from 0 to 4, which I multiply by 4 to obtain the offsets 0, 4, 8, 12 and 16. Then, as you also suggested, I record each result with the Data Recorder and create a different branch for each result, with the index coming from the loop (0, 1, 2, 3 and 4), using the Flatten component.
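The bookkeeping of that loop can be sketched in plain Python (Anemone and the Data Recorder are stood in for by an ordinary loop and a dictionary; the stored strings are placeholders for the real analysis results):

```python
# Sketch of the loop bookkeeping: loop index 0..4, offset = index * 4,
# one branch per index. The recorded value is a placeholder standing in
# for the sunlight analysis of that offset plot.
offsets = [i * 4 for i in range(5)]        # 0, 4, 8, 12, 16 metres

recorded = {}                              # branch index -> result
for i, off in enumerate(offsets):
    recorded[i] = "result for offset %dm" % off
```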
In this image you can see all the surfaces, saved as described above: in white, the test points whose minimum is 2.5 hours or more per day of direct sunlight in the period from the 22nd of April to the 22nd of August, and in dark gray the test points that receive less.
The main point of this discussion is that, instead of using the tricky approach I used (or the one you suggest) to analyze 20 geometries separately (because they shade each other; in this case 20, but there could be many more), it would be good if it were possible to input all the geometries at the same time without them shading each other, so as to get all the results directly, in one run and in a simpler way.
Francesco
…
, but at the lowest level computers only manipulate ones and zeros according to exact and unambiguous rules. As a result, it is actually impossible to generate truly random numbers using a computer. Computers use algorithms that create sequences of pseudo-random numbers: numbers that appear to be random, but are in fact created by the application of a deterministic algorithm.
One of the major benefits of pseudo random numbers over actual random numbers is that it's easy to reproduce a sequence of numbers. If you generate the first 50 numbers in the pseudo-random sequence with seed=5 they will be exactly the same as when you did it last week. If you want different random numbers, you have to use a different seed. In Grasshopper I thought it important that the same random numbers are always generated, as that minimizes the 'surprise'. However, since the default numbers might not be to your liking, you can always play around with the seed value until you find a pseudo random sequence that suits you.
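The reproducibility described above is easy to demonstrate with any seeded PRNG; Python's `random` module stands in here for Grasshopper's Random component:

```python
import random

# Two generators constructed with the same seed produce identical
# sequences; a different seed gives a different sequence.
a = random.Random(5)
b = random.Random(5)
first = [a.randint(1, 10) for _ in range(50)]
second = [b.randint(1, 10) for _ in range(50)]
assert first == second        # same seed, same sequence

c = random.Random(6)          # different seed: another deterministic,
third = [c.randint(1, 10) for _ in range(50)]   # repeatable sequence
```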
If you generate 8 random numbers between 1 and 10, you might get a sequence like this:
{5, 8, 2, 4, 2, 7, 3, 10}
The pseudo random number generator guarantees that the spread of the numbers in the sequence is equal everywhere, but only when you generate an infinite amount of numbers. Since every sequence you care to generate in one human lifetime will not be infinite, there will always be some 'clumping' of values. A small stretch along the number line that is somewhat more densely populated by random numbers than the adjacent stretch.
There is also absolutely no guarantee that you won't get the same number more than once. Obviously duplicates are unavoidable if you were to generate 50 values between 1 and 10 (there are only 10 possible unique numbers), but even if you generate only 2 values between 1 and 10 you might still get the same number twice.
Indeed in my example above the value 2 occurs twice, whereas the value 1 doesn't occur at all. If you want a range of numbers without overlaps, it's better to not use the Random component, but instead generate all the numbers using a Range or Series component and then Jitter the list, thus randomizing the order of the values, but not the values themselves.
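The Range-then-Jitter approach translates directly into code; again Python's `random` module stands in for the Grasshopper components:

```python
import random

# Duplicates vs. no duplicates: drawing values at random may repeat
# them, while generating a series and shuffling it (the Jitter
# approach) randomizes the order but keeps every value unique.
rng = random.Random(5)

with_repeats = [rng.randint(1, 10) for _ in range(8)]   # may repeat

series = list(range(1, 11))      # Range/Series: 1..10, all unique
rng.shuffle(series)              # Jitter: random order, same values
assert sorted(series) == list(range(1, 11))
```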
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
onent are experiential or location-specific. For example, the humidex was derived and is widely used in Canada. Also, both the humidex and the discomfort index should be used in in-shade conditions. For universal applications and locations, you should concentrate on either PET or UTCI (the latter is what the "Outdoor Comfort Calculator" component is based on).
I have found that, for instance, the OutdoorComfortCalculator (which considers temperatures of 9-26 C and other factors) gives the percentage of comfortable time outdoors in, for instance, Kenya (high temperatures and humidity) as 55%, whereas with the same .epw data, and some additional factors added to the Thermal Indices component, the "humidex" or "discomfort index" give a drastically lower result, I think it was even 1-5% comfortable. How is that?
Yes, this is one of the issues I have with the UTCI index: the authors wanted to make it an index applicable to any type of climate. To create the UTCI comfort categories, data was collected from different locations (for the hot humid climate, it was data from Madagascar; I may be wrong on this). This resulted in the universal comfortable range of 9 to 26 C which you mentioned. How people in Madagascar would perceive a feels-like temperature of 9 degrees as comfortable is beyond my understanding. The thermophysiology of a human in Madagascar and in Poland is the same; however, their acclimatization is quite different, which raises the issue with the upper universal comfortable range. In general, people who live in hotter climates have a somewhat higher tolerance to high temperatures than those living in continental climates, and vice versa: their tolerance to lower temperatures is lower than that of people from continental climates. Here is a comparison of the UTCI and PET stress categories:
UTCI (all climates)

above +46      extreme heat stress
+38 to +46     very strong heat stress
+32 to +38     strong heat stress
+26 to +32     moderate heat stress
+9 to +26      no thermal stress
+9 to 0        slight cold stress
0 to -13       moderate cold stress
-13 to -27     strong cold stress
-27 to -40     very strong cold stress
below -40      extreme cold stress

PET

(sub)tropical humid   temperate     stress category
above +42             above +41     extreme heat stress
+38 to +42            +35 to +41    strong heat stress
+34 to +38            +29 to +35    moderate heat stress
+30 to +34            +23 to +29    slight heat stress
+26 to +30            +18 to +23    no thermal stress
+22 to +26            +13 to +18    slight cold stress
+18 to +22            +8 to +13     moderate cold stress
+14 to +18            +4 to +8      strong cold stress
below +14             below +4      extreme cold stress
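For anyone post-processing results, the UTCI column of the table above maps directly to a small lookup function. This is a sketch built from the table, not a Ladybug API; the handling of values that fall exactly on a boundary is an assumption:

```python
def utci_category(t):
    """Return the UTCI stress category for a temperature in deg C,
    following the table above (boundary handling is an assumption)."""
    if t > 46:   return "extreme heat stress"
    if t > 38:   return "very strong heat stress"
    if t > 32:   return "strong heat stress"
    if t > 26:   return "moderate heat stress"
    if t >= 9:   return "no thermal stress"
    if t >= 0:   return "slight cold stress"
    if t >= -13: return "moderate cold stress"
    if t >= -27: return "strong cold stress"
    if t >= -40: return "very strong cold stress"
    return "extreme cold stress"
```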
I attached below an example of a PET humid-climate comparison with UTCI, for in-shade and out-of-shade conditions. As can be seen, UTCI shows a percentage of comfortable time twice as high as PET.
Thank you, Pin, for the useful comment on the usage of the "Analysis Period" component.…
in the design and digital fabrication of complex and Euclidean forms.
Using Grasshopper with RHINO as the platform, the design and fabrication of complex topologies is explored and optimized in the "Grasshopper", "RhinoNest" and "RhinoCAM" environments, along with high-end rendering with Brazil.
D-O-F from 8:00 AM to 12:00 PM and from 1:00 PM to 5:00 PM
Contents:
1. Advanced Modeling and its Techniques. Flattening and Development of Surfaces. Nesting and distribution.
2. Introduction to Parametric Design. Advanced Grasshopper definitions, possibilities and limitations. Scale adjustments for printing and cutting.
3. Introduction to CNC Manufacturing - RhinoCAM 2.0.
4. Step-by-Step Guide to producing a Rendering using Brazil 2.0. DIGITAL presentation of projects.
Instructors:
Andrés González - CEO, McNeel Miami
Ovidio Cardona - RhinoCAM and Zebra specialist
Juan David Moreno - Rhino and Brazil specialist
Fee:
$650,000 (includes an Educational license and McNeel Certification)
$550,000 (includes McNeel Certification)
Information:
Bits LTDA, Tel: 412 30 15
Laboratorio de Imagen, Facultad de Arquitectura, Tel: 430 94 32…
If I want to use standard components I have to use 2 or 3 of them to get the result, or use a scripting component; but sometimes I feel this could be avoided if we could access geometry properties and methods directly. Say we want the x coordinates of a bunch of points: instead of decomposing the points to get the X output, we could directly type X in an expression-editor input, similarly to what happens with math formulas.
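The idea reads like a one-liner in a scripting component. A minimal sketch, using a stand-in Point class so it runs outside Rhino (in a GhPython component the same expression would read RhinoCommon points, which expose X/Y/Z properties):

```python
# Sketch: pulling a property straight off the geometry instead of a
# Decompose step. Point is a hypothetical stand-in for a Rhino point.
class Point:
    def __init__(self, x, y, z):
        self.X, self.Y, self.Z = x, y, z

points = [Point(0, 1, 2), Point(3, 4, 5), Point(6, 7, 8)]
xs = [pt.X for pt in points]     # "typing X" directly -> [0, 3, 6]
```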
Hmmm, I suppose that methods would be a bit trickier if more inputs were necessary.
On the other hand, GH is very easy to start doing things with, because the interface accommodates all levels of knowledge, as shown on this forum, where most of the questions, I'd say, have to do with solving specific geometry problems or asking about people's experiences with similar problems, and not so often "how do I use this component", if you know what I mean.
Overall I'm so, so happy GH is out there, in the hands of creative people and in the hands of creative developers! Perhaps there is no need for GH to do every task, because it certainly does quite a lot and is very versatile; even better, users' requests get implemented as far as possible.
I think it's very difficult to compare two programs unless you are at the same level of proficiency in both. In the future I'm going to pay more attention to whether there would be a simpler way to do things in GH, and whether it would require some implementation.
My two pence, 8)
Evert…
Added by Evert Amador at 4:03am on February 23, 2011
simple, there are many symmetries in the 3 main planes. So I used arcs rotated 45° from the main planes and generated a pentagon, which was mirrored and rotated many times.
In the end there are 24 pentagons and 8 hexagons, so 32 faces, 54 vertices and 84 edges.
It could generate some other tessellation styles.
…
ifically: I have a 100' vertical plane lofted between curved top and bottom profiles. I contour it every 8' (the normal direction is Z, giving me 13 horizontal curves). I use Divide Curve to divide each contour into 10 segments. The "Points" output of Divide Curve now yields 13 branches with 11 items each, corresponding to 13 contours, each with 11 points from the left end of the curve to the right.
I now want to string "vertical" lines: connect all the 2nd items in each branch together, all the 3rd items, etc., in order to make polylines that travel through each 2nd point or 3rd point. I don't want to use Cull Pattern/Nth/Index because the number of subdivisions could change (11 could become 20, etc.).
How do I connect the Nth item of each branch in this tree? Moreover, how do I connect all values in a branch with their corresponding values in all other branches?
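The "connect all Nth items" operation described above is a transpose of the tree: branches become item positions and vice versa (in Grasshopper, the Flip Matrix component does exactly this, and it adapts automatically when the number of subdivisions changes). A plain Python sketch with hypothetical 3 branches of 4 points each:

```python
# Transposing a list-of-lists: each output branch collects the Nth
# item of every input branch, which is what the vertical polylines need.
branches = [
    ["A0", "A1", "A2", "A3"],
    ["B0", "B1", "B2", "B3"],
    ["C0", "C1", "C2", "C3"],
]

flipped = [list(col) for col in zip(*branches)]
# flipped[1] now holds every branch's 2nd item: ["A1", "B1", "C1"]
```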
Thanks for any replies,
Richman Neumann
Solomon Cordwell Buenz Architects
…