ich is the following:
"In a box", I would like to create a structure made of wooden blades that follow the floor, walls and ceiling, but move away from these supports because of "curves", which are the most important variables.
Here is my "logic". You will find my files enclosed with this post.
In bold is what I'm unable to do by myself (I guess):
Take the rectangle of 25 m x 12 m; make it a surface
I divide it into "blades" of 20 cm
I take the edges of the "blades"
I divide these edges into 40 points (or equivalent) (A)
I identify my curves, which are on the floor
I identify the intersection points between my blade edges and the curves (B)
I have to test the difference between the X, Y, Z of each A and B.
I have to test which B point is the closest to each A
Each A point which is close to a B (distance < 40 cm) must be on the floor
I have to input a math formula in order to represent the movement of the A points according to their distance to B (example: A1's Z = distance between A1 and B / 2)
If there are 2+ B points, that means I have "to do something" to get a correct movement. I mean:
2 consecutive points must be on the same "plane"
The height difference between each pair of points must be 0 or a dedicated value
Regarding the ceiling, it is a duplicate of the floor, but there is a coefficient to apply to the Z distance.
2 parallel points on a defined axis (example: X) that are consecutive can't have more than 20 cm of difference
When all the points have moved according to the "parameters" and "curves", I have to create a curve linking all the points of the same "line".
After that I duplicate this curve to an upper curve.
Loft
Extrude the surface and then, it's done?
To be clear, I'm missing the part where I need to make my points move according to the variables...
I'm sorry, RHI Grasshopper projet.3dm does not represent the "need" to have two consecutive points on the same plane
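The point-moving step can be sketched in plain Python. This is only an illustration under assumptions taken from the steps above (the 40 cm floor threshold and the Z = distance / 2 rule); the point lists and the function name are made up, and in Grasshopper the same logic would live in a GhPython component:

```python
import math

def move_points(a_points, b_points, floor_dist=0.4):
    """Lift each A point in Z according to its distance to the nearest B.

    a_points, b_points: lists of (x, y, z) tuples in metres.
    A points closer than floor_dist to some B stay on the floor (z = 0);
    the others rise by half their distance to the nearest B, which is
    the "A1 Z = distance between A1 and B / 2" rule from the steps above.
    """
    moved = []
    for ax, ay, az in a_points:
        # nearest B measured in the XY plane, since the curves lie on the floor
        d = min(math.hypot(ax - bx, ay - by) for bx, by, _bz in b_points)
        z = 0.0 if d < floor_dist else d / 2.0
        moved.append((ax, ay, z))
    return moved
```

Feeding the 40 points of each blade edge through a function like this would give the Z values needed to rebuild the blade curves before lofting.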
…
of the specialized tools that independent developers like Jon include in their plug-ins.
I understand it feels icky to purchase a plugin when the core application is free, but intuition and gut-feelings have no place in economic policy. The only reason to buy software is because it will save you money in the long run. If your hourly rate is $25 and you can eliminate 20 hours a month by buying a $1000 product, you've earned your money back in 2 months and everything else is gravy as they say.
There is a conflict of sorts between McNeel & Associates and 3rd party developers as I outlined in this response, but so far the benefits of having an active 2nd & 3rd party development community have far outweighed the costs. You, me and Jon are engaged in a symbiotic relationship that ideally benefits us all. If you feel it doesn't, then you have the freedom to ignore it or even ask for your money back.
But whatever your personal feelings, it cannot be denied that Jon is providing worthwhile solutions for other people who are probably quite interested to see what he is working on.…
ate the sky but CIE skies are standard models (See this file page 19 and 20). Average sky is a climate based sky that averages the values for each hour during the month, and it's useful to get a sense about the sky condition.
2. If you google for CIE sky and Radiance you will find several discussions. Some of them like sunny without sun are made for very particular uses. Read Greg's post here.
3. Yes. If you check the output of the component you can see the file path.
…
h tree is actually a number of curves in Rhino. I am then referencing those groups of curves [trees] into Grasshopper so each tree [group of curves] is held in one Crv component. This is where my problem starts. The end goal is to extract endpoints from the branches of each tree, then add splines that go through those endpoints of each tree in order, i.e. point 0 - tree 0, point 0 - tree 1, point 0 - tree 2, point 0 - tree 3 .........

My problem is that I can't seem to get the data structured in a way that each branch holds each tree or group of curves. My problem may be that I start with (x) lists of (y) curves in each list, corresponding to x curve components [each a tree] with y curves in them [each a crv of the respective tree in Rhino]. I say x lists with y curves because right now it's set up as 6 lists, each with somewhere between 20 and 45 curves, but both of those will be changing often through iterations. So my problem may not be changing the structure of the data but getting it structured the way I want from the beginning.

I can't seem to pass each of those trees into a component and have it come out as x lists/branches of y curves each. I either get one branch with (x*y) curves in the list, essentially all the curves in the model, or I get (x*y) branches with 1 curve per branch, essentially creating a new branch for every single curve. I have been working with the Path Mapper to try and solve this, but no luck. Like I said, I think it may be something to do with how it is structured from the beginning rather than changing it down the line. Attached def and model for reference; any thoughts/ideas greatly appreciated. Midcrit on Wednesday, and I need to get this base def working so I can start pumping out iterations with added attraction and repulsion fields built into the trees/points.…
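The reordering being asked for (point 0 of every tree feeding one spline) is essentially a matrix flip. A minimal plain-Python sketch of that transposition, with made-up endpoint tuples standing in for the extracted branch endpoints:

```python
# Each inner list stands for one tree's branch endpoints (hypothetical
# (x, y) tuples); the goal is one output list per point index, gathering
# that index across every tree, i.e. the control points of one spline.
trees = [
    [(0, 0), (0, 1), (0, 2)],  # tree 0
    [(1, 0), (1, 1), (1, 2)],  # tree 1
    [(2, 0), (2, 1), (2, 2)],  # tree 2
]

# zip(*trees) pairs up the i-th endpoint of every tree
spline_points = [list(group) for group in zip(*trees)]
# spline_points[0] now holds point 0 of tree 0, tree 1 and tree 2
```

In Grasshopper terms this is what [Flip Matrix] does to a tree with x branches of y items each, which may be a simpler route than the Path Mapper here.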
will work slightly differently from before. Sorry about breaking this, but it proved impossible to improve the selection logic with the fairly ambiguous notation that was already implemented.
Not every change is breaking though and I hope that most simple matching rules will work as before. There will be a McNeel webinar on Wednesday the 6th of November where I discuss the new selection rules (as well as path mapping syntax and relative offsets within one or more data trees). This will be a pretty hard-core webinar aimed at expert users. The event will be recorded so you can always go and watch it later. I figured I'd briefly explain the new selection rules on Ning before I release the update though.
-------------------------------------------------------------------------------
Imagine we have the following data tree, containing a bunch of textual characters:
{0;0} = [a,e,i,o,u,y]
{0;1} = [ä,ë,ê,ï,î,ö,ô,õ,ü,û,ÿ,ý]
{1;0} = [b,c,d,f,g,h,j,k,l,m,n,p,q,r,s,t,v,w,x,z]
{1;1} = [ç,ĉ,č,ĝ,ř,š,ş,ž]
There are a total of four branches {0;0}, {0;1}, {1;0} and {1;1}. The first branch contains all the vowels that are part of the standard English alphabet. The second branch contains all non-standard vowels and branches three and four contain the standard and non-standard consonants respectively.
So what if we want to select from this tree only the standard vowels? Basically include everything in the first branch and disregard everything else. We can use the [Tree Split] component with a selection rule to achieve this:
{0;0}
This selection rule hard-codes the number zero in both tree path locations. It doesn't define an item index rule, so all items in {0;0} will be selected.
If we want all the vowels (both standard and non-standard), then we have several options:
{0;?} = select all branches that start with 0
{0;(0,1)} = select all branches that start with 0 and end in either 0 or 1
{0;(0 to 1)} = select all branches that start with 0 and end in the range 0 to 1.
Conversely, selecting all standard vowels and consonants while disregarding all non-standard characters can be achieved with rules as follows:
{?;0}
{(0,1);0}
{(0 to 1);0}
It is also possible to select items from each branch in addition to limiting the selection to specific branches. In this case another rule stated in square brackets needs to be appended:
{0;?}[0 to 2]
The above rule will select the first three vowels from the standard and the non-standard lists.
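For intuition only, the same selection can be mimicked in plain Python with the tree stored as a dict from path tuples to item lists (this mirrors the semantics of the rule, not Grasshopper's implementation):

```python
# a plain-Python stand-in for the data tree from the example above
tree = {
    (0, 0): list("aeiouy"),
    (0, 1): list("äëêïîöôõüûÿý"),
    (1, 0): list("bcdfghjklmnpqrstvwxz"),
    (1, 1): list("çĉčĝřšşž"),
}

# {0;?}[0 to 2]: branches whose path starts with 0, first three items of each
selected = {path: items[0:3] for path, items in tree.items() if path[0] == 0}
```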
Basically, rules work in a very consistent way, but there are some syntax conventions you need to know. The first thing to realize is that every individual piece of data in a data-tree can be uniquely and unambiguously identified by a collection of integers. One integer describes its index within the branch and the others are used to identify the branch within the tree. As a result, a rule for selecting items always looks the same:
{A;B;C;...;Z}[i] where A, B, C, Z and i represent rules.
It's very similar to the Path Mapper syntax except it uses square brackets instead of parentheses for the index (the Path Mapper will follow suit soon, but that won't be a breaking change). You always have to define the path selector rule in between curly brackets. You can supply any number of rules as long as you separate them with semi-colons.
The index rule is optional, but -when provided- it has to be encased in square brackets after the path selection rule(s).
The following rule notations are allowed:
* Any number of integers in a path
? Any single integer
6 Any specific integer
!6 Anything except a specific integer
(2,6,7) Any one of the specific integers in this group.
!(2,6,7) Anything except one of the integers in this group.
(2 to 20) Any integer in this range (including both 2 and 20).
!(2 to 20) Any integer outside this range.
(0,2,...) Any integer part of this infinite sequence. Sequences have to be at least two integers long, and every subsequent integer has to be bigger than the previous one (sorry, that may be a temporary limitation, don't know yet).
(0,2,...,48) Any integer part of this finite sequence. You can optionally provide a single sequence limit after the three dots.
!(3,5,...) Any integer not part of this infinite sequence. The sequence doesn't extend to the left, only towards the right. So this rule would select the numbers 0, 1, 2, 4, 6, 8, 10, 12 and all remaining even numbers.
!(7,10,21,...,425) Any integer not part of this finite sequence.
Furthermore, it is possible to combine two or more rules using the boolean and/or operators. If you want to select the first five items in every list of a datatree and also the 7th, 12th and 42nd items, then the selection rule would look as follows:
{*}[(0 to 4) or (6,11,41)]
The asterisk allows you to include all branches, no matter what their paths look like.
It is at present not possible to use parentheses to define rule precedence; rules are always evaluated from left to right. It is also not yet possible to use negative integers to identify items from the end of a list.
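As a rough illustration of the notations above (a sketch of the semantics only, not McNeel's parser), matching a single path integer against one rule could look like this in Python:

```python
def match_rule(rule, value):
    """Match one integer against a simplified selection rule.

    Supports a subset of the notations above: '*' and '?' (any integer),
    a literal like '6', a group '(2,6,7)', a range '(2 to 20)', and a
    leading '!' for negation. Sequences like '(0,2,...)' are left out.
    """
    rule = rule.strip()
    if rule in ("*", "?"):
        return True
    negate = rule.startswith("!")
    if negate:
        rule = rule[1:]
    if rule.startswith("(") and rule.endswith(")"):
        body = rule[1:-1]
        if " to " in body:
            lo, hi = (int(p) for p in body.split(" to "))
            hit = lo <= value <= hi  # ranges are inclusive on both ends
        else:
            hit = value in {int(p) for p in body.split(",")}
    else:
        hit = value == int(rule)
    return hit != negate  # '!' simply flips the outcome
```

A full rule like {(0,1);0} would then be a per-element AND of match_rule over the integers of a branch path.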
If you want to know more, join the Webinar on Wednesday!
--
David Rutten
david@mcneel.com
Seattle, WA…
Added by David Rutten at 8:57pm on November 3, 2013
sophy though, I have a rudimentary grasp of the Ancient Greeks and modern schools of thought such as Existentialism and Pragmatism, but there is certainly no depth in my understanding. However here the same rule applies. You can quote philosophy all you want, but unless you understand that which you're channelling you can be -at best- accidentally correct.
According to you, these are all vital characteristics:
Aesthetic judgement
Intuition about spatial effectiveness
Knowledge of construction materials & assembly systems
Consideration of performance-driven design properties
Mad synthesizing skillz
[1] and [2] are pretty much worthless, especially when we're dealing with students. Aesthetic judgement is not something that can be wrong or right. You can hone your aesthetic skills but you cannot cultivate better tastes. Intuition is also problematic; it's basically a stand-in for argumentation. Saying "these buildings have to be 20 meters apart because of wind/sound/human perception/human psychology/light/shadow/etc. etc." is a far stronger statement than "these buildings have to be 20 meters apart because of my feelings". Why should you be trusted? If you have a long and distinguished career backing you up, maybe your opinions carry some weight, but until that point you'd better be prepared to justify your decisions with cold hard logic and data.
[3] is certainly important for certain jobs in construction, but it can be argued that implementation details are not necessarily central to a design. One can design a good computer interface without having to be able to program, and certainly without being familiar with all the idiosyncrasies of a particular programming language. Conversely, one can design an excellent space without knowing exactly how strong certain atomic bonds are. If what you design is physically impossible, then obviously something has to change, but it doesn't mean that the design as an abstract idea was bad. Of course, on the other hand one can argue that designing impossible things is not doing anyone any favours. I'm not exactly certain where I stand on this issue, probably comfortably in the middle; YES, students need to learn about what can be built in the physical world, but NO, that is not part of design training.
I'm not quite sure what [4] means.
[5] is true for a lot of professions, not just Architects. I would concede that architects probably have more to take into account than most designers and that it is indeed an important skill to have.
I would say that -especially for students, who have little experience- an incredibly important skill is to be able to ask yourself "why am I doing this?" about pretty much every decision you make. Basically you need to get very comfortable applying the Socratic method to everything you do.
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 11:03am on August 14, 2013
ve systems in architectural design that respond to changing environmental and spatial needs.
In its second year the workshop will delve more into the concepts of self-organization, emergence and systems behaviour in architecture, borrowing concepts and tools from biology. Associative modelling, simulation, material experiments and digital fabrication tools will be introduced in order to apply this information to the design of both passive and active responsive architectural systems.
The digital toolset for the studios will be Rhino, Grasshopper, VB.net, Firefly and Arduino. Model making, engagement with different materials and utilisation of digital fabrication will be integral to the core of the course throughout the workshop.
The workshop will provide students with the opportunity to engage in parallel with both the theoretical aspects of biomimetics and integrated design processes, as well as the technical tools essential for the realisation of design outcomes. It will explore adaptive and responsive systems in architecture, capable of interacting with their context, both environmental and social in a context specific brief.
The work generated throughout the workshop, as well as the final prototypes constructed will contribute to a publication and travelling exhibition as the culmination of the three-year programme.
The deadline for applications is 20 June 2011. A late deadline of 4 July 2011 is also in effect, but this will incur a £50 surcharge. Application forms and additional information are available online at: www.aaschool.ac.uk/sanfrancisco and applications can be submitted to: visitingschool@aaschool.ac.uk. …
t of data it has to operate on. So only those aspects of the algorithm that differ in these cases are relevant.
For example, if your algorithm always does exactly the same thing (let's say all it does is measure the size of an array and display it on screen), it will be O(1), because it doesn't matter if you run it on an array containing 10 or 1000000 items. Measuring the size of an array is a constant-time operation:
Print(string.Format("Array contains: {0} element(s)", data.Length));
However, if your algorithm works not on arrays but on linked lists, then it becomes an O(N) operation, because counting all the elements in a linked list means you have to iterate over all of them. And the longer the list, the more iterations you need; in fact the number of iterations is exactly the same as the number of items. (ps. if you're using the System.Collections.Generic.LinkedList<T> class then it's still O(1), because apparently that particular implementation of linked lists caches the count and keeps it up to date.)
If you have a loop that runs for each item, and then inside that loop there is another loop that also runs for each item, then your complexity becomes O(N²). Or, in a similar case if your algorithm consumes two collections (N and M) and iterates over all items in N, and then inside that loop it iterates over all items in M, the complexity is O(N×M).
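The nested-loop case can be made concrete with a toy sketch (Python here rather than the C# of the snippet above; the function is made up purely to count operations):

```python
def pairwise_ops(n):
    """O(N^2): the inner loop runs in full once per step of the outer loop."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # stands in for a constant-time loop body
    return ops
```

Doubling the input quadruples the work: pairwise_ops(10) is 100 while pairwise_ops(20) is 400, exactly the N² growth described above.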
The case can be made that only the most severe complexity is relevant enough to report. For example, if you have an algorithm that comprises three steps, the first of which is O(log(N)), the second O(N²) and the third O(3ⁿ), then technically the total complexity would be O(log(N) + N² + 3ⁿ). However, the first two parts are utterly insignificant compared to the third and can therefore be omitted entirely. Consider for example increasing the input size from 10 to 20 elements:
log(10) + 10² + 3¹⁰ = 1 + 100 + 59049 = 59150
log(20) + 20² + 3²⁰ ≈ 1 + 400 + 3486784401 ≈ 3486784802
As you can see the increase of the complexity is almost entirely due to the O(3ⁿ) portion, so much so that there's almost no point in mentioning the other two.
Now, your specific questions:
Constructors/declarations and method invocations are not necessarily O(1). In this particular case they are, but it is possible that some constructor you call has a higher complexity. For example, if instead of an empty List<T> you're constructing a SortedList<T> based on your inputs, then it may well be the most significant complexity in your entire algorithm and it needs to be taken into account.
Correct. A loop like this has complexity O(N); you can ignore stuff that only happens once, like the declaration of the iteration variable.
I don't understand that line of code. cP is already a list, so why are you calling ToList() on it? In general, making copies of memory-contiguous collections (like arrays or lists) can be done in O(1), depending on the implementation, because blocks of memory can simply be duplicated or moved in one go using the correct hardware ops. Other times, however, it will require a loop, in which case the complexity goes up.
It's very cheap to add items to lists, provided the list has enough space to add new items. By default a list is big enough to contain only 4 items. If you try and add a fifth one, the list will need to allocate more memory elsewhere, copy the 4 existing items into the newly allocated space and only then add a new item. So, if you know ahead of time how many items you'll be adding to a list (or even if you only know a theoretical upper bound), you should construct the list using that known capacity. This will speed up the process of adding many items to a single list.
Don't know how crypto providers work, but since this part of your algorithm does not depend on cp.Count or the magnitude of populationCount, it doesn't matter for the big-O complexity metric.
…
NURBS using Rhinoceros. Content includes: Basic terminology, user interface, workflow strategies, using reference material and creating drawings from modeled geometry.
Workshop 2: Introduction to Parametric Design
Instructor: Rajaa Issa
(12:30 PM-3:30 PM)
This workshop will introduce the general framework of parametric thinking with a series of hands-on tutorials using Grasshopper for Rhinoceros. It is meant for beginners who have little to no idea about parametric modeling. The workshop will introduce the general components of an algorithm, design workflow, Grasshopper interface and visualization techniques. The students are expected to have basic knowledge of the Rhino modeling environment. Workshop 1 should fulfill this requirement.
Registration: Computers and software will be provided. Space is limited to 20 seats per workshop. The fee for each workshop is $60 (plus a $4.29 fee). There is a special rate of $30 (plus a $2.64 fee) for students and teachers provided they request a discount here with their school email address before registering. Register now……