ly this is a Rhino.Python problem and not a Grasshopper issue, but it could apply to both!
I was trying to take a simple example of moving a ball around and see how it could be animated through Rhino.Python. The code works great in wireframe, with no memory issues at all. However, when I switch the view to Shaded or Rendered, things go south pretty quickly. The RAM usage of Rhino, which was steady around 350 MB (ish), now grows every frame; after a minute or so it is in the GBs and never drops, even after the script has stopped. What gives? Clearly this must be possible, because Bongo does something similar when it does animations. Check out my code below; I would love to hear your thoughts.
import time
import rhinoscriptsyntax as rs
import Rhino

# Bounding box for the bouncing ball
height = 100
width = 100

# Current position and per-frame speed
x = 0
y = 0
xspeed = .1
yspeed = .3

start_time = time.time()
end_time = 60  # run for 60 seconds
run_time = 0

sphere = rs.AddSphere((x, y, 0), 5)

while run_time < end_time:
    x = x + xspeed
    y = y + yspeed
    # Bounce off the walls by reversing direction
    if x > width / 2 or x < -width / 2:
        xspeed = xspeed * -1
    if y > height / 2 or y < -height / 2:
        yspeed = yspeed * -1
    rs.MoveObject(sphere, (xspeed, yspeed, 0))
    Rhino.RhinoApp.Wait()  # let Rhino process events between frames
    run_time = time.time() - start_time
…
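A note for anyone poking at this: one possible workaround, assuming the growth comes from Rhino rebuilding the sphere's render mesh every time the document object moves, is to draw the ball through a display conduit so the document is never modified per frame. This is only a hedged sketch (the class, names, and fixed 60-second loop are illustrative, and the bounce logic is omitted):

import time
import System
import Rhino
import scriptcontext as sc

class BallConduit(Rhino.Display.DisplayConduit):
    def __init__(self):
        self.sphere = Rhino.Geometry.Sphere(Rhino.Geometry.Point3d.Origin, 5)
    def CalculateBoundingBox(self, e):
        # Include the ball so the views don't clip it
        e.IncludeBoundingBox(self.sphere.BoundingBox)
    def PostDrawObjects(self, e):
        e.Display.DrawSphere(self.sphere, System.Drawing.Color.Red)

conduit = BallConduit()
conduit.Enabled = True
x = y = 0
start = time.time()
while time.time() - start < 60:
    x += .1
    y += .3
    conduit.sphere = Rhino.Geometry.Sphere(Rhino.Geometry.Point3d(x, y, 0), 5)
    sc.doc.Views.Redraw()
    Rhino.RhinoApp.Wait()
conduit.Enabled = False
sc.doc.Views.Redraw()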
Component A outputs an instance of a class type, which I refer to as Class Instance A for this discussion post.
Component B has a for-loop, through which it does a few things and makes "n" variations (in this case, just 3) of Class Instance A. A simplified/generalized structure of Component B is as follows:
a = []
for i in range(n):
    variation = DoSomething(StartShape)
    a.append(variation)
    for obj in ghenv.Component.OnPingDocument().Objects:
        if obj.Name == "MakeAssembly":
            obj.ExpireSolution(False)
I am not sure if I am using the ExpireSolution method correctly here. I have also found Grasshopper.Instances.ActiveCanvas.Document.NewSolution(False) in some other discussions, but it doesn't seem to solve this problem; or maybe I have the placement of ExpireSolution or NewSolution wrong. Upon pressing F6 and then F5, Component B keeps updating Class Instance A instead of "resetting" it to its original condition. The for-loop in Component B must receive the original Class Instance A every time in order for it to spit out variations. I need to find a way to expire the solution for Component A only, at the end of each for-loop iteration. Eventually I need it to loop 100+ times, so manually pressing F5 is not really an option... Below is an image of what should happen:
What is the best way to tackle this problem (in either Grasshopper-Python or IronPython), so that the same original Class Instance A is fed in every time the loop is run in Component B?
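For completeness, here is a hedged sketch of one direction I have been experimenting with, using GH_Document.ScheduleSolution so the expiry happens between solutions rather than mid-loop. The component lookup by nickname and the 5 ms delay are my assumptions, not a confirmed answer:

import Grasshopper

def reset_component_a(doc):
    # Callback runs just before the scheduled solution starts:
    # expire Component A so it recomputes and re-emits the original instance.
    for obj in doc.Objects:
        if obj.NickName == "ComponentA":  # hypothetical nickname
            obj.ExpireSolution(False)

ghdoc = ghenv.Component.OnPingDocument()
# Request a new solution shortly after the current one finishes
ghdoc.ScheduleSolution(5, Grasshopper.Kernel.GH_Document.GH_ScheduleDelegate(reset_component_a))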
Thank you! …
…ber, from 9:00 am to 8:00 pm. This workshop is aimed primarily at architects and designers interested in learning parametric and generative design as applied to the generation and rationalization of complex geometries for implementation in different design processes. The course will cover the basic concepts and methodology for tackling diverse design problems through the development of algorithmic tools using a visual programming language, along with digital fabrication schemes. No prior knowledge of Rhinoceros 3D or programming is required; prior CAD knowledge is desirable. Students: 2,500 MXN. Professionals: 3,000 MXN.
RENDER CONTEST - 100% SCHOLARSHIP - Parametric & Generative Architecture & Design Grasshopper Workshop.
- Post your render at www.facebook.com/3dmetrica - The render with the most likes will be the winner. - Voting deadline: September 15, 2012.
Information and registration: workshop@3dmetrica.com 04455 28790084 www.3dmetrica.com www.facebook.com/3dmetrica
…
freaky thing consisting of triangulated "modules" (i.e. an assembly of this, this and that) where the exterior edges ARE always under tension (= SS 304/316 cables OR nylon) and the interior ones MAY be under compression (= steel, aluminum, wood, carbon) OR ... some of them ... may be under tension. Bastardized T trusses deviate a bit from theory ... but who cares? (not me, anyway). T trusses have many variants (but as the greatest ever said: Less is More).
2. Large-scale T for AEC is the art of the pointless, since it costs around the GNP of Nigeria. Here are some indicative components from a module of a multi-adjustable TX system, costing (the module) ~ the price of my Panigale (Google that):
The above was mailed to a friend who has MIT (yes, that MIT: the top dog) in his sights ... therefore he needs some appropriate "credentials", he he.
3. The distance that separates the above from the demo TDT node provided is around 666.666 miles - but we don't care: we are after Art, not some testimony to vanity.
4. On purpose I've used a smallish ring to give you a clear indication of constraint numero uno in truss design: CLASH matters.
5. You'll need:
(a) A decision related to the tensioners (classic Norseman + SS cables, or nylon machined thingies?).
(b) A machinist who can do elementary stuff (like the adapters) and can weld this to that (the "ring", for instance). His abilities must be 1 on a scale of 100. If the fella has a computer (not a CRAY) and he knows what 3dPDF is (hmm) ... well ... use that way to communicate with him PRIOR to designing anything: he must agree on the parts BEFORE the whole is attempted (as a design in GH or in some other app).
(c) A carpenter with a wood lathe for the obvious. BTW: BEFORE doing any TDT attempt > ask the carpenter about the available wood strut sizes. Contrary to popular belief, DO NOT varnish the wood (use exterior alkyd/oil stains from some top maker like the notorious US company PPG).
http://www.ppgpaints.com/products/paints-stains-data-sheets
(d) Good quality cigars (and espresso) plus some classic music (ZZTop, PFloyd, Cure, Stones, U2 etc etc) during the assembly.
(e) Faith in the Dark Side (see my avatar).
May the Force (the dark option) be with you.…
that both the ASHRAE and European Adaptive models were derived from surveys of awake occupants. While the topic has not been investigated as well as it should be, the few adaptive-style surveys of sleeping occupants that have been conducted show that people tend to desire significantly cooler temperatures when they are sleeping as opposed to when they are awake.
Notably, Chapter 8 of Humphreys's recently published book on Adaptive Comfort (https://books.google.com/books?id=lOZzCgAAQBAJ&printsec=frontcover&dq=Adaptive+Thermal+Comfort+Foundations+and+analysis&hl=en&sa=X&ved=0ahUKEwi6npqSi__KAhUJMj4KHf7SCXMQ6AEIKjAA#v=onepage&q=Adaptive%20Thermal%20Comfort%20Foundations%20and%20analysis&f=false) provides some interesting insights into this. In a 1973 survey, Humphreys found that the quality of sleep started to deteriorate at temperatures above 24-26C regardless of the time of year, and that there was no clearly determinable lower limit to comfortable sleeping temperatures (in other words, people were fine at 12C if they were given enough blankets). He surveyed only British occupants who were sleeping in traditional beds with mattresses and a wide range of blankets. This is important because the nature of the findings is such that the comfort temperatures would be very different if the survey participants had been sleeping in a hammock or in closer contact with the ground (both popular practices for a number of cultures living in warmer climates). Traditional mattresses cut the ability to radiate body heat in half as compared to a standing human body, and I would venture a guess that this is a big reason why much cooler temperatures are desired while sleeping on mattresses as opposed to standing awake/upright.
So for your case, if you want to account for a time of the day when occupants are sleeping on mattresses, I would change the comfort temperature for these hours down to 24C. Otherwise, if you are trying to show the comfortable hours of awake people in your space, your current 100% comfortable nighttime hours are a better estimate. I have also noticed that nighttime temperatures become comfortable in extreme weeks of hot/dry climates. This is what is happening in this extreme week simulation of Los Angeles' San Fernando Valley here:
https://www.youtube.com/watch?v=WJz1Eojph8E&index=3&list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
I will put in the ability to set custom values for comfort temperatures into the Adaptive Comfort Recipe soon so that you can test out a 'sleeping comfort temperature' if you would like. I have created a github issue for it here:
https://github.com/mostaphaRoudsari/Honeybee/issues/486
I was not so convinced by Nicol's argument about humidity on those pages as I was when I saw the correlations of both operative temperature and effective temperature to surveyed comfort votes in real buildings. Humphreys shows these correlations on page 106 of the book I linked to above. Notably, the correlation of Effective Temperature to comfort votes (0.257) is slightly worse than the correlation of just Operative Temperature (0.265). In other words, trying to account for humidity actually weakened the predictive power of the metric. This difference in correlation is not so great as for me to discount an Adaptive comfort model based on Effective temperature (as deDear once proposed). However, the correlations of PMV (0.213) and SET (0.185) to comfort votes are so poor that I now use the PMV model only with great caution.
The reason for the decreased importance of humidity may be multi-faceted, whether it's Nicol's explanation or another. Still, the data suggest that we are probably better off ignoring humidity when forecasting comfort, and should only consider it when evaluating conditions of extreme heat stress, where people's primary means of heat loss is sweating.
-Chris…
control points in Rhino.
Also, I forgot to mention in part 1 that when doing the directional subdivision, depending on how you drew your input mesh, there is a chance that it gets divided in the wrong direction, and you end up with something like this:
Which is not what we want.
The simple way to fix this is with the MeshTurn component, which rotates the direction of each face by one side:
Now we can use physical relaxation to smooth our mesh. In this example I show a simple tensile relaxation, so it will be negatively curved, but the same principles can be applied to all sorts of surfaces by using different combinations of forces.
The definition for the relaxation is attached below.
There are 3 main groups of forces used:
Planarization
For the mesh to be able to unroll properly into flat strips, we want each of the thin rectangles to be flat.
Springs
I already showed how the WarpWeft splitting can be used to assign different strengths to control the shape of a mesh here. Now because of the uneven subdivision we have very different numbers of edges in each direction, so the strengths have to account for this. Depending on the level of subdivision used and the shape you want to achieve, you may need to set the Weft stiffness to be 10 to 100 times that of the Warp.
Edge Smoothing
Because our subdivided mesh has square ends, we might not want to simply anchor the boundary, so I've shown how we can force the boundaries to become more circular while still staying in place. Each boundary curve gets pulled onto its best fit plane, while also applying bending to round it out, and springs to keep it from shrinking.
(This part could also be achieved in other ways, such as pulling the boundary vertices to a curve)
When we run this relaxation, the shape should smooth out to something like this:
Play with the tensions and boundaries until you are happy with the result, wait for it to stop moving, then stop the timer. (Remember, it is very important to always stop the timer once the relaxation has finished, before continuing to work with the output; otherwise Grasshopper becomes very slow, because Kangaroo keeps re-solving even when no movement is visible.)
If you want to try other shapes than tensile surfaces, you could also use forces such as bending, laplacian smoothing, or pulling to some target surface to control the form.
Next - Part 3: splitting and unrolling
…
t of data it has to operate on. So only those aspects of the algorithm that differ in these cases are relevant.
For example, if your algorithm always does exactly the same thing regardless of input (let's say all it does is measure the size of an array and display it on screen), it will be O(1), because it doesn't matter whether you run it on an array containing 10 or 1,000,000 items. Measuring the size of an array is a constant-time operation:
Print(string.Format("Array contains: {0} element(s)", data.Length));
However, if your algorithm works not on arrays but on linked lists, then it becomes an O(N) operation, because counting all the elements in a linked list means you have to iterate over all of them. And the longer the list, the more iterations you need; in fact the number of iterations is exactly the same as the number of items. (P.S. if you were using the System.Collections.Generic.LinkedList<T> class then it's still O(1), because apparently that particular implementation of linked lists caches the count and keeps it up to date.)
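To sketch why (Python here for brevity; the head node and next pointer are illustrative names), a naive count has to walk every node once:

def count(head):
    # One iteration per element, hence O(N)
    n = 0
    node = head
    while node is not None:
        n += 1
        node = node.next
    return n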
If you have a loop that runs for each item, and then inside that loop there is another loop that also runs for each item, then your complexity becomes O(N²). Or, in a similar case if your algorithm consumes two collections (N and M) and iterates over all items in N, and then inside that loop it iterates over all items in M, the complexity is O(N×M).
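As an illustrative sketch (Python; compare and combine stand in for whatever per-pair work you do):

# O(N^2): the inner loop runs N times for each of N outer iterations
for a in items:
    for b in items:
        compare(a, b)

# O(N x M): iterating one collection inside a loop over another
for a in items_n:
    for b in items_m:
        combine(a, b)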
The case can be made that only the most severe complexity is relevant enough to report. For example, if you have an algorithm that comprises three steps, the first of which is O(log(N)), the second O(N²) and the third O(3ⁿ), then technically the total complexity would be O(log(N) + N² + 3ⁿ); however, the first two parts are utterly insignificant compared to the third and can therefore be omitted entirely. Consider for example increasing the input size from 10 to 20 elements:
log(10) + 10² + 3¹⁰ = 1 + 100 + 59049 = 59150
log(20) + 20² + 3²⁰ ≈ 1 + 400 + 3486784401 ≈ 3486784802
As you can see the increase of the complexity is almost entirely due to the O(3ⁿ) portion, so much so that there's almost no point in mentioning the other two.
Now, your specific questions:
Constructors/declarations and method invocations are not necessarily O(1). In this particular case they are, but it is possible that some constructor you call has a higher complexity. For example, if instead of an empty List<T> you're constructing a SortedList<T> based on your inputs, then it may well be the most significant complexity in your entire algorithm and it needs to be taken into account.
Correct. A loop like this has complexity O(N); ignore stuff that only happens once, like the declaration of the iteration variable.
I don't understand that line of code. cP is already a list. Why are you calling ToList() on it? In general, making copies of memory-contiguous collections (like arrays or lists) can be done in O(1), depending on implementation, because blocks of memory can simply be duplicated or moved in one go using the correct hardware ops. However, other times it will require a loop, in which case the complexity goes up.
It's very cheap to add items to lists, provided the list has enough space to add new items. By default a list is big enough to contain only 4 items. If you try and add a fifth one, the list will need to allocate more memory elsewhere, copy the 4 existing items into the newly allocated space and only then add a new item. So, if you know ahead of time how many items you'll be adding to a list (or even if you only know a theoretical upper bound), you should construct the list using that known capacity. This will speed up the process of adding many items to a single list.
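A hedged sketch of the difference, written against the .NET List<T> from IronPython (the element count is arbitrary):

from System.Collections.Generic import List

n = 100000
fast = List[int](n)   # capacity known up front: no reallocation while adding
for i in range(n):
    fast.Add(i)

slow = List[int]()    # default capacity: grows and copies as it fills
for i in range(n):
    slow.Add(i)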
Don't know how crypto providers work, but since this part of your algorithm does not depend on cp.Count or the magnitude of populationCount, it doesn't matter for the big-O complexity metric.
…
one approach. If you are doing residential or small retail I would recommend something completely different. Having been in the architectural business for 25 years I offer the following illustration of how I am using Rhino/Grasshopper, its strengths, and its weaknesses as I see them.
My current work hovers somewhere between sculpture and small commercial architecture and involves in-house design and digital/analogue fabrication, usually metal. In the past I have worked for large traditional A/E firms of 100 people as well as smaller 10-person architectural design firms, so I understand those types of practices too. I find the digital design/fab process to be a welcome change in a profession that has been due for an evolutionary leap forward.
Rhino/Grasshopper is primarily a very intuitive design tool (especially for users with an AutoCAD history), with the added feature of providing a seamless transition to digital fabrication, whether in-house or not. Until I discovered Rhino, about 7 years ago, I used AutoCAD for the 10 years prior, which offered virtual 2D hand drafting with a few added CAD features. It was really still just traditional 2D drafting with a mouse instead of a pencil. During my AutoCAD period, design did not really change much and remained a more traditional process, and CDs were done in AutoCAD. Once I found Rhino, my design world immediately became one anchored in 3D form.
In a nutshell, Rhino and Grasshopper are design tools, best utilized to design 3D form well suited for CNC fabrication. They are particularly strong when the forms start to stray very far from Euclidean geometries. That being said, they are not my tool of choice for traditional architectural Construction Documents (CDs), nor does Rhino claim to be a significant CD tool.
In my early years with Rhino, the ability to parametrically study a design solution did not exist. Each significant design iteration required the designer to pretty much start over with a new model unless the change was fairly simple. With the advent of Grasshopper, parametric variant studies are now one of its greatest strengths. Grasshopper allows any number of extremely complicated relationships to be established, then instantaneously varied and studied, without writing a single line of computer code. Hundreds of combinations can be studied in an extremely short period of time. This is the power of Rhino/Grasshopper I value most. A great many Grasshopper users also choose to create incredibly complicated geometric forms. This is another of Grasshopper's great strengths; however, many of these "over the top" theoretical forms, though very beautiful and stimulating, seem to remain theoretical, or at least prohibitively expensive to actually construct at an architectural scale with the current state of the art of construction technology. Since my work revolves around built form, I remain tethered to build-ability. Grasshopper paired with CNC fabrication re-categorizes many complicated forms from "unbuildable" to "very buildable". But even with reality limiting my outcomes, Rhino and Grasshopper are incredibly powerful tools, able to manage challenging 3D forms while providing ease of virtually infinite variant study.
The bottom line of today's architectural process analysis is that there is no silver bullet in design software. The current state of the art of the architectural process utilizes many types of software, even within a single project. The workflow of a particular project is as highly sensitive to project constraints and opportunities as the design solution itself. If your work varies, so will your process.
And to your second point regarding industry guidelines: BIM, digital fabrication, and sustainability are all new complexities in a profession that is changing at an unimaginable rate, never before seen. I am not aware of any guidelines on this issue but would love to hear from someone who might know of some, particularly about risk and responsibilities when sharing digital models with the contractor.
Stan…
Added by Stan Carroll at 10:29pm on March 22, 2010
The way it works is as follows:
The GH_Document has a list of objects that have scheduled a solution (or rather, it maintains a list of callback delegates those objects have registered).
It also contains a TimeSpan field, which remembers the shortest schedule.
If the ScheduleSolution method is called during a solution, the timer won't be started until the solution finishes. So the schedule doesn't control how often the solutions will occur, it controls the delay between solutions.
Let's imagine the following scenario (with insanely scaled up time spans):
1. At noon exactly, a new solution starts. It doesn't matter what triggered it.
2. While the solution is still running at 12:01, a component (A) schedules a new solution 15 minutes later. This component registers a callback delegate along with the schedule.
3. While the solution is still running at 12:02, another component (B) schedules a solution with a 5 minute delay. Since 5 minutes is less than 15 minutes, the document forgets about the 15 minute schedule and instead switches to a 5 minute schedule. (B) does not register a callback.
4. While the solution is still running at 12:03, a third component (C) schedules a solution with a 10 minute delay. 10 minutes is further into the future than 5 minutes, so the document does not accept this new schedule. (C) does however register a callback.
5. At 12:05, the solution finishes. The SolutionEnd event is raised, viewports and canvasses are redrawn.
6. Also at 12:05, the document starts a timer that will fire an event 5 minutes from now, at 12:10.
7. Nothing happens in this interval, and at 12:10 the schedule timer fires.
8. The document notices it has a list of two callbacks (registered to A and C respectively), so it invokes them. This allows (A) and (C) to perform some sort of preparation. The most common action is to expire the component that gets the callback, so it'll get included in the imminent solution.
9. Once the document has invoked all schedule callbacks, it starts a new solution.
Note that (A) and (C) got called back way earlier than they requested. They scheduled solutions for 15 and 10 minutes respectively, but instead got the call only 5 minutes in. They can either play ball and accept the new schedule, or they can choose to not expire themselves and instead schedule a new solution for the future.
If at point 7 instead of nothing happening, a new solution was triggered by some other event (user dragging a slider or changing a wire), all the callbacks are still handled, but now even earlier than they expected.
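As a sketch of what this looks like from a component's point of view (GHPython here; the delay and callback body are illustrative, not the only way to use the API):

import Grasshopper

def prepare(doc):
    # Invoked just before the scheduled solution starts.
    # "Play ball": expire self so this component joins the new solution.
    ghenv.Component.ExpireSolution(False)

ghdoc = ghenv.Component.OnPingDocument()
if ghdoc is not None:
    # Ask for a new solution ~1 second after the current one ends
    ghdoc.ScheduleSolution(1000, Grasshopper.Kernel.GH_Document.GH_ScheduleDelegate(prepare))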
-----
You can always schedule a solution, which is what makes this mechanism more flexible than other approaches. It doesn't matter that a solution is already running. It doesn't matter that the schedule comes from another thread*.
On the other hand, schedules are annoying because the time you request is not necessarily the time you get.
Also, if you have 50 components that all want to schedule, you must pick a delay big enough so that they all manage to register their callbacks before the new solution starts. This may be tricky.
* I'm actually not 100% sure about the threadsafety, it could be that there's bugs under rare conditions.…
Added by David Rutten at 7:00am on February 11, 2015
parametric /parəˈmɛtrɪk/ adjective: relating to or expressed in terms of a parameter or parameters.
art /ɑːt/ noun: the expression or application of human creative skill and imagination, typically in a visual form such as painting or sculpture, producing works to be appreciated primarily for their beauty or emotional power.
// Summer School 2017
This 3-day intensive workshop for design students & professionals will delve into computational & parametric methods (using Rhino3D & Grasshopper3D) to create data-driven art installations, physically manifested into a space through hands-on fabrication & assembly. The experimental studio will run across 2 cities in India (New Delhi & Mumbai) and investigate the agenda of 'filling the void' at art installation scale, through the use of computation and parametric methods. The studio is designed as a 3-day event in both cities comprising technical tutorials, teaching sessions, prototyping & presentations, culminating in a symposium / round-table conference / open discussion with leading / emerging professionals who demonstrate computation, parametric design or alternative techniques in their work / practice / academia.
// Cities & Dates
New Delhi – 30th June to 2nd July 2017 (Friday to Sunday)
Mumbai – 7th July to 9th July 2017 (Friday to Sunday)
// Venue
DELHI: Startup Tunnel, Vihara Innovation Campus, D-57, 100 Feet Rd, Pocket D, Dr Ambedkar Colony, Chhattarpur, New Delhi - 110074
MUMBAI: Raffles Design International, Mumbai, Hi Life, 2nd Floor, Phirozshah Mehta Road, Santacruz (W), Mumbai – 400054
// Registration Dates
All registrations end 4 days prior to the workshop start date (or till seats last).
// About rat[LAB] EDUCATION
rat[LAB] EDUCATION is an initiative by rat[LAB]-Research in Architecture & Technology (www.rat-lab.org) to start a new discourse in architecture & parallel design disciplines with the use of 'computational design' & its various subsets. Spread across various cities / countries, we are establishing a global dialogue in the domain of computational design by actively organizing and participating in workshops, lectures, presentations & symposia. While rat[LAB] has taken a top-down approach of exploring computational design through industry, a parallel, bottom-up approach is also in line to involve students of all levels, from design & related backgrounds.…