Methods for mesh relaxation have been around much longer than any of the tools you mention, or indeed Rhino itself. A particularly well known one is Ken Brakke's Surface Evolver from over 20 years ago. Of the examples listed, some were inspirations when starting Kangaroo in 2009, and I've always tried to acknowledge those. (Of course, surface relaxation is only one part of what Kangaroo is about.)
I was also helped by conversations with many people, including Moritz, as you mention, and especially yourself with the coding. Indeed, the .NET class you ran back then for McNeel, as well as the other correspondence in your own time, was really useful in getting started - thanks again!
As for mesh relaxation being 'not so difficult' - for sure, there are many implementations of these things out there. Similarly, there are dozens of examples of subdivision implementations (Andrew Heumann even showed you don't need to code anything, but can do it with standard Grasshopper components), but let's not start being too dismissive of each other's work, hey? Of course, there's a lot more to making a flexible and useful tool than the individual algorithms - making them part of a larger framework or system of tools makes a big difference.
Being the first to implement a particular existing algorithm on a particular platform maybe isn't as big a deal as inventing a new algorithm or technique. When other implementations of the same technique come along, we should just assess them on their merits. If the new tool owes something to previous contributions, that needs to be acknowledged; if it then improves in some way on what is already available, great. Even if it doesn't, it may still be useful as an exercise for the author or as an example of a different approach - though of course too much duplication of effort on already solved problems is a waste, and if there is no significant new contribution then it doesn't deserve to replace what is there.
If a new tool comes along that improves in some way on what we've done already (such as some of the topology tools available in Starling compared to what is in WeaverBird, or the material properties in Karamba compared to Kangaroo), then let's just learn from that and let it spur us on to greater things and improve even further!
So I'll look forward to using this and future versions of WeaverBird in conjunction with Kangaroo, as I think their feature sets complement each other very nicely.…
An argument should stand up to reasonable, Socratic interrogation with logical and descriptive rigor. For example, I find entirely credible an architect who suggests that he placed his buildings 20 meters apart because he thought it would make people more comfortable, in light of his reading of the space relative to its environment, materiality, expected time of habitation/circulation, etc. His "thinking" such things is, for the most part, intuitive, and backed by deductive logic. (Of course, integration of wind analysis and other harder readings is obviously desirable.) But I interpret the active denial of intuition's crucial role in design as being at the heart of its current deplorable trend toward misuse of terminology, application of pseudo-science, and intellectual over-reach. Architects wade out of their waters precisely when they invoke such things as human psychology or perception.
Furthermore, I believe that architects - students and professionals alike - regularly make formal decisions according to their aesthetic judgement. To suggest that students aren't qualified to make a design decision during their studies just because they think it's formally successful seems exceedingly stingy; likewise, suggesting that a professional architect shouldn't rely on it is puzzling to me. Architects' attempts to justify what are obviously decisions of formal taste by other means often take the same form of obfuscation that makes architects appear to be intellectual charlatans to specialists in other fields. Taste is taste. I would agree that it can't be taught. But good architectural design certainly remains at least somewhat grounded in artistic sensibility.
3) I'm by no means advocating that all architects must master every detail in their work. Rather, that architects have at least a generalist's working knowledge of materials and construction systems. Floors don't levitate, and windows require depth; rules of thumb count as vital knowledge.
4) I would say that consideration of performance-driven properties falls under a basic understanding of how a building will operate in its given environment. For example, if you've designed a glass house in Arizona, you're doing it wrong. The more simulation and science you have, the better. Indeed, I think that such elements - wind analysis, solar gain analysis, structural performance - represent the most solid opportunities today for architects to assert the harder lines of defense in their design decision-making... say, for example, being able to demonstrate using basic geometry that your shade keeps the sun out in summer but lets it in when it's cold.…
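To make that last point concrete, here is a back-of-envelope sketch of the shade argument; the latitude, window height, and solstice angles are illustrative assumptions, not from any particular project:

```python
import math

# Back-of-envelope shade check (illustrative numbers, not a real solar model).
# Noon solar altitude is roughly 90 - latitude +/- 23.5 degrees at the solstices.
LATITUDE = 40.0        # degrees north (assumed site)
WINDOW_HEIGHT = 2.0    # meters from sill to the underside of the overhang

summer_alt = 90 - LATITUDE + 23.5   # noon altitude at summer solstice
winter_alt = 90 - LATITUDE - 23.5   # noon altitude at winter solstice

# Overhang depth needed to fully shade the window at summer solstice noon:
depth = WINDOW_HEIGHT / math.tan(math.radians(summer_alt))

# With that overhang, the winter noon shadow only reaches this far down,
# so the rest of the window still receives direct sun when it's cold:
winter_shadow = depth * math.tan(math.radians(winter_alt))

print(f"overhang depth: {depth:.2f} m")
print(f"sunlit window height in winter: {WINDOW_HEIGHT - winter_shadow:.2f} m")
```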
Next to that genome there will be one of those little [+] symbols. Also, when it finds a new best-answer-yet, the I'm-giving-up counter is reset to zero.
B is the average fitness of the entire population over time. It is not a particularly interesting statistic.
C represents the portion of the population that is fitter than one standard deviation above the average, and E represents the portion that is less fit than one standard deviation below it. In a similar fashion, D represents the part of the population that is within one standard deviation of the average. None of these are particularly interesting from the user's point of view, but they do give you a sense of the general fitness variability within a population, i.e. "all genomes are quite fit but there are one or two slackers" vs. "all genomes are absolutely terrible save for a rare few" vs. "genomes are pretty evenly distributed along the fitness spectrum".
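For the curious, the three bands amount to nothing more than this (a quick sketch of the statistic, not Galapagos code; it assumes higher numbers mean fitter, which depends on your fitness goal):

```python
import statistics

def fitness_bands(fitnesses):
    """Split a generation into the C / D / E bands described above."""
    mean = statistics.mean(fitnesses)
    sd = statistics.pstdev(fitnesses)
    above = sum(f > mean + sd for f in fitnesses)   # C: fitter than mean + 1 sd
    below = sum(f < mean - sd for f in fitnesses)   # E: less fit than mean - 1 sd
    within = len(fitnesses) - above - below         # D: within 1 sd of the mean
    n = len(fitnesses)
    return above / n, within / n, below / n

# "all genomes are quite fit but there are one or two slackers":
print(fitness_bands([0.9, 0.88, 0.86, 0.85, 0.84, 0.3]))
```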
The vertical blue bar indicates that you currently have generation 17 selected. A 'population' of genomes evolves over time, and every time-step is called a 'generation'. If all goes well, the fittest individuals in any given generation are fitter than the fittest individuals from the previous generation. If this fails to happen for, say, 20 generations in a row, the solver will abort the search.
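In pseudocode, that give-up rule looks something like this (a sketch of the idea only, assuming fitness is maximized; `step_generation` is a hypothetical stand-in for the actual selection-and-breeding step):

```python
MAX_STAGNANT = 20  # generations without improvement before aborting

def evolve(step_generation, population):
    """Run generations until the best fitness stagnates for MAX_STAGNANT steps."""
    best, stagnant = float("-inf"), 0
    while stagnant < MAX_STAGNANT:
        population = step_generation(population)  # breed the next generation
        gen_best = max(population)                # fitness of the fittest genome
        if gen_best > best:
            best, stagnant = gen_best, 0          # new best-answer-yet: reset counter
        else:
            stagnant += 1                         # no progress this generation
    return best
```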
A single generation contains a fixed number of genomes, or individuals. When you select a generation, those individuals are displayed in the bottom three graphs. On the left you see a 'similarity representation' of the generation: the closer two dots are, the more similar their genetic make-up. Black dots represent genomes with offspring; red crosses represent genomes that did not contribute to the next generation.
In the middle you see a multi-dimensional point graph. Each slider being manipulated by Galapagos is represented by a vertical line, and each genome is drawn as a polyline that crosses each vertical line at the relative value of that slider. This representation shows not just clusters of similar genomes, it also shows roughly which slider layout they have. You can select genomes in this graph.
On the right is a list of genomes (sorted from fittest to least fit) with the fitness value written next to each. The green bands are once again indicative of the slider layout of each genome, so if two capsules look alike, they have a similar slider layout.
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 3:00pm on November 18, 2013
an azimuth of 180 degrees is used for the northern hemisphere and 0 degrees for the southern hemisphere. For the optimal tilt, to my knowledge, most rules simply correct the location's latitude through a single formula.
The TOF component is more sophisticated: it essentially replicates Solmetric's Annual Insolation Lookup tool. What it does is create a grid of points, where each point represents the calculated annual insolation on the surface (PV module, SWH collector, facade, any kind of surface) for a single tilt and azimuth angle. Each point is then elevated according to its annual insolation value, and a mesh is created from that grid of points. The highest portion of the mesh represents the optimal tilt and azimuth angles. So the higher your "precision_" input is, the more points the mesh will have, and thus the more precise the final optimal tilt and azimuth will be.
For the diffuse component of the annual incident solar radiation at each point, the modified Perez 1990 model is used; the direct component comes from the classical cosine law, and the ground-reflected component from Liu and Jordan (1963). So the TOF component calculates the optimal tilt and azimuth based on annual incident solar radiation, not AC energy.…
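To make the grid-search idea concrete, here is a heavily simplified sketch of it (mine, not the component's code): a toy sun-path model and only the cosine-law direct term, whereas the real component also adds the Perez diffuse and Liu-Jordan ground-reflected components. Azimuth here is measured from south just to keep the toy model short:

```python
import math

LATITUDE = math.radians(45.0)  # assumed site

def sun_position(day, hour):
    """Very rough solar position: declination + hour angle -> altitude, azimuth."""
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day) / 365)
    ha = math.radians(15 * (hour - 12))  # hour angle
    alt = math.asin(math.sin(LATITUDE) * math.sin(decl) +
                    math.cos(LATITUDE) * math.cos(decl) * math.cos(ha))
    az = math.atan2(math.sin(ha),
                    math.cos(ha) * math.sin(LATITUDE) -
                    math.tan(decl) * math.cos(LATITUDE))
    return alt, az

def annual_insolation(tilt, azimuth):
    """Sum of cos(incidence) over daylight hours (direct component only)."""
    nx = math.sin(tilt) * math.sin(azimuth)   # surface normal
    ny = math.sin(tilt) * math.cos(azimuth)
    nz = math.cos(tilt)
    total = 0.0
    for day in range(1, 366):
        for hour in range(5, 20):
            alt, az = sun_position(day, hour)
            if alt <= 0:
                continue  # sun below the horizon
            sx = math.cos(alt) * math.sin(az)  # unit vector toward the sun
            sy = math.cos(alt) * math.cos(az)
            sz = math.sin(alt)
            total += max(0.0, nx * sx + ny * sy + nz * sz)
    return total

# The "grid of points": one sample per (tilt, azimuth) pair; a finer grid
# (a higher precision_ input) gives a more precise optimum.
best = max(((t, a) for t in range(0, 91, 5) for a in range(-90, 91, 10)),
           key=lambda ta: annual_insolation(math.radians(ta[0]),
                                            math.radians(ta[1])))
print("optimal (tilt, azimuth-from-south) in degrees:", best)
```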
Anemone offers both a standard loop and a 'Fast Loop'. The fun part of the slower version is that you can see what it's doing while it's running. 'Fast Loop' gives no indication that it's working, so you want to test it with small numbers and be sure it's coded properly before bumping up the iteration count.
The GH profiler running the slow version showed between 1 and 1.5 seconds per loop, but the reality was more like ~10 seconds per loop toward the end of an 11 X 11 grid, or ~20 minutes total. It's easier to be patient because you know it's working.
The 'Fast Loop' finished the same grid in 1.6 minutes! An impressive improvement. I've been running it on a 30 X 30 grid (900 points) for ~23 minutes so far and see nothing yet. Not the ~12 minutes I had hoped for... Now 36 minutes on this loop for 900 points... hope it's not stuck. Not fast! Later - DONE!! Profiler says 59 minutes for 900 points but it was more like an hour and twenty minutes total. It succeeded, I have a single 'Closed Brep' from 900 extruded rings, baked to Rhino.
Another strategy to explore would be doing 'SUnion' on a smaller grid using the Anemone loop, then replicating it by moving it as needed to form a larger grid, then running the copies through another 'SUnion' loop. I went ahead and implemented that while waiting. It works and is fast! Started with 3 X 3 and ran the result again as 5 X 5 (9 X 25 = 225 total) in barely ~70 seconds!? Trying 36 X 36 now... 1,296 points appears to have succeeded in less than ten minutes! Though it seems to take quite a while after the loop ends before control is restored to GH/Rhino. I'll let you do your own experiments and benchmarks.
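For anyone who wants to script the same divide-and-conquer idea directly, a rough RhinoCommon sketch (e.g. inside a GhPython component) might look like this; the names and the tolerance are illustrative:

```python
import Rhino.Geometry as rg

def union_all(breps, tol=0.01):
    """Boolean-union a list of breps; falls back to the input if the union fails."""
    result = rg.Brep.CreateBooleanUnion(breps, tol)
    return list(result) if result else list(breps)

def tiled_union(tile_breps, nx, ny, dx, dy, tol=0.01):
    """Union a small tile once, copy it across an nx-by-ny grid, union the copies."""
    tile = union_all(tile_breps, tol)
    copies = []
    for i in range(nx):
        for j in range(ny):
            move = rg.Transform.Translation(i * dx, j * dy, 0.0)
            for b in tile:
                c = b.DuplicateBrep()
                c.Transform(move)
                copies.append(c)
    return union_all(copies, tol)
```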
I encapsulated the loop in a cluster called 'suLoop' (blue groups).
Internal of 'suLoop' cluster:
…
Added by Joseph Oster at 11:14pm on March 22, 2017
This workshop is part of the Summer in the City program at the Portland School of Architecture and Allied Arts (an extension of the University of Oregon).
Using both Grasshopper and the Firefly plug-in, this workshop will focus on the design of innovative facade prototypes that are configurable, sensate, and active. Students will become familiar with the terminology used in interactive facade design, including an overview of hardware (i.e. sensors, actuators, and programmable microcontrollers) as well as software interfaces. We'll learn new prototyping techniques and develop digital and physical models that can respond to a plurality of environmental and user-driven forces. This workshop will take a hands-on approach, and you will walk away with the ability to build your own custom electronic circuits (using the Arduino), as well as create interactive simulations and models.
This course will primarily focus on physical computing techniques. Unfortunately, given the time constraints of the workshop, I will not be able to provide an extensive overview of the Grasshopper interface (it is suggested that participants have some familiarity with the Rhino/Grasshopper environment). There are many great online resources to get you up to speed relatively quickly if you are new to this software. This is a good place to start.
The course will be held at the School of Architecture and Allied Arts in Portland, OR. The date/times of the workshop are as follows:
Friday July 19, 5:00-7:50 P.M.
Saturday July 20, 9:00 A.M.-3:50 P.M.
Sunday July 21, 1:00-3:50 P.M.
If you are a designer, architect, or anyone who is interested in learning about the digital tools and technology trends that are revolutionizing design today, this workshop is for you. Make sure to click here to find out more about registration and enrollment in this exciting new workshop.…
+ Easily debug your system by displaying individual force vectors.
+ High performance, parallel algorithms, spatial data-structures.
+ Write your own custom forces, no coding required.
+ Open source framework for others to build custom behaviors.
+ Boid forces: Cohese, Separate, Align, & View.
+ Contain Agents within Brep, Box, Surface, and Polysurface environments.
+ Forces: Path Follow, Attract, Contain, Surface Flow, Seek, Arrive, Avoid Obstacle, Avoid Unaligned Collision, Sense Image, Sense Point, & more to come.
+ Behaviors: Bounce Contain, Kill Contain, Initial Velocity, Eat, Set Velocity, & more to come.
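To give a feel for what one of the boid forces listed above computes, here is a generic separation steer in plain Python. This is an illustration only, not Quelea's actual API; every name in it is hypothetical:

```python
import math

def separate(agent_pos, neighbors, radius=5.0, max_force=0.5):
    """Steer away from neighbors closer than `radius` (classic boid separation)."""
    steer = [0.0, 0.0, 0.0]
    count = 0
    for n in neighbors:
        d = math.dist(agent_pos, n)
        if 0 < d < radius:
            # Weight the away-vector by 1/d^2 so closer neighbors push harder.
            for k in range(3):
                steer[k] += (agent_pos[k] - n[k]) / (d * d)
            count += 1
    if count == 0:
        return steer
    mag = math.sqrt(sum(c * c for c in steer))
    if mag > 0:
        steer = [c / mag * max_force for c in steer]  # clamp to max_force
    return steer

print(separate((0, 0, 0), [(1, 0, 0), (0, 2, 0)]))
```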
Future work:
+ Behaviors to drive simulations of people and vehicles.
+ Temporal inputs can change the actions of the system over time.
Download the add-on on Food4Rhino
If you find any bugs or have any feature requests, please post them on the GitHub Issue Tracker. This lets everyone see which bugs are open or closed, and lets me update you when a bug is fixed.
This is an open-source project, so if you need custom forces or behaviors for your project, reach out to me about becoming a committer.
View the project on GitHub
To get started check out this video tutorial on how to set up a basic particle scene. Follow along with this example script.
Learn how to set up a flocking simulation with agents in this video tutorial and example file.
To learn more about the polymorphic type system in the latest release of Quelea see this video explanation.
For questions on how to use Quelea, please create a new Discussion.…
Added by Alex Fischer at 1:20pm on February 16, 2015
sg2013 will bring together innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four-day Workshop 15-18 April, and a public conference beginning with Talkshop on 19 April, followed by a Symposium on 20 April. The event follows the format of the highly successful preceding events sg2010 Barcelona, sg2011 Copenhagen, and sg2012 Troy.
The Challenge for sg2013 is entitled Constructing for Uncertainty.
more information
CONSTRUCTING FOR UNCERTAINTY
Design and construction, increasingly information-centric, must also address issues of computational ambiguity. As users, we must drive computational systems to assume new roles and subsume more domains to meet the needs before us. We must consider issues of time and permanence within a cultural and technological landscape of constant change - our grandest gestures will define our environment physically, culturally and economically for generations.
Where historic responses to uncertainty constructed a simplistic environment with basic mechanisms for aggregation and subdivision, we augment these with smart, dynamic and interactive systems. Where modeling capacity has been limited, we now take advantage of vast amounts of data collected by sensing and scanning devices, processed by cluster or grid computing, filtered by machine learning algorithms into patterns, and communicated by ubiquitous devices. Our past data trajectories can guide us in discovering robust and tolerant design systems to meet the demands of a malleable present and uncertain future.
sg2013 Constructing for Uncertainty: transition computational design from the hard space of the ideal to the soft reality of an uncertain built environment.
more information
sg2013 WORKSHOPS
The SG Workshop is a unique creative cauldron, attracting attendees from across the worlds of academia and professional practice, as well as many of the brightest students. The Workshop is open to 100 applicants who come together for four intensive days of design and collaboration.
The annual Workshop is organised around Clusters. Clusters are hubs of expertise comprising people, knowledge, tools, materials and machines. The Clusters provide a focus for Workshop participants working together within a common framework.
more information
sg2013 TALKSHOP
After four intense days of innovative work, Talkshop offers an opportunity for critical reflection on what has been accomplished in the Workshop. It will be a chance to open debates, pose questions, challenge orthodoxies, and propose new ideas.
Talkshop will feature informal and open discussions between Cluster participants, leading practitioners and emerging talents in digital design, offering inside perspectives on how the landscape of computational design is reshaping built form.
sg2013 SYMPOSIUM
The Symposium will examine the year's Challenge. Invited keynote speakers will showcase major projects and research from around the globe that mark out the territory of the Challenge. The Symposium is a unique opportunity to hear insights into the challenges ahead for the discipline.
Interwoven throughout the day will be reports and highlights from each Workshop Cluster, giving an opportunity to view work created during the previous four days of intensive collaboration, design and development.
sg2013 SCHEDULE
Call for Clusters: 26 September 2012
Cluster Proposals Due: 4 November 2012
Workshop Applications Open: November 2012
Workshop: 15 - 18 April 2013
Conference: 19 - 20 April 2013
More information about the event can be found at smartgeometry.org…
Added by Shane Burger at 10:35am on October 25, 2012
Basically, I needed a 3D weighted voronoi to create a controllable screen wall. I looked here and on other sites trying to find an answer, but all I found were approximations of the problem. So you know:
http://www.grasshopper3d.com/forum/topics/weighted-3d-voronoi-possible?commentId=2985220%3AComment%3A950591
This is an old post on the forum (from even before the GH voronoi component existed) about the topic. Although I understood the theory of weighted voronoi, I could not carry that logic into a Grasshopper algorithm, even though I tried.
http://www.grasshopper3d.com/forum/topics/voronoi-customization-with-attraction-points
This is a short post on the topic that seems to have reached a solution. I don't know if it was my lack of knowledge (probably yes), but I could not understand how the presented solutions solved the problem. :/
http://www.grasshopper3d.com/forum/topics/looking-for-weighted-voronoi?id=2985220%3ATopic%3A49548&page=1#comments
This is the longest post I have found on the topic. It presents a very good approximation of a 2D weighted voronoi, and I could manage it, but I could not find a way to carry its logic over to a 3D voronoi.
http://www.grasshopper3d.com/forum/topics/differentiated-voronoi
In this post I learned that a weighted voronoi creates hyperbolic curves instead of straight lines, which made me wonder whether a 3D weighted version would even be possible, since I needed flat surfaces in the cells. However, in the same post I read something about power diagrams, which brings me to the next two links.
http://graphics.uni-konstanz.de/publikationen/2005/voronoi_treemaps/Balzer%20et%20al.%20--%20Voronoi%20Treemaps.pdf
https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/NoBr12a.pdf
These two links are papers about using voronoi diagrams to build treemaps (I won't go into the details of treemaps here, but the papers give a good introduction if you are interested). I learned that there are basically two types of weighted voronoi diagrams: additively weighted (which creates the hyperbolic edges) and power weighted (which creates straight edges). In the papers the authors present the algorithms they use to compute the diagrams. I have studied Python a little, but my lack of scripting knowledge (again) did not allow me to understand their complex algorithms.
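For reference, the difference between the two types boils down to the distance function each site minimizes. A tiny sketch (my own illustration, not from the papers):

```python
import math

sites = [((0.0, 0.0, 0.0), 1.0), ((4.0, 0.0, 0.0), 2.5)]  # (point, weight) pairs

def owner(x, scheme="power"):
    """Index of the site whose cell contains x under the chosen weighting."""
    def score(site):
        p, w = site
        d = math.dist(x, p)
        # additively weighted: |x - p| - w   -> hyperbolic cell boundaries
        # power diagram:       |x - p|^2 - w -> straight (planar in 3D) boundaries
        return d - w if scheme == "additive" else d * d - w
    return min(range(len(sites)), key=lambda i: score(sites[i]))

# The power boundary satisfies |x-p0|^2 - w0 == |x-p1|^2 - w1, which is linear
# in x -- hence the flat cell faces needed for a screen wall.
print(owner((1.8, 0.0, 0.0)), owner((1.8, 0.0, 0.0), "additive"))
```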
http://www.laratomholt.nl/ghscripts.html
The last link (finally) has some Grasshopper scripts by a researcher named Lara Tomholt. One of these scripts deals with weighted voronoi in 2D and 3D and achieves a very good approximation of it. However, it still leaves some voids between the cells, which is undesirable for my purposes.
Sorry for this long research history, but since the topic has been discussed so much, I thought it was a good idea to show this "state of the art" before presenting my own developments.
Combining all this knowledge from my research with a bit of what I already knew in Grasshopper, I have been trying to create my own weighted voronoi with 2D and 3D cells. I started by making adjustments to the scripts found in the links above, and honestly don't remember exactly how I arrived at the attached file; it is probably the result of a lot of trial and error.
The script is based on the connectivity of the Delaunay mesh component.
Basically, I use the connections of one point to influence the points connected to it: I scale the line between them, with the original point as the center of scaling, and use the new endpoints as inputs to the voronoi component.
This approach solved the problem for weighting a single cell; to make it work on more than one cell, I used a recursive loop with HoopSnake so that it always considers the updated set of points while adding the weights (easier to understand by looking at the script).
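In plain Python, the scaling step amounts to something like this (a sketch of the idea with names of my own; the attached script does the same with native components and the HoopSnake loop):

```python
def push_neighbors(p, neighbors, weight):
    """Scale each p->q segment about p. A weight > 1 pushes the Delaunay
    neighbors away, which moves the voronoi bisectors outward and enlarges
    the cell of p; the moved points are then fed to the voronoi component."""
    moved = []
    for q in neighbors:
        moved.append(tuple(p[k] + weight * (q[k] - p[k]) for k in range(3)))
    return moved

# e.g. a weight of 1.3 grows the cell of p; a recursive loop repeats this for
# each weighted point, always feeding the updated point set back in.
print(push_neighbors((0, 0, 0), [(2, 0, 0), (0, 2, 0)], 1.3))
```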
This approach seems to work up to the penultimate point (probably because of the nature of the Delaunay mesh connections, I guess), but most importantly, it can be used with the 3D voronoi component.
In the attached file I used a Range component to increase the weight of the cells following the sequence of the referenced points; of course, other methods can be used to create more dynamic cell weights.
Well, I'm not sure whether my approach is the correct way to solve the problem, nor whether it is really a solution at all, so I'm open to suggestions, reviews and comments that may validate (or not) this approach, as well as to new solutions.
Sorry for the big post and the not-so-good English.
Thank you for reading. :)…