nome there will be one of those little [+] symbols. Also, when it finds a new best-answer-yet the I'm-giving-up counter is reset to zero.
B is the average fitness of the entire population over time. It is not a particularly interesting statistic.
C represents the portion of the population that is fitter than one standard deviation above the average, and E represents the portion that is less fit than one standard deviation below it. In a similar fashion, D represents the part of the population within one standard deviation of the average. None of these is particularly interesting from the user's point of view, but together they give you a sense of the general fitness variability within a population, e.g. "all genomes are quite fit but there are one or two slackers" vs. "all genomes are absolutely terrible save for a rare few" vs. "genomes are pretty well distributed along the fitness spectrum".
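The bands can be computed directly from a population's fitness values. A minimal sketch, assuming higher fitness is better and using the population standard deviation:

```python
import statistics

def fitness_bands(fitnesses):
    """Split a population's fitness values into the bands plotted by the
    solver: the average (B), the portion fitter than one standard
    deviation above the average (C), the portion within one standard
    deviation (D), and the portion below it (E)."""
    mean = statistics.fmean(fitnesses)   # the B line
    sd = statistics.pstdev(fitnesses)    # population standard deviation
    above = sum(1 for f in fitnesses if f > mean + sd)   # band C
    below = sum(1 for f in fitnesses if f < mean - sd)   # band E
    within = len(fitnesses) - above - below              # band D
    return mean, sd, above, within, below
```

Counting heads in each band is what gives you the "one or two slackers" vs. "a rare few stars" read at a glance.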
The vertical blue bar indicates that you currently have generation 17 selected. A 'population' of genomes evolves over time, and every time-step is called a 'generation'. If all goes well, the fittest individuals in any given generation are fitter than the fittest individuals from the previous generation. If this doesn't happen for, say, 20 generations in a row, the solver will abort the search.
A single generation contains a fixed number of genomes, or individuals. When you select a generation, those individuals are displayed in the bottom three graphs. On the left you see a 'similarity representation' of the generation: the closer two dots are, the more similar their genetic make-up. Black dots represent genomes with offspring; red crosses represent genomes that did not contribute to the next generation.
In the middle you see a multi-dimensional point graph. Each slider being manipulated by Galapagos is represented by a vertical line, and each genome is drawn as a polyline crossing those vertical lines at heights proportional to its slider values. This representation shows not only clusters of similar genomes, but also roughly which slider layout each of them has. You can select genomes in this graph.
On the right is a list of genomes (sorted from fittest to least fit) with the fitness value written next to each. The green bands are once again indicative of the slider layout of each genome, so if two capsules look alike, they have similar slider layouts.
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 3:00pm on November 18, 2013
used of 180 being for the northern hemisphere and 0 for the southern hemisphere. For the optimal tilt, to my knowledge, they are mostly based on correcting the location's latitude through a single formula.

The TOF component is more sophisticated. It essentially replicates Solmetric's Annual Insolation Lookup tool. What it does is create a grid of points, where each point represents the calculated annual insolation on the surface (PV module, SWH collector, facade, any kind of surface) for a single tilt and azimuth angle. Each point is then elevated according to its annual insolation value, and a mesh is created from that grid of points. The highest portion of the mesh represents the optimal tilt and azimuth angles. So the higher your "precision_" input is, the more points the mesh will have, and thus the more precise the final optimal tilt and azimuth will be.

For the diffuse component of the annual incident solar radiation at each point, the Perez 1990 modified model is used. The direct component comes from the classical cosine law, and the ground-reflected component from Liu and Jordan (1963). So the TOF component calculates the optimal tilt and azimuth based on annual incident solar radiation, not AC energy...…
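The grid-of-points search described above can be sketched as a brute-force scan over (tilt, azimuth) pairs. The `annual_insolation` stand-in below is a toy function with a made-up optimum at tilt 35°, azimuth 180°; it is not the Perez/cosine-law/Liu-Jordan calculation the real component performs:

```python
import math

def annual_insolation(tilt, azimuth):
    """Placeholder for the real per-orientation calculation (direct via
    the cosine law, diffuse via Perez 1990, ground-reflected via Liu and
    Jordan 1963). This toy stand-in simply peaks at tilt 35, azimuth 180;
    both numbers are made up for the demo."""
    return math.cos(math.radians(tilt - 35)) * math.cos(math.radians((azimuth - 180) / 2))

def optimal_tilt_azimuth(precision):
    """Scan a (precision+1) x (precision+1) grid of tilt/azimuth pairs
    and return the pair with the highest annual insolation, mirroring
    the 'more points -> more precise optimum' behaviour of precision_."""
    best = None
    for i in range(precision + 1):
        tilt = 90.0 * i / precision            # tilts from 0 to 90 degrees
        for j in range(precision + 1):
            azimuth = 360.0 * j / precision    # azimuths from 0 to 360 degrees
            value = annual_insolation(tilt, azimuth)
            if best is None or value > best[0]:
                best = (value, tilt, azimuth)
    return best[1], best[2]
```

A coarser grid can only return orientations that happen to lie on its grid lines, which is exactly why a higher precision_ sharpens the answer.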
Loop'. The fun part of the slower version is that you can see what it's doing while it's running. 'Fast Loop' gives no indication that it's working, so you want to test it with small numbers and be sure it's coded properly before bumping the iteration count up.
The GH profiler running the slow version showed between 1 and 1.5 seconds per loop, but the reality was more like ~10 seconds per loop toward the end of an 11 X 11 grid, or ~20 minutes total. It's easier to be patient because you know it's working.
The 'Fast Loop' finished the same grid in 1.6 minutes! An impressive improvement. I've been running it on a 30 X 30 grid (900 points) for ~23 minutes so far and see nothing yet. Not the ~12 minutes I had hoped for... Now 36 minutes on this loop for 900 points... hope it's not stuck. Not fast! Later - DONE!! Profiler says 59 minutes for 900 points but it was more like an hour and twenty minutes total. It succeeded, I have a single 'Closed Brep' from 900 extruded rings, baked to Rhino.
Another strategy to explore would be doing 'SUnion' on a smaller grid using the Anemone loop, then replicating the result by moving it as needed to form a larger grid, and running the copies through another 'SUnion' loop. I went ahead and implemented that while waiting. It works and is fast! Started with 3 X 3 and ran the result again as 5 X 5 (9 X 25 = 225 total) in barely ~70 seconds!? Trying 36 X 36 now... 1,296 points appears to have succeeded in less than ten minutes! Though it seems to take quite a while after the loop ends before control is restored to GH/Rhino. I'll let you do your own experiments and benchmarks.
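One way to see why the replicate-then-union strategy is so much faster is to count boolean operations, under the simplifying assumption that unioning k solids costs roughly k - 1 pairwise unions:

```python
def flat_union_ops(n_cells):
    """Union n solids one at a time: n - 1 boolean operations."""
    return n_cells - 1

def tiled_union_ops(tile, copies):
    """Union a tile x tile block once, replicate the already-unioned
    result into a copies x copies arrangement, then union the copies."""
    build_tile = tile * tile - 1      # e.g. 3 x 3 tile  -> 8 ops
    merge_copies = copies * copies - 1
    return build_tile + merge_copies
```

For the 36 X 36 case, the flat loop needs 1,295 unions, while a 3 X 3 tile replicated into a 12 X 12 layout needs 8 + 143 = 151, and in practice the early unions also act on much simpler geometry. (The real cost of each union grows with geometric complexity, so this count understates the advantage.)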
I encapsulated the loop in a cluster called 'suLoop' (blue groups).
Internal of 'suLoop' cluster:
…
Added by Joseph Oster at 11:14pm on March 22, 2017
he Summer in the City program, part of the Portland School of Architecture and Allied Arts (an extension of the University of Oregon).
Using both Grasshopper and the Firefly plug-in, this workshop will focus on the design of innovative facade prototypes that are configurable, sensate, and active. Students will become familiar with the terminology used in interactive facade design, including an overview of hardware (i.e. sensors, actuators, and programmable microcontrollers) as well as software interfaces. We'll learn new prototyping techniques and develop digital and physical models which can respond to a plurality of environmental and user-driven forces. This workshop will take a hands-on approach, and you will walk away with the ability to build your own custom electronic circuits (using the Arduino), as well as create interactive simulations and models.
This course will primarily focus on physical computing techniques. Unfortunately, given the time constraints of the workshop, I will not be able to provide an extensive overview of the Grasshopper interface (it is suggested that participants have some familiarity with the Rhino/Grasshopper environment). There are many great online resources to get you up to speed relatively quickly if you are new to this software. This is a good place to start.
The course will be held at the School of Architecture and Allied Arts in Portland, OR. The date/times of the workshop are as follows:
Friday July 19, 5:00-7:50 P.M.
Saturday July 20, 9:00 A.M.-3:50 P.M.
Sunday July 21, 1:00-3:50 P.M.
If you are a designer, architect, or anyone who is interested in learning about the digital tools and technology trends that are revolutionizing design today, this workshop is for you. Make sure to click here to find out more about registration and enrollment in this exciting new workshop.…
+ Easily debug your system by displaying individual force vectors.
+ High performance, parallel algorithms, spatial data-structures.
+ Write your own custom forces, no coding required.
+ Open source framework for others to build custom behaviors.
+ Boid forces: Cohese, Separate, Align, & View.
+ Contain Agents within Brep, Box, Surface, and Polysurface environments.
+ Forces: Path Follow, Attract, Contain, Surface Flow, Seek, Arrive, Avoid Obstacle, Avoid Unaligned Collision, Sense Image, Sense Point, & more to come.
+ Behaviors: Bounce Contain, Kill Contain, Initial Velocity, Eat, Set Velocity, & more to come.
Future work:
+ Behaviors to drive simulations of people and vehicles.
+ Temporal inputs can change the actions of the system over time.
Download the add-on on Food4Rhino
If you find any bugs or have any feature requests, please post them on the GitHub Issue Tracker, which lets everyone see which bugs are open or closed and allows me to update you when a fix is in.
This is an open source project, so if you need custom-defined forces or behaviors for your project, reach out to me about becoming a committer.
View the project on GitHub
To get started check out this video tutorial on how to set up a basic particle scene. Follow along with this example script.
Learn how to set up a flocking simulation with agents in this video tutorial and example file.
To learn more about the polymorphic type system in the latest release of Quelea see this video explanation.
For questions on how to use Quelea, please create a new Discussion.…
Added by Alex Fischer at 1:20pm on February 16, 2015
and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four-day Workshop, 15-18 April, and a public conference beginning with Talkshop on 19 April, followed by a Symposium on 20 April. The event follows the format of the highly successful preceding events sg2010 Barcelona, sg2011 Copenhagen, and sg2012 Troy.
The Challenge for sg2013 is entitled Constructing for Uncertainty.
more information
CONSTRUCTING FOR UNCERTAINTY
Design and construction, increasingly information-centric, must also address issues of computational ambiguity. As users, we must drive computational systems to assume new roles and subsume more domains to meet the needs before us. We must consider issues of time and permanence within a cultural and technological landscape of constant change - our most grand gestures will define our environment physically, culturally and economically for generations.
Where historic responses to uncertainty constructed a simplistic environment with basic mechanisms for aggregation and subdivision, we augment these with smart, dynamic and interactive systems. Where modeling capacity has been limited, we now take advantage of vast amounts of data collected by sensing and scanning devices, processed by cluster or grid computing, filtered by machine learning algorithms into patterns, and communicated by ubiquitous devices. Our past data trajectories can guide us in discovering robust and tolerant design systems to meet the demands of a malleable present and uncertain future.
sg2013 Constructing for Uncertainty: transition computational design from the hard space of the ideal to the soft reality of an uncertain built environment.
more information
sg2013 WORKSHOPS
The SG Workshop is a unique creative cauldron attracting attendees from across the world of academia and professional practice, as well as many of the brightest students. The Workshop is open to 100 applicants who come together for four intensive days of design and collaboration.
The annual Workshop is organised around Clusters. Clusters are hubs of expertise comprising people, knowledge, tools, materials and machines. The Clusters provide a focus for Workshop participants working together within a common framework.
more information
sg2013 TALKSHOP
After four intense days of innovative work, Talkshop offers an opportunity for critical reflection on what has been accomplished in the Workshop. Talkshop will be an opportunity to open debates, pose questions, challenge orthodoxies, and propose new ideas.
Talkshop will feature informal and open discussions between Cluster participants, leading practitioners and emerging talents in digital design, offering inside perspectives on how the landscape of computational design is reshaping built form.
sg2013 SYMPOSIUM
The Symposium will examine the year's Challenge. Invited keynote speakers will showcase major projects and research from around the globe that mark out the territory of the year's Challenge. The Symposium is a unique opportunity to hear insights into the challenges ahead for the discipline.
Interwoven throughout the day will be reports and highlights from each Workshop Cluster, giving an opportunity to view work created during the previous four days of intensive collaboration, design and development.
sg2013 SCHEDULE
Call for Clusters 26 September 2012
Cluster Proposals Due 4 November 2012
Workshop Applications Open November 2012
Workshop 15 - 18 April 2013
Conference 19 - 20 April 2013
More information about the event can be found at smartgeometry.org…
Added by Shane Burger at 10:35am on October 25, 2012
ically I needed a 3D weighted voronoi to create a controllable screenwall. I looked here and on other sites trying to find an answer, but all I found were approximations of the problem. So you know:
http://www.grasshopper3d.com/forum/topics/weighted-3d-voronoi-possible?commentId=2985220%3AComment%3A950591
This is an old post in the forum (from even before the GH Voronoi component existed) about the theme. Although I understood the theory of weighted voronoi, I could not carry that logic over into a Grasshopper algorithm, even though I tried.
http://www.grasshopper3d.com/forum/topics/voronoi-customization-with-attraction-points
This is a short post about the theme that seems to have reached a solution. I don't know if it was my lack of knowledge (probably yes), but I could not understand how the presented solutions solved the problem. :/
http://www.grasshopper3d.com/forum/topics/looking-for-weighted-voronoi?id=2985220%3ATopic%3A49548&page=1#comments
This is the longest post about the theme I have found. It presents a very good approximation of a 2D weighted voronoi and I could manage it, but I could not find a way to carry this logic over to a 3D voronoi.
http://www.grasshopper3d.com/forum/topics/differentiated-voronoi
In this post I learned that a weighted voronoi creates hyperbolic curves instead of straight lines, which made me wonder whether a 3D weighted version was even possible, since I needed flat surfaces for the cells. However, in the same post I read something about power diagrams, which brings me to the next two links.
http://graphics.uni-konstanz.de/publikationen/2005/voronoi_treemaps/Balzer%20et%20al.%20--%20Voronoi%20Treemaps.pdf
https://www.uni-konstanz.de/mmsp/pubsys/publishedFiles/NoBr12a.pdf
These links are two papers about the use of voronoi diagrams in the development of treemaps (I'm not going into the details of treemaps here, but the papers give a good introduction if you are interested). Well, I learned that there are basically two types of weighted voronoi diagrams: additively weighted (this one creates the hyperbolic curves) and power weighted (this one creates straight lines). In the papers the authors present their algorithms for computing the diagrams. I have studied Python a little bit, but my lack of knowledge (again) of scripting did not allow me to understand their complex algorithms.
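The difference between the two types comes down to the distance function each cell minimizes. A sketch (the sites and weights used below are arbitrary test values, not from the papers): additively weighted uses ||x - p|| - w, whose bisectors are hyperbolic, while the power diagram uses ||x - p||² - w, whose bisectors are straight lines (planes in 3D) — which is exactly why power diagrams keep flat cell faces.

```python
import math

def sqdist(x, p):
    """Squared Euclidean distance between 2D points x and p."""
    return (x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2

def additive_distance(x, site, weight):
    """Additively weighted voronoi: minimize ||x - p|| - w.
    Bisectors between sites are hyperbolic arcs."""
    return math.sqrt(sqdist(x, site)) - weight

def power_distance(x, site, weight):
    """Power diagram: minimize ||x - p||^2 - w.
    Bisectors are straight lines (planes in 3D)."""
    return sqdist(x, site) - weight
```

With sites (0,0) carrying weight 1 and (4,0) carrying weight 0, every point on the vertical line x = 2.125 is a tie under the power distance (a straight bisector), while under the additive distance the tie curve bends away from that line.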
http://www.laratomholt.nl/ghscripts.html
The last link (finally) has some Grasshopper scripts by a researcher named Lara Tomholt. One of these scripts is about weighted voronoi in 2D and 3D and achieves a very good approximation of it. However, it still leaves some voids between the cells, which is undesirable for my objectives.
Sorry for this big research history, but since the theme has been discussed so much, I thought it was a good idea to show this "state of the art" for a better understanding before showing my own developments.
Joining all this knowledge gained through research with a bit of what I already knew in Grasshopper, I have been trying to create my own weighted voronoi with 2D and 3D cells. I started by making adjustments to the scripts found in the links, and I honestly don't remember exactly how I got to the attached file; it is probably the result of a lot of trial and error.
The script is based on the connectivity output of the Delaunay mesh component.
Basically I used the connections of one point to influence the points connected to it: I scale the line between them, using the original point as the center of scaling, and use the new end points as inputs to the voronoi component.
This approach solved the problem for weighting a single cell; to make it work with more than one cell I used a recursive loop with HoopSnake so that it always considers the newest set of points while adding the weights (easier to understand by looking at the script).
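A rough sketch of the scale-the-connection-line idea in plain Python (the function name and exact scaling rule are my guesses at what the script does, not its actual definition):

```python
def push_neighbors(points, center, neighbors, factor):
    """Scale the line from the weighted point to each of its Delaunay
    neighbours, using the weighted point as the centre of scaling.
    A factor > 1 pushes the neighbours away, which enlarges the
    weighted point's cell once the moved points are fed back into the
    voronoi component in place of the originals."""
    c = points[center]
    out = [list(p) for p in points]  # copy; untouched points stay put
    for j in neighbors:
        out[j] = [c[k] + factor * (points[j][k] - c[k]) for k in range(len(c))]
    return out
```

Because the scaling is plain vector arithmetic, the same move works unchanged on 3D points, which is why the trick carries over to the 3D voronoi component. Repeating it per weighted point with the updated point set is the HoopSnake loop.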
This approach seems to work up to the penultimate point (probably because of the nature of the Delaunay mesh connections, I guess), but most importantly it can be used with the 3D voronoi component.
In the attached file I used a Range component to create an increase in the weight of the cells aligned with the sequence of referenced points; of course, other methods can be used to create more dynamic weights in the cells.
Well, I'm not sure if my approach is the correct one to solve the problem, nor whether it is really a solution at all, so I'm open to suggestions, reviews and comments that can validate this approach or not, and also open to other solutions for the case.
Sorry for the big post and the not-so-good English.
Thank you for Reading. :)…
ipe Pecegueiro

Type of participants: Students, graduate students, researchers, professionals
Duration: 2 days, Sat – Sun
Prerequisites 1 / participant skills: Experience in Rhino and Grasshopper; programming experience with Processing or the Arduino IDE is recommended but not necessary
Prerequisites 2 / hardware: Participants should bring their own computer with a Windows XP or 7 64-bit OS
Prerequisites 3 / software: Rhinoceros Version 4 sr9, Grasshopper 0.8.0050, Arduino IDE, Processing, Google Earth* (*Software versions should be the most updated versions at the time of the workshop. Rhino 5 is also acceptable.)

Description
An associative model is only as relevant as the information it seeks to manage. This workshop will engage the associative model by feeding it with real-time and real-world data captured through prefabricated sensor nodes known as the Ambient Sensor Kit (ASKit). The ASKit is an Open Hardware platform for personal data collection and sharing. The ASKit project is based on the premise that a personal understanding of the information around us is key to a sustainable and informed habitation of our environment. http://uask.it

Workshop participants will be working with Grasshopper, a generative, logic-based design environment in which participants will be able to associate real-world data with their models. Several other tools will be employed, including Processing, Pachube, Google Earth, and gHowl (a set of custom components which extend the functionality of Grasshopper). This two-day workshop will focus on a specific area in Berlin to understand, through data, the differences between the physical barriers and invisible forces which define certain urban functions. The participants will engage in:
- environmental data collection
- site surveying with open hardware/DIY electronics
- data visualization and analysis
- associative modeling with collected data

Day 1: Demonstration of the ASKit hardware platform for data collection and associative modeling. Data capture session in specific zones in Berlin. Data visualization and associative modeling in Grasshopper.
Day 2: Focused data capture session. Directed projects applying associative modeling with collected data.…
Added by Luis Fraguada at 11:34am on August 23, 2011