-Vehicle elements (3D objects and a component for custom vehicles; models from Google Warehouse)
-Traffic Velocity Graphs, drawn on every trajectory curve (custom graphs can also be drawn)
-Traffic regulation elements (such as Traffic Lights and Stop Signals) and traffic density
-Particle Systems on trajectory curves, to manage traffic regulations and avoid collisions based on safety distances
-Traffic Vehicle Animation Modes (Dots, Bounding Boxes or complex Meshes with attributes for final rendering, via Giulio Piacentino's Render Animation)
-Vehicle Lights and Vehicle Sights, for visual studies
Team:
-Sergio del Castillo Tello (Doctor No, lead programmer)
-Everyone who wants to get involved and support these tools
The development of Roadrunner is planned to take place within a Research Group Program at ETSAM (the School of Architecture in Madrid). This forum group was created to gauge the interest of the community while we keep developing (the tool is still being tested); we will probably share the whole thing in the future. Cheers!
Traffic Cluster Scheme
Traffic Elements
Traffic Urban Systems
Vehicle Elements
Roadrunner - overview
Roadrunner 0 Basics
Roadrunner 1 Modes
Roadrunner 2 Elements
Roadrunner 3 Urban Systems…
g "ipy" from the CMD line.
Now what is Numpy good for, I wonder? I hear it does big arrays quickly. And do I also have Scipy? Is that a bona fide package? Alas, "import scipy" gives an error:
C:\Users\Nik>ipy
IronPython 2.7 (2.7.0.40) on .NET 4.0.30319.34209
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\scipy\__init__.p
y", line 124, in <module>
File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\numpy\_import_to
ols.py", line 15, in __init__
AttributeError: 'module' object has no attribute '_getframe'
>>>
...and again the rabbit hole goes even deeper now. The directory and its contents check out fine, except there is no "_import_to" file like there is an "__init__.py" (the traceback has simply wrapped the name "_import_tools.py" across two lines).
So where is Scipy? How do I activate/install it for real?
The above-mentioned test, however, gives no error:
ipy -X:Frames -c "import scipy"
...yet the same import from within IronPython does give an error.
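A likely explanation for the difference (an assumption based on the traceback, not confirmed from the scipy sources): scipy's `__init__` reaches for `sys._getframe`, which IronPython only exposes when frame support is enabled via `-X:Frames` or `-X:FullFrames`, exactly the flag the working command line passes. A quick way to check from any interpreter:

```python
import sys

# CPython always defines sys._getframe; IronPython only defines it when
# launched with -X:Frames or -X:FullFrames, which would explain why
# "ipy -X:Frames -c ..." succeeds while a plain "ipy" session fails.
has_frames = hasattr(sys, "_getframe")
print(has_frames)
```

If this prints False in a plain `ipy` session and True with `-X:Frames`, frame support is the missing piece rather than a broken scipy install.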
…
Added by Nik Willmore at 3:06pm on October 11, 2015
ed to? --> probably the globalHorizontalRadiation but how?
You are right. Kglob is global horizontal radiation from the .epw file, in W/m2. hSl is the sun elevation angle in degrees.
Do you agree that in this case the MRT does not depend on these inputs: location, meanRadiantTemperature, dewPointTemperature and wind speed?
It depends on location, as location determines the sun vector's elevation, zenith and azimuth angles. It does not depend on meanRadiantTemperature, dewPointTemperature and windSpeed.
Does it also not depend on the other bodyCharacteristics, like bodyPosture, age, sex, met, activityDuration...?
I think you are correct. It is based on the following assumptions:
sex_: male
bodyPosture_: standing
metabolicRate_: 2.32 mets (walking 4 km/h)
MRT calculated by the TCI-11 method is the mean radiant temperature of a vector pointing vertically with a sky view factor of 100%?
The sun vector depends on the sun elevation angle. If the "Sunpath shading" component is not used to account for shading, then yes, a sky view factor of 1 (100%) is used.
It can also be applied to a mannequin thanks to the CumSkyMatrix, and thus evaluate the non-uniformity of radiation exposure.
Yes, the MRT in the Thermal Comfort Indices component treats the subject as a single uniform body. The Outdoor Solar Adjusted Temperature Calculator component's ability to evaluate different parts of the body is indeed impressive.
In contrast to the TCI-11, this component distinguishes diffuse and direct radiation and contextualizes the calculation thanks to _ContextShading input, right?
The MRT in the Thermal Comfort Indices component is based on the Man-Environment Heat Exchange model 2005 (MENEX 2005) by Krzysztof Blazejczyk, one of the authors of the UTCI index. The MENEX 2005 MRT model has four versions of solar radiation accounting. I am using the one with global solar radiation, because it cooperates well with the other indices in the component that require solar radiation as an input. I implemented the MENEX 2005 MRT version which separately takes into account the direct, diffuse and reflected radiation in the attached file below. You can read more about the MRT, and the whole MENEX 2005 model in general, in the .pdf paper attached below.
The default groundReflectivity is set to 0.25 --> is GroundReflectivity taken into account in the Tground or MRT calculation in the TCI component? If yes, what is the hypothesised groundReflectivity?
The MENEX 2005 model does not provide this information, but I assume that it's possibly 0.2 - 0.25 as you said, which is widely used as urban/suburban default annual average albedo value.
The default clothing albedo of 37% (TCI-11 bodyCharacteristics) corresponds to Clothing Absorptivity of 63%?
You are correct again.
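The relation behind this check is simply that clothing absorptivity is the complement of clothing albedo (reflectivity):

```python
# Worked check of the albedo/absorptivity relation discussed above.
# (Variable names here are illustrative, not Ladybug input names.)
albedo = 0.37                  # default clothing albedo, 37%
absorptivity = 1.0 - albedo    # clothing absorptivity, 63%
```

So a 37% clothing albedo does indeed correspond to a 63% clothing absorptivity.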
Why such a big difference, and which of the results should be plugged into the UTCI calculation component?
This is a complex question. One of the reasons might be that the MENEX MRT version uses a simplified formula to calculate the ground temperature, which may significantly affect the final results. It could be beneficial if groundTemperature_ were available as an input.
Another possible reason is that the study which corrects the MRT for the influence of local heating of the sunlit parts of the floor is meant for indoor analysis. It should be said that a number of outdoor indices have been derived from indoor indices and studies, but with their parameters adapted. In this case we are simply applying the same indoor formulas to outdoor analysis; the study itself does not suggest that this kind of approach is valid.…
lly it should not make much of a difference - random number generation is not affected, and neither is mutation. Crossover is a bit more tricky; I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb K., Agrawal R. B.: Simulated Binary Crossover for Continuous Search Space, IITK/ME/SMD-94027, Convenor, Technical Reports, Indian Institute of Technology, Kanpur, India, November 1994
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs have been developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs.
Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that real-coded GAs with the SBX operator are able to perform as good as or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of the real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions
In this paper, a real-coded crossover operator has been developed based on the search characteristics of the single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover.
On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent with binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with an SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions, and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better on difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.
Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of the search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions, the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions.
These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, real-coded GAs with SBX are one step ahead of binary-coded GAs in terms of achieving a convergence proof for GAs. With the direct probabilistic relationship between children and parent points used in this paper, cues from classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between classical optimization methods and GAs is on the horizon.
In short, according to the authors, the SBX operator I use with real gene values is as good as older operators specially designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general crossover operator.
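For reference, the SBX operator described in the paper above can be sketched for a single real-valued gene as follows (a minimal sketch following Deb & Agrawal's formulation, without the variable-bound handling a full implementation would add; this is not Octopus's actual code):

```python
import random

def sbx(x1, x2, eta=20.0, rng=random):
    """Simulated Binary Crossover (Deb & Agrawal, 1994) for one real gene.

    eta is the distribution index; eta=20 gives the SBX-20 variant
    mentioned above, where children tend to stay close to their parents.
    """
    u = rng.random()
    # Spread factor beta, drawn from the SBX probability distribution.
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    # Two children, symmetric about the parents' mean.
    c1 = 0.5 * ((1.0 + beta) * x1 + (1.0 - beta) * x2)
    c2 = 0.5 * ((1.0 - beta) * x1 + (1.0 + beta) * x2)
    return c1, c2
```

A useful property to verify: the mean of the two children always equals the mean of the two parents, e.g. `sbx(0.2, 0.8)` returns a pair averaging 0.5.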
But:
- there might be better ones out there I just haven't seen yet. Please tell me if you know of any.
- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for MOEAs, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment - maybe there's a way to provide an interface so you can code those things yourself..!?
2) elite size = SPEA-2 archive size, yes. The rate depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size (as it is hard-coded in the new version, I just realize) is big enough.
4) the non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (per the truncation strategy in SPEA-2) are removed one by one until the size is reached. If it is smaller, the fittest dominated individuals are put into the elite; the latter happens at the beginning of the run, when the front hasn't been discovered well yet.
3) yes it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively supports articulating preference relations on sets of solutions.
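The archive truncation described in 4) can be sketched as follows (a simplified sketch of SPEA-2-style truncation using only the nearest-neighbour distance in objective space; SPEA-2 proper breaks ties with further neighbours, and this is not Octopus's actual code):

```python
import math

def truncate_archive(archive, size):
    """Repeatedly drop the individual crowded closest to its nearest
    neighbour until the archive fits. Individuals are tuples of
    objective values."""
    archive = list(archive)
    while len(archive) > size:
        def nearest(i):
            # Distance from individual i to its closest neighbour.
            return min(math.dist(archive[i], archive[j])
                       for j in range(len(archive)) if j != i)
        # Remove the most crowded individual.
        archive.pop(min(range(len(archive)), key=nearest))
    return archive
```

For example, truncating `[(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]` to two members drops one of the two crowded points near the origin and keeps the isolated solution, preserving the spread of the front.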
…
shopper later uses, but for the life of me I cannot see the problem.
Component
using System;
using System.Collections.Generic;
using Grasshopper.Kernel;
using Rhino.Geometry;

namespace Load_Take_Down_Tool.Components
{
    // This component takes care of creating ordered lists to pass to the
    // column component. Searches for points within tributary areas.
    public class Column_Organiser : GH_Component
    {
        /// <summary>
        /// Initializes a new instance of the Column_Organiser class.
        /// </summary>
        public Column_Organiser()
            : base("Column Organiser", "CO",
                   "Orders lists of points and areas",
                   "Load Take Down Tool", "Pre-Processing")
        {
        }

        /// <summary>
        /// Registers all the input parameters for this component.
        /// </summary>
        protected override void RegisterInputParams(GH_Component.GH_InputParamManager pManager)
        {
            pManager.AddPointParameter("Column", "C", "Unsorted points at location of column (required)", GH_ParamAccess.list);
            pManager.AddBrepParameter("Tributary Area", "T", "Unsorted tributary areas for column (required)", GH_ParamAccess.list);
        }

        /// <summary>
        /// Registers all the output parameters for this component.
        /// </summary>
        protected override void RegisterOutputParams(GH_Component.GH_OutputParamManager pManager)
        {
            pManager.AddPointParameter("Column", "C", "Sorted points at location of column", GH_ParamAccess.list);
            pManager.AddBrepParameter("Tributary Area", "T", "Sorted tributary areas for column", GH_ParamAccess.list);
            pManager.AddPointParameter("Failing Points", "FP", "Points that are not within any tributary area", GH_ParamAccess.list);
        }

        /// <summary>
        /// This is the method that actually does the work.
        /// </summary>
        /// <param name="DA">The DA object is used to retrieve from inputs and store in outputs.</param>
        protected override void SolveInstance(IGH_DataAccess DA)
        {
            // Declare lists to hold data (uo = un-ordered, o = ordered).
            List<Point3d> uocolumnpoints = new List<Point3d>();
            List<Brep> uotribareas = new List<Brep>();
            List<Point3d> failpoints = new List<Point3d>();

            // Get data from inputs.
            if (!DA.GetDataList(0, uocolumnpoints)) return;
            if (!DA.GetDataList(1, uotribareas)) return;

            // Error if the numbers of points and areas are not equal.
            if (uocolumnpoints.Count != uotribareas.Count)
            {
                AddRuntimeMessage(GH_RuntimeMessageLevel.Warning, "Unequal number of columns and tributary areas");
                return;
            }

            List<Point3d> ocolumnpoints = new List<Point3d>();
            List<Curve> ocurves = new List<Curve>();
            List<Brep> otribareas = new List<Brep>();

            double m_tol = 0.001;
            string unitSystem = Rhino.RhinoDoc.ActiveDoc.GetUnitSystemName(true, true, true, true);
            if (unitSystem == "m") m_tol = 0.000001;
            if (unitSystem == "mm") m_tol = 0.011;

            PointsInsideCurves pic = new PointsInsideCurves();
            try
            {
                pic = new PointsInsideCurves(uocolumnpoints, uotribareas, m_tol);
            }
            catch (Exception ex)
            {
                AddRuntimeMessage(GH_RuntimeMessageLevel.Remark, ex.Message);
            }

            failpoints = pic.FailedPoints;
            ocurves = pic.OrganisedCurves;

            if (failpoints.Count > 0)
                AddRuntimeMessage(GH_RuntimeMessageLevel.Remark, "Some point(s) did not lie within a tributary area, were on the boundary/tolerance limit of a tributary area, or multiple points were within the same tributary area. See output FP for the points that have failed");

            // Fix: only convert the curves that were matched to a point. The
            // original loop passed the null placeholders for failed points to
            // Brep.TryConvertBrep, and ocolumnpoints was never populated, so
            // the sorted "Column" output was always empty.
            for (int i = 0; i < ocurves.Count; i++)
            {
                if (ocurves[i] == null) continue;
                ocolumnpoints.Add(uocolumnpoints[i]);
                otribareas.Add(Brep.TryConvertBrep(ocurves[i]));
            }

            // Pass data to outputs.
            DA.SetDataList(0, ocolumnpoints);
            DA.SetDataList(1, otribareas);
            DA.SetDataList(2, failpoints);
        }

        // Grasshopper icon for the component.
        protected override System.Drawing.Bitmap Icon
        {
            get { return Properties.Resources.ColumnOrganiser; }
        }

        /// <summary>
        /// Gets the unique ID for this component. Do not change this ID after release.
        /// </summary>
        public override Guid ComponentGuid
        {
            get { return new Guid("{3259e465-5400-48e3-907b-fcb7455b12aa}"); }
        }
    }
}
Helper class
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Rhino.Geometry;

namespace Load_Take_Down_Tool
{
    class PointsInsideCurves
    {
        private List<Point3d> Points { get; set; }
        private List<Curve> ClosedCurves { get; set; }
        private double Tolerance { get; set; }
        public List<Point3d> FailedPoints { get; set; }
        public List<Curve> OrganisedCurves { get; set; }

        public PointsInsideCurves()
        {
        }

        public PointsInsideCurves(List<Point3d> Points, List<Curve> ClosedCurves, double Tolerance)
        {
            this.Points = Points;
            this.ClosedCurves = ClosedCurves;
            this.Tolerance = Tolerance;
            Evaluate();
        }

        public PointsInsideCurves(List<Point3d> Points, List<Brep> ClosedBreps, double Tolerance)
        {
            this.Points = Points;
            this.Tolerance = Tolerance;
            List<Curve> Curves = new List<Curve>();
            foreach (Brep b in ClosedBreps)
            {
                Curve[] c = b.DuplicateEdgeCurves();
                c = Curve.JoinCurves(c, Tolerance);
                Curves.Add(c[0]);
            }
            this.ClosedCurves = Curves;
            Evaluate();
        }

        private void Evaluate()
        {
            // Sorts the curves into the order of the points.
            // Naive implementation; can be updated to a KD-tree if necessary.
            List<Curve> OrderedCurves = new List<Curve>();
            List<Point3d> fPoints = new List<Point3d>();

            // Fix: actually duplicate the input list of curves so it can be
            // modified as we go. The original assignment only copied the list
            // reference, so the caller's list was emptied as a side effect.
            List<Curve> CCDup = new List<Curve>(this.ClosedCurves);

            foreach (Point3d Point in this.Points)
            {
                // Backwards loop to allow removal.
                bool assigned = false;
                for (int i = CCDup.Count - 1; i >= 0; i--)
                {
                    if (CCDup[i].IsClosed == false)
                        throw new Exception("Curve is not closed");

                    if (CCDup[i].Contains(Point) == PointContainment.Inside)
                    {
                        OrderedCurves.Add(CCDup[i]);
                        CCDup.RemoveAt(i);
                        assigned = true;
                        break;
                    }
                }

                // If the point is not within any of the given breps, put null in the list.
                if (assigned == false)
                {
                    OrderedCurves.Add(null);
                    fPoints.Add(Point);
                }
            }

            this.FailedPoints = fPoints;
            this.OrganisedCurves = OrderedCurves;
        }
    }
}
…
Added by Hugh Groves at 3:43am on September 30, 2014
Visiting School Rio de Janeiro will collaborate with the Centro Carioca de Design with the support of Columbia University Studio X to investigate new possibilities for the urban infrastructure surrounding World Cup Stadiums. Nation-wide, there has been significant investment to build and renovate stadiums for the 2014 World Cup in order to meet the required standard FIFA regulations (‘Padrão FIFA’). At the same time, there has been a large public demand for equal investment into transport systems, public space, and public programs such as hospitals and schools. The Visiting School will tap into the momentum of this movement, and promote a series of interventions within and around the World Cup structures, proposing new public programs and standards for their legacy. Students can choose to focus directly on the Maracanã stadium in Rio de Janeiro, the venue for the Final match of the World Cup. The intense ten-day workshop will employ computational design and digital fabrication to introduce a design methodology that creatively automates and promotes transformation, mutation and complexity for these infrastructure interventions.
Prominent Features of the workshop
Teaching team
The teaching team will include a mix of tutors from the Architectural Association, including Theodore Sarantoglou Lalis and Dora Sweijd (lassa-architects.com) of Diploma 17, and locally-based architects, urban designers and experts, coordinated by the locally-based Visiting School directors, to promote cutting-edge innovative strategies informed by local political, economic and construction issues.
Computational skills
The workshop will teach advanced digital modeling and parametric design skills; no previous experience is needed. A group of specialist computation tutors will conduct an initial skills workshop and continue to assist throughout the workshop to develop the individual projects of the participants.
Digital Fabrication
A series of physical models will be built using digital fabrication techniques that will be taught during the workshop; no previous experience is needed.
Applications
1) You can make an application by completing the online application found under ‘Links and Downloads’ on the AA Visiting School page. If you are not able to make an online application, email visitingschool@aaschool.ac.uk for instructions to pay by bank transfer.
2) Once you complete the online application and make a full payment, you are registered to the programme. A CV or a portfolio is not required.
The deadline for applications is 11th April 2014.
All participants travelling from abroad are responsible for securing any visa required, and are advised to contact their home embassy early. After payment of fees, the AA School can provide a letter confirming participation in the workshop.
Fees
The AA Visiting School requires a fee of £695 per participant, which includes a £60 Visiting membership fee.
Fees do not include flights or accommodation, but accommodation options can be advised. Students need to bring their own laptops, digital equipment and model making tools. Please ensure this equipment is covered by your own insurance as the AA takes no responsibility for items lost or stolen at the workshop.
Eligibility
The workshop is open to current architecture and design students, PhD candidates and young professionals.
…
surfaces resulting from 'SrfSplit' so they could be sorted to discard the largest one (in cases where a Voronoi curve crossed a surface seam, there are three resulting surfaces, not just two) and
The centroid for each Voronoi curve/surface required for scaling the holes.
So I created a cluster called 'pLen', which returns the perimeter length of a surface (the sum of the lengths of its edges), to be used instead of surface area for sorting (#1), and used another cluster I have called 'PtCloudCntr', which returns the mathematical center of a point cloud (#2). Both of these clusters are extremely fast and together worked fine as a replacement for 'Area'.
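The 'PtCloudCntr' idea can be sketched in a few lines (a stand-in sketch of what such a cluster computes; the actual cluster is not shown here, and the function name is illustrative):

```python
def point_cloud_center(points):
    """Return the mathematical center (coordinate-wise average) of a
    point cloud given as (x, y, z) tuples. Unlike an area centroid,
    this needs no surface computation, which is why it is fast."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))
```

For example, the center of the two points (0, 0, 0) and (2, 0, 0) is (1.0, 0.0, 0.0). This is cheap because it only averages vertex coordinates, whereas the 'Area' component must integrate over the surface.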
But in the process, I discovered and fixed a couple things:
In those cases where Voronoi intersection curves cross a seam in the "Primary Surface", I joined the two pieces together before creating the hole - eliminating the seam from the results!
Somehow the optional 'Bounding Box' used by the 'Voronoi³' component got lost (oops), so it was clipping the results to the bounds of the random points instead of the "Primary Surface". Fixed that, and later found that in some cases (the 'Revolve Srf' in this code) it works much better to double the size of the 'Bounding Box'. Since there is no penalty for this, speed or otherwise, the larger box is used for all surfaces.
The results are MUCH BETTER than before. Surfaces don't lose their edges, holes aren't corrupted by seams, and the code runs in roughly two-thirds of the previous time (30 seconds instead of 45 seconds for 438 holes, as in the prior code). For 100 holes, it takes only six to eight seconds to change the "Primary Surface" and get a new holed surface.
This 'Revolve Srf' is substantially smaller in scale than the other surfaces and, due to its apparent complexity(?), it starts to degrade in quality with "large holes", i.e., fewer than 100 holes:
…
Added by Joseph Oster at 2:14pm on December 22, 2015
bias towards higher-latitude climates where high humidity is less of an issue. The defaults do not include any humidity control, use a differential dry bulb air side economizer, and use the ASHRAE 62.2 ventilation specification, which uses a sum of ventilation/square meter + ventilation/person.
The unfortunate side of these default specs is that there is always some ventilation coming in (because of the ventilationPerArea), which often means that you're bringing in outdoor air in unoccupied hours that have a thermostat setback, minimal heat gains, and no need for cooling. Running the ventilation system without activating the cooling coil can mean that you are bringing in very humid outdoor air sometimes, particularly in evenings. As such, you may want to use only a ventilationPerPerson specification or use a ventilationSchedule to shut off the ventilation during these unoccupied hours (using the "Set EnergyPlus Loads" component or the "Set EnergyPlus Schedules" component respectively). This might mitigate your peak cooling at 9PM as well as your higher humidity in evenings, particularly if your space is not occupied then.
The differential dry bulb economizer might also introduce more outdoor air when it is humid outside, resulting in more "unrealistic" humidity values. As such, switching to a differential enthalpy economizer or removing the economizer altogether can avoid these cases of bringing in more humid outdoor air to cool the zone. You can do this with the "Set Ideal Air Loads Parameters" component.
If both these methods don't give you humidity values that you are happy with, you can always put in humidity control by setting a maxHumidity on the "Set EnergyPlus Zone Thresholds" component.
To be fair to the ideal air system, you would have to consider these ventilation/economizer/humidity control specifications for almost any air-based system that you are designing and I do not see these initially "unrealistic" humidity levels as a limitation of the ideal air system as much as a limitation of typical high-latitude HVAC controls. This said, I will fully admit the limitations of the ideal air system in terms of not giving electricity/fuel values (just loads) and the fact that you don't have a single multi-zone boiler/chiller supply air temperature as you would for a centralized HVAC system.
To get to your questions:
1) The danger of looking at energy balance variables for only a single hour is that you might not get them summing to something close to 0, since you are running a transient simulation. Over a day, you will be more likely to get values summing to 0 and (because of your building's thermal lag) you will also probably get a better representation of the cause of the peak cooling.
2) There was a bug in the code, and you are not supposed to get the HVAC outputs with the "Read EP Result" component; you are supposed to use the "Read EP HVAC Result" component, like so:
I have fixed this in the attached GH file. In case you were wondering, the units are joules. All energy results output from EnergyPlus are in joules, and I convert them to kWh inside the HB components, since this is what we typically use in the building industry.
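The conversion the HB components perform is a fixed factor (shown here as an illustrative helper, not Honeybee's actual code):

```python
def joules_to_kwh(joules):
    """Convert EnergyPlus output from joules to kilowatt-hours.
    1 kWh = 1000 W * 3600 s = 3.6e6 J."""
    return joules / 3.6e6
```

So, for example, an EnergyPlus result of 3.6e6 J corresponds to 1.0 kWh.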
-Chris…