ences, so not terribly important in the end. After all, it's not really worth going through a lot of trouble to get a 15% speed increase; 15% faster than slow is still pretty slow.
Also, processor speed has pretty much peaked these past few years; there have been no significant increases lately. Instead, manufacturers have started putting more cores into their processors, which is something GH unfortunately cannot take advantage of.
Multi-threading (very high on the list for GH2) brings with it a promise of full core utilisation (minus the inevitable overhead for aggregating computed results), but there are some problems that may end up being significant. Here's a non-exhaustive list:
It's not possible to modify the UI from a non-UI thread. This is probably not that big a deal for Grasshopper components, especially since we can make methods such as Rhino.RhinoApp.WriteLine() thread-safe.
Not all methods used by component code are necessarily thread safe. There used to be a lot of stuff in the Rhino SDK that simply wouldn't work correctly, or would crash, if the same method was run more than once simultaneously. The Rhino core team has been working hard to remedy this problem, and I'm confident we can fix any problems that still come up, though it may take some time. If components rely on other code libraries, then the problem may not be solvable at all. So we need to make sure multi-threading is an optional property of components.
There's overhead involved in multi-threading; it's especially difficult to get a good performance gain when dealing with lots of very fast operations. In these cases the overhead can actually make things slower (see the small sketch after this list).
There's the question of at what level multi-threading should be implemented. Obviously the lower the better, but that means a lot of extra work, complicated patterns of responsibility and a lot of communication between different developers.
There's the question of how the interface should behave during solutions. If all the computation is happening in a thread, the interface can stay 'live'. So what should it look like if a solution takes, say, 5 seconds to complete? Should you be able to see the waves of data streaming through the network, turning components and wires grey and orange like strobe lights? What happens if you modify a slider during a solution? The simple answer is to abort the current solution and start a new one with the new slider value. But as you slowly drag the slider from left to right, you end up computing 400 partial solutions and never getting a final answer, even though you could have computed 2 full solutions in the same time and given better feedback. Does the preview geometry in the Rhino viewports flicker in and out of existence as solutions cascade through the network?
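To make the overhead point in the list above concrete, here is a minimal Python sketch (plain CPython, nothing Grasshopper-specific; the exact numbers will vary per machine). Timing a large batch of very cheap operations done serially versus through a thread pool will typically show the threaded version losing, because the per-task scheduling overhead (plus, in CPython, the interpreter lock) dwarfs the work itself.
----------------------------------------------------------------------
# Minimal sketch: per-task overhead can outweigh the work itself
# when the individual operations are very cheap.
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_op(x):
    return x * x + 1  # a deliberately trivial computation

values = list(range(50000))

t0 = time.time()
serial = [tiny_op(v) for v in values]
t1 = time.time()

with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(tiny_op, values))
t2 = time.time()

print("serial:   %.3f s" % (t1 - t0))
print("threaded: %.3f s" % (t2 - t1))
----------------------------------------------------------------------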
…
y stages of design, mainly due to the large uncertainties that exist in these phases. Optimisation in the early phases may be helpful, but it does not provide designers with more information on "where to go from here". Once the designer changes a parameter to suit a client requirement, legal requirement or anything else, the optimised result may very well be thrown out, because the changed parameter can have such a large effect.
I am hosting several workshops and focus groups in the next month (one for students at Victoria University of Wellington, one for architecture practitioners and one for engineering practitioners) to teach the basics of Honeybee and Ladybug within Rhino, as NZ is very new to any form of distributed modelling (using visual programming languages such as Grasshopper and Dynamo to communicate between design tools and building simulation tools). In the focus groups I am not focusing on the Honeybee tool itself so much as asking the industry for its opinions on the feasibility of, and wishes for, developments such as Honeybee.
Many of the informal interviews I have been conducting point to one question: would you rather know the optimised concept, or the most significant design parameters to be wary of at the early stages of design?
I am amazed at the capabilities of Honeybee, because it has been such a pain to remodel anything for E+ and Radiance in the past. I particularly love the ability to generate hundreds of idfs with varying parameters within 10 minutes, without having to set up some form of macro to do it. The visualisations in Honeybee are awesome, to say the very least. But as someone who is interested in doing a sensitivity analysis, say with Thermal Autonomy, I feel there is an element lacking from an engineering and research/academic standpoint.
The way I have set up my files actually creates 300+ idfs with all the various parameter combinations. The parameter ranges only vary across a low, typical and high setting for power densities, WWR, schedules and insulation. These have all been drawn from a large five-year project in which we monitored commercial buildings here in NZ to gain a better understanding of data for purposes like this. I then run them in parallel as batch files and re-insert the data back into Honeybee.
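For anyone wanting to reproduce a similar setup, here is a rough sketch of the general pattern; the parameter names and batch-file paths are placeholders rather than the actual files, and the real definition maps each combination to an idf upstream. It simply enumerates the low/typical/high combinations and runs the pre-generated batch files a few at a time.
----------------------------------------------------------------------
# Sketch only: enumerate parameter combinations and run pre-generated
# batch files in parallel, a few at a time.
import itertools
import subprocess
from concurrent.futures import ThreadPoolExecutor

levels = ["low", "typical", "high"]
parameters = ["lpd", "epd", "wwr", "schedule", "insulation"]   # hypothetical names

# 3 levels for each of 5 placeholder parameters -> 243 combinations;
# each combination is assumed to map to one idf / batch file.
combinations = list(itertools.product(levels, repeat=len(parameters)))
batch_files = ["run_%03d.bat" % i for i in range(len(combinations))]  # placeholder paths

def run_case(bat):
    # blocking call; the pool limits how many EnergyPlus runs go in parallel
    return subprocess.call(bat, shell=True)

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_case, batch_files))
----------------------------------------------------------------------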
What I am playing around with at the moment, though, is the csv that the TA component produces: because the component requires so many additional components to analyse the data in that form, and because it does not simply give a numerical percentage for the space's performance, I need to re-evaluate that csv myself for further analysis.
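As a stopgap, here is a rough sketch of the kind of post-processing I mean. The column names are placeholders, not the actual csv schema the TA component writes, so they would need to be adapted to the real file.
----------------------------------------------------------------------
# Sketch: collapse hourly results into a single Thermal Autonomy percentage
# (fraction of occupied hours that stay inside the comfort band).
import csv

def thermal_autonomy(csv_path, low=20.0, high=26.0):
    occupied = 0
    comfortable = 0
    with open(csv_path) as f:
        for row in csv.DictReader(f):
            # column names are placeholders; adapt to the file the component writes
            if float(row["occupancy"]) > 0:
                occupied += 1
                if low <= float(row["operative_temp"]) <= high:
                    comfortable += 1
    return 100.0 * comfortable / occupied if occupied else 0.0

print(thermal_autonomy("zone_results.csv"))
----------------------------------------------------------------------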
I have only just begun trying some form of sensitivity analysis within Honeybee itself, but I was curious whether there are already plugins within Grasshopper that allow some form of sensitivity analysis.…
of them. If they were already suggested and deemed impossible, I apologize.
First, it would be really cool to have a right-click menu item for any geometry-retaining component that does the following: bakes the geometry, then disconnects all inherited data from the component, and assigns the baked version as locally defined. This is a one-time-only thing, of course. It would be useful when you have a "step definition", that is, clear bottlenecks in your dataflow: at some point you become satisfied with what you have so far and only need to tweak some things manually to move on, so you can discard the already-solved part of your definition. It's a sort of "casting in stone" of partial results, which helps especially with simple work-defs or helper-defs. You could also call it something kickass like "manual override" or "emergency/hand b(r)ake".
Second, if you have a component that outputs to a lot of others and you want to replace it with something else, you usually have to painstakingly reconnect all those wires, and if it doesn't work out, you wire it right back or undo until your fingers bleed. Just as there is an upstream "extract parameter" for locally defined values, a downstream equivalent ("extend parameter") with one right-click menu item would make switching between various components easier.
Third, maybe a hot-key that you press and then click on a wire, which creates a "data" component at that point, splitting the wire and effectively allowing you to hijack it.
Lastly, maybe this is a stupid question, but what happened to the "clusters"? I know they ended abruptly because of technical difficulties, but collapsing a group to a single component like that was totally awesome.
Oh, and a minor bug report from v7.053 - it's not important, but mildly annoying: when you have an embedded graft, flatten, reparam or expression on a plug, the component extends to the left with the nifty little icons, and that looks very nice, but the wires still connect in the old place, so at first glance I always think the wires are plugged in wrong. Is it possible to move the plug along with the component icon edge, or at least make those little indicators smaller, so the confusion is minimized?
Thanks for your time. Hope I was pertinent.
Andrei I.
PS: the lolcat component is adorable, but I do believe that overall worldwide Grasshopper productivity has dropped by various increments of the 20 seconds it takes to refresh. Sadly accustomed to the feeling of guilt that comes with watching around 50 lolpics refresh, I suggest that every 5 refreshes or so you get a "stop looking at this and get back to work" message. It is at least a good way to deflect responsibility for tempting people to watch kitty-pics all day. :D…
stand completely (I just don't get the math part...). The code can be found here: http://digitalsubstance.wordpress.com/subcode/
So I decided to make my own definition: a cube deformed by 5 attractors. I was wondering if someone can help me fix the meshing at the end of the definition, because when I bake it, it gives me an open mesh and I don't understand why. Open (non-watertight) meshes are not suitable for 3D printing... I don't think I've used Clean, Weld and Unify Faces in the right order? Maybe there is a problem with the surfaces?
Secondly, I'm not very proud of the result, because the cube is so deformed that it isn't a cube anymore... so I was wondering whether a square grid of points can be deformed by an attractor while still keeping the straight boundary of the grid?
I had an idea for that: I make my points, create the vectors between the grid points and the attractor points, and calculate the distance between the grid points and the attractors; that gives me a list of distances that I remap to control the strength of my attractors. On the other side I calculate the distance between the boundary of the grid and the grid points, which gives me a second list of numbers. I then want to average the two lists in such a way that the closer a point is to the attractor, the more it takes the value from the first list, and the further it is from the attractor (so the closer it is to the boundary), the more it takes the value from the second list. I'm sorry for my bad English, but even in French it's a little hard for me to explain ;). So what can I do to have a grid attracted by a point without moving the boundary points?
And please don't tell me to cull the boundary points first, deform the grid, and rebuild it afterwards... that gives an ugly cube face at the end, even with a lot of polishing in Weaverbird...
If someone has another idea to achieve that please tell me ;)
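To make the weighting idea above a bit more concrete, here is a minimal sketch in plain Python (the boundary distances are assumed to be computed upstream, e.g. as the distance from each grid point to the boundary rectangle). The attractor displacement is simply scaled by a factor that is 0 on the boundary and 1 in the middle, so the boundary points stay put.
----------------------------------------------------------------------
# Minimal sketch of the blending idea: each grid point is pulled toward the
# attractor, but the pull is scaled by how far the point is from the boundary,
# so boundary points do not move at all.
def deform_grid(points, attractor, boundary_dist, strength=1.0, falloff=2.0):
    """points: list of (x, y); boundary_dist: distance of each point to the
    grid boundary; returns displaced points with a fixed boundary."""
    max_b = max(boundary_dist) or 1.0
    out = []
    for (x, y), b in zip(points, boundary_dist):
        dx, dy = attractor[0] - x, attractor[1] - y
        d = (dx * dx + dy * dy) ** 0.5 or 1.0
        pull = strength / (1.0 + d) ** falloff   # attractor influence decays with distance
        w = b / max_b                            # 0 on the boundary, 1 at the centre
        out.append((x + dx / d * pull * w, y + dy / d * pull * w))
    return out
----------------------------------------------------------------------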
The first definition, "CleanCubeMeshingHelp", is a little heavy, so watch out if you have a small laptop (any ideas to make it run faster are welcome!).
The second one is the one with the two lists of numbers.
Also one last question: what are "Blur Number", "Interpolate Data" and "Weighted Average" under the Math utilities, and when should they be used?
Thank you in advance for your answers, and I apologize for my lack of vocabulary.…
lly it should not make much of a difference - random number generation is not affected, and neither is mutation. Crossover is a bit trickier; I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb K., Agrawal R. B.: Simulated Binary Crossover for Continuous Search Space, IITK/ME/SMD-94027, Convenor, Technical Reports, Indian Institute of Technology, Kanpur, India, November 1994
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs have been developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that the real-coded GAs with the SBX operator are able to perform as well as or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions. In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover. On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent to binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with an SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions, and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better on difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global. Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of the search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions. These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, real-coded GAs with SBX are one step ahead of binary-coded GAs in terms of achieving a convergence proof for GAs. With the direct probabilistic relationship between children and parent points used in this paper, cues from classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between classical optimization methods and GAs is on the horizon.
In short, according to the authors, the SBX operator on real gene values is as good as older operators specifically designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general-purpose crossover operator.
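For reference, here is a minimal sketch of the textbook SBX operator for a single real-valued gene. This is the published formulation, not necessarily Octopus's exact code (which also has to handle variable bounds and per-gene crossover probability); eta = 20 gives the SBX-20 behaviour mentioned above.
----------------------------------------------------------------------
# Minimal sketch of Simulated Binary Crossover (SBX) for one real-valued gene,
# following Deb & Agrawal (1994).
import random

def sbx_pair(p1, p2, eta=20.0):
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

# Children stay close to their parents for large eta and spread out for small eta.
print(sbx_pair(0.2, 0.8))
----------------------------------------------------------------------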
But:
- there might be better ones out there that I just haven't seen yet. Please tell me.
- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for the MOEA, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment - maybe there's a way to provide an interface so you can code those things yourself..!?
2) Elite size = SPEA-2 archive size, yes. The rate depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size (which, I just realized, is hard-coded in the new version) is big enough.
4) The non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (per SPEA-2's truncation strategy) are removed one by one until the target size is reached. If the front is smaller than the archive, the fittest dominated individuals are put into the elite; the latter happens at the beginning of a run, when the front hasn't been discovered well yet. (A small sketch of this archive update follows below.)
3) Yes, it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively supports articulating preference relations on sets of solutions.
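Here is a small sketch of the archive update described in 4), with the SPEA-2 density-based truncation abstracted into an "importance" callback. This is an illustration of the logic only, not Octopus's actual code.
----------------------------------------------------------------------
# Sketch of the elite/archive update: fill with the non-dominated front,
# truncate one by one if over-full, top up with the fittest dominated otherwise.
def update_archive(front, dominated, size, importance):
    """front: current non-dominated solutions; dominated: the rest, sorted best
    first; importance: callable ranking archive members (higher = keep)."""
    archive = list(front)
    while len(archive) > size:
        archive.remove(min(archive, key=importance))     # drop the least important
    if len(archive) < size:
        archive.extend(dominated[:size - len(archive)])  # top up with fittest dominated
    return archive
----------------------------------------------------------------------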
…
as mine but couldn't manage to make it work.
The following script was working in the Python editor in Rhino, but not in GhPython.
Note that I have imported a library, and it seems to import fine in GhPython.
--------------------------------------------------------------------------------------------------
from khepri.rhino import *

# Apply f to every quad of a grid of points (consecutive pairs of rows).
def iterate_quads(f, ptss):
    return [[f(p0, p1, p2, p3)
             for p0, p1, p2, p3
             in zip(pts0, pts1, pts1[1:], pts0[1:])]
            for pts0, pts1
            in zip(ptss, ptss[1:])]

def iterate_hexagono(pts, n, v):
    return iterate_quads(lambda p0, p1, p2, p3: hexagono_quad(p0, p1, p2, p3, n, v), pts)

# Build the hexagonal "plate" pieces for one quad and trim them against the solid v.
def hexagono_quad(p0, p1, p2, p3, n, v):
    def chapa(pts):
        return intersection(extrusion(line(pts), 280), shape_from_ref(v.copy_ref(v.realize()._ref)))
        #return extrusion(line(pts), -40)
    topo = intermediate_loc(p3, p2) + vx(distance(p3, p2)/4 * n), intermediate_loc(p3, p2) - vx(distance(p3, p2)/4 * n)
    base = intermediate_loc(p0, p1) + vx(distance(p0, p1)/4 * n), intermediate_loc(p0, p1) - vx(distance(p0, p1)/4 * n)
    lateral_esq = intermediate_loc(p3, p0), intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n)
    lateral_dir = intermediate_loc(p2, p1), intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n)
    conex_1 = intermediate_loc(p3, p2) - vx(distance(p3, p2)/4 * n), intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n)
    conex_2 = intermediate_loc(p3, p0) + vx(distance(intermediate_loc(p3, p0), intermediate_loc(p2, p1))/4 * n), intermediate_loc(p0, p1) - vx(distance(p0, p1)/4 * n)
    conex_3 = intermediate_loc(p0, p1) + vx(distance(p0, p1)/4 * n), intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n)
    conex_4 = intermediate_loc(p2, p1) - vx(distance(intermediate_loc(p2, p1), intermediate_loc(p3, p0))/4 * n), intermediate_loc(p3, p2) + vx(distance(p3, p2)/4 * n)
    return chapa(topo), chapa(base), chapa(lateral_esq), chapa(lateral_dir), chapa(conex_1), chapa(conex_2), chapa(conex_3), chapa(conex_4)

# Ask for the surface to subdivide and the solid used to trim the pieces.
s = prompt_shape("Escolha superficie")
v = prompt_shape("Escolha solido")

iterate_hexagono(map_surface_division(lambda p: p, s, 5, 15), 0.5, v)
---------------------------------------------------------------------------------------------------
I imported the geometry from another CAD program, and then I would select the surface and the solid to perform a pattern iteration on the surface, constrained inside the solid as an internal structure.
The problem is that the surface comes from the other software with u, v and normals all weird, so I wanted to pass it through Grasshopper to get more control and also to perform other computations in GH on the GhPython output. Sorry, maybe I'm overcomplicating things. All I want is the GH inputs working in GhPython.
I'll attach the GH definition. I need help with the GhPython component; the rest is just me fooling around.
When I try to run the script in GhPython I get:
Runtime error (MissingMemberException): 'NurbsSurface' object has no attribute 'realize'
Traceback:
line 39, in map_surface_division, "<string>"
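The error suggests that map_surface_division received the raw Rhino.Geometry.NurbsSurface that GhPython hands over, whereas khepri seems to expect one of its own shape references (the script above builds them with shape_from_ref). I don't know khepri's internals, so the following is only a sketch of one direction to try: add the Grasshopper geometry to the active Rhino document first, then build the khepri shape from the resulting object.
----------------------------------------------------------------------
# Sketch only: assumes khepri wants document-level objects rather than the
# raw Rhino.Geometry instances that GhPython inputs provide.
import scriptcontext as sc
import Rhino
from khepri.rhino import *   # as in the script above

def gh_surface_to_khepri(srf):
    previous_doc = sc.doc
    sc.doc = Rhino.RhinoDoc.ActiveDoc          # switch from the GH document to the Rhino document
    try:
        guid = sc.doc.Objects.AddSurface(srf)  # bake the surface so it has a document reference
        return shape_from_ref(guid)            # assumption: khepri may expect a different ref type
    finally:
        sc.doc = previous_doc
----------------------------------------------------------------------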
I'm also attaching the module I've imported
Any help will be much appreciated, and sorry about my English.
Thanks!
…
can divide a surface with any imaginable pattern. 3. Here I provide a way to do it via Lunchbox ... it works, but it is fixed, so we need to play with data trees in order to create the appropriate pattern for each case. 4. The other component is a C# one that does many things besides dividing any collection of points with many patterns (see the ANDRE pattern I made for you). 5. You have to decompose a polysurface into pieces in order to work on the subdivisions. 6. I am also providing another definition that could act as a tutorial on how to handle sets of points via standard GH components and classic methods.
Let me know if any of this seems unclear to you: if so, I could write a definition using classic GH components - but you will lose the division-pattern variations.
best, Peter
…
and export the geometry out to VVVV to render it LIVE! RawRRRR. In this case, the digital audio workstation is Ableton Live, a leading industry standard in contemporary music production.
The good news is that VVVV and Ableton Live Lite are both free.
https://www.ableton.com/en/products/live-lite/
I am not trying to use an iPad as a controller for Grasshopper. I want to work with a timeline (similar to Maya, Ableton or any other DAW (digital audio workstation)) inside Grasshopper in an intuitive way. Currently there is no way that I know of to SEQUENCE your definition the way you want it to be seen.
No more cumbersome export/import workflows... I don't need hyperrealistic renderings most of the time. So much time is invested in googling the right way to import, export... mesh settings... this workflow works for some and not for others... that workflow works if... and still you cannot render it live or change the sequence of instructions WHILE THE VIDEO is played. And I think no one wants to present the Rhinoceros viewport. BUT the VVVV viewport is different: it is used for VJing and for many custom audio-visual installations at events, done professionally. You can see an example of how sound and visuals come together in this post, using only VVVV and Ableton: http://vvvv.org/documentation/meso-amstel-pulse
I propose a NEW method: make a definition, wire it to Ableton, draw in some MIDI notes, and watch it through VVVV LIVE while you sequence the animation THE WAY YOU WANT IT TO BE SEEN DURING YOUR PRESENTATION FROM THE BEGINNING. Make a whole set of sequences in Ableton, go back and change some notes, and the whole sequence will change RIGHT IN FRONT of you. Yes, you can just add some sound anywhere in the process, or take the sound waves (square, saw, whatever) or the audio itself and let it influence geometric parameters using custom patches via VVVV. I cannot even begin to tell you how sophisticated digital audio sound-design technology has become over the last ten years.. this is just one example, which isn't even that advanced by today's standards in sound design (and the famous producers would say it's not about the tools at all): http://www.youtube.com/watch?v=Iwz32bEgV8o
I just want to point out that Grasshopper shares the same interface paradigm with VVVV (1998) and Max for Live, a plug-in inside Ableton. AudioMulch is yet another one that shares this interface of plugging components into each other and lets users create their own sound instruments. VVVV is built on VB, I believe.
So the current wish list is...
1) Grasshopper receives a sequence of commands from Ableton - DONE
Thanks to Sebastian's OSCglue VVVV patch and this one: http://vvvv.org/contribution/vvvv-and-grasshopper-demo-with-ghowl-udp
After this is done, it's a matter of trimming and splitting the incoming string (see the sketch after this list).
2) Translate numeric oscillation from Ableton into changing GH values.
The video below shows what the control interface for both values (numbers) and MIDI notes looks like:
https://vimeo.com/19743303
3) MIDI note in = toggle a GH component (this one could be tricky).
For this... I am thinking it would be great if it were possible to make a "MIDI learn" function in Grasshopper, where one can DROP IN A COMPONENT LIKE GALAPAGOS OR TIMER and assign the component to a signal in, in this case a MIDI note. There are 128 MIDI notes in total (http://www.midimountain.com/midi/midi_note_numbers.html), and that is only for one channel. There are virtually unlimited channels in Ableton; I usually use 16.
I have already figured out a way to send a string into Grasshopper from Ableton Live. The problem is how Grasshopper should listen - not just take the string in, but interpret MIDI note and CC value changes (usually in the range 0 to 127) and perform certain actions.
Basically, what I am trying to achieve is this: some time passes, then a parameter is set to change from value 0 to 50, for example. Then some more time passes, another parameter becomes previewed, then baked. I have seen some examples with HoopSnake, but I couldn't tell whether you can really control the values on a clear x-y graph where x is time and y is the value - yet this would be considered a basic feature of modulation and automation in music production. Never mind, it's been DONE by Mr Heumann: https://vimeo.com/39730831
4) Send points, lines, surfaces and meshes back out to VVVV.
5) Render it using VVVV and play with the enormous collection of components in VVVV.. it's been around since 1998, for the sake of awesomeness.
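For items 1) and 2) above, once the message arrives in Grasshopper the remaining work is just string handling. A minimal GhPython-style sketch, assuming the UDP/OSC payload comes in as a plain comma-separated string (the exact format depends on the gHowl/OSCglue patch, so treat the parsing as a placeholder):
----------------------------------------------------------------------
# Sketch: parse an incoming "channel,note,velocity" style string and remap the
# 0-127 MIDI range onto a slider-like domain. The message format is an assumption.
def parse_midi_message(msg, lo=0.0, hi=50.0):
    channel, note, velocity = [int(part) for part in msg.split(",")]
    value = lo + (hi - lo) * velocity / 127.0    # 0-127 mapped onto [lo, hi]
    return channel, note, value

print(parse_midi_message("1,60,64"))   # -> (1, 60, 25.19...)
----------------------------------------------------------------------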
This kind of digital operation-to-hardware connection is usually what's done in digital music production solutions. I did look into MIDI controller - Grasshopper work, and I know it's been done, but that has the obvious limitation of not being precise, and it only takes values from 0 to 127. I am thinking that MIDI can be useful here because I can program very precise and complex sequences with ease from music production software like Ableton Live.
This is ongoing design research for a performative exhibition due in Bochum, Germany, this January. I will post the definition if I get somewhere. A good place to start for me is the nesting sliders by Monique: http://www.grasshopper3d.com/forum/topics/nesting-sliders
…
ing the maps to the broader community.
At the moment, there are just a few known issues left that I have to fix for complex geometric cases, but the maps should run smoothly for most energy models that you generate with Honeybee. Within the next month I will be clearing up these last issues and, by the end of the month, there will be an updated YouTube tutorial playlist on the comfort tools and how to use them.
In the meantime, there's an updated example file (http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Indoor_Microclimate_Map), and I wanted to get you all excited with some images and animations coming out of the design part of my thesis. I also wanted to post some documentation of all of the previous research that has made these climate maps possible and give out some much-deserved thanks. To begin, this image gives you a sense of how the thermal maps are made by integrating several streams of data from EnergyPlus:
(https://drive.google.com/file/d/0Bz2PwDvkjovJaTMtWDRHMExvLUk/view?usp=sharing)
To get you excited, this youtube playlist has a whole bunch of time-lapse thermal animations that a lot of you should enjoy:
https://www.youtube.com/playlist?list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
To give a brief summary of what you are looking at in the playlist, there are two proposed designs for completely passive co-habitation spaces in New York and Los Angeles.
These diagrams explain the Los Angeles design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJM0JkM0tLZ1kxUmc/view?usp=sharing)
And this video gives you an idea of how it performs thermally:
These diagrams explain the New York design:
(https://drive.google.com/file/d/0Bz2PwDvkjovJS1BZVVZiTWF4MXM/view?usp=sharing)
And this video shows you the thermal performance:
Now to credit all of the awesome people that have made the creation of these thermal maps possible:
1) As any HB user knows, the open-source engines and libraries under the hood of HB are EnergyPlus and OpenStudio. The incredible thermal richness of these maps would not have been possible without those DoE teams creating such a robust modeler, so a big credit is definitely due to them.
2) Many of the initial ideas for these thermal maps come from an MIT Master's thesis completed a few years ago by Amanda Webb, called "cMap". Even though the cMaps only took into account surface temperatures from E+, it was seeing her radiant temperature maps that initially touched off the series of events leading to my thesis, so great credit is due to her. You can find her thesis here (http://dspace.mit.edu/handle/1721.1/72870).
3) Since A. Webb's thesis, there have been two key developments that make the high resolution of the current maps believable as a good approximation of the actual thermal environment of a building. The first is a PhD thesis by Alejandra Menchaca (also conducted here at MIT) that developed a computationally fast way of estimating sub-zone air temperature stratification. The method, which works simply by weighing the heat gain in a room against the incoming airflow, was validated against many CFD simulations over the course of Alejandra's thesis. You can find her final thesis document here (http://dspace.mit.edu/handle/1721.1/74907).
4) The other main development since the A. Webb thesis that made the radiant map much more accurate is a fast means of estimating the radiant temperature increase felt by an occupant sitting in the sun. This method was developed by some awesome scientists at the UC Berkeley Center for the Built Environment (CBE), including Tyler Hoyt, who has been particularly helpful to me by supporting the CBE's GitHub page. The original paper on this fast means of estimating the solar temperature delta can be found here (http://escholarship.org/uc/item/89m1h2dg), although they should have an official publication in a journal soon.
5) The ASHRAE comfort models under the hood of LB+HB are all derived from the JavaScript of the CBE comfort tool (http://smap.cbe.berkeley.edu/comforttool). A huge chunk of credit definitely goes to this group, and I encourage any other researchers getting deep into comfort to check out the code resources on their GitHub page (https://github.com/CenterForTheBuiltEnvironment/comfort_tool).
6) And, last but not least, a huge share of credit is due to Mostapha and all members of the LB+HB community. It is because of the resources and help that Mostapha initially gave me that I learned how to code in the first place, and knowing there was a community that would use the things I developed was, by far, the biggest motivation throughout this thesis and all of my LB efforts.
Thank you all and stay awesome,
-Chris…