eather data so it cannot be easily compared to Archsim. My account of the differences between Honeybee and Archsim will be far from complete but here are the key ones that I am aware of:
1) This difference is a bit superficial but points to deeper thinking about how the software should be used. Honeybee has many more components than Archsim, which gives it a steeper learning curve and means it will take longer to master. Along with this, you may encounter a general mentality in the Honeybee community that "you should not be running a certain type of simulation unless you know how it works," whereas Archsim is a bit more amenable to setting things up quickly and easily even when you are not sure what is going on under the hood. However, as a result of its large number of components, Honeybee is more open-ended and customizable, and gives you more freedom than Archsim in the cases you can run and the parameters of the energy simulation you can change. While that ethos of understanding your simulations exists in the Honeybee community, we try to provide you with many resources to educate yourself if you are motivated. For example, we have long component descriptions that we assemble into documentation books like this (https://www.gitbook.com/book/mostapharoudsari/honeybee-primer/details), hours of video tutorial playlists like this one (https://www.youtube.com/playlist?list=PLruLh1AdY-SgW4uDtNSMLeiUmA8YXEHT_), and many GH example files on a github-based file sharing system (https://hydrashare.github.io/hydra/index.html). Not to mention a community of people who will respond to discussions like this one.
2) Archsim as a standalone application will soon be no more and will instead be distributed with the DIVA daylight analysis tool (http://diva4rhino.com/). While I am unclear on the exact trajectory of DIVA, it currently has a price tag attached to it and so I would assume that the future of Archsim will also carry this price tag. On the other hand, Honeybee and any derivative software will forever be free and open source under the GPL license (https://github.com/mostaphaRoudsari/Honeybee/blob/master/License_Honeybee_GPL.txt).
3) This third point is a bit of a reiteration of the last one, but Honeybee is open source, meaning that, if you need a feature of EnergyPlus that is not yet implemented in either interface, you can usually add it yourself with a few lines of Python code in Honeybee. This type of workflow is not possible with Archsim since it is closed source and requires you to use EnergyPlus's text editor interface after Archsim has exported an IDF in order to implement any additional EnergyPlus features.
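To give a sense of what that "few lines of Python" workflow can look like, here is a hedged sketch of appending an extra EnergyPlus object to an exported IDF file. The object below is standard EnergyPlus IDF syntax for requesting an hourly output variable; the file name `in.idf` is an assumption for illustration, not something either tool prescribes.

```python
# Sketch: request an extra hourly output by appending a standard
# EnergyPlus object to an exported IDF file (file name assumed).
extra_output = """
Output:Variable,
    *,                          !- Key Value
    Zone Mean Air Temperature,  !- Variable Name
    Hourly;                     !- Reporting Frequency
"""

with open("in.idf", "a") as idf:
    idf.write(extra_output)
```

The same idea applies to any IDF object that an interface does not yet expose: because the format is plain text, a short script can inject it before the simulation runs.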
4) The libraries and templates for Honeybee come from OpenStudio - the open source interface for EnergyPlus (https://www.openstudio.net/), which is supported by the US Department of Energy (just like EnergyPlus). Since Honeybee is open source, it is able to make use of the large database of building type schedules/loads and constructions that have been assembled by the OpenStudio team over the last several years, as well as OpenStudio's SDK. I can also say that almost all of the development efforts of the Honeybee team are now focused on integrating efforts with OpenStudio, including an exporter from Honeybee to OpenStudio that should be fully functional for the next stable release. I am not certain of the current extent of Archsim's libraries but, last I checked, the creator was pulling them from his own experience and, as such, only had a few libraries to choose from. For all I know, though, this may be changing with the integration of Archsim with DIVA.
Let me know if this is helpful and, if anyone has more up-to-date knowledge of Archsim than I do, please post here.
-Chris…
er). With the command "End Bulge" I noticed that G2 moves perpendicular to G1! But with an increment that is not equal... and is different every time, depending on the angle between G0, G1 and G2. How do I predict the position of G2 relative to G1 when simulating the "End Bulge" command? Thank you for your professional answers.
^___^
Below you can see an example with a crimson curve... If I move G1 by 1 unit, G2 moves by 0.42 units (perpendicular)... If I move by 2 units, the next step is 0.46 units... 3 units --> step 0.50 units... etc.
And each time it changes depending on the initial conditions (the G0/G1/G2 angle).
…
Added by Lucius Santo at 4:21pm on September 20, 2012
First of all, the invalid Rhino license seen previously has been removed, and the correct educational license we have has been re-installed for this test.
The re-appearing issue is that RAM usage spikes once GH is open in Rhino. It seems that this happens when a series of incrementally saved large GH project files is stored in the same folder. Moving those previously saved large project files to a new folder seems to solve this issue.
The images below explain the issue and the hypothetical solution:
1. A series of GH files were incrementally saved in the same folder previously, and the last few GH files are the ones opened most recently:
2. The total RAM usage is at the normal 5GB level once Rhino is open:
3. Once GH is open, the RAM usage spikes, and it becomes very slow to maneuver the GH window before even opening any of those GH files:
4. Once GH and Rhino are closed, the RAM usage drops to the previous level from before the GH interface was open:
5. Now, all the incrementally saved GH files are moved to a new folder "wip" except the last one, i.e. for the last GH file, there are no other previous GH files in the same location:
6. Now, if we open GH, there is no sudden increase in RAM usage, and the 3x3 thumbnails on the GH canvas show "missing" as those previously opened GH files are no longer in the same location as they were before:
I understand that David mentioned that the thumbnails for previously opened GH files on the GH canvas will not take much RAM. Nevertheless, I'm still not sure what is causing the increase in RAM usage and the slowdown of the GH interface. Relocating the large project files previously saved in the same folder as the current GH file seems to make this issue go away, for an unknown reason ...
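For anyone who wants to try step 5 without moving files by hand, here is a small sketch of the sorting logic. It assumes the incremental saves sort chronologically by name (e.g. `project_001.gh`, `project_002.gh`, ...), which may not hold for every naming scheme:

```python
def split_saves(filenames):
    """Keep the most recent .gh save in place; everything else goes to "wip".

    Assumes file names sort chronologically (e.g. project_001.gh ...).
    Returns (to_keep, to_move) as two lists of file names.
    """
    saves = sorted(f for f in filenames if f.endswith(".gh"))
    return saves[-1:], saves[:-1]

keep, move = split_saves(["p_001.gh", "p_002.gh", "p_003.gh", "notes.txt"])
print(keep)  # ['p_003.gh']
print(move)  # ['p_001.gh', 'p_002.gh']
```

The actual moving could then be done with `shutil.move` on each name in the second list.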
I'd appreciate it if anybody experiencing a similar issue could check whether this solution works.
Thank you.
…
or GH with: 1. Animation Timeline 2. Rendering 3. API
Summary:
Animation Timeline: A smooth animation system that plays at real-world speed, so you know the robot will run just right when you upload the code.
Rendering: Extensive options and outputs; so you can generate amazing videos.
API: Access our functions through Python and C# scripting; so you can manage parameters and actions for complex processes for each target.
More info:
Animation Timeline:
Build an animation from a list of Planes, it's that easy! Get these from points, curves or surfaces. Download the example files with the trial and test it yourself.
The unique Timeline component displays all the important robot warnings and the digital Input/Output:
RED - clash detection
BLUE - singularities
YELLOW - over rotation
ORANGE - out of reach
Digital Input/Output: red = off, green = on
Rendering:
IO smoothly interpolates between all the Planes you set. This means you can generate keyframes for positions between Planes too, e.g. if you have two planes defining a tool path, IO can generate 2000 keyframes. Smooooth!
Rendered in full colour as standard, not GH red :-)
LiveBaking - lets you use Rhino render settings in real-time (can be a bit slow!)
Slider animation - use the native 'Animate' option to export hi-res images and create videos easily. Just set the number of frames you need (hint: multiply total time in seconds by the frames-per-second rate)
Bake unlimited meshes as keyframes for export to render-pipelines in 3DS etc.
API
Accessing the IO functions through Python and C# lets you build more powerful definitions. You can assign data to every position the robot reaches, allowing you to control speed, acceleration, wait-times, actions and more. Examples comparing C# with Python are included in the example files.
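As a purely hypothetical sketch (not the actual IO API), here is one way per-target data like speed and wait time could be attached to a list of planes in Python. Every name below is invented for illustration:

```python
# Hypothetical per-target data structure; IO's real API may differ entirely.
from dataclasses import dataclass

@dataclass
class Target:
    plane: tuple          # stand-in for a Rhino plane (origin coordinates here)
    speed: float = 100.0  # mm/s, assumed unit
    wait: float = 0.0     # seconds to dwell at this target

# Assign a ramping speed to five positions along the x-axis.
targets = [Target(plane=(x, 0.0, 0.0), speed=50.0 + 10 * x) for x in range(5)]
print([t.speed for t in targets])  # [50.0, 60.0, 70.0, 80.0, 90.0]
```

The point is simply that scripting lets each target carry its own parameters instead of one global setting.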
You can also use the API to build your own plugins that use the IO timeline to do all the hard work, like IK and creating valid code, while you enjoy developing your new process...
Check out the website for more features and videos of the example definitions: www.robots.io
Download the PDF guide: 150314_IO_Primer_v1.pdf.
See www.robots.io for more info and pricing.
Developed by RoboFold Ltd. Used by leading academics, researchers and professionals.
…
Added by Gregory Epps at 10:15am on November 7, 2014
onents (radiation, sunlight-hours and view analysis) which let you study the effect of the orientation of your building on the analysis result. When you come to a question similar to "what is the orientation in which the building receives the most/least radiation?", that is probably the right time to use this component.
HOW?
I'll try to explain the steps using a simple example. Here are my design geometries. The building in the center is the building to be designed and the rest of the buildings are context. I want to see the effect of orientation on the amount of radiation on the test building surfaces from the start of Oct. to the end of Feb. for Chicago.
First I need to set up the normal radiation analysis and run it for the building as it is right now. [I'm not going to explain how to set this up since you can find it in the sample file (Download the sample file from here)]
Now I need to set up the parameters for orientation study using orientationStudyPar component. You can find it under the Extra tab:
At minimum I need to input the divisionAngle and the totalAngle, and set runTheStudy to True. In this case I put 45 for the divisionAngle and 180 for the totalAngle, which means I want the study to be run for angles 0, 45, 90, 135 and 180.
[Note 1: The totalAngle should be divisible by the divisionAngle.]
[Note 2: If you don't provide any point for the basePoint, the component will use the center of the geometry as the center of the rotation.]
[Note 3: You can also rotate the context with the geometry! Normally you don't have the chance to change the context to make your design work but if you got lucky the rotateContext input is for you! Set it to True. The default is set to False.]
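The relationship between the two angle inputs in the step above can be sketched in a few lines of Python (the variable names are mine, not the component's internals):

```python
# Sketch of how the list of study orientations follows from the two inputs.
division_angle = 45
total_angle = 180

# Note 1 above: totalAngle must be divisible by divisionAngle.
assert total_angle % division_angle == 0, "totalAngle must be divisible by divisionAngle"

angles = list(range(0, total_angle + 1, division_angle))
print(angles)  # [0, 45, 90, 135, 180]
```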
You're all set for the orientation study: just connect the orientationStudyPar output to the OrientationStudyP input of the component and wait for the result!
The component will run the study for all the orientations and preview the latest geometry. To see the result, just grab a quick graph and connect it to totalRadiation. As you can see in the graph, 135 is the orientation at which I receive the maximum radiation. Dang!
If you want to see all the result geometries, set bakeIt to True, and the results will be baked under LadyBug> RadiationStudy>[projectname]>. The layer name starts with a number, which is the totalRadiation.
Mostapha…
ahams's question about how shades are accounted for in the simulation/thermal map and Theodore's thought that just accounting for shades in the E+ run was sufficient. I think that it may be clearest to explain what is going on with this infographic:
As the graphic shows, the thermal maps are made from 4 key types of inputs. The radiant temperature map is formed through a consideration of both the temperature of the surfaces surrounding the occupants and the direct solar radiation that might fall onto the occupants through un-shaded windows. The first, surface temperature, effect is easily computable from your energy simulation results and the HBZone geometry. The second, however, is calculated by seeing how sun vectors pass through the windows of the zones and uses the SolarCal method of the CBE team (http://escholarship.org/uc/item/89m1h2dg) to compute an MRT delta resulting from solar radiation. This delta is then added to the initial values computed from the surface temperature view factors. When you do not connect your shading brep geometry, internal furniture breps, or outdoor context geometry that might block sun to the additionalShading input, the thermal map will assume that sun can pass unobstructed through the window and indoor furniture to fall onto occupants. It is important to stress that the EnergyPlus simulation does not account for blind geometry or internal furniture as actual geometry, just as numerical abstractions of surface area and material properties. So we need you to plug in the actual geometry of these things when we compute the MRT delta resulting from sun falling directly onto people.
Next, to clear up the definition of window transmissivity. The important thing to clarify here is that, whether it refers to the transmittance of glass or to the amount of sun coming through a fine screen of blinds, the value is multiplied by the radiation falling on the occupant and thus has a direct correlation to the MRT delta from sun falling on occupants. So, if you set the transmissivity to zero, the sun falling on the occupants will not be considered in the calculation and, if you set the transmissivity to 1, the assumption is that there is no window (or the window glass is 100% clear). So, Abraham, your definition of it as a coefficient is appropriate.
Normally, I would just recommend that you leave this value at the default 0.7, which corresponds to the transmittance of the default glass material in Honeybee. However, there are 4 cases in which you might consider changing it:
1) You are not using the default Honeybee glazing material, in which case, you should change the transmissivity to be equal to this new value.
2) You have a lot of really small blind/shade geometries and you do not want the view factor component to take several minutes to trace sun vectors through the detailed shade geometry, so you are ok with using a simple abstraction instead of plugging shade breps into additionalShading. In this case, you might estimate the average percentage of radiation coming through the blind geometry (maybe with some simple Ladybug radiation studies or with your intuition about the amount of sun blocked by the shades). You will then multiply this by the transmittance of your glass and this will be the value that you input to the component.
3) Your blinds for your Honeybee simulation are dynamic, in which case plugging shade breps into additionalShading is not going to work because the component will assume that those shades are always there. In this case, you should be plugging a list of 8760 values into the transmissivity that correspond to when the shades are pulled. When the blinds are completely up, the value should be the transmittance of your window and, when they are down, the value should be the window transmittance multiplied by the fraction of light coming through the shades.
4) You have shades/blinds that are not completely opaque. The additionalShading_ input assumes that all shade geometry is opaque and so you cannot use it to account for such shades. Accordingly, you will need to account for them through the transmissivity.
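For the dynamic-blind case (3), building the 8760-value list might look like the sketch below. The shade fraction and the blind schedule are illustrative assumptions; only the 0.7 glass transmittance comes from the post itself:

```python
# Hedged sketch: hourly transmissivity values for dynamic blinds.
glass_transmittance = 0.7  # Honeybee's default glass, per the post
shade_fraction = 0.35      # assumed fraction of light passing through pulled blinds

# Example schedule (an assumption): blinds pulled from 10:00 to 16:00 every day.
blinds_down = [(hour % 24) in range(10, 16) for hour in range(8760)]

# Blinds up -> glass transmittance; blinds down -> glass x shade fraction.
transmissivity = [
    glass_transmittance * shade_fraction if down else glass_transmittance
    for down in blinds_down
]
print(len(transmissivity))  # 8760
```

The resulting list is what would be plugged into the transmissivity input, one value per hour of the year.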
In the future, I may try to pull more information about blinds and glass properties off of the HBzones inside the view factor component but, for now and for the next few months, the above describes how it works.
Theodore, for curved geometry, I think that your safest bet is going to be planarizing the Rhino geometry before you turn it into a HBZone (so you just divide the curved surface into a few vertical planar panes of glass that approximate the curve well enough). This is essentially what the runSimulation component does for you automatically (it meshes the geometry as you see here: https://www.youtube.com/watch?v=nMQ2Pau4q6c&index=12&list=PLruLh1AdY-SgW4uDtNSMLeiUmA8YXEHT_). If I were to figure out a way to incorporate shades in this automatic meshing workflow, your EnergyPlus simulation would take a very long time to run and I am not even sure if the result will be that accurate with the way E+ abstracts shades. So I don't think that it's really worth it over just planarizing the geometry yourself.
Lastly, I won't be able to figure out the problem with your current run, Theodore, unless I get the GH file from you. Make sure that you are using all up-to-date components.
-Chris…
Cavallette Generative is the new Grasshopper Level I course offered by the Mandarino Blu visual communication LAB. The event is organized with the support of Multiverso, a co-working company. The course covers the basic topics of generative modeling, including lessons on the design philosophy of generative modeling and the foundations of mathematical analysis.
The course lasts 30 hours, with twice-weekly sessions (Monday and Wednesday) starting Monday, October 3. For more information, contact the course instructor and download the program…
tion) which would amount to (at a rough guess based on your image) about 1000 genes. I'm not sure how well Galapagos will be able to deal with such an amount (ironically, the biggest problem will probably be the interface, not the solver algorithm).
The main theoretical problem I see is the definition of the fitness function for this. It won't be good enough to count intersections and minimize those. The reason is that the number of intersections is an integer (it doesn't vary smoothly) and as such there's no 'selection pressure' towards better solutions. If you start with a setup like this:
it would have a 'fitness' of 2. We'd like to move these two shapes apart but even if we move them a large distance in the correct direction, we still have a fitness of 2:
It seems like a more useful metric would be the area of the overlap, at least then when leaves move apart, the fitness value will change, which allows the algorithm to make an informed decision.
Computing curve region intersections and areas would be a very intense step, making the whole process even slower than it already is. If I had to do this, I'd first try to tackle it using pixels, as computers are very good at dealing with them quickly. You could draw an image of all the shapes, drawing each one in a transparent black. Then, when two shapes overlap, the resulting pixel will be darker than the fill. If three shapes overlap it will be darker still. Then, once you've created the image, you could inspect all the pixels and compute a value based on how many dark pixels there are.
This can probably be done at a reasonably low resolution, but you'd need to write some code to create and analyse the images.
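As a minimal sketch of this pixel-counting idea, the code below rasterises shapes into a coverage grid and counts pixels covered more than once; circles stand in for the leaf shapes, and the grid size is arbitrary. It avoids image libraries entirely, which a real implementation would probably use:

```python
# Pixel-based overlap fitness: count grid cells covered by 2+ shapes.
def overlap_fitness(shapes, width=100, height=100):
    """shapes: list of (cx, cy, r) integer circles standing in for leaves."""
    grid = [[0] * width for _ in range(height)]
    for cx, cy, r in shapes:
        # Rasterise each circle by incrementing every covered cell
        # (the "transparent black" layering: more layers = darker pixel).
        for y in range(max(0, cy - r), min(height, cy + r + 1)):
            for x in range(max(0, cx - r), min(width, cx + r + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    grid[y][x] += 1
    # Fitness to minimise: number of pixels covered by two or more shapes.
    return sum(1 for row in grid for v in row if v > 1)

print(overlap_fitness([(30, 50, 10), (35, 50, 10)]))  # overlapping pair: > 0
print(overlap_fitness([(20, 50, 10), (70, 50, 10)]))  # disjoint pair: 0
```

Unlike an intersection count, this value shrinks smoothly as shapes move apart, giving the solver the selection pressure described above.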
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
N, O}. In this case it's very obvious what needs to happen. You want to create lines combining the following points {AK, BL, CM, DN, EO}.
If the second set however only contains 3 points {K, L, M}, it is no longer obvious. The default behaviour for Components is to keep on matching points until both sets are depleted. This is called Longest List Matching. It will give you 5 lines that connect the following points: {AK, BL, CM, DM, EM}. As you can see, the last point in the second list (M) has been 'recycled' three times.
You can also change the default data matching behaviour. For example if you change it to Shortest List, then the component will stop working as soon as the smallest set is depleted: {AK, BL, CM}. In this case the points D and E are completely ignored because no 'sibling' could be found for them in the second set.
A third option is Cross Reference matching, which will create all possible combinations: {AK, AL, AM, BK, BL, BM, CK, CL, CM, DK, DL, DM, EK, EL, EM}.
However the best solution in this particular case is not to muck about with the data matching, but instead Graft your data. Grafting means that all the items in a set are moved into their own little set. Thus, if you graft {A, B, C, D, E}, you actually end up with 5 sets, each containing a single item {A}, {B}, {C}, {D} & {E}. When you combine this new data layout with your second set {K, L, M}, each grafted item will be matched with all the items in the second set, this is after all how Longest List works. So you end up with a data layout that looks like this: {AK, AL, AM}, {BK, BL, BM}, {CK, CL, CM}, {DK, DL, DM} & {EK, EL, EM}, which is very similar to the Cross Reference matching, but retains more of the original layout. I.e., it's not just a huge list of all the lines, they are still five groups of three items each, which is a far more informative layout than Cross Reference would generate.
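The four behaviours described above can be mimicked in a few lines of plain Python, using letters in place of points (a simplification of the GH example; these are not Grasshopper's internals):

```python
# Sketch of the four data-matching behaviours, with strings standing in
# for points: "AK" means a line from point A to point K.
a, b = ["A", "B", "C", "D", "E"], ["K", "L", "M"]

def longest_list(xs, ys):
    # Match until the longer list is depleted, recycling the last item.
    n = max(len(xs), len(ys))
    return [xs[min(i, len(xs) - 1)] + ys[min(i, len(ys) - 1)] for i in range(n)]

def shortest_list(xs, ys):
    # Stop as soon as the smaller list is depleted.
    return [x + y for x, y in zip(xs, ys)]

def cross_reference(xs, ys):
    # All possible combinations in one flat list.
    return [x + y for x in xs for y in ys]

def graft(xs, ys):
    # Each grafted item gets its own branch, matched against the whole second list.
    return [[x + y for y in ys] for x in xs]

print(longest_list(a, b))    # ['AK', 'BL', 'CM', 'DM', 'EM']
print(shortest_list(a, b))   # ['AK', 'BL', 'CM']
print(cross_reference(a, b)) # all 15 combinations, flat
print(graft(a, b))           # five branches of three items each
```

Note how `graft` yields the same pairs as `cross_reference` but keeps them in five separate branches, which is exactly the "more informative layout" point made above.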
I'm afraid at this time of night this is the best I can explain it.
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
difference consists of.
An Evolutionary Solver/Genetic Algorithm is an implementation of Metaheuristics. Metaheuristics tend to be flexible solvers, applicable to a wide variety of problems, fairly easy to implement, but slow. Other examples of Metaheuristic algorithms would be Random Search, Scatter Search, Simulated Annealing and so on. These algorithms are often modelled on physical or biological processes.
Simulated Annealing for example simulates the physical process of annealing (who'd have thunk it), which is basically the slow cooling of a material which allows it to settle into a crystalline lattice, i.e. a low energy distribution of all the atoms. I'm currently adding an SA solver to Galapagos, and in fact just yesterday managed to get the first successful run: http://www.youtube.com/watch?v=VWtYLv-4oP0
Metaheuristics are especially useful for those cases where little is known about the problem ahead of time. If the problem search-space is mathematically well defined (differentiable, especially), then you can use more targeted algorithms such as the Newton-Raphson method, Pareto-search or Uphill search. You can still use these methods on non-differentiable search-spaces, but it involves sampling the local region to death to get an estimate of the differential. This can be a very costly enterprise, especially in high-dimensional search-spaces. In a two-dimensional search-space you'll need 3 samples to get a lame estimate, 4 to get a halfway decent one, and 8 to get a good estimate. In a three-dimensional search-space you already need 26 samples, and the number of samples grows exponentially with higher dimensions.
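The "good estimate" sample counts quoted above (8 in 2D, 26 in 3D) match probing every neighbouring cell of a grid around the current point: 3^d - 1 neighbours in d dimensions. A quick check of that arithmetic:

```python
# Neighbouring grid cells around a point in d dimensions: 3^d - 1
# (every coordinate can be -1, 0 or +1, minus the point itself).
samples = {d: 3 ** d - 1 for d in (2, 3, 4, 5)}
print(samples)  # {2: 8, 3: 26, 4: 80, 5: 242}
```

The exponential growth is visible immediately: by five dimensions a single differential estimate already costs hundreds of samples.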
If you have a specific problem you're trying to solve, Metaheuristics are probably not the best solution, even though they may be easiest to program. Rhino uses something akin to Newton-Raphson for certain problems and that's fast enough to run in real-time.
Divide-and-Conquer algorithms are also quite popular. Sometimes they are called Binary-Search or Tree-Search algorithms as well. Their basic premise is to sample the search-space at a few intervals (but enough to capture the needed detail), then find two neighbours with promising values and sample again in between these two. Then repeat. Each new iteration typically doubles accuracy, which is great because then you only need around 30 to 40 iterations to get an answer as good as possible with double-precision floating-point accuracy. However not all problems lend themselves well to this sort of search, and in higher dimensions it starts getting slow with disconcerting alacrity.
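The sample-between-promising-neighbours idea can be sketched as an interval search on a smooth one-dimensional function. The ternary-style variant below is an illustrative stand-in, not what Rhino actually uses:

```python
# Divide-and-conquer sketch: repeatedly shrink the bracket around the
# minimum of a unimodal function by sampling two interior points.
def interval_minimize(f, lo, hi, iterations=40):
    for _ in range(iterations):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2  # minimum lies left of m2; discard the right third
        else:
            lo = m1  # minimum lies right of m1; discard the left third
    return (lo + hi) / 2

# Find the minimum of a simple parabola on [0, 10].
x = interval_minimize(lambda v: (v - 1.25) ** 2, 0.0, 10.0)
print(round(x, 6))  # 1.25
```

Each iteration shrinks the bracket by a fixed fraction, so accuracy improves geometrically with the iteration count, which is the point David makes about needing only a few dozen iterations.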
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 1:54am on August 15, 2011