ld work.
For example, there's a grid shell and I've got a number of control points (say three) that can move up and down.
Depending on the control points I get some forms that are structurally good and some that are bad.
In my office we've got a GH component which translates the geometry into structural members, solves the structural forces and so on through an external software called Sofistik, and afterwards returns some values to GH, for example maximum bending moments (like Karamba).
Now I want to create an optimization component, or something like that, to minimize e.g. the bending moments in the given geometry.
Let's start with how the component should work.
Say I have three control points that can only move in the z-direction:
P1(0,0,Z1), P2(10,0,Z2), P3(5,5,Z3)
They only depend on Z, so everything depends on Z1 to Z3, which each have a range between 0 and 10, for example.
First I want to generate some random particles (between 9 and 15); one particle consists of these three Z values.
So for example the first particle Part1 is [Z1=10, Z2=5, Z3=7]
and the second particle Part2 is [Z1=7, Z2=1, Z3=9]
and so on.
I created these Start Particles in a Cluster. See attached file.
I also tried this in C#, but thought it would be easier in GH.
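Just to illustrate what I mean, a rough C# sketch of the start particle generation (the names and the 0..10 bounds are only placeholders from my example above, not from the attached cluster; this would go in the body of a C# script component):

```csharp
// Rough sketch: generate n random start particles, each with three Z values in [0, 10].
// particleCount, zMin and zMax are just placeholders for this illustration.
using System;
using System.Collections.Generic;

var rng = new Random();
int particleCount = 12;          // somewhere between 9 and 15
double zMin = 0.0, zMax = 10.0;  // assumed range of each Z

var particles = new List<double[]>();
for (int i = 0; i < particleCount; i++)
{
    particles.Add(new[]
    {
        zMin + rng.NextDouble() * (zMax - zMin),  // Z1
        zMin + rng.NextDouble() * (zMax - zMin),  // Z2
        zMin + rng.NextDouble() * (zMax - zMin)   // Z3
    });
}
```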
After I've got the start particles I want to output the first particle and evaluate the target value in GH using its Z values. Therefore I had to take the first branch and graft it (see the discussion before).
Afterwards I want to save this target value, which depends on the first start particle. Then I want to output the second start particle, evaluate its target value and store it, and so on until the target value of the last start particle has been assigned.
Then I want to pair each particle with its target value, e.g. part1: t=0.9, part2: t=1.8, ...
Then I want to define neighborhoods or the count of the expected local minima.
These neighborhoods could work like this: each neighborhood has to include at least 3 particles, and the particles have to be close to each other.
E.g. if there are 12 particles and I want to look for 3 local minima, I could use 3 or 4 neighborhoods. I would take 3, because the more particles in one neighborhood, the better.
So the count of neighborhoods would be N = min{ floor(particle count / 3), N_minima }, where N_minima is the number of expected local minima.
How exactly to define these neighborhoods I don't know at the moment. I think it has to be based on the distance between the particles; e.g. part1 with (9,9,9) and part2 with (9,9,8) are close to each other, but part3 with (1,1,2) is far away.
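One simple way I could imagine doing this in C# (only a sketch, and only one possible approach: it assigns each particle to the nearest of k "seed" particles and does not yet enforce the minimum of 3 per neighborhood; the methods would go in the custom additional code section of a C# script component):

```csharp
// Sketch: group particles into k neighborhoods by Euclidean distance,
// assigning each particle to the nearest of the first k particles used as seeds.
// A better version would pick seeds that are far apart and enforce >= 3 per group.
using System;
using System.Collections.Generic;

static double Dist(double[] a, double[] b)
{
    double s = 0.0;
    for (int i = 0; i < a.Length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
    return Math.Sqrt(s);
}

static List<List<int>> Neighborhoods(List<double[]> particles, int k)
{
    var groups = new List<List<int>>();
    for (int g = 0; g < k; g++) groups.Add(new List<int>());

    for (int i = 0; i < particles.Count; i++)
    {
        int nearest = 0;
        double best = double.MaxValue;
        for (int g = 0; g < k; g++)
        {
            double d = Dist(particles[i], particles[g]);  // distance to seed g
            if (d < best) { best = d; nearest = g; }
        }
        groups[nearest].Add(i);  // store the particle index in its neighborhood
    }
    return groups;
}
```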
Then each start particle is set as its own partx_localbest.
And in each neighborhood the best of these local bests is part_NyBest (the best is the one with the smallest target value).
Loop:
Now I want to create new particles. These particles don't change their Z values randomly; they change their Z values depending on part_NyBest and partx_localbest. Therefore a new velocity has to be evaluated:
v_partx_new = 0.792 * v_partx_old + 1.5 * random(0,1) * (partx_localbest - partx) + 1.5 * random(0,1) * (part_NyBest - partx)
The new particle will then be partx_new = partx + v_partx_new.
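Written out as C#, one update step for a single particle could look roughly like this (using the 0.792 / 1.5 / 1.5 factors from the formula above; the clamping to 0..10 is my assumption from the Z range, and the names are again placeholders):

```csharp
// Sketch: PSO velocity and position update for one particle.
// w = inertia, c1/c2 = cognitive and social factors, as in the formula above.
using System;

static double[] UpdateParticle(double[] part, double[] vOld,
                               double[] localBest, double[] nbhdBest,
                               Random rng,
                               double w = 0.792, double c1 = 1.5, double c2 = 1.5)
{
    for (int i = 0; i < part.Length; i++)
    {
        double vNew = w * vOld[i]
                    + c1 * rng.NextDouble() * (localBest[i] - part[i])
                    + c2 * rng.NextDouble() * (nbhdBest[i] - part[i]);

        part[i] += vNew;                                   // partx_new = partx + v_partx_new
        part[i] = Math.Max(0.0, Math.Min(10.0, part[i]));  // keep Z within the assumed 0..10 range
        vOld[i] = vNew;                                    // store the velocity for the next iteration
    }
    return part;
}
```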
The new particle partx_new is then assigned to partx and sent to the output.
Then the target value of part1 has to be caught; afterwards part2 can be output and its target value caught, and so on.
Then partx_localbest has to be updated by comparing partx_localbest and its target value with the new partx and its target value. If the target value of the new partx is smaller than that of partx_localbest,
then partx_localbest becomes the new partx.
This has to be done for each partx. Afterwards the same is done for the neighborhood best (the best of all partx_localbest in one neighborhood).
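The comparison against the stored bests could be sketched like this (smaller target value = better, as described above; again just an illustration):

```csharp
// Sketch: after evaluating a particle, update its personal best and, if needed,
// the best of its neighborhood. Smaller target values are better.
using System;

static void UpdateBests(double[] part, double target,
                        double[] localBest, ref double localBestTarget,
                        double[] nbhdBest, ref double nbhdBestTarget)
{
    if (target < localBestTarget)
    {
        Array.Copy(part, localBest, part.Length);           // partx_localbest = new partx
        localBestTarget = target;
    }
    if (localBestTarget < nbhdBestTarget)
    {
        Array.Copy(localBest, nbhdBest, localBest.Length);  // part_NyBest = best local best
        nbhdBestTarget = localBestTarget;
    }
}
```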
End the loop when the velocity gets small.
Output all part_NyBest.
Output all target values of the part_NyBests.
So the inputs have to be:
The start particles, if they are given through the attached cluster
A device for the target value, like in the attached GH file from David Rutten that I found in the discussions
Count of neighborhoods
And the outputs:
Output particle for evaluation
Output all part_NyBest
Output all target values of the part_NyBests
Hope I didn't forget anything, and I hope it isn't messed up too badly. Sorry for my bad English by the way ;-)
For more explanation of how the PSO works in other programs, there's a workflow script attached (is that what it's called?). I think for GH it should be changed a little bit, like I tried in my explanations.
So if you can help me with some parts or have any advice, that would be great; otherwise thank you nevertheless!
Thankfully there's no word limit in the discussions :-D
Best, Heiko
…
lly it should not make much of a difference - random number generation is not affected, mutation also is not. Crossover is a bit more tricky; I use Simulated Binary Crossover (SBX-20), which was introduced already in 1994:
Deb K., Agrawal R. B.: Simulated Binary Crossover for Continuous Search Space, IITK/ME/SMD-94027, Convenor, Technical Reports, Indian Institute of Technology, Kanpur, India, November 1994
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs are developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that the real-coded GAs with the SBX operator are able to perform as good or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions
In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover.
On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent with binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with an SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions and the performance of the former is similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better in difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.
Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of search to continue towards the global optimal point. The simulation results on the two-variable blocked function have shown that in most occasions, the search proceeds the way predicted in [10]. Most importantly, it has been observed that the real-coded GAs with SBX work similar to the binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions.
These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, the real-coded GAs with SBX are one step ahead of the binary-coded GAs in terms of achieving a convergence proof for GAs. With a direct probabilistic relationship between children and parent points used in this paper, cues from the classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between the classical optimization methods and GAs is on the horizon.
In short, according to the authors, SBX using real gene values is as good as older operators specially designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general-purpose crossover operator.
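Just for illustration, here is a minimal sketch of how SBX with a distribution index of 20 is commonly written for a single real-valued gene. This is the textbook formulation from the Deb & Agrawal paper, not necessarily the exact Octopus implementation, and the class/method names are only placeholders:

```csharp
// Minimal sketch of SBX-20 for one real-valued gene (textbook formulation,
// not necessarily the exact Octopus implementation).
using System;

static class SbxSketch
{
    // Produces two children from parents p1 and p2. eta is the distribution
    // index (20 for SBX-20): larger values keep children closer to the parents.
    public static double[] Crossover(double p1, double p2, Random rng, double eta = 20.0)
    {
        double u = rng.NextDouble();

        // Spread factor beta drawn from the SBX probability distribution.
        double beta = u <= 0.5
            ? Math.Pow(2.0 * u, 1.0 / (eta + 1.0))
            : Math.Pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (eta + 1.0));

        double c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2);
        double c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2);
        return new[] { c1, c2 };
    }
}
```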
But:
- there might be better ones out there I just haven't seen yet. Please tell me.
- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for the MOEA, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment; maybe there's a way to provide an interface so you can code those things yourself..!?
2) Elite size = SPEA-2 archive size, yes. The rate depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size (which is hard-coded in the new version, I just realize) is big enough.
4) The non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (following the truncation strategy in SPEA-2) are removed one by one until the size is reached. If the front is smaller than the archive, the fittest dominated individuals are put into the elite; the latter happens at the beginning of the run, when the front hasn't been discovered well yet.
3) Yes it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively has the possibility to articulate preference relations on sets of solutions.
…
n common tasks like updating GH definitions, viewing images on the GH canvas, and augmenting existing study types. Most of the improvements to Honeybee have been in the making for a while and are just getting into the spotlight with this release. Notably, a number of improvements have been made to support large-scale full building energy models, including fixes to memory issues with large models, better components for splitting building masses into zones, and the ability to store HBZones in external files. Additionally, the THERM workflows have gotten a boost and these simulations can now be run directly from the Grasshopper canvas.
As always, you can download the new release from Food4Rhino. Make sure to remove the older version of Ladybug and Honeybee before you do so and to update your scripts. So, without further ado, here is the list of the new capabilities added with this release:
LADYBUG
Better Method for Updating Old Grasshopper Files - As many of you have come to realize, Ladybug + Honeybee is updated on a fairly regular basis, with a stable release roughly every 6 months and a github version that never ceases to improve itself on a weekly basis. For this reason, we realize that updating old Grasshopper definitions to use recent components is a challenge for many of us. While we’ve had some methods for this in the past, there were always hiccups, particularly when it came to components that had new inputs/outputs since the previous version. Accordingly, Mostapha has added a new “Ladybug_Update File” component that will automatically update any Grasshopper Definition to be synchronized with the version of Ladybug+Honeybee that is currently in your toolbar (aka. the components in your userobjects folder). If there is a component that has new inputs/outputs since the time you built the definition, it will be automatically circled in red in your GH definition and a newer version of the component will be automatically added right next to this component:
While you still have to do some manual connecting of inputs to the newer component in this case, it should be much faster than our older methods and will hopefully help your old definitions survive long into the future!
EPWmap Now includes OneBuilding Files - Mostapha has added a number of new features to the EPWmap web interface that the “Download Ladybug” component connects to. Among the improvements are a color wheel that quickly shows you how hot, cold, and comfortable a given climate is and, perhaps more importantly, there is now support for EPW files sourced from OneBuilding. With the addition of many more weather files, you should now be able to use Ladybug with ease for more locations across the planet. We should also note that the “Open EPW and STAT” component that downloads/unzips files from a URL now supports OneBuilding URLs.
New Image Viewer Component - Mingbo Peng has graced Ladybug with a fantastic new “Image Viewer” component that takes a given image file on one’s machine and displays it on the Grasshopper canvas. It also enables one to pull color data off of the image with ease by simply clicking on the pixel of the image one is interested in. This new component is useful for a wide variety of cases, including the viewing of screenshots after they have been taken with the “Ladybug_Capture View” or “Ladybug_Render View” components. However, many of you will likely recognize it as most immediately useful in workflows involving image-based Honeybee Daylight (Radiance) simulations. This is particularly true as Mingbo has built in the capability to read many image file types, including PNG, JPEG, GIF, TIFF, and the High Dynamic Range (.HDR) image files that Radiance outputs:
The following video gives a quick overview of the Image Viewer’s capabilities:
The new component can be found under the Ladybug_Extra tab and I think I speak for us all in saying thank you Mingbo for this great component!
New Sun Shades Calculator Released Under WIP - After over a year of software development and nearly a career's worth of geometric math development, a joint effort between Abraham Yezioro and Antonello Di Nunzio has produced a new sun shade design component that can be described as nothing short of “magical.” Based on a similar principle to the current “Ladybug_Shading Designer,” the new component takes an input of sun vectors and produces shade geometries that can block the vectors. However, in comparison to the shading designer, the range of shade options that are available in this new component is truly staggering, ranging from classic overhangs, louvers and fins to pergolas and custom shade surfaces. Perhaps more importantly, the calculation methods used by this new component are faster and more reliable. It can currently be found under the WIP section of Ladybug, and it will continue to evolve in new versions of Ladybug.
Renewable Components Now Support Sandia and CEC Photovoltaic Modules - Polishing off his many contributions to the “Renewables” section of Ladybug, Djordje Spasic has added support for a couple more ways of defining photovoltaic modules for renewables estimation. Specifically, the Ladybug WIP section now includes components to import modules defined with the California Energy Commission (CEC) and Sandia Labs.
HONEYBEE
Support for OpenStudio 2.x - A few months ago, the National Renewable Energy Lab (NREL) released a stable version of OpenStudio version 2, which included a number of improvements in stability and available features. This stable release of Honeybee is built to work with the new version of OpenStudio and, in the coming months, Honeybee will be adding a few more capabilities to its OpenStudio workflows to support v2.x’s new capabilities. Most notable among these will be support for OpenStudio measures. Measures are short scripts written in Ruby using OpenStudio’s SDK to quickly edit and change OpenStudio models. They are fundamental to visions of OpenStudio as a flexible energy modeling interface and to Honeybee’s goals of being a collaborative interface between the architectural and engineering industries. Stay tuned for the next release for many of these new capabilities!
Critical Memory Issue Fixed for Large Energy Models - A number of you wonderful members of our community have been aware of computer memory issues with large Honeybee models for some time (examples: 1, 2, 3, 4). Namely, a model that is larger than 50 zones could quickly eat up 16 GB of memory and change Honeybee from a fast-flying insect to something more reminiscent of a snail. We are happy to say that, after a much longer time than it should have taken us, we finally identified and fixed the issue. In this version of Honeybee, such large models can now be created using less than 2% of the memory and time previously required.
Split Building Mass Component Getting a Makeover - Many of you veteran Ladybug users will recognize Saeran Vasathakumar as one of the original contributors of Ladybug who added components for solar fans and envelopes years ago. Now he’s back with new components to split a building mass into zones that are truly revolutionary in their speed and methodology. Saeran has divided the new capabilities into two components (one for floor-by-floor subdivision and another for core-perimeter subdivision) and they both can be found under the WIP section of this release. In this WIP version, core-perimeter thermal zones can only be generated for all convex and very simple concave geometries. Most concave geometries and geometries with holes (or courtyards) in them will fail. However it can handle even very complex convex geometries with speed and ease. You can expect the component to start accommodating concave/courtyard geometries very soon.
Load / Dump HB Objects to File - Keeping in line with the support of large, full building energy models, this release includes full support for two components that can dump and load any HBObjects to a standalone file. All information about HBzones can go into this file including custom constructions, schedules, loads, natural ventilation, shading devices, etc. You can then send the resulting .HB file to someone else and they can load up the same exact zones in another definition. This also makes it possible to have one Grasshopper file for generating the zones and running the simulation and another GH definition to import results and color zones/surfaces with those results, make energy balance graphics, etc.
Write ViewFactorInfo to File - After many of you asked for it, the _viewFactorInfo that is output from the “Honeybee_View Factor” component can now be written out to an external file using the same Load / Dump HB Objects components cited above. For those of you who have worked with the comfort map workflows, you probably already know that calculating these view factors is one of the most time consuming portions of building a microclimate map. Having to re-run this calculation each time you want to open up the Grasshopper script is a nuisance and, thanks to this new capability, you should only have to run it once and then store your results in an external .HB file.
Transform Honeybee Components Modified for Large Model Creation - Many large buildings today are made up of copies of the same rooms repeated over and over again across multiple floors, or along a street, etc. Accordingly, one can imagine that the fastest way to create a full building energy model of such buildings is to simply move and copy the same zones several times. This is what a new set of edits to the Honeybee Transform components is aimed at supporting by allowing one to build a custom set of zones, translate them several times with a Honeybee_Transform component, then solve adjacencies on all zones to make a complete energy model.
Central Plants Available on HVAC Systems - While Honeybee has historically supported the assigning of separate HVAC systems to different groups of zones, each HVAC was always an entirely new system from the ground up. So a building with separate VAV systems for each floor would be modeled with a different chiller and boiler for each floor. While this can be the case sometimes, it is more common to have only one chiller and boiler per building but separate air systems for each floor. The new ‘centralPlant_’ options on the Honeybee coolingDetails and heatingDetails enable you to create this HVAC structure by making a single boiler and chiller for any HVAC systems that have this option toggled on. Furthermore, in the case of VRF systems, you can also centralize the ventilation system, using the grouping of zones around a given HVAC to assign which zone terminals are connected to a given heat pump.
More HVAC Templates Added - As the profession continues to push the industry standard towards lower-energy HVAC systems, Honeybee intends to keep up. In this release, we have included a few more templates for modeling advanced HVAC systems including Radiant Ceilings, Radiant Heated Floors + VAV Cooling, and two Ground Source Heat Pump (GSHP) systems. Variable Refrigerant Flow (VRF) systems have also gotten a large boost as it is now possible to model these systems with more efficient water-source loops. The next release will include the ability to model central ground source systems that use hydronics for heating and cooling delivery.
Run THERM Simulations Directly from Grasshopper - Anyone who has used the THERM workflow in the past likely realized that, while Honeybee can write the THERM file, you would still have to open the model in THERM yourself and hit “simulate” to get results. Now that LBNL has started a transition to becoming more open, they have graciously allowed free access for everyone to run THERM from a command line. What this means for Honeybee is that you no longer need to open THERM at all in order to get results and you can now work entirely in Rhino/Grasshopper. This also opens up the possibility of long parametric runs with THERM models since you can now automatically run simulations and collect results as you animate sliders, use Galapagos, etc. A special thanks is due to the LBNL team for exposing this feature, including Stephen Selkowitz, Christian Kohler, Charlie Curcija, Eleanor Lee, and Robin Mitchell.
All Options Exposed for THERM Boundary Conditions - To finish off the full implementation of THERM in Honeybee, a final component has been added called “Honeybee_Custom Radiant Environment.” This component completes the access to all boundary condition options that THERM offers, including separate radiant and air temperatures, different view factor models, and the specification of additional heat flux (which is typically used to account for solar radiation).
Improvements to Schedule-Generating Components - Many of you who have watched the Honeybee energy modeling video tutorials have likely gotten in the habit of using CSV schedules for everything. While this is definitely one valid way to work, it is not always the most efficient since simple schedules can be specified much more cleanly to EnergyPlus/OpenStudio and the use of CSVs can also make it difficult to share your energy models (since you have to send CSV files along with the schedules themselves). This release adds two new schedule components that should take care of a lot of cases where CSV schedules were unnecessary. The new “Constant Schedule” component allows you to quickly make a schedule that is set at a single value or a set of constantly repeating 24-hour values. The second component allows you to create “Seasonal Schedules” by connecting “week schedules” from the other schedule components along with analysis periods in which these week schedules operate. Together, these will hopefully make our schedule-generating habits a bit better as a community.
Lastly, many of you may know Mingbo Peng as the current maintainer of the Design Explorer web interface and the Colibri components under TTToolbox. Both of these tools have been revolutionary in enabling “brute force” studies of design spaces (aka. Grasshopper scripts where one runs all combinations of a set of sliders). Now, Mingbo has graced Ladybug with the aforementioned image viewer component and it is with pride that we welcome Mingbo Peng to the development team!
As always, let us know your comments and suggestions. Cheers!
The Ladybug Tools Development Team
…
e a fundamental failure on my part. On the other hand, Grasshopper isn't supposed to be on a par with most other 3D programs. It is emphatically not meant for manual/direct modelling. If you would normally tackle a problem by drawing geometry by hand, Grasshopper is not (and should never be advertised as) a good alternative.
I get that. That’s why that 3D shape I’m trying to apply the voronoi to was done in NX. I do wonder where the GUI metaphor GH uses comes from. It reminds me of LabVIEW.
"What in other programs is a dialog box, is 8 or 10 components strung together in grasshopper. The wisdom for this I often hear among the grasshopper community is that this allows for parametric design."
Grasshopper ships with about 1000 components (rounded to the nearest power of ten). I'm adding more all the time, either because new functionality has been exposed in the Rhino SDK or because a certain component makes a lot of sense to a lot of people. Adding pre-canned components that do the same as '8 or 10 components strung together' for the heck of it will balloon the total number of components everyone has to deal with. If you find yourself using the same 8 to 10 components together all the time, then please mention it on this forum. A lot of the currently existing components have been added because someone asked for it.
It’s not the primary components that catalyzed this thought but rather the secondary components. I was toying with a component today (twist from jackalope) that made use of three toggle components. The things they controlled are checkboxes in other apps.
Take a look at this jpg. Ignore differences; I did 'em quickly. GH required 19 components to do what SW did with 4 commands. Note the difference in screen real estate.
As an aside, I really hate SolidWorks (SW). But going forward, I’ll use it as an example because it’s what most people are familiar with.
"[...] has a far cleaner and more intuitive interface. So does SolidWorks, Inventor, CATIA, NX, and a bunch of others."
Again, GH was not designed to be an alternative to these sorts of modellers. I don't like referring to GH as 'parametric' as that term has been co-opted by relational modellers. I prefer to use 'algorithmic' instead. The idea behind parametric seems to be that one models by hand, but every click exists within a context, and when the context changes the software figures out where to move the click to. The idea behind algorithmic is that you don't model by hand.
I agree, and disagree. I believe parametric applies equally to GH AND SW, NX, and so forth, while algorithmic is unique to GH (and GC and Dynamo I think). Thus I understand why you prefer the term. I too tend to not like referring to GH as a parametric modeler for the same reason.
But I think it oversimplifies it to say parametric modelers move the clicks. SW tracks clicks the same way GH does; GH holds that information in geometry components while SW holds it in a feature in the feature tree. In both GH and SW, edits to the base geometry will drive a recalculation, but more commonly it’s an edit to input data, be it equations or just plain numbers, that drives a recalculation.
I understand the difference in these programs. What brought me to GH is that it can create a visual dialog that standard modelers can’t. But as I've grown more comfortable with it I’ve come to realize that the GUI of GH and the GUI of other parametric modelers, while looking completely different, are surprisingly interchangeable. Do not misconstrue that I’m suggesting that GH should replace its GUI with SW’s. I’m not. I refrain from suggesting anything specific. I only suggest that you allow yourself to think radically.
This is not to say there is no value in the parametric approach. Obviously it is a winning strategy and many people love to use it. We have considered adding some features to GH that would make manual modelling less of a chore and we would still very much like to do so. However this is such a large chunk of work that we have to be very careful about investing the time. Before I start down this road I want to make sure that the choice I'm making is not 'lame-ass algorithmic modeller with some lame-ass parametrics tacked on' vs. 'kick-ass algorithmic modeller with no parametrics tacked on'.
Given a choice, I'd pick kick-ass algorithmic modeller with no parametrics tacked on.
2. Visual Programming.
I'm not exactly sure I understand your grievance here, but I suspect I agree. The visual part is front and centre at the moment and it should remain there. However we need to improve upon it and at the same time give programmers more tools to achieve what they want.
I'll admit, this is a bit tough to explain. As I've re-read my own comment, I think it was partly a precursor to the context sensitivity point and touched upon other stated points.
This now touches upon my own ignorance about GH’s target market. Are you moving toward a highly specialized tool for programmers and/or mathematicians, or is the intent to create a tool that most designers can master? If it’s the former, rock on. You’re doing great. If it’s the latter, I’m one of the more technically sophisticated designers I know and I’m lost most of the time when using GH.
GH allows the same freedom as a command line editor. You can do whatever you like, and it’ll work or not. And you won’t know why it works or doesn't until you start becoming a bit of an expert and can actually decipher the gibberish in a panel component. I often feel GH has the ease of use of DOS with a badass video card in front.
Please indulge my bit of storytelling. Early 3D modelers, CATIA, Unigraphics, and Pro-Engineer, were unbelievably difficult to use. Yet no one ever complained. The pain of entry was immense. But once you made it past the pain threshold, the salary you could command was very well worth it. And the fewer the people who knew how to use it, the more money you could demand. So in a sense, their lack of usability was a desirable feature among those who’d figured it out.
Then SolidWorks came along. It could only do a fraction of what the others did, but it was a fraction of the cost, it did most of what you needed, and anyone could figure it out. There was even a manual on how to use it. (Craziness!) Within a few short years, the big three all had to change their names (V5, NX, and Wildfire (now Creo)) and change the way they do things. All are now significantly easier to use.
I can tell that the amount of development time that’s gone into GH is immense and I believe the functionality is genius. I also believe its ease of use could be greatly improved.
Having re-read my original comments, I think it sounded a bit snotty. For that I apologize.
3. Context sensitivity.
"There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them."
Unfortunately it's not as simple as that. Whether or not a conversion between two data types makes sense is often dependent on the actual values. If you plug a list of curves into a Line component, none of them may be convertible. Should I therefore not allow this connection to be made? What if there is a single curve that could be converted to a line? What if you want to make the connection now, but only later plan to add some convertible curves to the data? What if you made the connection back when it was valid, but now it's no longer valid; wouldn't it be weird if there was a connection you couldn't make again?
I've started work on GH2 and one of the first things I'm writing now is the new data-conversion logic. The goal [...] is to not just try and convert type A into type B, but include information about what sort of conversion was needed (straightforward, exotic, far-fetched, etc.) and information regarding why that type was assigned.
You are right that under some conditions, we can be sure that a conversion will always fail. For example connecting a Boolean output with a Curve input. But even there my preferred solution is to tell people why that doesn't make sense rather than not allowing it in the first place.
You bring up both interesting points and limits to my understanding of coding. I’ve reached the point in my learning of GH where I’m just getting into figuring out the sets tab (and so far I’m not doing too well). I often find myself wondering “Is all of this manual conditioning of the data really necessary? Doesn’t most software perform this kind of stuff invisibly?” I’d love to be right and see it go away, but I could easily be wrong. I’ve been wrong before.
5. Components.
"Give components a little “+” or a drawer on the bottom or something that by clicking, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to r-click on each thing on a component to do all of the settings."
I was thinking that just zooming in on a component would eventually provide easier ways to access settings and data.
I kinda like this. It’s a continuation of what you’re currently doing with things like the panel component.
"Could some of these items disappear if they are contextually inappropriate or gray out if they're unlikely?"
It's almost impossible for me to know whether these things are 'unlikely' in any given situation. There are probably some cases where a suggestion along the lines of "Hey, this component is about to run 40,524 times. It seems like it would make sense to Graft the 'P' input." would be useful.
6. Integration.
"Why isn't it just live geometry?"
This is an unfortunate side-effect of the way the Rhino SDK was designed. Pumping all my geometry through the Rhino document would severely impact performance and memory usage. It also complicates the matter to an almost impossible degree as any command and plugin running in Rhino now has access to 'my' geometry.
"Maybe add more Rhino functionality to GH. GH has no 3D offset."
That's the plan moving forward. A lot of algorithms in Rhino (Make2D, FilletEdge, Shelling, BlendSrf, the list goes on) are not available as part of the public SDK. The Rhino development team is going to try and rectify this for Rhino6 and beyond. As soon as these functions become available I'll start adding them to GH (provided they make sense of course).
On the whole I agree that integration needs a lot of work, and it's work that has to happen on both sides of the aisle.
You work for McNeel yet you seem to speak of them as a separate entity. Is this to say that there are technical reasons GH can only access things through the Rhino SDK? I’d think you would have complete access to all Rhino API’s. I hope it’s not a fiefdom issue, but it happens.
7. Documentation.
Absolutely. Development for GH1 has slowed because I'm now working on GH2. We decided that GH1 is 'feature complete', basically to avoid feature creep. GH2 is a ground-up rewrite so it will take a long time until something is ready for testing. During this time, minor additions and of course bug fixes will be available for GH1, but on a much lower frequency.
Documentation is woefully inadequate at present. The primer is being updated (and the new version looks great), but for GH2 we're planning a completely new help system. People have been hired to provide the content. With a bit of luck and a lot of work this will be one of the main selling points of GH2.
That raises a question I have to ask: when is GH 1.0 scheduled to launch? And if you need another person to proofread the current draft of the new primer, I'd be glad to help:
patrick@girgen.com
I can’t believe wikipedia has an entry for feature creep. And I can’t believe you included it. It made me giggle. Thanks.
8. 2D-ness.
"I know you'll disagree completely, but I'm sticking to this. How else could an omission like offsetsurf happen?"
I don't fully disagree. A lot of geometry is either flat or happens inside surfaces. The reason there's no shelling (I'm assuming that's what you meant, there are two Offset Surface components in GH) is because (a) it's a very new feature in Rhino and doesn't work too well yet and (b) as a result of that isn't available to plugins.
I believe it’s been helpful for me to have figured this out. I recently completed a GH course at a local Community College and have done a bunch of online tutorials. The first real project I decided to tackle has turned out to be one of the more difficult things to try. It’s the source of the questions I posted. (Thanks for pointing out that they were posted in the wrong spot. I re-posted to the discussions board.)
I just can't seem to figure out how to turn the voronoi into legitimate geometry. I've seen this exact question posted a few times, but it’s never been successfully answered. What I'm showing here is far more angular than I’m hoping for. The mesh is too fine for weaverbird to have much of an effect. And I haven't cracked re-meshing. Btw, in product design, meshes are to be avoided like the plague. Embracing them remains difficult.
As for offsetsurf, in Rhino, if you do an offsetsurf on a solid body, it executes it on all sides, creating another neatly trimmed body that's either larger or smaller than the original. This is how every other app I know of works. GH’s offsetsurf creates a bunch of unjoined faces spaced away from the original brep. A common technique for 3D voronois (yes, I hit the voronoi overuse easter egg) is to find the center of each cell and scale the cells about this center. If you think about it, this creates a different distance from the face of the scaled cell to the face of the original cell for every face. As I've mentioned, this project is giving me serious headaches.
Don't get me wrong, I appreciate the feedback, I really do, but I want to be honest and open about my own plans and where they might conflict with your wishes. Grasshopper is being used far beyond the boundaries of what we expected and it's clear that there are major shortcomings that must be addressed before too long. We didn't get it right with the first version, I don't expect we'll get it completely right with the second version but if we can improve upon the -say- five biggest drawbacks (performance, documentation, organisation, plugin management and no mac version) I'll be a happy puppy.
--
David Rutten
Thank you for taking the time to reply, David. Often we feel that posting such things is like sending them into the empty ether. I’m very glad that this was not the case.
And thank you for all of the work you've put into GH. If you found any of my input overly harsh or ill-mannered, I apologise. It was not my intent. I'm generally not the ranting sort. If I hadn't intended to provide possibly useful input, I wouldn't have written.
Cheers
Patrick Girgen
Ps. Any pointers on how to get a bit further on the above project would be greatly appreciated.
…
k questions during the workshop but I will be using Ladybug example files which you can download from the Ladybug group page so you can follow along with the workshop. You can send me your questions later.
The workshop is a part of the Environmental Design Studio for the students of the Master in Environmental Building Design (MEBD) program at PennDesign.
Tentative outline:
Getting Started: (epw file, 3d charts, conditional statements)
Wind analysis: (windrose, conditional statement > potential for natural ventilation)
Sky + Radiation: (radiation rose, sky dome)
Sunpath + Shading design: (hourly data overlay, conditional statement)
Radiation, Sunlight hours analysis (roof design, optimum orientation)
In case we have time we will also cover one example about:
Parametric optimization
To watch the workshop please register below:
https://attendee.gotowebinar.com/register/7552068540507890433
After registering, you will receive a confirmation email containing information about joining the webinar. Thanks to Thornton Tomasetti for making streaming the workshop possible.
Mostapha
PS. Sorry for the short notice. Please feel free to forward this to anyone of interest. There will be a similar workshop next Sunday (April 6th) at the same time on Honeybee for daylighting simulation. I will send the registration link early next week. Stay tuned!…