ke triangle faces like in the 2D case, where mostly hexagons/pentagons form the dual of a triangular mesh. What you are seeing is in fact fragments of the original non-flat mesh surface.
Perhaps I could isolate the mostly-hexagonal cells themselves and create alternative cells with patches for faces, to handle the non-flat faces. If you look very closely at the literature figures, they simply leave out the interior lines in surface faces that are themselves made of multiple mesh faces, whereas I'm outputting NURBS, so I end up with polysurface faces when I make a formal clipped Voronoi.
In the 2D case, flattening the cell edges is equivalent to flattening the 3D faces, but that's rarely what people want to do in 2D, so they just chop the boundary up into small curved cell edges:
Clipping the 3D case at all was going to be difficult without grabbing a small hexagonal/pentagonal piece of the original mesh, but once I have done that, I can then possibly replace it with a single, often non-flat, surface patch as an option instead. If I tried to make them all flat, it would require altering the geometry at least in places, and likely in most places. See the figure on the right: the faces are not flat!
The question is whether the Rhino Patch command will reliably close the cell with a single patch instead of a faceted polysurface.
I'll look into this. One option is to include the center point in the patch-forming command, so the face isn't flattened as much.
Doing Patch in Rhino manually, I'm *not* getting a closable solid easily:
Any ideas? I can increase the spans of the patch, I guess, without a huge memory hit, since it's just surface pieces. Even with 10 spans and a stiffness of only 1 it still won't close, though. Ah, is it because the clipping itself leaves sharp facets, and a patch simply will not form a sharp kink within a single surface, so it will never close?! 30 spans is already getting up there, and it won't close either:
Not even if I include a mesh version of the polysurface face in my Patch command will it close the solid, even with low stiffness, since it simply will not make a proper kink at the edge. It can't, really: a patch is a single surface, and it would require a huge number of UV control points to get within closing tolerance.
I'm kind of stumped. I've included a file if you want to show me how to patch that surface.
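For what it's worth, the span/stiffness sweep can also be scripted, which is quicker than running Patch by hand. A minimal RhinoPython sketch, assuming a rhinoscriptsyntax build that exposes AddPatch (its 'flexibility' argument corresponds to the Stiffness option of the Patch dialog):

import rhinoscriptsyntax as rs

# Select the border curves / faces bounding the open cell.
ids = rs.GetObjects("Select geometry for the patch")

if ids:
    # Sweep a few span counts; 'flexibility' maps to the dialog's Stiffness.
    for spans in (10, 20, 30):
        patch = rs.AddPatch(ids, (spans, spans), flexibility=1.0)
        if patch:
            print("Patch with {0} spans: {1}".format(spans, patch))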
Loft to a point from the border curve to the vertex just gives back a more complicated polysurface:
…
as one element.
Thank you
Comment by karamba on October 7, 2014 at 11:27pm
Hello Patricio, divide the beams in such a way that each boundary vertex of the shell becomes an endpoint of a beam segment.
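A minimal GhPython sketch of this subdivision, assuming 'beams' (the beam axis curves) and 'vertices' (the shell boundary points) as hypothetical component inputs:

# Split each beam curve at the parameters closest to the shell boundary vertices.
tol = 0.001  # distance tolerance for a vertex to count as lying on a beam

segments = []
for crv in beams:
    params = []
    for pt in vertices:
        ok, t = crv.ClosestPoint(pt)  # parameter of the closest point
        if ok and crv.PointAt(t).DistanceTo(pt) < tol:
            params.append(t)
    pieces = crv.Split(params) if params else None
    segments.extend(list(pieces) if pieces else [crv])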
Best, Clemens
Comment by Llordella Patricio on October 8, 2014 at 8:30am
Hi Clemens,
I did what you suggested, but now the 'Assemble' component doesn't work properly. Could you please tell me how to fix it? Thanks in advance, Patricio
8-10-14losa cadena.gh
Comment by karamba on October 8, 2014 at 11:59am
Hi Patricio, if you flatten the 'Elem'-input at the 'Assemble'-component, the definition works. The triangular shell elements have linear displacement interpolation, whereas the beam deflections are exact. In order to get correct results you should refine the shell mesh.
Best, Clemens
Comment by Llordella Patricio on October 9, 2014 at 8:35am
Hello, I succeeded in creating the mesh for the slab and built the beam segments, but the deformations are not what I expected, because the beam deforms together with the slab.
Thanks for the help
PS: maybe I'm using the program for a type of structure for which it is not the most appropriate, judging by the examples of other structures I saw. But this is the type of structure that students are taught.
best regards
Patricio
9-10-14 Example 1.gh
Comment by karamba on October 9, 2014 at 10:46am
You could use the 'Mesh Edges'-component to retrieve the naked edges and turn them into beams - see attached file: 91014Example1_cp.gh
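A minimal GhPython sketch of the same extraction, assuming 'mesh' is a component input holding the shell mesh:

beam_axes = []
naked = mesh.GetNakedEdges()  # boundary loops as an array of Polylines (or None)
if naked:
    for pline in naked:
        # Each polyline segment becomes one beam axis.
        beam_axes.extend(pline.GetSegments())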
Best regards,
Clemens
Comment by Llordella Patricio on October 15, 2014 at 3:41pm
Dear Clemens,
I was doing a rough estimate of the deformation, and I cannot achieve the same result with Karamba. When I compare my rough estimate against Karamba for the beams alone, the results are very similar; I think the problem arises when I connect the shell, because then the results no longer agree.
I sent the GH file and an image of the calculation.
The structure is concrete. The result I get is 0.58 cm.
Thank you, Patricio
15-10-14 Example.gh
Comment by karamba yesterday
Dear Patricio,
try to increase the number of shell elements. As mentioned in the manual, they are linear elements. A mesh that is too coarse leads to a response which is stiffer than that of the real structure.
Best,
Clemens
…
isseminated at the firms I've worked at:
Always write your scripts as though someone else will have to use and debug them without any instruction from you. This is an overall governing principle that drives many of the other best practices.
Structure your definitions left to right. This makes it clear what depends on what and what executes in what order, and it makes it easy to use the "Moses" tool (alt-click and drag on the canvas to spread components apart) to insert intermediate functionality into an already-existing definition.
For a given functional group (a set of components that does a well-defined thing), keep all the data going into the group as labeled parameters on the left, and everything coming out of the group as labeled parameters on the right. This is the intent of my "Best Practicizer" tool in Metahopper (now a menu item instead of a component). In essence, you're treating each group as though it's about to be clustered: you've defined what it does, what its inputs are, and what its outputs are. This makes troubleshooting much easier - if something is going wrong, you can quickly isolate which group is causing the trouble by looking at inputs and outputs.
If you're grabbing some value or data (e.g. "STEP COUNT") many times from elsewhere in the definition, don't make a bunch of long wires that connect all the way back to the original source - grab that data into ONE labeled parameter and then connect all the inputs that need it to that. It makes for one long wire instead of 20.
Annotate, annotate, annotate. Label your params (and if you're in icon mode, switch them manually to text). Label your groups. Use scribbles to mark larger regions of functionality. Use panels for "instructions" wherever it might not be clear how someone is supposed to use your tools.
Avoid "wireless" (Hidden Wires) connections. If you MUST use them, make sure you create params at both ends with matching names so it's clear what the data represents and where it comes from.
Cluster where possible. It's extremely helpful to isolate functional groups into clusters - it makes debugging faster and easier, since you don't have to wait for the whole definition to recompute when making small edits to the inside of a cluster, and it sets you up well to create code that can be re-used later on. However, don't take whole definitions and cluster them. As a rule of thumb, if a cluster has more than ~10 inputs, it should probably be broken into multiple clusters. There is a slight performance impact when clustering, because unlike an un-clustered group of components, which only executes the parts of the definition where something has changed, any time ANY input to a cluster changes, the WHOLE cluster re-computes. Because of this, a cluster shouldn't generally wrap any groups of components that are not related / don't connect with each other.
Color code your groups. Many firms develop a standard around group coloring so that it's easy to understand what parts of a definition are doing what kind of task. For instance, at Woods Bagot where I work, we have different colors for component groups that highlight inputs, outputs, Rhino references, baking, and visualization. You may find that a different set is useful to you, but having a consistent standard can improve legibility.
That's my 2c. At the end of the day, everyone works a little bit differently, and that's unavoidable (and not even a bad thing!). As long as you keep #1 in mind, all the rest will follow.
…
rm. I'm guessing this is not what is referred to as a "plug-in".
I can access and retrieve an existing Rhino application object, or create a new one, from my form. However, when I try to interact with Rhino in any way through my own functions defined in a class library (.dll), using functions stored in the RMA library, I keep hitting a brick wall. I just can't get it to work.
As a test (see the code below), I have a console application that calls a Class1 object (referenced as an external dll) which simply assigns a value to the x property of an On3dPoint object.
This simple instruction causes an error message that is duplicated in my more complicated project. I believe Rhino_DotNet.dll is up to date. I'm targeting the .NET 2.0 framework, using Rhino 4 SR8. Note this is a very basic test just to see whether the reference/compiler/other settings are causing the problem. I'm not interacting directly with Rhino, just testing whether I can call functions from the referenced RMA libraries. It appears I can't :(
This is the Class1 code:
Public Class Class1
    Sub test()
        ' Create an OpenNURBS point and set a coordinate;
        ' this call is what forces Rhino_DotNet.dll to load.
        Dim oPt As New RMA.OpenNURBS.On3dPoint
        oPt.x = 10
        Dim btest As Boolean = False ' unused local, handy for a breakpoint
    End Sub
End Class
This is the console code:
Module Module1
    Sub Main()
        Dim oClass As New Class1
        oClass.test() ' calling into the class library triggers the assembly load
        Dim bTest As Boolean = False ' unused local, handy for a breakpoint
    End Sub
End Module
The error message I get is:
Could not load file or assembly 'Rhino_DotNet, Version=4.0.61206.14, Culture=neutral, PublicKeyToken=552281e97c755530' or one of its dependencies. An attempt was made to load a program with an incorrect format.
HELP!!!
…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
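To sketch the idea (node names and paths here are hypothetical; in practice this could live in a GH scripting component), a simple Python round-robin dispatch might look like this:

import os, shutil, itertools

# Hypothetical compute nodes, each exposing a writable "jobs" share.
nodes = [r"\\node01\jobs", r"\\node02\jobs", r"\\node03\jobs"]
batch_dir = r"C:\sim\iterations"  # where GH wrote the .bat files

batch_files = sorted(f for f in os.listdir(batch_dir) if f.endswith(".bat"))

# Round-robin: iteration i goes to node i % len(nodes).
for bat, node in zip(batch_files, itertools.cycle(nodes)):
    shutil.copy(os.path.join(batch_dir, bat), node)
    # A watcher script on each node then picks up and runs incoming .bat files.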
I'm concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn't take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to run automatically in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
- Parametrically generate geometry
- Assign input values, materials, etc.
- Generate Radiance/E+ batch files for all iterations
Calculate
- Run separate Radiance/E+ simulations in parallel via network clusters; each run is a unique iteration
- Save all temp files to a single location on the server
Post Processing
- Run a GH script from a single computer to translate .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.); see the DA sketch below
- Collect final data in a single location (an Excel document) to be read by Design Explorer or Pollination
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
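To illustrate the DA post-processing step: Daylight Autonomy for a sensor point is just the share of occupied hours whose illuminance clears a threshold. A toy Python sketch, assuming the .ill results have already been parsed into per-point hourly lux values (300 lux is a common threshold, not necessarily Honeybee's default):

def daylight_autonomy(hourly_lux, occupied_hours, threshold=300.0):
    """Percent of occupied hours at or above the illuminance threshold."""
    hits = sum(1 for h in occupied_hours if hourly_lux[h] >= threshold)
    return 100.0 * hits / len(occupied_hours)

# Example: hourly_lux holds 8760 values for one sensor point;
# occupied_hours indexes, say, weekday 8am-6pm hours.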
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of Radiance and E+ on separate computers. I don't see the incentive to split individual runs between multiple processors, because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix-based calculations (3-5 phase methods) are greatest when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Would that let parallel computation also benefit from reusing pre-calculated information?
Clustering computers and GPU-based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only, Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer-cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare: Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
e chosen to dive into Grasshopper. I'm about 6 months in. If some of my comments are completely off, please take that to mean that a feature is too inaccessible to a newish user rather than, as I may have stated, that it's just missing.
One of my primary pain points is this: things that can be done in other programs are invariably easier in those other programs. This is a big enough issue that I doubt there's an easy solution that an armchair QB like myself can offer up.
The interface:
I've used a lot of 3D programs. I've never encountered one as difficult as Grasshopper. What in other programs is a dialog box is 8 or 10 components strung together in Grasshopper. The wisdom I often hear from the Grasshopper community is that this allows for parametric design. Yet PTC (Parametric Technology Corp.) has been making parametric design software since 1985 and has a far cleaner and more intuitive interface. So do SolidWorks, Inventor, CATIA, NX, and a bunch of others.
In the early 2000s, when parametric design software was all the rage, McNeel stated quite strongly that Rhino would remain a direct modeler and would not become a parametric modeler. Trends come, trends go, and the industry has been swinging back to direct modeling. So McNeel's decision was probably fine. But I have to wonder whether part of McNeel's reluctance to incorporate some of the tried and proven ideas of other parametric packages has its roots in that earlier declaration not to incorporate parametrics.
A Visual Programming Language:
I read a lot about the awesomeness and flexibility of Grasshopper being a visual programming language. Let's be clear: this is DOS-era speak. I believe GH should continue to have the ability to be extended and massaged with code, as most design programs do. But as long as this is front and center, GH will remain out of reach for the average designer.
Context sensitivity:
There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them.
Sliders:
I hate sliders. I understand them, but I hate 'em. I think they should be optional. Ya, I know I can r-click on the N of a component and set the integer. It's a pain, and it gives no feedback. The "N" should turn into the number once set. AAAnd, sliders should be context sensitive. I like that the name of a slider changes when I plug it into something. But if I plug it into something that'll only accept a 1, a 2, or a 3, that slider should self-set accordingly. I shouldn't be able to plug in a "50" and have everything after it turn red.
Components:
Give components a little "+" or a drawer on the bottom, or something that, when clicked, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to r-click on each thing on a component to do all of the settings.
And this item I’m guessing on. I’m not yet good enough at GH to know if this may have adverse effects. Reverse, Flatten, Graft, etc.; could these be context sensitive? Could some of these items disappear if they are contextually inappropriate or gray out if they're unlikely?
Tighter integration with Rhino:
I'm not entirely certain what this would look like. Currently my workflow entails baking, making a few Rhino edits, and reinserting into GH. I question the whole baking thing, btw. Why isn't it just live geometry? That's how other parametric apps work. Maybe add more Rhino functionality to GH: GH has no 3D offset, so I have to bake, OffsetSrf, and reinsert the geometry. I'm currently looking at the "Geometry Cache" and "Geometry Pipeline" components to see if they help, but I haven't been able to figure them out. Which leads me to:
Update all of the documentation:
I'm guessing this is an in-process thing and you're working toward rolling GH from 0.9.00075 to 1.0. GH was being updated nearly weekly earlier this year; then it suddenly stopped. If we're talking weeks before a full release, so be it. But if we're looking at something longer, a documentation update would help a lot. The help for Geometry Cache and Geometry Pipeline still reads "This is the autogenerated help topic for this object. Developers: override the HtmlHelp_Source() function in the base class to provide custom help." This does not help. And the Grasshopper Primer 2nd Ed. was written for GH 0.60007.
Grasshopper is fundamentally a 2D program:
I know you'll disagree completely, but I'm sticking to this. How else could an omission like offsetsurf happen? Pretty much every 3D program in existence has this. I'm sure I can probably figure out how to deconstruct the breps, join the curves, loft, trim, and so forth. But does writing an algorithm to do what all other 3D programs do with a dialog box seem reasonable? I'm sure if you go command by command you'll find a ton of such things.
If you look at the vast majority of things done in GH, you'll note that they're mostly either flat or a fundamentally 2D pattern on a warped surface.
I've been working on a part that is a 3D voronoi trimmed to a 3D model. I've been trying to turn the trimmed voronoi into legitimate geometry for over a month without success.
http://www.grasshopper3d.com/profiles/blogs/question-voronoi-3d-continued
I've researched it enough to have found that many others have had the exact same problem and have not solved it. It's really not that conceptually difficult, but GH lacks the tools.
Make screen organization easier:
I have a touch of OCD, and I like my GH layout to flow neatly. Allow input/output nodes to be re-ordered; this would allow a reduction in crossed wires. Make the wire positions a bit more editable: I sometimes use a geometry component as a wire anchor to clean things up, and being able to grab a wire and pull it out of the way would be kinda nice.
I think GH has some awesome abilities. I also think accessing those abilities could be significantly easier.
~p…
re are major changes and enhancements.
HONEYBEE
More Flexible Workflow - Many small modifications were made to support a more flexible workflow, such as the ability to separate a zone created with masses2Zones into editable HBSrfs that can be recombined. For the energy components, it is now possible to plug custom constructions directly into the components that set the zone constructions without writing them first into the library. For the daylighting components it is now possible to change all of the materials of specific surface types at once.
Support for Complex Geometry - Many small bugs for complex geometry have been fixed including the ability to import energy results correctly for curved NURBS surfaces as well as unconventional window configurations. Also, the intersectMasses component now almost always succeeds in splitting all of the surfaces of adjacent zones, no matter how complex the intersection is.
Automatic Download Issues Fixed - Many users who found themselves without "gendaymtx.exe" or who had trouble syncing with our GitHub know that we faced an issue with automatic background downloads. This release fixes those issues.
Air Walls - Honeybee EnergyPlus models now officially support air walls (or virtual partitions) in a basic implementation. Now, any time that you use the air wall construction or set a surface type to “air wall,” the air between adjacent zones will be automatically mixed. At present, this mixing is just a constant flow based on the surface area between zones connected by air walls multiplied by an adjustable “flow factor.” It is important to stress that this basic air mixing is not with the EnergyPlus Airflow Network, although the groundwork laid in this release will eventually allow for the implementation of the Airflow Network in future releases. As such, this present air mixing is only suitable for multi-zone conditions where there is not significant buoyancy-driven flow between zones.
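As a back-of-envelope illustration of that mixing model (values below are made up, not component defaults):

# Constant air mixing across an air wall: flow = shared area * flow factor.
shared_area = 12.0   # m2 of air wall between the two zones (example value)
flow_factor = 0.1    # m3/s per m2 of air wall (the adjustable factor)
mixing_flow = shared_area * flow_factor  # 1.2 m3/s exchanged between the zones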
Natural Ventilation - To go along with the new potential introduced by air walls, there has been a basic implementation of EnergyPlus’s natural ventilation objects in a new component called “Set EP Airflow”. The current setup allows for three possible types of natural ventilation: 1) natural ventilation through windows (with auto-calculated flow based on window area, outdoor wind speed/direction, and stack effects), 2) custom wind and stack objects that can be used to model things such as chimneys off of single zones, and 3) constant, fan-driven natural ventilation.
Additional Thermal Mass - The capability to add additional thermal mass to zones has been added. This is useful for factoring in the mass of indoor furniture or heavy interior objects such as chimneys.
New Utility Components - Abraham has added a couple of useful components to help calculate lighting loads based on bulb types and target lighting levels, as well as a converter from ACH to the m3/s-m2 that the other HB components accept. In this vein, there is also a component for adding the resistance of air films to HB constructions.
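For reference, the ACH conversion is simple arithmetic. A sketch of what such a converter computes, assuming the zone volume and floor area are known:

def ach_to_m3s_per_m2(ach, zone_volume, floor_area):
    """Convert air changes per hour to m3/s per m2 of floor area."""
    return ach * zone_volume / 3600.0 / floor_area

# Example: 0.5 ACH in a 3 m tall zone gives
# 0.5 * (area * 3) / 3600 / area = about 0.0004 m3/s-m2.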
Improved and Editable Ideal Air Loads System - The EnergyPlus ideal air system now goes through an automatic sizing period at the start of the simulation based on the extreme weeks of the weather file. Furthermore, the ability to adjust many of the parameters of the ideal air loads system has been added with a new "Set Ideal Air Loads Parameters" component. The component allows you to add heat recovery, air-side economizers, and demand-controlled ventilation.
OpenStudio Export Update - The OpenStudio workflow is still largely under development, but this release includes a version with a working VAV and PTHP system template for those curious to experiment. Note that not all of the new features available for the basic "Run Energy Simulation" component are available for the OpenStudio component (such as air walls, natural ventilation, or additional thermal mass).
Microclimate/Indoor Comfort Maps - Blossoming from initial experiments with the radiant temperature map, a workflow for looking into sub-zone microclimate and indoor comfort has been initiated. All components for this are presently under the Honeybee WIP tab but, over the next month, they will be completing their development phase and moving into the rest of the tabs. If you are interested in testing when they are ready, please let Chris know. For a teaser video of the intended capabilities, see this video: (https://www.youtube.com/watch?v=fNylb42FPIc&list=UUc6HWbF4UtdKdjbZ2tvwiCQ)
LADYBUG
Monthly Bar Chart - After much demand from multiple parties, a new component to create monthly bar and line charts has been added. The component is particularly useful for plotting the outputs of the "Average Data" component, like monthly EPW data or averaged monthly-per-hour data. It also supports daily data and any type of energy simulation result.
Wind Profile - To go along with the new capabilities of natural ventilation in Honeybee, Ladybug now has a fully fleshed-out Wind Profile component that allows you to visualize how wind speed changes with height in relation to your building geometry. The component is geared to understanding the conditions of prevailing wind and will be useful in the future for setting up CFD models. Credit goes to Djordje Spasic for adding in all of the new capabilities. In a similar vein, the appearance of the wind rose has also been improved thanks to suggestions from Alejandra Menchaca.
Faster Solar Adjusted Temperature - Thanks to the SolarCal method from the Center for the Built Environment at UC Berkeley (http://escholarship.org/uc/item/89m1h2dg), the solar adjusted temperature component now includes an option for a much faster calculation that produces results that are very close to those originally obtained with the genCumSky component. Instead of using the cumulative sky, the component can now accept the direct and diffuse radiation from the ImportEPW component. Over a whole year, this essentially takes a calculation that used to be a half-hour and shrinks it down to 10 seconds. Thanks again to those at UC Berkeley for keeping their work open source!
Instructions - Last but not least [it took me almost two years to understand this, but finally] we have a text file that describes the installation step by step and is way easier to modify than a video. You can find it in the zip file. Credit goes to Chris!
We also want to welcome Anton, Patrick, and Sandeep to the team. Anton has kicked off his development by working on a component to import and visualize EPW ground temperature data, and he will continue developing components that bring reliable precipitation data into Ladybug. On this basis, he will go on to implement Honeybee components for ground heat storage, earth tubes, rain collection, and hot water systems. Patrick and Sandeep are working on integrating Honeybee with the Energy Performance Calculator.
As always let us know your comments and suggestions.
Enjoy!…
e a fundamental failure on my part. On the other hand, Grasshopper isn't supposed to be on a par with most other 3D programs. It is emphatically not meant for manual/direct modelling. If you would normally tackle a problem by drawing geometry by hand, Grasshopper is not (and should never be advertised as) a good alternative.
I get that. That’s why that 3D shape I’m trying to apply the voronoi to was done in NX. I do wonder where the GUI metaphor GH uses comes from. It reminds me of LabVIEW.
"What in other programs is a dialog box, is 8 or 10 components strung together in grasshopper. The wisdom for this I often hear among the grasshopper community is that this allows for parametric design."
Grasshopper ships with about 1000 components (rounded to the nearest power of ten). I'm adding more all the time, either because new functionality has been exposed in the Rhino SDK or because a certain component makes a lot of sense to a lot of people. Adding pre-canned components that do the same as '8 or 10 components strung together' for the heck of it will balloon the total number of components everyone has to deal with. If you find yourself using the same 8 to 10 components together all the time, then please mention it on this forum. A lot of the currently existing components have been added because someone asked for it.
It's not the primary components that catalyzed this thought but rather the secondary ones. I was toying with a component today (Twist from Jackalope) that made use of three toggle components. The things they controlled are checkboxes in other apps.
Take a look at this jpg. Ignore the differences; I did 'em quickly. GH required 19 components to do what SW did with 4 commands. Note the difference in screen real estate.
As an aside, I really hate SolidWorks (SW). But going forward, I’ll use it as an example because it’s what most people are familiar with.
"[...] has a far cleaner and more intuitive interface. So does SolidWorks, Inventor, CATIA, NX, and a bunch of others."
Again, GH was not designed to be an alternative to these sorts of modellers. I don't like referring to GH as 'parametric', as that term has been co-opted by relational modellers. I prefer to use 'algorithmic' instead. The idea behind parametric seems to be that one models by hand, but every click exists within a context, and when the context changes, the software figures out where to move the click to. The idea behind algorithmic is that you don't model by hand.
I agree, and disagree. I believe parametric applies equally to GH AND SW, NX, and so forth, while algorithmic is unique to GH (and GC and Dynamo I think). Thus I understand why you prefer the term. I too tend to not like referring to GH as a parametric modeler for the same reason.
But I think it oversimplifies things to say that parametric modelers move the clicks. SW tracks clicks the same way GH does; GH holds that information in geometry components, while SW holds it in a feature in the feature tree. In both GH and SW, edits to the base geometry will drive a recalculation, but more commonly it's an edit to input data, be it equations or just plain numbers, that drives a recalculation.
I understand the difference between these programs. What brought me to GH is that it can create a visual dialog that standard modelers can't. But as I've grown more comfortable with it, I've come to realize that the GUI of GH and the GUIs of other parametric modelers, while looking completely different, are surprisingly interchangeable. Do not misconstrue this as a suggestion that GH should replace its GUI with SW's. I'm not. I refrain from suggesting anything specific. I only suggest that you allow yourself to think radically.
This is not to say there is no value in the parametric approach. Obviously it is a winning strategy and many people love to use it. We have considered adding some features to GH that would make manual modelling less of a chore and we would still very much like to do so. However this is such a large chunk of work that we have to be very careful about investing the time. Before I start down this road I want to make sure that the choice I'm making is not 'lame-ass algorithmic modeller with some lame-ass parametrics tacked on' vs. 'kick-ass algorithmic modeller with no parametrics tacked on'.
Given a choice, I'd pick kick-ass algorithmic modeller with no parametrics tacked on.
2. Visual Programming.
I'm not exactly sure I understand your grievance here, but I suspect I agree. The visual part is front and centre at the moment and it should remain there. However we need to improve upon it and at the same time give programmers more tools to achieve what they want.
I'll admit, this is a bit tough to explain. As I've re-read my own comment, I think it was partly a precursor to the context sensitivity point and touched upon other stated points.
This touches upon my own ignorance about GH's target market. Are you moving toward a highly specialized tool for programmers and/or mathematicians, or is the intent to create a tool that most designers can master? If it's the former, rock on; you're doing great. If it's the latter: I'm one of the more technically sophisticated designers I know, and I'm lost most of the time when using GH.
GH allows the same freedom as a command-line editor. You can do whatever you like, and it'll work or not. And you won't know why it works or doesn't until you start becoming a bit of an expert and can actually decipher the gibberish in a panel component. I often feel GH has the ease of use of DOS with a badass video card in front.
Please indulge my bit of storytelling. The early 3D modelers (CATIA, Unigraphics, and Pro-Engineer) were unbelievably difficult to use. Yet no one ever complained. The pain of entry was immense, but once you made it past the pain threshold, the salary you could command was well worth it. And the fewer the people who knew how to use it, the more money you could demand. So in a sense, their lack of usability was a desirable feature among those who'd figured it out.
Then SolidWorks came along. It could only do a fraction of what the others did, but it cost a fraction as much, did most of what you needed, and anyone could figure it out. There was even a manual on how to use it. (Craziness!) Within a few short years, the big three all had to change their names (V5, NX, and Wildfire (now Creo)) and change the way they did things. All are now significantly easier to use.
I can tell that the amount of development time that's gone into GH is immense, and I believe the functionality is genius. I also believe its ease of use could be greatly improved.
Having re-read my original comments, I think it sounded a bit snotty. For that I apologize.
3. Context sensitivity.
"There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them."
Unfortunately it's not as simple as that. Whether or not a conversion between two data types makes sense often depends on the actual values. If you plug a list of curves into a Line component, none of them may be convertible. Should I therefore not allow this connection to be made? What if there is a single curve that could be converted to a line? What if you want to make the connection now, but only plan to add some convertible curves to the data later? What if you made the connection back when it was valid, but now it's no longer valid? Wouldn't it be weird if there were a connection you couldn't make again?
I've started work on GH2, and one of the first things I'm writing is the new data-conversion logic. The goal [...] is to not just try to convert type A into type B, but to include information about what sort of conversion was needed (straightforward, exotic, far-fetched, etc.) and information regarding why that type was assigned.
You are right that under some conditions, we can be sure that a conversion will always fail. For example connecting a Boolean output with a Curve input. But even there my preferred solution is to tell people why that doesn't make sense rather than not allowing it in the first place.
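To make the graded-conversion idea concrete, a toy Python sketch (purely illustrative, not GH2's actual API):

from Rhino.Geometry import Curve, Line

def try_convert_to_line(value):
    # Return (line, grade): the cast result plus how far-fetched it was.
    if isinstance(value, Line):
        return value, "straightforward"
    if isinstance(value, Curve) and value.IsLinear():
        # A linear curve converts cleanly, but it is still a real cast.
        return Line(value.PointAtStart, value.PointAtEnd), "exotic"
    return None, "no sensible conversion"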
You bring up both interesting points and limits to my understanding of coding. I’ve reached the point in my learning of GH where I’m just getting into figuring out the sets tab (and so far I’m not doing too well). I often find myself wondering “Is all of this manual conditioning of the data really necessary? Doesn’t most software perform this kind of stuff invisibly?” I’d love to be right and see it go away, but I could easily be wrong. I’ve been wrong before.
5. Components.
"Give components a little “+” or a drawer on the bottom or something that by clicking, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to r-click on each thing on a component to do all of the settings."
I was thinking that just zooming in on a component would eventually provide easier ways to access settings and data.
I kinda like this. It’s a continuation of what you’re currently doing with things like the panel component.
"Could some of these items disappear if they are contextually inappropriate or gray out if they're unlikely?"
It's almost impossible for me to know whether these things are 'unlikely' in any given situation. There are probably some cases where a suggestion along the lines of "Hey, this component is about to run 40,524 times. It seems like it would make sense to Graft the 'P' input." would be useful.
6. Integration.
"Why isn't it just live geometry?"
This is an unfortunate side-effect of the way the Rhino SDK was designed. Pumping all my geometry through the Rhino document would severely impact performance and memory usage. It also complicates the matter to an almost impossible degree as any command and plugin running in Rhino now has access to 'my' geometry.
"Maybe add more Rhino functionality to GH. GH has no 3D offset."
That's the plan moving forward. A lot of algorithms in Rhino (Make2D, FilletEdge, Shelling, BlendSrf, the list goes on) are not available as part of the public SDK. The Rhino development team is going to try and rectify this for Rhino6 and beyond. As soon as these functions become available I'll start adding them to GH (provided they make sense of course).
On the whole I agree that integration needs a lot of work, and it's work that has to happen on both sides of the aisle.
You work for McNeel, yet you seem to speak of them as a separate entity. Is this to say there are technical reasons GH can only access things through the Rhino SDK? I'd think you would have complete access to all of Rhino's APIs. I hope it's not a fiefdom issue, but it happens.
7. Documentation.
Absolutely. Development for GH1 has slowed because I'm now working on GH2. We decided that GH1 is 'feature complete', basically to avoid feature creep. GH2 is a ground-up rewrite, so it will take a long time until something is ready for testing. During this time, minor additions and of course bug fixes will still be released for GH1, but at a much lower frequency.
Documentation is woefully inadequate at present. The primer is being updated (and the new version looks great), but for GH2 we're planning a completely new help system. People have been hired to provide the content. With a bit of luck and a lot of work this will be one of the main selling points of GH2.
That begs the question I have to ask: when is GH 1.0 scheduled to launch? And if you need another person to proofread the current draft of the new primer, count me in:
patrick@girgen.com
I can’t believe wikipedia has an entry for feature creep. And I can’t believe you included it. It made me giggle. Thanks.
8. 2D-ness.
"I know you'll disagree completely, but I'm sticking to this. How else could an omission like offsetsurf happen?"
I don't fully disagree. A lot of geometry is either flat or happens inside surfaces. The reason there's no shelling (I'm assuming that's what you meant; there are two Offset Surface components in GH) is that (a) it's a very new feature in Rhino and doesn't work too well yet, and (b) as a result it isn't available to plugins.
I believe it's been helpful for me to have figured this out. I recently completed a GH course at a local community college and have done a bunch of online tutorials. The first real project I decided to tackle has turned out to be one of the more difficult things to try; it's the source of the questions I posted. (Thanks for pointing out that they were posted in the wrong spot. I re-posted to the discussions board.)
I just can't seem to figure out how to turn the voronoi into legitimate geometry. I've seen this exact question posted a few times, but it's never been successfully answered. What I'm showing here is far more angular than I'm hoping for. The mesh is too fine for Weaverbird to have much of an effect, and I haven't cracked re-meshing. Btw, in product design, meshes are to be avoided like the plague; embracing them remains difficult.
As for offsetsurf: in Rhino, if you apply OffsetSrf to a solid body, it executes on all sides, creating another neatly trimmed body that's either larger or smaller than the original. This is how every other app I know of works. GH's offset creates a bunch of unjoined faces spaced away from the original brep. A common technique for 3D voronois (yes, I hit the voronoi-overuse easter egg) is to find the center of each cell and scale each cell about its center. If you think about it, this creates a different distance from the face of the scaled cell to the face of the original cell for every face. As I've mentioned, this project is giving me serious headaches.
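For the record, the scale-about-centroid step itself is easy to script. A minimal rhinoscriptsyntax sketch, assuming the cells are closed breps (the uneven face-to-face distance described above is inherent to the scaling, not a scripting bug):

import rhinoscriptsyntax as rs

cells = rs.GetObjects("Select closed Voronoi cells")
factor = 0.9  # uniform scale toward each cell's own centroid

for cell in cells or []:
    vc = rs.SurfaceVolumeCentroid(cell)  # (point, error) for closed breps
    if vc:
        rs.ScaleObject(cell, vc[0], (factor, factor, factor), copy=True)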
Don't get me wrong, I appreciate the feedback, I really do, but I want to be honest and open about my own plans and where they might conflict with your wishes. Grasshopper is being used far beyond the boundaries of what we expected and it's clear that there are major shortcomings that must be addressed before too long. We didn't get it right with the first version, I don't expect we'll get it completely right with the second version but if we can improve upon the -say- five biggest drawbacks (performance, documentation, organisation, plugin management and no mac version) I'll be a happy puppy.
--
David Rutten
Thank you for taking the time to reply, David. Often it feels like posting such things is sending them into the empty ether. I'm very glad that this was not the case.
And thank you for all of the work you've put into GH. If you found any of my input overly harsh or ill-mannered, I apologise; that was not my intent. I'm generally not the ranting sort. If I hadn't intended to provide possibly useful input, I wouldn't have written.
Cheers
Patrick Girgen
Ps. Any pointers on how to get a bit further on the above project would be greatly appreciated.
…