o re-evaluate. I try to help myself with GH groups. If I don't need to see dependencies on my most recently created geometry (at the end of the definition), I can just select all the components I don't want to see and deactivate them. This is rare though. I will usually need to see a specific surface or curve just for a short moment as I modify sliders, to double-check how everything works together and whether I perhaps need to modify something.
I know about the option to "only draw preview of selected component", yet it is rarely useful because you always need to see other geometry as well that depends on the selected component.
GH seems to have some kind of hierarchy determining which component's preview is shown on top of another's. This can create a great amount of confusion, because you effectively highlight something but nothing green shows up in Rhino until you have un-previewed 10+ components elsewhere in your definition that sit "on top" of your desired component.
Thus: it'd be nice if selecting a component would ALWAYS show its green preview. I would also like to test how "mouse-hovering over a component = green preview" would work.
Moreover, finding the component you want to see and disabling all the others that hide it is one thing; the next is the different preview modes. I'm not talking about wireframe or shaded mode, I mean things like the "Dash" component. Essentially it's just a different kind of preview. I really like it. It's fantastic to use on guide curves that represent the boundaries of your surfaces. It's just a bit laborious to always set up a new Dash component for every curve and edit their lengths and gutters, etc. So you will find yourself feeding multiple inputs into one Dash component. This becomes very confusing once your definition has grown larger, with several wires crossing all over it just to go into that one Dash component. Hiding or fainting the wires doesn't really help either, because I still need to be able to see which curves have been assigned this special type of preview.
So a second suggestion: allow assigning previews on the component itself, rather than using an external one. Simply indicate the preview type with an icon, similar to how graft, simplify, and so on are indicated. Then, with this info, have a library of components being previewed, with attributes for what "mode" they're previewed in: wireframe, shaded, dashed, etc., ...even "preview = off". Allow grouping them by their preview type. Again, I would be curious to see how "mouse-hover = highlight component in green" works. Double-click could jump you to the component on the RH canvas, just like GH's "saved views" work right now.…
y from the Rhino model and having the absorption coefficients of the materials that are entered into Pachyderm, why is it not possible to generate a reverberation time diagram without the need to start any analysis?
MAPPING METHOD: When, for example, the mapping of the Strength Index (G) is generated through the "create map" option, I subsequently can't generate any other energy criterion map; I have to redo the simulation.
Is it a limitation of the software, or am I doing something wrong?
MAPPING METHOD: I kindly wanted to ask what the difference between minimum and detailed convergence is, and why the reflection order taken into account for the simulation is not specified. Does the mapping method use only the Raytracing Method, or the Image Source Method too?
MAPPING METHOD: Why is the mapping value that can be exported to Rhino not generated for all the calculation raster points, but for a maximum of only 100 values?
MAPPING METHOD: This method hasn't been implemented in Grasshopper yet, has it?
RAYTRACING METHOD (Pach:RT): I did a raytracing through the components of GH, using only the Pach_RT, and I had these curious results in terms of time:
RaysCount: 15,000, IS_Order: 1 = 5 min
RaysCount: 15,000, IS_Order: 2 = 12 min
RaysCount: 15,000, IS_Order: 3 = 3 min
RaysCount: 15,000, IS_Order: 4 = 8 min
RaysCount: 15,000, IS_Order: 5 = 3 min
Why does a raytracing with only order 2 take so much longer than orders 3, 4 and 5?
ANALYSIS RESULT: Would there be a way to export all the results of a simulation to a .txt file, as is done in Odeon?
I apologize in advance for asking so many questions; I hope you can find the time to answer them,
Yours sincerely from Müller-BBM…
uts an instance of a class type, which I refer to as Class Instance A for this discussion post.
Component B has a for-loop, through which it does a few things and makes "n" variations (in this case, just 3) of Class Instance A. The simplified/generalized structure of Component B is as follows:
a = []
for i in range(n):
    variation = DoSomething(StartShape)
    a.append(variation)
    for obj in ghenv.Component.OnPingDocument().Objects:
        if obj.Name == "MakeAssembly":
            obj.ExpireSolution(False)
I am not sure if I am using the ExpireSolution method correctly here. I also have found Grasshopper.Instances.ActiveCanvas.Document.NewSolution(False) in some other discussions, but it doesn't seem to solve this problem. Or maybe I have the placement of ExpireSolution or NewSolution wrong. Upon pressing F6 then F5, Component B keeps updating Class Instance A, instead of "resetting" it to its original condition. The for-loop in Component B must receive the original Class Instance A every time in order for it to spit out variations. I need to find a way to expire the solution for Component A only, at the end of each for-loop. Eventually, I need it to loop 100+ times, so manually pressing F5 is not really an option... Below is an image of what should happen:
What is the best way to tackle this problem (in either Grasshopper-Python or IronPython), so that the same original Class Instance A is fed in every time the loop is run in Component B?
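To make the goal concrete, here is the behavior I am after, demonstrated in plain Python with copy.deepcopy (the class and function names below are hypothetical stand-ins, not my actual Grasshopper code):

```python
import copy

class StartShape:
    """Hypothetical stand-in for Class Instance A."""
    def __init__(self):
        self.points = [0, 0, 0]

def do_something(shape, i):
    # Hypothetical stand-in for DoSomething: mutates the shape in place.
    shape.points[0] += i + 1
    return shape

original = StartShape()
variations = []
for i in range(3):
    # Deep-copy so every iteration starts from the pristine original,
    # instead of accumulating mutations from previous iterations.
    fresh = copy.deepcopy(original)
    variations.append(do_something(fresh, i))

print(original.points)                    # [0, 0, 0] -- untouched
print([v.points[0] for v in variations])  # [1, 2, 3] -- independent variations
```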
Thank you! …
rasshopper data in two ways: GH and GHX. Since Grasshopper 0.8.0050, the default format is GH.
GH obviously stands for 'GrassHopper' and GHX adds an X for Xml. There is no functional difference between GH and GHX files concerning information content, however they are obviously not the same.
Grasshopper data is stored in a hierarchical, type-safe database format which is defined and managed by the GH_IO.dll library. These databases can be read from and written to binary and text-based files.
GH is a binary format, meaning it stores its data as pure bytes. This makes the file unreadable by human eyes and also rather difficult to parse by programs. Data inside GH files is however stored quite efficiently -i.e. with little overhead- and it is even compressed using standard Deflate compression, reducing the file size even more.
GHX on the other hand is text based, meaning all data is stored as human-readable strings of characters. This means there is in fact a large amount of overhead needed and GHX files will be 10 to 100 times as big as GH files. When Grasshopper puts data in the clipboard, it is stored as GHX.
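The size difference is easy to demonstrate with standard Deflate compression; a quick sketch using Python's zlib on a repetitive, made-up XML string (not real GHX data):

```python
import zlib

# A verbose, repetitive XML fragment standing in for GHX-style data
# (made up for illustration; the real GHX schema differs).
xml_text = '<chunk name="slider"><item name="value">0.5</item></chunk>' * 200
raw = xml_text.encode("utf-8")

# Deflate compression, the same scheme used inside binary GH files.
compressed = zlib.compress(raw)

# The round trip is lossless, but the compressed form is a small
# fraction of the original size.
assert zlib.decompress(compressed) == raw
print(len(raw), len(compressed))
```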
Why do we need both formats and when should you use one or the other?
Benefits of GH over GHX:
Much smaller footprint. Saves and loads faster from/to a slow medium.
GH is unknown to all software except Grasshopper. There is no risk of anything trying to parse it behind your back. A common problem with GHX is that web browsers and email clients try to interpret XML data, potentially ruining it.
Benefits of GHX over GH:
GHX can be easily parsed, modified and even written by hand in a text editor, or by programs, without the need to use GH_IO.dll.
If a GHX file gets damaged it's possible in theory -though often still rather difficult in practice- to recover parts of it. This is not possible at all with damaged GH files.
Basically, you're almost always better off using GH. Only if you want to interact with the Grasshopper data without using the Grasshopper interface does it really make sense to use GHX.
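As an illustration of the parseability point above, a GHX-flavoured fragment can be read with nothing more than Python's standard xml.etree (the fragment below is invented; the real GHX schema is more involved):

```python
import xml.etree.ElementTree as ET

# Invented fragment in the spirit of GHX; not the actual schema.
ghx_fragment = """
<Archive name="Root">
  <chunk name="Definition">
    <items>
      <item name="plugin_version" type_name="gh_string">0.8.0050</item>
    </items>
  </chunk>
</Archive>
"""

root = ET.fromstring(ghx_fragment)
# XPath-style lookup by attribute, no GH_IO.dll required.
item = root.find(".//item[@name='plugin_version']")
print(item.text)  # 0.8.0050
```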
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 11:31am on September 28, 2012
mbre, 9:00 am to 8:00 pm. This workshop is aimed primarily at architects and designers interested in learning parametric and generative design applied to the generation and rationalization of complex geometries for implementation in different design processes. The course will cover the basic concepts and the methodology for tackling various design problems through the development of algorithmic tools in a visual programming language, as well as digital fabrication schemes. No prior knowledge of Rhinoceros 3D or programming is required; prior CAD knowledge is desirable. Students: 2,500 MXN. Professionals: 3,000 MXN
RENDER CONTEST - 100% SCHOLARSHIP - Parametric & Generative Architecture & Design Grasshopper Workshop.
- Post your render at www.facebook.com/3dmetrica - The render with the most likes wins. - Voting deadline: September 15, 2012.
Information and registration: workshop@3dmetrica.com 04455 28790084 www.3dmetrica.com www.facebook.com/3dmetrica
…
that both the ASHRAE and European Adaptive models were derived from surveys of awake occupants. While the topic has not been investigated as well as it should be, the few adaptive-style surveys of sleeping occupants that have been conducted show that people tend to desire significantly cooler temperatures when they are sleeping as opposed to when they are awake.
Notably, Chapter 8 of Humphreys' recently-published book on Adaptive Comfort (https://books.google.com/books?id=lOZzCgAAQBAJ&printsec=frontcover&dq=Adaptive+Thermal+Comfort+Foundations+and+analysis&hl=en&sa=X&ved=0ahUKEwi6npqSi__KAhUJMj4KHf7SCXMQ6AEIKjAA#v=onepage&q=Adaptive%20Thermal%20Comfort%20Foundations%20and%20analysis&f=false) provides some interesting insights into this. In a 1973 survey, Humphreys found that the quality of sleep started to deteriorate at temperatures above 24-26C regardless of the time of year, and that there was no clearly-determinable lower limit to comfortable sleeping temperatures (in other words, people were fine at 12C if they were given enough blankets). He surveyed only British occupants who were sleeping in traditional beds with mattresses and a wide range of blankets. This is important because the nature of the findings is such that the comfort temperatures would be very different if the survey participants had been sleeping in a hammock or in closer contact with the ground (both popular practices for a number of cultures living in warmer climates). Traditional mattresses cut the ability to radiate body heat in half compared to a standing human body, and I would venture a guess that this is a big reason why much cooler temperatures are desired while sleeping on mattresses as opposed to standing awake/upright.
So for your case, if you want to account for a time of day when occupants are sleeping on mattresses, I would lower the comfort temperature for these hours to 24C. Otherwise, if you are trying to show the comfortable hours of awake people in your space, your current 100% comfortable nighttime hours are a better estimate. I have also noticed that nighttime temperatures become comfortable in extreme weeks of hot/dry climates. This is what is happening in this extreme week simulation of Los Angeles' San Fernando Valley here:
https://www.youtube.com/watch?v=WJz1Eojph8E&index=3&list=PLruLh1AdY-Sj3ehUTSfKa1IHPSiuJU52A
I will put in the ability to set custom values for comfort temperatures into the Adaptive Comfort Recipe soon so that you can test out a 'sleeping comfort temperature' if you would like. I have created a github issue for it here:
https://github.com/mostaphaRoudsari/Honeybee/issues/486
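In the meantime, the effect of a lower sleeping comfort ceiling is simple to approximate outside the recipe. A minimal sketch with made-up hourly temperatures (this is not Honeybee/Ladybug API code):

```python
# Made-up hourly operative temperatures for one night (22:00-06:00).
night_temps = [27.1, 26.4, 25.5, 24.8, 24.2, 23.6, 23.1, 22.7, 22.4]

AWAKE_CEILING = 28.0   # example adaptive upper limit for awake occupants
SLEEP_CEILING = 24.0   # lower ceiling suggested by Humphreys' sleep data

def comfortable_hours(temps, ceiling):
    # No lower limit is applied, mirroring the finding that people
    # remain comfortable at low temperatures given enough blankets.
    return sum(1 for t in temps if t <= ceiling)

print(comfortable_hours(night_temps, AWAKE_CEILING))  # 9 -- all hours pass
print(comfortable_hours(night_temps, SLEEP_CEILING))  # 4 -- only the cooler hours
```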
I was not as convinced by Nicol's argument about humidity on those pages as I was by the correlations of both operative temperature and effective temperature to surveyed comfort votes in real buildings. Humphreys shows these correlations on page 106 of the book I linked to above. Notably, the correlation of Effective Temperature to comfort votes (0.257) is slightly worse than the correlation of just Operative Temperature (0.265). In other words, trying to account for humidity actually weakened the predictive power of the metric. This difference in correlation is not so great as to make me discount an Adaptive comfort model based on Effective Temperature (as de Dear once proposed). However, the correlations of PMV (0.213) and SET (0.185) to comfort votes are so poor that I now use the PMV model only with great caution.
The reasons for the decreased importance of humidity may be multi-faceted, whether Nicol's explanation or another. Still, the data suggest that we are probably better off ignoring humidity when forecasting comfort and should only consider it when evaluating conditions of extreme heat stress, where people's primary means of losing heat is sweating.
-Chris…
/www.grasshopper3d.com/forum/topics/vb-vs-c-vs-python
http://www.grasshopper3d.com/forum/topics/which-programming-language-should-i-focus-on-vb-or-python
VB.Net and C#
VB.Net and C# both belong to the ".Net" family of languages, and the things you can do with them in Rhino/Grasshopper are nearly 100% equivalent. Grasshopper itself was written in a combination of VB.Net and C#. Some advantages/comments, in no particular order:
Performance - VB.Net and C# scripts tend to execute faster because they are "Just-in-time" compiled as opposed to interpreted.
Autocomplete - both VB.Net and C# have rich autocomplete functionality in their respective script editor components - significantly more so than the python editor. This can be helpful for beginners since you can "hunt" for methods and properties by just typing a "." after an object name and looking at the list of available methods/properties.
Native Component development - If you eventually want to develop GHA assemblies/plug-ins for grasshopper, as of Rhino 5 you will have to use one of these two languages. However, there are plans to introduce python-based plugins in Rhino 6. Even so, the resources around plug-in development are very rich in the C# and VB.Net environments (with c# seeming to be the more popular of the two).
"Strong Typing" - VB.net to some degree, and C# especially, are less "forgiving" languages than python - they require you to know about the data type of the objects you're operating on. This can sometimes result in more verbose code - as you explicitly convert from type to type - but it also promotes good programming practice and helps make errors more understandable.
.Net ecosystem - using a .Net language means you have access to the thousands of libraries publicly available, and the process of referencing these libraries and making use of them is comparatively straightforward relative to python. More on this in the following section.
Resources/Support - At least as of 2012, VB and C# turned up more results on this forum than python, and I think you'll find slightly more expert-level coders in those languages able to help you here.
Which of the two - C# or VB.Net? - Personally, I greatly prefer C# - I find it to be cleaner and clearer to use. I also have some programming background in C++/Java/Processing so I found the "C family" approach to be more familiar. As David and Damian point out in some of the posts linked above, C# is more popular than either python or VB.net in the rest of the coding world. However, if you are learning without any prior programming experience you may find VB.net to be a bit easier to learn.
Python
Python is, without a doubt, a beautiful and elegant language, which is probably more than can be said for VB.Net/C#. It is very popular with beginner coders, and its syntax is more readily understandable.
Syntax - Python is beautiful to read and write. Its syntax is very clear and free of extraneous punctuation (for example the ";" line endings in c#). It has many very nice language features that make common tasks more concise, like its loop syntax, list comprehensions, and the "map" and "filter" functions.
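For a small taste of those features, compare a list comprehension with the equivalent map/filter form:

```python
nums = [1, 2, 3, 4, 5, 6]

# List comprehension: squares of the even numbers, in one readable line.
even_squares = [n * n for n in nums if n % 2 == 0]

# The same result in functional style with map and filter.
even_squares_functional = list(map(lambda n: n * n,
                                   filter(lambda n: n % 2 == 0, nums)))

print(even_squares)             # [4, 16, 36]
print(even_squares_functional)  # [4, 16, 36]
```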
Multiple ways to talk to Rhino/Grasshopper - Python enables two general approaches to interacting with the Rhino/Grasshopper environment: RhinoCommon and RhinoScriptSyntax. If you have prior experience with Rhinoscript, you may find RhinoScriptSyntax to be preferable - it adapts many of the methods you're familiar with to the python language, and simplifies some tasks. A word of caution though - working with RhinoScriptSyntax can introduce a performance hit relative to RhinoCommon operations. C# and VB.net by contrast can only work with RhinoCommon.
"Goodies" - The Python environment in Grasshopper has some "special features" that the other languages lack. In particular, the "GHPythonLib" library enables the ability to call most Grasshopper components from within your code, and the ability to easily enable parallel processing to improve performance. (A word of caution though - these two features do not seem to "play well" with each other, there may be bugs causing memory leaks that result in increasingly worse performance with each execution).
Cross-Platform - Unlike C#/VB.net, Python can be used natively in Rhino for Windows and Rhino for Mac.
Direct scripting in Rhino - You can also use Python directly in the Rhino environment without the need for Grasshopper if you desire, using the Rhino Python editor.
IronPython / Ecosystem issues - one frustration / potential downside to working with Python for Rhino/GH is that though there is a vast, amazing ecosystem of external libraries for Python, getting these to install/work properly in the Rhino/GH environment can be a real pain - largely because the language is actually "IronPython," a version of python designed to work closely with the .Net ecosystem. Many popular libraries like numpy and scipy are very challenging to get working in Rhino/GH.
Scripting in other programs - Especially in the AEC industry, Python is a popular scripting language for other applications. Tools like Revit, Dynamo, Blender, and ArcGIS all offer their own Python scripting interface - so learning Python in Rhino/GH can give you a leg up in eventually scripting in these other programs.
Python's Stock is Rising - there are currently a number of efforts to improve the "status" of python within the Rhino/GH ecosystem. The python editor in Rhino 6 has a number of improvements, not least of which is the ability to "compile" add-ons for Grasshopper written in python. I'm sure Giulio can speak to other upcoming improvements.
I hope this summary helps you find the right option for you. Ultimately you can't go wrong; concepts from any of the available scripting languages will make it much easier to learn the next one. In my day to day work I use a combination of both C# and python, where appropriate, and I love them both.
I hope others will feel welcome to chime in on this FAQ and add their own thoughts about advantages/disadvantages of these various options! If you have time, read through some of the other posts linked to at the beginning - there's lots of additional great information there. …
r." I'm sorry to hear that, I take the interface and ease-of-use rather seriously so this sounds like a fundamental failure on my part. On the other hand, Grasshopper isn't supposed to be on a par with most other 3D programs. It is emphatically not meant for manual/direct modelling. If you would normally tackle a problem by drawing geometry by hand, Grasshopper is not (and should never be advertised as) a good alternative."What in other programs is a dialog box, is 8 or 10 components strung together in grasshopper. The wisdom for this I often hear among the grasshopper community is that this allows for parametric design."Grasshopper ships with about 1000 components (rounded to the nearest power of ten). I'm adding more all the time, either because new functionality has been exposed in the Rhino SDK or because a certain component makes a lot of sense to a lot of people. Adding pre-canned components that do the same as '8 or 10 components strung together' for the heck of it will balloon the total number of components everyone has to deal with. If you find yourself using the same 8 to 10 components together all the time, then please mention it on this forum. A lot of the currently existing components have been added because someone asked for it."[...] has a far cleaner and more intuitive interface. So does SolidWorks, Inventor, CATIA, NX, and a bunch of others."Again, GH was not designed to be an alternative to these sort of modellers. I don't like referring to GH as 'parameteric' as that term has been co-opted by relational modellers. I prefer to use 'algorithmic' instead. The idea behind parameteric seems to be that one models by hand, but every click exists within a context, and when the context changes the software figures out where to move the click to. The idea behind algorithmic is that you don't model by hand.This is not to say there is no value in the parametric approach. Obviously it is a winning strategy and many people love to use it. 
We have considered adding some features to GH that would make manual modelling less of a chore and we would still very much like to do so. However this is such a large chunk of work that we have to be very careful about investing the time. Before I start down this road I want to make sure that the choice I'm making is not 'lame-ass algorithmic modeller with some lame-ass parametrics tacked on' vs. 'kick-ass algorithmic modeller with no parametrics tacked on'.
Visual Programming. I'm not exactly sure I understand your grievance here, but I suspect I agree. The visual part is front and centre at the moment and it should remain there. However we need to improve upon it and at the same time give programmers more tools to achieve what they want.
Context sensitivity."There is no reason a program in 2014 should allow me to make decisions that will not work. For example, if a component input is in all cases incompatible with another component's output, I shouldn't be able to connect them."Unfortunately it's not as simple as that. Whether or not a conversion between two data types makes sense is often dependent on the actual values. If you plug a list of curves into a Line component, none of them may be convertible. Should I therefore not allow this connection to be made? What if there is a single curve that could be converted to a line? What if you want to make the connection now, but only later plan to add some convertible curves to the data? What you made the connection back when it was valid, but now it's no longer valid, wouldn't it be weird if there was a connection you couldn't make again?I've started work on GH2 and one of the first things I'm writing now is the new data-conversion logic. The goal this time around is to not just try and convert type A into type B, but include information about what sort of conversion was needed (straightforward, exotic, far-fetched. etc.) and information regarding why that type was assigned.You are right that under some conditions, we can be sure that a conversion will always fail. For example connecting a Boolean output with a Curve input. But even there my preferred solution is to tell people why that doesn't make sense rather than not allowing it in the first place.
Sliders."I think they should be optional."They are optional."The “N” should turn into the number if set."What if you assign more than one integer? I think I'd rather see a component with inputs 'N', 'P' and 'X' rather than '5', '8' and '35.7', but I concede that is a personal preference."But if I plug it into something that'll only accept a 1, a 2, or a 3, that slider should self set accordingly."Agreed.
Components."Give components a little “+” or a drawer on the bottom or something that by clicking, opens the component into something akin to a dialog box. This should give access to all of the variables in the component. I shouldn't have to r-click on each thing on a component to do all of the settings."I was thinking of just zooming in on a component would eventually provide easier ways to access settings and data."Could some of these items disappear if they are contextually inappropriate or gray out if they're unlikely?"It's almost impossible for me to know whether these things are 'unlikely' in any given situation. There are probably some cases where a suggestion along the lines of "Hey, this component is about to run 40,524 times. It seems like it would make sense to Graft the 'P' input." would be useful.
Integration."Why isn't it just live geometry?"This is an unfortunate side-effect of the way the Rhino SDK was designed. Pumping all my geometry through the Rhino document would severely impact performance and memory usage. It also complicates the matter to an almost impossible degree as any command and plugin running in Rhino now has access to 'my' geometry."Maybe add more Rhino functionality to GH. GH has no 3D offset."That's the plan moving forward. A lot of algorithms in Rhino (Make2D, FilletEdge, Shelling, BlendSrf, the list goes on) are not available as part of the public SDK. The Rhino development team is going to try and rectify this for Rhino6 and beyond. As soon as these functions become available I'll start adding them to GH (provided they make sense of course).On the whole I agree that integration needs a lot of work, and it's work that has to happen on both sides of the isle.
Documentation. Absolutely. Development for GH1 has slowed because I'm now working on GH2. We decided that GH1 is 'feature complete', basically to avoid feature creep. GH2 is a ground-up rewrite so it will take a long time until something is ready for testing. During this time, minor additions and of course bug fixes will be available for GH1, but at a much lower frequency. Documentation is woefully inadequate at present. The primer is being updated (and the new version looks great), but for GH2 we're planning a completely new help system. People have been hired to provide the content. With a bit of luck and a lot of work this will be one of the main selling points of GH2.
2D-ness."I know you'll disagree completely, but I'm sticking to this. How else could an omission like offsetsurf happen?"I don't fully disagree. A lot of geometry is either flat or happens inside surfaces. The reason there's no shelling (I'm assuming that's what you meant, there are two Offset Surface components in GH) is because (a) it's a very new feature in Rhino and doesn't work too well yet and (b) as a result of that isn't available to plugins.
Organisation. Agreed. We need to come up with better ways to organise, document, version, share and simplify GH files. The GH1 UI is OK for small projects (<100 components) but can't handle more complexity.
Don't get me wrong, I appreciate the feedback, I really do, but I want to be honest and open about my own plans and where they might conflict with your wishes. Grasshopper is being used far beyond the boundaries of what we expected and it's clear that there are major shortcomings that must be addressed before too long. We didn't get it right with the first version, I don't expect we'll get it completely right with the second version but if we can improve upon the -say- five biggest drawbacks (performance, documentation, organisation, plugin management and no mac version) I'll be a happy puppy.
--
David Rutten
david@mcneel.com…
ding is not for the faint of heart and requires quite a significant understanding. However, I don't know what you're dealing with, so that may still be the way to go about it.
Your component, if it's "finished", has to supply some sort of results that are then used downstream. AFAIK there isn't a way to "prevent" downstream components from calculating until you're finished. They have to get some sort of information or else they'll just be waiting. Considering that the results of those components are likely to be invalid until the information gets calculated, you may be better off supplying them with nulls until you have some actual information to give them.
Anyway, I think that you should think very closely about the structure of your routine, and specifically how it will interact and update itself. The way I'm thinking about it now is that there really isn't anything that's done in the "solve instance" function, if you will. Essentially the "solve instance" function would either A) start the reading of the file if no data is found, or B) output some data if it is found. This is an extreme oversimplification, but the simpler you keep this the more likely it is to work. Here are a few more "details", I guess, of how I could see this potentially working...
Thread A - Initial call to Solve Instance function
+ Check and see if there are any results that exist from reading your file - at this point there shouldn't be. These results should be stored in some sort of class variable that is accessible to both threads. It might also be a good idea to have some boolean flag that will also be accessible that represents whether your reading/writing those variables.
+ Fire a function in another thread that begins the read process. Note that you'll likely have to do this through a delegate and an invoke call, but I'm not 100% sure
+ Fill in some null values for the variables you must supply
+ Output the nulls, thus finishing the Solve Instance function
Thread B - File Read Function running in separate thread
+ Open up the file. Note that it's probably a good idea just to pass the file path (as a string) between the different threads. Leave the creation of the file/text stream to the one thread that's using it.
+ Perform all the necessary reading from the file
+ Copy all your data to the variables that are accessible to both threads.
+ Expire the solution on either the component in question or (as a last resort) the whole canvas. I know expiring the whole canvas is definitely possible, but it should also be possible to just expire the one component that's doing the reading.
Thread A - "Second" call to Solve Instance after being manually expired
+ Check and see if there are any results that exist from reading your file, which there now should be.
+ Output those shared results
+ Clear the last results (or cache them in some way) so that the next time the Solve Instance function is fired, you don't find any results and reread the file.
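The two-thread flow above can be sketched in plain Python, with the threading module standing in for the GH component machinery (all names here are hypothetical, and the expire step is only marked with a comment):

```python
import tempfile
import threading

class FileReaderComponent:
    """Toy model of a GH component that reads a file on a worker thread."""

    def __init__(self, path):
        self.path = path
        self.results = None            # shared between both threads
        self.lock = threading.Lock()   # guards access to the shared results

    def _read_file(self):
        # Thread B: do the slow read, then publish the results.
        with open(self.path) as f:
            data = f.read()
        with self.lock:
            self.results = data
        # In GH, this is where you would expire the component so that
        # SolveInstance runs again and picks up the cached results.

    def solve_instance(self):
        # Thread A: output cached results if present and clear them,
        # otherwise kick off the background read and output a null.
        with self.lock:
            if self.results is not None:
                out, self.results = self.results, None
                return out
        self._thread = threading.Thread(target=self._read_file)
        self._thread.start()
        return None

# Demo: write a small file, then run two 'solve' passes.
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("hello from the file")
tmp.close()

comp = FileReaderComponent(tmp.name)
first = comp.solve_instance()   # starts the read, outputs a null
comp._thread.join()             # stand-in for waiting on the expire/recompute
second = comp.solve_instance()  # now the cached results come out
print(first, second)
```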
I think there are a few variations on this that could happen too, including having a separate function for reading and writing the data that's called using its own delegate/invoke call to make sure that it's extra safe.
If you haven't already, you should really look into event-driven programming, delegates, and asynchronous messaging. These are the 3 things you'll need to have a decent hold on to make sure this thing works. Just to let you know, debugging these things can be a bitch.…
an run. GH2 still uses the Rhino SDK for the geometry functionality, so curve offsets, meshing, brep intersections etc. will run exactly as fast as they do now.
However even in the absence of a working version of GH2 which can be profiled, we can still discuss some of the major aspects of performance:
Preview display. Each GH solution involves a redraw of all the Rhino viewports at the end (unless preview is switched off, which I imagine is exceedingly rare). For simple GH files, the viewport redraw takes far more time than the solution. Rhino 6 has a completely rewritten display pipeline using more modern APIs, so we should see a speed-up here in the future, be it GH1 or GH2 or GHx.
Canvas display. Each GH solution involves a redraw of the Grasshopper canvas. If the canvas shows a lot of bitmaps or intricate geometry (lots of text, dense graphs, etc.) this can take a significant amount of time. GH2 will use Eto instead of GDI+ as a UI platform. Eto can be both faster and slower than GDI, depending on what's being drawn. It is particularly fast when drawing images, not so much when drawing lots of lines. There is a little room for improvement here and I intend to take full advantage of that.
Preview meshing. Grasshopper uses the standard Rhino mesher to generate preview meshes. If a GH file generates lots of breps, a large amount of time will be required to create the preview meshes. The new display improvements in Rhino 6 will allow us to get away with previewing some types of geometry without the need to mesh them first, and I imagine some effort will be spent in the near future to improve the Rhino mesher as well.
Data casting. Most component code operates on standard framework and RhinoCommon types (bool, int, string, Point3d, Curve, Brep, ...), however Grasshopper stores and transfers data wrapped up in IGH_Goo derived types. This means that every time a component 'does its thing', data needs to be converted from one type into another, and then back again. This involves type-checking and often type instantiation. This stuff is fast, but it's overhead nonetheless and can take a significant amount of processor cycles when there's lots of data. GH2 no longer does this; it stores and transfers the types directly as they are. There will still be some overhead left, but hopefully a lot less.
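The wrap/unwrap round trip can be pictured with a toy model (this is not the real IGH_Goo API, just an illustration of where the overhead sits):

```python
class Goo:
    """Toy stand-in for an IGH_Goo-style data wrapper."""
    def __init__(self, value):
        self.value = value

def component_double(goo_items):
    # For every item: cast out of the wrapper, do the actual work,
    # then cast the result back into a fresh wrapper. The two casts
    # are pure overhead compared to operating on the raw values.
    out = []
    for goo in goo_items:
        raw = goo.value
        result = raw * 2
        out.append(Goo(result))
    return out

wrapped = [Goo(i) for i in range(5)]
doubled = component_double(wrapped)
print([g.value for g in doubled])  # [0, 2, 4, 6, 8]
```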
Computation. GH1 is a single-threaded application. When a component operates on a large collection of data, each iteration waits for the next. GH2 will be parallel, meaning components will be invoked on multiple threads, each thread focusing only on part of the data. Then all the results need to be merged back into a single data tree. On my 8-core machine (4 physical cores, each with 2 logical cores) I've been getting performance speed-ups of 4~6 times when using my multi-threading code. I wish it were 8, but clearly there is some overhead involved here as well. This will not help to speed up a single very complicated solid boolean operation, but if you're offsetting 800 curves, then each thread can be assigned 100 curves and the time it takes will be set by whatever thread takes the longest.
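That chunk-and-merge strategy can be sketched with the Python standard library (plain Python, nothing Rhino-specific; offset_curve is a stand-in for real per-item work):

```python
from concurrent.futures import ThreadPoolExecutor

def offset_curve(c):
    # Stand-in for the per-item work, e.g. offsetting one curve.
    return c + 0.5

def parallel_map(items, worker, threads=8):
    # Split the list into one chunk per thread, process the chunks
    # concurrently, then merge the partial results back in order.
    chunk_size = (len(items) + threads - 1) // threads
    chunks = [items[i:i + chunk_size]
              for i in range(0, len(items), chunk_size)]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        partials = pool.map(lambda chunk: [worker(c) for c in chunk], chunks)
    merged = []
    for part in partials:
        merged.extend(part)
    return merged

curves = list(range(800))  # 800 stand-in 'curves'
offsets = parallel_map(curves, offset_curve)
# Total time is set by the slowest chunk, not the sum of all chunks.
```

(Note that in CPython the GIL limits CPU-bound speed-ups from threads; .NET threads, as used by GH2, do run in parallel, so this only sketches the chunking and merging logic.)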
Algorithms. If a specific component is slow, there may be things we can do to speed it up. Either improve the Rhino SDK, or improve the GH code. Depends on the component in question.
When all's said and done, I'd love to see a 10x speed increase for GH2 over GH1 for simplish stuff, and I shall get very cross if it's anything less than 5x.…