t happens is that the first loop already produces a result (in this case the total weight of the structure) and Galapagos registers it. Then (I guess it takes it as one result) it goes on to create the next generation. But this result is actually just the result of an initial value from Anemone.
I am triggering the loops via one of the input params of Galapagos. So whenever the slider changes, the loops start again. But I would like to wait until all the loops have produced results, and thus get the most optimal one. I should have around 5 or 6 loops. (see attached video, around 00:36)
https://vimeo.com/232972437
Imagine the function of the loops as a small "Galapagos". It is actually connected to the Karamba3D optimiser component, so it looks for the lightest structure within one loop.
So the ideal way would be for Galapagos to register only, let's say, every 6th result, or alternatively to wait 3 seconds for the last result. Then my fitness function would probably be a bit more efficient. (maybe)
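To illustrate the "every 6th result" idea in plain Python (just the gating logic; the class and its names are my own invention, not a real Galapagos/Anemone API):

```python
# Sketch: only pass every n-th fitness value through to the optimiser,
# ignoring the intermediate results produced while the inner loop converges.
class NthResultGate:
    def __init__(self, n):
        self.n = n          # register only every n-th result
        self.count = 0
        self.last = None    # last registered fitness

    def feed(self, fitness):
        """Feed one loop result; return the fitness only on every n-th call."""
        self.count += 1
        if self.count % self.n == 0:
            self.last = fitness
            return fitness
        return None  # intermediate result, not registered

# Two runs of a 6-iteration inner loop; only iterations 6 and 12 count.
gate = NthResultGate(6)
weights = [90, 80, 70, 60, 55, 52, 95, 85, 75, 65, 58, 51]
registered = [gate.feed(w) for w in weights]
```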
Another video about Galapagos in action ;)
https://vimeo.com/232614581
Thanks again, Br, Balazs
…
ot seem to grasp how these points are presented to the Python component. When I print 'pts' (line 5) I am only returned 1 point. However, when I draw a line from this (/these?) point(s), a line is clearly drawn to EVERY single point.
It seems like the points are grafted to a tree, but I still can't seem to append them to a new list.
How do I import the points as a simple list variable? (I tried right-clicking the 'points' input and setting 'list access' and 'tree access' and various type hints, but none worked.)
Does Python import one point at a time and then run the script x times, or why does this happen?
Is there, btw, a Python command for flattening the input/output?
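To show what I mean by flattening, here is a plain-Python sketch (outside Grasshopper, with tuples standing in for points):

```python
# A "tree" of points arrives as nested lists; I want one flat list of points.
def flatten(tree):
    """Recursively flatten arbitrarily nested lists into one flat list."""
    flat = []
    for item in tree:
        if isinstance(item, list):
            flat.extend(flatten(item))  # descend into sub-branches
        else:
            flat.append(item)           # a leaf: keep it
    return flat

# Tuples stand in for Rhino Point3d objects here.
branches = [[(0, 0, 0), (1, 0, 0)], [(2, 0, 0)], [[(3, 0, 0)]]]
pts = flatten(branches)  # [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
```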
Thanks in advance
- Lasse
…
sent a 3D shape without any ambiguity. If the shape you're trying to convey falls outside the scope of existing standards, then it can't be done, but this is a problem of standards, not an intrinsic shortcoming of pencils.
[...] with the computer theoretically acting as a decision maker.
The computer makes no decisions on its own. It's a fully deterministic machine, meaning that any output is the result of applying a set of rules to some pre-existing data. Humans make the rules. At no point can you blame the computer for coming up with a bad answer; it's always some human who is responsible.
[...] it seems to often be split between Computerization, and Computation.
I'm willing to concede there exist cases that are unambiguously one or the other, but there's a gradient in between these two extremes; they are not separate categories. If I draw a box by specifying the 8 corner points as XYZ coordinates, then computation can be said not to be involved. If I draw a box by specifying 2 opposite corners, then the computer has to compute the other 6 coordinates and we're already on our way towards the other extreme. If I draw a box by specifying a width, a height and a required volume, more computation is needed. If I specify a box by a width, a volume and the requirement that it doesn't cast too much shadow on some other shape, still more computation is needed. At what point do we say "now it qualifies as computation/solving"?
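To make the middle example concrete, a quick Python sketch (my own illustration): given two opposite corners, the other six are just recombinations of the coordinates.

```python
from itertools import product

def box_corners(a, b):
    """All 8 corners of the axis-aligned box spanned by opposite corners a and b."""
    (x0, y0, z0), (x1, y1, z1) = a, b
    # Each corner picks its x from {x0, x1}, y from {y0, y1}, z from {z0, z1}.
    return [(x, y, z) for x, y, z in product((x0, x1), (y0, y1), (z0, z1))]

corners = box_corners((0, 0, 0), (2, 3, 4))  # 8 corners, including the two inputs
```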
--
David Rutten
david@mcneel.com…
Added by David Rutten at 7:22am on November 28, 2013
signing):
On trivial matters:
1. "Long" sheets means a big value "along" x (say 6-9 meters) BUT short "along" y (say 60-90 cm). This means that the min size dictates our panel policy UNLESS we use a self-supporting triangular frame that is further triangulated and hosts the Lexan stuff (and your freaky rings of fire). But this negates the planar glazing's ultra-clean aesthetics and we are back into the misery of that rabbit hole. I can hardly see any Nobel (or Emmy or Oscar or Pulitzer or something equivalent anyway) around. On the other hand, tinted glass can be made in sizes up to 3*3 m (or a bit more).
NOTE: Of course using smart glass (Google that) could negate your rings ... but don't tell that to the client.
2. Other than that ... er... cleaning the whole thingy ... well ... I have a solution for that: a bunch of flying dwarfs (or Smurfs) trained by the Lord (Himself) that can do the job (at a price).
3. Other than that, the sensible thing to do is to cover (IF the cover is on the agenda) the T truss using the outside "side". But truth is that cleaning could be an issue. US firm Birdair (king of membranes, master of all tensile things) addresses this using Teflon on certain fabrics ... meaning that rain can wash the membrane quite effectively.
BTW: Projects that have no budget floor make me feel very happy - this is the proper way to blow millions (for no reason), he, he.…
taList
but
DA.SetData List
is not valid C# code. You must use the same brackets; you can't open with a parenthesis and close with a square bracket. And arguments must be separated by commas, so
(1 tabelts];
should be
(1, tabelts);
If you want to output a multi-dimensional array, do the following:
GH_ObjectWrapper wrap = new GH_ObjectWrapper(tabelts);
DA.SetData(1, wrap);
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
an run. GH2 still uses the Rhino SDK for the geometry functionality, so curve offsets, meshing, brep intersections etc. will run exactly as fast as they do now.
However even in the absence of a working version of GH2 which can be profiled, we can still discuss some of the major aspects of performance:
Preview display. Each GH solution involves a redraw of all the Rhino viewports at the end (unless preview is switched off, which I imagine is exceedingly rare). For simple GH files, the viewport redraw takes far more time than the solution. Rhino6 has a completely rewritten display pipeline using more modern APIs, so we should see a speed-up here in the future, be it GH1 or GH2 or GHx.
Canvas display. Each GH solution involves a redraw of the Grasshopper canvas. If the canvas shows a lot of bitmaps or intricate geometry (lots of text, dense graphs, etc.) this can take a significant amount of time. GH2 will use Eto instead of GDI+ as a UI platform. Eto can be both faster and slower than GDI, depending on what's being drawn. It is particularly fast when drawing images, not so much when drawing lots of lines. There is a little room for improvement here and I intend to take full advantage of that.
Preview meshing. Grasshopper uses the standard Rhino mesher to generate preview meshes. If a GH file generates lots of breps, a large amount of time will be required to create the preview meshes. The new display improvements in Rhino6 will allow us to get away with previewing some types of geometry without the need to mesh them first, and I imagine some effort will be spent in the near future to improve the Rhino mesher as well.
Data casting. Most component code operates on standard framework and RhinoCommon types (bool, int, string, Point3d, Curve, Brep, ...), however Grasshopper stores and transfers data wrapped up in IGH_Goo derived types. This means that every time a component 'does its thing', data needs to be converted from one type into another, and then back again. This involves type-checking and often type instantiation. This stuff is fast, but it's overhead nonetheless and can take a significant number of processor cycles when there's lots of data. GH2 no longer does this; it stores and transfers the types directly as they are. There will still be some overhead left, but hopefully a lot less.
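As a toy model of that round-trip (not the actual IGH_Goo interface, just the shape of the overhead):

```python
# GH1-style: data lives wrapped, so every component call pays for an
# unwrap (type-check + extraction) and a re-wrap of the results.
class Goo:
    """Stand-in for an IGH_Goo-style wrapper around a raw value."""
    def __init__(self, value):
        self.value = value

def component_gh1(wrapped):
    # unwrap -> operate on the raw values -> wrap again
    raw = [g.value for g in wrapped if isinstance(g, Goo)]
    return [Goo(v * 2) for v in raw]

def component_gh2(raw):
    # GH2-style: operate on the values directly, no wrapper round-trip
    return [v * 2 for v in raw]

data = list(range(5))
out1 = [g.value for g in component_gh1([Goo(v) for v in data])]
out2 = component_gh2(data)
```

Same answer either way; GH1 just does two extra passes (plus type checks and allocations) per component to get it.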
Computation. GH1 is a single-threaded application. When a component operates on a large collection of data, each iteration waits for the next. GH2 will be parallel, meaning components will be invoked on multiple threads, each thread focusing only on part of the data. Then all the results need to be merged back into a single data tree. On my 8-core machine (4 physical cores, each with 2 logical cores) I've been getting performance speed-ups of 4~6 times when using my multi-threading code. I wish it was 8, but clearly there is some overhead involved here as well. This will not help to speed up a single very complicated solid boolean operation, but if you're offsetting 800 curves, then each thread can be assigned 100 curves and the time it takes will be set by whatever thread takes the longest.
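As a rough sketch of that chunk-and-merge approach (plain Python with threads; the real GH2 code is of course different, and the per-item function here is a made-up stand-in):

```python
from concurrent.futures import ThreadPoolExecutor

def offset_curve(c):
    """Stand-in for an expensive per-item operation (e.g. a curve offset)."""
    return c + 1

def parallel_map(items, fn, workers=8):
    """Split items into one chunk per worker, process the chunks on threads,
    then merge the chunked results back into a single flat list."""
    size = max(1, -(-len(items) // workers))  # ceiling division
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda chunk: [fn(c) for c in chunk], chunks)
    # pool.map yields chunk results in order, so the merge preserves order
    return [r for chunk in results for r in chunk]

out = parallel_map(list(range(800)), offset_curve)  # 8 threads x 100 curves
```

The total wall time is set by the slowest chunk, which is why the speed-up tops out below the thread count.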
Algorithms. If a specific component is slow, there may be things we can do to speed it up. Either improve the Rhino SDK, or improve the GH code. Depends on the component in question.
When all's said and done, I'd love to see a 10x speed increase for GH2 over GH1 for simplish stuff, and I shall get very cross if it's anything less than 5x.…
and visualizing data for the ENVI-met 4 software. ENVI-met is cutting-edge software used to analyse microclimate interactions in urban environments. Tens of different analysis types can be performed on the chosen building context, from mean radiant temperature and local wind speed to CO2 concentration and pollutant dispersion in the air. To generate the building context for the Ladybug ENVI-met components, Antonello used Gismo:
An example similar to the results in the upper screenshots has been attached below. To run it, the Gismo, Ladybug and Human plugins need to be installed. To perform the ENVI-met analysis, download ENVI-met 4 Basic for free, and install it. The steps in the .gh example file have been labelled from 1 to 11. They mostly consist of just setting a boolean toggle to True. The exceptions are step 6 (set the folder path of your ENVI-met application install folder) and step 8 (running the ENVI-met simulation). Step 8 has been explained in detail in the photo attached below (step8.jpg). Special thanks to Antonello for developing the ENVI-met application and components and for his guidance! Post questions below if you have any issues!…
Added by djordje to Gismo at 11:30am on March 25, 2017
"Type hint") would be faster than "No Type Hint" (which I suppose is essentially a form of duck typing). But I guess that is simply due to the small cost of casting, which accumulates and becomes more apparent when performed on large input lists. The other two points are very clear and understandable. It's interesting to hear that compiled components will be faster. Do you think this is also the case for the new compiled GHPython component in the Rhino 6 WIP?
Your general advice on performance also makes perfect sense. This was what initially made me run these tests. And the fact that by making just a few simple alterations to a scripting component you can make it substantially faster (whether C# or Python) seemed like important information to share. Hehe, with regards to which language to implement, I try to adhere to the Rule of Least Power, which as you correctly mention is more often than not about productivity before optimization :)
I have previously been testing whether or not it is worth it to implement things like Numpy/Scipy for numerically intense calculations, but have since dropped it due to its incompatibility with 64-bit IronPython. We have also been experimenting with using ctypes to call compiled C++ code, which actually works quite well (although debugging can be a hassle). I suppose in the end it'll likely be much less of a headache to simply go with C# for this kind of work and use libraries from the .NET world. It is a bit of a shame though, what with the large amount of potentially very useful Python modules from the world of scientific computing (which seem to all rely on Numpy).
Thanks again,
Best,
Anders…
hat aren’t completely there. BIM will have to continue to evolve some more if its supporters want to realize the promise that still remains. I can’t say much about PLM, but I would say that both BIM and PLM should be considered in future developments of GH and Rhino. David has said several times that some GH limitations regarding geometry and data structures (central to interoperability) are actually Rhino limitations. So, I wouldn’t put so much pressure on David for this, or at least I would distribute the pressure onto the core Rhino development team as well.
Talking about Rhino vs. GH geometry, there is one (1) wish I have: support for extrusion geometry. GH already inputs extrusion elements from Rhino, but they are converted to breps. That’s not a bad thing per se. The problem is when you need to bake several breps that make the Rhino file weigh several hundred MB. When these breps are actually prismatic, extrusion-like solids, it’s a shame that they aren’t stored as Rhino V5’s extrusion geometry in a file of just a couple of MB (I overcame this once with an inelegant RhinoScript that wasn’t good for other people). This was one of RhinoBIM’s main arguments. We can develop a structural model made of I-beams in GH using the Extrude components; we should be able to bake them as extrusions. That would also work for urban models with thousands of prismatic massing buildings (e.g. extruded footprints). Even GH’s boxes are baked as breps! Baking boxes as extrusions could be practical for voxelated or Minecraft-like models.
(2) Collaborative network support. Maybe with worksession handling, or something that allows project team members to work on a single definition or on external references or something alike. I know there is another Rhino limitation on this, but maybe clusters are already going in that direction?
And maybe on the plug-ins domain:
(3) A remote control panel that could be really “remote”, like from another computer or device. There is an old Android app for that, but it’s not only a matter of updating it. I mean, it would be great to control a slider with the accelerometer of an Android phone, but to have that on an iPhone would require another development team. If GH could support networks, a remote counterpart of an RCP plug-in could be developed as a cross-platform web app. I don’t know if you can access accelerometer functionality through HTML5 already, but for now, asking a client (or a spectator or any stakeholder for that matter) to control your sliders from gestures on his/her own phone would be awesome (maybe Firefly will fill that hole?).
(4) GIS support. GH already imports .shp files. Meerkat can even access the database, but what about writing to shapefiles, or generating our own with databases processed/generated in GH?
(5) SketchUp support. It's not only starchitects and corporations that are using GH in the AEC industry. There are a lot of small firms, freelancers and students interested. Most of them use SketchUp for 3D modeling (not CATIA, nor Revit). Yes, you can import/export .skp from Rhino, but if GH could support nested blocks at bake time (also mentioned by others), it could write .skp files with complex relations of blocks (which are called components in SketchUp) and nested groups, going beyond what Rhino can export.
(6) Read/write other formats. There are some challenges with proprietary formats that are not completely supported by Rhino, but there are still a lot of open formats that are relevant to the fields of GH users, like STL and PLY for 3D printing. It could be nice to write mesh colors to a PLY for 3D-printing a colored prototype based on GH colors. There are others, like IGES, STEP, COLLADA, etc., and 2D formats, like SVG, ODG and PDF. Some of them could offer special formatting options, like custom data that the format supports but nobody uses just because it’s impractical to access from direct modeling environments (but not from visual programming).
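The colored-PLY case in particular is pretty simple; a minimal sketch of an ASCII PLY writer with per-vertex colors (the mesh data here is made up, and vertex/face lists would come from GH in practice):

```python
def write_colored_ply(path, vertices, faces):
    """Write an ASCII PLY with per-vertex RGB colors.
    vertices: list of (x, y, z, r, g, b); faces: list of vertex-index tuples."""
    lines = [
        "ply",
        "format ascii 1.0",
        "element vertex %d" % len(vertices),
        "property float x", "property float y", "property float z",
        "property uchar red", "property uchar green", "property uchar blue",
        "element face %d" % len(faces),
        "property list uchar int vertex_indices",
        "end_header",
    ]
    for x, y, z, r, g, b in vertices:
        lines.append("%g %g %g %d %d %d" % (x, y, z, r, g, b))
    for f in faces:
        lines.append("%d %s" % (len(f), " ".join(str(i) for i in f)))
    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")

# One colored triangle as a smoke test
write_colored_ply("tri.ply",
                  [(0, 0, 0, 255, 0, 0), (1, 0, 0, 0, 255, 0), (0, 1, 0, 0, 0, 255)],
                  [(0, 1, 2)])
```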
--Ernesto…
mment%3A1637953
First of all, the invalid Rhino license seen previously has been removed, and the correct educational license we have has been re-installed for this test.
The recurring issue is that RAM usage spikes once GH is opened in Rhino. It seems that this happens when a series of incrementally saved large GH project files are stored in the same folder. Moving those previously saved large project files to a new folder seems to solve the issue.
The images below explain the issue and the hypothetical solution:
1. A series of GH files were previously saved incrementally in the same folder, and the last few GH files are the ones opened most recently:
2. The total RAM usage is at the normal 5GB level once Rhino is open:
3. Once GH is open, the RAM usage spikes, and it becomes very slow to maneuver the GH window, even before opening any one of those GH files:
4. Once GH and Rhino are closed, the RAM usage drops back to the level from before the GH interface was opened:
5. Now, all the incrementally saved GH files are moved to a new folder "wip" except the last one, i.e. for the last GH file, there are no other previous GH files in the same location:
6. Now, if we open GH, there is no sudden increase in RAM usage, and the 3x3 thumbnails on the GH canvas show "missing", as those previously opened GH files are no longer in the same location as they were before:
I understand that David mentioned the thumbnails for previously opened GH files on the GH canvas should not take much RAM. Nevertheless, I'm still not sure what is causing the increase in RAM usage and the slowdown of the GH interface. Relocating the large project files previously saved in the same folder as the current GH file seems to make this issue go away, for an unknown reason ...
I'd appreciate it if anybody experiencing a similar issue could check whether this solution works.
Thank you.
…