and here is what I have to share:
1. Thanks! Thank you for being awesome! When I released Ladybug two years ago I could never have imagined how this project would take over my life! It has been such an invaluable experience for me so far, and it wouldn't have been possible without you - so thank you so much.
2. What's next? Recently I have been getting this question more and more, and here is my fairly long answer! Chris is pushing the boundaries with comfort tools. Chien Si is working on HVAC systems integration. Chris, Anton and Alejandra are figuring out how to model natural ventilation effectively. Patrick, Sandeep, Michal and Boris are working on their own developments. I'm working on getting the 3-Phase method integrated, and Butterfly will be out at some point, but... they are not going to be what makes the next step. The next step is up to you. It is what you will do with the development. So go ahead and let us know what's next!
3. If you can help someone on the group, please do! By doing so you are not only helping another person (and potentially yourself) but also the developers. The more you can help each other here, the more time we will have for development and documentation.
4. The best place to post your questions is this group. If you are using the latest version from GitHub, then you may want to post them on GitHub instead. Please consider email as the last option. Go back to number 3 again! Thanks.
5. Don't be nice to us! Well, I mean don't just be nice to us. I love your nice comments as much as anybody else, and please keep them coming ;) but what we also need, next to nice comments, is your critiques, wishes and insights. I feel that recently we have been getting fewer wishes and critiques than we used to. You can post them here in the group or on GitHub, and either way we will know about it. Thank you to all of you who have already done this.
6. Thanks again! Before I let you go I want to especially thank all of you who have contributed to the project with your development, thoughts and support. You are great and I can't thank you enough.
David Weinberger, in his book "Too Big to Know", says: "When an expert network is functioning at its best, the smartest person in the room is the room itself." Reading some of the discussions on the group gives me the feeling of being inside a smart room. Thank you, and let's keep the room growing!
Cheers,
Mostapha
PS: To avoid sending another post, I'll just post the updates about the two upcoming workshops here:
I will lead a workshop in LA next Friday (Feb. 6) and there are still a few seats left. If you want to learn more about energy and daylighting simulation with Honeybee, here is your chance. Here is more information on how to register: (http://www.facadesplus.com/technology-workshops/).
Chris will lead a three-day intensive and comprehensive Ladybug and Honeybee workshop in Mexico City this March. You have probably watched Chris's tutorials and already know what to expect from a workshop with Chris, so I don't have to say more! I would take this workshop if I were in that area. If you are around Mexico City, or know a friend who might be interested, please let them know. Here is more information about the workshop: (https://www.facebook.com/LadyBugforGrasshopper/photos/a.442320969114095.107084.413910668621792/919318878080966).
…
y from the Rhino model, and the absorption coefficients of the materials entered into Pachyderm, why is it not possible to generate a reverberation time diagram without having to run an analysis first?
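For context, the kind of estimate meant here is the purely statistical one, which needs only the room volume and the absorbing surfaces; a rough sketch with made-up example values (this is not Pachyderm code):

    'Sabine estimate: T60 depends only on volume and total absorption, no ray tracing needed.
    Dim volume As Double = 450.0                       'room volume in m^3 (example value)
    Dim areas() As Double = {120.0, 120.0, 80.0, 60.0} 'surface areas in m^2 (example values)
    Dim alphas() As Double = {0.05, 0.3, 0.1, 0.6}     'absorption coefficients, one band (example values)
    Dim absorption As Double = 0.0
    For i As Integer = 0 To areas.Length - 1
        absorption += areas(i) * alphas(i)             'equivalent absorption area in m^2
    Next
    Dim t60 As Double = 0.161 * volume / absorption    'Sabine formula: T60 = 0.161 * V / A, in seconds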
MAPPING METHOD: When, for example, a map of the Strength index (G) is generated through the "create map" option, I subsequently can't generate a map for any other energy criterion; I have to redo the simulation.
Is this a limitation of the software, or am I doing something wrong?
MAPPING METHOD: I would kindly like to ask what the difference is between minimum and detailed convergence, and why the reflection order taken into account for the simulation is not specified. Does the mapping method use only the Raytracing Method, or the Image Source method as well?
MAPPING METHOD: Why are the mapping values that can be exported to Rhino not generated for all the calculation raster points, but for a maximum of only 100 values?
MAPPING METHOD: This method hasn't been implemented in Grasshopper yet, has it?
RAYTRACING METHOD (Pach:RT): I ran a ray tracing through the Grasshopper components, using only Pach_RT, and got these curious computation times:
RaysCount: 15,000, IS_Order: 1 = 5 min
RaysCount: 15,000, IS_Order: 2 = 12 min
RaysCount: 15,000, IS_Order: 3 = 3 min
RaysCount: 15,000, IS_Order: 4 = 8 min
RaysCount: 15,000, IS_Order: 5 = 3 min
Why does a ray tracing with only order 2 take longer than orders 3, 4 and 5?
ANALYSIS RESULT: Would there be a way to export all the results of a simulation, as is done via Odeon, to a .txt list?
I apologize in advance for asking so many questions; I hope you can find the time to answer,
Yours sincerely from Müller-BBM…
make a quad mesh usable with Kangaroo, with a limited number of input parameters, in order to simulate funicular structures like "Vaulted Willow" or "Pleated Inflation" from Marc Fornes and the Verymany.
Here is a first attempt at a script.
As inputs there are:
Lines_in: just lines, no duplicates, on the XY plane; they could have Z values, but the algorithm works on a flat representation.
Tolerance is used to glue lines together when their points are closer than the tolerance (see the sketch after this list).
Width is the half width of the “roads” going through the network
Angle controls the shape of the ends of the roads: 0° means a flat end, 180° a fully rounded end.
Deviation is the shift that generates spikes or enables pleated geometry.
N_u is the number of subdivisions along the "roads"; the image above shows 3 subdivisions on the roads.
N_v is the number of subdivisions across the "roads".
Zbool: if false everything is flat; if true the mesh is in 3D, best with Angle = 180° or -180°.
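A rough sketch of the gluing idea behind the Tolerance input (illustrative only, not the actual script; it assumes it sits in a script component where RhinoCommon's Point3d and the generic List type are available):

    'Return the index of an existing node within "tolerance" of pt, or add pt as a new node.
    Function WeldPoint(ByVal nodes As List(Of Point3d), ByVal pt As Point3d, ByVal tolerance As Double) As Integer
        For i As Integer = 0 To nodes.Count - 1
            If nodes(i).DistanceTo(pt) < tolerance Then Return i 'reuse the existing node
        Next
        nodes.Add(pt) 'otherwise register a new node
        Return nodes.Count - 1
    End Function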
For the outputs there is the topology of the network (like Sandbox)
As outputs, the geometry is put in a data tree; each branch represents a path on the road (above, 3 paths), and these are output as Breps.
Adding a diagonal, there are now 4 paths and therefore 4 branches.
The mesh M goes with F, the fixed points, which serve as anchors in Kangaroo.
U and V are lines in a data tree; they will be used as springs in Kangaroo (U shown above).
This script could be used to draw a sort of road network, like here: https://codequotidien.wordpress.com/2013/03/22/hemfunction/
But the primary purpose is to do that.
…
Grasshopper contains a VB.net and C# component. These components allow you to run your own custom code within Grasshopper. Understanding how to make even simple code components can be very useful in Grasshopper.
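As a minimal sketch of what such a component looks like (this assumes the default input x with its type hint set to double and the default output A; the body simply doubles the input):

    Private Sub RunScript(ByVal x As Double, ByRef A As Object)
        'Any .NET code can run here; this example just returns twice the input.
        A = x * 2
    End Sub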
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs that I've seen which compile your project while you type it. I know that threading within components and at the rhinocommon-level is a freaking hard problem that has been discussed at length already. (although when, 5-10 years from now, it's finished it will be very cool)
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver were a little more atomic and worked like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file could be exploited (not within components, but rather by solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
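Purely as a conceptual sketch of that last point (plain VB.NET, not anything from the Grasshopper SDK; SolveBranchA, SolveBranchB and Combine are made-up placeholders), two non-interdependent branches could be handed to independent jobs and joined before a downstream job runs:

    Imports System.Threading.Tasks

    Module SolverSketch
        Sub Main()
            'Two non-interdependent "branches" run as independent jobs.
            Dim jobA As Task(Of Double) = Task.Run(Function() SolveBranchA())
            Dim jobB As Task(Of Double) = Task.Run(Function() SolveBranchB())
            Task.WaitAll(jobA, jobB)
            'A downstream "component" that depends on both branches runs last.
            Console.WriteLine(Combine(jobA.Result, jobB.Result))
        End Sub

        Function SolveBranchA() As Double
            Return 1.0 'stand-in for one independent chain of components
        End Function

        Function SolveBranchB() As Double
            Return 2.0 'stand-in for another independent chain
        End Function

        Function Combine(ByVal a As Double, ByVal b As Double) As Double
            Return a + b 'stand-in for a component fed by both branches
        End Function
    End Module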
I was imagining a couple of scenarios:
a) Writing a parallel module: solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point you could connect up to some of the already calculated parameters and write something in parallel to the main trunk which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze that bit of downstream code and make modifications so you can observe the effects more quickly, then unfreeze a bit more and repeat etc. etc. until you're done, and then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just digest this need? Since the vast majority of time that GH spends solving is on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if it took 15 seconds or 15 hours for the final run.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK; from a user's point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: the simple percentage version for unpredictable code and, for those of us able to calculate the time our algorithm takes based on the number of input parameters, a countdown in seconds or minutes or whatever.
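Even step-based progress would do; a rough sketch of what I mean (the loop and step count are made up, and RhinoApp.WriteLine just stands in for wherever the UI would actually show it):

    Dim steps As Integer = 10000
    For i As Integer = 1 To steps
        '... the expensive per-step work would happen here ...
        If i Mod (steps \ 10) = 0 Then
            'report every 10% of the total step count
            Rhino.RhinoApp.WriteLine(String.Format("Progress: {0}%", (100 * i) \ steps))
        End If
    Next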
I think a good place to start with these sorts of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
- nickname is rather the best approach - and not on active group, but that's irrelevant anyway).
Step back (assuming that you are talking about the "Tens_from_random_blah_blah" definition):
1. Engineering is the art of demystifying (or we are promising that anyway, he he). This means that you start defining (better: outlining) some topology for things based on some "generic" rules (like the ones applied for the masts, cables, cones etc etc). These things are kept in some kind of structure (Lists, DataTrees etc). Things are few in 99.99999% of cases (i.e., even the biggest membrane "module" has, say, 20-50 masts per "module").
2. Then ... handling things "individually" (mostly modifying) becomes the most critical part. See this (an x "possible" solution by combining a myriad of "options": a no-cones membrane solution, in plain English):
3. But the above is impossible (for more than obvious reasons). You should deploy masts in some high/low sequence in order to achieve some meaningful convex/concave formation that could work.
4. This "works":
5. This doesn't:
6. This works partially (the formation at the back is "flat" == undo able):
7. This is utterly kitsch (and faulty like case 6 - the back portion):
So it's quite obvious that without a (quite complex) capability to individually control things (in this case: mast heights), the whole definition is a waste of computer time. Additionally, the more the solution is "demystified" (some curve is defined, some random points are created, some masts are in place, some cables appear etc etc), the more additional constraints are required in order to "narrow" the possibilities (in plain English: sliders should control other sliders as regards their min/max values, true/false, you/me etc etc).
Remember that we are talking about ONE thing (mast height) out of a myriad of things that you should control "manually" (it's utterly pointless to mastermind some kind of "generic" rules - or use naive attractors etc etc). You'll see the difference when I completely reform the definition by adding individual control over everything.
PS: what about the blocks? (the real-life stuff that actually makes any solution possible). Can you imagine a 2nd set of "restrictions" imposed by "a child on its parent"? (Assembly/Component modeling, that is).
more soon
…
ut. Here's an image to see what I'm talking about:
I also attached my file.
I have a list of objects (planes) coming in on 4 different branches ({0;3} through {0;6}).
Just as a test I ran this code:
*From scripted component labeled (A) in image
Private Sub RunScript(ByVal x As DataTree(Of Plane), ByRef A As Object)
  Dim tree1 As New DataTree(Of System.Object)
  Dim path1 As New GH_Path(7)
  For i As Integer = 0 To x.Branch(0).Count - 1
    tree1.Add(x.Branch(0).Item(i), path1)
  Next
  A = tree1
End Sub
Which outputs all planes that were on branch {0;3} on new branch {7}. This works the way I expect it to.
What I'm confused about is when I add this little bit at the end:
*From scripted component labeled (B) in image
Private Sub RunScript(ByVal x As DataTree(Of Plane), ByRef A As Object)
  Dim tree1 As New DataTree(Of System.Object)
  Dim path1 As New GH_Path(7)
  For i As Integer = 0 To x.Branch(0).Count - 1
    tree1.Add(x.Branch(0).Item(i), path1)
  Next
  A = tree1.Branch(path1)
End Sub
The planes are now on branch {0;3;0}, but it seems (to me at least) that it should produce the same result as above since path1 was set to (7).
Ultimately I want a way of addressing each branch by the same branch path it came in on. I can retrieve each branch easily:
*From scripted component labeled (C) in image
a = x.Branch(0)
b = x.Branch(1)
c = x.Branch(2)
d = x.Branch(3)
But each of these branches has a path of {0;3;0}
That doesn't seem right to me either... Is there something I'm missing?
By using the following code, I can basically recreate the branch structure that comes in:
*From scripted component labeled (D) in image
Private Sub RunScript(ByVal x As DataTree(Of Plane), ByRef A As Object)
  Dim tree1 As New DataTree(Of System.Object)
  For g As Integer = 0 To x.BranchCount - 1
    Dim path1 As New GH_Path(g)
    For i As Integer = 0 To x.Branch(g).Count - 1
      tree1.Add(x.Branch(g).Item(i), path1)
    Next
  Next
  A = tree1
End Sub
but then I run into the same problem when trying to address any branch:
*From scripted component labeled (D) in image
Private Sub RunScript(ByVal x As DataTree(Of Plane), ByRef A As Object)
  Dim tree1 As New DataTree(Of System.Object)
  For g As Integer = 0 To x.BranchCount - 1
    Dim path1 As New GH_Path(g)
    For i As Integer = 0 To x.Branch(g).Count - 1
      tree1.Add(x.Branch(g).Item(i), path1)
    Next
  Next
  A = tree1.Branch(0)
End Sub
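For reference, the behaviour I'm after would look roughly like this, assuming DataTree.Path(g) really does return the g-th incoming path (I'm not certain that's the right member to use; this is a sketch, not from the image above):

Private Sub RunScript(ByVal x As DataTree(Of Plane), ByRef A As Object)
  Dim tree1 As New DataTree(Of System.Object)
  For g As Integer = 0 To x.BranchCount - 1
    Dim path1 As GH_Path = x.Path(g) 'reuse the path the branch came in on, e.g. {0;3}
    For i As Integer = 0 To x.Branch(g).Count - 1
      tree1.Add(x.Branch(g).Item(i), path1)
    Next
  Next
  A = tree1
End Sub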
I apologize for the length of this post, but I appreciate any help I can get!
Thanks,
Brian…
uick answers. Below you will find some suggestions, but don't think of them as rules and especially don't think of them as guarantees.
1. Choose a descriptive title for your post
Don't call your question "Help!" or "I have a problem" or "Deadline tonight!", but actually describe the problem you are having.
2. Be succinct but clear in your wording
People need to know some details about your problem in order to understand what sort of answers would satisfy you, but nobody cares about how angry your boss or how bad your teacher or how tight your deadline is. Talk about the problem and only the problem. If you don't speak English well, you should probably post in your native language as well as providing a Google Translation of your question.
3. Attach minimal versions of all the relevant files
If you have a GH/GHX file you have a question about, attach it to the post. Don't expect that people will recreate a file based on a screen-shot because that's a lot of pointless work. It's also a good idea to remove everything non-essential from a GH file. You can use the 'Internalise Data' menu option to cut everything to the left of a parameter:
If you're importing curves or Breps or meshes from Rhino, you can also internalise them so you won't have to post a 3DM file as well as a GH file. If you do attach large files, consider zipping them first. Do not use RAR, Ning doesn't handle it.
It is especially a good idea to post files that don't require any non-standard components if at all possible. Not everyone has Kangaroo or Hoopsnake or Geco installed so if your file relies on those components, it might not open correctly elsewhere.
4. Include a detailed image of the GH file if it makes sense
If your question is about a specific (group of) components, consider adding a screenshot of the file in the text of the post. You can use the Ctrl+Shift+Q feature in Grasshopper to quickly create nice screenshots with focus rectangles such as this:
5. Include links to online resources if possible
If you have a question about Schwarz Minimal surfaces, please link to a website which talks about these.
6. Create new topics rather than continuing old ones
It's usually better to start a fresh question, even if there's already a discussion that kinda sorta tangentially touches upon the same issue. Please link to that discussion, but start anew.
7. This is not a 'do my work for me' group
Many of us like to help, but it's good to see effort on our part being matched by effort on your part. Questions in the form of 'I need to do X but cannot be bothered to try and learn the software' will (and should) go unanswered.
7b. Similarly, questions in the form of 'How do I quickly recreate this facade that took a team of skilled professionals four months to figure out?' have a very low success rate.
--
David Rutten
Lead Grasshopper Developer
Robert McNeel & Associates…
Added by David Rutten at 12:58pm on October 1, 2013
----------------------------------------------------------------------------------------------
1)
Hi Clemens, I've analysed a plate structure using Karamba and wanted to do a convergence analysis of the results as a function of the number of elements.
Now, when looking strictly at the result magnitudes of internal energy (IE) and maximum displacement (w_max), it is acceptable that their relative deviations are very small. But I cannot explain the tendencies of their graphs. From what I know, FEM should always give underestimated results compared to analytical solutions. So I don't understand why both IE and w_max seem to be decreasing for an increasing number of elements.
But my main concern is the behaviour of the peak moment: it seems to be simply hill-climbing until suddenly a singularity kicks in. I initially wanted to use the peak moment as a fitness value for optimisation, but with this behaviour I don't think that would make sense. I've attached my GH file as well.
It would be much appreciated if you could enlighten me on these subjects. Cheers Daniel Andersen
2)
Hi Daniel,
I could not run your definition because I do not have all the plug-ins installed that you use.
You are basically right that the displacement should increase with a finer mesh. However, the result of the shell analysis also depends on the shape of the triangles (well-formed vs. very distorted). In order to test this, I think it would be interesting to use a very simple example (e.g. a rectangular plate with one column) where you can easily control mesh generation. Would you like to start a discussion on this in the Karamba group at http://www.grasshopper3d.com/group/karamba?
It is not a good idea to use the bending moment at a singularity for optimization because the result will be heavily mesh dependent. Also real columns do have a certain diameter and modeling them as point supports introduces an error.
Best,
Clemens
3)
oh, and by the way!
Here's some relevant literature on handling peak moments: https://books.google.dk/books?id=-5TvNxnVMmgC&pg=PA219&lpg=PA219&dq=blaauwendraad+plates+and+fem&source=bl&ots=SdDcwnrSA1&sig=6HulPmKNIhqKx4_rGxitteMC4CU&hl=da&sa=X&ved=0CDEQ6AEwA2oVChMIg66k0LPaxgIVgY1yCh1KPAeY#v=onepage&q=chapter%2014&f=false (Blaauwendraad, J., 2010. Plates and FEM: Surprises and Pitfalls, see Chapter 14). It would be great if a feature dealing with peak moments could be incorporated into Karamba. In my work, I ended up exporting my models to Robot in order to verify the moment values. Best, Daniel
4)
Hi Daniel,
thank you for your reply and the link to Blaauwendraad's excellent book!
At some point I hope to include material nonlinearity in Karamba which will help in dealing with stress singularities.
If you want you could open a discussion with a title like 'moment peaks in shells at point-supports'. Then we could copy and paste the text of our conversation into it.
Best,
Clemens
----------------------------------------------------------------------------------------------…