adiation results, especially when comparing the results from LB with those of HB.
Issue 1: The results are very different.
In the attached file, in the upper part to the right of the canvas, you can see both definitions (LB and HB). The images obtained show results so different that I can't find a logical explanation (2.46 vs 41 kWh/m2 for the same period of time). I believe the LB values are in the OK range; the HB values are too high for just 5 hrs of calculation. I don't believe the material definitions are making such a big difference (though I tried to keep them similar).
Issue 2: Can't get annual grid based calculations plotted.
In the attached file, right side at the bottom: I get the calculation, but after connecting the results to the HB_readAllTheDSHourlyResults component it takes ages to calculate, and in the end Rhino crashes. Could this be a memory problem? Or is there a way to make this work (total annual radiation for a grid-based simulation)? For now I have disabled the component, but I just wonder ...
Words of wisdom for both issues will be appreciated.
Thanks,
-A.…
ould you want to have the same name for several things), but that doesn't explain why it isn't working at present, because the code looks ok as it flattens all input volatile data:
<code>
foreach (IGH_Param param in Params.Input[2].Sources)
{
  foreach (Object myObj in param.VolatileData.AllData(true))
  {
    if (myObj is GH_Number && pCount < 8)
    {
      if (!criteria.Contains(param.NickName))
      {
        GH_Number temp = (GH_Number)myObj;
        performas.Add(temp.Value);
        criteria.Add(param.NickName);
        pCount++;
      }
    }
  }
}
</code>
Anyway, you can only have 8 performance criteria max, so I would suggest splitting your list and naming each performance measure accordingly.
As for the speed, this is very hard to tell without a file to go on. Ultimately, Biomorpher is doing practically nothing compared to the time it takes to calculate each Grasshopper instance.
However, I would recommend reducing the population size and disabling the Grasshopper preview (on the initial screen). Also, try running the thing with just a simple mesh sphere as input instead of the actual geometry (whilst still inputting the correct performance measures), and see if you get any speed improvement, then let me know. That would be interesting to know, because there might be ways I can improve the speed by not necessarily importing meshes.
Alternatively, just send me a cut down version of your definition and I'll have a look.
Thanks,
John.
…
cs algorithms are ill-suited to calculating the transmission of a partially open screen. The sampling of rays required, and indeed the lack of actual wave-based movement of sound intensity, makes them suitable only for large-scale studies of spaces under the following assumptions, among others:
- The primary behavior of sound can be described by rays
- Diffraction contributes only effects of low significance
- Few, preferably no, obstructions between the source and receiver
With regard to sound hitting a partially open screen, a variety of behaviors come into play. Sound moves in, out, and around various points of a screen, meaning that rays cannot describe the behavior of sound for such small, delicate structures.
The good news is that some of the latest versions of Pachyderm also employ numerical methods. Try typing "Pachyderm_Numeric_Timedomain" into the command prompt, and you'll get the controls for the Finite Volume Method. This method accounts for wave-based phenomena.
Now more bad news: the method does not have an insertion loss calculation implemented, so you would have to work in the source code to add one, and it still does not have materials implemented (that last part may not be terribly important unless you intended to use porous sound-absorptive materials).
So, in any case, I don't recommend using Pachyderm to determine the sound transmission of your design. Now for some more good news: you can do a rough calculation on a calculator, making a few assumptions, if you know the open area of your screen. Let's assume your materials do not transmit at all (not strictly true, but they will transmit far less than any opening in the screen). So let's assume you design a 50% open-area screen. The transmission loss of the assembly, independent of octave band, will be at most:
TL = -10*log10(0.5) ≈ 3 dB
This means that the noise from your source will be 3 dB less on the quiet side of the screen than it will be on the railway side of the fence. Let's say that isn't enough... ok 20% open.
TL = -10*log10(0.2) ≈ 7 dB
So now it is 7 dB less on the quiet side than it is on the rail side (it will probably be up to 3 dB louder at low frequencies, but this is a rough estimate).
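The back-of-the-envelope estimate above can be sketched in a few lines of Python (a sketch of the rough formula only, not Pachyderm itself; the function name is mine):

```python
import math

def screen_tl(open_fraction):
    """Rough transmission loss (dB) of a screen, assuming the solid parts
    transmit nothing and all sound passes through the open area."""
    return -10.0 * math.log10(open_fraction)

print(round(screen_tl(0.5)))  # 50% open -> 3 dB
print(round(screen_tl(0.2)))  # 20% open -> 7 dB
```

Note how quickly the returns diminish: even a 10% open screen only reaches about 10 dB under these assumptions.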
So now the last bit of bad news: it is difficult, maybe impossible, to get a strong amount of attenuation with a screen that has open area. Even with a wall with no open area, the maximum attenuation will be 20 dBA. Opening the screen up will severely hamper its isolation. I hope this helps.
- Arthur…
release... Yes, the issue has been corrected when you manually set a number.
However, I still encounter multiple updates under the following three conditions (most easily observed when recording slider behavior):
1. When sliding to a value, the last number is always set twice.
2. When manually setting a value from within the slider pop-up menu, the value is set twice after OK.
3. When manually setting a value from the right-click drop-down menu, multiple updates occur:
a. when selecting the current value, the current value is recorded twice.
b. when confirming a new value, the new value is set twice.
I should also note that the digit scroller component also has some strange update behavior. Most notably, any click-drag event will cause a value to update; duplicate values are constantly being recorded until the scroller clearly arrives at a new value.
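As a stopgap while the double-firing is being looked at, duplicate records can be filtered after the fact. A minimal sketch, assuming you pull the recorded values into a plain list (e.g. via a scripting component; the function name is mine):

```python
def drop_consecutive_duplicates(values):
    """Keep only the first value of each run of identical consecutive
    records, discarding the duplicates a double-firing slider produces."""
    out = []
    for v in values:
        if not out or out[-1] != v:
            out.append(v)
    return out

# A slider that fires each value twice:
print(drop_consecutive_duplicates([1, 1, 2, 2, 3, 3]))  # -> [1, 2, 3]
```

This only masks the symptom, of course; it doesn't fix the underlying update behavior.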
(I hope I'm not known as 'that slider guy'... I just have an idea I want to test that is being hindered by unwanted duplicate records as a result of strange slider behavior.)…
priety software). Think Kangaroo with RON 100 fuel (add some nitrous oxide).
Back to domes.
1. Obviously you know the free WinDome Bono thing... but anyway, get it (code included).
2. As I said on another thread (http://www.grasshopper3d.com/forum/topics/the-necessity-for-a-data-tree-manager) ... the big thing in AEC (because, for instance, nobody does domes for decoration/artistic stuff etc etc) is how to implement already designed things (see images above) within a smart stuff definition (or many).
3. It goes several steps beyond that: these "breps" (to speak GH/Rhino language) are in most cases nested, and some parts are "locked" against transformations while others are not. That's the big thing when trying to outline real-life AEC solutions in the so-called smart applications. I think this is not doable in Rhino, since there's no way to edit a nested block in place.
4. It goes even further: for each custom-made thing (truss nodes and the like) there's a bill waiting. Meaning that the less customized a solution is (with regard to industrially sourced existing parts), the more likely it is that the client will sign on the dotted line.
Best, Peter…
orm a brute force search).
But that's ok, a GA (or any other stochastic algorithm) doesn't promise to find the best answer. Technically the only thing that is guaranteed is that no matter when you stop it, you will always get some answer, and the longer you're willing to wait the better that answer will become.
There are several ways to decide when to stop, Galapagos uses the first 3 of these:
Time limit. The solver is allowed to run for N minutes.
Target value. Perhaps you're not looking for the best answer, perhaps any answer that yields a fitness above 53.9 is good enough.
Stagnation. If you haven't been able to improve on an answer for X generations or N minutes, assume you're done.
Genetic variability. If all the genomes are extremely similar, you've lost the variability you need for further exploration. At this point you can either throw in the towel or start again, making sure you distribute the genomes for the next solver run away from all the genomes of previous runs. That way you're guaranteed to start exploring new areas that may hide undiscovered fitness peaks.
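The first three of those criteria can be combined in a simple solver loop. A minimal sketch (not Galapagos code; the generation evaluation is stubbed out and all names are mine):

```python
import time

def run_solver(next_fitness, time_limit_s=60.0, target=None, stagnation_limit=20):
    """Toy loop showing three stopping criteria: target fitness,
    stagnation (no improvement for N generations), and a time limit."""
    start = time.time()
    best = float("-inf")
    stagnant = 0
    while True:
        fitness = next_fitness()          # evaluate one "generation"
        if fitness > best:
            best, stagnant = fitness, 0   # any improvement resets stagnation
        else:
            stagnant += 1
        if target is not None and best >= target:
            return best, "target reached"
        if stagnant >= stagnation_limit:
            return best, "stagnation"
        if time.time() - start >= time_limit_s:
            return best, "time limit"
```

Whichever condition trips first ends the run, which matches the guarantee above: you always get *some* answer, just not necessarily the best one.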
--
David Rutten
david@mcneel.com
Tirol, Austria…
Added by David Rutten at 2:18am on September 26, 2013
list of points (only if you have too many points, so that you don't have a big delay)
2. Use a data recorder with a record limit (right click on the recorder to set this) at least twice the number of points and as large as possible to still run smoothly. (I am testing 50 points and have set the record limit to 1000 and it works ok)
3. Use [CullPt] set to "Cull All" (right click again).
4. And test when this list of points will be empty (list length equal to 0).
The accuracy of this depends on the number of points tested (larger = better), the record limit of the Data Recorder (larger = better), and the T input of CullPt (smaller = better).
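The logic of the steps above — record point snapshots, cull everything that repeats, and treat an empty remainder as "settled" — can be sketched outside Grasshopper too (a hypothetical helper, assuming points arrive as (x, y, z) tuples; this is not the CullPt component itself):

```python
def simulation_settled(recorded_points, tol=1e-6):
    """Mimic CullPt's 'Cull All': drop every point that has a duplicate
    (within tol) elsewhere in the record. If nothing survives, every
    point repeated, i.e. the simulation has stopped moving."""
    survivors = []
    for i, p in enumerate(recorded_points):
        has_twin = any(
            i != j and all(abs(a - b) <= tol for a, b in zip(p, q))
            for j, q in enumerate(recorded_points)
        )
        if not has_twin:
            survivors.append(p)
    return len(survivors) == 0
```

The same trade-offs apply: more recorded snapshots make a false "settled" less likely, and a smaller tolerance makes the duplicate test stricter.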
If you want to be absolutely sure the simulation has finished completely, you could add another data recorder at the output of [Equality] and use [Mass Addition] to count the number of Trues, so you could bake only after you have, let's say, 1000 Trues.…
f, start over:
I believe the problem of assigning the correct X,Y coordinates to each vessel was solved in the last definition I posted (I used the first value of each branch, the MMSI, to identify the vessels, and it seems to be working OK...). This way, even if a vessel stops transmitting for a while, the definition will find it when it starts transmitting again and continue drawing its course.
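The matching idea — key each incoming report by its MMSI so gaps in transmission don't break a track — looks roughly like this in Python (a sketch with a hypothetical (mmsi, x, y) record layout, not the GH definition itself):

```python
from collections import defaultdict

def group_tracks(records):
    """Group position reports by vessel MMSI; a vessel that goes quiet
    simply resumes its own track when it transmits again."""
    tracks = defaultdict(list)
    for mmsi, x, y in records:
        tracks[mmsi].append((x, y))
    return dict(tracks)
```

Because the key is the MMSI rather than the arrival order, a report arriving after a long silence is appended to the right vessel's track automatically.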
Before we get into implementing the acceleration into the definition I'd like you to explain how you imagine this (even with a quick sketch). The first thing I am skeptical about is this:
You say you want to create a construction out of this definition. This means you have three available dimensions to work with. Right now the X and Y dimensions represent the vessels' positions, so this leaves you with Z, and you will have to decide what will be represented in that dimension. Until now, time was represented in Z. If you want this to be acceleration instead, then you will not be able to show a vessel's position at a specific time (and I thought that was something you wanted). So please explain this thought a little better and we will get to it!
PS. What exactly do you have in mind? To stop getting info from the site after some time?…
on ... er ... Rhino fails (without any apparent reason) to do the required Boolean intersection. See the disabled native GH component as well, which also fails. Why? You tell me.
2. Added a more realistic option for the sides:
instead of that entirely off-topic solution:
3. Added an indicative bus shelter layout that proves ... er ... that the whole approach is wrong: you should do the layout the fully parametric way (not a task for a novice) and then sample the required random points ALWAYS between the bus parking places; that way, inside columns are possible and functional.
PS: In order to use the test instance definitions, load the Rhino file first.
is "The average number of hours of direct sunlight received by the test _geometry," as the tooltip says. Actually, I remember that this was just the sum of all the sunlight hours of each node, wasn't it? If it was not, then I am really getting old, and my memory with me :)
But ok then, if it is the average then:
1 - why is it still called "total" and not "average"?
2 - can you please explain these results to me?
I have two identical geometries to test (the dark grey boxes) and the surrounding context.
These are the results
Shouldn't results 2 and 3 then be the same?
Thank you.
Francesco…