and Grasshopper. Recently I tried a small test project just to see what I can do. My goal is to design a small house for a nuclear family - though, as you might guess, it'll be a parametric one. And I ran into exactly what's in the title. So here it goes:
1. Something is wrong with the measurement units in complex profiles. I ran into this while making an I-beam: in ArchiCAD it measured 127/76 mm, while in Grasshopper I got 127000/76200 mm, so a thousand times bigger.
2. I'm unable to turn off the preview. When I delete something in Grasshopper/Rhino, it still exists in ArchiCAD; I have to unlock it there and then delete it.
3. Coordinates of points seem broken: they have to be multiplied by 1000 to match.
4. Now one of the most important ones: is it possible to somehow SHOW Grasshopper where already-modelled ArchiCAD objects are, even if they remain static? For example, I want to make a parametric roof. Do I have to model the whole building from scratch in Grasshopper, or is there some fast way to "import" the existing scene so I can limit my Grasshopper work to the parametric parts?
5. Is it possible to use "points" as controlling points in AC? For example, I'd like to place a beam at a spot I mark with a point, then "show" Grasshopper that point and tell it to make an object there, so I can control it from Grasshopper. I tried to do this using the AC Control Point, but when I click the "Send changes" button, Grasshopper and Rhino crash immediately. It only happens with control points.
6. It seems that the Move component won't work with a 2D Curve component connected directly.
It is possible that some of these problems are outdated: I was playing around in Grasshopper a few months ago, before the summer break, but now I plan to try something new and it would be nice to know what to do. I appreciate any answer to any of these questions. Please help - you guys are my only hope. Thanks in advance! Karol…
ted a picture of in your post. The reason is that sound has larger wavelengths than light.
With a light rendering model, energy can be said to reflect specularly off geometry, because the wavelength of light is infinitesimally small relative to any object you might have modelled. With sound, energy may travel and reflect diffusely, or move around objects, depending on the scale of those objects. Think of the fundamental relation between frequency and wavelength: speed of sound = frequency × wavelength. Using that, you can see that a wave in the 125 Hz octave band is about as tall as a human being (or maybe a little taller) and would easily move around your body, not being reflected at all. A wave in the 1000 Hz octave band is as big as your forearm, and might reflect specularly from your torso. A wave in the 4000 Hz octave band is about as long as your index finger, and might reflect off your torso, or even your head.
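For what it's worth, a quick Python check of that arithmetic (assuming a speed of sound of about 343 m/s in air at room temperature):

```python
# Quick check: wavelength = speed of sound / frequency (c ~ 343 m/s in air).
SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

for freq_hz in (125, 1000, 4000):
    print("%5d Hz -> %.2f m" % (freq_hz, SPEED_OF_SOUND / freq_hz))

#  125 Hz -> 2.74 m  (about a person's height or taller)
# 1000 Hz -> 0.34 m  (about a forearm)
# 4000 Hz -> 0.09 m  (about a finger length)
```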
Similarly, if you were to model the seats explicitly, it might be relatively accurate at very high frequencies (say 4000 Hz and above), but that is a very small part of the answer. The consensus in the field is that the most accurate way to model the seats is as a flat plane, raised to about shoulder height, with scattering coefficients applied to represent the varying effects of geometry on sound. I tend to use low coefficients below 250 Hz (around 30%) and high coefficients above 250 Hz (around 90%).
Absorption depends on which seat was chosen. This is often a good parameter to use for model calibration against measured reverberation time.
Arthur…
on Excel (leaving the 0,0 cell blank and also making sure there are no commas in the names). Also, let's call the names "ID" (see the sample CSV after this list).
2 - For the weight, use numbers ranging from 1 to 10, where 10 is the highest dependency.
3 - Save the file as a Unicode CSV from Excel.
4 - Create another Excel file that has the attributes of your spaces, with the names of your spaces under the header ID (let's start with simple "area" and "SNo" attributes, but you could add more features for sorting and manipulating your data).
5 - Open Gephi and open your matrix CSV file.
6 - Import it with "," as the delimiter (comma-delimited file) and make sure you check "matrix" for the data type.
7 - Ensure the import is undirected as well (or Gephi adds silly arrows).
8 - I won't go into the Gephi bit too much, but select a ForceAtlas layout, set the repulsion force to something high (1000 or 10000, depending on the size of the data) and the attraction to a thousandth of that (1 or 10). Then go to the Data Laboratory and import your attributes Excel file, appending it to your existing datasheet.
9 - Set the node attributes to use the area for the node size and the color scheme for SNo.
10 - Play around with all the layout options and finally go to your preview. Once you're happy with it, export it to a GDF graph file.
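For reference, a dependency matrix like the one from steps 1-3 might look something like this (hypothetical space names and weights):

```
,Lobby,Office,Kitchen
Lobby,0,8,3
Office,8,0,5
Kitchen,3,5,0
```

and the attribute file from step 4 like this:

```
ID,area,SNo
Lobby,120,1
Office,450,2
Kitchen,80,3
```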
The GDF now has the coordinates of the circles and their diameters, as well as the edge connections.
I've written a very amateur script that converts this to GH geometry (below).
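In case the attachment goes missing, here's a simplified sketch of the idea (not the actual script): it assumes the GDF's nodedef has name/x/y/width columns and the edgedef has node1/node2 (adjust to your export), and that `gdf_path` is a hypothetical file-path input on a GHPython component.

```python
# GHPython sketch: read a Gephi GDF export and rebuild the bubble diagram as
# circles plus connection lines. Assumes 'x', 'y' and 'width' columns in the
# nodedef and 'node1'/'node2' in the edgedef; adjust names to your export.
import rhinoscriptsyntax as rs

nodes = {}   # node id -> (x, y, width)
edges = []   # (node1 id, node2 id)
section, headers = None, []

with open(gdf_path) as f:  # gdf_path: file-path input on the component
    for line in f:
        line = line.strip()
        if line.startswith("nodedef>") or line.startswith("edgedef>"):
            section = "node" if line.startswith("nodedef>") else "edge"
            # header names come after '>'; the type after each name is dropped
            headers = [h.strip().split()[0] for h in line.split(">", 1)[1].split(",")]
            continue
        if not line or section is None:
            continue
        row = dict(zip(headers, [v.strip() for v in line.split(",")]))
        if section == "node":
            nodes[row["name"]] = (float(row["x"]), float(row["y"]), float(row["width"]))
        else:
            edges.append((row["node1"], row["node2"]))

# width is the circle diameter, so radius = width / 2
circles = [rs.AddCircle((x, y, 0), w / 2.0) for x, y, w in nodes.values()]
lines = [rs.AddLine((nodes[a][0], nodes[a][1], 0), (nodes[b][0], nodes[b][1], 0))
         for a, b in edges if a in nodes and b in nodes]
```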
Hope this helps someone out. I'm still figuring out the Gephi streaming API, but I only started with Python about a month ago, so it might take a while to get there.
You can use the second half of the GDF file to also create dependency chord diagrams online, as shown in the third image.
https://flourish.studio/2018/07/25/how-to-make-a-chord-diagram/
Cheers,
Sanjay
…
tion) which would amount to -at a rough guess based on your image- about 1000 genes. I'm not sure how well Galapagos will be able to deal with such an amount (ironically the biggest problem will probably be the interface, not the solver algorithm).
The main theoretical problem I see is the definition of the fitness function for this. It won't be good enough to count intersections and minimize those. The reason is that the number of intersections is an integer (it doesn't vary smoothly), and as such there's no 'selection pressure' towards better solutions. If you start with a setup like this:
it would have a 'fitness' of 2. We'd like to move these two shapes apart, but even if we move them a large distance in the correct direction, we still have a fitness of 2:
It seems like a more useful metric would be the area of the overlap; at least then, when leaves move apart, the fitness value will change, which allows the algorithm to make an informed decision.
Computing curve region intersections and areas would be a very intense step, making the whole process even slower than it already is. If I had to do this, I'd first try to tackle it using pixels, as computers are very good at dealing with them quickly. You could draw an image of all the shapes, drawing each one in a transparent black. Then, wherever two shapes overlap, the resulting pixel will be darker than the single fill; if three shapes overlap, it will be darker still. Once you've created the image, you could loop over all the pixels and compute a value based on how many dark pixels there are.
This can probably be done in a reasonably low resolution, but you'd need to write some code to create and analyse the images.
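For instance, a minimal sketch of that pixel approach with Python and Pillow (the square polygons here are hypothetical stand-ins for your leaf outlines):

```python
# Sketch of the pixel approach: draw every shape in semi-transparent black,
# composite them, then count pixels darker than a single coat - those are
# covered by two or more shapes.
from PIL import Image, ImageDraw

def overlap_penalty(polygons, size=(400, 400), alpha=60):
    """polygons: list of [(x, y), ...] vertex lists in pixel coordinates."""
    canvas = Image.new("RGBA", size, (255, 255, 255, 255))
    for poly in polygons:
        layer = Image.new("RGBA", size, (0, 0, 0, 0))
        ImageDraw.Draw(layer).polygon(poly, fill=(0, 0, 0, alpha))
        canvas = Image.alpha_composite(canvas, layer)  # overlaps get darker coats
    gray = canvas.convert("L")
    one_coat = 255 - alpha  # brightness after exactly one shape over white
    return sum(1 for px in gray.getdata() if px < one_coat - 5)

# Two 200x200 squares overlapping in a 100x100 region -> penalty of ~10000 px.
print(overlap_penalty([[(50, 50), (250, 50), (250, 250), (50, 250)],
                       [(150, 150), (350, 150), (350, 350), (150, 350)]]))
```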
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
ting if its purpose is only meant to be a measure of direct beam sun. Still, given the history that LEED seems to have of selecting problematic daylight metrics that it later needs to revise (slide 2 of Alstan's presentation), I guess I should not be so surprised that these issues are arising.
I see that, in Alstan's presentation, he seems to be using Daysim to calculate ASE and, Sarith, you are right to bring up this issue that Daysim distributes the direct beam sun between 3-4 sky patches like so:
Mostapha should contribute when he gets the chance, but I imagine Daysim's poor accounting for the position of the sun in the sky might be his grounds for not exposing it on his component. That said, I know that Alstan is a smart man and a daylight expert whom I highly respect, so I would have liked to hear his thoughts on this process in his presentation.
In any case, for a more accurate means of locating the sun in the sky than Daysim's method, you can use the Ladybug sunpath component and the corresponding sun vectors. If you want to calculate the percentage of the floor in direct beam sun over the year, you should use the sunlight hours component with the sun path; and if you want to exclude sun when it is very cloudy or low in the sky, you can use a conditional statement on the sun path to remove sun vectors whenever the outdoor global horizontal illuminance is below a certain threshold. For LEED, I guess that this threshold is 1000 lux multiplied by whatever the transmittance of your windows is, but I would rather set it at 4000 lux since, as I said at the top of this post, I don't know how IES or LEED arrived at their number, and I at least know that 4000 lux is what the experts have agreed the upper limit of Useful Daylight Illuminance (UDI) should be.

This workflow with the sunpath is similar to what I do in my office to account for glare, although I also add an extra step to account for the fact that the hourly EPW data can introduce an east-west bias (since illuminance values are recorded at the end of each hour as opposed to the start). I also put the results in terms of "hours of a typical 24-hour day", since many people seem to have a hard time intuitively understanding "hours of the year". Finally, I use these sunlight hours to make a temporal map showing when the glare is likely to occur. I have found a good correlation between the presence of direct beam sun shining on the floor in these studies (even in small amounts) and Daylight Glare Probability (DGP) values above 0.4 (perceptible glare) for views looking towards the window or towards the floor by the window.
Here is an example file showing you how to do this calculation with the sunpath for yourself:
http://hydrashare.github.io/hydra/viewer?owner=chriswmackey&fork=hydra_2&id=Estimate_Glare_Potential_Over_a_Year
Also, Mahsa, there is usually no need to use Excel when you have the data in GH. GH provides native math components that have essentially all of the capabilities of Excel (see the example file above).
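As a rough sketch of that conditional statement (hypothetical list names; in GH these would be the sunpath's sun vectors and the EPW's hourly global horizontal illuminance, aligned by hour of the year):

```python
# Keep a sun vector only when the EPW's global horizontal illuminance for
# that hour clears a threshold. 'sun_vectors' and 'illuminance_lux' are
# hypothetical hour-aligned input lists.
THRESHOLD_LUX = 4000  # upper limit of Useful Daylight Illuminance

filtered = [vec for vec, lux in zip(sun_vectors, illuminance_lux)
            if lux >= THRESHOLD_LUX]
```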
Hope this helps,
-Chris…
As you may know, a PCS (from now on I'll abbreviate the polar coordinate system as PCS, and the Cartesian one as CCS) describes a point's position with two values (like x and y in CCS): r and theta, written (r, theta). r is the distance from the PCS center, and theta is the angular dimension, in the 0 to 360 (or 0 to 2*pi) domain.
To hark back to David's guide line: here it is replaced with a guide circle.
Why sort points like this? As usual, one image tells more...
Here is the logic behind all this:
1. Find the average point of all given points.*
2. Find the point furthest from the average point.*
3. Create a circle centered at the average point, with radius equal to the distance from the average point to the furthest point.*
*Steps 1-3 can be replaced with a custom hand-made circle; I decided to automate it this way.
4. For each point, find the closest point on the circle - this will be used to find the theta value.
5. For each point, find the distance to the average point - this is the r value.
6. To overcome the problem of identical theta values (like identical x values in CCS), instead of multiplying by 1000 we will use the new Create Set component. This component creates a set of integers, each one representing one unique input value. So if points A, B, C, D, E are (r, theta):
A (1, 30)
B (2, 30)
C (3, 30)
D (1, 45)
E (1, 60)
Then Create Set will output a list of integers: 0, 0, 0, 1, 2 (the same theta for A, B and C, different thetas for D and E). Now it's getting really easy: remap the r values to the domain 0 to 0.5 (or anything less than 1), and add the integers from the Create Set component to the remapped r values.
7. So what we have now is a list of floating-point numbers: A = 0, B = 0.25, C = 0.5, D = 1, E = 2.
The benefit of remapping is that the r values can never affect the integers representing the theta values - and all the information is stored in one floating-point number! By sorting these values we obtain the proper order of the points; to complete this, we need to sort the points in parallel with the values (see the sketch below).
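To make this concrete, here is a minimal Python sketch of the whole recipe, with plain (x, y) tuples standing in for GH points. Note that for the combined key to sort by angle, the set integers need to follow angular order, which is what you get if the thetas are sorted before they reach Create Set:

```python
# Minimal sketch of the polar sort: angle around the average point as the
# primary key, radius as the tiebreaker packed into the fractional part.
import math

def polar_sort(points):
    # Steps 1-2: average point (the guide circle's center).
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    # Steps 4-5: theta and r for every point, relative to the average point.
    polar = [(math.atan2(y - cy, x - cx) % (2 * math.pi),
              math.hypot(x - cx, y - cy)) for x, y in points]
    # Step 6: 'Create Set' - one integer per unique theta, in angular order.
    index = {t: i for i, t in enumerate(sorted({t for t, _ in polar}))}
    # Remap r to 0..0.5 so it can never outweigh the theta integers.
    r_max = max(r for _, r in polar) or 1.0
    keys = [index[t] + 0.5 * r / r_max for t, r in polar]
    # Step 7: sort the points in parallel with the combined keys.
    return [p for _, p in sorted(zip(keys, points))]

print(polar_sort([(3, 1), (0, 4), (-2, -1), (1, -3), (2, 2)]))
```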
What's really cool about polar sorting: there can be any number of points, but the polyline connecting all of them will never self-intersect. Probably there is some relation to the 2D convex hull.…
(1) I have been exporting small sections of a larger model from Rhino into Maya as FBX. In Maya I rotate and scale the models (-90 in X, scale XYZ 0.001). The Named Views are saved, but they do not import successfully into the Maya model: they do not appear as they do in Rhino, and the problem is not solved by scaling or rotating the cameras.
(2) If I try going the other direction, the cameras exported from Maya as FBX also do not align with the model in Rhino the way they do in Maya. I will do my best to post some images of the problem and hope you can help.
Error!!
This is what the named views look like.
Here I am trying it the other way, with a good view from Maya.
Strange placement...
This is the best result I can achieve, after I scale the camera by 1000.
Any advice???
Thanks, Robert.
…
.. then you put (or drill) rather "canonical" patterns that form the inner/outer skin (or both).
2. The above approach hits 3 walls: (a) very slow response (Rhino is a surface modeller), (b) potential boolean/fillet issues (Rhino is a surface modeller), and (c) a potential aesthetic antithesis between the liberty of the "whole" vs the "strict" rules of the "details".
3. Since you opt to work with Rhino, it could be worth playing its own game: deforming surfaces, that is... by working on control points or via the Morph methods. Then join them and get the decorative thingy as a "solid".
The images below are from a C# that gets the control points of surfaces into Lists and "deforms" them according to a gazillion options: (a) via any "on-the-fly" defined pattern (take or skip this control point, i.e. shift branches/items), (b) using any number of attractors in any push/pull mode, (c) using chaotic vector values, (d) using... well, too many ways to list here.
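For the record, the core attractor idea is simple enough. A minimal GHPython sketch (not the actual C#; 'srf', 'attractor' and 'strength' are hypothetical component inputs):

```python
# Move every control point of a surface along its direction from the
# attractor, with a distance falloff. 'srf' (Surface), 'attractor' (Point3d)
# and 'strength' (float; positive pushes, negative pulls) are inputs.
import Rhino.Geometry as rg

nurbs = srf.ToNurbsSurface()
for u in range(nurbs.Points.CountU):
    for v in range(nurbs.Points.CountV):
        pt = nurbs.Points.GetControlPoint(u, v).Location
        vec = pt - attractor
        dist = vec.Length
        if dist > 1e-9:
            vec.Unitize()
            pt += vec * (strength / (1.0 + dist))  # nearby points move the most
        nurbs.Points.SetControlPoint(u, v, rg.ControlPoint(pt))

a = nurbs  # the deformed surface, as the component output
```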
Imagine what the Alien cuppa def does (modifies "diagonally" control points) ... multiplied by 1000.…
s about our take on the issue.
How do we use it? Why do we use it? These are great questions. SGJJR sees the potential of using iterative parametric models to chart the design space. Understanding how the current design performs is helpful, but the next question is always "what can we do to make it better?" If we understand the design space, we can predict how to improve the design and discuss the ramifications of such decisions while they are being made. Too often, decisions are made without a full understanding of the consequences, because analysis is expensive and not timely enough to feasibly include in a meeting. This is especially true in conceptual design, when some of the most important decisions are made.

In practice, tools like Pollination can empower a conversational design process that allows an integrated team to collaborate far more effectively. The trick, in our mind, is predictive modeling, which Pollination supports. The goal should be to direct the team towards the critical design decisions, make the right call, and move on. This process is empowered when models account for competing metrics, such as cost, energy and daylight quality. I'm curious how others think these tools can be applied. It is incredible to be able to calculate 1000 iterations, and I trust everyone on this forum to figure out the nuts and bolts to make that possible, but what do we do with it? What design decisions are we targeting? What should we be targeting?
One successful small example we did was to inform the interior design of an office building. We had some ideas about changing the room finishes, cubicle heights, cubicle finish, and about creating a lighter-colored perimeter floor band. All of these things went against the interior design. By running every combination of these input parameters (at several settings), we were able to quickly determine that all successful designs had low cubicles. Rather than waste time fighting over finishes, we held firm on the cubicles and compromised on everything else. I imagine more teams can use Pollination to guide the design process in this way. Rather than using Pollination to optimize, it empowers us to understand the design space, identify the critical parameters, and chart a path forward.
You mentioned massing and orientation. I think this is a huge opportunity for us all. These decisions are often made long before proper analysis is considered, because up until now that analysis has been too slow, too expensive and not useful enough. This new tech can change all that. It's a tough problem. Site constraints, programming constraints, and client preferences rightly play a large part, but we need to understand performance as well. What do our architects need to know when developing that first blocking mass study? What drives performance? As our architects balance 18 different goals, what are the critical things they need to consider to understand performance? It's not enough to know that too much glazing will hurt EUI. That will not sway the designer. Knowing that there is no way to meet LEED platinum with that southwest glassy atrium, will. Identifying that the only solutions that perform well have certain things in common will encourage those concerns to rise in the hierarchy of criteria being considered.
So what metrics are we calculating? My first instinct is EUI, cost, DA, and ASE. But that's not actually what we are looking for. Our architects care about % of regularly occupied area that is well daylit, or if we are going to meet our energy or budget targets. We need to take a step past the raw metric and translate that into a design decision recommendation. Displaying the data in terms of "% Well daylit", "% EUI under target", and "$/sf" are more useful to the team. Anyone have other suggestions?
What input parameters can we control to affect performance? Floor-to-floor height, plate depth, ceiling height, fenestration design, construction type, orientation, external shading, massing options, mechanical system choice. That's a lot of iterations, but in theory, we should be able to study all combinations. What else am I missing?
Can anyone share links to articles or white papers that explore this topic? I'm eager to learn more and hear what other people are thinking.
…