one bug on our end.
First, you set the natural ventilation setpoint to be only one degree greater than the heating setpoint. This frequently causes the building to expend heating energy reaching the setpoint only to throw away this heat a few minutes later when the indoor temperature rises by one degree. Generally, I would leave at least 3 degrees between these two setpoints unless you are using some special types of schedules that ensure this conflict does not happen.
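The setpoint conflict above is easy to check numerically. A minimal sketch in plain Python (the function and variable names are illustrative, not actual Honeybee inputs):

```python
# Hypothetical sketch of the deadband check described above; heat_setpt and
# natvent_setpt are illustrative names, not Honeybee component inputs.
MIN_DEADBAND = 3.0  # suggested minimum gap in degrees C between the two setpoints

def check_deadband(heat_setpt, natvent_setpt, min_gap=MIN_DEADBAND):
    """Return True when the natural ventilation setpoint sits far enough
    above the heating setpoint to avoid heat-then-vent cycling."""
    return (natvent_setpt - heat_setpt) >= min_gap

print(check_deadband(21.0, 22.0))  # only 1 degree apart -> False
print(check_deadband(21.0, 24.0))  # 3 degrees apart -> True
```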
Second, I do not know what you were trying to do with so many Solve Adjacencies components, but when you hook up only one zone to this component and set "removeAdjacencies" to true, you will overwrite all of the previous boundary conditions that you set in the surface-by-surface method. These were the boundary conditions that you were sending into EnergyPlus in the file you uploaded:
As you can see, you were blowing your heating load out of proportion by getting rid of your originally-set adiabatic walls. In the attached GH file, I use just one solve adjacencies component and these are your resulting boundary conditions:
Finally, there was a bug in the function that checks the normal direction of surfaces input to the energy model and Mostapha just fixed this one this past week (https://github.com/mostaphaRoudsari/Honeybee/issues/365). This was causing the solar gain calculation to be incorrect in your model, which, admittedly, is the major reason why the heating loads between the north and south facades were not different. As you see in the attached GH file, you now get very different heating loads for the north and south facades:
I hope that all of this helped and, again, sorry for the delay.
-Chris…
pManager.AddPoint3dParameter("Points", "P", "description", GH_ParamAccess.item);
pManager.AddCurveParameter("Curves", "C", "description", GH_ParamAccess.item);
Register Output:
pManager.AddPoint3dParameter("Nodes", "N", "description", GH_ParamAccess.item);
pManager.AddPoint3dParameter("Tails", "T", "description", GH_ParamAccess.item);
Tails are the NurbsCurve start points, and that is where I am getting 3 points instead of the expected 2 points.
Many Thanks!
…
other ordering mechanisms. If you feed layer names in explicitly, output is grouped by layer. If you feed multiple types or multiple object name filters, output is grouped by these instead, following standard data tree behavior. There are a number of ways to achieve layer-based sorting of objects using existing components:
Letting anything be organized by the layer-palette order is extremely sticky, since it can be re-ordered on the fly by various sort mechanisms. See more on this topic in my conversation with Tim Halvorson in the comments on the Human Group page. As it stands now I do not alphabetize layers in the layer table component as you say - I use the layer index, so that as new layers are added/rearranged, the entire order of the set doesn't change in unpredictable ways. If layer sort order is preferable to you, you can use the simple one-line script I provided Tim in order to retrieve the sort index and use it to sort your layers. My priority for both object order and layer order in the components is to minimize unnecessary changes/refreshes/event listening by returning everything exactly as the SDK gives it to me.
(3) I've added the Include Locked and Include Hidden options to the latest release attached to this post. However, this only pays attention to hidden objects, not hidden layers. I do not want to include a toggle for "Include Hidden Layers" because this would force the dynamic pipeline to expire every time a layer's visibility changed, which I do not want - for most purposes this would cause unnecessary recomputing. However, with the components as they exist now, you can drive the dynamic pipeline with a layertable set to auto update, like so:
This has the effect I think you are after, which is to ignore objects on layers that are hidden - and recognize them as soon as the layer is turned on.
…
to do once I figured out how you use only a small portion of each of my generated curves to make the 360 degree Loft surface. I had a huge AHA! moment when I realized the complete Loft surface really only needs a small portion of the generated curves rotated around to form a closed (except for top & bottom) surface. That is a major new insight for me and I appreciate you pointing it out.
I also tweaked the Twist angle parameter a bit so the resulting positive and negative Twist surfaces, when combined, yielded a result that was closer to my original shape. This is when I discovered something very interesting.
When I baked/exported the result using just one of the 2 twisted surfaces I got an STL file that had no errors, that 3D Builder was able to simplify from a 37 MB file to a 3 MB file, and that sliced A-OK. But, when I combined the left and right twisted surfaces, I was back with my same set of problems: the exported STL file had many errors, could not be fixed, and did not slice properly.
I went back to my original layout that uses the complete set of generated curves to create the Loft surface and found I got exactly the same results - using only one twisted surface worked fine, but nothing worked when the left and right twisted surfaces were combined. By nothing I mean I tried all the standard methods (GH Join and SUnion, Rhino Solid Union, Join, etc.). What I think this means is that the Loft surface behaves the same, and apparently is the same, regardless of whether it is generated by rotating strips or by using complete closed curves.
Furthermore, I am guessing the problems with the combined/exported STL file made from both left and right twisted surfaces has to do with overlapping/coincident parts of each one - like the top & bottom planar surfaces and some of the wiggly parts.
If I am correct about this, then it suggests to me that there is some sort of glitch in Rhino's STL Export function. This is surprising to me since I thought an STL file only paid attention to the external shape of things, and did not know or care about any inside stuff. Of course this is all conjecture on my part, but at least for now it seems it will be impossible for me to actually print the double-twisted geometry.…
Added by Birk Binnard at 3:52pm on September 23, 2016
be in your definition is impossible.
The 2 (or 3) sides of the cube's corner pieces can never be the same color (see image:)
because they are actually part of the same piece (they always move together).
So in your definition it is as if you have removed the stickers from the cube and replaced them randomly, which results in an unsolvable cube...
In order to start with a properly scrambled cube, I believe you could start with a solved cube and perform a large number of random rotations on it (just like you would in real life).
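That scramble-by-random-moves idea can be sketched in a few lines of plain Python, generating a sequence in standard face-turn notation and never turning the same face twice in a row (the move count and names here are my own illustrative choices, not from the original definition):

```python
import random

FACES = ["U", "D", "L", "R", "F", "B"]   # the six faces of the cube
TURNS = ["", "'", "2"]                    # quarter turn, inverse, half turn

def random_scramble(n_moves=25, seed=None):
    """Generate a scramble sequence; consecutive moves never repeat a face,
    since U U' would cancel and U U2 is really a single turn."""
    rng = random.Random(seed)
    moves, last_face = [], None
    for _ in range(n_moves):
        face = rng.choice([f for f in FACES if f != last_face])
        moves.append(face + rng.choice(TURNS))
        last_face = face
    return moves

print(" ".join(random_scramble(25, seed=1)))
```

Applying such a sequence to a solved cube guarantees a reachable (solvable) state, unlike assigning sticker colors at random.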
On another subject:
"There are over 43 quintillion legal positions of the Rubik’s Cube.
It would take thirteen hundred million years to see every position if you were able to view one thousand per second.
If we stacked 43 quintillion pennies, the stack would be tall enough to reach the sun and return to the earth four thousand billion times."
source: http://b.chrishunt.co/how-many-positions-on-a-rubiks-cube
So, trying to brute-force the Rubik's cube is definitely not the way to go... :)
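As a quick arithmetic sanity check on the quoted viewing-time figure:

```python
# Number of legal states of a 3x3x3 Rubik's Cube (~43 quintillion)
POSITIONS = 43_252_003_274_489_856_000
RATE = 1000                         # positions viewed per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = POSITIONS / RATE / SECONDS_PER_YEAR
print(f"{years:.2e} years")  # 1.37e+09 years, i.e. "thirteen hundred million years"
```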
Of course there are a number of programming algorithms for solving the cube (examples) but I don't know how easy it would be to implement them in GH....
Best of luck and please keep us posted!
Nikos
…
Added by nikos tzar at 10:42am on January 31, 2017
Giulio's latest bakeAttribute, so it also sets a specified layer color?
Thanks,
Phillip
Reply by Giulio Piacentino 1 hour ago
Hi Phillip
if possible, you should try to modify layer colors independently from baking. A layer can have only one color, but many objects.
To modify a layer color, use something along these lines:
if (!string.IsNullOrEmpty(layer) && !color.IsEmpty)
{
  int n = RhinoDoc.ActiveDoc.Layers.Find(layer, true);
  if (n < 0) return;
  Rhino.DocObjects.Layer l = RhinoDoc.ActiveDoc.Layers[n];
  l.Color = color;
  A = l.CommitChanges();
}
Can I also ask you to start a new discussion next time? I hope this helps,
- Giulio
…
example:
I wish to populate a rectangle with some random points, but I need them to be more dense at the base of the rectangle and then linearly getting more and more sparse towards the top.
This is how I worked it around:
1) first I have created a triangular prism,
2) then I've populated its volume with some random points
3) and finally I've projected them on the plane I'm wishing to populate.
But I don't really like the final result since the points are not as nicely spaced as if they were produced by the "Populate 2d" command. They look kind of "clumpy":
Do you have any better idea?
The best thing would be to be able to put a grayscale bitmap underneath and use it as a "density map"...
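One way to approximate that density-map idea outside Grasshopper is rejection sampling: throw uniform random points and keep each one with a probability given by a density function - a grayscale bitmap lookup could stand in for `density`. A minimal Python sketch, with made-up names:

```python
import random

def populate_with_density(width, height, count, density, seed=None):
    """Rejection-sample `count` points in a width x height rectangle.
    `density` maps (x, y) to a keep-probability in [0, 1]; a grayscale
    bitmap sample could be plugged in here as the density function."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < count:
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if rng.random() < density(x, y):
            pts.append((x, y))
    return pts

# Linearly denser toward the base (y = 0), sparser toward the top.
linear = lambda x, y: 1.0 - y / 10.0
pts = populate_with_density(10.0, 10.0, 500, linear, seed=42)
```

Note this still produces white-noise, "clumpy" spacing; for Populate 2d-like evenness you would need a blue-noise method such as Poisson-disk (dart-throwing) sampling with a radius scaled by the density.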
Here you have the .gh file I made:
prism.gh
Thank you very very much for the help! :)
By the way:
While I was preparing my 3d random distribution of points I've spotted a weird behaviour of the random command:
Even if the seeds are all different, for some seed values the points still lie on some common planes...
To solve that I had to jitter the output of one of the Random components.
I suppose this is a weakness of the pseudorandom generator implemented in the random component, isn't it?…
ilion.
Then I sketched the outline curves in Rhino with a few control points. The building is symmetric, so I only drew one side. But I'm not sure what works better for a Voronoi: a sharp or a soft surface? Or do I need points?
So I have some questions:
1. How can I loft the curves correctly? My problem is that if I divide my curves to get more control points, Grasshopper automatically changes my curve. That's OK, but then I have a problem with a short curve, which fit the long one before but can't connect after the division.
So I tried to duplicate the long curve and split it, but it doesn't work with the Shatter component. It always cuts the curve somewhere.
2. My next problem: the curves in Rhino should be my main construction, which is always visible, so I decided to offset the curves so that I get a column. But I don't know how to orient the offset curves along the XYZ axes.
3. Hopefully, once I have the surfaces, how can I build a Voronoi which is offset and maybe has some different thicknesses? :D
It would be really great if someone could help me. I tried a lot, but not everything is simple.
Sorry for my bad English.
Thx, Max
Here are my files:
FCP_MAX_GH_konstruktion_1.3dm
FCP_MAX_GH_konstruktion_1.gh
…
w how. Thanks for that. Now I do have some questions.
1. I am using the area weight tool. I first calculate the volume of the form, then multiply that value by its density. So for concrete I am using 2400 kg/m^3 x volume. I then divide that number by the area of the membrane that is supporting the mass. This gives me my area weight. It seems to be working well, but I want to verify that this is the correct workflow. I also want to verify that gravity should be turned off, since I am thinking it is already accounted for within the weight component.
2. I am finding that the new triangular element tool works much better than trying to use EA/L as input for the springs from mesh. Even when I set the timestep, subiteration, and drag, I still have issues getting very stiff materials to work. On the new finite elements tool I wanted to verify that E is in pascals. I also wanted to ask, if I use imperial units, can psi be entered? From what I am seeing, the materials are deforming more than expected, and to get less deformation and stretch in the mesh area I am finding the E value needs to be increased well beyond the true material values. Often I am raising E by a factor of 10 or 100.
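The two unit calculations above can be sketched in a few lines; the numbers below are illustrative, not taken from the actual model:

```python
def area_weight(volume_m3, density_kg_m3, support_area_m2):
    """Workflow from question 1: mass of the poured volume divided by
    the membrane area supporting it, giving kg/m^2."""
    return volume_m3 * density_kg_m3 / support_area_m2

PA_PER_PSI = 6894.757  # pascals per psi

def psi_to_pa(psi):
    """Question 2: convert an E value given in psi to pascals."""
    return psi * PA_PER_PSI

# e.g. 0.5 m^3 of concrete at 2400 kg/m^3 over 6 m^2 of membrane:
print(area_weight(0.5, 2400.0, 6.0))   # 200.0 kg/m^2
# e.g. E = 30,000 psi expressed in pascals:
print(psi_to_pa(30000.0))              # ~2.07e8 Pa
```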
I am going to describe my problem and I will gladly share the definition if you'd prefer looking it over but basically I have an inflated membrane at a certain pressure made of a particular material. I then have a certain volume of concrete on top of the inflated membrane. My goal is to review the displacements as the concrete is applied over the membrane and find the proper pressures to apply to keep it free from deformation. I am including a picture from a project that we used kangaroo on and attempted to deal with such issues. It was a class sponsored by Cloud9 architecture held at Art Center College of Design where I was one of the instructors. Hopefully this illustrates the problem. To summarize any example file that shows the best way to implement real material properties and unit based forces would be a helpful reference and would be greatly appreciated.
…
The triangular element properly discretizes the area continuum forces, so they are independent of meshing density, unlike simply using a network of 1d springs.
The warp and weft stresses can also be set separately allowing greater control of the shape (making them equal will give minimal surfaces).
Because the soap film elements alone do not have any in-plane stiffness, it can often be useful to have some spacer elements to keep the nodes well distributed.
Also, if mesh edges follow geodesics on the surface, it helps keep the strips straight when unrolled, allowing more efficient use of material.
The G-string component can be used for both - keeping the nodes well spaced, and aligning edges with geodesics. It pulls each node toward a combination of its neighbours, but taking only the part of the force tangential to the surface, so it does not interfere with the shape of the surface, only affecting how the nodes are distributed on it.
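The tangential projection described here can be sketched in a few lines - plain Python with my own function name, not the Kangaroo API: subtract the normal component of the pull from the full vector, F_tan = F - (F·n)n.

```python
def tangential_component(force, normal):
    """Return the part of `force` tangent to a surface with unit `normal`:
    F_tan = F - (F . n) n. Dropping the normal part is what lets the
    redistribution move nodes along the surface without changing its shape."""
    dot = sum(f * n for f, n in zip(force, normal))
    return tuple(f - dot * n for f, n in zip(force, normal))

# A pull straight along the normal leaves no tangential part:
print(tangential_component((0.0, 0.0, 2.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 0.0)
```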
The "GeoIndex" input lets you choose which neighbours will be used here. In this example, a quad mesh is used, and index 0 and 2 give the neighbours in the warp direction, while 1 and 3 are in the weft direction. Note that it is the triangulation of this quad mesh that is used for the actual soap-film elements.
There is also a "spacing" option. If this is true, the nodes will try and space out evenly along the geodesic, while if it is false, only the direction is affected. In this example it is set to false for the warp and true for the weft.
The example also includes use of the stripper and unroller components to get the flat strips. I have shown the result of splitting in either direction, and as you can see, only one of these is straight.
Finally, if all of this sounds overly complex, don't worry - for quick studies you can still use the simpler approach of just turning all edges of a mesh into springs, and provided you have a decent starting mesh, the result will be very similar to using the 2d element method given here. This is just provided for those who want to take things to a greater degree of accuracy and further towards fabrication.…