ed when membrane cones are invited to the party (then mesh the brep (Starling is the best way) and send the data to Kangaroo: the easiest thing to do). But Patch doesn't trim the inner loops, and... well, initially I thought I'd find this in the SDK and do the job:
Well... I confess that I can't get the gist of Brep.Trim (as explained in the SDK).
Thus on to plan B: having the closed breps (the "cones") already as cutters... attempt a Boolean difference,
but it does this instead (which looks a bit paranoid to me, but some reason must exist):
What I want is this:
The code that messes things up is this (open the script inside the attached definition):
BTW: where in the SDK is that DeBrep thing?
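(For anyone else digging: a minimal C# script-component sketch of the SDK calls involved; the method name and the 0.001 tolerance are my assumptions. There is no single "DeBrep" call in RhinoCommon (the component's F/E/V outputs correspond to the Faces, Edges and Vertices collections on Brep), and the plan-B Boolean lives on the static Brep.CreateBooleanDifference.)

```csharp
using System.Collections.Generic;
using Rhino.Geometry;

// Minimal sketch (method name and tolerance are assumptions).
Brep[] TrimAwayCones(Brep patch, IEnumerable<Brep> cones)
{
  // "DeBrep" equivalents: patch.Faces, patch.Edges, patch.Vertices.
  // CreateBooleanDifference returns null when the Boolean fails,
  // so always check before using the result.
  return Brep.CreateBooleanDifference(new Brep[] { patch }, cones, 0.001);
}
```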
BTW: Delaunay GH syntax is still cryptic to me (but this is not an issue anymore)
I would greatly appreciate any help on that final step (to greatness).
The full working definition is coming soon (v5, with 90% of the components replaced by C# stuff).
best, Peter
…
bout angle, since the exact same wires can suddenly start working fine later! Just adding new items to Rhino and then using undo to get back to your failing geometry will sometimes fix it?! Flipping the directions of the pair of curves, either one or both, fixes it. It's just black-box broken. It happens for really boring angles near 90 degrees.
Rotating the entire pair in space has no effect.
Rescaling the lines from their joint point has no effect.
Simply cutting and pasting the lines out of Rhino and back in *sometimes* fixes it, so it's angle plus something else that makes certain lines "toxic."
Duplicating the pair of failed lines via alt-dragging the Rhino gumball fails to fix it.
Running the "line-like curves" through a Line component to give "lines" doesn't fix it.
Re-creating the lines by extracting endpoints fails to fix it.
Each line works fine if separated from the other.
Grafting makes each line into its own little cylinder minus a hub.
The error is the boilerplate "Object reference not set to an instance of an object."
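That message is the generic .NET NullReferenceException. A plausible (unverified) mechanism, sketched below: many RhinoCommon construction methods return null on failure rather than throwing, and the first use of such a null result dies with exactly this error.

```csharp
using Rhino.Geometry;

// Hypothetical illustration of the failure pattern, not Exoskeleton's
// actual code: CreatePipe returns null or an empty array when it cannot
// resolve the geometry.
Brep PipeOrNull(Curve rail)
{
  Brep[] pipes = Brep.CreatePipe(rail, 1.0, false, PipeCapMode.Round,
                                 true, 0.001, 0.01);
  if (pipes == null || pipes.Length == 0)
    return null; // an unguarded pipes[0] here would be the crash
  return pipes[0];
}
```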
Once the pair spontaneously starts working I cannot reproduce the error with that pair again, though sometimes Rhino undo will get me back to failing.
CAN ANYBODY REPRODUCE THIS WITH MY FILE? If so I can submit a bug report.
Exoskeleton is here: http://www.grasshopper3d.com/group/exoskeleton
Source code is here but it's for compiling, not something I can just test in a C# component out of the box:
https://github.com/davestasiuk/Exoskeleton2/commit/f63c4aa691a7f26b...
…
s is like flattening your data PARTIALLY - chopping an index off the end of the branch paths without obliterating the tree entirely. When working with one "set" of input data, a flatten works to get these lists to match up - but when working with multiple sets, we need to be careful to preserve the original branch indices that keep all four of your original regions separate. As a rule, whenever you're feeding two data trees into a component, they should have the same number of branches (or, in other cases, one should have branches and the other should be a flat list).
The rule of thumb I tend to teach is this:
In 90% of cases...
For lists, all your inputs should either have 1 item or N items. That is to say, if you're feeding 4 items into one input and 9 items into another, something is probably wrong.
For trees, all your inputs should have either 1 branch or M branches. That is to say, if you're feeding a tree with branches {0;0} to {0;3} into one input, and a tree with branches {0;0;0} to {0;3;8} into the other input, something is probably wrong.
Grasshopper essentially matches up branches first, then lists second. By "matching" I mean it processes them together. A simple example with the Line component: it will match the first branch of points in the A input to the first branch of points in the B input, creating lines between those points, then match the second branches, the third branches, etc. THEN it applies the same logic at the level of the list (with a pair of matched branches {0;2}, it matches all the items in those branches to each other: first item in one branch to the first item in the other branch, etc.).
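Here is the same branch-first, item-second rule written out as a C# sketch against Grasshopper's DataTree class (this mimics the solver's longest-list matching for illustration; it is not Grasshopper's actual source):

```csharp
using System;
using Grasshopper;
using Grasshopper.Kernel.Data;
using Rhino.Geometry;

DataTree<Line> MatchLines(DataTree<Point3d> a, DataTree<Point3d> b)
{
  var lines = new DataTree<Line>();
  int branches = Math.Max(a.BranchCount, b.BranchCount);
  for (int i = 0; i < branches; i++)
  {
    // Branches first: when one tree runs out, its last branch repeats.
    var ba = a.Branch(Math.Min(i, a.BranchCount - 1));
    var bb = b.Branch(Math.Min(i, b.BranchCount - 1));
    GH_Path path = a.Paths[Math.Min(i, a.BranchCount - 1)];
    int items = Math.Max(ba.Count, bb.Count);
    for (int j = 0; j < items; j++)
    {
      // Items second: when one branch runs out, its last item repeats.
      Point3d pa = ba[Math.Min(j, ba.Count - 1)];
      Point3d pb = bb[Math.Min(j, bb.Count - 1)];
      lines.Add(new Line(pa, pb), path);
    }
  }
  return lines;
}
```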
This is a tricky concept but it seems like you're already well on your way to understanding it from your definition - "PShift" is a critical tool in your path management arsenal. I hope this (overly long) response helps clear things up for you!
…
he TOF and TSRF indices. They show how "distant" your _PV_SWHsurface is from the optimal surface in terms of tilt and azimuth angles. However, in your case we are not interested in the TOF and TSRF indices. We would just like to know the optimal tilt and azimuth angles, regardless of the supplied _PV_SWHsurface.
So the circular surface supplied to the "TOF" component's _PV_SWHsurface input is irrelevant. It can be of any area and any tilt/azimuth angle.
The PV_SWHsurfacesArea output of the "PV SWH system size" component depends on a couple of factors:
- moduleActiveAreaPercent_ (leave it at 90%),
- moduleEfficiency_,
- systemSize_.
Calculation of systemSize_ depends on your electricity demand, the cost of the PV system, the type of object, country, local regulations etc. This is something that an engineer needs to determine. For example, in the USA, for a residential house in the Sunbelt, depending on finances, a household would try to cover 100% of its annual electricity needs with its PV system. This means that the systemSize_ you choose needs to cover the annual electricity consumption. You can perform an EnergyPlus simulation or use any other way to get the annual electricity consumption.
Ladybug "Photovoltaics Performance" component can calculate the optimal systemSize_ by given the annual electricity consumption.However the component is made to address fixed tilt and azimuth PV systems only.An approximate way to overcome this is to calculate the optimal systemSize_ for fixed tilt and azimuth PV system, and then multiply it with the "difference in %s" panel at the very right of the fixed_vs_tracker_PV2.gh file. Again, this is not what Ladybug "Photovoltaics Performance" component is made to do, but it will probably get you in a ball park.
An input of 32 degrees for the north_ direction is actually 328 degrees. This is because Ladybug Photovoltaics is based on the NREL model, which uses a clockwise angle convention. This convention is also the one most commonly used in solar radiation analysis.
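In code form, the conversion implied by that example looks like this (a sketch; the formula is my inference from the 32 -> 328 pair):

```csharp
// Convert a counterclockwise angle to the clockwise (NREL) convention.
double ToClockwiseDegrees(double ccwDegrees)
{
  return (360.0 - ccwDegrees) % 360.0; // ToClockwiseDegrees(32) == 328
}
```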
The Dubai weather data files are uploaded here.
…
s levels of detail by subdividing a 6-sided cube mesh and projecting its vertices according to a referenced height map. This is one of the standard conventions for building full-size planets. At the lowest level (0) the mesh planet is made of 6 pieces (each 32x32 resolution). The next level down (1) is made of 24 pieces: each of the 6 pieces divided into 4 = 24. Level (2) is 96 quads, etc. The script will generate each quad at its subdivision level and compare edge vertices to neighboring quads. It will then make sure any shared vertices are in fact at the same projected vector. This ensures a planet quad with edge vertices that match.
The problem comes in texturing each quad.
If I build the quad as a nurb surface from points I can place the texture easily because each surface UV maps squarely to my texture map (which is also square).
If I build the quad as a mesh I cannot just apply the square texture to the mesh UVs, because when you unwrap the UVs from a mesh they will not unwrap like a nurb surface's UVs. Therefore, to get the correct mapping I would have to manipulate each UV back to an evenly aligned array (which is 1024 points at a 32x32 resolution). Maya and Blender have 'relax UV' and 'align UV' functions, but they don't do the trick, and manual corrections are out of the question. So why not skip the mesh method and use the nurb method?
I did this and there is a trade-off. The nurb will accept the material texture I want with no other work on my end, but when I export the object as an .obj, Rhino creates its own mesh to describe the nurb (with various unsatisfactory setting options). This works great up to a point, because at some level the interpreted mesh will have vertices that do not match at the edges, i.e. creating visible seams in the mesh. The picture below is the nearly seamless planet at LOD(1), made of 24 quads, each with 32x32 vertex resolution and a 512x512 jpg texture, running in Unity3d 5. It works, but up close there are seams. This will be resolved simply by having the next LOD(x) instantiate before getting close enough to see the seam, but at core nerd level I want the seamless mesh.
So, I can make the seamless mesh but I cannot realistically texture-map it. I can also make the nurb surface from points and texture it, at the expense of the edge vertices matching. I am at the split in the road, but I want to have my cake and eat it too. Thoughts, comments, trolls...?
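For what it's worth, here is a minimal sketch of the mesh route with explicit per-vertex texture coordinates (the resolution, the cube-face projection and the height stub are assumptions). In principle a Rhino mesh can carry the same square 0..1 UV layout a surface gets automatically, since RhinoCommon exposes one texture coordinate per vertex:

```csharp
using Rhino.Geometry;

Mesh PlanetQuad(int n) // e.g. n = 32 for a 32x32 quad
{
  var quad = new Mesh();
  for (int i = 0; i < n; i++)
  for (int j = 0; j < n; j++)
  {
    double u = i / (double)(n - 1);
    double v = j / (double)(n - 1);
    // Point on one cube face, pushed onto the unit sphere.
    var dir = new Vector3d(u - 0.5, v - 0.5, 0.5);
    dir.Unitize();
    double radius = 1.0; // + height(u, v) from the height map, in the real script
    quad.Vertices.Add(dir.X * radius, dir.Y * radius, dir.Z * radius);
    quad.TextureCoordinates.Add(u, v); // square UVs, like a surface's
  }
  for (int i = 0; i < n - 1; i++)
  for (int j = 0; j < n - 1; j++)
    quad.Faces.AddFace(i * n + j, i * n + j + 1,
                       (i + 1) * n + j + 1, (i + 1) * n + j);
  return quad;
}
```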
Thanks for reading =)
Footnote: for you pros, I am not using seamless noise across the map; I am using Grasshopper to sew up my otherwise imperfect edges.
Other programs in the pipeline:
-WorldMachine 2
-Wilbur
-Photoshop
-Unity3d…
ive collaborative environment.
TYPE: Course module and Workshop
The event is open for anybody interested from all the fields of design, including: architecture, interior design, furniture design, product design, fashion design, scenography, and engineering.
1. COURSE MODULE (20-23 April 2014) - optional
+ type: a 3-day intensive course covering basic knowledge of parametric design (LEVEL 1)
+ software: Rhinoceros & Grasshopper
+ plugins: Kangaroo, Weaverbird, LunchBox, gHowl, Geco
+ achievements:
- becoming acquainted with the components & the concept of Generative Design
- understanding the strategies in Algorithmic Design
- how to easily insert simple mathematical equations into the project to gain more control
- how to utilize the proper plugins with respect to the nature of the project
- interacting with different analysis platforms such as Ecotect & remote controller
- solving several exercises at different scales (2D-3D) during each phase of the workshop
2. WORKSHOP (23-27 April 2014)
A 5-day Design-Based Research Workshop exploring new techniques in Digital Architecture/Fabrication, with a specific focus on the use of generative systems and parametric modeling as tools for creative expression.
Our ultimate goal is to increase the efficiency with which digital tools are used, in parallel with the geometric performance of the primitive design agent.
+ + CONCEPT
Fashion and Architecture are both based on basic life necessities – clothing and shelter.
However, they are also forms of self-expression – for both creators and consumers.
Both fashion and architecture affect our emotional being in many ways.
The agenda of this workshop is to investigate the overlap between these two areas of design, art & fashion.
Fashion and architecture express ideas of personal, social and cultural identity, reflecting the concerns of the user and the ambition of the age. Their relationship is a symbiotic one and throughout history, clothing and buildings have echoed each other in form and appearance. This only seems natural as they not only share the primary function of providing shelter and protection for the body, but also because they both create space and volume out of flat, two-dimensional materials.
While they have much in common, they are also intrinsically different: both address the human scale, but their proportions, sizes and shapes differ enormously.
+ + + OBJECTIVES
So far, architects have been using techniques such as folding and bending to create space, structural roofs and various other structural shapes.
The agenda of this workshop goes further, investigating algorithmic thinking through generative tools integrated in design.
The challenge is to create a bridge connecting these two areas of design, architecture and fashion, which perform at two opposite scales.
+ + + + TECHNICAL BRIEF
In the early stages physical models and low-tech strategies will be used, allowing the participants to gain a greater understanding of materials, fabrication and assembly methods as well as simple, yet pragmatic structural solutions.
Later in the workshop these strategies will be digitized and elaborated using visualization software such as Rhinoceros and the algorithmic plug-in Grasshopper.…
rves/holes. However, the Kangaroo script itself is prone to locking up, so it seems like it might take forever. You can even double-click to stop the timer from the Windows task bar; I hadn't noticed that before:
You have to use that, or right-click disable the timer, since even with the Reset toggle button input set to True the timer itself locks up the script a bit while you are changing things around.
Just setting the min/max numbers both to a desired mesh size gives a uniform mesh:
Oh weird, it's about whether the timer is right-click set to so small an interval that it gets ahead of Kangaroo! When you see how long each cycle takes with Display > Canvas Widgets > Profiler, you just set the timer above that and the interface becomes responsive again. It only takes a few Kangaroo cycles to do the inflation, so even a full one-second timer interval is workable.
A finer mesh:
It's funny running it so slowly, since it overinflates at first, bulging out, before it equilibrates.
You have control over inflation pressure and mesh stiffness, for a variety of effects.
This is a good system once I realized the timer needed to be mellowed out.
What made it work was the fast custom meshing, since a normal mesh is awful and MeshMachine wouldn't work with sharp-cornered holes at all, breaking out of the boundary even if I fixed curves or vertices, or did the equivalent with NURBS surfaces instead of a starting mesh.
There is an initiation time for Kangaroo that doesn't show up in its Profiler time, and it happens even with the timer off.
There are some fine areas that can't inflate with a reasonable mesh setting:
Worth playing with, but no match for ArtCAM, since it suffers odd delays in between working fast. If I could get better, more adaptive 2D meshes it would be better, but MeshMachine is one of the only re-meshers I know of, and it's broken for even mildly sharp hole features.
Ah, how about a crude mesh that is then subdivided, guaranteeing inner vertices everywhere? It sort of works, but is still too dense; way too dense to do anything with. The subdivision triangulates the quads, vastly increasing the mesh wire density. Better to just make a finer initial mesh with plenty of quads.…
Added by Nik Willmore at 12:57am on February 21, 2016
(1) I have been exporting small sections of a larger model from Rhino into Maya as FBX. In Maya I rotate and scale the models (-90 in X, scale XYZ 0.001). The Named Views are being saved, but they do not import successfully into the Maya model. They do not appear as they do in Rhino, and the problem is not solved by scaling or rotating the cameras.
(2) If I try going the other direction, the cameras exported from Maya as FBX also do not align with the model in Rhino as they do in Maya. I will do my best to post some images of the problem and hope you can help.
error !!
This is what the named views look like
Here I am trying to go the other way, with a good view from Maya
strange placement..
This is the best result I can achieve, after I scale the camera by 1000
Any Advice???
Thanks, Robert.
…
ysim.ning.com/
When you run the simulation you will notice in the batch terminal that Daysim is also being called, so you may want to consider how Daysim uses Radiance files & data.
Regarding your current problem, I think you stumbled onto something weird and interesting.
Interior and exterior readings appear to differ by 40 in the best-case scenarios. Even setting the transmittance to 1 yields similar results. I tried changing from a cumulative sky to a climate sky and got similar values. Changing the test points did nothing either.
I think (yet I'm too lazy to prove this) that the difference in values stems from diffuse radiation over the sky dome.
If you delete everything except the glass, you'll notice that interior values are about 80-90% of the exterior values (this seems like the expected behaviour with a transmittance of 1). So, if we consider that a vertical window, part of an opaque box, is receiving radiation from 25% of a sphere, then as you start to inset the interior test points the radiation they receive will be a fraction of that 25%.
Let me try to explain this better... The exterior surface receives radiation from a section of a sphere spanning 180 degrees on the xy plane (let's call this angle theta) and 90 degrees in azimuthal elevation (let's call this angle phi). If you integrate this over spherical coordinates (theta from 0 to pi; phi from 0 to pi/2) you will find that it comes to a quarter of a sphere. By comparison, the interior surface will not integrate theta from 0 to 180 degrees, nor phi from 0 to 90 degrees; instead it sees the angle subtended by the exterior surface as a function of their separation: the farther in you go, the smaller the view of the outside.
If my hypothesis is correct there shouldn't be that much difference, since the separation is only 10 cm... the subtended angle would be something like 170 instead of 180 degrees for theta, and 85 instead of 90 for phi... overall, if you integrate both spherical areas, there should only be a difference of about 10%.
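Worked out explicitly (assuming phi is measured as elevation above the horizon, so the area element on the unit sphere is cos(phi) d(phi) d(theta)):

$$\Omega_{\mathrm{ext}}=\int_{0}^{\pi}\int_{0}^{\pi/2}\cos\varphi\,d\varphi\,d\theta=\pi\ \mathrm{sr}=\tfrac{1}{4}\cdot 4\pi$$

$$\Omega_{\mathrm{int}}\approx\int_{0}^{170^{\circ}}\int_{0}^{85^{\circ}}\cos\varphi\,d\varphi\,d\theta=\tfrac{17\pi}{18}\,\sin 85^{\circ}\approx 2.96\ \mathrm{sr}$$

so the ratio is about 2.96/pi ≈ 0.94, a reduction of under 10%, consistent with the estimate above.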
In conclusion, I believe the unexpected behaviour stems from the subtended-angle effect described above. If direct radiation were the only factor, the difference would be the aforementioned ~10%, which suggests that an additional source of energy is also affected by this. Perhaps indirect and diffuse radiation from other areas of the sky dome.
I'm definitely intrigued as to why this is happening. Please post if you figure it out.
Regards,
Mauricio
…
TB of RAM. I think I'm going to start a GoFundMe campaign to buy one for myself :)
2- The server costs about $13 an hour. I get free access to a supercomputer through my university and xsede.org because I earned an NSF Honorable Mention last March; however, the supercomputers available through both resources are a little complicated for me to use, as opposed to the one available from Amazon, which has Microsoft Server 2012 already installed.
3- I wanted to run 400 annual glare simulations for 400 different views.
4- I tried to perform an annual glare simulation for one view on my Dell XPS, which has an Intel Core i7-6700HQ processor and 16GB of system memory. The simulation took 2 hours to complete, with the Radiance parameter ab set to 6.
5- I wanted to obtain the batch file for each view so I could run them on the server. So I used the fly component to run all 400 simulations and closed the cmd windows. That wasn't bad (for me at least) because I asked my son to do this job for me; he was just glad to help :)
6- I created one batch file using this cmd command:
dir /s /b *.bat > runall.bat
This created a file with the path to each .bat file. I edited this file in Notepad++ to prepend the word "start" to each line, using the "find and replace" dialogue box (a scripted version of this step is sketched after this list).
7- I split my newly created batch file into 3 batch files, each with about 130 file names and "start" before each file name.
8- Installed Radiance on my server.
9- Ran the first batch file on the server. This started 130 cmd windows performing my simulations; CPU usage was anywhere between 90% and 100%, and about 105 GB of RAM was used.
10- It took about 5 hours to complete all 130 simulations. I expected them all to run in 2 hours, but I can't complain, because this would've taken about 260 hours on my laptop. After those simulations were done, I ran the second and then the third batch files (a total of about 15 hours).
11- I got 400 valid dgb files. Couldn't be happier!
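As promised in step 6, a small C# sketch that replaces the manual Notepad++ pass: it gathers every .bat under a root folder (the path is an assumption) and writes one runner that launches each file in its own window via cmd's "start".

```csharp
using System.IO;
using System.Linq;

class MakeRunAll
{
  static void Main()
  {
    string root = @"C:\annualGlare"; // hypothetical simulation folder
    var lines = Directory
      .EnumerateFiles(root, "*.bat", SearchOption.AllDirectories)
      .Where(p => Path.GetFileName(p) != "runall.bat") // skip the runner
      .Select(p => "start \"\" \"" + p + "\"");
    File.WriteAllLines(Path.Combine(root, "runall.bat"), lines);
  }
}
```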
…