int and normal (a unit vector) to create a point for each face outside of each face center.
Etc.
Filippos already offers straight spikes.
Then you can twist the vectors: make a line, divide it into segments, then apply some sort of space warp to the whole thing in 3D to twist each point in space, and separately add randomness.
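As a sketch of that twist-plus-randomness step (plain Python with made-up parameters, not an actual Grasshopper definition):

```python
import math
import random

def twist_points(points, axis_z_height, turns=1.0, jitter=0.0, seed=42):
    """Rotate each (x, y, z) point about the Z axis by an angle
    proportional to its height along the line, then optionally add
    separate random noise to the angle."""
    rng = random.Random(seed)
    out = []
    for x, y, z in points:
        t = z / axis_z_height                  # 0..1 along the line
        angle = t * turns * 2 * math.pi        # twist grows with height
        angle += rng.uniform(-jitter, jitter)  # randomness, added separately
        out.append((x * math.cos(angle) - y * math.sin(angle),
                    x * math.sin(angle) + y * math.cos(angle),
                    z))
    return out

# A vertical stack of points offset 1 unit from the axis:
pts = [(1.0, 0.0, z) for z in range(5)]
twisted = twist_points(pts, axis_z_height=4.0, turns=0.25, jitter=0.0)
```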
Break it down into workable pieces, and never doubt yourself or feel as if you are doing it "wrong" or are not good enough.
SLOW DOWN TO SPEED UP.
Buy and read "On Growth And Form" by D'Arcy Wentworth Thompson to learn how Nature "cheats", and thus how you should too: think about cheating without being embarrassed by it, by making things work efficiently.
Extra credit, over the next decade:
…
Added by Nik Willmore at 7:16pm on October 20, 2015
ally put the component together a few months ago.
Abraham, the case of 1.2 failing was because of a bug where I simply put "< 1.2" in the code instead of "<= 1.2". The error happened specifically at 1.2 because that is the wind speed threshold at which the "hump" of the ASHRAE adaptive comfort polygon grows. It is perhaps easier to see this threshold in the inputs of the CBE Adaptive Comfort chart:
I fixed the bug in the attached file and on the github.
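For readers, the shape of the bug is the classic exclusive-versus-inclusive boundary; this toy sketch (hypothetical names, not the actual Ladybug code) shows why only the exact threshold value 1.2 misbehaves:

```python
def in_lower_regime_buggy(wind_speed):
    # Bug: "<" silently excludes the 1.2 m/s threshold case.
    return wind_speed < 1.2

def in_lower_regime_fixed(wind_speed):
    # Fix: "<=" makes the 1.2 m/s boundary fall in the intended branch.
    return wind_speed <= 1.2
```

Every wind speed other than exactly 1.2 behaves identically under both conditions, which is why the bug only surfaced for that one input.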
Grasshope, to address why you are getting branched data and the 2 comfort polygons: it is because you have connected the EPW windSpeed to the component. Usually, if you connect multiple values to windSpeed_, the component will produce a comfort polygon for each wind speed. However, I built a mechanism into the component that stops it from generating 8,760 comfort polygons (one for each hour) if you connect full EPW data. 8,760 comfort polygons take almost a minute to generate, so I figured that this is probably not what people want to do most of the time. Instead, the component just generates one polygon for the minimum wind speed and another for the maximum wind speed (to show you the range). The branched data that you get out of the component are the values for each polygon (2 branches for 2 polygons). If you connect the data to the 3D chart, this might make things clearer:
One polygon is for 0 wind speed and the other is for the maximum EPW wind speed.
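The mechanism Chris describes might be reduced to something like this sketch (the cutoff length and function name are assumptions, not the component's actual code):

```python
def wind_speeds_for_polygons(wind_speed_values, hourly_threshold=24):
    """If a long hourly list (e.g. 8760 EPW values) is connected,
    collapse it to just the min and max so only two comfort polygons
    are generated instead of one per hour."""
    if len(wind_speed_values) > hourly_threshold:
        return [min(wind_speed_values), max(wind_speed_values)]
    # A short list of explicit values gets one polygon each.
    return list(wind_speed_values)

# Full EPW data collapses to a [min, max] pair:
epw_like = [0.0, 1.5, 3.2] * 2920  # 8760 hourly values
collapsed = wind_speeds_for_polygons(epw_like)
```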
I am interested in hearing your thoughts on my way of dealing with this, since I am thinking of implementing something similar on the psychrometric chart. I realized that a lot of people were crashing the psych chart by connecting up the EPW wind speed, which currently tells the component to generate 8760 comfort polygons.
Thanks, as always, for all of your help,
-Chris…
sistance of radiative and convective heat transfer through the _filmCoefficient input on the "Create Therm Boundaries" component. This filmCoefficient in W/m2K represents the "U-Value" of the air film between the edge of the THERM materials and the surrounding environment that is at the specified _temperature. The extra resistance from this air film is why the full construction U-Value that you are getting out of THERM is lower than just (conductivity of material) / (depth of material). Accounting for air films is particularly important when you get constructions that have a high overall conductivity (like a single-pane window), since almost all of the resistance of such a construction is due to the air films.
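The series-resistance relationship described here can be written out directly; this is the standard formula, with illustrative (not ASHRAE-exact) film coefficients:

```python
def u_value(conductivity, depth, h_in, h_out):
    """Overall U-value (W/m2K) of a single layer with interior and
    exterior air films. Resistances add in series:
    R_total = 1/h_in + depth/conductivity + 1/h_out."""
    r_total = 1.0 / h_in + depth / conductivity + 1.0 / h_out
    return 1.0 / r_total

# A single pane of glass (k ~ 1.0 W/mK, 6 mm thick): the material alone
# would suggest U = 1.0 / 0.006, an enormous number, but the air films
# dominate the total resistance. Film coefficients here are typical
# illustrative values, not the published table entries.
u = u_value(conductivity=1.0, depth=0.006, h_in=8.0, h_out=23.0)
```

The result lands in the familiar 5-6 W/m2K range for single glazing, which is exactly the point: almost all of that resistance comes from the two films.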
To elaborate further, you might have noticed that, in the example files on hydra, I set this filmCoefficient to be either "indoor" or "outdoor", which basically uses some code that I wrote to autocalculate the film coefficient for you. I take into account both the emissivity of the material at the boundary (which gives you more air film resistance for lower emissivities) as well as the orientation of the boundary in the 3D space of the Rhino model. The code I wrote will take these parameters and match them to those published in ASHRAE Fundamentals, which you can see in table 1 of the first page of this PDF:
http://edge.rit.edu/content/C09008/public/2009%20ASHRAE%20Handbook
I interpolate between these values in the event that your emissivity is not 0.05, 0.2, 0.9 or the orientation of your boundary is not any one of the 5 that they give.
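The interpolation step might look roughly like this sketch (the coefficient values in the table are placeholders, not the published ASHRAE numbers):

```python
def interp_film_coefficient(emissivity, table):
    """Linearly interpolate a film coefficient between the emissivities
    tabulated for one boundary orientation (e.g. 0.05, 0.2, 0.9).
    Values outside the tabulated range are clamped to the endpoints."""
    pts = sorted(table.items())
    if emissivity <= pts[0][0]:
        return pts[0][1]
    if emissivity >= pts[-1][0]:
        return pts[-1][1]
    for (e0, h0), (e1, h1) in zip(pts, pts[1:]):
        if e0 <= emissivity <= e1:
            t = (emissivity - e0) / (e1 - e0)
            return h0 + t * (h1 - h0)

# Hypothetical coefficients for one orientation (NOT the ASHRAE numbers):
table = {0.05: 6.0, 0.2: 7.0, 0.9: 9.0}
h = interp_film_coefficient(0.5, table)
```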
I know that THERM also has the capability to actually run the radiative and convective formulas that you posted, Mauricio, as opposed to just using a single film coefficient to account for all of this resistance. Running these formulas is particularly useful if the radiant temperature at the boundary is different from the air temperature. However, as long as you are ok with the assumption that the air and radiant temperatures are the same (which is the case for all of the situations that I have encountered), the film coefficient is perfectly sufficient. If anyone ever needs the capability of running boundary conditions that have different radiant and air temperatures, please post here and I can think of a way to implement it. I rather like the simplicity of the current interface, though, and I think that I will keep it this way until we understand the reasons why someone would need separate radiant and air temperatures.
-Chris…
S of elements (chain links in this case), work first at low resolution to reduce compute time and avoid freeze-ups! When the model looks right, you can increase resolution (i.e., the number of contours/links in this case).
The first effort (chainmail_2017Jan18b.gh) is getting points at the intersection of contour lines from two planes, XY and YZ, then getting surface normals for those points to 'Orient' the pair of links. This required ignoring all the extraneous points (due to multiple surfaces making up the "polysurface") returned by 'Srf CP (Surface Closest Point)' using two criteria: the 'D (Distance)' value and the 'uvP' coordinates that indicate an edge point (x=0 or 1, y=0 or 1).
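The two filtering criteria could be reduced to a check like this sketch (thresholds are assumptions, not values from the actual definition):

```python
def is_usable_closest_point(distance, u, v, max_dist=0.01, edge_tol=1e-6):
    """Reject a surface-closest-point result if it is too far from the
    test point, or if the uv coordinates land on a surface edge
    (u or v at 0 or 1), which signals a clamped, extraneous hit from
    one of the polysurface's other faces."""
    if distance > max_dist:
        return False
    for coord in (u, v):
        if coord < edge_tol or coord > 1.0 - edge_tol:
            return False
    return True
```

Only points that pass both tests would then be used to pull surface normals for orienting the link pairs.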
Using the YZ plane worked well for the front and back surfaces but resulted in sparse points on the sides. Using the XZ plane instead has the opposite effect. It would be great if they could be blended somehow (animated gif):
Ultimately, it would probably work best to treat the body section and the arm as two different cylinders with contours conforming better to each surface separately.
The second method (chainmail_2017Jan18c.gh) uses only XY contours and 'DivLength (Divide Length)', similar to a brick wall:
…
Added by Joseph Oster at 2:34pm on January 18, 2017
to calculate PET. There is the Höppe paper from 1984, which is in German, and almost no one has ever read it. The often-cited 1999 paper doesn't give any insight into the calculation methods; to that extent, it's completely useless.
So all of the calculation methods available are basically code stripped out of a program called MEMI that was released as part of a German VDI guideline. Otherwise, it wouldn't be possible to compile the required data.
The 1984 Höppe paper was a diploma thesis (!). We all know how many errors there are in a diploma thesis, and this one is no exception.
For the ENVI-met implementation of PET, we went back to the diploma thesis (not a great deal of fun, as it is in German) and then removed all the errors inside it.
That was the basis of the completely redesigned PET module in ENVI-met.
Still, there is an ongoing project at the University of Hamburg, and the researcher there has still found a number of errors in the corrected ENVI-met version... so there is still room to improve.
Calculation time: BioMet solves the procedure correctly at a scientific level. You might also get results much faster at a less strict level...
Also: if you don't change the domain settings in BioMet, the whole 3D space will be calculated, while I assume the Grasshopper component mentioned here calculates only one 1D slice (you can set this in ENVI-met BioMet, too).
Finally, I totally understand that the change from a totally free program to a paid program (if you need the full version) is not fun.
However, as this forum post and others show, there is huge interest in ENVI-met from the architectural, landscape architecture and urban planning scenes.
They all demand basic or new functions (which is fair).
But with V4, ENVI-met can no longer be handled as a hobby project of mine (which it was up to 3.1).
We're still not at the point of being able to hire programmers as permanent staff (which would be necessary to continue our work), but with the sale of the SCIENCE and PRO licenses, we're at least able to carry on working.
Without this business model, it would be Version 3.1 forever.
Thanks for understanding, and all the best
(Keeping track of this great component)
Michael
-----------------
…
quick preview of the updates: All the morphing between surfaces and meshes will now have a "t" input, which allows you to use a Graph Mapper for non-linear divisions. I am working on getting this to work with the x and y divisions on surfaces.
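The idea of a "t" input driven by a remapping curve can be illustrated like this (plain Python standing in for a Graph Mapper; names are made up):

```python
def nonlinear_divisions(n, remap):
    """Divide the 0..1 domain into n+1 't' values, then push each value
    through a remapping function (the role a Graph Mapper plays):
    the divisions stay in 0..1 but are no longer evenly spaced."""
    return [remap(i / n) for i in range(n + 1)]

# An ease-in curve: divisions cluster near t = 0.
ts = nonlinear_divisions(4, lambda t: t * t)
```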
The voxelize component has this for all axes, and now has an input for a base plane so you can change the orientation of the voxels.
I have also received requests for an option to export lattices to nTopology. This is a powerful tool, and they have created an open-source file format for transferring lattice information: https://github.com/nTopology/LTCX. I have created an export module and will create an import module soon.
In more exciting news, the 3MF consortium has released a beam lattice extension for the 3D Manufacturing Format. I will be working on a module for exporting beam lattice information to .3MF files soon. https://3mf.io/beam-lattice-extension/
Thank you for all the support and stay tuned for updates as they come. - Aaron…
lso created something similar before, which was one of the bits of development so far. I am planning an installation for a university module based on creating a false perspective of surfaces (i.e., skewed in 3D but looking correct from a specified vantage point, coincidentally where I'd like the visitor to enter and where the projector must be, which is a slight issue if anyone has any suggestions).
So I've tried to create a surface (a hidden 'perceived screen') which I've then divided. The sub-surfaces are then scaled (with the projector/viewpoint as centre) to produce a collection of surfaces which appear to be flat (i.e., appear to be the perceived screen).
I had originally planned to map my image onto the individual panels, but I'm wondering if Rhino will realise the cunning game of optics I'm attempting and just tile the images at the same size all over the various bits, ruining the illusion. My thoughts are: I've heard that some plugins have bake functions, to bake mid-algorithm and then scale the baked surfaces with pixel data intact. Also, while typing, I was wondering whether the scale factor (for scaling the perceived panel to the actual panel) could be plugged into the scale factor for the image? I'm not sure how to align/locate the images, though.
I had also planned to make the panels more complex, but I've been using List Item to extract each subsurface (I simplified it to keep things easy at this earlyish stage). Is this really the only way? For the final piece I'd like quite a complex array of panels (is this inevitable if I want to modify panels individually, so I'll just end up with a horrible-looking GH file?). That is the main bulk of my questions, for any extra helpful people.....
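The scaling-from-the-viewpoint trick at the core of this setup can be sketched in a few lines (plain Python, illustrative numbers only): a point scaled about the viewpoint stays on the same sight line, which is why the scaled panels still read as the original screen, and why the same factor should, in principle, also scale the projected image tile.

```python
def scale_about_point(pt, center, factor):
    """Scale a 3D point away from (or toward) a center point.
    The result stays on the line through center and pt."""
    return tuple(c + factor * (p - c) for p, c in zip(pt, center))

# A panel corner on the 'perceived screen', pushed twice as far from the
# viewer: same sight line, so from the viewpoint it looks unchanged.
viewpoint = (0.0, 0.0, 0.0)
corner = (1.0, 2.0, 5.0)
moved = scale_about_point(corner, viewpoint, 2.0)
```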
------------------------------------------------------------------------------------ As a last issue, I would like to try to render a series of animations moving through (various iterations of) the model. Can Rhino render an animation? Failing that, I'm going to try 3ds Max; I've used it for rendering before but never animation. For any 3ds Max users: is there a project-image function within 3ds Max which would save me trying to map the images in Rhino? I'd also like to try making a convex hemisphere shape you would walk into, to replicate a dolly shot (so the centre of the image gets larger as the background moves out of shot). Any suggestions?
cheers everybody.…
xpect to print it with 3D rapid prototyping, which is why I cannot use a mesh for the edges. I think maybe it could be done with Weaverbird, but I don't know how to use it. Any suggestions or ideas?
Attached is my definition and a picture of something like what I'm after (the smoothed edges and curves, not the complex geometry).
thanks!…
would like to ask someone with patience, time and disposition for a definition of maximum displacement, resultant force of gravity and internal elastic energy. I know that these topics appear in the Karamba manual; however, the explanation is quick and brief, and I, and perhaps some others, can't completely grasp what they are and how they work.
Secondly, I would like to ask for advice on how to deal with the problem of minimizing the quantity of material used while keeping the structure's strength in an acceptable range.
Those were my two questions. Now I am going to explain the definition that I am working on in order to show how this relates to the problem I am trying to solve.
I am trying to optimize a column made of plastic, which is intended to be fabricated in a 3D printer. I have created a grasshopper definition that lets me customize plenty of options (height, width, number of sides, number of divisions, type of interconnections, etc… ).
Image 1 can provide a quick look at what I am trying to do.
I am using Galapagos to fine-tune some of the values in order to achieve the best possible structure: one that can withstand a certain arbitrary weight (for example 100 kg) within acceptable deformation values while using the least possible material.
Perhaps the key values that I am letting Galapagos manipulate are the number of divisions in plan and section of the column.
The problem arises when I choose to optimize by minimizing the maximum displacement, which is the most common case in tutorials and examples.
Galapagos naturally tends to divide the column into the maximum number of sections that I allow (which is logical, since this creates more beams and minimizes their length); image 2 provides an example of the minimum and maximum number of divisions that I am allowing.
This solution (empirically) seems wasteful. I believe that the real solution to the problem (sustaining an arbitrary weight without failing and, most importantly, using the least possible material) must lie between the two columns presented in image 2.
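One common way around this (a sketch, not Karamba- or Galapagos-specific code) is to give the optimizer a single fitness number that rewards low material use but heavily penalizes any run whose displacement exceeds the allowed limit; the limit and penalty values here are arbitrary:

```python
def fitness(material_volume, max_displacement, limit=0.01, penalty=1e6):
    """Single-number fitness to minimize: material volume, plus a large
    penalty whenever the displacement constraint is violated. Feasible
    designs compete purely on material; infeasible ones are punished in
    proportion to how badly they exceed the limit."""
    score = material_volume
    if max_displacement > limit:
        score += penalty * (max_displacement - limit)
    return score
```

With a fitness like this, minimizing does not push toward the maximum number of divisions; it pushes toward the lightest column that still stays within the deformation limit.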
Thank you guys for your help and for reading such a long post.
Sincerely
Diego…
n changes in the viewport camera location, zoom, etc. The behavior isn't as smooth as I'd like, but it's ok. The problem comes in drawing some figures. It can successfully draw shaded 3D geometry from a model, brep edges from a model, points, and circles.
I am seeing strange behavior when trying to draw additional breps that are created on a custom plane (near, but in front of, the camera location). The plane that I have built is reliable and will consistently depict geometry like points or circles. But it won't draw polylines, curves or breps, even when they are created from points that sit on the plane and render in the custom view.
Here is a good chunk of the code:
public override void DrawViewportMeshes(IGH_PreviewArgs args) {
// Material definitions.
// Geometry drawing:
// Brep geometry from the model sits here inside a foreach loop. This part
// works fine - all geometry draws reliably. The call used to draw a brep is:
args.Display.DrawBrepShaded(element, material);
// Graphics drawing:
// This sits inside a boolean toggle to switch it on and off. As mentioned,
// point and circle drawing works reliably:
args.Display.DrawPoint(displayAnchor, System.Drawing.Color.FromArgb(225, 0, 200));
args.Display.DrawCircle(cg, System.Drawing.Color.FromArgb(225, 0, 200));
// Problems emerge when drawing breps, be they single or multiple. Looping
// through lists or drawing single instances makes no difference:
args.Display.DrawBrepShaded(graphicelelemnt, graphicmaterial);
}
What am I missing here? Why isn't this working?
I am working in Visual Studio or the native C# GH component - same results.
I test the geometry to see if it exists (it does) through a standard output (for example, "A = ..." in the C# component, or through DA in the VS environment). The geometry is good; it just won't draw through the IGH preview.
…