ort and export from the images below and also from the HELP file of DB in the attachments (Page 71: Importing Geometric Data; Pages 78-80: Import 3-D CAD Data). In their HELP file, they mention "import geometric data".
However, regarding the input of schedules, loads, constructions, etc., DB normally uses "Components" and "Templates" (Page 29: Templates And Components; Page 591: Templates; Page 533: Components). "Templates" are databases of typical generic data, including Activity templates, Construction templates, Glazing templates, Facade templates, HVAC templates, Location templates, etc. "Components" are databases of individual data items (e.g. a construction type, a material, a window pane).
Both Components and Templates can be imported and exported using the "Import / Export library data" command (.ddf format - DB Database File; Page 734: Import Components/Templates, Export Components/Templates). DB also allows us to build up our own libraries of templates and components (Page 731: Library Management; Page 733: Template Library Management).
In order to import both the geometric information and the other information related to schedules, loads, constructions, etc. from GH to DB, we propose the following two ways:
1. GH(HB+GB) --> gbXML (both geometric and Component/Template information) --> DB
This is the way we prefer most. We did see information related to schedules, loads and constructions encoded in the gbXML file generated by GB, but we still do not know why DB did not take this information (I also mentioned this in Q6 within the gh file). We assume this might be because the gbXML file we create encodes the schedules based on a different template / schema than the one DB expects (a quick way of checking what the exported gbXML actually contains is sketched after option 2 below). We also posted this question to the DB forum for help.
(http://www.designbuilder.co.uk/component/option,com_forum/Itemid,25/page,viewtopic/p,13755/#13755)
2. GH(HB+GB) --> gbXML (geometric information only) + .ddf (Component/Template information only) --> DB
If the first way doesn't work and DB only takes geometric information from the gbXML, then we would fall back to the other way - generating the .ddf files from GH(HB+GB) to pass the schedule, load and construction information to DB.
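As a quick sanity check for option 1, the snippet below (a minimal sketch, not DesignBuilder's importer; the file name is a placeholder and the element list is an assumption) lists the schedule- and construction-related elements the exported gbXML actually contains, so they can be compared against a gbXML file exported from DB itself:

import xml.etree.ElementTree as ET

# placeholder path for the gbXML exported from GH(HB+GB)
ns = {'g': 'http://www.gbxml.org/schema'}
root = ET.parse('model_from_gh.xml').getroot()

# count and list a few ids of the non-geometric element types we care about
for tag in ('Schedule', 'WeekSchedule', 'DaySchedule', 'Construction', 'Material'):
    found = root.findall('.//g:' + tag, ns)
    print(tag, len(found), [e.get('id') for e in found[:5]])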
I was wondering whether it would be feasible for HB and GB to have this function, and what you would suggest to achieve this.
In addition, we noticed that DB can export XML files (not gbXML), so we are trying to figure out whether DB also accepts / reads such XML files. If so, we might be able to convert the gbXML (with both geometric and schedule information) to that XML format. What do you think about that?
Thank you again for all your help!
Best,
Ding
DB import
DB export
Template libraries
Component libraries
…
understanding of the graphical algorithm editor, and then dive into more complex parametric models. We’ll also learn tricks to keep our project responsive and enjoyable to use.
Course outline
inspired by the first, visual programming part of the Grasshopper primer
(http://www.grasshopper3d.com/page/tutorials-1)
Duration: 3 days (24 hours).
Including
An understanding of the Grasshopper interface and the visual programming theory
Base parameters, large numbers of points and vectors, and small geometrical instances
Data flow
Troubleshooting definition problems and solutions
Know the main component types
Be able to join and manage connections and trees
Expressions for both calculation and boolean creation
Understand Data Matching and casting
Managing long lists of objects within Grasshopper
Have an understanding of the functioning of Grasshopper components
Experience creating definitions
Parametric geometry examples, like attractors and list culling
Reusable modeling examples: colored panelization, surface population, gradient and picture sampling and manipulation, catenary line and weaving
Spline animation examples
Getting ready to prepare own definitions in groups
More information...
…
Here are my questions.
1. The difference from a general attractor transition is that I only want the points to move toward the x axis. With just ONE curve it isn't a problem to work out whether a point lies on the left or right side of the curve, but with TWO or THREE curves to distinguish I get totally confused!
2. The points near a curve move too far; how can I make the movement more even?
3. I would like all the points to stay inside the square boundary.
If anyone can give me some hints, I would really appreciate it.
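One possible direction, sketched as a GhPython fragment (the input names pts, curves, boundary and max_move are assumptions, the side test assumes roughly vertical curves, and the 1/(1+d) falloff is only an example weighting):

import Rhino.Geometry as rg

moved = []
bbox = boundary.GetBoundingBox(True)   # the square the points must stay inside

for p in pts:
    # use only the NEAREST curve, so two or three curves behave like a single one
    best_d, best_c, best_t = None, None, None
    for c in curves:
        t = c.ClosestPoint(p)[1]
        d = p.DistanceTo(c.PointAt(t))
        if best_d is None or d < best_d:
            best_d, best_c, best_t = d, c, t
    # which side of the nearest curve is the point on? (+x side or -x side)
    side = 1.0 if p.X >= best_c.PointAt(best_t).X else -1.0
    # capped falloff so points sitting right next to a curve don't jump too far
    dx = side * max_move / (1.0 + best_d)
    q = rg.Point3d(p.X + dx, p.Y, p.Z)
    # clamp to the boundary so no point escapes the square
    q.X = min(max(q.X, bbox.Min.X), bbox.Max.X)
    q.Y = min(max(q.Y, bbox.Min.Y), bbox.Max.Y)
    moved.append(q)

a = moved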
thanks a lot!!
Shaun
…
are just the 8 cases, so you're actually doing it right here (scroll down on this page, and you'll see a separate subset all about marching tetrahedrons http://paulbourke.net/geometry/polygonise/). The benefit of using marching tetrahedrons is exactly this: the number of possible "cuts" through a tetrahedron is dramatically smaller than through a cube.
However, I have also found what you're seeing: the linear interpolation creates some odd distortions (which is why I went ahead and later did the marching cubes implementation). Some of this comes from the density of the sampling grid: the denser it is, the fewer the distortions.
What I would suggest, if you want a (relatively) quick way to improve this outcome:
1) build up a full mesh rather than a bunch of surfaces, and use RhinoCommon to combine identical vertices and rebuild the vertex normals
2) run a couple of rounds of Laplacian smoothing on the mesh to better distribute your vertices (for each vertex, move it to the average of its neighbours' locations - see the sketch below)
3) create a line normal to each vertex, roughly the length of one sampling-grid cell, test its endpoints against your scalar field formula, and then do one final linear interpolation between those two points for your vertex.
This should give you a smoother mesh for sure.
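A rough GhPython sketch of step 2, just to make the idea concrete (not taken from the post above; the input name M and the number of rounds are assumptions):

import Rhino.Geometry as rg

def laplacian_smooth(mesh, rounds=2):
    for _ in range(rounds):
        tv = mesh.TopologyVertices
        new_pts = []
        for i in range(tv.Count):
            neighbours = tv.ConnectedTopologyVertices(i)
            if neighbours is not None and len(neighbours) > 0:
                # average the locations of all connected topology vertices
                avg = rg.Point3d(0, 0, 0)
                for n in neighbours:
                    avg += rg.Point3d(tv[n])
                avg /= len(neighbours)
                new_pts.append(avg)
            else:
                new_pts.append(rg.Point3d(tv[i]))
        # write the averaged positions back to the underlying mesh vertices
        for i, pt in enumerate(new_pts):
            for vi in tv.MeshVertexIndices(i):
                mesh.Vertices.SetVertex(vi, pt)
    mesh.Normals.ComputeNormals()
    return mesh

a = laplacian_smooth(M)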
But good work getting this far! …
Added by David Stasiuk at 1:37am on February 6, 2015
though, I suspect they roughly have the same approach:
Create the mesh or the bitmap, this gives you a rectangular grid of sampling points. The density of these points equals the accuracy of your result. The accuracy (i.e. spacing between adjacent sample points) should be significantly smaller than the smallest geometric details on your regions.
For each pixel/vertex in the bitmap/mesh, sum up the contributions of each region. This is obviously quite tricky, because one needs to take the whole region into account, weighing nearby areas of a region more strongly than distant ones.
Elevate the mesh vertices based on the summed illumination, or, if you have a bitmap, create a heightfield mesh.
Contour the mesh.
In code I would in fact take a different approach to step (2). Instead of iterating over all sampling points and summing up all region contributions, I'd iterate over all regions and affect all illuminated sampling points. It's logically the same thing but my gut tells me it'll be easier to optimize this way.
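A very rough GhPython sketch of steps 2-3 in that "iterate over regions" form (the inputs M, a flat sampling mesh, and regions, a list of closed curves, are assumptions, and the 1/(1+d²) falloff based on distance to the region boundary is only an illustrative weighting, not a prescribed formula):

import Rhino.Geometry as rg

heights = [0.0] * M.Vertices.Count

for region in regions:
    for i in range(M.Vertices.Count):
        pt = rg.Point3d(M.Vertices[i])
        t = region.ClosestPoint(pt)[1]        # parameter of the closest point on the region curve
        d = pt.DistanceTo(region.PointAt(t))
        heights[i] += 1.0 / (1.0 + d * d)     # nearby regions contribute more than distant ones

# elevate the mesh vertices by the summed "illumination"
for i in range(M.Vertices.Count):
    v = M.Vertices[i]
    M.Vertices.SetVertex(i, rg.Point3d(v.X, v.Y, heights[i]))

a = M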
One problem with a bitmap approach is that you'll be limited by the colour accuracy. If you go for greyscale then you'll only have 256 levels to play with. If you're willing to use the full integer range you get more wiggle room, but your bitmaps will not be 'readable' by human eyes until you map the values back to some smooth gradient. The big benefit of bitmaps is that they're fast and easily storable and shareable as files.…
Added by David Rutten at 4:40pm on February 17, 2016
ther math and logic. I can usually conceptualise what I want to do and cobble some semi-working thing together, but I don't know which components to use or how to patch them together. So I'm super happy to have someone who knows what he's doing find this interesting.
And I'm glad you mention the fanned frets again; there is one input parameter that's still missing for the multiscale frets to be fully parametric: the angle of the nut, or which fret should be straight. It depends a bit on personal preferences and playing posture what is more comfortable, so being able to adjust this easily would be cool. Again I have no idea how the maths for that works, or whether you can just rotate each fret by the same amount around its middle point. The input, either as a fret number (for the straight fret) or as a simple slider from bridge to nut, should do as the input setting.
Here are the two extremes and the middle ground:
I've been thinking today, while analysing your patches and cleaning up my mess, about what exactly the monster should do.
Here are the input parameters needed; I think it's the complete list:
scale length low E string
scale length high e string
fret angle/straight fret
string width at nut
string width at bridge
number of frets
fretboard overhang at nut (distance from string to fretboard bounds)
fretboard overhang at last fret
string gauges
string tensions
fretboard radius at nut (for compound radius fretboard radius at bridge is calculated with the stewmac formula)
fretwire crown width
fretwire crown height
action height at nut (distance between bottom of string and fretwire crown top)
action height at last fret
pickup 1 neck position
pickup 2 middle position
pickup 3 bridge position
nut width
The pickup positions should be used to draw circles for the magnet poles on each string so they are perfectly aligned and can be used for the pickup flatwork construction. Ideally they would need a rotation control aligning the centre line of the pickup so it's somewhere between the last fret angle and the bridge angle. Personally I do this visually depending on the design I'm looking for; some people have huge theories on pickup positioning, but personally I don't believe in it.
That should result in everything needed to quickly generate all the necessary construction curves or geometry for nut/fingerboard/frets/pickups. This is the core of what makes a guitar work; the more precise this dynamic system is, the better the guitar plays and sounds.
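For reference, the core fret-spacing maths behind those construction curves is the standard 12-tone equal temperament rule, distance from nut to fret n = L * (1 - 2^(-n/12)), applied to each scale length; a small sketch (the scale lengths are example values only, not part of the list above):

scale_bass = 686.0     # scale length, low E string (mm) - example value
scale_treble = 648.0   # scale length, high e string (mm) - example value
num_frets = 24

def fret_distance(scale, n):
    # distance from the nut to fret n for a given scale length (12-TET)
    return scale * (1.0 - 2.0 ** (-n / 12.0))

for n in range(1, num_frets + 1):
    d_bass = fret_distance(scale_bass, n)
    d_treble = fret_distance(scale_treble, n)
    # in a fanned-fret layout each fret is simply the line joining these two positions
    print(n, round(d_bass, 2), round(d_treble, 2))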
I posted another thread trying to understand how I could use datasets from spreadsheets, databases or CSV files to organize the input parameters. What would make sense for the strings, for example, is to hook into a spreadsheet with the different string sets; I attached one for the d'Addario NYXL string line, which basically covers all combos that make sense.
The string tension is an interesting one, and implementing it would surely be overkill, albeit super interesting to try. It should be possible to extrapolate, from the scale length of each string, what the tension for a given string gauge would be, so that you could say 'I want a fully balanced set' or 'heavy top, light bottom' and it would calculate which SKU from d'Addario would best match the required tension. All the strings listed in the spreadsheet are available as single strings to buy.
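On the tension side, the commonly cited D'Addario formula is T = UW * (2 * L * F)^2 / 386.4, with T in lbs, UW the string's unit weight in lb/inch, L the scale length in inches and F the pitch in Hz; a tiny sketch (the unit weight below is an illustrative placeholder, not catalogue data):

def string_tension(unit_weight, scale_inches, freq_hz):
    # D'Addario-style tension estimate in lbs
    return unit_weight * (2.0 * scale_inches * freq_hz) ** 2 / 386.4

# example: a plain .010 string at a 25.5" scale tuned to high e (329.63 Hz)
print(string_tension(0.00002215, 25.5, 329.63))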
I'm trying to reorganize everything, which helps me understand it. I just discovered the 'hidden wires' feature, which is great: once I've understood what a certain block does, or have finished one of my own, I can get the wires out of the way to carry on undistracted. A bit risky to hide so many wires, but it makes it so much easier not to get completely lost :-)
Btw, the 'fanned fret' term is trademarked; some guy tried to patent it in the 80's, which is a bit silly since it has been done for centuries. There is a level of sophistication above this as well, check out http://www.truetemperament.com/ and that really is something else. It really is astounding how superior the tuning is on those wigglefrets; the problem is that it's rather awkward for string bending, and also you can't easily recrown or level the frets when they are used. …
e matching with a dedicated component which creates combinations of items. You can find the [Cross Reference] component in the Sets.List panel.
When Grasshopper iterates over lists of items, it will match the first item in list A with the first item in list B. Then the second item in list A with the second item in list B and so on and so forth. Sometimes however you want all items in list A to combine with all items in list B, the [Cross Reference] component allows you to do this.
Here we have two input lists {A,B,C} and {X,Y,Z}. Normally Grasshopper would iterate over these lists and only consider the combinations {A,X}, {B,Y} and {C,Z}. There are however six more combinations that are not typically considered, to wit: {A,Y}, {A,Z}, {B,X}, {B,Z}, {C,X} and {C,Y}. As you can see the output of the [Cross Reference] component is such that all nine permutations are indeed present.
We can denote the behaviour of data cross referencing using a table. The rows represent the first list of items, the columns the second. If we create all possible permutations, the table will have a dot in every single cell, as every cell represents a unique combination of two source list indices:
Sometimes however you don't want all possible permutations. Sometimes you wish to exclude certain areas because they would result in meaningless or invalid computations. A common exclusion principle is to ignore all cells that are on the diagonal of the table. The image above shows a 'holistic' matching, whereas the 'diagonal' option (available from the [Cross Reference] component menu) has gaps for {0,0}, {1,1}, {2,2} and {3,3}:
If we apply this to our {A,B,C}, {X,Y,Z} example, we should expect to not see the combinations for {A,X}, {B,Y} and {C,Z}:
The rule that is applied to 'diagonal' matching is: "Skip all permutations where all items have the same list index". 'Coincident' matching is the same as 'diagonal' matching in the case of two input lists which is why I won't show an example of it here (since we are only dealing with 2-list examples), but the rule is subtly different: "Skip all permutations where any two items have the same list index".
The four remaining matching algorithms are all variations on the same theme. 'Lower triangle' matching applies the rule: "Skip all permutations where the index of an item is less than the index of the item in the next list", resulting in an empty triangle but with items on the diagonal.
'Lower triangle (strict)' matching goes one step further and also eliminates the items on the diagonal:
'Upper Triangle' and 'Upper Triangle (strict)' are mirror images of the previous two algorithms, resulting in empty triangles on the other side of the diagonal line:
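Not the component's internal code, but the matching rules above can be illustrated with a few lines of plain Python over the {A,B,C} / {X,Y,Z} example:

from itertools import product

A, B = ['A', 'B', 'C'], ['X', 'Y', 'Z']
cells = list(product(range(len(A)), range(len(B))))    # every cell of the table

holistic = cells                                       # all nine combinations
diagonal = [(i, j) for i, j in cells if i != j]        # skip cells where all indices match
lower = [(i, j) for i, j in cells if i >= j]           # lower triangle, diagonal kept
lower_strict = [(i, j) for i, j in cells if i > j]     # lower triangle, diagonal dropped

print([(A[i], B[j]) for i, j in diagonal])
# -> [('A','Y'), ('A','Z'), ('B','X'), ('B','Z'), ('C','X'), ('C','Y')]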
…
ake a modest notice about the two new Ladybug components, one of which creates a 3D terrain shading mask and another which visualizes and exports horizon angles. A terrain shading mask is essentially a diagram which maps the silhouette of the surrounding terrain (hills, valleys, mountains, tree tops...) around the chosen location and accounts for the shading losses from the terrain. It can be used as a context_ input in mountainous or higher-latitude regions for any kind of sun-related analysis: sunlight hours analysis, solar radiation analysis, view analysis, photovoltaics/solar water heating sunpath shading...
My home town is an example of the shading caused by the terrain. Here is how it looks from the tallest building in the town:
And the created terrain shading mask:
A mask for any land location up to 60 degrees North can be created:
There will also be support for a few major cities above this limit.
Both Terrain shading mask and Horizon angles components can be downloaded from here. An example .gh file can be found in here.
The components will prompt the user to download and copy certain files in order to be able to run.
It was created with assistance from Dr. Bojan Savric. Support on various issues was further given by: Dr. Graham Dawson, Dr. Alec Bennett, Dr. Ulrich Deuschle, Andrew T. Young, LiMinlu, Jonathan de Ferranti, Michal Migurski, Christopher Crosby, Even Rouault, Tamas Szekeres, Izabela Spasic, Mostapha Sadeghipour Roudsari, Dragan Milenkovic, Chen Weiqing, Menno Deij-van Rijswijk and gis.stackexchange.com community.
I hope somebody might find the components useful.…
st between those two applications. But as soon as every frame is re-calculated, I noticed that the intersection function is very slow. It is actually so slow that the maximum number of polygons to play with is only 10 or fewer.
Could you help me find a faster solution for my script?
Calculation of intersection lines:
//////////////////////////////////////////////////////////////////////////////////////////
import ghpythonlib.components as ghcomp
import rhinoscriptsyntax as rs

def ctr(crv):
    # approximate centre of a closed polyline: average of its unique corner points
    pts = ghcomp.Explode(crv)[1]
    pts = ghcomp.CullDuplicates(pts, 0.001)[0]
    return ghcomp.Average(pts)

pts = []
lines = []
ctr_c1 = ctr(C1)

for crv in C2:
    # skip the polygon whose centre coincides with C1's own centre
    if ctr(crv) != ctr_c1:
        ipts = ghcomp.CurveXCurve(C1, crv)[0]   # intersection points of C1 with this curve
        if ipts:
            pts.extend(ipts)
            lines.append(rs.AddLine(ipts[0], ipts[1]))
/////////////////////////////////////////////////////////////////////////////////////////////
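One direction that may help (a hedged sketch, not a drop-in replacement): the ghpythonlib component calls carry per-call overhead, so using RhinoCommon's curve-curve intersection directly is usually faster. Like the script above, it assumes C1 is a single curve and C2 is a list of curves:

import Rhino.Geometry as rg

tol = 0.001
pts, lines = [], []

for crv in C2:
    # (the centre-comparison from the original script is omitted here for brevity)
    events = rg.Intersect.Intersection.CurveCurve(C1, crv, tol, tol)
    if events and events.Count >= 2:
        a, b = events[0].PointA, events[1].PointA
        pts.extend([a, b])
        lines.append(rg.Line(a, b))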
The overall description of the script:
a) Processing + gHowl is used for moving objects and physics
b) a Python script (the slowest part) calculates the intersection lines
c) the intersected parts of the polygons are rotated by 90 degrees.
I have attached grasshopper and processing files. (processing is not necessary to test the script)
Thank you in advance,
Pereas.
…