roader than this. IFC (Industry Foundation Classes), as an open BIM format, gives users the choice of which other software they work with. From Grasshopper you can choose to use the model data in Digital Project, ArchiCAD, MicroStation, Tekla and beyond. http://buildingsmart-tech.org/implementation/implementations
But there is much scope for improvement in how model importing/exporting is implemented across all of this software. This is also why I decided to develop a Revit add-in (I can reuse 80 to 90% of my existing C# code) to import the data within that environment. Again, this enables workflows from all the software above, plus Rhino/Grasshopper, into Revit.
I certainly think the work of the others is impressive, and in areas I haven't really broached yet. Given the demand for these aspects, I will certainly be looking into them.
I look forward to hearing other thoughts and observations.
…
temperature.
I changed it and things are clearer, yet now I am intrigued by a discrepancy between what is reported by the "conditionOfPerson" and "percentOfTimeComfortable" outputs. At first glance it would seem that they report the same thing in different ways, but they give different results (see pic).
Why is that the case?
Shouldn't the "conditionOfPerson" output incorporate the values within the 80 or 90% threshold (in case that is not happening)?
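For illustration, here is how I would expect the two outputs to relate — a minimal C# sketch, assuming "conditionOfPerson" reports one value per hour with -1 = too cold, 0 = comfortable and +1 = too hot (that encoding is my assumption, not a confirmed definition):

// requires System.Collections.Generic and System.Linq
static double PercentComfortable(IList<int> conditionOfPerson)
{
  int comfortable = conditionOfPerson.Count(c => c == 0); // hours flagged comfortable (assumed encoding)
  return 100.0 * comfortable / conditionOfPerson.Count;   // expressed as a percentage
}

If "percentOfTimeComfortable" is computed this way from the same hourly conditions, the two outputs should agree; a different acceptability threshold (80% vs 90%) applied in only one of them would explain the discrepancy.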
Any comments?...
PS: The AdaptiveComfortChart doesn't provide any output data. Not that I want to rush anyone to fix it; I just want to report it.
Thanks!
Alejandro
…
other notation... where x, y and z are variables, and a, b, c, ... are constants bound to sliders, open to dynamic change, plus defining the function's interval (see the sketch at the end of this post).
- Another minor question: is the IntCrv (Interpolate) component a polynomial interpolation by default? In general, where can you find information on the underlying math behind the components?
- Is it possible to define the intervals on the sliders based on other sliders or inputs?
Many questions, I know, but I have been trying to figure this out for quite some time now. I am truly grateful for any help on this matter! :) Maybe the answers will be of help to other engineers or architects out there...
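For the first question, here is a minimal sketch of what I mean, as a GH C# scripted component (the input names a, b, c, dom and samples are made up for illustration): the constants arrive as slider-driven inputs, and the function is sampled over a user-defined interval.

private void RunScript(double a, double b, double c, Interval dom, int samples, ref object Pts)
{
  var pts = new List<Point3d>();
  for (int i = 0; i <= samples; i++) // samples >= 1
  {
    double x = dom.ParameterAt((double) i / samples); // map 0..1 onto the chosen interval
    double y = a * x * x + b * x + c;                 // a, b, c update live as the sliders move
    pts.Add(new Point3d(x, y, 0));
  }
  Pts = pts; // these points can feed an Interpolate (IntCrv) component to get the curve
}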
…
Grasshopper. So, I once made an attempt to bind MS SQL Server in order to freeze definitions at certain states, to avoid managing baked objects in Rhino and also to be able to retain whole results without using the GH state manager, which rebuilds everything.
But at that time GH's VB.Net component didn't properly read referenced DLLs, and I have left the idea aside since then.
At first, I was surprised by Slingshot's extensive interface: I still had in mind my own old project, a tool that would have acted at the level of Rhino's geometry objects, auto-creating the needed tables.
The database would have consisted of a main table holding the objects' IDs and names, and related tables containing the necessary information about the main objects.
For example, a Brep is made of such-and-such underlying objects, each passed to its respective table, following the GH object definition layout (just the way they are written in the XML schema).
Then, on the database, you could query an object by name and retrieve the whole object or its underlying objects (e.g. at the bounding-curve level, or the point level, for a Brep).
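As a minimal sketch of that layout (the table names, column names and connection string are invented for illustration), the main table plus related tables could look like this in C# against SQL Server:

using System.Data.SqlClient;

const string ddl =
  "CREATE TABLE Objects   (Id INT PRIMARY KEY, Name NVARCHAR(128));" +
  "CREATE TABLE BrepFaces (ObjectId INT REFERENCES Objects(Id), FaceIndex INT, SurfaceData NVARCHAR(MAX));" +
  "CREATE TABLE BrepEdges (ObjectId INT REFERENCES Objects(Id), EdgeIndex INT, CurveData NVARCHAR(MAX));";

using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Database=GhStore;Integrated Security=true"))
{
  conn.Open();
  using (var cmd = new SqlCommand(ddl, conn))
    cmd.ExecuteNonQuery(); // one main table; related tables keyed back by ObjectId
}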
With Slingshot, I made a few attempts to cheat GH with BLOB data fields, but there was no way to get a whole object. It seems that GH simply provides an object.ToString() ... and GH is definitely not designed to provide persistence outside of Rhino. If I have some spare time, I will try to extract
As for points and colors, I am now simply using a single field of CHAR(asLargeAsNeeded...), since GH parses a String into any Point (or Vector or Color) input of any component.
I do so because it takes less to display on the canvas...
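For example, a trivial sketch of the round trip:

Point3d pt = new Point3d(1.0, 2.0, 3.0);
string field = string.Format("{0},{1},{2}", pt.X, pt.Y, pt.Z); // "1,2,3", stored in the CHAR field
// wiring the raw string back into any Point input lets GH do the cast on the way out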
Whatever I wrote above, I really like your design, open as it is to relational interactions between... whatever you need or dream of!
One last thing: GH can't open the definition file "Genome_DB_Template.gh" that I've downloaded from your site: http://slingshot-dev.wikidot.com/database-genome. I was expecting to learn a lot from your very smart stuff! (I am running GH 08.00.13 and Slingshot 0.7.2.0)
Slingshot is running great, open to any use... Thanks again.
Best,
Stan
…
rees west to 1 degree west). Changing the latitudinal domain from, say, 0:1 (the equator to 1 degree north) to 88:89 (88 degrees north to 89 degrees north) has zero effect on the x,y shape of the topography map generated. In reality, however, the map should be far, far thinner in the latter case, because longitudinal lines get closer together toward the north and south poles. Strictly speaking, the shape should be close to a trapezoid in both cases, but that is probably not a necessary detail for most people producing maps, since, at an urban or smaller scale, the latitudinal lines bounding the north and south of the map will not differ significantly in length. But the maps should at least stretch from close-to-square for a 1 degree x 1 degree map near the equator to an extremely thin rectangle for a 1 degree x 1 degree map near the north pole.
As an example, I'm looking at a location in Sheffield, UK. The relevant SRTM HGT file spans from 53 N to 54 N, and 2 W to 1 W. The length of the map in the north-south direction should be approximately 111 km, as is the case with the topo map generated by Elk (and a near-standard for 1 degree of latitude anywhere in the world). The length of the map in the east-west direction, however, should be somewhere in the range of 67 km, since the 2 W and 1 W longitudinal lines are much closer together at this latitude than at the equator. Thus the map should be nearly twice as long in the north-south direction as it is wide in the east-west direction.
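For reference, the expected scaling is just a cosine factor (a back-of-the-envelope, spherical-earth sketch):

double latDeg = 53.5;                                          // mid-latitude of the Sheffield tile
double kmPerDegLat = 111.32;                                   // roughly constant everywhere
double kmPerDegLon = kmPerDegLat * Math.Cos(latDeg * Math.PI / 180.0);
// kmPerDegLon comes out around 66, so a 1 x 1 degree tile here should be ~111 km tall but only ~66 km wide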
If this were sorted out, I think it would be really nice to then have the SRTM topo map positioned automatically in relation to the OSM map being brought in. I think it's good that the OSM map is positioned at 0,0 rather than at its world coordinates, but maybe the SRTM topo map could be aligned with it based on the latitude and longitude domains we input to the SRTM Grasshopper component.…
nitions prior to Karamba are there to allow the genes to manipulate the form of the shell, and then Kangaroo relaxes the form to its "equilibrium" state.
The definition, as attached, runs fine over one iteration. However, when I run the Galapagos solver, Rhino slowly uses up my computer's memory and then ultimately crashes (after around 80 Galapagos iterations). I don't think the surface patch or Kangaroo are the issue, as I have run other iterative definitions through them without problems.
I believe Karamba may be allocating memory on each iteration that is not released when a new iteration begins. This problem is exacerbated by the fact that I am running 11 load cases, 9 of which are point loads defined over each vertex of the mesh. I ran a definition with only one load case, and it reached 170 generations (with a population of 50 per generation). However, at that point it had occupied 90% of my computer's available memory.
Do you know of a way to ensure that Karamba purges its memory after an iteration, or is this a possible memory leak bug?
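One speculative workaround, in case it is managed memory that is simply not being collected (a guess, not a confirmed fix for whatever Karamba is holding on to), would be to force a garbage collection once per iteration from a C# scripted component placed at the end of the definition:

private void RunScript(object trigger, ref object A)
{
  GC.Collect();                   // request a full managed collection
  GC.WaitForPendingFinalizers();  // let finalizers release any native handles
  GC.Collect();                   // collect anything the finalizers freed up
  A = trigger;                    // pass-through so the component re-solves every iteration
}

If memory still climbs after that, the allocations are probably unmanaged and only the Karamba developers can release them.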
Thanks again, any help you can provide is much appreciated.
Sean
…
make a new curve from points, I attempt to do it by writing:
Curve cv = new Curve();
This gives me the error: Error: 'Rhino.Geometry.Curve.Curve()' is inaccessible due to its protection level (line 88)
I have also tried calling CreateInterpolatedCurve() from the Curve class:
Curve cv = null; cv = new Curve.CreateInterpolatedCurve(Mould.Branch(0), 3);
But from this I get:
Error: 'Rhino.Geometry.Curve.CreateInterpolatedCurve(System.Collections.Generic.IEnumerable<Rhino.Geometry.Point3d>, int)' is a 'method' but is used like a 'type' (line 89)
I'm really quite lost about how I can get this to work. Can anyone help me?…
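For what it's worth, the two error messages point at the fix: Curve has no public constructor, and CreateInterpolatedCurve() is a static factory method, so it is called on the class and its return value assigned directly (a sketch, assuming Mould is a DataTree<Point3d>):

Curve cv = Curve.CreateInterpolatedCurve(Mould.Branch(0), 3); // degree-3 interpolation through the points
if (cv == null)
{
  // the factory returns null on failure, e.g. when there are too few points
}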
), my script is triangulating slabs by drawing lines in a cross-reference way. This part was "easy".
What I want to do now is to link those slabs together,
i.e.: if each slab is a surface AxBxCxDx,
I want to link A1 to A2, B1 to B2, C1 to C2, etc.
I know it's simply a question of restructuring the tree with my PShift (Path Shift) component, so that I can use the Line component with shortest-list matching and link each pair of points.
Any ideas on how to fix that?
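Purely as an illustration (assuming a DataTree<Point3d> input named slabPts in which each branch holds one slab's corners in a consistent A, B, C, D order), the pairing could also be scripted directly in a C# component:

var lines = new List<Line>();
for (int b = 0; b < slabPts.BranchCount - 1; b++)
{
  var cur = slabPts.Branch(b);
  var nxt = slabPts.Branch(b + 1);
  int n = Math.Min(cur.Count, nxt.Count);  // shortest-list behaviour
  for (int i = 0; i < n; i++)
    lines.Add(new Line(cur[i], nxt[i]));   // A1-A2, B1-B2, C1-C2, ...
}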
Thank you
Simon…
r "virtual partitions" as follows:
What I mean "air walls" here, is derived from the description of the E+ documentation with the header of "Air wall, Open air connection between zones". (Page 17, http://apps1.eere.energy.gov/buildings/energyplus/pdfs/tips_and_tricks_using_energyplus.pdf)
As I understand it, the term "air wall" in E+ refers to something like a "boundary condition" between adjacent interzone heat-transfer surfaces, not a kind of "construction or material" (like air-space resistance or air gaps within a wall or double-glazed window).
The main purpose of introducing the "air wall" is to simulate or approximate the airflow/convection/natural-ventilation effect between multiple thermal zones that are connected by a large opening.
In my previous tests, using HBzones and GB, I managed to create a gbXML file which could be successfully imported into DB (without assigning any constructions within HB). The adjacency condition was recognized automatically by DB, even though I did not use the "Solve adjacencies" component in HB - shared surfaces between multiple thermal zones were automatically recognized by DB as "internal - partition" (which are standard partitions, not virtual partitions).
In order to create/approximate a "virtual partition", I need to manually draw a "hole" in the standard partition surface (fig. 1 & 2). Again, the reason we want to use "virtual partitions" (or "air walls") is that they allow airflow between multiple thermal zones connected by large openings, and we can get a different temperature for each subdivided thermal zone that composes a large thermal zone.
My question is whether there is a possible way to simulate/approximate this kind of "virtual partition" (or "air wall") in HBzones or in GB. If so, I would like to test whether DB recognizes it or not. Ideally, we expect that no manual operations (like drawing a "hole" in the standard partition surface) should be needed in DB, because of an automatic optimization loop.
Thank you!
Best,
Ding
fig.1
fig.2
…