een them to be adiabatic (at first. See later on for an alternative).
The whole process is clear up to solveAdjacencies: the walls are assigned "outdoors" and "surface" boundary conditions. So far so good.
Now I get to HB_makeAdiabaticByType, and some issues appear (see A in the file).
Setting interiorWalls to True doesn't change the condition from "surface" to "adiabatic" (A1 in the file).
Setting the walls input sets both internal and external walls to adiabatic (A2 in the file). Is it supposed to work like this? Why doesn't the internal-only setting make the change?
In addition, I'd appreciate your advice on the following. Say I want each internal wall divided into two parts: one adiabatic and the other an "air wall". How do you recommend doing this? Is the modeling in the file correct, or must I go surface by surface? Or use the Decompose Honeybee Zone ...?
How can I retain the air walls and still use the makeAdiabaticByType component?
Thanks for your help!!
-A.
…
start (if there is a better one, I would appreciate a hint), but I thought I'd populate a rectangle with points and interconnect the points, to later let the borders of the rectangles attract each other.
I have 4 rectangles: A, B, C and D.
On each I have 20 points (A0-A19, B0-B19, etc.).
Now I want to connect each point to all the points on the other rectangles, e.g. point A0 with all the (60) points on rectangle B, C, D.
I saw the discussions about sorting lists (e.g. flipping), but I didn't see how they fit this problem, or I don't know how to adapt them to my case.
This also comes up in another definition of mine, so if someone could help me with this list logic, I think I could reuse it elsewhere.
But if there is a better solution for organising the rectangles (tessellation), I am open to that.
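For the list logic itself, the pairing can be sketched in plain Python. The point names below are placeholders; in Grasshopper you would typically get the same result with grafted/flattened lists or a small scripting component:

```python
from itertools import combinations

def cross_connections(groups):
    """Return all point pairs whose endpoints lie on different rectangles."""
    pairs = []
    for name_a, name_b in combinations(sorted(groups), 2):
        for p in groups[name_a]:
            for q in groups[name_b]:
                pairs.append((p, q))
    return pairs

# Four rectangles with 20 labelled points each (placeholder data).
groups = {r: ["%s%d" % (r, i) for i in range(20)] for r in "ABCD"}
pairs = cross_connections(groups)

# Each of the 80 points connects to the 60 points on the other three
# rectangles, and every pair is counted once: 80 * 60 / 2 = 2400.
print(len(pairs))  # 2400
```

The same double loop over "my group" vs "every other group" is the pattern to reuse whenever items within one list should not connect to each other.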
regards,
Max
sry for the long text…
ion technologies offer a completely new way to think and approach design, architecture and urban planning.
. . .
The ADVANCED ARCHITECTURE SUMMER SCHOOL organised in Paris by VOLUMES coworking, NOUMENA architecture and architect/teacher/designer Francesco Cingolani, in partnership with the prestigious École Nationale des Ponts et Chaussées, is a 3-week learning experience designed as an immersive journey in social innovation, computational design, digital fabrication and collaborative culture.
Details and information for applicants: volumesparis.org/summerschool2015…
r).
http://www.agrob-buchtal.de/en/cd/produkte/produkte_seiten_13045.ht...
2. Item 1 poses some "modular Z" increment puzzles (for more than obvious reasons). Additionally, there's the excavation cost VS any ECO-benefits ... (heat exchangers in the foundation, blah blah). OK, that means the footprint is also modular, not to mention the whole composition (potentially).
3. So: use the projection ONLY for defining where a given footprint meets the terrain (see the yellow and blue things in V2) and then LOFT pairs (see PlanA, B) of profiles into 2 DISTINCT portions ("solids", so to speak): (a) the basement (or at least something where some potential partitions could be classified as "underground" spaces), (b) the classic building.
4. By doing 3, keep an eye on 2 as well. (Don't forget the classic minor terrain "adjustments" around each building (meaning the use of "tmp" solids), access roads/pavements (ditto), potential connection of basements (parking), soil stabilization issues, bad seismic behavior of unevenly (in Z) formed basements, etc.)…
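The "modular Z" idea in point 2 boils down to snapping elevations to a fixed increment. A minimal sketch (the 0.5 module is an arbitrary assumption, not a value from the discussion):

```python
def snap_to_module(z, module=0.5):
    """Round an elevation to the nearest modular Z increment."""
    return round(z / module) * module

# Placeholder terrain sample elevations:
terrain_samples = [1.23, 2.87, 4.02]
print([snap_to_module(z) for z in terrain_samples])  # [1.0, 3.0, 4.0]
```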
per space. In the upper right corner you draw another dot, and you write "1, 1" next to it. You now have 2 points defined in paper space (uv space).
Ok, lay down the pencil and pick up the paper. You'll notice that the two points have just moved through world space. They were very close to the desk, but now they are hovering above it. The coordinates you wrote down on the paper, on the other hand, are still valid.
No matter what you do to this piece of paper; crumple it, fold it, take it on a plane to South Africa, those two points remain fixed in paper space.
A surface is always a rectangle in Rhino. It may be deformed, it may have holes cut into it, but in the end it's always a rectangle, just like your piece of paper. UV coordinates are points that are defined in Surface UV space. They consist of only two numbers, because a surface has no thickness. At any point in time, you can translate these UV points into World XYZ points using what is called a surface evaluator. Where these XYZ points end up depends entirely on the *shape*, *size* and *location* of the surface.
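What a surface evaluator does can be sketched without Rhino for the simplest possible case, a flat four-cornered surface. Real NURBS evaluation is more involved, but the uv-to-xyz mapping idea is the same; in Rhino/Grasshopper you would use the Evaluate Surface component rather than this hand-rolled bilinear version:

```python
def evaluate_surface(corners, u, v):
    """Map a (u, v) pair in [0,1]x[0,1] to an XYZ point on a flat
    four-cornered surface by bilinear interpolation.
    corners = (P00, P10, P01, P11), each an (x, y, z) tuple."""
    p00, p10, p01, p11 = corners
    return tuple(
        (1 - u) * (1 - v) * a + u * (1 - v) * b + (1 - u) * v * c + u * v * d
        for a, b, c, d in zip(p00, p10, p01, p11)
    )

# A 10x5 "piece of paper" lying on the desk (z = 0):
desk = ((0, 0, 0), (10, 0, 0), (0, 5, 0), (10, 5, 0))
print(evaluate_surface(desk, 0.5, 0.5))  # (5.0, 2.5, 0.0)

# "Pick the paper up": same corners lifted to z = 3. The uv
# coordinates are unchanged; only the world-space result moves.
lifted = tuple((x, y, z + 3) for x, y, z in desk)
print(evaluate_surface(lifted, 0.5, 0.5))  # (5.0, 2.5, 3.0)
```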
----
Surface uv-space (and Curve t-space) are vital when dealing with nurbs geometry. If you do not understand the concept of parameter space, you will have a lot of problems because many components in Grasshopper use these coordinates.
--
David Rutten
david@mcneel.com
Seattle, WA…
Added by David Rutten at 6:40pm on September 27, 2009
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
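The round-robin assignment itself is simple to sketch; node names and batch-file names below are placeholders, and actually dispatching the files to the nodes would still need network tooling:

```python
from itertools import cycle

def assign_round_robin(iterations, nodes):
    """Deal batch-file iterations out to compute nodes in turn."""
    schedule = {node: [] for node in nodes}
    for iteration, node in zip(iterations, cycle(nodes)):
        schedule[node].append(iteration)
    return schedule

# Hypothetical node names and batch files:
nodes = ["node-01", "node-02", "node-03"]
batches = ["iter_%03d.bat" % i for i in range(7)]
schedule = assign_round_robin(batches, nodes)
print(schedule)
```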
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, material, etc.
Generate radiance/ E+ batch files for all iterations
Calculate
Calc separate runs of Radiance/E+ in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to a single location on the server
Post Processing
Run a GH script from a single computer. Translate .ill files or .idf files into custom metrics or graphics (DA, ASE, %shade down, net solar gain, etc.)
Collect final data in a single location (an Excel document) to be read by Design Explorer or Pollination.
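The consolidation step at the end can be sketched as plain CSV writing. The metric names and values here are placeholders, and the exact column layout Design Explorer or Pollination expects may differ:

```python
import csv
import io

def write_results_csv(rows, fieldnames, fh):
    """Flatten per-iteration results into one CSV table,
    one row per iteration, for a dashboard tool to read."""
    writer = csv.DictWriter(fh, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows:
        writer.writerow(row)

# Hypothetical metrics gathered from each run's output files:
rows = [
    {"iteration": 0, "DA": 62.4, "ASE": 11.0, "net_solar_gain": 1450.0},
    {"iteration": 1, "DA": 58.1, "ASE": 9.5, "net_solar_gain": 1390.0},
]
buf = io.StringIO()
write_results_csv(rows, ["iteration", "DA", "ASE", "net_solar_gain"], buf)
print(buf.getvalue())
```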
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux based system. Can you please elaborate so I can explain to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-end GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would enable luminance-based glare metrics, especially annual glare metrics, to be more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
, and made the below definition to try it out. (lots of components to draw a line, but I'm just trying to understand the equation)
I had been searching for advice on some geometry topics worth exploring for a class, and now I'm in the class and the teacher wants me to start by learning about splines in general (not NURBS). I just spent the day learning linear spline interpolation, then quadratic, then cubic. I haven't tried working them by hand yet, but I'm getting the concepts. It seems cubic is the lowest degree where you can get C2 continuity, which makes it smooth. I read over parameterization and how that simplifies the number of equations. I read about space curves, and then the differences between Hermite, Catmull-Rom, and Cardinal splines, but then got tired and had a cocktail.
So I guess I'm looking for any direction or advice on how to understand parametric curves in 3d space, and how they can be defined (splines or otherwise). Thanks!!!
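Since Catmull-Rom came up: a minimal sketch of evaluating one Catmull-Rom segment in 3D, which makes the interpolation property (the curve passes through the inner control points) easy to check by hand. The control points are arbitrary example data:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate one Catmull-Rom segment at t in [0, 1].
    The curve passes through p1 (at t=0) and p2 (at t=1);
    p0 and p3 only shape the tangents."""
    return tuple(
        0.5 * ((2 * b)
               + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t ** 2
               + (-a + 3 * b - 3 * c + d) * t ** 3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Four control points of a space curve (example data):
pts = [(0, 0, 0), (1, 2, 0), (3, 3, 1), (4, 1, 2)]
print(catmull_rom(*pts, 0.0))  # (1.0, 2.0, 0.0) -> hits p1
print(catmull_rom(*pts, 1.0))  # (3.0, 3.0, 1.0) -> hits p2
```

Because the basis works per coordinate, the same function handles 2D or 3D points unchanged, which is one reason parameterized forms simplify the bookkeeping.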
…
orm of the model but when I try to add the piping to it, I get a message saying Windows has run out of memory.
I am using Windows 7 and have tried going into the BCD as recommended here: http://wiki.mcneel.com/rhino/memoryincrease but I was not able to run this command.
I have then tried to increase my virtual memory within Windows, even up to 30GB with no luck.
I'm pretty sure I'm doing something really wrong here but I really need to get this modelled ASAP for a portfolio submission.
Any help would be very much appreciated!
Regards
James
…
creating the structural frame, finding the endpoints, linking these endpoints with curves and afterwards lofting the surfaces between the curves.
The results were quite nice; however, the procedure is very time-consuming and inefficient. There is just too much copy-pasting involved.
(see attached file: "Old Attempts.zip" )
Mesh relaxation:
I have later on used Daniel Piker's tutorials on Mesh Relaxation and realized that this might be the way to go.
The link to these online tutorials on wewanttolearn.net is:
https://wewanttolearn.wordpress.com/2011/10/22/mesh-relaxation-kangaroo-tutorial/
His tutorials, however, only deal with mesh boxes which are ideal cubes. He then joins them together in various directions, but always at 90-degree angles.
( see attached file: "Daniel Pikers Examples" )
What I would like to achieve:
I want my bridges to go in all directions and angles, not just at 90-degree angles.
Ideally I would like to make a square (polygon) follow a curve (which moves along all axes) at a certain number of division points. I would then loft these squares into a mesh and use that shape as a mesh box. I would later relax this mesh box the same way Daniel Piker relaxed the cubes in his tutorial. The anchor points are only the vertices of the squares which create the lofted mesh box.
( see attached file: "New Attempts" )
As you can see below, this procedure works even if the curve moves in all directions, not only in the XY plane. There are, however, many problems connected to it.
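Outside of Grasshopper, the square-frames-along-a-curve construction can be sketched in plain Python; the guide-curve coordinates are placeholders, and in GH the usual route would be Divide Curve, Perp Frames, then Loft, with Kangaroo doing the relaxation afterwards:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def unit(a):
    n = math.sqrt(sum(x * x for x in a))
    return tuple(x / n for x in a)

def square_frames(points, half_size=1.0):
    """At each polyline point, build a square cross-section perpendicular
    to the local tangent. The corners are candidate anchor points for the
    lofted mesh box that gets relaxed."""
    frames = []
    for i, p in enumerate(points):
        # Central-difference tangent (one-sided at the ends).
        t = unit(sub(points[min(i + 1, len(points) - 1)],
                     points[max(i - 1, 0)]))
        # Pick a reference axis not parallel to the tangent.
        ref = (0, 0, 1) if abs(t[2]) < 0.9 else (1, 0, 0)
        x = unit(cross(t, ref))
        y = unit(cross(t, x))
        corners = [add(p, add(scale(x, sx * half_size),
                              scale(y, sy * half_size)))
                   for sx, sy in ((1, 1), (-1, 1), (-1, -1), (1, -1))]
        frames.append(corners)
    return frames

# A guide polyline that bends in all three axes (placeholder data):
curve = [(0, 0, 0), (2, 0, 1), (4, 1, 2), (5, 3, 2)]
frames = square_frames(curve, half_size=0.5)
print(len(frames), len(frames[0]))  # 4 4
```

Lofting consecutive frames into quads then gives the mesh box directly from any curve, which avoids rebuilding the sections by hand each time the guide curve changes.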
The problem:
Despite all the effort, I cannot seem to come up with a definition where I can draw a random curve as the guideline for my mesh box and then feed that box into a single definition to relax the mesh and create the shape that I want. Without this I am again forced into a lot of copy-pasting, as the final mesh box is made out of several sections.
Also, is there any way I could make the final resulting mesh a bit smoother? Is increasing the number of mesh faces the only way?
Thank you guys so much for any potential help.
All best,
Luka
…