column to read the file??
Sure, actually the input can be a list of strings, for example:
A1:A20
B15:B50
It will read the columns and put each one into a list. Rows also work, such as a12:f12.
But it does not accept a rectangular range such as a12:b24.
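The accepted and rejected range shapes described above can be sketched in plain Python. This is a hypothetical parser, not the component's actual code, and it handles single-letter columns only, for brevity:

```python
import re

def parse_range(ref):
    """Parse an Excel-style range like 'A1:A20' or 'a12:f12'.

    Mirrors the behavior described above: single-column and single-row
    ranges are accepted; a rectangular range like 'a12:b24' is rejected.
    (Hypothetical helper; single-letter columns only, for brevity.)
    """
    m = re.fullmatch(r"([A-Za-z]+)(\d+):([A-Za-z]+)(\d+)", ref)
    if not m:
        raise ValueError("not an A1-style range: " + ref)
    c1, r1 = m.group(1).upper(), int(m.group(2))
    c2, r2 = m.group(3).upper(), int(m.group(4))
    if c1 == c2:   # one column, e.g. A1:A20 -> list of cells down the column
        return [(c1, r) for r in range(r1, r2 + 1)]
    if r1 == r2:   # one row, e.g. a12:f12 -> list of cells along the row
        return [(chr(c), r1) for c in range(ord(c1), ord(c2) + 1)]
    raise ValueError("rectangular ranges such as a12:b24 are not supported")
```

Each accepted range becomes one flat list of cells, which matches the "put each into a list" behavior above.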
Hope this is helpful.…
Added by Xiaoming Yang at 2:48am on November 24, 2011
ve jewelry design course teaching Rhino, Grasshopper, KeyShot and 3D printing in collaboration with mything and ShapeDiver. Taught by Eva Blšáková (Zaha Hadid Studio Vienna) and Andrei Padure (DesignMorphine / Digital Matters). Apply now and view details at: www.designmorphine.com/workshop/future/algorithmic-accessories-v3/ Follow us on Facebook: https://www.facebook.com/designmorphine and Instagram: https://www.instagram.com/designmorphine/…
e has a sharp break
2) The Curvature "flips"
The Curvature Graph component creates exactly what I need, but unfortunately it only visualizes the new curve (let's call it c1) without actually making it usable. If it were usable, I could just test for curve-curve intersections and discontinuities in c1. I have more or less replicated the behavior of the Curvature Graph component, except that my imitation is too smooth (let's call my imitation curve c2). I attached two images to demonstrate what I mean.
Is there a way to get exactly what the Curvature Graph makes? How is it creating c1? Is it just a much larger sampling of points? Or does it somehow operate (calculus?) on the underlying formula of the curve (c0)?
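For what it's worth, one plausible construction, assuming it really is just a dense sampling of points: evaluate curvature numerically along the curve and offset each sample along the normal by curvature times a scale factor. This is a guess at the behavior, not Grasshopper's actual implementation:

```python
import math

def curvature_graph(curve, t0, t1, n=200, scale=0.25, h=1e-4):
    """Sketch of one plausible Curvature Graph construction: sample the
    2D curve densely and offset each sample along its normal by
    (signed curvature * scale).  Derivatives via finite differences."""
    pts = []
    for i in range(n + 1):
        t = t0 + (t1 - t0) * i / n
        (xa, ya), (xb, yb), (xc, yc) = curve(t - h), curve(t), curve(t + h)
        dx, dy = (xc - xa) / (2 * h), (yc - ya) / (2 * h)        # first derivative
        ddx, ddy = (xc - 2 * xb + xa) / h**2, (yc - 2 * yb + ya) / h**2  # second
        k = (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5   # signed curvature
        nlen = math.hypot(dx, dy)
        nx, ny = -dy / nlen, dx / nlen                           # unit normal
        pts.append((xb + k * scale * nx, yb + k * scale * ny))
    return pts

# On a unit circle |curvature| = 1 everywhere, so the graph is simply a
# concentric circle; kinks in the graph appear wherever curvature jumps.
hair = curvature_graph(lambda t: (math.cos(t), math.sin(t)), 0, 2 * math.pi)
```

If c1 really is built this way, then your c2 being "too smooth" would just mean too few samples (or interpolation through them); a raw polyline through a dense sampling preserves the kinks.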
Any guidance is hugely appreciated. …
Added by Matthew Breau at 11:37am on August 14, 2017
intersection-elements (01/AA etc)
To get a result from RInt I have to flatten the first set and graft the second. Therefore I can only retrieve the parent information from the second set.
I hope I could explain my problem, and that somebody has an easy solution at hand...
Best regards,
Heiko
PS. ObjAtts in the attached files is from human…
Added by Heiko Wöhrle at 10:11am on October 27, 2016
more about bookkeeping inside a single document. For example: the first tab would be UI, the second formatting, and the others different modules of my script. I have to push stuff around all the time, and even with groups or alt-drag it takes a while.
In Excel a sheet is a very large 2d array, but you can add a third dimension by adding sheets and access them with =Sheet2!A1 (3 coordinates : Sheet2 , A , 1)
So sheets are just 2d slices of a 3d array, for visualization.
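As a toy model of that idea (hypothetical names, nothing to do with Excel's internals), a workbook is a stack of 2d grids and a cell lookup takes three coordinates:

```python
# Sheets as 2d slices of a 3d structure: sheet name, column, row.
workbook = {
    "Sheet1": [[1, 2], [3, 4]],
    "Sheet2": [[10, 20], [30, 40]],
}

def cell(sheet, col, row):
    """Read the equivalent of =Sheet2!A1 with three coordinates,
    e.g. cell('Sheet2', 'A', 1).  Single-letter columns, 1-based rows."""
    return workbook[sheet][row - 1][ord(col.upper()) - ord("A")]

print(cell("Sheet2", "A", 1))  # the value stored at Sheet2!A1
```

The same pattern would map onto canvas tabs: each tab is a 2d canvas, and a "vertical wire" is just an address that names the tab as its third coordinate.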
Processing is not such a good example; its tabs are just pages of code executed one after another, but it is still good for clarity: one tab with "setup", one with "draw", one with functions, one with classes, etc.
Since you're an architect, think of my tabs as the storeys of a building. There are horizontal as well as vertical wires.
I figured you could add a "vertical wire" that could eventually act as cluster inputs/outputs, for export etc.
Just an idea anyway…
t of the bumblebee components. This component does the conversion in VB with no reference to Excel, so Excel does not need to be running. The XL Address component has been updated as well, so it no longer needs a link to Excel.
Also, I caught an error in the Address component where the domain input "Location" was reversing the column and row values; this has been fixed.
Conversion from integer to string column ID adapted from: http://stackoverflow.com/questions/181596/how-to-convert-a-column-number-eg-127-into-an-excel-column-eg-aa
Excels limits are listed at: http://office.microsoft.com/en-us/excel-help/excel-specifications-and-limits-HP010342495.aspx
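A minimal Python sketch of that integer-to-letter conversion (the component itself is in VB; this is just the same bijective base-26 idea from the StackOverflow answer linked above):

```python
def column_letter(n):
    """Convert a 1-based column number to its Excel letter ID,
    e.g. 1 -> 'A', 26 -> 'Z', 27 -> 'AA' (bijective base-26)."""
    s = ""
    while n > 0:
        n, r = divmod(n - 1, 26)   # the '-1' makes the system zeroless
        s = chr(ord("A") + r) + s
    return s
```

Per the Excel limits page above, the last valid column is 16384, which this maps to "XFD".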
The updated User Object collection can be downloaded at: Bumblebee Version 1.02…
olution emerging in the architectural industry world-wide, the Department of Architecture at The University of Hong Kong will host a two week intensive summer program named Digital Practice.
Led by professors from The University of Hong Kong, as well as invited practitioners with expertise in cutting-edge digital techniques, the program offers participants opportunities to experience applications of computational tools during different stages of an architectural project, i.e. concept design, form finding and optimization, and the delivery, management and communication of design information in a team-based working environment. By learning advanced computational techniques through case studies in the context of Hong Kong, participants are expected to go beyond the conventional perception of technology, considering users and tools as a feedback-based entity instead of a dichotomy. The program, which is taught in English, includes a series of related evening lectures delivered by teaching staff and invited local architects.…
per space. In the upper right corner you draw another dot, and you write "1, 1" next to it. You now have 2 points defined in paper space (uv space).
Ok, lay down the pencil and pick up the paper. You'll notice that the two points have just moved through world space. They were very close to the desk, but now they are hovering above it. The coordinates you wrote down on the paper, on the other hand, are still valid.
No matter what you do to this piece of paper (crumple it, fold it, take it on a plane to South Africa), those two points remain fixed in paper space.
A surface is always a rectangle in Rhino. It may be deformed, it may have holes cut into it, but in the end it's always a rectangle, just like your piece of paper. UV coordinates are points that are defined in Surface UV space. They consist of only two numbers, because a surface has no thickness. At any point in time, you can translate these UV points into World XYZ points using what is called a surface evaluator. Where these XYZ points end up depends entirely on the *shape*, *size* and *location* of the surface.
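To make the evaluator idea concrete, here is a minimal sketch for the simplest possible "surface", a bilinear patch between four corners. Rhino's evaluator handles full NURBS surfaces, but the principle is the same: (u, v) in, XYZ out.

```python
def evaluate(corners, u, v):
    """Map a (u, v) point in the unit square to a world XYZ point on a
    bilinear patch.  corners = points at uv (0,0), (1,0), (0,1), (1,1)."""
    a, b, c, d = corners
    return tuple(
        (1 - u) * (1 - v) * a[i] + u * (1 - v) * b[i]
        + (1 - u) * v * c[i] + u * v * d[i]
        for i in range(3)
    )

flat   = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 0)]   # paper on the desk
lifted = [(0, 0, 1), (2, 0, 3), (0, 2, 2), (2, 2, 5)]   # same paper, moved and bent

# The uv point (0.5, 0.5) stays fixed in paper space, while its world
# position depends entirely on where the surface currently is.
print(evaluate(flat, 0.5, 0.5))
print(evaluate(lifted, 0.5, 0.5))
```

Deforming the surface changes only the output of the evaluator, never the stored (u, v) coordinates, which is exactly the piece-of-paper behavior described above.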
----
Surface uv-space (and curve t-space) is vital when dealing with NURBS geometry. If you do not understand the concept of parameter space, you will run into a lot of problems, because many components in Grasshopper use these coordinates.
--
David Rutten
david@mcneel.com
Seattle, WA…
Added by David Rutten at 6:40pm on September 27, 2009
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps with a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
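The round-robin assignment itself is simple to sketch (machine names and file names below are made up; actually launching the jobs remotely, via psexec, ssh or a job scheduler, is the part still to be solved):

```python
from itertools import cycle

# Hypothetical node and batch-file names.
nodes = ["node-01", "node-02", "node-03"]
batches = [f"iteration_{i:03}.bat" for i in range(8)]

# Deal the batch files out to the nodes like a deck of cards, until
# every iteration is assigned.
assignment = {}
for node, batch in zip(cycle(nodes), batches):
    assignment.setdefault(node, []).append(batch)

for node, jobs in assignment.items():
    print(node, jobs)
```

Each node ends up with an (almost) equal share of the runs, which is all the scheduling the modular, one-iteration-per-run setup described here needs.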
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
- Parametrically generate geometry
- Assign input values, materials, etc.
- Generate Radiance/E+ batch files for all iterations
Calculate
- Run separate Radiance/E+ simulations in parallel via network clusters; each run is a unique iteration
- Save all temp files to a single location on the server
Post-processing
- Run a GH script from a single computer; translate .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.)
- Collect final data in a single location (an Excel document) to be read by Design Explorer or Pollination
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
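The final consolidation step could look something like this (field names are hypothetical examples, not a schema required by Design Explorer or Pollination):

```python
import csv
import io

# One row of post-processed metrics per iteration (made-up numbers).
results = [
    {"iteration": 0, "DA": 61.2, "ASE": 8.4, "net_solar_gain": 1520.0},
    {"iteration": 1, "DA": 58.7, "ASE": 6.1, "net_solar_gain": 1488.5},
]

# Write everything into a single table; in practice this would go to a
# file on the shared server location rather than an in-memory buffer.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["iteration", "DA", "ASE", "net_solar_gain"])
writer.writeheader()
writer.writerows(results)
print(buf.getvalue())
```

Because this runs on a single machine after all iterations finish, it inherits the limitation noted above: the consolidation itself is serial until the post-processing is parallelized too.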
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only, Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would enable luminance-based glare metrics, especially annual glare metrics, to be more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…