size component supported only ground PV panels and angled roof PV panels.
Download the newest PV SWH system size component from here (click on "View Raw" to download it, then move the downloaded .ghuser file to File->Special Folders->User Objects Folder and confirm overwriting the previously existing one).
Just a few opinions on the project you are currently working on:
This kind of fixed, non-transparent (overhang) PV panel attached to a building facade is very convenient for locations at higher latitudes. The reason is that fixed overhang PV panels are dimensioned according to the sun position at the summer solstice, and summer-solstice elevation angles at higher-latitude locations are lower than those at lower-latitude locations. Due to Incheon's low latitude (37), you will get a rather short PV panel length*: less than 10 centimeters (0.097 meters in the attached .gh file below). As you have mentioned, Galapagos needs to be used too.
I will just mention some of the good and bad ways in which the upper issue could be somewhat avoided:
1) Increasing the vertical distance between PV panels (PV panels appear above every second window).
2) Increasing the tilt angle. This will also increase the length of the PV panels, but will decrease the final annual AC energy output. An example of this solution has been applied at the FKI building in Seoul (latitude: 37N). I already did some tests (with tilt angles: 40, 45, 55), and this does not seem like a good solution, though.
3) Shrinking the "sun window" by using the minimalSpacingPeriod_ input. In photovoltaics, a planner is supposed to keep the 9h-to-15h part of the sun window free of any obstructions. If you decrease the "sun window" to 10h-to-14h, the length of your PV panels will increase. You can experiment a little with this (set your minimalSpacingPeriod_ to the 21st of June, 10 to 14 hours). In general, shrinking the sun window at the summer solstice is not a good principle during planning.
4) Using tracking PV panels instead of fixed ones. But Ladybug Photovoltaics components do not support this kind of PV system; they only support fixed ones.
I would personally go with the first option. You can also experiment with the second and third ones. Comment back if you have any other questions.
-----------------------
* By "length of the PV panels" I mean the tiltedArrayHeight_ input of the PV SWH system size component.…
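For a rough feel for these numbers, here is a back-of-envelope sketch (in Python) of the noon summer-solstice geometry behind option 1: the panel must be short enough that its shadow tip does not reach the row below, so its admissible length scales directly with the vertical spacing between rows. The tilt and spacing values are hypothetical, and this is a simplification for illustration only; the actual PV SWH system size component works with the whole sun window and the sun's azimuth, so its results will differ.

import math

latitude = 37.0   # Incheon
tilt = 30.0       # panel tilt from horizontal, degrees (hypothetical)
spacing = 0.5     # vertical distance between panel rows, meters (hypothetical)

# approximate solar elevation at solar noon on the summer solstice
elevation = 90.0 - latitude + 23.45

# shadow reach below the attachment point = L*sin(tilt) + L*cos(tilt)*tan(elevation)
b, a = math.radians(tilt), math.radians(elevation)
length = spacing / (math.sin(b) + math.cos(b) * math.tan(a))
print(round(length, 3))  # grows with larger spacing and with steeper tilt

Doubling the spacing doubles the admissible panel length, which is why the first option is the most direct fix.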
ort and export from the images below and also from the HELP file of DB in the attachments (Page 71: Importing Geometric Data; Pages 78-80: Import 3-D CAD Data). In their HELP file, they mention "import geometric data".
However, regarding the input of schedules, loads, constructions, etc., DB normally uses "Components" and "Templates" (Page 29: Templates And Components; Page 591: Templates; Page 533: Components). "Templates" are databases of typical generic data, including Activity templates, Construction templates, Glazing templates, Facade templates, HVAC templates, Location templates, etc. "Components" are databases of individual data items (e.g. a construction type, material, or window pane).
Both "Component " and "Template" are allowed to be imported and exported by using "Import / Export library data" command (.ddf format - DB Database File; Page 734: Import Components/Templates, Export Components/Templates). DB also allows us to build up our own libraries of templates and components (Page 731: Library Management; Page 733: Template Library Management).
In order to import both geometric information and other information related to schedules, loads, constructions, etc. from GH to DB, we propose the following two ways:
1. GH(HB+GB) --> gbXML (both geometric and "Component" and "Template" information) --> DB
This is the way we most prefer. We did see information related to schedules, loads and constructions encoded in the gbXML file generated by GB, but still do not know why DB did not take this information (I also mentioned this in Q6 within the gh file). We assume this might be because the gbXML file we create encodes the schedules based on a different template / schema than the one DB expects (a quick way to check what the file actually contains is sketched after the two options below). We also posted this question to the DB forum for help.
(http://www.designbuilder.co.uk/component/option,com_forum/Itemid,25/page,viewtopic/p,13755/#13755)
2. GH(HB+GB) --> gbXML (geometric information only) + .ddf ("Component" and "Template" information only) --> DB
If the first way doesn't work and DB only takes geometric information from the gbXML, then we might turn to the other way: generating the .ddf files from GH(HB+GB) to pass the schedule, load and construction information to DB.
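As a quick diagnostic for the first way, here is a minimal Python sketch (standard library only) that counts which non-geometric element types a gbXML file actually contains, so we can confirm whether schedules and constructions made it into the export at all. The file name is hypothetical, and the tag names below are our assumption about which gbXML schema elements carry this data:

import xml.etree.ElementTree as ET
from collections import Counter

# count element tags in the gbXML file, ignoring XML namespaces
tree = ET.parse("model.gbxml")  # hypothetical file name
tags = Counter(elem.tag.split("}")[-1] for elem in tree.iter())

# tags we assume carry the non-geometric data (schema names may differ)
for tag in ("Schedule", "WeekSchedule", "DaySchedule", "Construction", "Material", "WindowType"):
    print(tag, tags.get(tag, 0))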
I was wondering whether it would be feasible for HB and GB to provide this function? And what would you suggest to achieve this?
In addition, we notice that DB can export XML files (not gbXML), so we are trying to figure out if DB also accepts / reads the XML file. If so, we might be able to convert the gbXML (with both geometric and schedule information) to XML. What do you think about that?
Thank you again for all your help!
Best,
Ding
DB import
DB export
Template libraries
Component libraries
…
bi-directional link, the link is unidirectional (downflow only), because of the use of proxies.
Matrix transforms and persistent constraints: I don't think this is true. The parts can have mates to other parts that preserve geometric relationships like 'coincident', 'aligned', etc. These are essentially bi-directional. GH's algorithmic approach does not do relationships in the same, flexible way. In GH, the 'relationship' has to be part of the generation method and depends on the creation sequence, i.e. draw line 2 perpendicularly from the end point of line 1. If you are thinking about parts or assemblies sharing or referencing parameters as part of the regen process, this is also possible. iLogic does this, and adds scripting. So does Catia. Inventor/iLogic can also access Excel and have all the parameter processing done centrally, if required.
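To make the creation-sequence point concrete, a minimal GHPython-style sketch of the perpendicular-line example (illustration only):

import Rhino.Geometry as rg

# line 1 is the 'parent'; it knows nothing about line 2
line1 = rg.Line(rg.Point3d(0, 0, 0), rg.Point3d(10, 0, 0))

# line 2 is derived FROM line 1's end point and direction, so the
# perpendicularity exists only because of the creation sequence
perp = rg.Vector3d.CrossProduct(line1.Direction, rg.Vector3d.ZAxis)
perp.Unitize()
line2 = rg.Line(line1.To, line1.To + perp * 5.0)

Moving line1 and re-running regenerates line2, but editing line2 directly can never propagate back to line1; there is no bi-directional 'mate' here.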
Consequently, scripting the placement of components is irrelevant in GH, unless you decide that each component needs to be contained in its own separate file.
I wouldn't be too hasty here. Yes, you are right about compartmentalisation. I think this needs to happen with GH, in order to deal with scalability and everyday interoperability requirements. Confining projects to one script is not sustainable. MCAD apps have been doing this for ages with 'Relational Modeling'.
The Adaptive Components placement example illustrates that it is beneficial to be able to script some 'hints' that can be used on placement of the component. Say, if your component requires points as inputs, then it should be able to find the nearest points to the cursor as it moves around. I think Aish's D# / DesignScript demoed this kind of behaviour a few years ago. Similarly, Modo Toolpipe reminds me how a lot of UI-based transactions can be captured as scripts (macro recorder etc). Allowing this input to be mixed in and/or extended by GH will, I think, yield a lot of 'modeling efficiency' around the edges. This is (mis)using GH as a user-programmable 'jig' for placing/manipulating 'dumb' elements in Rhino. It may even give the 'dumb elements' a bit more 'intelligence' by leaving behind embedded attributes, like links to particular construction planes etc.
Even if we confine ourselves to scripting, GH is a visual or graphic programming interface. A lot of 'insert and connect' tasks can be done more easily using graphic methods. If we need to select certain vertices on a mesh as inputs for, say, a facade panel, it's going to be quicker to do this 'graphically' (like the AC example) than ferreting out the relevant indices in the data tree et al. The 'facade panel' script would then have some coding to filter/prompt the user as to what inputs were acceptable, and so on.
This also brings up the point that generating components and assemblies in MCAD is not as straightforward. In iParts and iAssemblies, each configuration needs to be generated as a "child" (the individual file needs to be created for each child) before those children can be used elsewhere.
Not sure what you mean here. If the iParts are built up using sketches/profiles or other more rudimentary features (like Revit's profile/face etc. family templates), then reuse should be fairly straightforward. I suppose you could make it like GH scripting, if you cut and paste or include script snippets that generate the desired Inventor features.
One of the reasons why the distributed file approach makes perfect sense in MCAD, is that in industry you deal with a finite set of objects. Generative tools are usually not a requirement. Most mechanical engineers, product engineers and machinists would never have any use for that.
I don't think this is true. Look at the automotive body design apps, which are mostly Catia based. All of the body parts are pretty much 'generative' and generated from splines, in a procedural way, using very similar approaches to GH. Or sheet metal design. It's not always about configuration of off-the-shelf items like bolts. And, the constraints manager is available to arbitrate which bit of script fires first, and your mundane workaday associative dimensions etc can update without getting run over by the DAG(s) :-)
…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast. The trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can set this to a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
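As a minimal sketch of the round-robin idea, in plain Python that could live in a GH code module (the node names and folder layout are hypothetical):

import itertools, os

# hypothetical worker machines and the folder of generated batch files
nodes = ["node01", "node02", "node03"]
batch_files = sorted(f for f in os.listdir("runs") if f.endswith(".bat"))

# round-robin: cycle through the nodes, assigning one iteration at a time
assignments = {n: [] for n in nodes}
for node, bat in zip(itertools.cycle(nodes), batch_files):
    assignments[node].append(bat)

# write one master batch file per node; each machine runs only its own list
for node, bats in assignments.items():
    with open("run_%s.bat" % node, "w") as f:
        for bat in bats:
            f.write("call runs\\%s\n" % bat)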
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
Parametrically generate geometry
Assign input values, material, etc.
Generate radiance/ E+ batch files for all iterations
Calculate
Calc separate runs of Radiance/E+ in parallel via network clusters. Each run will be a unique iteration.
Save all temp files to single server location on server
Post Processing
Run a GH script from a single computer. Translate .ill files or .idf files into custom metrics or graphics (DA, ASE, %shade down, net solar gain, etc.)
Collect final data in single location (excel document) to be read by Design Explorer or Pollination.
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
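To make the post-processing step concrete, here is a minimal sketch of a DA calculation from an annual illuminance file. It assumes a plain-text layout of one row per timestep ("month day hour" followed by one lux value per sensor point, the Daysim convention) and a crude 8h-18h occupancy; Honeybee's actual .ill format and occupancy handling may differ:

def daylight_autonomy(ill_path, threshold=300.0):
    # DA per sensor: fraction of occupied timesteps at or above the threshold
    hours = 0
    above = None
    with open(ill_path) as f:
        for line in f:
            parts = line.split()
            hour = float(parts[2])
            values = [float(v) for v in parts[3:]]
            if above is None:
                above = [0] * len(values)
            if not (8 <= hour < 18):  # crude occupancy schedule
                continue
            hours += 1
            for i, v in enumerate(values):
                if v >= threshold:
                    above[i] += 1
    return [a / float(hours) for a in above]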
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but that he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would enable luminance-based glare metrics, especially annual glare metrics, to be more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
what they really mean by that, as in what buttons to push, so I assume it's a Windows Path entry?
2.) Modify PATH
Add the install location on the path, this is usually: C:\Program File\IronPython 2.7
But on 64-bit Windows systems it is: C:\Program File (x86)\IronPython 2.7
As a check, open a Windows command prompt and go to a directory (which is not the above) and type:
> ipy -V
PythonContext 2.7.0.40 on .NET 4.0.30319.225
Tutorial on setting a Windows environmental variable (path):
http://www.computerhope.com/issues/ch000549.htm
But this fails to point out that PATH already contains many entries separated by semicolons, so if I merely add a new variable called "path" it's likely that I will destroy existing program function. There's no info on how to just tack on another entry, and the Windows 7 edit box doesn't even show the whole collection, only one item (!), so I copied the existing path into a text editor to see the whole collection and added the C:\Program Files (x86)\IronPython 2.7 entry after an added semicolon, correcting for an Enthought page typo of no 's' on the end of "Program Files". I also checked the other entries, and many pointed to old missing directories, so I deleted those.
...and the test fails: "ipy" is not recognized as a command, even though the path now shows up when I type "path" in the Windows CMD window (that is, if I copy it all by right-clicking and paste it into a text editor to really view it). I can run it from the source directory just fine.
The rabbit hole was indeed deep. Using the Task Manager (control-alt-delete) to kill Explorer, then Run in the menu to restart "Explorer", along with restarting the Windows CMD window, worked. I can now invoke IronPython ("ipy") via the command line from any directory. For the "path" I edited path in the System Variables, not the User Variables. No, you don't have to type that whole crazy line above just to test the path variable: just type "ipy" (and control-Z to quit IronPython) in the CMD window invoked by typing "cmd" into the Start menu search box.
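Incidentally, a quicker way to see the whole PATH than pasting it into a text editor is a three-liner run with ipy (or any Python):

import os
# print every PATH entry on its own line so dead directories stand out
for entry in os.environ["PATH"].split(os.pathsep):
    print(entry)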
From the CMD line this step did work fine:
3.) ironpkg
Bootstrap ironpkg, which is a package install manager for binary (egg based) Python packages. Download ironpkg-1.0.0.py and type:
> ipy ironpkg-1.0.0.py --install
Now the ironpkg command should be available:
> ironpkg -h
(some useful help text is displayed here)
But of course Step 4 fails, giving pages of what seem to be error messages:
C:\Users\Nik>ironpkg scipy
Traceback (most recent call last):
  File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\enstaller\utils.py", line 92, in write_data_from_url
  File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 126, in urlopen
  File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 397, in open
  File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 509, in http_response
  ...
Why can't I just download Numpy as a normal file and thus also have it easy for other users to install it when they use my scripts? This is just crazy and lazy. The Enthought developer has turned this into a computer game, with a missing registration link and then the last step spits out errors with utterly no information on how to fix it manually.
This Step 4 error is covered here:
http://discourse.mcneel.com/t/trying-to-import-numpy-in-rhino-python-but-im-getting-this-error-cannot-import-multiarray-from-numpy-core/12912/16…
Added by Nik Willmore at 2:36pm on October 11, 2015
nowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 Challenge Material Intensities. The Call for Clusters is now open to proposals which respond in innovative ways to this year's challenge.
Deadline: September 19 2011
More information can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=129&Itemid=146
sg2012 takes place from 19-24 March 2012 at EMPAC (http://empac.rpi.edu/) and is hosted by Rensselaer Polytechnic Institute in Troy, upstate New York USA. The Workshop and Conference will be a gathering of the global community of innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four day Workshop 19-22 March, and a public conference beginning with Talkshop 23 March, followed by a Symposium 24 March. The event follows the format of the highly successful preceding events sg2010 Barcelona and sg2011 Copenhagen.
sg2012 Challenge Material Intensities
Simulation, Energy, Environment
Imagine the design space of architecture was no longer at the scale of rooms, walls and atria, but that of cells, grains and vapour droplets. Rather than the flow of people, services, or construction schedules, the focus becomes the flow of light, vapour, molecular vibrations and growth schedules: design from the inside out.
The sg2012 challenge, Material Intensities, is intended to dissolve our notion of the built environment as inert constructions enclosing physically sealed spaces. Spaces and boundaries are abundant with vibration, fluctuating intensities, shifting gradients and flows. The materials that define them are in a continual state of becoming: a dance of energy and information.
Material potential is defined by multiple properties: acoustical, chemical, electrical, environmental, magnetic, manufacturing, mechanical, optical, radiological, sensorial, and thermal. The challenge for sg2012 Material Intensities is to consider material economy when creating environments, micro-climates and contexts congenial for social interaction, activities and organisation. This challenge calls for design innovation and dialogue between disciplines and responsibilities.
sg2010 Working Prototypes strove to emancipate digital design from the hard drive by moving from the virtual to the actual in wrestling with the tangible world of physical fabrication. sg2011 Building the Invisible focused on informing digital design with real world data. sg2012 Material Intensities strives to energise our digital prototypes and infuse them with material behaviour. They have the potential to become rich simulations informed by the material dynamics, chemical composition, energy flows, force fields and environmental conditions that feed back into the design process.
More information can be found at http://www.smartgeometry.org…
ers and researchers, programmers and artists, professionals and academics who come together for 4 days of intense collaboration, development, and design.
The sg2012 Workshop will be organised around Clusters. Clusters are hubs of expertise. They comprise people, knowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 Challenge Material Intensities.
Applicants to the sg2012 Workshop will select their preferred cluster from the following:
Beyond Mechanics
Micro Synergetics
Composite Territories
Ceramics 2.0
Material Conflicts
Transgranular Perspiration
Reactive Acoustic Environments
Form Follows Flow
Bioresponsive Building Envelopes
Gridshell Digital Tectonics
More information about the Workshop and Clusters can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=116&Itemid=131
The application process will close on January 15th, 2012.
Full Fee $1500
Reduced Fee $750
Scholarship Fee $350
Fees include attendance to both the workshop and conference from March 19th-24th.
Reduced Fee and Scholarships are available only for Academics, Students and Young Practitioners, and are awarded during a competitive peer review process.
Follow us on Twitter at http://twitter.com/smartgeometry…
Added by Shane Burger at 12:29pm on December 13, 2011
ting at multiple geometries in the same location. I simply sorted the list of values and used the Delete Consecutive component. This potentially rearranges the order of values but I don't think that matters in your case. I also threw in an Int component which actually seems to make a difference (try sidestepping it and you will see!).
2-I flattened the output of the mesh component before sending it to union. This ensures that the original mesh is booleaned once with all the components rather than individually with each of the 86 components.
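For reference, here is the same Sort -> Delete Consecutive -> Int chain in a few lines of Python (e.g. inside a GHPython component). My guess as to why the Int component makes a difference: nearly-equal floats are not exactly equal, so without the rounding they would survive the consecutive-deletion step:

# hypothetical input with float noise at what should be identical locations
values = [2.0000001, 2.0, 5.0, 5.0000002, 7.0]
unique = sorted(set(int(round(v)) for v in values))
print(unique)  # [2, 5, 7]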
Is this what the result should look like?
One suggestion for future postings: when referencing geometry in rhino, it often helps if you attach your rhino file as well so people don't have to guess where you are starting from.
If you have further questions, just ask ;-)
cbass…