software connections built from the initial seed of the project. As always, you can download the new release from Food4Rhino. Make sure to remove the older versions of Ladybug and Honeybee and update your scripts.
This release is also special since it comes just about 3 years (3 years and 2 weeks) after the first release of Ladybug. As with any release, there have been a number of bug fixes and improvements, but we also have some major news this time. In no specific order, and to ensure that the biggest developments do not get lost in the extensive list of updates, here are the major ones:
Mostapha is re-writing Ladybug!
Ladybug for DynamoBIM is finally available.
Chris made bakeIt really useful by incorporating an export pathway to PDFs and vector-based programs.
Honeybee is now connected to THERM and the LBNL suite thanks to Chris Mackey.
Sarith has addressed a much-desired wish for Honeybee (Hi Theodore!) by adding components to model electric lighting with Radiance.
Djordje is on his way to making renewable energy deeply integrated with Ladybug by releasing components for modeling solar hot water.
There is a new bug. Check the bottom of the post for Dragonfly!
Last but definitely not least (in case you’re still not convinced that this release is a major one), Miguel has started a new project that brings some of Ladybug’s features directly to Rhino. We mean Rhino Rhino - a Rhino plugin! Say hi to Icarus! #surprise
Before we forget! Ladybug and Honeybee now have official stickers. Yes! We know about T-Shirts and mugs and they will be next. For now, you can deck-out your laptops and powerhouse simulation machines with the symbology of our collaborative software ecosystem.
Now go grab a cup of tea/coffee and read the details below:
Rewriting Ladybug!
Perhaps the most far-reaching development of the last 4 months is an effort on the part of Mostapha to initiate a well structured, well documented, flexible, and extendable version of the Ladybug libraries. While such code is something that few community members will interact with directly, a well-documented library is critical for maintaining the project, adding new features, and for porting Ladybug to other software platforms.
The new Ladybug libraries are still under development across a number of new repositories and they separate a ladybug-core, which includes epw parsing and all non-geometric functions, from interface-specific geometry libraries. This allows us to easily extend Ladybug to other platforms with a different geometry library for each platform (ie. ladybug-grasshopper, ladybug-dynamo, ladybug-web, etc) all of which are developed on top of the ladybug-core.
Without getting too technical, here is an example of a useful outcome of this development. If you want to know the number of hours that relative humidity is more than 90% for a given epw, all that you have to code (in any python interface) is the following:
import ladybug as lb
_epwFile = r"C:\EnergyPlusV7-2-0\WeatherData\USA_CO_Golden-NREL.724666_TMY3.epw"
epwfile = lb.epw.EPW(_epwFile)
filteredData = epwfile.relativeHumidity.filterByConditionalStatement('x>90')
print "Number of hours with Humidity more than 90 is %d "%len(filteredData.timeStamps)
Compare that to the 500+ lines that you would have had to write previously for this operation, which were usually tied to a single interface! Now let’s see what happens if you want to use the geometry-specific libraries. Let’s draw a sunpath in Grasshopper:
import ladybuggrasshopper.epw as epw
import ladybuggrasshopper.sunpath as sunpath
# get location data from epw file
location = epw.EPW(_epwFile).location
# initiate sunpath based on location
sp = sunpath.Sunpath.fromLocation(location, northAngle=0, daylightSavingPeriod=None, basePoint=cenPt, scale=scale, sunScale=sunScale)
# draw sunpath geometry
sp.drawAnnualSunpath()
# assign geometries to outputs
...
Finally we ask, how would this code look if we wanted to make a sunpath for Dynamo? Well, it will be exactly the same! Just change ladybuggrasshopper in the import lines to ladybugdynamo! Here is the code that creates the sunpath below.
With this ease of scripting, we hope to involve more of our community members in our development and make it easy for others to use Ladybug in their various preferred applications. By the next release, we will produce API documentation (documentation of all the Ladybug classes, methods and properties that you can script with) and begin making tutorials for those interested in getting deeper into Ladybug development.
LADYBUG
1 - Initial Release of Ladybug for Dynamo:
As is evident from the post above, we are happy to announce the first release of Ladybug for Dynamo! You can download the ladybug package from the Dynamo package manager. Make sure to download version 0.0.6, which is actually 0.0.1! It took some trial and error to get it up there. Once you have the file downloaded, you can watch these videos to get started:
The source code can be found under the ladybug-dynamo repository and (as you can already guess) it uses the new code base. It includes a very small toolkit of essential Ladybug components/nodes, but it has enough to get you started. You can import weather files, draw sunpaths and run sunlight-hours or radiation analyses.
There are two known issues in this release, but neither of them is critical. First, you need to have Dynamo 0.9.1 or higher installed, which you can download from here (http://dynamobuilds.com/). Second, it is recommended that you run the scripts with ‘Manual’ run (as opposed to ‘Automatic’), since the more intense calculations can make Dynamo crash in automatic mode.
To put things in perspective, here is how we would map Ladybug for Dynamo vs Ladybug and Honeybee for Grasshopper on the classic ‘Hype graph’. The good news is that we learned a lot from the last three years, which makes development of the Dynamo version easier and should get us to the plateau of productivity faster.
We should also note that the current development of the Dynamo interface is behind that of the Ladybug-Core, which means there are a number of features that are developed in the code but haven’t made their way to the nodes yet. They will be added gradually over the next month or two.
If you’re interested in getting involved in the development process or have ideas for the development, follow Ladybug on Facebook, Twitter and GitHub. We will only post major release news here; Facebook, GitHub and Twitter will be the main channels for posting the development process. There will also be a release of a new Ladybug for Grasshopper soon that will use the same Ladybug-Core libraries as the Dynamo interface [trying hard not to name it Ladybug 2].
2 - New Project “Icarus” Provides Ladybug Capabilities Directly in Rhino
Speaking of expanded cross-platform capabilities, the talented Miguel Rus has produced a standalone Rhino plugin off of the original Ladybug code, which is included in this release. After writing his own core C# libraries, Miguel’s plugin enables users to produce sunpaths and run sunlight-hours analyses in the Rhino scene without needing to open Grasshopper or engage in the (sometimes daunting) act of visual scripting.
This release includes his initial RHP plugin file. It is hoped that Miguel’s efforts will extend some of the capabilities of environmental design to individuals who are unfamiliar with visual scripting, casting the network of our community into new territory. We need your help spreading the word about Icarus since the people who will benefit the most from it have probably not read this far into the release notes. Also, as the project is in the early stages, your feedback can make a great difference. You can download the current release from this link.
Once you download the zip file, right-click and unblock it. Then extract the files under the C:\Program Files\Rhinoceros 5 (64-bit)\Plug-ins\ folder. Drag and drop the RHP file into Rhino and you should be ready to go. You can either type Icarus in the command line or open it via the panels. Here is a short video that shows how to run a sunlight-hours analysis study in Rhino.
3 - BakeIt Input Now Supports a Pathway to PDF + Vector Programs
As promised in the previous release, the BakeIt_ option available on Ladybug’s visual components has been enhanced to provide a full pathway to vector-based programs (like Illustrator and Inkscape) and to ease the export to vector formats like PDF.
This means that the BakeIt_ operation now places all text in the Rhino scene as actual editable text (not meshes) and any colored meshes are output as groups of colored hatches (so that they appear as color-filled polygons in vector-based programs). There is still an option to bake the colored geometries as light meshes (which requires smaller amounts of memory and computation time) but the new hatched capability should make it easier to incorporate Ladybug graphics in architectural drawings and documents like this vector psychrometric chart.
4 - Physiological Equivalent Temperature (PET) Now Available
Thanks to the efforts of Djordje Spasic, it is now possible to compute the common outdoor comfort metric ‘Physiological Equivalent Temperature’ (PET) with Ladybug. The capability has been included in this release’s “Thermal Comfort Indices” component and is supported by a “Body Characteristics” component in the Extra tab. PET is particularly helpful for evaluating outdoor comfort at a high spatial resolution, and so the next Honeybee release will include an option for PET with the microclimate map workflow.
5 - Solar Hot Water Components Available in WIP
Chengchu Yan and Djordje Spasic have built a set of components that perform detailed estimates of solar hot water. The components are currently undergoing final stages of testing and are available in the WIP tab of this release. You can read the full release notes for the components here.
6 - New Ladybug Graphic Standards
With the parallel efforts of so many developers, we have made an effort in this release to standardize the means by which you interact with the components. This includes warnings for missing inputs and the ability to make either icons or text appear on the components as you wish (Hi Andres!). A full list of all graphic standards can be found here. If you have any thoughts or comments on the new standards, feel free to voice them here.
7 - Wet Bulb Temperature Now Available
Thanks to Antonello Di Nunzio - the newest member of the Ladybug development team, it is now possible to calculate wet bulb temperature with Ladybug. Antonello’s component can be found under the WIP tab and takes inputs of dry bulb temperature, relative humidity, and barometric pressure.
8 - New View Analysis Types
The view analysis component now allows for several different view studies in addition to the previous ‘view to test points.’ These include sky view (which is helpful for studies of outdoor microclimate), as well as spherical view and ‘cone of vision’ view, which are helpful for indoor studies evaluating the overall visual connection to the outdoors.
HONEYBEE
1 - Connection to THERM and LBNL Programs
With this release, many of you will notice that a new tab has been added to Honeybee. The tab “11 | THERM” includes 7 new components that enable you to export ready-to-simulate Lawrence Berkeley National Lab (LBNL) THERM files from Rhino/Grasshopper. THERM is a 2D finite element heat flow engine that is used to evaluate the performance of wall/window construction details by simulating thermal bridging behavior. The new Honeybee tab represents the first ever CAD plugin interface for THERM, which has been in demand since the first release of LBNL THERM several years ago. The export workflow involves drawing window/wall construction details in Rhino and assigning materials and boundary conditions in Grasshopper to produce ready-to-simulate THERM files, allowing you to bypass the limited drawing interface of THERM completely. Additional components in the “11 | THERM” tab allow you to import the results of THERM simulations back into Grasshopper and assist with incorporating THERM results into Honeybee EnergyPlus simulations. Finally, two components assist with a connection to LBNL WINDOW for advanced modeling of glazing constructions. Example files illustrating many of the capabilities of the new components can be found at these links:
THERM_Export_Workflow, THERM_Comparison_of_Stud_Wall_Constructions
Analyze_THERM_Results, Thermal_Bridging_with_THERM_and_EnergyPlus
Import_Glazing_System_from_LBNL_WINDOW, Import_LBNL_WINDOW_Glazing_Assembly_for_EnergyPlus
It is recommended that those who are using these THERM components for the first time begin by exploring this example file.
Tutorial videos on how to use the components will be posted soon. A great deal of thanks is due to the LBNL team that was responsive to questions at the start of the development and special thanks goes to Payette Architects, which allowed Chris Mackey (the author of the components) a significant amount of paid time to develop them.
2 - Electrical Lighting Components with Enhanced Capabilities for Importing and Manipulating IES Files
Thanks to the efforts of Sarith Subramaniam, it is now much easier and more flexible to include electric lighting in Honeybee Radiance simulations. A series of very exciting images and videos can be found in his release post.
You can find the components under the WIP tab. Sarith is looking for feedback and wish-list items. Please give them a try and let him know your thoughts. Several example files showing how to use the components can be found here: 1, 2, 3.
3 - Expanded Dynamic Shade Capabilities
After great demand, it is now possible to assign several different types of control strategies for interior blinds and shades for EnergyPlus simulations. Control thresholds range from zone temperature, to zone cooling load, to radiation on windows, to many combinations of these variables. The new component also features the ability to run EnergyPlus simulations with electrochromic glazing. An example file showing many of the new capabilities can be found here.
Dragonfly Beta
In order to link the capabilities of Ladybug + Honeybee to a wider range of climatic data sets and analytical tools, a new insect has been initiated under the name of Dragonfly. While the Dragonfly components are not included with the download of this release, the most recent version can be downloaded here. An example file showing how to use Dragonfly to warp EPW data to account for urban heat island effect can also be found here. By the next release, the capabilities of Dragonfly should be robust enough for it to fly on its own. Additional features that will be implemented in the next few months include importing thermal satellite image data to Rhino/GH as well as the ability to warp EPW files to account for climate change projections. Anyone interested in testing out the new insect should feel free to contact Chris Mackey.
And finally, it is with great pleasure that we welcome Sarith and Antonello to the team. As mentioned in the above release notes, Sarith has added a robust implementation for electric light modeling with Honeybee and Antonello has added a component to calculate wet bulb temperature while providing stellar support to a number of people here on the GH forum.
As always let us know your comments and suggestions.
Enjoy!
Ladybug+Honeybee development team
PS: Special thanks to Chris for writing most of the release notes!…
currently within a fake euphoria framework - blame China/UAE) and a potential decision about doing/developing this or doing that … well …anyway … read and enjoy.
AEC matters: The good, the bad and the ugly.
The bad news: Rhino is NOT suitable for the job (although some use it … but only in the sense that people use Modo for the so called “hard modeling”). By job I mean things up to shop drawing level + specs + you and me (we call it Final level) – nothing to do with sketches and outlines of some abstract “schematic” topology.
The ugly news: The so-called Design-Construct approach gains momentum exponentially, especially in countries like China/UAE/BRICS (95% of the whole AEC activity worldwide happens there). DeCo means: AEC engineers deliver some kind of study at a preliminary level, and the main contractor splits (outsources) the job and assigns the study completion AND the construction to various sub-contractors. That thing appeared first - on a large scale - in Dubai 15 years ago. This means that the era of Sergio Pininfarina is over and out: welcome, anonymous Toyota designer. In plain English: days of construction corporations fast replacing practices. Dead men walking.
The good news: All AEC-related apps (Revit, AECOsim, Allplan and the like) are in a lethargic state as regards the brave new world (based on the archaic level-driven organization schemas etc etc). Of course they all claim the exact opposite and point out that they support BIM (nobody mentions PLM) better than the other guy. But the 21st century - helped by 2 forthcoming unavoidable crises, (a) the shortage of water and (b) the transition from a carbon to a hydrogen economy - isn’t about bureaucracy: think cost/resources optimization and “fitness” rather than the China/UAE type of liquid trend. Days of euphoria fast approaching the Wall.
Top to bottom and vice versa.
Old-days Titans (Oscar, Mies, Walter, Pierre Luigi, Frank, Eero, blah blah) outlined things (mostly using crayons) and the rest struggled to translate these into reality in a one-way, vector-like process: top to bottom, that is. These days the inverse gains momentum: when in the whole, consider the part … validate … redo … validate … redo. This means bottom-to-top geared with top-to-bottom. In plain English: the child imposes rules on the parent and the parent imposes rules on the child. This means classic MCAD feature/history modeling (CATIA/NX/MS). This is something that Rhino can’t do (not to mention that Rhino is a surface modeler - a rather critical fact).
The parts that are bigger than the whole.
Go there (http://www.behance.net/gallery/2885057/a-myriad-of-cables) and inspect the whole thing: it’s a parametric nightmare made with the other guy (Generative Components - slower than a Skoda + bugs + why bother?). But the whole (masts and membranes and the like) means nothing here: focus on the details that are critical for connecting this with that. Complex feature-driven solids that are made with internal (on a per-se basis) parameters (like fillets required for casting or the radius for cable anchoring) whilst they comply with external rules/parameters (cable angles, topology clash issues etc etc). So the whole outlines possibilities … and the part either can follow … or the part must change … or the whole must change. Can you do that with GH/Rhino? And if not, what’s missing? (lots of things, to be honest).
Some other "similar" things:
The narrow picture.
I agree with what others already said and with pretty much all Ola’s points – especially the visual drag-and-drop path mapper (i.e. a visual data manipulator so to speak) and the enable/disable components in groups capability.
Some other suggestions:
A multi-canvas capability. As things are right now … it’s like working in Rhino in one view (rather unsuitable, I guess). In fact, since overlapping views don’t work in Rhino … well … you know, he he.
A working auto profile arrangement capability (non twisted Loft/Sweep and the likes). Worth 1Bn dollars that one.
Ability to locate components that caused this or that in the Rhino view: meaning a 2 way communication approach : GH makes things happen in Rhino and things can indicate their cause in the GH canvas.
A robust collection of components that bake stuff in nested blocks (emulating some primitive assembly/component way of thinking). Why may you ask? Well … the whole objective is to talk to CATIA (via STEP) don’t you agree? CATIA makes things happen in real-life not Rhino.
A robust collection of components that can create real-life parametric tensile membrane solutions (get some inspiration from FormFinder: useless because it’s academic but good to point the way). Membranes (and geodesics) are the future.
I could continue ad infinitum, but IMHO the big picture is worth 12345,67 “focused” GH improvements.
May the Force (the dark option) be with us all.
…
lly it should not make much of a difference - random number generation is not affected, and neither is mutation. Crossover is a bit more tricky: I use Simulated Binary Crossover (SBX-20), which was introduced back in 1994:
Deb K., Agrawal R. B.: Simulated Binary Crossover for Continuous Search Space, IITK/ME/SMD-94027, Technical Reports, Indian Institute of Technology, Kanpur, India, November 1994.
Abstract. The success of binary-coded genetic algorithms (GAs) in problems having discrete search space largely depends on the coding used to represent the problem variables and on the crossover operator that propagates building blocks from parent strings to children strings. In solving optimization problems having continuous search space, binary-coded GAs discretize the search space by using a coding of the problem variables in binary strings. However, the coding of real-valued variables in finite-length strings causes a number of difficulties: inability to achieve arbitrary precision in the obtained solution, fixed mapping of problem variables, the inherent Hamming cliff problem associated with binary coding, and processing of Holland's schemata in continuous search space. Although a number of real-coded GAs are developed to solve optimization problems having a continuous search space, the search powers of these crossover operators are not adequate. In this paper, the search power of a crossover operator is defined in terms of the probability of creating an arbitrary child solution from a given pair of parent solutions. Motivated by the success of binary-coded GAs in discrete search space problems, we develop a real-coded crossover (which we call the simulated binary crossover, or SBX) operator whose search power is similar to that of the single-point crossover used in binary-coded GAs. Simulation results on a number of real-valued test problems of varying difficulty and dimensionality suggest that the real-coded GAs with the SBX operator are able to perform as good or better than binary-coded GAs with the single-point crossover. SBX is found to be particularly useful in problems having multiple optimal solutions with a narrow global basin and in problems where the lower and upper bounds of the global optimum are not known a priori. Further, a simulation on a two-variable blocked function shows that the real-coded GA with SBX works as suggested by Goldberg, and in most cases the performance of real-coded GA with SBX is similar to that of binary GAs with a single-point crossover. Based on these encouraging results, this paper suggests a number of extensions to the present study.
7. Conclusions
In this paper, a real-coded crossover operator has been developed based on the search characteristics of a single-point crossover used in binary-coded GAs. In order to define the search power of a crossover operator, a spread factor has been introduced as the ratio of the absolute differences of the children points to that of the parent points. Thereafter, the probability of creating a child point for two given parent points has been derived for the single-point crossover. Motivated by the success of binary-coded GAs in problems with discrete search space, a simulated binary crossover (SBX) operator has been developed to solve problems having continuous search space. The SBX operator has search power similar to that of the single-point crossover.
On a number of test functions, including De Jong's five test functions, it has been found that real-coded GAs with the SBX operator can overcome a number of difficulties inherent with binary-coded GAs in solving continuous search space problems: the Hamming cliff problem, the arbitrary precision problem, and the fixed mapped coding problem. In the comparison of real-coded GAs with an SBX operator and binary-coded GAs with a single-point crossover operator, it has been observed that the performance of the former is better than the latter on continuous functions and similar to the latter in solving discrete and difficult functions. In comparison with another real-coded crossover operator (i.e., BLX-0.5) suggested elsewhere, SBX performs better in difficult test functions. It has also been observed that SBX is particularly useful in problems where the bounds of the optimum point are not known a priori and where there are multiple optima, of which one is global.
Real-coded GAs with the SBX operator have also been tried in solving a two-variable blocked function (the concept of blocked functions was introduced in [10]). Blocked functions are difficult for real-coded GAs, because local optimal points block the progress of the search towards the global optimal point. The simulation results on the two-variable blocked function have shown that on most occasions the search proceeds the way predicted in [10]. Most importantly, it has been observed that real-coded GAs with SBX work similarly to binary-coded GAs with single-point crossover in overcoming the barrier of the local peaks and converging to the global basin. However, it is premature to conclude whether real-coded GAs with the SBX operator can overcome the local barriers in higher-dimensional blocked functions.
These results are encouraging and suggest avenues for further research. Because the SBX operator uses a probability distribution for choosing a child point, the real-coded GAs with SBX are one step ahead of the binary-coded GAs in terms of achieving a convergence proof for GAs. With a direct probabilistic relationship between children and parent points used in this paper, cues from the classical stochastic optimization methods can be borrowed to achieve a convergence proof of GAs, or a much closer tie between the classical optimization methods and GAs is on the horizon.
In short, according to the authors, the SBX operator with real gene values is as good as older operators specially designed for discrete searches, and better in continuous searches. As far as I know, SBX has meanwhile become a standard general crossover operator.
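For anyone curious what SBX actually does per gene, here is a minimal textbook-style sketch in Python (a generic illustration of the operator as described in the paper, not necessarily line-for-line what Octopus does; eta = 20 corresponds to SBX-20, and variable-bounds handling is omitted):

import random

def sbx_crossover(parent1, parent2, eta=20.0, gene_probability=0.5):
    # Simulated Binary Crossover on two real-valued parent vectors.
    child1, child2 = list(parent1), list(parent2)
    for i in range(len(parent1)):
        if random.random() > gene_probability:
            continue  # leave this gene untouched
        u = random.random()
        # spread factor beta, distributed so children tend to stay near the parents
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        child1[i] = 0.5 * ((1.0 + beta) * parent1[i] + (1.0 - beta) * parent2[i])
        child2[i] = 0.5 * ((1.0 - beta) * parent1[i] + (1.0 + beta) * parent2[i])
    return child1, child2

# example: crossing two 3-gene individuals
print(sbx_crossover([0.2, 0.5, 0.9], [0.3, 0.1, 0.8]))

Larger eta values concentrate the spread factor near 1, so children stay closer to their parents; smaller values make the search more exploratory.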
But:
- there might be better ones out there that I just haven't seen yet. Please tell me.
- besides tournament selection and mutation, crossover is just one part of the breeding pipeline. There is also the elite management for MOEAs, which is AT LEAST as important as the breeding itself.
- depending on the problem, there are almost always better problem-specific ways to code the mutation and crossover operators. But Octopus is meant to stay general for the moment - maybe there's a way to provide an interface for coding those things yourself..!?
2) Elite size = SPEA-2 archive size, yes. The rate depends on your convergence behaviour, I would say. I usually start off with at least half the size of the population, but mostly the same size (which, I just realized, is hard-coded in the new version) is big enough.
4) The non-dominated front is always put into the archive first. If the archive size is exceeded, the least important individuals (following SPEA-2's truncation strategy) are removed one by one until the size is reached. If the front is smaller, the fittest dominated individuals are put into the elite; the latter happens at the beginning of the run, when the front hasn't been discovered well yet. (A simplified sketch of this follows after these answers.)
3) Yes it is. This is a custom implementation I figured out myself. However, I'm close to having the HypE algorithm working in the new version, which natively has the ability to articulate preference relations on sets of solutions.
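To illustrate the archive update described in 4) above, here is a deliberately simplified Python sketch (the names are mine; real SPEA-2 ranks by strength-based fitness and truncates using k-th-nearest-neighbour distances with tie-breaking, which this naive version only approximates):

import math

def dominates(a, b):
    # True if objective vector a Pareto-dominates b (minimisation assumed).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(population, archive_size):
    # 1) the non-dominated front goes into the archive first
    front = [p for p in population
             if not any(dominates(q, p) for q in population if q is not p)]
    if len(front) > archive_size:
        # 2) too many: remove the most crowded solutions one by one
        while len(front) > archive_size:
            nearest = lambda p: min(math.dist(p, q) for q in front if q is not p)
            front.remove(min(front, key=nearest))
    elif len(front) < archive_size:
        # 3) too few: pad with dominated individuals (naively ranked here by objective sum)
        dominated = sorted((p for p in population if p not in front), key=sum)
        front.extend(dominated[:archive_size - len(front)])
    return front

# example with five 2-objective solutions and an archive of size 3
pop = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0], [5.0, 5.0]]
print(update_archive(pop, 3))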
…
via MIDI controllers.
My idea is to link Pure Data to GH via UDP. Why Pure Data? Because I can relate data, like in GH, to generate numeric relations (and link it to audio generation).
So far I got PD and Processing to talk, but I can't get through to Grasshopper.
I use these definitions to make PD and Processing talk (http://ubaa.net/shared/processing/udp/) and this GHX to get the data into GH: http://www.grasshopper3d.com/forum/attachment/download?id=2985220%3...
I got this data from this post, but the GH definition doesn't work for me. I have tried LAN definitions and "the engine" as well, but they both freeze, even if I send data through Processing or PD.
I have a lot of questions at this point:
1. Why does Processing tell me that I am getting the data from different ports, while I'm using 6000?
2. Why do I get no data out of the UDP definition, even though it should say something like "waiting for data/port/etc." as defined in the C# capsule?
3. Is there a direct way to get MIDI data (key and CC) into GH?
I also tried to use Firefly to get the data via a COM port. I know you can do this trick in Processing, but I just don't know how.
Well, if anyone could help me, I would share the results here (since this is for a master's thesis, the results should be very interesting).
UDP has always been an unsolved issue in other posts. Maybe we can work it out ;)
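For reference, the bare UDP receive step I'm trying to reproduce looks roughly like this in plain Python (port 6000 as above; this is only a generic socket sketch, not the GH definition from the link):

import socket

UDP_IP = "127.0.0.1"   # listen on localhost; change if PD runs on another machine
UDP_PORT = 6000        # the port used in the Processing/PD patches

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_IP, UDP_PORT))
sock.settimeout(5.0)   # don't block forever if nothing arrives

try:
    data, addr = sock.recvfrom(1024)   # buffer size in bytes
    print("received %r from %s" % (data, addr))
except socket.timeout:
    print("no UDP data received on port %d" % UDP_PORT)
finally:
    sock.close()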
Thanks…
Added by jota aldunce at 8:43am on September 28, 2010
tmap processing, Kinect, video tracking, etc over the past two years! This end script is sort of a solution to a lot of questions: Why can't the Firefly for Xbox 360 Kinect use the coordinate mapper function? How do I track glyphs in Grasshopper in real time and record them? How do I maintain functional FPS when recording? How do I convert between screen resolutions and reformatted point clouds?
The structure of this script is to start with one component automatically triggering the recording of a Point Cloud and other necessary data, shut itself down, and start up the next Calibration Component. The idea is that if the camera is immediately restarted under a coordinate mapper function, the offset can be observed and used for calibration.
The physical glyphs are squares of a color with high saturation. To aid in the consistency of the results, four square glyphs were mounted to a skinny length of steel.
I wrote the ColorBinary and D-Range Components seen in the first attached image separately to make the Firefly2Emgu component a smaller undertaking. Processing and using the Kinect V1 Color Stream and Depth Stream could be a tutorial in and of itself, as could the entire other world of EmguCV.
The second attached image shows the point cloud of the results. I let the component output three indexes to better show the process of the Calibration Component. For each glyph, the bottom right point is the original output of the tracker without any calibration. From there, the most upper left point is the square offset from a value isolated by observing the post process. The third point is the end result created by my Calibration Component.
One limitation of this script is that after recording with the first component, there is significant computation time. Another issue is that the end result is limited to the original resolution of the grid set in the first component, which is smaller to improve FPS. I understand that many people have moved on from the Kinect 360, but I was curious why the coordinate mapper was not worked into the original Firefly design and wanted to explore alternative solutions. I have been sitting on a newer Kinect for some time, and can't wait to see FPS under a newer, similar, singular component with a similar mission statement. At any rate, I plan to create a similar post process setup with the new Kinect and compare.
This is a work in progress, but I wanted to share.
Many thanks to this forum and Andy Payne for all of the Firefly tools and inspiration.
…
(referring to the above image)
Column | Symbol | Unit
Area of Section | A | cm2
effective Vy shear Area | Ay | cm2
effective Vz shear Area | Az | cm2
Second Moment of Area (strong axis) | Iy | cm4
Elastic Modulus upper (strong axis) | Wy | cm3
Elastic Modulus lower (strong axis) | Wy | cm3
Plastic Modulus (strong axis) | Wply | cm3
Radius of Gyration (strong axis) | i_y | cm
Second Moment of Area (weak axis) | Iz | cm4
Elastic Modulus (weak axis) | Wz | cm3
Plastic Modulus (weak axis) | Wplz | cm3
Radius of Gyration (weak axis) | i_z | cm
I have a very similar table which I could import into the Karamba table, but I also have i_v and i_u values, for instance, as well as the radius of inertia:
dimension | Mass [kg/m] | Area [mm2] | axis | Ix [mm4] | Wpx [mm3] | ix [mm] | axis | Iy [mm4] | Wpy [mm3] | iy [mm] | axis | Iv [mm4] | Wpv [mm3] | iv [mm] | Width [mm] | Thickness [mm] | Radius R [mm]
L 20x3 | 0.89 | 113 | x-x | 4,000 | 290 | 5.9 | y-y | 4,000 | 290 | 5.9 | v-v | 1,700 | 200 | 3.9 | 20 | 3 | 4
L 20x4 | 1.15 | 146 | x-x | 5,000 | 360 | 5.8 | y-y | 5,000 | 360 | 5.8 | v-v | 2,200 | 240 | 3.8 | 20 | 4 | 4
L 25x3 | 1.12 | 143 | x-x | 8,200 | 460 | 7.6 | y-y | 8,200 | 460 | 7.6 | v-v | 3,400 | 330 | 4.9 | 25 | 3 | 4
L 25x4 | 1.46 | 186 | x-x | 10,300 | 590 | 7.4 | y-y | 10,300 | 590 | 7.4 | v-v | 4,300 | 400 | 4.8 | 25 | 4 | 4
L 30x3 | 1.37 | 175 | x-x | 14,600 | 680 | 9.1 | y-y | 14,600 | 680 | 9.1 | v-v | 6,100 | 510 | 5.9 | 30 | 3 | 5
L 30x4 | 1.79 | 228 | x-x | 18,400 | 870 | 9.0 | y-y | 18,400 | 870 | 9.0 | v-v | 7,700 | 620 | 5.8 | 30 | 4 | 5
L 36x3 | 1.66 | 211 | x-x | 25,800 | 990 | 11.1 | y-y | 25,800 | 990 | 11.1 | v-v | 10,700 | 760 | 7.1 | 36 | 3 | 5
L 36x4 | 2.16 | 276 | x-x | 32,900 | 1,280 | 10.9 | y-y | 32,900 | 1,280 | 10.9 | v-v | 13,700 | 930 | 7.0 | 36 | 4 | 5
L 36x5 | 2.65 | 338 | x-x | 39,500 | 1,560 | 10.8 | y-y | 39,500 | 1,560 | 10.8 | v-v | 16,500 | 1,090 | 7.0 | 36 | 5 | 5
I have diagonals (bracings) which can buckle in these "non-regular" directions too, and they do. If I could add those values, then in the Karamba model I could assign specific buckling scenarios. I can see another challenge at the ModifyElement component: I will not be able to choose these buckling lengths in these directions.
Do you think this functionality can be added in the near term, or should I try to find another way to model these members?
Br, Balazs
…
ng is deciding how and where to store your data. If you're writing textual code using any one of a huge number of programming languages there are a lot of different options, each with its own benefits and drawbacks. Sometimes you just need to store a single data point. At other times you may need a list of exactly one hundred data points. At other times still circumstances may demand a list of a variable number of data points.
In programming jargon, lists and arrays are typically used to store an ordered collection of data points, where each item is directly accessible. Bags and hash sets are examples of unordered data storage. These storage mechanisms do not have a concept of which data comes first and which next, but they are much better at searching the data set for specific values. Stacks and queues are ordered data structures where only the youngest or oldest data points are accessible respectively. These are popular structures for code designed to create and execute schedules. Linked lists are chains of consecutive data points, where each point knows only about its direct neighbours. As a result, it's a lot of work to find the one-millionth point in a linked list, but it's incredibly efficient to insert or remove points from the middle of the chain. Dictionaries store data in the form of key-value pairs, allowing one to index complicated data points using simple lookup codes.
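To make those distinctions concrete, here is a small generic Python illustration (not Grasshopper code; the values are made up) of a list, a dictionary, and a queue as described above:

from collections import deque

# Ordered storage: every item is directly accessible by its position.
numbers = [0.0, 1.5, 3.0, 4.5]
print(numbers[2])            # -> 3.0, the third item (zero-based)

# Key-value storage: index complicated data with simple lookup codes.
face_counts = {"box": 6, "cylinder": 3, "prism": 5}
print(face_counts["prism"])  # -> 5

# Queue: only the oldest item is accessible (first in, first out).
tasks = deque(["import epw", "draw sunpath", "run analysis"])
print(tasks.popleft())       # -> "import epw"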
The above is just a small sampling of popular data storage mechanisms; there are many, many others. From multidimensional arrays to SQL databases. From read-only collections to concurrent k-d trees. It takes a fair amount of knowledge and practice to be able to navigate this bewildering sea of options and pick the best suited storage mechanism for any particular problem. We did not wish to confront our users with this plethora of programmatic principles, and instead decided to offer only a single data storage mechanism.*
Data storage in Grasshopper
In order to see what mechanism would be optimal for Grasshopper, it is necessary to first list the different possible ways in which components may wish to access and store data, and also how families of data points flow through a Grasshopper network, often acquiring more complexity over time.
A lot of components operate on individual values and also output individual values as results. This is the simplest category, let's call it 1:1 (pronounced as "one to one", indicating a mapping from single inputs to single outputs). Two examples of 1:1 components are Subtraction and Construct Point. Subtraction takes two arguments on the left (A and B), and outputs the difference (A-B) to the right. Even when the component is called upon to calculate the difference between two collections of 12 million values each, at any one time it only cares about three values; A, B and the difference between the two. Similarly, Construct Point takes three separate numbers as input arguments and combines them to form a single xyz point.
Another common category of components create lists of data from single input values. We'll refer to these components as 1:N. Range and Divide Curve are oft used examples in this category. Range takes a single numeric domain and a single integer, but it outputs a list of numbers that divide the domain into the specified number of steps. Similarly, Divide Curve requires a single curve and a division count, but it outputs several lists of data, where the length of each list is a function of the division count.
The opposite behaviour also occurs. Common N:1 components are Polyline and Loft, both of which consume a list of points and curves respectively, yet output only a single curve or surface.
Lastly (in the list category), N:N components are also available. A fair number of components operate on lists of data and also output lists of data. Sort and Reverse List are examples of N:N components you will almost certainly encounter when using Grasshopper. It is true that N:N components mostly fall into the data management category, in the sense that they are mostly employed to change the way data is stored, rather than to create entirely new data, but they are common and important nonetheless.
A rare few components are even more complex than 1:N, N:1, or N:N, in that they are not content to operate on or output single lists of data points. The Divide Surface and Square Grid components want to output not just lists of points, but several lists of points, each of which represents a single row or column in a grid. We can refer to these components as 1:N' or N':1 or N:N' or ... depending on how the inputs and outputs are defined.
The above listing of data mapping categories encapsulates all components that ship with Grasshopper, though it does not necessarily minister to all imaginable mappings. However, in the spirit of getting on with the software, it was decided that a data structure that could handle individual values, lists of values, and lists of lists of values would solve at least 99% of the then existing problems and was thus considered to be a 'good thing'.
Data storage as the outcome of a process
If the problems of 1:N' mappings only occurred in those few components to do with grids, it would probably not warrant support for lists-of-lists in the core data structure. However, 1:N' or N:N' mappings can be the result of the concatenation of two or more 1:N components. Consider the following case: A collection of three polysurfaces (a box, a capped cylinder, and a triangular prism) is imported from Rhino into Grasshopper. The shapes are all exploded into their separate faces, resulting in 6 faces for the box, 3 for the cylinder, and 5 for the prism. Across each face, a collection of isocurves is drawn, resembling a hatching. Ultimately, each isocurve is divided into equally spaced points.
This is not an unreasonably elaborate case, but it already shows how shockingly quickly layers of complexity are introduced into the data as it flows from the left to the right side of the network.
It's no good ending up with a single huge list containing all the points. The data structure we use must be detailed enough to allow us to select from it any logical subset. This means that the ultimate data structure must contain a record of all the mappings that were applied from start to finish. It must be possible to select all the points that are associated with the second polysurface, but not the first or third. It must also be possible to select all points that are associated with the first face of each polysurface, but not any subsequent faces. Or a selection which includes only the fourth point of each division and no others.
The only way such selection sets can be defined, is if the data structure contains a record of the "history" of each data point. I.e. for every point we must be able to figure out which original shape it came from (the cube, the cylinder or the prism), which of the exploded faces it is associated with, which isocurve on that face was involved and the index of the point within the curve division family.
A flexible mechanism for variable history records.
The storage constraints mentioned so far (to wit, the requirement of storing individual values, lists of values, and lists of lists of values), combined with the relational constraints (to wit, the ability to measure the relatedness of various lists within the entire collection) lead us to Data Trees. The data structure we chose is certainly not the only imaginable solution to this problem, and due to its terse notation can appear fairly obtuse to the untrained eye. However since data trees only employ non-negative integers to identify both lists and items within lists, the structure is very amenable to simple arithmetic operations, which makes the structure very pliable from an algorithmic point of view.
A data tree is an ordered collection of lists. Each list is associated with a path, which serves as the identifier of that list. This means that two lists in the same tree cannot have the same path. A path is a collection of one or more non-negative integers. Path notation employs curly brackets and semi-colons as separators. The simplest path contains only the number zero and is written as: {0}. More complicated paths containing more elements are written as: {2;4;6}. Just as a path identifies a list within the tree, an index identifies a data point within a list. An index is always a single, non-negative integer. Indices are written inside square brackets and appended to path notation, in order to fully identify a single piece of data within an entire data tree: {2;4;6}[10].
Since both path elements and indices are zero-based (we start counting at zero, not one), there is a slight disconnect between the ordinality and the cardinality of numbers within data trees. The first element equals index 0, the second element can be found at index 1, the third element maps to index 2, and so on and so forth. This means that the "Eleventh point of the seventh isocurve of the fifth face of the third polysurface" will be written as {2;4;6}[10]. The first path element corresponds with the oldest mapping that occurred within the file, and each subsequent element represents a more recent operation. In this sense the path elements can be likened to taxonomic identifiers. The species {Animalia;Mammalia;Hominidea;Homo} and {Animalia;Mammalia;Hominidea;Pan} are more closely related to each other than to {Animalia;Mammalia;Cervidea;Rangifer}** because they share more codes at the start of their classification. Similarly, the paths {2;4;4} and {2;4;6} are more closely related to each other than they are to {2;3;5}.
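To make the notation concrete, here is a small illustrative sketch (plain Python, not Grasshopper's actual data tree API; the coordinates are made up) that stores items against paths and retrieves the element written above as {2;4;6}[10]:

# A dictionary keyed by path tuples mimics path/index addressing.
tree = {}

# "Eleventh point of the seventh isocurve of the fifth face of the
# third polysurface" -> path {2;4;6}, index [10]
tree[(2, 4, 6)] = [None] * 11
tree[(2, 4, 6)][10] = (3.5, 1.2, 0.0)

# Two lists belonging to the second polysurface: paths starting with 1
tree[(1, 0, 0)] = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
tree[(1, 2, 0)] = [(0.0, 1.0, 0.0)]

# Select every list associated with the second polysurface
second_polysurface = {p: pts for p, pts in tree.items() if p[0] == 1}

print(tree[(2, 4, 6)][10])          # -> (3.5, 1.2, 0.0)
print(sorted(second_polysurface))   # -> [(1, 0, 0), (1, 2, 0)]

Because paths are just tuples of non-negative integers, selections like "everything from the second polysurface" reduce to simple comparisons on the leading path elements, which is exactly the arithmetic pliability described above.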
The messy reality of data trees.
Although you may agree with me that in theory the data tree approach is solid, you may still get frustrated at the rate at which data trees grow more complex. Often Grasshopper will choose to add additional elements to the paths in a tree where none in fact is needed, resulting in paths that all share a lot of zeroes in certain places. For example a data tree might contain the paths:
{0;0;0;0;0}
{0;0;0;0;1}
{0;0;0;0;2}
{0;0;0;0;3}
{0;0;1;0;0}
{0;0;1;0;1}
{0;0;1;0;2}
{0;0;1;0;3}
instead of the far more economical:
{0;0}
{0;1}
{0;2}
{0;3}
{1;0}
{1;1}
{1;2}
{1;3}
The reason all these zeroes are added is because we value consistency over economics. It doesn't matter whether a component actually outputs more than one list, if the component belongs to the 1:N, 1:N', or N:N' groups, it will always add an extra integer to all the paths, because some day in the future, when the inputs change, it may need that extra integer to keep its lists untangled. We feel it's bad behaviour for the topology of a data tree to be subject to the topical values in that tree. Any component which relies on a specific topology will no longer work when that topology changes, and that should happen as seldom as possible.
Conclusion
Although data trees can be difficult to work with and probably cause more confusion than any other part of Grasshopper, they seem to work well in the majority of cases and we haven't been able to come up with a better solution. That's not to say we never will, but data trees are here to stay for the foreseeable future.
* This is not something we hit on immediately. The very first versions of Grasshopper only allowed for the storage of a single data point per parameter, making operations like [Loft] or [Divide Curve] impossible. Later versions allowed for a single list per parameter, which was still insufficient for all but the most simple algorithms.
** I'm skipping a lot of taxonometric classifications here to keep it simple.…
Added by David Rutten at 2:22pm on January 20, 2015