nd the challenge "Building the Invisible: Informing Digital Design with Real World Data". Information about each Workshop Cluster can be found here:
Cyber Gardens
Use the Force
Urban Feeds
Suspended Dreams
Interacting with the City
Agent Construction
Authored Sensing
Performing Skins
Responsive Acoustic Surfacing
Hybrid Space Structure Typologies
The SmartGeometry 2011 Workshop will take place at CITA http://cita.karch.dk/
Applications to attend the SmartGeometry 2011 Workshop in Copenhagen will close on 31st January 2011. General Conference registration will open within 1 month.
We hope to see you there!
****************************************************
Workshop 28th-31st March
Shop Talk 1 April
Symposium 2 April
Reception 2 April
These events follow the highly successful previous SG events in Barcelona 2010, San Francisco 2009, Munich 2008, New York 2007, Cambridge/London, UK 2006 and multiple preceding events.
Click here for more info...
This year's Challenge is entitled:
BUILDING THE INVISIBLE
Informing Digital Design with Real World Data
THE PREMISE
Vast streams of data offer a rich resource for designers. By incorporating external information into our design processes, the autonomy of the design is challenged. User data, energy calculations, embedded sensing, material and structural simulation, human behaviour and perception, particle flows and force fields allow design to be situated and responsive. From the simulation of megacities to the solid modelling of material systems, design has the potential to be informed by the real. Design sits not separate from its environment but inhabits an ecological system: open, dynamic and interdependent, diverse, partially self-organising, adaptive, and fragile. Across scale and within time we now have the chance to instil architecture with an immanent intelligence, creating new relationships between the user, the built and its ecosphere.
THE OPPORTUNITY
Systems theorists suggest that data is only a raw material. It can be differentiated from information, knowledge and wisdom. Understanding is multi-levelled: understanding of relations, understanding of patterns, understanding of principles. As digital designers, our challenge is to harness the power of computation to assist us in informing our design process. Computers help us collect, manage and analyse the environment and inform us about an abundance of data. Our challenge is to use these inputs in a meaningful way to help us make better-informed design decisions.
THE AIM
SG 2011 explores how the incorporation of real world data challenges existing design thinking. The SG 2011 workshop aim is to create physical prototypes of design systems to be exhibited in the SG 2011 exhibition.
The SmartGeometry Group is a not-for-profit educational organization dedicated to the use of computational tools in architecture and engineering. SG brings professionals, academics, and industry together to explore the next generation of digital design. SG Workshops are non-platform specific, believing it is the methodology, not the tool, that matters.
…
Added by Shane Burger at 11:23am on January 6, 2011
inventive collaborative environment.
The workshop is part of a series of PARAMETRICA events, promoting computational design thinking and exploring the new possibilities of parametric design.
The workshop is aimed at: students, postgraduates, architects, interior, product and urban designers, engineers, anybody interested.
> Workshop CONCEPT (16th – 28th July 2013):
The advancement of digital technology is helping architects to understand and respond to the complexity of the environment surrounding us.
In this 14-day workshop, the various energies which exist in a given environment will be identified, analysed and then digitally simulated.
Experimental structures capable of reconfiguring themselves in response to the mapped forces will be generated and fabricated.
> Conference CONCEPT (29th July 2013):
During this day we will present the final workshop projects, and our special guest, Patrik Schumacher, will explore the subject of computational design thinking and parametric architecture, with the accent on “Parametric Semiology – Architecture as the interface of communication”.
> OBJECTIVES:
The workshop objectives are two-fold. In the first phase, the workshop focuses on the identification and analysis of resources inherent to the environmental context, thus developing a better understanding of their nature as well as optimised methods of use or response.
In the next phase, the objective is to generate structures which through either means of fabrication or material properties can respond to, or utilize the environmental energy sources.
> The project TEAM:
Key lecturer: PATRIK SCHUMACHER (DE)
Profile: Director, Zaha Hadid Architects, London
Dr Phil, Dip Ing, ARB, RIBA
Founder, AA Design Research Lab, London
Lecturer: Ina Leonte (RO)
Profile: PhDc, teaching assistant (UAIM, Bucharest, Romania)
Co-founder, Zest
Workshop main tutors:
HOOMAN TALEBI [IR]
Profile: MArch (AADRL, London), MSc (AUT, Tehran)
Lead Designer, Zaha Hadid – London
FARSHAD MEHDI’ZADEH [IR]
Profile: MArch (IaaC-UPC, Barcelona, Spain)
Co-founder, Tehran Architecture Studio (Iran)
Workshop assistant:
MOHSEN MARIZAD [IR]
Profile: MAA 2010 - Architect (IaaC-UPC, Barcelona, Spain)
Parametric design expert
Workshop coordinator: Diana Nitreanu (RO)
Profile: MAA 2010 - Architect/Urban Designer (IaaC-UPC, Barcelona, Spain)
Official Rhino Trainer
Co-founder, Laboratorul de Arhitectura; Co-founder & Tutor, Parametrica
> EQUIPMENT Workshop: Each participant must provide their own laptop with the following software installed:
A. Rhinoceros 3D 5.0
B. Grasshopper 3D (latest version)
C. Arduino
Machines to work on:
1. Small laser cutter for prototyping
2. Big laser cutter for final production
Materials (provided by Parametrica) - To be specified according to the subject of study for each group;
FOR MORE INFO & REGISTRATION:
www.dynamicfields.ro
www.parametrica.ro
office@parametrica.ro
…
ised in the past to accurately and fairly describe exactly the observations David has made:
'Identikit Architecture': One place to start is close to home; scan the GH image library: twisty tower count? Voronoi-_add geometry of choice_ count? Tons of cringeworthy, samey, pointless architectural diarrhoea;
and then there is:
'Architectural Autism': Here, only the 'designer' himself understands his project and imagery (...but actually, he probably doesn't). It's self-serving, lacks rationale, and inhibits any critical appraisal, as every critic is reduced to the level inherent in the work itself: only two utterly pointless conclusions can ever be made, either 1) "I like it", or 2) "I don't like it". An example can be seen in the nonsensical imagery critiqued by David on his blog post. (Also found in the vast majority of schools of architecture.)
The reaction so far since the blog's publication, plus the defence of "intuitive" design and a "pure artistic" approach to architectural design, proves just how deep the rot is in the system of education: the institutions and the professional bodies are all in denial, so, unsurprisingly, so are the students. Most tutors and the ubiquitous 'guest critics' don't even have any qualifications to teach. Worse still, almost all of them haven't even built anything. If architecture really is a profession, the education must be vocational, and of worthy academic merit.
A knock-on impact of this issue arises in practice where the disconnect between what is taught at university and what the reality in office is like becomes apparent. So many students get lulled into a false sense of security at schools of architecture. When it comes to being serious in a professional environment they either do not have what it takes to do the job, or are perplexed when faced with the fundamentals of building design. Students should demand that what is taught at schools of architecture is relevant to their chosen field of work – it’s a massive con and a serious disservice to anyone undertaking the course.
Then there is the huge gulf in quality between one architect and the next; whatever happened to consistency and professional rigour? It would appear that the professional bodies, whose role it is to create prestige that in turn improves the quality of the profession, have become complacent. Even some prominent names in the industry - individuals who have strategic roles in government advisory groups for the built environment - do not have any qualifications in architecture. It makes a mockery of the whole profession as well as the long commitment to its study.
Ultimately, architecture is unique in that it is the only profession suffering from a profound intellectual stagnation. All of the other technical professions are constantly looking for ways to progress, ways to utilise the latest technology, ways to re-invent the wheel and look outside the box (engineering etc). Yet the cutting edge of architecture can only conjure up whimsical manifestos which the vast majority of poorly educated students, indoctrinated at institutions, buy into: 'Parametricism', for example. So, with all the advancements in technology, construction and the digital industries, the best architects can come up with is a pseudo-intellectual movement which has no relevance to any of the challenges we face today, and one which is obsessed purely with style. It's all so primitive, backwards and embarrassing. Hopefully the abysmal pay scales after 5-7 years' worth of study should now become self-explanatory and self-justifying.
This can only change if architectural education is totally overhauled and refocused back on architecture for architecture's sake, rather than this lazy, narrow-minded, cerebrally stunted obsession with art and with creating pretty or incomprehensible imagery that can only stand up when propped by polemical guff.
Irrelevant, confused and complacent. Sadly, the whole industry is now yesterday’s game.…
t. So here we go!
1. Honeybee is brown and not yellow [stupid!]...
As you probably remember, the Honeybee logo was initially yellow because of my ignorance about honeybees. With the help of our honeybee expert, Michalina, the colour is now corrected. I promised her I would update everyone about this. Below are photos of her working on the honeybee logo and the results of her study.
If you think I'm exaggerating by calling her a honeybee expert you better watch this video:
Thank you Michalina for the great work! :). I corrected the colors. No yellow anymore. The only yellow arrows represent sun rays and not the honeybee!
2. Yellow or brown, W[here]TH Honeybee is?
I know. It has been a long time since I posted the initial video, and it is no fun at all to wait this long. Here is the good news: if you are following the Facebook page, you probably know that the Daylighting components are almost ready.
A couple of friends from the Grasshopper and RADIANCE communities have been helping me with testing/debugging the components. I still think/hope to release the daylighting components at some point in January, before Ladybug turns one year old.
There have been multiple changes. I finally feel that the current version of Honeybee is simple enough for non-expert users to start running initial studies and flexible enough for advanced users to run advanced studies. I will post a video soon and walk you through different components.
I think I still need more time to modify the energy simulation components, so they are not going to be part of the next release. Unfortunately, there are so many ways to set up and run a wrong energy simulation, and I really don't want to add one more GIGO app to the world of simulation. We already have enough of those. Moreover, I'm still not quite happy with the workflow. Please bear with me for a few more months and then we can all celebrate!
I recently tested the idea of connecting Grasshopper to OpenStudio by using OpenStudio API successfully. If nothing else, I really want to release the EnergyPlus components so I can concentrate on Grasshopper > OpenStudio development which I personally think is the best approach.
3. What about wind analysis?
I have been asked multiple times whether Ladybug will have a component for wind studies. The short answer is YES! I have been working with the EFRI-PULSE project over the last year to develop a free and open-source web-based CFD simulation platform for outdoor analysis.
We have made very good progress so far, and our rockstar Stefan recently presented the results of the work at the American Physical Society's 66th annual DFD meeting; the results look pretty convincing in comparison to measured data. Here is an image from the presentation. All the credit goes to Stefan Gracik and the EFRI-PULSE project.
The project will go live at some point next year, and after that I will release Butterfly, which will let you prepare the model for the CFD simulation and send it to the EFRI-PULSE project. I haven't tried to run the simulations locally yet, but I'm considering that as a further development. Here is how the component and the logo look right now.
4. Teaching resources
It has been almost 11 months since the first public release of Ladybug. I know that I didn't do a good job of providing enough tutorials/teaching materials, and I know that I won't be able to put something comprehensive together soon.
Fortunately, Ladybug has been flying in multiple schools during the last year. Several design, engineering and consulting firms are using it, and it has been taught in several workshops. As I checked with many of you, almost everyone told me that they would be happy to share their teaching materials; hence I started the teaching resources page. Please share your materials on the page. They can be in any format and any language. Thanks in advance!
I hope you enjoyed/are enjoying/will enjoy the longest night of the year. Happy Yalda!
Cheers,
-Mostapha
…
bi-directional link, the link is unidirectional (downflow only), because of the use of proxies.
Matrix transforms and persistent constraints: I don't think this is true. The parts can have mates to other parts that preserve geometric relationships like 'coincident', 'aligned' etc. These are essentially bi-directional. GH's algorithmic approach does not do relationships in the same flexible way. In GH, the 'relationship' has to be part of the generation method, which is dependent on the creation sequence, i.e. draw line 2 perpendicularly from the endpoint of line 1. If you are thinking about parts or assemblies sharing or referencing parameters as part of the regen process, this is also possible. iLogic does this, and adds scripting. So does Catia. Inventor/iLogic can also access Excel and have all the parameter processing done centrally, if required.
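That sequence-dependence can be sketched in a few lines of plain Python (a hedged illustration only, not GH, Inventor or Catia code; the function name and the flat 2D representation are invented for the example). Line 2 is derived from line 1, so the relationship only ever flows downstream:

```python
import math

def line2_perpendicular(line1_start, line1_end, length=1.0):
    """Generate line 2 perpendicular to line 1, starting at line 1's endpoint.

    This mimics GH's procedural style: line 2 is *derived* from line 1,
    so editing line 2 can never update line 1 (downstream-only flow)."""
    x0, y0 = line1_start
    x1, y1 = line1_end
    dx, dy = x1 - x0, y1 - y0
    d = math.hypot(dx, dy)
    # Unit perpendicular: rotate line 1's direction 90 degrees counter-clockwise.
    px, py = -dy / d, dx / d
    return (x1, y1), (x1 + length * px, y1 + length * py)

# Regeneration: change the upstream line and the dependent line follows.
start, end = line2_perpendicular((0.0, 0.0), (3.0, 0.0), length=2.0)
print(start, end)  # line 2 runs from (3.0, 0.0) straight up to (3.0, 2.0)
```

Editing line 2's output has no way to push a change back into line 1; that one-way flow is exactly the contrast with a bi-directional 'perpendicular' mate in an MCAD constraint solver.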
Consequently, scripting the placement of components is irrelevant in GH, unless you decide that each component needs to be contained in its own separate file.
I wouldn't be too hasty here. Yes, you are right about compartmentalisation. I think this needs to happen with GH, in order to deal with scalability/everyday interoperability requirements. Confining projects to one script is not sustainable. MCAD apps have been doing this for ages with 'Relational Modeling'.
The Adaptive Components placement example illustrates that it is beneficial to be able to script some 'hints' that can be used on placement of the component. Say, if your component requires points as inputs, then it should be able to find the nearest points to the cursor as it moves around. I think Aish's D#/DesignScript demo'd this kind of behaviour a few years ago. Similarly, Modo's Toolpipe reminds me how a lot of UI-based transactions can be captured as scripts (macro recorder etc). Allowing this input to be mixed in and/or extended by GH will, I think, yield a lot of 'modeling efficiency' around the edges. This is (mis)using GH as a user-programmable 'jig' for placing/manipulating 'dumb' elements in Rhino. It may even give the 'dumb' elements a bit more 'intelligence' by leaving behind embedded attributes, like links to particular construction planes etc.
Even if we confine ourselves to scripting, GH is a visual or graphic programming interface. A lot of 'insert and connect' tasks can be done more easily using graphic methods. If we need to select certain vertices on a mesh as inputs for, say, a facade panel, it's going to be quicker to do this 'graphically' (like the AC example) than ferreting out the relevant indices in the data tree et al. The 'facade panel' script would then have some coding to filter/prompt the user as to what inputs were acceptable, and so on.
This also brings up the point that generating components and assemblies in MCAD is not as straightforward. In iParts and iAssemblies, each configuration needs to be generated as a "child" (the individual file needs to be created for each child) before those children can be used elsewhere.
Not sure what you mean here. If the iParts are built up using sketches/profiles or other more rudimentary features (like Revit's profile/face etc family templates), then reuse should be fairly straightforward. I suppose you could make it like GH scripting, if you cut and paste or include script snippets that generate the desired Inventor features.
One of the reasons why the distributed file approach makes perfect sense in MCAD, is that in industry you deal with a finite set of objects. Generative tools are usually not a requirement. Most mechanical engineers, product engineers and machinists would never have any use for that.
I don't think this is true. Look at the automotive body design apps, which are mostly Catia based. All of the body parts are pretty much 'generative' and generated from splines, in a procedural way, using very similar approaches to GH. Or sheet metal design. It's not always about configuration of off-the-shelf items like bolts. And, the constraints manager is available to arbitrate which bit of script fires first, and your mundane workaday associative dimensions etc can update without getting run over by the DAG(s) :-)
…
about it.
2. Nick's comment below got me thinking about unit testing for clusters. Being able to work with data flowing in from outside the cluster, or having multiple states to test against, could be really cool. Creating definitions that were valid across a general cross-section of possible input parameters was a significant issue for us. It was all too easy to write the definition as if we were drawing (often we were working from sketches) and then have it fail when the input parameters changed slightly.
4. I wasn't thinking about threading the solver itself. I was thinking along the lines of some IDEs I've seen which compile your project as you type it. I know that threading within components and at the RhinoCommon level is a freaking hard problem that has been discussed at length already. (Although when, 5-10 years from now, it's finished, it will be very cool.)
Let's say the solver is threaded and the canvas remains responsive. As soon as you make a change to the GH file, the solver needs to be terminated as it is now computing stale data.
What if the solver was a little more atomic, and more like a server? A GH file is just a list of jobs to do, with the order of the jobs and the info to do them rigidly defined - right? The UI could pass the solver stuff to do and store the results back in the components on a component-by-component basis (I have no idea what the most efficient way to do this is in reality - I'm just talking conceptually). This might even allow running multiple solvers, so that at least the parallelism that might be built into a given GH file can be exploited (not within components, but rather solving non-interdependent branches of components simultaneously). This type of parallelism would more than make up for the performance hit you alluded to for separating the UI and the solver (at least for most of the definitions I write).
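Conceptually, the "solver as a server of jobs" idea can be sketched like this (a purely illustrative Python sketch with invented names, not how GH's solver is actually implemented). Components whose upstream results are ready get submitted to a pool, so non-interdependent branches solve simultaneously:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def solve_parallel(components, deps):
    """Solve a DAG of 'components' concurrently.

    components: dict of name -> zero-argument callable producing that node's data.
    deps: dict of name -> set of upstream names that must finish first.
    Returns a dict of name -> result."""
    results = {}
    pending = dict(deps)
    with ThreadPoolExecutor() as pool:
        running = {}
        while pending or running:
            # Submit every component whose upstream results are ready.
            for name in [n for n, d in pending.items() if d <= set(results)]:
                running[pool.submit(components[name])] = name
                del pending[name]
            # Block until at least one running component finishes.
            done, _ = wait(running, return_when=FIRST_COMPLETED)
            for fut in done:
                results[running.pop(fut)] = fut.result()
    return results

# Branches "a" -> "b" and "c" are non-interdependent, so "c" can solve
# while the "a" -> "b" chain is still being computed.
comps = {"a": lambda: 1, "b": lambda: 2, "c": lambda: 3}
print(solve_parallel(comps, {"a": set(), "b": {"a"}, "c": set()}))
```

The point of the sketch is only the scheduling shape: the UI would hand such a job list to the solver process and harvest results component by component as they complete.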
I was imagining a couple of scenarios:
a) Writing a parallel module: solver starts chewing away - you see it working - you know it's done 1/3 of the work - if you have something to do at that point you could connect up to some of the already calculated parameters and write something in parallel to the main trunk which is still being solved.
b) Skipping modifications: you need to make a series of interventions at different intervals along a section of code. Sure, you could freeze out a section of downstream code and make modifications so you can observe the effects more quickly; unfreeze a bit more and repeat etc. until you're done, and then unfreeze that big chunk at the end to make sure you haven't blown anything up. Just letting it resolve as far as it can while you sit there waiting for inspiration seems a lot more intuitive to me, though.
On a file which takes 15 minutes to solve that's no big deal, but you certainly don't want to be adding a 20 millisecond delay to a solution which only takes 30 milliseconds.
You also wouldn't notice it at that point :-) Perhaps for things where it would really make a difference, like Galapagos interactivity, it could be disabled - or could the existing "speed" setting just absorb this need? Since the vast majority of the time GH spends solving is on files under active development, not on finished code, I think qualitative performance is probably more important than quantitative performance (again, with cases like Galapagos needing to be accommodated). In our case the code only had to "work" once, since its output went to a CNC machine to make a one-off project, and it didn't really matter if it took 15 seconds or 15 hours for the final run.
Lastly, I have no way to predict how long a component is going to take. I can probably work out how far along in steps a component is, but not how far along in time.
That's OK. From a user's point of view, just seeing a percentage tick along once in a while would be nice reassurance that the thing is just slow and has not, in fact, crashed. Maybe there could be two modes of display: the simple percentage version for unpredictable code and, for those of us able to calculate the time taken by our algorithm from the number of input parameters, a countdown in seconds or minutes or whatever.
I think a good place to start with these sort of problems is to keep on improving clusters, ... etc etc
I totally agree.
…
Added by Dieter Toews at 7:53pm on September 4, 2013
h, and using the BScale and BDistance are creating havoc somehow too. I've simplified first, and used the Kangaroo Frames component along with setting internal iterations, to make MeshMachine act like a normal component, along with releasing the FixC and FixV. The FixV didn't make any sense anyway. I've also set Pull to 0 to speed it up during testing, since much less calculation is involved to just let the meshes collapse, prevented from disappearing altogether by using a mere 15 iterations.
Also, your breps are open, which allows much more chaos and then collapse, though they did manage to close themselves at times too. Here are closed breps with a full 45 iterations:
So now that it's working, let's re-Fix the curves, and the problem arises that there is an extra seam line getting fixed too, running along the cylinder, stopping the mesh from pulling tight under tension wherever a vertex happens to be near that line:
So let's grab only the naked edge curves instead:
And what happens if we lose the end caps, now that we don't have an extra line skewing the result?:
There are no real curvature differences, since it's not a curvy brep, so Adapt at the full setting of 1 has little to do. Now, what do BScale and BDist do? Nothing! Why? Your scale is out of whack: 99 mm high cylinders but a falloff maximum of only about 5, so let's make the falloff 25 instead. But I must restore the end caps, or the meshes collapse away for some reason, and Rhino freezes for a minute or so the first time I try it:
It's a start.
If I intersect the cylinders, nothing changes, since they are being treated as separate runs. MeshMachine outputs a sequence of two outputs though, due to Frames being set to the bare minimum of 2 needed to get it to work, so I filter out the original run, which is just the unmodified initial mesh it creates.
The lesson so far is that closed meshes are much less prone to collapse and glitches leading to screw ups.
A Boolean union of the cylinders is where it gets more fun, shown here with and without the fixed curves, which seem to define boundaries too where really there are just polysurface edges:
…
xternal loads acting on the structure. But if gravity is zero and I change the material or the cross section (while the external loads are unchanged), then the values of the section forces change as well. The difference is not much, but it is there. How is this explained?
I made two files to address the gravity problem.
The first test file is based on a simply supported beam. Here, whether gravity is on or not, you don't see any change in the values of the section forces of the load case (LC1) with the external load (when you look at the lists output by the section force component), if the material or the cross section is changed.
And this is how it should be, because these lists list the values for each load case, and the values of the load case with the external load don't take the gravity (hence material and cross section) into account.
The second file is a smaller definition of the main file I have been working on.
It is with beams and springs, corresponding to a deployable scissor structure.
To use this file, you have to use the corresponding rhino file. You have to set the curves (in the grasshopper file, indicated with the arrows). In the first set of curves (GH) you set the curves of layer 1 (RH), the red ones, and please do so by selecting them clockwise. In the second set of curves (GH) you set the curves of layer 2 (RH), the purple ones, again selecting them clockwise.
Normally the Karamba output should appear now.
In this file the values of the section forces of the loadcase with the external loads (LC1) (output list of the section force component) change when you change the material (steel or aluminium) and the cross section.
And this is not correct, because these values should be independent of gravity, whether gravity is on or not, right?
Is it because I am using springs (and their related cross section and material)?
Thanks for the help!
Best
Lara…
ay to use a sine wave length along a curve with the grasshopper script below. [Update: done, thank you TOM].
I am trying to figure out a way to reverse the sine wavelength.
Current problems:
1.) Reversing the sine wavelength along the curve, and providing a graph input to allow for different wave profiles.
I.e., I want to have a sine wavelength on the base, and 3200mm above a wavelength of a different sort. Current File: (apply curve to any curve)
Brick%20problem.gh
2.) Contouring when lofted to allow for HFrame placement
I need to be able to apply the script to curves, and to adjust a series of points and the multipliers that control the width the curve extrudes along the y-axis, away from the curve at a perpendicular angle.
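The reversal in problem 1 is small enough to sketch in plain Python (a hedged illustration against evenly spaced samples on a straight baseline, not the actual definition; in GH the same idea maps onto an Evaluate Curve plus expression setup). Reversing the wave is just a half-cycle phase shift, which is equivalent to negating the amplitude:

```python
import math

def sine_offsets(n, wavelength, amplitude, total_length, reverse=False):
    """Perpendicular offsets for n points (n >= 2) sampled evenly along a baseline.

    reverse=True flips the wave with a half-cycle phase shift, which is
    the same as negating the amplitude."""
    phase = math.pi if reverse else 0.0
    offsets = []
    for i in range(n):
        t = i / (n - 1) * total_length  # distance travelled along the curve
        offsets.append(amplitude * math.sin(2 * math.pi * t / wavelength + phase))
    return offsets

forward = sine_offsets(9, wavelength=100.0, amplitude=10.0, total_length=100.0)
reverse = sine_offsets(9, wavelength=100.0, amplitude=10.0, total_length=100.0, reverse=True)
# Every reversed offset mirrors the forward one across the baseline.
print(all(abs(f + r) < 1e-9 for f, r in zip(forward, reverse)))  # True
```

Moving each sampled curve point along its perpendicular frame by the corresponding offset then gives the mirrored wall line.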
Current: (took a long time but this is where I am now)
OLD:
Fixing: Achieved:
Also, is there a way to do this without series and by linking surfaces already in Rhino3d?
Sine wavelength below, and intention to use the brick wall on in picture after.
I've been following Nick Senske's work on YouTube for those wondering.
I hope to achieve something similar to the pictures attached:
http://www.zja.nl/image/2014/10/15/2112_500_325.jpg(mediaclass-default.f996b08cf5abfa43b3b03133a89ec231272756a9).jpg
http://www.zja.nl/en/page/2311/parametric-design-for-brick-surfaces
Edits: removal of unneeded content and grammar fixes. Updated pictures and progress. Thanks to Tom for his assistance. …
Added by Iain McQuin at 9:52pm on October 18, 2016
ou will see a list of potential matches, sorted from most relevant to least relevant:
Some components and objects support initialisation codes, which means you can assign certain values directly from the popup box. You can do this by adding an equals symbol after the name and then the value you wish to assign. For example, the [Curve Offset] component allows you to specify the offset distance via the popup box by typing =5 after the offset command:
However the popup box also supports a set of special formats that allow you to create specific objects without even typing their names. As of 0.9.0077 (which hasn't been released yet at the time of writing) you can use the following shortcuts to create special objects. In the notation below, optional parts of a format will be surrounded by square brackets and hashes (#) will be used to indicate numeric values. So #,#[,#] means:
at least two numeric values separated by a comma, with an optional second comma and third number.
A complete list of special formats (not all of these are supported yet in 0.9.0076):
"∙∙∙ If the format starts with a double quote, then the entire contents (minus any other double quotes) will be placed into a Text Panel.
//∙∙∙ If the format starts with two forward slashes, then the entire contents will be placed in a Text Panel.
~∙∙∙ If the format starts with a tilde, then the entire contents will be placed in a Scribble object.
#,#[,#] If the format contains two or three numerics separated by commas, a Point parameter will be created with the specified coordinates.
+[#] If the format starts with a plus symbol followed by a numeric, then an Addition component will be created.
-[#] If the format starts with a minus symbol followed by a numeric, then a Subtraction component will be created.
*[#] If the format starts with an asterisk symbol followed by a numeric, then a Multiplication component will be created.
/[#] If the format starts with a forward slash symbol followed by a numeric, then a Division component will be created.
\[#] If the format starts with a backward slash symbol followed by a numeric, then an Integer Division component will be created.
%[#] If the format starts with a percent symbol followed by a numeric, then a Modulus component will be created.
&[∙∙∙] If the format starts with an ampersand symbol, then a Concatenation component will be created.
=[∙∙∙] If the format starts with an equals symbol, then an Equality component will be created.
<[*] If the format starts with a smaller than symbol, then a Smaller Than component will be created.
>[*] If the format starts with a larger than symbol, then a Larger Than component will be created.
[# *] Pi If the format contains the text "Pi" with an optional multiplication factor, then a Pi component will be created.
# If the format can be evaluated as a single numeric value, then a Slider will be created with the specified initial value and sensible™ lower and upper limits.
#<# If the format contains two numerics separated by a smaller than symbol, a Slider with the specified limits will be created. The initial slider value will be equal to the lower limit.
#<#<# If the format contains three numerics separated by a smaller than symbol, a Slider with the specified limits will be created. The initial slider value will be the value in the middle.
#..# If the format contains two numerics separated by two or more consecutive dots, a Slider with the specified limits will be created. The initial slider value will be equal to the lower limit.
#..#..# If the format contains three numerics separated by two or more consecutive dots, a Slider with the specified limits will be created. The initial slider value will be the value in the middle.
#/#/[#] If the format contains two or three numerics separated by forward slashes, a Calendar object will be created. The order of value is day/month/year. If year is omitted then the current year is used. Note that a second slash is required because #/# is interpreted as a number and thus results in a Slider.
#:#[:#] [am/pm] If the format contains at least two numerics separated by a colon, a Clock object is created. Seconds are optional, as are am/pm suffixes.
f([...[,...[,...]]]) [= *]If the format starts with a lower case f followed by an opening bracket, an Expression component is created. A list of comma separated arguments can be provided as inputs, and anything after the optional equals symbol becomes the expression string.
Note that decimal places will be harvested from formats that indicate sliders. I.e. the format 0..2..10 is not the same as 0..2..10.00, as the former will create an integer slider from zero to ten whereas the latter will create a floating point slider with two decimal places from zero to ten.…
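The slider formats and the decimal-place harvesting rule can be illustrated with a small parser sketch. This is a hypothetical Python reimplementation for illustration only, not Grasshopper's actual parsing code; it covers just the #, #<#<# and #..#..# slider cases, and the "sensible" limits for a lone number are stand-ins:

```python
import re

NUM = r"-?\d+(?:\.\d+)?"

def parse_slider(fmt):
    """Parse a slider shortcut into (lower, value, upper, decimals), or None.

    Decimal places are 'harvested' from how the numbers were written:
    0..2..10 yields an integer slider, 0..2..10.00 a two-decimal float slider."""
    fmt = fmt.strip()
    patterns = [
        rf"^({NUM})(?:<|\.{{2,}})({NUM})(?:<|\.{{2,}})({NUM})$",  # lo<val<hi or lo..val..hi
        rf"^({NUM})(?:<|\.{{2,}})({NUM})$",                       # lo<hi or lo..hi
        rf"^({NUM})$",                                            # single value
    ]
    for pat in patterns:
        m = re.match(pat, fmt)
        if m:
            nums = m.groups()
            break
    else:
        return None
    decimals = max(len(n.split(".")[1]) if "." in n else 0 for n in nums)
    if len(nums) == 3:
        lo, val, hi = map(float, nums)
    elif len(nums) == 2:
        lo, hi = map(float, nums)
        val = lo  # initial value equals the lower limit
    else:
        val = float(nums[0])
        lo, hi = min(0.0, val), max(10.0, val)  # stand-in for "sensible" limits
    return lo, val, hi, decimals

print(parse_slider("0..2..10"))     # (0.0, 2.0, 10.0, 0): integer slider
print(parse_slider("0..2..10.00"))  # (0.0, 2.0, 10.0, 2): two decimal places
```

The last two lines reproduce the rule stated above: the same numeric range produces an integer slider or a two-decimal floating point slider depending purely on how the bounds were typed.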
Added by David Rutten at 3:24pm on February 18, 2013