Download the workshop PDF from this link: http://goo.gl/bcvRNH
Download the event poster from this link: http://goo.gl/Q0KWCM

Brief:
Cairo is filled with barriers that control people's movements and suppress them, and that cut off green and public spaces to the extent that most people have come to take these spaces for granted. Public spaces have for a while been at the periphery of our daily life. In this workshop we will explore how we can manipulate and alter people's perception and direct their attention to how integral these spaces are to city life. This exploration will be backed by intensive technical tutorials introducing computational design and fabrication techniques and tools, mainly Rhino, Grasshopper, Geco and Ecotect. This will not be the typical technical workshop: you will also be guided step by step through how these tools are used across the different design stages of a real-world scenario. Design prototypes will be produced through 3D printing, and the main workshop output will be a fabricated one-to-one functional model of one of the designs, built using our new in-house CNC machine.

Tutors (check the PDF for bios):
Olga Kovrikova, MArch DIA
Alexandr Kalachev, MArch DIA
Karim Soliman, MArch DIA
Islam Ibrahim, MArch DIA
Sherif Tarabishy, B.Sc. AAST

Application:
Application deadline: 1 September 2013
** Students (undergrad/Master), teachers and PhD candidates must provide proof of status (a dated university ID or a certificate of enrollment) to apply for the student package.

Packages (choose one of the following in the application form):
1. Standard registration: 4250 EGP; for students 3500 EGP
2. Early bird registration (discounted fee): for professionals 3750 EGP; for students 3000 EGP
** Early bird offer ends on 14 August 2013
3. Group registration (5 or more), discounted fee: for students 20% off

- Fill out the application form here: http://goo.gl/0QxAga
- Submit your CV and a short portfolio (max. 10 MB) to info@morph-d.com, email subject: "Morphing Norms Application" (eligibility for the early bird discount is decided by the date of your email submission).
- We will confirm receipt of all applicants' emails. Successful applicants will be contacted 5 days after each deadline (early bird/final) and must confirm participation within 3 days; if they fail to do so, their places will be given to others on the waiting list.
- A maximum of 30 applicants will be selected.
…
m
- Area of blue line: min. 80% of the rectangle a x b
- Max. height h of the top point: h_max = a
- Min. volume between the rectangle a x b and the membrane: 500 m³
Can anyone help me?…
mplex the models are. If we are running multi-room E+ studies, that will take far longer to calculate.
Rhino/Grasshopper = <1%
Generating Radiance .ill files = 88%
Processing .ill files into DA, etc. = ~2%
E+ = 10%
Parallelizing Grasshopper:
My first instinct is to avoid this problem by running GH on one computer only. Creating the batch files is very fast; the trick will be sending the Radiance and E+ batch files to multiple computers. Perhaps a "round-robin" approach could send each iteration to another node on the network until all iterations are assigned. I have no idea how to do that, but I hope it is something that can be executed within Grasshopper, perhaps via a custom code module. I think GH can set a directory for Radiance and E+ to save all final files to. We can point this at a local server location so all runs output to the same place. It will likely run slower than it would on the C: drive, but those losses are acceptable if we can get parallelization to work.
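For what it's worth, the round-robin dispatch could be as simple as the sketch below, e.g. in a GhPython or custom code component. The node names, shared queue folders and .bat extension are all assumptions for illustration; how each node actually executes its queue (a scheduled task, a watcher script, etc.) is the open question.

```python
# Illustrative sketch only: copy each generated batch file to the next
# node's watched queue folder, cycling through the nodes in order.
import os
import shutil

NODES = [r"\\node01\simqueue", r"\\node02\simqueue", r"\\node03\simqueue"]

def dispatch_round_robin(batch_dir):
    batches = sorted(f for f in os.listdir(batch_dir) if f.endswith(".bat"))
    for i, name in enumerate(batches):
        target = NODES[i % len(NODES)]  # round-robin over available nodes
        shutil.copy(os.path.join(batch_dir, name), target)
        print("%s -> %s" % (name, target))

dispatch_round_robin(r"C:\sim\batches")
```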
I’m concerned about post-processing of the Radiance/E+ runs. For starters, Honeybee calculates DA after it runs the .ill files. This doesn’t take very long, but it is a separate process that is not included in the original Radiance batch file. Any other data manipulation we intend to automatically run in GH will be left out of the batch file as well. Consolidating the results into a format that Design Explorer or Pollination can read also takes a bit of post-processing. So, it seems to me that we may want to split up the GH automation as follows:
Initiate
- Parametrically generate geometry
- Assign input values, materials, etc.
- Generate Radiance/E+ batch files for all iterations

Calculate
- Run separate Radiance/E+ calculations in parallel via network clusters. Each run will be a unique iteration.
- Save all temp files to a single location on the server

Post-processing
- Run a GH script from a single computer. Translate the .ill or .idf files into custom metrics or graphics (DA, ASE, % shade down, net solar gain, etc.); a sketch of the DA step follows below.
- Collect the final data in a single location (an Excel document) to be read by Design Explorer or Pollination.
The above workflow avoids having to parallelize GH. The consequence is that we can’t parallelize any post-processing routines. This may be easier to implement in the short term, but long term we should try to parallelize everything.
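To make that post-processing step concrete, here is a minimal sketch of the DA calculation, assuming the annual .ill results have already been parsed into one list of hourly lux values per sensor point. The 300 lux threshold is a common default, not anything fixed by Honeybee.

```python
# Minimal sketch: Daylight Autonomy from parsed annual illuminance data.
# `illuminance` holds one list of lux values (occupied hours) per sensor.
def daylight_autonomy(illuminance, threshold=300.0):
    """Percent of occupied hours each sensor meets the lux threshold."""
    results = []
    for sensor_hours in illuminance:
        hit = sum(1 for lux in sensor_hours if lux >= threshold)
        results.append(100.0 * hit / len(sensor_hours))
    return results

# Toy example: two sensors over four occupied hours
print(daylight_autonomy([[500.0, 250.0, 320.0, 100.0],
                         [600.0, 450.0, 400.0, 350.0]]))
```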
Parallelizing EnergyPlus/Radiance:
I agree that the best way to enable large numbers of iterations is to set up multiple unique runs of radiance and E+ on separate computers. I don’t see the incentive to split individual runs between multiple processors because the modular nature of the iterative parametric models does this for us. Multiple unique runs will simplify the post-processing as well.
It seems that the advantages of optimizing matrix based calculations (3-5 phase methods) are most beneficial when iterations are run in series. Is it possible for multiple iterations running on different CPUs to reference the same matrices stored in a common location? Will that enable parallel computation to also benefit from reusing pre-calculated information?
Clustering computers and GPU based calculations:
Clustering unused computers seems like a natural next step for us. Our IT guru told me that we need some kind of software to make this happen, but he didn't know what that would be. Do you know what Penn State uses? You mentioned it is a text-only Linux-based system. Can you please elaborate so I can explain it to our IT department?
Accelerad is a very exciting development, especially for rpict and annual glare analysis. I'm concerned that the high-quality GPUs required might limit our ability to implement it on a large scale within our office. Does it still work well on standard GPUs? The computer cluster method can tap into resources we already have, which is a big advantage. Our current workflow uses image-based calcs sparingly, because grid-based simulations gather the critical information much faster. The major exception is glare. Accelerad would make luminance-based glare metrics, especially annual glare metrics, more feasible within fast-paced projects. All of that is a good thing.
So, both clusters and GPU-based calcs are great steps forward. Combining both methods would be amazing, especially if it is further optimized by the computational methods you are working on.
Moving forward, I think I need to explore if/how GH can send iterations across a cluster network of some kind and see what it will take to implement Accelerad. I assume some custom scripting will be necessary.…
eventually found out about genetic algorithms, on which I found extensive research, projects, and more! I looked into it and ended up on a few papers which I believe are the jumpstart for my master's thesis.
"Galapagos; on the logic and limitations of generic solvers" by David RuttenArticle in Architectural Design 83(2) March 2013
"Black-box optimisation methods for architectural design" by Thomas Wortmann and Giacomo NanniciniConference Paper: CAADRIA 2016, At Melbourne, AU, Volume: 177-186
So I started looking into alternatives to genetic algorithms in architectural design. So far, I've ended up on:
Thomas Wortmann's work with the surrogate-based (or model-based) optimization approach! You can check out the tool he developed for GH (Opossum): http://www.food4rhino.com/app/opossum-optimization-solver-surrogate-models
Judyta Cichocka's work, especially with the swarm approach. You can check out the tool she developed for GH (Silvereye): http://www.food4rhino.com/app/silvereye-pso-based-solver
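As an aside for readers who haven't met the swarm approach before: a toy particle swarm optimisation loop looks roughly like the sketch below. This is generic textbook PSO with illustrative coefficients and a toy objective, not Silvereye's actual implementation.

```python
# Toy PSO: each particle is pulled toward its own best and the swarm's best.
import random

def pso(objective, dim=2, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum at the origin
print(pso(lambda p: sum(x * x for x in p)))
```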
And that's it!!! I've been researching through article references (mainly on ResearchGate), but I'm now stuck in a loop of references I have already visited! That probably means the literature on the subject is not (yet) extensive, but I may well be missing something. The keywords make it difficult to search: "optimisation", "algorithms", "architecture" mostly send me to computational engineering and deep mathematics papers I unfortunately do not have the background knowledge to comprehend!

So there it is! If you have any clue of where (or how!) I should be looking, please tell me :) I know Mr Rutten is pretty active on the forum, so hopefully... (fingers crossed :p)! Also, if you have any good tips for getting into algorithms in general that you think could help, I'd be glad to hear (read) them! A book, tutorials maybe?! So: authors, architects, projects, books, articles, conferences I should go to, specialized architecture offices/studios (I'm also looking for an internship, so...). If you know of a more appropriate forum, please let me know! If you want to get deeper into this, you can contact me at:
e1635331@student.tuwien.ac.at
tdissaux@student.ulg.ac.be
My master's thesis is due in May 2018, but I have a paper to write for January 2018 in order to be eligible for a PhD program afterwards. What I mean by that is that if you read this message in six months, I'll still be open to discussion!
I am currently an Erasmus student at TU Wien (Vienna), but my home university is the University of Liège in Belgium. I can handle French, English and Italian literature, and possibly Dutch if you really think it's worth it! I have access to most online libraries via my university's portals, so access shouldn't be an issue! I'm very excited to hear from you. I wish you all a great day. Cheers, Thomas
…
ther math and logic. I can usually conceptualise what I want to do and cobble some semi-working thing together, but I don't know which components to use or how to patch them, so I'm super happy that someone who knows what he's doing finds this interesting.
And I'm glad you mention the fanned frets again; there is one input parameter still missing for the multiscale frets to be fully parametric: the angle of the nut, or rather which fret should be straight. What is most comfortable depends a bit on personal preference and playing posture, so being able to adjust this easily would be cool. Again, I have no idea how the maths for that works, or whether you can just rotate each fret the same amount around its middle point. Either a fret number (for the straight fret) or a simple slider from bridge to nut should do as the input setting.
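One possible way the maths could work, sketched under assumptions rather than verified: frets follow the standard equal-temperament rule d_n = L * (1 - 2^(-n/12)) (distance from nut to fret n for scale length L), each fret is the line between its position on the bass-side scale and on the treble-side scale, and sliding one scale along the neck until the two coincide at the chosen fret makes that fret straight while the rest fan around it. All names and the 0..1 across-neck coordinate below are purely illustrative.

```python
# Sketch: fanned-fret endpoints with a chosen "straight" fret.
# Coordinates are (along-neck distance, across-neck 0..1).
def fret_distance(scale_length, n):
    return scale_length * (1.0 - 2.0 ** (-n / 12.0))

def fanned_fret_endpoints(scale_bass, scale_treble, n_frets, straight_fret):
    bass = [fret_distance(scale_bass, n) for n in range(n_frets + 1)]
    treble = [fret_distance(scale_treble, n) for n in range(n_frets + 1)]
    # slide the treble scale so both sides coincide at the straight fret
    shift = bass[straight_fret] - treble[straight_fret]
    return [((b, 0.0), (t + shift, 1.0)) for b, t in zip(bass, treble)]

# e.g. 26.5" bass scale, 25.5" treble scale, 24 frets, 7th fret straight
for n, (p_bass, p_treble) in enumerate(fanned_fret_endpoints(26.5, 25.5, 24, 7)):
    print(n, p_bass, p_treble)
```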
Here are the two extremes and the middle ground:
I've been thinking today, while analysing your patches and cleaning up my mess, about what exactly the monster should do.
Here are the input parameters needed; I think it's the complete list:
scale length low E string
scale length high e string
fret angle/straight fret
string width at nut
string width at bridge
number of frets
fretboard overhang at nut (distance from string to fretboard bounds)
fretboard overhang at last fret
string gauges
string tensions
fretboard radius at nut (for compound radius fretboard radius at bridge is calculated with the stewmac formula)
fretwire crown width
fretwire crown height
action height at nut (distance between bottom of string and fretwire crown top)
action height at last fret
pickup 1 neck position
pickup 2 middle position
pickup 3 bridge position
nut width
The pickup positions should be used to draw circles for the magnet poles on each string so they are perfectly aligned and can be used for the pickup flatwork construction. Ideally they would also need a rotation control aligning the centre line of the pickup so it sits somewhere between the last-fret angle and the bridge angle. Personally I do this visually depending on the design I'm looking for; some people have huge theories on pickup positioning, but personally I don't believe in them.
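A minimal sketch of that pole-alignment idea, assuming the strings run straight from nut to bridge so each pole centre is just a linear interpolation of a string's endpoints at the pickup's position along the scale. The 0..1 position parameter and all names are assumptions for illustration.

```python
# Sketch: lateral centres of the magnet poles at a pickup located at
# fraction t of the scale length (0 = nut, 1 = bridge), assuming
# straight strings and a spread symmetric about the centre line.
def pole_centres(width_at_nut, width_at_bridge, n_strings, t):
    centres = []
    for i in range(n_strings):
        frac = i / (n_strings - 1.0)        # 0 = one outer string, 1 = the other
        at_nut = (frac - 0.5) * width_at_nut
        at_bridge = (frac - 0.5) * width_at_bridge
        centres.append(at_nut + t * (at_bridge - at_nut))
    return centres

# e.g. 6 strings, 35 mm string spread at nut, 52 mm at bridge,
# bridge pickup at t = 0.85
print(pole_centres(35.0, 52.0, 6, 0.85))
```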
That should result in everything needed to quickly generate all the necessary construction curves and geometry for the nut/fingerboard/frets/pickups. This is the core of what makes a guitar work; the more precise this dynamic system is, the better the guitar plays and sounds.
I posted another thread trying to understand how I could use datasets from spreadsheets, databases or CSV files to organize the input parameters. What would make sense for the strings, for example, is to hook into a spreadsheet with the different string sets; I attached one for the D'Addario NYXL string line which basically covers all combos that make sense.
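One possible way to pull such a spreadsheet into the definition once it's exported as CSV, e.g. from a GhPython component. The column names here are guesses based on the description above, not the actual attached file.

```python
# Hypothetical CSV layout: set_name, string, gauge, unit_weight
import csv

def load_string_sets(path):
    sets = {}
    with open(path) as f:
        for row in csv.DictReader(f):
            sets.setdefault(row["set_name"], []).append(
                (row["string"], float(row["gauge"]), float(row["unit_weight"])))
    return sets

for name, strings in load_string_sets("nyxl_strings.csv").items():
    print(name, strings)
```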
String tension is an interesting one, and implementing it would surely be overkill, albeit super interesting to try. It should be possible to extrapolate from the scale length of each string what the tension for a given string gauge would be, so that you could say 'I want a fully balanced set' or 'heavy top, light bottom' and it would calculate which D'Addario SKU best matches the required tension. All the strings listed in the spreadsheet are available to buy as single strings.
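It might be less overkill than it sounds: D'Addario publish a unit weight for every string, and tension then follows from the standard vibrating-string formula T = UW * (2 * L * f)^2 / 386.4 (T in lbs, UW in lb/inch, L the scale length in inches, f the pitch in Hz). A sketch, with an illustrative unit weight rather than one looked up from the NYXL chart:

```python
# Standard string tension formula as used in string makers' tension charts.
def string_tension(unit_weight, scale_length, frequency):
    """Tension in lbs from unit weight (lb/in), scale (in), pitch (Hz)."""
    return unit_weight * (2.0 * scale_length * frequency) ** 2 / 386.4

# e.g. a .010" plain steel string (unit weight ~0.00002215 lb/in)
# tuned to high e (329.63 Hz) on a 25.5" scale: roughly 16 lbs
print(string_tension(0.00002215, 25.5, 329.63))
```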
I'm trying to reorganize everything, which helps me understand it. I just discovered the 'hidden wires' feature, which is great: once I've understood what a certain block does, or have finished one of my own, I can get the wires out of the way and carry on undistracted. A bit risky to hide so many wires, but it makes it so much easier not to get completely lost :-)
By the way, the 'fanned fret' term is trademarked; some guy tried to patent it in the '80s, which is a bit silly since it has been done for centuries. There is a level of sophistication above this as well: check out http://www.truetemperament.com/ and that really is something else. It is astounding how superior the tuning is on those wiggle frets; the problem is that they're rather awkward for string bending, and you also can't easily recrown or level the frets once they wear. …
e matching with a dedicated component which creates combinations of items. You can find the [Cross Reference] component in the Sets.List panel.
When Grasshopper iterates over lists of items, it will match the first item in list A with the first item in list B. Then the second item in list A with the second item in list B and so on and so forth. Sometimes however you want all items in list A to combine with all items in list B, the [Cross Reference] component allows you to do this.
Here we have two input lists {A,B,C} and {X,Y,Z}. Normally Grasshopper would iterate over these lists and only consider the combinations {A,X}, {B,Y} and {C,Z}. There are however six more combinations that are not typically considered, to wit: {A,Y}, {A,Z}, {B,X}, {B,Z}, {C,X} and {C,Y}. As you can see the output of the [Cross Reference] component is such that all nine permutations are indeed present.
We can denote the behaviour of data cross referencing using a table. The rows represent the first list of items, the columns the second. If we create all possible permutations, the table will have a dot in every single cell, as every cell represents a unique combination of two source list indices:
Sometimes however you don't want all possible permutations. Sometimes you wish to exclude certain areas because they would result in meaningless or invalid computations. A common exclusion principle is to ignore all cells that are on the diagonal of the table. The image above shows a 'holistic' matching, whereas the 'diagonal' option (available from the [Cross Reference] component menu) has gaps for {0,0}, {1,1}, {2,2} and {3,3}:
If we apply this to our {A,B,C}, {X,Y,Z} example, we should expect to not see the combinations for {A,X}, {B,Y} and {C,Z}:
The rule that is applied to 'diagonal' matching is: "Skip all permutations where all items have the same list index". 'Coincident' matching is the same as 'diagonal' matching in the case of two input lists which is why I won't show an example of it here (since we are only dealing with 2-list examples), but the rule is subtly different: "Skip all permutations where any two items have the same list index".
The four remaining matching algorithms are all variations on the same theme. 'Lower triangle' matching applies the rule: "Skip all permutations where the index of an item is less than the index of the item in the next list", resulting in an empty triangle but with items on the diagonal.
'Lower triangle (strict)' matching goes one step further and also eliminates the items on the diagonal:
'Upper Triangle' and 'Upper Triangle (strict)' are mirror images of the previous two algorithms, resulting in empty triangles on the other side of the diagonal line:
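For reference, rough Python equivalents of these matching rules for the two-list case, expressed as filters over index pairs (the component itself generalises to any number of lists):

```python
from itertools import product

A, B = ['A', 'B', 'C'], ['X', 'Y', 'Z']
pairs = list(product(range(len(A)), range(len(B))))

holistic = pairs                                  # every cell in the table
diagonal = [(i, j) for i, j in pairs if i != j]   # skip cells where ALL indices match
# 'coincident' equals 'diagonal' for two lists; with three or more lists
# it skips a permutation when ANY two indices match
lower = [(i, j) for i, j in pairs if i >= j]        # lower triangle, diagonal kept
lower_strict = [(i, j) for i, j in pairs if i > j]  # lower triangle, diagonal skipped

print([(A[i], B[j]) for i, j in diagonal])
# -> [('A','Y'), ('A','Z'), ('B','X'), ('B','Z'), ('C','X'), ('C','Y')]
```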
…
nowledge, tools, materials and machines. The Clusters provide a focus for workshop participants working together within a common framework.
Clusters provide a forum for the exchange of ideas, processes and techniques and act as a catalyst for design resolution. The Workshop is made up of ten Clusters that respond in diverse ways to the sg2012 Challenge Material Intensities. The Call for Clusters is now open to proposals which respond in innovative ways to this year's challenge.
Deadline: September 19 2011
More information can be found here:
http://smartgeometry.org/index.php?option=com_content&view=article&id=129&Itemid=146
sg2012 takes place from 19-24 March 2012 at EMPAC (http://empac.rpi.edu/) and is hosted by Rensselaer Polytechnic Institute in Troy, upstate New York USA. The Workshop and Conference will be a gathering of the global community of innovators and pioneers in the fields of architecture, design and engineering.
The event will be in two parts: a four day Workshop 19-22 March, and a public conference beginning with Talkshop 23 March, followed by a Symposium 24 March. The event follows the format of the highly successful preceding events sg2010 Barcelona and sg2011 Copenhagen.
sg2012 Challenge Material Intensities
Simulation, Energy, Environment
Imagine the design space of architecture was no longer at the scale of rooms, walls and atria, but that of cells, grains and vapour droplets. Rather than the flow of people, services, or construction schedules, the focus becomes the flow of light, vapour, molecular vibrations and growth schedules: design from the inside out.
The sg2012 challenge, Material Intensities, is intended to dissolve our notion of the built environment as inert constructions enclosing physically sealed spaces. Spaces and boundaries are abundant with vibration, fluctuating intensities, shifting gradients and flows. The materials that define them are in a continual state of becoming: a dance of energy and information.

Material potential is defined by multiple properties: acoustical, chemical, electrical, environmental, magnetic, manufacturing, mechanical, optical, radiological, sensorial, and thermal. The challenge for sg2012 Material Intensities is to consider material economy when creating environments, micro-climates and contexts congenial for social interaction, activities and organisation. This challenge calls for design innovation and dialogue between disciplines and responsibilities.

sg2010 Working Prototypes strove to emancipate digital design from the hard drive by moving from the virtual to the actual, wrestling with the tangible world of physical fabrication. sg2011 Building the Invisible focused on informing digital design with real-world data. sg2012 Material Intensities strives to energise our digital prototypes and infuse them with material behaviour. They have the potential to become rich simulations informed by the material dynamics, chemical composition, energy flows, force fields and environmental conditions that feed back into the design process.
More information can be found at http://www.smartgeometry.org…
greatly appreciate it!!
You can write the number of the question with your answer next to it, for example:
1) a
2) c
3) a) Washington University in St. Louis
4) 2 weeks (1 week + 1 week shipping)
5) 130
6) b
7) b
The survey questions are as follows:

1) Did you 3D print before?
a. Yes, for a school project
b. Yes, for a personal project

2) Print size
a. Between 2 & 6 cubic inches
b. Between 6 & 12 cubic inches
c. Between 12 & 20 cubic inches
d. Please specify if otherwise: ____ cubic inches

3) Where did you print your object?
a. School
b. Outside school: _________________

4) How long did it take to print?
a. ___ days
b. ___ weeks

5) How much did it cost (in dollars)?
a. Between 20 & 50
b. Between 50 & 80
c. Between 80 & 120
d. Please specify if otherwise: _____ dollars

6) Do you think the price was expensive?
a. Not at all
b. A little bit expensive
c. Very expensive

7) Were you satisfied with the printed object?
a. Yes, it was a great print without problems
b. Not bad, some issues
c. I was not satisfied, very bad quality
Thank you very much to all!!
PS: If you did many 3D prints, you can post multiple answers.
Wassef…
whole design intent, but this is what Inventor is good at. The way it packages bits of 'scripted' components into 'little models' that can be stored and re-assembled is central to MCAD working.
The Inventor model shown is almost 5 years old. We don't model like that any more, however it does offer a good idea of general MCAD modeling approaches.
iParts is useful in certain situations; it could've been useful in the above model. Its usefulness is often a function of the quantity of variants/configurations.
So much is scripted in GH, maybe it should also be possible to script/define/constrain/assist the placement/gluing of the results?
...
Starting point: I think we are talking at cross purposes. AFAIK, the solving sequence of GH's scripted components is fixed. It won't do circular dependencies... without a fight. The inter-component dependencies are not 'managed' the way constraint solvers manage them for MCAD apps.
Components and assemblies are individual files in MCAD.
Placement of these within assemblies in MCAD is a product of matrix transforms and persistent constraints. There is no bi-directional link, the link is unidirectional (downflow only), because of the use of proxies.
Consequently, scripting the placement of components is irrelevant in GH, unless you decide that each component needs to be contained in its own separate file.
This also brings up the point that generating components and assemblies in MCAD is not as straightforward. With iParts and iAssemblies, each configuration needs to be generated as a "child" (an individual file needs to be created for each child) before those children can be used elsewhere.
You notice the dilemma: if you generate 100 parts and then realize you only need 20, you've created 80 extra parts you have no need for, generating wasteful data that may cause file-management issues later on.
GH remains in a transient world, and when you decide to bake geometry (if you need to at all), you can do that in one Rhino file, and save it as the state of the design at that given moment. Very convenient for design, though unacceptable for most non-digital manufacturing methods, which greatly limits Rhino's use for manufacturing unless you combine it with an MCAD app.
One of the reasons why the distributed file approach makes perfect sense in MCAD, is that in industry you deal with a finite set of objects. Generative tools are usually not a requirement. Most mechanical engineers, product engineers and machinists would never have any use for that.
The other thing that MCAD apps like Inventor have, is the 'structured' interface that offers up all that setting out information like the coordinate systems, work planes, parameters etc in a concise fashion in the 'history tree'. This will translate into user speed. GH's canvas is a bit more freeform. I suppose the info is all there and linked, so a bit of re-jigging is easy. Also, see how T-Flex can even embed sliders and other parameter input boxes into the model itself. Pretty handy/fast to understand, which also means more speed.
True. As long as you keep the browser pane/specification tree organized and easy to query.
:)
Would love to understand what you did by sketching.
I'll start by showing what was done years ago in the Inventor model, and then share with you what I did in GH, but in another post.
Let's use one of the beams as an example:
We can isolate this component for clarity.
Notice that I've highlighted the sectional sketch with dimensions, and the point of reference, which is in relation to the CL of the column which the beam bears on. The orientation and location of the beam is already set by underlying geometry.
Here's a perspective view of the same:
The extent of the beam was also driven by reference geometry, 2 planes offset from the beam's XY plane, driven by parameters from another underlying file which serves as a parameter container:
Reference axes and points are present for all other components, here are some of them:
It starts getting cluttered if you see the reference planes as well:
As I mentioned earlier, over time we've found better ways to define and associate geometry and parameters and to manage design change, improving the efficiency of parametric models. But this model is a fair representation of a basic modeling approach, and since an Inventor-GH comparison is like comparing apples and oranges anyway, this model can be used to understand the differences and similarities, for those interested.
I haven't even gotten to your latest post yet, I will eventually.…
he picture (4).
Previously, I had a problem generating the intersections between the two directions of beams, but a colleague helped me by extending the beams, so there was no problem with the intersection lines. However, this solution generated a curl (5) at the highest vertex of the geometry, which I had ignored, intending to repair it before printing; perhaps this means there is a problem with how my beams are distributed. Only when there are 19 beams does the problem not appear, but I still cannot distribute them properly.
(1)
(2)
(3)
(4)
(5)
I tried to present it as simply as possible by removing clutter and labelling my code in the GHX file.
Thank you in advance for your help
…