Building the Invisible: Informing Digital Design with Real World Data
Information about each Workshop Cluster can be found here:
Cyber Gardens
Use the Force
Urban Feeds
Reflective Environments
Interacting with the City
Agent Construction
Authored Sensing
Performing Skins
Responsive Acoustic Surfacing
Hybrid Space Structure Typologies
The SmartGeometry 2011 Workshop will take place at CITA http://cita.karch.dk/
Applications to attend the SmartGeometry 2011 Workshop in Copenhagen will close next Monday 31st January 2011. General Conference registration will open within 1 month.
We hope to see you there!
****************************************************
Workshop 28th-31st March
Shop Talk 1 April
Symposium 2 April
Reception 2 April
These events follow the highly successful previous SG events in Barcelona 2010, San Francisco 2009, Munich 2008, New York 2007, Cambridge/London, UK 2006 and multiple preceding events.
BUILDING THE INVISIBLE: Informing Digital Design with Real World Data
THE PREMISE
Vast streams of data offer a rich resource for designers. By incorporating external information into our design processes, the autonomy of the design is challenged. User data, energy calculations, embedded sensing, material and structural simulation, human behaviour and perception, particle flows, and force fields allow design to be situated and responsive. From the simulation of megacities to the solid modelling of material systems, design has the potential to be informed by the real. Design does not sit separate from its environment but inhabits an ecological system: open, dynamic and interdependent, diverse, partially self-organising, adaptive, and fragile. Across scale and within time we now have the chance to instil architecture with an immanent intelligence, creating new relationships between the user, the built, and its ecosphere.
THE OPPORTUNITY
Systems theorists suggest that data is only a raw material. It can be differentiated from information, knowledge, and wisdom. Understanding is multi-levelled: understanding of relations, understanding of patterns, understanding of principles. As digital designers, our challenge is to harness the power of computation to assist us in informing our design process. Computers help us collect, manage, and analyse an abundance of data about the environment. Our challenge is to use these inputs in a meaningful way to make better-informed design decisions.
THE AIM
SG 2011 explores how the incorporation of real-world data challenges existing design thinking. The aim of the SG 2011 workshop is to create physical prototypes of design systems to be exhibited in the SG 2011 exhibition.
The SmartGeometry Group is a not-for-profit educational organization dedicated to the use of computational tools in architecture and engineering. SG brings professionals, academics, and industry together to explore the next generation of digital design. SG Workshops are non-platform specific, believing it is the methodology, not the tool, that matters.
…
Added by Shane Burger at 8:01pm on January 27, 2011
Big-O complexity describes how an algorithm's running time grows with the amount of data it has to operate on. So only those aspects of the algorithm that behave differently as the input grows are relevant.
For example, if your algorithm always does exactly the same amount of work regardless of its input (let's say all it does is measure the size of an array and display it on screen), it will be O(1), because it doesn't matter whether you run it on an array containing 10 or 1,000,000 items. Measuring the size of an array is a constant-time operation:
Print(string.Format("Array contains: {0} element(s)", data.Length));
However, if your algorithm works not on arrays but on linked lists, it becomes an O(N) operation, because counting all the elements in a linked list means you have to iterate over all of them, and the longer the list, the more iterations you need. In fact the number of iterations is exactly the number of items. (P.S. If you use the System.Collections.Generic.LinkedList<T> class it's still O(1), because that particular implementation of linked lists caches the count and keeps it up to date.)
If you have a loop that runs once for each item, and inside that loop another loop that also runs once for each item, then your complexity becomes O(N²). Similarly, if your algorithm consumes two collections (of sizes N and M) and iterates over all items in N, and inside that loop iterates over all items in M, the complexity is O(N×M).
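The nested-loop case can be made concrete by counting inner-loop executions directly (a minimal Python sketch for illustration, since the answer's own snippets are C#):

```python
def count_pairs_iterations(n, m):
    """Count how many times the inner-loop body runs for two
    nested loops over collections of sizes n and m -- the
    O(N*M) pattern described above (O(N^2) when n == m)."""
    iterations = 0
    for _ in range(n):        # outer loop: runs n times
        for _ in range(m):    # inner loop: runs m times per outer pass
            iterations += 1
    return iterations
```

With n = m = 10 the inner body runs 100 times, which is exactly the quadratic growth the O(N²) notation captures.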
The case can be made that only the most severe complexity is relevant enough to report. For example, if you have an algorithm that consists of three steps, the first of which is O(log(N)), the second O(N²) and the third O(3ⁿ), then technically the total complexity would be O(log(N) + N² + 3ⁿ); however, the first two parts are utterly insignificant compared to the third and can therefore be omitted entirely. Consider for example increasing the input size from 10 to 20 elements:
log(10) + 10² + 3¹⁰ = 1 + 100 + 59049 = 59150
log(20) + 20² + 3²⁰ ≈ 1 + 400 + 3486784401 ≈ 3486784802
As you can see, the increase in total cost is almost entirely due to the O(3ⁿ) portion, so much so that there's almost no point in mentioning the other two.
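The arithmetic above can be checked directly; a small Python sketch evaluating each term separately (the text uses base-10 logarithms):

```python
import math

def complexity_terms(n):
    """Evaluate each term of log10(N) + N^2 + 3^N separately,
    mirroring the worked numbers in the text."""
    return math.log10(n), n ** 2, 3 ** n

log_term, square_term, exp_term = complexity_terms(20)
# The exponential term accounts for nearly the entire total,
# which is why only the dominant term is usually reported.
dominant_share = exp_term / (log_term + square_term + exp_term)
```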
Now, your specific questions:
Constructors/declarations and method invocations are not necessarily O(1). In this particular case they are, but a constructor you call may have a higher complexity. For example, if instead of an empty List<T> you construct a SortedList<T> from your inputs, that construction may well be the most significant cost in your entire algorithm and needs to be taken into account.
Correct. A loop like this has complexity O(N); you can ignore work that happens only once, such as the declaration of the iteration variable.
I don't understand that line of code. cP is already a list. Why are you calling ToList() on it? In general, copying a memory-contiguous collection (like an array or list) is still O(N) in the number of elements, but the constant factor is very small because whole blocks of memory can be duplicated in one go using bulk memory operations. Collections that aren't contiguous require an explicit element-by-element loop, which is much slower in practice.
It's very cheap to add items to a list, provided the list has enough space for new items. By default a list is big enough to contain only 4 items. If you try to add a fifth one, the list needs to allocate more memory elsewhere, copy the 4 existing items into the newly allocated space, and only then add the new item. So, if you know ahead of time how many items you'll be adding to a list (or even if you only know a theoretical upper bound), you should construct the list with that known capacity. This speeds up the process of adding many items to a single list.
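A back-of-the-envelope way to see the benefit of preallocating is to count how many element copies a growable list performs under a doubling strategy (a Python simulation of the behaviour described above; actual growth factors are implementation-specific):

```python
def copies_when_growing(item_count, initial_capacity=4):
    """Simulate the element copies a growable list performs when
    its capacity starts at `initial_capacity` and doubles whenever
    it overflows. Returns the total number of elements moved."""
    capacity, size, copies = initial_capacity, 0, 0
    for _ in range(item_count):
        if size == capacity:
            copies += size    # existing items moved to the new block
            capacity *= 2
        size += 1
    return copies
```

Constructing with a known capacity makes this copy count zero, which is exactly why passing the expected size to the constructor helps. Note the total copies stay below 2 × item_count, which is why appending is still amortized O(1).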
I don't know how crypto providers work internally, but since this part of your algorithm does not depend on cp.Count or the magnitude of populationCount, it doesn't matter for the big-O complexity metric.
…
easy. There is room for discussion and clarification. The most-wanted ideas quickly and clearly rise to the top. Voting is limited in a clever way that forces users to choose their most valued features.
The general rules, copied from: http://3dsmaxfeedback.autodesk.com/forums/80695-general-feature-requests/
Each user has 20 votes for each forum
Each idea can have no more than 3 votes by a single user
If you enter an idea, it will cost you 1 vote – therefore try to make sure the idea doesn’t already exist
The more precise and detailed a description you give, the more likely your idea will be considered
When an idea is implemented (or declined), votes are returned to all the users that voted
Users can change their votes at any time
Admins can move, edit and delete ideas as they see fit to better meet the goals of the forum
We will flag ideas that are getting our attention as “under review”. Because of limits on what we can say publicly, that is as far as we can go with commenting on a particular idea. If it is “under review” it simply means we’re studying it for possible implementation or gathering data, but there is no commitment to do it.
…
Added by Jonah Hawk at 6:13pm on September 9, 2014
This is a two-day intensive workshop exploring the new KingKong plugin for Grasshopper. The software simulates curved folding, and offers simple attractor functions to modulate an array of folded panels on a surface. The focus will be on building simulations of folding inspired by physical folding, and on the control of complex arrangements of panels, which will be realised with the CraftROBO vinyl cutter.
Day 1 - AM: Material Computation - intuitive techniques for designing shapes foldable by robots
Day 1 - PM: Folding Design - digitise fold patterns and use the KingKong plugin to simulate folding; design arrangements of panels with the attractor system, using different grid types and surface types, the fill/empty feature, and the live baking feature
Day 2 - AM: Fabrication Data - refining the design for fabrication, outputting data
Day 2 - PM: Panel Assembly - cutting on the CraftROBO vinyl cutter, assembly of components
Two more dates in April and May.
More details and booking on the RoboFold website:
http://www.robofold.com/index.php?WEBYEP_DI=18…
1) hardware: something like an i7 with four cores would serve best. I am running 4 × 3.4 GHz here. You should see 100% CPU utilization when solving.
2) model specifics: topology (= how many elements come together in one joint) and joint and support freedom, which together define the number of degrees of freedom (DOF) of the model. The more DOF, the larger the stiffness matrix to invert, and the longer the computation time. Truss bars are a LOT faster than beam elements.
3) loads and load cases: in general, the more load cases, the longer the solving time; the more load vectors on single nodes (which it all comes down to), the longer as well. But loads don't affect the computation time too much, especially since once the stiffness matrix has been inverted, most load cases can, I think, be applied to it directly.
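To see why truss elements are so much faster (point 2 above), here is a hypothetical back-of-the-envelope DOF count, not Karamba's actual internals: a 3D beam node carries 6 degrees of freedom (3 translations + 3 rotations), while a truss node carries only 3.

```python
def dof_count(node_count, element_type="beam"):
    """Rough DOF estimate for a 3D model, ignoring supports:
    beam nodes carry 6 DOF (translations + rotations),
    truss nodes only 3 (translations).  Illustrative only."""
    per_node = 6 if element_type == "beam" else 3
    return node_count * per_node

# A dense direct solve scales roughly with DOF^3 in the worst case,
# so halving the DOF can cut the factorization work dramatically.
```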
Eigenmodes take a LOT longer to compute than a normal analysis. In certain Karamba releases, the automatic calculation of the first eigenmode (for debugging your geometry) was turned on inside the analysis module whenever something was wrong with the actual calculation. This could be pretty annoying with big models, so it is now turned off again.
With 'nonlinear', do you mean the large-deformation iterative approximation component of Karamba?
An average model with 10,000 beams and three load cases takes ~400 ms here, so multiply that by 20 for some non-linear iterations and you are there, roughly.
best
robert…
com/Master-2020/
05 October 2021 - 04 October 2022 at the Faculty of Engineering, Sapienza University of Rome. Registration deadline is 26th of May 2020.
Number of students: 20-30. Official language: English. Credit hours: 60 CR. Duration: one year of 1180 total hours (600 for courses and laboratories + 480 for internship + 100 for the final project). Place: 9 months at Sapienza University of Rome, Faculty of Engineering, and 3 months of internship outside the university.
…
intermediate to advanced.
2013 | May 22, 23, 24 and 25. 20 hrs.
Schedule: 18:00 - 22:00; Thursday, Friday and Saturday from 8:00 to 15:00 hrs. Instructor: Arch. David Hernández Melgarejo.
http://bioarchitecturestudio.wordpress.com
Objectives:
The course is aimed at any designer, engineer, or architect who wants to gain a solid foundation in generative and parametric modelling within the Rhinoceros workflow.
The course will explore and build structures in parametric space, incorporating geometric entities (curves, surfaces, points, etc.) and using algorithmic patterns to generate structures with contextualized metabolisms.
Each step will be supported by exercises that gradually increase in complexity.
Students will learn how to work with geometric association and parameters. To master geometric association (association between parts, dynamic association), geometric forms are generated by following the logical connection between the geometry and its constraints, its parametric dimensions, and the dynamic design process: we encourage relational thinking for the construction of high-performance design and architecture.
Results:
Participants in this training will gain the following foundations:
· Generate applications oriented toward analysis, optimization, design documentation, and fabrication.
Keywords:
Computational Design, Scripting, Rhinoceros 5.0 + Grasshopper, Parametrization, Analysis, Galapagos, Genetic Solver, Optimization, Digital Fabrication.
For more information:
MArch. Kathrin Schröter. E-mail: kschroter@itesm.mx
Architecture Department. Aulas 1 offices, second floor.…
The focus of the workshop is to integrate and correlate the synergistic effects associated with the simultaneous presence of different digital design and analysis tools in an ongoing design process. The main attention is on an easy-to-handle interface, to be used at an early stage of conceptual design to respond to external and internal influences in an intelligent and sustainable way. Participants will use Grasshopper, a parametric modelling plug-in for Rhino: this graphical algorithm editor integrates tightly with Rhinoceros 3D's modelling tools, opening up the possibility of constructing highly complex parametric models. To generate this complexity, we will use live links to the programs listed below: . Autodesk Ecotect Analysis via GECO . FEA software GSA via SSI
During these three intense days, participants will learn the workflow of the plug-ins with the help of examples, get an overview of the different software packages, and explore the possibilities of evaluating the performance of a design and of using additional tools so as not to be chained to a single system (e.g. parametric accentuation, parametric formation, parametric reaction). [.] Details: Instructors: Thomas Grabner & Ursula Frick from [uto]. Course language: English (support tutors will be available, but a basic knowledge of English is required).
Registration fees (min 12, max 20 places): educational*: € 280.00 + VAT; professional: € 450.00 + VAT. *Students, teachers, researchers, PhD candidates, and graduates up to one year from their graduation date. EARLY BIRD SPECIAL OFFER: the first 5 registration requests received by 31 December 2011 are entitled to a 20% discount on the registration fee. E.B. SPECIAL fees: educational*: € 224.00 + VAT; professional: € 360.00 + VAT. Further info, details and registration: http://www.co-de-it.com/wordpress/nexus-advanced-grasshopper-workshop-with-uto.html…
Signing up for the course is conditional on being committed to change : ) We are looking for people who want personal challenges, not massive videos. We believe in individual training that gives our students a learning experience based on their choices, interests, passions, and ambitions, giving them more voice in the learning process.
As a first step, we create your course with your input and we start with your weekly challenges. Be part of the new wave of online courses : )
info@pazacademy.xyz
…