option, after downloading check whether the .ghuser files are blocked (right click -> "Properties" and select "Unblock"). Then paste them into File -> Special Folders -> User Object Folder. You can download the example files from here. They act in a similar way to the Ladybug Photovoltaics components: we pick a surface and get an answer to the question: "How much thermal energy, for a certain number of persons, can my roof, building facade... generate if I populate them with Solar Water Heating collectors?" This information can then be used to cover domestic hot water, space heating or space cooling loads:
Components enable setting specific details of the system, or using simplified inputs. They cover analysis of the domestic hot water load, the final performance of the SWH system, its embodied energy, energy value, consumption, emissions... and finding the optimal system and storage size. By Dr. Chengchu Yan and Djordje Spasic, with invaluable support from Dr. William Beckman, Dr. Jason M. Keith, Jeff Maguire, Nicolas DiOrio, Niraj Palsule, Sargon George Ishaya and Craig Christensen. Hope you will enjoy using the components!
References:
1) Calculation of delivered energy: Duffie, J., Beckman, W., Solar Engineering of Thermal Processes, 4th ed., John Wiley and Sons, 2013. DiOrio, N., Christensen, C., Burch, J., Dobos, A., Technical Manual for the SAM Solar Water Heating Model, NREL, 2014. Yan, Wang, Ma, Shi, A simplified method for optimal design of solar water heating systems based on life-cycle energy analysis, Renewable Energy, Vol 74, Feb 2015.
2) Domestic hot water load: Lutz, Liu, McMahon, Dunham, Shown, McGrue, Modeling patterns of hot water use in households, Ernest Orlando Lawrence Berkeley National Laboratory, Nov 1996. ASHRAE 2003 Applications Handbook (SI), Chapter 49, Service water heating.
3) Mains water temperature: Residential alternative calculation method reference manual, California Energy Commission, June 2013. Development of an Energy Savings Benchmark for All Residential End-Uses, NREL, August 2004. Solar water heating project analysis chapter, Minister of Natural Resources Canada, 2004.
4) Pipe diameters and pump power: Planning & Installing Solar Thermal Systems, 2nd ed., Earthscan.
5) Sun position and POA irradiance: the same models as Ladybug Photovoltaics (sun position by Michalsky (1988), diffuse irradiance by Perez (1990), ground-reflected irradiance by Liu and Jordan (1963)).
6) Optimal system and storage tank size: Yan, Wang, Ma, Shi, A simplified method for optimal design of solar water heating systems based on life-cycle energy analysis, Renewable Energy, Vol 74, Feb 2015.…
eather data so it cannot be easily compared to Archsim. My account of the differences between Honeybee and Archsim will be far from complete but here are the key ones that I am aware of:
1) This difference is a bit superficial but points to deeper thinking about how the software should be used. Honeybee has many more components than Archsim, which means a steeper learning curve and a longer time to master. You may also encounter a general mentality in the Honeybee community that you should not be running a certain type of simulation unless you know how it works, whereas Archsim is more amenable to setting things up quickly even when you are not sure what is going on under the hood. As a result of its large number of components, however, Honeybee is more open-ended and customizable, and gives you more freedom in the cases you can run and the energy simulation parameters you can change. And while that ethos of understanding what you run exists, we try to provide many resources to educate yourself if you are motivated: long component descriptions assembled into documentation books like this (https://www.gitbook.com/book/mostapharoudsari/honeybee-primer/details), hours of video tutorial playlists like this one (https://www.youtube.com/playlist?list=PLruLh1AdY-SgW4uDtNSMLeiUmA8YXEHT_), and many GH example files on a github-based file sharing system (https://hydrashare.github.io/hydra/index.html). Not to mention a community of people who will respond to discussions like this one.
2) Archsim as a standalone application will soon be no more; it will instead be distributed with the DIVA daylight analysis tool (http://diva4rhino.com/). While I am unclear on the exact trajectory of DIVA, it currently has a price tag attached to it, so I would assume that the future of Archsim will also carry this price tag. On the other hand, Honeybee and any derivative software will forever be free and open source under the GPL license (https://github.com/mostaphaRoudsari/Honeybee/blob/master/License_Honeybee_GPL.txt).
3) This third point is a bit of a reiteration of the last one, but because Honeybee is open source, if you need a feature of EnergyPlus that is not yet implemented in either interface, you can usually add it yourself with a few lines of Python code in Honeybee (see the sketch after this list). This type of workflow is not possible with Archsim: since it is closed source, you have to use EnergyPlus's text editor interface after Archsim has exported an IDF in order to implement any additional EnergyPlus features.
4) The libraries and templates for Honeybee come from OpenStudio - the open source interface for EnergyPlus (https://www.openstudio.net/), which is supported by the US Department of Energy (just like EnergyPlus). Since Honeybee is open source, it can make use of the large database of building-type schedules/loads and constructions that the OpenStudio team has assembled over the last several years, as well as OpenStudio's SDK. I can also say that almost all of the Honeybee team's development efforts are now focused on integrating with OpenStudio, including an exporter from Honeybee to OpenStudio that should be fully functional in the next stable release. I am not certain of the current extent of Archsim's libraries but, last I checked, the creator was pulling them from his own experience and, as such, there were only a few libraries to choose from. For all I know, though, this may be changing with the integration of Archsim with DIVA.
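To give a sense of scale for point 3, here is a hedged sketch of that kind of small Python addition: appending a raw EnergyPlus object to an exported IDF before the run. The file path and the choice of object are illustrative assumptions, not Honeybee's actual API.

```python
# Append an extra EnergyPlus output request to an exported IDF.
# The path below is a hypothetical export location.
extra_object = """
Output:Variable,
    *,                          !- Key Value
    Zone Mean Air Temperature,  !- Variable Name
    Hourly;                     !- Reporting Frequency
"""

idf_path = r"C:\EnergyPlus\project\in.idf"  # hypothetical
with open(idf_path, "a") as idf:
    idf.write(extra_object)
```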
Let me know if this is helpful and, if anyone has more up-to-date knowledge on Archsim than I do, please post it here.
-Chris…
ack to .ghx?
This is in relation to a discussion I've been having with David Rutten & Scott Davidson about GH consuming memory in a relatively large GH definition (~7 MB; see below). I think what I've learned from this is that one should limit the size of the GH file, or put some incremental stops in the definition to limit the length of the calculations it runs at once. Is this a valid conclusion?
The GH file we're talking about is 7 MB & the Rhino file is about 120 MB, but when working with the GH definition I try to keep only about 2 curves turned on.
Here's a summary of the discussion:
Hi Mike, thanks for sending it over. I've been fiddling with the file for about 10 minutes and it climbed from 1.7 GB to 1.9 GB, but then I've been switching previews on, which means more meshes get calculated, so you'd expect higher memory consumption. It is possible we're leaking memory, but if you're working for hours on end, memory fragmentation might also explain part of the increase. Basically, memory gets fragmented just like disks get fragmented after prolonged use; the difference is that memory cannot be defragmented unless you restart the application and allow it to start with a clean slate. I'll try and find any leaks we may have missed in the past.
Goodwill,
David
──────────── David Rutten
On 09/03/2011 06:19, Mike Calvino wrote:
Thanks very much David for the quick response. I've attached the files, zipped. I can't figure out what's doing it. After working in the file for a while, the memory usage in the Windows Task Manager climbs . . . it got to 1.57+ GB before I exited GH & Rhino5 WIP & let it dissipate, then restarted & worked for a while before it did it again. It probably takes 4 or 5 hours before it gets that high. That's the highest it's gotten, & that only happened while I was working in a Rhino file that had all of the elements baked into it - turned off at least, but it still climbed to 1.57+ GB. It seems to climb when you work in the file & move around in both the GH definition & the Rhino file. For example, turn on a few of the Extr components at the right end of the "StandareRibExtuder" groups and you can watch the MemUsage go up, but when you turn them off, it does not go down - it goes up fast at this point. Maybe I need to figure out how to do the definition with fewer components; I'm sure that's part of it, but I must confess I think I'm still early on the learning curve. I really hope that this is not operator error on my part & I do apologize up front if it is. I have done a disk cleanup, and I have tried excluding .3dm & .ghx files from my NOD32 antivirus - no change. I hope you can find something. Let me know if you have any trouble with the files. See if you find anything & please let me know . . . thanks!
Cheers!
--Mike Calvino
Calvino Architecture Studio, inc.
www.calvinodesign.com
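Not part of the original exchange, but if you would rather watch this from inside Grasshopper than from the Task Manager, a small diagnostic sketch for a GhPython component (IronPython with .NET access is assumed) could log the working set each time it fires:

```python
# Print Rhino's current working set so you can see whether toggling
# the preview/Extr components ever releases memory or only grows it.
from System.Diagnostics import Process

bytes_used = Process.GetCurrentProcess().WorkingSet64
print("Working set: %.1f MB" % (bytes_used / (1024.0 * 1024.0)))
```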
…
ag gets pinned in Temeswar)
7 days of training + exhibition and party!
During the first 3 days we have prepared a training course where the participants will get acquainted with the basic notions and elementary algorithms in Grasshopper. Within the following 4 days you will have to apply your general knowledge to design and produce a 1:1 mockup of the digital model.
It’s going to be massive!
_ORGANIZERS AND TUTORS:
F-O-R
Oana Simionescu
Alex Cozma
DtArchLab + Idz
Ionut Anton
Dana Tanase
T_A_I
Irina Bogdan
_HOSTS:
EduKube Multimedia Center
Find out how to apply here and make sure to keep an eye on our blog. You can also keep yourself updated by following our Facebook page.
See you at EduKube, Timisoara on the 16th of July!
…
e systems?
Architecture can engage technologies to make buildings tectonically transformative and aware of their active surroundings, investigating data and responding to environmental change.
The 10-day workshop investigates the design of computational kinetic structural systems, which interact with the behavior inherent in the city, environment and population.
The aim of the workshop is to investigate parametric kinetic strategies that transform according to an ever-changing data system. Like architectural cybernetic machines embedded in a smart city, the projects interact with the population and the environment of Rome, proposing another layer of urban strategy. These operations take place both by continually detecting physical and non-physical data via sensors and by transforming their own forms. This complex dynamic-interaction approach leads us to discard the imposition of a fixed form and, instead, to create and use a computational kinetic artifice.
Initially, students attend lectures on current mainstream and academic research, as well as tutorials on parametric modeling software, digital fabrication prototyping and robotic assembly. Then, applying these skills in a team-based structure, they pursue computational design research coupling physical analogue experiments with computer-controlled kinetic prototype programming. The proposals are built by assembling digitally fabricated parts and electronic devices, like Arduino boards, sensors and servos, to interact with a data exchange and regulate form.
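Not from the programme itself, but as a hedged sketch of such a sensor-to-actuator loop in Python (the port name, baud rate and the one-integer-per-line protocol are all assumptions; serial I/O is done with the third-party pyserial package):

```python
# Read raw sensor values streamed by an Arduino, remap them to a
# servo angle, and send the angle back - one integer per line.
import serial  # third-party pyserial package

port = serial.Serial("COM3", 9600, timeout=1)
try:
    for _ in range(100):  # a short demo loop
        line = port.readline().strip()
        if not line:
            continue  # read timed out, try again
        reading = int(line.decode())   # e.g. 0..1023 from analogRead
        angle = reading * 180 // 1023  # remap to a 0..180 servo angle
        port.write(("%d\n" % angle).encode())
finally:
    port.close()
```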
For additional information go to the AA WEBSITE: http://www.aaschool.ac.uk/STUDY/VISITING/rome
…
digital fabrication in laser cutting, CNC cutting, 3D printing, and parametric modeling.
This third workshop teaches the fundamentals of parametric modeling and some basics of digital fabrication.
INCOMING STUDENT PROFILE:
Designers, architects, and artists with knowledge of Rhinoceros who are interested in starting parametric modeling with Grasshopper for basic digital fabrication.
OUTGOING STUDENT PROFILE:
Students will finish with the knowledge and criteria to develop pieces or projects using digital fabrication, improving and speeding up workflows, along with the fundamental criteria of parametric-generative modeling.
Parametric modeling workshop with Grasshopper
Interface
Data management
Volatile data
Persistent data
Ranges and domains
Attractors
Lists and Cull
Modeling by Layer Object
Basic analyses
Connecting curves
Surfaces
Surface analysis
Basic paneling
Working with Excel
Generative modeling
Dates: February 8 to March 1
Days: Saturdays
Hours: 10 am to 3 pm
Sessions: 4 sessions of 5 hours
Duration: 20 hours
Price: $3,000.00…
as 2 ECTS - Schedule: Thursdays and Fridays from 16:00 to 18:00 - Start: end of February 2016 (exact date to be confirmed) - Registration: send your details (first name, surname, ID number, email, phone number) indicating your preference to iamadrid.arquitectura@upm.es
- Learning a visual programming environment for generating dynamic prototypes of complete projects. Node-based programming platforms for managing them.
- Designing interrelated algorithms. Planning and making processes explicit. Translating processes into programming languages. Basic syntax common to all programming languages.
- Exploring both open-ended drift and concrete objectives. Programming project tools as part of the project itself. Exploring process as the essence of the project.
- Incorporating data external to the design. Learning to program, automate and then fine-tune decisions. Generating adaptive and reactive projects under continuous re-information.
- Exploring the limits of the codified: producing code as an assistant rather than an imposition.
- Interrelating the decisions of teams. Generating frameworks and routines for collaborative design.
- Exploring topologies and prototypes, environments of uncertainty and possibility. Managing databases and flows of data inheritance and transport.
- Dynamic, evolutionary and modifiable generation. Producing open-source tools.
http://dpa-etsam.com/iam/iam-cursos
https://www.facebook.com/iamadridETSAM?fref=ts
+34 91 336 6537 / 6589…
, at the Manens-Tifs offices, on 26, 27 and 28 May 2016.
Visual comfort and the management of daylight in relation to energy savings are becoming increasingly relevant to innovative building design. For example, the new LEED 4 protocol awards credits for daylighting simulations and confirms the importance of design aspects that "connect occupants with the outdoors, reinforce circadian rhythms, and reduce the use of electrical lighting by introducing daylight into the spaces". Without lighting-simulation software it is not possible to obtain quality results. Radiance is a validated piece of software, used both in research and by practitioners, and is among the most accurate tools for the professional simulation of natural and artificial light. It has no limits on geometric complexity and is well suited to being integrated with other calculation software and graphical interfaces, which make the scripting procedures easier. The main and most versatile of these will be covered in the course (DIVA4Rhino and Ladybug + Honeybee, plug-ins for Grasshopper and Rhinoceros 3D).
The course is aimed at designers and researchers who want to acquire practical tools for simulation with Radiance, in order to develop and verify the solutions best suited to their needs. Theory and hands-on sessions are planned, with examples and exercises covering the topics in a demonstrative and interactive way.
Applications must be submitted by 12 May 2016.
The brochure with the course contents and all the information is available at this link.
The course is sponsored by Pellinindustrie.…
hreads where Thread I solves object A1 and Thread II solves object A2. As soon as A1 is completed, Thread I can move on to object B1 and as soon as A2 completes, Thread II can move on to object B3 (whichever comes first). When both A1 and A2 are complete, we can spawn a new thread (III) to take care of object B2.
If B2 completes before B3, then Thread III will terminate. If B3 completes before B2, then Thread II terminates. Whichever thread is last will pick up execution of object C3. And so on and so forth.
This sort of threading is actually not guaranteed to help much though, as it is likely that the bottleneck components in the network will still need to be handled by a single thread.
A more efficient solution would be to divvy up the execution per component across multiple threads. If you're trying to compute the Curve Closest Point for 10,000 points and your machine contains 4 cores, then we can assign 2,500 points to the first core, 2,500 points to the second core, and so on.
This approach will actually work when there are only a few bottleneck components, and it also means the order in which components are solved is no longer important.
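As a hedged sketch of that per-component chunking in Python (closest_point is a hypothetical stand-in for the actual Curve Closest Point call, and in CPython the GIL would limit the gain for pure-Python work; the point here is the partitioning pattern):

```python
# Split the points into one chunk per worker, solve the chunks in
# parallel, then flatten the partial results back into input order.
from concurrent.futures import ThreadPoolExecutor

def closest_points_parallel(curve, points, workers=4):
    size = -(-len(points) // workers)  # ceiling division -> chunk size
    chunks = [points[i:i + size] for i in range(0, len(points), size)]
    # closest_point(curve, p) stands in for the hypothetical SDK call
    solve = lambda chunk: [closest_point(curve, p) for p in chunk]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(solve, chunks))  # map preserves order
    return [cp for chunk in results for cp in chunk]
```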
An even more fine-grained approach to threading would be to make the Curve Closest Point function in the Rhino SDK threaded. There's a lot of looping going on in any given Curve CP computation so the curve could be broken up into loose spans where each span is solved by a different core. Then the partial results get consolidated once all threads finish.
The benefit here is that it would be multi-core for everyone, not just Grasshopper components.
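A similarly hedged sketch of the span-splitting idea, where each span object's closest_point method is hypothetical and assumed to return a (parameter, distance) pair:

```python
# Solve each curve span on its own thread, then consolidate the
# partial results by keeping the hit with the smallest distance.
from concurrent.futures import ThreadPoolExecutor

def closest_point_by_spans(spans, point, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(lambda s: s.closest_point(point), spans))
    return min(partials, key=lambda hit: hit[1])  # smallest distance wins
```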
The bad news: some functions in Rhino are not thread-safe, meaning that data structures such as NurbsCurves cannot be modified from multiple threads at once, as doing so will compromise their validity. You might well end up with invalid curves and quite possibly weird crashes. In very bad cases it might even be that a specific function in our SDK can only be running once, so even if you were to duplicate the curve it would still not work.
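For illustration only (this is not from the post): the usual defensive pattern when an SDK call is not thread-safe is to serialise access with a lock, which restores correctness but gives up most of the parallel speed-up:

```python
import threading

sdk_lock = threading.Lock()

def safe_closest_point(curve, point):
    # closest_point is a hypothetical non-thread-safe SDK call; the
    # lock ensures only one thread is ever inside it at a time
    with sdk_lock:
        return closest_point(curve, point)
```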
Until our SDK is thread-safe there can be no global threading in Grasshopper. I don't know where we're headed with this, but I do know that we've started using some threaded algorithms in the display as of Rhino5, so it seems we're at least getting our feet wet.
--
David Rutten
david@mcneel.com
Seattle, WA…
Added by David Rutten at 5:47pm on November 17, 2010
difference consists of.
An Evolutionary Solver/Genetic Algorithm is an implementation of Metaheuristics. Metaheuristics tend to be flexible solvers: applicable to a wide variety of problems, fairly easy to implement, but slow. Other examples of Metaheuristic algorithms would be Random Search, Scatter Search, Simulated Annealing and so on. These algorithms are often modelled on physical or biological processes.
Simulated Annealing, for example, simulates the physical process of annealing (who'd have thunk it), which is basically the slow cooling of a material that allows it to settle into a crystalline lattice, i.e. a low-energy distribution of all the atoms. I'm currently adding an SA solver to Galapagos, and in fact just yesterday managed to get the first successful run: http://www.youtube.com/watch?v=VWtYLv-4oP0
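Not Galapagos code, but a bare-bones hedged sketch of the Simulated Annealing idea on a 1-D problem: always accept improvements, and accept worse moves with a probability that shrinks as the temperature cools:

```python
import math
import random

def anneal(f, x, step=0.5, t=1.0, cooling=0.995, iters=2000):
    fx = f(x)
    for _ in range(iters):
        y = x + random.uniform(-step, step)  # random neighbour
        fy = f(y)
        # accept improvements always, worse moves with P = exp(-dE/t)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
        t *= cooling  # slow cooling, as in physical annealing
    return x, fx

print(anneal(lambda x: (x - 0.3) ** 2, x=5.0))  # converges near 0.3
```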
Metaheuristics are especially useful in cases where little is known about the problem ahead of time. If the problem search-space is mathematically well defined (differentiable, especially), then you can use more targeted algorithms such as the Newton-Raphson method, Pareto-search or Uphill search. You can still use these methods on non-differentiable search-spaces, but it involves sampling the local region to death to get an estimate of the differential. This can be a very costly enterprise, especially in high-dimensional search-spaces. In a two-dimensional search-space you'll need 3 samples to get a lame estimate, 4 to get a halfway decent estimate and 8 to get a good estimate. In a three-dimensional search-space you already need 26 samples, and the number of samples grows exponentially with higher dimensions.
If you have a specific problem you're trying to solve, Metaheuristics are probably not the best solution, even though they may be easiest to program. Rhino uses something akin to Newton-Raphson for certain problems and that's fast enough to run in real-time.
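For contrast, a minimal Newton-Raphson sketch for a 1-D root find, the kind of targeted method meant here; note that it needs the derivative, which is exactly what a poorly defined search-space fails to provide:

```python
def newton(f, df, x, steps=20):
    # each step follows the local tangent to its root: x - f(x)/f'(x)
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))  # ~1.41421
```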
Divide-and-Conquer algorithms are also quite popular; sometimes they are called Binary-Search or Tree-Search algorithms as well. Their basic premise is to sample the search-space at a few intervals (but enough to capture the needed detail), then find two neighbours with promising values and sample again between them. Then repeat. Each new iteration typically doubles the accuracy, which is great because then you only need ~30 to ~40 iterations to get an answer as good as possible with double-precision floating-point accuracy. However, not all problems lend themselves well to this sort of search, and in higher dimensions it starts getting slow with disconcerting alacrity.
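A hedged one-dimensional sketch of that interval-refinement idea: sample, keep the bracket around the most promising sample, and resample inside it, so each pass at least doubles the accuracy:

```python
def refine_minimum(f, lo, hi, samples=5, passes=50):
    for _ in range(passes):
        step = (hi - lo) / (samples - 1)
        xs = [lo + i * step for i in range(samples)]
        best = min(range(samples), key=lambda i: f(xs[i]))
        # shrink the bracket to the neighbours of the best sample
        lo = xs[max(best - 1, 0)]
        hi = xs[min(best + 1, samples - 1)]
    return 0.5 * (lo + hi)

# ~50 halvings exhaust double precision on a unit interval
print(refine_minimum(lambda x: (x - 0.3) ** 2, 0.0, 1.0))
```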
--
David Rutten
david@mcneel.com
Poprad, Slovakia…
Added by David Rutten at 1:54am on August 15, 2011