t. So here we go!
1. Honeybee is brown and not yellow [stupid!]...
As you probably remember, the Honeybee logo was initially yellow because of my ignorance about honeybees. With the help of our honeybee expert, Michalina, the color is now corrected. I promised her I would update everyone about this. Below are photos of her working on the honeybee logo and the results of her study.
If you think I'm exaggerating by calling her a honeybee expert you better watch this video:
Thank you, Michalina, for the great work! :) I corrected the colors; no more yellow. The only yellow arrows represent sun rays, not the honeybee!
2. Yellow or brown, W[here]TH is Honeybee?
I know. It has been a long time since I posted the initial video, and it is no fun at all to wait this long. Here is the good news: if you follow the Facebook page, you probably know that the daylighting components are almost ready.
A couple of friends from the Grasshopper and RADIANCE communities have been helping me with testing/debugging the components. I still think/hope to release the daylighting components at some point in January, before Ladybug turns one year old.
There have been multiple changes. I finally feel that the current version of Honeybee is simple enough for non-expert users to start running initial studies and flexible enough for advanced users to run advanced studies. I will post a video soon and walk you through different components.
I think I still need more time to modify the energy simulation components, so they are not going to be part of the next release. Unfortunately, there are so many ways to set up and run a wrong energy simulation, and I really don’t want to add one more GIGO app to the world of simulation. We already have enough of those. Moreover, I’m still not quite happy with the workflow. Please bear with me for a few more months, and then we can all celebrate!
I recently tested the idea of connecting Grasshopper to OpenStudio through the OpenStudio API, successfully. If nothing else, I really want to release the EnergyPlus components so I can concentrate on Grasshopper > OpenStudio development, which I personally think is the best approach.
3. What about wind analysis?
I have been asked multiple times whether Ladybug will have a component for wind studies. The short answer is YES! I have been working with the EFRI-PULSE project over the last year to develop a free, open-source, web-based CFD simulation platform for outdoor analysis.
We have made very good progress so far, and our rockstar Stefan recently presented the results of the work at the American Physical Society’s 66th annual DFD meeting; the results look pretty convincing in comparison to measured data. Here is an image from the presentation. All credit goes to Stefan Gracik and the EFRI-PULSE project.
The project will go live at some point next year, and after that I will release Butterfly, which will let you prepare the model for CFD simulation and send it to the EFRI-PULSE platform. I haven’t tried running the simulations locally yet, but I’m considering that as a further development. Here is how the component and the logo look right now.
4. Teaching resources
It has been almost 11 months since the first public release of Ladybug. I know that I didn't do a good job of providing enough tutorials/teaching materials, and I know that I won’t be able to put something comprehensive together soon.
Fortunately, Ladybug has been flying in multiple schools during the last year. Several design, engineering and consulting firms are using it, and it has been taught in several workshops. When I checked with many of you, almost everyone told me they would be happy to share their teaching materials, so I started the teaching resources page. Please share your materials on the page. They can be in any format and any language. Thanks in advance!
I hope you enjoyed/are enjoying/will enjoy the longest night of the year. Happy Yalda!
Cheers,
-Mostapha
…
Digital Process: Generative Design Technologies Workshop; a specialized workshop that will take place in 4 of the most important cities of the Mexican Republic [Puebla] [Mexico DF] [Guadalajara] [Leon] in January and February 2012. http://gendesigntech.wordpress.com/
Aimed mainly at architects, industrial designers, interior designers, urban planners, digital artists, students and design professionals, this workshop's objective is to provide participants with the knowledge and technological resources that will allow them to develop the elements of a project fully, from conception through application. Supported by a powerful and flexible set of platforms, participants will learn to generate, analyze and rationalize complex morphologies, free organic forms and advanced computational algorithms, as well as to produce photorealistic visualizations applicable to a variety of design projects. Over 5 days of intense work, exploration and feedback, participants will be guided in developing a more dynamic workflow that will let them exploit the full potential of the tools and strengthen their skills, aptitudes and capabilities.
Instructors: Leonardo Nuevo Arenas [Complex Geometry], José Eduardo Sánchez [DesignNest], Daniel Camiro/Luis de la Parra [Chido Studio]. http://issuu.com/chidostudiodiseno/docs/digprowork
See the program here: http://gendesigntech.wordpress.com/program/ To register, please visit: http://gendesigntech.wordpress.com/registro
…
three categories, each one corresponding to a different shapeType_ input:
- polygons (shapeType_ = 0): anything consisting of closed polygons: buildings, grass areas, forests, lakes, etc.
- polylines (shapeType_ = 1): non-closed polylines such as: streets, roads, highways, rivers, canals, train tracks ...
- points (shapeType_ = 2): any point features, like: trees, building entrances, benches, junctions between roads ... store locations: restaurants, bars, pharmacies, post offices ...
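One way to read the three categories is as a simple dispatch on the shapeType_ value. The mapping below is just an illustrative summary of the list above, not Gismo's internal code:

```python
# Illustrative mapping from the shapeType_ input to the class of
# OpenStreetMap geometry the "OSM shapes" component extracts.
# This dict is a sketch for explanation only, not part of Gismo's API.
OSM_SHAPE_TYPES = {
    0: "polygons (closed): buildings, grass areas, forests, lakes",
    1: "polylines (open): streets, roads, highways, rivers, canals, train tracks",
    2: "points: trees, building entrances, benches, road junctions, store locations",
}

def shape_type_description(shape_type):
    """Return a one-line summary of what a given shapeType_ value extracts."""
    try:
        return OSM_SHAPE_TYPES[shape_type]
    except KeyError:
        raise ValueError("shapeType_ must be 0, 1 or 2")
```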
So basically, when you run the "OSM shapes" component with shapeType_ = 2, you will get a lot of points. If you would like to get only 3d trees, run the "OSM 3D" component and it will create 3d trees from only those points which are in fact trees. You can also check which points are trees by looking at the exact location on openstreetmap.org. For example:
Or use the "OSM Search" component, which will identify all trees among the points, regardless of whether 3d trees can be created or not. However, when it comes to 3d trees there is a catch:
Sometimes the geometry which Gismo streams from OpenStreetMap.org does not contain a "height" key, or it does contain it but the value for that key is missing. OpenStreetMap is a freely editable map database, so anyone with internet access and a free registered account on openstreetmap.org can add features (like trees) to the map database. However, regular people often do not have the height-measuring devices needed for specific objects such as trees. So the "OSM 3D" component will generate 3d trees from only those tree points which contain a valid "height" key.
However, a small workaround is to input a domain (range) into the randomHeightRange_ input of the "OSM 3D" component (for example: "5 to 10"):
This will create 3d trees for the points that have no defined height, by randomizing their heights within that range. The randomHeightRange_ input can also be applied to 3d buildings, and it is definitely something I need to write a separate article on.
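The height-handling logic described above can be sketched roughly like this. This is a minimal illustration, not Gismo's actual code; the function name and the simplified "height" parsing are my own assumptions:

```python
import random

def tree_height(tags, random_height_range=None):
    """Return a usable height for a tree point, or None if no 3d tree can be built.

    tags: dict of OSM key/value pairs for the point feature.
    random_height_range: optional (min, max) tuple mimicking the
    randomHeightRange_ input (e.g. (5, 10)).
    Parsing is simplified: real OSM "height" values may carry units.
    """
    height = tags.get("height")          # the key may be missing entirely
    if height not in (None, ""):         # key present with a valid value
        return float(height)
    if random_height_range is not None:  # workaround: randomize the height
        low, high = random_height_range
        return random.uniform(low, high)
    return None                          # no height -> no 3d tree

# A tree point whose "height" tag is missing gets a randomized height:
print(tree_height({"natural": "tree"}, random_height_range=(5, 10)))
```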
In the end it may be that nobody mapped the trees in the area you are looking for.
After you map a tree on openstreetmap.org, it will instantly be available to you or any other Gismo user. I will be adding some tutorials in the future on how this can be done, but probably not in the next couple of weeks.
Let me know if any of this helps, or if I completely misunderstood your issue.…
Added by djordje to Gismo at 3:52am on February 8, 2017
ing results and I think it is based on the assumption of small displacements. That’s why I want to try with LaDeform.
But doing this I met some problems. I tried to experiment with it on the small examples that are provided with Karamba:
1. LaDeform in load-controlled behavior
I know Karamba has mainly been created for form-finding rather than properly precise calculations, but I’d like to evaluate the deformations of my structure under certain loads (load-controlled). It is said that MaxDisp should be left at its default value (-1).
[Rhino view for deflection of the rope]
In this example, derived from a Karamba example (Large_Deformation_Rope.gh), the program shows different ways to get approximately equal maximum deflection. But, digging into it, I realized the Load Multiplier for gravity is different from one model to another (-3.237 for Analyze Th1 and -134 for LaDeform). So what is the point of the example if the quite similar deflection shapes are not obtained under the same loadings? (quite different loadings, indeed)
Doesn’t it show, on the contrary, that the LaDeform algorithm does not work properly, if you need to change the load multiplier?
The Grasshopper file is shown below.
2. MaxDisp
When I use the “max disp” mode, I impose the deformation, but how can I get the value of the virtual force exerted (which I don’t know, because now the displacement is imposed)? What is its link with the imposed deflection?
Otherwise I can’t figure out how to use it with displacement-controlled loading.
3. Iterative process
As it seems impossible to use the LaDeform process directly, I tried to test it by iterations, as you recommend on the forum, where you say it is equivalent to an iterative Analyze Th1 process.
I tried to reproduce this loading, but the result is not very encouraging, as you can see. The Rhino file shows the progressive loading, with the corresponding Grasshopper files, where I:
- disassemble the model,
- get the previous deformed model,
- add another part of the load,
- re-assemble and then calculate on the previously deformed shape.
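The iterative procedure above (apply a load increment, solve, re-assemble on the deformed shape) can be sketched with a toy one-degree-of-freedom model. This is purely illustrative and not Karamba code; the stiffening law is an invented stand-in for geometric nonlinearity:

```python
def stiffness(offset, k0=10.0):
    # Toy stiffening law: the structure gets stiffer as it deforms
    # (e.g. a rope gaining geometric stiffness). Purely illustrative,
    # not a real structural model.
    return k0 * (1.0 + offset)

def incremental_analysis(total_load, steps):
    """Apply the load in increments, each linear solve using the
    stiffness of the previously deformed configuration."""
    offset = 0.0
    increment = total_load / steps
    for _ in range(steps):
        # small-displacement solve on the current (deformed) geometry
        offset += increment / stiffness(offset)
    return offset

# A single linear solve ignores the stiffness gained along the way,
# so it gives a larger deflection than stepping the same total load:
one_shot = incremental_analysis(100.0, 1)
stepped = incremental_analysis(100.0, 20)
```

This is why an iterative Analyze Th1 run and a one-shot calculation can legitimately disagree; whether it explains a factor of 5 in your case is a separate question.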
Do you have any idea why the answer is not the same? (LaDeform seems to give about 5 times less for the same loadings) (and even when controlling it by displacements, the shapes do not fit as the principle of the algorithm would suggest)
[RhinView for Iterative process]
First step by Analyze Th1, and result by LaDeform
4. Analyze Th1 after LaDeform?
Some Karamba tutorials show an Analyze Th1 analysis being made immediately after a large-deformation calculation. What is the reason for this? It sometimes seems to change the result considerably. What is the sense of such an operation? Would it mean that LaDeform is not trustworthy?
My question is then: is there a way to make the use of LaDeform affordable and coherent for purposes other than form-finding? If I am using it wrongly, where?
Thank you very much for your help,
…
administration, education and consumption, the contemporary world can be increasingly conceived as a global and systemic environment. All our activities are profoundly influenced by a new condition of fluidity and interdependence of various, and very often unpredictable, parameters and factors, introducing us progressively to a systemic and parametric understanding of the world and our position in it.
Architecture and the building process reflect this new conception of the world by redefining themselves according to new principles and means. The fast development of digital techniques to simulate, represent and generate architecture promises a continuous design process, including the seamless transfer of information between the involved parties and making performance a key issue in the planning process. In this process, concepts of adaptability, transformability and flexibility are replacing already tested and secure solutions, customization is replacing standardization and metrics, and digital tools are replacing analogue representations. In these new conditions the scaleless and the seamless appear as the two key pillars of the requested integration in contemporary architectural practice and education.
Do the design and planning practices and construction industries respond with digital synergies to these new requests? Can the curricula of architecture schools escape from the dominance of traditional fragmentation within their structure and the organisation of the modules and academic units towards more holistic concepts and workflows? How can the traditionally separate courses offered by departments and modules of architectural education institutions be redefined in order to assure a scale-less and seamless thinking about form, materiality and its social and cultural representations, its environmental aspects and its urban and contextual references?
The organisers are inviting architects, teachers and researchers of architecture in Europe to present their views, research outcomes and teaching experiences related to the theme of the Conference.
An abstract of 600-700 words must be submitted by September 5, 2012. Please indicate into which of the five aforementioned themes your abstract falls. You will be asked to submit your final paper by the 22nd of October 2012 for the publication of the proceedings, which will be distributed to all EAAE/ENHSA school members.
For any further queries please do not hesitate to contact us on info@enhsa.net or info@scaleless-seamless.org…
th the most crucial and imposing challenges that Mexico City faces and the ways in which architecture and urbanism can shape the metropolis at different scales. In this sense the programme sees the city as a laboratory where the virtual and experimental tradition of the Architectural Association finds fertile and concrete ground for the application of its methodology in Mexico.
“Manufactured Landscapes/Manufactured Urbanities” explores the metropolitan condition understood as a manufactured process by and for human beings. Hence the traditionally opposed concepts of artificial vs nature are replaced under the premise that nature does not exist: nature is not natural but naturalised, and the artificial is not an external or imposed construct but something manufactured intrinsically.
With this as a starting point, the programme will study two instances of Mexico City’s “Manufactured Landscapes/Manufactured Urbanities”: the ravines in the west of Mexico City, last bastion of the existing “Nature” and crucial to the viability of the city; and social housing, the fundamental construct of the “artificial” habitat in the metropolis’s urban tissue. These “Manufactured Landscapes/Manufactured Urbanities”, and the ways in which they are designed, produced, reinvented and regenerated, show a vast spectrum representative of the crucial urban conditions to be addressed, and therefore pose an enormous urban and architectonic challenge to confront in order to apply contemporary design methodologies.
To tackle the complexities of the “Manufactured Landscapes/Manufactured Urbanities”, the programme will immerse students and staff in a 10-day intensive workshop within a multidisciplinary environment where national and international experts from various fields will enrich their proposals. Students will work in teams at the architectural and/or urban scale and will critically assess the impact of their interventions across multiple scales.
A backbone of lectures, talks and seminars, including local and international speakers, is designed to broaden and reflect the relevance and importance of the topic for Mexico City. Finally, a public exhibition of students’ work will be held at the Centro Cultural de España in autumn 2013.
…
ectual property that goes nowhere:
In my opinion it's very difficult to determine when someone's intellectual work becomes actual property that you should be able to protect.
There's a big difference between intellectual property and other types of scarce property (like a computer, a chair, etc.). Usually, it's a good idea that scarce resources are bought and sold in the market instead of shared, because the price mechanism (supply and demand) determines their best possible use at that given moment. Intellectual property, on the other hand, is not scarce once it has been created, so if a 5-year-old with an internet connection downloads a Grasshopper definition I created, it's not preventing an architect from using it for a more suitable purpose. Just like, in a practical sense, the more air I breathe doesn't mean the less air other people have left to breathe, because there is so much air that the abundance could be assumed (today, at least) to be infinite. So trading air in the marketplace is nonsensical.
The only reason for copyright and patent laws to artificially make a particular piece of intellectual property scarce is so that people have an economic incentive to innovate and create new intellectual property. The advances in innovation should offset the artificial scarcity.
If that last point is true, it should be a good thing that people are not giving things away for free but rather selling them, because it promotes innovation; but I'm personally not sure this is true. McNeel would probably agree with the last point to some extent and say that maybe patent laws go too far, but that the copyright laws protecting Rhino and Grasshopper (even though Grasshopper is free right now, it is still 'owned' by McNeel) should be in place.
So I end up where I started: it's very difficult to determine when it's a good idea (not just for an individual but in general) to sell or share this stuff.
If someone is interested in an extreme anti-intellectual-property rant from someone who otherwise defends private property, see this guy: http://www.youtube.com/watch?v=oRqsdSARrgk
…
ld see were the set of basic tutorials. I've run through a few other folk's video tutorials also.
The test case I chose because it is a super simplification of an actual space I'm trying to model (a large school sports complex - see below). I've modelled it as a closed volume with a few solid objects inside it; the real space is much less box-shaped, with a ceiling that is not flat and a significant lattice of acoustic panelling that encloses the roof trusses.
The volume of this space is around 50,000 cubic metres, which, if I followed the guideline of 50-100 rays per cubic metre, would mean 2.5-5 million rays. I ran a simulation on the simplified test box with 100k rays, which took about 2 hours on a MacBook Pro booted into Windows. Perhaps I need to find a much more serious machine to run this on. Would it be a reasonable assumption that as more rays are added, the results converge on a particular solution? If so, if you had to guess, how many rays/m3 would be required to get a solid estimate of reverb time +/- 0.1 s?
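For reference, the ray-count arithmetic above can be written out explicitly. The linear runtime extrapolation is my own assumption (ray-tracing cost roughly proportional to ray count), not anything Pachyderm guarantees:

```python
# Estimate the recommended ray count for a room, following the
# 50-100 rays per cubic metre guideline mentioned above.
def recommended_rays(volume_m3, rays_per_m3=(50, 100)):
    low, high = rays_per_m3
    return volume_m3 * low, volume_m3 * high

# Naive runtime extrapolation from a measured benchmark, assuming
# simulation time grows linearly with the number of rays
# (an assumption, not a Pachyderm guarantee).
def estimated_hours(target_rays, benchmark_rays=100_000, benchmark_hours=2.0):
    return benchmark_hours * target_rays / benchmark_rays

lo, hi = recommended_rays(50_000)
print(lo, hi)               # 2,500,000 to 5,000,000 rays for this volume
print(estimated_hours(lo))  # ~50 hours at the benchmark machine's pace
```

Under that (crude) linear assumption, even the low end of the guideline would take on the order of days on the same laptop, which is why convergence behaviour matters so much here.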
I don't mean to imply that Pachyderm isn't up to scratch - simply that I'm trying to find some way of determining whether a given set of simulation parameters will give a result good enough to make decisions about the surface materials and treatments that will be required. I tried a bunch of different methods and simulation parameters to see if they were even remotely similar, and unsurprisingly, they weren't. I'm not an acoustic engineer; I'm an architect who has studied some acoustics in addition to my regular subjects. I know enough to be dangerous, but I'm trying to convert that into enough to be useful. :) I'm totally open to any advice anyone might offer.
One last thing: could you confirm that the T-30 parameter is in fact T-30 (and so needs to be doubled to get RT60)?
Thanks for responding,
Ben
…
ys to make use of it.
What it does...
This plug-in allows one to "connect" a Rhino document with Grasshopper documents (referred to throughout the plug-in as pairing), so that you can remember which Grasshopper documents are used with, or reference data from, the Rhino document.
How to use it...
Right now, the plug-in is just one command, "PairGHFiles", which has five (5) different options:
- PairAllActiveGHDocs - pairs all of the documents that are currently active in the GH Editor to the current Rhino document.
- PairSelectedGHDocs - shows a dialog that allows you to pick from all the currently active documents in the GH Editor; the selected documents will be paired to the current Rhino document.
- OpensAllPairedGHDocs - opens all the GH documents that are currently paired with the Rhino document.
- RemovePairedGHDocs - shows a list of the currently paired GH documents and allows you to select which ones to remove.
- CurrentlyPairedGHDocs - prints to the command line all of the GH document paths that are currently paired to the Rhino document.
The plug-in automatically saves all the necessary data, so you don't need to remember to save any additional files. Do keep in mind that only GH documents that have been saved and have a valid path can be paired to the Rhino document.
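Conceptually, pairing boils down to keeping a set of saved GH file paths attached to a Rhino document. A minimal sketch of that bookkeeping follows; the class and method names are hypothetical, and the real plug-in stores this data inside the Rhino document itself rather than in a dict:

```python
import os

class PairingStore:
    """Toy model of pairing: map each Rhino document to the set of
    Grasshopper file paths associated with it."""

    def __init__(self):
        self._paired = {}  # rhino doc name -> set of GH file paths

    def pair(self, rhino_doc, gh_path):
        """Pair a saved GH file with a Rhino document.
        Mirrors the rule above: only GH documents with a valid
        (saved) path can be paired."""
        if not os.path.isabs(gh_path):
            raise ValueError("GH document must be saved to a valid path first")
        self._paired.setdefault(rhino_doc, set()).add(gh_path)

    def currently_paired(self, rhino_doc):
        """Return the GH document paths paired to a Rhino document."""
        return sorted(self._paired.get(rhino_doc, set()))

    def remove(self, rhino_doc, gh_path):
        """Unpair a GH document from a Rhino document."""
        self._paired.get(rhino_doc, set()).discard(gh_path)
```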
Installation
Place the rhp file in a safe, static location, then drag and drop it onto
a running instance of Rhino. Or run the PlugInManager command,
click the Install button towards the bottom of the window, and choose
the rhp file.
If anyone has any questions, feedback, suggestions, or issues, feel free
to post here or email me. Also, for people looking to do the "opposite"
of this (pairing a Rhino Document to a GH Document), check out Visose's
post below.
http://news2.mcneel.com/scripts/dnewsweb.exe?cmd=article&group=rhino&item=353734&utag=
This plug-in is provided without any written or expressed guarantee. By
downloading and installing the plug-in you release the author of any
liability in regards to anything this plug-in may or may not do.
Best Regards,
Damien
Develop | Research | Design
e| damien[AT]liquidtectonics.com
w| liquidtectonics.com…
Added by Damien Alomar at 12:27pm on October 26, 2010