the results myself and I am open to changing the name/description of the input based on what you have found here. modulateFlowOrTemp is not the best name for what seems to be going on, and we should change it to better reflect what is happening in the IDF.
Here is how I am understanding the results of the different cases:
1) When the variable flow option is selected (and the outdoor air set to "None"), the heating and cooling of the space seems to happen only through re-circulation of the indoor air. My comparison to a VAV system was not appropriate; perhaps it would be better to compare it to a window air conditioner or a warm air furnace, both of which, as far as I understand, only re-circulate indoor air and do not bring in outside air.
2) My reasoning for the name modulateFlowOrTemp came mostly from my realization that the supply air temperature remained within the defined limits when the variable flow option is selected (and the outdoor air set to "None"). When the outdoor air was set to Maximum or Sum, the supply air temperature went way out of the temperature limits that I initially set. I realize now that the flows are varying in both cases and the name of the input really must change.
3) I think that the reason why we don't see any effect from the air side economizer is because the heating/cooling energy results that you get from an ideal air system are just the sum of the sensible and the latent heat added/removed from the zone by the system. This value of heat added or removed from the zone does not change whether the added/removed heat comes from outside air or from a cooling/heating coil. Since there is no cooling coil or boiler or chiller in an ideal air system, there is no way to request an output of the energy added/removed by such a coil or chiller as opposed to that removed/added by outside air. In other words, the air side economizer option on the ideal air system is practically useless because it does not help us differentiate the cooling that comes from the outside air vs. that which comes from a coil. All that it does is change the outdoor air fraction while keeping the reported cooling/heating values the same.
Please let me know if you think that this explanation makes sense, Burin, and, in light of all this, I am very interested in your suggestions.
From my own perspective, I am now convinced that the default should definitely have the outside air requirements set to "None" since, otherwise, we cannot distinguish cooling/heating that happens from addition of outside air and that which must be supplied by a coil. At least when we get rid of the outside air requirement, we can be sure that the ideal air system values are only showing heating/cooling from a coil or HVAC system.
I have decided to remove the airsideEconomizer input since it seems to give misleading expectations. From here on out, I am going to recommend that, if you want to estimate the effect of increasing outside air on cooling, you use the "Set EP Airflow" component with fan-driven natural ventilation and connect a custom CSV schedule of airflow. You will have to create such a schedule with native GH components using the outside air temperature, your zone setpoints, and the times that you are cooling in your initial run of E+. Either you do this or you set up a full-blown system with OpenStudio.
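Since the ideal air system itself can't separate coil cooling from outside-air cooling, the schedule has to be built outside E+. As a rough illustration of the kind of logic involved (the function name, setpoint value, and toy data here are all hypothetical, not part of any component), a Python sketch might mark an hour as ventilation-useful whenever the zone is calling for cooling and the outdoor temperature is below the cooling setpoint:

```python
import csv

# Hypothetical sketch: build an airflow schedule where outside air is useful
# for cooling (outdoor temp below the cooling setpoint while the zone is
# calling for cooling). Names and thresholds are illustrative only.
cooling_setpoint = 24.0  # deg C (assumed zone cooling setpoint)

def airflow_schedule(outdoor_temps, cooling_loads):
    """Return 1.0 for hours where ventilation cooling helps, else 0.0."""
    schedule = []
    for t_out, load in zip(outdoor_temps, cooling_loads):
        useful = load > 0.0 and t_out < cooling_setpoint
        schedule.append(1.0 if useful else 0.0)
    return schedule

# Toy data for 5 hours: outdoor temps and cooling loads from an initial run
temps = [18.0, 22.0, 26.0, 30.0, 20.0]
loads = [0.0, 1.5, 2.0, 2.5, 1.0]
sched = airflow_schedule(temps, loads)

# Write the single-column CSV that the custom schedule input would read
with open("airflow_schedule.csv", "w", newline="") as f:
    csv.writer(f).writerows([[v] for v in sched])
```

In practice you would feed in the full 8760 hourly values from your initial E+ run rather than toy lists.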
I have also decided to get rid of the heatRecovery input since it seems like this will also produce misleading expectations by the same logic.
Lastly, I am going to change the name of the modulateFlowOrTemp_ input to outdoorAirReq_. The default will be to have no outdoor air requirement, as stated above, but you can input either "maximum" or "sum" to have the IDF run accordingly.
Let me know if this sounds good or if you have suggestions. Updated GH file attached. The github has the new Ideal Air Loads component. Make sure that you have synced correctly and restart GH after updating your components.
-Chris…
Quick answers. Below you will find some suggestions, but don't think of them as rules and especially don't think of them as guarantees.
1. Choose a descriptive title for your post
Don't call your question "Help!" or "I have a problem" or "Deadline tonight!", but actually describe the problem you are having.
2. Be succinct but clear in your wording
People need to know some details about your problem in order to understand what sort of answers would satisfy you, but nobody cares about how angry your boss is, how bad your teacher is, or how tight your deadline is. Talk about the problem and only the problem. If you don't speak English well, you should probably post in your native language as well as providing a Google translation of your question.
3. Attach minimal versions of all the relevant files
If you have a GH/GHX file you have a question about, attach it to the post. Don't expect that people will recreate a file based on a screen-shot because that's a lot of pointless work. It's also a good idea to remove everything non-essential from a GH file. You can use the 'Internalise Data' menu option to cut everything to the left of a parameter:
If you're importing curves or Breps or meshes from Rhino, you can also internalise them so you won't have to post a 3DM file as well as a GH file. If you do attach large files, consider zipping them first. Do not use RAR, Ning doesn't handle it.
It is especially a good idea to post files that don't require any non-standard components if at all possible. Not everyone has Kangaroo or Hoopsnake or Geco installed so if your file relies on those components, it might not open correctly elsewhere.
4. Include a detailed image of the GH file if it makes sense
If your question is about a specific (group of) components, consider adding a screenshot of the file in the text of the post. You can use the Ctrl+Shift+Q feature in Grasshopper to quickly create nice screenshots with focus rectangles such as this:
5. Include links to online resources if possible
If you have a question about Schwarz Minimal surfaces, please link to a website which talks about these.
6. Create new topics rather than continuing old ones
It's usually better to start a fresh question, even if there's already a discussion that kinda sorta tangentially touches upon the same issue. Please link to that discussion, but start anew.
7. This is not a 'do my work for me' group
Many of us like to help, but it's good to see effort on our part being matched by effort on your part. Questions in the form of 'I need to do X but cannot be bothered to try and learn the software' will (and should) go unanswered.
7b. Similarly, questions in the form of 'How do I quickly recreate this facade that took a team of skilled professionals four months to figure out?' have a very low success rate.
--
David Rutten
Lead Grasshopper Development
Robert McNeel & Associates…
Added by David Rutten at 12:58pm on October 1, 2013
----------------------------------------------------------------------------------------------
1)
Hi Clemens, I've analysed a plate structure using Karamba and wanted to do a convergence analysis on results computed as a function of the number of elements.
Now, when strictly looking at the result magnitudes of internal energy (IE) and maximum displacement (w_max), it's acceptable that their relative deviations are very small. But I cannot explain the tendencies of their graphs. From what I know, FEM should always compute underestimated results when compared to analytical solutions. So I don't understand why both the IE and w_max seem to be decreasing for an increasing number of elements.
But my main concern is the behaviour of the peak moment: it seems to be simply hill climbing until suddenly a singularity kicks in. I initially wanted to use the peak moment as a fitness value for optimisation, but with this behaviour, I don't think that would make sense. I've attached my GH file as well.
It would be much appreciated if you could enlighten me on these subjects. Cheers, Daniel Andersen
2)
Hi Daniel,
I could not run your definition because I don't have all the plug-ins installed that you use.
You are basically right that the displacement should increase with a finer mesh. However, the result of the shell analysis also depends on the shape of the triangles (well formed vs. very distorted). In order to test this, I think it would be interesting to use a very simple example (e.g. a rectangular plate with one column) where you can easily control mesh generation. Would you like to start a discussion on this in the karamba group at http://www.grasshopper3d.com/group/karamba?
It is not a good idea to use the bending moment at a singularity for optimization because the result will be heavily mesh dependent. Also real columns do have a certain diameter and modeling them as point supports introduces an error.
Best,
Clemens
3)
oh, and by the way!
Here's some relevant literature on handling peak moments: https://books.google.dk/books?id=-5TvNxnVMmgC&pg=PA219&lpg=PA219&dq=blaauwendraad+plates+and+fem&source=bl&ots=SdDcwnrSA1&sig=6HulPmKNIhqKx4_rGxitteMC4CU&hl=da&sa=X&ved=0CDEQ6AEwA2oVChMIg66k0LPaxgIVgY1yCh1KPAeY#v=onepage&q=chapter%2014&f=false (Blaauwendraad, J., 2010. Plates and FEM : Surprises and Pitfalls, see Chapter 14) It would be great if a feature dealing with peak moments could be incorporated in Karamba. In my work, I ended up exporting my models to Robot in order to verify the moment values. Best, Daniel
4)
Hi Daniel,
thank you for your reply and the link to Blaauwendraad's excellent book!
At some point I hope to include material nonlinearity in Karamba which will help in dealing with stress singularities.
If you want you could open a discussion with a title like 'moment peaks in shells at point-supports'. Then we could copy and paste the text of our conversation into it.
Best,
Clemens
----------------------------------------------------------------------------------------------…
but rather than keep everyone waiting, I've decided to share some as they become ready.
This also has the advantage that questions about components can be more easily grouped under the relevant post - so please do add any questions / comments / bugs / suggestions about these examples below.
So today I am posting some examples of the mesh utilities that come with the new release.
While these are not directly physics based, many of the forces and types of relaxation in Kangaroo are designed to work with meshes, and in the process of development I've ended up adding a number of simple utilities to make working with them a little easier.
I recommend also installing Weaverbird which has many more subdivision functions and other useful tools for working with meshes in Grasshopper. Also Plankton, Turtle, MeshEdit and Starling extend these possibilities still further.
Diagonalize
This component replaces every edge of a mesh with a new face. The new faces will always be quads, except along the boundaries, where they will be triangles. It can be used to easily create diagrids. The input mesh can contain any mix of triangles and quads.
When treating the edges of a quad mesh as springs, diagonalizing it will often significantly change its physical behaviour. If you are trying to planarize a quad mesh, diagonalizing may sometimes allow you to stay closer to a target shape if it matches the curvature directions better.
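As a rough illustration of what the edge-to-face replacement involves (this is my own sketch on a plain vertex/face-list mesh, not Kangaroo's actual code), each undirected edge can be paired with the centres of its adjacent faces; interior edges yield quads and boundary edges yield triangles:

```python
# Illustrative sketch of diagonalization on a face-list mesh: replace each
# edge with a new face built from the edge's two endpoints and the centres
# of its adjacent faces. Interior edges (two adjacent faces) give quads;
# boundary edges (one adjacent face) give triangles.

def face_center(pts, face):
    n = len(face)
    return tuple(sum(pts[i][k] for i in face) / n for k in range(3))

def diagonalize(pts, faces):
    # Map each undirected edge to the indices of the faces that use it
    edge_faces = {}
    for fi, face in enumerate(faces):
        for a, b in zip(face, face[1:] + face[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)

    # Append one centre point per face; remember its index
    new_pts = list(pts) + [face_center(pts, f) for f in faces]
    centers = {fi: len(pts) + fi for fi in range(len(faces))}

    new_faces = []
    for edge, adj in edge_faces.items():
        a, b = sorted(edge)
        if len(adj) == 2:          # interior edge -> quad
            new_faces.append([a, centers[adj[0]], b, centers[adj[1]]])
        else:                      # boundary edge -> triangle
            new_faces.append([a, centers[adj[0]], b])
    return new_pts, new_faces

# Two unit quads side by side: 7 edges -> 6 boundary triangles + 1 quad
pts = [(0,0,0),(1,0,0),(2,0,0),(0,1,0),(1,1,0),(2,1,0)]
faces = [[0,1,4,3],[1,2,5,4]]
_, nf = diagonalize(pts, faces)
```

Note that face orientation is not made consistent here; a real implementation would also unify winding.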
diagonalize.gh
Checkerboard
This component assigns the faces of a mesh into a checkerboard pattern. The output is a list of 1s and 0s (which could represent black/white or true/false) which can be used to dispatch the faces into 2 lists, where no pair of adjacent faces have the same colour.
One nice application I found for this is applying alternating clockwise and counter-clockwise rotations as shown below. Also, on occasion you may want to planarize a quad mesh, but have some constraints on the shape and grid that prevent this, and triangulating only alternating quads to give a hybrid quad/tri mesh can sometimes be a good compromise, allowing a bit more freedom.
Note - Not all meshes can be assigned a checkerboard pattern!
As a simple example, take a mesh with 3 quads around one vertex - If we assign one black face, then both the neighbouring faces should be white, but then we have 2 white faces adjacent to one another, which violates the checkerboard condition.
Generally, we can say that if a mesh has any internal vertex with an odd number of faces around it, then we cannot apply a consistent checkerboard pattern to it (although not having any odd valence vertices is not in itself an absolute guarantee that a mesh is 'checkerboardable').
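The existence test above is exactly graph 2-colouring of the face-adjacency graph. A minimal sketch (my own illustration, not the component's source) attempts the colouring with a breadth-first search and reports failure when two adjacent faces are forced to the same colour:

```python
from collections import deque

# Sketch: treat faces as graph nodes, connect faces that share an edge,
# and attempt a 2-colouring by BFS. Returning None means the mesh is
# not 'checkerboardable'.

def checkerboard(faces):
    edge_faces = {}
    for fi, face in enumerate(faces):
        for a, b in zip(face, face[1:] + face[:1]):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)

    adjacency = {fi: set() for fi in range(len(faces))}
    for adj in edge_faces.values():
        if len(adj) == 2:  # interior edge connects two faces
            adjacency[adj[0]].add(adj[1])
            adjacency[adj[1]].add(adj[0])

    colors = {}
    for start in range(len(faces)):
        if start in colors:
            continue
        colors[start] = 0
        queue = deque([start])
        while queue:
            f = queue.popleft()
            for g in adjacency[f]:
                if g not in colors:
                    colors[g] = 1 - colors[f]
                    queue.append(g)
                elif colors[g] == colors[f]:
                    return None  # odd cycle: no checkerboard pattern exists
    return [colors[fi] for fi in range(len(faces))]

# A 2x2 grid of quads (vertices 0..8) is checkerboardable:
grid = [[0,1,4,3],[1,2,5,4],[3,4,7,6],[4,5,8,7]]
print(checkerboard(grid))
```

The 3-quads-around-one-vertex example from the text produces a 3-cycle in this graph, so the function returns None for it.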
checkerboard.gh
WarpWeft
This sorts the edges of a quad mesh into 2 lists of line segments, which are like the warp and weft directions of a fabric. They can also be seen as a sort of mesh equivalent to the u and v isocurves on a NURBS surface.
This can be useful if you want to control the shape of a tension structure, because it allows you to assign different stiffnesses in the 2 directions.
As with the checkerboard component, not all meshes can be consistently assigned warp/weft directions. It follows a similar rule - all internal vertices should have an even number of adjacent faces. With a bit of care, it is usually possible to model the initial mesh in such a way as to allow this.
This component also has an output telling us whether or not each line is on a boundary of the mesh, as we will often want to treat these differently.
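One way to picture the sorting (an illustrative sketch for pure quad meshes, not the component's source) is as constraint propagation: opposite edges of a quad must share a family, adjacent edges must differ, and a breadth-first sweep either labels every edge 0 or 1 or hits a contradiction at an odd-valence vertex:

```python
from collections import deque

# Sketch: label each edge of a quad mesh 0 (warp) or 1 (weft). Within every
# quad, the opposite edge gets the same label and the two adjacent edges get
# the other label. BFS propagates labels; a conflict means no consistent
# warp/weft split exists.

def warp_weft(faces):
    same, other = {}, {}
    for face in faces:  # assumes all faces are quads
        edges = [frozenset(p) for p in zip(face, face[1:] + face[:1])]
        for i, e in enumerate(edges):
            same.setdefault(e, set()).add(edges[(i + 2) % 4])
            other.setdefault(e, set()).update(
                {edges[(i + 1) % 4], edges[(i + 3) % 4]})

    labels = {}
    for start in same:
        if start in labels:
            continue
        labels[start] = 0
        queue = deque([start])
        while queue:
            e = queue.popleft()
            wanted = [(n, labels[e]) for n in same[e]] + \
                     [(n, 1 - labels[e]) for n in other[e]]
            for n, lab in wanted:
                if n not in labels:
                    labels[n] = lab
                    queue.append(n)
                elif labels[n] != lab:
                    return None  # inconsistent: no warp/weft split
    return labels

# Two quads in a strip: the four "horizontal" edges form one family,
# the three "vertical" edges the other.
faces = [[0,1,4,3],[1,2,5,4]]
labels = warp_weft(faces)
```

A real component would additionally report which edges lie on the boundary, as described above.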
Same mesh relaxed with different warp/weft stiffness:
warpweft.gh
MeshCorners
This one is hopefully fairly self-explanatory. In many simulations we want to anchor the corner points of a mesh. This saves us having to pick them manually in Rhino.
It works on quad meshes, and looks around the boundary vertices for any which do not have exactly 3 connected edges.
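A minimal sketch of that rule on a plain face-list mesh (my own illustration, not the component's code): boundary edges are those used by exactly one face, and a boundary vertex counts as a corner when its total edge valence differs from 3:

```python
# Sketch: on a regular quad grid, a mid-boundary vertex touches 3 edges
# while a corner touches only 2, so we flag boundary vertices whose edge
# valence is not 3.

def mesh_corners(faces):
    # Count how many faces use each undirected edge
    edge_count = {}
    for face in faces:
        for a, b in zip(face, face[1:] + face[:1]):
            e = frozenset((a, b))
            edge_count[e] = edge_count.get(e, 0) + 1

    valence = {}
    boundary_verts = set()
    for e, count in edge_count.items():
        for v in e:
            valence[v] = valence.get(v, 0) + 1
        if count == 1:  # edge on the mesh boundary
            boundary_verts.update(e)

    return sorted(v for v in boundary_verts if valence[v] != 3)

# 2x2 grid of quads, vertices numbered 0..8 row by row;
# the geometric corners are vertices 0, 2, 6 and 8.
grid = [[0,1,4,3],[1,2,5,4],[3,4,7,6],[4,5,8,7]]
print(mesh_corners(grid))
```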
corners.gh
That's all for now. Coming soon - a "mesh tools 2" post explaining more of the components.…
greatly appreciate it!!
You can write the number of the question and write your answer next to it, example:
1) a
2) c
3) a) Washington University in St. Louis
4) 2 weeks (1 week + 1 week shipping)
5) 130
6) b
7) b
The survey questions are as follows:
1) Did you 3D print before?
a. Yes, for a school project
b. Yes, for a personal project

2) Print size
a. Between 2 & 6 cubic inches
b. Between 6 & 12 cubic inches
c. Between 12 & 20 cubic inches
d. Please specify if otherwise: ____ cubic inches

3) Where did you print your object?
a. School
b. Outside school: _________________

4) How long did it take to print?
a. ___ days
b. ___ weeks

5) How much did it cost (in dollars)?
a. Between 20 & 50
b. Between 50 & 80
c. Between 80 & 120
d. Please specify if otherwise: _____ dollars

6) Do you think the price was expensive?
a. Not at all
b. A little bit expensive
c. Very expensive

7) Were you satisfied with the printed object?
a. Yes, it was a great print without problems
b. Not bad, some issues
c. I was not satisfied, very bad quality
Thank you very much to all!!
PS: If you did many 3D prints, you can post multiple answers.
Wassef…
a duration of 150 hours divided into four modules, starting on March 22, 2011 and ending in the second week of June, with sessions on Tuesdays and Thursdays from 18:00 to 22:00 and some Saturdays from 10:00 to 14:00.
The central theme of the diploma course is the integral use of digital tools in the design process, grounded in the theory of emergence (understood as obtaining complex results from the interaction of simple elements following rules of low sophistication).
The program concentrates on the practical application of these theoretical reflections through generative digital tools, mainly Grasshopper (a parametric modeling plug-in for Rhinoceros).
We will have two international collaborators: the first will be a member of LaN (Live Architecture Network), who will teach a course on advanced Grasshopper programming aimed at producing a built object, emphasizing the transition between the virtual, the analog, and the physical. The second is Jalal el Ali, Master in Architecture from the Architectural Association, leader of the Generative Geometry Unit at Buro Happold and currently a project leader at Zaha Hadid Architects, who will give an intensive course focused on digital tools and digital production, teaching processes he has applied at the firm where he works. Jalal will also deliver a keynote lecture.
The program promotes the use of new technologies and the integration of production processes from the moment of design conception, applying the theoretical knowledge to a physical object using the fabrication laboratory of the Universidad Iberoamericana.
…
design, construction, and understanding of our environment.
BIM is putting genuine databases at the disposal of designers and managers, databases that can be generated, connected, and edited parametrically, providing a solid layer of reality to the generative design and computation exercises studied at Algomad, the seminar that seeks to popularize programming and parametrization in design and in the experience of our built environment.
After a pause in 2015, Algomad returns with the aim of showing how a computational view of BIM is an opportunity to improve the way engineers, architects, contractors, and building and infrastructure operators work, bridging the gap between the most advanced digital design techniques and the reality of construction.
Algomad 2016 will take place in central Madrid, at IE School of Architecture and Design, IE University, on November 3, 4, and 5, 2016, and will comprise 4 workshops as well as talks by first-rate experts.
Structure of Algomad 2016
Algomad 2016 is structured around three main thematic areas:
BIM, as the comprehensive methodology specific to the construction sector.
Computation, encompassing the application of programming and parametrization to the design of buildings and infrastructure.
Reality, as the working framework, always seeking to solve real problems through the two areas above.
Target audience
Architects, technical architects, engineers, and, in general, academics, final-year students, and professionals from the real estate and construction industries who share an interest in the digitalization of our sector. A minimum level of proficiency with BIM and parametrization tools is expected. Algomad will provide additional free training in the basic tools used in the workshops to ensure participants can follow along.…
ively and creatively solve today’s product development challenges.
Our Rhino3D Foundations for Industrial Design class provides an in-depth look at 2D and 3D tools and methods with Rhino3D, a NURBS surface modeling software. In this class, we will systematically work through Rhino3D's core features, using them to model the various components of a consumer product. Over the course of 3 days, we'll cover some foundational topics, including the Rhino interface and navigation, Rhino3D object types and properties, creating and editing 2D and 3D geometry, procedural modeling, automation, transforming geometry, Rhino modeling best practices, freeform vs. precision modeling, and exporting geometry.
You’ll take away the following:
Navigate the Rhino modeling environment
Create, edit, and modify curves, surfaces, and solids
Precision model using coordinate input and object snaps
Use transformation and universal deformation tools
Apply best practices for layer management and model annotation
Download the course one-pager. Need more information? Connect with us.
This class is ideal for:
Industrial designers who are new to Rhino3D and want to learn its concepts and technical features in an instructor-led environment.
For groups of 10 or more, contact Mode Lab at hello@modelab.is
Interested in additional training options?
https://www.modelab.is/upcoming-computational-design-events…
e think. Also, it's easier to catch an error because the malicious component simply turns red/orange (in most cases). However, if you are adept at scripting, you are probably very used to recursive looping & conditional evaluation, which you miss majorly in GH (it is possible in very limited ways through using series components or comparer components). So an adept scripter may soon end up switching back to Rhinoscript unless they find the shift from VBscript to VB.net really fast & smooth (which is rare).
GH of course has the advantage of keeping it all 'alive' and changing things with sliders/graphs/image painting, compared to Rhinoscript which is a run-once operation -- so that's where one makes a choice between recursive looping (in RS) & live interactivity (in GH). I'd say RS mostly wins the battle because interactivity is fancy, but recursion can be a necessity.
Now to VB.net. The one barrier I have hit most often with GH is speed. If you are working on a fairly large data set, or doing a number of surface/polysurface/brep operations, you hit the performance ceiling real fast, which is when the interactivity becomes almost useless -- because it's nowhere close to real time anymore even if you had 12 GB of RAM. This is where VB.net steps in (a bit of clever scripting can make a really significant difference).
Working a series of geometric operations in a code component is much faster than doing it through native GH components, because each native component comes with tonnes of error-trapping code, preview generation (I think even if you turn it off, it's still being computed, only not displayed), etc., while with VB you can circumvent a lot of that.
GH still has a long way to go before it can handle geometry even remotely comparably to what GC/Catia* can do -- I am not sure if that is even the objective. For instance, I am currently working on a tower where all geometry is only meshes and polylines - no degree 3 curves, no surfaces/polysurfaces. This is because, if the entire tower is to stay 'alive', meshes are the lightest option with the amount of geometry being generated. And most of it is through code... there's only the sliders and a couple of other components that are GH native -- and it's still in GH due to the interactivity. (I think there's a vast potential with meshes that GH/Rhino are really not tapping into. There are all the building blocks, but no significant implementation. Giulio's Weaverbird plugin is just a small example.)
*GC/Catia cost significantly more than Rhino itself, and GH is a free plugin to Rhino. Moreover, these programs were written to be parametric modelling software from day 1, unlike GH, which is an add-on over the RhinoSDK, which was never developed from such a perspective. So a very, very unfair comparison there, but GH is becoming so significant that it's got a forum of its own -- gaining an almost 'independent software' status. I just hope the McNeel marketing people are not listening :)…
we're actually using PET sheets for our flexures. We try to design so that the flexures don't go through more than +/- 30 degrees of deflection. If the angular deflection is kept small, the lifetime can definitely be on the order of 1000000 cycles.
As for the design process (item 2), ideally the designer would be able to use a simple 3D CAD tool to design a model of a robot, and the geometry would be represented by dimensioning the individual parts in the model. Maybe there should be some parametric primitive kinematic building blocks like four bar linkages, box frames, etc. that a user could build up a robot from. But, the key functionality the tool needs to provide is for the designer to be able to visualize how the robot will move when it's fabricated. This could mean observing (or plotting) the motion of a leg, a wing, or a series of body segments. Ideally, then, the tool would generate an unfolding of the design. How this would work is still very vague - maybe the user would assist in the unfolding, maybe there would be an optimization routine that computes optimal unfoldings based on criteria like minimal waste, or fewest pieces (I would *not* constrain the problem to construction from a single monolithic piece as in origami). The biggest problem we have right now, is that our design process is totally divorced from fabrication. Even if we went through the trouble of extruding individual thin plates in Solidworks and creating an assembly for visualizing the kinematics of a mechanism, that particular representation doesn't transfer easily to the fabrication process because it's essentially monolithic.
Item 3: The 2D drawing is simply a drawing done manually in Solidworks. There are different layers for flexure cuts, outline cuts, and potentially any cuts to be made in the plastic flexure layer. Depending on the robot, there may be many separate pieces for different parts and linkages in a single robot. For example, the drawing for a robot containing a fourbar linkage may have the linkage laid out as a physically separate piece consisting of five rigid links connected by four flexure hinges. During assembly, the designer would then fold up that linkage and insert it into the robot wherever it's supposed to go. If you're curious you can see some sample 2D drawings for older designs here: http://robotics.eecs.berkeley.edu/~ronf/Prototype/ under the "Example Structures" heading.
I noticed Kangaroo seems to be a popular choice for physical simulations. I don't really even need to include forces like bending resistance - I'm happy to allow the design tool to approximate flexures as pin joint-type hinges. Once the design is unfolded, the details of how to cut the flexures could be worked out in a post-processing step. I wouldn't expect the tool to be able to realistically simulate the bending of the hinges.
I'm going to have to dig a lot deeper into understanding Grasshopper and Kangaroo. I only just got started with Grasshopper today, by following the folding plate tutorial on wa11ace.com.au.