round each gap is called a compact circle packing, and this isn't always possible to achieve exactly on every surface, but luckily for a sphere it is.
You can break the problem into 2 parts:
-The combinatorics, or connectivity, i.e. how many circles there are, and which is tangent to which. This is often represented as a mesh, where each vertex is the centre of a circle, and the edges link the centres of the circles which are tangent to each other.
-The sizes and centre positions. If you treat the combinatorics as fixed, you can then concentrate on optimizing the radii and locations of the circles to get them as close to tangent as possible.
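To make the second part concrete, here is a minimal 2D sketch of relaxing circle centres so that each tangency edge satisfies |c_i - c_j| = r_i + r_j. The centres, radii and edge list here are made up for illustration, and this is only the simplest possible projection scheme, not how Kangaroo implements its goals internally:

```python
import math

# Fix the combinatorics (which circles should touch), then repeatedly
# move the centres so the signed gap on each tangency edge goes to zero.
centers = [[0.0, 0.0], [1.8, 0.0], [0.9, 1.7]]   # made-up start positions
radii = [1.0, 1.0, 1.0]
edges = [(0, 1), (1, 2), (0, 2)]                 # the tangency graph

for _ in range(200):
    for i, j in edges:
        xi, yi = centers[i]
        xj, yj = centers[j]
        d = math.hypot(xj - xi, yj - yi)
        err = d - (radii[i] + radii[j])          # >0: gap, <0: overlap
        ux, uy = (xj - xi) / d, (yj - yi) / d    # unit vector from i to j
        # split the correction between the two circles
        centers[i][0] += 0.5 * err * ux
        centers[i][1] += 0.5 * err * uy
        centers[j][0] -= 0.5 * err * ux
        centers[j][1] -= 0.5 * err * uy

worst = max(abs(math.hypot(centers[j][0] - centers[i][0],
                           centers[j][1] - centers[i][1])
                - radii[i] - radii[j]) for i, j in edges)
print(f"worst tangency error: {worst:.1e}")
```

In the real problem the radii are optimized too and everything is constrained to the surface; this sketch only shows the tangency projection for fixed radii.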
I have done some work on solving these 2 parts simultaneously (see video here), and shared some scripts for this here.
Alternatively we can deal with them separately. For the combinatorics you could use something regular, based on subdivision (for a sphere you might want to start with an icosahedron). Alternatively you could use the remeshing tool I recently shared here. This can cover any surface with a mesh of almost equal edge lengths.
For the second part there is a force in Kangaroo which can optimize any triangulated mesh so that there is a packing of spheres centred on its vertices (and if the mesh is smooth, this sphere packing also leads to a circle packing). The file cp_mesh1 in the circle packing directory of the new collection of Kangaroo example files I recently posted shows this.
As for limiting to a small number of specified radii, this is still tricky, and impossible without compromising some of the other conditions. If you allow some variable gaps between the circles, you can replace each one with the closest from your set of radii. If you do not choose your radii in advance, but generate a packing with continuously varying radii then cluster them, it can give a better fit.
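The clustering step mentioned above can be as simple as a 1D k-means over the radii. Everything in this sketch — the radii, the value of k, the initialisation — is just illustrative:

```python
# Radii from a packing with continuously varying sizes (made-up values),
# grouped into k clusters with a basic 1D k-means, then each circle
# snapped to its cluster mean.
radii = [0.92, 1.05, 0.98, 2.1, 1.95, 2.02, 3.9, 4.1]
k = 3

# initialise the cluster centres spread evenly over the range
lo, hi = min(radii), max(radii)
centres = [lo + (hi - lo) * i / (k - 1) for i in range(k)]

for _ in range(50):
    clusters = [[] for _ in range(k)]
    for r in radii:
        nearest = min(range(k), key=lambda i: abs(r - centres[i]))
        clusters[nearest].append(r)
    centres = [sum(c) / len(c) if c else centres[i]
               for i, c in enumerate(clusters)]

snapped = [min(centres, key=lambda c: abs(r - c)) for r in radii]
print(sorted(set(snapped)))
```

After snapping, the packing has only k distinct radii, at the cost of reintroducing some variable gaps that a further relaxation pass would have to absorb.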
Alternatively you can give up the requirement that the packing be compact, keeping good tangency but allowing some gaps with more than 3 sides.
Circle packing is a beautiful and surprisingly deep topic. I'd also recommend taking a look at the work of Ken Stephenson, of Bobenko, Hoffmann and Springborn, and Mathias Höbinger's thesis, which goes into more detail about triangular meshes with tangent incircles.
…
l, you can find examples of parametric design using LB/HB, specifically the HB component pollinator workflows.
In these examples, a GH component (data recorder) is used to locally store either input parameters or output values of different model configurations and transmit them to pollinator. I can imagine, depending on how your facade is made parametric in GH, that you could save those input parameters (e.g. angle of surfaces or height of extrusion) and output variables for each iteration (e.g. annual shading).
This is a search process through the design space. I do think that if you set up the model this way, it would be OK that the components in the PV workflow reset after each iteration, as the results would be saved. There is even a really good visualization platform Mostapha has shared to go along with pollinator.
You can find examples of these workflows in the forum; simply search for pollinator. I have one that I shared somewhere as well; although it does rudimentary things, it might help.
This design space approach is a bit different from the optimization approach using components like Galapagos. It gives you an idea of the space of possible designs and allows you to compare alternatives. Plus, it usually allows me to avoid all these issues of losing results between components in the workflow.
I also find it very handy and much more efficient than simply letting a component optimize everything for me. However, the number of iterations can increase almost exponentially (or is it geometrically? I am always bad at this) with the range and number of your input parameters. So, if each square on the wall has more than a couple of input values for a few input parameters, I would expect this to take a long time. Thankfully, the components in the workflow will tell you exactly how many iterations to expect.
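To make the size of that design space concrete: it is the Cartesian product of the parameter value lists, so the iteration count is simply the product of the counts, which grows geometrically with the number of parameters. A tiny sketch (all parameter names and values here are made up for the example):

```python
# Each list holds the values one input parameter can take.
extrusion_angles = [15, 30, 45]      # 3 values
extrusion_heights = [0.2, 0.4]       # 2 values
panel_rotations = [0, 10, 20, 30]    # 4 values

# Total design-space size = product of the per-parameter counts.
parameters = [extrusion_angles, extrusion_heights, panel_rotations]
iterations = 1
for values in parameters:
    iterations *= len(values)
print(iterations)  # 3 * 2 * 4 = 24 model configurations to simulate
```

Adding one more parameter with just 5 values would already multiply this to 120 runs, which is why pruning values per parameter pays off so quickly.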
If this method is interesting to you and you follow it, I would suggest a few things to hasten the process: use only the squares above and to the sides of the PV panel, since the others won't really affect shading; select just 2 or 3 characteristic angles for the extrusions; and perhaps approximate energy production through annual shading numbers (since I imagine they have an almost linear relationship).
I do hope that I have understood what you want to do and that the above information helps. I'm sure Djordje will give much better feedback on the specifics of the PV workflow. I will try to keep this page saved so that I can send over the example once I'm back at work in the middle of next week.
Good luck!
Kind regards,
Theodore.
…
rent actors to work together in real time on an architectural project.
DixieVR was born from the idea that virtual reality could become a fantastic tool for architecture and architects, not only for virtual tours but for conception at its very core. Inspired by the efficiency of sandbox games, DixieVR will allow you to build a fully parametric 3D model from scratch in a very intuitive way, and to simulate various factors like natural and artificial light, gravity, and more. DixieVR is also multi-user oriented: several people, architects or not, can work together in real time on the same 3D model in the same shared immersive environment!
The project started in the Digital Knowledge department of Paris-Malaquais Architecture School.
The DixieVR software can be found here: dixievr.github.io
// Interoperability
DixieVR deals with .dix files. For more information about this file format, please refer to the Interoperability documentation of DixieVR.
You can use this DixieIO plugin for Grasshopper/Rhinoceros for exchanging data between DixieVR (PC) & DixieViewer (Android).
You can import or export objects at any time inside a DixieVR scene. The software also comes with a library of premade objects that you might find useful. Adding your own premade objects to this library can be a good habit.
If you are hosting a scene, you also have the option to open a .dix file directly from the main menu; this will load the last scene in which the geometry was saved.
// Plugin
The DixieVR plugin can be found in the Extra tab. It comes with 3 components and an example definition:
Dixie2Gh: imports DixieVR geometry into Grasshopper/Rhinoceros by reading a .dix file (up to 1000 beams and/or 750 faces).
G2D_Polylines: exports Grasshopper/Rhinoceros polylines to DixieVR by writing a .dix file (up to 1000 line segments).
G2D_Mesh: exports a Grasshopper/Rhinoceros mesh to DixieVR by writing a .dix file (up to 750 triangulated faces).
To install:
In Grasshopper, choose File > Special Folders > Components folder. Place the DixieIO_01.gha file there.
Right-click the file > Properties > make sure there is no "blocked" text.
Restart Rhinoceros or Unload Grasshopper.
// Contact - DixieVR
vr.dixie@gmail.com dixievr.github.io
- Oswald Pfeiffer oswaldpfeiffer.com
- Mathieu Venot mathieuvenot.com…
t defined from the discussion of radiation exchange between urban surfaces and the sky in urban heat island research (See Oke's literature list below). It will be affected by the proportion of sky visible from a given calculation point on a surface (vertical or horizontal) as a result of the obstruction of urban geometry, but it is not entirely associated with the solid angle subtended by the visible sky patch/patches.
So, I think using a purely geometric approach to approximate the Sky View Factor is not correct. The Sky View Factor calculation should be based on the first principle defining the concept: radiation exchange between the urban surface and the sky hemisphere:
(image extracted from Johnson, G. T., & Watson, 1984)
Therefore, I always refer to the following "theoretical" Sky View Factors calculated at the centre of an infinitely long street canyon with different Height-to-width ratios in Oke's original paper (1981) as the ultimate benchmark to validate different methods to calculate SVF:
So, I agree with Compagnon (2004) on the method he used to calculate SVF: a simple radiation (or illuminance) simulation using a uniform sky.
The following images show the results of a workflow I built in the procedural modeling software Houdini (using its Python library) according to this principle, calling Radiance to do the simulation and calculation. The SVF values calculated for different canyon H/W ratios (shown at the bottom of each image) are very close to the values in Oke's paper.
H/W=0.25, SVF=0.895
H/W=1, SVF=0.447
H/W=2, SVF=0.246
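As a sanity check, the infinite-canyon case also has a closed form: if I have the strip view-factor algebra right, at the floor centre the SVF reduces to cos(θ), where θ = atan(2H/W) is the elevation angle of the wall tops seen from the canyon centreline. A minimal sketch:

```python
import math

def canyon_svf(h_over_w):
    """Closed-form SVF at the floor centre of an infinitely long
    street canyon: psi = cos(theta), with tan(theta) = 2H/W."""
    return math.cos(math.atan(2.0 * h_over_w))

for hw in (0.25, 1.0, 2.0):
    print(f"H/W = {hw}: SVF = {canyon_svf(hw):.3f}")
```

This gives 0.894, 0.447 and 0.243 for the three ratios, which lands within about 1% of the simulated values above and matches Oke's benchmarks.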
It seems that the Sky View Factor calculated by the viewAnalysis component in Ladybug is not aligned with Oke's result for a given H/W ratio: (GH file attached)
According to the definition shown in this component, I assume the value calculated is the percentage of visible sky, which is a geometric calculation (shooting evenly distributed rays from the sensor point to the sky and calculating the ratio of rays not blocked by urban geometry?), i.e. the solid angle subtended by the visible sky patches, and it is not aligned with the original radiation-exchange definition of the Sky View Factor.
I'd suggest calling this geometrically calculated ratio of visible sky the "Sky Exposure Factor", which is true to its definition and method of calculation (see the paper on the Sky Exposure Factor below), so as to avoid confusion with the radiation-exchange-based Sky View Factor discussed in the urban climate literature.
Appreciate your comments and advice!
References:
SVF: definition based on first principle
Oke, T. R. (1981). Canyon geometry and the nocturnal urban heat island: comparison of scale model and field observations. Journal of Climatology, 1(3), 237-254.
Oke, T. R. (1987). Boundary layer climates (2nd ed.). London ; New York: Methuen.
Johnson, G. T., & Watson, I. D. (1984). The Determination of View-Factors in Urban Canyons. Journal of American Meteorological Society, 23, 329-335.
Watson, I. D., & Johnson, G. T. (1987). Graphical estimation of sky view-factors in urban environments. International Journal of Climatology, 7(2), 193-197. doi: 10.1002/joc.3370070210
Papers on SVF calculation:
Brown, M. J., Grimmond, S., & Ratti, C. (2001). Comparison of Methodologies for Computing Sky View Factor in Urban Environments. Los Alamos, New Mexico, USA: Los Alamos National Laboratory.
SVF calculation based on first principle:
Compagnon, R. (2004). Solar and daylight availability in the urban fabric. Energy and Buildings, 36(4), 321-328.
Paper on Sky Exposure Factor:
Zhang, J., Heng, C. K., Malone-Lee, L. C., Hii, D. J. C., Janssen, P., Leung, K. S., & Tan, B. K. (2012). Evaluating environmental implications of density: A comparative case study on the relationship between density, urban block typology and sky exposure. Automation in Construction, 22, 90-101. doi: 10.1016/j.autcon.2011.06.011
…
via MIDI controllers.
My idea is to link Pure Data to GH via UDP. Why Pure Data? Because I can relate data in it like in GH to generate numeric relations (and link them to audio generation).
So far I got PD and Processing to talk, but I can't get the data into Grasshopper.
I used these definitions to make PD and Processing talk http://ubaa.net/shared/processing/udp/ and this GHX to get the data into GH http://www.grasshopper3d.com/forum/attachment/download?id=2985220%3...
I got this data from this post, but the GH definition doesn't work for me. I have tried LAN definitions and "the engine" as well, but they both freeze, even if I send data through Processing or PD.
I have a lot of questions at this point:
1.- Why does Processing tell me that I am getting the data from different ports, while I'm using 6000?
2.- Why do I get no data out of the UDP definition, when it should say something like "waiting for data/port/etc." as defined in the C# capsule?
3.- Is there a direct way to get MIDI data (key and CC) into GH?
I also tried to use Firefly to get the data via a COM port. I know you can do this trick in Processing, but I just don't know how.
Well, if anyone could help me, I would share the results here (since it's a master's project, the results should be very interesting).
UDP has always been an unsolved issue in other posts. Maybe we could work it out ;)
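For what it's worth, a bare-bones UDP round trip looks like this in Python. The payload is a made-up MIDI note/velocity pair serialised as text, and the port is left to the OS here (the thread uses 6000); inside Grasshopper the receive side would need to run non-blocking or on a separate thread:

```python
import socket

# Receiver: bind a UDP socket to a local port. Port 0 asks the OS for
# any free port so the example always runs; in practice you would use
# a fixed port such as 6000 to match the sender.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
rx.settimeout(2.0)
port = rx.getsockname()[1]

# Sender: fire a single datagram at the receiver.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"60 127", ("127.0.0.1", port))  # note 60, velocity 127 as text

# The payload arrives as raw bytes; decode and split it back apart.
data, addr = rx.recvfrom(1024)
note, velocity = data.decode().split()
print(note, velocity)

tx.close()
rx.close()
```

If the GH side never prints anything, checking that sender and receiver agree on the port (and that nothing else has already bound it) is the first thing to rule out.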
Thanks…
Added by jota aldunce at 8:43am on September 28, 2010
in the practice of new design and fabrication methods using digital tools. These emerging procedures are radically changing the way we approach the design process, in terms of both conception and production. Participants will be introduced to 2D and 3D modelling software for generating geometries that will then be machined on site on a 3-axis CNC machine.
AT THE END OF THE COURSE YOU TAKE YOUR LAMP HOME!
Instructors: MEDIODESIGN* + TOOLINGROUP* team
*Official Rhino Trainers. Accreditation granted by McNeel, developers of the Rhinoceros software.
Venue: Mediodesign. Pallars 85-91 5-2 BCN
Duration: 16 / 20 hours
Date: Saturday 9 / Sunday 10 July 2011
Schedule: 10:00 to 14:00 / 16:00 to 20:00
Places: 20 participants
REQUIREMENTS
< Aimed at students and professionals in architecture, design and related fields.
< Laptop computer.
< Software installed. Upon registration, participants will receive instructions for downloading and installing free trial versions of the software.
CONTENTS
< Introduction to advanced design and digital fabrication.
< The Rhinoceros environment and its plug-ins.
< CNC tools and work strategies.
< Materials and their characteristics.
< The exercise brief: design of a light fixture.
< Preparing the RhinoCam file for CNC machining.
< Machining and post-production.
< Submission of proposals: digital presentation of the design and fabrication process (PDF, PowerPoint, etc.) and of the finished light fixture prototype.
REGISTRATION
Price: €199, materials included.
Payment method: bank transfer.
Registration deadline: Monday 4 July 2011
A certificate of attendance will be issued. …
ding is not for the faint of heart and requires quite a significant understanding. However, I don't know what you're dealing with, so that may be the way to go about it.
Your component, if it's "finished", has to supply some sort of results that are then used downstream. AFAIK there isn't a way to "prevent" downstream components from calculating until you're finished. They have to get some sort of information, or else they'll just be waiting. Considering that the results of those components are likely to be invalid until the information gets calculated, you may be better off supplying them with nulls until you have some actual information to give them.
Anyway, I think that you should think very closely about the structure of your routine, and specifically how it will interact and update itself. The way I'm thinking about it now is that there really isn't much done in the "solve instance" function itself. Essentially the "solve instance" function would either A) start the reading of the file if no data is found, or B) output some data if it is found. This is an extreme oversimplification, but the simpler you keep this, the more likely it is to work. Here are a few more "details", I guess, of how I could see this potentially working...
Thread A - Initial call to Solve Instance function
+ Check and see if there are any results that exist from reading your file - at this point there shouldn't be. These results should be stored in some sort of class variable that is accessible to both threads. It might also be a good idea to have a boolean flag, also accessible to both, that represents whether you're reading/writing those variables.
+ Fire a function in another thread that begins the read process. Note that you'll likely have to do this through a delegate and an invoke call, but I'm not 100% sure
+ Fill in some null values for the variables you must supply
+ Output the nulls, thus finishing the Solve Instance function
Thread B - File Read Function running in separate thread
+ Open up the file. Note that it's probably a good idea just to pass the file path (as a string) between the different threads. Leave the creation of the file/text stream to the one thread that's using it.
+ Perform all the necessary reading from the file
+ Copy all your data to the variables that are accessible to both threads.
+ Expire the solution on either the component in question or (as a last resort) the whole canvas. I know expiring the whole canvas is definitely possible, but it should also be possible to just expire the one component that's doing the reading.
Thread A - "Second" call to Solve Instance after being manually expired
+ Check and see if there are any results that exist from reading your file, which there now should be.
+ Output those shared results
+ Clear the last results (or cache them in some way) so that the next time the Solve Instance function is fired, you don't find any results and reread the file.
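The shape of that whole walkthrough can be sketched with Python's threading module, just to show the shared-state/expire pattern. All names here are made up, and the real component would of course be C# or VB.NET, with the expiry marshalled through a delegate/Invoke call back onto the UI thread:

```python
import os
import tempfile
import threading

# Shared state, visible to both threads, guarded by a lock.
results = None
lock = threading.Lock()

def expire_solution():
    # Stand-in for component.ExpireSolution(true), which would schedule
    # another SolveInstance call on the UI thread.
    print("expired: SolveInstance will run again")

def read_file_async(path):
    # Thread B: do the slow read off the main thread, publish, expire.
    global results
    with open(path) as f:
        data = f.read()
    with lock:
        results = data
    expire_solution()

def solve_instance(path):
    # Thread A: if results exist, output them and clear the cache
    # (so the next run rereads); otherwise kick off the background
    # read and output a null for now.
    global results
    with lock:
        cached, results = results, None
    if cached is not None:
        return cached
    t = threading.Thread(target=read_file_async, args=(path,))
    t.start()
    t.join()  # only so this demo is deterministic; a real component returns immediately
    return None

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("file contents")
first = solve_instance(path)    # null output while the read happens
second = solve_instance(path)   # after expiry, the real data comes out
os.remove(path)
print(first, second)
```

The join is there only to make this standalone demo finish in order; the whole point in Grasshopper is that SolveInstance returns immediately and the expiry triggers the second pass.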
I think there are a few variations on this that could happen too, including having a separate function for reading and writing the data, called using its own delegate/invoke call, to make sure that it's extra safe.
If you haven't already, you should really look into event-driven programming, delegates, and asynchronous messaging. These are the 3 things you'll need a decent hold on to make sure this thing works. Just to let you know, debugging these things can be a bitch.…
ght on why this is, and some ideas I have for how to improve things going forward.
MeshMachine grew out of some scripts I started developing over 3 years ago (described here), originally just with the aim of achieving approximately equal edge lengths on a smooth closed triangulated mesh.
As time went on, I kept adding things, such as ways of keeping boundaries and sharp edges fixed, different ways of controlling edge lengths that vary across the surface, and different ways of pulling to surfaces.
I was also still experimenting with different rules for the core remeshing operations, such as valence driven vs angle driven edge flips.
All of these things meant many variables in the script. I wanted to share the work so others could play with it, but not really knowing exactly what people might use it for made it difficult to simplify the interface, so I just exposed most of these variables I was using (actually there were originally even more, but I felt a component with 20+ inputs was excessive, and combined some of them and fixed others to default values).
I've never been happy with that component, but some people want a component that you can just feed a surface and get a mesh with 'nice' triangles, without too much fuss or needing to know anything about how it works, while other people want to be able to vary the density based on proximity to the border, and curvature, and attractor points and see the intermediate results, and model minimal surfaces without pulling to any underlying surface, and...
Since then I did the rewrite from Kangaroo to Kangaroo2, and through that process, and associated conversations with Steve Baer, David Rutten and Will Pearson, my ideas about how to structure libraries and make cleaner, more flexible Grasshopper components changed. Much of this centres around using interfaces (in the specific programming sense, not to be confused with UI), because they allow separating code into multiple components, while still allowing you to edit parts of it within Grasshopper, and other parts in a proper IDE (because I find the GH code editor is not conducive to writing large amounts of well-structured object-oriented code).
Towards the end of last year, Dave Stasiuk and Anders Deleuran invited me and Will Pearson over to CITA for a few days of mesh and physics coding and beer drinking. During this time I made the first steps to restructuring MeshMachine to be more modular and interface based like Kangaroo2, instead of one giant script. One of the main motivations for doing this was to make it easier to combine the K2 physics library with the remeshing. However, at the time I hadn't yet released K2, so it didn't make sense to post examples that used those libraries. After the launch of K2, this restructured MeshMachine development has been a bit on the back-burner, but this discussion and Dave Stasiuk's work with Cocoon is inspiring me to pick it up again.
Seeing how you are combining Cocoon and MeshMachine, and how Dave is also using interfaces in his recent work, suggests to me it might be possible to integrate them more smoothly...
…
ight be able to provide more insight). Whenever you run a new simulation in Radiance, it is not always necessary to re-write all of the initial simulation files from scratch. These initial simulation files include both a .rad geometry file as well as a separate .pts file that contains the test point locations. If all that you are changing in a given parametric run is the locations of the test points (like your case), it is not necessary to re-write (or reinterpret) the entire .rad geometry file. My guess is that there is some type of check for this built into either the code Mostapha wrote or the Radiance functions that Mostapha is calling. As such, it seems that the .rad geometry file is not being completely re-written (or re-interpreted by Radiance) when all that you change is the test points, and this actually seems to be saving you an extra 10 seconds each time that you run the component without changing the materials or the building geometry. Other times (like when you plug in custom radParameters), it seems that it re-writes (or re-interprets) the .rad geometry file from scratch, since this file is probably affected by customized rad parameters.
So far, if this explanation is holding, it seems like there would be no concern on your end, but I also recognize that the difference between these long and short simulations is giving you radiation results that are ever so slightly different from each other (by my estimates, they differ by about 0.2%). Compared to the other types of assumptions that the Radiance model is making, though, these are mere rounding errors that probably originate from the number of decimal places in the vertices of the .rad geometry file. Rather than worrying about whether your simulations carry exactly matching rounding errors, I would encourage you to instead contemplate how much your Radiance results match reality, given all of the assumptions that you are making about the climate (with the epw file for a "typical" year) and the number of light bounces in the Radiance simulation. To give you an example, I ran your model with a higher-quality simulation type (3 ambient bounces) and this gives results that differ by 1.1% from the original simulation that you were running with only 2 ambient bounces (practically an order of magnitude larger than 0.2%).
To address your unease I will say that, for a long time, I also felt uneasy any time that I encountered something that seemed unpredictable in software that I was using. Once I started coding my own stuff, though, I realized quickly that unpredictable behavior is an unavoidable aspect of all software. There is always a tradeoff between accurate results and the time it takes to get them, which produces a multitude of possible ways to arrive at a solution. Add into this complex situation the fact that you might have an almost infinite number of possible inputs to a given set of code.
Because of the unpredictable multitude of cases, there is no application that is completely free from limitations and assumptions. In this light, what ends up being more important than the actual calculation method used is the social infrastructure that is in place to help understand what is being run under the hood, hence why both Radiance and Honeybee are open source and why we try to build a robust community of support through forums like this one!
-Chris…
ly 26-27-28-29 (digital fabrication)
The third edition of the digitalMed Workshop is structured as a design laboratory. Participants will engage in the challenging process of producing ideas, projects and research analyses, developed through specific software and through concepts that emerge from the use of mapping, parametric design and digital fabrication.
The workshop will take place in the city of Salerno (Italy) and it will last 11 days structured into 3 intensive weekends: July 13-14-15 (mapping); July 19-20-21-22 (parametric design); July 26-27-28-29 (digital fabrication).
Goals and Objectives:
We aim to clarify the theoretical and technical knowledge involved in approaching parametric and generative design and digital fabrication, from the collection and management of data, to the way this data informs the geometries, to the fabrication of prototypes.
Participants will also have the opportunity to practice the new knowledge gained in the design laboratory through project work.
Project Theme:
"Urban Field" Identify, study and analyze the system of public spaces in the urban area of the city of Salerno.
Connection, mutation, generation and evolution are the themes to be followed in project work.
Brief Description of Topics:
- Mapping. Our reality, in all its forms, will be studied through concepts from the theory of Complex Systems. The techniques used to study the events and places of reality will serve for the management, manipulation and visualization of data and information. These will form the basis for managing the project and driving its geometry during the second phase of the workshop.
- Parametric Design. Introduction to Rhino* and Grasshopper. Specifically, we will explain the concepts behind parametric design software and how these tools function. Through them, we will arrive at the definition of systems of mathematical and/or geometrical relationships able to generate and govern the patterns, shapes and objects that will inform the final design.
- Digital Fabrication. In this phase, workshop participants are organized into working groups. They will have access to the materials and conceptual apparatus that will take them directly to the fabrication of the project geometries, using CAD/CAM software and digital fabrication machines.
The DigitalMed workshop is organized by Nomad AREA (Academy of Research & Training in topics of Contemporary Architecture), in collaboration with the City of Salerno, the Order of Architects Province of Salerno and the National Institute of Architecture In / Arch - Campania.
Interested parties may download the Notice of Competition at www.digitalmedworkshop.com and complete the pre-registration no later than July 10th 2012.
PRESS OFFICE
Dr. Francesca Luciano
328 61 20 830
fra_luciano@libero.it
For information or subscriptions:
e-mail: info@digitalmedworkshop.com - tel: 089 463126 - 3391542980 …