For my project I want to make a sphere or sphere-like shape and pack it with circles of varying sizes. The circles all have to touch each other, so at a point where circles 'sort of' meet, there can only ever be three circles. This is shown in the second picture I have attached, a 2D circle packing made by Daniel Piker. So basically what I want to achieve is having the second picture projected onto a 3D surface that I can also edit. I would also like to be able to change the size and number of the circles that populate the surface. This means I would be able to say 'there should be 30 circles with a radius of 2, 40 circles with a radius of 3 and 50 circles with a radius of 4; put them on this particular shape'.
As I've just started the project I haven't done so much research yet. What I have found is for example this Kangaroo definition of circle packing in 2D: http://www.grasshopper3d.com/group/kangaroo/forum/topics/circle-packing-definition?xg_source=activity
It is very beautiful and does exactly what I want to achieve, except that it is in two dimensions. I also have to say that I feel pretty confident working with both Grasshopper and Rhino, but not really with Kangaroo. I have used it a few times but not extensively.
So what I'm wondering is: how could I best approach this project? I looked into the concept of 'circle packing' and noticed that it can be approached very mathematically. As I am an architecture student I don't know much about the math behind the geometry (although I do find it very interesting), and so I'm wondering if I will be able to achieve what I want. Also, do you think Kangaroo is the best way to approach the project, and do you think it is realistic for me to think I could finish it? I'm just trying to make sure I'm not going to tackle a problem that is very difficult to solve even for skilled mathematicians. Sorry for the long and perhaps vague read, but I would be very happy with any sort of input you might have on my problem!
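For what it's worth, tangent-incidence circle packing is usually done with physical relaxation rather than closed-form math, which is exactly what Kangaroo does. As a rough illustration of the idea only (this is not Kangaroo's actual solver, and every name here is made up for the sketch): scatter points on a sphere, give each one a radius, push overlapping pairs apart, and re-project onto the sphere after each pass:

```python
import math
import random

def pack_on_sphere(radii, sphere_r=10.0, iterations=300, seed=1):
    """Relax circle centres on a sphere until no two circles overlap.
    Chord distance stands in for surface distance (fine for small circles)."""
    rnd = random.Random(seed)
    pts = []
    for _ in radii:
        # random direction, projected onto the sphere
        v = [rnd.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v)) or 1e-9
        pts.append([c / n * sphere_r for c in v])
    for _ in range(iterations):
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = [pts[j][k] - pts[i][k] for k in range(3)]
                dist = math.sqrt(sum(c * c for c in d)) or 1e-9
                target = radii[i] + radii[j]      # tangency distance
                if dist < target:                 # overlapping: push apart
                    push = 0.5 * (target - dist) / dist
                    for k in range(3):
                        pts[i][k] -= d[k] * push
                        pts[j][k] += d[k] * push
        for i in range(len(pts)):                 # re-project onto the sphere
            n = math.sqrt(sum(c * c for c in pts[i])) or 1e-9
            pts[i] = [c / n * sphere_r for c in pts[i]]
    return pts
```

Kangaroo additionally pulls near-tangent pairs together so circles end up touching rather than merely not overlapping, but the loop is the same idea, so the 2D definition linked above should carry over by swapping its plane constraint for an on-sphere (or on-surface) constraint.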
Thanks in advance!
…
ported to Rhino and "set" in Grasshopper, I trim both surfaces from their rectangular bases so that when sDivide is used it creates and distributes the same number of points on each surface. But here are the problems:

a) If I use the trimmed surfaces with SrfGrid it errors with the warning: "A point in the grid is null. fitting operation aborted". I'd learned this was caused by nulls replacing position data items when the rectangular grid (the surface base) was trimmed away. So I used Clean Tree, which worked, removing all nulls, then Shift Paths/Flip Matrix to create line-endpoint pairs for Polyline/Evaluate Curve. I flattened the last Flip Matrix, placing all data items in one source for SrfGrid, as in the working Untrim/CopyTrim definition. This time:

b) SrfGrid errored with: "The UCount value is not valid for this amount of points". So I substituted a numeric slider (value 356) in the Addition B param and tested its range until a valid UCount was found. Then SrfGrid fitted a surface through the points, BUT:

d) those SrfGrid surfaces are extremely deformed, even though the points preceding them from Evaluate Curve are accurate. SEE def. "3b-RGH_SurfaceBlend.gh". AND:

a2) if I use Untrim with CopyTrim then SrfGrid works, but since the Joker's limbs WILL be in different surface positions, the blends for the Arm (for example) will rise from its relatively FLAT position on the untrimmed Source surface to the Arm on the Target surface, rather than morphing from the corresponding Arm position on the Source surface. See def. "4-RGH_SurfaceBlend.gh".

So please let me know:

1) how to produce accurate surfaces from SrfGrid in def. "3b-RGH_SurfaceBlend.gh". (NOTE: BOTH these defs contain 2 identical, "internalized" surfaces, but if def. 3b can be made to work it will also work with dissimilar surfaces.)

2) which component to use, or how else to determine the correct UCount value for a specified amount of points (i.e. 155), re: the SrfGrid error "The UCount value is not valid for this amount of points".

3) how else to force SrfGrid to work with trimmed surfaces? AND:

4) how to force the inter-surface point-blend correspondence polylines (PLine) to be connected between correctly corresponding positions (limbs) on the surfaces?
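Regarding question 2: as far as I understand SrfGrid, it rebuilds a U x V grid from a flat point list, so the point count must factor exactly as UCount x VCount. A quick way to list every valid UCount for a given number of points (a hypothetical helper, not a GH component):

```python
def valid_ucounts(point_count):
    """Every UCount that splits a flat list of point_count points into
    equal rows (VCount = point_count // UCount), i.e. the divisors."""
    return [u for u in range(1, point_count + 1) if point_count % u == 0]

print(valid_ucounts(155))  # -> [1, 5, 31, 155]
```

For 155 points the only non-trivial choices are 5 and 31, which would also explain why trimming points away (changing the count) invalidates a previously working UCount.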
Really! appreciate all help, definitions and kind generosity common to this knowledgable membership,
Cheers!,
Jeff…
aching my skill set here, but bear with me.
I want to create an animated facade of squares which rotate depending on a sequence of grey-scale images. I've got pretty far thanks to many discussions here, but have hit a blank with exporting my animated model to 3ds max.
Here's my GH script - it's a botch of 3 or 4 various things incorporating centipede at the start and end to get the animation.
All good and it works! It produces animations which I can sequence for presentations too, thanks to its BMP export, which is a sort of side-product.
The problem I have is that the OBJs it produces error wildly when imported into Max. E.g. in Rhino it looks like
But when I've imported them to max it looks like
and as it animates it just gets longer and smaller.
NOW I reckon it might be because my model in Grasshopper is 100 separate geometries and I'd like it to be a single one - but I've not achieved that.
Does anyone have any ideas how to solve this? My end result I would like to look like this rendered still from max, but animated.
Thank you all! This also uses Firefly, so you might need that installed to see how my file works.
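On the '100 separate geometries' hunch: one low-tech workaround, if the per-frame OBJs are the problem, is to merge them into a single object before Max sees them. A minimal sketch in plain Python (illustrative only: it keeps just `v` and `f` records, drops normals/UVs, and offsets the 1-based vertex indices in each face line):

```python
def merge_objs(obj_texts):
    """Concatenate several OBJ meshes into one object, offsetting the
    1-based vertex indices referenced by each face line."""
    out, offset = ["o merged"], 0
    for text in obj_texts:
        count = 0
        for line in text.splitlines():
            if line.startswith("v "):
                out.append(line)
                count += 1
            elif line.startswith("f "):
                # keep only the vertex index of each v/vt/vn token
                idx = [str(int(tok.split("/")[0]) + offset)
                       for tok in line.split()[1:]]
                out.append("f " + " ".join(idx))
        offset += count          # next mesh's indices start after our vertices
    return "\n".join(out) + "\n"
```

That said, joining the geometry into one mesh inside Grasshopper before export (e.g. with a Mesh Join component) would be the cleaner fix, if someone can point to where in the definition to do it.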
…
Added by chris parrott at 10:34am on September 11, 2015
t ''Morph'' turns red, saying ''Cannot morph from a degenerate box'' (image 2);
that's because every curve generates its own box (image 3).
After that I check the ''Union'' option box to make only one box for all the curves (image 4).
However, the result is random and not accurate at all :/ (see image 6).

I know you are developing Pufferfish and not the ''Morph'' component, but recently you published on Instagram a video where I believe you managed to morph and twist a collection of curves successfully (please see images 7 and 8). If you could give me a hint how that can be achieved, it would be awesome. (Piping/meshing the curves with a very small diameter would perhaps work for visualisation purposes, but I actually just need to morph the raw curves for fabrication purposes.) Hope to read you very soon... Ghali,…
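A guess at what the ''degenerate box'' error means, with a toy sketch (illustrative only, not the component's actual code): a box morph is essentially a trilinear map from a source box to eight target corners, and it divides by the source box's dimensions, so a flat curve whose own bounding box has zero thickness breaks it.

```python
def box_morph(pt, src_min, src_max, corners):
    """Trilinearly map pt from an axis-aligned source box to a target box
    given by its 8 corners (ordered 000,100,110,010,001,101,111,011).
    Divides by the source box size, so a degenerate (flat) box fails."""
    u, v, w = [(pt[k] - src_min[k]) / (src_max[k] - src_min[k])
               for k in range(3)]

    def lerp(a, b, t):
        return [a[k] + (b[k] - a[k]) * t for k in range(3)]

    c = corners
    bottom = lerp(lerp(c[0], c[1], u), lerp(c[3], c[2], u), v)
    top = lerp(lerp(c[4], c[5], u), lerp(c[7], c[6], u), v)
    return lerp(bottom, top, w)
```

With one flat curve per box, one of the `src_max[k] - src_min[k]` terms is zero, hence the error; the Union box avoids that as long as the whole collection isn't planar.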
east make all our algorithms thread-safe, so they can all be called from multiple threads; this is the first step towards multi-threading.
But multi-threading is not just something you switch on or off, it's an approach. Let's take the meshing of Breps for example. Let's assume that at some point one or more breps are added to the document. The wireframes of these breps can be drawn immediately, but the shading meshes need to be calculated first. How do we go about doing this? Allow me to enumerate some obvious solutions:
We put everything on hold and compute all meshes, one at a time. Then, when we're done, we yield control back to the Rhino window so that key presses and mouse events can once again be processed. This is the simplest of all solutions and also the worst from the user's point of view.
We allow the views to be redrawn, mouse events and key presses to be handled, but we perform the meshing in a background thread. I.e. whatever processor cycles are left over from regular use are now put to work on computing meshes. Once we're done computing these meshes we can start drawing the shaded breps. This is a lot better as it doesn't block the UI, but it also means that for a while (potentially a very long time) our breps will not be shaded in the viewport. This approach is already a lot harder from a programming perspective because you now have multiple threads all with access to the same Breps in memory and you need to make sure that they don't start to perform conflicting operations. Rhino already does this (and has been doing for a long time) on a lot of commands, otherwise you wouldn't be able to abort meshing/intersections/booleans etc. with an Escape press.
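The background-thread approach can be sketched in a few lines (Python here purely for illustration; Rhino's internals are not Python). The main thread stays free for "UI" work while a worker fills a queue with finished meshes:

```python
import queue
import threading
import time

results = queue.Queue()

def mesh_worker(breps):
    """Background thread: compute shading meshes while the UI stays live."""
    for b in breps:
        time.sleep(0.01)                  # stand-in for an expensive mesher
        results.put((b, "mesh of " + b))  # hand each finished mesh back

worker = threading.Thread(target=mesh_worker, args=(["brep0", "brep1"],))
worker.start()
# ...the "UI" thread keeps handling redraws, keys and mouse here...
worker.join()  # a real UI would never join; it drains the queue as meshes arrive
```

The queue is the synchronisation point: the worker only appends, the UI only consumes, so neither thread mutates the other's data mid-operation.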
So we can compute the meshes on the UI-thread or on a background thread. How about using our multiple cores to speed up the process? Again, there are several ways in which this can be achieved:
Say we have a quad-core machine, i.e. four processors at our disposal. We could choose to assign the meshing of the first brep to the first processor, the second brep to the second processor, the third brep to the third processor and so on. Once a processor is done with the meshing of a specific brep, we'll give it the next brep to mesh until we're done meshing all the breps. This is a good solution when multiple breps need to be meshed at once, but it doesn't help at all if we only need to compute the mesh for a single brep, which is of course a very common case in Rhino.
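This "hand the next brep to the first free core" scheme is what any pooled map gives you. A minimal sketch (again Python for brevity, with a stand-in `mesh` function):

```python
from concurrent.futures import ThreadPoolExecutor

def mesh(brep_id):
    """Stand-in for meshing one brep; each call is independent."""
    return "mesh-%d" % brep_id

# four workers ~ four cores; whichever worker finishes first
# grabs the next brep off the shared list until all are meshed
with ThreadPoolExecutor(max_workers=4) as pool:
    meshes = list(pool.map(mesh, range(10)))
```

Note this only pays off with many independent items; with a single brep all but one worker sits idle, which is the limitation described above.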
To go a level deeper, we need to start adding multi-threading to the mesher itself. Let's say that the mesher is set up in such a way that it will assign each face of the brep to a new core, then -once all faces have been meshed- it will stitch together the partial meshes into a single large mesh. Now we've sped up the meshing of breps with multiple faces, but not individual surfaces.
We can of course go deeper still. Perhaps there is some operation that is repeated over and over during the meshing of a single face. We could also choose to multi-thread this operation, thus speeding up the meshing of all surfaces and breps.
All of the above approaches are possible, some are very difficult, some are actually not possible if we're not allowed to break the SDK. A further problem is that there's overhead involved with multi-threading. Very few operations will actually become 4 times faster if you distribute the work across 4 cores. Often one core will simply take longer than the other 3, often the partial results need to be aggregated which takes additional cycles and/or memory. What this means is that if you were to apply all of the above methods (multi-thread the meshing of individual faces, multi-thread the meshing of breps with multiple faces and multi-thread the meshing of multiple breps) you're probably worse off than you were before.
--
David Rutten
david@mcneel.com
Poprad, Slovakia
* an example would be the z-sorting of objects in viewport prior to repainting, which is a step performed on every redraw as far as I know.…
present Digital Process: Generative Design Technologies Workshop; a specialised workshop that will take place in 4 of the most important cities of the Mexican republic [Puebla] [Mexico DF] [Guadalajara] [Leon] in January and February 2012. http://gendesigntech.wordpress.com/

Aimed mainly at architects, industrial designers, interior designers, urban planners, digital artists, students and design professionals, this workshop's objective is to give participants the knowledge and technological resources to develop the elements of a project completely, from conception to application.

Supported by a powerful and flexible set of platforms, participants will learn to generate, analyse and rationalise complex morphologies, organic free forms and advanced computational algorithms, as well as to produce photorealistic visualisations applicable to a range of design projects.

Over 5 days of intense work, exploration and feedback, participants will be guided towards a more dynamic workflow that lets them exploit the full potential of the tools and strengthen their skills, aptitudes and capacities.

Instructors:
Leonardo Nuevo Arenas [Complex Geometry]
José Eduardo Sánchez [DesignNest]
Daniel Camiro/Luis de la Parra [Chido Studio]
http://issuu.com/chidostudiodiseno/docs/digprowork

See the programme here: http://gendesigntech.wordpress.com/program/
To register please visit: http://gendesigntech.wordpress.com/registro…
tackling the topic of parametric modelling with Grasshopper. This Rhino plug-in allows you to design within an evolving context, through understanding and using the parameters and components that influence the representation and make it dynamic by composing algorithms. The course introduces the basics of Grasshopper, going deeper into parametric design methodology and the algorithmic modelling techniques used to generate complex forms.

The theory will be delivered in an accelerated but organic way, in context with the topics listed. To maximise results, the lessons will be accompanied by small practical exercises.

Topics covered:
- Introduction to parametric design: theory, examples, case studies
- Grasshopper: basic concepts, algorithmic logic, graphical interface
- Fundamentals: components, connections, data flow
- Mathematical and logical functions, series, data management
- Analysis and definition of curves and surfaces
- Definition of complex grids and patterns
- Geometric transformations, paneling
- Attractors, image sampler
- Data trees: managing complex data

Structure
The course lasts 16 hours over 2 days, on 28/07 and 29/07 from 10:00 to 19:00 with a one-hour lunch break.

Audience
The course is aimed at anyone with a good knowledge of Rhinoceros who wants to approach these new design methods in an informed way through the visual scripting language offered by Grasshopper.

Prerequisites
A basic knowledge of Rhino, from both theory and practice, is required. Participants must bring their own laptop with a fully working copy of Rhinoceros 5 or Rhinoceros 4.

Certificate
At the end of the course a certificate of attendance at a McNeel qualified course will be issued, valid towards university course credits.

Venue
Lessons will be held at studio il Pedone, Via Muggia 33, 00195 ROMA…
ace Syntax." eCAADe 2013 18 (2013): 357.
http://www.sss9.or.kr/paperpdf/mmd/sss9_2013_ref048_p.pdf
The measure Entropy is newer. I hereby explain it (from my PhD dissertation):
Entropy values, as described in (Hillier & Hanson, The Social Logic of Space, 1984) and specified in (Turner A., "Depthmap: A Program to Perform Visibility Graph Analysis", 2007), intuitively describe the difficulty of getting to other spaces from a certain space. In other words, the higher the entropy value, the more difficult it is to reach other spaces from that space, and vice-versa. We compute the spatial entropy of a node using its point depth set:

S_node = -SUM_{d=1}^{d_max} p_d * log2(p_d)   (11)

"The term d_max is the maximum depth from the vertex and p_d is the frequency of point depth d from the vertex" (ibid). Technically, we compute it using the function below, which itself uses some outputs and by-products from previous calculations:
Algorithm 4: Entropy Computation
Given the graph (adjacency lists), Depths as List of List of integer, DepthMap as a Dictionary keyed by (node, depth)
Initialize Entropies as List(double)
For node as integer in range [0, |V|)
    double S_node = 0
    For depth as integer in range [1, Depths[node].Max()]
        integer How_Many_of_D = DepthMap.Branch[(node, depth)].Count
        double frequency = How_Many_of_D / (double)|V|   ' cast to double: integer division would always yield 0
        If frequency > 0 Then
            S_node = S_node - frequency * Math.Log(frequency, 2)
        End If
    Next
    Entropies[node] = S_node
Next
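For readers who prefer runnable code, here is a direct Python transcription of Algorithm 4, with the depth sets passed in as plain lists instead of the DepthMap tree (the (node, depth) branch lookup becomes a Counter):

```python
import math
from collections import Counter

def entropies(depths):
    """depths[i] is the point depth set of node i: the graph distance
    from node i to every node (0 for itself). Returns S_node per node."""
    n = len(depths)
    out = []
    for node in range(n):
        freq = Counter(depths[node])   # how many nodes sit at each depth d
        s = 0.0
        for d, count in freq.items():
            if d < 1:
                continue               # depth 0 is the node itself
            p = count / float(n)       # frequency of point depth d
            s -= p * math.log(p, 2)
        out.append(s)
    return out
```

On a path graph 0-1-2, the middle node gets the lowest entropy, matching the interpretation above: it is the easiest node from which to reach the rest.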
…
mething? I think it would be very useful to have a mapping of light intensity over the field of view of the used camera, and possibly and option to overlay it on the luminance mapping. It would in a very visual way provide information about contrast and glare.
Doesn't the falsecolor option already do that for luminance mappings? If not can you post an image/screenshot of such a mapping from Dialux/AGI32 or any other software.
4. It's just a shoebox type simulation. 11x11 luminaires pointing down to simple materials. The default elapsed time was 3m40s. I have found the _RadParameters component meanwhile, and got it down to 0m30s. I have noticed that the simulation doesn't tax multiple cpu threads completely, most of the time cpu is at 25% during execution.
The under-utilization of CPUs is a known issue with Radiance (the calculation engine) on Windows-based systems. Unfortunately there isn't much that can be done about it at the moment.
5. Is it possible to map different degrees of translucency, diffuse color, absorptance, reflectance, etc..., by means of a bitmap image, expression, or other?
6. There is a feature that I consider absolutely necessary (and I haven't found it yet), which is the emitting surface feature, with the ability to stipulate homogeneous intensity with luminance values (in cd/m^2) or flux; and by mapped distribution of intensities or luminances (in cd or cd/m^2).
By emitting surface I don't mean just a flat rectangular plane, such as an area light. It would be absolutely amazing! to perform photometric analysis on irregular and convoluted shapes and the light falling on neighbouring surfaces. 3DS Max with MentalRay provides similar functionality, but without the power of GH + HB.
In the image below, the HB logo is assigned as a texture to a glass which then creates a pattern of that on the wall when daylight falls on it.
In the image below, the light from the Batman logo illuminates the scene.
The images above were Rendered with Radiance. While these things are possible with Radiance, and therefore HB, the reason why they aren't incorporated into the code is that these effects are not "physically based" and are not rooted in reality. Radiance is arguably the most intensively tested and validated lighting simulation software in the world. However, once we start applying such "magic" to it, the results from it are no longer reliable and therefore no different from other photorealistic engines such as V-ray, Mental-Ray etc. …
what they really mean by that, as in what buttons to push, so I assume it's a Windows Path entry?
2.) Modify PATH
Add the install location on the path, this is usually: C:\Program File\IronPython 2.7
But on 64-bit Windows systems it is: C:\Program File (x86)\IronPython 2.7
As a check, open a Windows command prompt and go to a directory (which is not the above) and type:
> ipy -V
PythonContext 2.7.0.40 on .NET 4.0.30319.225
Tutorial on setting a Windows environmental variable (path):
http://www.computerhope.com/issues/ch000549.htm
But this fails to point out that PATH already contains many entries separated by semicolons, so if I merely add a new variable called "path" it's likely that I will destroy existing program function. There's no info on how to just tack on another entry, and the Windows 7 edit box doesn't even show the whole collection, only one item (!), so I copied the existing path into a text editor to see the whole collection, and added the C:\Program Files (x86)\IronPython 2.7 entry after an added semicolon, correcting for an Enthought page typo of no 's' on the end of "Program Files". I also checked the other entries; many pointed to old missing directories, so I deleted those.
...and the test fails: "ipy" is not recognized as a command, even though the path now shows up using "path" in the Windows CMD window - that is, if I copy it all by right-clicking and paste the whole thing into a text editor to really view it. I can run it from the source directory just fine.
The rabbit hole was indeed deep. Using the Task Manager (control-alt-delete) to kill Explorer and then Run in the menu to restart "Explorer", along with restarting the Windows CMD window, however, worked. I can now invoke IronPython ("ipy") via the command line from any directory. For the "path" I edited path in the System Variables and not the User Variables. No, you don't have to type that whole crazy line above just to test the path variable - just "ipy" (and control-Z to quit IronPython) in the CMD window invoked by typing "cmd" into the Start menu search box.
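The safe recipe the tutorial leaves implicit - split PATH on the separator, append one entry, rejoin, never overwrite - is easy to get right programmatically. A small sketch (works on any OS via `os.pathsep`, which is ';' on Windows):

```python
import os

def path_entries(path_value):
    """Split a PATH string on the platform separator, dropping blanks."""
    return [p for p in path_value.split(os.pathsep) if p]

def append_entry(path_value, new_dir):
    """Return PATH with new_dir appended once, keeping every old entry."""
    entries = path_entries(path_value)
    if new_dir not in entries:
        entries.append(new_dir)
    return os.pathsep.join(entries)
```

Pasting the joined result back into the System Variables edit box preserves everything that was already there, which is exactly what creating a fresh "path" variable would not do.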
From the CMD line this step did work fine:
3.) ironpkg
Bootstrap ironpkg, which is a package install manager for binary (egg based) Python packages. Download ironpkg-1.0.0.py and type:
> ipy ironpkg-1.0.0.py --install
Now the ironpkg command should be available:
> ironpkg -h
(some useful help text is displayed here)
But of course Step 4 fails, giving pages of what seem to be error messages;
C:\Users\Nik>ironpkg scipy
Traceback (most recent call last):
  File "C:\Program Files (x86)\IronPython 2.7\lib\site-packages\enstaller\utils.py", line 92, in write_data_from_url
  File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 126, in urlopen
  File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 397, in open
  File "C:\Program Files (x86)\IronPython 2.7\Lib\urllib2.py", line 509, in http_response
...
Why can't I just download Numpy as a normal file and thus also have it easy for other users to install it when they use my scripts? This is just crazy and lazy. The Enthought developer has turned this into a computer game, with a missing registration link and then the last step spits out errors with utterly no information on how to fix it manually.
This Step 4 error is covered here:
http://discourse.mcneel.com/t/trying-to-import-numpy-in-rhino-python-but-im-getting-this-error-cannot-import-multiarray-from-numpy-core/12912/16…
Added by Nik Willmore at 2:36pm on October 11, 2015