s levels of detail by subdividing a 6-sided cube mesh and projecting its vertices according to a referenced height map. This is one of the standard conventions for building full-size planets. At the lowest level (0) the mesh planet is made of 6 pieces (each 32x32 resolution). The next level down (1) is made of 24 pieces: 6 x 4 = 24. Level (2) is 96 quads, and so on. The script generates each quad at its subdivision level and compares edge vertices to neighboring quads. It then makes sure any shared vertices are in fact at the same projected vector. This ensures planet quads whose edge vertices match.
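For reference, both the quad counts and the edge-matching property fall out of the projection itself; here is a minimal sketch (my own, not the actual Grasshopper script) of the convention described above:

```python
# Sketch (not the author's script): quad counts per LOD level and the
# standard normalized-cube projection used for cube-sphere planets.
import math

def quads_at_level(level):
    """Each cube face splits into 4 children per level: 6 * 4^level quads."""
    return 6 * 4 ** level

def project_to_sphere(x, y, z, radius=1.0):
    """Project a cube-surface point radially onto the sphere.

    Because the projection depends only on the input vector, two quads
    that share an edge vertex (same cube-space position) are guaranteed
    to produce the same projected vertex -- no seam.
    """
    length = math.sqrt(x * x + y * y + z * z)
    s = radius / length
    return (x * s, y * s, z * s)

assert [quads_at_level(l) for l in (0, 1, 2)] == [6, 24, 96]
```

The height-map displacement would then be applied along the projected vertex direction, which preserves the edge-matching guarantee as long as neighboring quads sample the map at identical cube-space positions.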
The problem comes in texturing each quad.
If I build the quad as a NURBS surface from points I can place the texture easily, because the surface UVs map squarely to my texture map (which is also square).
If I build the quad as a mesh I cannot just apply the square texture to the mesh UVs, because when you unwrap the UVs from a mesh they will not unwrap like a NURBS surface's UVs. To get the correct mapping I would have to manipulate each UV back to an evenly aligned array (1024 points at 32x32 resolution). Maya and Blender have 'relax UV' and 'align UV' functions, but they don't do the trick, and manual corrections are out of the question. So why not skip the mesh method and use the NURBS method?
I did this, and there is a trade-off. The NURBS surface accepts the material texture I want with no extra work on my end, but when I export the object as an .obj, Rhino creates its own mesh to describe the NURBS surface (with various unsatisfactory setting options). This works great up to a point, because at some level the interpreted mesh will have vertices that do not match at the edges, creating visible seams in the mesh. The picture below is the nearly seamless planet at LOD(1), made of 24 quads, each with 32x32 vertex resolution and a 512x512 jpg texture, running in Unity3D 5. It works, but at close range there are seams. This will be resolved simply by having the next LOD(x) instantiate before the camera gets close enough to see the seam, but at core nerd level I want the seamless mesh.
So, I can make the seamless mesh but I cannot realistically texture-map it. I can also make the NURBS surface from points and texture it, at the expense of matching edge vertices. I am at the fork in the road, but I want to have my cake and eat it too. Thoughts, comments, trolls...?
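For what it's worth, the "evenly aligned array" of UVs mentioned above is trivial to generate directly if the mesh quad is built as a regular vertex grid in a known order; a sketch, assuming row-major 32x32 vertices (the helper name is mine):

```python
# Sketch, assuming a quad built as a regular n x n vertex grid in
# row-major order: the UVs can simply be assigned on an even lattice,
# which is exactly the "evenly aligned array" the unwrap/relax tools
# fail to recover.
def grid_uvs(n=32):
    """Even UV lattice for an n x n vertex grid (n=32 -> 1024 points)."""
    return [(i / (n - 1), j / (n - 1))
            for j in range(n)        # rows
            for i in range(n)]       # columns

uvs = grid_uvs(32)
# Corners map exactly to the texture corners, so a square texture fits
# with no relaxing or manual correction.
assert uvs[0] == (0.0, 0.0) and uvs[-1] == (1.0, 1.0)
```

This only works when the exporter preserves vertex order, which is part of why the .obj round trip through Rhino's own mesher breaks it.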
Thanks for reading =)
Footnote: for you pros, I am not using seamless noise across the map; I am using Grasshopper to sew up my otherwise imperfect edges.
Other programs in the pipeline:
-WorldMachine 2
-Wilbur
-Photoshop
-Unity3d…
etc. + Geco
TUTORS:
Arturo Tedeschi (Authorized Rhino Trainer) + Maurizio Arturo Degni
The advanced workshop ECOLOGIC PATTERNS addresses the use of parametric strategies within the design process, exploring Grasshopper in synergy with plug-ins, environmental-analysis software, and physics simulation. The fundamental objective is the generation of form as the result of form-finding techniques and environmental inputs (solar, thermal, and acoustic). Participants will acquire new operational and simulation tools in order to build optimized parametric models capable of adapting to different contextual conditions.
MORE INFO…
ive collaborative environment.
TYPE : Course module and Workshop
The event is open for anybody interested from all the fields of design, including: architecture, interior design, furniture design, product design, fashion design, scenography, and engineering.
1. COURSE MODULE (20-23 April 2014) - optional
+ type: 3 days intensive course regarding basic knowledge in parametric design (LEVEL 1)
+ software: Rhinoceros & Grasshopper
+ plugins: Kangaroo, Weaverbird, LunchBox, gHowl, Geco
+ achievements:
- becoming acquainted with the components and the concept of Generative Design
- understanding strategies in Algorithmic Design
- how to easily insert simple mathematical equations into the project to gain more control
- how to utilize the proper plugins with respect to the nature of the project
- interacting with different analysis platforms such as Ecotect & remote controllers
- solving several exercises at different scales (2D and 3D) during each phase of the workshop
2. WORKSHOP (23-27 April 2014)
A 5 day Design-Based Research Workshop exploring new techniques in Digital Architecture/Fabrication, with a specific focus on the use of generative systems and parametric modeling as tools for creative expression.
Our ultimate goal is to increase the efficiency of utilizing digital tools in parallel with the geometric performance of the primitive design agent.
+ + CONCEPT
Fashion and Architecture are both based on basic life necessities – clothing and shelter.
However, they are also forms of self-expression – for both creators and consumers.
Both fashion and architecture affect our emotional being in many ways.
The agenda of this workshop is to investigate the overlap between these areas of design, art, and fashion.
Fashion and architecture express ideas of personal, social and cultural identity, reflecting the concerns of the user and the ambition of the age. Their relationship is a symbiotic one and throughout history, clothing and buildings have echoed each other in form and appearance. This only seems natural as they not only share the primary function of providing shelter and protection for the body, but also because they both create space and volume out of flat, two-dimensional materials.
While they have much in common, they are also intrinsically different: both address the human scale, but their proportions, sizes, and shapes differ enormously.
+ + + OBJECTIVES
So far, architects have been using techniques such as folding and bending to create space, structural roofs, and other structural forms.
The agenda of this workshop goes further, investigating algorithmic thinking through generative tools integrated in design.
The challenge is to create a bridge connecting these two areas of design, architecture and fashion, which perform at two opposite scales.
+ + + + TECHNICAL BRIEF
In the early stages physical models and low-tech strategies will be used, allowing the participants to gain a greater understanding of materials, fabrication and assembly methods as well as simple, yet pragmatic structural solutions.
Later in the workshop these strategies will be digitized and elaborated using visualization software such as Rhinoceros and the algorithmic plug-in Grasshopper.…
, Engineer and Researcher from France with broad programming experience. He is the author of the City in 3D Rhinoceros plugin for creating buildings from a geojson file with real elevations. Guillaume has already created a new component, "Address To Location". It gets latitude and longitude values for a given address:
2) Support of bathymetry data: automatic creation of underwater (sea/river/lake floor) terrain. This feature is now available through the new source_ input of the "Terrain generator" component. Here is an example of the terrain of the Loihi underwater volcano, off the coast of Hawaii:
3) A new terrain source has been added: ALOS World 3D 30m. ALOS is a Japanese global terrain dataset. The Gismo "Terrain Generator" component has been using SRTM 30m terrain data, which is not global and is limited to the -56 to +60 latitude range. With this addition, it is possible to switch between the SRTM and ALOS World 3D 30m models with the use of the source_ input.
4) 9 new components have been added:
"Address To Location" - finds latitude and longitude coordinates for the given address.
"XY To Location" - finds latitude and longitude coordinates for the given Rhino XY coordinates. "Location To XY" - vice versa from the previous component: finds Rhino XY coordinates for the given latitude longitude coordinates. "Z To Elevation" - finds elevation for particular Rhino point. "Rhino text to number" - convert numeric text from Rhino to grasshopper number. "Rhino unit to meters" - convert Rhino units to meters. "Deconstruct location" - deconstructs .epw location. "New Component Example" - this component explains how to make a new Gismo component, in case you are interested to make one. We welcome new developers, even if you contribute a single component to Gismo! "Support Gismo" - gives some suggestions on how to make Gismo better, how to improve it and support it.
5) Ladybug "Terrain Generator" component now supports all units, not only Meters. So any Gismo example file which uses this component, can now use Rhino units other than Meters as well. Thank you Antonello Di Nunzio for making this happen!!
Basically just forget about this yellow panel:
This panel is not valid anymore, so just use any unit you want.
6) A number of bugs have been fixed, reported in topics for the last couple of weeks. We would like to thank members in the community who invested their time in testing, finding these bugs and reporting them: Rafat Ahmed, Peter Zatko, Mathieu Venot, Abraham Yezioro, Rafael Alonso. Thank you guys!!! Apologies if we forgot to mention someone.
The version 0.0.2 can be downloaded from here:
https://github.com/stgeorges/gismo/zipball/master
And example files from here:
https://github.com/stgeorges/gismo/tree/master/examples
Any new suggestions, testing and bug reports are welcome!!…
Added by djordje to Gismo at 5:13pm on March 1, 2017
(1) I have been exporting small sections of a larger model from Rhino into Maya as FBX. In Maya I rotate and scale the models (-90 in X, scale XYZ 0.001). The Named Views are saved, but they do not import successfully into the Maya model: they do not appear as they do in Rhino, and the problem is not solved by scaling or rotating the cameras.
(2) If I try going the other direction, cameras exported from Maya as FBX also do not align with the model in Rhino the way they do in Maya. I will do my best to post some images of the problem and hope you can help.
error !!
This is what the named views look like
here I am trying to go the other way with a good view from Maya
strange placement..
This is the best result I can achieve, after I scale the camera by 1000
Any Advice???
Thanks, Robert.
…
ysim.ning.com/
When you run the simulation you will notice on the batch terminal that Daysim is also being called, so you may want to consider how Daysim uses Radiance files and data.
Regarding your current problem, I think you stumbled onto something weird and interesting.
Interior and exterior readings appear to differ by 40 in the best-case scenarios. Even setting the transmittance to 1 yields similar results. I tried changing from a cumulative sky to a climate-based sky and got similar values. Changing the test points did nothing either.
I think (though I'm too lazy to prove this) that the difference in values stems from diffuse radiation over the sky dome.
If you delete everything except the glass you'll notice that interior values are like 80-90% of the exterior values (this seems like the expected behaviour with a transmittance of 1). So, if we consider that a vertical window, part of an opaque box, is receiving radiation from 25% of a sphere, as you start to inset the interior test points the radiation they receive will be a fraction of the 25%.
Let me try to explain this better... The exterior surface receives radiation from a section of a sphere spanning 180 degrees on the XY plane (let's call this angle theta) and 90 degrees in azimuthal elevation (let's call this angle phi). If you integrate this over spherical coordinates (theta from 0 to pi; phi from 0 to pi/2) you will find that it comes to a quarter of a sphere. By comparison, the interior surface will not integrate theta from 0 to 180 degrees, nor phi from 0 to 90 degrees; instead the span is the angle subtended from the exterior surface as a function of their separation: the farther in you go, the smaller the view of the outside.
If my hypothesis is correct there shouldn't be that much difference, since the separation is only 10 cm... the subtended angle would be something like 170 instead of 180 degrees for theta and 85 instead of 90 for phi... overall, if you integrate both spherical areas, there should only be a difference of about 10%.
In conclusion, I believe the unexpected behaviour stems from the subtended-angle effect described above. If direct radiation were the only factor, the difference would be the aforementioned ~10%, which suggests that an additional source of energy is also affected by this, perhaps indirect and diffuse radiation from other areas of the sky dome.
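The subtended-angle estimate above can be checked numerically; this is my own back-of-envelope check, not output from the simulation:

```python
# Numeric check (my own estimate) of the subtended-angle argument: an
# exterior vertical surface sees a quarter sphere (theta 0..180 deg in
# azimuth, phi 0..90 deg in elevation); the inset interior point sees
# roughly theta up to 170 deg and phi up to 85 deg.
import math

def solid_angle(theta_deg, phi_deg):
    """Solid angle for azimuth span [0, theta] and elevation [0, phi].

    Omega = integral(d theta) * integral(cos phi d phi)
          = theta_rad * sin(phi_rad)
    """
    return math.radians(theta_deg) * math.sin(math.radians(phi_deg))

quarter_sphere = solid_angle(180, 90)   # = pi steradians (a quarter of 4*pi)
inset = solid_angle(170, 85)
print(inset / quarter_sphere)           # ~0.94
```

So a 170/85-degree view cone is about 94% of the quarter sphere, a reduction of roughly 6%, in the same ballpark as the ~10% figure above and far too small to explain the observed gap on its own.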
I’m definitely intrigued on why this is happening. Please post if you figure it out.
Regards,
Mauricio
…
TB of RAM. I think I'm going to start a GoFundMe campaign to buy one for myself :)
2- The server costs about $13 an hour. I get free access to a supercomputer through my university and xsede.org because I earned an NSF honorable mention last March; however, the supercomputers available through both resources are a little complicated for me to use, as opposed to the one available from Amazon, which has Microsoft Server 2012 already installed.
3- I wanted to run 400 annual glare simulations for 400 different views.
4- I tried to perform an annual glare simulation for one view on my Dell XPS, which has an Intel Core i7-6700HQ processor and 16GB of memory. The simulation took 2 hours to complete. The Radiance parameter ab was set to 6.
5- I wanted to obtain the batch file for each view so I could run them on the server. So I used the fly component to run all 400 simulations and closed the cmd windows. That wasn't bad (for me at least) because I asked my son to do this job for me; he was just glad to help :)
6- I created one batch file using this cmd command:
dir /s /b *.bat > runall.bat
This created a file with the path to each .bat file. I edited this file in Notepad++ to include the word "start" at the beginning of each line. This was done using the "find and replace" dialogue box.
7- I split my newly created batch file into 3 batch files, each with about 130 file names and "start" before each file name.
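Steps 6-7 can also be scripted instead of done by hand in Notepad++; a sketch (the file names below are illustrative, not the actual paths):

```python
# Sketch, assuming a runall.bat that lists one .bat path per line:
# prepend "start " to each line (so each simulation launches in its own
# cmd window) and split the list into 3 roughly equal batch files.
def split_batches(lines, parts=3):
    started = ["start " + line for line in lines]
    size = -(-len(started) // parts)             # ceiling division
    return [started[i:i + size] for i in range(0, len(started), size)]

# Illustrative paths, not the real view files.
lines = ["C:\\views\\view%03d.bat" % i for i in range(400)]
chunks = split_batches(lines, 3)
assert len(chunks) == 3 and sum(len(c) for c in chunks) == 400
# Each chunk would then be written out as its own runN.bat file.
```

Splitting into batches caps how many simultaneous cmd windows (and how much RAM) the server has to handle at once, which matches the ~130-window runs described in steps 9-10.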
8- installed radiance on my server
9- Ran the first batch file on the server. This started 130 cmd windows performing my simulations; CPU usage was anywhere between 90% and 100%, and about 105 GB of RAM was used.
10- It took about 5 hours to complete all 130 simulations. I expected them all to run in 2 hours, but I can't complain, because this would have taken about 260 hours on my laptop. After the simulations were done, I ran the second and then the third batch files (about 15 hours in total).
11- I got 400 valid dgp files. Couldn't be happier!
…
…parametric and generative design through Grasshopper, a visual-programming plug-in for Rhinoceros 3D (one of the most widespread NURBS modelers for architecture and design). The workshop aims to manage and develop the relationship between information and geometry, working on envelope systems under specific conditions. The discretization of surfaces (NURBS or mesh panelization), the modeling of geometry through information (whether from environmental analyses, maps, or databases), and the extraction and management of this information require an understanding of data structures in order to fully manage the process from design to construction. Participants will learn how to build and develop parametric data structures to inform data-driven geometries, and how to extract from such models the information relevant to the construction process.
Module 2 – The workshop, aimed at promoting new digital technologies supporting design and fabrication, will explore the integration of design and prototyping through 3D printing of ceramic material, in order to understand both the behavior of the material and the constraints and opportunities offered by these technologies. Using Grasshopper and a numerically controlled machine, participants will learn how to generate models parametrically and how to create the code for prototyping them (G-code created directly in Grasshopper). The workshop will thus give participants the chance to test their digital work directly by printing it, in order to understand how information articulated through these design tools produces specific morphological and aesthetic effects.…
ut in the next few days.
I've found that getting really good handling of static vs kinetic friction is a pain, though.
Distinguishing between collisions and resting contact generally becomes more complicated than it might first appear.
If the collision with the mesh or ground is 'hard' I project the particle positions, so they can never penetrate, and reverse the component of their velocity normal to the surface (multiplied by the restitution factor). This means that whenever a structure of springs rests on a hard surface, there is usually still some tiny, imperceptible bouncing. That makes it hard to properly apply static friction (which would zero the tangential velocity if the tangential force were below some threshold and the particle were not already sliding), because particles are generally not perfectly on the surface even when apparently at rest. Obviously it's not good to have friction affecting things that aren't touching the surface.
This is the origin of the 'settle' parameter in the settings. The idea is that when the motion of a particle normal to the surface drops below that limit, it is zeroed entirely and the particle comes properly to rest on the surface. I never really like having to use these kinds of weird ad hoc fixes, though.
Alternatively, if the collision is 'soft' I use a spring-like force to push particles out of the ground/mesh.
This can cause problems, because in many cases you just want a simple constraint that particles never go below ground level, and there is a limit to how stiff you can make these spring-like forces.
The advantage, though, is that any particle resting 'on' the ground/surface will actually be slightly below/inside it, and one can use this to decide whether to apply contact friction.
With bouncing collisions it is a little simpler; there is just the question of what to do with the velocity component tangential to the surface. See my comment at the bottom here for more on the 'tumble' setting:
http://www.grasshopper3d.com/video/kangaroo-traction-test
So you see, it is challenging to get one consistent model that gives correct behaviour in all cases (e.g. a simple static 'leaning ladder' problem, a bouncing particle, and vehicle wheel traction) without several of these odd-seeming, non-intuitive settings.
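For concreteness, here is a minimal sketch of the hard-collision response described above (my reading of it, not Kangaroo's actual code), for a ground plane at y=0:

```python
# Sketch of the 'hard' collision scheme: project the particle out of the
# ground, reflect the normal velocity component scaled by restitution,
# and zero it below the 'settle' threshold so the particle comes
# properly to rest (and tangential friction can then be applied).
def collide_ground(pos, vel, restitution=0.5, settle=0.01):
    x, y, z = pos
    vx, vy, vz = vel
    if y < 0.0:
        y = 0.0                      # hard projection: never penetrate
        vy = -vy * restitution       # reverse the normal component
        if abs(vy) < settle:
            vy = 0.0                 # 'settle': kill residual bouncing
    return (x, y, z), (vx, vy, vz)

pos, vel = collide_ground((0.0, -0.02, 0.0), (1.0, -0.5, 0.0))
# The particle is pushed back to the surface; the downward velocity
# becomes a smaller upward one, while the tangential vx is untouched.
assert pos[1] == 0.0 and vel[1] == 0.25
```

Without the settle threshold, repeated restitution bounces shrink geometrically but never reach zero, which is the tiny imperceptible bouncing mentioned above.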
…
Added by Daniel Piker at 11:11am on October 18, 2012
m is different from email spam.
Email spammers want you to buy their product. You are the target of the ad contained in each email spam you receive. Comment/web spammers want your readers to buy their product. You (the blogger, author, moderator) are not the target.
2. Web spammers are social engineers.
Email spammers write messages to get your attention. Comment spammers write messages to escape your attention. They want you to believe they are real bloggers, real people, writing real comments, so you’ll approve the comment and publish it on your site. They use flattery, appeal to your good nature, and simply lie in order to convince you to give them the benefit of the doubt.
3. Web spammers are basically advertising on your blog..
..and they're keeping all of the profits. They’re not even asking your permission first. Right now someone is offering to sell links from your blog to anyone willing to pay a few dollars (or a few cents). If your blog is well known, it may even be listed by name, with backlinks for sale at a set price.
4. It’s all about the backlinks.
Web spammers are selling links from your blog to their clients. They do this to game the search engines and trick your readers into visiting dubious web sites. Their clients are sometimes seemingly harmless, but are often peddling fake pills, porn, scams and malware. Sometimes they’ll use “buffer sites” – that is, innocent looking web pages intended to disguise the fact that they’re really advertising something more sinister.
5. Spammers employ humans.
Not all spam is delivered by spambots. Spammers are increasingly using humans to write and post comments by hand. Typically they are exploiting low-paid workers in internet cafes, schools and factories. Sometimes they are viral marketers paid to promote a new product. Either way they are trying to exploit your blog for their profit – and hoping to do it without you noticing.
…
Added by Danny Boyes at 4:51am on October 24, 2013