generative modeling for Rhino
NASA, 7:35 AM (GMT+1): "Touchdown confirmed for Mars Curiosity!!!"
To celebrate this little step I've made a GH definition that uses NASA height data to build the Mars surface. It's not very advanced, and it has some bottlenecks that would be great to solve.
Beginners might find it interesting to see the method used to extract values from an image with a custom colour scheme representing data (it is not possible to extract hue values directly), or the simple displacement method used. These are the curves that follow the colours in the reference colour gradient used by NASA, in RGB (top) and HSV space:
You can see that there is no linearity or constant rate of change in any channel, in either RGB or HSV space.
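Since no single channel changes monotonically, the colour-to-height lookup has to work by nearest match, which is what the Find Similar Members component does. Here's a minimal sketch of that idea in plain Python (outside Grasshopper); the gradient table is hypothetical, a real one would be sampled from the NASA colour bar at regular intervals:

```python
# Sketch of the colour-to-height lookup (plain Python, no Grasshopper needed).
# The gradient table below is made up; sample the real NASA colour bar to
# build one for actual use.

def nearest_height(pixel, gradient):
    """Return the height whose reference colour is closest (in RGB space)
    to the sampled pixel. gradient is a list of ((r, g, b), height) pairs."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(gradient, key=lambda entry: dist2(entry[0], pixel))[1]

# Hypothetical gradient: blue (low) through white (high), heights in metres.
gradient = [
    ((0, 0, 255), -8000),
    ((0, 255, 0), -2000),
    ((255, 255, 0), 2000),
    ((255, 0, 0), 8000),
    ((255, 255, 255), 14000),
]

print(nearest_height((10, 240, 20), gradient))  # closest to pure green -> -2000
```

A brute-force nearest-colour search like this is O(pixels × gradient entries), which is exactly why it becomes a bottleneck on a big image.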
There are some bottlenecks in the definition that you should be careful with: the data comparison component (Find Similar Members) and the surface closest point (Surface CP). If you want to improve it, please do, but send me the modified version ;)
Be careful not to raise or lower the slider values without saving first, or without knowing exactly how much data your computer can calculate without running into memory problems. The main sample image is really big, and the higher the resolution you set, the more memory sampling and calculation will take.
Thanks to Andrea Graziano for data links :)
Link to my blog post: Blurrypaths
It's very cool, man!!
Thanks for sharing.
Great work you Spanish genius !
To solve the first bottleneck: when a mesh is created, texture coordinates are usually added to the mesh that correspond to the UV values of the original surface. You can retrieve them using a scripting component (see attached def.).
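The idea can be sketched without Rhino: if each mesh vertex already carries a normalized (u, v) texture coordinate, the image sample becomes a direct pixel index instead of a closest-point search. The names below are made up for illustration; in a GH scripting component you would read the same values from the mesh's stored texture coordinates instead of building them by hand.

```python
# Sketch: sample an image directly through stored texture coordinates,
# avoiding the Surface CP search entirely. The image is a plain 2-D list
# of values here; in Grasshopper it would be the sampled bitmap.

def uv_to_pixel(uv, width, height):
    """Map a normalized (u, v) texture coordinate to (row, col) pixel indices."""
    u, v = uv
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return row, col

def sample(image, uv):
    """Look up image[row][col] for uv in [0, 1] x [0, 1] -- a direct index,
    no per-vertex closest-point computation needed."""
    rows, cols = len(image), len(image[0])
    r, c = uv_to_pixel(uv, cols, rows)
    return image[r][c]

# Toy 2x2 "height image": direct lookup, no search.
image = [[0, 10],
         [20, 30]]
print(sample(image, (0.9, 0.9)))  # bottom-right pixel -> 30
```

The point is that the cost per vertex drops from a surface closest-point solve to a constant-time array lookup.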
To solve the second bottleneck, I googled for some info on the image. I found that if you go to NASA's MOLA website, you can download .img files that can be converted to grayscale .png files. They also come with a .lbl text file containing all the necessary info, like the min/max values, projection type and so on. There's no need to use the Find Similar component if you use a grayscale image. Besides, the image you included was shaded (maybe you attached the wrong one), so it isn't suitable for creating a relief map.
In other words, I didn't really solve your second bottleneck. I just used a more convenient data source.
Here's a screenshot with a mesh at 1/4 the resolution of the image source.
Since the attached definition contains the relief image, it's 26 MB.
Can't post files > 5 MB, so here's a link:
Unfortunately all those images are shaded, so they are not a good source to generate geometry from.
About detaching the image: I wanted the definition to just work when opened. I'll attach a lower-resolution version here, since the other link probably won't keep working for long.
Hey! I've found what you told me :) http://pds-geosciences.wustl.edu/missions/mgs/megdr.html
I'm gonna play a little bit more this afternoon :)
Thanks for the advice!
I've found these tools to convert label objects from http://pds-geosciences.wustl.edu/missions/mgs/megdr.html to JPEG, GIF or PNG files.
To convert the .img format to .png I used this.
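If you'd rather skip the external tool, the conversion can also be sketched in a few lines of Python. This assumes the raw .img holds 16-bit signed big-endian samples; check SAMPLE_TYPE and SAMPLE_BITS in the accompanying .lbl file, since that layout is an assumption here, not something the tools above guarantee.

```python
import struct

# Sketch of a .img -> grayscale conversion, assuming 16-bit signed
# big-endian samples (verify against the .lbl metadata before using).

def decode_img(raw_bytes):
    """Unpack raw bytes into a flat list of signed 16-bit elevation values."""
    count = len(raw_bytes) // 2
    return list(struct.unpack(">%dh" % count, raw_bytes))

def to_grayscale(values):
    """Linearly rescale arbitrary elevations to 0-255 for a grayscale image."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on a flat image
    return [round(255 * (v - lo) / span) for v in values]

# Synthetic two-sample payload standing in for a real .img file.
raw = struct.pack(">2h", -100, 300)
print(to_grayscale(decode_img(raw)))  # [0, 255]
```

The grayscale list can then be written out as a .png row by row with any image library.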