Grasshopper

algorithmic modeling for Rhino

Or, as I lovingly call it: To Measure a Shadow

RECORDING TRACKED COLLINEAR GLYPHS WITH EMGU CV

This project is the first thing I am posting to this community after two years of extensive exploration into coding, bitmap processing, the Kinect, video tracking, and more! The finished script is sort of a solution to a lot of questions: Why can't Firefly for the Xbox 360 Kinect use the coordinate mapper function? How do I track glyphs in Grasshopper in real time and record them? How do I maintain a functional FPS while recording? How do I convert between screen resolutions and reformatted point clouds?


The script is structured so that one component automatically triggers the recording of a point cloud and the other necessary data, shuts itself down, and then starts up the next Calibration Component. The idea is that if the camera is immediately restarted under a coordinate mapper function, the offset can be observed and used for calibration.
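For context, the coordinate mapper referred to here is the Kinect for Windows SDK v1 CoordinateMapper, which aligns depth pixels with color pixels. A minimal sketch of mapping a single depth pixel into color space might look like the following; this is illustrative only, not the actual component code, and the stream formats are assumptions.

```csharp
using Microsoft.Kinect;

// Assumes a started KinectSensor with both streams enabled, e.g.:
//   sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
//   sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
ColorImagePoint MapDepthPixelToColor(KinectSensor sensor, int x, int y, int depthMm)
{
    var depthPoint = new DepthImagePoint { X = x, Y = y, Depth = depthMm };

    // Returns the color-frame pixel that sees the same physical point,
    // which is the alignment the original Firefly streams never exposed.
    return sensor.CoordinateMapper.MapDepthPointToColorPoint(
        DepthImageFormat.Resolution640x480Fps30,
        depthPoint,
        ColorImageFormat.RgbResolution640x480Fps30);
}
```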



The physical glyphs are squares of a highly saturated color. To aid the consistency of the results, four square glyphs were mounted to a skinny length of steel.

I wrote the ColorBinary and D-Range components seen in the first attached image separately to make the Firefly2Emgu component a smaller undertaking. Processing and using the Kinect V1 Color Stream and Depth Stream could be a tutorial in and of itself, as could the entire other world of Emgu CV.
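To give a rough idea of what a saturation-based color binary looks like in Emgu CV, here is a generic sketch; it is not the actual ColorBinary component, and the threshold value is a placeholder you would tune per glyph color.

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

// Convert a color frame to HSV and keep only pixels whose saturation is
// high enough to belong to one of the brightly colored square glyphs.
Image<Gray, byte> SaturationBinary(Image<Bgr, byte> frame, byte minSaturation)
{
    Image<Hsv, byte> hsv = frame.Convert<Hsv, byte>();
    Image<Gray, byte> saturation = hsv.Split()[1];   // channel 1 = S in HSV
    return saturation.ThresholdBinary(new Gray(minSaturation), new Gray(255));
}
```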

The second attached image shows the point cloud of the results. I let the component output three indices to better show the process of the Calibration Component. For each glyph, the bottom-right point is the original output of the tracker without any calibration. From there, the upper-left-most point is the square offset from a value isolated by observing the post process. The third point is the end result created by my Calibration Component.
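The calibration step itself amounts to shifting the raw tracker points by the offset observed in the post process. A bare-bones sketch of that idea (the offset vector is whatever value you isolate by inspection, not a number from my setup):

```csharp
using System.Collections.Generic;
using System.Linq;
using Rhino.Geometry;

// Apply the observed offset to every raw tracked point so the calibrated
// points line up with where the glyphs actually sit in the point cloud.
List<Point3d> ApplyCalibration(IEnumerable<Point3d> rawPoints, Vector3d observedOffset)
{
    return rawPoints.Select(p => p + observedOffset).ToList();
}
```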

One limitation of this script is that after recording with the first component, there is significant computation time. Another issue is that the end result is limited to the original resolution of the grid set in the first component, which is kept small to improve FPS. I understand that many people have moved on from the Kinect 360, but I was curious why the coordinate mapper was not worked into the original Firefly design and wanted to explore alternative solutions. I have been sitting on a newer Kinect for some time and can't wait to see the FPS of a newer, single component with a similar mission statement. At any rate, I plan to create a similar post-process setup with the new Kinect and compare.
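On the resolution limitation: going between a pixel in the full color frame and a cell in the reduced point-cloud grid is just scaling plus a row-major index, roughly like this (the 640x480 and reduced-grid sizes are example values, not the exact numbers from my script):

```csharp
// Map a pixel in the full color frame (e.g. 640x480) to the row-major index
// of the nearest cell in a downsampled point-cloud grid (e.g. 160x120).
int PixelToGridIndex(int px, int py,
                     int frameWidth, int frameHeight,
                     int gridWidth, int gridHeight)
{
    int gx = px * gridWidth / frameWidth;    // integer scale in x
    int gy = py * gridHeight / frameHeight;  // integer scale in y
    return gy * gridWidth + gx;
}
```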

This is a work in progress, but I wanted to share.

Many thanks to this forum and Andy Payne for all of the Firefly tools and inspiration.


Replies to This Discussion

Hi Nathaniel,

Thanks for sharing this project! I'm really interested in what you've done here. I'm trying to use a similar approach to map physical architectural models and bring them into Rhino in real time. Using the AR Sandbox as inspiration, I have a Kinect v2 and am mapping the placement of columns and beams using depth tracking and colour sensing. I have a couple of questions.

1. Did you write your ColourBinary and D-Range components yourself?

2. I notice you have something called FireflyExtensions and KinectExtensions. What are those? I have never seen either plugin.

I'd love to chat further and if it's easier you can reach me by email at alexandra.pittiglio@gmail.com

Hi Alexandra, 

Thank you for your interest! To answer your questions: yes, I wrote the ColourBinary and D-Range components myself, because I didn't want so many inputs coming into one single component. Also, the Emgu CV color binary options seemed to slow everything down significantly. The D-Range component in my photo is actually just a visual aid when running the script, to tell where your body is so you can get a good recording. The FireflyExtensions and KinectExtensions categories are just how I wanted to keep track of them at the time. I haven't released them anywhere yet, but they really should go into the Firefly library. Your project sounds really cool. I'll send you an email so we can chat.
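For anyone curious, a D-Range style visual aid can be as simple as masking the depth frame to a near/far band; a rough sketch, assuming the depth values have already been unpacked to millimetres:

```csharp
using Emgu.CV;
using Emgu.CV.Structure;

// Build a binary image from Kinect depth values (millimetres), keeping only
// pixels inside the working range so you can see where your body is.
Image<Gray, byte> DepthRangeMask(short[] depthMm, int width, int height,
                                 short nearMm, short farMm)
{
    var mask = new Image<Gray, byte>(width, height);
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            short d = depthMm[y * width + x];
            mask.Data[y, x, 0] = (d >= nearMm && d <= farMm) ? (byte)255 : (byte)0;
        }
    return mask;
}
```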

Nate

Hi Nathaniel,

I'm very fascinated by your work. I'm trying to integrate a computer vision system in GH + IronPython, but my attempts are not going well.

Have you got any references or hints for configuring/installing Emgu CV with IronPython?

Hi Carlo, 

I wrote my components in C#. From what I see, Emgu CV says it can be used from IronPython. The big difference here is that I used the Kinect depth data at the same time as Emgu CV, so I have to start with the Kinect. The V1 Microsoft Kinect library I used is the C# one. You might have problems using Firefly to initialize the image because of how it is written.

I don't see why you can't write your own Grasshopper bitmap component from scratch, with IronPython importing it and converting it to an Emgu CV image.
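In C# terms (the same calls are reachable from IronPython through clr references), that conversion is basically one line. Note this is a sketch: the constructor below is the Emgu CV 2.x/3.x form, and newer 4.x builds use a ToImage extension instead.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

// Wrap a GDI+ bitmap (e.g. one produced by a Grasshopper bitmap component)
// as an Emgu CV image so the OpenCV routines can operate on it.
Image<Bgr, byte> ToEmguImage(Bitmap bitmap)
{
    return new Image<Bgr, byte>(bitmap);   // Emgu CV 2.x/3.x constructor
}
```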
