Using NI Vision Assistant

We have used this a little for tracking the backboard rectangles, and we had some luck with the edge-detection feature in Vision Assistant. Has anyone else played around with this? Any thoughts on how to process images in the background on a driver station laptop and then send directions back to the robot in a reasonable amount of time?
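One common way to get results from the driver station back to the robot quickly is a small UDP packet per frame; it's lightweight and a dropped packet just means you use the next frame's result. Here's a minimal sketch of the sending side. The robot address and port are placeholders, not anything from Vision Assistant, and the single-float payload is just an example of what "directions" might look like:

```python
import socket
import struct

# Placeholder address: substitute your robot's IP and a port of your choosing.
ROBOT_ADDR = ("10.0.0.2", 1130)

def send_target_offset(sock, robot_addr, offset_deg):
    """Pack the horizontal offset to the target (degrees, big-endian float)
    and fire it at the robot over UDP."""
    sock.sendto(struct.pack(">f", offset_deg), robot_addr)

# Typical use after each processed frame:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   send_target_offset(sock, ROBOT_ADDR, computed_offset)
```

On the robot side you'd run a matching receive loop that unpacks the float and feeds it to your drive code; since UDP is connectionless, neither end blocks waiting on the other.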

I would use a color plane extraction to get a grayscale image (which plane is your choice; usually whichever one gives the best contrast). What our team did was use the tucked-away vision tutorial, which covers range finding and specific color thresholds as well.
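For anyone unclear on what those two steps actually do, here's a tiny plain-Python sketch of the idea (Vision Assistant does this natively; the function names and the example values below are just illustrative). Extracting one plane of an RGB frame gives a grayscale image, and thresholding that gives a binary mask of candidate target pixels:

```python
def extract_plane(frame, plane):
    """Keep one color plane (0=R, 1=G, 2=B) of an RGB frame as grayscale."""
    return [[pixel[plane] for pixel in row] for row in frame]

def threshold(gray, lo, hi):
    """Binary mask: 1 where the intensity falls inside [lo, hi], else 0."""
    return [[1 if lo <= v <= hi else 0 for v in row] for row in gray]

# Tiny 2x2 example frame of (R, G, B) pixels; two pixels are "green enough".
frame = [[(10, 220, 30), (10, 40, 30)],
         [(200, 200, 200), (0, 0, 0)]]

gray = extract_plane(frame, 1)    # green plane -> [[220, 40], [200, 0]]
mask = threshold(gray, 150, 255)  # bright pixels -> [[1, 0], [1, 0]]
```

The threshold range (150-255 here) is the part you'd tune per camera and lighting; in Vision Assistant that's the slider in the threshold step.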