This may or may not be a very plausible idea, but I'm having trouble getting it to work.
Our robot has an arm that extends our minibot. It needs to attach pretty accurately onto the pole, and that's rather difficult: we only have about 2 or 3 inches of allowable error. What I'm trying to do is write some code that tracks the FIRST logo at the bottom of the minibot pole and automatically aligns the robot. I wrote the code and tested it against some pictures I took with my camera, and it worked fine.
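Just to make the idea concrete, the math for "track the logo and align" is pretty simple. Here's a minimal sketch in Python (the real code is LabVIEW on the cRIO; all names, the field of view, and the gain here are made-up placeholders): turn the matched template's x coordinate into an angular error, then into a proportional turn command.

```python
def alignment_error(template_x, image_width, fov_deg=47.0):
    """Angular error of the found logo from image center.

    template_x: x coordinate (pixels) of the matched logo's center.
    image_width: frame width in pixels.
    fov_deg: assumed horizontal field of view of the camera (hypothetical).
    Returns degrees; negative means the logo is left of center,
    so the robot should turn left.
    """
    pixel_error = template_x - image_width / 2.0
    return pixel_error / image_width * fov_deg

def steering_correction(angle_deg, kp=0.02):
    """Proportional steering output; kp is a made-up gain to tune."""
    return kp * angle_deg
```

So a logo dead-center gives zero correction, and the sign of the output tells the drive code which way to turn.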
Today we unbagged the robot, and the vision code wasn't really working at all. That's understandable, since this is my first time writing vision code, so I expected some errors. But when I went into the vision loop to probe around and see what was wrong, everything was grayed out. You know how the indicators on probes are grayed out when that section of code isn't running? That's what I was getting.
Even stranger, the image preview I put on the front panel showed exactly one processed frame. It processed that single frame and then never updated.
If it helps at all, the sequence I'm using to process the image is: adjust the gamma and contrast, match a template of red, gray, and blue, and then read out the coordinates of the found template. I've FTPed the template onto the cRIO. Is there some step I'm missing that needs to be done when working with vision on the cRIO?
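For reference, the pipeline described above (gamma adjustment, template matching, coordinate extraction) looks roughly like this outside LabVIEW. This is a bare-bones Python/NumPy sketch with hypothetical names, not the NI Vision VIs themselves; it uses a brute-force sum-of-squared-differences search, whereas NI Vision's pattern matching is far more sophisticated.

```python
import numpy as np

def adjust_gamma(img, gamma=1.5):
    """Gamma-correct an 8-bit grayscale image (values in 0..255)."""
    norm = img.astype(np.float64) / 255.0
    return (norm ** gamma * 255.0).astype(np.uint8)

def find_template(img, template):
    """Return (x, y) of the top-left corner of the best match.

    Brute-force sum-of-squared-differences search over every
    position; smallest score wins. Just illustrates the idea of
    'find the template, then read out its coordinates'.
    """
    ih, iw = img.shape
    th, tw = template.shape
    best, best_xy = None, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = img[y:y + th, x:x + tw].astype(np.float64)
            score = np.sum((patch - template) ** 2)
            if best is None or score < best:
                best, best_xy = score, (x, y)
    return best_xy
```

The returned coordinates are what you'd then feed into the alignment step.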
Thank you very much for this. I can post my code if you need me to. In fact, I have a gitweb server that you can download a copy of the code from if you want to take a look. My hard drive recently died so my last update is kind of out of date. I'll update it soon. By soon, I mean as soon as I post this and go to my team laptop. (I just need to find my private key somewhere...)
http://98.243.163.242/gitweb