Process one Image

Hi everyone. For the life of me I cannot wrap my head around how to go about vision processing.

All I want to do:

  1. Take one picture at the beginning of autonomous.
  2. Identify whether the dynamic vision target is present.
  3a. If present, drive forward and launch the ball. (This code is already in place.)
  3b. If not present, wait 5 seconds, then drive forward and launch the ball.

I have run the 2014 Image Processing VI, and it can identify the targets. I have also gone through Tutorial 8 - “Integrating Vision into the robot code” on the Getting Started screen. I understand the basics of how the VI identifies and scores the targets.

At this point I’m stuck. I can’t seem to find any resources on how to actually capture the image, process it, and decide whether to shoot or wait. I understand that processing on the cRIO is slow, but would it be workable if it only has to process a single image?

It’s difficult being the only programmer! I just want to thank the CD community for the help they have given on other topics. I hope to learn enough this year to get more team members on board with programming next year.

Instead of taking a picture, our bot does the same thing except with real-time video capture.
For the entirety of autonomous, our bot watches for the light to flash and moves forward once it is detected. The vision code sets a global variable named “State” to true whenever a target (i.e., the light) is detected, and a case structure in the autonomous code runs the drive routine while “State” is true and keeps the robot still while it is false. Unfortunately, I do not have the exact code on me right now, but when I get back to the team’s computer I can go into more detail.
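In the meantime, the rough shape of it is something like the sketch below (written out as plain Java rather than our actual LabVIEW, with placeholder names standing in for the real detection and drive code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Rough text equivalent of the LabVIEW setup described above (placeholder names).
// One loop does the vision check and writes the shared "State" flag; the
// autonomous loop sits still until the flag goes true, then runs the drive code.
public class HotGoalWatcher {
    private static final AtomicBoolean state = new AtomicBoolean(false); // the "State" global

    // Vision loop: set the flag whenever a target (the flashing light) is seen.
    static void visionLoop() {
        while (!Thread.currentThread().isInterrupted()) {
            if (targetDetectedInLatestFrame()) {   // stand-in for the real detection
                state.set(true);
            }
            sleepMs(50);
        }
    }

    // Autonomous loop: the case-structure equivalent -- stay still while the
    // flag is false, run the normal auto routine once it flips to true.
    static void autonomousLoop() {
        while (!state.get()) {
            sleepMs(20);                           // remain still while false
        }
        driveForwardAndLaunch();                   // existing auto code
    }

    static boolean targetDetectedInLatestFrame() { return false; } // stub
    static void driveForwardAndLaunch() { }                        // stub

    static void sleepMs(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```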

Our team uses real-time video for auton. We also do our vision processing and target identification on the dashboard so as not to overload the cRIO.

However we end up doing it, the main point I still can’t figure out is how to send a signal to the auto VI indicating that the target was found.

You can set a global flag in your vision processing code that the Autonomous VI can read.
Be sure to use an image captured after the start of autonomous, not one captured before (when everything is lit up).
In other words, don’t set the flag constantly; wait until you are sure.
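Something along these lines, just to show the shape of the check (the names are hypothetical, illustrating the idea in text form rather than LabVIEW):

```java
// Sketch of the "wait until you are sure" idea (hypothetical names).
// Only count frames grabbed after autonomous started, and require a few
// detections in a row before latching the flag the Autonomous VI reads.
public class TargetFlag {
    private static final int REQUIRED_HITS = 3;  // consecutive detections before trusting it
    private final long autoStartMillis;          // when autonomous began
    private boolean targetConfirmed = false;     // the global flag autonomous reads
    private int consecutiveHits = 0;

    public TargetFlag(long autoStartMillis) {
        this.autoStartMillis = autoStartMillis;
    }

    // Call once per processed frame.
    public void update(boolean targetSeen, long frameTimestampMillis) {
        if (frameTimestampMillis < autoStartMillis) {
            return;                              // ignore images captured before auto started
        }
        consecutiveHits = targetSeen ? consecutiveHits + 1 : 0;
        if (consecutiveHits >= REQUIRED_HITS) {
            targetConfirmed = true;              // latch: once set, it stays set
        }
    }

    public boolean isTargetConfirmed() {
        return targetConfirmed;
    }
}
```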

[Attachment: Adding Global Variables]

I was able to figure it out last Thursday. Thanks for the help, guys. I went back and looked through the tutorials again; apparently I was following the steps that have the cRIO process the image and was missing something. I redid it, this time offloading the processing onto the dashboard, and from there I simply sent back a boolean value, like Mark suggested. I also added a failsafe in auto that will make the robot fire if it has not detected the target after 6 seconds. This way we ensure an attempt at a shot even if our camera is having problems.
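For anyone who finds this later, the decision logic in auto boils down to something like the sketch below (our real code is LabVIEW and the boolean comes back from the dashboard; the names here are just placeholders):

```java
// Placeholder sketch of our auto decision (actual implementation is LabVIEW).
// Wait for the dashboard to report the hot target; if 6 seconds pass without
// a detection, fire anyway so we always attempt a shot.
public class AutoWithFailsafe {
    private static final double TIMEOUT_SECONDS = 6.0;  // camera-failure failsafe

    public static void runAutonomous() {
        long start = System.currentTimeMillis();
        boolean hot = false;
        while (!hot && (System.currentTimeMillis() - start) < TIMEOUT_SECONDS * 1000) {
            hot = dashboardReportsHotTarget();           // boolean sent back by the dashboard
            sleepMs(20);
        }
        driveForwardAndLaunch();                         // shoot either way
    }

    static boolean dashboardReportsHotTarget() { return false; } // stub
    static void driveForwardAndLaunch() { }                      // stub

    static void sleepMs(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { }
    }
}
```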

I have yet to test it on our actual robot, but will be able to during the rest of our un-bag time tonight.