Last year, we managed to get our robot to track the basketball rings successfully. Unfortunately, when we arrived at the competition, the lighting was different, causing our vision processing to stop working. We could not recalibrate the camera on the playing field, and so were forced to abandon the idea. If we ran the project via LabVIEW, we could select the color on a camera feed in the vision processing VI. To avoid the same problem this year, we have considered selecting the color on the dashboard and sending it to the robot. This is inefficient, though, and requires us to add another layer of communication to the robot. I would like to know how other teams are coping with this and whether anyone has a better way of doing it. The vision processing VI we used is attached.
You'd better go with the monochrome option of the image processing rather than color for better “compatibility”. We used Intensity last year, but there seem to be a few better options in this year’s code.
Also, when you use the monochrome option, it’s very easy to implement lower-limit and upper-limit value sliders in your custom dashboard (you can do that with color too, but I pity the poor driver who would dare to mark a spot on the camera image’s frame mid-game).
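To make the slider idea concrete, here is a minimal sketch of what those two dashboard sliders would feed into on the processing side. It uses NumPy as a stand-in for the NI Vision threshold VI; the function name and the limit values are illustrative, not part of any team’s actual code.

```python
import numpy as np

def threshold_intensity(gray, lower_limit, upper_limit):
    """Keep only pixels whose intensity falls inside [lower_limit, upper_limit].

    gray is a 2-D uint8 array (one monochrome camera frame); the two limits
    are the values the dashboard sliders would send over.
    """
    mask = (gray >= lower_limit) & (gray <= upper_limit)
    return mask.astype(np.uint8) * 255  # binary image: 255 = target candidate

# Example: a bright reflective target column on a dark background
frame = np.array([[10, 200, 30],
                  [220, 240, 15],
                  [5, 210, 25]], dtype=np.uint8)
binary = threshold_intensity(frame, 180, 255)
```

Because the limits arrive as plain numbers, retuning for new field lighting is just dragging the sliders until the binary image shows only the target, with no mid-match clicking on the camera feed.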
Also, use an LED ring around your camera’s lens! I can’t stress it enough.
I would definitely second the idea of putting an LED ring on your camera - it at least doubles the effectiveness of the tracking. In addition, all teams should be allowed time to calibrate their cameras at regional events. Did you not have time, or were you not allowed?
We did all the vision processing inside the dashboard program last year, so adjusting the color settings from the dashboard was quite easy! Basically, the robot sent the camera image to the driver station, and the driver station sent rectangle coordinates back to the robot. All the image processing happened in between.
And yes - you really really need a ring light. We found that green is a good option, since it’s not a hue that you usually see elsewhere on the field.
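For anyone curious what the dashboard-side step looks like after thresholding for the ring light’s color, here is a rough sketch of the “find the target rectangle” stage, again in NumPy as a stand-in for the NI Vision particle-analysis VIs. The function name is made up for illustration.

```python
import numpy as np

def find_target_rect(mask):
    """Bounding rectangle (x, y, w, h) of the lit pixels in a binary mask,
    or None if nothing passed the threshold. This is the rectangle whose
    coordinates the dashboard would send back down to the robot."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))

# A toy 6x8 binary image with a lit 2x3 blob where the target would be
mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:4, 3:6] = 255
rect = find_target_rect(mask)
```

The payoff of this split is that only four small numbers go back over the radio, instead of any image data.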
We did that too and honestly it tends to be our favorite way to go with vision processing so long as we don’t have a coprocessor. Vision processing is eating the cRIO alive, and considering the default robot/Dashboard code already sends like 2 or 3 int/bool values from the SmartDashboard to the robot, sending the X/Y relative coordinates of the target, the distance from the target in ft and whether or not the target is in the center of the frame in bool shouldn’t be such a problem as far as bandwidth is concerned
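As a sketch of how few values that actually is: given the target’s bounding rectangle, the dashboard can reduce it to an offset, a distance estimate, and a centered flag before sending anything to the robot. The focal length, target width, and tolerance below are made-up illustrative numbers, not measured camera constants, and the pinhole-camera distance formula is only a rough approximation.

```python
def target_telemetry(rect, frame_w=320, frame_h=240,
                     target_width_ft=2.0, focal_px=340.0,
                     center_tol_px=10):
    """Reduce a bounding rect (x, y, w, h) to the handful of values worth
    sending to the robot: pixel offsets from frame center, an approximate
    distance in feet, and whether the target is centered."""
    x, y, w, h = rect
    cx = x + w / 2.0
    cy = y + h / 2.0
    offset_x = cx - frame_w / 2.0
    offset_y = cy - frame_h / 2.0
    # Pinhole-camera estimate: distance = real width * focal length / pixel width
    distance_ft = target_width_ft * focal_px / w
    centered = abs(offset_x) <= center_tol_px
    return offset_x, offset_y, distance_ft, centered

# A 20x20-pixel target dead center in a 320x240 frame
telemetry = target_telemetry((150, 110, 20, 20))
```

Two floats and a bool per frame is negligible next to streaming the camera image itself, which is the point being made above about bandwidth.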
The Rookie Kits did not have a camera in them this year, so the rookie teams that missed out on the few Axis cameras available through FIRST Choice will need to purchase their own cameras. Even though we are rookies, we were hoping to use a vision targeting system to help alleviate some of the pressure on the driver.
Does anyone have any recommendations for a good vision camera system and LED ring?
We did indeed acquire 3 different light rings: white, red, and green. Of these three we discovered that green worked best and white was the worst. We are indeed looking at monochrome. We tried having the vision processing happen in the dashboard, but this slowed down the camera frame rate from 20 frames per second to about 7.
Tangart, if you read the vision processing white paper, there is a link to an online shop that sells the LED rings at a reasonable price.