Quote:
Originally Posted by AirplaneWins
Could you explain your vision tracking process this year? I heard you guys used 2 cameras. And what coprocessor did you use, if any?
Sorry for the delayed response. Life got in the way of robots again.
As Travis said, we wanted to do stereo, but we never got around to verifying that it worked well enough to start using the distance it reported. One side effect of stereo cameras was that we avoided the transforms needed to compensate for the camera being off-center. Our shooter didn't have any space above or below the ball for a camera: the bottom of the shooter rested on the bellypan, and the top just cleared the low bar.
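For reference, the distance would have come from the standard disparity relation for a rectified, horizontally aligned pair. A minimal sketch, with made-up numbers rather than our calibration:

```cpp
// Standard disparity relation for a rectified, horizontally aligned stereo
// pair. All numbers here are made up for illustration, not our calibration.
#include <cstdio>

// depth = focal_length_px * baseline_m / disparity_px
double StereoDepth(double focal_length_px, double baseline_m,
                   double x_left_px, double x_right_px) {
  const double disparity = x_left_px - x_right_px;
  if (disparity <= 0.0) return -1.0;  // No valid match in front of the rig.
  return focal_length_px * baseline_m / disparity;
}

int main() {
  // Hypothetical: 700 px focal length, 20 cm baseline, a target corner at
  // x=660 in the left image and x=610 in the right.
  printf("depth: %.2f m\n", StereoDepth(700.0, 0.20, 660.0, 610.0));
  return 0;
}
```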
We did the shape detection on the Jetson TK1 and passed a list of detected U shapes back to the roboRIO over UDP in a protobuf, including the coordinates of the 4 corners from each camera. We found that we didn't need color thresholding, just intensity thresholding followed by shape detection. This ran at 20 Hz at 1280x1024 (I think), all on the CPU. The roboRIO then matched up the targets based on the angle of the bottom of the U.
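To give a feel for the data flow, the per-frame message contained something like the following. The real message was a protobuf; this struct and these field names are illustrative guesses, not our actual .proto:

```cpp
// Rough shape of the per-frame message sent Jetson -> roboRIO over UDP.
// The real message was a protobuf; this struct and these field names are
// illustrative guesses, not our actual .proto.
#include <cstdint>
#include <vector>

struct Corner {
  float x;  // Pixel coordinates in the (I think) 1280x1024 image.
  float y;
};

struct UTarget {
  Corner corners[4];       // The 4 corners of one detected U shape.
  float bottom_angle_rad;  // Angle of the bottom of the U, which the
                           // roboRIO used to match up targets.
};

struct VisionFrame {
  uint8_t camera_index;          // Which of the two cameras saw this.
  int64_t kernel_capture_ns;     // When v4l2 said the kernel got the image.
  int64_t userspace_receive_ns;  // When Jetson userspace got the image.
  int64_t send_ns;               // When this message left the Jetson.
  std::vector<UTarget> targets;  // All U shapes found in this frame.
};
```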
We were very careful to record timestamps all the way through the system: the timestamp at which v4l2 reported the image was received by the kernel, the timestamp at which it was received by userspace on the Jetson, the timestamp at which it was sent to the roboRIO, and the timestamp at which the processed result was received on the roboRIO. That let us back out the projected time the image was captured on the Jetson, in the roboRIO's clock, to within a couple of milliseconds. We then saved all the gyro headings over the last second along with the times at which they were measured, and used those two pieces of data to interpolate the robot's heading at the moment the image was taken, and therefore the current heading of the target. This, along with our well-tuned drivetrain control loops, let us stabilize on the target very quickly.
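Roughly, the latency compensation looked like this. The names, the deque buffer, and the fixed one-second window are just a sketch of the idea, not our actual code:

```cpp
// Sketch of the latency compensation described above: back the capture time
// out into the roboRIO's clock, then interpolate buffered gyro headings at
// that time. Illustrative only, not our actual code.
#include <cstdint>
#include <deque>

// Estimate the capture time in the roboRIO's clock: take the roboRIO
// receive time, subtract an estimate of the one-way network latency
// (e.g. half of a measured round trip), and subtract the capture-to-send
// delay measured on the Jetson's own clock.
int64_t EstimateCaptureTimeNs(int64_t roborio_receive_ns,
                              int64_t jetson_capture_ns,
                              int64_t jetson_send_ns,
                              int64_t network_latency_ns) {
  return roborio_receive_ns - network_latency_ns -
         (jetson_send_ns - jetson_capture_ns);
}

struct GyroSample {
  int64_t time_ns;     // roboRIO monotonic clock.
  double heading_rad;  // Gyro heading measured at that time.
};

class HeadingHistory {
 public:
  // Record a sample and drop anything older than one second.
  void Push(int64_t time_ns, double heading_rad) {
    samples_.push_back({time_ns, heading_rad});
    while (time_ns - samples_.front().time_ns > 1'000'000'000) {
      samples_.pop_front();
    }
  }

  // Linearly interpolate the heading at capture_time_ns, clamping to the
  // oldest/newest sample if the time falls outside the buffer.
  double HeadingAt(int64_t capture_time_ns) const {
    if (samples_.empty()) return 0.0;  // No data yet.
    if (capture_time_ns <= samples_.front().time_ns) {
      return samples_.front().heading_rad;
    }
    for (size_t i = 1; i < samples_.size(); ++i) {
      if (samples_[i].time_ns >= capture_time_ns) {
        const GyroSample &a = samples_[i - 1];
        const GyroSample &b = samples_[i];
        const double t = static_cast<double>(capture_time_ns - a.time_ns) /
                         static_cast<double>(b.time_ns - a.time_ns);
        return a.heading_rad + t * (b.heading_rad - a.heading_rad);
      }
    }
    return samples_.back().heading_rad;
  }

 private:
  std::deque<GyroSample> samples_;
};

// The target's field-relative heading is then the interpolated robot heading
// at capture time plus the camera-relative angle from the vision result:
//   target = history.HeadingAt(capture_ns) + camera_angle_to_target;
```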
Ask any follow-up questions that you have.