We want to have the robot align itself with one pole, drive forward until it reaches the pole, and then hang the ubertube.
I’ve been looking through the 2011 Vision Example VI. It seems to have the necessary data for this (locations of the targets and the poles), but I’m not sure how to use it to control the motors.
If you know the location in the camera image of the pole you want to reach, and you know the location of “straight ahead” on the robot, you can compute the angle the robot needs to turn in order to aim for that pole.
If you know the height of the target in the camera image, you can compute the distance to the target using some simple trigonometry.
Use the turn angle and the drive distance to derive inputs to an Arcade Drive function.
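Since the discussion is about the LabVIEW example VI, here is the same math sketched in Java just to make the steps concrete. The image size, field of view, target height, and gains are all made-up example values, not measurements from a real camera or robot:

```java
// Sketch of the aiming math above: turn angle from horizontal pixel offset,
// distance from the target's apparent height, then a simple mapping onto
// arcade-drive style (throttle, turn) commands. All constants are illustrative.
public class VisionAiming {
    static final double IMAGE_WIDTH_PX  = 320.0;
    static final double IMAGE_HEIGHT_PX = 240.0;
    static final double HORIZ_FOV_DEG   = 47.0;  // assumed camera horizontal field of view
    static final double VERT_FOV_DEG    = 36.0;  // assumed camera vertical field of view
    static final double TARGET_HEIGHT_M = 0.41;  // assumed real-world target height

    /** Angle (degrees) the robot must turn so the target sits at image center. */
    static double turnAngleDeg(double targetCenterXPx) {
        double offsetPx = targetCenterXPx - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (HORIZ_FOV_DEG / IMAGE_WIDTH_PX);  // linear pixel-to-degree approximation
    }

    /** Distance (meters) estimated from how tall the target appears in the image. */
    static double distanceM(double targetHeightPx) {
        double angularHeightDeg = targetHeightPx * (VERT_FOV_DEG / IMAGE_HEIGHT_PX);
        return (TARGET_HEIGHT_M / 2.0) / Math.tan(Math.toRadians(angularHeightDeg / 2.0));
    }

    /** Map the distance and angle errors onto arcade-drive inputs in [-1, 1]. */
    static double[] arcadeInputs(double distanceErrorM, double turnErrorDeg) {
        double kDrive = 0.5, kTurn = 0.03;  // made-up proportional gains
        return new double[] { clamp(kDrive * distanceErrorM), clamp(kTurn * turnErrorDeg) };
    }

    static double clamp(double x) { return Math.max(-1.0, Math.min(1.0, x)); }
}
```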
You could use a PID to strafe and line yourself up horizontally with the pole.
Mount your camera such that the center of the image is “straight ahead” for the robot.
If you can see the end of the peg you want to hang on in your image, use the horizontal distance between it and the center of the image as your error.
Strafe in the correct direction until the error is zero!
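As a rough text sketch of that loop (in Java rather than LabVIEW, with a made-up gain, tolerance, and sign convention):

```java
// Minimal sketch of the strafe alignment idea above: drive sideways until the
// peg tip sits at the horizontal center of the image. Constants are placeholders.
public class StrafeAlign {
    static final double IMAGE_WIDTH_PX = 320.0;  // assumed camera resolution
    static final double K_STRAFE = 0.01;         // made-up proportional gain (per pixel)
    static final double TOLERANCE_PX = 3.0;      // "close enough" band around center

    /** Returns a strafe command in [-1, 1] given the peg tip's x pixel coordinate. */
    static double strafeCommand(double pegTipXPx) {
        double errorPx = pegTipXPx - IMAGE_WIDTH_PX / 2.0;  // + means peg is right of center
        if (Math.abs(errorPx) <= TOLERANCE_PX) {
            return 0.0;                                      // lined up: stop strafing
        }
        return Math.max(-1.0, Math.min(1.0, K_STRAFE * errorPx));  // clamp to motor range
    }
}
```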
As for stopping at the right forward position: you can use a triangulation method like Alan said. If you don’t want to calculate anything, just find the vertical position of the target in your image that corresponds to the forward position you’re looking for. Set up a PID for that as well, with the setpoint equal to that vertical position in your image.
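Sketched the same way, again with illustrative numbers only (the setpoint would come from measuring the target's y pixel once while the robot is parked at the desired spot):

```java
// Sketch of the forward-stop PID described above: the error is the difference
// between the target's vertical pixel position and the position it has when
// the robot is at the desired stopping point. Gains and setpoint are examples.
public class ForwardStopPid {
    static final double SETPOINT_Y_PX = 170.0;   // measured once at the desired stopping spot
    static final double KP = 0.008, KI = 0.0005; // made-up gains

    private double integral = 0.0;

    /** Forward command in [-1, 1]; call once per camera frame with the target's y pixel. */
    double update(double targetYPx, double dtSeconds) {
        double error = SETPOINT_Y_PX - targetYPx;  // sign depends on camera mounting
        integral += error * dtSeconds;
        double cmd = KP * error + KI * integral;
        return Math.max(-1.0, Math.min(1.0, cmd));
    }
}
```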
Okay, I think I have a basic idea of what needs to be done.
But another question: How could we get the example vision code into the FRC framework? Does it go into VisionProcessing.vi, RobotMain.vi, or somewhere else in the code?
Vision Processing sounds like a good place. And to share results, write the results to a global variable or two. Make the driving code read from the globals.
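The direct analog of a LabVIEW global in text code is a single shared, thread-safe object that the vision loop writes and the drive loop reads; here is that pattern sketched in Java, with made-up field names:

```java
// One writer (vision processing), many readers (drive code). The Result fields
// are illustrative; use whatever your vision code actually produces.
import java.util.concurrent.atomic.AtomicReference;

public class VisionGlobals {
    /** Immutable snapshot of the latest vision result. */
    public static final class Result {
        public final double turnErrorDeg;
        public final double distanceM;
        public final boolean targetVisible;
        public Result(double turnErrorDeg, double distanceM, boolean targetVisible) {
            this.turnErrorDeg = turnErrorDeg;
            this.distanceM = distanceM;
            this.targetVisible = targetVisible;
        }
    }

    private static final AtomicReference<Result> latest =
            new AtomicReference<>(new Result(0.0, 0.0, false));

    /** Called from the vision-processing loop after each frame. */
    public static void publish(Result r) { latest.set(r); }

    /** Called from the drive/teleop loop whenever it needs the newest data. */
    public static Result read() { return latest.get(); }
}
```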
I thought the images from the camera had to be processed by the cRIO before being sent to the computer? Or am I gravely incorrect? If you’re right, is the number of ports on the router the only limiting factor for adding additional cameras?
The camera runs a web server. It can handle connections from up to five computers at once. The D-Link has room for all sorts of Ethernet devices, including cameras. If the rules don’t prohibit it, you could stick another switch on the robot and connect even more devices.