A Step by Step Run-through of FRC Vision Processing


#1

With vision likely to play a crucial role in this upcoming year's competitions, we hope the ideas in this white paper help other teams benefit from an effective vision system. The paper walks both rookie and veteran FRC teams through a detailed implementation of vision processing. Drawing on our first-hand experience with vision during the 2018 Power Up season and earlier years, we identified our successes and failures, compiled those findings, and wrote this guide to lead teams through current and future challenges and help them put together a complete, accurate vision system working in sync with their robot. Using this step-by-step process, teams have a detailed outline for building their own successful system.

For more information on the server, check out our GitHub. Good luck this year!

The LigerBots
Team 2877

LigerBots_Vision_Whitepaper.pdf (964.4 KB)


#2

Excuse my ignorance, but I’m still confused about what the world coordinates are and how you are calculating them. Anyway, thanks for the awesome document; it will definitely help me out this year!


#3

I’m very excited to read your paper. I’m currently developing a high school curriculum centered around machine vision, and I love seeing how others present the information.


#4

World coordinates are coordinates relative to the target (with, for example, the center of the target being (0, 0, 0)), while camera coordinates are relative to the camera itself. The OpenCV function solvePnP() does this transformation between systems for you. The calculations just turn out to be simpler when the origin is placed at the target rather than at the camera when computing the distance and angle relative to the robot, since we only care about the target.
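
To make the transformation concrete, here is a minimal sketch (not code from our paper; the rvec/tvec values are placeholders, just to make it runnable) of what solvePnP()'s outputs mean in Python:

    import numpy as np
    import cv2

    # rvec and tvec are what cv2.solvePnP() returns (a full call appears in
    # later replies). Placeholder values only.
    rvec = np.array([[0.0], [0.1], [0.0]])   # compact rotation vector
    tvec = np.array([[5.0], [0.0], [60.0]])  # translation, e.g. in inches

    # Rodrigues() expands the compact rotation vector into a 3x3 matrix.
    R, _ = cv2.Rodrigues(rvec)

    # A point known in world (target) coordinates maps into camera coordinates as
    #   p_camera = R @ p_world + tvec
    target_origin_world = np.zeros((3, 1))   # (0, 0, 0) is the target itself
    target_origin_camera = R @ target_origin_world + tvec

    # With the origin at the target, the camera-to-target distance is just the
    # length of that translated point (here, the length of tvec itself).
    distance = float(np.linalg.norm(target_origin_camera))
    print(distance)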


#5

How did you power the ODROID-XU4 from the robot? I noted that it takes 5V/4A, which is more current than the Voltage Regulator Module supplies.


#6

We used a 5A buck converter. About $10 on Amazon. Happy to get the exact model if you want.


#7

Sure, that would be wonderful.


#8

The world coordinates are the real-life coordinates of the corners of the vision target. As degman said, it simplifies life if you set the origin (0, 0, 0) at the target itself. Given that, you get the coordinates from the size of the reflective tape. So in 2018, the corners of the retro-reflective target on the Switch would have world coordinates of:
(-4, 0, 0), (+4, 0, 0), (-4, 15.3, 0), (+4, 15.3, 0)
These are the outer corners of the two strips combined. You could also use all 8 corners of the two strips if you wanted. Note that these coordinates imply the origin of the world coordinates is on the floor, midway (horizontally) between the two strips.
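
For reference, here is a rough sketch of how you might write those corners down in Python/NumPy for feeding into solvePnP() (the array name and comments are just illustrative, not our actual code):

    import numpy as np

    # Outer corners of the two retro-reflective strips on the 2018 Switch
    # target, in inches. World origin is on the floor, midway between the
    # strips. The order must match the order of the matching image points.
    SWITCH_TARGET_CORNERS = np.array([
        [-4.0,  0.0, 0.0],   # lower-left outer corner
        [ 4.0,  0.0, 0.0],   # lower-right outer corner
        [-4.0, 15.3, 0.0],   # upper-left outer corner
        [ 4.0, 15.3, 0.0],   # upper-right outer corner
    ], dtype=np.float64)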


#9

It is from DROK.

I guess technically it is a step-down converter, but it will deliver the 4A. Make sure to adjust the output voltage before connecting the ODROID; the potentiometer screw took a bunch of turns before it started affecting anything. The unit worked well throughout competition, and it holds its 5V output even when the battery drops well below the Rio's brownout level.


#10

So the world coordinates are just the dimensions of the reflective tape?


#11

Yes, that’s correct, with the Z coordinate fixed at 0.


#12

Is it the physical dimensions (inches/centimeters/etc.), the pixel count, or a measurement relative to something else? Thanks for the help!


#13

For the inputs to solvePnP(), the world coordinates are physical dimensions. Use whatever units you like; the translation you get back will be in those same units.


#14

Again, excuse my ignorance.

We're using a Limelight; do you know what the equivalent function is?

If not, what exactly does solvePnP() output? I'm sure I could figure out an equivalent.


#15

Sorry, I don’t know much about the Limelight. You will need to check their docs.

solvePnP() is documented here:
https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp
The interesting outputs are:

  • tvec = the translation (distances) between the camera and the target
  • rvec = a packed 3-float rotation vector (“angles”). You need to use the Rodrigues() function to unpack it into a real 3x3 rotation matrix between the two coordinate systems.
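
For concreteness, here is a minimal sketch of pulling a distance and bearing out of those outputs; the corner array, pixel values, and camera matrix below are placeholders, not numbers from a real calibration:

    import math
    import cv2
    import numpy as np

    # Target-relative (world) corner coordinates, e.g. the 2018 Switch
    # corners discussed above, in inches.
    world_points = np.array([[-4.0,  0.0, 0.0], [4.0,  0.0, 0.0],
                             [-4.0, 15.3, 0.0], [4.0, 15.3, 0.0]])

    # Pixel coordinates of the same corners, in the same order, from your
    # contour/corner detection (values made up for illustration).
    image_points = np.array([[295.0, 310.0], [372.0, 308.0],
                             [293.0, 196.0], [374.0, 194.0]])

    # Intrinsics from a one-time OpenCV camera calibration (placeholders).
    camera_matrix = np.array([[700.0,   0.0, 320.0],
                              [  0.0, 700.0, 240.0],
                              [  0.0,   0.0,   1.0]])
    dist_coeffs = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(world_points, image_points,
                                  camera_matrix, dist_coeffs)
    if ok:
        # tvec locates the target origin in camera coordinates, in the same
        # units as world_points (inches here).
        x, y, z = tvec.flatten()
        distance = math.hypot(x, z)               # horizontal distance to target
        bearing = math.degrees(math.atan2(x, z))  # left/right angle off camera axis

        # Rodrigues() expands rvec into the full 3x3 rotation matrix if you
        # also need the target's orientation relative to the camera.
        rotation_matrix, _ = cv2.Rodrigues(rvec)
        print(distance, bearing)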

#16

@Degman Very nice paper - thank you.

We are getting started with vision, using the new FRC image on the Raspberry Pi.

Do you have any comment on how much extra learning curve there would be in moving to the ODROID?


#17

I have not looked at the RPi image. My guess is that the extra work is setting up the OS with the correct software. Not hard, but it benefits from some experience with compiling, etc.

But I really do like the ODROID-XU4. It is about 3x faster than the RPi, which is significant. Its GPU can be used by OpenCV (via OpenCL), which is not true of the RPi; however, what I have learned is that the GPU is used for so little of typical FRC processing that it is only a minor improvement.
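
If you want to experiment with that yourself, OpenCV reaches its OpenCL path (the “transparent API”) when you wrap frames in cv2.UMat. A quick sketch, with a made-up file name and HSV range:

    import cv2

    # Check whether OpenCV sees an OpenCL device, and opt in to using it.
    print("OpenCL available:", cv2.ocl.haveOpenCL())
    cv2.ocl.setUseOpenCL(True)

    # Wrapping a frame in a UMat lets supported OpenCV calls run via OpenCL;
    # unsupported ones quietly fall back to the CPU.
    frame = cv2.imread("sample_frame.png")        # placeholder image file
    if frame is not None:
        gpu_frame = cv2.UMat(frame)
        hsv = cv2.cvtColor(gpu_frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (60, 100, 100), (90, 255, 255))  # example range
        result = mask.get()                       # copy back to a numpy array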


#18

Thank you.


#19

Thanks for being so patient with me. This helped a lot. If anyone else reading this is struggling, another great link besides the one prensing provided is:

https://www.learnopencv.com/tag/solvepnp/


#21

Does anyone have a working calibration json file for the LifeCam?