JavaCV and OpenCV Camera Calibration with sample code

This document contains a procedure to calibrate an Axis Camera using JavaCV and OpenCV.

The procedure starts with nothing, and walks through the camera setup, JavaCV and OpenCV installation, plus a description of the three sample JavaCV calibration programs (frames per second, manual calibration, automated chessboard calibration).
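
In case it helps anyone follow along before opening the document, below is a minimal sketch of the automated chessboard flow. It is written against the current Bytedeco JavaCV packages rather than the older googlecode namespace the attached samples use (the calls are analogous), and the board dimensions and square size are placeholders you would match to your own printout.

```java
import org.bytedeco.javacpp.indexer.FloatIndexer;
import org.bytedeco.opencv.opencv_core.*;
import static org.bytedeco.opencv.global.opencv_core.CV_32FC3;
import static org.bytedeco.opencv.global.opencv_calib3d.*;
import static org.bytedeco.opencv.global.opencv_imgcodecs.imread;
import static org.bytedeco.opencv.global.opencv_imgproc.*;

public class ChessboardCalib {
    static final int COLS = 9, ROWS = 6; // interior corners -- placeholder, match your board
    static final float SQUARE_CM = 2.5f; // square size -- placeholder, measure your printout

    public static void main(String[] args) {
        // One copy of the board's 3D corner layout (all on the z = 0 plane), reused per image.
        Mat objPts = new Mat(COLS * ROWS, 1, CV_32FC3);
        FloatIndexer o = objPts.createIndexer();
        for (int r = 0; r < ROWS; r++)
            for (int c = 0; c < COLS; c++) {
                int k = r * COLS + c;
                o.put(k, 0, 0, c * SQUARE_CM);
                o.put(k, 0, 1, r * SQUARE_CM);
                o.put(k, 0, 2, 0f);
            }

        MatVector objectPoints = new MatVector(), imagePoints = new MatVector();
        Size imageSize = null;
        for (String file : args) { // pass chessboard snapshots on the command line
            Mat img = imread(file), gray = new Mat(), corners = new Mat();
            if (img.empty()) continue;
            cvtColor(img, gray, COLOR_BGR2GRAY);
            if (!findChessboardCorners(gray, new Size(COLS, ROWS), corners)) continue;
            // Refine the detected corners to sub-pixel accuracy before calibrating.
            cornerSubPix(gray, corners, new Size(11, 11), new Size(-1, -1),
                    new TermCriteria(TermCriteria.EPS + TermCriteria.COUNT, 30, 0.001));
            objectPoints.push_back(objPts);
            imagePoints.push_back(corners);
            imageSize = img.size();
        }

        // Solve for the intrinsic matrix and lens distortion coefficients.
        Mat cameraMatrix = new Mat(), distCoeffs = new Mat();
        double rms = calibrateCamera(objectPoints, imagePoints, imageSize,
                cameraMatrix, distCoeffs, new MatVector(), new MatVector());
        System.out.println("RMS reprojection error: " + rms);
    }
}
```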

Also included is a PPT of our ‘chessboard’, in a zip file since this forum does not recognize pptx files.

Calibrating an Axis Camera using JavaCV and OpenCV.docx (949 KB)
smallChessboard.zip (30.3 KB)



I’m curious if you found it necessary to do the calibration? NI IMAQ has a similar calibration procedure, but for simplicity, I haven’t bothered to include it.

In other words, what forms of image processing failed without the calibration and worked afterwards?

Greg McKaskle

We calibrated the camera to remove the barrel distortion.

We are counting pixels to determine height and width for the backboard rectangle. From the height and width in pixels, we can determine range and bearing between the camera and the backboard.
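
For anyone reproducing this, the range and bearing math is simple pinhole-model trigonometry. A sketch; the target height and focal length below are placeholders, not our measured values:

```java
public class RangeBearing {
    static final double TARGET_HEIGHT_M = 0.46; // placeholder: real height of the backboard rectangle
    static final double FOCAL_PX = 540.0;       // placeholder: focal length in pixels, from calibration
    static final int IMAGE_WIDTH = 640;

    // Pinhole model: pixelHeight = FOCAL_PX * TARGET_HEIGHT_M / range,
    // so range falls out of the measured pixel height.
    static double rangeMeters(double pixelHeight) {
        return FOCAL_PX * TARGET_HEIGHT_M / pixelHeight;
    }

    // Bearing is the horizontal angle from the optical axis to the target center.
    static double bearingRadians(double targetCenterX) {
        return Math.atan2(targetCenterX - IMAGE_WIDTH / 2.0, FOCAL_PX);
    }
}
```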

Using a radial target pattern, we determined there was a 25% difference in pixel density between the center and the edge of the camera's field of view: 41 pixels per cm at the center versus 31 pixels per cm at the edge.

An error of up to 25% due to the backboard placement in the field of view would lead to range errors greater than the diameter of the hoop, and so we would not be able to determine how far to launch the ball.
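
Working those numbers through (range scales inversely with pixel count, so the range error comes out even larger than the pixel error):

```java
public class DistortionError {
    public static void main(String[] args) {
        double center = 41.0, edge = 31.0;            // measured pixels per cm, from the radial target
        double pixelError = (center - edge) / center; // ~0.24: the ~25% figure above
        double rangeError = center / edge - 1.0;      // ~0.32: range reads ~32% too far at the edge
        System.out.printf("pixel error %.0f%%, range error %.0f%%%n",
                pixelError * 100, rangeError * 100);
    }
}
```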

If we were to reduce the distortion error, our subsequent calculations for range and bearing would improve and our chances of launching the ball on the correct trajectory would be much better.

This is a huge help. It was on my list of “NEXT” things to do.

25% is a pretty big difference.

The white paper contained a pretty full shot of the target. I attached it again below. The target doesn’t go entirely to the edge of the screen, but pretty close. Quick measurements there on the rectangle have it at 200 pixels on one vertical edge, 209 on the other, and 216 in the center. I honestly can’t say for sure any longer which camera the shot was taken with, since I carry both in my bag. What camera were you measuring?

Greg McKaskle

field 33 .jpg



We are using an Axis 1011.

Check out these images for comparison. One is the original; the other is after the correction. Look at the edges to see how many pixels off they are. When the target is up close, the errors are insignificant (there are more pixels with which to measure the distance). As the robot moves away from the hoops, the rectangle gets smaller in the frame and moves toward the edges.
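
For reference, the correction itself is a single call once you have the intrinsics from calibration. A sketch, under the same Bytedeco JavaCV assumptions as above (note that undistort lives in the calib3d module in OpenCV 4; older versions had it in imgproc):

```java
import org.bytedeco.opencv.opencv_core.Mat;
import static org.bytedeco.opencv.global.opencv_calib3d.undistort;
import static org.bytedeco.opencv.global.opencv_imgcodecs.*;

public class Straighten {
    // cameraMatrix (3x3) and distCoeffs (1x5) are the outputs of the calibration step.
    static Mat straighten(Mat frame, Mat cameraMatrix, Mat distCoeffs) {
        Mat corrected = new Mat();
        undistort(frame, corrected, cameraMatrix, distCoeffs);
        return corrected;
    }
    // usage, e.g.: imwrite("undistort.jpg", straighten(imread("10.jpg"), cameraMatrix, distCoeffs));
}
```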

Check out this radial shot and measure the pixels per cm between rings 1-2 and the pixels per cm between rings 7-8.

10.jpg
undistort.jpg
20120112_16-06-08.jpg



Suppose your camera were somewhere in mid-field.

If the camera were looking directly at the backboard, that is, your rectangle is in the center of the field of view, then you would get a large reading for the pixel count, indicating you are close to the backboard.

If the camera were looking off to the side, your rectangle would be at the edge of the field of view. Its pixel count would be up to 25% smaller than at the center, and any calculation based on that smaller number would say you are farther away than you really are.

The answer you get at the same distance depends on where the backboard is in the field of view. You can correct for that after the fact, or you can correct the image in the first place. We chose to correct the image.
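
If you go the after-the-fact route instead, it can be as small as undistorting just the rectangle's corner points rather than remapping every frame. A sketch, same JavaCV assumptions as above; passing the camera matrix as the final argument keeps the output in pixel coordinates rather than normalized ones:

```java
import org.bytedeco.opencv.opencv_core.Mat;
import static org.bytedeco.opencv.global.opencv_calib3d.undistortPoints;

public class CorrectPoints {
    // Undistort only the four detected rectangle corners, then do the
    // range/bearing math on the corrected coordinates.
    static Mat correctCorners(Mat corners /* 4x1, CV_32FC2 */, Mat cameraMatrix, Mat distCoeffs) {
        Mat corrected = new Mat();
        undistortPoints(corners, corrected, cameraMatrix, distCoeffs, new Mat(), cameraMatrix);
        return corrected;
    }
}
```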

I understand what calibration is and the benefits, but I had assumed that it was rather expensive, and not all that necessary.

Do you expect the robot to shoot at targets near the edge of the camera?

Greg McKaskle

Depending on the robot’s distance from the backboard, the rectangle moves up and down in the field of view. Also, depending on the tilt of the camera, the rectangle may be anywhere on a vertical line at the time of launch. We should be able to get the side-to-side alignment fairly close.

Some correction for vertical placement seems to be required. Depending on the processing configuration, where this correction happens might change. If the cRIO is doing the image work, it may be reasonable to do the correction after the fact, as in the sketch below. A coprocessor off the cRIO may have enough cycles to correct the image up front and simplify the geometric calculations.
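
A sketch of what that after-the-fact vertical correction might look like; the mount tilt and camera-to-target height difference below are placeholders, not measured values:

```java
public class ElevationRange {
    static final double CAMERA_TILT_RAD = Math.toRadians(20.0); // placeholder: camera mount tilt
    static final double TARGET_ABOVE_CAMERA_M = 2.0;            // placeholder: target height above lens

    // Refine range from the target's vertical position in the (undistorted) frame:
    // the target's pixel offset from image center adds to the mount tilt,
    // and the known height difference fixes the distance.
    static double range(double targetCenterY, double focalPx, int imageHeight) {
        double pixelAngle = Math.atan2(imageHeight / 2.0 - targetCenterY, focalPx);
        return TARGET_ABOVE_CAMERA_M / Math.tan(CAMERA_TILT_RAD + pixelAngle);
    }
}
```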