OpenCV Camera Calibration?

I’m just curious. For years, I’ve dabbled in running OpenCV on a Raspberry Pi or Jetson to do some shooting tasks.

The first step I always do is run the camera calibration programs. I use the chessboard style calibration.

It never works. Well, I shouldn't say that, but it never works as well as doing it another way. I can get better values for the K matrix by measuring the focal length directly. I have a tape measure attached to the wall; I take a picture from a known distance, and from that I can figure out the focal length in pixels. I take the principal-point offset to be the center of the image. It all works.
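For concreteness, here's that arithmetic as a small sketch (the measurements below are made-up placeholders; substitute your own):

```python
import numpy as np

# Pinhole model: a target of known width W at known distance Z that spans
# d pixels in the image gives the focal length as f_px = d * Z / W.
# All three numbers below are hypothetical placeholders.
W = 1.000   # width of the taped-off span on the wall, meters
Z = 3.000   # camera-to-wall distance, meters
d = 640.0   # measured span of W in the image, pixels

f_px = d * Z / W

# Principal point taken as the image center, e.g. for a 640x480 sensor:
cx, cy = 640 / 2, 480 / 2
K = np.array([[f_px, 0.0,  cx],
              [0.0,  f_px, cy],
              [0.0,  0.0,  1.0]])
print(K)
```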

By contrast, the chessboard calibration parameters always come out slightly off. The principal point might be 20 pixels from the image center, or the X and Y focal lengths might not match even though I know that, in reality, they do, and I can look at the reprojection error values and see that things aren't perfect.
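One partial workaround I'm aware of: if you already know that fx equals fy and where the principal point sits, you can tell the solver so with calibration flags. A minimal sketch, assuming objpoints and imgpoints have already been collected from corner detection (as in the fuller chessboard sketch later in this post) and image_size is (width, height):

```python
import cv2
import numpy as np

# Assumed inputs: objpoints / imgpoints from corner detection,
# image_size = (width, height) of the calibration images.
guess = np.array([[800.0, 0.0, image_size[0] / 2],
                  [0.0, 800.0, image_size[1] / 2],
                  [0.0,   0.0,               1.0]])
flags = (cv2.CALIB_USE_INTRINSIC_GUESS     # start from the guess above
         | cv2.CALIB_FIX_ASPECT_RATIO      # keep fx/fy at the guessed ratio (1:1 here)
         | cv2.CALIB_FIX_PRINCIPAL_POINT)  # pin cx, cy at the guessed center

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, image_size, guess, None, flags=flags)
```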

I’m just curious if other people have this problem, or if there’s a trick to this. I usually capture about 100 pictures. I use a 9x8 chessboard with 25 mm squares. (The first year I did this, I printed out a chessboard pattern, but getting it absolutely flat was difficult when I taped or glued it to a target. Then I bought a very rigid European checkerboard and taped off a couple of rows, so I’m confident I have a perfectly flat, perfectly measured board.)
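For comparison, here's roughly what the standard chessboard flow looks like, assuming 9x8 counts squares (one gotcha worth flagging: OpenCV's pattern size counts *inner corners*, so a board of 9x8 squares is an 8x7 corner grid; the image folder is a placeholder):

```python
import glob
import cv2
import numpy as np

# 9x8 squares -> 8x7 inner corners, which is what OpenCV wants.
pattern = (8, 7)
square = 0.025  # 25 mm squares, in meters

# 3D corner coordinates in the board frame (board lies in the z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

objpoints, imgpoints = [], []
for fname in glob.glob("calib/*.png"):   # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    objpoints.append(objp)
    imgpoints.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("K =\n", K)
```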

It’s not a crisis or anything. As I said, I know how to make it better, so it isn’t a huge deal. On the other hand, I’m usually working with a narrow-field-of-view camera, so I can mostly ignore distortion. If I want a wider field of view, I get the fisheye effect, and then it becomes really important to get the distortion coefficients right. If the focal length and offsets are a little off, I worry that the distortion coefficients will be as well.
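For what it's worth, once you have the coefficients they feed straight into undistortion. A sketch, assuming K and dist come from a calibrateCamera run and img is a frame from the same camera:

```python
import cv2

# Assumed inputs: K and dist from a calibrateCamera run, img a frame
# from the same camera.
h, w = img.shape[:2]
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), 0)  # alpha=0 crops away black edges
undistorted = cv2.undistort(img, K, dist, None, new_K)
```

For genuinely wide lenses, OpenCV also has a separate cv2.fisheye module with its own distortion model, which may fit better than the standard polynomial model.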

Anyway, I’m mostly just curious if others see this problem, and if there’s an easy way to get numbers I can trust. In my experience, taking twice as many calibration images doesn’t help much, but I wonder what others’ experience is.

The PhotonVision team asked me to see if I could port the ChArUcoBoard pose-guidance calibration Python program to Java for PV integration. I’d value your comments if you’d give it a try. The standalone version is in my GitHub - tom131313/calibrate (see the releases).

In my testing, it has needed about 10 captured frames to meet its convergence criteria.
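For anyone curious, the ChArUco detection step looks roughly like this in OpenCV's Python API (this assumes OpenCV 4.7+, where cv2.aruco.CharucoDetector exists; earlier versions used cv2.aruco.interpolateCornersCharuco instead, and the board geometry below is made up, not necessarily what the pose-guidance program uses):

```python
import cv2

# Hypothetical board geometry; match it to your printed board.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard((9, 8), 0.025, 0.019, dictionary)
detector = cv2.aruco.CharucoDetector(board)

# gray: a grayscale camera frame
charuco_corners, charuco_ids, marker_corners, marker_ids = \
    detector.detectBoard(gray)

if charuco_ids is not None and len(charuco_ids) >= 4:
    obj_pts, img_pts = board.matchImagePoints(charuco_corners, charuco_ids)
    # Accumulate obj_pts / img_pts per frame, then feed them to
    # cv2.calibrateCamera as with a plain chessboard.
```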

It’s nearly ready to try in PV, and that team will have to decide the final disposition for inclusion in PV. That project is also in my GitHub, but it’s not well tested yet, so don’t be tempted to try it. I expect the standalone version to run well, as I have fixed what I thought were bugs in the original.

Sorry, I am not directly answering your question, but maybe there is some inspiration to be found here. I use this method to calibrate cameras when writing my own code for tracking AprilTag targets.

Admittedly, I now use PV with my teams, as it just does a better job than my code does.
I hope this helps.

It always makes me nervous when I see a 10-minute video on how to do something in five minutes.

However, they got results comparable to mine. I noticed offset numbers of 311 and 233 (if I remember correctly); the actual values ought to be 320 and 240.

There’s also some great information there about making it easier to communicate using a laptop; I usually set up a monitor, and it’s a bit of a pain. Thanks.

I took a quick look. A real answer will take a bit more. I’ll give it a try.

I probably should use PhotonVision, but I want to get into the details myself. The best thing about PV is that, since it’s open source, I can do both: I can see how they wrote their code and use it if I like. It’s a wonderful resource, even if I don’t use it myself.
