Target Tracking PID Lag

Hi. I need some help feeding the x position from the camera into a PID to align the robot with the target. I already know how to use PIDs and vision tracking. The problem is that, obviously, there is a lot of lag by the time it gives us our value, making it hard to set the gains on the PID. How can I compensate for this lag??? I’ve heard of someone on another thread who compensated for it; I just don’t know how.


Are you saying that your camera is lagging? You should reduce your framerate/quality (these settings can be found when you open the camera in the Begin VI: lower the framerate and raise the compression). You should also make sure your vision processing code is as efficient as possible and runs as quickly as you can make it.

If you want your robot to respond faster through your PID controller, you should raise Kp or Ki, depending on exactly how you want it to respond.
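To make the gain discussion concrete, here is a minimal discrete PID sketch in Python. All gain values below are placeholders, not tuned numbers; the class name and structure are illustrative, not from any particular WPILib release.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch only)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Raising kp makes the output react more strongly to the current error;
        # raising ki accumulates past error and removes steady-state offset.
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Doubling Kp doubles the immediate response to the same error, which is why it is the first knob to turn when the robot feels sluggish (at the cost of overshoot if you go too far).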

Aah yes. I completely forgot about framerate and compression!

The PID is reacting fast enough, it’s just that the camera isn’t updating the ACTUAL values fast enough.

I have one more question. Which camera is the best to do processing on, and at what framerate/compression? The Axis 206 or the M1011? If I am correct, those are the models.

The most effective camera-based tracking systems I’ve seen don’t use the camera image to provide direct feedback. They use the image to decide how far to turn, and then use some other feedback (gyro, turret position sensor, whatever) to close the loop on the aiming. The occasional camera image can be used to update the desired setpoint.
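The pattern described above can be sketched as two pieces: a slow path that turns an occasional camera frame into an absolute heading setpoint, and a fast inner loop that closes on the gyro every robot cycle. All function and parameter names here are hypothetical, assuming the camera reports the target's angular offset in degrees.

```python
def update_setpoint_from_camera(gyro_angle_deg, camera_offset_deg):
    """Slow path: when a (possibly stale) camera frame arrives, compute a new
    absolute heading setpoint = current heading + target's angular offset."""
    return gyro_angle_deg + camera_offset_deg


def turn_output(setpoint_deg, gyro_angle_deg, kp=0.02):
    """Fast inner loop: simple P control on the gyro, run every robot cycle
    regardless of whether a new camera frame has arrived."""
    error = setpoint_deg - gyro_angle_deg
    return kp * error
```

The key point is that the fast loop never waits on the camera; camera lag only delays how often the setpoint gets refreshed, not how quickly the robot tracks it.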

The camera is a slow sensor, 30 Hz tops, and depending on the exposure time, compression time, transmission time, decompression time, and processing time, it tends to have quite a bit of lag. To get an idea of what it is, the most clever test I’ve seen (not mine) displays a high-resolution clock on the computer screen with the camera image next to it. The camera is pointed at the screen so that it sees the clock; the computer reads the image and displays it next to the clock – but lagged – and at any point you can do a screen capture to see how the clock and the image of the clock differ. The clock can be an LV numeric with a loop updating the current time, or the milliseconds primitive value.

To measure on the cRIO, I’ve usually used an LED, like the user LED on the cRIO. I set up the camera to stare at the LED; the code starts with it off and at t0 turns it on. When a camera frame shows the LED on, that is t1. Do this a number of times with intentional jitter, to avoid aliasing with when the camera takes its exposure, and you can calculate an average and deviation for the camera and the other elements. My expectation is that the 206 and M1011 have pretty similar lag if using MJPG. I saw the 206 being faster when requesting successive JPEGs. The compression rate doesn’t have much to do with the lag (it is assisted by HW), except that the cRIO memory manager takes longer to allocate or resize blocks above 16 KB. The M1011 allows for more elaborate specification – I think it allows for fixed-size streaming – but WPILib doesn’t expose this.
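The LED procedure above reduces to collecting many t1 − t0 samples and summarizing them. Here is a sketch; `turn_led_on`, `led_visible_in_frame`, and `now` are hypothetical callbacks standing in for whatever your robot code actually provides.

```python
import statistics


def measure_lag_once(turn_led_on, led_visible_in_frame, now):
    """One trial of the LED test: t0 = when we command the LED on,
    t1 = time of the first camera frame showing it lit."""
    t0 = now()
    turn_led_on()
    while not led_visible_in_frame():
        pass  # poll incoming frames
    return now() - t0


def lag_statistics(samples):
    """Average and deviation over many trials. Callers should add random
    delay between trials to avoid aliasing with the camera exposure."""
    return statistics.mean(samples), statistics.pstdev(samples)
```

With a handful of trials the mean gives you the number to plug into any lag compensation, and the deviation tells you how much jitter your control loop has to tolerate.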

So, clearly this is doable, but you need to pay attention to the details. To give ballpark numbers, I measured this in ’09 on the 206 and cRIO and got about 60 ms at the small and medium image sizes, if I remember correctly. This included decoding, but minimal analysis. If you measure your analysis times, you should be able to add 60 ms and get an idea of the lag.

Another comment: fps affects how often an acquisition takes place, but doesn’t affect the lag of a particular acquisition.

Greg McKaskle

If I DO measure the lag, how would I program the robot to compensate for it??

Also, I have heard that you can measure the angle from the target to the center of the camera using xPosition*(FOV/2). I have tried this, but the camera is always updating the xPos. How can I ‘store’ the xPos so I can get ONE angle out of it and put it in our gyro PID?
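Assuming "FOD" is a typo for FOV (field of view) and that xPosition is the target's normalized image coordinate in [-1, 1], the storing question is just about latching one sample into an absolute setpoint instead of re-reading the camera every loop. A sketch, with an assumed FOV value (check your own camera's spec sheet):

```python
CAMERA_FOV_DEG = 47.0  # assumed horizontal field of view; use your camera's actual spec


def offset_angle_deg(x_position):
    """x_position is the target's normalized image x coordinate in [-1, 1],
    so the edge of the image corresponds to half the field of view."""
    return x_position * (CAMERA_FOV_DEG / 2.0)


def latch_setpoint(gyro_angle_deg, x_position):
    """Take ONE camera sample, convert it to an absolute heading setpoint,
    and return it. The gyro PID then runs against this stored value, which is
    only recomputed when you deliberately take another camera frame."""
    return gyro_angle_deg + offset_angle_deg(x_position)
```

Because the setpoint is stored as an absolute gyro heading, the constantly updating xPos no longer matters: you only sample it at the moment you call the latch function.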

Edit: Oops. I heard that here :stuck_out_tongue:

Any help???

See this thread: