How do I use vision processing with PID?

This year is my team’s first time using vision, and also our first time trying PID. I have the vision processing down, and I understand how to get PID working with gyroscopes. However, I would like to use vision as a source for PID, but it isn’t a PIDSourceType. How can I make it a source type, assuming that’s even possible?

If you feel like looking, here’s my code.

https://github.com/CyberCoyotes/Test...team3603/robot

Vision on its own isn’t ideal for PID control.

In most cases, you would use vision to set a target (heading, distance to travel, etc.) and then use other sensors (like a gyro, encoders, or an accelerometer) to give feedback on the progress toward the target.
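To make that concrete, here’s a rough sketch in plain Java (no WPILib, everything simulated): vision computes the setpoint once, then the gyro closes the loop. The `aim` method, the `KP` gain, and the crude "plant model" are all invented for illustration; on a real robot the vision offset would come from your camera pipeline and the heading from your gyro.

```java
// Sketch: vision sets the target ONCE, the gyro provides the feedback.
// Everything here is simulated; KP and the plant model are made-up numbers.
public class VisionAimSketch {
    static final double KP = 0.05; // proportional gain; tune on your drivetrain

    // Simulate turning to a vision-derived setpoint using gyro feedback.
    // visionOffset stands in for your camera pipeline's angle-to-target (degrees).
    static double aim(double visionOffset) {
        double gyroHeading = 0.0;                   // simulated gyro reading
        double target = gyroHeading + visionOffset; // lock the setpoint from vision

        // Closed loop on the gyro, not on the camera.
        for (int i = 0; i < 200; i++) {
            double error = target - gyroHeading;
            double turnOutput = KP * error;         // would be sent to the drivetrain
            gyroHeading += turnOutput * 2.0;        // crude plant model: robot turns
        }
        return gyroHeading;
    }

    public static void main(String[] args) {
        System.out.printf("settled at %.2f degrees (target 30.00)%n", aim(30.0));
    }
}
```

The key design point: the camera is read once to pick the setpoint, and the fast inner loop only ever reads the gyro, so slow or dropped frames don’t disturb the control loop.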

Agreed. If you do not have positional feedback, you can use time and voltage to get to the target, but it’s going to be a lot slower and/or less reliable. When we’ve done this, we applied time and voltage to drive an estimated 1/2 to 3/4 of the way to the target. Going much farther sometimes put the target outside the camera’s field of view when we went to calculate the next step.
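The "drive a fraction, then re-measure" idea above looks roughly like this sketch (the fraction, the distances, and the method names are all hypothetical; in reality each step would be an open-loop time/voltage move and the remaining distance would come from a fresh vision measurement, not arithmetic):

```java
// Sketch of "drive 1/2-3/4 of the estimated distance, then look again."
// All numbers and names are invented for illustration.
public class StepTowardTarget {
    static final double FRACTION = 0.6; // stay between 1/2 and 3/4 of the estimate

    // How far to drive this step, given vision's current distance estimate.
    static double nextStepDistance(double estimatedDistance) {
        return estimatedDistance * FRACTION;
    }

    public static void main(String[] args) {
        double remaining = 100.0; // pretend vision says the target is ~100 in away
        // Each iteration drives part-way, then "re-measures" what's left.
        for (int step = 1; remaining > 5.0 && step <= 10; step++) {
            double drive = nextStepDistance(remaining);
            remaining -= drive; // ideal world; real driving has error, hence re-measuring
            System.out.printf("step %d: drove %.1f, %.1f remaining%n", step, drive, remaining);
        }
    }
}
```

Deliberately undershooting each step is what keeps the target in frame for the next measurement.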

On the other side of this:

The next level of sophistication is to keep running the vision processing as you move toward the target. As each image is processed, use it to update the setpoint in the inner (PID) loop. If you do this while moving, remember to record your position (gyro heading, encoder counts) at the moment you capture each frame - don’t wait until after the image has been processed, or you’ll apply information from where the robot was to where it is now.
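That latency point is easy to get wrong, so here’s a tiny sketch of it (numbers and names invented): stamp the gyro heading when the frame is captured, and anchor the processed vision offset to that stamped heading, not to wherever the robot has turned to since.

```java
// Sketch of latency compensation: the vision offset describes the world
// as of frame CAPTURE, so anchor it to the heading saved at capture time.
public class LatencyCompSketch {
    // New target heading from an offset measured in an old frame.
    static double targetHeading(double headingAtCapture, double visionOffset) {
        return headingAtCapture + visionOffset;
    }

    public static void main(String[] args) {
        double headingAtCapture = 10.0; // gyro reading saved when the frame was taken
        double headingNow = 25.0;       // robot kept turning while the image processed
        double visionOffset = 30.0;     // pipeline says target was 30 deg right in that frame

        double correct = targetHeading(headingAtCapture, visionOffset); // anchored to capture
        double naive = headingNow + visionOffset;                       // anchored to "now": overshoots
        System.out.printf("anchored target %.1f vs naive %.1f%n", correct, naive);
    }
}
```

The difference between the two values is exactly how far the robot moved during processing, which is the error you avoid by stamping at capture time.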