Looking at some of the videos of autonomous modes, I got to wondering about this.
If you’re using the camera to track a target, how are you dealing with loss of target? Lemon Squeeze has a fairly simplistic system that just extrapolates based on trajectory and rate of acceleration/deceleration, but I was wondering if anyone has used some other strategy (e.g. rangefinders to relocate a target, or Markov models, or something).
Last year we had a robot chasing the big trackballs around with the CMUcam. All we did was say: if the robot is tracking and moving left and it loses the target, it keeps turning left in hopes of finding it again, and vice versa for going right. You could probably, like you said, use the rate of turn to predict how fast you should turn to find it. I’d say it’s a safe bet that if you’re tracking to the left and you lose the target, the target is still to your left…
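A minimal sketch of that recovery strategy, with made-up function and parameter names (this is not any team's actual code): keep turning toward the side where the target was last seen.

```python
# Hypothetical "keep turning toward the last known side" recovery.
# last_offset_x is the target's horizontal offset (pixels) from image
# center on the last frame it was tracked; negative means left of center.

def recovery_turn(last_offset_x, turn_speed=0.3):
    """Return a signed turn command to use while the target is lost."""
    if last_offset_x < 0:
        return -turn_speed   # target was to the left: keep turning left
    else:
        return turn_speed    # target was to the right: keep turning right
```

As the post suggests, `turn_speed` could be scaled by the rate of turn at the moment of loss instead of being a fixed constant.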
The only thing is that with the Axis cam the confidence readings fluctuate a lot more, and your target area is smaller, so if the target jumps out of the field fast enough, you lose the direction, and the values change too much to estimate the rate of the pivot. We are still doing it with the direction settings, though…
Well, I’m assuming that most people aren’t using two cameras, so your acceleration/velocity estimates are based on the projection of the object onto the image plane – using this to predict trajectory is noisy at best.
Given the object centroid in the image plane, you can localize the object to a single “ray” in space (look up how perspective projection works: http://en.wikipedia.org/wiki/Perspective_projection#Perspective_projection).
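Here's a sketch of that back-projection under a simple pinhole camera model. The image size and focal length are illustrative placeholders, not a real camera's calibration:

```python
import math

# Pinhole back-projection: map an image centroid (u, v) to a unit 3D ray
# direction in the camera frame (x right, y down, z forward).
# width/height/focal_px are made-up example values.

def pixel_to_ray(u, v, width=640, height=480, focal_px=500.0):
    # Shift so the optical axis passes through the image center.
    x = u - width / 2.0
    y = v - height / 2.0
    z = focal_px
    # Normalize to a unit direction; the object lies somewhere on this ray.
    norm = math.sqrt(x * x + y * y + z * z)
    return (x / norm, y / norm, z / norm)
```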
If you are interested in AI (you mentioned Markov models), perhaps you could build a theoretical model for ball behavior once projected to the image plane – that would be an interesting project involving math (mostly geometry, and maybe statistics if you model noisy measurements probabilistically).
The bounding box size of the “ball” in the image plane gives you a pretty good estimate of depth along that ray, which gives you an estimate of 3D position. Perhaps you can fit a curve through the 3–4 most recent estimated 3D positions of the ball and use that curve to project the ball’s motion forward.
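Those two steps could look something like this (the ball diameter, focal length, and quadratic fit are all illustrative assumptions, not part of the original post):

```python
import numpy as np

# Step 1: pinhole model – apparent size scales inversely with distance,
# so a ball of known diameter gives depth along the viewing ray.
def depth_from_bbox(bbox_width_px, ball_diameter_m=1.0, focal_px=500.0):
    return focal_px * ball_diameter_m / bbox_width_px

# Step 2: fit a quadratic x(t) = a*t^2 + b*t + c per axis (least squares)
# through recent 3D position estimates, then evaluate it at a future time.
def extrapolate(positions, times, t_future):
    ts = np.asarray(times, dtype=float)
    A = np.vstack([ts**2, ts, np.ones_like(ts)]).T   # design matrix
    P = np.asarray(positions, dtype=float)           # shape (n, 3)
    coeffs, *_ = np.linalg.lstsq(A, P, rcond=None)   # shape (3, 3)
    return tuple(t_future**2 * coeffs[0] + t_future * coeffs[1] + coeffs[2])
```

A quadratic is a natural choice here because a free-flying ball follows a parabola, but with only 3–4 noisy samples a straight-line fit may actually extrapolate more reliably.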
The next part is extracting 3D velocity/acceleration measurements from a 2D projection – this involves applying the inverse perspective transform to the 2D measurements (which is possible given your depth estimate for the ball). Alternatively, a much easier way is just to work in 3D, i.e. use the 3D estimates to find 3D velocities – I think this is maybe slightly less error-prone.
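The "just work in 3D" route can be as simple as finite differences over successive 3D position estimates – a minimal sketch, assuming positions are (x, y, z) tuples sampled dt seconds apart:

```python
# Finite-difference estimates from consecutive 3D samples.
# Noisy inputs make these jumpy in practice; smoothing or a curve fit
# over several samples would help.

def velocity(p0, p1, dt):
    """Average 3D velocity between two position samples dt apart."""
    return tuple((b - a) / dt for a, b in zip(p0, p1))

def acceleration(v0, v1, dt):
    """Average 3D acceleration between two velocity samples dt apart."""
    return tuple((b - a) / dt for a, b in zip(v0, v1))
```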
Note that this idea is probably not practical in terms of time/performance benefit, but it’s a great project from a theoretical/research perspective.