Cam resetting search instead of switching to tracking

We’re using Kevin Watson’s camera, tracking, and PWM libraries from the streamlined version with our camera to calculate the distance between the robot and the base of the rack. These libraries have been manually added to the default 2007 code we’ve been working on, so we’re not using his included user_routines.c or user_routines_fast.c files. We’re using MPLAB to do the programming. Our camera is currently mounted according to the official instructions (not Kevin’s instructions), but we have switched the proper values in the header file to reflect this. We have also increased the max tilt PWM value to 244 (one step of 50 above the default) to give the camera the ability to track the light at a greater angle, so we search at four tilt levels instead of the default three.

The camera tracks the light perfectly once it detects it, but we’re having trouble getting the camera to detect the light when it can only be found on the third (second-highest) or fourth (highest) search level. When the light can be detected on the first or second level everything works: the camera locks on, and we can then move the light up/down and left/right and it tracks fine.

However, if we uncover the light at a height where only the third- or fourth-level search can see it while the camera is still searching levels 1 or 2, the camera moves along normally until it finishes level 2. It then sweeps back across to the starting position at level 3, pauses for a second as if it detected something, and then resets back to level 1 and starts searching again, without locking on and without ever completing the search of levels 3 or 4.

However, if we allow the camera to begin searching level 3 (i.e., it moves to the second pan value of the third level) and then uncover the light, it detects it and locks on without resetting the search.

I have a theory but I haven’t had a chance to test it; I’m posting it here to see if anyone can give me a reason why it’s dead wrong and I shouldn’t bother testing it. This is the first time I’ve ever programmed in C, but I have a lot of experience in other languages.

My theory is that as the camera moves from the last pan value of the 2nd level to the first pan value of the 3rd level, it sweeps across diagonally and detects the light. When the move is finished, the camera thinks the light is still in view because it saw it during the sweep, so it tries to adjust the servos to target the light; but the light isn’t in its view anymore, so it loses it and (this is the part where I’m making major assumptions) the tracking code resets the search back to the first pan value of the first tilt level since it lost the target. I guess my biggest questions are: a) could the camera be confused if it detected the light while panning from the finishing side back to the starting side as it moved up a level, and then didn’t find it once the move was complete? and b) does the tracking code reset to pan value 0, tilt value 0 when the light is lost and it has to begin searching again?

Any comments or suggestions at all are very much appreciated. I tried to keep my terminology straight as I typed it but if I muddled any section please let me know.

I have one more question I’m just curious about: is there any reason, besides having the lens as low as possible for maximum possible tilt angle, that makes mounting the camera upside down advantageous?

I believe you’ve correctly identified the problem. We fixed a similar issue last year by changing the code so that it starts searching from the current location instead of always beginning back at the bottom left after it loses tracking. This year we might even try to change the search pattern to scan in zigzags rather than rasters.

Having the camera mounted “upside down” puts the lens in line with the tilt axis and minimizes parallax. That makes it easier to seek quickly to point at a specific spot in the image. I’m certain it was originally intended to be that way, but somewhere along the line the documentation seems to have gotten confused, and many of the official tools and manuals now presume it’s supposed to have the lens nearer the top.

When I wrote the code last year, there was some internal discussion within the GDC over the possibility of releasing the code early so that teams could help me find any bugs before the kick-off. Knowing that this might happen, I wrote a very generic search algorithm that wouldn’t give away the location of the green light. My thinking was that the search algorithm would be the first thing teams would rip out and improve upon. Anyway, the first thing I would do if I were you is to choose tilt min/center/max PWM values so that the tilt angle doesn’t change during search. The vertical field of view is very wide and as long as the camera finds the green light anywhere within the field of view, the tracking portion of the code will take over and drive both servos to center the green light on the imager.
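In header terms, the fixed-tilt suggestion amounts to pinning all three tilt limits to the same PWM value so the search loop can only step pan. The macro names below are placeholders, not necessarily the ones in your header file; substitute your own servo’s calibrated mid-range value, not the number shown.

```c
/* Hypothetical fixed-tilt calibration: with min == center == max,
 * the search code has no room to step the tilt servo, so only pan
 * sweeps and the wide vertical field of view covers the height
 * range.  144 is a placeholder, not a calibrated value. */
#define TILT_MIN_PWM_DEFAULT     144
#define TILT_CENTER_PWM_DEFAULT  144
#define TILT_MAX_PWM_DEFAULT     144
```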

With regard to the wacky behavior you’re seeing, see Q9 of the camera FAQ.

The optimum orientation for the camera is with the lens down. This minimizes the number of pixels the scene moves per PWM step, thus keeping the green light on the imager through a larger number of PWM values. This is important when the tracking algorithm is trying to pick the best PWM value to keep the centroid of the green light on the center pixel of the imager. In the extreme, imagine if there were only one PWM value that would put the green light on the imager; you’d have to increase the size of the allowable error box to be able to track the light, and this decreases your pointing knowledge.


I’ve modified the code so that it zig-zags the search pattern instead. I also implemented some code so that it only resets the search back to the beginning after it’s sure that it has located and tracked the light for a specified number of loops. Hopefully that should minimize any possible problems during autonomous. I won’t have an opportunity to test it until tomorrow.

the first thing I would do if I were you is to choose tilt min/center/max PWM values so that the tilt angle doesn’t change during search

I hadn’t even thought of doing that before, but it will go a long way towards fixing this problem. I’ll still probably implement the zig-zag search, just because the last thing I want during autonomous mode is the camera getting into a position where it just loops and can’t find the light even though it’s looking right at it ><

I understand the reasons for mounting the camera upside down now; when I first read Q9 I was confused about what the tilt axis was, but I see it now. Thank you both for your assistance.