I finally managed to get the CMU camera working, and capped a center goal. I wouldn’t have been able to do it without help from Bill Pramik and Jesse Pye from team 624, who provided code that became the framework for my own program. This camera thing is darn tricky, and I congratulate all of the teams who actually managed to make it work during the season.
A few things of note:
Observe the huge camera box on the near side of the robot. I don’t know why they thought it needed THAT much protection, but it seems to have worked out okay.
Yes, our drive train really does make those horrible grinding noises. If I weren’t a programmer, I’d fix it myself.
Though it appears in the video as though it’s a straight shot from robot to tetra to goal, the robot actually started out a ways to the left of the tetra, and is pointed to the right of the goal as it’s lifting.
I’ll post more video later to demonstrate its other neat tricks, like missing the tetra, backing up, trying again, and then capping the goal.
Sweet job, dude. Seems to be rather close to 15 seconds, but getting the robot to cap in auto within any time period is certainly an accomplishment to be proud of.
I’d love to GPL it, but I need to finish it first, and I need to get permission from the other author(s), as it’s not all my own code. If it’s okay, I’ll be sure to put it up here.
While we wait for the code, can you describe the methods you use? Are you using the CMUcam’s auto servoing? Fixed camera? Roll-your-own servoing? Are you using the camera to find the goal too?
The approach code sets a specific speed for both wheels. It then subtracts a number from both sides, proportional to the tilt servo value. This makes the robot start out fast, but not slam into the tetra at full speed or make a huge correction at the last second. Finally, it subtracts another number from just one side, proportional to the pan servo value, to make the robot turn without losing too much speed.

The arm control is very simple. It stores three numbers, which we will call o, i, and f: o is the value of the arm potentiometer at startup, i is the target arm position, and f is the current position. I store a target to i, based on o. (For instance, I could do i=o-50;, meaning that I want the arm to go to 50 clicks away from the starting point. o never changes after initialization, so all my positions are relative to the starting point, and I don’t have to worry about the pot slipping over time.) Then, every program cycle, I just apply the difference to a motor value (i.e. pwm04=127+i-f). Not coincidentally, this is the same way our backdrive code works in operator control.
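A minimal sketch of those two formulas in C. The base speed, gains, and pan/tilt center are my own assumed values, not the team’s actual tuning, and the clamping is added for safety:

```c
#include <stdlib.h>

/* Assumed tuning constants -- the real values came from on-robot tuning. */
#define BASE_SPEED  100  /* speed initially given to both wheels       */
#define CENTER      127  /* assumed servo neutral position             */
#define TILT_GAIN     2  /* hypothetical proportional gains            */
#define PAN_GAIN      1

/* Drive formula: both wheels start at BASE_SPEED, both slow down as
   the camera tilts away from center (i.e. down toward a near target),
   and one side alone is slowed to turn toward the pan direction. */
void approach_drive(int pan, int tilt, int *left, int *right)
{
    int speed = BASE_SPEED - TILT_GAIN * abs(tilt - CENTER);
    int turn  = PAN_GAIN * (pan - CENTER);

    *left  = speed;
    *right = speed;
    if (turn > 0)
        *right -= turn;  /* target to the right: slow the right side */
    else
        *left  += turn;  /* target to the left: turn is negative     */
}

/* Arm hold: i is the target pot value (set relative to the startup
   reading o), f is the current pot reading. Output is neutral (127)
   plus the error, as in pwm04 = 127 + i - f. */
unsigned char arm_pwm(int i, int f)
{
    int out = 127 + (i - f);
    if (out < 0)   out = 0;    /* clamp to the valid 0-255 pwm range */
    if (out > 255) out = 255;
    return (unsigned char)out;
}
```

With pan and tilt both centered, the robot drives straight at full approach speed; as either servo deviates, speed bleeds off proportionally instead of all at once.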
This is the part of team 624’s code that I mostly left intact. What it does is pan the camera to the left, and then turn on auto-servoing. If size==0, meaning there’s no object in sight, the camera pans to the center, and auto-servos again. If nothing, then it tries the right. It keeps going through this sequence until it finds something. So, it’s kind of a combination of do-it-yourself and the default.
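The left/center/right sweep might look something like this sketch. The pan positions and the size variable are placeholders for whatever the CMUcam actually reports:

```c
/* Assumed pan servo positions for the left/center/right sweep. */
static const int pan_positions[] = { 60, 127, 194 };
static int scan_index = 0;

/* Called each program cycle while searching. size is the CMUcam's
   tracked-object size; 0 means nothing is in sight, so we move on
   to the next pan position and let auto-servoing try again there. */
int scan_for_target(int size)
{
    if (size == 0)
        scan_index = (scan_index + 1) % 3;
    return pan_positions[scan_index];
}
```

Once size goes nonzero, the sweep stops advancing and the camera’s own auto-servoing takes over from that position.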
Yes. It uses the same driving formula to approach the goal as it does to approach the tetra.
In summary, here’s the basic state machine I’m using:
1. Assign low position to arm. Look around. If green is visible, goto 2.
2. Drive, using the formula described above. If the object is lost, goto 1. If the camera is tilted down far enough, goto 3.
3. Proceed forward slowly, in case we came up short. After a half second, goto 4.
4. Assign high position to arm. Keep tracking green. If the arm is at its destination and the camera is still looking down, goto 5. If the camera is above parallel with the floor (127), goto 6.
5. Assign low position to arm while backing up. After one second, goto 2. (Note: if you hit this stage in competition, you’d never make it to the goal in fifteen seconds. However, if you’re just demonstrating it with no time limit, it looks awesome.)
6. Look around. If yellow is visible AND the arm is at its assigned position, goto 7.
7. Drive, using the formula described above. If the object is lost, goto 6. If the camera is tilted down far enough, goto 8.
8. Check the pan position. If we’re at the correct position but turned askew, rotate slowly in place. If the pan position is 127, goto 9.
9. Assign mid position to arm. After one second, goto 10.
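The steps above can be sketched as an enum plus one transition function per state. The final state (10) and all the sensor inputs here are my own placeholders; this shows just the tetra-approach step:

```c
/* States numbered to match the steps above; DONE (10) is assumed. */
enum state {
    SEEK_TETRA = 1, APPROACH_TETRA, CREEP_FORWARD, LIFT_TETRA, BACK_UP,
    SEEK_GOAL, APPROACH_GOAL, SQUARE_UP, CAP, DONE
};

/* Step 2: drive at the tetra. size is the CMUcam's tracked-object
   size (0 = object lost), tilt_low is whether the camera has tilted
   down far enough to say we've arrived. */
enum state step_approach_tetra(int size, int tilt_low)
{
    if (size == 0) return SEEK_TETRA;     /* object lost: goto 1  */
    if (tilt_low)  return CREEP_FORWARD;  /* close enough: goto 3 */
    return APPROACH_TETRA;                /* keep driving         */
}
```

A top-level loop would then just switch on the current state every program cycle, so each step stays small and testable on its own.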
Hmm, never thought of having the speed that it’s approaching at be proportional to the tilt value. It’s so simple, but such an effective idea. How did you figure out what the “magic number” was? Trial and error?
Dave did indeed hint during last year’s kickoff that we should get used to sensing, and suggested the IR sensor might return. So next year, do we get a choice to use the CMUcam, the IR sensor, or both?
The only concern I have here is that digital cameras pick up IR light. We used one to test the IR beacons we built pre-season. I guess you could use the CMUCam to track IR if you wanted!