Trackballs and CMUcam

Hey all, this question is mainly geared towards team T.H.R.U.S.T. 1501 after seeing their robot video, but if anyone else has any thoughts, please chime in!


Chris-

Hi, my name is Dan, I’m a mentor on team 395…2trainRobotics. I saw that you guys had the CMUcam tracking the trackball really nicely, and had a few questions about your setup if you wouldn’t mind. I saw that you had a pot on there…are you perhaps using a motor instead of a servo for panning? And also are you using any additional lighting on the robot? Or did you find that it was unnecessary? Any additional details would help too, as we are planning on trying to get this working during our FIX-IT windows :D.

Thanks Sooo much,
-dandon

Dan,

Nice to meet you on CD. Actually, that was not a pot; it's an ultrasonic sensor. The sensor looks up at the overpass. Once we get close enough to the overpass, the CMU loses sight of the ball, because of the tilt servo's limit and because the bottom of the ball is too dark. So we roll through the overpass until the ultrasonic senses it. Once it does, it makes the robot change states in the software. Classic state logic.

We don’t have a panning servo. We never have. We lock the camera X to the robot base. The FOV (field of view) of the camera is good enough to see the entire overpass with a fixed X position (no panning). So instead of using a panning servo, we create an error variable from the X pixel value and make the robot drive to the error.

The center of the “X” FOV is 80 pixels (the image is 160 x 240). So if the camera sees the centroid (center of mass) of the ball to the left of the center of X, it will report an X pixel of, say, 40. The error is then 40 (80 - 40). We take this error of 40, scale it into a percent-based PWM command, and correct the entire robot base to drive toward the center of mass. As the robot turns toward the error, the error from the camera gets less and less, and the percent-based scaled error of “X” decreases until the ball mass is at 80 pixels, which is 0 error. We put in about a 5-pixel deadband, so the camera can report 75 to 85 pixels and that’s good enough.
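For anyone trying the same thing, here is a minimal C sketch of that pixel-error-to-steering idea. The function name, PWM conventions, and scaling constants are my own assumptions for illustration, not 1501's actual code:

/* Sketch: turn the camera's X centroid into a steering correction.
 * Assumes an IFI-style PWM range of 0..254 with 127 = neutral; the
 * constants and function name are illustrative, not 1501's real code. */

#define X_CENTER   80    /* image is 160 pixels wide, so center X is 80 */
#define DEADBAND    5    /* 75..85 pixels counts as "centered" */
#define MAX_TURN   40    /* cap on the steering correction */

int steer_from_camera_x(int x_pixel)
{
    int error;
    int turn;

    error = X_CENTER - x_pixel;              /* ball at pixel 40 -> error of 40 */

    if (error > -DEADBAND && error < DEADBAND)
        return 0;                             /* close enough, drive straight */

    /* Scale the pixel error into a bounded, percent-style turn command. */
    turn = (error * MAX_TURN) / X_CENTER;
    if (turn >  MAX_TURN) turn =  MAX_TURN;
    if (turn < -MAX_TURN) turn = -MAX_TURN;
    return turn;                              /* add to one side's PWM, subtract from the other */
}

The returned correction gets added to one drive side and subtracted from the other, so the turn shrinks as the pixel error shrinks, exactly as described above.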

We tried a FALCON vision system, which required a light. We bought an automotive fog light and made a reflector for it. But we changed back to the CMU camera, and the lighting was “ok” as is. I personally have a light meter. At our practice field, we get a 400 lux reading on the light meter at the face of the ball we are trying to see. We went to a Muncie practice on the Sunday before ship day, and their field was 170 lux; our camera would not see the ball, so we dropped some settings in the camera and were able to drive under the ball once again. We plan on bringing our light meter to Boilermaker and measuring the light in the arena so we understand it.

Hope that helps.

I was thinking about doing this during our fix-it window, but I was wondering: is the ultrasonic sensor necessary?

From my understanding, what the ultrasonic does is give you a surefire way of knowing when you’ve passed under the ball, allowing the program to know when to change its state rather than guessing based on dead reckoning and timer/encoder-based loops.

I dunno if my student programmer is watching this thread, but this is how he explained the problem to me.

When using the CMU camera, he doesn’t get a real-time reading of where the ball is all the time. He just samples it every now and then and updates a variable in the gyro control.

Sometimes, when he samples the CMU camera, it returns nothing; it does this on the way to the ball. This doesn’t stop the robot from driving forward, however. It keeps driving and correcting to the last “good” location of the ball.

So with the CMU camera giving us flaky data, it was hard to figure out whether it was just being flaky on the way to the ball or whether we truly did arrive under the ball. At ship time we did not have our encoders working, and I dunno if he plans on using encoders when we get to our regional, so he chose a sensor instead of measuring it out with an encoder.

Being under the ball or driving past it would for SURE return nothing from the camera; the camera loses sight of the ball underneath it or as we drive past it, so that’s where the extra sensor comes in.

Our programmer came up with this:

If camera data > 0
Update the gyro control variable with the pixel error from the camera
If camera data = 0, keep driving to the last known pixel error

Do all of the above until the sensor reads <= 20 inches, meaning it sees something 20 inches away, either the rack or the ball. When nothing is in front of the sensor, it reads somewhere around 270 inches.
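In C, that seek-the-ball logic might look something like the sketch below. The sensor and drive hooks are hypothetical names I made up to make the idea concrete, not their real routines:

/* Hypothetical sensor/drive hooks -- not real library calls. */
extern int  get_camera_pixel_x(void);      /* centroid X pixel, 0 = no tracking data */
extern int  get_ultrasonic_inches(void);   /* range in inches, ~270 = nothing ahead */
extern void drive_to_pixel_error(int err); /* feeds the gyro/drive correction */

#define SONAR_TRIGGER_IN 20                /* rack or ball detected at <= 20 inches */

static int last_pixel_error = 0;           /* last known good error from the camera */

int seek_ball_step(void)
{
    int x_pixel = get_camera_pixel_x();

    if (x_pixel > 0)
        last_pixel_error = 80 - x_pixel;   /* update the gyro control variable */
    /* else: camera returned nothing, keep driving to the last known error */

    drive_to_pixel_error(last_pixel_error);

    /* Returns 1 once the sonar sees the rack or ball, i.e. time to change state. */
    return (get_ultrasonic_inches() <= SONAR_TRIGGER_IN);
}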

Then our programmer changes states in the software, from:
State 1 (seek the ball and drive to it using camera until sensor)
to
State 2 (drive at “x” degrees for so many seconds or inches)
to
State 3
to
State 4

The above basically eliminates any dead reckoning and allows the robot to do whatever it wants to do, and get there however it needs to, before changing to state 2.
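The state change itself can be as simple as a switch on an enum; here is the classic pattern sketched in C, with made-up state names and helper functions (only states 1 and 2 shown, the rest chain on the same way):

/* Sketch of the state logic; the names and helpers are illustrative only. */
extern int seek_ball_step(void);      /* from the sketch above */
extern int drive_heading_step(void);  /* hypothetical: drive at a heading for time/distance */

enum auto_state { SEEK_BALL, DRIVE_HEADING, /* states 3 and 4 would slot in here */ DONE };
static enum auto_state state = SEEK_BALL;

void autonomous_step(void)            /* called every control loop */
{
    switch (state)
    {
    case SEEK_BALL:                   /* State 1: camera steers the base */
        if (seek_ball_step())         /* 1 once the sonar sees the rack or ball */
            state = DRIVE_HEADING;
        break;
    case DRIVE_HEADING:               /* State 2: drive at "x" degrees for so many seconds/inches */
        if (drive_heading_step())
            state = DONE;             /* states 3 and 4 would continue the chain */
        break;
    default:
        break;
    }
}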

If you watch a match where we knock down the first ball and the robot just STOPS, that means we are still having issues with our sensor detecting the rack; that may be a good reason to switch over to an encoder when we get to Boilermaker.

Bottom line, it’s not perfect yet, but the concept and control theory worked. That’s what’s important: the programmers did what they set out to do.

Is it possible I could contact your programmer? I am having trouble figuring out how to “drive to the error”.

Yes, private message him:

http://www.chiefdelphi.com/forums/member.php?u=15959
Nathan

He is very good, I am proud of him this year.

Thanks, I’ll do that.

I forgot to mention…

When on the way to the ball, IF we don’t get any tracking data for 5,000 ms (5 seconds), then he drops the PWM commands to NEUTRAL, just to keep the robot from running away. I learned about this fail-safe from him tonight.
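That kind of fail-safe is just a timer on the last good camera packet. A minimal sketch, assuming a millisecond clock and IFI-style PWM values (both assumptions on my part):

/* Sketch of the 5-second fail-safe: if no tracking data for 5,000 ms,
 * command neutral so the robot can't run away.  The clock source and
 * PWM_NEUTRAL value are assumptions for illustration. */

#define PWM_NEUTRAL 127
#define TIMEOUT_MS  5000

static unsigned long last_track_ms = 0;

void track_watchdog(int have_camera_data, unsigned long now_ms,
                    unsigned char *left_pwm, unsigned char *right_pwm)
{
    if (have_camera_data)
        last_track_ms = now_ms;        /* refresh the timer on every good sample */
    else if (now_ms - last_track_ms >= TIMEOUT_MS)
    {
        *left_pwm  = PWM_NEUTRAL;      /* stop rather than chase stale data */
        *right_pwm = PWM_NEUTRAL;
    }
}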