Rookie Team Programming [autonomous]

Hello everyone,

As a rookie team this year, our team has a lot to accomplish, and we are receiving little help when it comes to the programming. So, to keep this short and sweet, I'll explain our situation.

Currently we are using a set of autonomous modes that we can switch between via switches that will hopefully be mounted on the robot, if the rules permit it. We have separate VIs to control the accelerometer inputs and camera tracking. I have coded a few things [i.e., accelerometer-controlled traction control and bump control], but I haven't gotten any data yet, as I have not been able to test the accelerometer so far. Plus, this being the first year I've been exposed to LabVIEW, I haven't quite gotten the hang of it.

What I need to know is as follows:
-Is there a way to use the camera code to track not only colors but objects [i.e., other robots and barriers], and use that code so we can avoid those objects?

-Is using the accelerometer for traction control a good idea? It seems iffy to me.

-If the above is true, how?

Thanks in advance!

The vision processing can locate objects if you can write an algorithm to distinguish them from everything else. Some things like the edge of the field might be pretty straightforward. But the camera doesn’t know what objects are, just pixels. The color processing is the inexpensive way we are identifying the target this year.
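
To make the color idea concrete in a text language, here is a rough Python/OpenCV sketch of color-based target finding. This is not the LabVIEW code itself (the vision VIs do the equivalent with Color Threshold and particle analysis steps), and the HSV bounds are made-up placeholders you would tune against real images of the target.

```python
# Hypothetical sketch of color-based target finding (not the LabVIEW code).
# The HSV bounds are placeholders; tune them against images of the target.
import cv2
import numpy as np

def find_target(bgr_frame, lo=(100, 120, 80), hi=(130, 255, 255)):
    """Return the (x, y) centroid of the largest blob in the color range."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    # grab contours; [-2] keeps this working across OpenCV 3.x and 4.x
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```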

Greg McKaskle

Well, if you have all that coded, you are doing well. Of course, the hard part is testing and getting it working.

There is a really good thread on traction control on CD; search for it. While you are searching… The accelerometer is one way to gather the data you need for one method of traction control, and it is as good as any other way of gathering acceleration data. When you use the accelerometer, you are looking for an excessive change in the rate of acceleration; you could also do this with pots (potentiometers) on the drive system. When the rate of acceleration exceeds a threshold, you reduce power to the motors until you return to the target rate of acceleration (a solution that cries out for a PID loop).
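
To sketch that idea in code (Python rather than LabVIEW, and with made-up numbers), the core of the accelerometer approach is just a proportional cut on the throttle; a full PID loop would add the integral and derivative terms, and a real version would also filter the noisy raw accelerometer signal:

```python
# Minimal sketch of accelerometer-based traction control, run once per
# control cycle.  TARGET_ACCEL and KP are placeholders to tune on the robot.

TARGET_ACCEL = 3.0   # m/s^2: the most the drivetrain can deliver without slip
KP = 0.2             # proportional gain; add I and D terms for a true PID

def traction_limited_throttle(driver_throttle, measured_accel):
    """Cut the throttle command when measured acceleration is excessive."""
    excess = abs(measured_accel) - TARGET_ACCEL
    if excess > 0:
        # back off in the direction we are driving
        sign = 1.0 if driver_throttle >= 0 else -1.0
        driver_throttle -= sign * KP * excess
    return max(-1.0, min(1.0, driver_throttle))   # clamp to motor output range
```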

The other way of doing traction control is to measure true ground speed and compare it to motor speed. When the motor speed is greater than what the ground speed says it should be, you are slipping and need to reduce the motor speed until the two are within some tolerance. The math should be easy when both sides are slipping at the same rate; when you're slipping to the left or right, I think the math will get tougher.
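
Here is a rough per-cycle sketch of the straight-line case, assuming you can measure wheel speed (drive encoders) and true ground speed (say, a trailing undriven wheel). The tolerance and the 10% power cut are made-up numbers:

```python
# Sketch of the wheel-speed vs. ground-speed comparison, straight-line case
# only.  All constants are placeholders to tune on the real robot.

SLIP_RATIO_LIMIT = 0.10    # allow a 10% wheel/ground mismatch before reacting
MIN_GROUND_SPEED = 0.05    # m/s: below this, treat the robot as stationary

def is_slipping(wheel_speed, ground_speed):
    """True when the driven wheels spin faster than the robot actually moves."""
    if abs(ground_speed) < MIN_GROUND_SPEED:
        return abs(wheel_speed) > MIN_GROUND_SPEED   # spinning while parked
    return (wheel_speed - ground_speed) / abs(ground_speed) > SLIP_RATIO_LIMIT

def traction_step(throttle, wheel_speed, ground_speed):
    """Run once per control loop: trim power until the speeds agree."""
    if is_slipping(wheel_speed, ground_speed):
        throttle *= 0.9   # cut 10% this cycle, repeat until within tolerance
    return throttle
```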

A fairly robust way of doing this that I can think of is to make use of the fact that the majority of the playing field is white this year, and that almost all teams I have seen use non-white bumpers. Try taking a picture of your robot sitting on a white surface and importing it into Vision Assistant, then play around with the Color Threshold filter.
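
If it helps to see the same threshold idea outside Vision Assistant, here is a hypothetical Python/OpenCV sketch; the grayscale cutoff of 200 is a placeholder to tune against your own pictures:

```python
# Sketch of the white-floor threshold (what Vision Assistant's Color
# Threshold filter does interactively).  white_cutoff is a placeholder.
import cv2

def obstacle_mask(bgr_frame, white_cutoff=200):
    """Return a binary image: 255 where a pixel is NOT white floor."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    _, floor = cv2.threshold(gray, white_cutoff, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_not(floor)   # non-white pixels = potential obstacles
```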

To get a sense of where these obstacles might be, mount your camera at the bottom of your robot and point it slightly downward. The y-position of the obstacles you identify will be roughly proportional to their distance from you. A related algorithm is the Polly algorithm; there's more info here, with some good pictures for visualization. You'll just be using white/non-white pixels instead of edge detection to find obstacles.
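
As a sketch of that column scan (again hypothetical Python, taking a binary mask like the obstacle_mask() sketch above as input; the column step is arbitrary):

```python
# Polly-style scan: walk up each sampled image column from the bottom and
# record the row of the nearest obstacle pixel.  With a low, downward-tilted
# camera, bottom-most rows correspond to the closest obstacles.
import numpy as np

def obstacle_profile(mask, column_step=8):
    """Map sampled column x -> row of the nearest obstacle, or None if clear."""
    width = mask.shape[1]
    profile = {}
    for x in range(0, width, column_step):
        rows = np.nonzero(mask[:, x])[0]   # rows where the mask is non-zero
        # the bottom-most (largest-index) obstacle pixel is the closest one
        profile[x] = int(rows[-1]) if rows.size else None
    return profile
```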

EDIT: Note that this method means you can't track trailers at the same time, as your camera has to be pointed at the floor. If you want to track trailers, I'd advise using the camera only for that, in which case you could consider something like an ultrasonic range sensor to detect obstacles, although this is a much coarser way of doing it.
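
For the ultrasonic option, many analog rangefinders output a voltage proportional to distance, so the code side is tiny. In this hedged sketch, read_voltage and the scale factor stand in for your actual sensor and its datasheet value:

```python
# Hypothetical ultrasonic check.  VOLTS_PER_CM is a placeholder; take the
# real scale factor from your sensor's datasheet.  read_voltage stands in
# for whatever call your analog input gives you.

VOLTS_PER_CM = 0.0064    # placeholder: volts of output per cm of range
STOP_DISTANCE_CM = 60.0  # start avoiding when something is this close

def obstacle_ahead(read_voltage):
    """read_voltage: a callable returning the sensor output in volts."""
    distance_cm = read_voltage() / VOLTS_PER_CM
    return distance_cm < STOP_DISTANCE_CM
```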

If you have any more questions, feel free to post back in this forum, and I’ll try to help you out.

–Ryan

When it comes to finding obstacles and avoiding them with a camera… well, there are million-dollar prizes for designing cars that can do just that, and winning them takes teams of experts years of work. You can do a decent job, but to be excellent, you need amazing programming skills, time, and better hardware than we have.