Electrical components for autonomous navigation

Currently our team is addressing the question of how to achieve precise autonomous navigation.

Last year we just used the supplied gyro, which we were not satisfied with at all. What did your team do to address this problem?

What additional components did your team order and implement for autonomous navigation?

What system yielded the best results? How accurate was your system? What are some tips to keep in mind to ensure accuracy and the best autonomous operation?

What are you planning on exploring for this coming season?

Our team used the FIRST yaw rate sensor and integrated the signal by adding it to a register that was initialized to 32,000. As long as your robot does not turn faster than the max rate of the sensor, it was very accurate for us.
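The integration scheme described above can be sketched in a few lines. The neutral reading, loop period, and scale factor below are made-up placeholders for illustration, not the actual FIRST sensor's numbers:

```python
# Sketch of yaw-rate integration into a register, as described above.
# All constants are assumed values, not the real sensor's calibration.

NEUTRAL = 512        # assumed sensor reading when the robot is not turning
DT = 0.026           # assumed control-loop period in seconds
DEG_PER_COUNT = 0.1  # assumed scale factor (degrees/sec per count)

def integrate_yaw(readings, heading_register=32000):
    """Accumulate yaw-rate samples into a heading register."""
    for r in readings:
        # Each sample's deviation from neutral is the turn rate;
        # multiply by the loop period to get degrees turned this cycle.
        heading_register += (r - NEUTRAL) * DEG_PER_COUNT * DT
    return heading_register
```

Starting the register at a midpoint value like 32,000 lets an unsigned register hold turns in either direction without going negative.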

For this year, if we need to do some sort of navigation again, we are planning to design a distance and speed sensor. The simple approach is to put white paint marks on the side of a wheel and count them with an optical sensor.

There are two problems with this approach: the wheel might be slipping, and you can't be sure which way the wheel is turning. Someone might be pushing you backwards while your robot is commanded to go forwards, and there is no way to tell just by counting the marks on the wheel; the optical sensor only sees on, off, on, off.

An easy way to get around the slipping wheel (the drive motor causing the wheel to spin) is to use a non-driven wheel, like they do at car testing labs. You see them on TV sometimes, with a bicycle wheel attached behind the car.

To solve the forwards-or-backwards problem, one way is to use two sensors and wider marks on the wheel; then you can tell which direction you are going by which sensor changes state first.

Then by counting (integrating) the marks you get distance, and by measuring the time between marks you get speed.
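A rough sketch of the two-sensor idea: when sensor A changes state, sensor B's current state tells you the direction. The sample format and the mark spacing are assumptions for illustration, and which sign counts as "forward" depends on how the sensors are offset:

```python
# Sketch of two-sensor (quadrature-style) decoding as described above.
# samples: list of (a, b) on/off pairs, one pair per control loop.

MARK_SPACING_IN = 1.5  # assumed distance per mark on the wheel (inches)

def decode(samples):
    """Return net marks counted; positive = one direction,
    negative = the other (sign depends on sensor wiring)."""
    count = 0
    prev_a, _ = samples[0]
    for a, b in samples[1:]:
        if a != prev_a:       # sensor A just changed state
            if a == b:        # compare against B to get the direction
                count -= 1
            else:
                count += 1
        prev_a = a
    return count

def distance(samples):
    """Distance traveled, from the net mark count."""
    return decode(samples) * MARK_SPACING_IN
```

Speed falls out the same way: divide the mark spacing by the time between two consecutive state changes.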

BTW, there will probably be some kind of autonomous mode requirement this year, but don't assume it will be autonomous navigation. They could require us to do something like having the driver control the robot's position while the robot does something else by itself at the same time.

Well, we had a whole different array of sensors at one point or another. I think the most unique sensor we had was our chain sensor. We took a plastic cup (I think) and wired an LED beneath the chain, and a small light sensor above the chain on the other side of the cup. This would click on and off every now and then from the links in the chain blocking the path of light. This was used in our autonomous mode to double-check whether our drive was stuck or not. It worked OK, but we didn’t have enough time to really sit down and get it going before competition.
We also had a small “batvision” module.
Our plans were to mount two ultrasonics on two servos, and have it pivot around, scanning in front of the bot.
This data would be picked up by the dashboard output on our laptop, and we were working on a 3D program that had a mesh of the field and the bots. We would have these transparent “walls” pop up where the bot had gotten feedback, at the distance it had reported. I really wish that one flew…
We also tried using a magnetic switch mounted by each wheel, and tried gluing magnets around the wheel to count the rotations. We didn’t get too far with that one either.
We DID have a successful line-tracker, but as everyone knows, it was just TOO slow.
This year we hope to implement the gyro in some way. I think the one we received last year was a dud too; we couldn’t get it to work right.
The sensors you use really depend on your system, and on a good collaboration of hardware and software.
If you’re using a skid-steer bot, then you pretty much just need a way of checking your wheel rotations. You could rely on software for the direction, but it’d be a little trickier.
Anyway, you’d be surprised how much “basic” sensors can do, switches and the like.

Keep in mind however:
The more data you gather, the more you can compare against each other (triangulation, sort of), so the more sensors, the better.

The way I like to think of choosing sensors is this:
For last year’s game, we gathered the team into one room with no windows. It was a pretty large room, and there were things in it (furniture, etc.). We took a bell and put it in the middle of the room. The bell represented the Stacks up on the ramp.
Then we tied everyone’s arms behind their backs so they couldn’t feel anything, we turned off the lights, and they had to start around the edges of the room. They also weren’t allowed to make any noise, or they got DQ’d. They had to try and ring the bell in the dark. Anyway, this gave them some perspective on what it’s like to be the robot. You can’t see, feel, or hear anything. You may not even know where your target is. There was one thing I pointed out, though:
Everyone DID know in which direction their feet were moving, and they knew if they ran into something. This helped a little.
We found pretty quickly that certain things can make the job easier.

Anyway, it’s really 50% robot- and game-specific what kind of sensors you choose. Just explore!

Another sensor that we used came from www.digi-key.com and was a 4-bit rotary encoder. It works, for the most part, the way the optical rotation sensors described above work. Anyway, we put one RE on an axle of one side, and one on the other (we used a tank-style drive system). These two encoders were used in correlation with the yaw-rate sensor to triangulate the position of the robot. (Our programmer is a senior in 200-level math and an EE class at Purdue.) The only problem with the REs is that if a wheel slips, then your calculation is off. There are six pin-outs on the encoder, and they are connected to the analog port on the RC.

Keep in mind however:
The more data you gather, the more you can compare against each other (triangulation, sort of), so the more sensors, the better.

I agree with this statement. The more sensors a robot has the less of a chance of a false positive.

We used a rotation sensor built into the wheel assembly, consisting of two optical sensors (kit part) and an alternating light/dark encoded disk that was part of the wheel. This system gives direction and distance traveled with accuracy down to less than an inch. Add to that the gyro to determine turning or moving, and some feedback from the steering servo, and you can tell where you are anywhere on the playing field within a few inches. Map out the field with a coordinate system and you can easily tell the robot to go somewhere on the field, and it will do precisely that. In addition, a long string of direction commands will be carried out in sequence as the robot moves from point to point, correcting at each point along the way. Add some storage to the custom circuit that does the control, and a way to select different strategies, and you have a fairly powerful system. Our operator interface would allow the drive team to select from 11 different strategies at the field without reprogramming.
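A minimal sketch of the coordinate-system idea, assuming you already have a distance delta and a gyro turn delta each cycle. The function names and conventions here are hypothetical, not the team's actual code:

```python
import math

# Dead-reckoning sketch: track (x, y, heading) from per-cycle distance
# and turn measurements, then aim at waypoints on the field map.

def update_pose(x, y, heading_deg, dist, turn_deg):
    """Advance by dist along the current heading, then apply the
    gyro's measured turn for this cycle."""
    h = math.radians(heading_deg)
    x += dist * math.cos(h)
    y += dist * math.sin(h)
    return x, y, heading_deg + turn_deg

def heading_to(x, y, wx, wy):
    """Heading (degrees) from the current position to a waypoint
    (wx, wy) in the same field coordinate system."""
    return math.degrees(math.atan2(wy - y, wx - x))
```

Chaining waypoints then just means steering toward `heading_to(...)` for each point in the list, advancing to the next point once the robot is within some tolerance.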

*Originally posted by Adam Y. *
**I agree with this statement. The more sensors a robot has the less of a chance of a false positive. **

On one hand this is true. However, the more sensors you have, the more work it is to integrate them, and the more chance that nothing will work.

If this were the DARPA challenge, I would definitely say you are right. But six weeks in FIRST is often not enough time to get one good set of sensors working, let alone two or three.

Last year on 190 we used a two-axis accelerometer and a gyro to develop an inertial navigation system. This worked very well with our crab drive in autonomous mode.

How were you able to account for the noise in the accelerometer? If not handled, it could be misinterpreted as data, meaning your robot would think it was moving even though it wasn’t.

There are a few ways that a navigation system could be set up for the robot. Last year we used a network of sensors to give us data that we then used for autonomous mode. We had 4-bit rotary encoders on the front two drive wheels. Basically, these counted each time a wheel made one revolution, so from them we get a distance per unit time. Then we used the gyro chip to get our change in angle per unit time. Both were fed into the computer for autonomous.

Now, you could also use an accelerometer. You could get a 3-axis accelerometer that would give you acceleration in the x, y, and z axes. You would then feed this into two integrating op-amp circuits, which would give you a position-vs.-time readout. This is based on physics/calculus: if we integrate acceleration (m/s^2) once we get velocity (m/s), and if we integrate velocity we get position vs. time, which is the data you want for navigation.

Another way to do autonomous mode would be based on simple timing. You could write your autonomous program such that after a certain time interval the robot completes a certain action. For example, drive the motors at a certain value for a certain amount of time in the forward direction, then drive one motor forward and the other in reverse (if you have a tank-drive setup) and turn for some amount of time, and so on and so forth. This is not super accurate on the field, but it is one way of doing it. It is a very guess-and-check method, and it requires that the robot be placed on the field at the same spot every time so that everything works out all right.
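The double-integration step can be done in software as well as with op-amps. Here is a minimal numeric sketch; the sample period is an assumed value:

```python
# Sketch of double integration: acceleration -> velocity -> position,
# using simple Euler summation over fixed-rate samples.

DT = 0.01  # assumed seconds between accelerometer samples

def integrate_twice(accels, v0=0.0, p0=0.0):
    """Return the position trace from a list of acceleration samples."""
    v, p = v0, p0
    positions = []
    for a in accels:
        v += a * DT   # first integration:  acceleration -> velocity
        p += v * DT   # second integration: velocity -> position
        positions.append(p)
    return positions
```

Note that any constant bias in the acceleration samples grows quadratically in position after two integrations, which is why the noise concern raised earlier in the thread matters so much for this approach.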

This is an important aspect to consider. Noise/interference can affect the performance of the sensor being used. Typically, it can be reduced with a circuit: if you feed the sensor output into a low-pass filter, that should reduce the disturbance and give you a clean signal.

This disturbance can cause errors in our results, reducing the accuracy of our data. Now, we use analog-to-digital (A/D) converters to get all of this data. Basically, this converts the analog output of a sensor into a digital number that can be interpreted by the computer. The error happens when an input signal has a frequency component at or above half of the sampling rate. Recall that frequency is the number of occurrences within a given time period (usually 1 second); “the frequency was 40 cycles per second.” If such a component is not accounted for and limited, it cannot be distinguished from data that is validly sampled. Ideally, a low-pass filter would pass unchanged all slower signal components with frequencies from DC up to the filter’s cutoff frequency, and anything above that point would be eliminated completely, removing the disturbance. In reality, however, filters do not cut off sharply at a certain point. Instead, they gradually attenuate the erroneous frequency components, following a falloff or roll-off slope.

This is the inherent problem in A/D conversion when the input signal has frequencies above half of the A/D sample rate: the higher frequencies “fold” into the lower frequency range and are interpreted as random signals that do not make sense. Thus, we use a low-pass filter that limits the input signal bandwidth to below half of the sampling rate. A low-pass filter applied to each input channel of the A/D will also remove the unwanted high-frequency noise and interference introduced before sampling occurs.

This is just a general way to look at disturbance in sensors.