Line Sensor Algorithm Question

Has anyone made a good line following algorithm with some type of feedback? We got ours up and running last night with a basic if-else statement. The problem we have is the robot wiggles as it follows the line.

Note: The line sensors are analog, not digital, hence the thresholds.

left = LeftSensor.getAverageVoltage();
middle = MiddleSensor.getAverageVoltage();
right = RightSensor.getAverageVoltage();

if (left > left_threshold) {
  left_sensor = true;
} else {
  left_sensor = false;
}

if (right > right_threshold) {
  right_sensor = true;
} else {
  right_sensor = false;
}

if (middle > middle_threshold) {
  middle_sensor = true;
} else {
  middle_sensor = false;
}

if (left_sensor && !middle_sensor && !right_sensor) {
  // Left Sensor Only
  drive.tankDrive(0.75, 0.5);
  SmartDashboard.putString("Direction", "Hard Left");

} else if (left_sensor && middle_sensor && !right_sensor) {
  // Left and Middle Sensor
  drive.tankDrive(0.5, 0.25);
  SmartDashboard.putString("Direction", "Light Left");

} else if (!left_sensor && middle_sensor && !right_sensor) {
  // Middle Sensor Only
  drive.tankDrive(0.75, 0.75);
  SmartDashboard.putString("Direction", "Straight");

} else if (!left_sensor && middle_sensor && right_sensor) {
  // Right and Middle Sensor
  drive.tankDrive(0.25, 0.5);
  SmartDashboard.putString("Direction", "Light Right");

} else if (!left_sensor && !middle_sensor && right_sensor) {
  // Right Sensor Only
  drive.tankDrive(0.5, 0.75);
  SmartDashboard.putString("Direction", "Hard Right");
  
} else {
  drive.tankDrive(0.0, 0.0);
  SmartDashboard.putString("Direction", "Error");
}

Because it’s analog, you should be able to use a scaling factor to adjust toward your target. Right now it’s a bang-bang controller, which has many issues, like the jitter you discovered. I’m not sure exactly which sensors you are using, but a camera with vision tracking would be much better.

We thought about using a camera, but it requires much more programming and space. We have never done any kind of vision tracking and figured it might be better to try in the off-season. Do you have any in-depth resources? Here is a link to the sensor we are using.

Maybe an unnecessary comment, since I’m not actually helping with the line tracking, but to save execution/processing time you can replace all of your if statements with single-line comparisons:

left_sensor = left > left_threshold;
middle_sensor = middle > middle_threshold;
right_sensor = right > right_threshold;

This sensor can work. However, you should use values that adjust in proportion to the target’s location. A PID loop is best but might be the hardest to implement. I recommend that you create a simple P loop that takes in the signal and adjusts toward the target.
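To make the P-loop suggestion concrete, here’s a minimal sketch of what it could look like. The class name, gain, base speed, and error sign convention are all placeholders I made up, not tuned values — you’d tune `KP` and flip signs on the actual robot:

```java
public class PLineFollower {
    // Placeholder gain and cruise speed -- tune on the robot.
    static final double KP = 0.8;
    static final double BASE_SPEED = 0.6;

    // error: how far the line is off-center, e.g. derived from the
    // difference between the left and right analog sensor voltages.
    // Positive error here means "speed up the left side"; flip the
    // sign if your robot turns the wrong way.
    // Returns {leftSpeed, rightSpeed}.
    static double[] steer(double error) {
        double correction = KP * error;
        return new double[] { BASE_SPEED + correction, BASE_SPEED - correction };
    }
}
```

With zero error both sides run at the base speed, and the correction grows smoothly with the error instead of jumping between fixed bang-bang speeds.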

Perhaps this light sensor would be useful for what you are trying to do?

Maybe not, since it is intended to follow dark lines on light-colored surfaces?

The problem is the sensing distance of that sensor is less than one inch. The sensor I am using senses at 2-3 inches.

If you watch the video on that page, they say there’s a setting to invert the code to see light lines on dark backgrounds

Having had experience with both line following algorithms (2011) and a similar concept in a light following application for a grad class… you’ll never get it to go perfectly straight. There will always be some amount of wiggle.

With some testing and development, however, you can smooth it out considerably. Consider adding in a “history” component. Right now it might see right and middle, but what was it seeing a second ago? If it was seeing only the right, then you don’t want to be turning more to the left - you want to be turning a little to the right to straighten it out and hopefully hit the center when you’re almost straight. On the other hand, if you had only been seeing the middle a second ago, you’ll want to turn back to the left a little to get back there.

So it’s those cases where you are seeing the line with two sensors that you need to pay a bit more attention to, and get a little more creative!
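One simple version of that “history” idea is just remembering which side you last saw the line on, so that when every sensor loses it you steer back toward that side instead of stopping. This is only a sketch, not tested code — the class name, speed values, and which side counts as “slower” are assumptions to adapt to your drivetrain:

```java
public class LineMemory {
    enum Side { LEFT, CENTER, RIGHT }
    private Side lastSeen = Side.CENTER;

    // Call every loop with the thresholded sensor booleans.
    // Returns {leftSpeed, rightSpeed} placeholder values.
    double[] update(boolean left, boolean middle, boolean right) {
        // Record where we last saw the line.
        if (left) lastSeen = Side.LEFT;
        else if (right) lastSeen = Side.RIGHT;
        else if (middle) lastSeen = Side.CENTER;

        if (!left && !middle && !right) {
            // Line lost: steer back toward where it was last seen
            // instead of stopping in the "Error" case.
            switch (lastSeen) {
                case LEFT:  return new double[] { 0.3, 0.6 }; // turn left
                case RIGHT: return new double[] { 0.6, 0.3 }; // turn right
                default:    return new double[] { 0.5, 0.5 };
            }
        }
        // Line visible: fall through to the normal case logic (not shown).
        return new double[] { 0.6, 0.6 };
    }
}
```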


Ok, I can see why that is problematic! I would recommend checking out line-follower tutorials from FIRST Lego League teams. They’re a good place to find some effective but simple algorithms. The programming language is a little wonky, but intuitive.

That’s a good starting point, though a lot of them use one sensor. I’m looking to use 3-5.

Ah, I see it on the Features tab too.

I’m thinking the history idea would work. You take power away from your slight-right and slight-left cases every time you go into the straight case.

Here is what I have for the “history” code. I haven’t tested it yet. The theory is that the offset starts at 1.0 and gets smaller each time only the middle sensor is sensed. I think I need to put a one-shot trigger in here so the offset doesn’t go from 1 to 0 after hitting the middle sensor once.

left = LeftSensor.getAverageVoltage();
middle = MiddleSensor.getAverageVoltage();
right = RightSensor.getAverageVoltage();

left_sensor = left > left_threshold;
right_sensor = right > right_threshold;
middle_sensor = middle > middle_threshold;

if (left_sensor && !middle_sensor && !right_sensor) {
  // Left Sensor Only
  left_speed = 0.75 * offset;
  right_speed = 0.5 * offset;
  drive.tankDrive(left_speed, right_speed);
  SmartDashboard.putString("Direction", "Hard Left");

} else if (left_sensor && middle_sensor && !right_sensor) {
  // Left and Middle Sensor
  left_speed = 0.5 * offset;
  right_speed = 0.25 * offset;
  drive.tankDrive(left_speed, right_speed);
  SmartDashboard.putString("Direction", "Light Left");

} else if (!left_sensor && middle_sensor && !right_sensor) {
  // Middle Sensor Only
  drive.tankDrive(0.75, 0.75);
  SmartDashboard.putString("Direction", "Straight");
  // Clamp at zero so the offset can't briefly go negative.
  offset = Math.max(0.0, offset - 0.1);

} else if (!left_sensor && middle_sensor && right_sensor) {
  // Right and Middle Sensor
  left_speed = 0.25 * offset;
  right_speed = 0.5 * offset;
  drive.tankDrive(left_speed, right_speed);
  SmartDashboard.putString("Direction", "Light Right");

} else if (!left_sensor && !middle_sensor && right_sensor) {
  // Right Sensor Only
  left_speed = 0.5 * offset;
  right_speed = 0.75 * offset;
  drive.tankDrive(left_speed, right_speed);
  SmartDashboard.putString("Direction", "Hard Right");

} else {
  drive.tankDrive(0.0, 0.0);
  SmartDashboard.putString("Direction", "Error");
}

One of the reasons it’s wiggling is that you’re using thresholds. If you use the raw analog values and base the speeds on those, you can make it much smoother.
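For example, instead of thresholding into booleans, you can compute a continuous line-position estimate weighted by the raw voltages. This is a sketch, and the -1/0/+1 position weighting for the three sensors is my assumption, not anything from your setup:

```java
public class AnalogFollower {
    // Weighted average of sensor positions (-1 = left, 0 = middle, +1 = right),
    // weighted by how strongly each sensor sees the line.
    // Returns roughly -1 (line far left) to +1 (line far right).
    static double linePosition(double left, double middle, double right) {
        double sum = left + middle + right;
        if (sum < 1e-6) return 0.0; // line not visible; handle separately
        return (-1.0 * left + 1.0 * right) / sum;
    }
}
```

That position value changes smoothly as the line drifts between sensors, so the steering correction can too, rather than jumping between five fixed speed pairs.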

Gotcha.

Where are you planning on mounting the sensors? If you’re planning on mounting them over the chassis’s “axis of rotation” rather than, say, the front of the chassis, this will force you to rely on algorithms that use history. The beauty of putting the sensors ahead of your axis of rotation is that a “proportional” algorithm approximately becomes a PD algorithm. Not only does it track whether your sensor is to the right or left of the line, it also prevents your robot from “over-correcting” by turning too far right or left.

Here’s a proportional line following algorithm I’ve thought out for an analog sensor array…

I’ve got 4 sensors in a line: A, B, C, and D, each the width of the line apart. I’ll assume they are calibrated to return values in the range 0-1, where 0(ish) is completely over carpet and 1(ish) is completely over the white line.

Now say A,B,C, and D return the values:
.01 .10 .75 .45

The first step of the algorithm is to select the highest value, in this case the one from sensor C. It then selects the highest value from an adjacent sensor; D’s is higher than B’s, so we can reasonably conclude that the line is probably between C and D. Now I’ll add up the two sensor values to get 1.20, divide the right sensor’s value by that (D is right of C: 0.45), and get 0.375. So now I know that the line is ~62.5% under C, while it’s only ~37.5% under D. Because science.

Now we can construct a proportional value for the line’s position. We’ll say that when the line is ~100% under sensor A or D, that’s a worst-case scenario and we should probably turn hard right or left respectively. If the line is 50-50 between B and C, we’re happy and should drive straight.

I’ll assign a “proportionality” value to 3 of the sensors: A: -1.5, B: -0.5, and C: 0.5. Remember the value we obtained by dividing the right sensor’s reading by the sum of the two selected sensors? In the example this value was 0.375. We’ll add it to the left sensor’s proportionality value (0.5) and get 0.875.

This final value should be proportional to the line’s position relative to the 4 sensors. In the worst-case scenarios it will return -1.5 or 1.5, and in the best case it will return 0.

You should be able to use this value to determine the correct speeds (proportional to this value) for the right and left motors of your robot.
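The select-and-interpolate steps above might look like this in Java. A sketch with made-up names; it reproduces the worked example (readings .01, .10, .75, .45 give 0.875):

```java
public class FourSensorLine {
    // Proportionality values for A, B, C -- the leftmost sensor of each
    // adjacent pair. D never qualifies as the left of a pair, so it
    // needs no value (hence only 3).
    static final double[] PROP = { -1.5, -0.5, 0.5 };

    // readings: calibrated values for A, B, C, D, roughly 0..1 each.
    // Returns a position estimate in about -1.5 .. 1.5 (0 = centered
    // between B and C). Assumes at least one sensor sees the line.
    static double position(double[] readings) {
        // 1. Find the sensor with the highest reading.
        int best = 0;
        for (int i = 1; i < 4; i++) {
            if (readings[i] > readings[best]) best = i;
        }
        // 2. Pick the stronger adjacent neighbor to form a pair
        //    (leftIdx, leftIdx + 1).
        int leftIdx;
        if (best == 0) leftIdx = 0;
        else if (best == 3) leftIdx = 2;
        else leftIdx = (readings[best - 1] >= readings[best + 1]) ? best - 1 : best;
        // 3. Fraction of the pair's total seen by the right sensor.
        double pairSum = readings[leftIdx] + readings[leftIdx + 1];
        double rightFraction = readings[leftIdx + 1] / pairSum;
        // 4. Add it to the left sensor's proportionality value.
        return PROP[leftIdx] + rightFraction;
    }
}
```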

That’s exactly what I am looking to do, but I have some questions. You assigned a proportionality value to 3 of the sensors, but what about the fourth? Also, what would you do with the final value?

I also need to look into calibrating the sensor.

Calibration can be tricky. Are you planning on trying to protect the sensors from ambient light?

Instead of calibrating the sensors to return values between 0 and 1, I’m thinking it might be better to bias the sensor values so that the smallest is 0 and the largest is 1.

(sensor_value - smallest_value)/(largest_value - smallest_value)

The reason only 3 proportionality values need to be assigned is that, in the algorithm above, the proportionality value of the leftmost of the two selected adjacent sensors (one of which is returning the highest reflection reading) is added to the “percentage” of the line the right sensor is reading. The sensor on the right end of the array will never qualify as the left of an adjacent pair.

As for using the final value (in this case ranging from -1.5 to 1.5) to determine motor speeds, one method is to set the left and right motor speeds to:

left_speed = 0.5*(1-final_value/1.5)
right_speed = 0.5*(1+final_value/1.5)

Edit: To be clear, I would bias the sensor values each time I take in the readings, using the min and max from the four values.
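Putting the per-loop bias and the speed mapping together might look like this. Again a sketch with invented names; the 1e-9 guard against a zero range is my addition for the case where all four readings are identical:

```java
public class SpeedMapper {
    // Min-max bias the raw readings so the smallest becomes 0 and the
    // largest becomes 1, recomputed fresh every loop iteration.
    static double[] bias(double[] raw) {
        double min = raw[0], max = raw[0];
        for (double v : raw) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double range = max - min;
        double[] out = new double[raw.length];
        for (int i = 0; i < raw.length; i++) {
            // Guard against division by zero when all readings match.
            out[i] = (range > 1e-9) ? (raw[i] - min) / range : 0.0;
        }
        return out;
    }

    // Map the final position value (-1.5 .. 1.5) to {left, right}
    // tank-drive speeds using the formulas above.
    static double[] speeds(double finalValue) {
        return new double[] {
            0.5 * (1 - finalValue / 1.5),
            0.5 * (1 + finalValue / 1.5)
        };
    }
}
```

A centered line (final value 0) gives 0.5 to both sides, and the extremes pivot one side down to 0 while the other goes to full.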

Would a switch statement be quicker and/or smoother than if/else?