My team is scrambling to create a working autonomous mode.
Our code, so far, is a modified version of the Java sample line tracker. We had to change our code to work off of two sensors because, unfortunately, one of our three line sensors arrived broken.
We understand that all the sample code does is move the robot forward for 8 seconds, but even with this sample code loaded on the robot, our robot does zilch. We're just trying to get our robot to show some sign of life when put in autonomous mode.
Can anybody help us out? All suggestions are appreciated.
Here’s what we have loaded, autonomous-wise:
public void autonomous() {
    getWatchdog().setEnabled(false);
    getWatchdog().feed();

    int binaryValue;
    int previousValue = 0;   // the binary value from the previous loop
    double steeringGain;     // the amount of steering correction to apply

    // The power profiles for the straight and forked robot paths. They are
    // different to let the robot drive more slowly as the robot approaches
    // the fork in the forked-line case.
    double[] forkProfile = {0.70, 0.70, 0.55, 0.60, 0.60, 0.50, 0.40, 0.00};
    double[] straightProfile = {0.7, 0.7, 0.6, 0.6, 0.35, 0.35, 0.35, 0.0};
    double[] powerProfile;   // the selected power profile

    // set the straightLine and left/right variables depending on the chosen path
    boolean straightLine = ds.getDigitalIn(1);
    powerProfile = (straightLine) ? straightProfile : forkProfile;
    double stopTime = (straightLine) ? 2.0 : 4.0; // when the robot should look for the end
    boolean goLeft = !ds.getDigitalIn(2) && !straightLine;
    System.out.println("StraightLine: " + straightLine);
    System.out.println("GoingLeft: " + goLeft);

    boolean atCross = false; // true once the robot has arrived at the end

    // time the path over the line
    Timer timer = new Timer();
    timer.start();
    timer.reset();
    double time;
    double speed, turn;

    // loop until the robot reaches the "T" at the end or 8 seconds have passed
    while ((time = timer.get()) < 8.0 && !atCross) {
        int timeInSeconds = (int) time;

        // read the sensors
        int leftValue = left.get() ? 1 : 0;
        //int middleValue = middle.get() ? 1 : 0;
        int rightValue = right.get() ? 1 : 0;

        // Compute a single value from the sensors. Notice that the bits
        // for the outside sensors are swapped depending on the left or right
        // fork. The sign of the steering direction also differs for left/right.
        if (goLeft) {
            binaryValue = leftValue * 4 + rightValue;
            steeringGain = -defaultSteeringGain;
        } else {
            binaryValue = rightValue * 4 + leftValue;
            steeringGain = defaultSteeringGain;
        }

        // get the default speed and turn rate at this time
        speed = powerProfile[timeInSeconds];
        turn = 0;

        // different cases for different line tracking sensor readings
        switch (binaryValue) {
            case 1: // on line edge
                turn = 0;
                break;
            case 5: // all sensors on (maybe at the cross)
                if (time > stopTime) {
                    atCross = true;
                    speed = 0;
                }
                break;
            case 0: // all sensors off
                if (previousValue == 0 || previousValue == 1) {
                    turn = steeringGain;
                } else {
                    turn = -steeringGain;
                }
                break;
            default: // all other cases
                turn = -steeringGain;
        }

        // print current status for debugging
        if (binaryValue != previousValue) {
            System.out.println("Time: " + time + " Sensor: " + binaryValue
                    + " speed: " + speed + " turn: " + turn + " atCross: " + atCross);
        }

        // set the robot speed and direction
        robotDrive.arcadeDrive(speed, turn);
        if (binaryValue != 0) {
            previousValue = binaryValue;
        }
        Timer.delay(0.01);
    }

    // Done with the loop - stop the robot. It ought to be at the end of the line.
    robotDrive.drive(0, 0);
}
You say two? Good luck with that; it can be done, but you will need either gyros or encoders. Keep a record of the speeds, or of the angle the robot has veered off to. Mount one line tracker in the front and the other in the back so one is always triggered, and do some calculations. I too have to work on autonomous; I will be writing my own from scratch.
We don't use line tracking, we use encoders: we drive a fixed distance straight and place a piece.
Since you're running so late, try code that powers your drive forward at, say, 1/3 power for 5 seconds. Tune it until it just bumps the wall and then places a piece. Make sure you keep a good battery in it, but I think that's your best bet for a last-minute autonomous before you ship.
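The timed dead-reckoning idea above can be sketched as follows. This is a hedged illustration, not anyone's actual robot code: the class and method names (TimedAuto, timedDrivePower) and the 0.33/5.0 tuning numbers are placeholders, and the WPILib loop is only shown in comments.

```java
// Minimal dead-reckoning autonomous sketch. All names and numbers here are
// hypothetical; tune DRIVE_POWER and DRIVE_TIME until the robot just bumps the wall.
public class TimedAuto {
    static final double DRIVE_POWER = 0.33; // roughly 1/3 power
    static final double DRIVE_TIME = 5.0;   // seconds of forward driving

    // Pure helper: motor power as a function of elapsed time in seconds.
    static double timedDrivePower(double elapsedSeconds) {
        return (elapsedSeconds < DRIVE_TIME) ? DRIVE_POWER : 0.0;
    }

    // In a WPILib-style autonomous() you would call this in a loop, e.g.:
    //   Timer t = new Timer(); t.start();
    //   while (t.get() < 8.0) {
    //       robotDrive.arcadeDrive(timedDrivePower(t.get()), 0.0);
    //       Timer.delay(0.02);
    //   }
    //   robotDrive.arcadeDrive(0.0, 0.0);
}
```

Keeping the power/time decision in a pure helper makes it easy to test off the robot before you ever deploy.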
Alternatively, look up teams going to your first regional and see if there's anyone like my team who hasn't used their line sensors and wouldn't mind giving you one.
Are you getting any response when you deploy? If the robot does nothing, the code is probably quitting unexpectedly, which should show up as an error in the console output. As for working with two photosensors: if you are dead set on using them (instead of other sensors or dead reckoning, as has been suggested), I would place them directly adjacent to each other, so they can both be tripped by the tape at the same time, and work with that. There may be some interference, however (one receiving the reflected light from the other and returning a false positive), and it obviously won't be ideal.
Personally I find the encoders a more reliable way of making an autonomous algorithm. You simply drive until you hit a certain number of encoder counts. Then you move your manipulator. Then you drive some more. Then you place. Then you reverse.
If you're using the IterativeRobot template, the easiest way to do this is to create separate state machines for the drive and each of your mechanisms. Then you create a main state machine that simply feeds these other machines values and waits for the drive or manipulator to be in place. Once the structure was created, fine-tuning the numbers only took a few hours, and now we can place on any of the high pegs and the middle second-level pegs reliably (as long as we're using a fresh battery, that is).
I personally would disagree with you on that: because our robot is driven by chain, there is a lot of noise in the encoders from the chain jerking around. I suggest one day just putting the robot on blocks, letting the motors run through from 0 to 255 PWM, and graphing the encoder data vs. the PWM. You will be surprised at the noise.
My team used encoders and we have a reliable autonomous even though our robot has chain. The noise is minimal. I haven't graphed it, but after ~30 successful runs, I wouldn't worry about it unless your chain is really loose or otherwise acting up.
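The "main machine feeding sub-machines" structure can be sketched like this. Everything here is made up for illustration (AutoState, step, the atTargetDistance/tubePlaced flags); in a real robot those booleans would come from your drive and arm sub-machines.

```java
// Hedged sketch of a main autonomous state machine. The sub-machines are
// represented only by the two boolean status flags they would report.
public class AutoStateMachine {
    enum AutoState { DRIVE, PLACE, REVERSE, DONE }

    // One tick of the main machine: look at what the sub-machines report
    // and decide which state to be in next.
    static AutoState step(AutoState current, boolean atTargetDistance, boolean tubePlaced) {
        switch (current) {
            case DRIVE:   return atTargetDistance ? AutoState.PLACE : AutoState.DRIVE;
            case PLACE:   return tubePlaced ? AutoState.REVERSE : AutoState.PLACE;
            case REVERSE: return AutoState.DONE; // could also wait on a reversed-distance flag
            default:      return AutoState.DONE;
        }
    }
}
```

Because the transition logic is a pure function, each transition can be unit-tested without a robot, which is where most of the "few hours of tuning" time savings comes from.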
The other thing about encoders is that with some math, you can convert it to feet. You know how far back you start, so drive N feet forward and then place the piece. All the fiddling is with the arm then.
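The counts-to-feet math mentioned above is just circumference times revolutions. A small sketch (the class/method names and the example numbers in the test are mine, not from the thread; plug in your own encoder and wheel specs):

```java
// Sketch of converting raw encoder counts to feet traveled.
public class EncoderMath {
    // counts: raw encoder counts read so far
    // countsPerRev: encoder counts per revolution of the encoder shaft
    // wheelRevsPerEncoderRev: gear ratio from the encoder shaft to the wheel
    // wheelDiameterInches: wheel diameter in inches
    static double countsToFeet(int counts, int countsPerRev,
                               double wheelRevsPerEncoderRev,
                               double wheelDiameterInches) {
        double encoderRevs = (double) counts / countsPerRev;
        double wheelRevs = encoderRevs * wheelRevsPerEncoderRev;
        double wheelCircumferenceFeet = Math.PI * wheelDiameterInches / 12.0;
        return wheelRevs * wheelCircumferenceFeet;
    }
}
```

For example, a 360-count-per-rev encoder geared 1:1 to a 6-inch wheel travels π/2 ≈ 1.57 feet per 360 counts.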
Sorry to bring up this relatively old topic, but I was wondering: how are you using encoders to go a certain distance? Do you just run PID with the encoder setpoint at some number? Or are you using something else too?
Any other teams out there willing to share how they use their encoders in autonomous?
I have two encoders hooked up to Jaguars running CAN, one for the left side and the other for the right side. The autonomous does P only (no need for I and D), so the robot slows down as it gets closer until it hits the wall going pretty slowly. This runs reliably as long as it's lined up and the left and right sides act the same, i.e., you jam both joysticks forward and it actually drives straight-ish.
Here’s the relevant code.
// distance driven so far on each side, in feet
leftDist = bot.dt.getLeftDist() - leftStart;
rightDist = bot.dt.getRightDist() - rightStart;
// P-only control: power scales down linearly as the remaining distance shrinks
double left = gain * ((distance - leftDist) / distance);
double right = gain * ((distance - rightDist) / distance);
bot.dt.drive(left, right, bot.gyro);
distance is the distance you wish to travel in feet; gain is the initial power that gets scaled down as you drive (currently 0.7).
Hopefully the rest of the code is clear enough that it makes sense. If you have any more questions, feel free to ask. Also, you can PM me if you want to see our full autonomous code.
I was thinking of putting on two encoders to get the base to stop within 6 inches of the target distance without overshooting and without traveling slowly. Not sure if that's possible yet, because we don't have encoders on the base, but I imagine I'll need I and D. The biggest problem I'll have is momentum, because I plan on running the auto at close to full speed.
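One common way to fight the momentum problem at high speed is a D term on measured velocity, which damps the approach before the robot reaches the target. A hypothetical PD sketch (the gains and all names here are placeholders to tune on a real robot, not anyone's working code):

```java
// Hypothetical PD controller sketch for stopping near a target distance
// without a slow crawl: P drives toward the target, D opposes current speed.
public class PdDrive {
    static final double K_P = 0.5;   // power per foot of remaining error
    static final double K_D = 0.05;  // power per (foot/second) of speed
    static final double MAX_POWER = 1.0;

    // errorFeet: remaining distance to the target (positive = not there yet)
    // velocityFeetPerSec: current measured speed toward the target
    static double pdPower(double errorFeet, double velocityFeetPerSec) {
        double power = K_P * errorFeet - K_D * velocityFeetPerSec;
        // clamp to the motor controller's valid output range
        return Math.max(-MAX_POWER, Math.min(MAX_POWER, power));
    }
}
```

Far from the target the output saturates at full power; near the target the D term pulls power down while the robot is still moving fast, which is what keeps it from blowing past the setpoint.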
If you don’t mind, can you please send me the code that has to do with your base? I’m sure I can learn from it as I can’t find many resources that explain the concepts.
Is there any reason you can't overshoot? Fast speed to the wall, then slowly backing up 6 inches, may be better, unless it smashes your arm or something. Don't forget, you have 15 seconds; consistent even if slow is better than inconsistent and fast.
At the Waterford District we used encoders, drove a set number of cycles forward, and then had our arm code place our tube, except it only worked 50% of the time because our field people liked to not line it up straight, so it went off to the side quite a bit.
At Ann Arbor we used line trackers to keep our robot straight while using encoders to set our distance. But our field people would have it too far forward or back so it still wasn’t reliable.
At Troy we’re looking forward to trying to use a camera positioned on our arm to locate the top peg and score, with little or no use for the encoders or line trackers.
As for only using 2 line trackers, I wouldn't recommend it. Last year we tried that and spent half of a competition trying to borrow one after 5 unsuccessful matches.
In the past we've run autonomous without encoders; the biggest problem we had was that performance (especially for turning) varied with battery charge, so if we didn't regularly switch out the batteries we'd have issues. As long as you're conscious of batteries and keep them charged, there shouldn't be any issues, as long as it worked consistently during testing.
How long would it take to implement encoders? Assuming you plan for it, it should be quick and doable with one or two runs to the practice field.
BEFORE YOU GET THERE
If you have the encoders at your shop, solder the wires so that you can plug them straight into the Jags or the Digital Sidecar
Figure out where you're mounting the encoders (CIMple boxes?)
Figure out the diameter of the wheel and the gear ratio between the encoder and the wheel
Use the diameter of the wheel and the gear ratio to calculate how many feet you go in one rotation of the encoder
Write code using the above equation and P (and possibly I and D) to get to your target position (I think the starting location is ~19 feet from the driver station)
Double check your code.
WHEN YOU GET THERE
Attach encoders to robot
Wire encoders
With the robot on blocks, check that autonomous behaves sanely (have somebody on the E-stop)
Test on practice field (have somebody on the Estop)
Assuming that the behavior was close enough to sane…
Continue testing on practice field and matches until it’s consistently scoring.
(Similar applies to arm… might need more math though)
Hopefully that helps. The great thing about sensors is that you can write a lot of code ahead of time. Good luck if you go through with using encoders.
Our team is using 3 front-mounted light sensors with great success. No encoders, no ultrasonic, no timers.
We’re using C++. Our autonomous program manages a state machine that gets input from the light sensors and decides what to do based on them. Our algorithm is as follows:
Light sensor values (L/C/R) -> State to execute
000 -> move() - go forward until you see the line
010 -> move() - this is ideal, we're on the line
100 -> correctLeft() - move left to see the line
110 -> also correctLeft()
001 and 011 -> correctRight()
101 -> at the fork; correctLeft() or correctRight() based on a switch on the robot
111 -> placeTube()
After executing placeTube(), the program quits the state machine and retreats back several feet, then turns on the spot (~180 degrees, but it’s not perfect since we don’t have encoders or a gyro). Generally, getting the tube on the rack takes about ten seconds, and we’re only driving the motors at 40% for drive and 60% for turn.
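The sensor-to-state table above maps naturally onto a switch over a 3-bit value (L*4 + C*2 + R). A sketch of just that decision logic, with made-up names (Action, decide) standing in for the actual C++ state machine the post describes:

```java
// Sketch of the 3-sensor decision table as a pure lookup function.
public class LineFollowLogic {
    enum Action { MOVE, CORRECT_LEFT, CORRECT_RIGHT, FORK, PLACE_TUBE }

    static Action decide(boolean left, boolean center, boolean right) {
        int bits = (left ? 4 : 0) | (center ? 2 : 0) | (right ? 1 : 0);
        switch (bits) {
            case 0b000:
            case 0b010: return Action.MOVE;          // no line yet, or centered on it
            case 0b100:
            case 0b110: return Action.CORRECT_LEFT;  // line is off to our left
            case 0b001:
            case 0b011: return Action.CORRECT_RIGHT; // line is off to our right
            case 0b101: return Action.FORK;          // at the fork: pick a side via a switch
            default:    return Action.PLACE_TUBE;    // 111: all sensors on the tape
        }
    }
}
```

Keeping the table as a pure function means you can verify every row off the robot, which matters a lot when you only get field time at the regional.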
Is there a horizontal line at the end of the tape to let the robot know it needs to stop? We have it just going and going until there’s nothing left, then it goes back a little bit to give room for the tube placement.
Just asking, because we’re having our first (and only) regional next week, and we still haven’t been able to test our autonomous. :rolleyes: