Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Autonomy: How Did You Guys Do It? (http://www.chiefdelphi.com/forums/showthread.php?t=93554)

davidthefat 13-03-2011 20:10

Autonomy: How Did You Guys Do It?
 
From the looks of it, a lot of teams did the whole "drive forward for x seconds/feet and score" method. Others used line trackers, some might have used cameras.

How did you guys do it?

mwtidd 13-03-2011 20:15

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidthefat (Post 1038791)
From the looks of it, a lot of teams did the whole "drive forward for x seconds/feet and score" method. Others used line trackers, some might have used cameras.

How did you guys do it?

I've got a range finder on the front of the robot. I drive until I'm about 3ft away from the wall. Then switch over to the camera, and adjust to center on the peg.

Arm is only on timers... Hoping to add encoders soon.
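A minimal C++ sketch of the range-finder leg of that approach against the 2011-era WPILib API; the analog channel scaling, speeds, and function name are assumptions, and the camera hand-off is only marked by a comment:

Code:

#include "WPILib.h"

// Drive forward until the front range finder reads roughly 3 ft, then stop so a
// camera-centering routine can take over (omitted here). The volts-to-inches
// scale and the 0.4 drive speed are placeholder values, not the team's numbers.
void DriveToWall(RobotDrive &drive, AnalogChannel &rangeFinder)
{
    const double kInchesPerVolt = 102.4;   // sensor-dependent scaling (assumption)
    const double kStopDistance  = 36.0;    // ~3 ft from the wall

    while (rangeFinder.GetVoltage() * kInchesPerVolt > kStopDistance) {
        drive.Drive(0.4, 0.0);             // straight ahead, gently, to limit coasting
        Wait(0.02);                        // ~20 ms loop
    }
    drive.Drive(0.0, 0.0);                 // stopped; camera alignment would start here
}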

MagiChau 13-03-2011 20:17

Re: Autonomy: How Did You Guys Do It?
 
We used the E4P quadrature encoders provided in the KoP. We drove the robot the correct distance for the different modes and plugged those values in as drive-distance targets, and noted some other encoder counts so we could do calculations without trial and error. The arm height (the angle needed from our starting position to go through the peg) was also noted. The program ends up driving the robot until the encoder count gets close to the target. When the robot stops moving and the code that moves the mechanisms into position returns a boolean of complete, another stage is triggered that lowers the arm and shoots the tube out with the rollers.

We had to do some trial and error with drive speeds at competition so the lift gets up before the robot reaches the peg.
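A rough C++ sketch of that staged sequence against the 2011-era WPILib API: drive until the encoder count is near the target, then lower the arm and run the rollers. The counts, speeds, timings, and motor assignments are placeholders, not the team's actual values:

Code:

#include "WPILib.h"

// Staged autonomous keyed off a quadrature encoder: drive until the count gets
// close to the target, then lower the arm and shoot the tube out with the rollers.
void EncoderStagedAuto(RobotDrive &drive, Encoder &driveEncoder,
                       Jaguar &armMotor, Jaguar &rollerMotor)
{
    const int kDriveTarget = 4200;   // measured counts to the peg (placeholder)
    const int kCloseEnough = 50;     // stop when within this many counts

    driveEncoder.Reset();
    driveEncoder.Start();

    // Stage 1: drive until the encoder count gets close to the target.
    while (kDriveTarget - driveEncoder.Get() > kCloseEnough) {
        drive.Drive(0.5, 0.0);
        Wait(0.02);
    }
    drive.Drive(0.0, 0.0);

    // Stage 2: drive is "complete", so lower the arm onto the peg.
    armMotor.Set(-0.3);
    Wait(0.7);                       // placeholder timing for the arm drop
    armMotor.Set(0.0);

    // Stage 3: shoot the tube out with the rollers.
    rollerMotor.Set(-1.0);
    Wait(1.0);
    rollerMotor.Set(0.0);
}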

davidthefat 13-03-2011 20:23

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by lineskier (Post 1038796)
I've got a range finder on the front of the robot. I drive until I'm about 3ft away from the wall. Then switch over to the camera, and adjust to center on the peg.

Arm is only on timers... Hoping to add encoders soon.

Now, I was going to use the rangefinder to find whether we are on the fork or the stop. I was worried that, due to the metal backing, it would give a wrong reading.

mwtidd 13-03-2011 20:25

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidthefat (Post 1038808)
Now, I was going to use the rangefinder to find whether we are on the fork or the stop. I was worried that, due to the metal backing, it would give a wrong reading.

Actually they are awesome. Good to almost an inch. For me the drive was geared too high so we coasted a lot, but I came up with a fix to that (drop the strafe wheel and do an endo :) )

You need to do a bit of filtering, but it's not a big deal.

CoachPoore 13-03-2011 20:34

Re: Autonomy: How Did You Guys Do It?
 
Our autonomous uses the encoders on the SuperShifter transmissions. The software uses the encoder values to actively make the robot drive dead straight. We also know reliably how far the robot has driven from the encoders. Encoders on the arm allow reliable positioning of the arm. The claw motors let us adjust the attitude of the tube when we reach the peg and then eject it onto the target peg. We have six different programs, one for each peg height. At GSR, we only ever ran the two for the top pegs, but have the others just in case. We can also select a delay before we start, which proved very useful in eliminations at GSR when we were on an alliance with 175 and 176 which scored 3 ubertubes in many of our elimination matches. See http://www.youtube.com/user/FRCteam1.../3/drbPrGJlroI for a match (semi-final 2) where all 3 of our alliance and 2 of our opponents scored in autonomous. Our robot is the middle blue robot which does not move until after the 4s delay we selected.

Noel
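A bare-bones C++ sketch of the two ideas above, driving straight by balancing the left and right drive encoders after a selectable delay; the gain, speed, and use of raw counts are illustrative assumptions, not team code:

Code:

#include "WPILib.h"

// Drive a fixed distance while holding the robot straight by comparing the left
// and right drive encoders, after an operator-selected start delay.
void DriveStraightWithDelay(RobotDrive &drive, Encoder &left, Encoder &right,
                            double delaySeconds, int targetCounts)
{
    const float kSpeed = 0.6;
    const float kP     = 0.005;    // steering correction per count of left/right error

    Wait(delaySeconds);            // e.g. 4.0 s while partners hang their ubertubes

    left.Reset();  left.Start();
    right.Reset(); right.Start();

    while ((left.Get() + right.Get()) / 2 < targetCounts) {
        float error = (float)(left.Get() - right.Get());
        drive.Drive(kSpeed, -kP * error);   // curve against the drift
        Wait(0.02);
    }
    drive.Drive(0.0, 0.0);
}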

nighterfighter 13-03-2011 20:38

Re: Autonomy: How Did You Guys Do It?
 
Encoders on our drive train, encoders on our elevator, camera tracking, gyro angle correction, and a sonar sensor. It consistently scores the ubertube on the top peg.

BigJ 13-03-2011 20:41

Re: Autonomy: How Did You Guys Do It?
 
We decided that dead reckoning would be enough for this year given that the autonomous period is not very dynamic. Although our encoder code is done, we have had hardware problems and they are not on yet. Thankfully, our robot is pretty balanced and drives very straight. We didn't get an auto mode in until yesterday in WI, and it only scored once (in a somewhat humorous fashion), but we'll have one that should score pretty consistently for week 4 at Midwest.

Once the hardware issues are fixed, the timed approach will be changed to a distance-based one, and a gyro will be added for turning around and getting ready. I don't think our robot's mechanisms act quickly enough to attempt a double tube in 15 seconds.

Jetweb 13-03-2011 20:59

Re: Autonomy: How Did You Guys Do It?
 
We used the camera 100%. I have a feeling previous years' vision targets have given the camera a bad name. The retro-reflective tape used this year is simply awesome to track and is the only target in my history that is not affected by the stage lighting used at the competitions. I hope that stuff is around for a long time to come.

nighterfighter 13-03-2011 21:03

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Jetweb (Post 1038837)
The retro-reflective tape used this year is simply awesome to track and is the only target in my history that is not affected by the stage lighting used at the competitions. I hope that stuff is around for a long time to come.

Yup. Our autonomous works in normal lighting, pitch black, or intense lights. We love that stuff.

torihoelscher 13-03-2011 21:12

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidthefat (Post 1038791)
From the looks of it, a lot of teams did the whole "drive forward for x seconds/feet and score" method. Others used line trackers, some might have used cameras.

How did you guys do it?

We used line sensors, a sonar, a gyro, and a backup way to make the auto move. Of course, the line sensors gave us such a hassle that we had to disable them and trust the sonar. It was great!!!

Sean1038 13-03-2011 22:34

Re: Autonomy: How Did You Guys Do It?
 
Our auton code drives straight for x seconds using gyro angle correction, rotates our arm by x distance using an encoder value, rotates the tube pitch, rolls the tube out onto the peg, and backs up.

We're working on putting up two ubertubes.
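That gyro-corrected straight drive is the classic pattern of feeding the gyro angle back in as the curve term; a short C++ sketch, with the gain, speed, and timing as assumed tuning values:

Code:

#include "WPILib.h"

// Drive straight for a fixed time, steering against the gyro's heading error.
void GyroDriveStraight(RobotDrive &drive, Gyro &gyro, double seconds)
{
    const float kP = 0.03;           // curve applied per degree of heading error (placeholder)

    gyro.Reset();                    // the current heading becomes 0 degrees
    Timer timer;
    timer.Start();

    while (timer.Get() < seconds) {
        float angle = gyro.GetAngle();   // positive = drifted one way, negative = the other
        drive.Drive(0.5, -angle * kP);   // steer back toward 0 degrees
        Wait(0.02);
    }
    drive.Drive(0.0, 0.0);
}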

davidalln 13-03-2011 22:36

Re: Autonomy: How Did You Guys Do It?
 
We're using nothing but line sensors, and it's working excellently for us (backing up when it hits the horizontal piece of tape)!

Jogo 13-03-2011 22:36

Re: Autonomy: How Did You Guys Do It?
 
We're using a line track and ultrasonic range finder combo, with a gyro to keep us straight. In hindsight, the line track isn't really that useful.

Owen Meaker 13-03-2011 22:43

Re: Autonomy: How Did You Guys Do It?
 
Our team was going to have encoders on our drive wheels, but we decided they wouldn't be that helpful for the most part. As a result, our autonomous is: drive forward for x seconds, move the arm down for y seconds, and retreat. We are using a makeshift encoder for the lift, so we will be able to make it go to the right height. Whether or not this code works will be determined on Thursday.

ayeckley 13-03-2011 23:51

Re: Autonomy: How Did You Guys Do It?
 
Playback of recorded data from one of ten driver-selected .xls files stored on the cRIO. The file is selected during Disabled mode using buttons on one of the driver joysticks. The data is recorded during early practice matches to get a "gross" autonomous play accomplished, and then edited via MS Excel for any fine-tuning needed over the next few days. The recorded data includes joystick positions, button status, and pSoc switch status. It is written to a lossless queue (FIFO) at 20ms intervals (the same rate at which it arrives from the Driver Station) and eventually written to a file in one fell swoop once the driver decides to stop recording data.

For matches, the desired autonomous play is selected by the driver. Playback data from the corresponding .xls file is then loaded into several arrays (all while in Disabled mode). At the start of Autonomous, the data is "played" out from those arrays into the appropriate subsystem VIs (drive, arm, etc.) at the same rate at which it was collected. Of course, there is some variability in the response of the mechanical system from match to match but it is minimal. I guess you could call this a reasonably sophisticated open loop system.
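The implementation above is LabVIEW; as a rough C++ illustration of the same record-at-20 ms, play-back-at-20 ms idea (the file format, file name, and the single-joystick scope are invented for brevity):

Code:

#include "WPILib.h"
#include <vector>
#include <cstdio>

// One frame of recorded driver input, captured every 20 ms.
struct InputFrame { float x; float y; bool scoreButton; };

// Record joystick data while the operator drives a "gross" autonomous play,
// then write it out in one shot when recording stops.
void RecordPlay(Joystick &stick, const char *path, double seconds)
{
    std::vector<InputFrame> frames;            // in-memory FIFO
    Timer t; t.Start();
    while (t.Get() < seconds) {
        InputFrame f = { (float)stick.GetX(), (float)stick.GetY(), stick.GetRawButton(1) };
        frames.push_back(f);
        Wait(0.02);                            // same 20 ms rate the DS data arrives at
    }
    FILE *fp = fopen(path, "w");               // e.g. "/auto_play_3.csv" on the cRIO
    if (fp == NULL) return;
    for (size_t i = 0; i < frames.size(); i++)
        fprintf(fp, "%f,%f,%d\n", frames[i].x, frames[i].y, frames[i].scoreButton ? 1 : 0);
    fclose(fp);
}

// Play the frames back at the same 20 ms rate during autonomous.
void PlayBack(RobotDrive &drive, const std::vector<InputFrame> &frames)
{
    for (size_t i = 0; i < frames.size(); i++) {
        drive.ArcadeDrive(frames[i].y, frames[i].x);   // same call teleop would make
        // ...feed scoreButton etc. into the arm/claw subsystems here...
        Wait(0.02);
    }
    drive.Drive(0.0, 0.0);
}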

Each link in our two-jointed tube-handling arm is controlled via independent PID loops. Pre-identified angular positions of each joint are contained in a lookup table of positions (home, feeding station, floor pickup, top center, etc.) stored in a .ini file on the cRIO similarly to the autonomous play files. A state machine monitors the status of buttons on the Driver Station and loads the angular positions into the PID setpoints when a state change occurs. The same state machine runs in both Autonomous as well as Teleop since the arm control VI doesn't care if the inputs are coming from a human being or from recorded data.

We had a pretty good success rate hanging ubertubes on the top peg at Pittsburgh using this system. The main problem we had was in getting our claw clear of the tube before backing away from the rack; we sometimes de-scored ourselves. There are some drawbacks to using the approach described, but as they say in math classes - the proof is left to the reader.

AllenGregoryIV 13-03-2011 23:58

Re: Autonomy: How Did You Guys Do It?
 
Our autonomous uses one sonar on the left and one sonar on the front. Paired with our omnidirectional drive, the lane divider and driver station wall let us know exactly where we are. We also have a gyro to keep us oriented correctly; we were hoping to use two range finders up front to orient ourselves, but we never got that working fully. We are hoping to get a two-tube auto working Thursday at Lone Star.

jonboy 14-03-2011 00:03

Re: Autonomy: How Did You Guys Do It?
 
Our autonomous followed the line until it sensed the tee, then gunned it for the last iteration of the while loop and stopped, which put us right at the wall. The "gunning" worked well, except it should have been a little less energetic. Since we had holonomic drive, we just controlled via strafing to keep on the line, thus remaining normal to the wall. We started raising the arm to its limits at the start so that it was in position when we hit the wall. We scored 8/10 on the top middle peg in qualification runs and 100% in eliminations except for one case where the arm was intentionally disabled. We could do the Y, but almost all the alliances allowed us to go straight. Too bad autonomous did not offer more points.

Bethie42 14-03-2011 01:35

Re: Autonomy: How Did You Guys Do It?
 
This rookie programmer wrote autonomous mode on the last day at 9 PM :)

Our arm's highest extension is at roughly the height of the highest peg, and when we release the tube we don't have to back up in order to score, which made things easier.

We navigate to the scoring grid using line sensors. I didn't program for the Y because I figured 1) we should have our pick of starting positions, we can just choose the straight path and 2) it would distract us from getting the easier stuff working.
Currently we are just running the elevator motor up all the time during autonomous, due to a weird problem with our limit switches in autonomous [speaking of which, would anybody like to Find the Bug that's making our limit switches work in teleop but not autonomous? :D I used C++.]. However, we recently switched our elevator motor to a more powerful one, which actually caused a belt to shear when it was run too far.....alarmed, I am now planning some careful testing and playing with the motor speed before we go out to a match and shear another belt. The idea is that if the elevator reaches the top just as we reach the scoring grid, we won't damage anything.

We use the tape T to trigger scoring, which involves stopping the elevator motor and opening the gripper. We can drop the tube at just about any height and still score.

All very rudimentary and I am particularly irked that the limit switches are still proving noncompliant. We didn't get very much [read: any] testing in of the whole system: at roughly 9 PM on ship night we got the line following stuff working, and then I successfully wrote and tested a switch statement to ensure that we don't just drive right over the T before it has time to stop us. So theoretically it should all work: I am anxiously looking forward to our Week 4 regional to try it all out :D
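For anyone curious what that kind of sensor-pattern switch looks like, here is a hedged C++ sketch; the channel wiring, speeds, and the "all three sensors lit means T" rule are assumptions for illustration, not this team's exact code:

Code:

#include "WPILib.h"

// Pack the three light sensors into a 3-bit pattern and switch on it.
void FollowLineStep(RobotDrive &drive, DigitalInput &left,
                    DigitalInput &middle, DigitalInput &right, bool &reachedT)
{
    int pattern = (left.Get() << 2) | (middle.Get() << 1) | right.Get();

    switch (pattern) {
        case 2:                     // middle only: on the line, go straight
            drive.Drive(0.4, 0.0);
            break;
        case 4: case 6:             // drifted off to one side: steer back left
            drive.Drive(0.35, -0.3);
            break;
        case 1: case 3:             // drifted off the other way: steer back right
            drive.Drive(0.35, 0.3);
            break;
        case 7:                     // all three lit: that's the T, stop here
            drive.Drive(0.0, 0.0);
            reachedT = true;
            break;
        default:                    // lost the line entirely: creep straight
            drive.Drive(0.25, 0.0);
            break;
    }
}

Called every loop iteration, with reachedT triggering the stop-elevator-and-open-gripper scoring step described above.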

RoBoTiCxLiNk 14-03-2011 01:41

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Bethie42 (Post 1039092)
This rookie programmer wrote autonomous mode on the last day at 9 PM :)

Our arm's highest extension is at roughly the height of the highest peg, and when we release the tube we don't have to back up in order to score, which made things easier.

We navigate to the scoring grid using line sensors. I didn't program for the Y because I figured 1) we should have our pick of starting positions, we can just choose the straight path and 2) it would distract us from getting the easier stuff working.
Currently we are just running the elevator motor up all the time during autonomous, due to a weird problem with our limit switches in autonomous [speaking of which, would anybody like to Find the Bug that's making our limit switches work in teleop but not autonomous? :D ]. However, we recently switched our elevator motor to a more powerful one, which actually caused a belt to shear when it was run too far.....alarmed, I am now planning some careful testing and playing with the motor speed before we go out to a match and shear another belt. The idea is that if the elevator reaches the top just as we reach the scoring grid, we won't damage anything.

We use the tape T to trigger scoring, which involves stopping the elevator motor and opening the gripper. We can drop the tube at just about any height and still score.

All very rudimentary and I am particularly irked that the limit switches are still proving noncompliant. We didn't get very much [read: any] testing in of the whole system: at roughly 9 PM on ship night we got the line following stuff working, and then I successfully wrote and tested a switch statement to ensure that we don't just drive right over the T before it has time to stop us. So theoretically it should all work: I am anxiously looking forward to our Week 4 regional to try it all out :D

I'm currently trying to distract myself from the fact that the next regional for us is in like 2 and a half weeks, so if you'd like me to look over your code, I'd be happy to help debug it, granted I understand the language you send me ::rtm::
And that sounds pretty awesome. I'm still working on getting time to work on our team's autonomous :(

Bethie42 14-03-2011 02:00

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by RoBoTiCxLiNk (Post 1039093)
I'm currently trying to distract myself from the fact that the next regional for us is in like 2 and a half weeks, so if you'd like me to look over your code, I'd be happy to help debug it, granted I understand the language you send me ::rtm::
And that sounds pretty awesome. I'm still working on getting time to work on our team's autonomous :(

Why thank you :) I have MORE than enough stuff to distract myself with: ie, Chairman's Award, scouting, and general team organization, heh.

It's in C++...I shall update my original post. Also I posted about the limit-switch problem here.

One of the best memories from this build season was running around on ship night [bag-and-tag night actually], writing code for autonomous while hardware wanted to work on the robot too, meantime our programming laptop died and we had to shuffle the robot back and forth from the shop to our high-ceiling testing area....then getting the line-following code to work. I shrieked with joy [and caffeine] and promptly laid down another duct tape line for the robot to follow....who knew that you can get cuts from duct tape?

RoBoTiCxLiNk 14-03-2011 02:07

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Bethie42 (Post 1039101)
Why thank you :) I have MORE than enough stuff to distract myself with: ie, Chairman's Award, scouting, and general team organization, heh.

It's in C++...I shall update my original post. Also I posted about the limit-switch problem here.

One of the best memories from this build season was running around on ship night [bag-and-tag night actually], writing code for autonomous while hardware wanted to work on the robot too, meantime our programming laptop died and we had to shuffle the robot back and forth from the shop to our high-ceiling testing area....then getting the line-following code to work. I shrieked with joy [and caffeine] and promptly laid down another duct tape line for the robot to follow....who knew that you can get cuts from duct tape?

Haha, sounds awesome. My attempts at line following were with a chassis of last year's bot on a tiled hallway, and the little markings all over it ended up messing up the line sensors so much that it was impossible to write reliable code.

I had just seen that thread in the real-time forum thing, and I immediately thought to myself "it's C++ :/". I program in Java, so I may not be able to find it if it's a language-specific problem.

And for our first regional, my mom needed the laptop I use to program for a conference she went to, because her Mac can't make databases, so I was stuck programming on the dinky driver station computer >.< the multi-purpose netbook thing. Still managed to get many things working though.

Bethie42 14-03-2011 02:22

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by RoBoTiCxLiNk (Post 1039103)
Haha, sounds awesome. My attempts at line following were with a chassis of last year's bot on a tiled hallway, and the little markings all over it ended up messing up the line sensors so much that it was impossible to write reliable code.

I had just seen that thread in the real-time forum thing, and I immediately thought to myself "it's C++ :/". I program in Java, so I may not be able to find it if it's a language-specific problem.

And for our first regional, my mom needed the laptop I use to program for a conference she went to, because her Mac can't make databases, so I was stuck programming on the dinky driver station computer >.< the multi-purpose netbook thing. Still managed to get many things working though.

Yeah, tiled hallways don't help... :P We have a little scrap of carpet from the local scrimmage in our shop and it's amazingly handy for both testing line-following and spread-eagling on to work on the robot! The aluminum shavings get into the seams a bit but oh well...

I discovered at our scrimmage that finding a bit of fabric or other material that simulates the carpet for testing in one's pit is surprisingly hard. Also our scrimmage had very pale grey tape lines...very nice high contrast, but not really what I expect at competition. Guess who's going down on the floor with the drive team with a screwdriver on day one to calibrate the light sensors.

Programming on the Classmate?! I applaud you. I don't have the patience to change default settings on that thing, lol, much less program on it...and the keys are so tiny. Try programming on a cell phone, I imagine it's about the same.
We are actually planning on taking our desktop computer, my personal PC laptop, and my Macbook to regionals. The desktop is a backup in case the laptop has the fabled 64-bit incompatibility with WindRiver, and the Mac is for scouting use. We did about the same last year, and got told by some scouting team that we had the most computers they'd seen in a pit...said something about that being a scouting criteria, which makes one wonder...

The original programming laptop didn't actually die....but WindRiver just...stopped compiling. It gets to 13% [on ANY program] and then just sits there indefinitely. We reinstalled and everything. It wasn't even particularly slow to begin with...

RoBoTiCxLiNk 14-03-2011 02:39

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Bethie42 (Post 1039108)
Yeah, tiled hallways don't help... :P We have a little scrap of carpet from the local scrimmage in our shop and it's amazingly handy for both testing line-following and spread-eagling on to work on the robot! The aluminum shavings get into the seams a bit but oh well...

I discovered at our scrimmage that finding a bit of fabric or other material that simulates the carpet for testing in one's pit is surprisingly hard. Also our scrimmage had very pale grey tape lines...very nice high contrast, but not really what I expect at competition. Guess who's going down on the floor with the drive team with a screwdriver on day one to calibrate the light sensors.

Programming on the Classmate?! I applaud you. I don't have the patience to change default settings on that thing, lol, much less program on it...and the keys are so tiny. Try programming on a cell phone, I imagine it's about the same.
We are actually planning on taking our desktop computer, my personal PC laptop, and my Macbook to regionals. The desktop is a backup in case the laptop has the fabled 64-bit incompatibility with WindRiver, and the Mac is for scouting use. We did about the same last year, and got told by some scouting team that we had the most computers they'd seen in a pit...said something about that being a scouting criteria, which makes one wonder...

The original programming laptop didn't actually die....but WindRiver just...stopped compiling. It gets to 13% [on ANY program] and then just sits there indefinitely. We reinstalled and everything. It wasn't even particularly slow to begin with...

The keys are very small on the DS, but you get used to it after a while. It's pretty handy, using it for all the programming and keeping comms to the driver station exe, then not switching laptops for the regional.

We do have a small bit of carpet, but it's all curled and needs weight at each end, and it's only slightly larger than the robot, so it's not of much use. Even to sit or lie down on, it's very hard to use. I usually just sit on the floor, cross-legged, with MY laptop that I used to program this year (an actual laptop, not the DS) on my lap, right next to the chassis on a cart. And I think maybe I'll suggest a Mac to the scouting leader or whatever he's called, but pretty much everyone on the team is a PC.

And that sucks about your original programming laptop. We have this really old laptop in our closet with no wifi, and probably about 5 MB of RAM, that I was thinking of using to program one time, till I picked it up... very heavy for such a bad laptop...

Bethie42 14-03-2011 13:40

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by RoBoTiCxLiNk (Post 1039114)
The keys are very small on the DS, but you get used to it after a while. It's pretty handy, using it for all the programming and keeping comms to the driver station exe, then not switching laptops for the regional.

We do have a small bit of carpet, but it's all curled and needs weight at each end, and it's only slightly larger than the robot, so it's not of much use. Even to sit or lie down on, it's very hard to use. I usually just sit on the floor, cross-legged, with MY laptop that I used to program this year (an actual laptop, not the DS) on my lap, right next to the chassis on a cart. And I think maybe I'll suggest a Mac to the scouting leader or whatever he's called, but pretty much everyone on the team is a PC.

And that sucks about your original programming laptop. We have this really old laptop in our closet with no wifi, and probably about 5 MB of RAM, that I was thinking of using to program one time, till I picked it up... very heavy for such a bad laptop...

Wow, I thought it was only our team that has stacks of old decrepit laptops stashed in the closet.....along with old control boards and expired software...

At one point this season I was actually writing code on my Mac...couldn't compile because of not having WPI lib stuff, but still...and I have a fond hope of someday compiling C++ for FRC on a mac, in Xcode or Code::Blocks. I tried to do that earlier this season.....after linking up most of the dozens and dozens of WPI library header files, I gave up, but I was CLOSE! Or at least I like to think so!


On another note, has anyone here worked on an autonomous mode that scores more than one tube?

apalrd 14-03-2011 13:53

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Bethie42 (Post 1039344)
On another note, has anyone here worked on an autonomous mode that scores more than one tube?

I have.

The command functions are written in LabVIEW and perform specific tasks reliably and accurately. The primary drive functions are drive_straight and drive_gyro_turn, and both were calibrated fairly precisely (drive is accurate to about 1/2" and gyro is repeatable to about a degree, but is usually off by two degrees). There are also more command functions for elevator actions, but all of those set the getsets which are read by the elevator code elsewhere (do score, set state, etc.)

The beescript system runs on top of the command functions; it reads a text file on the cRIO, interprets it, and calls the command functions (by reference).

An example script would be:
Code:

#score a tube 150 inches away 3.4 ft/sec
ELEV_SET_STATE score_hi
DRIVE_STRAIGHT 150 3.4
#score
ELEV_SCORE
#backup
DRIVE_STRAIGHT -150 6

This would be read by the interpreter and it would call the command functions in sequence.
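The actual interpreter is LabVIEW on the cRIO; here is a rough C++ analogue of the read-a-line, dispatch-to-a-command-function idea (the command names come from the script above; everything else, including the printing stub functions, is invented):

Code:

#include <cstdio>
#include <cstring>

// Stand-ins for the real command functions (LabVIEW VIs on the actual robot).
void DriveStraight(double inches, double feetPerSec)
{ printf("drive %.1f in at %.1f ft/s\n", inches, feetPerSec); }
void ElevSetState(const char *state) { printf("elevator state -> %s\n", state); }
void ElevScore()                     { printf("score\n"); }

// Read a beescript-style text file and dispatch each line to a command function.
void RunScript(const char *path)
{
    FILE *fp = fopen(path, "r");
    if (fp == NULL) return;

    char line[128];
    while (fgets(line, sizeof(line), fp) != NULL) {
        if (line[0] == '#' || line[0] == '\n') continue;     // skip comments and blanks

        char cmd[32] = "", arg[32] = "";
        if (sscanf(line, "%31s %31s", cmd, arg) < 1) continue;

        if (strcmp(cmd, "DRIVE_STRAIGHT") == 0) {
            double dist = 0.0, speed = 0.0;
            sscanf(line, "%*s %lf %lf", &dist, &speed);
            DriveStraight(dist, speed);        // blocks until the move completes
        } else if (strcmp(cmd, "ELEV_SET_STATE") == 0) {
            ElevSetState(arg);
        } else if (strcmp(cmd, "ELEV_SCORE") == 0) {
            ElevScore();
        }
    }
    fclose(fp);
}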

Jeffy 14-03-2011 16:58

Re: Autonomy: How Did You Guys Do It?
 
Line trackers. They work very well if you calibrate them at the beginning of each day. Once we got ours right, we scored in our last 6 matches without flaw.

connor.worley 15-03-2011 02:02

Re: Autonomy: How Did You Guys Do It?
 
We use a gyro, one encoder on our drivetrain, an IR sensor on our roller claw, and a 10 turn potentiometer on our arm. All of them are used in autonomous, but only the IR sensor and the pot are used in teleop.

On the programming side of things, we've got P loops for the encoder and gyro, a PD loop for the arm, and a state machine for the roller. We push all the autonomous commands to a queue with a timeout. If the target positions for the subsystems aren't achieved within the timeout, we skip to the next command. This helps ensure that the robot doesn't get stuck in a control loop and is always doing something.
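A schematic C++ version of the "queue of commands, each with a timeout" idea; the AutoCommand interface here is invented for illustration, not their actual class names:

Code:

#include "WPILib.h"
#include <deque>

// Minimal command interface: each autonomous step updates its subsystem's
// control loop and reports when its target has been reached.
class AutoCommand {
public:
    virtual ~AutoCommand() {}
    virtual void Update() = 0;      // run one 20 ms iteration of the control loop
    virtual bool AtTarget() = 0;    // true once the setpoint has been achieved
};

// Run queued commands in order; if one hasn't hit its target within its timeout,
// skip to the next so the robot never gets stuck inside a control loop.
void RunCommandQueue(std::deque<AutoCommand*> &queue, double timeoutSeconds)
{
    while (!queue.empty()) {
        AutoCommand *cmd = queue.front();
        queue.pop_front();

        Timer t;
        t.Start();
        while (!cmd->AtTarget() && t.Get() < timeoutSeconds) {
            cmd->Update();
            Wait(0.02);
        }
        // Finished or timed out either way: move on to the next command.
    }
}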

davidthefat 15-03-2011 02:07

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by connor.worley (Post 1039884)
We use a gyro, one encoder on our drivetrain, an IR sensor on our roller claw, and a 10 turn potentiometer on our arm. All of them are used in autonomous, but only the IR sensor and the pot are used in teleop.

On the programming side of things, we've got P loops for the encoder and gyro, a PD loop for the arm, and a state machine for the roller. We push all the autonomous commands to a stack with a timeout. If the target positions for the subsystems aren't achieved within the timeout, we skip to the next command. This helps ensure that the robot doesn't get stuck in a control loop and is always doing something.

Congrats on your performance at the SD Regional

WizenedEE 15-03-2011 02:34

Re: Autonomy: How Did You Guys Do It?
 
We first raise our arm with two PD loops and two accelerometers (it's a two-part arm), plus a little bit on the wrist, then track the line with the Rockwells, stopping at the T; then it opens the claw, lowers the arms, and runs away, all at the same time.

The running away will hopefully be useful in competition, because it's been very useful scaring all the PR folks.

Does anybody else find the "practice" mode very useful for the 15 second cutoff? The awesome sounds are also nice :cool:

vinnie 15-03-2011 23:55

Re: Autonomy: How Did You Guys Do It?
 
We use a combination of line tracking and ultrasonic rangefinding. Our robot follows the line at about 40% speed and then slows to 20% when it gets to around 60 inches from the wall. We have a lot of fun starting our robot totally crooked and watching it straighten itself out. We are also using position control for the arm with the E4P encoder in the KOP.

Kingofl337 17-03-2011 15:54

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Bethie42 (Post 1039344)
On another note, has anyone here worked on an autonomous mode that scores more than one tube?

We have done two tubes.

We turn the robot into a rolling PID Loop and just start executing commands to the PID controllers.

Arm Length and Tilt PID for HighMiddle
Ultrasonic & Gyro PID to 36" from the wall.
Ultrasonic & Camera PID to the peg tube 1
Reverse Claw
Arm Length & Tilt PID to home
Arm Length and Tilt PID for Front Pickup Run Claw Intake
Strafe Wheel Pods & Encoder PID in front of tube 2
Drive FWD until the claw detects a tube
Arm Length & Tilt PID to HighEndPeg
Ultrasonic & Gyro PID 36" from the wall
Ultrasonic & Camera PID to the Peg
Arm Length & Tilt PID to home
Reverse Claw

If you watch the video you can see each step: Video

AlexD744 17-03-2011 17:46

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by torihoelscher (Post 1038851)
We used Line Sensors, a sonar, a Gyro, and a backup way to make the auto move. Of course the Line Sensors gave us a hassle that we had to disable them and trust the sonar. it was great!!!

Are you using the gyro to keep you straight if the robot veers? I feel like that would be a common occurrence with the bumps in the field and a swerve drive.

theprgramerdude 17-03-2011 18:24

Re: Autonomy: How Did You Guys Do It?
 
I used a for loop that drove forward and raised the arm at set speeds. Piece of cake.

Grim Tuesday 17-03-2011 19:27

Re: Autonomy: How Did You Guys Do It?
 
We had the whole grand plan: camera tracking, line following, PID loops. Unfortunately, none of our optical encoders could be coerced to work, despite constant replacements. And the camera made the cRIO crash.

So now we have a "drive forward, lift up, open grabber" auto.

adf0221 19-03-2011 12:54

Re: Autonomy: How Did You Guys Do It?
 
Are you using the newest camera? If so, you need to use a non-crossover cable to connect it to the cRIO to prevent it from crashing. That's how we fixed our problem.

PriyankP 19-03-2011 13:40

Re: Autonomy: How Did You Guys Do It?
 
We had many combinations of sensors to make the auto work, but in the end we went with line sensors, an ultrasonic, and a pot on the arm.

We start the auto by line tracking at about 8 ft/s and stopping the base at some distance away from the wall. While it's line tracking, our arm moves to the high goal preset. Once it reaches the target distance, it starts to orient the tube & de-suck it out of the rollers. And the arm is slightly lowered too. :)


I tried using a gyro, but due to sensor drift the gyro reading would keep increasing even if the robot's not moving. I was told that it happens with all gyros and it shouldn't make a big difference in the 15-20 second autos...

Michael DiRamio 19-03-2011 15:43

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by PriyankP (Post 1042029)
I tried using a gyro, but due to sensor drift the gyro reading would keep increasing even if the robot's not moving. I was told that it happens with all gyros and it shouldn't make a big difference in the 15-20 second autos...


It will always drift slightly. If you're noticing big changes it's likely that the robot was moving when the gyro initialized and biased. If that happens it will constantly think it is in motion afterwards. Make sure that it's completely still during boot up.

PriyankP 20-03-2011 02:14

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Michael DiRamio (Post 1042067)
It will always drift slightly. If you're noticing big changes it's likely that the robot was moving when the gyro initialized and biased. If that happens it will constantly think it is in motion afterwards. Make sure that it's completely still during boot up.

I think the last time I got the gyro to work properly it was changing by a degree every 5 seconds... although it wouldn't have made a huge difference, I abandoned the idea of an auto with a gyro to stay straight because it made our base oscillate when I programmed it to stay within +/- 1 degree of the original heading. When I get back to the room again, I'm going to test it with initialization in the robotInit() function, but I doubt it will make a lot of difference because the degree of change was measured in disabled mode.

On that note, how often do other programmers call for inputs from sensors during the autonomous and teleop periods? I've always called them at the beginning of every iteration, but I'm not sure if calling them every other iteration would make a huge difference.

davidalln 20-03-2011 08:59

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by PriyankP (Post 1042283)
On that note, how often do other programmers call for inputs from sensors during the autonomous and teleop periods? I've always called them at the beginning of every iteration, but I'm not sure if calling them every other iteration would make a huge difference.

Calling it at the beginning of the loop is probably the best idea. If you call it at the end, and the code hangs up for whatever reason, then you'll be running with stale data the next loop iteration. But in reality, it shouldn't matter that much.

We ubertubed 15/16 matches at Peachtree. Line tracking is working great for us :D

MikeE 20-03-2011 13:21

Re: Autonomy: How Did You Guys Do It?
 
We developed camera tracking and have used gyro in previous years but this year we found that the line sensors were very reliable as long as they are calibrated properly to the field. The lights on the front of the sensors make them very easy to calibrate manually when sliding the robot from side to side over the lines.

Our robot has KoP quad encoders on each of the mecanum wheel modules, so we know how far the robot has travelled forward. There is code to detect if an encoder is providing unreliable values and exclude it from the calculation. Typical problems are no signal due to wiring problems or physical encoder damage.

We do have an encoder on the elevator, but we have found the most reliable method is to drive the elevator to the top and check the current draw from the motor via CAN. As soon as the elevator gets to the top, the motor starts to stall, increasing the current draw, which we detect and then stop the motor. This method also helps if we get too close and hit some other part of the peg grid structure, since it will stop the elevator rather than unsuccessfully trying to drive it through the grid to a preset point.
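Their code is Java; a short C++ sketch of the current-draw check, assuming a CANJaguar on the elevator and a made-up stall threshold:

Code:

#include "WPILib.h"

// Run the elevator up until the motor starts to stall against the hard stop
// (or against the peg grid), detected as a jump in current draw over CAN.
// A real version would also time out in case the current reading misbehaves.
void RaiseElevatorUntilStall(CANJaguar &elevator)
{
    const float kStallAmps = 30.0;    // placeholder threshold, tune per mechanism

    elevator.Set(0.8);                // drive the elevator up
    Wait(0.25);                       // ignore the initial inrush current spike
    while (elevator.GetOutputCurrent() < kStallAmps) {
        Wait(0.02);
    }
    elevator.Set(0.0);                // stalled: at the top, or blocked, so stop
}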

The code is written in Java in the autonomousPeriodic() method as a state machine, with timeouts on many states to allow for sensor failures.

Summarizing: line sensors for direction, quad encoders on drivetrain for distance travelled, current draw from elevator motor and timing for pneumatics. Ultrasonic range finding would be a helpful addition.

The biggest problem was lack of access to a real field to test out the system. In particular we had extremely limited access to the practice field during our Regional. Testing by watching the system perform during a match is a very public way of finding bugs, which I do not recommend for the thin-skinned.

siggy2xc 20-03-2011 21:54

Re: Autonomy: How Did You Guys Do It?
 
We recorded our joystick inputs to a file on the cRIO and played them back; it worked amazingly well. Here's a link to my program: http://www.chiefdelphi.com/forums/sh...ad.php?t=93720

davidthefat 21-03-2011 19:25

Re: Autonomy: How Did You Guys Do It?
 
Sigh... I guess I will have to go with the blind man way... Autonomy with only encoders. Hey, but at least it is so easy to program that.

theprgramerdude 21-03-2011 19:58

Re: Autonomy: How Did You Guys Do It?
 
Did you write your own encoder code?

davidthefat 21-03-2011 20:11

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by theprgramerdude (Post 1043379)
Did you write your own encoder code?

No, because I have no access to the FPGA. I read that all the basic encoder calculations are done on the FPGA and that WPILib is just an interface to the FPGA. Even if I write my own program to interface with the encoders, the FPGA has to be the middle man. I can't do anything if the FPGA is doing the internal calculations all wrong.

davidalln 21-03-2011 21:13

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidthefat (Post 1043388)
No, because I have no access to the FPGA. I read that all the basic encoder calculations are done on the FPGA and that WPILib is just an interface to the FPGA. Even if I write my own program to interface with the encoders, the FPGA has to be the middle man. I can't do anything if the FPGA is doing the internal calculations all wrong.

Wait... I'm not sure this is true. The encoder is plugged into the Digital Sidecar, which is plugged directly into the cRIO. There are no signals or math about the encoder being sent to the FPGA, the only thing that reads the values is the code. In fact, the FPGA only acts as a middle man between the driver station and the cRIO.

At least, this was my understanding.

theprgramerdude 21-03-2011 21:28

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidalln (Post 1043422)
Wait... I'm not sure this is true. The encoder is plugged into the Digital Sidecar, which is plugged directly into the cRIO. There are no signals or math about the encoder being sent to the FPGA, the only thing that reads the values is the code. In fact, the FPGA only acts as a middle man between the driver station and the cRIO.

At least, this was my understanding.

Incorrect. The FPGA is the middleman between the cRIO's CPU and the majority of its I/O. The FPGA can be directed in its actions by the CPU, as the CPU is the boss, but everything on the sidecar runs through the FPGA, including encoders. David, if you claim that the FPGA is doing everything wrong, then why would you use encoders? As far as I can tell, the only thing it's doing wrong is deriving rates, which can be worked around. It can also give the CPU the values the FPGA is reading at any instant, allowing you to completely write your own code from scratch. The FPGA CAN be accessed, just not directly; sometimes, you have to be sneaky in doing so.

http://decibel.ni.com/content/docs/DOC-1750
Figure 3 on this page may help clear things up.

davidthefat 22-03-2011 00:57

Re: Autonomy: How Did You Guys Do It?
 
I really had no say in this. It was like: here's what we have, do something amazing with it. All I can use is encoders and timers. If worst comes to worst, I will just use a timed system. My early tests have shown that there is too much noise coming from the encoders (just from the raw data). I think that is due to the chains on the drive train jerking around.

Joe Ross 29-03-2011 18:06

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidthefat (Post 1043367)
Sigh... I guess I will have to go with the blind man way... Autonomy with only encoders. Hey, but at least it is so easy to program that.

It's harder than it seems, isn't it?

davidthefat 29-03-2011 18:54

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Joe Ross (Post 1047072)
It's harder than it seems, isn't it?

No, we just flicked the wrist because my mentor did not authorize me to do anything else in autonomous. It was because we had a truck load of mechanical and electrical problems that I did not have time to actually go and test...

Hjelstrom 29-03-2011 20:23

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidthefat (Post 1047091)
No, we just flicked the wrist because my mentor did not authorize me to do anything else in autonomous. It was because we had a truck load of mechanical and electrical problems that I did not have time to actually go and test...

The WPILib encoder class works great! Things have really come a long way since when I started in 2005 with the IFI stuff. The cRIO, the language options and the libraries you have access to are great. I really encourage you to use some of this stuff rather than assuming you need to rewrite it or bypass it.

mahumnut 29-03-2011 21:09

Re: Autonomy: How Did You Guys Do It?
 
I had originally planned on using the camera, but after about two weeks spent trying to get it to work I found it just wasn't accurate or fast enough, and it even crashed sometimes. After this, I spent most of my time (whenever I wasn't programming the other aspects of the robot) working on a dead reckoning auton with just timers. This turned out to be super hard considering the rotational drift with mecanum wheels and the change in distance due to battery voltage. On bag 'n' tag day, I eventually just tried line tracking and we slapped the three photoswitches onto the front of our robot. Because of the time crunch (we literally got them on and calibrated at like 6 pm) and the fact that I couldn't figure out how to change the speeds in the pre-supplied line tracking code quickly enough, I just made my own. All it is is a bunch of cases, one for each possible sensor combination, that output either straight, left, right, or stop, which are then interpreted based on time.
Whaddya know, it actually worked. After the line tracking to get to the pegs, it was just a matter of timing a sequence of raising the arm, tilting the tube and lowering the arm.
But srsly, line tracking is amazingly simple and effective. I didn't even have to do any special programming for the Y; I just have two different modes, one for tracking the left edge and one for the right edge. It was pretty consistent at DC (after some last-minute time calibration on Friday to get the tube on just right); it only failed us twice (one of which was during our second quarterfinal match :mad:).
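The edge-tracking trick that makes the Y a non-issue looks roughly like this in C++; the single-sensor zig-zag, speeds, and time-based stop are illustrative guesses at the shape of it, not the actual code:

Code:

#include "WPILib.h"

// Follow one edge of the tape with a single photoswitch: on the tape, steer
// toward the chosen edge; off the tape, steer back onto it. Picking the left
// or right edge selects which branch of the Y the robot takes.
void FollowEdgeForTime(RobotDrive &drive, DigitalInput &edgeSensor,
                       bool followLeftEdge, double seconds)
{
    Timer t;
    t.Start();
    while (t.Get() < seconds) {          // distances calibrated as times, as above
        bool onTape = (edgeSensor.Get() != 0);
        float curve = onTape ? 0.3 : -0.3;
        if (followLeftEdge) curve = -curve;
        drive.Drive(0.35, curve);
        Wait(0.02);
    }
    drive.Drive(0.0, 0.0);               // arrived (by the clock): start the arm sequence
}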

theprgramerdude 29-03-2011 21:10

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Hjelstrom (Post 1047112)
The WPILib encoder class works great! Things have really come a long way since when I started in 2005 with the IFI stuff. The cRIO, the language options and the libraries you have access to are great. I really encourage you to use some of this stuff rather than assuming you need to rewrite it or bypass it.

The class is great! Now if only its functionality weren't negated by the FPGA image...

apalrd 29-03-2011 21:26

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by theprgramerdude (Post 1047133)
Now if only its functionality weren't negated by the FPGA image...

Something NI should have fixed long ago. As in, they shouldn't have touched the Counter/Encoder code since it worked perfectly fine last year. (then they decide it's too risky to fix during build season so they won't fix it at all....)

MagiChau 29-03-2011 21:33

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by apalrd (Post 1047142)
Something NI should have fixed long ago. As in, they shouldn't have touched the Counter/Encoder code since it worked perfectly fine last year. (then they decide it's too risky to fix during build season so they won't fix it at all....)

I wish a lot of things had been fixed beforehand. The 2CAN, for example: we thought it was working until it was found to be buggy again. :/

Hjelstrom 29-03-2011 23:29

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by theprgramerdude (Post 1047133)
The class is great! Now if only its functionality weren't negated by the FPGA image...

Hmm, now I'm curious what functionality is negated?! I guess we're just using the count which is working fine.

remulasce 30-03-2011 02:02

Re: Autonomy: How Did You Guys Do It?
 
KOP line sensors, an ultrasonic mounted on the front, a 10-turn pot on our telescope, and a 3-turn on our wrist.

We use a state machine for the line following. The line sensors are mounted close enough together that at some points two sensors will read at the same time; if that happens it makes fine corrections, while if only the outside sensors detect, it makes a coarse correction. The state machine remembers what state it was in last if no lines are detected, so we can start almost 90 degrees off from the line, and once it reaches the line it will track it, even if inertia carries the sensors across the line, because it will keep correcting back until it finds the line again. It detects a fork if it sees both outside sensors on lines but the center one is not. If it finds one, it drives straight ahead for a short time, then turns either right or left and goes straight until it meets up with a line again. It requires the middle sensor to read ground so that it doesn't try a fork at the foot of the line or if the driver comes at the line at an extreme angle.

The entire line following sequence is built into a LineFollower class, an object of which is passed to both the autonomous code and the teleop code, so the driver can use it any time during a match. The LineFollower keeps its own state machine, so the auto code doesn't have to deal with finding forks or whatnot, and if a fork isn't registered, the auto code doesn't even care. The auto code just calls FollowLineStraight or FollowLineFork until the front ultrasonic sensor reads about two feet, at which point the scoring sequence is executed. At the beginning of auto, a PID loop puts the telescope and wrist into the correct positions, and once the robot gets close enough, the wrist tilts down, the telescope drops, the rollers outtake simultaneously, and the robot backs off. The scoring sequence constants are held in a config file available to both teleop and auto, so an identical sequence can be called via the codriver's trigger.

Our alliance had three robots with line following, so we took the fork while our partners took straight. We got triple ubertubes in a couple of elim matches.
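A compressed C++ sketch of a follower in that spirit: fine corrections when the center sensor plus a neighbor sees the line, coarse corrections on an outer sensor alone, and the last correction remembered so the robot keeps turning back toward the line after inertia carries it across. The gains are placeholders and the fork handling is reduced to a flag:

Code:

#include "WPILib.h"

class LineFollower {
public:
    LineFollower(DigitalInput *left, DigitalInput *center, DigitalInput *right)
        : m_left(left), m_center(center), m_right(right), m_lastCurve(0.0) {}

    // Returns the curve to apply this iteration; the caller supplies the speed.
    float GetCurve() {
        bool l = (m_left->Get() != 0);
        bool c = (m_center->Get() != 0);
        bool r = (m_right->Get() != 0);

        if (l && c)      m_lastCurve = -0.15;   // two sensors lit: fine correction left
        else if (r && c) m_lastCurve =  0.15;   // two sensors lit: fine correction right
        else if (l)      m_lastCurve = -0.4;    // outside sensor only: coarse correction
        else if (r)      m_lastCurve =  0.4;    // outside sensor only: coarse correction
        else if (c)      m_lastCurve =  0.0;    // dead center
        // No sensor lit: keep the last correction until the line is found again.
        return m_lastCurve;
    }

    // Fork signature: both outer sensors on tape while the center reads ground.
    bool AtFork() {
        return (m_left->Get() != 0) && (m_right->Get() != 0) && (m_center->Get() == 0);
    }

private:
    DigitalInput *m_left, *m_center, *m_right;
    float m_lastCurve;
};

The autonomous code would call GetCurve() each loop and pass the result to RobotDrive::Drive(), checking AtFork() to decide when to run the drive-straight-then-turn fork maneuver described above.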

TL;DR: our auto pwnz.

Funny story about pregame checklists: one match, we started off in high gear accidentally, which the constants were never tuned for. At the beginning of autonomous, we were alarmed to see our robot hurtle at us at nearly full speed (14 fps). At the scoring distance, it neatly stopped, hung the tube, and daintily pulled away. This all took about four seconds, which was far faster than the regular low gear attempts. After the match, our team captain demanded of me a two tube autonomous, now that I blew the "not enough time" excuse.

wireties 30-03-2011 10:02

Re: Autonomy: How Did You Guys Do It?
 
Our robot uses mecanum drive with encoders on each wheel and a pot on the shoulder to tell us where the arm is. The code has a built-in script language for autonomous code so we can FTP a new script to the robot for every match (if needed). We also have code that records operator input and replays it. Both work pretty well though we've not tested as much as we'd like to this year. The weather-related delays during build season left little time.

We played with line following and with the camera. Our robot pretty much goes where you tell it to so the line following was not necessary and slowed it down. The camera consumes a lot of CPU (for analysis) and communications bandwidth and did not seem worth it this year.

kenavt 30-03-2011 17:11

Re: Autonomy: How Did You Guys Do It?
 
I believe our first autonomous was merely timed commands to different motors and solenoids. However, that was quickly scrapped when we also found out the effectiveness and simplicity of line tracking.

Equipment: 2 "eye" sensors, KOP
4 US Digital quadrature encoders, KOP (wheels)
1 "string" linear potentiometer, not KOP (not sure of manufacturer) (arm height)
2 solenoids, probably KOP (telescoping arm, aka "extendorizer")

We have two "eye" sensors tracking the line, simply dragging one side of the robot to stay on track. While the robot is moving along the line, at timed intervals (I believe), our arm moves up, stopping at a certain "string" pot value. At another time interval afterward, we "power" (not entirely sure of the terminology) our pneumatic telescoping arm to reach up to the top row.

The robot is stopped by reaching a certain encoder value. Two rollers on the bottom and top of our claw eject the tube, while a "jaw" to our tube lowers, allowing a quick ejection. Simultaneously, the arm is retracted, to lower it out of the way.

Once this operation is complete, the robot lowers the arm to a certain pot value, simply backs up to another encoder value, and then stops.

Commands can be sent to the robot via FTP with a text file, which is parsed. In the text file, available commands are vector or strafe driving (we have mecanums), retraction of the arm, and arm position. You can also specify the encoder value you want to stop at, and the motor speed.

Our last Ann Arbor District (last week) match, with the autonomous working perfectly (nearest red robot):
http://www.youtube.com/watch?v=TfGzUebFtfM

plnyyanks 30-03-2011 17:16

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Hjelstrom (Post 1047214)
Hmm, now I'm curious what functionality is negated?! I guess we're just using the count which is working fine.

It's a problem with rate calculations. I don't remember the details (I think only even-numbered allocations of encoders can calculate rate), since we wrote our own rate functions, but NI has acknowledged the bugs and won't be fixing them until the offseason. I'll post a link to a more detailed explanation as soon as I find it again.

EDIT: see this thread, specifically this post and this post.

theNerd 31-03-2011 16:40

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Jetweb (Post 1038837)
We used the camera 100%

Wow! I've never done this before! I attempted to "track" still objects in various still pictures (a ball, a stick, a box), all in color and at different lightings, but I was never able to do it successfully. My main question is not how to apply an HSL threshold or clean up the photo after a threshold, but what to do with all the data I get using the stat VIs and such (I'm using LabVIEW). What does it all mean? And what on earth am I supposed to do with it to make my program know that it's looking at the target? Thanks.

aaronweiss74 03-04-2011 20:05

Re: Autonomy: How Did You Guys Do It?
 
We used light sensors for line tracking, encoders for auto-correction, and an encoder on the arm for height determination. It's never failed. It's also kind of showy since instead of placing the tube it throws it down onto the peg.

RoBoTiCxLiNk 03-04-2011 20:16

Re: Autonomy: How Did You Guys Do It?
 
I used a gyro on the base to keep it straight, an encoder for distance, and a second gyro to determine arm angle. I'm hoping to rig up another encoder so as to have two straight-correcting sensors and average the two corrections, or use one as a safety net in case one malfunctions.

Jeremy Germita 03-04-2011 22:35

Re: Autonomy: How Did You Guys Do It?
 
I was told by several older members and mentors to abandon the use of sensors altogether because of our history with sensors. Essentially, don't use them because we've never used them. What I thought was hilarious about this discussion was that half of these guys were mechanical, and couldn't tell the difference between a potentiometer and an encoder.
</rant>

The autonomous program I've been using in San Diego and Los Angeles was dead reckoning with timers. It worked about 70% of the time. Aim for middle peg, score on left peg.

I plan to have a basic autonomous program using the encoders and gyro. This will at least go straight. I will test and implement it this week in Utah. Another variation of this program will attempt to put up two tubes. But this is a long shot...

If we make it to STL, I want to mount an ultrasonic sensor for distance sensing.

Sensors on James Bot:
Potentiometer on arm shoulder
Encoders on drivetrain
Gyro on drivetrain
CAN Bus Current Sensors(Used in our roller claw to sense tubes)

I've had KOP light sensors mounted, but 2/3 were mangled during practice on concrete, the other one doesn't seem to calibrate. I have abandoned these.

wireties 11-04-2011 15:24

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by jeremypg399 (Post 1048990)
I was told by several older members and mentors to abandon the use of sensors altogether because of our history with sensors. Essentially, don't use them because we've never used them. What I thought was hilarious about this discussion was that half of these guys were mechanical, and couldn't tell the difference between a potentiometer and an encoder.

Man, that is some questionable advice!

We used encoders on all 4 wheels, a pot on the arm, and an ultrasonic sensor on the front and rear. Our bot simply rolled forward X feet (measured by the encoders), raised the arm to the 9' height (measured by the pot), released the ubertube and backed off some. We used the ultrasonic sensors to align for minibot deployment.

Reliable sensors are a HUGE advantage in autonomous mode and can be used to automate certain movements in teleop mode (like releasing a tube and lowering an arm in perfect sequence). If you understand them, program them correctly and test them thoroughly - USE THE SENSORS!

HTH

Robby Unruh 12-04-2011 07:40

Re: Autonomy: How Did You Guys Do It?
 
I'm less interested in how they did it; what I want to know is what kind of mishaps occurred during the autonomous period.

One match, instead of just line sensing, our robot drove straight backward and rammed another robot, pushing them into their scoring zone and giving my team a red card. Sooo unexpected. :eek:

stuart2054 12-04-2011 19:14

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by davidthefat (Post 1038791)
From the looks of it, a lot of teams did the whole "drive forward for x seconds/feet and score" method. Others used line trackers, some might have used cameras.

How did you guys do it?

We use two electric eyes for line tracking and, for distance, an ultrasonic distance sensor we got from Pololu.com. The distance sensor is awesome, within +/- an inch. You have to correct a little for coasting when the motors are set to zero speed. Our arm is pneumatic, so it just goes to the top position, which is good for either the middle or the left and right pegs. We also have two distances, which are selected by the driver by moving the right-hand joystick throttle either fully up or fully down. We need to be closer to the wall when we stop and hang the tube if we play the middle Y tape position.

At the correct distance we stop, let go of the ubertube, and lower the arm. Then we back up, sending equal speed signals to the motors, and at 13 seconds in we lower the arm.

At our first district we never missed a top-level ubertube score. We did not back up there and almost dropped the tube, because when the arm fully retracted we hit the middle peg.

So we added the backing up before district competition #2. We had a couple of misses there because the joystick was set for the middle Y tape position and we got too close, and the arm joint caught on the middle peg and shook off the tube.

At MSC we did not miss except for when a partner robot veered into us, and one other time when, in the end-of-day Friday match, the tape was screwed up and the robot veered off. Our driver now checks the tape on the floor before every match.

stuart2054 12-04-2011 19:29

Re: Autonomy: How Did You Guys Do It?
 
Quote:

Originally Posted by Robby Unruh (Post 1051844)
I'm less interested in how they did it; what I want to know is what kind of mishaps occurred during the autonomous period.

One match, instead of just line sensing, our robot drove straight backward and rammed another robot, pushing them into their scoring zone and giving my team a red card. Sooo unexpected. :eek:

I saw almost the same thing in the Niles, MI district finals. It happened in the last finals match. The other alliance called a time out, and when the match started a robot went backwards all the way into our safe zone. I felt sorry for them and especially the programmer. (But for the grace of GOD it could have been me.)

We did not know what they were working on in the time out, but I assume the autonomous program for that robot was a part of it.

At MSC one of our alliance partners did not use the tape but instead lined up right beside our robot. They veered into ours and knocked it off the tape, and it missed. I saw a lot of teams doing this there (I don't know how they were programmed) and I was not impressed with it. Some worked well, but most missed, and many stopped an alliance partner from scoring.

JesseK 12-04-2011 20:41

Re: Autonomy: How Did You Guys Do It?
 
- 1 encoder on SuperShifting transmissions
- Transmissions in low gear
- 1 Gyro with semi-aggressive PID control for "drive straight"

1. Finely tune the encoder distances
2. DO NOT turn the robot on until it is set down. If the gyro inits before the robot is set down, then it's game over
3. Push the bot backwards a little so that the chains are pre-tensioned. This should help alleviate drive-straight problems from lurching
4. Ramp up the speed rather than going 100% full blast (see the sketch below)
5. Dead-reckon to the peg

Once it was tuned, we had 9 straight auto-modes score.
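A C++ sketch of steps 2-4: reset the gyro only once the robot is sitting still, ramp the speed up instead of going full blast from a standstill, and hold heading with a proportional gyro correction while the encoder counts off the distance. The gain, ramp rate, and speed are placeholders, not the tuned values:

Code:

#include "WPILib.h"

void RampedGyroDrive(RobotDrive &drive, Gyro &gyro, Encoder &enc, int targetCounts)
{
    const float kGyroP       = 0.03;   // curve per degree of heading error
    const float kMaxSpeed    = 0.7;
    const float kRampPerLoop = 0.02;   // reaches full speed in about 0.7 s of 20 ms loops

    gyro.Reset();                      // robot must already be set down and still
    enc.Reset();
    enc.Start();

    float speed = 0.0;
    while (enc.Get() < targetCounts) {
        if (speed < kMaxSpeed) speed += kRampPerLoop;   // ramp up, don't lurch
        drive.Drive(speed, -gyro.GetAngle() * kGyroP);  // steer back toward 0 degrees
        Wait(0.02);
    }
    drive.Drive(0.0, 0.0);             // dead-reckoned to the peg
}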

