#1
Autonomy: How Did You Guys Do It?
From the looks of it, a lot of teams did the whole "drive forward for x seconds/feet and score" method. Others used line trackers, some might have used cameras.
How did you guys do it?
#2
Re: Autonomy: How Did You Guys Do It?
Quote:
Arm is only on timers... Hoping to add encoders soon.
#3
Re: Autonomy: How Did You Guys Do It?
Now, I was going to use the rangefinder to find whether we are on the fork or the stop. I worried that, due to the metal backing, it would give a wrong reading.
#4
Re: Autonomy: How Did You Guys Do It?
Quote:
You need to do a bit of filtering, but it's not a big deal.
#5
Re: Autonomy: How Did You Guys Do It?
Our autonomous uses the encoders on the SuperShifter transmissions. The software uses the encoder values to actively make the robot drive dead straight. We also know reliably how far the robot has driven from the encoders. Encoders on the arm allow reliable positioning of the arm. The claw motors let us adjust the attitude of the tube when we reach the peg and then eject it onto the target peg. We have six different programs, one for each peg height. At GSR, we only ever ran the two for the top pegs, but have the others just in case. We can also select a delay before we start, which proved very useful in eliminations at GSR when we were on an alliance with 175 and 176 which scored 3 ubertubes in many of our elimination matches. See http://www.youtube.com/user/FRCteam1.../3/drbPrGJlroI for a match (semi-final 2) where all 3 of our alliance and 2 of our opponents scored in autonomous. Our robot is the middle blue robot which does not move until after the 4s delay we selected.
Noel
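For anyone curious what the encoder approach above looks like in code, here is a minimal sketch in Python (real 2011 robot code would be C++/Java/LabVIEW on the cRIO). The gain, counts-per-revolution, and wheel size are made-up placeholders, not the team's actual values.

```python
import math

KP = 0.01  # proportional gain on the encoder-count difference (would be tuned)

def drive_straight_power(left_counts, right_counts, base_power=0.5):
    """Trim left/right motor power so the side that is ahead slows down."""
    error = left_counts - right_counts
    correction = KP * error
    return (base_power - correction, base_power + correction)

def counts_to_feet(counts, counts_per_rev=360, wheel_diameter_ft=0.5):
    """Convert encoder counts to distance travelled, assuming no wheel slip."""
    return counts / counts_per_rev * math.pi * wheel_diameter_ft
```

Calling `drive_straight_power` every control loop iteration keeps the robot "dead straight," and `counts_to_feet` gives the reliable distance measurement the post describes.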
#6
Re: Autonomy: How Did You Guys Do It?
Encoders on our drive train, encoders on our elevator, camera tracking, gyro angle correction, and a sonar sensor. It consistently scores the ubertube on the top peg every time.
#7
Re: Autonomy: How Did You Guys Do It?
We decided that dead reckoning would be enough this year, given that the autonomous period is not very dynamic. Although our encoder code is done, we have had hardware problems and the encoders are not on yet. Thankfully, our robot is pretty balanced and drives very straight. We didn't get an auto mode in until yesterday in WI, and it only scored once (in a somewhat humorous fashion), but we should have one that scores pretty consistently for week 4 at Midwest.
Once the hardware issues are fixed, the timed approach will be changed to distance-based, and a gyro will be added for turning around and getting ready. I don't think our robot's functions act quickly enough to attempt a double tube in 15 seconds. Last edited by BigJ : 13-03-2011 at 20:49.
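A timed dead-reckoning auton like the one described above boils down to a short script of (power, duration) segments. This is an illustrative Python sketch with invented values; `set_motors` and `sleep` are injected so it can run off-robot.

```python
SCRIPT = [
    # (left_power, right_power, seconds) -- placeholder numbers
    (0.5, 0.5, 3.0),    # drive forward toward the peg
    (0.0, 0.0, 1.0),    # pause while the arm settles
    (-0.3, -0.3, 1.0),  # back away after scoring
]

def run_script(script, set_motors, sleep):
    """Execute each timed segment in order, then stop the drivetrain."""
    for left, right, duration in script:
        set_motors(left, right)
        sleep(duration)
    set_motors(0.0, 0.0)
```

Swapping the `seconds` field for an encoder-count target is exactly the "timing to distance" upgrade the post mentions.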
#8
Re: Autonomy: How Did You Guys Do It?
We used the camera 100%. I have a feeling previous years' vision targets have given the camera a bad name. The retro-reflective tape used this year is simply awesome to track, and it is the only target in my experience that is not affected by the stage lighting used at the competitions. I hope that stuff is around for a long time to come.
#9
Re: Autonomy: How Did You Guys Do It?
Yup. Our autonomous works in normal lighting, pitch black, or intense lights. We love that stuff.
#10
Re: Autonomy: How Did You Guys Do It?
Wow! I've never done this before! I attempted to "track" still objects in various still pictures - a ball, a stick, a box - all in color and under different lighting, but I was never able to do it successfully. My main question is not how to apply an HSL threshold or clean up the image after a threshold, but what to do with all the data I get from the particle-analysis and statistics VIs - I'm using LabVIEW. What does it all mean, and what on earth am I supposed to do with it to make my program know that it's looking at the target? Thanks.
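By way of illustration, one common way those particle statistics get used (shown here in Python pseudocode rather than LabVIEW): after the threshold, each surviving blob comes with measurements like area, bounding-box width/height, and center of mass. You filter blobs whose shape matches the target and steer toward the best one's center. All thresholds below are invented, not calibrated values.

```python
def looks_like_target(blob, min_area=200, min_aspect=2.0, max_aspect=6.0):
    """blob: dict with 'area', 'width', 'height', 'cx' (all in pixels).
    A wide, shallow rectangle of tape has a large width/height ratio."""
    if blob["area"] < min_area:
        return False  # too small: probably noise
    aspect = blob["width"] / blob["height"]
    return min_aspect <= aspect <= max_aspect

def pick_target(blobs, image_width=320):
    """Return a normalized steering error in [-1, 1], or None if no target."""
    candidates = [b for b in blobs if looks_like_target(b)]
    if not candidates:
        return None
    best = max(candidates, key=lambda b: b["area"])  # biggest blob = closest
    return (best["cx"] - image_width / 2) / (image_width / 2)
```

The steering error then feeds the drive code: negative means turn left, positive means turn right, near zero means you are lined up on the target.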
#11
Re: Autonomy: How Did You Guys Do It?
We used light sensors for line tracking, encoders for auto-correction, and an encoder on the arm for height determination. It's never failed. It's also kind of showy, since instead of placing the tube it throws it down onto the peg.
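A minimal sketch of light-sensor line following like the post describes, assuming three digital sensors (left, middle, right) over the tape. Sensor polarity and the power values are assumptions for illustration.

```python
def line_follow(left_on, mid_on, right_on, base=0.4, turn=0.15):
    """Return (left_power, right_power) for one control step."""
    if mid_on and not (left_on or right_on):
        return (base, base)                 # centered on the line: go straight
    if left_on:
        return (base - turn, base + turn)   # line drifting left: turn left
    if right_on:
        return (base + turn, base - turn)   # line drifting right: turn right
    return (base * 0.5, base * 0.5)         # lost the line: creep forward
```

Called every loop iteration, this keeps the robot hugging the tape all the way to the scoring grid.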
#12
Re: Autonomy: How Did You Guys Do It?
I used a gyro on the base to keep it straight, an encoder for distance, and a second gyro to determine arm angle. I'm hoping to rig up another encoder so as to have two straight-correcting sensors and average the two corrections, or use one as a safety net in case one malfunctions.
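The "average two corrections, or fall back to one as a safety net" idea above can be sketched like this. The health checks are placeholders; a real check might compare a sensor's rate of change against plausible limits.

```python
def fused_correction(gyro_corr, enc_corr, gyro_ok=True, enc_ok=True):
    """Blend two straightness corrections; use the survivor if one fails."""
    if gyro_ok and enc_ok:
        return (gyro_corr + enc_corr) / 2.0  # both healthy: average them
    if gyro_ok:
        return gyro_corr                     # encoder failed: trust the gyro
    if enc_ok:
        return enc_corr                      # gyro failed: trust the encoder
    return 0.0                               # both dead: apply no correction
```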
#13
Re: Autonomy: How Did You Guys Do It?
We used the E4P quadrature encoders provided in the KoP. We drove the robot the correct distance by hand for the different modes and plugged those counts in as the drive distances. We also noted some other encoder counts so we could do calculations without trial and error, including the arm angle needed from our starting position to get over the peg. The program drives the robot until the encoder count gets close to the target. Once the robot stops moving and the code that moves the mechanisms into position returns a completion boolean, another stage is triggered that lowers the arm and shoots the tube out with the rollers.
We had to do some trial and error with drive speeds at competition so the lift gets up before the robot reaches the peg. Last edited by MagiChau : 13-03-2011 at 20:20.
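The staged sequence above (drive to an encoder target, wait for the mechanisms to report complete, then lower the arm and eject) maps naturally onto a small state machine. This sketch is illustrative; the state names, commands, and the 5000-count target are invented.

```python
DRIVE, PLACE, EJECT, DONE = range(4)

def auton_step(state, encoder_counts, arm_at_setpoint, target_counts=5000):
    """Advance one control step; returns (new_state, command string)."""
    if state == DRIVE:
        if encoder_counts >= target_counts:
            return (PLACE, "stop_drive")      # reached the peg: stop driving
        return (DRIVE, "drive_forward")
    if state == PLACE:
        if arm_at_setpoint:
            return (EJECT, "lower_arm")       # arm reports done: lower onto peg
        return (PLACE, "hold")                # wait for the arm to finish
    if state == EJECT:
        return (DONE, "run_rollers_out")      # shoot the tube out with rollers
    return (DONE, "idle")
```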
#14
We used line sensors, a sonar, a gyro, and a backup way to make the auto move. Of course, the line sensors gave us such a hassle that we had to disable them and trust the sonar. It was great!
#15
Re: Autonomy: How Did You Guys Do It?
Our auton code drives straight for x seconds using gyro angle correction; rotates our arm a set distance using an encoder value; rotates the tube's pitch; rolls the tube out onto the peg; and backs up.
We're working on putting up two ubertubes.