Autonomy: How Did You Guys Do It?
From the looks of it, a lot of teams did the whole "drive forward for x seconds/feet and score" method. Others used line trackers, some might have used cameras.
How did you guys do it?
Re: Autonomy: How Did You Guys Do It?
Quote:
Arm is only on timers... Hoping to add encoders soon.
Re: Autonomy: How Did You Guys Do It?
Used the E4P quadrature encoders provided in the KoP. We had the robot drive the correct distance for the different modes, and I plugged the encoders in for drive distance. I had some other encoder counts noted to do some calculations without trial and error, and noted the arm height needed at our starting position to go through the peg. The program ended up driving the robot until the encoder count got close to the target. When the robot stops moving and the code that moves the robot mechanisms into position returns a boolean indicating completion, another stage is triggered that lowers the arm and shoots the tube out with the rollers.
We had to do some trial and error with drive speeds at competition so the lift gets out before the robot goes up to the peg.
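A minimal sketch of the staged approach described above, with all class names and constants invented for illustration: drive until the encoder count gets close to a tuned target, then hand off to the place stage once the mechanism code reports completion.

```java
// Hypothetical sketch (not the poster's actual code): drive until the
// encoder count nears a tuned target, then trigger the arm/eject stage.
// Sensor reads are passed in as plain values for illustration.
public class StagedAuto {
    public enum Stage { DRIVE, PLACE, DONE }

    private Stage stage = Stage.DRIVE;
    private final int driveTargetCounts;   // tuned by trial at competition
    private final int tolerance;

    public StagedAuto(int targetCounts, int tolerance) {
        this.driveTargetCounts = targetCounts;
        this.tolerance = tolerance;
    }

    /** One control-loop iteration; returns the drive speed command. */
    public double update(int encoderCounts, boolean armMoveComplete) {
        switch (stage) {
            case DRIVE:
                if (Math.abs(driveTargetCounts - encoderCounts) <= tolerance) {
                    stage = Stage.PLACE;   // close enough: stop and place
                    return 0.0;
                }
                return 0.5;                // cruise toward the peg
            case PLACE:
                if (armMoveComplete) stage = Stage.DONE; // eject finished
                return 0.0;
            default:
                return 0.0;
        }
    }

    public Stage stage() { return stage; }
}
```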
Re: Autonomy: How Did You Guys Do It?
Quote:
You need to do a bit of filtering, but it's not a big deal.
Re: Autonomy: How Did You Guys Do It?
Our autonomous uses the encoders on the SuperShifter transmissions. The software uses the encoder values to actively make the robot drive dead straight, and the encoders also tell us reliably how far the robot has driven. Encoders on the arm allow reliable positioning of the arm. The claw motors let us adjust the attitude of the tube when we reach the peg and then eject it onto the target peg. We have six different programs, one for each peg height. At GSR we only ever ran the two for the top pegs, but we have the others just in case. We can also select a delay before we start, which proved very useful in eliminations at GSR when we were on an alliance with 175 and 176, which scored 3 ubertubes in many of our elimination matches. See http://www.youtube.com/user/FRCteam1.../3/drbPrGJlroI for a match (semifinal 2) where all 3 robots on our alliance and 2 of our opponents scored in autonomous. Our robot is the middle blue robot, which does not move until after the 4s delay we selected.
Noel
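One common way to get the "drive dead straight" behavior described above is to feed the difference between the left and right encoder counts back into the motor commands. This is a generic sketch with invented gains and names, not the team's actual code:

```java
// Generic sketch: proportional drive-straight correction from two drive
// encoders, plus a helper to convert counts to distance traveled.
public class EncoderDriveStraight {
    private final double kP;        // correction gain, tuned on the robot
    private final double baseSpeed; // nominal forward speed, -1..1

    public EncoderDriveStraight(double kP, double baseSpeed) {
        this.kP = kP;
        this.baseSpeed = baseSpeed;
    }

    /** Returns {leftSpeed, rightSpeed} for one loop iteration. */
    public double[] speeds(int leftCounts, int rightCounts) {
        double error = leftCounts - rightCounts;  // >0 means left side ahead
        double correction = kP * error;
        return new double[] { baseSpeed - correction, baseSpeed + correction };
    }

    /** Distance driven, averaging both sides. */
    public static double inchesTraveled(int leftCounts, int rightCounts,
                                        double inchesPerCount) {
        return (leftCounts + rightCounts) / 2.0 * inchesPerCount;
    }
}
```

The same averaged-counts distance is what lets the code stop reliably at a known point down the field.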
Re: Autonomy: How Did You Guys Do It?
Encoders on our drive train, encoders on our elevator, camera tracking, gyro angle correction, and a sonar sensor. It consistently scores the ubertube on the top peg every time.
Re: Autonomy: How Did You Guys Do It?
We decided that dead reckoning would be enough this year, given that the autonomous period is not very dynamic. Although our encoder code is done, we have had hardware problems and the encoders are not on yet. Thankfully, our robot is pretty balanced and drives very straight. We didn't get an auto mode in until yesterday in WI, and it only scored once (in a somewhat humorous fashion), but we will have one that should score pretty consistently week 4 at Midwest.
Once the hardware issues are fixed, the timed approach will be changed to a distance-based one, and a gyro will be added for turning around and getting ready. I don't think our robot's mechanisms act quickly enough to attempt a double tube in 15 seconds.
Re: Autonomy: How Did You Guys Do It?
We used the camera 100%. I have a feeling previous years' vision targets have given the camera a bad name. The retro-reflective tape used this year is simply awesome to track and is the only target in my experience that is not affected by the stage lighting used at the competitions. I hope that stuff is around for a long time to come.
Re: Autonomy: How Did You Guys Do It?
Our auton code drives straight for x seconds using gyro angle correction; rotates our arm a set distance using an encoder value; rotates the tube pitch; rolls the tube out onto the peg; and backs up.
We're working on putting up two ubertubes.
Re: Autonomy: How Did You Guys Do It?
We're using nothing but line sensors, and it's working excellently for us (backing up when it hits the horizontal piece of tape)!
Re: Autonomy: How Did You Guys Do It?
We're using a line tracker and ultrasonic rangefinder combo, with a gyro to keep us straight. In hindsight, the line tracker isn't really that useful.
Re: Autonomy: How Did You Guys Do It?
Our team was going to have encoders on our drive wheels, but we decided they wouldn't be that helpful for the most part. As a result, our autonomous is: drive forward X seconds, move the arm down for Y seconds, and retreat. We are using a makeshift encoder for the lift, so we will be able to make it go to the right height. Whether or not this code works will be determined on Thursday.
Re: Autonomy: How Did You Guys Do It?
Playback of recorded data from one of ten driver-selected .xls files stored on the cRIO. The file is selected during Disabled mode using buttons on one of the driver joysticks. The data is recorded during early practice matches to get a "gross" autonomous play accomplished, and then edited via MS Excel for any fine-tuning needed over the next few days. The recorded data includes joystick positions, button status, and PSoC switch status. It is written to a lossless queue (FIFO) at 20 ms intervals (the same rate at which it arrives from the Driver Station) and eventually written to a file in one fell swoop once the driver decides to stop recording data.
For matches, the desired autonomous play is selected by the driver. Playback data from the corresponding .xls file is then loaded into several arrays (all while in Disabled mode). At the start of Autonomous, the data is "played" out from those arrays into the appropriate subsystem VIs (drive, arm, etc.) at the same rate at which it was collected. Of course, there is some variability in the response of the mechanical system from match to match, but it is minimal. I guess you could call this a reasonably sophisticated open-loop system.

Each link in our two-jointed tube-handling arm is controlled via independent PID loops. Pre-identified angular positions of each joint are contained in a lookup table of positions (home, feeding station, floor pickup, top center, etc.) stored in a .ini file on the cRIO, similarly to the autonomous play files. A state machine monitors the status of buttons on the Driver Station and loads the angular positions into the PID setpoints when a state change occurs. The same state machine runs in both Autonomous and Teleop, since the arm control VI doesn't care whether the inputs are coming from a human being or from recorded data.

We had a pretty good success rate hanging ubertubes on the top peg at Pittsburgh using this system. The main problem we had was in getting our claw clear of the tube before backing away from the rack; we sometimes de-scored ourselves. There are some drawbacks to using the approach described, but as they say in math classes - the proof is left to the reader.
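Stripped of the file I/O and Excel editing, the record-and-playback core might look like the sketch below. All names are invented; the real system samples at 20 ms intervals and persists the samples to .xls files on the cRIO.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified record/playback sketch: sample operator inputs at a fixed
// rate into an in-memory tape, then replay them index-by-index during
// autonomous. File storage and Driver Station plumbing are omitted.
public class AutoRecorder {
    /** One 20 ms sample of operator input. */
    public static class Sample {
        public final double drive, steer;
        public final boolean trigger;
        public Sample(double drive, double steer, boolean trigger) {
            this.drive = drive; this.steer = steer; this.trigger = trigger;
        }
    }

    private final List<Sample> tape = new ArrayList<>();
    private int playIndex = 0;

    /** Call once per 20 ms loop while recording. */
    public void record(double drive, double steer, boolean trigger) {
        tape.add(new Sample(drive, steer, trigger));
    }

    /** Next recorded sample; holds the last one if the tape runs out. */
    public Sample play() {
        if (tape.isEmpty()) return new Sample(0, 0, false);
        Sample s = tape.get(Math.min(playIndex, tape.size() - 1));
        playIndex++;
        return s;
    }
}
```

Because the replay rate matches the recording rate, the mechanical response is roughly repeatable from match to match, which is what makes this open-loop scheme workable.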
Re: Autonomy: How Did You Guys Do It?
Our autonomous uses one sonar on the left and one on the front. Paired with our omnidirectional drive, the lane divider and driver station wall let us know exactly where we are. We also have a gyro to keep us oriented correctly; we were hoping to use two rangefinders up front to orient ourselves, but we never got that working fully. We are hoping to get a two-tube auto working Thursday at Lone Star.
Re: Autonomy: How Did You Guys Do It?
Our autonomous followed the line until it sensed the tee, then gunned it for the last iteration of the while loop and stopped, which put us right at the wall. The "gunning" worked well, except it should have been a little less energetic. Since we had holonomic drive, we just controlled via strafing to keep on the line, thus remaining normal to the wall. We started raising the arm to the limits at the start so that it was in position when we hit the wall. We scored 8/10 on the top middle peg in qualification runs and 100% in eliminations, except for one case where the arm was intentionally disabled. We could do the Y, but almost all the alliances allowed us to go straight. Too bad autonomous did not offer more points.
Re: Autonomy: How Did You Guys Do It?
This rookie programmer wrote autonomous mode on the last day at 9 PM :)
Our arm's highest extension is at roughly the height of the highest peg, and when we release the tube we don't have to back up in order to score, which made things easier. We navigate to the scoring grid using line sensors. I didn't program for the Y because I figured 1) we should have our pick of starting positions, so we can just choose the straight path, and 2) it would distract us from getting the easier stuff working.

Currently we are just running the elevator motor up all the time during autonomous, due to a weird problem with our limit switches in autonomous [speaking of which, would anybody like to Find the Bug that's making our limit switches work in teleop but not autonomous? :D I used C++.]. However, we recently switched our elevator motor to a more powerful one, which actually caused a belt to shear when it was run too far. Alarmed, I am now planning some careful testing and playing with the motor speed before we go out to a match and shear another belt. The idea is that if the elevator reaches the top just as we reach the scoring grid, we won't damage anything. We use the tape T to trigger scoring, which involves stopping the elevator motor and opening the gripper. We can drop the tube at just about any height and still score.

All very rudimentary, and I am particularly irked that the limit switches are still proving noncompliant. We didn't get very much [read: any] testing in of the whole system: at roughly 9 PM on ship night we got the line-following stuff working, and then I successfully wrote and tested a switch statement to ensure that we don't just drive right over the T before it has time to stop us. So theoretically it should all work: I am anxiously looking forward to our Week 4 regional to try it all out :D
Re: Autonomy: How Did You Guys Do It?
Quote:
And that sounds pretty awesome. I'm still working on getting time to work on our team's autonomous :(
Re: Autonomy: How Did You Guys Do It?
Quote:
It's in C++... I shall update my original post. Also, I posted about the limit-switch problem here. One of the best memories from this build season was running around on ship night [bag-and-tag night, actually], writing code for autonomous while hardware wanted to work on the robot too; meantime our programming laptop died and we had to shuffle the robot back and forth from the shop to our high-ceiling testing area... then getting the line-following code to work. I shrieked with joy [and caffeine] and promptly laid down another duct tape line for the robot to follow... who knew that you can get cuts from duct tape?
Re: Autonomy: How Did You Guys Do It?
Quote:
I had just seen that thread in the real-time forum thing, and I immediately thought to myself "it's C++ :/". I program in Java, so I may not be able to find it if it's a language-specific problem. And for our first regional, my mom needed the laptop I use to program for a conference she went to, because her Mac can't make databases, so I was stuck programming on the dinky driver station computer >.< the multi-purpose netbook thing. Still managed to get many things working, though.
Re: Autonomy: How Did You Guys Do It?
Quote:
I discovered at our scrimmage that finding a bit of fabric or other material that simulates the carpet for testing in one's pit is surprisingly hard. Also, our scrimmage had very pale grey tape lines... very nice high contrast, but not really what I expect at competition. Guess who's going down on the floor with the drive team with a screwdriver on day one to calibrate the light sensors.

Programming on the Classmate?! I applaud you. I don't have the patience to change default settings on that thing, lol, much less program on it... and the keys are so tiny. Try programming on a cell phone; I imagine it's about the same. We are actually planning on taking our desktop computer, my personal PC laptop, and my MacBook to regionals. The desktop is a backup in case the laptop has the fabled 64-bit incompatibility with WindRiver, and the Mac is for scouting use. We did about the same last year, and got told by some scouting team that we had the most computers they'd seen in a pit... said something about that being a scouting criterion, which makes one wonder...

The original programming laptop didn't actually die... but WindRiver just... stopped compiling. It gets to 13% [on ANY program] and then just sits there indefinitely. We reinstalled and everything. It wasn't even particularly slow to begin with...
Re: Autonomy: How Did You Guys Do It?
Quote:
We do have a small bit of carpet, but it's all curled and needs weight at each end, and it's only slightly larger than the robot, so it's not of much use. Even to sit or lie down on, it's very hard to use. I usually just sit on the floor, cross-legged, with MY laptop that I used to program this year (an actual laptop, not the DS) on my lap, right next to the chassis on a cart. And I think maybe I'll suggest a Mac to the scouting leader or whatever he's called, but pretty much everyone on the team is a PC person. And that sucks about your original programming laptop. We have this really old laptop in our closet with no wifi, and probably about 5 MB of RAM, that I was thinking of using to program one time, till I picked it up... very heavy for such a bad laptop...
Re: Autonomy: How Did You Guys Do It?
Quote:
At one point this season I was actually writing code on my Mac... couldn't compile because of not having the WPI lib stuff, but still... and I have a fond hope of someday compiling C++ for FRC on a Mac, in Xcode or Code::Blocks. I tried to do that earlier this season... after linking up most of the dozens and dozens of WPI library header files, I gave up, but I was CLOSE! Or at least I like to think so! On another note, has anyone here worked on an autonomous mode that scores more than one tube?
Re: Autonomy: How Did You Guys Do It?
Quote:
The command functions are written in LabVIEW and perform specific tasks reliably and accurately. The primary drive functions are drive_straight and drive_gyro_turn, and both were calibrated fairly precisely (drive is accurate to about 1/2" and the gyro is repeatable to about a degree, but is usually off by two degrees). There are also more command functions for elevator actions, but all of those set the get/set globals which are read by the elevator code elsewhere (do score, set state, etc.). The beescript system runs on top of the command functions; it reads a text file on the cRIO, interprets it, and calls the command functions (by reference). An example script would be: Code:
#score a tube 150 inches away 3.4 ft/sec
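A toy interpreter in the spirit of the beescript idea might look like the sketch below. The command names and argument formats here are invented for illustration; the real system is LabVIEW and calls its command VIs by reference.

```java
import java.util.ArrayList;
import java.util.List;

// Toy line-based script interpreter (invented syntax): each line names
// a command function and its arguments; '#' lines are comments. Instead
// of driving hardware, this sketch returns the decoded calls as strings.
public class ScriptRunner {
    public static List<String> run(String script) {
        List<String> calls = new ArrayList<>();
        for (String line : script.split("\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue; // comment
            String[] tok = line.split("\\s+");
            switch (tok[0]) {
                case "drive_straight":   // e.g. drive_straight 150 3.4
                    calls.add("drive " + tok[1] + "in @" + tok[2] + "ft/s");
                    break;
                case "gyro_turn":        // e.g. gyro_turn 90
                    calls.add("turn " + tok[1] + "deg");
                    break;
                case "score":
                    calls.add("score");
                    break;
                default:
                    calls.add("unknown: " + tok[0]);
            }
        }
        return calls;
    }
}
```

The appeal of this style is that a new autonomous play is just a text file FTP'd to the cRIO, with no recompile.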
Re: Autonomy: How Did You Guys Do It?
Line trackers. They work very well if you calibrate them at the beginning of each day. Once we got ours right, we scored in our last 6 matches without flaw.
Re: Autonomy: How Did You Guys Do It?
We use a gyro, one encoder on our drivetrain, an IR sensor on our roller claw, and a 10 turn potentiometer on our arm. All of them are used in autonomous, but only the IR sensor and the pot are used in teleop.
On the programming side of things, we've got P loops for the encoder and gyro, a PD loop for the arm, and a state machine for the roller. We push all the autonomous commands to a queue with a timeout. If the target positions for the subsystems aren't achieved within the timeout, we skip to the next command. This helps ensure that the robot doesn't get stuck in a control loop and is always doing something.
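The command-queue-with-timeout idea can be sketched roughly as follows. All names are invented; the real code also runs the P/PD loops that drive each subsystem toward its target.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: each queued command runs until its subsystem
// reports "on target" or its timeout expires, then the next command
// starts, so the robot never stalls in a loop that can't converge.
public class CommandQueue {
    public interface Command {
        void execute();         // one loop iteration of the command
        boolean onTarget();     // has the subsystem reached its setpoint?
    }

    private static class Entry {
        final Command cmd; final long timeoutMs; long startMs = -1;
        Entry(Command c, long t) { cmd = c; timeoutMs = t; }
    }

    private final Deque<Entry> queue = new ArrayDeque<>();

    public void add(Command cmd, long timeoutMs) {
        queue.add(new Entry(cmd, timeoutMs));
    }

    /** Call every loop with the current time; returns false when empty. */
    public boolean run(long nowMs) {
        Entry e = queue.peek();
        if (e == null) return false;
        if (e.startMs < 0) e.startMs = nowMs;   // first iteration: stamp start
        if (e.cmd.onTarget() || nowMs - e.startMs >= e.timeoutMs) {
            queue.poll();                        // done or timed out: move on
        } else {
            e.cmd.execute();
        }
        return true;
    }
}
```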
Re: Autonomy: How Did You Guys Do It?
We're first raising our arm with two PD loops and two accelerometers (it's a two-part arm) and a little bit on the wrist, then tracking the line with the Rockwells, stopping at the T; then it opens the claw, lowers the arms, and runs away, all at the same time.
The running away will hopefully be useful in competition, because it's been very useful for scaring all the PR folks. Does anybody else find the "practice" mode very useful for the 15-second cutoff? The awesome sounds are also nice :cool:
Re: Autonomy: How Did You Guys Do It?
We use a combination of line tracking and ultrasonic rangefinding. Our robot follows the line at about 40% speed and then slows to 20% when it reaches around 60 inches from the wall. We have a lot of fun starting our robot totally crooked and watching it straighten itself out. Also, we are using position control for the arm with the E4P encoder in the KOP.
Re: Autonomy: How Did You Guys Do It?
Quote:
We turn the robot into a rolling PID loop and just start executing commands to the PID controllers:
1. Arm Length & Tilt PID to HighMiddle
2. Ultrasonic & Gyro PID to 36" from the wall
3. Ultrasonic & Camera PID to the peg (tube 1)
4. Reverse Claw
5. Arm Length & Tilt PID to home
6. Arm Length & Tilt PID to Front Pickup
7. Run Claw Intake
8. Strafe Wheel Pods & Encoder PID in front of tube 2
9. Drive FWD until the claw detects a tube
10. Arm Length & Tilt PID to HighEndPeg
11. Ultrasonic & Gyro PID to 36" from the wall
12. Ultrasonic & Camera PID to the peg
13. Arm Length & Tilt PID to home
14. Reverse Claw
If you watch the video you can see each step: Video
Re: Autonomy: How Did You Guys Do It?
I used a for loop that drove forward and raised the arm at set speeds. Piece of cake.
Re: Autonomy: How Did You Guys Do It?
We had the whole grand plan: camera tracking, line following, PID loops. Unfortunately, none of our optical encoders could be coerced to work, despite constant replacements. And the camera made the cRIO crash.
So now we have a "drive forward, lift up, open grabber" auto.
Re: Autonomy: How Did You Guys Do It?
Are you using the newest camera? If so, you need to use a non-crossover cable to connect it to the cRIO to prevent it from crashing. That's how we fixed our problem.
Re: Autonomy: How Did You Guys Do It?
We had many combinations of sensors to make the auto work, but in the end we went with line sensors, an ultrasonic, and a pot on the arm.
We start the auto by line tracking at about 8 ft/s and stopping the base at some distance away from the wall. While it's line tracking, our arm moves to the high-goal preset. Once it reaches the target distance, it starts to orient the tube and de-suck it out of the rollers. And the arm is slightly lowered too. :) I tried using a gyro, but due to sensor drift the gyro reading would keep increasing even when the robot's not moving. I was told that it happens with all gyros and it shouldn't make a big difference in 15-20 second autos...
Re: Autonomy: How Did You Guys Do It?
Quote:
It will always drift slightly. If you're noticing big changes, it's likely that the robot was moving when the gyro initialized and biased. If that happens, it will constantly think it is in motion afterwards. Make sure that it's completely still during boot-up.
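The bias problem described here comes from integrating a rate gyro: any offset captured while the robot was moving at init becomes a constant drift in the heading. A simplified sketch (names invented) of calibrating the bias while the robot sits still, then subtracting it from every later sample:

```java
// Simplified rate-gyro integrator: average samples taken while the
// robot is known to be stationary to estimate the bias, then subtract
// that bias before integrating each rate sample into a heading.
public class GyroIntegrator {
    private double biasDegPerSec = 0.0;
    private double headingDeg = 0.0;

    /** Average rate samples taken while the robot is completely still. */
    public void calibrate(double[] restSamplesDegPerSec) {
        double sum = 0;
        for (double s : restSamplesDegPerSec) sum += s;
        biasDegPerSec = sum / restSamplesDegPerSec.length;
    }

    /** Integrate one bias-corrected rate sample over dt seconds. */
    public void update(double rateDegPerSec, double dtSec) {
        headingDeg += (rateDegPerSec - biasDegPerSec) * dtSec;
    }

    public double heading() { return headingDeg; }
}
```

If calibrate() runs while the robot is being carried, the "bias" includes real motion, and every later heading drifts — which is exactly the boot-up failure described above.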
Re: Autonomy: How Did You Guys Do It?
Quote:
On that note, how often do other programmers read their sensors during the autonomous and teleop periods? I've always read them at the beginning of every iteration, but I'm not sure if reading them every other iteration would make a huge difference.
Re: Autonomy: How Did You Guys Do It?
Quote:
We ubertubed 15/16 matches at Peachtree. Line tracking is working great for us :D
Re: Autonomy: How Did You Guys Do It?
We developed camera tracking and have used a gyro in previous years, but this year we found that the line sensors were very reliable as long as they are calibrated properly to the field. The lights on the front of the sensors make them very easy to calibrate manually when sliding the robot from side to side over the lines.
Our robot has KoP quad encoders on each of the mecanum wheel modules, so we know how far the robot has travelled forward. There is code to detect if an encoder is providing unreliable values and exclude it from the calculation. Typical problems are no signal due to wiring problems or physical encoder damage.

We do have an encoder on the elevator, but we have found the most reliable method is to drive the elevator to the top and check the current draw from the motor via CAN. As soon as the elevator gets to the top, the motor starts to stall, increasing the current draw, which we detect, and we stop the motor. This method also helps if we get too close and hit some other part of the peg grid structure, since it will stop the elevator rather than unsuccessfully trying to drive it through the grid to a preset point.

The code is written in Java in the autonomousPeriodic() method as a state machine, with timeouts on many states to allow for sensor failures. Summarizing: line sensors for direction, quad encoders on the drivetrain for distance travelled, current draw for the elevator motor, and timing for the pneumatics. Ultrasonic rangefinding would be a helpful addition.

The biggest problem was lack of access to a real field to test out the system. In particular, we had extremely limited access to the practice field during our Regional. Testing by watching the system perform during a match is a very public way of finding bugs, which I do not recommend for the thin-skinned.
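The stall-current trick for topping out the elevator might look like this simplified sketch. The threshold and names are invented, and the real current reading comes over CAN from the speed controller:

```java
// Hypothetical sketch of stall detection: run the elevator up at full
// command until the measured motor current jumps past a stall
// threshold (meaning the carriage hit the top, or the grid), then
// latch the motor off.
public class ElevatorStallStop {
    private final double stallAmps;   // above this, assume we hit a hard stop
    private boolean stopped = false;

    public ElevatorStallStop(double stallAmps) {
        this.stallAmps = stallAmps;
    }

    /** Returns the motor command for this iteration given measured current. */
    public double update(double measuredAmps) {
        if (measuredAmps >= stallAmps) stopped = true; // stall detected
        return stopped ? 0.0 : 1.0;                    // full up until stall
    }
}
```

The nice property, as described above, is that the same check protects the mechanism whether the elevator hit its own top stop or an unexpected piece of the scoring grid.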
Re: Autonomy: How Did You Guys Do It?
We recorded our joystick inputs to a file on the cRIO and played them back; it worked amazingly well. Here's a link to my program: http://www.chiefdelphi.com/forums/sh...ad.php?t=93720
Re: Autonomy: How Did You Guys Do It?
Sigh... I guess I will have to go the blind man's way... autonomy with only encoders. Hey, but at least that is easy to program.
Re: Autonomy: How Did You Guys Do It?
Did you write your own encoder code?
Re: Autonomy: How Did You Guys Do It?
Quote:
At least, this was my understanding.
Re: Autonomy: How Did You Guys Do It?
Quote:
http://decibel.ni.com/content/docs/DOC-1750 Figure 3 on this page may help clear things up.
Re: Autonomy: How Did You Guys Do It?
I really had no say in this. It was like: here's what we have, now do something amazing with it. All I can use is encoders and timers. If worst comes to worst, I will just use a timed system. My early tests have shown that there is too much noise coming from the encoders (just from the raw data). I think that is due to the chains on the drive train jerking around.
Re: Autonomy: How Did You Guys Do It?
I had originally planned on using the camera, but after about two weeks spent trying to get it to work, I found it just wasn't accurate or fast enough, and it even crashed sometimes. After this, I spent most of the time (whenever I wasn't programming the other aspects of the robot) working on a dead-reckoning auton with just timers. This turned out to be super hard, considering the rotational drift with mecanum wheels and the change in distance due to battery voltage. On bag-n-tag day, I eventually just tried line tracking and we slapped the three photoswitches onto the front of our robot. Because of the time crunch (we literally got them on and calibrated at like 6 PM) and the fact that I couldn't figure out how to change the speeds in the pre-supplied line tracking code quickly enough, I just made my own: it's just a bunch of cases, one for each possible sensor combination, that output either straight, left, right, or stop, which are then interpreted based on time.
Whaddya know, it actually worked. After the line tracking to get to the pegs, it was just a matter of timing a sequence of raising the arm, tilting the tube, and lowering the arm. But seriously, line tracking is amazingly simple and effective. I didn't even have to do any special programming for the Y; I just have two different modes, one for tracking the left edge and one for the right edge. It was pretty consistent at DC (after some last-minute time calibration on Friday to get the tube on just right) and only failed us twice (one of which was during our second quarterfinal match :mad:).
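The "bunch of cases, one for each possible combination" scheme can be sketched like this. The pattern-to-command mapping is illustrative, not the poster's actual table:

```java
// Illustrative case table: pack the three photoswitch readings into a
// 3-bit value and map each pattern to a steering command. Timing-based
// interpretation of the commands is left out.
public class LineCases {
    public enum Steer { STRAIGHT, LEFT, RIGHT, STOP }

    public static Steer steer(boolean left, boolean mid, boolean right) {
        int bits = (left ? 4 : 0) | (mid ? 2 : 0) | (right ? 1 : 0);
        switch (bits) {
            case 0b010: return Steer.STRAIGHT; // centered on the line
            case 0b100:
            case 0b110: return Steer.LEFT;     // line off to the left
            case 0b001:
            case 0b011: return Steer.RIGHT;    // line off to the right
            case 0b111: return Steer.STOP;     // all three lit: the T
            default:    return Steer.STRAIGHT; // lost the line: hold course
        }
    }
}
```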
Re: Autonomy: How Did You Guys Do It?
KOP line sensors, an ultrasonic mounted on the front, a 10-turn pot on our telescope, and a 3-turn on our wrist. We use a state machine for the line following. The line sensors are mounted close enough together that at some points two sensors will read at the same time; if that happens it makes fine corrections, while if only an outside sensor detects, it makes a coarse correction. The state machine remembers what state it was in last if no lines are detected, so we can start almost 90 degrees off from the line, and once it reaches the line it will track it, even if inertia carries the sensors across the line, because it will keep correcting back until it finds the line again. It detects a fork if it sees lines on both outside sensors but not the center one. If it finds one, it drives straight ahead for a short time, then turns either right or left and goes straight until it meets up with a line again. It requires the middle sensor to read carpet so that it doesn't try a fork at the foot of the line or if the driver comes at the line at an extreme angle.

The entire line-following sequence is built into a LineFollower class, an object of which is passed to both the autonomous code and the teleop code, so the driver can use it any time during a match. The LineFollower keeps its own state machine, so the auto code doesn't have to deal with finding forks or whatnot, and if a fork isn't registered, the auto code doesn't even care. The auto code just calls FollowLineStraight or FollowLineFork until the front ultrasonic sensor reads about two feet, at which point the scoring sequence is executed. At the beginning of auto, a PID loop puts the telescope and wrist to the correct positions, and once the robot gets close enough, the wrist tilts down, the telescope drops, the rollers outtake simultaneously, and the robot backs off.
The score sequence constants are held in a config file available to both teleop and auto, so an identical sequence can be called via the codriver's trigger. Our alliance had three robots with line following, so we took the fork while our partners took straight. We got triple ubertubes in a couple of elim matches.
TL;DR: our auto pwnz. Funny story about pregame checklists: one match, we accidentally started off in high gear, which the constants were never tuned for. At the beginning of autonomous, we were alarmed to see our robot hurtle at us at nearly full speed (14 fps). At the scoring distance, it neatly stopped, hung the tube, and daintily pulled away. This all took about four seconds, which was far faster than the regular low-gear attempts. After the match, our team captain demanded of me a two-tube autonomous, now that I'd blown the "not enough time" excuse.
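A rough sketch of the fork-detecting, state-remembering behavior described above, with method and enum names invented:

```java
// Rough sketch of the LineFollower behavior: remember the last
// correction when all sensors read carpet, steer coarse when only an
// outside sensor sees tape, and flag a fork when both outside sensors
// see tape while the center one does not.
public class LineFollower {
    public enum Action { STRAIGHT, FINE_LEFT, FINE_RIGHT,
                         COARSE_LEFT, COARSE_RIGHT, FORK }

    private Action last = Action.STRAIGHT;

    public Action update(boolean left, boolean mid, boolean right) {
        Action a;
        if (left && right && !mid)  a = Action.FORK;         // the Y split
        else if (left && mid)       a = Action.FINE_LEFT;    // two sensors lit
        else if (right && mid)      a = Action.FINE_RIGHT;
        else if (left)              a = Action.COARSE_LEFT;  // outside only
        else if (right)             a = Action.COARSE_RIGHT;
        else if (mid)               a = Action.STRAIGHT;
        else                        a = last;  // no tape: keep last correction
        last = a;
        return a;
    }
}
```

Remembering the last action is what lets the robot approach the line at a steep angle and keep correcting back even after inertia carries the sensors across the tape.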
Re: Autonomy: How Did You Guys Do It?
Our robot uses mecanum drive with encoders on each wheel and a pot on the shoulder to tell us where the arm is. The code has a built-in script language for autonomous, so we can FTP a new script to the robot for every match (if needed). We also have code that records operator input and replays it. Both work pretty well, though we've not tested as much as we'd like this year; the weather-related delays during build season left little time.
We played with line following and with the camera. Our robot pretty much goes where you tell it to, so the line following was not necessary and slowed it down. The camera consumes a lot of CPU (for analysis) and communications bandwidth and did not seem worth it this year.
Re: Autonomy: How Did You Guys Do It?
I believe our first autonomous was merely timed commands to different motors and solenoids. However, that was quickly scrapped when we found out the effectiveness and simplicity of line tracking.
Equipment:
- 2 "eye" sensors (KOP)
- 4 US Digital quadrature encoders (KOP, on the wheels)
- 1 "string" linear potentiometer, not KOP (not sure of the manufacturer), for arm height
- 2 solenoids, probably KOP (telescoping arm, aka "extendorizer")

We have two "eye" sensors tracking the line, simply dragging one side of the robot to stay on track. While the robot is moving along the line, at timed intervals (I believe), our arm moves up, stopping at a certain "string" pot value. At another time interval afterward, we "power" (not entirely sure of the terminology) our pneumatic telescoping arm to reach up to the top row. The robot is stopped by reaching a certain encoder value. Two rollers on the bottom and top of our claw eject the tube, while a "jaw" on our claw lowers, allowing a quick ejection. Simultaneously, the arm is retracted to lower it out of the way. Once this operation is complete, the robot lowers the arm to a certain pot value, simply backs up to another encoder value, and then stops.

Commands can be sent to the robot via FTP with a text file, which is parsed. In the text file, available commands are vector or strafe driving (we have mecanums), retraction of the arm, and arm position. You can also set the encoder value you want to stop at, and the motor speed.

Our last Ann Arbor District (last week) match, with the autonomous working perfectly (we're the nearest red robot): http://www.youtube.com/watch?v=TfGzUebFtfM
Re: Autonomy: How Did You Guys Do It?
Quote:
EDIT: see this thread, specifically this post and this post.
Re: Autonomy: How Did You Guys Do It?
We used light sensors for line tracking, encoders for auto-correction, and an encoder on the arm for height determination. It's never failed. It's also kind of showy since instead of placing the tube it throws it down onto the peg.
Re: Autonomy: How Did You Guys Do It?
I used a gyro on the base to keep it straight, an encoder for distance, and a second gyro to determine arm angle. I'm hoping to rig up another encoder so as to have two straight-correcting sensors and average the two corrections, or use one as a safety net in case one malfunctions.
Re: Autonomy: How Did You Guys Do It?
I was told by several older members and mentors to abandon the use of sensors altogether because of our history with sensors. Essentially: don't use them because we've never used them. What I thought was hilarious about this discussion was that half of these guys were mechanical, and couldn't tell a potentiometer from an encoder.
</rant> The autonomous program I've been using in San Diego and Los Angeles was dead reckoning with timers. It worked about 70% of the time: aim for the middle peg, score on the left peg. I plan to have a basic autonomous program using the encoders and gyro; this will at least go straight. I will test and implement it this week in Utah. Another variation of this program will attempt to put up two tubes, but this is a long shot... If we make it to STL, I want to mount an ultrasonic sensor for distance sensing.

Sensors on James Bot:
- Potentiometer on the arm shoulder
- Encoders on the drivetrain
- Gyro on the drivetrain
- CAN bus current sensors (used in our roller claw to sense tubes)

I've had the KOP light sensors mounted, but 2/3 were mangled during practice on concrete, and the other one doesn't seem to calibrate. I have abandoned these.
Re: Autonomy: How Did You Guys Do It?
Quote:
We used encoders on all 4 wheels, a pot on the arm, and an ultrasonic sensor on the front and rear. Our bot simply rolled forward X feet (measured by the encoders), raised the arm to the 9' height (measured by the pot), released the ubertube, and backed off some. We used the ultrasonic sensors to align for minibot deployment. Reliable sensors are a HUGE advantage in autonomous mode and can be used to automate certain movements in teleop mode (like releasing a tube and lowering an arm in perfect sequence). If you understand them, program them correctly, and test them thoroughly - USE THE SENSORS! HTH
Re: Autonomy: How Did You Guys Do It?
I'm less interested in how you did it; what I want to know is what kind of mishaps occurred during the autonomous period.
One match, instead of just line sensing, our robot drove straight backward and rammed another robot, pushing them into their scoring zone and giving my team a red card. Sooo unexpected. :eek:
Re: Autonomy: How Did You Guys Do It?
Quote:
At the correct distance we stop, let go of the ubertube, and lower the arm. Then we back up, sending equal speed signals to the motors; at 13 seconds in we lower the arm. At our first district we never missed a top-level ubertube score. We did not back up there, and we almost dropped the tube because when the arm fully retracted we hit the middle peg. So we added the backing up before district competition #2. We had a couple of misses there because the joystick was set for the middle Y tape position and we got too close; the arm joint caught on the middle peg and shook off the tube. At MSC we did not miss, except for when a partner robot veered into us, and one other time when, in an end-of-day Friday match, the tape was screwed up and the robot veered off. Our driver now checks the tape on the floor before every match.
Re: Autonomy: How Did You Guys Do It?
Quote:
We did not know what they were working on in the timeout, but I assume the autonomous program for that robot was a part of it. At MSC one of our alliance partners did not use the tape but instead lined up right beside our robot. They veered into ours, knocked it off the tape, and it missed. I saw a lot of teams doing this there (I don't know how they were programmed) and I was not impressed with it. Some worked well, but most missed, and many stopped an alliance partner from scoring.
Re: Autonomy: How Did You Guys Do It?
- 1 encoder on SuperShifting transmissions
- Transmissions in low gear
- 1 gyro with semi-aggressive PID control for "drive straight"

1. Finely tune the encoder distances.
2. DO NOT turn the robot on until it is set down. If the gyro inits before it's set down, then it's game over.
3. Push the bot backwards a little so that the chains are pre-tensioned. This should help alleviate drive-straight problems from lurching.
4. Ramp up the speed rather than going 100% full blast.
5. Dead-reckon to the peg.

Once it was tuned, we had 9 straight auto-modes score.
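The gyro "drive straight" with ramp-up described above can be sketched as follows, with all gains invented: a proportional heading hold from the gyro plus a per-loop speed ramp so the initial lurch doesn't throw the heading off.

```java
// Generic sketch: proportional heading hold from a gyro, with the
// forward speed ramped up per loop instead of jumping to full blast.
public class GyroDriveStraight {
    private final double kP;          // heading-hold gain
    private final double rampPerLoop; // speed increase per iteration
    private double speed = 0.0;

    public GyroDriveStraight(double kP, double rampPerLoop) {
        this.kP = kP;
        this.rampPerLoop = rampPerLoop;
    }

    /** Returns {left, right} motor commands for one loop iteration. */
    public double[] update(double headingErrorDeg, double targetSpeed) {
        speed = Math.min(targetSpeed, speed + rampPerLoop); // ramp, don't lurch
        double turn = kP * headingErrorDeg;                 // steer back on course
        return new double[] { speed + turn, speed - turn };
    }
}
```

Pre-tensioning the chains (step 3) matters because any slack taken up asymmetrically at launch shows up as an immediate heading error for this loop to fight.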
Copyright © Chief Delphi