What to do with autonomous
What are your plans for autonomous this year? Will you choose to track the IR beacon? If so, how do you plan on finding your ball without knocking over the opponent's? The beacon is in between the two, right?
Line tracking? Dead reckoning based on timing? Dead reckoning based on encoders? Dead reckoning based on gyros and accelerometers? Something else entirely? Whatever it is you are doing, please explain. Our team is doing an encoder-based dead reckoning system somewhat similar to what Wildstang did last year, only it is expected to be much more precise. We have a separate PIC18F252 dedicated to nothing but counting encoder pulses and computing the trig transforms necessary to figure out where we are. We have figured out a rather clever way of doing trig functions extremely fast, but don't expect any leaks on that. The IFI processor will take the position data from the secondary processor and run a rather complex control algorithm to make the robot follow preprogrammed paths. |
Re: What to do with autonomous
Don't do line following! I cannot emphasize enough how slow line following really is. We tweaked our bot enough last year that we got it to do it about as fast as you could, and it still took the entire 15 seconds. Sure, it won us Driving Tomorrow's Technology, but it was the stem of many problems.
Cory |
Re: What to do with autonomous
The infrared beacon is in between the two balls, so yours is always between the robot and the beacon, and your opponent's is on the opposite side of the beacon.
|
Re: What to do with autonomous
Quote:
BTW, we were not computing our trig in the conventional way. We employed many optimizations on these calculations that I would consider standard for this situation, and the result was that converting our heading and distance to an (x,y) coordinate took only a few clock cycles. |
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
That is true; however, the problem we saw wasn't the sensors. It was that if you wanted to run the robot at a speed significant enough to get anywhere quickly, you would overrun the line and never be able to get back. Although now that I think about it, since the line is far straighter this year, you might be able to get it to work well. I would still lean toward another mode, however, especially with these IR sensors. It seemed as if Woodie was hinting that we would *need* to be able to use sensors in the coming years (or was it Dave?).
$0.02 |
Re: What to do with autonomous
Quote:
If we are duplicating your design, I promise, this is not intentional. I am the one responsible for the design of our system, and if it is turning out like yours, that's probably because we followed similar decision paths and came to similar conclusions about how to effectively implement such a system. |
Re: What to do with autonomous
I believe it was Dave who hinted at the IR use. I am not sure what our team is planning yet, if anything. We probably won't go for the ten-point ball. We may use dead reckoning to move under the ball drop, for a possible bot design for catching the balls. Well, that's if we can figure out dead reckoning, seeing as our main programmer left us. If anyone can help me learn more about it, it would be greatly appreciated.
|
Re: What to do with autonomous
Since the line is much less "curvy" this year, line following could be optimized to go faster than last year. HOWEVER, I would strongly recommend against it if possible. In many applications, not just FIRST, I have tried line following and have not been satisfied with the results. Most of the time it is either too slow or loses the line too easily. Granted, there are applications where line following IS a very viable option, but I'm pretty sure this is not one of them, especially since FIRST gives you the sensors and code to use the IR beacons, and since they seem determined to make sure those are fully utilized in the future. So, of course, it is ultimately up to the individual teams, but personally, if you ask me, it is not your best option.
|
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
Quote:
As for the 6 inch accuracy, that comes from somewhere else. While we know our position with much better accuracy than that, we consider a waypoint hit if we are within a 6" square around it. This is necessary to prevent the robot from reaching a waypoint, overshooting it due to coasting, and then trying to creep closer and closer to it. I imagine you'll find that you need a similar concept as well. |
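The waypoint tolerance can be as simple as a box test. This sketch assumes the 6" figure means a 6-inch-square acceptance window (3" half-width) -- my reading of the post above, not confirmed code:

```c
#include <math.h>

#define WAYPOINT_HALF_WIDTH_IN 3.0   /* half of a 6" acceptance square */

/* Returns nonzero once the robot at (x, y) is inside the square window
 * around waypoint (wx, wy), so the path follower can advance to the
 * next waypoint instead of hunting for the exact point forever. */
int waypoint_reached(double x, double y, double wx, double wy)
{
    return fabs(x - wx) <= WAYPOINT_HALF_WIDTH_IN &&
           fabs(y - wy) <= WAYPOINT_HALF_WIDTH_IN;
}
```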
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
It looks like there is the potential this year to use the IR beacons for triangulation. I would LOVE to see a robot use triangulation instead of dead reckoning for navigation. A ball-collector robot could knock down both teams' 10pt balls, then navigate to their side of the field, run patterns to collect balls and return to the team's original side of the field - all during the autonomous period!
There are two ways to do this triangulation, and both assume two pairs of IR detectors constantly tracking an IR beacon, similar to the setup demonstrated during the kickoff, but for each side of the robot. Method one would use the intensity measurement from each beacon to estimate the distance to each beacon, solving position from a triangle with three known sides. Method two would use the angle between the two sensors to determine position. There would be multiple solutions with this method, but constantly tracking position could help out.

That said, both methods would blow in real life. The IR beacon's line of sight to the receiver could be blocked by another robot or by all the poles in the way. Not to mention calibration and multiple equation solutions... dead reckoning would be easier and more reliable. Still, it'd be cool to see.

MUCH more useful would be the ability to alter a preprogrammed (dead-reckoned / line-followed) course if another autonomous robot was in the way. This could be done with an IR sender/receiver pair, or more easily with Sharp GP2D12 IR distance sensors from Digikey. However, this sensing tech could be much more insidious, and used to lock onto and disrupt the motion of an opposing autonomous robot. Hmmmm....

-Brandon Heller
Team 449 Alum mentoring Team 931 |
Re: What to do with autonomous
Team 538 is also planning on using an "educated" positioning system similar to the ones mentioned by Rickertson of #1139 and the system used by Wildstang (#111, I think?) last year. I'm going to try to use accelerometers instead of shaft encoders, but fears about noise may eventually force us onto encoders as well.
Good luck to all the teams who choose to implement autonomous! I hope everybody gets the chance to develop a decent autonomous program. I think we learned last year that it isn't too fun to get partnered with a robot that's dead in auto mode! |
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
Quote:
And being that the code points it out for you more clearly than in PBASIC, I think more teams will be willing to give it a go. |
Re: What to do with autonomous
Quick question: shaft encoders? What the heck are those? Do you really need much more precision than triangulation, two accelerometers, and a gyro or two can give you? I'm planning on having one major positioning system with two sub ones, i.e. range-sensing triangulation, then triangulation based on angles, then accelerometer and gyro combined. Oh, and a quick way to do trig is to make a data table and interpolate. If you choose your points right, you will be off by 0.04% at the most and still only use something along the lines of 10 lines of code.
-Kesich |
Re: What to do with autonomous
I think that autonomous doesn't all revolve around knocking off the ball, there are some other interesting things that you could do.
|
Re: What to do with autonomous
The IR beacon looks like our team's best bet, because we had the best line follower at the competition last year, and it still took us all 15 seconds to get out of the starting zone. It did give us a slight advantage over the bots without an autonomous mode, though. Anywhere the autonomous mode gets you is helpful this time.
|
Misnomers - Please Don't Use
Quote:
Dead reckoning comes from "ded. reckoning," which is short for DEDUCED reckoning. Ask anyone with a pilot certificate, or someone who has taken a large-water boating class, and they should know the definition. The reason it is called deduced reckoning: you know where you started, you have an approximate idea of your heading, and you have an approximate idea of your speed. So by using TIMING, you can DEDUCE where you are after a certain amount of time. Do you KNOW where you are? NO! You only have an approximate idea via deduction. When you KNOW where you are, via sensing capability, it is NO LONGER dead reckoning; it is guidance and measurement. There is a HUGE difference. One method knows quite precisely where you are, and the other is just a decent educated guess.

The reason I am preaching is that it becomes very difficult to describe how your autonomous mode works when everyone uses incorrect terminology. Here's a quick example. Our team last year had a guidance system. We tried to describe it to some teams, and here was a typical conversation:

Other team: "You guys say you are fast, you must use dead reckoning."
Us: "No, we use a guidance system."
Other team: "Huh?"
Us: "We use an angular rate sensor and a measurement wheel to calculate our position on the field."
Other team: "So, you use sensors. You must be slow."
Us: "No, we're not slow."
Other team: "But I read on ChiefDelphi.com that dead reckoning is the fastest method. If you're not dead reckoning, you must be slow."
Us: "No, we're not slow. We use sensors so we can go as fast as possible."
Other team: "So, you follow the line? I hear that is REALLY slow."
Us: "No, we don't follow the line. We use angular rate sensors and a measurement wheel."
Other team: "Oooooohhhhh. You dead reckon. Why didn't you just say so?"
Us: "AAAAAAAAAAAHhhhhhhhhhhhhhhhhhhhh!!!!!!!!!!!"

This is pretty much an actual conversation from last year. Why does it ruffle my feathers so much? Because it lumped us (with our wonderful guidance system that was virtually impossible to throw off) in with the dead reckoners that could be thrown off with a well-placed bin or a low battery.

Sooooo... I would appreciate it if we could please start using correct terminology now, so that when I get to the competitions, I don't need to have a repeat of the above conversation. Whew, that was a long one. Now, let's see if I can get off the soapbox without getting hurt.

-Chris |
Re: Misnomers - Please Don't Use
Quote:
Quote:
After reading a good deal about the subject last year, I came to the conclusion that our positioning system was indeed dead reckoning, and that generally the crowd on Chief Delphi used the term incorrectly to mean dead reckoning based on timing information only. I think this excerpt explains it well (from this web site): Quote:
Quote:
|
Re: What to do with autonomous
I was thinking: why not just let the bot sit there, so that nothing can go wrong?
|
Re: What to do with autonomous
Please read the first few paragraphs of Chapter 1 from this paper
from this page: http://www-personal.engin.umich.edu/...b/position.htm The author gives a very good description of dead reckoning. It is also a nice reference for all the different types of autonomous movement. Let me warn you that it is rather long and, at times, mathematically complex. It is also somewhat dated (written in 1996), but most of the concepts still hold true. |
Re: Misnomers - Please Don't Use
I'm climbing back up on my soapbox here. This is going to be a long one, so if you want to cut to the chase, you can skip to the last paragraph.
The FAA definition of dead reckoning (as taken from the Gleim Commercial Pilot manual): "Dead reckoning is the navigation of your airplane solely by means of (human) computations based on true airspeed, course, wind direction and speed, groundspeed, and elapsed time. Simply, dead reckoning is a system of determining where the airplane should be on the basis of where it has been." Notice that "should" is underlined; it doesn't say determining where the airplane is, but where it should be. This definition pretty much agrees with the two that you presented, i.e. navigating without direct external reference. However, what is not spelled out in the definitions is the fact that you have no way to truly measure groundspeed, wind speed, and course; you can only estimate. The FAA also defines that any use of electronic navigation (i.e. any positioning system) to gain the above information is NOT dead reckoning; it is electronic navigation. If you try to use any electronic navigation aids during the dead reckoning portion of your pilot test, you will fail, as the FAA deems that NOT dead reckoning. Dead reckoning is ILLEGAL to use as a primary means of navigation under instrument flight rules, yet some inertial navigation systems are 100% certified for use under IFR as a primary means of navigation. Therefore, according to the FAA's definition (which is the one I subscribe to, being a commercially certified pilot and all), inertial navigation (and, I would argue, direct measurement of the ground using a wheel) is NOT dead reckoning.

Getting back to the IFR thing... IFR navigation is very much like our robot's autonomous mode. You know where you are when you enter the clouds, but while you're in the clouds you have NO reference to the outside world (via your eyes), so you can't 100% truly verify your position over the ground. (For the robot: you know where you start, but without sensors, you can't truly know where you end up.)

This is where the difference between dead reckoning and inertial navigation comes in. Let's say you're in solid IMC (instrument meteorological conditions). You're calculating your position via dead reckoning, and then the wind shifts. You would have NO IDEA that you're being blown off course into the mountain to the right of your route. (For the robot: you think you're going along in a straight line, when in actuality you're in a shoving match with another robot.) However, an inertial navigation system would detect the subtle lateral acceleration and keep you on course. (For the robot: the navigation system would detect the lack of motion and could do something about it.)

In other words, the generally accepted definition (amongst pilots, anyway) is that TRUE dead reckoning is BLIND between reference points, i.e. you only know where you start, where you THINK you're going, and how fast you THINK you're going. Then you calculate where you SHOULD be. Then, once you're not blind anymore (i.e. once you decide to reference the outside world), you can see how well you did, and then dead reckon to the next point. Even though you correct yourself at a few discrete points, you are blind between reference points; that is the definition of dead reckoning. If you have direct measurements of your course at any arbitrary instant in time, that is NOT dead reckoning.

So... if you are measuring position constantly (by any reliable means of measurement), that is NOT dead reckoning. If you ever turn on motors to move and stop measuring, that IS dead reckoning. I guess it depends on how you define "reliable means of measurement." I would agree that after a long period of time, any cheap INS would amount to dead reckoning. However, you mentioned that it was accurate to within an inch over the course of the autonomous period. I would say that you have a reliable means of measurement, and therefore are not dead reckoning.
(I'm climbing back down again, for the time being). |
Re: What to do with autonomous
hmm.
I'm thinking of using a combination of all three options. First have the bot detect the IR signal and head toward it until it gets about 3/4 of the way there; this would be for speed. Then turn off the IR detection; by then the robot should be on the white lines, so it can use those to get to the object. Afterwards, once the ball is knocked out, have the bot turn around and head for the balls. This way you get the speed of heading straight in IR mode as well as the precision of line walking. At least I'll try that ;-). Sounds like fun. |
Re: Misnomers - Please Don't Use
OK, first of all, I'd say there's enough disagreement (not just here, but in general on the web) that we're not going to change each other's minds. Because of that lack of consensus, though, it hardly seems fair to chastise others for using the phrase contrary to the FAA, especially since most of the writing on the subject with regard to robotics seems to differ from the FAA.
Quote:
If you put your robot up on a platform and spin your encoder wheel, your robot will think it has moved, won't it? Again, your robot assumed it was measuring groundspeed when it actually wasn't. Anyway, it's clear we disagree on the definition, so I don't expect you to change your mind and you shouldn't expect me to change mine. I understand your reasoning and accept it, but I still personally believe the prevailing definition of dead reckoning in the world of robotics to be the one I described. |
Re: Misnomers - Please Don't Use
Quote:
This is a good lesson in how two cultures can clash. As the kids on South Park would say, "we've learned something today..." Everyone should take this to heart, as this type of thing happens every day around the world. I thought Dave was an idiot for not knowing the definition of dead reckoning (based on the definition that I know), and he thought I was an idiot based on the definition that he knows. Are either of us idiots? No (okay, okay, I don't want to hear any snide remarks). However, our different backgrounds led us to think that way.

This is very typical in a global corporate setting. Each culture has different expectations about the way things are supposed to be, and if you don't understand that, you can think some pretty bad things about people without knowing the truth. For instance, our European office thought that our American engineers didn't care what they were saying because we don't keep eye contact 100% of the time, while most Americans would interpret 100% eye contact to mean that someone is angry or being intrusive. That is one small example, but the point is this: if someone does something that seems inappropriate or stupid to you, you may just need to do some investigating to determine the true intentions. You'd be surprised at what you'll find out.

Lastly, I'm sorry for getting wound up. I had one of the most frustrating days of my life here at work today, and my blood pressure was probably through the roof. But that doesn't mean I should go over the top here on you guys. I apologize.

Just a couple of side notes for fun: the wheel is MUCH more accurate than aircraft groundspeed calculations. An airspeed indicator can vary by a few knots in ideal conditions, and its accuracy changes with temperature and altitude (at 5000 ft on a hot day, your airspeed indicator can read as much as 15 knots low). Throw in the fact that winds-aloft forecasts are rarely more accurate than +/- 5 knots, the direction is rarely better than +/- 30 degrees, and your compass is allowed to be off by 5 degrees, and you can start to see how aircraft dead reckoning can hardly be compared to a measurement wheel and a nice angular rate sensor. I guess when you are used to the horrible inaccuracy associated with aircraft dead reckoning, you don't want your robot to be associated with it; too many bad connotations. |
Re: What to do with autonomous
Going back to the side discussion of line following (whew, wasn't that a long time ago in the thread ;)), perhaps speed and accuracy in line detection could be improved by adding braking to the program controlling the wheels.
|
Re: What to do with autonomous
Quote:
With the new controller you have a much faster sample rate, which means you won't go nearly as far in one sample and can therefore correct much faster. If you really want to be slick, use an array of optical sensors (say eight of them in a line). Then you can use a feedback loop to try to keep the line in the center of the array. This way your follower is more analog, rather than digital. |
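One way to get that "more analog" behavior from an 8-sensor array is to average the offsets of the lit sensors and feed the result into a proportional correction. A sketch with hypothetical spacing units and gain:

```c
#include <math.h>

#define NUM_SENSORS 8

/* Average offset of the lit sensors from the array center, in units of
 * sensor spacing (negative means the line is toward sensor 0). The lit
 * count is returned through *lit so the caller can detect a lost line. */
double line_offset(const int sensors[NUM_SENSORS], int *lit)
{
    double sum = 0.0;
    int count = 0, i;
    for (i = 0; i < NUM_SENSORS; i++) {
        if (sensors[i]) {
            sum += i - (NUM_SENSORS - 1) / 2.0;
            count++;
        }
    }
    *lit = count;
    return count ? sum / count : 0.0;
}

/* Proportional steering: the farther the line drifts from center, the
 * harder the correction. KP is a made-up gain you would tune on-robot. */
double steer_correction(double offset)
{
    const double KP = 0.4;
    return KP * offset;
}
```

Because the averaged offset is a fractional value rather than an on/off bit, the correction scales smoothly with how far off-line the robot is.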
Re: What to do with autonomous
Hi,
This is my first year in the FIRST competition as a mentor. I must say that, after reading so many posts, the first "15 seconds" seems to me to be the most interesting and challenging aspect.

I was an inertial navigation tech on US missile subs during the '80s and early '90s. We would leave port with accurate position information. Our inertial navigators (dead reckoning) would use three accelerometers mounted on a gyro-stabilized platform. Acceleration was integrated once to yield velocity north, west, and vertical. Acceleration was integrated a second time to yield displacement (distance) changes north and west. These were good systems. After several days we would obtain a position "fix" from the old NavSat system (a single satellite pass in a polar orbit), land-based radio triangulation (LORAN), or underwater topography. We were generally only a few hundred yards off at the very most... pretty neat. We could NOT use the three sources of position-fix information all the time, because the info was not always available: NavSats a few times a day, LORAN only if close to land, and topography every hundred miles or so.

Real-time constant triangulation with two IR beacons is pretty cool. If you can determine their directions, then you can triangulate positions very easily given known distances between them. I don't see a need for line following or dead reckoning in this case if your system is accurate. Likewise, dead reckoning with time, encoders, or distance-traveled estimates should maintain some relative accuracy for 15 seconds, with no need for triangulation or line following. Line following, from previous postings, can be slow; in fact, time could be lost trying to follow the line. However, it was shown in last year's competition that line following works and has a lot of potential for improvement. All in all, there are many possibilities for that first 15 seconds of autonomous operations.

We are experimenting with 4 Banner sensors arranged in a diamond pattern. The current goal is not to follow the line exactly but to stay near it. This is much like a ship at sea following the coastline: if you are heading north on the East Coast and lose sight of land, you gently steer to port (left) until land is visible. We believe it is possible for the robot to travel much faster using this approach than with typical line-following techniques. For those of you learning to drive: when you are traveling 50 mph and your right-side wheels slip off the road, the correct action is to let off the accelerator (no brakes) and gently steer back onto the road, then resume your speed and direction. We'll keep you posted on any new results or ideas.

Regards,
Chuck |
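The sub's navigator boils down to double integration, which is easy to sketch. This is a simplified Euler-integration toy assuming a 26 ms loop and leveled, bias-free accelerometers (the hard part of a real INS):

```c
#include <math.h>

#define DT 0.026   /* assumed sample period, seconds */

typedef struct {
    double vn, vw;   /* velocity north / west */
    double n,  w;    /* position north / west */
} InsState;

/* One sample of inertial navigation: integrate acceleration once for
 * velocity, then integrate velocity for position. Any accelerometer
 * bias also gets integrated twice, so position error grows with the
 * square of time -- hence the sub's periodic position fixes. */
void ins_update(InsState *s, double accel_north, double accel_west)
{
    s->vn += accel_north * DT;
    s->vw += accel_west  * DT;
    s->n  += s->vn * DT;
    s->w  += s->vw * DT;
}
```

The quadratic error growth is exactly why the noise fears mentioned earlier in the thread push teams toward encoders.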
Re: What to do with autonomous
Quote:
I'm curious about the diamond pattern. What is the reasoning behind this specific pattern? Also, how far apart are you placing them? And where? Are they at the edge of the 'bot? Centered in the front, rear, left & right of the 'bot? Or closer together? In the center of the 'bot? Toward the front? Last year we had a linear array of 6 or 7 sensors across the front of our 'bot. It worked. (We used a heuristic assumption that we would be turning either to the right or the left, and then used the sensor input to keep us on the line.) But we decided it wasn't fast enough, and we opted to instead do a blind swoop out and back up the ramp. |
Re: What to do with autonomous
Quote:
Did you mean non-standard? If you indeed meant standard, would you mind pointing us to some references? (We wouldn't mind, though, if you decided you wanted to share your non-standard methods too. ;)) |
Re: What to do with autonomous
Is there anyone using multiple autonomous modes? If so, I would like to talk about that... my team is doing that and I'm just curious.
|
Re: What to do with autonomous
Greg,
Looking down upon the diamond pattern, the top sensor is labeled North, the right sensor East, etc. The N-S sensors are 6" apart; the E-W sensors are 4 1/2" apart. The N, E, and W sensors are 2 1/4" from the diamond center, while the S sensor is 3 3/4" from it. The system is about 4" high. The mechanics say we cannot have the space in the dead center of the robot drive system, and that our line-tracking system can be placed slightly forward of center, so we consider the S sensor to represent the center of the robot (as close as possible). The distances and angles formed by the internal angles are subject to change, but this is a starting point. Assuming the sensor data can be sampled fast enough given the robot's rapid movement, it is possible to determine several things by examining the on-off states of the four sensors, provided you understand the starting position and orientation of the robot. Our team (4 students and myself) will discuss this configuration in more detail and compare its results to other sensor layouts. We need only a few data points along this path to follow it (parallel it) successfully... and of course we need to do this in less than 15 seconds. :-) Regards, Chuck |
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
Quote:
It's not a big deal, but I hate to not have a reason. :-) Thanks. |
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
Great minds think alike, I suppose... our team is running with the same idea as well...
|
Re: What to do with autonomous
We're (I'm) thinking about it. You know, if you dead reckon (using nothing or just wheel encoders), you're going to want to write two procedures, a left and a right, not to mention the dozens of other autonomous-mode strategies one could write.
|
Re: What to do with autonomous
Well, the first part of our autonomous mode entails using Dead Reckoning to reach our 24' arm halfway across the field and...::censored::.. :D Am I kidding?
|
Re: What to do with autonomous
Quote:
|
Re: What to do with autonomous
I've been living in a hole (my bedroom, studying for exams) for the past week or so. First off, a question about the uber-accurate Wildstang positioning system (WSP, perhaps?):
1. How were you able to account for wheel slipping? This would occur when your robot was pushed (while its wheels were locked) or was sliding down the ramp, for example. I'd imagine that such 'small' inaccuracies could add up to more than +/- 1 inch by the end of the match.

Back to our autonomous mode. I'm thinking (I'm the only programmer on our team) of using an IR-seeking / dead reckoning hybrid. If at any point the beacon is blocked, dead (sorry, read 'ded.') reckoning would kick in. It would be adaptive, so the robot would continue in that direction until the beacon can be seen again. The dead/ded. reckoning system would rely on a counter to determine how much longer to go forward, so as not to overshoot the target. I'm also thinking of having a collision-avoidance system: a set of sensors (sonar?) that would try to avoid obstacles (such as robots) by driving around them. It would also try to avoid the platform and the guard rail.

On a side note, herding balls will be interesting. I'd imagine that they'd go all over the place unless your robot has a neat device to keep them under control :yikes: Nevertheless, I'd like to hear more of other teams' autonomous ideas. |
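The beacon-or-counter fallback could be a tiny state machine along these lines. This is a sketch of the idea as I read it, with made-up names and a loop-count budget standing in for the counter:

```c
/* Hybrid navigation: steer toward the IR beacon while it is visible;
 * when it is blocked, hold the last known bearing and count down a
 * travel budget so the robot cannot overshoot the target. */
typedef enum { SEEK_BEACON, DEAD_RECKON, STOPPED } NavMode;

typedef struct {
    NavMode mode;
    double  last_bearing;   /* radians, robot frame */
    int     budget;         /* remaining blind loop iterations */
} HybridNav;

/* Called once per control loop. beacon_seen and bearing come from the
 * IR sensors; returns the bearing to command, or 0 when stopped. */
double hybrid_step(HybridNav *nav, int beacon_seen, double bearing)
{
    if (nav->mode == STOPPED)
        return 0.0;
    if (beacon_seen) {
        nav->mode = SEEK_BEACON;       /* beacon visible: track it */
        nav->last_bearing = bearing;
        return bearing;
    }
    nav->mode = DEAD_RECKON;           /* beacon blocked: go blind */
    if (--nav->budget <= 0)
        nav->mode = STOPPED;           /* budget spent: don't overshoot */
    return nav->last_bearing;
}
```

The budget would be set from how far the robot still expects to travel, resetting whenever the beacon reappears.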
Re: What to do with autonomous
We are working on a robot that hopefully will be able to do almost anything in autonomous mode. I have already put together the IR beacons and am working on the Banner sensors. I do have a question about the Banner sensors: are you supposed to run them through the 5V digital output? We tried doing that and nothing happened. I think they need 12V to run, but there seems to be no place to connect it.
|
Re: What to do with autonomous
|
Re: What to do with autonomous
Quote:
Just curious. |
Re: What to do with autonomous
Quote:
Quote:
Quote:
Like Mike said in a previous post, we didn't need a whole lot of precision. If we wanted to ram a stack (whether freestanding or the group on the ramp), we just had to make sure that the stack ended up in front of our robot. Our wheel slippage was: * minimal on the carpet in normal situations * slight climbing the ramp * measurable on the ice (HDPE) * negligible descending the ramp. Therefore, we didn't design programs that spent significant time on the ice. Our double-hit program (hit the left side of the stack, come back, hit the right side of the stack) consistently cleared a good portion of the bins, but was slightly inconsistent due to the slippage on the ramp. In situations where we came head-to-head with another robot, there was a good chance that we deflected during the collision and recorrected our heading to squeeze past and continue on. If it was the type of collision that stopped us dead in our tracks, that was a different story. Luckily, those types happened on the ice, which allowed our wheels to slip and not burn up the motors. Quote:
|
Re: What to do with autonomous
Quote:
But the basis of the algorithm was knowing our current position (from our custom circuit) and the position of our next waypoint, then using some trig to get the angle to the target. After we got the target angle, we figured out what angle we needed to turn our wheels by subtracting or adding the target angle and our robot's angle relative to the field (also from the custom circuit). This calculation was done each time we executed our main loop, which turned out to be 3-4x slower than the 26ms IFI loop. The theory was simple, but getting it to work consistently with the tools IFI gave us was difficult. And keeping all the PBASIC code readable and understandable was a big challenge. |
Re: What to do with autonomous
Quote:
I made some tests following the line -- actually, many of them. The best results came from using six sensors! Arrange them in a V, and put them as far as you can from the robot's center of rotation, keeping a distance of 6"-7" between the center sensors, so the robot can get by with only two or three corrections. In 30 runs we got there every time in between 7 and 8 seconds, never losing the line. The limit is your drive system! |
Copyright © Chief Delphi