Autonomous Planning
With the goal of making a robot fully autonomous (http://www.chiefdelphi.com/forums/showthread.php?t=84797):
How should robot actions be planned?
What sorts of actions should be planned? (how high-level?)
Here are a couple of examples of high-level things to be planned.
Where should I move next?
Is now a good time to kick, or will that robot get in the way?
Should I block or score?
Is it worth it to go over the bump?
Will I get penalized if I go into that section of the field?
Are any of these answerable?
What information would you need to answer them?
(For how to get the information, please post in the autonomous perception thread (http://www.chiefdelphi.com/forums/showthread.php?t=85072))
Jon Stratis
08-04-2010, 16:22
That's an extremely tough question... made even harder because you don't know what the game will be next year! Let me give you an example for this year's game:
Let's say you have a robot with the following qualities: a suction mechanism that firmly holds onto the ball regardless of how you drive around with it, and a variable-distance kicking mechanism.
Now, such a robot could, in theory, be pretty good at scoring balls from anywhere on the field. So, the robot's managed to find a ball, and it knows it has one. What does it do with it?
Well, that depends on what else it knows. Let's say you've managed to integrate encoders, accelerometers, and gyros with an algorithm that constantly runs in another thread to provide you with your exact current location and heading. Needless to say, that's no easy task... but without it, it's going to be incredibly hard to score that ball.
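For the localization piece, a minimal dead-reckoning sketch might look something like the following (all of the class and function names here are made up, and a real robot would want accelerometer fusion and drift correction on top of this):

// Minimal dead-reckoning sketch: integrate encoder distance along the
// gyro heading to track an (x, y, theta) estimate. Everything here is
// hypothetical; vision landmarks or an accelerometer would be needed
// to correct drift over a full match.
#include <cmath>

struct Pose { double x, y, theta; };   // field coordinates, heading in radians

class DeadReckoner {
public:
    DeadReckoner() : pose_{0, 0, 0}, lastDistance_(0) {}

    // Call periodically (e.g. every 20 ms) from its own task/thread.
    // distance = average of left/right encoder distances (feet),
    // headingRad = gyro angle converted to radians.
    void Update(double distance, double headingRad) {
        double delta = distance - lastDistance_;
        lastDistance_ = distance;
        pose_.theta = headingRad;
        pose_.x += delta * std::cos(headingRad);
        pose_.y += delta * std::sin(headingRad);
    }

    Pose Get() const { return pose_; }

private:
    Pose pose_;
    double lastDistance_;
};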
So now you have to find the goal you want to shoot at - easy, right? After all, they have big targets above the goals for you. And you aim at it, calculate the distance, adjust the kicker...
What happens if a robot is sitting there blocking that goal? Do you detect that? Do you change targets and line up to shoot at the other goal?
What happens if a defending robot is hurtling towards you from the other side of the field... does your robot know that? Does it hurry to get off a shot, or take the time to try to avoid the defending robot and line up for the shot later?
Before you can start to define how the actions should be planned, you need to know a fairly large number of things:
- What the game is
- how you want the robot to play the game
- what the robot's capabilities are
- How the game is played, what sort of robot-robot interactions can be expected
- what sort of information you'll need to make an intelligent choice on the field
Like any time you're writing code, you should break everything up into small, discrete chunks. Stringing those chunks together into robot actions and strategy really depends on how your robot is making its decisions and what sort of information it has available.
davidthefat
08-04-2010, 16:28
Balls to the wall, go for it, no regrets...
Radical Pi
08-04-2010, 16:47
A lot of these questions depend on knowing the current score. For example: should I block or score? If the other alliance has been scoring rapidly, go for blocking. If you are losing but the other alliance isn't scoring much, then you should go for scoring.
For the "is a bot blocking the goal" problem, a simple color detection below the target should be enough. If you detect a nice big rectangle of your team's color below the target, odds are you have an open goal.
A score counter could also be useful for the goal choosing. If the robot kicks and doesn't see a ball scored within a certain period of time regularly, maybe it should switch to the other goal.
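As a rough sketch of how the score check and the "goal looks open" check could feed one decision (the thresholds, names, and the scoring-rate estimate are all invented):

// Toy decision rule combining score differential and an "is the goal
// open" check from vision. Numbers are placeholders to be tuned.
enum Role { SCORE, BLOCK };

struct GameState {
    int ourScore;
    int theirScore;
    double theirScoringRate;   // estimated points per minute
    bool goalLooksOpen;        // big rectangle of alliance color seen below the target
};

Role ChooseRole(const GameState& s) {
    // If the other alliance is scoring quickly and we're ahead or close,
    // protecting the lead is worth more than one extra ball.
    if (s.theirScoringRate > 2.0 && s.ourScore >= s.theirScore - 1)
        return BLOCK;
    // Otherwise keep scoring, preferring whichever goal looks open.
    return SCORE;
}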
ideasrule
08-04-2010, 17:12
A lot of these questions depend on knowing the current score. For example: should I block or score? If the other alliance has been scoring rapidly, go for blocking. If you are losing but the other alliance isn't scoring much, then you should go for scoring.
The robot can get this info the same way a human can: by looking at the big TV screen. Possible?
theprgramerdude
08-04-2010, 17:23
The FMS could, in theory, be programmed to give robots access to the current score. Just an idea. Alternatively, a camera on the operator console could be pointed at the screen, and then it's just a contrast function to analyze the numbers for the score.
IMO, knowing where your robot is would be big. The camera could use contrast to determine what an object is, where it begins and ends, and where it's going. A ball against the backdrop of the field is very noticeable, so a virtual representation could be built from it. This would be essential to any long-term planning (i.e., more than 10 seconds) or any decent strategy.
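One very rough way to turn a camera detection into that kind of virtual representation, assuming you already have a pose estimate from somewhere (all of these names are made up):

// Convert a ball detection (bearing + estimated range from the camera)
// into field coordinates so a planner can reason about it later.
#include <cmath>
#include <vector>

struct Pose  { double x, y, theta; };   // robot pose from localization
struct Point { double x, y; };

Point BallFieldPosition(const Pose& robot, double bearingRad, double rangeFt) {
    double heading = robot.theta + bearingRad;
    return { robot.x + rangeFt * std::cos(heading),
             robot.y + rangeFt * std::sin(heading) };
}

// The "virtual field" is then just a list of recently seen objects,
// each timestamped so stale detections can be dropped.
struct TrackedBall { Point pos; double lastSeenSec; };
std::vector<TrackedBall> g_balls;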
reversed_rocker
08-04-2010, 17:41
Well, let's just say that we are going to field a fully autonomous robot for this year's game. Many games have the same theme involving a ball that needs to be picked up, thrown, tossed, etc., so some of the same ideas are likely to apply.
First thing: find a ball.
You could have four ultrasonic rangefinders across the front of the robot, spaced just under the diameter of a ball apart. You could have the robot spin until some object gives approximately the same distance on two, and only two, of the sensors; that would be a ball.
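The "exactly two adjacent sensors agree" test might look roughly like this (the tolerances and names are invented):

// Ball test for four forward-facing ultrasonic rangefinders spaced just
// under a ball diameter apart: exactly two adjacent sensors should
// report roughly the same, reasonably close distance.
#include <cmath>

bool LooksLikeBall(const double range[4]) {
    const double kMaxRange = 10.0;   // ft, ignore anything farther away
    const double kTolerance = 0.3;   // ft, "approximately the same"
    for (int i = 0; i < 3; ++i) {
        bool pairClose = std::fabs(range[i] - range[i + 1]) < kTolerance
                         && range[i] < kMaxRange;
        if (!pairClose) continue;
        // Make sure it is ONLY this pair (a wall would light up 3-4 sensors).
        bool othersFar = true;
        for (int j = 0; j < 4; ++j)
            if (j != i && j != i + 1 && std::fabs(range[j] - range[i]) < kTolerance)
                othersFar = false;
        if (othersFar) return true;
    }
    return false;
}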
Getting the ball
For this robot it would be difficult to guide the robot so that the ball hits a particular point to be picked up by a vacuum or a small ball roller, so I would suggest a double ball roller that runs as far across the robot as possible (I'm thinking a robot very similar to 1918 or 1986). When the robot finds something that it thinks is a ball, it would stop spinning and drive forward. On both sides of the robot you could have two phototransistors lined up parallel to the ball roller, about 1.5 to 2 inches inside the frame. This way the robot could tell when it has a ball and approximately where on the robot the ball has ended up (we use the same sensor to detect when a ball is in our vacuum; it's easy to use and very reliable).
Shooting the ball
Since the phototransistors aren't that accurate, you would have the code split the ball roller into three sections: left side of the robot, middle of the robot, right side of the robot. The robot would then spin until the camera sees the goal. The gyro would have to be set at the beginning of the match so that the robot knows which side of the field to shoot at. Once the robot sees the target, you can line up your shot using the camera again and fire. Then you start over with the ball-collection phase of the code.
Special considerations:
This would take some playing around with; you would probably have to throw in some timing aspects so that the robot doesn't get stuck on one part of the code. Things like "if you saw a ball 10 seconds ago and you haven't picked it up, go back to finding balls", or "if you don't have a ball anymore, go back to finding balls", or "if it takes you more than 5 seconds to find the goal, drive forward and try again". The ultrasonic rangefinders could also be used for basic driving maneuvers: if more than two of them see an object less than 3 feet away, turn around.
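Those timing rules amount to a state machine with timeouts; a bare-bones sketch using only the timeouts mentioned above (everything else is a placeholder):

// Bare-bones autonomous state machine with the timeouts described above.
// "now" is match time in seconds; the Do*() calls stand in for the real
// drive, roller, and kicker code.
enum State { FIND_BALL, GET_BALL, FIND_GOAL, AIM_AND_SHOOT };

void Step(State& state, double now, double& stateEntryTime,
          bool seeBall, bool haveBall, bool seeGoal) {
    switch (state) {
    case FIND_BALL:
        if (seeBall) { state = GET_BALL; stateEntryTime = now; }
        break;
    case GET_BALL:
        if (haveBall) { state = FIND_GOAL; stateEntryTime = now; }
        else if (now - stateEntryTime > 10.0)   // saw a ball 10 s ago, never got it
            { state = FIND_BALL; stateEntryTime = now; }
        break;
    case FIND_GOAL:
        if (!haveBall) { state = FIND_BALL; stateEntryTime = now; }
        else if (seeGoal) { state = AIM_AND_SHOOT; stateEntryTime = now; }
        else if (now - stateEntryTime > 5.0)    // can't find the goal: drive forward, try again
            { /* DoDriveForward(); */ stateEntryTime = now; }
        break;
    case AIM_AND_SHOOT:
        if (!haveBall) { state = FIND_BALL; stateEntryTime = now; }
        else { /* DoAimAndKick(); */ }
        break;
    }
}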
Eagle33199, you've brought up some very good points.
Perhaps we need a standard way of describing a robot's strategy.
What are the roles a robot can play in this game?
Scoring
Blocking
Passing
What general situations can occur?
I guess this has to be divided into properties:
What region of the field am I in?
How available are the gamepieces?
What is the current score?
Are there any 'bots blocking me?
Are there any 'bots helping me?
So, assuming those roles are discrete, that means you have 5 factors (so far) in choosing between what role to play.
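One way to make those factors concrete is to give each role a score and pick the highest; the weights below are placeholders to be tuned, not a recommendation:

// Toy role scorer over the factors listed above. All weights are
// placeholders to be tuned (or learned) later.
enum Zone { NEAR_ZONE, MID_ZONE, FAR_ZONE };
enum Role { SCORE, BLOCK, PASS };

struct Situation {
    Zone zone;
    int ballsNearby;
    int scoreMargin;          // our score minus theirs
    int opponentsBlockingMe;
    int teammatesHelpingMe;
};

Role ChooseRole(const Situation& s) {
    double score = s.ballsNearby * 2.0 - s.opponentsBlockingMe * 1.5
                   + (s.zone == NEAR_ZONE ? 3.0 : 0.0);
    double block = (s.scoreMargin > 0 ? 2.0 : 0.0) + s.opponentsBlockingMe * 1.0;
    double pass  = s.ballsNearby * 1.5 + (s.zone != NEAR_ZONE ? 2.0 : 0.0)
                   + s.teammatesHelpingMe * 1.0;
    if (score >= block && score >= pass) return SCORE;
    if (block >= pass) return BLOCK;
    return PASS;
}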
davidthefat
08-04-2010, 17:53
Jack of all trades: be the Chuck Norris of robots. That's your ultimate goal. Since you are autonomous, you really can't communicate with your team, so you have to kick $@#$@#$@# yourself.
reversed_rocker
08-04-2010, 18:05
I was thinking that the robot would play best in a passing role; something where you wouldn't necessarily have to be accurate but could still be useful. Defense, I think, would be hard, because more likely than not it will be necessary to track other robots. Tracking robots would be hard because they can be so different; what happens when your robot is looking for rectangles and it's up against a robot shaped like an octagon?
David, I think you're missing the idea of inter-robot communication:
Robots can send data about what they're doing, in a standard form, to other robots, so that robots may function as an alliance.
reversed_rocker, I feel like I overlooked your posts.
Could you copy your sensor ideas to the Autonomous Perception (http://www.chiefdelphi.com/forums/showthread.php?t=85072) thread?
You said the passing feature would be the easiest to implement, because you would not need to track other robots?
I think you're right.
I think it also narrows down the information we need to two things:
What zone am I in
Where are the gamepieces around me?
We could probably say that a robot should only be in the "passing" role if it is in the middle or far zone, and there are game pieces near.
I just realized that there are situations where a robot does not fit any of the three roles I've mentioned so far. It's when the robot is moving to a different area because there's nothing to do where it is.
Although it usually only happens once or twice in a match, it's an important strategic decision. (If you're scoring, and you run out of game pieces, are you supposed to just sit there and wait until another 'bot brings you some more?)
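That rule, plus a "relocate" role for the nothing-to-do-here case, fits in a few lines (zone names and the exact conditions are just illustrative):

// Passing-role rule from above, plus a RELOCATE role for the case where
// there is nothing useful to do in the current zone.
enum Zone { NEAR_ZONE, MID_ZONE, FAR_ZONE };
enum Role { PASS, SCORE, BLOCK, RELOCATE };

Role PickRole(Zone zone, int gamePiecesNearby) {
    if (gamePiecesNearby == 0)
        return RELOCATE;                        // go somewhere with work to do
    if (zone == MID_ZONE || zone == FAR_ZONE)
        return PASS;                            // feed balls toward the near zone
    return SCORE;                               // in the near zone with a ball available
}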
davidthefat
10-04-2010, 02:38
I want it to be so high-level that the only thing that gets called in teleop mode is Pwn();. It could possibly take in which button is triggered on a custom button panel that has multiple buttons for the different types of strategies, and a manual override on some kind of button with a flip-up cover would be cool.
I want it to be so high-level that the only thing that gets called in teleop mode is Pwn();. It could possibly take in which button is triggered on a custom button panel that has multiple buttons for the different types of strategies, and a manual override on some kind of button with a flip-up cover would be cool.
Hey, isn't that semi-autonomous? :)
davidthefat
10-04-2010, 12:26
Hey, isn't that semi-autonomous? :)
You CAN think of it like that, but it's like saying: do you want to use a nuke, a conventional missile, or a machine gun? How much PWNage do you want? ;)
No, I'm just giving you a bad time.
I really like the idea of giving higher-level methods of control to the drivers.
However, I think that could be used to supplement the robot, not as a primary form of making decisions.
Perhaps humans should simply be used as data acquisition devices.
Radical Pi
10-04-2010, 13:45
Perhaps humans should simply be used as data acquisition devices.
Push a button on a score, tell the robot how many opponents are in each zone, stuff like that? It could be very useful information
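A sketch of that operator-as-a-sensor idea: the buttons and dial only update a little world-model struct, and the planner reads from that (the names here are invented, not a real WPILib API):

// Operator-as-a-sensor sketch: human input only updates the robot's
// picture of the match; the planner still makes the decisions.
struct WorldModel {
    int ourScore = 0;
    int theirScore = 0;
    int opponentsInOurZone = 0;   // set with a 0-3 dial on the console
};

// Call each loop with the current button/dial readings; edge-detect the
// score buttons so holding one down doesn't count a goal twice.
void UpdateWorldModel(WorldModel& w, bool scoredBtn, bool concededBtn, int zoneDial) {
    static bool lastScored = false, lastConceded = false;
    if (scoredBtn && !lastScored)     ++w.ourScore;
    if (concededBtn && !lastConceded) ++w.theirScore;
    lastScored = scoredBtn;
    lastConceded = concededBtn;
    w.opponentsInOurZone = zoneDial;
}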
nathanww
11-04-2010, 14:57
You CAN think of it like that, but it's like saying: do you want to use a nuke, a conventional missile, or a machine gun? How much PWNage do you want?
So what you're actually thinking of is more like
void Pwn(float pwnLevel) {
My concern is that this might not give you enough precision. I'd personally go with
void Pwn(double pwnLevel) {
<seriousness>
Team 1678 has done some research on the kind of "operator as a data source" model (http://www.chiefdelphi.com/media/papers/2275) that kamocat is describing, as well as a system for autonomous/hybrid action planning (http://www.chiefdelphi.com/media/papers/2365) (hybrid refers generically to receiving basic game information from an operator, not "hybrid mode"). Essentially, the planning system consists of a component that "abstracts" input from sensors and/or an operator, "promoters" which respond to certain abstract conditions, and "payloads" which are instruction sets attached to the promoters and which actually contain the robot's response to a certain condition. By altering "weighting factors" attached to the payloads and promoters, the overall tactics of the robot can be changed without needing to change the content of the payloads (i.e., a robot can be made more aggressive or more defensive by changing parameters that control when aggressive and defensive modes activate). In simulation (and potentially on an actual robot during the build season) a genetic algorithm can also be applied to the parameters to allow for machine learning.
Unfortunately, we have never actually tested this system in competition due to the inherent risk (since we only go to one regional per year) and the opportunity cost of developing it, since we have a high rate of turnover on our programming team.
Team 1678 has done some research on the kind of "operator as a data source" model (http://www.chiefdelphi.com/media/papers/2275) that kamocat is describing, as well as a system for autonomous/hybrid action planning (http://www.chiefdelphi.com/media/papers/2365) (hybrid refers generically to receiving basic game information from an operator, not "hybrid mode"). Essentially, the planning system consists of a component that "abstracts" input from sensors and/or an operator, "promoters" which respond to certain abstract conditions, and "payloads" which are instruction sets attached to the promoters and which actually contain the robot's response to a certain condition. By altering "weighting factors" attached to the payloads and promoters, the overall tactics of the robot can be changed without needing to change the content of the payloads (i.e., a robot can be made more aggressive or more defensive by changing parameters that control when aggressive and defensive modes activate). In simulation (and potentially on an actual robot during the build season) a genetic algorithm can also be applied to the parameters to allow for machine learning.
Neat!
I'll probably end up with some sort of weighting system, but it might use data that's a combination of the inputs. (Say, a ratio of gamepieces to robots.)
I like the idea of the weighting jitter, and the continuous scale from aggressive to defensive.
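A stripped-down reading of that promoter/payload idea, with one weight per promoter so aggressive-versus-defensive is just a parameter change (this is my interpretation, not 1678's actual code):

// Stripped-down promoter/payload planner: each promoter scores how well
// its condition matches the abstracted inputs, multiplied by a tunable
// weight; the payload attached to the best promoter runs. Tuning (or a
// genetic algorithm in simulation) only touches the weights.
#include <vector>
#include <cstddef>

struct Abstracted {              // output of the "abstraction" layer
    double ballsPerRobotNearby;  // e.g. ratio of game pieces to robots
    double scoreMargin;
    bool   beingBlocked;
};

struct Promoter {
    double weight;                          // aggressiveness/defensiveness knob
    double (*match)(const Abstracted&);     // how strongly the condition holds, 0..1
    void   (*payload)(void);                // instruction set to run
};

void RunBestPayload(const std::vector<Promoter>& promoters, const Abstracted& in) {
    if (promoters.empty()) return;
    double best = -1e9;
    std::size_t bestIdx = 0;
    for (std::size_t i = 0; i < promoters.size(); ++i) {
        double activation = promoters[i].weight * promoters[i].match(in);
        if (activation > best) { best = activation; bestIdx = i; }
    }
    promoters[bestIdx].payload();
}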
The robot can get this info the same way a human can: by looking at the big TV screen. Possible?
Highly unlikely. First, the robot's camera needs to find the screen. Second, even if it DID know exactly what the screen was showing, which would be extremely difficult, the screen is not always showing the whole field; it might be showing only a certain part of it, or not the field at all.
davidthefat
11-04-2010, 18:18
Actually, I was going to use an int for the variable, not a float or a double. It doesn't need to be that precise. ;)
Radical Pi
11-04-2010, 19:28
Actually, I was going to use an int for the variable, not a float or a double. It doesn't need to be that precise. ;)
needs to be an unsigned long. I don't want to be limited in how high it goes :P
What about reactions to sensors dying? If an encoder fails or something and starts returning bad values, what's the decision-making code going to do about it? Should it be the responsibility of the planning code to do sanity checks, or should that fall under the perception topic?
davidthefat
11-04-2010, 19:49
needs to be an unsigned long. I don't want to be limited in how high it goes :P
What about reactions to sensors dying? If an encoder fails or something and starts returning bad values, what's the decision-making code going to do about it? Should it be the responsibility of the planning code to do sanity checks, or should that fall under the perception topic?
That's when the driver flips the manual override switch. :ahh:
ideasrule
11-04-2010, 20:38
Highly unlikely. First, the robot's camera needs to find the screen. Second, even if it DID know exactly what the screen was showing, which would be extremely difficult, the screen is not always showing the whole field; it might be showing only a certain part of it, or not the field at all.
I was saying that the robot can read the score off the screen. The score is always kept at the bottom of the screen, in white, against the same background, year after year.
How would an encoder fail?
I can see it getting wired backwards, but that wouldn't be a spontaneous thing.
I could see it getting hit, except that it's protected by the Toughbox. Because of how quadrature encoding works, if an encoder is hit, the most that can happen is that it stops telling you it's moving.
Radical Pi
11-04-2010, 23:01
I was saying that the robot can read the score off the screen. The score is always kept at the bottom of the screen, in white, against the same background, year after year.
There can be any number of things blocking the screen, especially with the score being at the bottom. Also, there is no guarantee from the FMS or the scorekeeper that the scoring display will always be showing. It also means dedicating a periodic scan of the arena for the score, stopping what you are doing to look for it. Having humans provide the data is so much easier, and I doubt anyone would complain about that. Perhaps we can lobby the GDC to provide the score live to the robots next year.
With the encoder thing, if an encoder starts returning 0 speed constantly, any PID loop that uses that encoder spins out of control. A serious issue to think about.
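One possible sanity check: if the loop has been commanding significant output but the encoder keeps reading roughly zero for long enough, flag the sensor as dead and fall back to open loop (the timings and names here are made up):

// Encoder sanity check: commanded output is large but measured rate stays
// near zero for too long -> assume the encoder (or its wiring) is dead and
// fall back to open-loop drive instead of letting the PID wind up.
class EncoderWatchdog {
public:
    EncoderWatchdog() : badTime_(0), failed_(false) {}

    // Call every control loop with the loop period dt in seconds.
    void Update(double commandedOutput, double measuredRate, double dt) {
        bool suspicious = (commandedOutput > 0.3 || commandedOutput < -0.3)
                          && measuredRate > -0.05 && measuredRate < 0.05;
        badTime_ = suspicious ? badTime_ + dt : 0.0;
        if (badTime_ > 0.5) failed_ = true;   // half a second of nonsense readings
    }

    bool Failed() const { return failed_; }

private:
    double badTime_;
    bool failed_;
};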
There can be any number of things blocking the screen, especially with the score being at the bottom. Also, there is no guarantee from the FMS or the scorekeeper that the scoring display will always be showing. It also means dedicating a periodic scan of the arena for the score, stopping what you are doing to look for it. Having humans provide the data is so much easier, and I doubt anyone would complain about that. Perhaps we can lobby the GDC to provide the score live to the robots next year.
With the encoder thing, if an encoder starts returning 0 speed constantly, any PID loop that uses that encoder spins out of control. A serious issue to think about.
How about a built-in self-test?
Do a short (~100ms) pulse on each of the wheels, each direction, to see if the encoders function?
This could also be used to test the state of the battery.
I'm thinking this is something that could be done while teams are waiting in queue.
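A rough version of that pre-match self-test; the three functions at the top are placeholders for your own drive and sensor code, and this should only ever run with the robot safely on blocks:

// Pre-match self-test sketch: pulse each side of the drive briefly and
// confirm its encoder counted something.
void SetDrive(int side, double power);   // placeholder: command one side of the drive
void WaitSeconds(double s);              // placeholder: blocking delay
long EncoderTicks(int side);             // placeholder: raw encoder count for that side

bool DriveSideResponds(int side) {
    long before = EncoderTicks(side);
    SetDrive(side, 0.25);                // ~25% power
    WaitSeconds(0.1);                    // ~100 ms pulse
    SetDrive(side, 0.0);
    return EncoderTicks(side) != before; // any counts at all means the encoder is alive
}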
Radical Pi
12-04-2010, 00:30
The self-test is a good idea for the queue, but on the field I think a mid-match system should be in place. If something breaks in the middle of a match, a queue test will do nothing to help that poor robot driving around in circles on the field. Perhaps a slightly stronger version of the queue test to compensate for being on the field and the torque required to drive there (0.5 seconds at 25%-50% speed?), activated by sanity tests (such as driving both sides but only one encoder spinning).
Have you ever seen an encoder failure?
I was under the impression that if the encoders are physically protected, and if they're wired up correctly, they won't fail.
They should be frictionless, because they are optical (the disk doesn't contact anything but the shaft).
The Toughboxes protect them pretty well, as long as there are no dense objects that aren't fastened down.
Are they actually prone to failure?
Alan Anderson
12-04-2010, 01:45
Have you ever seen an encoder failure?
"Encoder failure" almost always refers to a wire coming loose.
"Encoder failure" almost always refers to a wire coming loose.
Or damage during a disassembly repair. Or, if you are using an encoder on an idler sprocket, there are a number of additional failure modes.
In 2008 we had encoders go out twice on our bot. When this occurred, the robot would run into the far wall at about 15 ft/s. This did an amazing amount of damage to our bot, as well as knocking out an opponent's controls once.
Alan is right, typically it is a wire, though with the new control system, there are several different wires that can cause this failure mode.
davidthefat
12-04-2010, 16:32
OK, what are you guys planning on doing in autonomous mode? I was thinking of prepping for teleop mode (the robot is still autonomous) by taking initial readings from the sensors and, if needed, calibrating. http://www.galesburgelectric.com/Waytek-44218-Toggle-Switch-Guard.html I want to put that switch cover on the manual override switch to make it look bad $@#$@#$@#... Are you guys planning to do something in the real autonomous mode?
OK, what are you guys planning on doing in autonomous mode? I was thinking of prepping for teleop mode (the robot is still autonomous) by taking initial readings from the sensors and, if needed, calibrating. http://www.galesburgelectric.com/Waytek-44218-Toggle-Switch-Guard.html I want to put that switch cover on the manual override switch to make it look bad $@#$@#$@#... Are you guys planning to do something in the real autonomous mode?
I'm sorry, could you discuss that in another thread?
This thread is about planning algorithms involving field-awareness.
davidthefat
12-04-2010, 18:32
I'm sorry, could you discuss that in another thread?
This thread is about planning algorithms involving field-awareness.
LOL. Honestly, I feel like a douche making so many threads, so I found the most similar thread.
No, don't worry.
It's all part of the plan to seduce people into collaboration towards widespread autonomous robots.
Another thread wouldn't be bad at all.