Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Programming (http://www.chiefdelphi.com/forums/forumdisplay.php?f=51)
-   -   Autonomous Perception (http://www.chiefdelphi.com/forums/showthread.php?t=85072)

davidthefat 10-04-2010 19:07

Re: Autonomous Perception
 
Quote:

Originally Posted by kamocat (Post 951719)
It depends whether your goal is to acquire the balls, or to keep another 'bot away from them.
However, I would still argue that if you can keep track of multiple balls around you at the same time, then you immediately know where to go when one is taken, and you don't waste time looking for another. I think ball detection should be passive most of the time.
Also, by not using the camera for finding balls, the robot could focus on the target to fire, but still be aware of what's happening around it.

I am kind of disturbed by you saying "ball". The game will be different next year, and you have to be as open-minded and as generic as you can, so don't say ball...

Radical Pi 10-04-2010 20:15

Re: Autonomous Perception
 
Well, if you don't want us designing with this year in mind, then what do you want us to do? All decision-making is based on the game, so if we don't know next year's game, we should go with this year's game, learn what we can from it, and hope some of it is reusable. We can't even be sure the drive code will still work next year if the GDC throws a 2009-style curveball.

kamocat 10-04-2010 21:01

Re: Autonomous Perception
 
Quote:

Originally Posted by davidthefat (Post 951729)
I am kind of disturbed by you saying "ball". The game will be different next year, and you have to be as open-minded and as generic as you can, so don't say ball...

I understand what you're saying.
While past games are our best estimate of what the GDC will do in the future, it's true that we should plan this so that it will work with any game. (Perhaps the planning algorithms will differ, but the perception and control should be very similar.)
Here are some basic commonalities between games:
  • robot size and weight
  • six robots on the field, two alliances
  • 27' by 54' field
  • Robots must display their number and alliance in a way easily distinguishable by humans
  • Multiple gamepieces (all similar)
  • Multiple locations to score

davidthefat 10-04-2010 21:09

Re: Autonomous Perception
 
Quote:

Originally Posted by kamocat (Post 951886)
I understand what you're saying.
While past games are our best estimate of what the GDC will do in the future, it's true that we should plan this so that it will work with any game. (Perhaps the planning algorithms will differ, but the perception and control should be very similar.)
Here are some basic commonalities between games:
  • robot size and weight
  • six robots on the field, two alliances
  • 27' by 54' field
  • Robots must display their number and alliance in a way easily distinguishable by humans
  • Multiple gamepieces (all similar)
  • Multiple goals

OK, so the field size has been constant for all these years, so theoretically you can make a "map" of the field and draw the robot's position relative to it using landmarks like the walls (this year, the goals, targets, and bumps). Like in RTS games, there is a constant fog of war since the robot has a limited sight range with its sensors, so the robot can identify an object, work out its approximate coordinates, and send them back to be drawn on the laptop's screen...
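
A minimal sketch of that mapping idea (all names here are hypothetical; the robot pose and the range/bearing measurement are assumed to come from whatever sensors the robot actually carries):

Code:

public class FieldMap {
    // Field dimensions in feet (constant across recent games).
    public static final double FIELD_WIDTH = 27.0;
    public static final double FIELD_LENGTH = 54.0;

    /** Simple field-frame coordinate pair, in feet. */
    public static class Point {
        public final double x, y;
        public Point(double x, double y) { this.x = x; this.y = y; }
        public String toString() { return String.format("(%.1f, %.1f)", x, y); }
    }

    /**
     * Convert a sensor detection (range in feet, bearing in radians relative to the
     * robot's heading) into field coordinates, given the robot's estimated pose.
     */
    public static Point toFieldCoordinates(double robotX, double robotY, double robotHeading,
                                           double range, double bearing) {
        double absoluteAngle = robotHeading + bearing;
        return new Point(robotX + range * Math.cos(absoluteAngle),
                         robotY + range * Math.sin(absoluteAngle));
    }

    public static void main(String[] args) {
        // Robot at (10, 13.5) facing down-field; an object seen 6 ft away, 30 degrees to its left.
        Point obj = toFieldCoordinates(10.0, 13.5, 0.0, 6.0, Math.toRadians(30));
        System.out.println("Object at field position " + obj); // this is what gets drawn on the laptop
    }
}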

kamocat 10-04-2010 21:39

Re: Autonomous Perception
 
This is starting to sound more like a videogame than an autonomous 'bot.
Couldn't you just track your location with the encoders and gyro?
The only problem I see with using the encoders for navigation is that they're almost guaranteed to get "off" going over a bump like this past year's. Is that what you're saying this "fog of war" location detection should be used for?
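
For reference, a minimal dead-reckoning sketch using just encoders and a gyro (reading the actual sensors is assumed to happen elsewhere; this only shows the position update):

Code:

public class DeadReckoning {
    private double x = 0.0, y = 0.0;        // field position, feet
    private double lastDistance = 0.0;      // previous encoder reading, feet

    /** Call periodically (e.g. every 20 ms control-loop iteration). */
    public void update(double encoderDistanceFeet, double gyroAngleDegrees) {
        double delta = encoderDistanceFeet - lastDistance;   // distance traveled since last update
        lastDistance = encoderDistanceFeet;
        double heading = Math.toRadians(gyroAngleDegrees);
        x += delta * Math.cos(heading);
        y += delta * Math.sin(heading);
    }

    public double getX() { return x; }
    public double getY() { return y; }
}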

Ian Curtis 10-04-2010 21:42

Re: Autonomous Perception
 
Quote:

Originally Posted by davidthefat (Post 951905)
OK, so the field size has been constant for all these years, so theoretically you can make a "map" of the field and draw the robot's position relative to it using landmarks like the walls (this year, the goals, targets, and bumps). Like in RTS games, there is a constant fog of war since the robot has a limited sight range with its sensors, so the robot can identify an object, work out its approximate coordinates, and send them back to be drawn on the laptop's screen...

I think you'd like StangPS...

There's no need for a "fog of war": you can get plenty of information from an accelerometer, gyro, and maybe even a compass!
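
One way two of those sensors could be blended for heading is a simple complementary filter: the gyro is smooth but drifts, the compass is noisy but absolute. A sketch, assuming a fixed-rate loop; the 0.98/0.02 weights are illustrative, not tuned, and angle wraparound is ignored for brevity:

Code:

public class HeadingFilter {
    private double heading = 0.0; // degrees

    /** Call at a fixed rate with the gyro's rate of turn and the compass reading. */
    public void update(double gyroRateDegPerSec, double compassDegrees, double dtSeconds) {
        double gyroEstimate = heading + gyroRateDegPerSec * dtSeconds; // short-term, drifts slowly
        heading = 0.98 * gyroEstimate + 0.02 * compassDegrees;         // compass pulls out the drift
    }

    public double getHeading() { return heading; }
}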

Al3+ 10-04-2010 21:54

Re: Autonomous Perception
 
An issue I see with these "mapping" approaches is that they can easily be thrown off by drift and unforeseen situations. What if the wheels slide or lose contact with the ground (e.g., on the bump, or in a pushing match)?

davidthefat 10-04-2010 21:54

Re: Autonomous Perception
 
Quote:

Originally Posted by kamocat (Post 951917)
This is starting to sound more like a videogame than an autonomous 'bot.
Couldn't you just track your location with the encoders and gyro?
The only problem I see with using the encoders for navigation is that they're almost guaranteed to get "off" going over a bump like this past year's. Is that what you're saying this "fog of war" location detection should be used for?

Quote:

Originally Posted by iCurtis (Post 951918)
I think you'd like StangPS...

There's no need for a "fog of war": you can get plenty of information from an accelerometer, gyro, and maybe even a compass!

Yeah, but you can't track all the other robots/objects on the field with a gyro, etc., though I guess that would make it a lot easier than tracking with the walls... BTW, before going into robotics, I played around with game programming a lot... I didn't get anything significant done.

kamocat 10-04-2010 22:10

Re: Autonomous Perception
 
Because I haven't had much success with the accelerometer positioning yet, I'm going to make the assumption that the gyro and encoders will provide fairly consistent and accurate data as to where your position is on the field...
EXCEPT when you're going over a bump.
If you're going over a bump, what do you do? How do you detect that you're going over a bump?
Similarly, some robots got balls underneath them this past year. How do you detect that you're not moving as you should, and then determine what you're *actually* doing? I don't think any of us have a 3-axis gyro on hand.
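
One possible answer to that last question is to compare what the encoders claim against a velocity estimate integrated from the accelerometer, and flag a sustained disagreement as slipping. A sketch, with made-up thresholds:

Code:

public class SlipDetector {
    private double accelVelocity = 0.0;   // ft/s, integrated from the accelerometer
    private double disagreementTime = 0.0;

    /** Returns true if the robot appears to be slipping (balls underneath, pushing match, bump). */
    public boolean update(double encoderSpeed, double accelFeetPerSecSq, double dt) {
        accelVelocity += accelFeetPerSecSq * dt;
        // When the encoders say we're stopped, trust them and zero the drifting integral.
        if (Math.abs(encoderSpeed) < 0.1) {
            accelVelocity = 0.0;
        }
        if (Math.abs(encoderSpeed - accelVelocity) > 2.0) {
            disagreementTime += dt;
        } else {
            disagreementTime = 0.0;
        }
        return disagreementTime > 0.25; // sustained disagreement suggests the wheels aren't gripping
    }
}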

davidthefat 10-04-2010 22:18

Re: Autonomous Perception
 
Quote:

Originally Posted by kamocat (Post 951928)
Because I haven't had much success with the accelerometer positioning yet, I'm going to make the assumption that the gyro and encoders will provide fairly consistent and accurate data as to where your position is on the field...
EXCEPT when you're going over a bump.
If you're going over a bump, what do you do? How do you detect that you're going over a bump?
Similarly, some robots got balls underneath them this past year. How do you detect that you're not moving as you should, and then determine what you're *actually* doing? I don't think any of us have a 3-axis gyro on hand.

I wish there were some super-accurate, small GPS system... THAT would simplify everything! I guess we have to go with the method I posted... Or there could be a hybrid type of thing going on: by default it tracks using the gyro and all that, but it resets the coordinates using the IR method every 10 seconds or something.
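
A sketch of that hybrid idea: dead-reckon continuously with gyro and encoders, and overwrite the estimate whenever an absolute fix arrives. The fix here is just a placeholder for whatever the camera/IR method would actually provide:

Code:

public class HybridLocalizer {
    private double x = 0.0, y = 0.0;        // field position, feet
    private double lastEncoderDistance = 0.0;

    /** Normal periodic update from the relative sensors (encoders + gyro). */
    public void update(double encoderDistanceFeet, double gyroAngleDegrees) {
        double delta = encoderDistanceFeet - lastEncoderDistance;
        lastEncoderDistance = encoderDistanceFeet;
        double heading = Math.toRadians(gyroAngleDegrees);
        x += delta * Math.cos(heading);
        y += delta * Math.sin(heading);
    }

    /** Called whenever an absolute fix arrives (every few seconds), discarding the accumulated drift. */
    public void applyAbsoluteFix(double fixX, double fixY) {
        x = fixX;
        y = fixY;
    }

    public double getX() { return x; }
    public double getY() { return y; }
}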

ideasrule 10-04-2010 23:55

Re: Autonomous Perception
 
Another way to determine your location on the field, at least this year, is to look at the goals. Once you know the angles to the goals, it's very easy to use triangulation to determine the robot's position.
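
A sketch of that triangulation, assuming the bearings to two goals are known in the field frame (e.g. camera bearing plus gyro heading). The goal coordinates in the example are placeholders, not official field measurements:

Code:

public class GoalTriangulation {
    /** Returns {x, y} in feet, or null if the two sight lines are nearly parallel. */
    public static double[] locate(double g1x, double g1y, double bearing1,
                                  double g2x, double g2y, double bearing2) {
        // Robot position P satisfies P = G1 - t1*d1 and P = G2 - t2*d2, where di is the
        // unit vector along bearing i and ti is the range. Solve the 2x2 system for t1.
        double d1x = Math.cos(bearing1), d1y = Math.sin(bearing1);
        double d2x = Math.cos(bearing2), d2y = Math.sin(bearing2);
        double det = d1x * (-d2y) - (-d2x) * d1y;  // determinant of [d1, -d2]
        if (Math.abs(det) < 1e-6) return null;     // sight lines nearly parallel: no reliable fix
        double rx = g1x - g2x, ry = g1y - g2y;
        double t1 = (rx * (-d2y) - (-d2x) * ry) / det;
        return new double[] { g1x - t1 * d1x, g1y - t1 * d1y };
    }

    public static void main(String[] args) {
        // Example: two goals on the same end wall (coordinates are illustrative only).
        double[] p = locate(0.0, 6.0, Math.toRadians(-170),
                            0.0, 21.0, Math.toRadians(170));
        if (p != null) System.out.printf("Robot at (%.1f, %.1f)%n", p[0], p[1]);
    }
}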

Al3+ 11-04-2010 00:58

Re: Autonomous Perception
 
Quote:

Originally Posted by kamocat (Post 951928)
Because I haven't had much success with the accelerometer positioning yet, I'm going to make the assumption that the gyro and encoders will provide fairly consistent and accurate data as to where your position is on the field...
EXCEPT when you're going over a bump.
If you're going over a bump, what do you do? How do you detect that you're going over a bump?
Similarly, some robots got balls underneath them this past year. How do you detect that you're not moving as you should, and then determine what you're *actually* doing? I don't think any of us have a 3-axis gyro on hand.

Vertically mounted accelerometer?

kamocat 11-04-2010 01:42

Re: Autonomous Perception
 
That's true, you could use the Z accelerometer to tell when you're not on the bump. (You could also use it to tell when you land. Hard.)
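
A tiny sketch of that check: the Z channel reads about 1 g when the robot is flat on the carpet, so a sustained deviation suggests it is tilted on the bump, and a sharp spike suggests a hard landing. The thresholds are guesses and would need tuning on a real robot:

Code:

public class BumpDetector {
    public static boolean onBump(double zAccelG) {
        return Math.abs(zAccelG - 1.0) > 0.15;   // tilted or bouncing over the bump
    }

    public static boolean hardLanding(double zAccelG) {
        return zAccelG > 2.5;                    // short, large spike on touchdown
    }
}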

I think simply "resetting" your position off another technique defeats the purpose of the technique in the first place.
What about zeroing up *against* the bump after you've gone over? Or against a wall? That should tell you your angle, and it would tell you your location in at least one plane.

Has anyone tried using the line down the middle of the field? I know in FLL, it's very common to have a line-follower. I don't think it'd be hard to have a light and a phototransistor down near the ground so you can tell when you pass by the center line.
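
A sketch of what that line sensor might look like in code, assuming the phototransistor is read through an analog input; the threshold is a placeholder that would need calibration on the actual carpet and tape:

Code:

public class LineDetector {
    private static final double LINE_THRESHOLD_VOLTS = 2.0; // calibrate on the real field

    private boolean wasOnLine = false;

    /** Returns true exactly once each time the sensor crosses onto the line. */
    public boolean crossedLine(double sensorVolts) {
        boolean onLine = sensorVolts > LINE_THRESHOLD_VOLTS; // reflective tape reads brighter than carpet
        boolean crossed = onLine && !wasOnLine;
        wasOnLine = onLine;
        return crossed;
    }
}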

I think triangulation off the goals would work pretty well, except that it's an inverse sine function, and so your accuracy decreases drastically as you get further away. I think you may have to look all the way across the field to see two goals at once, though. Perhaps it would require taking a full-res image (and recording the timestamp), processing it a bit later, and then readjusting the last few seconds to coincide with your new data.
The question that goes along with this is, will the robot very often look at goals on the other side of the field?
It's certainly something you can do in disabled mode, if you're already looking that way.

ideasrule 11-04-2010 13:19

Re: Autonomous Perception
 
Isn't this year's accelerometer three-axis? That would easily tell you when you're going over the bump. It might also be possible to do accurate inertial navigation with the accelerometer: the position estimate could reset every time it touches a bump and every time the camera sees two goals, and the velocity estimate could reset every time the encoders record a speed of 0.
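
A one-axis sketch of that reset scheme (thresholds are placeholders, and the bump and camera positions are assumed to come from the Z-axis spike and the triangulation above):

Code:

public class InertialNavigator {
    private double position = 0.0; // along one axis, feet
    private double velocity = 0.0; // ft/s

    /** Periodic update from the accelerometer (ft/s^2) and encoder speed (ft/s). */
    public void update(double accel, double encoderSpeed, double dt) {
        velocity += accel * dt;
        position += velocity * dt;
        if (Math.abs(encoderSpeed) < 0.05) {
            velocity = 0.0;               // zero-velocity update: the robot is stopped
        }
    }

    /** Called when the Z-axis tells us we're on the bump, whose field position is known. */
    public void resetAtBump(double bumpPosition) {
        position = bumpPosition;
    }

    /** Called when the camera triangulates a fix from two goals. */
    public void resetFromCamera(double cameraPosition) {
        position = cameraPosition;
    }
}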

kamocat 11-04-2010 15:25

Re: Autonomous Perception
 
Has anyone tried dragging a ball mouse on the floor (and communicating with it)?
An optical mouse?
What's the communication standard before it's converted to USB? (With a ball mouse, you could actually just rewire it and connect it to the digital sidecar like any other encoder. Perhaps it would need a little mechanical adjustment to have good contact with the floor.)
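
For what it's worth, an older ball mouse typically speaks PS/2 over its cable, which is why rewiring the raw encoder channels is attractive: each axis is just a two-channel quadrature signal, decodable like any wheel encoder. A sketch of that decoding (reading the two digital inputs is assumed to happen elsewhere):

Code:

public class MouseQuadratureDecoder {
    private int count = 0;
    private int lastState = 0; // 2-bit state: (channelA << 1) | channelB

    /** Call on every change (or fast poll) of the two channels for one axis. */
    public void update(boolean channelA, boolean channelB) {
        int state = (channelA ? 2 : 0) | (channelB ? 1 : 0);
        // Gray-code sequence 00 -> 01 -> 11 -> 10 means one direction; reversed, the other.
        int transition = (lastState << 2) | state;
        switch (transition) {
            case 0b0001: case 0b0111: case 0b1110: case 0b1000: count++; break;
            case 0b0010: case 0b1011: case 0b1101: case 0b0100: count--; break;
            default: break; // no change, or an invalid (skipped) transition
        }
        lastState = state;
    }

    /** Counts since start; convert to feet by rolling the mouse a known distance and measuring. */
    public int getCount() { return count; }
}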

