#17   04-01-2014, 01:33
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
faust1706 is infamous around these parts
Re: Complete Autonomous Robot

Quote:
Originally Posted by BBray_T1296 View Post
This has crossed my mind a time or two before

I think it would be somewhat straightforward for the robot to maintain location awareness, as there are probably enough stationary objects on the field to use as reference/triangulation points (think of using a map and compass to find yourself [if you ever have before]).

There are several tricky parts and stumbles to be had though
-maintaining tracking and location awareness of other robots on the field
-identifying friend or foe to coordinate a movement path
-detecting, tracking, and interacting with game pieces and scoring zones (think of Logomotion, and already filled pegs)

What happens if:
-A robot falls into pieces (We hit a robot so hard last year all 4 of their bumpers fell off and were then strewn about the field)
what will your robot do when it can suddenly see extraneous parts/unidentifiable robots?
-The robot gets into a position it cannot figure its way out of (I would assume you would have a manual override)

I can foresee a fully autonomous robot getting a lot of fouls, especially one operating in some sort of defensive mode.

That all being said, it would be REALLY cool to see a robot that plays by itself. Maybe in the future there could be a fully autonomous regional (including pit crew)
Addressing your "detecting, tracking, and interacting with game pieces and scoring zones" first: Simbotics did this in Logomotion (on Einstein, mind you). They had a three-ubertube autonomous routine, and the robot had to be aware of where it had already hung the previous tubes. That blew my mind when I saw it as a freshman; it was probably the most influential and eye-opening moment of my FIRST Robotics career as a student. (I might not be much of a mentor while in college, but I will be back for some team once I get my life figured out and underway.) So it has been done. I've also written code that tracked frisbees, and our cRIO programmer wired it up so that he could push a button and the robot would drive to the nearest [insert colour here] frisbee autonomously, but we didn't have a means of picking it up, so we had to shelve it. (http://www.chiefdelphi.com/media/photos/39015)
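The colour-based frisbee tracking described above can be sketched roughly like this. This is a minimal, illustrative version using a plain RGB threshold and numpy; the colour bounds and function names are assumptions, not the team's actual calibration, and a real pipeline would segment connected blobs and rank them by distance instead of averaging every matching pixel.

```python
import numpy as np

# Hypothetical colour bounds for a blue frisbee in an RGB frame
# (illustrative values only).
LOWER = np.array([0, 0, 120])
UPPER = np.array([80, 80, 255])

def frisbee_centroid(frame, lower=LOWER, upper=UPPER):
    """Centroid (row, col) of pixels inside the colour bounds, or None.

    A real vision pipeline would segment connected blobs and pick the
    nearest one; this sketch just averages all matching pixels.
    """
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Synthetic 100x100 frame with a blue square at rows/cols 40..59.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[40:60, 40:60] = (10, 10, 200)
print(frisbee_centroid(frame))  # centroid near (49.5, 49.5)
```

Once the centroid is known, driving toward it is just a matter of turning the pixel offset from image centre into a steering command.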

Moving on to "maintaining tracking and location awareness of other robots on the field":
That is where camera pose estimation comes in handy. It lets me know exactly where the camera (and therefore the robot) is on the field with respect to an object (the target). My mentor created a simulation to test whether our camera could see the 3-point goal from the feeder station. He used the PnP method of camera pose estimation to recreate the field, inputting displacement vectors and the 3D world coordinates of features of the field (the 2-point goals, the 3-point goal, and the pyramid near the target). It proved that yes, we can see the 3-point goal from the feeder station, barely. The same thing can be applied in real time, and you can hard-code the coordinates of the pyramid into A* so the planner automatically avoids it. As for awareness of other robots on the field, it will be very difficult to distinguish friend from foe, but if teams are generous enough to let us "learn" their robots via cascade training, then we can do it. I don't really see the point, however. A completely autonomous robot will be very independent, so it would be better to assume the other robot will not get out of your way.
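The full PnP solve (e.g. OpenCV's solvePnP) recovers a 6-DOF camera pose from 2D-3D point correspondences. As a minimal planar analogue of "knowing where the robot is with respect to the target", here is a sketch that recovers the robot's field position from a target at known field coordinates, given a vision-measured range and bearing plus a gyro heading. All names and numbers are illustrative assumptions.

```python
import math

def robot_field_position(target_xy, distance, bearing, heading):
    """Planar analogue of camera pose estimation.

    target_xy: known field coordinates of the vision target.
    distance:  measured range to the target (from vision).
    bearing:   angle to the target in the robot frame (radians).
    heading:   robot heading in the field frame (e.g. from a gyro).

    Full PnP recovers a 6-DOF pose; this is the 2-D special case.
    """
    tx, ty = target_xy
    # Absolute angle of the robot-to-target ray in the field frame.
    angle = heading + bearing
    # Step back from the target along that ray to get the robot.
    return (tx - distance * math.cos(angle),
            ty - distance * math.sin(angle))

# Target at (10, 5); robot facing +x (heading 0) sees it dead ahead
# (bearing 0) at range 4, so the robot must be at (6, 5).
print(robot_field_position((10.0, 5.0), 4.0, 0.0, 0.0))  # (6.0, 5.0)
```

The same transform run continuously gives the live field position that the obstacle-avoidance planner needs.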

"A robot falls into pieces" this is actually an interesting question. The other student first started depth while I made the algorithm for finding the corner coordinates of a square more accurate over the summer. What he does is he takes a depth image of nothing in front the camera except the floor, and then the aspects of the camera are constant. it cannot be moved up or down, or be tilted. That image is now the calibrated image. Then, we can put anything in front the of the camera and it will see it. I just posted a picture on here, but it hasn't loaded yet. But as you can see in it, there is also a bookshelf and a couch in the image, but the program only sees 2 objects, the objects that weren't there in the calibration image. edit: http://www.chiefdelphi.com/media/photos/39264?

"The robot gets into a position it cannot figure its way out " Yes, we will have a manual override. If it even faulters, the driver could take over. I think it'd be really cool to do a cycle completely autonomously once a game or so. We are worried about the speed of a star and our collision detection algorithm. If they are slower than the human reaction, then this would not be justified in doing (but that is not going to prevent us from trying!).

"I can foresee a fully autonomous robot getting a lot of fouls, especially one operating in some sort of defensive mode." yes. We are not planning on doing defense, as of now. There is nothing wrong with playing defense, it wins a lot of games. Our team has this unsaid motto "better to have tried, failed, and learned." I love seeing rookie teams (and non rookie teams for the matter) build defensive bots. Who cares if they aren't seeded first? Only one team will be, and we have never been in that position and have done very well in the past. The students, and mentors, learned stuff and had fun. They became inspired. We've been playing around with a lot of different things for the past several years, and this year we're going to attempt to put them all together.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."