#7   04-01-2014, 01:53
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: Complete Autonomous Robot

Quote:
Originally Posted by BBray_T1296 View Post
I don't think location awareness would be impossibly difficult; it would actually be one of the "easiest" challenges of a fully autonomous robot (remember, easiness is relative).

Hardware: you could use a revolving camera or other sensor such as a complex laser range finder (like you use on boats). On your processing computer (cRIO, some open source board, or a laptop) you process the data like this:
-You have some sort of computer model of the game field (like CAD).
-You have a 360-degree sweep of distances to the closest object in every direction.
-You know how high your sensor is off of the ground.
-You would have to throw out some data, such as nearby robots, as interference, but that might not be as hard as it sounds.
The computer would basically have to compare the range data it receives to the field structure file to find its exact location, like fitting a piece into a puzzle. (Use 2013 for example: if you know there is a wall to your left and a vertical post to your right, you know you are next to the pyramid.)

This system would not be the only source of location awareness. There is some degree of dead reckoning to be used as well. Your robot starts knowing it is roughly in your auto position, and fine-tunes that to what it actually sees around it. You don't have to rely on a gyro for rotation, because with your range sensor you know the post is to the robot's front (by an encoder on the revolving sensor), so you are facing the pyramid. Knowing your robot's heading and speed you can guess your next location, then double-check and fine-tune it based on your range data.

Factoring piece and robot interference out of the system can be done, since at another stage of the operation you have to know where the pieces and bots are anyway, so you can ignore data from those directions.

Think of how experimental robotic vehicles are working. They are tackling these same challenges and succeeding this very second. While Google's self driving cars don't have to track Frisbees, they do have to identify and track other cars with us dinguses behind the wheel.

Would it be hard? Yes.
Would it be impractical? Probably.
Could it be done in 6 weeks? Probably not.
Is it innovative? Absolutely.
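The predict-then-correct loop BBray describes (dead reckon from heading and speed, then fine-tune against what the range sensor sees) can be sketched in a few lines. This is a toy illustration, not anyone's actual code; the `gain` blend is a made-up stand-in for a real filter:

```python
import math

def predict(pose, speed, omega, dt):
    """Dead-reckoning step: advance (x, y, heading) from drive speed and turn rate."""
    x, y, th = pose
    return (x + speed * math.cos(th) * dt,
            y + speed * math.sin(th) * dt,
            th + omega * dt)

def correct(pose, measured_xy, gain=0.3):
    """Nudge the predicted pose toward a position fix from the range scan.
    gain is a trust factor: 0 = ignore the sensor, 1 = trust it completely."""
    x, y, th = pose
    mx, my = measured_xy
    return (x + gain * (mx - x), y + gain * (my - y), th)

# Start roughly at the auto position, drive straight for one second at 1 m/s...
pose = (0.0, 0.0, 0.0)
for _ in range(50):                 # 50 Hz control loop
    pose = predict(pose, speed=1.0, omega=0.0, dt=0.02)
# ...then the range scan says we're actually at (1.05, 0.02).
pose = correct(pose, measured_xy=(1.05, 0.02))
```

A real implementation would weight `gain` by how much of the scan matched the field model, but the predict/correct split is the whole idea.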
The other student and I briefly talked about using a spinning camera or a 360-degree one, but decided against it. We are probably going to use two depth cameras, one on each side the robot can move (if we do use West Coast drive; it all depends on the challenge), and an RGB camera to track a target if applicable. (Sad day in my mind: we are no longer going to be using the Kinect. We are switching to the Asus Xtion for depth and a webcam for target tracking. It has been a good 2 years of developing with you, Kinect. You might finally be plugged into my Xbox for the first time ever now.)

A lot of people (ok, like 3) on here are talking about robotic cars. I competed at ISWEEEP and was next to the Romanian kid who (got first and) made a completely self-driving car with the OpenCV libraries. We've been talking since then and have been working on some projects together.

The ideal method would be doing a full SLAM of the environment 30 times a second, but that isn't feasible on our hardware, so we have to keep our data flow limited but useful. The two depth cameras should be plenty. The A* planner will have to be able to decide which depth feed to use, but that won't be too hard.
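For anyone unfamiliar, the A* part itself is nothing exotic. Here's a minimal grid version (4-connected moves, Manhattan heuristic) just to show the shape of it; our real planner sits on depth data, not a hand-typed grid:

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = blocked).
    4-connected, Manhattan-distance heuristic. Returns the path as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    open_set = [(h(start), 0, start, None)]   # (f = g + h, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:                 # already expanded with a better g
            continue
        came_from[cell] = parent
        if cell == goal:                      # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cell))
    return None                               # no route to the goal

# Route around a wall across the middle of a 3x3 field:
field = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
route = a_star(field, (0, 0), (2, 0))
```

The "which depth feed to use" decision just amounts to rebuilding `grid` from whichever camera faces the direction of travel before each replan.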

I've already discussed knowing position and rotation as well. But an issue that could occur is if we are not getting a solution from the RGB camera; then we would have to rely on our gyro. A team last year at Terre Haute spent 30 minutes with us going over how our vision worked just so they could block it and keep us from shooting at all 3 targets. They just put up a pole to keep the target from being a closed contour. It was really clever. I wasn't even mad. I was lazy and didn't fit a bounding rectangle around the contours when there wasn't a solution (only do it when there isn't a solution; it will shave a few microseconds off the program). That was my fault. Oh well. I learned my lesson.

"Would it be hard? Yes.
Would it be impractical? Probably.
Could it be done in 6 weeks? Probably not.
Is it innovative? Absolutely."

Hard? Our team is up for the challenge. We aren't the most famous team in Missouri (cough cough, 1986), and have only actually won one regional in our existence, but we are gaining attention through our software.

Impractical? To the extreme. I just want to do one operation completely autonomously during a game: to see our drivers let go of the controls while we are still scoring points.

6 weeks? No. That's why we build 2 robots XD. And I may be going into the hospital soon for IVs for 2 weeks, so that's what I'll be doing for those 2 weeks. (No worries about my health; I just have a bug that I can't shake.)
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."