02-07-2014, 06:14
yash101 (AKA: null, no team)
Join Date: Oct 2012 | Rookie Year: 2012 | Location: devnull | Posts: 1,191
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by Pault
Who says that next year is going to be a shooting game? What about the end game, if there is one?



The gyro will not give you an accurate heading over the course of the match. A general rule that I have heard is that a good FRC gyro will give you about 15 degrees of drift over the length of a match. My recommendation on this end is to check your angle whenever possible using the vision targets, and when you can't see the vision target, just use the gyro to calculate your deviation from the last time you could. It may even be possible to do the same thing with the roboRIO's 3-axis accelerometer for location.
I was going to have a boolean for the field direction; camera pose estimation would be for the actual location/direction. Otherwise, what do y'all think about me using GPS? I am thinking about a multi-camera system, so there will almost always be a vision target in view. The accelerometer/gyro is just so that the system can tell whether the bot is facing home or enemy territory.
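The "use vision when you can see a target, fall back to the gyro when you can't" idea above can be sketched as a tiny estimator. This is a hypothetical illustration, not WPILib code: the class name and the 0-360-degree heading convention are my assumptions, and the point is just that each vision fix re-zeroes whatever drift the gyro has accumulated since the last fix.

```python
class HeadingEstimator:
    """Track field-relative heading by following the gyro, and cancel
    accumulated gyro drift whenever a vision target gives an absolute fix."""

    def __init__(self):
        self.heading = 0.0        # degrees, field-relative
        self.drift_offset = 0.0   # correction learned from the last vision fix

    def update_gyro(self, gyro_heading):
        # Between vision fixes, trust the gyro plus the last known correction.
        self.heading = (gyro_heading + self.drift_offset) % 360.0

    def vision_fix(self, gyro_heading, vision_heading):
        # A visible target gives an absolute heading; remember how far the
        # gyro has drifted so later gyro-only updates stay corrected.
        self.drift_offset = (vision_heading - gyro_heading) % 360.0
        self.heading = vision_heading % 360.0
```

With the oft-quoted ~15 degrees of drift per match, a fix every few seconds keeps the gyro-only stretches well inside that budget.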

Quote:
Originally Posted by gblake
My hunch is that accomplishing the original post's goal involves solving (or integrating the solutions to), not dozens, but a few hundreds of problems, and my hunch is that converting the vision system's raw imagery into useful estimates of the states of the important objects involved in a match will be the hardest part.

To help wrap your head around the job(s) imagine the zillions of individual steps involved in carrying out the following ...

Create a simple simulation of the field, robots, and game objects for any one year's game.

Use the field/objects/robots simulation to simulate the sensor data (vision data) an autonomous robot would receive during a match. Be sure to include opponents actively interfering with your robot.

Add in the internal-state data the robot would have describing its own state.

Then - Ask yourself, what learning algorithms do I apply to this and how will I implement them.

It's a daunting job, but, if folks can get Aibo toy dogs to play soccer, you could probably come up with a (simulated initially?) super-simple robot that could put a few points on the board during a typical FRC match.

It's veerrryyyy unlikely that an implementation will make any human opponents nervous in this decade (most other posters have said the same); but I think what the OP described can be made, if your goal is simply playing, and your goal isn't winning against humans.

Blake
I have diverted a tad from my original post. I had asked whether it was feasible to make the entire game automatically learnable. I have learned from many other CD'ers that it is nearly impossible. I know that it isn't strictly impossible, but it is EXTREMELY impractical.

However, now I have decomposed the program idea and it seems quite practical (and actually useful) to automate some parts of the game. Things like autonomous driving, holding position and manipulating the gamepiece seem quite simple to implement (even without AI/ML).
Finding a gamepiece can be as simple as a transform with OpenCV targeting a RotatedRect. A RotatedRect can also filter for robots. As faust1706 explained to me a long time ago: color-filter a bumper, use minAreaRect to crop it, then add two lines dividing the bumper into three pieces. Perform minAreaRect on this again, and use the height of the middle section to approximate the robot's distance from an average bumper height.
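The distance part of that bumper trick is just the pinhole-camera model: once minAreaRect has given the pixel height of the bumper's middle section, an assumed real bumper height plus a calibrated focal length yields range. A minimal sketch, where FOCAL_LENGTH_PX and BUMPER_HEIGHT_IN are made-up calibration numbers, not real specs:

```python
FOCAL_LENGTH_PX = 600.0   # assumed camera focal length, in pixels
BUMPER_HEIGHT_IN = 5.0    # assumed real-world bumper height, in inches

def bumper_distance(pixel_height):
    """Approximate distance (inches) to a robot from the apparent pixel
    height of the middle section of its color-filtered bumper, using the
    pinhole model: distance = focal_length * real_height / pixel_height."""
    if pixel_height <= 0:
        raise ValueError("pixel height must be positive")
    return FOCAL_LENGTH_PX * BUMPER_HEIGHT_IN / pixel_height
```

For example, with these assumed constants, a bumper section 30 pixels tall would be estimated at 100 inches away. The accuracy depends entirely on how well the focal length is calibrated and how standard bumper heights really are.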
I can tell that pathfinding will work, because I am treating this just as if it were a videogame!

Say that I was tracking the ball as it was being shot. I could triangulate its height using some trigonometry, and its distance using a Kinect depth map. I could get the hotspot for a made shot using the Kinect's distance readings and height measurements. Now, say the ball misses the goal. The robot could measure the error, like 6 inches low, and try to figure out a way to make the shot better. For example, in the 2014 game, if the robot misses the shot, it will see whether it was low or high. If it was low, it could move forward a bit. If that made the problem worse, it could move back, etc. This type of code could be quite crude, but still get the job done. If I used ML for this instead, surely the robot would miss the first few shots, but it could easily be more accurate than a human thereafter. If we want to, we can also manually add data points that we already know. This is supervised learning.
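The crude "move forward if low, back off if it got worse" policy above can be written in a few lines. This is only a sketch of that heuristic; the sign conventions and the 6-inch step size are my assumptions for illustration:

```python
def next_move(error_in, prev_error_in=None, prev_move_in=0.0, step_in=6.0):
    """Pick the next repositioning move (inches; positive = toward the goal)
    from the last shot's vertical miss (inches; negative = low, positive =
    high) and how the previous move worked out."""
    if error_in == 0.0:
        return 0.0                                    # made it: hold position
    if prev_error_in is None:
        # First miss: guess. Low -> move closer, high -> back away.
        return step_in if error_in < 0 else -step_in
    if abs(error_in) < abs(prev_error_in):
        return prev_move_in                           # improving: keep going
    return -prev_move_in                              # worse: reverse direction
```

A learning approach would effectively fit this mapping from data instead of hard-coding it, which is where the supervised-learning idea comes in.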

Basically, in short, it is not a good approach to write a fully automatic program; instead, write a program that allows rapid system integration. If I write my program right, I would only need to code a few pieces:
-Vision targets and pose estimation/distance calculations
-Manipulating the gamepiece -- what it looks like, distance, pickup, goal, etc.
-Calculating what went wrong while scoring
-Field (doesn't need code, just configuration)

And certainly, there are many things that a human player will always perform best.

However, my main concern now is how to generate a map of the field. The Kinect will offer a polar view of the field if programmed correctly. How do I create a Cartesian grid of all the elements?

For example, instead of the Kinect reporting:
Code:

       .   .   .   .   .     
                  
             __
It instead reports:
Code:
                .
           .         .
       .      ___     .
That could be fed into my array system and everything can be calculated from there.
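Mechanically, turning the polar view into Cartesian points is one rotation per sample: each (bearing, range) pair becomes (x, y) via cosine and sine, offset by the robot's own pose. A minimal sketch, assuming bearings are measured in degrees relative to the robot's heading (the function name and conventions are mine, not from any Kinect API):

```python
import math

def polar_to_cartesian(samples, robot_x=0.0, robot_y=0.0, robot_heading_deg=0.0):
    """Map (bearing_deg, range) samples from a depth sensor into field
    (x, y) points, given the robot's pose on the field."""
    points = []
    for bearing_deg, r in samples:
        theta = math.radians(robot_heading_deg + bearing_deg)
        points.append((robot_x + r * math.cos(theta),
                       robot_y + r * math.sin(theta)))
    return points
```

The resulting points can then be binned into grid cells to build the occupancy array. Note that this needs a decent pose estimate to begin with, which is where the vision-target pose estimation feeds back in.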

Also, say that the path is:

Code:
00000001000000000
00000000200000000
00000000030000000
00000000004000000
00000000000500000
How can I make the robot drive smoothly in a straight line, instead of going forward, left, forward, right, etc.?
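One common answer is that the staircase in that grid path is really a single diagonal line: grid planners emit one cell per step, so collapsing every run of collinear waypoints leaves only the corners, and the drive code can then follow long straight segments. A sketch of that simplification (my own illustration, not a standard library routine):

```python
def simplify_path(waypoints):
    """Drop intermediate waypoints that lie on the same straight line as
    their neighbors, leaving only the corner points of the path."""
    if len(waypoints) <= 2:
        return list(waypoints)
    simplified = [waypoints[0]]
    for prev, cur, nxt in zip(waypoints, waypoints[1:], waypoints[2:]):
        # cur is redundant if the steps prev->cur and cur->nxt point the
        # same way, i.e. the cross product of the two step vectors is zero.
        ax, ay = cur[0] - prev[0], cur[1] - prev[1]
        bx, by = nxt[0] - cur[0], nxt[1] - cur[1]
        if ax * by - ay * bx != 0:
            simplified.append(cur)
    simplified.append(waypoints[-1])
    return simplified
```

Run on the diagonal path in the grid above, this reduces the five single-cell steps to one straight segment from start to finish. Fancier approaches (line-of-sight smoothing, pure pursuit) build on the same idea.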

Thanks for all your time, mentors and other programmers!