#24
Re: A Vision Program that teaches itself the game
My hunch is that accomplishing the original post's goal involves solving (or integrating the solutions to) not dozens but a few hundred problems, and that converting the vision system's raw imagery into useful estimates of the states of the important objects in a match will be the hardest part.
To help wrap your head around the job(s), imagine the zillions of individual steps involved in the following: create a simple simulation of the field, robots, and game objects for any one year's game. Use that simulation to generate the sensor (vision) data an autonomous robot would receive during a match, being sure to include opponents actively interfering with your robot. Add the internal-state data the robot would have describing its own condition. Then ask yourself: what learning algorithms do I apply to this, and how will I implement them?

It's a daunting job, but if folks can get Aibo toy dogs to play soccer, you could probably come up with a (simulated initially?) super-simple robot that could put a few points on the board during a typical FRC match. It's veerrryyyy unlikely that an implementation will make any human opponents nervous in this decade (most other posters have said the same), but I think what the OP described can be built, if your goal is simply playing rather than winning against humans.

Blake
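To make the "simulate the field, simulate the sensors, then bolt on a policy" pipeline above concrete, here is a minimal sketch in Python. Everything in it (the `FieldSim` class, the noise levels, the toy drive policy) is an illustrative assumption, not any real FRC library or the approach the thread settled on; a real system would replace the noisy range/bearing stub with an actual vision pipeline and the hand-written policy with a learned one.

```python
import math
import random

class FieldSim:
    """Toy 2D field: our robot, one game piece, one interfering opponent.

    All units are meters; positions are [x, y]. This is a stand-in for
    the 'simple simulation of the field, robots, and game objects'
    described above.
    """
    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.robot = [2.0, 2.0]
        self.piece = [6.0, 4.0]
        self.opponent = [10.0, 4.0]

    def step(self, vx, vy, dt=0.02):
        # Advance our robot by its commanded velocity; the opponent
        # drifts toward the game piece to crudely model interference.
        self.robot[0] += vx * dt
        self.robot[1] += vy * dt
        ox = self.piece[0] - self.opponent[0]
        oy = self.piece[1] - self.opponent[1]
        d = math.hypot(ox, oy) or 1.0
        self.opponent[0] += 1.5 * dt * ox / d
        self.opponent[1] += 1.5 * dt * oy / d

    def vision_observation(self, noise=0.05):
        # Simulated "camera" output: noisy range and bearing to the game
        # piece, standing in for the hard raw-imagery-to-state problem.
        dx = self.piece[0] - self.robot[0]
        dy = self.piece[1] - self.robot[1]
        rng = math.hypot(dx, dy) + self.rng.gauss(0, noise)
        bearing = math.atan2(dy, dx) + self.rng.gauss(0, noise)
        return rng, bearing

def drive_toward_piece(sim, steps=500):
    """Trivial hand-coded 'policy': steer along the observed bearing.

    A learning algorithm would replace this function, consuming the same
    observations and producing velocity commands.
    """
    for _ in range(steps):
        rng, bearing = sim.vision_observation()
        if rng < 0.2:  # close enough to count as reaching the piece
            return True
        sim.step(math.cos(bearing), math.sin(bearing))
    return False
```

Even this stub makes the decomposition visible: the simulator, the sensor model, and the policy are separate pieces, and each one hides many of the "few hundred problems" mentioned above.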