Quote:
Originally Posted by yash101
I am taking an AI/ML course online. I am just wondering if it would be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself?
While it seems quite hard, it might be possible to write one program that learns to play each year's game, instead of writing a different program every year.
Such a program would need to be taught the game and what it needs to do, but it would also need to learn how to use itself.
The last question I have is: has this ever been done? It seems extremely hard, so it would be pointless to reinvent the wheel.
What would be the best environment to build something like this in? Currently I am learning with Octave; however, OpenCV seems to have a lot of useful components, including the ML module.
I would not go into this sort of project with any expectation of success. I've fiddled a bit with general game-playing programs; in fact, I wrote one for a science fair when I was in high school. It was successful, but the success criterion was that the results were statistically better than random moves. That's a much lower bar than I would feel safe with for something controlling a 120 lb. mobile robot.
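For scale, "statistically better than random" just meant a check along these lines (a rough Python sketch; the win counts and threshold here are made up for illustration, not my actual science fair numbers):

Code:
import math

def better_than_random(agent_wins, games, baseline_rate=0.5):
    """Rough one-sided z-test: does the agent's win rate beat a random-move baseline?

    baseline_rate would really come from measuring how often random moves win;
    0.5 is just a placeholder.
    """
    p_hat = agent_wins / games
    # Standard error under the null hypothesis (agent no better than baseline).
    se = math.sqrt(baseline_rate * (1 - baseline_rate) / games)
    z = (p_hat - baseline_rate) / se
    return z > 1.645  # one-sided 5% critical value

print(better_than_random(agent_wins=60, games=100))  # True: 60/100 clears that (low) bar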
I know that a couple of years ago, state-of-the-art game-playing systems could still choke unexpectedly on even relatively simple board games. Unless things have improved by leaps and bounds in the last couple of years, I wouldn't even want to be in the same room as a robot controlled by one of these things.
Possibly interesting:
http://en.wikipedia.org/wiki/General_game_playing
http://games.stanford.edu/index.php/...tition-aaai-14
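On the environment question in the quote: I can't say what's "best," but if you want to poke at the OpenCV ML module you mentioned, the basic train/predict workflow looks something like this (a minimal sketch on made-up toy data, a k-nearest-neighbour classifier rather than anything game-related):

Code:
import numpy as np
import cv2

# Toy data: two 2-D clusters, labeled 0 and 1.
rng = np.random.default_rng(0)
class0 = rng.normal(loc=0.0, scale=1.0, size=(50, 2)).astype(np.float32)
class1 = rng.normal(loc=4.0, scale=1.0, size=(50, 2)).astype(np.float32)
samples = np.vstack([class0, class1])
labels = np.array([0] * 50 + [1] * 50, dtype=np.int32)

# k-nearest-neighbour classifier from OpenCV's ML module.
knn = cv2.ml.KNearest_create()
knn.train(samples, cv2.ml.ROW_SAMPLE, labels)

# Classify a couple of new points.
queries = np.array([[0.5, 0.5], [3.5, 4.0]], dtype=np.float32)
ret, results, neighbours, dists = knn.findNearest(queries, k=5)
print(results.ravel())  # expected: [0. 1.]

That kind of supervised classification is a long way from a system that learns a whole game by itself, but it's a reasonable way to get familiar with the module.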