#1
A Vision Program that teaches itself the game
I am taking an AI/ML course online, and I am wondering whether it would be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself.

While it would seem quite hard, instead of writing a different program every year, it could be possible to write one program that plays each game. Such a program would need to be taught the game and what it needs to do, but it would also need to learn how to use itself.

The last question I have is: has this ever been done? It seems extremely hard, so it would be pointless to reinvent the wheel. What would be the best environment to build something like this in? Currently I am learning with Octave; however, OpenCV seems to have a lot of useful components, including its ML module.
#2
Re: A Vision Program that teaches itself the game
I'm not a programmer, but no.
The reason is that while a single program might learn the game every year, it would also have to adapt to the different robots that are built. Some things stay the same and other things change -- sometimes pneumatics are an advantage and sometimes not, for example. So the program will need to be changed to fit the robot every year, regardless of whether the game changes are minor or major.

Now add to that the fact that no robot in FRC history has ever been fully autonomous beyond automode or a "drive straight and don't stop", and the odds are VERY against you actually pulling it off this side of grad school.
#3
Re: A Vision Program that teaches itself the game
Quote:
I know that a couple of years ago, state-of-the-art game-playing systems could choke unexpectedly on even relatively simple board games. Unless things have improved by leaps and bounds in the last couple of years, I wouldn't even want to be in the same room as a robot controlled by one of these things.

Possibly interesting:
http://en.wikipedia.org/wiki/General_game_playing
http://games.stanford.edu/index.php/...tition-aaai-14
#4
Re: A Vision Program that teaches itself the game
I know that I would at least be required to program in all the I/O, etc. However, I believe the best robot would be one that gets better at the game with experience. First match: roaming in circles, not knowing what to do. Last match: game pro, beating any robot that tries to win!

I want to get my vision program for next year rolled out with a bit of ML. That way, it would be able to learn how to do better the next time. That is why that computer that plays checkers was so good at it.
#5
Re: A Vision Program that teaches itself the game
Simply put, human brains are still much better at some things than computers... so your answer is no within the context of FRC. For a game as complex as an FRC game, do not expect a fully autonomous robot control system to outperform the combination of a human brain (or brains) plus control assist.
Computers tend to excel in games that are extremely well defined, with few variables. In chess/checkers/backgammon, you may only have a handful of possible moves to a few handfuls of spaces. A basic player is capable of looking at those moves and determining which is "best" right now. An expert player or computer iterates that forward, analyzing several layers deep: if I do this, my opponent's options change from set X to set Y, which gives me another set of options, and so on. You can essentially play the game out for each of the possible moves and look at which of your current moves has the best outcome.

If you are interested in this topic, which has intrinsic value (even if I wouldn't recommend applying it at the level you propose), I'd recommend writing a few game solver applications first. Start with a puzzle solver (like Sudoku) where you are essentially writing an algorithm to find the single "right" answer; see the sketch after this post.

Approaching a new game with a mindset "like a computer" could be fun as well. Just start describing your action table as you play out the game. If I'm located at mid-field and my opponent is between me and the goal, what are my options? What are his options? Is he faster than me? Does he have more traction/weight than me? Is he taller or shorter? Generally, if you are not capable of explaining all these things in words, adding the complexity of a computer will not help you. However, the process of describing them might lead to good strategies, whether implemented by a computer or a human driver.

-Steven
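For anyone who wants to try Steven's suggestion, a minimal backtracking Sudoku solver might look like the sketch below (C++; the board representation and function names are just one way to do it, not anyone's competition code). It walks the 81 cells in order, tries each legal digit, and undoes dead ends:

Code:
#include <cstdio>

// Returns true if value v can legally be placed at (r, c).
bool legal(int b[9][9], int r, int c, int v) {
    for (int i = 0; i < 9; ++i)
        if (b[r][i] == v || b[i][c] == v) return false;   // row and column
    int br = (r / 3) * 3, bc = (c / 3) * 3;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (b[br + i][bc + j] == v) return false;     // 3x3 box
    return true;
}

// Depth-first backtracking over the 81 cells.
bool solve(int b[9][9], int cell) {
    if (cell == 81) return true;                          // all cells filled
    int r = cell / 9, c = cell % 9;
    if (b[r][c] != 0) return solve(b, cell + 1);          // skip given clues
    for (int v = 1; v <= 9; ++v)
        if (legal(b, r, c, v)) {
            b[r][c] = v;                                  // try a value...
            if (solve(b, cell + 1)) return true;
            b[r][c] = 0;                                  // ...and backtrack
        }
    return false;                                         // dead end: undo
}

int main() {
    int board[9][9] = {{0}};                              // 0 = empty cell
    if (solve(board, 0)) {
        for (int r = 0; r < 9; ++r) {
            for (int c = 0; c < 9; ++c) printf("%d ", board[r][c]);
            printf("\n");
        }
    }
}

The same try/undo skeleton scales up to the move lookahead Steven describes: replace "digit is legal" with "move is legal" and "board full" with an evaluation of the resulting position.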
#6
Re: A Vision Program that teaches itself the game
Ok, I'm on my phone, let's see how this goes. It's storming at work, so I have time.
Machine learning for vision is rather common, but the approach you want to take isn't feasible because of your lack of training examples. The rule of thumb is that you need at least 50 training examples to start learning from. You simply won't have enough to get a result worth the effort, or any noticeable result at all. Moving on.

You can use machine learning for calculating distance from characteristics in the image. You have to have training examples, though. So you'd go out and record contour characteristics such as height, width, area, and center x and y, then manually input the distance. You do this from as many points as you can possibly bear to. Then you run a gradient descent algorithm (regression) or apply the normal equation. You can scale your data if you don't think it's linear, such as taking the natural log of contour height. For this example you are dealing with 6 dimensions, so it is impossible to visualise; you just have to guess what scaling is needed. Then you apply the squared error function (predicted - actual)^2, the square of your residual, and you want it to be as close to zero as possible. This can also be applied to game pieces. (A sketch of the gradient descent step follows this post.)

Another application is shooting pieces. You have a chart of inputs such as motor speed, angle, and distance, and the output is a 1 or a 0: making the basket or missing. You have a 3D plot now. There exists a line, or multiple lines virtually the same, in 3D space that guarantees making all your shots (given your robot is 100% consistent).

Another type of AI is path planning. If you have a depth map of all the objects in front of you, then you can apply A* path planning to get to a certain location on the field, given you have a means of knowing where you are on the field (cough cough, encoders on undriven wheels or a vision pose calculation).

I might have forgotten some things. Feel free to ask questions.

Disclaimer: all these calculations can be done virtually instantly using Octave or MATLAB. The A* is a bit more intensive; it is an iterative algorithm, to my understanding.

Last edited by faust1706 : 22-06-2014 at 15:16.
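As a concrete sketch of the regression step described above, here is batch gradient descent over the contour features in C++. The two data rows, the learning rate, and the iteration count are placeholders for illustration only; with real data you would record many more points, and Octave/MATLAB will do the same fit in a few lines, as the disclaimer says.

Code:
#include <cstdio>
#include <vector>

int main() {
    // Each row: {1 (bias), height, width, area, center.x, center.y}.
    // These two rows are made-up placeholder measurements.
    std::vector<std::vector<double>> X = {
        {1, 42, 80, 3360, 160, 120},
        {1, 21, 40,  840, 158, 118},
    };
    std::vector<double> y = {100, 200};        // hand-measured distances (cm)
    std::vector<double> theta(6, 0.0);         // parameters to learn
    const double alpha = 1e-8;                 // learning rate (needs tuning)

    for (int iter = 0; iter < 200000; ++iter) {
        std::vector<double> grad(6, 0.0);
        for (size_t i = 0; i < X.size(); ++i) {
            double pred = 0;
            for (int j = 0; j < 6; ++j) pred += theta[j] * X[i][j];
            double residual = pred - y[i];     // (predicted - actual)
            for (int j = 0; j < 6; ++j) grad[j] += residual * X[i][j];
        }
        // Step downhill on the squared-error cost.
        for (int j = 0; j < 6; ++j) theta[j] -= alpha * grad[j] / X.size();
    }
    for (int j = 0; j < 6; ++j) printf("theta[%d] = %g\n", j, theta[j]);
    return 0;
}

Scaling the features first (mean normalisation, or the natural-log trick mentioned above) lets you use a far larger learning rate and far fewer iterations.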
#7
Re: A Vision Program that teaches itself the game
This is possible. A couple of years ago I made a pong game that taught itself how to move the paddle to block the ball. It taught itself with a neural network; the fitness was based on how long it could play without losing.
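A toy version of that idea, for the curious (this is a sketch under my own assumptions -- a one-unit "network", simplified physics, and random hill climbing -- not the poster's actual code):

Code:
#include <cstdio>
#include <cstdlib>
#include <cmath>

// Fitness = steps survived (capped at 5000). The "network" is one linear
// unit: move up if w0*(ball - paddle) + w1*ballSpeed is positive.
int fitness(double w0, double w1) {
    double ball = 0.5, vy = 0.013, pad = 0.5;
    for (int t = 0; t < 5000; ++t) {
        ball += vy;
        if (ball < 0.0 || ball > 1.0) vy = -vy;      // bounce off the walls
        double a = w0 * (ball - pad) + w1 * vy;
        pad += (a > 0 ? 0.02 : -0.02);               // move the paddle
        if (std::fabs(ball - pad) > 0.1) return t;   // paddle missed the ball
    }
    return 5000;
}

int main() {
    srand(42);
    double w0 = 0, w1 = 0;
    int best = fitness(w0, w1);
    for (int gen = 0; gen < 200; ++gen) {
        // Mutate the weights; keep the mutant only if it rallies longer.
        double m0 = w0 + (rand() / (double)RAND_MAX - 0.5);
        double m1 = w1 + (rand() / (double)RAND_MAX - 0.5);
        int f = fitness(m0, m1);
        if (f > best) { best = f; w0 = m0; w1 = m1; }
    }
    printf("best survival: %d steps (w0 = %.2f, w1 = %.2f)\n", best, w0, w1);
}

The loop is the whole trick: propose a mutation, measure fitness, keep it only if the rally lasts longer.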
#8
Re: A Vision Program that teaches itself the game
http://en.wikipedia.org/wiki/A*_search_algorithm
^^ That seems like something I want in next year's program. I would like to have a tablet PC for the driver station, with the robot constantly generating a map of the field. If you click a location on the field map on the tablet, the robot could automatically navigate there with high accuracy. However, for that to be possible, the program would need to know where all the obstacles are.

How do you suggest getting the exact position of other robots and field elements? Should I have a Kinect (or a couple) outputting the distance to all the field elements? This gives me another question: what does the Kinect distance map look like? How do you get the distance measurement from a single pixel?
#9
Re: A Vision Program that teaches itself the game
Quote:
#10
Re: A Vision Program that teaches itself the game
Quote:
My team last summer got a bird's eye view of the objects in front of a Kinect to work: http://www.chiefdelphi.com/media/photos/39138 The next step was to implement A* path planning, but we never got it to work (it is still on our to-do list). (The objects in view are soccer balls; that is why they are all the same size in the top view.)

On a side note: SLAM is so cool. For anyone interested:

Quote:
Here is the code to calculate distance from the intensity of a pixel:

Code:
// center[i] is the center of a contour (an object of interest that
// passed all of our previous tests); it has x and y components.
Scalar intensity = depth_mat2.at<uchar>(center[i]);
// The tan() fit converts the Kinect's raw depth reading to metres;
// the final *100 gives centimetres.
double distance = 0.1236 * tan(intensity[0] * 4 / 2842.5 + 1.1863) * 100;

The Kinect is rather intensive. We ran 3 cameras this year and analysed every aspect of the game we possibly could with vision, and we got 8 fps on an ODROID. You'd most certainly have to have multiple on-board computers to handle multiple Kinects, but that may not be necessary if you only plan to move forward and you don't have omnidirectional drive capabilities.

I'm waiting for the Cheesy Poofs to release their amazing autonomous code so I can apply it to autonomous path planning (instead of their pre-drawn paths).

There are other alternatives to the Kinect; I personally prefer the ASUS Xtion. It is smaller, faster, and lighter.

Last edited by faust1706 : 23-06-2014 at 19:31.
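Building on that conversion, here is a sketch of how a depth frame could be collapsed into the top-down occupancy grid described above (the grid size, cell scale, field-of-view constant, and function name are my assumptions, not the team's actual code):

Code:
#include <opencv2/core/core.hpp>
#include <cmath>

const int GRID_W = 64, GRID_H = 32;              // cells (the crude grid above)
const double CELL_M = 0.1;                       // metres per cell
const double HFOV = 57.0 * 3.14159265 / 180.0;   // Kinect horizontal FOV

// Mark a grid cell for every depth pixel that lands inside the grid.
void depthToGrid(const cv::Mat& depth, bool grid[GRID_H][GRID_W]) {
    for (int y = 0; y < GRID_H; ++y)
        for (int x = 0; x < GRID_W; ++x) grid[y][x] = false;

    for (int r = 0; r < depth.rows; ++r)
        for (int c = 0; c < depth.cols; ++c) {
            uchar raw = depth.at<uchar>(r, c);
            // Same raw-to-metres fit as the distance snippet above.
            double d = 0.1236 * std::tan(raw * 4 / 2842.5 + 1.1863);
            if (d <= 0 || d > GRID_H * CELL_M) continue;          // out of range
            double ang = ((double)c / depth.cols - 0.5) * HFOV;   // pixel bearing
            int gx = GRID_W / 2 + (int)(d * std::sin(ang) / CELL_M); // sideways
            int gy = (int)(d * std::cos(ang) / CELL_M);              // forward
            if (gx >= 0 && gx < GRID_W && gy >= 0 && gy < GRID_H)
                grid[gy][gx] = true;                              // obstacle here
        }
}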
#11
Re: A Vision Program that teaches itself the game
I am not going to add to the reasons why you shouldn't do it on the scale you are seeking (because I think the idea and concept are awesome), but I will say one thing. I have worked with AI pathfinding algorithms to a decent extent as a game programmer; I freelanced, implementing specific AI algorithms and various game mechanics.
You have limited memory on an embedded system. The RoboRIO is of course a massive step up, but I am talking about 2 GB of RAM vs. 256 MB of RAM. A* is, in its most basic form, an informed Dijkstra pathfinding algorithm: where Dijkstra expands outward purely by accumulated movement cost, A* also adds a heuristic estimate of the remaining distance to the goal, which focuses the search. Depending on your method you will usually get an O((V+E)log(V)) or even O(V^2) algorithm. Pathfinding is an expensive task, and even if the field were a perfect size where a resolution of 64 by 32 cells worked, you could end up with an extremely large fringe if enough obstacles exist. In certain scenarios this could be a bit long for an autonomous period, and if proper threading isn't implemented, it could cripple your teleoperated period if you have to wait too long for the calculations to finish on a dynamically changing field of non-standard robots. (A minimal A* sketch follows this post.)

Also, this could work for shooting, but if the game calls for a much different scoring system, then your AI and learning may be further crippled by complexity... and you don't want a friendly memory error popping up and killing your robot for that round.

It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lbs. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way, then move to the robot, where efficiency will really matter. I can't speak for exactly how efficient you will need to be -- again, game developer -- but I really like your grid-of-pixels concept. Just be wary of how much time it will take and of the maintainability of your code.
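For reference, here is about the smallest useful grid A* -- Dijkstra plus an admissible Manhattan-distance heuristic, as discussed above. The 64 by 32 grid, unit step costs, and 4-connected moves are assumptions for illustration:

Code:
#include <cstdio>
#include <cstdlib>
#include <queue>
#include <vector>

const int W = 64, H = 32;

struct Node { int x, y; double f; };
struct Cmp {
    bool operator()(const Node& a, const Node& b) const { return a.f > b.f; }
};

// Manhattan distance: admissible for 4-connected, unit-cost moves.
double heuristic(int x, int y, int gx, int gy) {
    return std::abs(x - gx) + std::abs(y - gy);
}

// Returns the cost of the cheapest path, or -1 if the goal is unreachable.
double astar(const bool blocked[H][W], int sx, int sy, int gx, int gy) {
    static double g[H][W];                       // best known cost per cell
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) g[y][x] = 1e18;
    std::priority_queue<Node, std::vector<Node>, Cmp> open;
    g[sy][sx] = 0;
    open.push({sx, sy, heuristic(sx, sy, gx, gy)});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return g[n.y][n.x];
        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H || blocked[ny][nx])
                continue;
            double cost = g[n.y][n.x] + 1;       // unit step cost
            if (cost < g[ny][nx]) {              // found a better path
                g[ny][nx] = cost;
                open.push({nx, ny, cost + heuristic(nx, ny, gx, gy)});
            }
        }
    }
    return -1;
}

int main() {
    static bool blocked[H][W] = {{false}};       // empty field for the demo
    printf("path cost: %.0f\n", astar(blocked, 0, 0, W - 1, H - 1));
}

On an empty 64 by 32 grid the open list stays tiny; the memory and fringe problems described above show up with finer grids, many obstacles, and replanning every frame.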
#12
Re: A Vision Program that teaches itself the game
Aside from the many technical limitations, there is one glaring barrier to such a learning system. Vision systems play very specific roles in each game and in each robot. They typically track geometric, retroreflective targets, but the vision systems my team has created have had no say in the robot's logic -- they effectively turn the camera from an image sensor into a target sensor, streaming data about where the targets are back to the robot (see the sketch below).

For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "at what angle is the target?" as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as making a new vision system, and certainly more error-prone.
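To make that contract concrete, the per-frame record such a "target sensor" might stream back could look like this (a hypothetical sketch; the field names are invented, not any team's actual interface):

Code:
// Hypothetical per-frame output of a vision "target sensor". The robot's
// central control consumes these fields; the vision process never steers.
struct TargetReport {
    bool   targetPresent; // "is the target there?" (e.g. 2014's hot goal)
    double angleDegrees;  // bearing, as in Rebound Rumble / Ultimate Ascent
    double distanceCm;    // range estimate, if the game needs one
};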
#13
Re: A Vision Program that teaches itself the game
Quote:
Quote:
#14
Re: A Vision Program that teaches itself the game
Quote:
#15
Re: A Vision Program that teaches itself the game
Quote:
If you are saying that A* is too inefficient, what do you suggest I try instead? If anything, I could have 3 computers -- a vision processor, an AI processor, and the cRIO. Also, 64 by 32 px was just a crude example; by testing the performance of the system, I could tell whether I need to reduce the resolution. Otherwise, I could treat every cell as simply "go there" or not.

My programmer buddy and I would like to use an NVIDIA Jetson dev board. Should we use that for the AI, or for vision processing? We can use an ODROID for the other task! I have already figured out how to use OpenCV effectively and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!