Re: A Vision Program that teaches itself the game
Aside from the many technical limitations, there is one glaring barrier to such a learning system. Vision systems play very specific roles in each game and on each robot. They typically track geometric, retroreflective targets, but the vision systems my team has created have had no say in the robot's logic -- they effectively turn the camera from an image sensor into a target sensor, streaming data about where the targets are back to the robot. For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "at what angle is the target?" data as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as building a new vision system from scratch, and certainly more error-prone.
Re: A Vision Program that teaches itself the game
Quote:
If you are saying that A* is too inefficient, what do you suggest I try instead? If anything, I could have 3 computers -- vision processor, AI, and cRIO. Also, 64 by 32 px was just a crude example; by testing the performance of the system, I could tell whether I need to reduce the resolution. Otherwise, I could treat every cell as simply passable or not. My buddy programmer and I would like to use an NVIDIA Jetson dev board. Should we use that for AI, or for vision processing? We can use an ODROID for the other task! I have already figured out how to use OpenCV effectively and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!
Re: A Vision Program that teaches itself the game
Quote:
A* is likely your best bet. Pathfinding algorithms are known for being either time-consuming (if memory-restricted) or memory-consuming (if you want speed), and you are right to look at this as if it were a video game AI. A* is commonly used because it is fast while being reasonably smart. I would recommend the three basics to choose from: brush up on Dijkstra, A*, and best-first search. Each has trade-offs; most simply, you either get slow with the best path, or fast with a good-enough path. If you have the ability to multi-thread across several CPUs, you could possibly get away with a multi-threaded Dijkstra approach that quickly searches through the fringe and determines the true shortest path. But sticking to A* might be your best bet.

If you separate it onto 3 computers and each process has access to its own dedicated memory, then you could pull it off in terms of processing power; 1 GB should be well enough, I would think. I am still concerned, though, with how you plan on it being useful outside of an awesome project. On the field, I still think it will be hard to make it sufficiently adaptive to a dynamically changing field (though not impossible), and too slow to calculate the best path in a short time frame -- though I suppose it also depends on what you consider the best path. I think it's awesome and I honestly support the idea (because I don't have access to the same materials on my team :P); I'm just trying to gauge where your head is at. I also agree that if you follow through, you will definitely need to constantly tweak (or dynamically update) the resolution of the graph you are analyzing. I still have questions, such as how you tested your optimizations and how the data is being collected. AI is hard to discuss, since it all depends on your goals and how the system needs to adapt to its current scenario.
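For reference, a minimal grid-based A* looks something like the sketch below. The representation is an assumption on my part (0 = free cell, 1 = obstacle, 4-connected movement) and the Manhattan-distance heuristic is admissible for that movement model, so the returned path is shortest:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D occupancy grid (0 = free, 1 = obstacle).

    Manhattan distance is an admissible heuristic for 4-connected
    movement, so the first time the goal is popped, its path is optimal.
    """
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    best_g = {start: 0}                      # cheapest known cost per cell
    open_set = [(h(start), 0, start, [start])]
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if g > best_g.get(node, float("inf")):
            continue                         # stale queue entry, skip it
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```

Dropping the heuristic (returning 0 from `h`) turns the same loop into Dijkstra, which is one way to compare the "slow but optimal" and "fast and good enough" trade-off mentioned above on your own field maps.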
Re: A Vision Program that teaches itself the game
http://www.entropica.com
Saw a TED talk on this a while back and thought it was interesting. Probably not a feasible strategy for FRC, but it is a neat way of approaching games.
Re: A Vision Program that teaches itself the game
1 Attachment(s)
OK, so I found this great series of pages which I need to (thoroughly) read. It has quite a few algorithms -- their pros, cons, implementations, etc. The page is at: http://theory.stanford.edu/~amitp/GameProgramming/.
The approach I am thinking about is: Imagine that the field were a grid. In this grid, there are obstacles mapped. These obstacles would be like a wall or something that cannot be moved by the player. These obstacles would be predefined in a nice and lengthy configuration file, with a byte of data for each grid location. I have attached a text file containing how the configuration file will be saved and how the field data would be saved. I would like to call this something along the lines of field-description-file or so because it can save a map of the field. In the example config, I have 2013's game. 2014's game didn't really have any in-field obstacles. So, the configuration stores some basic field information. It contains the boundaries of the field and the static field elements, It also marks the space where the robot can move around. It is basically an array of the field elements. I plan to figure out where on the field I am, using a few sensors. For the location, I would use pose-estimation to figure out my distance and angle from the goal. Then, I would use a gyro for the rotation because both field ends are symmetrical. I guess some changes can be made to this FDF file. I can add more numbers, say 4 and 5. 4 could be a loading zone and 5 can be a goal in which the robot can shoot. The robot would calculate the height of the goal using the vision targets. Otherwise, a second FDF could be created. This FDF will contain a depth map. All the goals would be marked in the exact spot. The number would be the height (in whatever unit). I think this type of an interpreter could get the robot program closer to a program that can play all games. You just need to describe the field in the FDF and program basic behavior -- shooting positions, shooter direction, and etc. The robot could use a supervised learning -- regression for the ML algorithm. This way, the robot could learn where shooting is most consistent and over time gather the coordinates for the best shot! |
Re: A Vision Program that teaches itself the game
Quote:
Before you decide to do something like this, you need to consider what your goal is. If the goal is to become more competitive, I can guarantee that the time could be better spent improving another aspect of the team. However, if your primary goal is not to do as well as you can at the competition, but instead to learn about programming -- which is a valid goal -- then I would recommend realizing how big a project this would actually be. You are not the first person to want to do something like this; a user named davidthefat had a similar goal. He, and a few other teams, pledged to build a fully autonomous robot for the upcoming season, which never happened.

Look at the Cheesy Poofs. They've won events every year they've been around, they've been on Einstein a bunch, they've been world champs, they win the Innovation in Controls award, and their autonomous mode is very, very complicated. Their code, which can be viewed here, is well beyond the level of high school kids, and is the result of a few really brilliant kids, some very smart and dedicated mentors, and years and years of building up their code base. Yet all this fancy software lets them do is drive in a smooth curved path. Even then, it's not perfect: in the second final match on Einstein, something didn't go right, and they missed.

Just to get an idea of the scope of the project, read through these classes from the Cheesy Poofs' code. My friend does this sort of programming for a living, and it took him a good hour of reading the path-generator and spline portions of the code to really understand what's going on. I wouldn't attempt this project unless that code seems trivial to write and easy to understand. Before even thinking about software that lets the robot learn, or software for the robot to learn about its surroundings, you'd need to perfect something as simple as driving. As an exercise, try making an autonomous mode that drives the robot in a 10-foot-diameter circle.

It's much harder than you think. Again, I'm not trying to be harsh or discouraging; I'm trying to be realistic. A piece of software that can do all the tasks you've described is beyond the reach of anyone. Another very difficult exercise is figuring out where the robot is on the field and which way it is pointing. You can't just "use a gyro" -- there's much more to it.
Re: A Vision Program that teaches itself the game
I have come up with a plan for how to write something like this. The vision program will have a lot of manual setup, like describing the field, the obstacles, goals, and boundaries. Beyond that, the robot can start its ML training for the shooter with human intervention -- it will learn the sweet spots for shots as the drivers shoot the ball and make or miss. Over time, the data set will grow and the shots will become more and more accurate, just like our drivers' shots.

When we learn a robot's capabilities, this is how we do it: shoot once. Was it low? Was it high? Reposition. Try again. This is quite similar to what the supervised learning algorithm will do. Using regression, the best shot posture can be estimated even where there is no data point; it just needs a couple of data points to interpolate from. The main code change that will still be needed is that the retroreflective targets will have to be coded in by hand -- it would be extremely difficult to write an ML program that finds the targets and always picks the correct one for pose estimation. Basically, a great portion of the game can be taught to the robot quite easily -- moving around the field, etc. However, as you said, it is quite hard to determine the robot's location on the field. A gyro will at least give the robot its direction, so it can tell which side of the field it is looking at. The pathfinding will be implemented almost exactly as if the program were a video game! I'm not trying to make a fully autonomous robot, but instead a robot with the level of AI/ML needed to assist the drivers and make gameplay more efficient. I am thinking about using A* quite a bit. Even when the robot is stationary, a path plan could be constantly regenerated to keep the robot from drifting without brakes, etc. However, that is just a maybe, because it could create quite a bit of lag in the robot's motion when a driver wants to run the bot.
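A toy version of the make/miss learning idea could look like the sketch below. The neighborhood-vote approach here is my stand-in for real regression, every name is made up, and it assumes field position is the dominant factor in whether a shot scores:

```python
class ShotLearner:
    """Toy 'sweet spot' learner: log each shot's field position and
    whether it scored, then recommend the recorded position whose local
    neighborhood has the highest make rate.
    """

    def __init__(self, radius=1.0):
        self.radius = radius     # neighborhood size, in field units
        self.shots = []          # list of (x, y, made) samples; made is 0 or 1

    def record(self, x, y, made):
        self.shots.append((x, y, made))

    def make_rate(self, x, y):
        """Fraction of recorded shots near (x, y) that scored."""
        near = [m for sx, sy, m in self.shots
                if (sx - x) ** 2 + (sy - y) ** 2 <= self.radius ** 2]
        return sum(near) / len(near) if near else 0.0

    def best_spot(self):
        """Made-shot position with the best local make rate (None if no makes)."""
        made = [(x, y) for x, y, m in self.shots if m]
        return max(made, key=lambda p: self.make_rate(*p), default=None)
```

The "shoot, observe, reposition, try again" loop described above maps onto `record(...)` calls during practice matches; `best_spot()` is then what the driver-assist code would steer toward. A real implementation would add more features (angle to goal, shooter speed) and a proper regression model.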
Re: A Vision Program that teaches itself the game
My hunch is that accomplishing the original post's goal involves solving (or integrating the solutions to) not dozens, but a few hundred problems; and my hunch is that converting the vision system's raw imagery into useful estimates of the states of the important objects in a match will be the hardest part.
To help wrap your head around the job(s), imagine the zillions of individual steps involved in carrying out the following: create a simple simulation of the field, robots, and game objects for any one year's game. Use that simulation to generate the sensor data (vision data) an autonomous robot would receive during a match. Be sure to include opponents actively interfering with your robot. Add in the internal-state data the robot would have describing its own state. Then ask yourself: what learning algorithms do I apply to this, and how will I implement them?

It's a daunting job; but if folks can get Aibo toy dogs to play soccer, you could probably come up with a (simulated, initially?) super-simple robot that could put a few points on the board during a typical FRC match. It's veerrryyyy unlikely that an implementation will make any human opponents nervous in this decade (most other posters have said the same); but I think what the OP described can be made, if your goal is simply playing, and your goal isn't winning against humans.

Blake
Re: A Vision Program that teaches itself the game
Quote:
Quote:
However, now that I have decomposed the program idea, it seems quite practical (and actually useful) to automate some parts of the game. Things like autonomous driving, holding position, and manipulating the game piece seem quite simple to implement (even without AI/ML). Just a transform with OpenCV targeting a RotatedRect can accomplish finding a game piece. Using a RotatedRect again, you can filter for robots. As faust1706 explained to me a long time ago: just color-filter a bumper, use minAreaRect to crop it, then add two lines dividing the bumper into 3 pieces; perform minAreaRect on this again and use the height of the middle section to approximate the robot's distance from an average bumper height. I can tell that pathfinding will work, because I am treating this just as if it were a video game!

Say I was tracking the ball as it was being shot. I could triangulate its height using some trigonometry, and its distance using a Kinect distance map. I could get the hotspot for a made shot using the Kinect's distance readings and the height measurements. Now, say the ball misses the goal. The robot could estimate the error -- say, 6 inches low -- and try to figure out how to make the shot better. For example, in the 2014 game, if the robot missed a shot, it would see whether it was low or high; if low, it could move forward a bit, and if that made the problem worse, it could move back, etc. This type of code could be quite crude, but it would still get the job done. If I used ML for this instead, the robot would surely miss the first few shots, but it could easily be more accurate than a human thereafter. If we want, we can also add data points manually that we already know -- this is supervised learning. In short, the right approach is not to write a fully autonomous program, but instead a program that allows rapid system integration.
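The bumper-height distance trick above boils down to pinhole-camera similar triangles, independent of how the bumper pixels are found. A hedged sketch (the focal length in pixels must come from calibrating your own camera, and the 5-inch bumper height here is an assumption, not a rule citation):

```python
def distance_from_bumper(pixel_height, focal_length_px, real_height_in=5.0):
    """Estimate range to a robot from the apparent height of its bumper.

    Pinhole-camera similar triangles: distance = f * H / h, where
      f = camera focal length in pixels (measure it by calibration),
      H = true bumper height (assumed 5 inches here; check your game's rules),
      h = measured bumper height in pixels (e.g. from minAreaRect).
    Returns the distance in the same units as real_height_in.
    """
    if pixel_height <= 0:
        raise ValueError("pixel height must be positive")
    return focal_length_px * real_height_in / pixel_height
```

Measuring the middle third of the bumper, as described in the post, helps because the ends of a tilted bumper foreshorten more than its center, so the middle slice gives a more stable `pixel_height`.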
If I write my program right, I would only need to code a few pieces:
-Vision targets and pose estimation/distance calculations
-Manipulating the game piece -- what it looks like, distance, pickup, goal, etc.
-Calculating what went wrong while scoring
-Field (don't need to code, only configure)
And certainly, there are many things that a human player will still perform best. However, my main concern now is how to generate a map of the field. The Kinect will offer a polar view of the field if programmed correctly. How do I create a Cartesian grid of all the elements? For example, instead of the Kinect reporting:
Code:

Code:
Also, say that the path is:
Code:
00000001000000000

Thanks for all your time, mentors and other programmers!
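Converting polar readings (range and bearing relative to the robot) into a field-fixed Cartesian grid is a rotation plus a translation per reading. A sketch, assuming the robot's pose is already known (from the gyro and vision pose estimation discussed above) and bearings are measured counterclockwise from the robot's heading:

```python
import math

def polar_to_grid(readings, robot_x, robot_y, robot_heading_deg, cell_size=12.0):
    """Mark occupied cells on a field-fixed Cartesian grid.

    readings: iterable of (distance, bearing_deg) pairs from a range
    sensor, bearing relative to the robot's heading (CCW positive).
    robot_x/robot_y: robot position in field coordinates, same units as
    distance. cell_size: grid resolution in those units (12 in = 1 ft cells).
    Returns a set of (col, row) cell indices.
    """
    occupied = set()
    for dist, bearing in readings:
        theta = math.radians(robot_heading_deg + bearing)
        x = robot_x + dist * math.cos(theta)   # rotate into field frame...
        y = robot_y + dist * math.sin(theta)   # ...and translate by robot pose
        occupied.add((int(x // cell_size), int(y // cell_size)))
    return occupied
```

Cells produced this way can be written straight into the FDF-style occupancy grid, merging live sensor data with the predefined static obstacles.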
Re: A Vision Program that teaches itself the game
Quote:
Code:
00000001200000000
Re: A Vision Program that teaches itself the game
I think what I will do is calculate the angle between each pair of consecutive points, updating the robot's position and heading constantly. The gyro will be used for accurate direction measurements, and whenever the robot is looking at the goal, the gyro will be recalibrated, yielding maximum accuracy!
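Computing the heading for each leg between consecutive waypoints is a direct application of atan2. A minimal sketch (angles in degrees, counterclockwise from the +x axis -- match this convention to however your gyro is zeroed):

```python
import math

def path_headings(points):
    """Heading, in degrees CCW from +x, that the robot should face to
    drive each leg of a waypoint path. For n points, returns n-1 headings,
    one per consecutive pair, using quadrant-aware atan2.
    """
    return [math.degrees(math.atan2(y1 - y0, x1 - x0))
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
```

Feeding the cells returned by a grid pathfinder through this gives the turn commands between legs; the gyro then closes the loop on each commanded heading.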
Copyright © Chief Delphi