A Vision Program that teaches itself the game


yash101
22-06-2014, 01:30
I am taking an AI/ML course online. I am just wondering if it would be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself?

While it seems quite hard, instead of writing a different program every year, it might be possible to write one program that could play every game.

Such a program would need to be taught the game and what it needs to do. However, it would also need to learn how to use its own mechanisms.

The last question I have is: has this ever been done? It seems extremely hard, so it would be pointless to reinvent the wheel.

What would be the best environment to build something like this in? Currently, I am learning with Octave; however, OpenCV seems to have a lot of useful components, including the ML module.

EricH
22-06-2014, 01:57
I'm not a programmer, but no.

The reason is that while a single program may learn the game every year, it has to adapt to the different robots that are built. Some things stay the same, other things change -- sometimes pneumatics are an advantage and sometimes not, for example. So the program will need to be changed to fit the robot every year, whether the game changes are minor or major.

Now add to that the fact that no robot in FRC history has ever been fully autonomous beyond autonomous mode or a "drive straight and don't stop" routine, and the odds are VERY much against you pulling it off this side of grad school.

SoftwareBug2.0
22-06-2014, 02:20
I am taking an AI/ML course online. I am just wondering if it would be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself?

While it seems quite hard, instead of writing a different program every year, it might be possible to write one program that could play every game.

Such a program would need to be taught the game and what it needs to do. However, it would also need to learn how to use its own mechanisms.

The last question I have is: has this ever been done? It seems extremely hard, so it would be pointless to reinvent the wheel.

What would be the best environment to build something like this in? Currently, I am learning with Octave; however, OpenCV seems to have a lot of useful components, including the ML module.

I would not go into this sort of project with any sort of expectation of success. I've fiddled a bit with general game-playing programs; in fact, I wrote one for a science fair when I was in high school. It was successful, but the success criterion was that the results were statistically better than random moves. That's a much lower bar than I would feel safe with for something controlling a 120 lb. mobile robot.

I know a couple of years ago state of the art game-playing systems could choke unexpectedly on even relatively simple board games. Unless things have improved by leaps and bounds in the last couple of years I wouldn't even want to be in the same room as a robot controlled by one of these things.

Possibly interesting:
http://en.wikipedia.org/wiki/General_game_playing
http://games.stanford.edu/index.php/ggp-competition-aaai-14

yash101
22-06-2014, 04:31
I know that it would be required to at least program in all the I/O, etc. However, I believe the best robot would be one that gets better at the game with experience. First match: roaming in circles, not knowing what to do. Last match: game pro, beating any robot that tries to win!

I want to get my vision program for next year rolled out with a bit of ML. This way, it would be able to learn how to do better the next time. That is why that computer that plays checkers was so good at playing checkers.

Steven Smith
22-06-2014, 15:06
Simply put, human brains are still much better at some things than computers... so within the context of FRC, the answer is no. For a game as complex as an FRC game, do not expect a fully autonomous robot control system to outperform the combination of a human brain(s) + control assist.

Computers tend to excel in games that are extremely well defined with few variables. For chess/checkers/backgammon, you may only have a handful of possible moves to a few handfuls of spaces. A basic player is capable of looking at those moves and determining which is "best" right now. An expert player or computer iterates that forward, analyzing several layers deep. If I do this, my opponent's options change from set X to set Y, which gives me another set of options, etc. You can essentially play the game out for each of the possible moves, and look at which of your current moves has the best outcome.
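To make that lookahead concrete, here is a toy C++ sketch: minimax on single-pile Nim (take 1 to 3 stones, last stone wins). The game and numbers are illustrative only; an FRC match has vastly more states than any board game.

#include <algorithm>
#include <iostream>

// Score from the first player's point of view: +1 = forced win, -1 = forced loss.
int minimax(int stones, bool myTurn) {
    if (stones == 0)                     // whoever just moved took the last stone
        return myTurn ? -1 : +1;
    int best = myTurn ? -1 : +1;
    for (int take = 1; take <= std::min(stones, 3); ++take) {
        int v = minimax(stones - take, !myTurn);
        best = myTurn ? std::max(best, v) : std::min(best, v);
    }
    return best;
}

int main() {
    for (int pile = 1; pile <= 10; ++pile)
        std::cout << "pile " << pile << ": "
                  << (minimax(pile, true) > 0 ? "win" : "loss") << "\n";
}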

If you are interested in this topic, which has intrinsic value (even if I wouldn't recommend applying it at the level you propose), I'd recommend writing a few game solver applications first. Start with a puzzle solver (like Sudoku) where you are essentially writing an algorithm to find the single "right" answer.

Approaching a new game with a mindset "like a computer" could be fun as well. Just start describing your action table when you play out the game. If I'm located at mid-field and my opponent is between me and the goal, what are my options? What are his options? Is he faster than me? Does he have more traction/weight than me? Is he taller or shorter? Generally, if you are not capable of explaining all these things in words, adding the complexity of a computer will not help you. However, the process of describing them might lead to good strategies, whether implemented by a computer or a human driver.

-Steven

faust1706
22-06-2014, 15:09
Ok, I'm on my phone. Let's see how this goes. It's storming at work so I have time.

Machine learning for vision is rather common, but the approach you want to take isn't feasible due to your lack of training examples. The rule of thumb is that you need at least 50 training examples to start learning from. You simply won't have enough training examples to get a result worth the effort, or any noticeable result at all.

Moving on. You can use machine learning for calculating distance from characteristics in the image. You have to have training examples, though. So you'd go out and record contour characteristics such as height, width, area, and center x and y; then you manually input the distance. You do this from as many points as you can possibly bear to. Then you run a gradient descent algorithm (regression) or apply the normal equation. You can scale your data if you don't think it's linear, such as taking the natural log of the contour height. For this example, you are dealing with 6 dimensions, so it is impossible to visualise; you just have to guess what scaling is needed. Then you apply the squared error function, (predicted - actual)^2 -- the square of your residual. You want this to be as close to zero as possible. This can also be applied to game pieces.
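To make the fitting step concrete, here is a minimal sketch of the normal-equation approach in C++ with OpenCV (the library already discussed in this thread). The feature values and measured distances below are invented for illustration; a real data set would need many more samples.

#include <opencv2/core/core.hpp>
#include <iostream>

int main() {
    // Each row: [1 (bias), contour height, width, area] for one hand-measured sample.
    float X[5][4] = {
        {1,  40, 20,  800},
        {1,  60, 30, 1800},
        {1,  80, 40, 3200},
        {1, 100, 50, 5000},
        {1, 120, 60, 7200},
    };
    float y[5] = {300, 200, 150, 120, 100};  // manually measured distances, cm

    cv::Mat A(5, 4, CV_32F, X), b(5, 1, CV_32F, y), theta;
    // DECOMP_NORMAL makes solve() use the normal equation A^T A theta = A^T b;
    // DECOMP_SVD keeps it stable when features are nearly collinear.
    cv::solve(A, b, theta, cv::DECOMP_NORMAL | cv::DECOMP_SVD);

    // Predict the distance for a new contour's features.
    cv::Mat f = (cv::Mat_<float>(1, 4) << 1, 70, 35, 2450);
    cv::Mat pred = f * theta;  // 1x4 times 4x1 = 1x1
    std::cout << "predicted distance: " << pred.at<float>(0, 0) << " cm" << std::endl;
}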

Another application is shooting pieces. You have a chart of inputs such as motor speed, angle, and distance, and the output is a 1 or 0: a made basket or a miss. You have a 3D plot now. There exists a line (or multiple, virtually identical lines) in 3D space that guarantees making all your shots (given your robot is 100% consistent).
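A sketch of fitting that make/miss boundary with plain logistic regression and gradient descent. The shot data is made up, and the features are pre-scaled to comparable ranges (as suggested above) so a fixed learning rate behaves:

#include <cmath>
#include <cstdio>

int main() {
    // Features: [bias, motor speed 0..1, angle/90, distance/10 m]; label: 1 = made shot.
    double X[6][4] = {
        {1, 0.80, 0.50, 0.30}, {1, 0.85, 0.50, 0.35}, {1, 0.90, 0.44, 0.40},
        {1, 0.60, 0.50, 0.40}, {1, 0.70, 0.33, 0.50}, {1, 0.95, 0.67, 0.20},
    };
    int y[6] = {1, 1, 1, 0, 0, 0};

    double w[4] = {0, 0, 0, 0};
    for (int iter = 0; iter < 5000; ++iter) {         // batch gradient descent
        double grad[4] = {0, 0, 0, 0};
        for (int i = 0; i < 6; ++i) {
            double z = 0;
            for (int j = 0; j < 4; ++j) z += w[j] * X[i][j];
            double p = 1.0 / (1.0 + std::exp(-z));    // sigmoid: P(make)
            for (int j = 0; j < 4; ++j) grad[j] += (p - y[i]) * X[i][j];
        }
        for (int j = 0; j < 4; ++j) w[j] -= 0.1 * grad[j];
    }

    // Probability that a candidate shot (speed 0.85, 45 deg, 3.2 m) goes in.
    double z = w[0] + w[1] * 0.85 + w[2] * 0.50 + w[3] * 0.32;
    std::printf("P(make) = %.2f\n", 1.0 / (1.0 + std::exp(-z)));
}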

Another type of AI is path planning. If you have a depth map of all the objects in front of you, then you can apply A* path planning to get to a certain location on the field, given you have a means of knowing where you are on the field (cough cough, encoders on undriven wheels, or a vision pose calculation).

I might have forgotten some things. Feel free to ask questions.

Disclaimer: all these calculations can be done virtually instantly using Octave or MATLAB. The A* is a bit more intensive; it is an iterative algorithm, to my understanding.

Bpk9p4
22-06-2014, 15:56
This is possible. A couple of years ago I made a Pong game that taught itself how to move the paddle to block the ball. It taught itself with a neural network. The fitness was based on how long it could play without losing.

yash101
23-06-2014, 01:32
http://en.wikipedia.org/wiki/A*_search_algorithm

^^ That seems like something I want in next year's program. I would like to have a tablet PC for the driver station, with the robot constantly generating a map of the field. If you click a location on the field on the tablet, the robot could automatically navigate there with high accuracy.

However, for that to be possible, the program would need to know where all the obstacles are. How do you suggest getting the exact position of other robots and field elements? Should I have a Kinect (or a couple), outputting the distance to all the field elements?

This gives me another question. What does the Kinect distance map look like? How do you get the distance measurement from a single pixel?

NWChen
23-06-2014, 01:52
the program would need to know where all the obstacles are. How do you suggest getting the exact position of other robots and field elements?

In addition to locating other robots and field elements, you also need to know the position of your own robot, e.g. with simultaneous localization and mapping (http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping).

SoftwareBug2.0
23-06-2014, 03:25
How do you suggest getting the exact position of other robots and field elements? Should I have a Kinect (or a couple), outputting the distance to all the field elements?

This gives me another question. What does the Kinect distance map look like? How do you get the distance measurement from a single pixel?

You will not be able to know the exact positions of everything, and thankfully you do not need to. It's standard practice to go further around obstacles than strictly necessary to allow for inaccuracies. This would happen when you're deciding on the nodes to feed into A*, or whatever other algorithm you want to use.

For example:
If you use a Voronoi diagram, you'll get nodes that are maximally far from obstacles.
http://en.wikipedia.org/wiki/Voronoi_diagram

And before using a visibility graph, you'd typically expand all of the obstacles by some constant distance.
http://en.wikipedia.org/wiki/Visibility_graph
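One way to do that expansion step with OpenCV: treat the field grid as an image and dilate the obstacles by the robot's radius, so the planner can treat the robot as a point. Grid size and radius below are made up.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

int main() {
    cv::Mat grid = cv::Mat::zeros(32, 64, CV_8U);                      // 0 = free, 255 = obstacle
    cv::rectangle(grid, cv::Rect(30, 12, 4, 8), cv::Scalar(255), -1);  // a filled field element

    int robotRadiusCells = 3;  // assumed: robot half-width, in grid cells
    cv::Mat kernel = cv::getStructuringElement(
        cv::MORPH_ELLIPSE,
        cv::Size(2 * robotRadiusCells + 1, 2 * robotRadiusCells + 1));
    cv::Mat inflated;
    cv::dilate(grid, inflated, kernel);  // every obstacle grows by robotRadiusCells in all directions
}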

The Kinect's distance map is pretty simple: you get a 2-d array with a distance for every pixel.

JohnFogarty
23-06-2014, 09:29
A lot of the stuff I do revolves around object avoidance and detection, along with some simple and complex search-pattern algorithms, and I have to say a Kinect would not be my first choice for the first bit. I got the chance to use a lidar this season, and boy was it nice.

This is the sort of stuff I work with on my NASA Centennial Challenge sample return team, though I would also say that even the best software developers in the challenge couldn't design an autonomous robot to combat the decision making of humans at this point in time.

JamesBrown
23-06-2014, 11:04
http://en.wikipedia.org/wiki/A*_search_algorithm

^^ That seems like something I want in next year's program. I would like to have a tablet PC for the driver station, with the robot constantly generating a map of the field. If you click a location on the field on the tablet, the robot could automatically navigate there with high accuracy.



This is a cool concept, and I definitely don't want to discourage you from exploring AI to the fullest, however I want to ensure that your expectations are realistic.

First, in FRC, assuming the structure of the game does not change drastically going into next year, there is no way you will know where everything on the field is. There are other robotics competitions where this is feasible; FRC is not one of them. You can incorporate some awesome sensors into the robot and give it a ton of information, but it will never be able to process all of the relevant information as accurately and quickly as a human can.

Second, really think about why you would want to do this. Does it offer a competitive advantage? Do you want to do it because it will look cool? What would be cooler: a well-driven robot with some automated features to assist the driver that performs very well, or a robot that learned how to play the game itself but functions poorly compared to human-operated robots built by teams with less programming expertise? Which is more inspirational to students (that is the goal in the end, right)?

AI has its place; I spent a lot of time in college studying AI and robotics. Then I got out into the real world and realized that, as cool as AI is, it usually isn't the right solution. For robotics, typically your first question is whether this can be done faster, better, or safer with a human controlling the robot. Then, in order of preference, we go through:
1.) How do we control the environment?
2.) How do we react to the uncontrollable aspects of the environment?
3.) How do we improve our reactions?

It isn't until you get to #3 that AI comes up. Even then, in most applications it is easier to "teach" by giving directions directly on how to improve, rather than letting the robot learn on its own.

I love AI and there are some great competitions out there where it is key to winning. FRC is not one of them. The top AI labs in the world could not write an FRC legal AI that could beat student drivers in any FRC game (other than perhaps 2001 (? the year where 71 grabbed both goals and then shuffled) ).

My advice would be that instead of choosing an algorithm now and looking for an application, you learn all you can now. Then, when the game is released, look for the tasks a computer CAN do better than a human. Some examples I can think of are aiming a shooter (2006), adjusting the height of an arm (2005, 2007, 2011 and others), and automating those features. If precise positioning on the field is important, then maybe that is what you need to automate; however, I don't think that trying to generate a full field map is the best idea. Instead, let the driver handle gross movement and then automate the fine adjustment based on vision (or other) sensor feedback.

yash101
23-06-2014, 12:10
I am thinking about the problem for A* a bit more universally. Say the field were divided into pixels, maybe 64 px long and 32 px wide. The depth sensor would find the obstacles and mark the pixels they fall under as places to avoid. The field would be described in an extremely detailed configuration file. Some pixels, like field elements and walls, would be dead zones -- navigate away from them. Anything else would be something that can move, be it a gamepiece or another robot. The algorithm could be programmed to treat these as low priority: ram into them only if there is no way around, or if the way around is too far or impractical. The robot, then, will have the knowledge to navigate around the field with extremely high accuracy -- higher than a human player could achieve. After the map is generated, the cRIO will be sent a signal to turn, and it will turn for as long as it is told to. This will get the robot aligned to start. Next, the cRIO will be told to move forward. These turn and forward commands will be sent constantly, keeping the robot heading in the right direction.
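For what it's worth, A* over exactly that kind of grid is compact. A minimal 4-connected C++ sketch (the grid, start, and goal are made up; a real version would add diagonal moves and could model the soft "low priority" cells above as higher step costs):

#include <cstdio>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int x, y; double f; };
struct Cmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

int main() {
    const int W = 16, H = 8;
    int grid[H][W] = {};                        // 0 = free, 1 = dead zone
    for (int y = 1; y < 7; ++y) grid[y][8] = 1; // a wall with gaps at top and bottom

    int sx = 1, sy = 4, gx = 14, gy = 4;
    std::vector<double> g(W * H, 1e9);          // best cost found to each cell
    std::vector<int> parent(W * H, -1);
    std::priority_queue<Node, std::vector<Node>, Cmp> open;
    g[sy * W + sx] = 0;
    open.push({sx, sy, double(std::abs(sx - gx) + std::abs(sy - gy))});
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};

    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) break;
        for (int k = 0; k < 4; ++k) {
            int nx = n.x + dx[k], ny = n.y + dy[k];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
            double ng = g[n.y * W + n.x] + 1;   // uniform step cost
            if (ng < g[ny * W + nx]) {          // found a cheaper way into this cell
                g[ny * W + nx] = ng;
                parent[ny * W + nx] = n.y * W + n.x;
                double h = std::abs(nx - gx) + std::abs(ny - gy);  // Manhattan heuristic
                open.push({nx, ny, ng + h});    // f = g + h
            }
        }
    }
    for (int i = gy * W + gx; i != -1; i = parent[i])  // walk the path back, goal to start
        std::printf("(%d,%d) ", i % W, i / W);
}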

From what everyone is saying, it seems as though a vision program that teaches itself the game would be highly impractical (and impossible for someone at my level).

However, there are a couple of AI algorithms that would be lovely, like a robot that uses A* to navigate to a location automatically, or some machine learning algorithm to perfect the robot's shots.

Also, JamesBrown, I want to try AI for a few reasons. I want to try something that is challenging and that, if I perform well enough, will pay back in the end. AI is something cool. I know many places where some ML/AI would be just awesome and would increase the reliability of many systems. Only a very small part of my interest is bragging rights or showing off.

faust1706
23-06-2014, 12:31
In addition to locating other robots and field elements, you also need to know the position of your own robot, e.g. with simultaneous localization and mapping (http://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping).

I don't know if you picked this up from my long post, but the method I proposed was undriven wheels with encoders, or doing a pose calculation on a vision target. If you're really wondering what camera pose is, see OpenCV's camera calibration and 3D reconstruction docs: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html

My team last summer got a bird's-eye view of the objects in front of a Kinect to work: http://www.chiefdelphi.com/media/photos/39138 The next step was to implement A* path planning, but we never got it to work (it is still on our to-do list). (The objects in view are soccer balls; that is why they are all the same size in the top view.)

On a side note: SLAM is so cool. For anyone interested: http://research.microsoft.com/pubs/155378/ismar2011.pdf



This gives me another question. What does the Kinect distance map look like? How do you get the distance measurement from a single pixel?

Yash, check the Dropbox (PM me your email if you want to be included in the Dropbox. It has... 23 sample vision programs, ranging from our 2012-2014 code, to game piece detection for 2013 and 2014, to depth programming. I passed the torch of computer vision to a student who uses GitHub, so don't be surprised if it gets switched over): TopDepthTest. It is the program that the image I linked to is from. The Kinect depth map encodes distance as a pixel value (colour), for those of you who aren't aware.

Here is the code to calculate distance from the intensity of a pixel:

Scalar intensity = depth_mat2.at<uchar>(center[i]); // read the 8-bit depth value at the contour's center pixel
double distance = 0.1236 * tan(intensity[0] * 4 / 2842.5 + 1.1863) * 100; // raw Kinect reading -> metres, then x100 for centimetres

center[i] is the center of a contour (an object of interest that passed all of our previous tests); it has an x and a y component.

The Kinect is rather intensive. We ran 3 cameras this year and analysed every aspect of the game we possibly could with vision, and we got 8 fps on an ODROID. You'd most certainly have to have multiple on-board computers to handle multiple Kinects, but that may not be necessary if you only plan to move forward and you don't have omnidirectional drive capabilities.

I'm waiting for the Cheesy Poofs to release their amazing autonomous code so I can apply it to autonomous path planning (instead of their pre-drawn paths).

I have to say a Kinect would not be my first choice for the first bit. I got the chance to use a lidar this season, and boy was it nice.

There are other alternatives to the Kinect; I personally prefer the Asus Xtion. It is smaller, faster, and lighter.

MatthewC529
26-06-2014, 20:01
I am not going to contribute to the reasons why you shouldn't do it on the scale you are seeking (because I think the idea and concept is awesome), but I will say one thing. I have worked with AI pathfinding algorithms to a decent extent as a game programmer; I freelanced implementing specific AI algorithms and various game mechanics.

You have limited memory on an embedded system like the RoboRIO. Of course the RoboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is, in its most basic form, an informed Dijkstra pathfinding algorithm: where Dijkstra expands nodes purely by accumulated path cost, A* adds a heuristic estimate of the remaining distance to guide the search. Depending on your method you will usually get an O((V+E)log(V)) or even O(V^2) algorithm. Pathfinding is an expensive task, and if the field were a size where a resolution of 64 px by 32 px worked, then you could end up with an extremely large fringe if enough obstacles exist. In certain scenarios this could be a bit long for an autonomous period, and if proper threading isn't implemented it could cripple your teleoperated period if you have to wait too long for the calculations to finish in a dynamically changing field of non-standard robots.

This could also work for shooting, but if the game calls for a much different scoring system then your AI and learning may be further crippled by complexity... Also, you don't want a friendly memory error popping up and killing your robot for that round.

It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lb. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way, then move to the robot where efficiency will really matter. I can't speak to exactly how efficient you will need to be... again, game developer... but I really like your concept of pixels; just be wary of how much time it takes and of the maintainability of your code.

Ginto8
26-06-2014, 22:33
Aside from the many technical limitations, there is one glaring barrier to such a learning system. Vision systems play very specific roles in each game and in each robot. They typically track geometric, retroreflective targets, but the vision systems my team has created have had no say in the robot's logic -- they effectively turn the camera from an image sensor into a target sensor, streaming data about where the targets are back to the robot. For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "At what angle is the target?" as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as making a new vision system, and certainly more error-prone.

SoftwareBug2.0
27-06-2014, 00:37
You have limited memory on an embedded system like the RoboRIO. Of course the RoboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is, in its most basic form, an informed Dijkstra pathfinding algorithm: where Dijkstra expands nodes purely by accumulated path cost, A* adds a heuristic estimate of the remaining distance to guide the search. Depending on your method you will usually get an O((V+E)log(V)) or even O(V^2) algorithm. Pathfinding is an expensive task, and if the field were a size where a resolution of 64 px by 32 px worked, then you could end up with an extremely large fringe if enough obstacles exist.

I don't quite understand what the big deal is. A 64x32 grid is only 2048 nodes. I'd expect that you could have an order of magnitude more before you ran into speed problems. I also don't think you'd have memory issues: if you assume that you have 256 MB of memory, half of which is already used, and 2048 nodes, then you'd get 64 KB per node. That seems like plenty.

faust1706
27-06-2014, 01:10
For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "At what angle is the target?" as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as making a new vision system, and certainly more error-prone.

The two tasks you just described are in themselves not difficult to achieve through a vision program (an example method for this is called cascade training); the real problem is how the robot would act on it. This task would be a no-brainer for Yash; in fact, he has already done it for the 2014 game, if I remember correctly. This only looks at one aspect of the game, though. It also has to know what is in front of it, find game pieces, know whether it has game pieces, and go where it needs to in order to score or pass. We did most of this in our code this year with 3 cameras, and we were lucky to get 10 fps. It would take months at least for there to be enough generations of the learning algorithm to produce any noticeable result.

It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lb. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way, then move to the robot where efficiency will really matter. I can't speak to exactly how efficient you will need to be... again, game developer... but I really like your concept of pixels; just be wary of how much time it takes and of the maintainability of your code.


Isn't there a simulation for each year's game? In my mind, that would be a perfect place to start.

yash101
27-06-2014, 05:10
I am not going to contribute to the reasons why you shouldn't do it on the scale you are seeking (because I think the idea and concept is awesome), but I will say one thing. I have worked with AI pathfinding algorithms to a decent extent as a game programmer; I freelanced implementing specific AI algorithms and various game mechanics.

You have limited memory on an embedded system like the RoboRIO. Of course the RoboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is, in its most basic form, an informed Dijkstra pathfinding algorithm: where Dijkstra expands nodes purely by accumulated path cost, A* adds a heuristic estimate of the remaining distance to guide the search. Depending on your method you will usually get an O((V+E)log(V)) or even O(V^2) algorithm. Pathfinding is an expensive task, and if the field were a size where a resolution of 64 px by 32 px worked, then you could end up with an extremely large fringe if enough obstacles exist. In certain scenarios this could be a bit long for an autonomous period, and if proper threading isn't implemented it could cripple your teleoperated period if you have to wait too long for the calculations to finish in a dynamically changing field of non-standard robots.

This could also work for shooting, but if the game calls for a much different scoring system then your AI and learning may be further crippled by complexity... Also, you don't want a friendly memory error popping up and killing your robot for that round.

It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lb. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way, then move to the robot where efficiency will really matter. I can't speak to exactly how efficient you will need to be... again, game developer... but I really like your concept of pixels; just be wary of how much time it takes and of the maintainability of your code.

I actually wanted to treat this like a game. That is the reason why I thought of creating a field grid. Are you saying that 2 GB of RAM won't be enough? The program will have access to 1 GB in the worst-case scenario. The data collection using OpenCV will use well under 16 MB of RAM.

If you are saying that A* is too inefficient, what do you suggest I try instead? If anything, I could have 3 computers -- vision processor, AI, and cRIO.

Also, 64 by 32 px was just a crude example. By testing the performance of the system, I could tell whether I need to reduce the resolution. Otherwise, I could treat everything as either go there or not.

My buddy programmer and I would like to use an nVidia Jetson dev board. Should we use that for AI, or vision processing? We can use an ODROID for the other task!

I have already figured out how to effectively use OpenCV and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!

MatthewC529
27-06-2014, 16:46
I actually wanted to treat this like a game. That is the reason why I thought of creating a field grid. Are you saying that 2 GB of RAM won't be enough? The program will have access to 1 GB in the worst-case scenario. The data collection using OpenCV will use well under 16 MB of RAM.

If you are saying that A* is too inefficient, what do you suggest I try instead? If anything, I could have 3 computers -- vision processor, AI, and cRIO.

Also, 64 by 32 px was just a crude example. By testing the performance of the system, I could tell whether I need to reduce the resolution. Otherwise, I could treat everything as either go there or not.

My buddy programmer and I would like to use an nVidia Jetson dev board. Should we use that for AI, or vision processing? We can use an ODROID for the other task!

I have already figured out how to effectively use OpenCV and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!

It depends. Personally, I would use the Jetson dev board for vision and the ODROID for AI, if you are going to separate it that way, just from a quick skim of their specifications; but I would need to look at it more.

A* is likely your best bet. Pathfinding algorithms are known for being either time-consuming (if memory-restricted) or memory-consuming (if you want speed), and you are right to look at this as if it were video game AI. A* is commonly used because it is fast while being reasonably smart. I would recommend the 3 basics to choose from: check up on your knowledge of Dijkstra, A*, and best-first search. Each has trade-offs. Most simply, you either get slow with the best path, or fast with a good-enough path. If you have the ability to multi-thread with several CPUs, you could possibly get away with a multi-threaded Dijkstra approach that can quickly search through the fringe and determine the true shortest path. But sticking to A* might be your best bet.

If you separate it into 3 computers and each process has access to its own dedicated memory, then you could pull it off in terms of processing power; 1 GB should be well enough, I would think. I am still concerned, though, with how you plan on it being useful beyond being an awesome project. On the field, I still think it will be hard to make it sufficiently adaptive to a dynamically changing field (though not impossible), and too slow to calculate the best path in a short time frame -- though I suppose it also depends on what you consider the best path. I think it's awesome and I do honestly support the idea (because I don't have access to the same materials on my team :P ); just trying to gauge where your head is at.

Also I agree if you follow through you will definitely need to constantly tweak (or dynamically update) the resolution of the graph you are analyzing.

I do have questions, such as: how did you test your optimizations, and how is the data being collected?

AI is so hard to discuss since it all depends on your goals and how it needs to adapt to its current scenario.

cmrnpizzo14
28-06-2014, 23:50
http://www.entropica.com

Saw a TED talk on this a while back and thought it was interesting. Probably not a feasible strategy for FRC but it is a neat way of approaching games.

yash101
29-06-2014, 12:15
OK, so I found this great series of pages which I need to (thoroughly) read. It has quite a few algorithms -- their pros, cons, implementations, etc. The page is at: http://theory.stanford.edu/~amitp/GameProgramming/.

The approach I am thinking about is:
Imagine that the field were a grid. In this grid, there are obstacles mapped. These obstacles would be like a wall or something that cannot be moved by the player. These obstacles would be predefined in a nice and lengthy configuration file, with a byte of data for each grid location. I have attached a text file containing how the configuration file will be saved and how the field data would be saved. I would like to call this something along the lines of a field-description file (FDF), because it saves a map of the field.

In the example config, I have 2013's game. 2014's game didn't really have any in-field obstacles.

So, the configuration stores some basic field information. It contains the boundaries of the field and the static field elements. It also marks the space where the robot can move around. It is basically an array of the field elements.


I plan to figure out where on the field I am, using a few sensors. For the location, I would use pose-estimation to figure out my distance and angle from the goal. Then, I would use a gyro for the rotation because both field ends are symmetrical.

I guess some changes can be made to this FDF file. I can add more numbers, say 4 and 5: 4 could be a loading zone and 5 could be a goal in which the robot can shoot. The robot would calculate the height of the goal using the vision targets. Otherwise, a second FDF could be created, containing a depth map: all the goals would be marked in their exact spots, and the number would be the height (in whatever unit).


I think this type of interpreter could get the robot program closer to one that can play all games. You just need to describe the field in the FDF and program basic behavior -- shooting positions, shooter direction, etc. The robot could use supervised learning -- regression -- for the ML algorithm. This way, the robot could learn where shooting is most consistent and, over time, gather the coordinates for the best shot!
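A hypothetical loader for such an FDF, to show how little code the format needs (one digit per cell, one field row per line; the value scheme -- 0 open floor, 1 wall, 4 loading zone, 5 goal -- is just the one sketched above, all of it assumed):

#include <fstream>
#include <string>
#include <vector>

std::vector<std::vector<int> > loadFdf(const std::string& path) {
    std::ifstream in(path);
    std::vector<std::vector<int> > field;
    std::string line;
    while (std::getline(in, line)) {
        std::vector<int> row;
        for (char c : line)                      // one digit per grid cell
            if (c >= '0' && c <= '9') row.push_back(c - '0');
        if (!row.empty()) field.push_back(row);  // skip blank lines
    }
    return field;
}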

JamesTerm
29-06-2014, 16:28
I got the chance to use a lidar this season, and boy was it nice.

I've heard using lidar is expensive... what parts did you use... how much did it cost?

magnets
29-06-2014, 17:51
I am taking an AI/ML course online. I am just wondering if it would be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself?

While it seems quite hard, instead of writing a different program every year, it might be possible to write one program that could play every game.

Such a program would need to be taught the game and what it needs to do. However, it would also need to learn how to use its own mechanisms.

The last question I have is: has this ever been done? It seems extremely hard, so it would be pointless to reinvent the wheel.

What would be the best environment to build something like this in? Currently, I am learning with Octave; however, OpenCV seems to have a lot of useful components, including the ML module.


Before you decide to do something like this, you need to consider what your goal is. If the goal is to become more competitive, I can guarantee that the time could be better spent improving another aspect of the team.

However, if your primary goal is not to do as well as you can at the competition, but instead to learn about programming, which is a valid goal, then I would recommend realizing how big a project this would actually be. You are not the first person to want to do something like this; a user named davidthefat had a similar goal. He, and a few other teams, pledged to build a fully autonomous robot for the upcoming season, which never happened.

Look at the Cheesy Poofs. They've won events every year they've been around, they've been on Einstein a bunch, they've been world champs, they win the Innovation in Controls award, and their autonomous mode is very, very complicated. Their code, which can be viewed here (https://github.com/Team254/FRC-2014), is well beyond the level of high school kids, and is the result of a few really brilliant kids, some very smart and dedicated mentors, and years and years of building up their code base. Yet all this fancy software lets them do is drive in a smooth curved path. Even then, it's not perfect: in the second final match on Einstein, something didn't go right, and they missed.

Just to get an idea of the scope of the project, read through these classes (https://github.com/Team254/TrajectoryLib/tree/master/src/com/team254/lib/trajectory) from the Cheesy Poofs' code. My friend does this sort of programming for a living, and it took him a good hour of reading the path generator and spline portions of the code to really get a good understanding of what's going on. I wouldn't attempt this project unless that code seems trivial to write and easy to understand.

Before even thinking about making software to let the robot learn or software for the robot to learn about its surroundings, you'd need to perfect something as simple as driving.

As an exercise, try making an autonomous mode that drives the robot in a 10 foot diameter circle. It's much harder than you think.

Again, I'm not trying to be harsh or discouraging, I'm trying to be realistic. A piece of software that can do the tasks you've described is beyond the reach of anyone.

Another very difficult exercise is figuring out where the robot is on the field and which way it is pointing. You can't just "use a gyro"; there's much more to it.

yash101
01-07-2014, 01:50
I have come up with a plan for how to write something like this. The vision program will have a lot of manual set-up, like describing the field, the obstacles, goals and boundaries. Other than that, the robot can start its ML saga for the shooter using human intervention -- it will learn the sweet spots for the shots as the drivers shoot the ball and make/miss it. Over time, the data set will grow and the shots will become more and more accurate, just like our driver's shots.

When we are learning about the robot's capabilities, this is how we learn:
Shoot once. Was it low? Was it high? Reposition. Try again.

This would be quite similar to what the supervised learning algorithm will do. Using regression, the best shot posture can be estimated even where there is no data point; it just needs to know a couple of data points for the answer.

The main code change that will need to be done is that the retroreflective targets will need to be coded in. It will be extremely difficult to write a program to find the targets and always find the correct one for pose estimation using ML.

Basically, a great portion of the game can be taught to the robot quite easily -- moving around the field, etc. As you said, though, it is quite hard to determine the robot's location on the field. However, a gyro will give access to the direction so that the robot can tell which side it's looking at.

The pathfinding will be implemented almost exactly as if the program were a videogame!

I'm not trying to make a fully-autonomous robot, but instead a robot that has the level of AI/ML to assist the drivers and make gameplay more efficient.

I am thinking about using A* quite a bit. When the robot is stationary, a path plan would constantly be generated to keep the robot holding position without brakes, etc. However, that is just a maybe, because it would create quite a bit of lag in the robot's motion when a driver wants to run the bot.

Pault
01-07-2014, 12:39
I have come up with a plan for how to write something like this. The vision program will have a lot of manual set-up, like describing the field, the obstacles, goals and boundaries. Other than that, the robot can start its ML saga for the shooter using human intervention -- it will learn the sweet spots for the shots as the drivers shoot the ball and make/miss it. Over time, the data set will grow and the shots will become more and more accurate, just like our driver's shots.

When we are learning about the robot's capabilities, this is how we learn:
Shoot once. Was it low? Was it high? Reposition. Try again.

This would be quite similar to what the supervised learning algorithm will do. Using regression, the best shot posture can be estimated even where there is no data point; it just needs to know a couple of data points for the answer.

Who says that next year is going to be a shooting game? What about the end game, if there is one?

However, a gyro will give access to the direction so that the robot can tell which side it's looking at.

The gyro will not give you an accurate heading over the course of the match. A general rule that I have heard is that a good FRC gyro will give you about 15 degrees of drift over the length of a match. My recommendation on this end is to check your angle whenever possible using the vision targets, and when you can't see the vision target, just use the gyro to calculate your deviation from the last time you could. It may even be possible to do the same thing with the roboRIO's 3-axis accelerometer for location.
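A minimal sketch of that blend, assuming degrees throughout (angle wrap-around is ignored for brevity, and the 0.9 trust factor is an assumption to tune):

double heading = 0.0;  // degrees, field-relative estimate

// Every loop: integrate the gyro's change. Drift accumulates here.
void onGyro(double deltaDegrees) {
    heading += deltaDegrees;
}

// Whenever vision computes an absolute heading from a target pose,
// pull the estimate toward it; alpha near 1 means "mostly trust the camera".
void onVisionHeading(double visionDegrees, double alpha = 0.9) {
    heading += alpha * (visionDegrees - heading);
}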

gblake
01-07-2014, 15:39
My hunch is that accomplishing the original post's goal involves solving (or integrating the solutions to) not dozens, but a few hundred problems, and my hunch is that converting the vision system's raw imagery into useful estimates of the states of the important objects involved in a match will be the hardest part.

To help wrap your head around the job(s) imagine the zillions of individual steps involved in carrying out the following ...

Create a simple simulation of the field, robots, and game objects for any one year's game.

Use the field/objects/robots simulation to simulate the sensor data (vision data) an autonomous robot would receive during a match. Be sure to include opponents actively interfering with your robot.

Add in the internal-state data the robot would have describing its own state.

Then - Ask yourself, what learning algorithms do I apply to this and how will I implement them.

It's a daunting job, but, if folks can get Aibo toy dogs to play soccer, you could probably come up with a (simulated initially?) super-simple robot that could put a few points on the board during a typical FRC match.

It's veerrryyyy unlikely that an implementation will make any human opponents nervous in this decade (most other posters have said the same); but I think what the OP described can be made, if your goal is simply playing, and your goal isn't winning against humans.

Blake

yash101
02-07-2014, 06:14
Who says that next year is going to be a shooting game? What about the end game, if there is one?



The gyro will not give you an accurate heading over the course of the match. A general rule that I have heard is that a good FRC gyro will give you about 15 degrees of drift over the length of a match. My recommendation on this end is to check your angle whenever possible using the vision targets, and when you can't see the vision target, just use the gyro to calculate your deviation from the last time you could. It may even be possible to do the same thing with the roboRIO's 3-axis accelerometer for location.

I was going to have a boolean for the field direction. Camera pose estimation would provide the actual location/direction. Otherwise, what do y'all think about me using GPS? I am thinking about a multi-camera system, so there will almost always be a vision target in view. The accelerometer/gyro is just so that the system can tell whether the bot is facing home or enemy territory.

My hunch is that accomplishing the original post's goal involves solving (or integrating the solutions to) not dozens, but a few hundred problems, and my hunch is that converting the vision system's raw imagery into useful estimates of the states of the important objects involved in a match will be the hardest part.

To help wrap your head around the job(s) imagine the zillions of individual steps involved in carrying out the following ...

Create a simple simulation of the field, robots, and game objects for any one year's game.

Use the field/objects/robots simulation to simulate the sensor data (vision data) an autonomous robot would receive during a match. Be sure to include opponents actively interfering with your robot.

Add in the internal-state data the robot would have describing its own state.

Then - Ask yourself, what learning algorithms do I apply to this and how will I implement them.

It's a daunting job, but, if folks can get Aibo toy dogs to play soccer, you could probably come up with a (simulated initially?) super-simple robot that could put a few points on the board during a typical FRC match.

It's veerrryyyy unlikely that an implementation will make any human opponents nervous in this decade (most other posters have said the same); but I think what the OP described can be made, if your goal is simply playing, and your goal isn't winning against humans.

Blake

I have diverted a tad from my original post. I had asked whether it was feasible to make the entire game automatically learnable. I have learned from many other CD'ers that it is nearly impossible. I know that it isn't impossible; it is just EXTREMELY impractical.

However, now I have decomposed the program idea and it seems quite practical (and actually useful) to automate some parts of the game. Things like autonomous driving, holding position and manipulating the gamepiece seem quite simple to implement (even without AI/ML).
Just a transform with OpenCV targeting a RotatedRect can accomplish finding a gamepiece. Using a RotatedRect again, you can filter for robots. As faust1706 explained to me a long time ago: just color-filter a bumper, use minAreaRect to crop the bumper, then add two lines dividing the bumper into 3 pieces, perform minAreaRect on this again, and use the height of the middle section to approximate the robot's distance from an average bumper height.
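A hedged sketch of the distance step in that bumper recipe, using a pinhole-camera approximation once the bumper blob is found. The focal length and bumper height are assumed calibration values:

#include <algorithm>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// contour: the colour-filtered bumper blob described above.
double bumperDistanceMetres(const std::vector<cv::Point>& contour) {
    cv::RotatedRect box = cv::minAreaRect(contour);
    // Assume the bumper shows up wider than tall, so its height is the short side.
    double pixelHeight = std::min(box.size.width, box.size.height);
    const double kBumperHeightM = 0.13;   // ~5 in of bumper fabric
    const double kFocalLengthPx = 700.0;  // assumed from camera calibration
    return kBumperHeightM * kFocalLengthPx / pixelHeight;
}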
I can tell that pathfinding will work, because I am treating it just as if this were a videogame!

Say that I was tracking the ball as it was being shot. I could triangulate its height using some trigonometry, and its distance using a Kinect distance map. I could get the hotspot for a made shot using the Kinect's distance readings and height measurements. Now, say the ball misses the goal. The robot could possibly find the error, like 6 inches low, etc., and try to figure out a way to make the shot better. For example, in the 2014 game, if the robot misses the shot, it will see whether it was low or high. If it was low, it could move forward a bit. If that made the problem worse, it could move back, etc. This type of code could be quite crude, but still get the job done. If I used ML for this instead, surely the robot would miss the first few shots, but it could easily be more accurate than a human thereafter. If we want to, we can also add more data points manually that we already know. This is supervised learning.

Basically, in short, it is not a good approach to write a full auto program, but instead to write a program that allows rapid system integration. If I write my program right, I would need to code only a few pieces:
-Vision targets and pose estimation/distance calculations
-Manipulating the gamepiece -- what it looks like, distance, pickup, goal, etc.
-Calculating what went wrong while scoring
-Field (Don't need to code. Need to configure).

And certainly, there are many things that a human player can still perform best.

However, now my main concern is how to generate a map of the field. The Kinect will offer a polar view of the field if programmed correctly. How do I create a Cartesian grid of all the elements?

For example, instead of the Kinect reporting:



. . . . .

__


It instead reports:


.
. .
. ___ .


That could be fed into my array system and everything can be calculated from there.

Also, say that the path is:


00000001000000000
00000000200000000
00000000030000000
00000000004000000
00000000000500000


how can I make the robot drive straighter and not go forward, left, forward, right, etc.?

Thanks for all your time, mentors and other programmers!

SoftwareBug2.0
03-07-2014, 01:17
Also, say that the path is:


00000001000000000
00000000200000000
00000000030000000
00000000004000000
00000000000500000


how can I make the robot drive straighter and not go forward, left, forward, right, etc.?

Thanks for all your time, mentors and other programmers!

The path that you've plotted seems to imply that the robot can go in diagonals. Otherwise it would look like:


00000001200000000
00000000340000000
00000000056000000
00000000007800000
00000000000900000


And adding diagonals is probably the simplest way to make it do more than literally just left/right/up/down. This is of course not the only way, and it doesn't give you nice curves.

yash101
03-07-2014, 04:00
I think what I will do is calculate the angle between each consecutive pair of points. I will update the position and angle constantly. The gyro will be used for accurate direction measurements. Whenever the robot is looking at the goal, the gyro will be recalibrated, yielding maximum accuracy!
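The angle between consecutive waypoints is one atan2 call per pair; a sketch assuming grid coordinates with 0 degrees along +x:

#include <cmath>

double headingBetweenDegrees(int x0, int y0, int x1, int y1) {
    const double kPi = 3.14159265358979;
    return std::atan2(double(y1 - y0), double(x1 - x0)) * 180.0 / kPi;
}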

sparkytwd
07-07-2014, 13:02
I think what I will do is calculate the angle between each consecutive pair of points. I will update the position and angle constantly. The gyro will be used for accurate direction measurements. Whenever the robot is looking at the goal, the gyro will be recalibrated, yielding maximum accuracy!

Sensor Fusion (http://en.wikipedia.org/wiki/Sensor_fusion)

StevenB
07-07-2014, 16:09
You might want to read some of the RoboCup literature - just do a Google Scholar search for "RoboCup". At this point, the wheeled-robot classes are actually pretty decent, and the small class is very good (http://youtu.be/Jflfq09d4Ro). The RoboCup community has solved a lot of interesting problems along the way. But just to give a little perspective: their game is very simple, and the teams are groups of PhD students who have been working on the robots for several years.

When I was taking a machine learning class and we were choosing course projects, the professor suggested a simple benchmark to tell if the project was appropriate: Is the task easy for a human? If not, it's probably between hard and impossible for a computer, unless it involves huge amounts of data or extremely fast reaction times. Driving an FRC robot is hard, way harder than driving a car.

Here's an interesting place to start that's a little easier: given a video that shows the whole field for the duration of an Aerial Assist match, calculate the score. Don't worry about penalties. Just track the robots, track the balls (there are only 2!), and keep track of the score.

Once you can do something like this, you will have solved a number of the hard vision and analysis tasks, and will be in a position to make a robot react to play the game. Also, this would be a ludicrously awesome scouting tool. :)

yash101
09-07-2014, 02:30
The reason why it seems so difficult to automate many things on the robot is because we are thinking about the big picture. After you decompose the robot behavior down into small pieces, you get things that don't actually seem too difficult to implement! When you have tools such as Octave and MatLab, things like AI and ML become like, "How in the world did I implement this uber-complicated thing in just 3 lines of code?!". It is quite simple to use OpenCV and acquire data, such as target, robot position, etc.

Just think of this. For the 2014 game, if you just acquired the ball and shot it into the goal -- no passing, etc., you could break up the game into the following parts:
-shooting -- optimal position, shooting power, robot velocity, robot acceleration
-driving -- optimal speed, obstacle avoidance, static field, dynamic field, etc.
-picking up -- map of the gamepieces

This all can be accomplished by quite simple OpenCV + MatLab code!

Now, let's try a practice implementation (idea, not code):


OPENCV: generate field view, find all obstacles, find all the gamepieces, triangulate robot position, gather facing direction <-- this can be done in a matter of 600 lines of OpenCV code.
INPUT: wait for user input -- what to do next? drive to a location, shoot, or intake?
PERFORM:
DRIVE: field data would be sent to the MatLab generated code.
SHOOT: find the optimal shooting location, use DRIVE to prepare the shot and go for it
INTAKE: read the list of gamepieces. Use DRIVE to get to the gamepiece. Use native code to pick up


These are all things that teams are already doing. However, I don't know of any team that has tried to perform it all.

Now, the program is capable of playing the game almost by itself -- little but some human intervention.

Now say that the driver wants to relax and doesn't want to drive by themselves -- sure, get new drivers :D, but even better would be if you program a bit of AI so the robot can play by itself.

Forget about the DRIVE feature, above. Now, the list is -- INTAKE, SHOOT, and maybe a WAIT command.

This is what I would do:

Check if the robot is loaded. If it is ready, go directly into SHOOT. Otherwise, WAIT (do nothing, but watch and wait for an event) for a gamepiece to be available. INTAKE the gamepiece. Use SHOOT to score. Repeat this over and over again until the game timer is up.
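That loop is a three-state machine. A runnable sketch with stubbed-out sensors and mechanisms (all the helper names are hypothetical stand-ins for real robot code):

#include <cstdio>

enum Mode { WAIT, INTAKE, SHOOT };

static bool loaded = false;                  // stub: do we hold a gamepiece?
bool isLoaded()         { return loaded; }
bool gamepieceVisible() { return true; }     // stub: pretend a ball is in view
void intakeStep()       { loaded = true; }   // stub: one intake cycle
void shootStep()        { loaded = false; }  // stub: one shooter cycle

Mode step(Mode mode) {                       // run once per control cycle
    switch (mode) {
        case WAIT:   return isLoaded() ? SHOOT
                          : gamepieceVisible() ? INTAKE : WAIT;
        case INTAKE: intakeStep(); return isLoaded() ? SHOOT : INTAKE;
        case SHOOT:  shootStep();  return isLoaded() ? SHOOT : WAIT;
    }
    return WAIT;
}

int main() {
    Mode m = WAIT;
    for (int cycle = 0; cycle < 6; ++cycle) {
        std::printf("cycle %d: mode %d\n", cycle, int(m));
        m = step(m);
    }
}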

^^ You have a fully autonomous robot. Sure, it won't be as competitive as a human -- it couldn't defend or pass, or anything, but a computer can think very fast to a very high accuracy. This means that a computer can perform all of this with a higher accuracy than a human could!

The code would of course be much more difficult if you wanted to implement features like passing or catching. And even better is that there can be human control overrides, so if the human wants the robot to do something his way, it can do it his way.

JamesTerm
09-07-2014, 07:39
The reason why it seems so difficult to automate many things on the robot is because we are thinking about the big picture. After you decompose the robot behavior down into small pieces, you get things that don't actually seem too difficult to implement!

Writing a goals class and breaking down the big picture to smaller pieces is the easy part. The devil is in the details at the bottom end.

MatthewC529
10-07-2014, 01:28
Writing a goals class and breaking down the big picture to smaller pieces is the easy part. The devil is in the details at the bottom end.

Exactly. We know it isn't impossible. It's just the difficulty that I personally feel is being downplayed. Creating a robot that adapts to a given game in a meaningful way requires you to look at it as more than a little OpenCV and clever data gathering. The details are important. Everything seems simple in the big picture until you break it down in real time. Though you are not wrong in saying AI/ML can be deceptively simple.

You think of this as a game in code because it is a game. It's why I use iterative. Reminiscent of a game loop. But unlike a game it's a lot more difficult to handle AI and ML in reality, let alone competition.

Sorry for spelling errors. Mobile is not friendly to my hands.

StevenB
11-07-2014, 03:06
So, I'm a graduate student in electrical engineering, and have taken classes in machine learning, artificial intelligence, image processing, and computer vision. I've spent four summers in internships doing image processing with MATLAB and OpenCV.

http://i3.kym-cdn.com/photos/images/original/000/789/900/836.jpg
Ok, I hesitate to call myself an expert, but I do know what I'm talking about.

When you have tools such as Octave and MatLab, things like AI and ML become like, "How in the world did I implement this uber-complicated thing in just 3 lines of code?!". It is quite simple to use OpenCV and acquire data, such as target, robot position, etc.

I love MATLAB, and yes, it does let you do some pretty neat things with very little code. You can develop and prototype algorithms very quickly, and there seems to be a function for everything you want to do. I can't speak quite as highly of OpenCV, but it too allows you to quickly piece together a lot of computer vision building blocks to build new things. There are dozens of other great libraries out there too - stuff like ROS (http://www.ros.org/), VLFeat (http://www.vlfeat.org/), PyML (http://pyml.sourceforge.net), and more.

But even with such great tools, I think you are radically underestimating the problem. Seriously, if you create a robot that can play Aerial Assist autonomously the way you're describing, you will have done enough innovative work to publish several papers and get a PhD.

I'm not saying this to discourage you. You've got a lot of passion and excitement, and a lot of great ideas, and I want to see something cool and innovative come out of it. You can do great work, and I look forward to seeing it. But I encourage you to focus your effort on a small piece of the problem. It may feel like a teeny, tiny, insignificant part. But unless you start with something small, it's hard to finish. I'm a dreamer by nature - like you, I have big ideas and grand plans, and I like to start new things. But too many of my projects have died shortly after I wrote down the grand vision, because I took on too much at once.

So, think about a small part of the problem, but also something you can get excited about.

- Program your robot to catch a ball thrown to it by a human. Use an onboard camera (looks like you've already got one on your bot) to track the trajectory of the ball, and have the robot drive into position to grab it (see the sketch below).
- Place a series of traffic cones (or similar large bright objects) on your playing field. Make the robot chase down the ball and pick it up without knocking over any cones.
- Program your robot to automatically score the ball after picking it up a random distance from the goal, while avoiding traffic cones.
- Build the multi-camera system you described, and program the robot to play defense. A robot with red bumpers is trying to pick up a red ball. Your job is to stay between the red robot and the red ball.

These are all things that are doable in a summer, but to my knowledge, no team has done these. Pick one of these, or something similar, break it down into tiny tasks, and see what you can do. Ask questions, read papers, and write some code. With enough dedication, you'll come up with something really great, and I look forward to seeing it!
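For the first suggestion, the prediction step might look like the sketch below. It assumes your vision pipeline already produces timestamped 3D ball positions (say, from a stereo pair) and uses a drag-free ballistic model; in practice you would filter many fixes rather than difference just two.

// Sketch: predict where a thrown ball will land, given two timestamped 3D
// ball fixes from vision. Drag-free ballistic model; z is height in meters.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Estimate velocity from the two fixes, then solve
// z(t) = z1 + vz*t - 0.5*g*t^2 = 0 for the landing time and position.
Vec3 predictLanding(Vec3 p0, double t0, Vec3 p1, double t1) {
    double dt = t1 - t0;                 // assumed non-zero
    double vx = (p1.x - p0.x) / dt;
    double vy = (p1.y - p0.y) / dt;
    double vz = (p1.z - p0.z) / dt;
    const double g = 9.81;
    // positive root of 0.5*g*t^2 - vz*t - z1 = 0
    double tLand = (vz + std::sqrt(vz * vz + 2.0 * g * p1.z)) / g;
    Vec3 landing = { p1.x + vx * tLand, p1.y + vy * tLand, 0.0 };
    return landing;
}

int main() {
    Vec3 a = { 0.0, 0.0, 1.0 };          // fix at t = 0.0 s
    Vec3 b = { 0.3, 0.1, 1.4 };          // fix at t = 0.1 s
    Vec3 land = predictLanding(a, 0.0, b, 0.1);
    std::printf("drive to (%.2f, %.2f)\n", land.x, land.y);
}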


This all can be accomplished by quite simple OpenCV + MATLAB code!

Someone should go tell the RoboCup teams. They've been working on this for 15+ years. :]

yash101
12-07-2014, 04:33
Writing a goals class and breaking down the big picture to smaller pieces is the easy part. The devil is in the details at the bottom end.

Well, y'all are the experts, so I'd have to trust you on that. However, I have come up with quite a tangible and achievable plan. The entire program will basically be split up into parts, and it will only have basic offense features. I am not thinking about making the entire thing pure AI/ML. A great portion will be basic IFs and ELSEs: if the ball is ahead, go forward; otherwise go backwards; and so on. There is only one place where I think it would be efficient to use machine learning -- finding the best shooting spot, assuming the game is one in which we must shoot the gamepiece into the goal. The rest incorporates basic stuff, half of which I have completed. I learned some things in dlib, so it will be quite simple for me to write an application that has an amazing interface.
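Roughly what I have in mind for the shooting-spot learner, as a sketch -- the grid size and all the names here are made up:

// Sketch of the "best shooting spot" idea: split the field into a grid, log
// a make/miss for each attempt, and shoot from the cell with the best
// empirical success rate.
#include <cstdio>

const int GRID_W = 8, GRID_H = 4;           // field split into 8x4 cells
int makes[GRID_H][GRID_W];                  // zero-initialized globals
int attempts[GRID_H][GRID_W];

void recordShot(int cellX, int cellY, bool made) {
    attempts[cellY][cellX]++;
    if (made) makes[cellY][cellX]++;
}

// Laplace-smoothed rate so unvisited cells are not ruled out forever.
double successRate(int cellX, int cellY) {
    return (makes[cellY][cellX] + 1.0) / (attempts[cellY][cellX] + 2.0);
}

void bestSpot(int *outX, int *outY) {
    double best = -1.0;
    for (int y = 0; y < GRID_H; ++y)
        for (int x = 0; x < GRID_W; ++x)
            if (successRate(x, y) > best) {
                best = successRate(x, y);
                *outX = x;
                *outY = y;
            }
}

int main() {
    recordShot(3, 1, true);
    recordShot(3, 1, true);
    recordShot(5, 2, false);
    int bx = 0, by = 0;
    bestSpot(&bx, &by);
    std::printf("best cell so far: (%d, %d)\n", bx, by);
}

It isn't fancy ML, but it learns from experience, and it's small enough to actually trust on a robot.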

This gives me another question: if it is so easy to host a web page with a small processor-usage footprint, would it be possible to give our alliance partners a web address serving a shared diagnostics/chat system so we can talk to each other rapidly?
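I'm imagining something as small as this sketch (plain POSIX sockets on Linux; the port number and page contents are placeholders, and a real version would insert live robot data):

// Sketch of a tiny diagnostics web server. It answers every request with a
// static status page; a real one would format live data into the HTML.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

int main() {
    int server = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(8080);                  // port is a placeholder
    bind(server, (sockaddr *)&addr, sizeof(addr));
    listen(server, 4);
    const char *page =
        "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n\r\n"
        "<html><body><h1>Robot diagnostics</h1>"
        "<p>Battery: OK. Vision: OK.</p></body></html>";
    for (;;) {
        int client = accept(server, 0, 0);
        if (client < 0) continue;
        char buf[1024];
        read(client, buf, sizeof(buf));           // discard the request
        write(client, page, strlen(page));
        close(client);
    }
}

We would of course have to check what network traffic the field rules actually allow during a match.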

Exactly. We know it isn't impossible; it's just that I feel the feasibility is being downplayed. [...]

I really do not think that I have explained my goals too well to everyone. There are three things that would be nice to implement:
- Automatic gamepiece collection
- Automatic gamepiece scoring
- An automatic driving algorithm, so the robot automatically gets to a location requested by the driver

Those are all the things required to make the robot capable of possibly winning a match on its own. I guess the gamepiece-collection algorithm can get a bit weird, because you need to make sure you do not hit the other alliance's ball, and that you don't attempt to collect a ball that is already inside another robot.

However, with proper camera calibration, the height of the ball above the ground should map to its y-coordinate in the image. I believe that relation would be close to linear. This would allow the program to tell whether the gamepiece is actually available for pickup!
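Here is the pinhole math I mean, as a sketch -- the height, pitch, focal length, and principal point below are placeholders from a hypothetical calibration:

// Sketch: for a calibrated camera at known height and downward pitch, the
// pinhole model gives the floor distance of any pixel row.
#include <cmath>
#include <cstdio>

const double CAM_HEIGHT = 0.60;   // camera height above the floor, meters
const double CAM_PITCH  = 0.35;   // downward tilt, radians
const double FY         = 700.0;  // focal length, pixels
const double CY         = 240.0;  // principal point row, pixels

// Angle of the pixel row below the optical axis, then below horizontal;
// intersect that ray with the floor plane.
double groundDistance(double pixelRow) {
    double belowAxis = std::atan((pixelRow - CY) / FY);
    return CAM_HEIGHT / std::tan(CAM_PITCH + belowAxis);
}

int main() {
    for (int row = 260; row <= 460; row += 50)
        std::printf("image row %d -> %.2f m away on the floor\n",
                    row, groundDistance(row));
}

The tangent means the mapping is only roughly linear near the image center, but it's a cheap check: if a ball's bottom edge sits higher in the image than a floor ball of that apparent size should, it's probably being carried by another robot.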

So, think about a small part of the problem, but also something you can get excited about. [...] Pick one of these, or something similar, break it down into tiny tasks, and see what you can do.

Well, there are a few goals that I have in mind. This isn't about building a robot program that will convert the crappiest robot build into a champion bot, but instead about building a prototype that performs a bit better than a novice driver who knows little about the robot.

Driving through traffic cones is quite the opposite of what I am trying to do. Catching the ball would be a challenge, but I know it has been done by quite a few teams. Catching a ball without knocking over cones is also beyond my goals; I am not trying to build something so accurate that it will never hit anything. I know that on the field, if the robot just touches something, or maybe takes a small hit, nothing bad will happen. I just want to make sure that the robot won't generate a path through the wall! The main thing that will help is that the path will be regenerated constantly in a separate thread, so if one error happens, the robot will be out of control for less than an eighth of a second (see the sketch below). I would like to build a multi-camera system; however, the processing power and resources would be beyond my team's reach, so I want to stick with two high-FOV cameras and set up stereo vision!
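The replanning loop I have in mind looks roughly like this sketch (C++11 threads; the ~8 Hz rate and the path contents are made up, and a real planner would read the latest vision and odometry):

// Sketch of the constant-replanning idea: a background thread recomputes the
// path several times a second and publishes it atomically; the control loop
// always follows the newest plan.
#include <chrono>
#include <memory>
#include <thread>
#include <vector>

struct Waypoint { double x, y; };
typedef std::vector<Waypoint> Path;

std::shared_ptr<const Path> g_path;   // read/written via std::atomic_load/store

Path planFromSensors() {
    // placeholder: a real planner would use the latest vision and odometry
    Path p;
    p.push_back(Waypoint{ 1.0, 0.0 });
    p.push_back(Waypoint{ 2.0, 1.0 });
    return p;
}

void plannerThread() {
    for (;;) {
        std::shared_ptr<const Path> fresh =
            std::make_shared<const Path>(planFromSensors());
        std::atomic_store(&g_path, fresh);                            // publish
        std::this_thread::sleep_for(std::chrono::milliseconds(125));  // ~8 Hz
    }
}

int main() {
    std::atomic_store(&g_path, std::make_shared<const Path>());
    std::thread planner(plannerThread);
    for (;;) {   // control loop: always steer toward the newest path
        std::shared_ptr<const Path> current = std::atomic_load(&g_path);
        // ...command the drivetrain toward current->front() here...
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }
}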

JamesTerm
12-07-2014, 06:53
You think of this as a game in code because it is a game. It's why I use iterative. Reminiscent of a game loop. But unlike a game it's a lot more difficult to handle AI and ML in reality, let alone competition.

Why would it be difficult to handle AI? I guess "AI" needs more clarity, as it's a really generic term. When I say AI, especially in terms of gaming, I really mean what is written in this book (http://www.amazon.com/Programming-Example-Wordware-Developers-Library/dp/1556220782). I loved this book, as our code uses a very similar model of goal-driven classes... the context of AI written there is really a way to manage events in real time.

Given this context... the AI foundation for our game code (http://www.termstech.com/files/TheFringe-MagicCarpetRide.wmv) and for our robot code is 100% identical.

I think your context of AI includes the details of the goals themselves... assuming that's true, those details should be abstracted away so they are not part of the AI itself. So let's take 2011 Logomotion, and just the autonomous period, as an example to illustrate my point:

Goals:
- drive forward 10 feet
- raise arm to the 9-foot mark
- drive forward one more foot
- open raptor claw
- lower arm 6 inches
- drive backwards a few feet

In this example these are high-level goals I could have used in the game; the details of how to execute each goal would have been different, and FWIW... driving forward *straight* is not as easy as it sounds. ;)
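To make that concrete, the goal-driven model from the book boils down to something like the sketch below. This is a simplified illustration, not our actual game or robot code; the class and method names just follow the book's pattern.

// A simplified sketch of the goal-driven model from the book. Each goal is
// ticked once per loop iteration and reports whether it is still running;
// composites run their children in order.
#include <cstddef>
#include <memory>
#include <vector>

enum Status { ACTIVE, COMPLETED, FAILED };

class Goal {
public:
    virtual ~Goal() {}
    virtual void Activate() {}          // one-time setup when the goal starts
    virtual Status Process() = 0;       // called once per iterative-loop tick
};

// Runs subgoals one after another; fails if any subgoal fails.
class CompositeGoal : public Goal {
    std::vector<std::unique_ptr<Goal> > subgoals_;
    size_t current_ = 0;
    bool started_ = false;
public:
    void Add(std::unique_ptr<Goal> g) { subgoals_.push_back(std::move(g)); }
    Status Process() override {
        if (current_ >= subgoals_.size()) return COMPLETED;
        if (!started_) { subgoals_[current_]->Activate(); started_ = true; }
        Status s = subgoals_[current_]->Process();
        if (s == COMPLETED) { ++current_; started_ = false; return ACTIVE; }
        return s;                       // still ACTIVE, or FAILED
    }
};

// Example leaf: a real "drive forward N feet" would read encoders and command
// the drivetrain; here a tick counter stands in for that.
class DriveForward : public Goal {
    int ticksLeft_;
public:
    explicit DriveForward(int ticks) : ticksLeft_(ticks) {}
    Status Process() override {
        // ...set motor outputs and correct heading with a gyro here...
        return (--ticksLeft_ > 0) ? ACTIVE : COMPLETED;
    }
};

int main() {
    CompositeGoal autonomous;
    autonomous.Add(std::unique_ptr<Goal>(new DriveForward(50)));
    autonomous.Add(std::unique_ptr<Goal>(new DriveForward(10)));
    while (autonomous.Process() == ACTIVE) { /* one tick of the robot loop */ }
}

The autonomous routine above then becomes one CompositeGoal built from leaves like DriveForward, and the "drive straight" difficulty lives entirely inside a leaf's Process(), typically as gyro-based heading correction.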

JamesTerm
12-07-2014, 07:13
Well, y'all are the experts, so I'd have to trust you on that. However, I have come up with quite a tangible and achievable plan. The entire program will basically be split up into parts, and it will only have basic offense features.


I'll throw some things out there for you... sayings that I've found to be true:

1. People need to follow through on their plans... just like a tennis player needs to follow through when hitting a serve. So follow your passions and see them through to the end. I hope you get some new innovation and learning experience out of this and share the outcome with us!

2. Top-down design, bottom-up implementation. You now have the plan... break it down into small goals and start to form some implementation strategies. It's time to get down from the dream cloud and face reality!

3. Don't waste your time rearranging deck chairs on the Titanic. As we get older we realize how precious time is and how little of it we really have. As a team player you have a responsibility to your team as well as to your passion... try to find a happy medium in there. Is the time invested in this taking away from what you can contribute to your team? What is the biggest fire or issue the team needs help with? What can you do this summer to be ready for next year, to overcome the mistakes learned from this season, and so on? My biggest regret in life is not being a better team player when I was younger.

The sayings in 2 and 3 come from my boss/mentor; I've adopted them as my own, and now I'm passing them down to you. Good luck! Whatever the result... it will be a great learning exercise!

MatthewC529
13-07-2014, 01:16
Why would it be difficult to handle AI?

I think your context of AI includes the details of the goals themselves... assuming this to be true, it should be abstracted away to not be a part of the AI...

You are right that AI is a very generic term. You can throw around words like A* pathfinding, state machine, and goal-oriented behavior, but those can become just buzzwords, since a specific task usually requires a more specifically designed AI to be done effectively and, more importantly in games, efficiently.

And no, that is not the context I am looking at it from. In reality, if you write an overly specific AI focused on conducting only a few tasks, you end up writing a lot more unmanageable code (I learned that the hard way; it was an invaluable learning experience), possibly resulting in several anti-patterns, when an abstract approach goes a LONG way. Though you should know from writing that game that you can't just ignore the deeper details. If you don't account for the details, you can end up with an exploitable and apparently buggy AI.

Your example is great, and that is the ideal, but as you said, it's difficult to even get the driving-"straight" part right, and that is what I am getting at. You can say "a bit of MATLAB and OpenCV", but in reality a truly effective AI needs more. Having an AI achieve fixed goals can be easy in a digital game; creating an AI that adapts to the player and to other AIs is more difficult; and an AI that adapts to the player and other AIs under uncertainty about what may occur (particularly in the real world) is more difficult still. Again, I love this idea and I love the plan, but in my opinion it is also being oversimplified.