Go Back   Chief Delphi > Technical > Programming
#16
Unread 26-06-2014, 22:33
Ginto8 is offline
Programming Lead
AKA: Joe Doyle
FRC #2729 (Storm)
Team Role: Programmer
 
Join Date: Oct 2010
Rookie Year: 2010
Location: Marlton, NJ
Posts: 174
Ginto8 is a glorious beacon of light
Re: A Vision Program that teaches itself the game

Aside from the many technical limitations, there is one glaring barrier to such a learning system. Vision systems play very specific roles in each game and in each robot. They typically track geometric, retroreflective targets, but the vision systems my team has created have had no say in the robot's logic -- they effectively turn the camera from an image sensor into a target sensor, streaming data about where the targets are back to the robot. For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "at what angle is the target?" as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as making a new vision system, and certainly more error-prone.
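That "target sensor" pipeline can be sketched as a tiny data contract between the vision process and the robot code; all names and the wire format here are hypothetical, not from any team's actual code:

```python
# Hypothetical sketch: the vision process reduces each frame to a small
# "target report" that the robot's control code consumes, instead of raw pixels.
from dataclasses import dataclass

@dataclass
class TargetReport:
    present: bool       # "is the target there?" (e.g. hot-goal detection)
    angle_deg: float    # bearing to the target, degrees (0 = straight ahead)
    distance_in: float  # estimated distance, inches

def to_wire(report: TargetReport) -> str:
    """Serialize a report for streaming back to the robot controller."""
    return f"{int(report.present)},{report.angle_deg:.1f},{report.distance_in:.1f}"

def from_wire(line: str) -> TargetReport:
    """Parse a report on the robot side."""
    p, a, d = line.split(",")
    return TargetReport(bool(int(p)), float(a), float(d))
```

The point of the sketch is the interface: whatever the game needs ("is it there?" vs. "at what angle?") changes the fields of this struct, which is exactly the part a learning system would somehow have to discover.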
__________________
I code stuff.
#17
Unread 27-06-2014, 00:37
SoftwareBug2.0 is offline
Registered User
AKA: Eric
FRC #1425 (Error Code Xero)
Team Role: Mentor
 
Join Date: Aug 2004
Rookie Year: 2004
Location: Tigard, Oregon
Posts: 486
SoftwareBug2.0 has a brilliant future
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by MatthewC529 View Post
You have limited memory on an embedded system like the RoboRIO. Of course the RoboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is in its most basic form an informed Dijkstra pathfinding algorithm. Unlike Dijkstra, where all moves have a heuristic cost of 1, A* has ways of assigning a cost to each movement. Depending on your method you will usually get an O((V+E)log(V)) or even O(V^2) algorithm. Pathfinding is an expensive task, and if the field was a perfect size where a resolution of 64 px by 32 px worked, then you could end up with an extremely large fringe if enough obstacles exist.
I don't quite understand what the big deal is. A 64x32 grid is only 2048 nodes. I'd expect that you could have an order of magnitude more before you ran into speed problems. I also don't think you'd have memory issues. If you assume that you have 256 MB of memory, half of which is already used, and 2048 nodes, then you'd get 64 KB per node. That seems like plenty.
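A quick sanity check of that arithmetic (note it works out to kilobytes, not bytes, per node), using the numbers from the post:

```python
# Back-of-envelope check of the memory argument above (assumed numbers).
total_ram = 256 * 1024 * 1024   # 256 MB on the cRIO-class controller
free_ram = total_ram // 2       # assume half is already in use
nodes = 64 * 32                 # the 64x32 grid = 2048 cells

budget_per_node = free_ram // nodes   # bytes available per node
# Even a generous A* node (coordinates, g/h costs, parent pointer, flags)
# needs only a few dozen bytes, far under this budget.
```

So memory is not the bottleneck at this grid resolution; per-frame compute time would be the thing to measure.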
#18
Unread 27-06-2014, 01:10
faust1706 is offline
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
faust1706 is infamous around these parts
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by Ginto8 View Post
For a vision system to learn the game, it must learn not only what the targets look like, but also what data the robot's central control needs -- whether it wants "is the target there?" data like this year's hot goals, or "At what angle is the target?" as in Rebound Rumble and Ultimate Ascent. Any learning system requires feedback to adapt, and when it has to learn so many different things, designing that feedback system would be at least as complex as making a new vision system, and certainly more error-prone.
The two tasks you just described are in themselves not difficult to achieve through a vision program (an example method to do this is called cascade training); the real problem is how the robot would act on it. This task would be a no-brainer for Yash; in fact, he has already done it for the 2014 game if I remember correctly. This only looks at one aspect of the game, though. It also has to know what is in front of it, find game pieces, know whether it has game pieces, and go to where it needs to be to score or pass. We did most of this in our code this year with 3 cameras, and we were lucky to get 10 fps. It would take months at least for there to be enough generations of the learning algorithm for there to be any noticeable result.

Quote:
Originally Posted by MatthewC529 View Post
It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lbs. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way, then move to the robot where efficiency will really matter. I can't speak for how efficient you will need to be... again... game developer, but again I really like your concept of pixels, but I think you should be wary of how much time it takes and of the maintainability of your code.
Isn't there a simulation for each year's game? In my mind, that would be a perfect place to start.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."
#19
Unread 27-06-2014, 05:10
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by MatthewC529 View Post
I am not going to contribute to the reasons why you shouldn't do it on the scale you are seeking (because I think the idea and concept is awesome), but I will say one thing. I have worked with AI pathfinding algorithms to a decent extent as a game programmer; I was freelancing and did work implementing specific AI algorithms and various game mechanics.

You have limited memory on an embedded system like the RoboRIO. Of course the RoboRIO is a massive step up, but I am talking about 2 GB RAM vs. 256 MB RAM. A* is in its most basic form an informed Dijkstra pathfinding algorithm. Unlike Dijkstra, where all moves have a heuristic cost of 1, A* has ways of assigning a cost to each movement. Depending on your method you will usually get an O((V+E)log(V)) or even O(V^2) algorithm. Pathfinding is an expensive task, and if the field was a perfect size where a resolution of 64 px by 32 px worked, then you could end up with an extremely large fringe if enough obstacles exist. In certain scenarios this could be a bit long for an autonomous period, and if proper threading isn't implemented it could cripple your teleoperated period if you have to wait too long for the calculations to finish in a dynamically changing field of non-standard robots.

Also, this could work for shooting, but if the game calls for a much different scoring system then your AI and learning may be even further crippled by complexity... Also, you don't want a friendly memory error popping up and killing your robot for that round.

It's an awesome idea and you should definitely follow through, but probably not immediately on a 120 lbs. robot. Experiment first with game algorithms and get used to implementing them in an efficient and workable way, then move to the robot where efficiency will really matter. I can't speak for how efficient you will need to be... again... game developer, but again I really like your concept of pixels, but I think you should be wary of how much time it takes and of the maintainability of your code.
I actually wanted to treat this like a game. That is the reason why I thought of creating a field grid. Are you saying that 2 GB of RAM won't be enough? The program will have access to 1 GB in the worst-case scenario. The data collection using OpenCV will use well under 16 MB of RAM.

If you are saying that A* is too inefficient, what do you suggest I try instead? If anything, I could have 3 computers -- vision processor, AI, and cRIO.

Also, 64 by 32 px was just a crude example. By testing the performance of the system, I could tell whether I need to reduce the resolution. Otherwise, I could treat everything as either "go there" or not.

My buddy programmer and I would like to use an NVIDIA Jetson dev board. Should we use that for AI, or vision processing? We can use an ODROID for the other task!

I have already figured out how to effectively use OpenCV and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!
#20
Unread 27-06-2014, 16:46
MatthewC529 is offline
Lcom/mattc/halp;
AKA: Matthew
FRC #1554 (Oceanside Sailors)
Team Role: Mentor
 
Join Date: Feb 2014
Rookie Year: 2013
Location: New York
Posts: 39
MatthewC529 is on a distinguished road
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by yash101 View Post
I actually wanted to treat this like a game. That is the reason why I thought of creating a field grid. Are you saying that 2 GB of RAM won't be enough? The program will have access to 1 GB in the worst-case scenario. The data collection using OpenCV will use well under 16 MB of RAM.

If you are saying that A* is too inefficient, what do you suggest I try instead? If anything, I could have 3 computers -- vision processor, AI, and cRIO.

Also, 64 by 32 px was just a crude example. By testing the performance of the system, I could tell whether I need to reduce the resolution. Otherwise, I could treat everything as either "go there" or not.

My buddy programmer and I would like to use an NVIDIA Jetson dev board. Should we use that for AI, or vision processing? We can use an ODROID for the other task!

I have already figured out how to effectively use OpenCV and optimize it for very high performance. I can use a configuration file to make the same setup track multiple target types, and I understand how to use OpenCV to get accurate target data even if the target is tilted!
It depends. Personally, I would use the Jetson dev board for vision and the ODROID for AI if you are going to separate it that way, just from a quick skim of their specifications, but I would need to look at it more.

A* is likely your best bet. Pathfinding algorithms are known for being either time-consuming (if memory-restricted) or memory-consuming (if you want speed), and you are right to look at it as if it were a video game AI. A* is commonly used because it is fast while being reasonably smart. I would recommend choosing from the three basics: brush up on Dijkstra, A*, and Best-First Search. Each has trade-offs. Most simply, you either get slow with the best path, or fast with a good-enough path. If you have the ability to multi-thread with several CPUs, you could possibly get away with a multi-threaded Dijkstra approach that can quickly search through the fringe and determine the true shortest path. But sticking to A* might be your best bet.
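The trade-off described above can be seen directly in code: a minimal grid A* where passing no heuristic degenerates to Dijkstra. This is a generic sketch, not anything FRC-specific:

```python
# Minimal A* on a 4-connected grid. With heuristic=None the search is
# Dijkstra (uniform-cost); with an admissible heuristic like Manhattan
# distance it finds the same shortest path while expanding fewer nodes.
import heapq

def astar(grid, start, goal, heuristic=None):
    """grid[r][c] == 1 marks an obstacle. Returns (path_length, nodes_expanded)."""
    rows, cols = len(grid), len(grid[0])
    h = heuristic if heuristic else (lambda a, b: 0)
    # Heap entries are (f, -g, node); the -g tie-break prefers deeper nodes.
    frontier = [(h(start, goal), 0, start)]
    best_g = {start: 0}
    expanded = 0
    while frontier:
        f, neg_g, node = heapq.heappop(frontier)
        g = -neg_g
        if g > best_g.get(node, float("inf")):
            continue                      # stale heap entry
        expanded += 1
        if node == goal:
            return g, expanded
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc), goal), -ng, (nr, nc)))
    return None, expanded

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```

On an empty 8x8 grid both variants return the same 14-step path, but the heuristic version expands far fewer nodes -- the "fast while being reasonably smart" behavior described above.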

If you separate it into 3 computers and each process has access to its own dedicated memory, then you could pull it off in terms of processing power; 1 GB should be well enough, I would think. I am still concerned, though, with how you plan on it being useful outside of an awesome project. On the field I still think it will be hard to make it sufficiently adaptive to a dynamically changing field (though not impossible), and too slow to calculate the best path in a short time frame -- though I suppose it also depends on what you consider the best path. I think it's awesome and I do honestly support the idea (because I don't have access to the same materials on my team), just trying to gauge where your head is at.

Also I agree if you follow through you will definitely need to constantly tweak (or dynamically update) the resolution of the graph you are analyzing.

I have questions, though, such as how you tested your optimizations and how the data is being collected.

AI is so hard to discuss since it all depends on your goals and how it needs to adapt to its current scenario.

Last edited by MatthewC529 : 27-06-2014 at 16:47. Reason: That awkward moment where you write an essay instead of a quick response...
#21
Unread 28-06-2014, 23:50
cmrnpizzo14 is offline
Registered User
AKA: Cam Pizzo
FRC #3173 (IgKNIGHTers)
Team Role: Mentor
 
Join Date: Jan 2011
Rookie Year: 2006
Location: Boston
Posts: 522
cmrnpizzo14 has a reputation beyond repute
Re: A Vision Program that teaches itself the game

http://www.entropica.com

Saw a TED talk on this a while back and thought it was interesting. Probably not a feasible strategy for FRC but it is a neat way of approaching games.
__________________
FIRST Team 3173 The IgKNIGHTers

"Where should we put the battery?"
#22
Unread 29-06-2014, 12:15
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: A Vision Program that teaches itself the game

OK, so I found this great series of pages which I need to (thoroughly) read. It has quite a few algorithms -- their pros, cons, implementations, etc. The page is at: http://theory.stanford.edu/~amitp/GameProgramming/.

The approach I am thinking about is:
Imagine that the field were a grid. In this grid, obstacles are mapped -- things like a wall or anything else that cannot be moved by the player. These obstacles would be predefined in a nice and lengthy configuration file, with a byte of data for each grid location. I have attached a text file describing how the configuration file will be saved and how the field data would be stored. I would like to call this something along the lines of a field-description-file, because it can save a map of the field.

In the example config, I have 2013's game. 2014's game didn't really have any in-field obstacles.

So, the configuration stores some basic field information. It contains the boundaries of the field and the static field elements. It also marks the space where the robot can move around. It is basically an array of the field elements.


I plan to figure out where on the field I am using a few sensors. For the location, I would use pose estimation to figure out my distance and angle from the goal. Then, I would use a gyro for the rotation, because both field ends are symmetrical.

I guess some changes can be made to this FDF file. I can add more numbers, say 4 and 5: 4 could be a loading zone and 5 a goal in which the robot can shoot. The robot would calculate the height of the goal using the vision targets. Otherwise, a second FDF could be created containing a depth map. All the goals would be marked in the exact spot, with the number giving the height (in whatever unit).


I think this type of interpreter could get the robot program closer to one that can play all games. You just need to describe the field in the FDF and program basic behavior -- shooting positions, shooter direction, etc. The robot could use supervised learning (regression) for the ML algorithm. This way, the robot could learn where shooting is most consistent and, over time, gather the coordinates for the best shot!
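Since the attached config.txt is not reproduced here, the sketch below assumes a stand-in format -- one ASCII digit per grid cell, following the numbering proposed above (0 = open floor, 1 = static obstacle, 2 = boundary, 4 = loading zone, 5 = goal) -- just to illustrate how such an FDF could be parsed:

```python
# Hypothetical FDF parser. The format is assumed: one ASCII digit per cell,
# one text row per grid row. 0=open, 1=obstacle, 2=boundary, 4=loading, 5=goal.

def parse_fdf(text):
    """Return the field as a list of rows of ints, one int per grid cell."""
    return [[int(ch) for ch in line.strip()]
            for line in text.splitlines() if line.strip()]

def passable(field, r, c):
    """A cell is drivable if it is open floor or a loading zone."""
    return field[r][c] in (0, 4)
```

A pathfinder would then query `passable` per cell, so swapping in a new year's field is purely a configuration change, which is exactly the goal stated above.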
Attached Files
File Type: txt config.txt (12.6 KB, 9 views)
#23
Unread 29-06-2014, 16:28
JamesTerm is offline
Terminator
AKA: James Killian
FRC #3481 (Bronc Botz)
Team Role: Engineer
 
Join Date: May 2011
Rookie Year: 2010
Location: San Antonio, Texas
Posts: 298
JamesTerm is a splendid one to behold
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by JohnFogarty View Post
I got the chance to use a LIDAR this season and boy was it nice.
I've heard using LIDAR is expensive... what parts did you use, and how much did it cost?
#24
Unread 29-06-2014, 17:51
magnets is offline
Registered User
no team
 
Join Date: Jun 2013
Rookie Year: 2012
Location: United States
Posts: 748
magnets has a reputation beyond repute
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by yash101 View Post
I am taking an AI/ML course online. I am just wondering if it would be FEASIBLE (I know it is possible) to create a vision system that learns the game by itself?

While it would seem quite hard, instead of writing a different program every year, it could be possible to write one program to play each game.

Such a program would need to be taught the game, what it needs to do. However, it would also need to learn how to use itself.

The last question I have is, has this ever been done? It seems extremely hard so it would be pointless to reinvent the wheel.

What would be the best environment to build something like this in? Currently, I am learning with Octave, however, OpenCV seems to have a lot of useful components, including the ML module.

Before you decide to do something like this, you need to consider what your goal is. If the goal is to become more competitive, I can guarantee that the time would be better spent improving another aspect of the team.

However, if your primary goal is not to do as well as you can at the competition, and instead is to learn about programming, which is a valid goal, then I would recommend realizing how big a project this would actually be. You are not the first person to want to do something like this; a user named davidthefat had a similar goal. He, and a few other teams, pledged to do a fully autonomous robot for the upcoming season, which never happened.

Look at the Cheesy Poofs. They've won events every year they've been around, they've been on Einstein a bunch, they've been world champs, they win the Innovation in Controls award, and their autonomous mode is very, very complicated. Their code, which can be viewed here, is well beyond the level of high school kids, and is the result of a few really brilliant kids, some very smart and dedicated mentors, and years and years of building up their code base. Yet all this fancy software lets them do is drive in a smooth curved path. Even then, it's not perfect. In the second final match on Einstein, something didn't go right, and they missed.

Just to get an idea of the scope of the project, read through these classes from the Cheesy Poofs' code. My friend does this sort of programming for a living, and it took him a good hour of reading the pathgenerator and spline portions of the code to really get a good understanding of what's going on. I wouldn't attempt this project unless that code seems trivial to write and easy to understand.

Before even thinking about making software to let the robot learn or software for the robot to learn about its surroundings, you'd need to perfect something as simple as driving.

As an exercise, try making an autonomous mode that drives the robot in a 10 foot diameter circle. It's much harder than you think.

Again, I'm not trying to be harsh or discouraging, I'm trying to be realistic. A piece of software that can do the tasks you've described is beyond the reach of anyone.

Another very difficult exercise is figuring out where the robot is on the field and which way it is pointing. You can't just "use a gyro", there's much more to it.
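For a sense of why the circle exercise above is non-trivial, here is merely the open-loop starting point for a differential drive: holding a fixed speed ratio between the wheels. The function and parameter names are illustrative; real robots drift off this arc immediately, which is why the closed-loop part is the hard part:

```python
# Open-loop wheel speeds for driving a circular arc with a differential drive.
# For a turn of radius R (measured to the robot's center) and track width w,
# the inner and outer wheels trace arcs of radius R - w/2 and R + w/2, so
# their speeds must stay in that ratio.

def wheel_speeds(v_center, radius_ft, track_width_ft):
    """Return (inner, outer) wheel speeds for an arc of the given radius."""
    inner = v_center * (radius_ft - track_width_ft / 2) / radius_ft
    outer = v_center * (radius_ft + track_width_ft / 2) / radius_ft
    return inner, outer
```

For the 10-foot-diameter exercise, radius_ft = 5. Wheel slip, battery sag, and carpet differences all break the open-loop ratio within seconds, so actually closing the circle requires feedback from encoders or a gyro.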
#25
Unread 01-07-2014, 01:50
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: A Vision Program that teaches itself the game

I have come up with a plan for how to write something like this. The vision program will have a lot of manual setup, like describing the field, the obstacles, goals, and boundaries. Other than that, the robot can start its ML training for the shooter using human intervention -- it will learn the sweet spots for the shots as the drivers shoot the ball and make/miss it. Over time, the data set will grow and the shots will become more and more accurate, just like our drivers' shots.

When we are learning about the robot's capabilities, this is how we learn:
Shoot once. Was it low? Was it high? Reposition. Try again.

This would be quite similar to what the supervised learning algorithm will do. Using regression, the best shot posture can be estimated even where there is no data point; it just needs to know a couple of data points to produce the answer.
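The regression idea can be sketched with ordinary least squares over made-up (distance, vertical miss) pairs; the "sweet spot" is where the fitted line predicts zero miss. All numbers here are invented for illustration:

```python
# Sketch of the supervised-regression idea: fit a line to (shot distance,
# vertical miss) pairs logged from the drivers' shots, then solve for the
# distance where the predicted miss is zero -- the shooting "sweet spot".

def fit_line(points):
    """Ordinary least squares for y = m*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def sweet_spot(points):
    """Distance at which the fitted line predicts a miss of zero."""
    m, b = fit_line(points)
    return -b / m

# Invented data: shots from 8 ft go 6 in high, shots from 14 ft go 6 in low.
shots = [(8.0, 6.0), (10.0, 2.0), (12.0, -2.0), (14.0, -6.0)]
```

As described above, a handful of data points is enough to interpolate positions the robot has never actually shot from.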

The main code change that will need to be made is that the retroreflective targets will need to be coded in. It would be extremely difficult to write a program that finds the targets and always picks the correct one for pose estimation using ML.

Basically, a great portion of the game can be taught to the robot quite easily -- moving around the field, etc. However, as you said, it is quite hard to determine the robot's location on the field. A gyro, though, will give access to the direction, so the robot can tell which side it's looking at.

The pathfinding will be implemented almost exactly as if the program were a videogame!

I'm not trying to make a fully-autonomous robot, but instead a robot that has the level of AI/ML to assist the drivers and make gameplay more efficient.

I am thinking about using A* quite a bit. When the robot is stationary, a path plan would constantly be generated to keep the robot from drifting without brakes, etc. However, that is just a maybe, because it could create quite a bit of lag in the robot's motion when a driver wants to run the bot.
#26
Unread 01-07-2014, 12:39
Pault is offline
Registered User
FRC #0246 (Overclocked)
Team Role: College Student
 
Join Date: Jan 2013
Rookie Year: 2012
Location: Boston
Posts: 618
Pault has a reputation beyond repute
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by yash101 View Post
I have come up with a plan for how to write something like this. The vision program will have a lot of manual setup, like describing the field, the obstacles, goals, and boundaries. Other than that, the robot can start its ML training for the shooter using human intervention -- it will learn the sweet spots for the shots as the drivers shoot the ball and make/miss it. Over time, the data set will grow and the shots will become more and more accurate, just like our drivers' shots.

When we are learning about the robot's capabilities, this is how we learn:
Shoot once. Was it low? Was it high? Reposition. Try again.

This would be quite similar to what the supervised learning algorithm will do. Using regression, the best shot posture can be estimated even where there is no data point; it just needs to know a couple of data points to produce the answer.
Who says that next year is going to be a shooting game? What about the end game, if there is one?

Quote:
Originally Posted by yash101 View Post
However, a gyro will give access to the direction so that the robot can tell which side it's looking at.
The gyro will not give you an accurate heading over the course of the match. A general rule I have heard is that a good FRC gyro will drift about 15 degrees over the length of a match. My recommendation is to check your angle whenever possible using the vision targets, and when you can't see a vision target, use the gyro to track your deviation since the last time you could. It may even be possible to do the same thing with the roboRIO's 3-axis accelerometer for location.
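The vision-plus-gyro scheme above can be sketched as a tiny estimator (the class and method names are hypothetical, not from WPILib): trust the vision target for absolute heading whenever it is visible, and in between integrate only the gyro's change since the last fix, so drift never accumulates for longer than the gap between fixes.

```python
# Heading estimator sketch: vision gives occasional absolute fixes,
# the gyro fills the gaps with relative changes only.

class HeadingEstimator:
    def __init__(self):
        self.reference = 0.0          # last absolute heading from vision (deg)
        self.gyro_at_reference = 0.0  # raw gyro reading at that moment

    def vision_fix(self, vision_heading, gyro_reading):
        """Call whenever a vision target yields an absolute heading."""
        self.reference = vision_heading
        self.gyro_at_reference = gyro_reading

    def heading(self, gyro_reading):
        """Absolute heading = last vision fix + gyro delta since that fix."""
        return self.reference + (gyro_reading - self.gyro_at_reference)
```

With fixes every few seconds, the worst-case error is only the drift accrued within one gap, not the 15 degrees per match quoted above.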
#27
Unread 01-07-2014, 15:39
gblake is offline
6th Gear Developer; Mentor
AKA: Blake Ross
no team (6th Gear)
Team Role: Mentor
 
Join Date: May 2006
Rookie Year: 2006
Location: Virginia
Posts: 1,935
gblake has a reputation beyond repute
Re: A Vision Program that teaches itself the game

My hunch is that accomplishing the original post's goal involves solving (or integrating the solutions to) not dozens but a few hundred problems, and that converting the vision system's raw imagery into useful estimates of the states of the important objects in a match will be the hardest part.

To help wrap your head around the job(s) imagine the zillions of individual steps involved in carrying out the following ...

Create a simple simulation of the field, robots, and game objects for any one year's game.

Use the field/objects/robots simulation to simulate the sensor data (vision data) an autonomous robot would receive during a match. Be sure to include opponents actively interfering with your robot.

Add in the internal-state data the robot would have describing its own state.

Then - Ask yourself, what learning algorithms do I apply to this and how will I implement them.

It's a daunting job, but, if folks can get Aibo toy dogs to play soccer, you could probably come up with a (simulated initially?) super-simple robot that could put a few points on the board during a typical FRC match.

It's veerrryyyy unlikely that an implementation will make any human opponents nervous in this decade (most other posters have said the same); but I think what the OP described can be made, if your goal is simply playing, and your goal isn't winning against humans.

Blake
__________________
Blake Ross, For emailing me, in the verizon.net domain, I am blake
VRC Team Mentor, FTC volunteer, 5th Gear Developer, Husband, Father, Triangle Fraternity Alumnus (ky 76), U Ky BSEE, Tau Beta Pi, Eta Kappa Nu, Kentucky Colonel
Words/phrases I avoid: basis, mitigate, leveraging, transitioning, impact (instead of affect/effect), facilitate, programmatic, problematic, issue (instead of problem), latency (instead of delay), dependency (instead of prerequisite), connectivity, usage & utilize (instead of use), downed, functionality, functional, power on, descore, alumni (instead of alumnus/alumna), the enterprise, methodology, nomenclature, form factor (instead of size or shape), competency, modality, provided(with), provision(ing), irregardless/irrespective, signage, colorized, pulsating, ideate
#28
Unread 02-07-2014, 06:14
yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by Pault View Post
Who says that next year is going to be a shooting game? What about the end game, if there is one?



The gyro will not give you an accurate heading over the course of the match. A general rule that I have heard is that a good FRC gyro will give you about 15 degrees of drift over the length of a match. My recommendation on this end is to check your angle whenever possible using the vision targets, and when you can't see the vision target, just use the gyro to calculate your deviation from the last time you could. It may even be possible to do the same thing with the roboRIO's 3-axis accelerometer for location.
I was going to have a boolean for the field direction. Camera pose estimation would provide the actual location/direction. Otherwise, what do y'all think about me using GPS? I am thinking about a multi-camera system, so there will almost always be a vision target in view. The accelerometer/gyro is just so that the system can tell whether the bot is facing home or enemy territory.

Quote:
Originally Posted by gblake View Post
My hunch is that accomplishing the original post's goal involves solving (or integrating the solutions to), not dozens, but a few hundreds of problems, and my hunch is that converting the vision system's raw imagery into useful estimates of the states of the important objects involved in a match will be the hardest part.

To help wrap your head around the job(s) imagine the zillions of individual steps involved in carrying out the following ...

Create a simple simulation of the field, robots, and game objects for any one year's game.

Use the field/objects/robots simulation to simulate the sensor data (vision data) an autonomous robot would receive during a match. Be sure to include opponents actively interfering with your robot.

Add in the internal-state data the robot would have describing its own state.

Then - Ask yourself, what learning algorithms do I apply to this and how will I implement them.

It's a daunting job, but, if folks can get Aibo toy dogs to play soccer, you could probably come up with a (simulated initially?) super-simple robot that could put a few points on the board during a typical FRC match.

It's veerrryyyy unlikely that an implementation will make any human opponents nervous in this decade (most other posters have said the same); but I think what the OP described can be made, if your goal is simply playing, and your goal isn't winning against humans.

Blake
I have diverted a tad from my original post. I had asked whether it was feasible to make the entire game automatically learnable. I have learned from many other CD'ers that it is nearly impossible. I know that it isn't strictly impossible, but it is just EXTREMELY impractical.

However, now I have decomposed the program idea, and it seems quite practical (and actually useful) to automate some parts of the game. Things like autonomous driving, holding position, and manipulating the gamepiece seem quite simple to implement (even without AI/ML).
Just a transform with OpenCV targeting a RotatedRect can accomplish finding a gamepiece. Using a rotatedRect again, you can filter for robots. As faust1706 explained to me a long time ago, just color-filter a bumper. Use minAreaRect to crop the bumper. Then, add two lines, dividing the bumper into 3 pieces. perform minAreaRect on this again and then use the height of the middle section to approximate the robot distance using an average bumper height.
I can tell that pathfinding will work, because I am treating it just as if this were a video game!

Say that I was tracking the ball as it was being shot. I could triangulate its height using some trigonometry, and its distance using a Kinect depth map. I could find the sweet spot for a made shot from the Kinect's distance readings and height measurements. Now, say the ball misses the goal. The robot could measure the error, e.g. 6 inches low, and try to correct the shot. For example, in the 2014 game, if the robot misses low, it could move forward a bit; if that makes the problem worse, it could move back, and so on. This type of code would be quite crude, but it would still get the job done. If I used ML for this instead, the robot would surely miss the first few shots, but it could easily become more accurate than a human thereafter. If we wanted, we could also add data points we already know manually. This is supervised learning.
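A minimal sketch of that crude correction loop, assuming a hypothetical `shoot(distance)` callback that fires and reports the vertical miss (negative = low, positive = high). The "low means move closer" guess is just a starting assumption; the loop reverses direction whenever the miss gets worse:

```python
def correct_shot(distance_ft, shoot, step_ft=0.5, max_tries=5, tol=0.1):
    """Trial-and-error shot correction: step the firing position in one
    direction, and reverse whenever the miss error grows."""
    error = shoot(distance_ft)
    direction = -1 if error < 0 else 1   # low miss -> try moving closer
    for _ in range(max_tries):
        if abs(error) <= tol:            # close enough, call it made
            break
        new_d = distance_ft + direction * step_ft
        new_error = shoot(new_d)
        if abs(new_error) > abs(error):  # worse: guessed wrong, reverse
            direction = -direction
        else:                            # better or equal: keep the move
            distance_ft, error = new_d, new_error
    return distance_ft

# Toy shooter model: shots land perfectly at 10 ft, miss high beyond it
final = correct_shot(12.0, lambda d: (d - 10.0) * 2.0)
```

Starting at 12 ft, the loop first steps the wrong way, sees the miss grow, reverses, and walks down to the 10 ft sweet spot in half-foot steps.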

In short, the right approach is not to write one fully automatic program, but to write a framework that allows rapid system integration. If I design it right, I would only need to code a few pieces:
-Vision targets and pose estimation/distance calculations
-Manipulating the gamepiece -- what it looks like, distance, pickup, goal, etc.
-Calculating what went wrong while scoring
-Field (Don't need to code. Need to configure).

And certainly, there are many things that a human player will still perform best.

However, my main concern now is how to generate a map of the field. The Kinect, programmed correctly, offers a polar view of the field. How do I create a Cartesian grid of all the elements?

For example, instead of the Kinect reporting:
Code:

       .   .   .   .   .     
                  
             __
It instead reports:
Code:
                .
           .         .
       .      ___     .
That could be fed into my array system and everything can be calculated from there.
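The conversion from that polar view to a Cartesian grid is plain trigonometry: x = d·sin(θ), y = d·cos(θ) relative to the robot. A sketch, where the grid size, cell size, and the sensor's angle convention are all assumptions:

```python
import math

def polar_to_grid(readings, grid_size=20, cell_ft=1.0):
    """Mark (angle_deg, distance_ft) readings into an occupancy grid
    centered on the robot. 0 deg is straight ahead (up in the grid),
    positive angles are to the right."""
    grid = [[0] * grid_size for _ in range(grid_size)]
    cx = cy = grid_size // 2                          # robot at grid center
    for angle_deg, dist_ft in readings:
        theta = math.radians(angle_deg)
        col = cx + int(round(dist_ft * math.sin(theta) / cell_ft))
        row = cy - int(round(dist_ft * math.cos(theta) / cell_ft))
        if 0 <= row < grid_size and 0 <= col < grid_size:
            grid[row][col] = 1                        # cell is occupied
    return grid

# A wall about 5 ft straight ahead, seen at three angles
grid = polar_to_grid([(-10, 5.1), (0, 5.0), (10, 5.1)])
```

The three readings land in three adjacent cells of the same grid row, five cells in front of the robot, which is exactly the array form described above.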

Also, say that the path is:

Code:
00000001000000000
00000000200000000
00000000030000000
00000000004000000
00000000000500000
how can I make the robot drive straighter instead of going forward, left, forward, right, etc.?

Thanks for all your time, mentors and other programmers!
  #29   Spotlight this post!  
Unread 03-07-2014, 01:17
SoftwareBug2.0's Avatar
SoftwareBug2.0 SoftwareBug2.0 is offline
Registered User
AKA: Eric
FRC #1425 (Error Code Xero)
Team Role: Mentor
 
Join Date: Aug 2004
Rookie Year: 2004
Location: Tigard, Oregon
Posts: 486
SoftwareBug2.0 has a brilliant future
Re: A Vision Program that teaches itself the game

Quote:
Originally Posted by yash101 View Post
Also, say that the path is:

Code:
00000001000000000
00000000200000000
00000000030000000
00000000004000000
00000000000500000
how can I make the robot drive straighter instead of going forward, left, forward, right, etc.?

Thanks for all your time, mentors and other programmers!
The path you've plotted seems to imply that the robot can move diagonally. Otherwise it would look like:

Code:
00000001200000000
00000000340000000
00000000056000000
00000000007800000
00000000000900000
And adding diagonals is probably the simplest way to make it do more than literally just left/right/up/down. It is of course not the only way, and it doesn't give you nice curves.
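A greedy 8-connected walk reproduces exactly that diagonal path. This sketch ignores obstacles (a real planner would be A* or similar); it just shows how diagonal steps fall out whenever both coordinates still differ:

```python
def eight_connected_path(start, goal):
    """Walk one cell at a time toward the goal, moving diagonally
    whenever both the row and the column still differ."""
    (r, c), (gr, gc) = start, goal
    path = [(r, c)]
    while (r, c) != (gr, gc):
        r += (gr > r) - (gr < r)   # -1, 0, or +1 toward the goal row
        c += (gc > c) - (gc < c)   # -1, 0, or +1 toward the goal column
        path.append((r, c))
    return path

# The diagonal path from the grid above, as (row, col) cells
path = eight_connected_path((0, 7), (4, 11))
```

This yields the 1-2-3-4-5 diagonal from the quoted grid; once either axis lines up, the remaining steps become pure left/right or up/down moves.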
  #30   Spotlight this post!  
Unread 03-07-2014, 04:00
yash101 yash101 is offline
Curiosity | I have too much of it!
AKA: null
no team
 
Join Date: Oct 2012
Rookie Year: 2012
Location: devnull
Posts: 1,191
yash101 is an unknown quantity at this point
Re: A Vision Program that teaches itself the game

I think what I will do is calculate the angle between each pair of consecutive points, and update the position and angle continuously. The gyro will provide accurate heading measurements, and whenever the robot sees the goal, the gyro will be recalibrated, keeping it at maximum accuracy!
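The angle-between-consecutive-points step is a one-liner with `atan2`. A sketch (the (row, col) grid convention and the zero-heading direction are assumptions here):

```python
import math

def headings(path):
    """Heading in degrees from each waypoint to the next.
    Convention assumed: +col is 0 degrees, angles grow toward +row."""
    return [math.degrees(math.atan2(r1 - r0, c1 - c0))
            for (r0, c0), (r1, c1) in zip(path, path[1:])]

# The diagonal path from earlier in the thread, as (row, col) cells
angles = headings([(0, 7), (1, 8), (2, 9), (3, 10), (4, 11)])
# four identical 45-degree segments: the robot can hold one heading
```

When consecutive segments share the same heading, as they do here, the drive code can hold a single gyro setpoint instead of stuttering forward/left/forward/right.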
The Chief Delphi Forums are sponsored by Innovation First International, Inc.


Copyright © Chief Delphi