Re: A Vision Program that teaches itself the game
However, now that I have decomposed the program idea, it seems quite practical (and actually useful) to automate some parts of the game. Things like autonomous driving, holding position, and manipulating the gamepiece seem fairly simple to implement (even without AI/ML). A transform with OpenCV targeting a RotatedRect can find a gamepiece. Using a RotatedRect again, you can filter for robots: as faust1706 explained to me a long time ago, just color-filter a bumper, use minAreaRect to crop it, then add two lines dividing the bumper into three pieces. Perform minAreaRect on the middle piece and use its height, together with an average bumper height, to approximate the robot's distance.

I can tell that pathfinding will work, because I am treating this just as if it were a videogame! Say that I was tracking the ball as it was being shot. I could triangulate its height using some trigonometry, and its distance using a Kinect distance map. I could find the hotspot for a made shot using the Kinect's distance readings and height measurements. Now, say the ball misses the goal. The robot could estimate the error (say, 6 inches low) and try to figure out how to make the shot better. For example, in the 2014 game, if the robot missed a shot it would check whether it was low or high. If it was low, it could move forward a bit; if that made the problem worse, it could move back, and so on. This type of code would be quite crude, but it would still get the job done. If I used ML for this instead, the robot would surely miss the first few shots, but it could easily become more accurate than a human thereafter. If we wanted, we could also manually add data points that we already know -- this is supervised learning.

Basically, in short, the goal is not to write one full autonomous program, but to write a program that allows rapid system integration. If I write my program right, I would only need to code a few pieces:

- Vision targets and pose estimation/distance calculations
- Manipulating the gamepiece -- what it looks like, distance, pickup, goal, etc.
- Calculating what went wrong while scoring
- The field (doesn't need to be coded, only configured)

And certainly, there are many things that a human player can still perform best. (Very rough Python sketches of the bumper distance trick, the shot-correction loop, and grid pathfinding are at the bottom of this post.)

However, my main concern now is how to generate a map of the field. The Kinect will offer a polar view of the field if programmed correctly, but how do I create a Cartesian grid of all the elements? For example, instead of the Kinect reporting:

Code:
. . . . .
__
I want it to build a top-down Cartesian map of the same scene, something like:

Code:
.
. .
. ___ .
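
The raw geometry of that conversion seems simple enough -- each Kinect reading is basically an angle plus a distance, so something like the sketch below (just binning polar hits into an occupancy grid, with a made-up cell size and grid dimensions) should turn a single scan into a Cartesian map. What I am less sure about is everything around it: tracking the robot's own pose and merging scans over time.

Code:
import math

GRID_SIZE = 17        # cells per side -- arbitrary example size
CELL_METERS = 0.5     # assumed resolution: each cell covers 0.5 m

def polar_scan_to_grid(scan):
    """scan is a list of (angle_radians, distance_meters) readings taken from
    the robot, with angle 0 pointing straight ahead.  The robot is assumed to
    sit at the bottom-center cell.  Returns a grid of 0s and 1s."""
    grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
    robot_col, robot_row = GRID_SIZE // 2, GRID_SIZE - 1
    for angle, dist in scan:
        x = dist * math.sin(angle)   # sideways offset in meters
        y = dist * math.cos(angle)   # forward offset in meters
        col = robot_col + int(round(x / CELL_METERS))
        row = robot_row - int(round(y / CELL_METERS))
        if 0 <= row < GRID_SIZE and 0 <= col < GRID_SIZE:
            grid[row][col] = 1       # mark the cell as occupied
    return grid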
Also, say that the path is (digits marking the waypoints in order):

Code:
00000001000000000
00000000200000000
00000000030000000
00000000004000000
00000000000500000

Thanks for all your time, mentors and other programmers!
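
Here is roughly what I mean by the bumper trick. This is only a sketch: it assumes OpenCV 4's findContours signature, a red bumper with a placeholder HSV range, a guessed focal length, and a roughly 5 in bumper height, and it crops with the axis-aligned bounding box instead of the rotated rect for simplicity.

Code:
import cv2

AVG_BUMPER_HEIGHT_IN = 5.0   # assumed real-world bumper height
FOCAL_LENGTH_PX = 700.0      # placeholder -- calibrate for your camera

def estimate_robot_distance(frame_bgr):
    """Color-filter a bumper, crop it, and use the pixel height of the middle
    third to approximate distance with a pinhole-camera model."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Placeholder HSV range for a red bumper; tune for your lighting.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    bumper = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(bumper)
    crop = mask[y:y + h, x:x + w]

    # Divide the bumper into three pieces and only measure the middle one,
    # which is the least distorted by the bumper wrapping around the corners.
    third = w // 3
    middle = crop[:, third:2 * third]
    mid_contours, _ = cv2.findContours(middle, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not mid_contours:
        return None
    (_, _), (rw, rh), _ = cv2.minAreaRect(max(mid_contours, key=cv2.contourArea))
    pixel_height = min(rw, rh)   # the bumper segment is wider than it is tall
    if pixel_height < 1:
        return None

    # Pinhole approximation: distance = real_height * focal_length / pixel_height
    return AVG_BUMPER_HEIGHT_IN * FOCAL_LENGTH_PX / pixel_height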
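
The "crude but gets the job done" shot correction is basically a one-dimensional hill climb: nudge the robot, and flip the direction of the nudge whenever the miss gets worse. In the sketch below, drive_inches and shoot_and_measure_miss_inches are hypothetical stand-ins for whatever the drivetrain and vision code actually expose.

Code:
def correct_shot(drive_inches, shoot_and_measure_miss_inches,
                 step=6.0, tolerance=2.0, max_attempts=5):
    """Crude 1-D hill climb: after each miss, nudge the robot by `step` inches
    and flip direction whenever the miss gets bigger instead of smaller."""
    last_miss = abs(shoot_and_measure_miss_inches())
    direction = 1.0                      # start by creeping toward the goal
    for _ in range(max_attempts):
        if last_miss <= tolerance:       # close enough -- stop adjusting
            return True
        drive_inches(direction * step)
        miss = abs(shoot_and_measure_miss_inches())
        if miss > last_miss:
            direction = -direction       # that made it worse, back up instead
        last_miss = miss
    return False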
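
And for the "treat it like a videogame" pathfinding, I am picturing plain breadth-first search over the occupancy grid (A* with a distance heuristic would be the obvious upgrade); the path it returns could then be thinned into a handful of waypoints like the digits in the grid above.

Code:
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = blocked).
    start and goal are (row, col) tuples; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk the parent links back to start
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for neighbor in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = neighbor
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and neighbor not in parent):
                parent[neighbor] = cell
                queue.append(neighbor)
    return None                            # goal is unreachable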