Quote:
Originally Posted by Michael Hill
You could derive your location on the playing field with 2 cameras. It's a "cooperative" process (in that you know the dimensions of goals, etc.)
While that is true, it would be much easier to solve for the pose, because you do know the dimensions of the target. And take 2012 with Rebound Rumble: there were 4 targets that were hardly ever obstructed (this year there was that pesky pyramid), so you could solve the pose problem for all 4 targets; that way, if you only saw one hoop, you could calculate where the other 3 would be with deadly accuracy. That's what we did. And if you use a gyro, like we do with mecanums, you could check the gyro readings against your vision solutions. That's something we plan on doing in the future, considering how versatile this program is.
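For anyone curious, here's a minimal sketch of that single-camera pose solve using OpenCV's solvePnP, assuming you already have a target detector and a calibrated camera. The target size, corner pixels, intrinsics, and hoop offset below are made-up placeholders, not the real field numbers:

```python
import numpy as np
import cv2

# 3D corners of one vision target in the target's own frame (inches).
# The 24x18 size is a placeholder -- use the real game-manual dimensions.
object_points = np.array([
    [-12.0,  9.0, 0.0],   # top-left
    [ 12.0,  9.0, 0.0],   # top-right
    [ 12.0, -9.0, 0.0],   # bottom-right
    [-12.0, -9.0, 0.0],   # bottom-left
], dtype=np.float64)

# Pixel coordinates of the same corners from your target detector
# (hypothetical values for illustration):
image_points = np.array([
    [310.0, 220.0],
    [420.0, 224.0],
    [418.0, 301.0],
    [312.0, 297.0],
], dtype=np.float64)

# Camera intrinsics from a one-time calibration; distortion assumed zero here.
camera_matrix = np.array([
    [700.0,   0.0, 320.0],
    [  0.0, 700.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist_coeffs = np.zeros(5)

# Solve for the target's pose (rotation + translation) in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)

# Because the hoops share the same wall, a known offset in the target frame
# lets you predict where another hoop's corners should appear in the image
# (the 60-inch offset is illustrative, not the real field dimension):
other_hoop = object_points + np.array([0.0, 60.0, 0.0])
predicted, _ = cv2.projectPoints(other_hoop, rvec, tvec,
                                 camera_matrix, dist_coeffs)
print("predicted corners of the neighboring hoop:\n", predicted.reshape(-1, 2))
```

The point is that one target plus its known dimensions pins down the full 3D pose, and the known field layout does the rest.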
Sorry, I've been a vision nut since I started vision programming for Rebound Rumble; well, really since I wrote a program to track 2011's pegs and tell which is which and what colour to put at each peg (this was done after the season).
Computer vision is a whole field of its own within computer science. Every decent college will have a professor who can at least help you with it. Just email them. I've worked with professors from Harvey Mudd, Wash U, and Missouri S&T, and soon UMSTL as well, because they want to support our team by giving us mentors.
I find it unnecessary to use two cameras, because the data from both of them has to be related, and to do that you have to solve for the transform between them (which is essentially pose). So why not solve for pose to begin with and use only one camera?
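To make that concrete: the tvec from a single-camera solvePnP already gives you full range and bearing to the target, with no second camera and no stereo baseline to calibrate. A small sketch (the helper name and the sample tvec are hypothetical):

```python
import math
import numpy as np

def range_and_bearing(tvec):
    """Range and bearing to the target from a solvePnP translation vector.
    Camera frame: x right, y down, z forward (OpenCV convention)."""
    x, y, z = np.ravel(tvec)
    distance = math.sqrt(x*x + y*y + z*z)     # straight-line range
    bearing = math.degrees(math.atan2(x, z))  # angle off the camera axis
    return distance, bearing

# Hypothetical tvec (inches): target 10 in. right, 25 in. up, 120 in. ahead.
dist, brg = range_and_bearing(np.array([[10.0], [-25.0], [120.0]]))
print(f"range: {dist:.1f} in, bearing: {brg:.1f} deg")
# That bearing is also the number you'd cross-check against the gyro,
# as mentioned above.
```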