Two years ago, two mentors and I tackled computer vision for the first time. One is an electrical engineer and the other a software engineer; I had zero programming knowledge before that. We finished the program a week before our first regional, but our robot had some connection or electrical issue at the event (we'll never really know which). Still, we learned a lot as a team that year, and that is all that really matters.
Last year I recruited another student to tackle computer vision, and he ate up all the information. He and I wrote the vision program, and it got some attention. The EE mentor just nudged us in the right direction when we were stuck, and took us to the whiteboard to work through the logic flow if we were still stuck. The software engineer, together with another student, set out to create a scouting application and succeeded: it uses a Wii Remote for each robot, a tablet for each alliance, and a master computer.
Over this off-season (pre-season?), a student on our team got his dad involved on the programming side of things. He is the head of the computer science department at the local state university, and he brought along some other professors to help us. Over the summer we worked in depth on programming, and then we got this idea: can we make a completely autonomous robot for next year's competition?
We are attempting to do away with the Kinect in favor of the ASUS Xtion, a much smaller camera with all the same features (minus the servo motor and accelerometer) that was designed specifically for development. We are also switching from the ODROID-X2 to the ODROID-XU; not a big change, but still a change.
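For what it's worth, a modern OpenCV build with OpenNI support can talk to the Xtion directly; here is a minimal capture sketch under that assumption (the constants are from OpenCV's OpenNI backend):

    import cv2

    # Open the Xtion through OpenCV's OpenNI backend (CAP_OPENNI_ASUS is
    # the device id OpenCV uses for the ASUS sensors); this only works if
    # OpenCV was compiled with OpenNI support.
    cap = cv2.VideoCapture(cv2.CAP_OPENNI_ASUS)
    if not cap.isOpened():
        raise RuntimeError("Xtion not found -- is OpenNI support compiled in?")

    cap.grab()  # pull one frame bundle from the sensor
    ok, depth = cap.retrieve(flag=cv2.CAP_OPENNI_DEPTH_MAP)  # 16-bit depth in mm
    if ok:
        print(depth.shape, depth.dtype)  # e.g. (480, 640) uint16
    cap.release()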
Here is a spiel I gave this morning to a student from a local team whom the other student and I have been helping this fall and winter:
OpenCV and OpenNI are pretty solid libraries, but OpenNI does a lot more with 3D point clouds (look up Kinect SLAM). We are trying to do collision detection and avoidance, and to add A* path planning, in an attempt to be the first team with a completely autonomous robot during tele-operation. If you just want to do depth stuff, such as checking whether an object is there, stay with OpenCV. [Another student] and I wrote a program that takes the depth map and makes an XZ image (an example can be found here: http://www.chiefdelphi.com/media/photos/39138) with the objects in view circled (because they are soccer balls). That is really the extent of OpenCV and libfreenect.

From there you can apply A* path planning, and even apply a homography to learn the motion of the objects (the math is easier when you assume you aren't moving but everything else is); you can use that to calculate the speed of a robot that is in your way. Then, since you know what velocity (or velocities) you'll be moving at, you can calculate where that robot will be at any given time and adjust your A* path accordingly, in advance.
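To make that XZ step concrete, here is a rough NumPy sketch of the idea (not our actual code; the intrinsics fx and cx and the grid dimensions below are made-up placeholders you would replace with your own calibration):

    import numpy as np

    def depth_to_xz(depth_mm, fx=570.0, cx=320.0, cell_mm=50.0,
                    width_cells=128, range_cells=128):
        """Flatten a (H, W) uint16 depth map (mm) into a top-down XZ occupancy grid."""
        h, w = depth_mm.shape
        u = np.tile(np.arange(w), h)              # pixel column of every sample
        z = depth_mm.ravel().astype(np.float64)   # forward distance in mm
        u, z = u[z > 0], z[z > 0]                 # depth 0 means "no reading"
        x = (u - cx) * z / fx                     # pinhole model: lateral offset in mm
        xi = (x / cell_mm + width_cells / 2).astype(int)   # robot centered in x
        zi = (z / cell_mm).astype(int)
        grid = np.zeros((range_cells, width_cells), np.uint8)
        keep = (xi >= 0) & (xi < width_cells) & (zi < range_cells)
        grid[zi[keep], xi[keep]] = 255            # mark occupied cells
        return grid

On an image like that, something like cv2.HoughCircles (or a simple contour pass) is one way to get the round game pieces circled.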
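Once you have that grid, the A* part is surprisingly small. Here is a minimal self-contained sketch, assuming 4-connected movement, unit step cost, and a Manhattan heuristic; a real version would first inflate the obstacles by the robot's footprint:

    import heapq

    def a_star(grid, start, goal):
        """Plan a path of (row, col) cells through free (== 0) cells of `grid`."""
        def h(a):                                  # Manhattan-distance heuristic
            return abs(a[0] - goal[0]) + abs(a[1] - goal[1])
        g_cost, came_from, closed = {start: 0}, {}, set()
        open_heap = [(h(start), start)]
        while open_heap:
            _, cur = heapq.heappop(open_heap)
            if cur in closed:
                continue                           # stale heap entry, skip it
            closed.add(cur)
            if cur == goal:                        # rebuild path by walking parents
                path = [cur]
                while cur in came_from:
                    cur = came_from[cur]
                    path.append(cur)
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                        and grid[nxt] == 0
                        and g_cost[cur] + 1 < g_cost.get(nxt, float("inf"))):
                    g_cost[nxt] = g_cost[cur] + 1
                    came_from[nxt] = cur
                    heapq.heappush(open_heap, (g_cost[nxt] + h(nxt), nxt))
        return None                                # boxed in: no route exists

    # e.g. path = a_star(grid, (0, 64), (120, 64)) on the grid from the sketch above

The moving-robot prediction then just edits the grid: mark the cells the other robot is predicted to occupy around the times you would reach them, and replan.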
This is our general approach, but with OpenNI. We are looking for all the help we can get, even with our extensive programming team. (Well, extensive with respect to our past, when we had fewer than three programmers and one mentor; we now have five mentors and nine students dedicated to it.)