Quote:
Originally Posted by Austinh100
Our team is going to attempt to get this working for the 2012 season, we are ordering a panda board and will keep you guys updated.
Sounds great! I'm working on getting a pipeline running with the MS SDK and OpenCV. A couple of notes from tests run with the MS depth view:
Depending on the thickness of the hoop netting, the Kinect may be able to see it. I used a fairly thick net and it picked it up reasonably well.
It may have issues with the retroreflective tape; the depth sensor tends to throw errors on reflective items.
My hope is to use a combination of RGB and depth to see the red and blue squares, and thus be able to filter out background colors. Remember that the depth sensor will see the rectangles and the support beams on the field at roughly the same depth, and it will see straight through the poly.
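If it helps, here is a minimal OpenCV sketch of what I mean by the RGB + depth combo. It assumes you have already pulled a registered color frame (8-bit BGR) and depth frame (16-bit, millimeters) out of the SDK; the HSV thresholds and the 1.5 m to 8 m depth window are placeholder numbers, not tuned values from testing.

// Combined RGB + depth filter sketch (untuned placeholder thresholds).
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> findTargets(const cv::Mat& bgr,
                                                const cv::Mat& depthMm)
{
    // 1. Color: pick out the red and blue markings in HSV space.
    cv::Mat hsv, redLo, redHi, blue;
    cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 120, 70),   cv::Scalar(10, 255, 255),  redLo);
    cv::inRange(hsv, cv::Scalar(170, 120, 70), cv::Scalar(180, 255, 255), redHi);
    cv::inRange(hsv, cv::Scalar(100, 120, 70), cv::Scalar(130, 255, 255), blue);
    cv::Mat colorMask = redLo | redHi | blue;

    // 2. Depth: keep only pixels with a valid reading inside the expected
    //    target range; a zero depth (no return, e.g. off the retroreflective
    //    tape or through the poly) gets rejected automatically.
    cv::Mat depthMask;
    cv::inRange(depthMm, cv::Scalar(1500), cv::Scalar(8000), depthMask);

    // 3. Combine the masks and pull out candidate target blobs.
    cv::Mat mask = colorMask & depthMask;
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}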
Also, the Kinect's motor is limited to about 15 angle changes, so if you are planning to use it to pick up balls, you may want to take this into consideration. And at 5' mounting height and its maximum downward angle, it can only see within about 3 feet of itself. I am considering the idea of two Kinects, one for balls and one for hoops. While one would work for both, I think it may be easier to capture using a second.
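On the tilt motor: if I remember the SDK docs right, the limit works out to roughly one angle change per second and about 15 changes in a 20-second window (double-check that before relying on it). Something like the sketch below would keep ball-tracking code from hammering the motor; setTiltAngle() is just a stand-in for whatever SDK call you end up using.

// Tilt-command rate limiter sketch (window/limit values are assumptions).
#include <chrono>
#include <deque>
#include <iostream>

class TiltLimiter {
public:
    // Returns true if the command was issued, false if it was dropped
    // because issuing it would exceed the duty-cycle limits.
    bool requestTilt(int degrees) {
        using namespace std::chrono;
        const auto now = steady_clock::now();

        // Forget timestamps older than the 20-second window.
        while (!history_.empty() && now - history_.front() > seconds(20))
            history_.pop_front();

        const bool tooSoon = !history_.empty() && now - history_.back() < seconds(1);
        const bool tooMany = history_.size() >= 15;
        if (tooSoon || tooMany)
            return false;

        setTiltAngle(degrees);
        history_.push_back(now);
        return true;
    }

private:
    void setTiltAngle(int degrees) {
        // Placeholder: swap in the real SDK tilt call here.
        std::cout << "tilt -> " << degrees << " degrees\n";
    }

    std::deque<std::chrono::steady_clock::time_point> history_;
};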
Hopefully my notes are somewhat helpful.