#16
Re: The kinect on the Robot
The FIRST field is indeed shiny. It has aluminum diamond plate all along the wall, and lexan up above. Lexan is both transparent and reflective. All polished surfaces, such as glass, lexan, aluminum, and painted panels, will reflect to some degree, which means that overhead lights and lights from the robots will show up wherever the geometry produces a reflection point (where the camera and the light source mirror each other about the surface normal). Furthermore, the lexan doesn't stop light from passing through, so lights in the stands, windows, and other sources can shine into the camera through the driver wall of lexan. The t-shirts, hats, and other gear worn by the drivers also show through the lexan. It all adds up to an image that is quite difficult for simple processing to deal with.

This is why I wouldn't recommend looking just for color. If you combine color and shape information, you will be far more robust; that will almost always reject glare, even from the diamond plate. If the shape information is robust enough, you don't even need color, just brightness. The camera will most definitely pick up the hoop, net, and supports. Camera placement is important in part because the hoop and net can block the retroreflective tape; as shown on the targets in the paper, the lower edge is the first to be affected. As for blinding drivers, I don't think the LEDs need to be very bright. They certainly aren't as bright as the BFL.

Greg McKaskle
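The color-plus-shape idea above can be sketched in a few lines. This is an illustrative filter, not any team's actual code: after thresholding for brightness, keep only blobs whose bounding-box fill ratio and aspect ratio look like the rectangular vision target. Irregular glare streaks tend to fail both checks. The function name and all threshold values are hypothetical examples.

```python
def target_like(blob, min_fill=0.75, min_aspect=1.2, max_aspect=2.2):
    """Return True if a blob of bright pixels is shaped like a
    rectangular vision target rather than an irregular glare patch.

    blob: list of (row, col) coordinates of pixels that passed the
    brightness threshold. Thresholds here are illustrative guesses,
    not field-tested values.
    """
    rows = [r for r, _ in blob]
    cols = [c for _, c in blob]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    fill = len(blob) / float(height * width)  # glare is sparse/irregular
    aspect = width / float(height)            # target is wider than tall
    return fill >= min_fill and min_aspect <= aspect <= max_aspect

# A solid 4x8 rectangle of bright pixels passes; a thin diagonal
# streak (typical of glare off diamond plate) fails the fill test.
rect = [(r, c) for r in range(4) for c in range(8)]
streak = [(i, i) for i in range(8)]
```

A real pipeline would get the blobs from a connected-components pass and could add the color check on top, but the shape test alone is what rejects most reflections.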
#17
Re: The kinect on the Robot
Quote:
#18
Re: The kinect on the Robot
Has anyone tried putting two Kinects side by side?

If the Kinect works from a density of dots per square inch, having two Kinects would double the number of dots, effectively making each Kinect think the distance is shorter than it really is.
#19
Re: The kinect on the Robot
There have been three or four threads about this before this one. The consensus is that it is generally a bad idea. In fact, the shoehorning of the Kinect into this year's competition was less than tactful, and as far as I can tell it has only succeeded in upsetting people because of its uselessness.

Do you really want to stick it on the robot? Well... fine, but you probably won't get much more data from it than you would from a plain camera. You already have distance, based on the starting position, and alignment you would get from the normal camera.

You'll also need to spend money to do this. In one of the other threads, someone mentioned a USB-to-Ethernet converter/interface. That would probably be the best option, but it was about $125 or so. Another would be the PandaBoard, at closer to $200, but since it is an embedded computer you would need to program it yourself.

Teams should stop getting hung up on the idea of using the Kinect as some kind of uber-sensor. The data simply will not help that much. I would keep it for another time and use it in an off-season project.
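The claim that a plain camera already gives you range is worth a quick sketch. Under a pinhole-camera model, distance follows from the target's known physical height and its apparent height in pixels. The focal-length value below is a made-up example; a real one comes from calibrating your own camera.

```python
def distance_to_target(target_height_m, pixel_height, focal_length_px):
    """Pinhole-camera range estimate: distance = H * f / h.

    target_height_m: real height of the vision target (meters)
    pixel_height:    height of the target in the image (pixels)
    focal_length_px: focal length in pixels, from camera calibration
                     (the number used below is illustrative only)
    """
    return target_height_m * focal_length_px / pixel_height

# Illustrative numbers: a 0.5 m tall target spanning 100 px through a
# lens with f = 600 px works out to 3 m away.
d = distance_to_target(0.5, 100, 600.0)
```

This is the kind of estimate a regular camera already provides, which is part of why the Kinect's depth data adds less than teams expect.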
#20
Re: The kinect on the Robot
Quote:
http://openkinect.org/wiki/Main_Page, and I have successfully used a C# program to control the Kinect in Ubuntu.