Re: Dual Cameras - Dual Purposes
Side note: with the depth camera, you are losing a lot of data converting it down to something we can see and make sense of. I don't know anything about C#, but the following code is C++:
IplImage *raw;
while (1)
{
    raw = freenect_sync_get_depth_cv(0);        // 11-bit image, ~10 bits useful
    Mat raw_mat(raw), depth_mat;
    raw_mat = raw_mat - 512;                    // erase unusable close values, leaving ~9 bits
    raw_mat.convertTo(depth_mat, CV_8UC1, 0.5); // scale 9-bit down to 8-bit
    imshow("Depth", depth_mat);
    if (waitKey(10) == 27) break;               // let HighGUI redraw; Esc quits
}
just to clean up the depth map a bit. I assume this was a given program, since the colouring scheme for distance is identical, or nearly so, to the one I found with freenect. I'm sure the above code could easily be converted to C#.
Anyway, back to your question. Have you ever thought about running two programs for this task? One program uses camera A, the other camera B. Given the bandwidth restriction these days (which I'm very glad FIRST implemented, because it teaches you not to send a lot of data in a short period of time), you could create a simulation on your driver station. This year, based on the distances and x rotation found by our vision program, the screen on our driver station adjusted a simulated target to fit those constraints. So if you were, say, trying to track frisbees or other robots, you could send just the coordinates and size, then recreate them on your driver station. You could also update the display for every solution, not just every 100 ms, so that's a bonus. Just a suggestion. If you are determined to display both, then I'd say write two programs. But I'm not sure how natural one can get at reading a depth image in the heat of a match; that'd be some serious mental training.
Just wondering: I assume the other image is from an RGB camera, yes? Is it the one on the Kinect, or a webcam/Axis camera?
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."