…However, this code will rapidly switch between the cameras. What I want to do is write a program that runs on the driver station to “deinterlace” the images by displaying every other frame in its own spot. I’m quite fluent in Processing (Java with some cool libraries), which would make deinterlacing the images easy if I had the image stream available… My question is: how do I get raw image data from the robot on the driver station? Ideally I want a solution that can work through the FMS.
Is there a part of the NIVision library that can receive this stream, or do I need another Java library? And how can third-party programs on the DS access the video stream at all?
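For what it's worth, one common way to get at the raw frames, assuming the camera serves Motion JPEG over HTTP (as the Axis cameras typically used in FRC do), is to open the stream yourself and scan for the JPEG start/end markers. Below is a minimal Java sketch of that idea; the stream URL and the handleFrame hook are placeholders rather than real API, and whether the relevant port is open through the FMS is something you'd have to check against that year's rules.

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import javax.imageio.ImageIO;

public class MjpegGrabber {
    // Placeholder address: substitute your camera's IP and MJPEG path.
    private static final String STREAM_URL = "http://10.0.0.11/mjpg/video.mjpg";

    public static void main(String[] args) throws Exception {
        InputStream in = new URL(STREAM_URL).openStream();
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int prev = -1;
        int cur;
        boolean inJpeg = false;
        while ((cur = in.read()) != -1) {
            if (!inJpeg && prev == 0xFF && cur == 0xD8) { // JPEG start-of-image marker
                inJpeg = true;
                buf.reset();
                buf.write(0xFF); // keep the 0xFF that was already consumed as 'prev'
            }
            if (inJpeg) {
                buf.write(cur);
                if (prev == 0xFF && cur == 0xD9) {        // JPEG end-of-image marker
                    // Naive scan, but good enough for typical camera MJPEG streams.
                    BufferedImage frame =
                        ImageIO.read(new ByteArrayInputStream(buf.toByteArray()));
                    if (frame != null) {
                        handleFrame(frame);
                    }
                    inJpeg = false;
                }
            }
            prev = cur;
        }
    }

    // Placeholder hook: the deinterlacing/display code would go here.
    private static void handleFrame(BufferedImage frame) {
        System.out.println("got frame " + frame.getWidth() + "x" + frame.getHeight());
    }
}
```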
Do you want to modify the images on the stream, separating them, for example, into evens and odds to show in two different displays?
Or do you truly mean that you want to take the even and odd lines within a single image and merge or separate them?
I sorta assume the first, in which case you can take the i terminal of the while loop that is getting the images and use Quotient & Remainder to divide by 2 (that's the % equivalent in LV). Wire the remainder of 0 or 1 to a case structure that writes to the correct image display.
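For readers who don't speak LabVIEW, the even/odd selection amounts to this in Java (the loop and display names are stand-ins for illustration):

```java
public class EvenOddDemo {
    public static void main(String[] args) {
        // i plays the role of the while loop's iteration terminal;
        // i % 2 is the remainder the case structure switches on.
        for (int i = 0; i < 6; i++) {
            String target = (i % 2 == 0) ? "display A" : "display B";
            System.out.println("frame " + i + " -> " + target);
        }
    }
}
```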
What I meant to say is that every other frame is from a different camera. Say we have two cameras, A and B. The video stream would look like this:
ABABABABABAB
The driver station would then simply grab every frame and display the A frames in one window and the B frames in another. It would halve the frame rate for each camera, but that’s not important.
The receiving code is not in NIVision, but in the dashboard code. The loop allocates one image, gets the images, and displays them in a single image display. If you customize the dashboard to have two displays, you can then write the images to alternating displays using the even/odd approach described above.
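If you'd rather do the display side in Java instead of the LabVIEW dashboard, here is a rough Swing sketch of the two-display version. DualDisplay and its details are invented for illustration; in a real setup, handleFrame would be fed by something like the MJPEG grabber sketched earlier rather than the colored demo frames in main.

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.GridLayout;
import java.awt.image.BufferedImage;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

public class DualDisplay {
    private final JLabel displayA = new JLabel(); // even frames: camera A
    private final JLabel displayB = new JLabel(); // odd frames:  camera B
    private int count = 0;

    public DualDisplay() {
        JFrame window = new JFrame("Deinterlaced camera feeds");
        window.setLayout(new GridLayout(1, 2));
        window.add(displayA);
        window.add(displayB);
        window.setSize(1280, 480);
        window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        window.setVisible(true);
    }

    // Call once per frame pulled off the stream; alternating frames
    // land in alternating displays, halving each camera's frame rate.
    public void handleFrame(BufferedImage frame) {
        JLabel target = (count % 2 == 0) ? displayA : displayB;
        target.setIcon(new ImageIcon(frame));
        count++;
    }

    public static void main(String[] args) throws Exception {
        DualDisplay d = new DualDisplay();
        // Demo stand-in for the camera stream: alternating solid-color frames.
        for (int i = 0; i < 10; i++) {
            BufferedImage f = new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
            Graphics g = f.getGraphics();
            g.setColor(i % 2 == 0 ? Color.RED : Color.BLUE);
            g.fillRect(0, 0, 640, 480);
            g.dispose();
            d.handleFrame(f);
            Thread.sleep(500);
        }
    }
}
```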