Invictus programmer here!
You might be able to just send the target parameters you want to use.
I’m not completely sure what you mean by “parameters.” Just the optimal depth for firing or what?
Displaying it on the dashboard is step two, correct?
…
Sending data from any processor on the robot to the DS is pretty much the same, but you need to make sure you use ports compatible with the FMS setup. The FMS blocks most ports and leaves a select few open for exactly this sort of communication. Again, use TCP or sockets. The modification to the Dashboard will be very similar to the code on the robot: read the image from the correct port and IP. Depending on the format of the image, I’d probably try to convert it to an IMAQ image. The WPI functions do this internally for JPEGs, and it is possible to hand over an array of pixel data and have IMAQ use it for the image data. Then you use the normal dashboard image control and write to the terminal. I’ve done three camera images at once before, and it works fine as long as the laptop can keep up with the decompression overhead.
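For what it’s worth, here’s a rough C++ sketch of what the robot-side half could look like on the ITX, assuming POSIX sockets and JPEG-compressed frames. The port number (1180) and the readNextJpegFrame helper are assumptions for illustration only; check the current game manual for which ports the FMS actually leaves open.

```cpp
// Minimal sketch (POSIX sockets, Linux on the ITX) of streaming JPEG frames
// to the Dashboard over a team-use port left open by the FMS.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <vector>

// Hypothetical placeholder: a real program would grab a Kinect frame and
// JPEG-compress it (e.g. with libfreenect + libjpeg).
std::vector<uint8_t> readNextJpegFrame() {
    return std::vector<uint8_t>(1024, 0);  // dummy data so the sketch compiles
}

int main() {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(1180);            // assumed FMS-open team port
    bind(listener, (sockaddr*)&addr, sizeof(addr));
    listen(listener, 1);

    // Dashboard connects to the ITX's IP (e.g. 10.35.93.7) on this port.
    int client = accept(listener, nullptr, nullptr);
    while (client >= 0) {
        std::vector<uint8_t> jpeg = readNextJpegFrame();
        uint32_t len = htonl(static_cast<uint32_t>(jpeg.size()));
        // Length prefix so the Dashboard knows where each JPEG frame ends.
        if (send(client, &len, sizeof(len), 0) < 0) break;
        send(client, jpeg.data(), jpeg.size(), 0);
    }
    close(client);
    close(listener);
    return 0;
}
```

On the Dashboard side, you’d open a TCP connection to that IP and port, read the length prefix, read that many JPEG bytes, and hand them over for IMAQ decoding, much like the WPI camera code does internally. The length prefix is just one simple way of framing the stream; any framing the two sides agree on works.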
First question: yes, that is the second step.
Second: our setup would have the USB from the Kinect going into the ITX, then the ITX would have an Ethernet connection to the D-Link with an IP like 10.35.93.7 or something. If I wanted to bypass the cRIO and just get the image from the ITX onto the dashboard, how would I go about doing that? I’m stumped…
I had heard that you could only do two camera feeds in C++, but obviously that’s not true.
Have you ever thought about running two programs for this task? One program uses camera A, the other camera B. Given the bandwidth restriction nowadays (which I am very glad FIRST implemented, because it teaches teams not to send a lot of data in a short period of time), you could instead create a simulation on your driver station. This year, based on the distance and x rotation found by our vision program, the screen on our driver station adjusted a simulated target to fit those constraints. So if you were, say, trying to track frisbees or other robots, you could send the coordinates and size, then recreate them on your driver station, and you could update the display for every solution, not just every 100 ms, so that’s a bonus.

Just a suggestion… If you are determined to display both feeds, then I’d say write two programs. But I’m not sure how natural one can get at reading a depth image in the heat of a match. That’d be some serious mental training.
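As a rough illustration of that idea, here’s a minimal C++ sketch of sending a handful of target parameters per vision solution over UDP instead of a whole image. The port, the destination IP, and the packet layout are all assumptions you’d match up with whatever your Dashboard expects.

```cpp
// Minimal sketch of the "send parameters, not pixels" idea: a few bytes of
// target data per vision solution over UDP instead of a full image.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

struct TargetPacket {
    float distanceInches;   // distance to the target from the vision solve
    float xRotationDeg;     // horizontal offset/rotation to the target
    float targetWidthPx;    // apparent size, so the DS can scale its drawing
};

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in ds{};
    ds.sin_family = AF_INET;
    ds.sin_port = htons(1130);                       // assumed team-use port
    inet_pton(AF_INET, "10.35.93.5", &ds.sin_addr);  // assumed DS laptop address

    TargetPacket pkt{120.0f, -3.5f, 48.0f};          // example vision solution
    sendto(sock, &pkt, sizeof(pkt), 0, (sockaddr*)&ds, sizeof(ds));
    close(sock);
    return 0;
}
```

A packet like that is only 12 bytes per solution, so you can send it as fast as your vision loop runs without coming anywhere near the bandwidth cap.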
Just wondering: I assume the other image is from an RGB camera, yes? Is it the one on the Kinect, or a webcam/Axis camera?
We’d like to be able to display *both* camera feeds on the same LV dashboard.
To read the depth data during a match, we set the first 24 bits of each 32-bit RGB value according to the depth, and set the other 8 via ++colorBitmap; basically, the color changes as an object gets closer, and vice versa.
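For anyone curious what that kind of depth colorization can look like in code, here’s a minimal C++ sketch of one way to map an 11-bit Kinect depth sample into a 32-bit pixel. It isn’t necessarily the exact ++colorBitmap scheme described above, just the general idea of nearer objects getting one color and farther objects another.

```cpp
// Sketch of colorizing Kinect depth into 32-bit pixels so the feed is
// readable by eye: near objects shade toward red, far objects toward blue.
#include <cstdint>
#include <vector>

// Map an 11-bit Kinect depth sample (0-2047) to a 0xAARRGGBB pixel.
uint32_t depthToPixel(uint16_t depth) {
    uint8_t scaled = static_cast<uint8_t>((depth * 255) / 2047);
    uint8_t r = 255 - scaled;   // close -> red
    uint8_t g = 0;
    uint8_t b = scaled;         // far   -> blue
    uint8_t a = 255;            // opaque alpha in the top 8 bits
    return (uint32_t(a) << 24) | (uint32_t(r) << 16) | (uint32_t(g) << 8) | b;
}

// Convert a whole depth frame into an RGBA bitmap for display.
std::vector<uint32_t> colorizeDepthFrame(const std::vector<uint16_t>& depth) {
    std::vector<uint32_t> pixels(depth.size());
    for (size_t i = 0; i < depth.size(); ++i)
        pixels[i] = depthToPixel(depth[i]);
    return pixels;
}
```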
And I am very conscious of how much bandwidth I’m using; I only want about 15 fps at 50% compression on both displays, so that I’m still “in the green.”
Thank you guys so much for the quick replies!
I had one other question. I had heard that the Kinect has an accelerometer in it and I was wondering if anyone has tried to use an accelerometer to compute where their robot is on the field. It may be a fun idea to collaborate on, if no one has done it yet! If there’s enough interest, I’ll create another thread for this idea, just let me know!