Using FRC dashboard w/ camera during competition?

In the LabVIEW Robotics Programming Guide for FRC, on page 4-8, there is a note around the middle of the page that states:
“you can send image data to the host computer only during development, not during the FRC competition.”

Does that mean that during the competition we can’t have the dashboard running with streaming video? And if so, can we still view the camera somehow during competition?

So does that mean we cannot see the camera’s video image during the competition?

This has been clarified.

  1. It is not in the Manual that you can’t, therefore you can.
  2. The GDC in Update 5 clarified that there is a limit (imposed or not) on the competition connection.
  3. Said limit is effectively too slow for video and everything else together.
  4. Therefore, it is legal, but highly discouraged.

We can’t stream video back this year.
FIRST wants to monitor the bandwidth contention with the new WiFi system for this season.
We are only permitted to use the dashboard packets to send the DS data, and that limits us to fewer than 1000 bytes of data 50 times a second. That is not enough for an image anywhere close to real time or at full resolution.

From Team Update #5

General Notes
Communication via the Field Management System

Every time the Driver Station and ROBOT communicate, a 1024-byte packet is sent between them. 40 bytes are reserved for use by the Field Management System; the remaining 984 bytes are available for teams to fill with whatever user data they want.

Video is typically transmitted from the ROBOT using the software ports specifically set up to support video throughput. This year, the Field Management System will not pass data sent through that port during a match (to ensure adequate system performance during competition events, until the new system is better characterized in actual competition settings).

If a team really wants to transmit images from the camera back to the Driver Station during a competition, they can decompose the video frame and pass it as user data in the available 984 bytes per packet. However, the resulting throughput will likely yield a frame rate so slow that it is not particularly useful.

By passing the MJPEG images to the DS you can probably get around 15-24 fps at 160x120 resolution. This is based on JPEG images of about 1.8 KB after stripping out unnecessary header data. We are seeing 50 packets per second, and you have 984 user bytes per packet, for a net data rate of almost 400 kilobits per second.
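The arithmetic behind those figures checks out; here is the back-of-the-envelope version, using only the numbers quoted in the thread (50 packets/s, 984 user bytes/packet, ~1.8 KB frames):

```python
# Back-of-the-envelope check of the throughput numbers in the post.
packets_per_second = 50
user_bytes = 984

bytes_per_second = packets_per_second * user_bytes   # 49,200 B/s
kbps = bytes_per_second * 8 / 1000                   # ~394 kbit/s

frame_size = 1800                 # bytes, stripped-down 160x120 MJPEG frame
max_fps = bytes_per_second / frame_size              # ~27 fps theoretical ceiling
print(round(kbps, 1), round(max_fps, 1))
```

The ~27 fps figure is a ceiling before any protocol overhead, which is why 15-24 fps is the realistic range.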

There is a packet delay from the camera to the cRIO, processing in the cRIO, a packet delay from the cRIO to the DS, processing in the DS, a packet delay to your Dashboard program, and processing in your Dashboard program. I’m not sure I would pronounce this useless, because the total could be 100 ms or less (a 1/10th-second delay).

The dilemma (at least for us) is that we like 640x480 for tracking/targeting as long as we can keep the processor load under 50%, so there is a tradeoff. If 160x120 is satisfactory for your tracking/targeting (and using the standard software this may be the realistic limit anyway), then it may be feasible to have a 160x120 live video stream, which at 24 fps is not all that useless.
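To put numbers on that tradeoff: assuming JPEG size scales roughly with pixel count (a simplification; real compression ratios vary with scene content), the same link that carries 160x120 at ~27 fps would carry 640x480 at under 2 fps:

```python
# Rough sketch of the resolution tradeoff (assumes compressed frame
# size scales with pixel count -- a simplification).
bytes_per_second = 50 * 984          # DS user-data throughput, ~49.2 KB/s
small = 1800                         # ~1.8 KB per frame at 160x120
scale = (640 * 480) / (160 * 120)    # 16x the pixels
large = small * scale                # ~28.8 KB per 640x480 frame
print(bytes_per_second / small)      # ~27 fps at 160x120
print(bytes_per_second / large)      # ~1.7 fps at 640x480
```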

In any case, this kind of programming under FRC constraints is not for the faint of heart, although this year’s rules give us a bit more time for programming.

I don’t know, the idea of a driver taking his or her eyes off the field to watch a video twice the height of my Chiefdelphi avatar :yikes: still doesn’t sound terribly useful to me.

Now if you can project that size image onto a pair of goggles, that’d be cool. :cool:

Actually, a moving image of 160x120 is not that bad provided it is first scaled up to say 640x480 for display on the laptop. Unless the center of your visual field is on the display, your eye’s resolution is not all that good and your brain fills it in. Hard data like current targets can be overlaid on the video along with other telemetry. A 160x120 camera 50 feet away is sure a lot better than your vision, especially if there are objects in your way :wink:

Maybe next year we’ll have enough bandwidth for dual cameras and stereo goggles that could provide depth perception.

That would be way cool! :cool: