Questions about high-quality camera streams during the Sandstorm period

How would I process a USB camera stream in H.264 through a coprocessor such as a Jetson, Pi, or Arduino and stay under the FMS bandwidth limit?

If it goes through a coprocessor, would it go to the radio afterward and work like an IP camera, or be implemented similarly to the Axis camera?

I’ve found that on a Jetson TK1 (something we have lying around) you can use GStreamer to encode and send the output to an IP and port. Would this work and manage to stay under this year’s bandwidth limit of 4 Mbps?

Yes, you can do this. I assume there are parameters on whatever H.264 encoder you’re using that control the level of compression, etc., and hence the bandwidth. On the Raspberry Pi, if you use a CSI (ribbon-cable-connected) camera you can pipe raspivid into GStreamer and get H.264 compression in the GPU with near-zero CPU load and extremely low latency. And the really cool part is that raspivid has a -b option that lets you specify exactly the bit rate you want.
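
For example, here’s a minimal sketch of that setup. The resolution, frame rate, host, and port are placeholders you’d tune for your own network; -b 2000000 is 2,000,000 bits/s, i.e. 2 Mbps, half of a 4 Mbps cap:

    # On the Pi: hardware-encoded H.264 out of raspivid, piped into GStreamer
    # for RTP packetizing and UDP transport. Host and port are placeholders.
    raspivid -t 0 -w 640 -h 480 -fps 30 -b 2000000 -o - | \
      gst-launch-1.0 fdsrc ! h264parse ! \
        rtph264pay config-interval=1 pt=96 ! \
        udpsink host=10.TE.AM.5 port=5800

    # On the driver station: receive, depacketize, decode, display.
    gst-launch-1.0 udpsrc port=5800 \
        caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! \
      rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false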

You do have to be careful how you set up your GStreamer pipeline, as there are many combinations that take you out of the GPU and perform far worse.
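
As an illustration of a combination to avoid, the hypothetical pipeline below takes the already-encoded H.264, decodes it, and re-encodes it in software with x264enc, which burns CPU and adds latency for no benefit:

    # Anti-pattern: software decode + re-encode on the CPU. Don't do this;
    # pass the hardware-encoded stream straight through, as above.
    # (x264enc's bitrate is in kbit/s, so 2000 = 2 Mbps.)
    raspivid -t 0 -b 2000000 -o - | \
      gst-launch-1.0 fdsrc ! h264parse ! avdec_h264 ! videoconvert ! \
        x264enc tune=zerolatency bitrate=2000 ! \
        rtph264pay config-interval=1 pt=96 ! \
        udpsink host=10.TE.AM.5 port=5800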

We have a Jetson TK1, so it’s easiest to attempt to use just that right now. Would plugging a USB camera into the Jetson, capturing and encoding the frames into H.264, and sending them through the radio to be decoded and displayed on the DS manage to stay under 4 Mbps with the correct optimization?
We bought a USB fisheye camera, but the current resolution, frame rate, and compression artifacts make the stream horrendous; it’s hard to distinguish what’s what, and I’ve heard teams have managed relatively decent-looking live camera streams before.
Would the best way to encode be with CUDA, or is there a separate H.264 encoder? Or does the H.264 encoder simply go through the CUDA pipeline without having to specify it?

I did a bit of experimentation with GStreamer on the TK1 last year, but ended up going with the RPi3/CSI camera as the better setup for our purposes. My feeling is that the TK1 should be able to do what you want for a single USB camera, but I don’t recall many of the specifics. Take the following as hints, but test and verify.

  • Stick with Nvidia’s GStreamer builds. Their distributions contain CUDA code you don’t get if you build from public source. Writing CUDA code was out of scope for my experiments, but there’s a lot of it in Nvidia’s GStreamer, and elements that exploit the hardware perform a lot better than generic CPU code (see the sketch after this list).
  • Look closely at Nvidia’s sample code. I don’t have the link, but I know there’s a page that gets pretty close to exactly what I think you’re trying to do.
  • Look at ALL the options for H.264 encoding elements. I believe there are several, and the choices are different in GStreamer 1.0 vs 0.10 - don’t assume the former is better. There are huge differences among them.
  • Take the advice of more experienced TK1/GStreamer users over mine. I just looked at this briefly, and it’s been a while.
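
To make the hardware-path advice concrete, here’s the rough shape of a GStreamer 1.0 sender pipeline on the TK1 using Nvidia’s elements. Treat the element and property names here (nvvidconv, omxh264enc, bitrate in bits/s) as assumptions to verify against your L4T release, since they’ve varied between versions:

    # USB camera -> Nvidia hardware H.264 encoder -> RTP over UDP.
    # Sketch only: verify element names against your L4T release.
    # videoconvert is there because many USB cams output YUY2, which
    # nvvidconv may not accept directly; it costs some CPU.
    gst-launch-1.0 v4l2src device=/dev/video0 ! \
      'video/x-raw, width=640, height=480, framerate=30/1' ! \
      videoconvert ! nvvidconv ! 'video/x-raw(memory:NVMM), format=I420' ! \
      omxh264enc bitrate=2000000 ! h264parse ! \
      rtph264pay config-interval=1 pt=96 ! \
      udpsink host=10.TE.AM.5 port=5800

The receiving side would be the same udpsrc / rtph264depay / avdec_h264 pipeline as in the Raspberry Pi example above.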