Streaming USB cameras

Well, the title is self-explanatory. I had a pretty good grasp on how NIVision worked, but for the life of me, I cannot figure out how to initialize multiple cameras, let them chill, and only stream to the DS from one. Or is it possible to just plug all of the cameras into a Raspberry Pi and stream the cameras to the DS from the RPi? We're using Java.

I guess what I'm looking for is an ELI5 type deal, or a place to learn. Thanks for any help!

I'm not sure how he did it since I'm not a programmer, but we also use Java and were able to connect three cameras to a Pi.

We use C++, and we just plugged them into the roboRIO and called CameraServer's StartAutomaticCapture(), and it started a camera server stream to the dashboard.

I recommend starting by following this tutorial posted to WPILib’s Screen Steps Live: http://wpilib.screenstepslive.com/s/4485/m/24194/l/687863-off-board-vision-processing-in-java

Once you have one camera working, it’s straightforward to add more by calling startAutomaticCapture(1) etc.
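
For reference, here's a minimal Java sketch of that (package paths and the getInstance()-style API vary a bit between WPILib versions, so treat this as the general shape rather than exact boilerplate):

    import edu.wpi.first.wpilibj.CameraServer;
    import edu.wpi.first.wpilibj.IterativeRobot;

    public class Robot extends IterativeRobot {
      @Override
      public void robotInit() {
        // Each call opens a USB camera and spins up its own MJPEG
        // stream that the dashboard can view.
        CameraServer.getInstance().startAutomaticCapture();   // first camera, /dev/video0
        CameraServer.getInstance().startAutomaticCapture(1);  // second camera, /dev/video1
      }
    }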

Are you confused about how to set up the cameras on the RasPi side, or how to connect on the DS side?

OK, so it is that simple. Is there any good way to combat latency? Or is that totally dependent on the FMS?

While latency is mainly dependent on the field Wi-Fi, you can optimize the stream to lower bandwidth usage. Reducing the streamed camera resolution will help, especially if you are not using the full stream on the DS (i.e., displaying it in a smaller screen area than you are streaming).
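
In Java that's just a couple of calls on the camera object; the numbers below are only examples to tune against your dashboard size:

    // In robotInit(), after starting the capture:
    UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    camera.setResolution(320, 240); // don't stream more pixels than the DS displays
    camera.setFPS(15);              // a lower frame rate also cuts bandwidth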

Bummer. Alright, I was using the camera server system and getting crazy lag at events. Oh well, hopefully they switch to a better radio system next year.

Often the major factor in visual lag/latency is not the communications system but rather the camera settings. Events have different lighting conditions than your lab, and most inexpensive USB cameras seem to prioritize image quality over FPS/latency. Make sure you turn off any “automatic” camera settings for things like exposure and brightness and instead set them to fixed values; this prevents the camera from slowing down the image feed to “brighten up” the image, and in general minimizes the amount of processing the camera firmware needs to do on each frame. Also try various resolutions: generally latency will be lower and FPS will be higher at lower resolutions, but this is not always the case (the “native” and thus best-performing resolution may not be the lowest resolution offered).
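
In Java, locking those settings looks something like this (the values are placeholders; every camera's scale behaves a little differently, so tune them by eye):

    // In robotInit(), after starting the capture:
    UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    // Disable the "automatic" firmware behaviors by fixing each setting.
    camera.setExposureManual(25);       // fixed exposure, as a 0-100 percentage
    camera.setBrightness(50);           // fixed brightness, 0-100
    camera.setWhiteBalanceManual(4500); // fixed white balance (color temperature)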

Also, if it helps, you can pin down which device files (i.e., /dev/videoX) the cameras connect to.

To do this:

  1. Plug one camera in and run dmesg.
  2. Find the serial number. The line should look similar to this:
    [6.390272] usb 1-1.4: SerialNumber: A7C70B40
  3. cd /etc/udev/rules.d
  4. Create a file called 00-video.rules.
  5. The contents should be similar to this:
    SUBSYSTEM=="video4linux", KERNEL=="video[0-9]*", ATTRS{serial}=="A7C70B40", SYMLINK+="stackcam"
    SUBSYSTEM=="video4linux", KERNEL=="video[0-9]*", ATTRS{serial}=="59959A00", SYMLINK+="gearcam"

The ATTRS{serial} will be your camera's serial number, and the SYMLINK can be whatever you want to call the camera. Then you can access the camera via "/dev/stackcam", for example. Whenever you plug the camera in, it will automatically be accessible via the symlink you defined.
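
If the cameras are plugged into a Linux box running your robot code (the roboRIO runs Linux too), you can then open them by the symlink instead of by index. A sketch in Java, using the names from the rules above:

    import edu.wpi.cscore.UsbCamera;
    import edu.wpi.first.wpilibj.CameraServer;

    // Open the camera by its stable udev symlink rather than /dev/video0.
    UsbCamera stackCam = new UsbCamera("stackcam", "/dev/stackcam");
    CameraServer.getInstance().startAutomaticCapture(stackCam);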

Regards,
K Murphy

The other way you can keep cameras consistent is by using the “/dev/v4l/by-path/…” names, which are named based on the physical USB port the camera is plugged into.

One of our team's programmers wrote this to toggle between 2+ cameras. It's written in Java, but I'm sure you can convert it over to C++ if needed. Just plug the cameras directly into the roboRIO (a USB hub works fine too).
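
The attachment isn't reproduced here, but the usual shape of that toggle is: start both cameras, grab the sink the dashboard watches, and swap its source on a button press. A sketch (the joystick port and button numbers are just assumptions):

    import edu.wpi.cscore.UsbCamera;
    import edu.wpi.cscore.VideoSink;
    import edu.wpi.first.wpilibj.CameraServer;
    import edu.wpi.first.wpilibj.IterativeRobot;
    import edu.wpi.first.wpilibj.Joystick;

    public class Robot extends IterativeRobot {
      private UsbCamera frontCam;
      private UsbCamera rearCam;
      private VideoSink server;
      private final Joystick stick = new Joystick(0);

      @Override
      public void robotInit() {
        frontCam = CameraServer.getInstance().startAutomaticCapture(0);
        rearCam = CameraServer.getInstance().startAutomaticCapture(1);
        server = CameraServer.getInstance().getServer(); // the sink the DS stream uses
      }

      @Override
      public void teleopPeriodic() {
        // Button 1 shows the front camera, button 2 the rear camera.
        if (stick.getRawButton(1)) {
          server.setSource(frontCam);
        } else if (stick.getRawButton(2)) {
          server.setSource(rearCam);
        }
      }
    }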