Intel Realsense Depth Camera

Hi everyone,

My team recently purchased an Intel RealSense D435 depth camera. We thought it could give our drivers better depth perception at the far end of the field.

However, we are having trouble setting up the camera to send a feed to the driver station. We attempted to set it up through the roboRIO just like an ordinary USB camera; however, this did not work.

The 3D camera requires specific drivers that only run on Ubuntu Linux or Windows and only work with x86 processors. They also require a USB 3.0 connection to work. This immediately ruled out the roboRIO, as it has USB 2.0 ports and an ARM processor.

At first, we attempted to use an ODROID-XU4, since it has USB 3.0 and runs Ubuntu, by installing the drivers on it and sending the feed through a TCP server. However, this did not work, as the ODROID is also an ARM processor.
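
For reference, the relay we had in mind was along these lines - a rough sketch, assuming the librealsense Python bindings (pyrealsense2); the port choice and the length-prefixed raw-frame format are just illustrative, not a real protocol:

```python
import socket
import struct

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

# FRC allows team traffic on ports 5800-5810
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5800))
server.listen(1)
conn, _ = server.accept()

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = np.asanyarray(frames.get_depth_frame().get_data())
        payload = depth.tobytes()
        # Length-prefix each frame so the receiving side can reassemble it
        conn.sendall(struct.pack(">I", len(payload)) + payload)
finally:
    conn.close()
    pipeline.stop()
```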

After we did some more research, we found that a workaround was developed to make the camera work on a Jetson TX1 coprocessor.

We are willing to try this; however, before we put in all of this effort, we wanted to know if other teams had ever done something similar using a 3D camera. We wanted to know if we were missing any obvious solutions that could allow us to stream this feed to the driver station, and we are open to any suggestions you may have.

Thanks,
Nathaniel

It should be usable with an ARM processor:

  1. ::rtm::
  2. ???
  3. Profit?

It isn’t restricted to just x86 CPUs, but you are right about the USB 3.0 requirement. The older RealSense cameras can drop back to USB 2.0, but it ain’t pretty. The new ones, like the D435, don’t do this at all, from my experiments with them.
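
If you want to check what link the camera actually negotiated, librealsense exposes it as device info. A quick sketch, assuming the pyrealsense2 bindings and a librealsense version recent enough to report the USB type descriptor:

```python
import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    name = dev.get_info(rs.camera_info.name)
    # Reports the negotiated link, e.g. "3.2" on USB 3 or "2.1" on a fallback
    usb = dev.get_info(rs.camera_info.usb_type_descriptor)
    print(name, "on USB", usb)
```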

I’m not sure what steps you ran through here… if you didn’t build the library from source for the target platform, it definitely won’t work.

It will also work on the TX2 and just about ANY aarch64 (64-bit ARM) platform IF you compile the code for it from source. I’ve got a Scalys Grapeboard sitting next to me that I’ve compiled the RealSense drivers for. It runs. Not smoothly (single core), but it compiles and runs.
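
Once a from-source build finishes (and assuming it also produced the pyrealsense2 bindings), a smoke test as simple as this will tell you whether the platform actually works:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()  # default config enables the depth stream
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    w, h = depth.get_width(), depth.get_height()
    # Distance in meters at the center pixel
    print("center distance:", depth.get_distance(w // 2, h // 2), "m")
finally:
    pipeline.stop()
```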

Yes. We’ve been using the Stereolabs ZED camera since 2015/2016 and will continue to do so. Stereo vision is awesome!

That being said, there is a HUGE difference between them. The RealSense cameras use IR emitters and IR cameras to detect depth - similar to how the Kinect devices work (and a bunch of others). The Stereolabs ZED doesn’t use IR; instead it uses two ordinary cameras, and the depth mapping happens in software, using NVIDIA hardware to do the computation.
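
For comparison, pulling a depth reading out of the ZED looks roughly like this - a sketch assuming the ZED SDK’s Python bindings (pyzed, 3.x-era naming):

```python
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.PERFORMANCE  # depth computed on the GPU
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("failed to open ZED")

depth = sl.Mat()
if zed.grab(sl.RuntimeParameters()) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # float32 depth in meters
    err, dist = depth.get_value(depth.get_width() // 2, depth.get_height() // 2)
    print("center distance:", dist, "m")
zed.close()
```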

There are some other issues with the RealSense related to IR. Specifically, the FRC field is littered with IR-reflective and IR-transparent surfaces. That doesn’t mean the RealSense can’t be used, but I think filtering the noise out of the data is going to be tricky for any team trying to use it. There are probably some ways to limit its scope and get usable data out of it to find a nearby game piece based on shape.
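
One way to limit its scope, as a sketch: range-gate the depth image so anything outside a near window is discarded before you do any shape detection. The thresholds here are made-up examples, assuming pyrealsense2 and numpy:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
profile = pipeline.start(config)
scale = profile.get_device().first_depth_sensor().get_depth_scale()

frames = pipeline.wait_for_frames()
depth_m = np.asanyarray(frames.get_depth_frame().get_data()) * scale

# Keep only returns between 0.3 m and 1.5 m; zero means "no data" on the
# RealSense, which is what hostile IR surfaces tend to produce anyway.
mask = (depth_m > 0.3) & (depth_m < 1.5)
gated = np.where(mask, depth_m, 0.0)
pipeline.stop()
```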