What camera(s) do you use?

We use the typical camera from AndyMark for vision processing.

However, our programmer wants an upgrade, so I was considering…



And then I thought: why not just ask the whopping 43k population on Chief Delphi!

We use the PlayStation 3 Eye. It can do 60 FPS at 640x480 or up to 160 FPS at 320x240. Its issues are that it’s a little bulky and hard to mount, it doesn’t work with the Rio (only use it if you’re using a coprocessor), and you have to compile a custom kernel module for the Jetson if you’re going to use it with one (although there are scripts for that). However, it’s only $5 or so, and it’s low-latency and configurable (it was originally intended for computer vision, after all).
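To make the trade-off concrete, here’s a small sketch that picks a PS3 Eye mode for a target frame rate. The two modes are the ones mentioned above; whether your driver exposes exactly these is an assumption — verify with `v4l2-ctl --list-formats-ext`.

```python
# PS3 Eye modes mentioned above: (width, height, max_fps).
# Exact driver-exposed modes are an assumption -- check your system.
PS3_EYE_MODES = [(640, 480, 60), (320, 240, 160)]

def pick_mode(target_fps):
    """Return the highest-resolution mode that can sustain target_fps."""
    candidates = [m for m in PS3_EYE_MODES if m[2] >= target_fps]
    if not candidates:
        raise ValueError("no PS3 Eye mode reaches %d FPS" % target_fps)
    return max(candidates, key=lambda m: m[0] * m[1])
```

So a 60 FPS target gets you the full 640x480, but anything faster forces the drop to 320x240.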

We used an Axis Camera. It’s a pricey option, but it lets you work without a co-processor and stream to the driver station with low latency.

That said, we’re considering something different from last year. Either a "MIPI" camera module with a Jetson, or a complete Android phone like 254 et al.

Use the PS3 Eye for low-resolution, high-frame-rate applications.

Use the Logitech C920 for high-resolution, lower-frame-rate applications.

The Logitech C930 has a wider field of view.

I would avoid the Microsoft camera that previously found its way into FRC if you’re on Linux, due to driver issues that may bite you when you try to adjust the white balance and exposure, which is common for computer vision. These Microsoft cameras are otherwise fine for webcam-style use.


We struggled a bit with cameras this year, especially the USB ones. Our final solution was to use old Axis cameras. The nice thing about them is this: if you cannot get the dashboard stream to work, you can navigate to the camera’s web page and watch the stream there.

For tracking, we played with a Pixy camera and were highly impressed. The no-nonsense analog feedback makes it nearly trivial to hook up. We’re playing with the LabVIEW feedback and it works as well, but we haven’t had much time with it.

What are your goals in upgrading your camera? Typically, to avoid burning too much CPU power on vision, you don’t want to process high-resolution images, so upgrading the camera to get “better resolution” is the wrong goal. Going for different connectivity (i.e., USB vs. IP) could be a legitimate reason. Another legitimate reason to “upgrade camera” is to get a smarter camera that does some vision processing for you. The Stereolabs camera may fall into this category; the Pixy camera is another example of a smart camera. But these smart cameras give you richer information, requiring you to understand and process that information to make it useful. So are you prepared to do the extra work to deal with the smart info?
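The resolution point is easy to quantify: for a whole-image processing pass, per-frame work scales with pixel count, so doubling each dimension quadruples the CPU load.

```python
# Rough pixel-budget arithmetic behind "better resolution is the wrong goal":
# work for a whole-image pass scales with width * height * fps.

def pixels_per_second(width, height, fps):
    return width * height * fps

low = pixels_per_second(320, 240, 30)    # 2,304,000 px/s
high = pixels_per_second(640, 480, 30)   # 9,216,000 px/s
ratio = high / low                       # 4x the work for the same fps
```

Same frame rate, double the linear resolution, four times the pixels to crunch.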

1836 used to use the Axis camera, and for 2017 we used the Microsoft LifeCam 3000. It was great and worked well, but we are most likely going to switch to the Pixy for its simplicity to set up and implement.

This year we used the Microsoft LifeCam HD3000 with the NVIDIA Jetson TX1 as our coprocessor; it worked well for us.

This camera looks pretty darn nifty. Unfortunately, I think the cost exceeds the COTS limit… at least it did in 2017.

We used the Microsoft LifeCam. It’s pretty cheap and suits our purposes.

Microsoft Lifecam will be plenty fine for vision processing.

However, I would HIGHLY recommend using Video4Linux with it so you can adjust exposure and such.

The only issue is mounting it. We used a 3D-printed mount: you remove the flexible band from the back and attach the 3D-printed part in its place. A quick search should turn up the mount, along with instructions for removing the flexible band.
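For the Video4Linux suggestion above, here’s a hedged sketch that drives `v4l2-ctl` from Python to lock exposure. The control names (`exposure_auto`, `exposure_absolute`) vary by kernel version and camera — run `v4l2-ctl -d /dev/video0 -l` to see what yours actually exposes, and treat the values here as placeholders.

```python
# Sketch: setting V4L2 controls via the v4l2-ctl CLI.
# Control names and values below are assumptions -- list yours with
# `v4l2-ctl -d /dev/video0 -l` before using them.
import subprocess

def build_v4l2_cmd(device, controls):
    """Build one v4l2-ctl invocation that sets several controls."""
    cmd = ["v4l2-ctl", "-d", device]
    for name, value in controls.items():
        cmd += ["-c", "%s=%s" % (name, value)]
    return cmd

def set_controls(device, controls):
    subprocess.run(build_v4l2_cmd(device, controls), check=True)

# Example (manual exposure, fixed low value for bright retroreflective targets):
# set_controls("/dev/video0", {"exposure_auto": 1, "exposure_absolute": 20})
```

Setting everything in one invocation keeps the camera from re-auto-adjusting between separate calls.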

That’s where you would be wrong:

And going back further in fact:

Configuring the camera is important. We don’t even bother removing the flexible back piece. Our team just designed the mounts to accommodate.

RED WEAPON 8K With Helium Sensor

This year we used a Pixy camera for vision. It worked, though at times it really strained the roboRIO.

Has anyone tried the Raspberry Pi camera with the new Pi? It has some throughput advantages over USB cameras.

We used USB fisheye cameras for driver vision this year.

I’m not sure what you mean by “new Pi” or by throughput, but the latest version of the camera module uses a Sony sensor and is nice. It should work well for FRC teams. It didn’t make it onto our robot, but we did play around with it this past year.

The Raspberry Pi camera, from my understanding, connects directly to the GPU, which significantly reduces CPU loading. By new Pi, I’m referring to the Pi 3 vs Pi 2. Currently, I have the older 5 MP camera, but I’d like to try the new Sony one.

I’m confused. The Pixy camera handles all the calculations with its own processor. The only thing it outputs is an analog voltage for left/right targeting, or x/y size data if you’re using the SPI output. The camera outputs updated info at 50 Hz. That shouldn’t strain anything on the roboRIO.
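To show how little work that analog output leaves for the roboRIO, here’s a sketch of converting the voltage into a signed steering error. The 0–3.3 V range and the linear left-to-right mapping are assumptions — check the Pixy documentation for your firmware’s actual output range.

```python
# Sketch: Pixy analog x-position voltage -> signed steering error.
# Assumed: 0 V = target at far left, 3.3 V = far right, linear between.
# Verify the real range in the Pixy docs before tuning against it.
V_MIN, V_MAX = 0.0, 3.3

def steering_error(voltage):
    """Map [V_MIN, V_MAX] to [-1.0, +1.0]; 0.0 means target centered."""
    v = min(max(voltage, V_MIN), V_MAX)  # clamp noisy readings
    return 2.0 * (v - V_MIN) / (V_MAX - V_MIN) - 1.0
```

Feed that error into a simple proportional turn command and the Rio is doing one multiply per loop, which is the whole point of a smart camera.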