Cameras (+IMUs)

Hello! I’m thinking of building a 4-camera robot.

I’d like one camera mounted that can rotate, for target acquisition, and then three hard-mounted cameras, one facing forward, one left, and one right. First of all, is such a thing possible? I did not see rules against 4 cameras, but I wonder how I could hook them up. Perhaps one via Ethernet, and 2 via USB? That’s only 3.

I am considering just doing 2 cameras, and rotating the other one to look forward, left, and right. But I would really like to know your thoughts on the possibilities of the 4 camera configuration.

Also, does anyone know how I could program multiple feeds to the Dashboard? I would only want one displayed, but I’d like to switch between them with buttons.

(BTW, I posted about IMUs over in Electrical, and though I’ve gotten some responses already (thanks guys!), that seems like a foolish place to post it when there’s this Control Systems subforum. Sorry, I had not looked around enough. I think it’d be beneficial to link to it here.)

You could do multiple feeds using a modified version of http://www.chiefdelphi.com/forums/showthread.php?threadid=141904

While it is technically possible to have four cameras on the robot, I’m not sure it is the best or simplest approach. I’ll give you some technical input on doing four cameras, but I’m also going to ask some questions and make some suggestions.

If you want four cameras on the robot, you will need a hub, either USB or enet. These days it will probably be an enet switch instead of a hub, but for your purposes they are the same anyway. These will require power by the way, and you want that power to be protected so they don’t drop out. You will also need to identify the four devices by address, and this will vary depending on their type and the libraries being used. For LabVIEW and USB, it should be as simple as using “USB 0”, “USB 1”, etc. I don’t think the other languages make the lookup for you, so instead it will be “cam0” or something similar, but those jump around more in my experience.

For enet cameras, they will have unique IP addresses, either statically assigned or dynamically assigned.

The important thing is to keep the left/right/front/back associations correct across reboots, or at least easily configurable or interpretable by the drivers.
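Since the OP is a LabVIEW user, here is that idea sketched in plain Python rather than as a VI: key each role off something that survives re-enumeration, such as the camera’s serial number, instead of whatever order the devices happen to come up in. The serial numbers and the `assign_roles` helper are made up for illustration, not any WPILib or IMAQdx API.

```python
# Map each physical camera to a role by serial number, so the
# front/left/right/turret assignment survives re-enumeration across
# reboots. The serial numbers below are hypothetical placeholders.
ROLE_BY_SERIAL = {
    "A1B2C3": "front",
    "D4E5F6": "left",
    "G7H8I9": "right",
    "J0K1L2": "turret",
}

def assign_roles(detected):
    """detected: list of (device_path, serial) tuples from enumeration.

    Returns a dict of role -> device path. Cameras with unknown
    serials are simply skipped, so an unexpected device can't steal
    a role from a known one."""
    roles = {}
    for device, serial in detected:
        role = ROLE_BY_SERIAL.get(serial)
        if role is not None:
            roles[role] = device
    return roles
```

Even if the OS swaps `/dev/video0` and `/dev/video1` between boots, `assign_roles` still hands the driver the same front/left/right picture.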

The next consideration is that it will be rather difficult to stream all four cameras back to the dashboard at once. You can do this if you drop the resolution and frame rate, but you’ll end up with 1/4 the quality on four images. The other way of doing this is to only display and transmit one camera at a time and have a switch button for the drivers.
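The switch-button approach boils down to a tiny state machine: remember which feed is selected and advance only on the rising edge of the button, so holding it down doesn’t spin through every camera. A minimal sketch in Python (the feed names and class are illustrative, not part of any FRC library):

```python
class FeedSelector:
    """Track which single camera feed is active; cycle on button presses."""

    def __init__(self, feeds):
        self.feeds = feeds          # e.g. ["front", "left", "right"]
        self.index = 0              # start on the first feed
        self._last_pressed = False  # for rising-edge detection

    def update(self, pressed):
        """Call once per loop with the button state; returns active feed.

        Advances only on the rising edge (not-pressed -> pressed), so a
        held button selects exactly one new feed per press."""
        if pressed and not self._last_pressed:
            self.index = (self.index + 1) % len(self.feeds)
        self._last_pressed = pressed
        return self.feeds[self.index]
```

Each loop iteration, feed only the selected camera’s image to the dashboard and leave the others idle, which keeps the bandwidth cost to that of a single stream.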

The WPILib and sample programs aren’t generally doing more than one camera at a time, so that also means that the beta teams and developers aren’t testing multiple cameras as well as a single camera. You will definitely need to modify the template code and may need to modify the libraries to get this to work the way you want.

As for my questions:
What are the four cameras for? Are they for humans to look at, kinda like a security guard station? Can the drivers really use this? Will they really use it?

Are the cameras to measure things? You may want to consider other sensors that can potentially make the measurement and potentially make it better. Remember – computer vision != human vision.

Also consider whether someone can help who has done this before. Four cameras may turn out to be four times harder than one. Are there better things to spend time on to improve the robot and learn cool stuff?

Greg McKaskle

Firstly, thanks for your long and detailed response. And thanks for the link gbear605.

Second, even before I got to your questions I began to be a little more wary of the possibility of implementing 4 cameras…I hadn’t thought of having to mess with some of those code libraries. I’m a LabVIEW user by the way.

Your questions:

One camera will be attached to a shooter on a rotating mount, and will be tasked with tracking the high goals and lining up shots. If it works properly during practice, odds are we won’t ever need to view its feed. The other three cameras are for driver awareness. We’re a little concerned about being able to see our short robot across two lines of defenses and properly manipulate it. The cameras would be able to provide the driver with front and side views. I’d planned on only streaming one at a time, and switching between them with the left and right bumpers.

The others seem more rhetorical, so I guess I won’t really answer them here…thank you for bringing them up though. They are certainly things I should think about.

I’m wondering about using one camera, mounted on top of the robot, on a pan/tilt servo mount. I could program buttons to rotate to preprogrammed positions, instead of using a joystick. That way the driver could take quick glances to the left and right if necessary. That seems to be less work altogether, and provide an almost equal amount of functionality to the driver, which as you suggest, he/she may not need.
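The “buttons jump to preprogrammed positions” idea is simple to express: each button maps to a preset pan angle, and if nothing is pressed the servo holds where it is. A rough Python sketch (the preset names, angles, and helper are all hypothetical; in LabVIEW this would just be a case structure feeding the servo setpoint):

```python
# Hypothetical preset pan angles in degrees; tune these for the
# actual servo mount and camera field of view.
PRESETS = {
    "look_left": -90.0,
    "look_forward": 0.0,
    "look_right": 90.0,
}

def preset_angle(buttons, current):
    """Return the commanded pan angle for this loop iteration.

    buttons: dict of preset name -> bool (pressed this loop).
    current: the angle currently commanded.
    If no preset button is pressed, hold the current angle."""
    for name, pressed in buttons.items():
        if pressed and name in PRESETS:
            return PRESETS[name]
    return current
```

With this scheme the driver taps a button for a quick glance left or right and the camera snaps back to forward on another tap, with no joystick juggling.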

I actually think we will see more cameras on robots this year than in most years because the field is rather congested. Both USB and enet cameras are supported for this, but if you want it to be easy, I think it is worth the extra money to buy an Axis camera. They are security IP cameras that are specially designed for this, and if you start looking around, you will find them in the ceilings of all sorts of places. Many will be bigger and in a protective housing, but the small ones are pretty easy to find too.

Doing an Axis camera for the DB for the drivers, and a camera on the robot for automation is exactly the direction our team is heading – I think.

Greg McKaskle

I’m currently trying to convince my team that we should go that route. I don’t think there is a significant enough loss in time with the quick turning of a servo mounted camera to justify several more cameras for the drivers.

Hi Greg. Hopefully I’m not going to be accused of hijacking this thread by asking this question. Can the roboRIO (with LabVIEW vision tools) support other USB cameras besides the Microsoft Lifecam? I really like the Lifecam for the Dashboard view and/or targeting but we need a wider field of view this year. There are other cameras with wide-angle capabilities but I’d hate to buy one if the roboRIO can’t support it.

You have root access to the RoboRIO. If you can find a linux driver for it, you can run it.

The roboRIO installation image supports a large variety of USB webcams via UVC.

I have seen cameras that do not work, but most do. NI has a number of vision acquisition libraries, but the one we distribute with roboRIO is called IMAQdx, and it covers gig ethernet, IP such as Axis, and it supports webcams by wrapping drivers already on the OS such as UVC on linux. Other drivers exist for CameraLink and analog decoder boards.

Greg McKaskle

Greg, Another question that I think you could answer.
Suppose we have two USB webcams, one for RoboRIO targeting and another one to provide a wide-field-of-view video feed to the DB. Is this a feasible setup? Can the robot network support and differentiate between two USB webcams?
If not then we may need to think about using one USB camera for targeting and an Ethernet IP camera for the DB.
Thanks,
Dave Tanguay

I am going to forward GeeTwo’s good suggestion of using mirrors from another topic.

I’ve done this trick before with 8 USB cameras (not streaming at the same time) for an experiment with Ubuntu and it can be interesting. Maybe a little too interesting for the maximum size of an FRC robot.

It is possible to hook a really silly number of USB cameras to a PC running Ubuntu (and most likely any modern Linux kernel) if you are willing to get tricky, as long as you don’t want to receive video from them all at the same time. When you reach the limits of the underlying software, start using tri-state buffers or relays.

Suppose we have two USB webcams

It can work, but logistical things can get in the way. For LabVIEW, the USB cameras are enumerated on the robot and sorted using VID/PID and serial number to try and get them to stay in the same order. It is silly, but it may be easier to predict if you use two different types of webcams. Once sorted, they start at USB 0 and go up from there.
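That enumeration idea can be sketched in plain Python. The tuple sort key below is my reading of what Greg describes (sort by VID, PID, then serial, then hand out “USB 0”, “USB 1”, …), not the actual LabVIEW implementation, and the vendor/product IDs are just example values:

```python
def enumerate_usb(cameras):
    """cameras: list of dicts with 'vid', 'pid', and 'serial' keys.

    Sort by (VID, PID, serial) so the same physical camera tends to
    land at the same index regardless of the order the OS reports
    them in, then name the slots "USB 0", "USB 1", ..."""
    ordered = sorted(cameras, key=lambda c: (c["vid"], c["pid"], c["serial"]))
    return {"USB {}".format(i): cam for i, cam in enumerate(ordered)}
```

This also shows why two different camera models are easier to predict: their distinct VID/PID pairs give them a fixed relative order, whereas two identical cameras are ordered only by serial number, which you have to look up.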

If you save a copy of the Vision Processing VI to a new name and make one for each camera, that is one approach. Or you can make another loop inside of the VI.

Our team is using one camera for driver and one for processing. I did the two loop approach. We also decided to do an Axis for driver and a USB for targeting. The Axis has better flexibility for compression and streaming and doesn’t need CPU resources while the USB is nice for the targeting cam.

I’m pretty sure the default dashboard points to USB 0. So if you want it to point to the other USB cam, just build from a template and change that one address. Similarly, if you choose to go with an Axis instead, change the address. Keep in mind that the USB HW option gets a compressed image from the camera, meaning the roboRIO only needs to copy the data and not run the jpeg compression like it will if you use USB SW.

If you have issues or questions, just ask.

Greg McKaskle

Thanks for this Greg. You’ve cleared a few things up. After digging into the vision processing VI I now see how the image gets to the DB. Looks like most of the work routing the DB image is done in the “CameraBackground Loop” VI. I’ll let you know if we have difficulties working with this. Right now I have a lot to learn about video image routing.
Dave Tanguay

Since you’re likely to need it…

Post from last year on switching between two cameras

This year’s post on transmitting two cameras simultaneously

Big wet kiss for this link :smiley:
Thanks.