Image Processing with 2 cameras

#C++

Hey all,

We are trying to implement a target detection system by using 2 cameras.
Yet we are having trouble saving images from 2 separate AxisCamera instances, because WPILib only allows us a reference to one of the 2 cameras: when we try to get a reference to the 2nd camera, it simply returns a reference to the first. This is most likely because of the singleton in the AxisCamera class.
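The failure mode we're seeing looks roughly like this (a paraphrased sketch of a GetInstance-style singleton, not the actual WPILib source):

    // Paraphrased singleton sketch (assumed; not the actual WPILib source).
    // The second call ignores its argument and hands back the object
    // created by the first call.
    #include <cstdio>

    class Camera {
    public:
        static Camera& GetInstance(const char* ip) {
            static Camera instance(ip);  // constructed once, on the first call
            return instance;             // every later call returns the same object
        }
    private:
        explicit Camera(const char* ip) { std::printf("constructed for %s\n", ip); }
    };

    int main() {
        Camera& a = Camera::GetInstance("10.1.2.11");
        Camera& b = Camera::GetInstance("10.1.2.12");  // same object as 'a'
        std::printf("same instance: %s\n", (&a == &b) ? "yes" : "no");
    }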

Did anyone encounter a similar problem? If so, what are the alternative solutions you have tried?

Thanks.

Ken, and Saul

The default version of WPILib doesn’t seem to support this. I would suggest either re-compiling WPILib and making the constructor public or making a duplicate camera class by copying the existing one with a new name, say DuplicateAxisCamera.
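If you go either route, usage would look something like this (purely hypothetical fragment: the class name DuplicateAxisCamera is whatever you call your copy, and the constructor taking the camera's IP is an assumption):

    // Hypothetical usage after the workaround (names and signature assumed):
    // either recompile WPILib with AxisCamera's constructor made public...
    AxisCamera frontCamera("10.1.2.11");
    // ...or copy the class under a new name so each copy keeps its own
    // static singleton state.
    DuplicateAxisCamera rearCamera("10.1.2.12");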

Either of these methods would work; however, if you begin trying to process multiple camera feeds on the cRIO, I suspect you will run into its lack of raw processing power.

If you feel gutsy, perhaps try moving the image processing off-board onto an SBC (single-board computer). Unfortunately, I can’t suggest any particular system (I’m playing with CV on DSLR images taken at 3 Hz, so I’m working with a whole different dataset), but teams have tried using Arduino boards before. Another option is to do the processing off the robot, on the GS: access the data stream from the cameras on the GS, do your analysis there, then toss the data back to the cRIO over a serial port, which I believe is possible with FRC software.

You may have better luck using a coprocessor and OpenCV.

There are numerous threads on coprocessors - the Freescale i.MX6, BeagleBone, Raspberry Pi, and Mini-ITX PCs, just to name a few.

We’re looking at doing the same thing. We have gotten it to work with a Kinect and a cheap store-bought webcam, though that was relatively easy because grabbing an image from each camera required different functions. I’d suggest figuring out which COM port each camera is plugged into, then going from there.
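In OpenCV terms, the two-capture loop is a few lines; a minimal sketch (the device indices 0 and 1 are assumptions, and a Kinect really needs its own driver/API rather than plain VideoCapture):

    // Minimal OpenCV sketch: grab frames from two cameras in one loop.
    // Device indices are assumptions; check how your cameras enumerate.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture camA(0);  // cheap USB webcam
        cv::VideoCapture camB(1);  // second device (a Kinect needs its own API)
        if (!camA.isOpened() || !camB.isOpened()) return 1;

        cv::Mat frameA, frameB;
        while (true) {
            camA >> frameA;  // grab one frame from each device per pass
            camB >> frameB;
            if (frameA.empty() || frameB.empty()) break;
            // ...run your detection on frameA / frameB here...
            cv::imshow("camera A", frameA);
            cv::imshow("camera B", frameB);
            if (cv::waitKey(1) == 27) break;  // Esc to quit
        }
    }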

If you are using an onboard processor, as I have been privileged with, you could just run two programs and have them talk to each other, or have them both send what they get to the cRIO. That is what we were doing when we thought we’d have weight for picking up frisbees: we had 2 cameras, a Kinect to track the alliance wall and a cheap webcam to track frisbees on the ground. Since (in OpenCV, that is) getting images from those cameras requires different functions, it can be done. I’ve done stuff with the depth data (path planning) while also tracking the alliance wall, so it works.
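For the “send what they get to the cRIO” part, a minimal sketch, assuming plain UDP over the robot network (the address, port, and packet layout here are all made up):

    // Minimal POSIX UDP sketch for shipping vision results off a coprocessor.
    // Destination IP, port, and packet format are assumptions.
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        int sock = socket(AF_INET, SOCK_DGRAM, 0);
        if (sock < 0) return 1;

        sockaddr_in dest{};
        dest.sin_family = AF_INET;
        dest.sin_port = htons(1130);                     // made-up port
        inet_pton(AF_INET, "10.1.2.2", &dest.sin_addr);  // made-up cRIO address

        // Made-up packet: target bearing (degrees) and distance (inches).
        char packet[64];
        std::snprintf(packet, sizeof(packet), "bearing=%.1f dist=%.1f", -3.5, 142.0);
        sendto(sock, packet, std::strlen(packet), 0,
               reinterpret_cast<sockaddr*>(&dest), sizeof(dest));
        close(sock);
    }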

I don’t really see how WPILib would be able to handle a task like this, however. And onboard processors are rather cheap. If you do want to get one, I suggest an ODROID product, not the Pi. It is SO much better. Anyway, why are you using 2 cameras to track? Are you doing 3D reconstruction? XD jk, I’d love to learn about the system you’re thinking of; it’s very unique. Note, unique != bad.

Unless you are talking about the Due, don’t all the Arduino boards have slow CPUs unsuitable for image processing?

Our team had massive success with our vision system this year. We just ran a LabVIEW program on the driver station that grabbed images straight from the camera. We limited the framerate to 20 fps (we could actually get higher), and the laptop (Core 2 Duo) stayed below 40% CPU usage. This works with two cameras too (we played around with the new and old models of the Axis cams at the same time).
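If you want to replicate this outside LabVIEW, the same grab-straight-from-the-camera idea works anywhere you can read the cameras’ MJPEG streams; a rough OpenCV sketch (the IPs and the stock Axis stream path are assumptions, so check your camera config):

    // Sketch: read two Axis MJPEG streams on the driver station with OpenCV.
    // IPs and the /mjpg/video.mjpg path are assumptions for your setup.
    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture oldCam("http://10.1.2.11/mjpg/video.mjpg");
        cv::VideoCapture newCam("http://10.1.2.12/mjpg/video.mjpg");
        if (!oldCam.isOpened() || !newCam.isOpened()) return 1;

        cv::Mat a, b;
        int frame = 0;
        while (oldCam.read(a) && newCam.read(b)) {
            // Cheap way to cap CPU load: only process every other frame.
            if (frame++ % 2 == 0) {
                // ...process both frames here...
            }
            if (cv::waitKey(1) == 27) break;
        }
    }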

However, I don’t really see why you’d need 2 cameras on a competition robot, unless you have one pointing forward and one back.

You’d have two cameras for stereo vision. This is very useful for aiming.
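For what it’s worth, the textbook relationship behind that: with a calibrated, rectified pair, depth falls straight out of the disparity between the two images. A tiny sketch (the focal length, baseline, and disparity numbers are made up for illustration):

    // Rectified-stereo depth: Z = f * B / d, where f is the focal length in
    // pixels, B the baseline between the cameras, and d the disparity in
    // pixels. All numbers below are made up.
    #include <cstdio>

    int main() {
        double f = 700.0;  // focal length in pixels (from calibration)
        double B = 0.20;   // baseline in meters (distance between cameras)
        double d = 14.0;   // disparity of a matched target feature, in pixels
        double Z = f * B / d;
        std::printf("target depth: %.2f m\n", Z);  // prints 10.00 m
    }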

We had a system for Rebound Rumble that allowed us to aim effectively by indicating which direction the operator needed to adjust the turret. Although we never had auto-aim in competition, it did function in practice and worked quite well. We later discovered we had been using a blocked port!

I’m not sure if the Arduino Due has enough processing power to handle image processing at the rates we would need, but the CMUCam5 (Pixy) is using an ARM Cortex M4 with Cortex M0 as a coprocessor (LPC4300). Definitely something to look into.

You could derive your location on the playing field with 2 cameras. It’s a “cooperative” process (in that you know the dimensions of goals, etc.)

While that is true, it would be much easier to solve for the pose, because you do know the dimensions of the target. And in Rebound Rumble (2012) there were 4 targets that were hardly ever obstructed (this year there was that pesky pyramid), so you could solve the pose problem for all 4 targets; even if you only saw one hoop, you could calculate where the other 3 would be with deadly accuracy. That’s what we did. And if you use a gyro, like we do with our mecanums, you could check the gyro readings against your vision solutions. That is something we plan on doing in the future, considering how versatile this program is.
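If you want to try that solve, it maps directly onto OpenCV’s solvePnP; a rough sketch, where the target dimensions, pixel corners, and camera intrinsics are all placeholders (use the real goal dimensions from the game manual and your own calibration):

    // Sketch of a single-camera pose solve with cv::solvePnP. All numbers
    // below are placeholders, not real goal specs or real calibration data.
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        // 3D corners of one rectangular vision target, in its own frame (meters).
        std::vector<cv::Point3f> object = {
            {0.0f, 0.0f, 0.0f}, {0.61f, 0.0f, 0.0f},
            {0.61f, 0.41f, 0.0f}, {0.0f, 0.41f, 0.0f}};
        // Matching 2D pixel corners found by your detection code (placeholders).
        std::vector<cv::Point2f> image = {
            {310.f, 220.f}, {402.f, 218.f}, {404.f, 281.f}, {312.f, 283.f}};

        // Placeholder intrinsics; get real ones from cv::calibrateCamera.
        cv::Mat K = (cv::Mat_<double>(3, 3) << 700, 0, 320, 0, 700, 240, 0, 0, 1);
        cv::Mat dist = cv::Mat::zeros(4, 1, CV_64F);

        cv::Mat rvec, tvec;
        cv::solvePnP(object, image, K, dist, rvec, tvec);

        // tvec is the target's position in the camera frame; from it (and
        // rvec) you can place yourself on the field and predict where the
        // other goals should appear.
        cv::Mat t = tvec.t();
        std::cout << "t = " << t << std::endl;
    }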

Sorry, I’ve been a vision nut since I started vision programming for Rebound Rumble - well, since I wrote a program to track 2011’s pegs and tell which is which and what colour to put at each peg (this was done after the season).
Computer vision is a whole field of its own within computer science. Every decent college will have a professor who can at least help you with it. Just email them; I’ve worked with professors from Harvey Mudd, Wash U, and Missouri S&T, and soon UMSL, because they want to support our team by giving us mentors.

I find it unnecessary to use two cameras: the data from both have to be related to each other, and to do that you have to solve for the relation between them (which is essentially pose), so why not use pose to begin with and only use one camera?

We had a similar issue in 2012. I’m going to assume WPI C++ has the same/similar bug WPI LabVIEW does. In LabVIEW, a memory location is allocated for images and assigned a unique name. If any other function call is given the same image reference, it uses the same location.

It turns out some of the LabVIEW function calls had this name hardcoded internally (I went down the rabbit hole). When we changed the name, it started working again. Recommendation: dig through the libraries, find the referenced memory pointer/location, and change it as necessary. Save the changed files as a new library.

Images are somewhat unique within LV as they are reference based. The name acts as the reference. This means that if you want to share images, you can use the name. And it means that if you intend to allocate a unique image, you need to use a unique name.
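If it helps to see it in code, here is a C++ analogy of that name-as-reference behavior (illustrative only; this is not how LabVIEW is implemented):

    // Illustrative analogy only (not LabVIEW internals): images live in a
    // registry keyed by name, so asking for the same name yields the same
    // underlying buffer, while a new name allocates a distinct one.
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    using Image = std::vector<unsigned char>;

    Image& GetImage(const std::string& name) {
        static std::map<std::string, Image> registry;
        return registry[name];  // same name -> same underlying image
    }

    int main() {
        Image& a = GetImage("Camera Image");
        Image& b = GetImage("Camera Image");    // aliases 'a'
        Image& c = GetImage("Camera Image 2");  // distinct buffer
        std::printf("a==b: %s, a==c: %s\n",
                    (&a == &b) ? "yes" : "no", (&a == &c) ? "yes" : "no");
    }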

I think the WPI camera stuff, at least the stuff written years ago, didn’t even let you pick the image that camera acquisition or processing went into. And as written, it wasn’t appropriate for multiple cameras or more sophisticated image processing.

Greg McKaskle