Raspberry Pi + Camera Module = New Vision System?

I apologize if this isn’t the proper place for this, but the topic is kind of a mixture of everything in robotics.

As I finished up my summer I was looking for a fun project, and my dad had a Raspberry Pi and a camera module that plugs straight into it via a small ribbon cable. Nobody had made an easy-to-use API for it yet, so I figured writing one would be a fun way to finish up my summer.

My goal for this was to help create a new vision system with these components, because they are very cheap: the rPi costs $35, and the camera only $25. Plus, the camera is a 5MP camera capable of capturing much higher resolutions than an Axis camera, and it lets you process on a CPU that is nearly twice as powerful as the latest cRIO without impacting the robot functions.

The efficiency of this design is incredible, because all the vision processing is offloaded to the rPi! My goal is to improve the API for interacting with the camera as much as possible, with support for both C++ and Java. I’ll also try to write a white paper to fully explain the idea and how best to use it, in time for people to experiment with it before the build season.

If you are curious, all I have so far is the GitHub repo for the C++ API, where everyone is welcome to contribute wherever possible.

EDIT: Here is the link to the github… https://github.com/Josh-Larson/CameraBoardAPI

I have been working slowly on something similar I called the ARMSight.

Are you planning on snagging parts of OpenCV or are you planning on rolling your own vision algorithms?

This combo certainly has potential and is similar to the BeagleBone and other eval boards that are paired with sensors and act as a smart subsystem on a robot.

I’m not so sure that you need the 5 megapixels, and the key factor is more likely to be the configurability of the camera. If you can control its exposure, its color processing, and its resolution, that will be quite helpful in getting the data you want without noise and without the need for lots of extra processing. The other thing to keep in mind is the development tools and debugging tools. Vision systems rarely work the first time, debugging them can be tricky, and you frequently need to debug them when the environment changes. So invest in the ability to see the images and understand what the various processing steps are doing, and you will be far more productive later.

Sounds like a great project.
Greg McKaskle

I’ve been looking at some of the cheap Android 4.0 tablets that have come onto the market. You get a reasonable screen, 512 MB or 1 GB of memory, up to a 1.4 GHz quad-core ARM, and a camera for less than $100, with some refurbs for less than $60. Pair the Android SDK with OpenCV for Android. Could this be the cheap robot vision system? You would have to figure out how to get USB into the cRIO.

You don’t actually need USB at the cRIO.
You bridge the interface with another circuit and you can do exactly what you intend.

When I helped propose the 2015 control system, I actually did this.
Of course, that was without intending to bridge to the cRIO.
However, it wouldn’t be all that different.

The risk here, though, is that if you use the integrated camera in the tablet (which I presume you intend), you have to contend with a few things: the tablet dimensions; the lack of a lens fixture (this is easily worked around; I have recorded excellent video with a lens on an iPhone, you can even get a parabolic lens for the iPhone now, and 3D printers make this easy); and the potential for image controls in the firmware of the Android camera (there are circumstances where what looks good for a picture is not good for image processing). Plus there’s the matter of the tablet WiFi, which might be an issue.

So while I actually did make this work (and would happily make it work again if someone wants to tinker with it), I thought an integrated camera about the size and shape of the existing cameras might be a better choice.
Of course, it might not be the cheaper choice because of the power supply issue.
I doubt I can bundle a camera, a dual-core ARM board, and a suitable battery for less than $150 in the kind of quantity I would expect from FIRST (in other words, deliver it as a COTS product).
Though I have another application (or a dozen) for it.

So far I’ve approached that two ways:
An accessible local frame dump, so you can get the image without the overhead of TCP/IP (via the memory card).
A set of access points to set programmatic detectors for things like white-out (which does impact processing performance); a sketch of such a detector follows the list below.

The idea was:

  1. To be able to see what is going on if you need to see the process.
  2. To be able to alert if bad things are about to happen.
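
A detector like that can be as simple as something like this (an illustrative OpenCV sketch with placeholder thresholds, not my production values): it flags a frame when a large fraction of its pixels are nearly saturated, so the system can raise an alert (point 2 above) before wasting processing time on a washed-out image.

```cpp
#include <opencv2/opencv.hpp>

// Hypothetical white-out detector: returns true when more than
// maxFraction of the pixels in the frame are nearly saturated.
// The 250 gray-level cutoff and 0.5 fraction are illustrative only.
bool isWhitedOut(const cv::Mat &frameBGR, double maxFraction = 0.5) {
    cv::Mat gray, saturated;
    cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, saturated, 250, 255, cv::THRESH_BINARY);
    double fraction = static_cast<double>(cv::countNonZero(saturated))
                      / static_cast<double>(gray.rows * gray.cols);
    return fraction > maxFraction;
}
```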

My current image sensor is from OmniVision.
I’ve been using their CMOS ‘camera’ sensors for years.

With this camera you can do all of that, and more! I’m very excited about it because you can also control these variables:

  • Resolution
  • Brightness
  • Rotation
  • ISO (essentially the sensor’s sensitivity/gain)
  • Sharpness
  • Contrast
  • Saturation
  • Encoding (Currently only JPEG, BMP, GIF, and PNG)
  • Exposure
  • Auto White Balance
  • Image Effects
  • Metering
  • Horizontal/Vertical/Both Mirroring

I’m trying to add more features so that it will have better debugging and so on, like you said. If you have any suggestions for what I could add, I would be more than happy to hear them!
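
As a rough, generic illustration (these are not the CameraBoardAPI’s actual calls, which may differ), controls like these can be set through the standard V4L2 interface when a driver such as UV4L exposes the camera as /dev/video0. The values below are placeholders; real ranges would come from VIDIOC_QUERYCTRL:

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>

// Set a single V4L2 control; returns false if the driver rejects it.
static bool setControl(int fd, unsigned int id, int value) {
    v4l2_control ctrl = {};
    ctrl.id = id;
    ctrl.value = value;
    return ioctl(fd, VIDIOC_S_CTRL, &ctrl) == 0;
}

int main() {
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    // Illustrative values only.
    setControl(fd, V4L2_CID_BRIGHTNESS, 50);
    setControl(fd, V4L2_CID_CONTRAST, 0);
    setControl(fd, V4L2_CID_SATURATION, 0);
    setControl(fd, V4L2_CID_SHARPNESS, 0);
    setControl(fd, V4L2_CID_AUTO_WHITE_BALANCE, 0);  // disable AWB

    close(fd);
    return 0;
}
```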

That camera module is actually using:

5MP (2592×1944 pixels) Omnivision 5647 sensor in a fixed focus module

So basically it is using a similar interface to what I was doing.
Most of OmniVision’s modules are relatively similar.
If you take a microscope to a large number of USB webcams you’ll find OmniVision’s wafer art.

I wasn’t doing this with a Raspberry Pi though.
I was using something I worked up with a local company that makes ARM development boards.
The Raspberry Pi is faster.
It doesn’t have built-in CAN, which my board actually does.
It would be possible to add it (after all, it works on this board).

This should be interesting.
When this was proposed it didn’t seem like anyone was interested in it.
I look forward to seeing how this works out.
If I can help please let me know.

If you have any suggestions for what I could add, …

One of the features I use all the time in LabVIEW is the probe. For images, this allows you to view the image at any stage of processing. To implement this on remote targets takes a bit of work, but you may be able to do something similar by having a library function that will dump the image data into a dedicated debug buffer. Then write a tool on the PC that will retrieve the debug buffer over whatever bus you have between the development computer and your board, and display the image in the tool’s window.
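
Something like this could serve as the on-board side of such a probe (a hypothetical sketch with made-up names, just to give the flavor):

```cpp
#include <map>
#include <string>
#include <opencv2/opencv.hpp>

// Hypothetical image "probe": each call snapshots an intermediate
// image under a name so that a PC-side tool can later fetch and
// display it. Names and structure here are illustrative only.
static std::map<std::string, cv::Mat> g_debugBuffers;

void probe(const std::string &name, const cv::Mat &image) {
    image.copyTo(g_debugBuffers[name]);  // deep copy; pipeline may reuse image
}

// In the pipeline:
//   probe("raw", frame);
//   cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
//   probe("hsv", hsv);
// The PC tool would then request entries of g_debugBuffers (encoded,
// e.g., with cv::imencode) over whatever bus links the two machines.
```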

It would be even cooler if you could figure out how to do this without necessarily recompiling.

Other than that, see if you can have access to view input and output values through something like NetworkTables. This will allow for much quicker iteration.

Greg McKaskle

I’ve been playing with a Raspberry Pi and a Logitech USB camera this summer and have been pretty impressed. I’ve got a Pi camera sitting on my desk ready to go… it’s on the list of things to check out.

One or two small things to keep in mind is that the Pi camera has a fairly wide field of view (as I understand it)… which can be good for latching on to a target, but maybe not so good for precise targeting. On the other hand… the increased resolution might make up for that.

The other minor thing to note is that it apparently has an IR filter, which only matters if you are using IR beacons as part of the nav/targeting package.

Other than those minor-to-the-point-of-almost-insignificant things, go for it. The Pi community might one day grow to match the Arduino community. That aspect alone makes the Pi stand out as a development platform.

Jason

Jason,
What resolution and processed frame rate are you able to get with that configuration?
Previous limitations on the combo you are using were keeping the frame rate really low. Where do you have it now?

I have been working with the pcDuino and a Microsoft USB webcam, and I am able to get 320x240 images processed into tracking coordinates and sent to the cRIO at 10+ FPS in a worst-case lighting environment.
(I should be able to get close to 20+ FPS with the proper lighting environment and calibration.)

I have made some significant improvements to the speed of the rPi and the camera module; however, it is still too slow for what I would like to get. I got about 450 ms per image (roughly 2 FPS) at 640x480 with bitmap images, and about 161 ms per image (roughly 6 FPS) at 640x480 with JPEG images.

I have a few ideas on how to create a system to fully utilize the two resources, and I’ll make sure to keep this thread updated when I make progress!

I am just wondering if anyone knows whether this camera will work:
http://www.goldmine-elec-products.com/prodinfo.asp?number=G19511

A quick count on my Pi camera shows a 15-wire connector. The link you provided states an 18-wire connector. So, potential driver nightmares aside, I think there might be some hardware issues first.

As for a previous question: I’m now getting streaming video, over Wi-Fi and the Internet, at “a few” frames per second at 640x480 using a Logitech USB camera and the MJPG-Streamer software. There were also some latency issues, but interestingly, those varied from browser to browser. I could have Firefox and Chrome open on the same computer and get different latencies in each.

So I don’t entirely blame the Pi for that.

I haven’t had a chance to play with the Pi camera itself yet… but (at the risk of repeating my post from a different thread) I will recommend anyone with a Pi take a look at PiFM.

Jason

I actually had just started work on a project like this after hearing about it from a friend, so I’m glad I found this thread, and I’d love to help! Is there anything specific I could help out with?

Also, a question: what do you think would be the best way to communicate between the Pi and the cRIO? I tried a simple TCP connection and set up some basic communication, and I was getting about a full second of lag. Any ideas why this might be, or alternative ways of communicating?
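
One possibility I’ve been wondering about (just a guess on my part, and a full second suggests something else may also be going on, like buffering on the receiving side) is Nagle’s algorithm batching small writes. A minimal sketch of disabling it on the socket:

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Disable Nagle's algorithm so small writes go out immediately
// instead of being batched with later data; this often reduces
// latency for tiny, frequent messages.
void disableNagle(int sockfd) {
    int flag = 1;
    setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag));
}
```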

Again, this sounds really interesting, and I’d love to help out!

Do you guys know of any cheap substitute for the Raspberry Pi camera that doesn’t use USB?

I hate how the Pi doesn’t support any of the $2 cameras because of its interface. Why can’t the RPi Foundation make the camera cheaper? $25 is a lot, especially when you can buy a better substitute at a lower price.

You can help by checking out the GitHub and making some example programs using OpenCV. Progress isn’t very fast, and I am grabbing any spare time I can to spend on this. So if anyone is able to make good examples that utilize any library, that would be very helpful to teams.

You can use the Camera Module, which is $25.

As a status update: I started utilizing OpenCV and the UV4L driver to get a /dev/video0 input stream. At 320x240 I got about 10-12 frames per second while processing and rotating the image (the Pi wasn’t oriented correctly for my setup), and about 3 FPS at 640x480.

The goal was to find mini retro-reflective targets that resembled the targets from last year. The algorithm converted to HSV, thresholded, then searched for contours. From there I filtered out the bad polygons and was left with (mostly; occasionally I would get an outlier) my two targets.
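
In rough outline, that pipeline looks something like this (a simplified sketch; the HSV bounds and area cutoffs are placeholders, not my calibrated values):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);  // first V4L device, e.g. the UV4L /dev/video0
    if (!cap.isOpened()) return 1;

    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        // 1. Convert to HSV and threshold; these bounds are placeholders
        //    that must be calibrated to the lighting and target color.
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(50, 100, 100),
                    cv::Scalar(90, 255, 255), mask);

        // 2. Find contours in the binary image.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);

        // 3. Filter out bad polygons by area and "rectangularity"
        //    (contour area vs. bounding-box area).
        for (const auto &c : contours) {
            double area = cv::contourArea(c);
            if (area < 100) continue;  // placeholder minimum area
            cv::Rect box = cv::boundingRect(c);
            double rectangularity = area / (double)(box.width * box.height);
            if (rectangularity > 0.6)  // placeholder cutoff
                cv::rectangle(frame, box, cv::Scalar(0, 255, 0), 2);
        }
    }
    return 0;
}
```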

I think this is a good time to give another update on the status of the project. Using a C++ V4L interface, I was able to get raw data incredibly fast with low-level I/O reads from the UV4L driver (/dev/video0). From there I made a Mat in OpenCV, thresholded the image, and then located the contours. I then did some filtering based on two factors: contour area and “rectangularity.” That left me with mostly just my vision targets, and I was streaming 320x240 at 30 FPS with some idle time, and 640x480 at 12 FPS with no idle time (lots of processing done on these raw images).
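
The capture path, in simplified form, looks something like this (a sketch assuming the driver supports read() I/O and is already configured, e.g. via VIDIOC_S_FMT, for raw YUYV frames, and a reasonably recent OpenCV; the real code does more):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <vector>
#include <opencv2/opencv.hpp>

int main() {
    const int W = 320, H = 240;
    int fd = open("/dev/video0", O_RDONLY);
    if (fd < 0) return 1;

    // One raw frame: 2 bytes per pixel in YUYV (YUV 4:2:2) format.
    std::vector<unsigned char> buf(W * H * 2);
    if (read(fd, buf.data(), buf.size()) != (ssize_t)buf.size()) return 1;

    // Wrap the raw bytes in a Mat (no copy) and convert to BGR for OpenCV.
    cv::Mat yuyv(H, W, CV_8UC2, buf.data());
    cv::Mat bgr;
    cv::cvtColor(yuyv, bgr, cv::COLOR_YUV2BGR_YUYV);

    close(fd);
    return 0;
}
```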

That isn’t where I stopped, though; I also went on to make an Android app to help calibrate the vision system under different lighting environments. To do this, I chose to run an HTTP server on the Raspberry Pi using a C++ library (libmicrohttpd), with endpoints to set brightness, threshold, and contour-area values, as well as to request images (PNG).
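
The server skeleton looks roughly like this (simplified; the endpoint and parameter names are just an example, not the full API):

```cpp
#include <microhttpd.h>
#include <cstdlib>
#include <cstring>
#include <cstdio>

// Hypothetical handler: GET /set?brightness=50 updates a setting.
// A real server would also serve encoded PNG frames on another path.
static int g_brightness = 50;

// Older libmicrohttpd versions use int here; newer ones use MHD_Result.
static int handler(void *cls, struct MHD_Connection *conn,
                   const char *url, const char *method, const char *version,
                   const char *upload_data, size_t *upload_size, void **ptr) {
    const char *val = MHD_lookup_connection_value(conn, MHD_GET_ARGUMENT_KIND,
                                                  "brightness");
    if (val) g_brightness = atoi(val);

    const char *page = "OK";
    struct MHD_Response *resp = MHD_create_response_from_buffer(
        strlen(page), (void *)page, MHD_RESPMEM_PERSISTENT);
    int ret = MHD_queue_response(conn, MHD_HTTP_OK, resp);
    MHD_destroy_response(resp);
    return ret;
}

int main() {
    struct MHD_Daemon *d = MHD_start_daemon(MHD_USE_SELECT_INTERNALLY, 8080,
                                            NULL, NULL, &handler, NULL,
                                            MHD_OPTION_END);
    if (!d) return 1;
    getchar();  // run until a key is pressed
    MHD_stop_daemon(d);
    return 0;
}
```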

Great job. Keep up the progress!