Laptop Vision and Converting between WPIBinaryImage and BinaryImage

If anyone is making an effort to do vision on the laptop, let me make your life a little bit easier. Here is the (almost) working code for a vision processing widget in SmartDashboard that I found and have been tinkering with. Right now it’s kind of a Frankenstein of Greg’s code and the stock vision code.


import edu.wpi.first.smartdashboard.camera.WPICameraExtension;
import edu.wpi.first.smartdashboard.robot.Robot;
import edu.wpi.first.wpijavacv.WPIBinaryImage;
import edu.wpi.first.wpijavacv.WPIColor;
import edu.wpi.first.wpijavacv.WPIColorImage;
import edu.wpi.first.wpijavacv.WPIContour;
import edu.wpi.first.wpijavacv.WPIImage;
import edu.wpi.first.wpijavacv.WPIPoint;
import edu.wpi.first.wpijavacv.WPIPolygon;
import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.CriteriaCollection;
import edu.wpi.first.wpilibj.image.NIVision.MeasurementType;
import edu.wpi.first.wpilibj.image.NIVisionException;
import edu.wpi.first.wpilibj.networking.NetworkTable;
import java.util.ArrayList;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * @author Greg Granito
 */
public class LaptopVisionRobot extends WPICameraExtension {

    public static final String NAME = "Robot Camera Square Tracker";

    NetworkTable table = NetworkTable.getTable("camera");
    WPIColor targetColor = new WPIColor(255, 0, 0);
    CriteriaCollection cc;

    @Override
    public void init() {
        cc = new CriteriaCollection();      // create the criteria for the particle filter
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false);
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false);
    }

    @Override
    public WPIImage processImage(WPIColorImage rawImage) {
        // threshold each channel, then AND the channels together
        WPIBinaryImage blueBin = rawImage.getBlueChannel().getThresholdInverted(100);
        WPIBinaryImage greenBin = rawImage.getGreenChannel().getThresholdInverted(100);
        WPIBinaryImage redBin = rawImage.getRedChannel().getThresholdInverted(100);

        WPIBinaryImage finalBin = blueBin.getAnd(redBin).getAnd(greenBin);
        try {
            // this is the line that will not compile (see the note below)
            BinaryImage thresholdImage = finalBin;
            BinaryImage bigObjectsImage = thresholdImage.removeSmallObjects(false, 2);  // remove small artifacts
            BinaryImage convexHullImage = bigObjectsImage.convexHull(false);            // fill in occluded rectangles
            BinaryImage filteredImage = convexHullImage.particleFilter(cc);             // find filled-in rectangles

            WPIContour[] contours = finalBin.findContours();

            ArrayList<WPIPolygon> polygons = new ArrayList<WPIPolygon>();

            // keep contours whose aspect ratio is roughly square
            for (WPIContour c : contours) {
                double ratio = ((double) c.getHeight()) / ((double) c.getWidth());
                if (ratio < 1.5 && ratio > 0.75) {
                    polygons.add(c.approxPolygon(45));
                }
            }

            ArrayList<WPIPolygon> possiblePolygons = new ArrayList<WPIPolygon>();

            // keep convex quadrilaterals; draw the rejects in cyan
            for (WPIPolygon p : polygons) {
                if (p.isConvex() && p.getNumVertices() == 4) {
                    possiblePolygons.add(p);
                } else {
                    rawImage.drawPolygon(p, WPIColor.CYAN, 5);
                }
            }

            WPIPolygon square = null;
            int squareArea = 0;

            // look for pairs of quads with (nearly) the same center; keep the bigger one
            for (WPIPolygon p : possiblePolygons) {
                rawImage.drawPolygon(p, WPIColor.GREEN, 5);
                for (WPIPolygon q : possiblePolygons) {
                    if (p == q) continue;

                    int pCenterX = p.getX() + (p.getWidth() / 2);
                    int qCenterX = q.getX() + (q.getWidth() / 2);
                    int pCenterY = p.getY() + (p.getHeight() / 2);
                    int qCenterY = q.getY() + (q.getHeight() / 2);

                    rawImage.drawPoint(new WPIPoint(pCenterX, pCenterY), targetColor, 5);
                    rawImage.drawPoint(new WPIPoint(qCenterX, qCenterY), targetColor, 5);

                    if (Math.abs(pCenterX - qCenterX) < 20
                            && Math.abs(pCenterY - qCenterY) < 20) {
                        int pArea = Math.abs(p.getArea());
                        int qArea = Math.abs(q.getArea());
                        if (pArea > qArea) {
                            square = p;
                            squareArea = pArea;
                        } else {
                            square = q;
                            squareArea = qArea;
                        }
                    }
                }
            }

            if (square != null) {
                double x = square.getX() + (square.getWidth() / 2);
                x = (2 * (x / rawImage.getWidth())) - 1;   // map the center to [-1, 1]
                double area = ((double) squareArea) / ((double) (rawImage.getWidth() * rawImage.getHeight()));
                synchronized (table) {
                    table.putBoolean("found", true);
                    table.putDouble("x", x);
                    table.putDouble("area", area);
                }
                Robot.getTable().putBoolean("found", true);
                Robot.getTable().putDouble("X", x);
                Robot.getTable().putDouble("Area", area);
                rawImage.drawPolygon(square, targetColor, 7);
            } else {
                table.putBoolean("found", false);
                Robot.getTable().putBoolean("found", false);
            }
        } catch (NIVisionException ex) {
            Logger.getLogger(LaptopVisionRobot.class.getName()).log(Level.SEVERE, null, ex);
        }
        return finalBin;
    }
}
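For what it’s worth, the `x` the snippet publishes to NetworkTables is the square’s center column mapped onto [-1, 1], so 0 means the target is dead center in the frame; that makes it convenient to feed straight into a turning controller. The mapping in isolation:

```java
public class NormalizeDemo {
    // Same math as in the snippet: map the target's center column to [-1, 1].
    static double normalize(double centerX, double imageWidth) {
        return (2 * (centerX / imageWidth)) - 1;
    }

    public static void main(String[] args) {
        System.out.println(normalize(0, 320));    // left edge   -> -1.0
        System.out.println(normalize(160, 320));  // dead center ->  0.0
        System.out.println(normalize(320, 320));  // right edge  ->  1.0
    }
}
```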

The major trouble I’m having, and why you’ll find this doesn’t compile, is this line:

BinaryImage thresholdImage = finalBin;

In order to do higher-order image processing, I need to convert from WPIBinaryImage to BinaryImage, but there doesn’t seem to be a way to do this. My only alternative at this point is to physically save a bitmap to the hard drive, then read it back out, but that is sooooo slow. Anybody have any ideas?

The Java libraries seem to lack a lot of the more complex image processing algorithms. Furthermore, the NI Vision Assistant can compile its vision processing scripts into C and LabVIEW, but not Java. We initially wanted to do our image processing in Java, but as it stands I think it simply lacks the functionality to do it effectively.

What I’m doing is offloading the image processing to a LabVIEW VI running on our driver station dashboard. This processes the images faster, reduces the load on the cRIO, and gives access to all the higher-end functionality LabVIEW offers for image processing. I transmit whatever data I need from the processed images back over TCP to the cRIO, and the Java code handles it from there.
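The TCP link itself is plain socket code. Here is a minimal loopback sketch in desktop Java showing the shape of it, with a hypothetical one-line-per-frame payload; note the cRIO’s Squawk VM is Java ME, so on the robot side you’d open the connection through `javax.microedition.io.Connector` rather than `java.net.Socket`:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class TcpLoopbackDemo {
    // The server stands in for the cRIO listener; the sender thread for the dashboard VI.
    static String roundTrip(final String payload) throws Exception {
        final ServerSocket server = new ServerSocket(0);  // grab any free port
        final int port = server.getLocalPort();

        Thread sender = new Thread(new Runnable() {
            public void run() {
                try {
                    Socket s = new Socket("127.0.0.1", port);
                    PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                    out.println(payload);  // one line of results per processed frame
                    s.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
        sender.start();

        Socket client = server.accept();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(client.getInputStream()));
        String line = in.readLine();
        client.close();
        server.close();
        sender.join();
        return line;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical payload: normalized x offset and target area for one frame.
        System.out.println(roundTrip("0.25,0.013"));
    }
}
```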

As much as I hate LabVIEW, it seems like one of the best ways to do things so far.

EDIT: And while I know this doesn’t strictly answer your question of how to do the conversion, it’s just my 2 cents on the image processing.

That’s the point of doing this conversion. The wpilibj package has almost all the same functions as LabVIEW, prewritten in Java; you just have to call them. The trouble is getting the image to it in the first place. I’ve found WPIBinaryImage’s and BinaryImage’s respective underlying image objects; the trouble is, they’re both protected. Is there a way to override the protection (i.e., make a private variable public in an already compiled jar)?
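You don’t actually have to edit the jar for that: reflection can lift the access modifier at runtime with `Field.setAccessible(true)`. A generic sketch with a stand-in class (the actual field names inside WPIBinaryImage and BinaryImage are whatever their source declares, so check there first):

```java
import java.lang.reflect.Field;

public class ReflectionDemo {
    // Stand-in for a compiled class with an inaccessible field.
    static class Wrapper {
        private int handle = 42;
    }

    // Read any field declared on the object's class, public or not, by name.
    static Object readField(Object target, String name) throws Exception {
        Field f = target.getClass().getDeclaredField(name);
        f.setAccessible(true);  // lifts the private/protected restriction
        return f.get(target);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readField(new Wrapper(), "handle"));  // prints 42
    }
}
```

The catch is that even with access, the two classes wrap different underlying representations (a JavaCV image on one side, an NI Vision pointer on the other), so reading the field out is only half the battle.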

Hopefully I’m wrong; as far as I looked, though, the Java seemed rather limited.

If you want to mess with the libraries, though, that’s actually a great thing to get into; it opens up a ton of possibilities. Open up My Documents/sunspotfrcsdk/wpilibj and poke around in the /src folder. You can find all the WPI libraries in there; just go find BinaryImage or whichever class you’re trying to modify, open the .java up in NetBeans, and rebuild the libraries.

I spent basically the last twelve hours tinkering with the libraries, and I don’t think much will come of it. I have no idea how the wrappers for the LabVIEW libraries format their images, because all they store is a Pointer. Rather than continue to be miserable, I think I’ll try the LabVIEW route. Are there any good docs on making your own driver station in LabVIEW?

Heh, welcome to the club :stuck_out_tongue:

Download and update LabVIEW, and when you open it up, start a new project and look for a DriverStation 2012 template. To make your actual vision processing script, play around with the NI Vision Assistant software. That’ll let you build a whole processing algorithm that it can then convert into a LabVIEW VI for you.

If you need any help with any stage of this, please don’t hesitate to ask me. I’ve done the same thing, plus a little more, and it’s been a long and arduous path, though it seems to be well worth it so far. I’d be happy to give any information on my setup you need.

Basically, I have an image processing subVI in my dashboard. That spits some values back to the dashboard itself and then, over TCP, sends the x,y coordinates of the centers of each detected target to the cRIO. I’ve made libraries on the cRIO that handle its side of the TCP communication and deal with those x,y coordinates, which I use for aligning the shooter turret.
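Once the bytes arrive, handling those x,y pairs on the Java side is mostly string parsing. A sketch, assuming a made-up wire format of semicolon-separated `x,y` pairs per frame (your VI’s actual format will differ, and the cRIO’s Java ME VM lacks `String.split`, so there you’d loop with `indexOf` instead):

```java
import java.util.ArrayList;

public class TargetParser {
    // Parse a line like "143.5,87.0;210.2,85.5" into a list of {x, y} centers.
    // The format is a hypothetical one chosen for this example.
    static ArrayList<double[]> parse(String line) {
        ArrayList<double[]> centers = new ArrayList<double[]>();
        if (line == null || line.length() == 0) {
            return centers;
        }
        for (String pair : line.split(";")) {
            String[] xy = pair.split(",");
            centers.add(new double[] { Double.parseDouble(xy[0]),
                                       Double.parseDouble(xy[1]) });
        }
        return centers;
    }

    public static void main(String[] args) {
        ArrayList<double[]> c = parse("143.5,87.0;210.2,85.5");
        System.out.println(c.size());     // two targets
        System.out.println(c.get(0)[0]);  // x of the first target
    }
}
```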

In tinkering with the NI Vision Assistant, I’ve created what I think is a good image processing scheme. However, I’m finding that the Java libraries are missing one function the Vision Assistant has: the Close Objects function, in the Binary Images tab under basic morphing.
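If Close Objects is the only missing piece, it may be easier to hand-roll than to switch toolchains: a morphological close is just a dilation followed by an erosion with the same structuring element, and it fills small gaps and holes. A sketch on a plain boolean grid with a 3x3 element (getting at the real pixel buffer is, of course, the same access problem discussed earlier in the thread):

```java
public class MorphClose {
    // Dilate: a pixel is set if any pixel in its 3x3 neighborhood is set.
    static boolean[][] dilate(boolean[][] img) {
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                for (int dy = -1; dy <= 1 && !out[y][x]; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && img[ny][nx]) {
                            out[y][x] = true;
                            break;
                        }
                    }
        return out;
    }

    // Erode: a pixel stays set only if its whole 3x3 neighborhood is set
    // (pixels outside the image count as unset).
    static boolean[][] erode(boolean[][] img) {
        int h = img.length, w = img[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                boolean all = true;
                for (int dy = -1; dy <= 1 && all; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny < 0 || ny >= h || nx < 0 || nx >= w || !img[ny][nx]) {
                            all = false;
                            break;
                        }
                    }
                out[y][x] = all;
            }
        return out;
    }

    // Close = dilate, then erode.
    static boolean[][] close(boolean[][] img) {
        return erode(dilate(img));
    }

    public static void main(String[] args) {
        boolean[][] img = new boolean[7][7];
        // a one-pixel-thick line with a one-pixel gap at (3,3)
        img[3][1] = img[3][2] = img[3][4] = img[3][5] = true;
        boolean[][] closed = close(img);
        System.out.println(closed[3][3]);  // the gap is filled: true
    }
}
```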

I’ve read several posts, including yours, that say Java only has limited image processing routines, so I’m interested in finding out more detail on how you are using LabVIEW on the driver station dashboard. Are you using the default dashboard provided with the driver station? We’re currently using the SmartDashboard. Would your method work with that? If not, I wonder if it’s possible to run both the default dashboard and the SmartDashboard at the same time. I have no experience in LabVIEW, but I do see that I can create a VI file for LabVIEW straight from the Vision Assistant program. After that, I have no clue how to implement it into the dashboard and then send the info back to the Java code.