Getting distance to the goal

Just wondering what other teams are using (or have used) to get the distance to the goal. If I have the distance, I can plug it into the equation for our shooter and spit out the angle I need to make the shot. Really, my question is: what is the most efficient way to get the distance to the 3-point frisbee goal using vision processing? I’m using Java right now (we’re doing builds in both LabVIEW and Java).

Any help is much appreciated!

We’ve used an ultrasonic rangefinder, which came with fewer headaches but less sophistication and detail than vision processing. That might get the job done for what you need. Mount it centered on the front of the bot, and the distance to the wall can be used for calculating shooter speed/shot distance. A voucher from MaxBotix was included in the KOP, so it might be a sensible solution.
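For what it’s worth, reading one of those MaxBotix sensors in Java is just an analog channel read and a scale factor. Here is a rough, untested sketch; the channel number is whatever slot you wire it into, and the volts-per-inch scaling assumes an LV-EZ style sensor on a 5 V supply (roughly Vcc/512 per inch), so check the datasheet for your particular unit.

import edu.wpi.first.wpilibj.AnalogChannel;

public class UltrasonicRangeFinder {

    // MaxBotix LV-EZ analog output is roughly Vcc/512 per inch;
    // at 5 V that is about 9.8 mV per inch (verify against your datasheet)
    private static final double VOLTS_PER_INCH = 5.0 / 512.0;

    private final AnalogChannel channel;

    public UltrasonicRangeFinder(int analogChannel) {
        channel = new AnalogChannel(analogChannel);
    }

    /** Distance to whatever is directly in front of the sensor, in inches. */
    public double getDistanceInches() {
        return channel.getAverageVoltage() / VOLTS_PER_INCH;
    }
}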

Well, we used one of those last year, but if anything gets in the way (a frisbee, a bot, or even the pyramid) it returns a false distance. So I was thinking of going with the photon cannon idea and using the apparent distance between the top and bottom of the retroreflective tape to give me my distance with some basic trig. I was just seeing if there were any very efficient algorithms for doing this; obviously I don’t want it to eat a ton of processing.

Most teams I have talked with plan on using the pyramid to their advantage when lining up a shot. I think it’s a great plan.

I agree, but I’d like to be able to shoot from anywhere, which would speed things up tenfold.

Anywho, I found the whitepaper on tracking,
http://firstforge.wpi.edu/sf/go/doc1302?nav=1

I don’t quite understand the distance calculation where they work out the width of the entire field of view.
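The way I read that part (this is just my sketch of the math, not the paper’s code): the target has a known real-world width, so the ratio of the image width in pixels to the target’s width in pixels tells you how wide the entire field of view is, in feet, at the target’s distance. Half of that width and half of the camera’s horizontal view angle form a right triangle whose adjacent side is the distance. Something like this, where the resolution and view angle are placeholders you’d swap for your camera’s actual specs:

// Sketch of the whitepaper-style distance calculation (untested; the numbers
// below are placeholders -- use your camera's actual resolution and view angle)
public class FovDistance {

    static final double TARGET_WIDTH_FT = 4.5;       // real width of the target, in feet
    static final int IMAGE_WIDTH_PX = 320;           // camera image width, in pixels
    static final double HALF_VIEW_ANGLE_DEG = 23.5;  // half of the camera's horizontal view angle

    /** Distance to the target in feet, given the target's width in pixels. */
    static double distanceFeet(double targetWidthPx) {
        // Width of the entire field of view, in feet, at the target's distance
        double fovWidthFt = TARGET_WIDTH_FT * (IMAGE_WIDTH_PX / targetWidthPx);
        // Half the FOV width over tan(half the view angle) gives the distance
        return (fovWidthFt / 2.0) / Math.tan(HALF_VIEW_ANGLE_DEG * Math.PI / 180.0);
    }
}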

Alrighty, I think I’ve got it. I’ll upload my Java class in a few; it’s untested but theoretically works. I’ll have to test it tomorrow at our meeting.

I am very interested in seeing this, as I am having difficulty doing it with vision processing.

I think there’s a Java API out there, but I’m not sure what it’s called. D:

What you have to do is determine the width of the target in pixels, compare it against a constant value measured with the target centered, and solve for the hypotenuse, which is your distance.
EDIT: Use the height for distance, not the width. The width can vary with your angle to the target.
Example:
Note, these are necessary constants

    public static final int IMAGE_WIDTH = 320;
    public static final int IMAGE_HEIGHT = 240;
    public static final double TARGET_WIDTH = 2.0;
    public static final double TARGET_HEIGHT = 1.5;
    public static final double RECTANGLE_QUALITY_THRESHOLD = 95.0;
    public static final double PARTICLE_TO_IMAGE_THRESHOLD = 0.05;
    public static final double PARTICLE_AREA_THRESHOLD = 250.0;
    protected double fieldOfViewWidth = 0.0;

    /**
     * Return distance to target in inches.
     *
     * @param targetHeightInPixels height of the target's bounding box, in pixels
     * @return distance to the target, in inches
     */
    public double GetDistanceToTarget(int targetHeightInPixels)
    {
        // Height of the entire field of view at the target's distance (same units as TARGET_HEIGHT)
        double fieldOfViewHeight = TARGET_HEIGHT / targetHeightInPixels * IMAGE_HEIGHT;
        // Half the field-of-view height divided by tan(23.5 degrees)
        double distance = (fieldOfViewHeight / 2.0) / 0.4348123749;
        return distance * 12.0 * 1.4; // convert feet to inches, with a 1.4 correction factor
    }

I am assuming you can figure out how to filter for targets.
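If anyone wants a starting point for the filtering step, here’s a rough sketch of one way to do it, meant to live in the same class as the constants above so it can reuse them (ParticleAnalysisReport comes from edu.wpi.first.wpilibj.image). It throws out particles below the area cutoff and keeps the one whose bounding box is closest to the target’s width-to-height ratio; findBestTarget is just a name I made up, so adapt it to however your code is organized.

    public ParticleAnalysisReport findBestTarget(ParticleAnalysisReport[] reports)
    {
        ParticleAnalysisReport best = null;
        double bestError = Double.MAX_VALUE;
        double wantedRatio = TARGET_WIDTH / TARGET_HEIGHT;

        for (int i = 0; i < reports.length; i++)
        {
            ParticleAnalysisReport r = reports[i];
            if (r.particleArea < PARTICLE_AREA_THRESHOLD)
            {
                continue; // too small to be a goal, probably noise
            }
            // Compare the bounding box's aspect ratio to the real target's aspect ratio
            double ratio = (double) r.boundingRectWidth / (double) r.boundingRectHeight;
            double error = Math.abs(ratio - wantedRatio);
            if (error < bestError)
            {
                bestError = error;
                best = r;
            }
        }
        return best; // null if nothing passed the area cutoff
    }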

Appreciate the replies! Sorry it took a few hours for me to post the code; I was out for dinner and such…


package org.first.team2620;

import edu.wpi.first.wpilibj.DriverStationLCD;
import edu.wpi.first.wpilibj.Timer;
import edu.wpi.first.wpilibj.camera.AxisCamera;
import edu.wpi.first.wpilibj.camera.AxisCameraException;
import edu.wpi.first.wpilibj.image.BinaryImage;
import edu.wpi.first.wpilibj.image.ColorImage;
import edu.wpi.first.wpilibj.image.CriteriaCollection;
import edu.wpi.first.wpilibj.image.NIVision.MeasurementType;
import edu.wpi.first.wpilibj.image.NIVisionException;
import edu.wpi.first.wpilibj.image.ParticleAnalysisReport;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;


/**
 *
 * @author frc2620
 */
public class CameraDistanceCalc {
    
    private AxisCamera camera_;
    private CriteriaCollection cc;
    private DriverStationLCD b_LCD;
    
    public CameraDistanceCalc(AxisCamera cam)
    {
        camera_ = cam;
        cc = new CriteriaCollection();      
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false);
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false);
        b_LCD = DriverStationLCD.getInstance();
        b_LCD.updateLCD();
    }
    
    public CameraDistanceCalc(String cameraIP)
    {
        camera_ = AxisCamera.getInstance(cameraIP); 
        cc = new CriteriaCollection();      
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false);
        cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false);
        b_LCD = DriverStationLCD.getInstance();
        b_LCD.updateLCD();
    }
    
    public double calcDistance()
    {
        double distance = 0;
        try 
        {
            ColorImage image = camera_.getImage();   
            //ColorImage image;                           
            //image =  new RGBImage("/10ft2.jpg");
            //BinaryImage thresholdImage = image.thresholdRGB(25, 255, 0, 45, 0, 47);   // keep only red objects
            BinaryImage thresholdImage = image.thresholdRGB(0, 45, 25, 225, 0, 45);     // Keep only green objects
            BinaryImage bigObjectsImage = thresholdImage.removeSmallObjects(false, 2);  // remove small artifacts
            BinaryImage convexHullImage = bigObjectsImage.convexHull(false);            // fill in occluded rectangles
            BinaryImage filteredImage = convexHullImage.particleFilter(cc);             // find filled in rectangles

            ParticleAnalysisReport[] reports = filteredImage.getOrderedParticleAnalysisReports();  // get list of results

            // reports is ordered from largest to smallest particle area; the 3-pointer should be the smallest
            ParticleAnalysisReport ThreePtn = reports[reports.length - 1];

            double tw = 4.5;                          // real width of the target, in feet
            double tpw = ThreePtn.boundingRectWidth;  // width of the target in the image, in pixels
            double cw = 320;                          // width of the camera image, in pixels

            double WFOV = tw * (cw / tpw);            // width of the entire field of view, in feet

            // half the field-of-view width divided by tan(half the camera's view angle)
            distance = (WFOV / 2) / Math.tan(27 * (Math.PI / 180));
            
            
            System.out.println(filteredImage.getNumberParticles() + "  " + Timer.getFPGATimestamp());

            SmartDashboard.putNumber("Number of Particles: ", filteredImage.getNumberParticles());
            SmartDashboard.putNumber("Height:  ", filteredImage.getHeight());
            SmartDashboard.putNumber("Width:   ", filteredImage.getWidth());
            SmartDashboard.putNumber("Distance:", distance);

            filteredImage.free();
            convexHullImage.free();
            bigObjectsImage.free();
            thresholdImage.free();
            image.free();

        } catch (AxisCameraException ex) {
            ex.printStackTrace();
        } catch (NIVisionException ex) {
            ex.printStackTrace();
        }
        return distance;
    }
}

I have not tested this, but if you need help let me know! I should be able to.
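If it helps, here’s roughly how I picture dropping it into a robot project. This is just a sketch: the camera IP is a placeholder (the usual 10.TE.AM.11 convention), and calling calcDistance() every loop is probably too slow for real use, so you’d likely only run it when the driver asks for a shot.

package org.first.team2620;

import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class RobotTemplate extends IterativeRobot {

    private CameraDistanceCalc distanceCalc;

    public void robotInit() {
        // Placeholder camera IP -- use your camera's actual address
        distanceCalc = new CameraDistanceCalc("10.26.20.11");
    }

    public void teleopPeriodic() {
        // Grabbing and processing a frame is slow; in a real robot you would
        // probably gate this behind a button press instead of every loop
        double distance = distanceCalc.calcDistance();
        SmartDashboard.putNumber("Distance to 3pt goal", distance);
    }
}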

Has anyone tested the class I made? I’d like to know any results you have from it. :) I haven’t had a chance to try it yet; other things have outweighed it.

Team 53 is going to mount the Xbox Kinect directly on our robot and use OpenCV with an on-board computer (BeagleBoard) to process our images.

I wouldn’t recommend this. We used one last year and had issues with teams on our alliance accidentally driving in front of it, throwing our shooting off. Near the end of the second regional, we clued in that we should probably tell our alliance partners about it. Turns out this was a bad idea: teams we played with in one match, and against in later matches (including eliminations), only had to SIT in front of us to throw our shooting off.

We had to fall back on our preset shooter values, which weren’t nearly as accurate or well tuned as the values from our ultrasonic. We spent all of lunch after alliance selections at our second regional tuning our presets, and we used them almost exclusively in our elimination matches.

However, the apparent height varies as well, depending on the height of your robot and how close you are to the target.
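Part of that can be accounted for if you know roughly how high your camera sits: treat the vision result as the straight-line distance to the target and use the height difference to recover the distance along the floor. A quick sketch; the two heights here are placeholders, not actual field or robot measurements:

// Sketch: convert the camera's line-of-sight distance into distance along the floor.
// Both height constants are placeholders -- check the field drawings and measure your robot.
public class FloorDistance {

    static final double TARGET_CENTER_HEIGHT = 104.0; // inches, placeholder
    static final double CAMERA_HEIGHT = 24.0;         // inches, placeholder

    /** Distance along the floor, given the straight-line distance from the camera (both in inches). */
    static double floorDistance(double lineOfSightDistance) {
        double rise = TARGET_CENTER_HEIGHT - CAMERA_HEIGHT;
        // The line-of-sight distance is the hypotenuse of a right triangle
        // whose legs are the floor distance and the height difference
        return Math.sqrt(lineOfSightDistance * lineOfSightDistance - rise * rise);
    }
}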

You see, this is what our team did last year: we built a very complex robot (one with a shooter that could turn and change angle, a rangefinder, a camera, etc.) and we ended up not doing too well at competition because the programming was too complex for the programmers. All I’m saying is, if it doesn’t work out, shoot from the front or back of the pyramid with a set angle and speed. That’s what we are doing. Good luck!

I agree with this. Having a shooter that can target from anywhere means layers of complexity in additional motors, moving parts that can fail, and programming. While there are certainly some teams that can pull this off, most of us will do better with a simple and reliable design.

I’ll add that targeting can be accomplished visually with a flashlight and no programming, since we have retroreflective targets.

A flashlight isn’t the best option for illuminating the targets. To make effective use of the retroreflective tape, what you want is a very wide-angle light source. In any case, requiring the driver to split their attention between the field and the video display might not be a good choice.

If you simply want to “paint” a bright spot on the goals, a flashlight is okay. Just keep in mind that the retroreflective border isn’t going to do a good job of bouncing that light anywhere except directly back at the flashlight.