Rectangle corner extraction in JavaCV with source

Here is a sample program that goes through the essential JavaCV commands to convert a colored, distorted rectangle into a set of corners.

Two approaches are given. Both are close, but not quite there. The important part is that this program cuts through all the difficult ‘how do I call JavaCV’ questions. Maybe someone else can improve on the answers given. We are going to continue to refine this program as well.

Let’s see who gets there first.

Quick start:
Unzip the Eclipse project RectangleDetectJavaCV. The Windows 64-bit JavaCV jar files are included. The project assumes c:\opencv is your OpenCV install directory (OpenCV itself is not included). Also verify the JavaCV source paths, etc. Source paths are not required, but are very helpful.

All paths are hard-coded for simplicity, so you will have to change them to suit your environment.

Use the supplied image 20.jpg to get the results in processed20.zip.

processed20.zip (230 KB)
20.jpg
RectangleDetectJavaCV.zip (1.97 MB)
092_goodFeaturesEdgeOverlay.jpg
092_goodFeaturesRawOverlay.jpg



cvSplit(image, b, g, r, null); // b and r are reversed for some reason

The reason b and r are “switched” is that the image is in BGR24, not RGB. The raw feed from the camera is YUV420, which FFmpeg converts to BGR24. Why BGR24? I don’t know.
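To see why the channel order matters, here is a minimal sketch in plain Java (no JavaCV): a BGR24 buffer stores each pixel as B, G, R, so reading it with RGB assumptions swaps red and blue. The class and method names are illustrative only.

```java
// Minimal sketch of the BGR24 channel order: the bytes arrive as
// B, G, R per pixel, so red and blue look "switched" if you assume RGB.
public class BgrDemo {
    // Interpret three packed bytes as (r, g, b), assuming BGR24 order.
    static int[] rgbFromBgr24(byte[] px) {
        return new int[] { px[2] & 0xFF, px[1] & 0xFF, px[0] & 0xFF };
    }

    public static void main(String[] args) {
        byte[] pureRedPixel = { 0, 0, (byte) 255 }; // stored as B=0, G=0, R=255
        int[] rgb = rgbFromBgr24(pureRedPixel);
        System.out.println(rgb[0] + "," + rgb[1] + "," + rgb[2]);
    }
}
```

In the JavaCV pipeline the same effect is why the b and r arguments to cvSplit look reversed.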

From what I’ve read, color filtering works better in HSL/HSV, and I’m currently trying to figure out how to do that.
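Here is a sketch of why HSV filtering tends to work better, in plain Java using the standard-library `Color.RGBtoHSB` (in a JavaCV pipeline the conversion itself would be done with `cvCvtColor` and `CV_BGR2HSV`; the band values below are made up for illustration):

```java
import java.awt.Color;

// Sketch of HSV-based color filtering: bright and dim shades of the same
// color share roughly the same hue, so one hue band catches both, whereas
// a raw RGB box would need much wider limits.
public class HsvFilterDemo {
    // True if the pixel's hue falls inside [hLow, hHigh] (hue scaled 0..1),
    // with minimum saturation/value so dark or washed-out pixels drop out.
    static boolean inHueBand(int r, int g, int b,
                             float hLow, float hHigh, float sMin, float vMin) {
        float[] hsv = Color.RGBtoHSB(r, g, b, null);
        return hsv[0] >= hLow && hsv[0] <= hHigh
            && hsv[1] >= sMin && hsv[2] >= vMin;
    }

    public static void main(String[] args) {
        // Bright green and dim green both have hue ~1/3, so one band matches both.
        System.out.println(inHueBand(0, 255, 0, 0.25f, 0.42f, 0.5f, 0.3f)); // true
        System.out.println(inHueBand(0, 128, 0, 0.25f, 0.42f, 0.5f, 0.3f)); // true
        System.out.println(inHueBand(255, 0, 0, 0.25f, 0.42f, 0.5f, 0.3f)); // false
    }
}
```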

Thanks for the reply. We are now using HSV filtering to get rectangles. We found we had to use multiple clip bands, depending on how far away from the hoop we were.

Maybe we can compare notes.

Just saw this today and I felt like integrating my code into yours (mostly because I need to do a lot of clean-up with my code and doing this forces me to figure out what parts are and aren’t needed in my own code).

  • A much easier way to do a threshold is cvInRangeS(…); an example would be
cvInRangeS(src, cvScalar(b_low, g_low, r_low, 0), cvScalar(b_high, g_high, r_high, 0), dst);
  • I kept the BGR threshold, but cvInRangeS also works if you convert your image to another color space (like HSL) first

  • For blurring, I scale the image down to half-size and back up… I have no real reason for doing it this way.

  • I also use a convexHull operation to fill in the gaps after that

  • I take each blob and find the points closest to the corners of its bounding rectangle; I choose these as the corners.

  • I eliminate found rectangles that are within other found rectangles (this is a problem that comes with the logic I use).

  • I also eliminate small polygons, so that my threshold can be wider
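The bounding-rectangle corner trick above can be sketched in plain Java (names are illustrative; the real version would pull blob points from cvFindContours):

```java
import java.awt.Point;
import java.util.Arrays;
import java.util.List;

// Sketch of corner picking: compute the blob's bounding box, then for each
// of its four corners pick the blob point nearest to it.
public class CornerPickDemo {
    static Point[] pickCorners(List<Point> blob) {
        int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
        int maxX = Integer.MIN_VALUE, maxY = Integer.MIN_VALUE;
        for (Point p : blob) {
            minX = Math.min(minX, p.x); maxX = Math.max(maxX, p.x);
            minY = Math.min(minY, p.y); maxY = Math.max(maxY, p.y);
        }
        Point[] boxCorners = {
            new Point(minX, minY), new Point(maxX, minY),
            new Point(maxX, maxY), new Point(minX, maxY)
        };
        Point[] picked = new Point[4];
        for (int i = 0; i < 4; i++) {
            long best = Long.MAX_VALUE;
            for (Point p : blob) {
                long dx = p.x - boxCorners[i].x, dy = p.y - boxCorners[i].y;
                long d = dx * dx + dy * dy; // squared distance is enough
                if (d < best) { best = d; picked[i] = p; }
            }
        }
        return picked; // order: top-left, top-right, bottom-right, bottom-left
    }

    public static void main(String[] args) {
        // A slightly skewed quadrilateral outline.
        List<Point> blob = Arrays.asList(
            new Point(10, 12), new Point(90, 10),
            new Point(92, 60), new Point(8, 58), new Point(50, 11));
        System.out.println(Arrays.toString(pickCorners(blob)));
    }
}
```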

I hope somebody will find this useful; it’s a pain to get started with JavaCV. The classes to add and the result are attached.

RectangleDetectKovs.java (10.7 KB)
PolygonStructure.java (5.2 KB)
05_polyCornersOnOriginal.jpg



This is great.

Here is our latest version. It has gone through a number of iterations.
We now have a full system that can run on an on-board co-processor. The algorithm needs some fine tuning, so your post came at a great time.

Launch using VisionSystem.main().

We have hard-coded values such as team name, resolution, camera ip address, and frames per second.

The system includes

  • a batch file that can launch from a startup folder. We have a windows service wrapper that works on a laptop, but does not work on the embedded coprocessor, due to dll loading issues.
  • an auto-reconnect feature, so that the program can re-establish connection to the camera automatically.
  • a frame grabber from the on-board camera (mjpeg).
  • configurable resolution and frames per second settings. These are hard coded in the constructor calls, but could be externalized.
  • integration with the network table, so results can be sent back to the cRio. Note: the NetworkTable has a subtable feature, but we found that the internal table id assignment approach led to runtime errors. We only use the top-level table, named ‘SmartDashboard’.
  • Auto-sharpen feature, which changes the HSV settings if the system does not detect a rectangle. We found the reflection SV values changed significantly as the robot moved from 8 feet to 21 feet.
  • Auto-viewport feature to handle multiple rectangles in the view. We pull in the sharpened bits, find the x-center of mass, and find the vertical bars within +/- 167 pixels. The vertical bars are reliable indicators for the backboard. Then we find the tops of the middle 4 vertical bars, which give us a bounding box for the actual top rectangle and run our rectangle detection code on that.
  • An http server in the application, so we can use a browser to view internal status values.
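The auto-viewport step above can be sketched in plain Java (class and method names are hypothetical; the real system works on the thresholded JavaCV image):

```java
// Sketch of the auto-viewport step: find the x center of mass of the
// thresholded ("sharpened") pixels, then keep only the columns within
// +/-167 px of it, which is where the backboard's vertical bars should sit.
public class ViewportDemo {
    // binary[y][x] is nonzero for a lit pixel after thresholding.
    static int xCenterOfMass(int[][] binary) {
        long sum = 0, count = 0;
        for (int y = 0; y < binary.length; y++)
            for (int x = 0; x < binary[y].length; x++)
                if (binary[y][x] != 0) { sum += x; count++; }
        return count == 0 ? -1 : (int) (sum / count); // -1 if nothing lit
    }

    // Clamp a +/-halfWidth column window around the center to the image width.
    static int[] viewportColumns(int center, int halfWidth, int imageWidth) {
        int lo = Math.max(0, center - halfWidth);
        int hi = Math.min(imageWidth - 1, center + halfWidth);
        return new int[] { lo, hi };
    }

    public static void main(String[] args) {
        int[][] binary = new int[10][640];
        for (int y = 0; y < 10; y++)
            for (int x = 300; x < 340; x++) binary[y][x] = 1; // a lit band
        int cx = xCenterOfMass(binary);
        int[] win = viewportColumns(cx, 167, 640);
        System.out.println(cx + " [" + win[0] + "," + win[1] + "]");
    }
}
```

The real version then looks for the vertical bars inside that column window and uses the tops of the middle four bars as the bounding box.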

We ended up moving away from the internal cv* functions because the hoop got in the way of the bottom bar. This kept the system from detecting the full rectangle. It may be that the approach we followed is too slow, so we may end up using your approach.

VisionSystem.zip (4.9 MB)



Here is the set of images we used to build up the algorithm.

The sub folder hsv has the processed images that successfully produced a range (for launch speed) and field of view offset (for aiming).

bright640_full_field.zip (4.11 MB)

