Here is a sample program that goes through the essential JavaCV commands to convert a colored, distorted rectangle into a set of corners.
Two approaches are given. Both are close, but not quite there. The important part is that this program cuts through all the difficult ‘how do I call JavaCV’ questions. Maybe someone else can improve on the answers given. We are going to continue to refine this program as well.
Let’s see who gets there first.
Quick start:
Unzip the Eclipse project RectangleDetectJavaCV. The Windows 64-bit JavaCV jar files are included. The project assumes C:\opencv is your OpenCV install directory (OpenCV itself is not included). Also verify the JavaCV source paths, etc. Source paths are not required, but they are very helpful.
All paths are hard-coded for simplicity, so you will have to change them to suit your environment.
Use the supplied image 20.jpg to get the results in processed20.zip.
cvSplit(image, b, g, r, null); // b and r are reversed for some reason
The reason b and r are “switched” is that the image is in BGR24, not RGB. The raw feed from the camera is YUV420, which FFmpeg converts to BGR24. Why BGR24? I don’t know.
From what I’ve read, color filtering works better in HSL/HSV space, and I’m currently trying to figure out how to do that.
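If it helps, the conversion itself is one cvCvtColor call. This is just a sketch, assuming the com.googlecode-era JavaCV packaging that the zipped project appears to bundle (newer JavaCV releases moved to org.bytedeco, so the import paths may differ):

```java
// Sketch: convert the BGR24 frame FFmpeg produces into HSV before filtering.
// Import paths assume the older com.googlecode JavaCV jars.
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;

public class HsvConvert {
    public static IplImage toHsv(IplImage bgr) {
        IplImage hsv = cvCreateImage(cvGetSize(bgr), IPL_DEPTH_8U, 3);
        // The frame is BGR24, so CV_BGR2HSV is the right code, not CV_RGB2HSV.
        cvCvtColor(bgr, hsv, CV_BGR2HSV);
        return hsv;
    }
}
```

Note that OpenCV stores H in 0–179 for 8-bit images, so hue bounds from a 0–360 color picker need to be halved.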
Thanks for the reply. We are now using HSV filtering to get rectangles. We found we had to use multiple clip bands, depending on how far away from the hoop we were.
Just saw this today and I felt like integrating my code into yours (mostly because I need to do a lot of clean-up with my code, and doing this forces me to figure out which parts of my own code are and aren’t needed).
A much easier way to do a threshold is cvInRangeS(…); an example would be:
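The snippet seems to have been lost from the post, so here is a minimal sketch of the call, assuming the frame has already been converted to HSV; the band bounds are placeholders to tune, not values from this thread:

```java
// Sketch: single-band HSV threshold with cvInRangeS.
import static com.googlecode.javacv.cpp.opencv_core.*;

public class HsvThreshold {
    public static IplImage inBand(IplImage hsv,
                                  double hMin, double sMin, double vMin,
                                  double hMax, double sMax, double vMax) {
        IplImage mask = cvCreateImage(cvGetSize(hsv), IPL_DEPTH_8U, 1);
        // Pixels whose three channels all fall inside [lower, upper]
        // become 255 in the single-channel mask; everything else is 0.
        cvInRangeS(hsv, cvScalar(hMin, sMin, vMin, 0),
                        cvScalar(hMax, sMax, vMax, 0), mask);
        return mask;
    }
}
```

This replaces the three-way cvSplit-and-compare dance with one call, and multiple clip bands can be OR-ed together with cvOr on the resulting masks.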
Here is our latest version. It has gone through a number of iterations.
We now have a full system that can run on an on-board co-processor. The algorithm needs some fine tuning, so your post came at a great time.
Launch using VisionSystem.main().
We have hard-coded values such as team name, resolution, camera ip address, and frames per second.
The system includes:
- a batch file that can launch from a startup folder. We have a Windows service wrapper that works on a laptop, but it does not work on the embedded co-processor due to DLL loading issues.
- an auto-reconnect feature, so the program can re-establish the connection to the camera automatically.
- a frame grabber for the on-board camera (MJPEG).
- configurable resolution and frames-per-second settings. These are hard-coded in the constructor calls, but could be externalized.
- integration with the network table, so results can be sent back to the cRIO. Note: NetworkTable has a subtable feature, but we found that its internal table-id assignment led to runtime errors, so we only use the top-level table, named ‘SmartDashboard’.
- an auto-sharpen feature, which changes the HSV settings if the system does not detect a rectangle. We found the reflection’s S and V values changed significantly as the robot moved from 8 feet to 21 feet.
- an auto-viewport feature to handle multiple rectangles in the view. We pull in the sharpened bits, find the x-center of mass, and find the vertical bars within +/- 167 pixels. The vertical bars are reliable indicators for the backboard. We then find the tops of the middle 4 vertical bars, which give us a bounding box for the actual top rectangle, and run our rectangle-detection code on that.
- an HTTP server in the application, so we can use a browser to view internal status values.
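For anyone wiring up the same NetworkTable integration, the publishing side is roughly the shape below. This is a sketch only: the key names are made up for illustration, the package path assumes a WPILib-era NetworkTable client on the co-processor, and method names vary between NetworkTable releases (older clients used putDouble where newer ones use putNumber):

```java
// Sketch: publish vision results to the cRIO through the top-level
// 'SmartDashboard' table, avoiding subtables as described above.
// Key names ("target_x", etc.) are illustrative, not from the project.
import edu.wpi.first.wpilibj.networktables.NetworkTable;

public class ResultsPublisher {
    private final NetworkTable table = NetworkTable.getTable("SmartDashboard");

    public void publish(double centerX, double distanceFeet, boolean found) {
        table.putNumber("target_x", centerX);
        table.putNumber("target_distance", distanceFeet);
        table.putBoolean("target_found", found);
    }
}
```

Sticking to flat keys in the one top-level table sidesteps the subtable id-assignment problem entirely.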
We ended up moving away from the internal cv* functions because the hoop got in the way of the bottom bar. This kept the system from detecting the full rectangle. It may be that the approach we followed is too slow, so we may end up using your approach.
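For reference, the internal cv*-function approach mentioned here is typically cvFindContours followed by cvApproxPoly, keeping convex 4-vertex polygons as candidate rectangles. A hedged sketch, assuming the com.googlecode JavaCV packaging; the 2%-of-perimeter epsilon is a common default, not a value from this project:

```java
// Sketch: count 4-corner convex contours in a binary mask.
// Note: cvFindContours modifies its input, so pass a scratch copy.
import com.googlecode.javacpp.Loader;
import static com.googlecode.javacv.cpp.opencv_core.*;
import static com.googlecode.javacv.cpp.opencv_imgproc.*;

public class RectFinder {
    public static int countRects(IplImage mask) {
        CvMemStorage storage = CvMemStorage.create();
        CvSeq contours = new CvSeq(null);
        cvFindContours(mask, storage, contours, Loader.sizeof(CvContour.class),
                CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE);
        int rects = 0;
        for (CvSeq c = contours; c != null && !c.isNull(); c = c.h_next()) {
            // Approximate the contour; 2% of the perimeter is a common epsilon.
            CvSeq poly = cvApproxPoly(c, Loader.sizeof(CvContour.class), storage,
                    CV_POLY_APPROX_DP, cvContourPerimeter(c) * 0.02, 0);
            if (poly.total() == 4 && cvCheckContourConvexity(poly) != 0) {
                rects++;  // poly's 4 points are the candidate corners
            }
        }
        return rects;
    }
}
```

An occluded bottom bar breaks this, as described above, because the contour no longer approximates to 4 vertices; detecting the vertical bars independently, as the auto-viewport approach does, avoids that failure mode.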