#1 | jwakeman (Mentor, FRC #0063 "Red Barons") | 23-03-2014, 17:55
WORKING: 2014 Driver's Station Vision Processing

Hi all. It may be a bit late in the season for this to be useful to most, but I want to share the Java/SmartDashboard-based vision processing that my team is using this year. It is basically a combination of the DaisyCV code that was posted by Team 341 HERE and the 2014VisionSampleProject available in NetBeans. The DaisyCV code is set up as a SmartDashboard extension widget and uses OpenCV for the image processing operations. The 2014VisionSampleProject is intended to run onboard the robot and uses the NIVision API for its image processing. Our approach was to use the DaisyCV code as a base and update it with the specifics of this year's game, following the example laid out in the 2014VisionSampleProject.

I've attached the NetBeans project for our vision processing widget as a zip file. I will go through all the steps to import the project into NetBeans and get it up and running. Then I will explain how to deploy the widget for use with your Driver's Station/SmartDashboard setup for competition. Finally, I will highlight a couple of the key changes we made.


Importing and setting up the project in NetBeans.
1. If you don't already have the Java development tools installed (NetBeans, JDK, and FRC plugins), then you need to follow the instructions HERE.

2. Run the Smart Dashboard Standalone/Vision Installer 1.0.5.exe, which you can download HERE.

NOTE: Close NetBeans before running Installer 1.0.5.exe. If you install Smart Dashboard while NetBeans is open, you need to close and re-open NetBeans so it will see the additions made to the system PATH variable.

3. Import the 2014VisionSampleProject into NetBeans: File->New Project->Samples->FRC Java->2014VisionSampleProject. This is required to get the sample images to test the code against.

4. Unzip the attached Team63MachineVision.zip into your NetBeansProjects directory.

5. Import the Team63MachineVision project into NetBeans with File->Open Project

6. When you open the project, NetBeans will present you with a "Resolve Project Problems" window. Basically, you need to point it at the .jar files the project depends on. Here are the required .jar files and their paths on my machine.

SmartDashboard.jar:
Code:
C:\Program Files\SmartDashboard\SmartDashboard.jar
WPIJavaCV.jar:
Code:
C:\Program Files\SmartDashboard\extensions\lib\WPIJavaCV.jar
wpilibj.jar:
Code:
C:\Users\jdubbs\sunspotfrcsdk\lib\wpilibj.jar
NetworkTables_Client.jar:
Code:
C:\Users\jdubbs\sunspotfrcsdk\desktop-lib\networktables-desktop.jar
javacpp.jar:
Code:
C:\Program Files\SmartDashboard\extensions\lib\javacpp.jar
javacv-windows-x86.jar:
Code:
C:\Program Files\SmartDashboard\extensions\lib\javacv-windows-x86.jar
javacv.jar:
Code:
C:\Program Files\SmartDashboard\extensions\lib\javacv.jar
WPICameraExtension.jar:
Code:
C:\Program Files\SmartDashboard\extensions\WPICameraExtension.jar
7. The final problem to resolve (JDK_1.7x86) takes a bit more work. I am working on a Windows 7 64-bit machine. The SmartDashboard camera extension needs to run under a 32-bit Java virtual machine. I downloaded and installed jdk-7u51-windows-i586.exe from HERE. To add the platform in NetBeans, go to Tools->Java Platforms->Add Platform, point it at the path "C:\Program Files (x86)\Java\jdk1.7.0_51", and give it the name JDK_1.7x86. To select this platform for the project, right-click the Team63MachineVision project and, under Properties->Libraries->Java Platform, select JDK_1.7x86 from the drop-down list.

At this point you should be able to run and debug the project in NetBeans.
The project contains two source files: Team63VisionWidget.java and DaisyExtensions.java. Team63VisionWidget.java has a main() function, which can be used to run the widget stand-alone and lets you step into the code and debug it in NetBeans. Before you step into the code you need to give it an image to process. To do this, right-click the project in NetBeans, go to Properties->Run->Arguments, and enter the path to one of the sample images from the 2014VisionSampleProject. Make sure to put double quotes around the path. The path on my machine is:

Code:
"C:\Users\jdubbs\Documents\NetBeansProjects\Sample\VisionImages\2014 Vision Target\Right_27ft_On.jpg"
You can now use F7 to step into the code and F7 or F8 to step into/step over functions. The core function is:

Code:
 public WPIImage processImage(WPIColorImage rawImage)
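
For context, here is a bare-bones sketch of what a SmartDashboard camera extension widget looks like. This is just an illustration, not the attached code; the class name, NAME string, and comments are made up, and the import paths are the ones I believe the SmartDashboard 1.0.5 extension jars use.

Code:
import edu.wpi.first.smartdashboard.camera.WPICameraExtension;
import edu.wpi.first.wpijavacv.WPIColorImage;
import edu.wpi.first.wpijavacv.WPIImage;

public class ExampleVisionWidget extends WPICameraExtension
{
    // SmartDashboard shows this name in the View->Add menu
    public static final String NAME = "Example Target Tracker";

    @Override
    public WPIImage processImage(WPIColorImage rawImage)
    {
        // Threshold, find contours, and score targets here, then publish the
        // results to the robot over NetworkTables before returning a frame.
        return rawImage; // the image you return is what the widget displays
    }
}
Everything interesting in Team63VisionWidget happens inside that override; the frames from the Axis camera arrive as the rawImage argument.
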
There are probably only three items you need to change to make the code work for you.

1. Y_IMAGE_RES - This is based on the resolution of the images you are bringing back from the camera. This link talks about configuring various settings for the camera.

2. VIEW_ANGLE - This is based on which model of Axis camera you are using. The other two models are commented out in the code.

3. Most importantly you need to set the HSV threshold values to work with the camera ring light you selected. This line of code has the threshold ranges in it:
Code:
 opencv_core.cvInRangeS(hsv,opencv_core.cvScalar(160,120.0,100.0,0.0),opencv_core.cvScalar(190.0,255.0,200.0,0.0),thresh);
The values are given as a set of lower bounds for hue, saturation, and value, followed by a set of upper bounds for hue, saturation, and value. The fourth value in each cvScalar is not used and can just be set to 0.0. We are using a red ring light at this point, and that is what these ranges are set up for in the code. To work with the sample images, which use a green light, a hue range of 70-100 works pretty well. So, to illustrate, here is what this line of code should be changed to:

Code:
opencv_core.cvInRangeS(hsv,opencv_core.cvScalar(70.0,120.0,100.0,0.0),opencv_core.cvScalar(100.0,255.0,200.0,0.0),thresh);
In my experience you probably only need to change the hue upper and lower range for different colored ring lights. The saturation and value ranges given here seem to work well for various ring light colors and lighting conditions.

To fine-tune the color threshold values, capture an image of the vision target using your camera and ring light. This link talks about how to capture an image from the Axis camera through the web interface. Once you have an image captured and saved to your PC, pass it as an argument to the debugging session in NetBeans as described above. Then edit the following line of code to pass true when creating the Team63VisionWidget object.

Code:
Team63VisionWidget widget = new Team63VisionWidget(false);
becomes

Code:
Team63VisionWidget widget = new Team63VisionWidget(true);
This will cause several windows to pop up during various stages of the image processing...process. One of these windows will have the caption "Image". If you select this window and move your mouse around over the image, the Output window in the NetBeans IDE will show the HSV values of the pixel the mouse is hovering over. It will look like this:

Code:
H:84.0 S:254.0 V:166.0
H:84.0 S:254.0 V:165.0
H:84.0 S:254.0 V:166.0
H:84.0 S:254.0 V:166.0
H:84.0 S:254.0 V:166.0
Hover your mouse over the vision target and make note of the ranges of HSV values you see while on the target.
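
To make that concrete (the numbers here are invented for illustration), suppose hovering over the target prints hue values around 78-92, saturation above 200, and value between 120 and 220. You would pad those ranges a little and plug them into the same cvInRangeS call:

Code:
// Illustrative bounds only - derive yours from the values printed for YOUR target
opencv_core.cvInRangeS(hsv, opencv_core.cvScalar(70.0, 150.0, 100.0, 0.0),
                            opencv_core.cvScalar(100.0, 255.0, 255.0, 0.0), thresh);
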


This is how to deploy the widget for use with your Driver's Station/SmartDashboard setup for competition.

1. Create a file named LaunchSmartDashboard.cmd which contains the following text:
Code:
cd "C:\\Program Files\\SmartDashboard"
"C:\\Program Files (x86)\\Java\\jre7\\bin\\javaw.exe" -jar SmartDashboard.jar
Save this file to:
Code:
C:\Users\Public\Documents\FRC\LaunchSmartDashboard.cmd
2. Edit the file that tells the Driver's Station how to launch the dashboard so that it points to the file we just created.

You will be editing the following file:
Code:
C:\Users\Public\Documents\FRC\FRC DS Data Storage.ini
Change the DashboardCmdLine entry in the file to become this:
Code:
DashboardCmdLine = ""C:\\Users\\Public\\Documents\\FRC\\LaunchSmartDashboard.cmd""
3. Right-click the project in NetBeans and select "Clean and Build". Then copy the built Team63MachineVision jar file from:
Code:
C:\Users\jdubbs\Documents\NetBeansProjects\Team63MachineVision\dist
to:
Code:
C:\Program Files\SmartDashboard\extensions
4. Now when you launch the Driver's Station software, the SmartDashboard should be launched automatically. In the SmartDashboard you can add the widget by going to View->Add->Team63 Target Tracker.


OK! So now two things I think are improvements over the base DaisyCV code and one item which is a...non-improvement...over the 2014VisionSampleProject code.

The original DaisyCV code used the following set of operations to do the color threshold filtering of the image:

Code:
        opencv_core.cvSplit(hsv, hue, sat, val, null);

        // Threshold each component separately
        // Hue
        // NOTE: Red is at the end of the color space, so you need to OR together
        // a thresh and inverted thresh in order to get points that are red
        opencv_imgproc.cvThreshold(hue, bin, 60-15, 255, opencv_imgproc.CV_THRESH_BINARY);
        opencv_imgproc.cvThreshold(hue, hue, 60+15, 255, opencv_imgproc.CV_THRESH_BINARY_INV);

        // Saturation
        opencv_imgproc.cvThreshold(sat, sat, 200, 255, opencv_imgproc.CV_THRESH_BINARY);

        // Value
        opencv_imgproc.cvThreshold(val, val, 55, 255, opencv_imgproc.CV_THRESH_BINARY);

        // Combine the results to obtain our binary image which should for the most
        // part only contain pixels that we care about
        opencv_core.cvAnd(hue, bin, bin, null);
        opencv_core.cvAnd(bin, sat, bin, null);
        opencv_core.cvAnd(bin, val, bin, null);
We changed this to use the cvInRangeS function, which I believe is functionally equivalent and is much easier to look at in the code! You also don't have to treat the color red any differently than other colors!

Code:
        //cvInRangeS function does not require the frames to be split
        //and can directly function on multichannel images
        opencv_core.cvInRangeS(hsv,opencv_core.cvScalar(70.0,120.0,100.0,0.0),opencv_core.cvScalar(100.0,255.0,200.0,0.0),thresh);
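
One caveat, echoing the NOTE in the original DaisyCV snippet above: red sits at the wrap-around end of the hue scale. If your red ring light produces hues both near the top of the range and near 0, one way to keep the cvInRangeS style is to threshold twice and OR the masks. This is just a sketch and is not in the attached code; the second threshold image and the example ranges are assumptions you would adjust yourself.

Code:
// Allocate a second single-channel binary image once, same size as hsv
IplImage thresh2 = opencv_core.cvCreateImage(opencv_core.cvGetSize(hsv), 8, 1);

// High-hue reds (example range near the top of the scale)
opencv_core.cvInRangeS(hsv, opencv_core.cvScalar(160.0, 120.0, 100.0, 0.0),
                            opencv_core.cvScalar(180.0, 255.0, 200.0, 0.0), thresh);
// Low-hue reds (example range near 0) that wrap around past the top
opencv_core.cvInRangeS(hsv, opencv_core.cvScalar(0.0, 120.0, 100.0, 0.0),
                            opencv_core.cvScalar(10.0, 255.0, 200.0, 0.0), thresh2);
// Combine the two masks into the final binary image
opencv_core.cvOr(thresh, thresh2, thresh, null);
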
The second improvement we made was the ability to hover the mouse over the image and get the pixel HSV values for the purpose of fine-tuning the threshold values. This is accomplished with the following chunk of code:

Code:
            // Create a named window for the image and attach a mouse callback
            cvNamedWindow("Image", CV_WINDOW_AUTOSIZE);

            CvMouseCallback on_mouse = new CvMouseCallback()
            {
                @Override
                public void call(int event, int x, int y, int flags, com.googlecode.javacpp.Pointer param)
                {
                    // Remember the cursor position whenever the mouse moves
                    if (event == CV_EVENT_MOUSEMOVE)
                    {
                        x_co = x;
                        y_co = y;
                    }
                    // Read the pixel at the last known cursor position from the HSV image
                    opencv_core.CvScalar s = opencv_core.cvGet2D(hsv, y_co, x_co);
                    System.out.println("H:" + s.val(0) + " S:" + s.val(1) + " V:" + s.val(2)); // Print values
                }
            };
            cvSetMouseCallback("Image", on_mouse, null);
            cvShowImage("Image", input);
And finally, the one compromise we had to make when converting the 2014 vision sample code. That code likes to use the operation:

Code:
        rectLong = NIVision.MeasureParticle(image.image, particleNumber, false, MeasurementType.IMAQ_MT_EQUIVALENT_RECT_LONG_SIDE);
        rectShort = NIVision.MeasureParticle(image.image, particleNumber, false, MeasurementType.IMAQ_MT_EQUIVALENT_RECT_SHORT_SIDE);
and gives an explanation of:

Code:
This method uses the equivalent rectangle sides to determine aspect ratio as it performs better as the target gets skewed by moving to the left or right.
We could not find an equivalent operation in the OpenCV API, so we just used the WPIPolygon objects directly when looking at the length and width of the particles detected in the image.
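
For what it's worth, here is a rough sketch of what that compromise looks like, assuming the WPIPolygon getWidth()/getHeight() accessors that DaisyCV already uses. The expected ratio and tolerance below are placeholders for illustration, not measured values from our code.

Code:
// polygon is a WPIPolygon found earlier in the pipeline (as in DaisyCV)
final double EXPECTED_RATIO  = 23.5 / 4.0; // placeholder aspect ratio
final double RATIO_TOLERANCE = 1.0;        // placeholder tolerance

double ratio = (double) polygon.getWidth() / (double) polygon.getHeight();
boolean looksLikeTarget = Math.abs(ratio - EXPECTED_RATIO) < RATIO_TOLERANCE;
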

If anyone attempts to use this code or follow this guide and has trouble, feel free to post your questions here and I will do my best to answer them. I would also be interested to know if anyone is able to successfully use this code on their robot. Good luck, teams!
Attached Files
File Type: zip Team63MachineVision.zip (88.4 KB)
#2 | nydnh01 (Naresh Hing, Alumni, FRC #1601 "Quantum Samurai") | 27-03-2014, 15:14

Under DaisyExtensions I am receiving errors on the return statements stating that the method is not public and cannot be accessed outside the package.
#3 | Jared Russell (Engineer, FRC #0254 "The Cheesy Poofs" / FRC #0341 "Miss Daisy") | 27-03-2014, 15:49

Awesome job! I'm glad to see that people are still using this. You did a great job of describing precisely how to get the environment set up, which is the aspect that I get (by far) the most emails and PMs about.

The extension using the mouse is actually something that we did in the 2013 version of DaisyCV (which I tried to upload to CD, but got an error...still looking into it). We would click in the frame where we were actually shooting the frisbees so that we could calibrate the vertical and horizontal offset of the shooter. Great for cases where the camera was bumped, a shooter wheel wore in, etc.
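
(For anyone curious, here is a hedged sketch of how such a click-to-calibrate hook might look, patterned on the CvMouseCallback code in the first post. The field names are invented for illustration and this is not the actual 2013 DaisyCV code.)

Code:
// Inside a CvMouseCallback like the one shown earlier in this thread:
// on a left click, remember where the shot actually landed in the frame so
// the aiming math can use that point instead of a hard-coded image center.
if (event == CV_EVENT_LBUTTONDOWN)
{
    calibrationX = x; // invented field names, stored for later offset math
    calibrationY = y;
    System.out.println("Aim point set to (" + x + ", " + y + ")");
}
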
#4 | jwakeman | 27-03-2014, 16:33

Quote:
Originally Posted by nydnh01
Under the Daisy Extensions I am receiving errors on the return statements stating the method is not public and cannot be accessed outside the package.
Hmm... not sure why this would happen for you. The DaisyExtensions class and all the methods inside it are public, so they should be accessible. Have you made any changes to the code?
#5 | jwakeman | 27-03-2014, 16:39

Quote:
Originally Posted by Jared Russell
I'm glad to see that people are still using this.
Kudos to you! I haven't tried all the various options (RoboRealm, NIVision Assistant, etc.), but for me your SmartDashboard extension was the most straightforward way to get offboard vision processing working.

Quote:
Originally Posted by Jared Russell
We would click in the frame where we were actually shooting the frisbees so that we could calibrate the vertical and horizontal offset of the shooter. Great for cases where the camera was bumped, a shooter wheel wore in, etc.
Neat idea!
#6 | nydnh01 | 27-03-2014, 18:45

Quote:
Originally Posted by jwakeman
Hmm..not sure why this would happen for you. The DaisyExtensions class and all the methods inside are public so they should be accessible. Have you made any changes to the code??
I have not made any changes to the code.
#7 | jwakeman | 27-03-2014, 20:09

Quote:
Originally Posted by nydnh01
I have not made any changes to the code.
Which version of NetBeans and which JDK are you using? Maybe just try a clean and build?

#8 | nydnh01 | 27-03-2014, 20:33

I am using JDK 1.7 and NetBeans 8. Can you send me your WPIJavaCV.jar file so I can see if there is anything different in the source code?

#9 | jwakeman | 28-03-2014, 10:43

Quote:
Originally Posted by nydnh01
I am using JDK 1.7 and Netbeans v8. Can you send me your WPIJavaCV.jar file to see if there is anything different in the source code.
The WPIJavaCV.jar should come from installing the SmartDashboard 1.0.5 and will appear in C:\Program Files\SmartDashboard\extensions\lib. I can still send you the one on my system if you want. How should I send it?