#1
WORKING: 2014 Driver's Station Vision Processing
Hi all. It may be a bit late in the season for this to be useful to most, but I want to share the Java/SmartDashboard-based vision processing that my team is using this year. It is basically a combination of the DaisyCV code posted by Team 341 HERE and the 2014VisionSampleProject available in NetBeans. The DaisyCV code is set up as a SmartDashboard extension widget and uses OpenCV for its image processing operations. The 2014VisionSampleProject is intended to run onboard the robot and uses the NIVision API for its image processing operations. Our approach was to use the DaisyCV code as a base and update it with the specifics of this year's game, following the example laid out in the 2014VisionSampleProject.
I've attached the NetBeans project for our vision processing widget as a zip file. I will go through all the steps to import the project into NetBeans and get it up and running. Then I will explain how to deploy the widget for use with your Driver's Station/SmartDashboard setup at competition. Finally, I will highlight a couple of the key changes we made.

Importing and setting the project up in NetBeans:

1. If you don't already have the Java development tools installed (NetBeans, JDK, and FRC plugins), follow the instructions HERE.

2. Run the Smart Dashboard Standalone/Vision Installer 1.0.5.exe, which you can download HERE. NOTE: Close NetBeans before running Installer 1.0.5.exe. If you install Smart Dashboard while NetBeans is open, you need to close and re-open NetBeans so it will see the additions made to the system PATH variable.

3. Import the 2014VisionSampleProject into NetBeans: File->New Project->Samples->FRC Java->2014VisionSampleProject. This is required to get the sample images to test the code against.

4. Unzip the attached Team63MachineVision.zip into your NetBeansProjects directory.

5. Import the Team63MachineVision project into NetBeans with File->Open Project.

6. When you open the project, NetBeans will present you with a window that says "Resolve Project Problems". Basically you need to point it toward the .jar files the project depends on. Here are the required .jar files and the paths to them on my machine:

Code:
C:\Program Files\SmartDashboard\SmartDashboard.jar
C:\Program Files\SmartDashboard\extensions\lib\WPIJavaCV.jar
C:\Users\jdubbs\sunspotfrcsdk\lib\wpilibj.jar
C:\Users\jdubbs\sunspotfrcsdk\desktop-lib\networktables-desktop.jar
C:\Program Files\SmartDashboard\extensions\lib\javacpp.jar
C:\Program Files\SmartDashboard\extensions\lib\javacv-windows-x86.jar
C:\Program Files\SmartDashboard\extensions\lib\javacv.jar
C:\Program Files\SmartDashboard\extensions\WPICameraExtension.jar

At this point you should be able to run and debug the project in NetBeans. The project contains two source files: Team63VisionWidget.java and DaisyExtensions.java. Team63VisionWidget.java has a main() function which can be used to run the widget stand-alone, allowing you to step into the project and debug it in NetBeans. Before you step into the code, you need to give it an image to process. To do this, right-click the project in NetBeans, go to Properties->Run->Arguments, and enter the path to one of the sample images from the 2014VisionSampleProject. Make sure to put double quotes around the path. The path on my machine is:

Code:
"C:\Users\jdubbs\Documents\NetBeansProjects\Sample\VisionImages\2014 Vision Target\Right_27ft_On.jpg" Code:
public WPIImage processImage(WPIColorImage rawImage) 1. Y_IMAGE_RES - This is the based on the resolution of the images you are bringing back from the camera. This link talks about configuring various settings for the camera. 2. VIEW_ANGLE - This is based on which model of Axis camera you are using. The other two models are commented out in the code. 3. Most importantly you need to set the HSV threshold values to work with the camera ring light you selected. This line of code has the threshold ranges in it: Code:
opencv_core.cvInRangeS(hsv,opencv_core.cvScalar(160,120.0,100.0,0.0),opencv_core.cvScalar(190.0,255.0,200.0,0.0),thresh);

Code:
opencv_core.cvInRangeS(hsv,opencv_core.cvScalar(70.0,120.0,100.0,0.0),opencv_core.cvScalar(100.0,255.0,200.0,0.0),thresh);

To fine-tune the color threshold values, you should capture an image of the vision target using your camera and ring light. This link talks about how to capture an image from the Axis camera through the web interface. Once you have an image captured and saved to your PC, pass it as an argument to the debugging session in NetBeans as described above. Then edit the following line of code to pass true when creating the Team63VisionWidget object.

Code:
Team63VisionWidget widget = new Team63VisionWidget(false);

so that it reads:

Code:
Team63VisionWidget widget = new Team63VisionWidget(true);

With the flag set to true, the widget opens a window showing the image, and moving the mouse over it prints the HSV values of the pixel under the cursor to the console. Move the mouse over the pixels of the vision target and you will see output like this:

Code:
H:84.0 S:254.0 V:166.0
H:84.0 S:254.0 V:165.0
H:84.0 S:254.0 V:166.0
H:84.0 S:254.0 V:166.0
H:84.0 S:254.0 V:166.0

This is how to deploy the widget for use with your Driver's Station/SmartDashboard setup at competition:

1. Create a file named LaunchSmartDashboard.cmd which contains the following text:

Code:
cd "C:\\Program Files\\SmartDashboard"
"C:\\Program Files (x86)\\Java\\jre7\\bin\\javaw.exe" -jar SmartDashboard.jar

2. Save the file to:

Code:
C:\Users\Public\Documents\FRC\LaunchSmartDashboard.cmd

3. Point the Driver's Station at this launcher. You will be editing the following file:

Code:
C:\Users\Public\Documents\FRC\FRC DS Data Storage.ini

4. In that file, set the DashboardCmdLine entry to:

Code:
DashboardCmdLine = ""C:\\Users\\Public\\Documents\\FRC\\LaunchSmartDashboard.cmd""

5. Build the project in NetBeans, then copy the .jar produced by the build from:

Code:
C:\Users\jdubbs\Documents\NetBeansProjects\Team63MachineVision\dist

to:

Code:
C:\Program Files\SmartDashboard\extensions

OK! So now, two things I think are improvements over the base DaisyCV code, and one item which is a...non-improvement...over the 2014VisionSampleProject code. The original DaisyCV code used the following set of operations to do the color threshold filtering of the image:

Code:
opencv_core.cvSplit(hsv, hue, sat, val, null);
// Threshold each component separately
// Hue
// NOTE: Red is at the end of the color space, so you need to OR together
// a thresh and inverted thresh in order to get points that are red
opencv_imgproc.cvThreshold(hue, bin, 60-15, 255, opencv_imgproc.CV_THRESH_BINARY);
opencv_imgproc.cvThreshold(hue, hue, 60+15, 255, opencv_imgproc.CV_THRESH_BINARY_INV);
// Saturation
opencv_imgproc.cvThreshold(sat, sat, 200, 255, opencv_imgproc.CV_THRESH_BINARY);
// Value
opencv_imgproc.cvThreshold(val, val, 55, 255, opencv_imgproc.CV_THRESH_BINARY);
// Combine the results to obtain our binary image which should for the most
// part only contain pixels that we care about
opencv_core.cvAnd(hue, bin, bin, null);
opencv_core.cvAnd(bin, sat, bin, null);
opencv_core.cvAnd(bin, val, bin, null);
The first improvement: we replaced all of this with a single call.

Code:
//cvInRangeS function does not require the frames to be split
//and can directly function on multichannel images
opencv_core.cvInRangeS(hsv,opencv_core.cvScalar(70.0,120.0,100.0,0.0),opencv_core.cvScalar(100.0,255.0,200.0,0.0),thresh);
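If it helps to see why the two approaches are interchangeable, here is a small plain-Java sketch (no OpenCV dependency; class and method names are mine) that models both the per-channel threshold-and-AND pipeline above, using DaisyCV's 60±15 hue bounds, and a single range test, then checks that they agree on every scalar HSV value:

```java
// Scalar model of the two thresholding approaches. cvThreshold with
// CV_THRESH_BINARY passes values strictly greater than the threshold;
// CV_THRESH_BINARY_INV passes values less than or equal to it.
public class ThresholdEquivalence {
    // DaisyCV-style: threshold each channel separately, then AND the results.
    static boolean splitAndThreshold(int h, int s, int v) {
        boolean hueOk = h > 60 - 15 && h <= 60 + 15; // BINARY and BINARY_INV pair
        boolean satOk = s > 200;                     // BINARY on saturation
        boolean valOk = v > 55;                      // BINARY on value
        return hueOk && satOk && valOk;
    }

    // Single-test style, like one cvInRangeS call with matching bounds.
    static boolean inRange(int h, int s, int v) {
        return h > 45 && h <= 75 && s > 200 && v > 55;
    }

    public static void main(String[] args) {
        // Exhaustively compare the two pipelines on sampled HSV triples.
        for (int h = 0; h < 256; h++)
            for (int s = 0; s < 256; s += 5)
                for (int v = 0; v < 256; v += 5)
                    if (splitAndThreshold(h, s, v) != inRange(h, s, v))
                        throw new AssertionError("mismatch at " + h + "," + s + "," + v);
        System.out.println("pipelines agree");
    }
}
```

The actual widget uses different bounds (70-100 hue for the green ring light), but the structure of the comparison is the same.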
The second improvement is the debug mode described above: a window with a mouse callback that prints the HSV values of the pixel under the cursor.

Code:
cvNamedWindow("Image",CV_WINDOW_AUTOSIZE);
CvMouseCallback on_mouse = new CvMouseCallback()
{
@Override
public void call(int event, int x, int y, int flags,com.googlecode.javacpp.Pointer param)
{
if (event == CV_EVENT_MOUSEMOVE)
{
x_co = x;
y_co = y;
}
opencv_core.CvScalar s=opencv_core.cvGet2D(hsv,y_co,x_co);
System.out.println( "H:"+ s.val(0) + " S:" + s.val(1) + " V:" + s.val(2));//Print values
}
};
cvSetMouseCallback("Image", on_mouse, null);
cvShowImage("Image", input);
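The values this callback prints can also be sanity-checked against your threshold bounds without rerunning the widget. A minimal plain-Java model (class and method names are mine; the bounds are the green-ring-light values from earlier, and the inclusive-lower/exclusive-upper convention follows the OpenCV C API documentation for cvInRangeS):

```java
// Plain-Java model of cvInRangeS for a single pixel, for checking the
// H/S/V values printed by the debug mouse callback against threshold bounds.
public class HsvCheck {
    // Green-ring-light bounds from the cvInRangeS call in the post.
    static final double[] LOWER = {70.0, 120.0, 100.0};
    static final double[] UPPER = {100.0, 255.0, 200.0};

    // Returns true when every channel lies in [lower, upper), the range
    // convention documented for the OpenCV C API's cvInRangeS.
    static boolean inRange(double h, double s, double v) {
        double[] px = {h, s, v};
        for (int c = 0; c < 3; c++) {
            if (px[c] < LOWER[c] || px[c] >= UPPER[c]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The sample debug printout H:84.0 S:254.0 V:166.0 falls inside the bounds.
        System.out.println(inRange(84.0, 254.0, 166.0)); // prints "true"
        System.out.println(inRange(160.0, 254.0, 166.0)); // red hue, prints "false"
    }
}
```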
Finally, the non-improvement relative to the 2014VisionSampleProject. The sample project measures the target's aspect ratio using NIVision's equivalent rectangle sides:

Code:
rectLong = NIVision.MeasureParticle(image.image, particleNumber, false, MeasurementType.IMAQ_MT_EQUIVALENT_RECT_LONG_SIDE);
rectShort = NIVision.MeasureParticle(image.image, particleNumber, false, MeasurementType.IMAQ_MT_EQUIVALENT_RECT_SHORT_SIDE);
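For anyone trying to reproduce these measurements in OpenCV, my reading of the NI-IMAQ documentation (treat this as an assumption) is that the equivalent rectangle is the rectangle with the same area and perimeter as the particle, which pins down its two side lengths. A hedged sketch, with class and method names of my own:

```java
// Sketch of IMAQ_MT_EQUIVALENT_RECT_LONG_SIDE / _SHORT_SIDE: the sides x, y of
// the rectangle with the particle's area A and perimeter P satisfy
// x + y = P/2 and x * y = A, so they are the roots of a quadratic.
public class EquivalentRect {
    // Returns {longSide, shortSide} of the rectangle with the given
    // area and perimeter.
    static double[] sides(double area, double perimeter) {
        double halfSum = perimeter / 4.0;             // (x + y) / 2
        double disc = halfSum * halfSum - area;       // ((x - y) / 2)^2
        double halfDiff = Math.sqrt(Math.max(disc, 0.0));
        return new double[] { halfSum + halfDiff, halfSum - halfDiff };
    }

    public static void main(String[] args) {
        // A 40x10 rectangle is its own equivalent rectangle.
        double[] s = sides(400.0, 100.0);
        System.out.println(s[0] + " x " + s[1]); // prints "40.0 x 10.0"
        System.out.println("aspect ratio: " + (s[0] / s[1]));
    }
}
```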
This method uses the equivalent rectangle sides to determine the aspect ratio because it performs better as the target gets skewed when you move to the left or right.

If anyone attempts to use this code or follow this guide and runs into trouble, feel free to post your questions here and I will do my best to answer them. I would also be interested to know if anyone is able to successfully use this code on their robot. Good luck, teams!
#2
Re: WORKING: 2014 Driver's Station Vision Processing
Under DaisyExtensions, I am receiving errors on the return statements stating that the method is not public and cannot be accessed outside the package.
#3
Re: WORKING: 2014 Driver's Station Vision Processing
Awesome job! I'm glad to see that people are still using this. You did a great job of describing precisely how to get the environment set up, which is the aspect that I get (by far) the most emails and PMs about.
The extension using the mouse is actually something that we did in the 2013 version of DaisyCV (which I tried to upload to CD, but got an error...still looking into it). We would click in the frame where we were actually shooting the frisbees so that we could calibrate the vertical and horizontal offset of the shooter. Great for cases where the camera was bumped, a shooter wheel wore in, etc.
#4
Re: WORKING: 2014 Driver's Station Vision Processing
Kudos to you! I haven't tried all the various options (RoboRealm, NIVision Assistant, etc.), but for me your SmartDashboard extension was the most straightforward way to get offboard vision processing working.
Neat idea!
#5
Re: WORKING: 2014 Driver's Station Vision Processing
Hmm... not sure why this would happen for you. The DaisyExtensions class and all the methods inside it are public, so they should be accessible. Have you made any changes to the code?
#6
Re: WORKING: 2014 Driver's Station Vision Processing
I have not made any changes to the code.
#7
Re: WORKING: 2014 Driver's Station Vision Processing
Which version of NetBeans and which JDK are you using? Maybe just try a clean and build?
Last edited by jwakeman : 27-03-2014 at 20:11.
#8
Re: WORKING: 2014 Driver's Station Vision Processing
I am using JDK 1.7 and NetBeans 8. Can you send me your WPIJavaCV.jar file so I can see if there is anything different in the source code?
Last edited by nydnh01 : 28-03-2014 at 00:27.
#9
Re: WORKING: 2014 Driver's Station Vision Processing
The WPIJavaCV.jar should come from installing SmartDashboard 1.0.5 and will appear in C:\Program Files\SmartDashboard\extensions\lib. I can still send you the one on my system if you want. How should I send it?