WORKING: 2014 Driver's Station Vision Processing
1 Attachment(s)
Hi all. It may be a bit late in the season for this to be useful to most, but I want to share the Java/SmartDashboard-based vision processing that my team is using this year. It is basically a combination of the DaisyCV code that was posted by Team 341 HERE and the 2014VisionSampleProject available in NetBeans. The DaisyCV code is set up as a SmartDashboard extension widget and uses OpenCV to do the image processing operations. The 2014VisionSampleProject is intended to be run onboard the robot and uses the NIVision API for its image processing operations. Our approach was to use the DaisyCV code as a base and update it with the specifics for this year's game, following the example laid out in the 2014VisionSampleProject.
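Before the setup steps, here is a quick sketch of the core operation the whole pipeline revolves around: the HSV in-range color threshold. This is plain Java written purely for illustration (the class and method names are mine, not from the widget, and no OpenCV is used); the bounds are the hue 70-100 range quoted later in the post, and it performs the same per-pixel test that OpenCV's cvInRangeS applies.

```java
// Illustrative sketch of an HSV in-range threshold, the per-pixel test
// that cvInRangeS performs. Class and method names are hypothetical.
public class HsvThresholdSketch {
    // True when (h, s, v) falls inside [lo, hi] on every channel.
    static boolean inRange(double h, double s, double v,
                           double[] lo, double[] hi) {
        return h >= lo[0] && h <= hi[0]
            && s >= lo[1] && s <= hi[1]
            && v >= lo[2] && v <= hi[2];
    }

    // Builds a binary mask (255 = target-colored, 0 = background) from
    // parallel arrays of H, S, and V pixel values.
    static int[] threshold(double[] h, double[] s, double[] v,
                           double[] lo, double[] hi) {
        int[] mask = new int[h.length];
        for (int i = 0; i < h.length; i++) {
            mask[i] = inRange(h[i], s[i], v[i], lo, hi) ? 255 : 0;
        }
        return mask;
    }

    public static void main(String[] args) {
        // Bounds taken from one of the cvInRangeS calls quoted in the post.
        double[] lo = {70.0, 120.0, 100.0};
        double[] hi = {100.0, 255.0, 200.0};
        // Two sample pixels: the first is inside the range, the second is not.
        double[] h = {84.0, 20.0};
        double[] s = {254.0, 254.0};
        double[] v = {166.0, 166.0};
        int[] mask = threshold(h, s, v, lo, hi);
        System.out.println(mask[0] + " " + mask[1]); // prints "255 0"
    }
}
```

The real widget does this with one cvInRangeS call over the whole frame, but the logic per pixel is exactly this three-channel range check.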
I've attached the NetBeans project for our vision processing widget as a zip file. I will go through all the steps to import the project into NetBeans and get it up and running. Then I will explain how to deploy the widget for use with your Driver's Station/SmartDashboard setup for competition. Finally, I will highlight a couple of the key changes we made.

Importing and setting the project up in NetBeans:

1. If you don't already have the Java development tools installed (NetBeans, JDK, and FRC plugins) then you need to follow the instructions HERE.

2. Run the Smart Dashboard Standalone/Vision Installer 1.0.5.exe, which you can download HERE. NOTE: Close NetBeans before running Installer 1.0.5.exe. If you install Smart Dashboard while NetBeans is open you need to close and re-open NetBeans so it will see the additions made to the system PATH variable.

3. Import the 2014VisionSampleProject into NetBeans: File->New Project->Samples->FRC Java->2014VisionSampleProject. This is required to get the sample images to test the code against.

4. Unzip the attached Team63MachineVision.zip into your NetBeansProjects directory.

5. Import the Team63MachineVision project into NetBeans with File->Open Project.

6. When you open the project, NetBeans will present you with a window that says "Resolve Project Problems". Basically you need to point it toward the .jar files it is dependent on. Here are the required .jar files and the paths to them on my machine. Code:
C:\Program Files\SmartDashboard\SmartDashboard.jar
C:\Program Files\SmartDashboard\extensions\lib\WPIJavaCV.jar
C:\Users\jdubbs\sunspotfrcsdk\lib\wpilibj.jar
C:\Users\jdubbs\sunspotfrcsdk\desktop-lib\networktables-desktop.jar
C:\Program Files\SmartDashboard\extensions\lib\javacpp.jar
C:\Program Files\SmartDashboard\extensions\lib\javacv-windows-x86.jar
C:\Program Files\SmartDashboard\extensions\lib\javacv.jar
C:\Program Files\SmartDashboard\extensions\WPICameraExtension.jar

At this point you should be able to run and debug the project in NetBeans. The project contains two source files: Team63VisionWidget.java and DaisyExtensions.java. Team63VisionWidget.java has a main() function which can be used to run the widget stand-alone and lets you step into the project and debug it in NetBeans. Before you step into the code you need to give it an image to process. To do this, right-click the project in NetBeans and go to Properties->Run->Arguments and enter the path to one of the sample images from the 2014VisionSampleProject. Make sure to put double quotes around the path. The path on my machine is: Code:
"C:\Users\jdubbs\Documents\NetBeansProjects\Sample\VisionImages\2014 Vision Target\Right_27ft_On.jpg"

All of the image processing happens in this function: Code:
public WPIImage processImage(WPIColorImage rawImage)

There are a few things you will need to change for your setup:

1. Y_IMAGE_RES - This is based on the resolution of the images you are bringing back from the camera. This link talks about configuring various settings for the camera.

2. VIEW_ANGLE - This is based on which model of Axis camera you are using. The other two models are commented out in the code.

3. Most importantly, you need to set the HSV threshold values to work with the camera ring light you selected. This line of code has the threshold ranges in it: Code:
opencv_core.cvInRangeS(hsv, opencv_core.cvScalar(160.0, 120.0, 100.0, 0.0), opencv_core.cvScalar(190.0, 255.0, 200.0, 0.0), thresh);

and this alternate range is also in the code: Code:
opencv_core.cvInRangeS(hsv, opencv_core.cvScalar(70.0, 120.0, 100.0, 0.0), opencv_core.cvScalar(100.0, 255.0, 200.0, 0.0), thresh);

To fine-tune the color threshold values you should capture an image of the vision target using your camera and ring light. This link talks about how to capture an image from the Axis camera through the web interface. Once you have an image captured and saved to your PC, pass it as an argument to the debugging session in NetBeans as described above. Then edit the following line of code to pass true when creating the Team63VisionWidget object. Code:
Team63VisionWidget widget = new Team63VisionWidget(false);

So it becomes: Code:
Team63VisionWidget widget = new Team63VisionWidget(true);

Running with true brings up a debug window showing the processed image; clicking on a pixel in the image prints its HSV values, for example: Code:
H:84.0 S:254.0 V:166.0

This is how to deploy the widget for use with your Driver's Station/SmartDashboard setup for competition.

1. Create a file named LaunchSmartDashboard.cmd which contains the following text: Code:
cd "C:\\Program Files\\SmartDashboard"

2. Save the file as: Code:
C:\Users\Public\Documents\FRC\LaunchSmartDashboard.cmd

3. Tell the Driver's Station to launch it. You will be editing the following file: Code:
C:\Users\Public\Documents\FRC\FRC DS Data Storage.ini

Set the DashboardCmdLine entry to: Code:
DashboardCmdLine = ""C:\\Users\\Public\\Documents\\FRC\\LaunchSmartDashboard.cmd""

4. Copy the built widget .jar from the project's dist directory: Code:
C:\Users\jdubbs\Documents\NetBeansProjects\Team63MachineVision\dist

into the SmartDashboard extensions directory: Code:
C:\Program Files\SmartDashboard\extensions

OK! So now: two things I think are improvements over the base DaisyCV code, and one item which is a...non-improvement...over the 2014VisionSampleProject code. The original DaisyCV code used the following set of operations to do the color threshold filtering of the image: Code:
opencv_core.cvSplit(hsv, hue, sat, val, null);

followed by separate threshold operations on each channel. We replaced that with the single cvInRangeS call shown earlier: Code:
// cvInRangeS function does not require the frames to be split

The second change is the stand-alone debug mode, which opens a window to display the processed image: Code:
cvNamedWindow("Image", CV_WINDOW_AUTOSIZE);

The non-improvement is in how the target's aspect ratio is measured. The 2014VisionSampleProject uses NIVision's equivalent rectangle measurement: Code:
rectLong = NIVision.MeasureParticle(image.image, particleNumber, false, MeasurementType.IMAQ_MT_EQUIVALENT_RECT_LONG_SIDE);

This method uses the equivalent rectangle sides to determine aspect ratio, as it performs better as the target gets skewed by moving to the left or right.

If anyone attempts to use this code/follow this guide and has trouble, feel free to post your questions here and I will do my best to answer them. I would also be interested to know if anyone is able to successfully use this code for their robot. Good luck teams! |
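To make the equivalent-rectangle idea above concrete: the equivalent rectangle of a particle is the rectangle with the same area and perimeter as the particle, so its two sides are the roots of x^2 - (P/2)x + A = 0, where A is the area and P the perimeter. Here is a plain-Java sketch of that computation; the class and method names are mine, and this is an illustration of the math, not the NIVision implementation.

```java
// Illustrative sketch of the "equivalent rectangle" measurement: the
// rectangle with the same area A and perimeter P as the particle.
// From 2*(a + b) = P and a * b = A, the sides a and b are the roots
// of x^2 - (P/2)*x + A = 0. Names here are hypothetical.
public class EquivalentRectSketch {
    // Returns {longSide, shortSide} of the equivalent rectangle,
    // or null when no real solution exists (degenerate particle).
    static double[] equivalentRect(double area, double perimeter) {
        double halfP = perimeter / 2.0;
        double disc = halfP * halfP - 4.0 * area;
        if (disc < 0) return null;
        double root = Math.sqrt(disc);
        return new double[]{(halfP + root) / 2.0, (halfP - root) / 2.0};
    }

    // Aspect ratio as long side over short side, the quantity the
    // sample project scores against the known target proportions.
    static double aspectRatio(double area, double perimeter) {
        double[] sides = equivalentRect(area, perimeter);
        return sides[0] / sides[1];
    }

    public static void main(String[] args) {
        // A perfect 4x1 rectangle: area 4, perimeter 10 -> sides 4 and 1.
        double[] sides = equivalentRect(4.0, 10.0);
        System.out.println(sides[0] + " x " + sides[1]); // prints "4.0 x 1.0"
    }
}
```

The reason this beats a plain bounding box is that area and perimeter change slowly as the target skews with viewing angle, while a bounding box widens immediately.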
Re: WORKING: 2014 Driver's Station Vision Processing
In DaisyExtensions I am receiving errors on the return statements stating that the method is not public and cannot be accessed outside the package.
|
Re: WORKING: 2014 Driver's Station Vision Processing
Awesome job! I'm glad to see that people are still using this. You did a great job of describing precisely how to get the environment set up, which is the aspect that I get (by far) the most emails and PMs about.
The extension using the mouse is actually something that we did in the 2013 version of DaisyCV (which I tried to upload to CD, but got an error...still looking into it). We would click in the frame where we were actually shooting the frisbees so that we could calibrate the vertical and horizontal offset of the shooter. Great for cases where the camera was bumped, a shooter wheel wore in, etc. |
Re: WORKING: 2014 Driver's Station Vision Processing
I am using JDK 1.7 and NetBeans 8. Can you send me your WPIJavaCV.jar file so I can see if there is anything different in the source code?
|
Copyright © Chief Delphi