View Full Version : New Vision sample program
BradAMiller
21-01-2012, 13:19
There is a new vision sample program included with the latest update of WPILibJ (just posted on the update site). In the WPILib project's Documents section there is a paper describing how it works, along with some sample images to play with and the Vision Assistant script that was used to create the sample code.
Please provide some feedback on this sample and the paper.
Brad
This has already helped us a lot. Our team had been struggling for quite a while trying to figure out how to track but the paper and the sample code simplified it drastically! Thanks a bunch!
Can you please provide a link to the update site you are referring to?
Thanks
nonamedude
21-01-2012, 22:15
There was a thread about this, but here it is: http://firstforge.wpi.edu/sf/frs/do/viewRelease/projects.wpilib/frs.2012_java_update_for_frc.2012_frc_update_netbeans_modules
I've found the sample code but I can't find the document. Could someone provide the link?
Edit: Never mind, found it at http://firstforge.wpi.edu/sf/go/doc1304?nav=1
Will the Eclipse plugin be updated, too?
Jared Russell
21-01-2012, 23:32
Thanks, Brad - this is very helpful. We are still in the process of figuring out whether we want to do onboard or offboard image processing, but at least with this update, doing it onboard is a viable option!
rockytheworm
22-01-2012, 13:02
If we use a red LED ring, the sample code should work, correct? Is there anything we have to change for it to run correctly? We are having some trouble getting it to work...
BradAMiller
22-01-2012, 17:34
If we use a red LED ring, the sample code should work, correct? Is there anything we have to change for it to run correctly? We are having some trouble getting it to work...
I shot the images with a red ring light. Did you get it to work with the image downloaded into the cRIO and not using the camera?
Also, what problem are you having?
Brad
In regard to the earlier Eclipse question: the plug-ins will soon be updated to include this sample.
eddie12390
23-01-2012, 08:56
Would anyone be able to suggest a light ring to use? This could be extremely useful but my team does not currently have a light ring.
Would anyone be able to suggest a light ring to use? This could be extremely useful but my team does not currently have a light ring.
We got ours from AndyMark for 8 credits each.
docdavies
23-01-2012, 15:56
I don't seem to be able to download the "Detecting Vision Targets in C++ and Java" PDF from FirstForge.
Reasons?
Doc
BradAMiller
23-01-2012, 16:28
We got ours from AndyMark for 8 credits each.
You can also get them here: http://www.superbrightleds.com/cgi-bin/store/index.cgi?action=DispPage&Page2Disp=%2Fmini_tubes.htm
Use the angel eye lights. For extra brightness you can get two sizes and nest them.
Brad
BradAMiller
23-01-2012, 16:29
I don't seem to be able to download the "Detecting Vision Targets in C++ and Java" PDF from FirstForge.
Reasons?
Doc
Did you look here: http://firstforge.wpi.edu/sf/go/doc1304?nav=1
Brad
xmendude217
23-01-2012, 22:40
I have a question regarding the sample program: given that we have an image, what do we do with it? How can I begin to translate the image into motor movement?
Are there Javadocs for the new code?
I have a question regarding the sample program: given that we have an image, what do we do with it? How can I begin to translate the image into motor movement?
We base our movement off of center_mass_x_normalized.
Are there Javadocs for the new code?
Some of the methods involved don't seem to be set up for Javadocs.
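A minimal sketch of how center_mass_x_normalized could drive movement, assuming a simple proportional turn. The class name, gain, and clamp values here are illustrative only; they are not from the sample program and would need tuning on a real robot:

```java
// Turn a normalized target x position (-1.0 = far left, +1.0 = far right,
// 0.0 = centered) into a turn command. KP and MAX_TURN are made-up values.
public class TurnToTarget {
    static final double KP = 0.4;        // proportional gain (tune on the robot)
    static final double MAX_TURN = 0.25; // clamp so the robot can't spin out

    static double turnCommand(double centerMassXNormalized) {
        double turn = KP * centerMassXNormalized;
        // clamp to [-MAX_TURN, MAX_TURN]
        return Math.max(-MAX_TURN, Math.min(MAX_TURN, turn));
    }
}
```

On the robot, the result of turnCommand(report.center_mass_x_normalized) would feed something like RobotDrive's arcadeDrive rotate input; increase KP until the robot starts to oscillate around the target, then back off.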
kamehameHA
25-01-2012, 18:44
Is there a NetBeans plugin update? So far it looks like the CriteriaCollection class, which is used in the Java sample code, doesn't exist.
Is there a NetBeans plugin update? So far it looks like the CriteriaCollection class, which is used in the Java sample code, doesn't exist.
The plugin update came at the same time as the new sample.
If your NetBeans isn't set to check for updates at every startup/day, go into Plugins and reload the Updates page.
We're doing something very similar to the sample code (thresholding, convex hull, particle analysis), and we're finding it takes about a second to process each image on the cRIO, which is far too slow for automated targeting.
Has anyone else been able to get the code to work onboard, with acceptable performance?
Thanks,
Steve (software mentor for team 649)
Hi,
We copied the sample code into our project and are using the sample images provided. When we run it, we get zero particles found. I added some extra code to print a particle report after every image operation and found that there are particles before the final filter is run (see below).
I also had a question about the following lines:
cc = new CriteriaCollection();
cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false);
cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false);
I read through the vision concepts doc and found descriptions of these criteria, but I was not sure whether they mean:
a) crop the entire image to the described rectangle and return particles in the cropped area, or
b) only return particles whose bounding rectangles fit within the described size range, searching the entire image.
Here's the code and output. I am using 10ft2.jpg in this example output. I also tried the other xxft2.jpg images and got the same result: the last filter found zero particles.
BinaryImage bigObjectsImage = thresholdImage.removeSmallObjects(false, 2);
printParticleReports("bigobj", bigObjectsImage);
// fill in occluded rectangles
BinaryImage convexHullImage = bigObjectsImage.convexHull(false);
printParticleReports("convexhull", convexHullImage);
// find filled in rectangles
BinaryImage filteredImage = convexHullImage.particleFilter(cc);
printParticleReports("filtered", filteredImage);
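For reference, here is a self-contained stand-in for a printParticleReports-style helper. The Report class is hypothetical and only mirrors a few fields of WPILibJ's ParticleAnalysisReport so the formatting can be exercised off the cRIO; the real helper would iterate over the reports returned by the BinaryImage:

```java
// Hypothetical stand-in for the printParticleReports helper used above.
// Report mirrors a few ParticleAnalysisReport fields (image size, center
// of mass) purely for illustration.
public class ReportPrinter {
    static class Report {
        int imageWidth, imageHeight, centerX, centerY;
        Report(int width, int height, int cx, int cy) {
            imageWidth = width; imageHeight = height; centerX = cx; centerY = cy;
        }
        // same convention as center_mass_x_normalized: -1.0 .. +1.0,
        // 0.0 when the particle is horizontally centered
        double normalizedX() { return 2.0 * centerX / imageWidth - 1.0; }
    }

    static String format(String stage, Report[] reports) {
        StringBuffer out = new StringBuffer("* " + stage + " (" + reports.length + " particles)\n");
        for (int i = 0; i < reports.length; i++) {
            Report r = reports[i];
            out.append("  (" + (i + 1) + "/" + reports.length + ") center ( "
                    + r.centerX + " , " + r.centerY + " ) normalized x "
                    + r.normalizedX() + "\n");
        }
        return out.toString();
    }
}
```

As a sanity check, a 640x480 frame with center of mass at x=319 normalizes to about -0.003125, matching the values in the log below.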
The output came through interleaved on the console (see this thread: http://www.chiefdelphi.com/forums/showthread.php?t=101438). Untangled, it reads:
[cRIO]
[cRIO] * bigobj *********************************************
[cRIO] Particle(1/2)
[cRIO] Particle Report:
[cRIO] Image Height : 480
[cRIO] Image Width : 640
[cRIO] Center of mass : ( 319 , 240 )
[cRIO] normalized : ( -0.0031250000000000444 , 0.0 )
[cRIO] Area : 301746.0
[cRIO] percent : 98.224609375
[cRIO] Bounding Rect : ( 0 , 0 ) 640*480
[cRIO] Quality : 98.33023755987878
[cRIO]
[cRIO] Particle(2/2)
[cRIO] Particle Report:
[cRIO] Image Height : 480
[cRIO] Image Width : 640
[cRIO] Center of mass : ( 323 , 31 )
[cRIO] normalized : ( 0.009374999999999911 , -0.8708333333333333 )
[cRIO] Area : 272.0
[cRIO] percent : 0.08854166666666666
[cRIO] Bounding Rect : ( 294 , 29 ) 55*7
[cRIO] Quality : 95.77464788732394
[cRIO]
[cRIO] 2 146.245212
[cRIO] **********************************************
[cRIO]
[cRIO] * convexhull *********************************************
[cRIO] Particle(1/1)
[cRIO] Particle Report:
[cRIO] Image Height : 480
[cRIO] Image Width : 640
[cRIO] Center of mass : ( 319 , 239 )
[cRIO] normalized : ( -0.0031250000000000444 , -0.004166666666666652 )
[cRIO] Area : 307200.0
[cRIO] percent : 100.0
[cRIO] Bounding Rect : ( 0 , 0 ) 640*480
[cRIO] Quality : 100.0
[cRIO]
[cRIO] 1 146.837495
[cRIO] **********************************************
[cRIO]
[cRIO] * filtered *********************************************
[cRIO] 0 147.054589
[cRIO] **********************************************
[cRIO]
Turns out someone had changed the low R param of the RGB filter (from 25 to 40) while switching between our green-light filter and the sample's red-light filter. I changed it back to 25 and was able to run the sample code in our robot project and find targets. Once that worked, we concluded that the criteria lines add a filter that keeps only particles whose bounding rectangles fall between 30x40 and 400x400 pixels:
cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400, false);
cc.addCriteria(MeasurementType.IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400, false);
These criteria do not crop the image. If that explanation of the bounding rect width/height is incorrect, please let me know.
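To illustrate interpretation (b), here is a plain-Java sketch of what the two addCriteria calls amount to, with a hypothetical Particle class standing in for the NI IMAQ types (the real filtering happens inside the library via particleFilter): particles are kept or dropped by bounding-box size, and nothing is cropped.

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration of the two criteria: keep only particles whose
// bounding rectangle is 30-400 px wide AND 40-400 px tall.
// The Particle class is hypothetical and stands in for the IMAQ types.
public class CriteriaDemo {
    static class Particle {
        final int width, height;
        Particle(int width, int height) { this.width = width; this.height = height; }
    }

    static List<Particle> filter(List<Particle> particles) {
        List<Particle> kept = new ArrayList<Particle>();
        for (Particle p : particles) {
            boolean widthOk = p.width >= 30 && p.width <= 400;    // IMAQ_MT_BOUNDING_RECT_WIDTH, 30, 400
            boolean heightOk = p.height >= 40 && p.height <= 400; // IMAQ_MT_BOUNDING_RECT_HEIGHT, 40, 400
            if (widthOk && heightOk) {
                kept.add(p); // particle passes; the image itself is never cropped
            }
        }
        return kept;
    }
}
```

Against the report earlier in the thread, the 640*480 full-frame particle fails both 400-pixel upper bounds and the 55*7 particle fails the 40-pixel height minimum, which is consistent with the filtered stage finding zero particles.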
dfischer
18-02-2012, 13:47
We are a second year team that is trying at this late stage in the build cycle to do some image analysis. We have a high level question to help us visualize how to make this work. We are using Java.
Where is the best place to do the image analysis? It seems we have a couple of possibilities:
1. Send images from the camera to the cRIO and run the image analysis code on the cRIO.
2. Send images from the camera to a laptop and run the image analysis code on the laptop.
3. Send images from the camera to the cRIO, pull them to a laptop over FTP, and run the image analysis on the laptop.
Are teams using all of these alternatives? Which one is most popular?
Recommendations?
Other alternatives we haven't thought of?
thanks
Dave Fischer - mentor
Jasper Indiana team 3559
vBulletin® v3.6.4, Copyright ©2000-2017, Jelsoft Enterprises Ltd.