Vision sample code

Hey guys, just a quick but annoying thing that’s been messing with our team. We’ve been trying to use the Vision Sample Project 2014 with the M1013 camera. We’ve looked through it many times, and it appears that it should just output hot or not hot, correct? Am I wrong in that assumption of what the sample code does? Secondly, is there anything you have to set up in the code? We set the camera IP to 10.3.84.11 (Sparky 384 w00t w00t!) and the compression to 30, and followed all the other steps the WPILib ScreenSteps tutorial tells you to do on this page:
https://wpilib.screenstepslive.com/s/3120/m/8731/l/90348-camera-settings

So, in conclusion: what exactly is the Vision Sample Project supposed to do?

What are the requirements, if any, for setting up the camera that aren’t detailed in the ScreenSteps?

Are there any code modifications you have to make to the Vision Sample Project 2014 before building and deploying?

Thank you all very much!

https://wpilib.screenstepslive.com/s/3120/m/8731/l/96278-axis-m1013-camera-compatibility

The M1013 has a wider-angle lens. There are definitely pros and cons to using it. The out-of-the-box code was designed for cameras with a narrower field of view. However, you should be able to see both targets from your side of the truss, which most likely won’t be true for the other cameras.

Also, the vision code has a lot of other stuff in there. If you are solely trying to detect hot or not, you can do that in a much simpler fashion, i.e. look at the height-to-width ratio of each particle and see if you have a particle that appears to match the hot reflector.

Our team is using a min ratio and min width to detect hot or not.
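
Roughly like this sketch, using the ParticleAnalysisReport fields from the 2014 WPILib; the MIN_RATIO and MIN_WIDTH numbers here are placeholders you would tune for your own camera and lighting:

#include <vector>
#include "WPILib.h"

// Decide hot/not from the already-thresholded-and-filtered binary image, using
// only a minimum particle width and a minimum width-to-height ratio.
bool IsHotTarget(BinaryImage *filteredImage)
{
    const double MIN_RATIO = 3.0;  // placeholder: the horizontal hot target is much wider than tall
    const int MIN_WIDTH = 30;      // placeholder: pixels, rejects small noise particles

    std::vector<ParticleAnalysisReport> *reports =
            filteredImage->GetOrderedParticleAnalysisReports();
    bool hot = false;
    for (unsigned int i = 0; i < reports->size(); i++) {
        ParticleAnalysisReport &r = reports->at(i);
        if (r.boundingRect.height <= 0) continue;
        double ratio = (double)r.boundingRect.width / (double)r.boundingRect.height;
        if (r.boundingRect.width >= MIN_WIDTH && ratio >= MIN_RATIO) {
            hot = true;  // a wide, short particle looks like the lit horizontal reflector
            break;
        }
    }
    delete reports;
    return hot;
}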

OK. How would we go about getting just a basic hot-or-not result? I don’t even know where to start :eek:

You need to set up NetConsole to view the output of the printf statements. Example:
particle: 0 is a Vertical Target centerX: 219 centerY: 73
Scores rect: 71.474359 ARvert: 68.804771
ARhoriz: 1.463931
particle: 1 is a Horizontal Target centerX: 190 centerY: 45
Scores rect: 79.473684 ARvert: 0.000000
ARhoriz: 54.441782
Hot target located
Distance: 18.566375

particle: 0 is a Vertical Target centerX: 219 centerY: 73
Scores rect: 70.833333 ARvert: 67.708301
ARhoriz: 1.440602
particle: 1 is not a Target centerX: 190 centerY: 45
Scores rect: 79.459459 ARvert: 0.000000
ARhoriz: 44.115830
No hot target present
Distance: 18.500995
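
For reference, output like that comes from printf calls placed inside the particle-scoring loop. A rough, illustrative version, with placeholder variable names rather than the sample’s exact code:

printf("particle: %d is a %s Target centerX: %d centerY: %d\n",
       i, isVertical ? "Vertical" : "Horizontal",
       report.center_mass_x, report.center_mass_y);
printf("Scores rect: %f ARvert: %f\n", rectScore, vertScore);
printf("ARhoriz: %f\n", horizScore);
printf(hotTarget ? "Hot target located\n" : "No hot target present\n");
printf("Distance: %f\n", distance);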

Yeah, I got the printf set up and got a couple of things to print out, but nothing inside the really big if block. In the sample code in autonomous, where it says if(reports->size() > 0), we can get it to printf up until that point, which means that’s evaluating to false. The only issue is that I have no idea why D: I didn’t change anything else in the code.

Have you done anything to calibrate the thresholds to your lighting and camera? It sounds like you aren’t ending up with any blobs after the threshold and size filter.

To find out, you can try commenting out the while loop and uncommenting the lines that write the images to BMP files. Then FTP to the cRIO and grab the image from the threshold step to see if your target is there in red.
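
The idea looks roughly like this; the exact lines in your copy of the sample may differ a little, and the file names are just examples ("threshold" and "criteria" are the Threshold and ParticleFilterCriteria2 the sample already defines):

ColorImage *image = camera.GetImage();
image->Write("/capture.bmp");                        // raw camera frame
BinaryImage *thresholdImage = image->ThresholdHSV(threshold);
thresholdImage->Write("/threshold.bmp");             // target should show up in red here
BinaryImage *filteredImage = thresholdImage->ParticleFilter(criteria, 1);
filteredImage->Write("/filtered.bmp");               // what the particle reports are built from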

If not, you will need to tweak the threshold values. The Calibration section describes one way using NI Vision Assistant: http://wpilib.screenstepslive.com/s/3120/m/8731/l/163361-calibration

Another option is to take a snapshot as described at the beginning of that document, use your favorite image editing program to grab the HSL values of a few pixels on the target, and then try some ranges around those values (+/-10, +/-20, +/-50) and look at the image you get each time.
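
For example, if the editor reports the target pixels at roughly H=130, S=240, L=180 (made-up numbers, not calibrated values), you could try:

BinaryImage *t1 = image->ThresholdHSL(120, 140, 230, 250, 170, 190);  // +/-10
BinaryImage *t2 = image->ThresholdHSL(110, 150, 220, 255, 160, 200);  // +/-20
BinaryImage *t3 = image->ThresholdHSL(80, 180, 190, 255, 130, 230);   // +/-50
t1->Write("/thresh10.bmp");  // FTP these off the cRIO and compare them
t2->Write("/thresh20.bmp");
t3->Write("/thresh50.bmp");
delete t1; delete t2; delete t3;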

That’s the other issue: I don’t really understand what a “threshold” is. I know I’ve hard-set the brightness to 30 by going to the camera setup page in the browser and setting it there.

Hi, SparkyShires. Thresholding is a way to separate the bright areas from the not-so-bright areas of an image. Basically, any pixel value outside the threshold range is converted to completely dark, or black. For extra complexity, this happens in each of the three color planes, which is why the thresholding method requires so many parameters. After thresholding, the image has only a few areas of interest, which were the bright areas in the image before thresholding. These are called “particles”. The vision code processes these particles to determine hot or not.
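
If it helps to see the idea in code, here is a toy illustration of thresholding a single color plane; this is not NIVision code, just the concept:

// Every pixel inside [low, high] is kept (set fully bright); everything else
// becomes black. The real ThresholdHSV/HSL call does this on all three color
// planes at once, which is where its six parameters come from.
void ThresholdPlane(unsigned char *pixels, int count, int low, int high)
{
    for (int i = 0; i < count; i++) {
        pixels[i] = (pixels[i] >= low && pixels[i] <= high) ? 255 : 0;
    }
}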

When I moved the sample code into my robot program, I had to change some things. I pulled the statements used to initialize the camera out into a separate method. Then I changed the VisionAnalysis method from void to boolean, and had it return the hot-or-not decision with a return statement just above the catch statements. With the changes made, I copied the whole kit and caboodle into a Camera Subsystem, which I call from an Autonomous Command.
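
In rough outline it ended up shaped something like this (shown here in C++ with made-up names, so treat it as a sketch rather than my exact code):

#include "WPILib.h"

class CameraSubsystem : public Subsystem {
public:
    CameraSubsystem() : Subsystem("CameraSubsystem") {
        camera = &AxisCamera::GetInstance("10.3.84.11");
        // the camera-initialization statements pulled out of the sample go here
    }
    // Was void in the sample; now returns the hot-or-not decision so the
    // Autonomous Command can act on it.
    bool VisionAnalysis() {
        bool hotTarget = false;
        // ... the sample's capture / threshold / filter / scoring code ...
        return hotTarget;  // returned just above where the catch blocks were
    }
    void InitDefaultCommand() {}
private:
    AxisCamera *camera;
};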
More work to do! Good luck.

Hi, SparkyShires. This might help, but my apologies if this is redundant. I spent two days trying to get an image. I’d start the sample code, see about ten thousand error lines, shut down the code, and continue to debug the program. Eventually I realized that the first image took almost five seconds to materialize, and about 25 ms for each image thereafter. So unless you wait around for at least 5 seconds to get an image, your report size will be zero.
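
One way to deal with that (just a sketch, assuming the AxisCamera::IsFreshImage() call from that year’s WPILib; the 6-second timeout is arbitrary):

AxisCamera &camera = AxisCamera::GetInstance("10.3.84.11");
Timer timer;
timer.Start();
// The first frame can take close to five seconds to arrive, so don't trust an
// empty report list until the camera has actually produced an image.
while (!camera.IsFreshImage() && timer.Get() < 6.0) {
    Wait(0.05);
}
if (camera.IsFreshImage()) {
    ColorImage *image = camera.GetImage();
    // ... threshold / filter / GetOrderedParticleAnalysisReports() as in the sample ...
    delete image;
}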

It’s also possible to take a perfectly good image and, with thresholding and filtering, make all the pixels go away. You can see if this is happening by using your favorite web browser to FTP into the cRIO. In the URL window, type in “ftp://10.xx.yy.2”, without the quotes. If you’ve captured images, they’ll show up in the cRIO directory. By the way, you have to refresh the browser after each new image is captured.
Good luck!