Need clearing up on Vision Paper

I was reading the Vision paper for Java, and three particular lines stuck out that I didn't entirely understand.

  1. “The exposure on the camera was set by intentionally overexposing the
    image by shining a flashlight into the camera, allowing the auto exposure to reduce the sensitivity, then locking in that setting.”
    How would we “lock” the sensitivity? From what I understand, we’d have to stand in front of the robot with a flashlight every time it turns on to overexpose it… Is that really what must be done?
  2. It says that “a width and height of 30-400 and 40-400 pixels” is the criteria. Doesn’t that change with the distance to the hoops? If so, how would I compensate for the pixel change? Or is that just a range of possible values?
  3. What does “center of mass x value” mean? Is that the pixel location (which doesn’t make sense to me, because that would be a pair of coordinates) for the center of a rectangle? I’m basically asking what this line of code means:
System.out.println("Particle: " + i + ":  Center of mass x: " + r.center_mass_x);
  1. Log into the camera from a web browser and go to Setup. (I don’t have the camera with me, so this is from memory.) In the options on the left-hand side there is a link for setting up the image, with a sub-link for advanced settings. Shine the light in front of the camera, then use the pull-down and select Hold (I think the setting is just called Exposure). Then click Save. The images from the camera will now be darker when you remove the light, and this setting will survive a power cycle.

  2. I think what he’s saying is that he is filtering the rectangles that don’t fit into those criteria.
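A rough sketch of that kind of size filter (the `Particle` class and method names here are my own stand-ins for illustration, not the paper's actual code or NI's `ParticleAnalysisReport`):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a particle report: just the bounding-box dimensions.
class Particle {
    final int width;
    final int height;
    Particle(int width, int height) {
        this.width = width;
        this.height = height;
    }
}

class SizeFilter {
    // Ranges quoted in the paper: width 30-400 px, height 40-400 px.
    // Keep only the particles whose bounding box falls inside both ranges.
    static List<Particle> filterBySize(List<Particle> particles) {
        List<Particle> kept = new ArrayList<Particle>();
        for (Particle p : particles) {
            if (p.width >= 30 && p.width <= 400
                    && p.height >= 40 && p.height <= 400) {
                kept.add(p);
            }
        }
        return kept;
    }
}
```

Anything outside those bounds (tiny reflections, huge blobs) simply gets dropped before further processing.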

  3. Center of mass is the average of all the pixel coordinates in a “particle” (NI calls contiguous pixel masses particles), which gives you the approximate center of that particle. Thus, center of mass x is exactly what it sounds like: the center of the particle in question along the x axis.
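In other words, something like the following (a minimal sketch; the `boolean[][]` binary-image representation is my own stand-in, not NI's API, which computes this for you):

```java
// Center of mass x for a particle: average the x coordinate of every
// pixel that belongs to the blob.
class CenterOfMass {
    // image[y][x] is true where a pixel belongs to the particle.
    static double centerOfMassX(boolean[][] image) {
        long sumX = 0;
        long count = 0;
        for (int y = 0; y < image.length; y++) {
            for (int x = 0; x < image[y].length; x++) {
                if (image[y][x]) {
                    sumX += x;
                    count++;
                }
            }
        }
        // No pixels set: there is no particle to measure.
        return count == 0 ? Double.NaN : (double) sumX / count;
    }
}
```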

Hope that helps.

  • Bryce
  1. There are various settings for the camera that can be manipulated. Exposure is a bit less controllable, but the AxisCamera.ExposureT class does have a “hold” value you could lock in after using a flashlight to overexpose the camera. It may indeed be necessary to manually overexpose it to get the perfect settings, but I doubt you’ll always need perfect settings.
  2. I imagine they found that you generally aren’t close enough or far enough away to go outside of these bounds for the size of the rectangles. Also, you really don’t want to pick up all the little reflections off of other things. (removeSmallObjects helps)
  3. Yes, it’s the center x-value of the rectangle.

My own question: I would like to know more about how to get static test images onto the cRIO.

Do you mean transfer them to the cRIO and then use them? Or save images from the camera during processing?

You can always just FTP a file over to the cRIO. If you open up a Windows file-explorer window (like My Computer), you can type the IP address of the cRIO into the address bar. For example, ftp://10.xx.yy.2, where xx.yy encodes your team number, is what you’d type. From there you can just drag and drop files into the folder.

To use them, you might have to use the imaq functions from the NIVision library. I don’t see a read-image method in any of the Image classes provided in WPILib.

For the latter, you might find it here:

  • Bryce