
Grip Image processing


hadarsi320
17-11-2016, 08:58
I'm trying to learn how to use GRIP. I'm using the sample images of the 2016 FRC game from the WPILib website to get the outline of the retroreflective tape.

I'm running into a problem with images that include a side shot of the goal, where two targets of equal size appear; this is how it looks: http://imgur.com/a/ilKUe

Does anyone have an idea what I should change in the filters?
I tried tuning them for certain types of images, but that ends up ruining images that are closer or further away.

euhlmann
17-11-2016, 09:22
Invent some sort of deterministic behavior; i.e., always turn to the left target in this situation.

Andrew Schreiber
17-11-2016, 09:37
Sort the contours by area, then choose the first one (i.e. the largest). You may have to do this robot-side.

SamCarlberg
17-11-2016, 10:08
I'd do what Andrew recommended. You can iterate through the arrays in NetworkTables and keep track of the index of the contour with the largest area.
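A minimal sketch of that idea, assuming you've already read GRIP's published parallel arrays (e.g. `area` and `centerX` from a contours report table in NetworkTables) into plain `double[]`s; the class and method names here are made up for illustration:

```java
// Picking the largest contour from GRIP's published parallel arrays.
public class LargestContour {
    /** Returns the index of the contour with the largest area, or -1 if there are none. */
    public static int largestIndex(double[] areas) {
        int best = -1;
        double bestArea = 0.0;
        for (int i = 0; i < areas.length; i++) {
            if (areas[i] > bestArea) {
                bestArea = areas[i];
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Hypothetical values as GRIP might publish them for three contours.
        double[] areas   = {120.0, 450.0, 90.0};
        double[] centerX = {80.0, 200.0, 310.0};
        int i = largestIndex(areas);
        System.out.println("largest contour at index " + i + ", centerX = " + centerX[i]);
    }
}
```

Since GRIP publishes each property as its own array, the index found in `area` can be used directly to look up the matching `centerX`, `centerY`, width, and height.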

KJaget
17-11-2016, 10:29
From experience, shoot at the one on the right (i.e. the largest X value). The one on the left is a reflection off the driver station glass, seen when you're trying to shoot in auto from the spybot location. Ask me (or the drivers we shot at, sorry) how I know. ;)
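The "take the rightmost" tie-break is a one-liner over the same arrays; a sketch, again assuming GRIP's `centerX` array has been read from NetworkTables:

```java
// Choosing the rightmost target (largest centerX), e.g. to ignore a
// reflection off the driver station glass to the left of the real goal.
public class RightmostTarget {
    /** Returns the index of the contour with the largest centerX, or -1 if there are none. */
    public static int rightmostIndex(double[] centerX) {
        int best = -1;
        for (int i = 0; i < centerX.length; i++) {
            if (best == -1 || centerX[i] > centerX[best]) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Of the centers 80, 310, and 200 px, index 1 (x = 310) is rightmost.
        System.out.println(rightmostIndex(new double[]{80.0, 310.0, 200.0}));
    }
}
```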

EmileH
17-11-2016, 10:50
For 1058's high-goal autonomous, we just take the first contour array index, which is always the left goal since our auto runs under the low bar and lines up with the left goal.

euhlmann
17-11-2016, 11:57
Sort the contours by area, then choose the first one (i.e. the largest). You may have to do this robot-side.

The issue with this is that the areas may change as you turn toward a target. This creates a sort of extended donkey-haystack problem. It was an issue for us last season, hence the strategy of picking one target to always use when the areas are sufficiently similar.

Hitchhiker 42
17-11-2016, 12:11
Sort the contours by area, then choose the first one (i.e. the largest). You may have to do this robot-side.

You do need to be careful with this. If you pick the right one in this case and start turning toward it, you'll end up with the left goal being larger. This can cause the robot to jitter back and forth between the two goals without ever settling on one.

To avoid this, you need to pick one target and stay with it even if it becomes smaller. That means figuring out a way to keep tracking the one you want (perhaps by computing an angle to turn to once and then sticking with it).
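One way to sketch that "pick one and stay with it" idea: lock onto the largest contour on the first frame, then on later frames follow the contour whose centerX is closest to the last known position instead of re-choosing by area. Class and method names are hypothetical:

```java
// Target lock: choose the largest contour once, then track it by position
// so it isn't dropped when the other goal grows larger as the robot turns.
public class TargetLock {
    private double lockedX = Double.NaN;   // centerX of the target we're locked onto

    /** Returns the index of the contour to aim at this frame, or -1 if there are none. */
    public int update(double[] areas, double[] centerX) {
        if (centerX.length == 0) {
            return -1;
        }
        int best = 0;
        if (Double.isNaN(lockedX)) {
            // First frame: lock onto the largest contour.
            for (int i = 1; i < areas.length; i++) {
                if (areas[i] > areas[best]) best = i;
            }
        } else {
            // Later frames: stay with the contour nearest our last lock,
            // even if another one now has the bigger area.
            for (int i = 1; i < centerX.length; i++) {
                if (Math.abs(centerX[i] - lockedX) < Math.abs(centerX[best] - lockedX)) best = i;
            }
        }
        lockedX = centerX[best];
        return best;
    }
}
```

In a real robot program you would also want to reset the lock (set `lockedX` back to NaN) when the target is lost for several frames or when a new aiming sequence starts.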

Andrew Schreiber
17-11-2016, 12:53
The issue with this is that the areas may change as you turn toward a target. This creates a sort of extended donkey-haystack problem. It was an issue for us last season, hence the strategy of picking one target to always use when the areas are sufficiently similar.

You do need to be careful with this. If you pick the right one in this case and start turning toward it, you'll end up with the left goal being larger. This can cause the robot to jitter back and forth between the two goals without ever settling on one.

To avoid this, you need to pick one target and stay with it even if it becomes smaller. That means figuring out a way to keep tracking the one you want (perhaps by computing an angle to turn to once and then sticking with it).

That depends on whether you keep updating from the camera, or use one image to determine where to turn and then execute the turn outside the vision system using other sensors. The latter is what I have always found to be more reliable.

Edit: Actually I'm going to expand on this.

Vision systems run at, what, 30 fps if you're good? If you only update your feedback variable 30 times a second, you're going to get bouncing anyway. A saner approach is to use gyro integration over a short time frame to compute the angle between you and the target, then turn through that angle. The gyro is less noisy and updates faster.
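A sketch of that one-shot approach: use a single vision frame to convert the target's pixel offset into an angle, add it to the current gyro heading, then close the loop on the gyro alone. The camera resolution and field of view below are hypothetical placeholders, not from the thread:

```java
// One-shot vision aim: snapshot the target's pixel offset, convert it to a
// gyro heading setpoint, then let the (faster, less noisy) gyro do the rest.
public class SnapshotTurn {
    static final double IMAGE_WIDTH_PX = 320.0;     // assumed camera resolution
    static final double HORIZONTAL_FOV_DEG = 60.0;  // assumed horizontal field of view

    /** Converts a target's pixel centerX into a degree offset from image center. */
    public static double pixelsToDegrees(double centerX) {
        double offsetPx = centerX - IMAGE_WIDTH_PX / 2.0;
        return offsetPx * (HORIZONTAL_FOV_DEG / IMAGE_WIDTH_PX);
    }

    /** Takes one vision measurement and returns the absolute gyro heading to turn to. */
    public static double headingSetpoint(double currentGyroDeg, double targetCenterX) {
        return currentGyroDeg + pixelsToDegrees(targetCenterX);
    }

    public static void main(String[] args) {
        // Target at x = 240 on a 320 px image is 80 px right of center,
        // i.e. +15 degrees; added to a 10 degree heading that gives 25.0.
        System.out.println(headingSetpoint(10.0, 240.0));
    }
}
```

This linear pixels-to-degrees mapping is only an approximation (it ignores lens distortion and the tangent relationship near the edges of the image), but it is usually close enough near the image center for an aiming turn that the gyro loop then finishes.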