Cargo Vision Tracking

We have been experimenting with vision tracking using a Limelight 2.0. However, because of our inconsistent lighting, in addition to a number of other factors, we have had limited success: the ball never stays tracked, and the pipeline cannot distinguish between bumpers and balls. We have also tried PhotonVision, but it has not performed well enough for our needs.

Here is a more specific list of our problems:

  • Lighting conditions change drastically in a short time span in the space we are using
  • There is often not enough light in the space that we are using
  • The camera resolution makes tracking from distances hard
  • When we can get tracking to work, the Limelight locks onto unintended targets (even with various contour filtering settings)
  • Unable to distinguish between bumpers and cargo
  • PhotonVision lags severely, even with a small range of parameters

Overall, we feel that we simply do not have enough experience with vision on these cameras. We would appreciate any learning resources that explain these issues more clearly than the documentation does.

Our intention for vision tracking is to make autonomous easier, so if there are other approaches we should be taking for autonomous, please let us know.


Thanks for posting!

Vision tracking can be a challenge, especially with changing lighting conditions. Most teams use the retroreflective tape located on field pieces (this year, the hub) to help align their robot or mechanism to a fixed target. The retroreflective tape reflects the specific wavelength of light emitted by the Limelight's LEDs back to the camera; that image can be captured and filtered (also known as vision processing), and the publicly available interface then outputs x,y offsets that can be used as feedback for your robot or mechanism.
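For reference, here is a minimal sketch of reading those offsets on the robot side in Python with pynetworktables. The table name and the tv/tx/ty keys come from the Limelight's NetworkTables interface; the server address is a placeholder you would replace with your own.

```python
# Minimal sketch: read Limelight target data over NetworkTables (pynetworktables).
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # placeholder: your roboRIO address
limelight = NetworkTables.getTable("limelight")

def get_target_offsets():
    """Return (tx, ty) in degrees if a target is visible, otherwise None."""
    if limelight.getNumber("tv", 0) < 1:   # tv is 1 when a valid target is in view
        return None
    tx = limelight.getNumber("tx", 0.0)    # horizontal offset from the crosshair
    ty = limelight.getNumber("ty", 0.0)    # vertical offset from the crosshair
    return tx, ty
```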

The Limelight folks have made a great website https://docs.limelightvision.io/en/latest/ that documents the process.

Good luck!


Thank you very much for the advice!

However, most of our trouble stems from inconsistent tracking of the ball. We were wondering if you knew of any ways to prevent this?

I'm not sure whether using the Limelight to track the cargo is a good choice, since the Limelight is mainly good at detecting reflective objects. Our team bought a Pixy2 camera from AndyMark this year and plans either to use it on the field or to study it as an off-season project. We haven't experimented with it yet, but I believe it is much better than the Limelight for this. You can buy it from AndyMark and check out its official docs here.

I don’t think the Limelight is any worse than other setups (although I don’t use one myself). Fundamentally, detecting color with just “room lighting” is going to be challenging when you don’t control the lighting. And red/blue are darker colors, unlike the balls from last year, which adds to the challenge.

I would try to open up the color thresholding as much as you can, especially the S and V values (of HSV). That will probably introduce a lot of noise regions, but you can then filter on shape parameters that key on a circle: it should be close to the same size vertically vs. horizontally, the fill factor should be about right, maybe a min/max size, etc. (We do our processing in Python, not on a Limelight, so I'm not sure what flexibility you have.) Hopefully that gets you to what you need.
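As a rough illustration of that approach (an OpenCV sketch rather than a Limelight pipeline; every threshold and filter number below is a placeholder you would tune on your own footage):

```python
import cv2
import numpy as np

def find_red_cargo(bgr_frame):
    """Threshold loosely in HSV, then keep only roughly ball-shaped blobs."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)

    # Loose S/V bounds let the ball through under varied lighting, at the
    # cost of extra noise regions that the shape filters remove below.
    mask = cv2.inRange(hsv, np.array([0, 80, 60]), np.array([10, 255, 255]))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 200:                     # min-size filter: drop tiny noise blobs
            continue
        if not 0.7 <= w / float(h) <= 1.3:  # a ball is about as wide as it is tall
            continue
        candidates.append((x, y, w, h))
    return candidates
```

The tradeoff is exactly the one described above: let more through at the color stage, then reject the junk at the contour stage.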

Good luck.

An added problem we saw is that the blue cargo wasn't very blue on the RGB scale, while the red cargo was very red and registered a fair amount of blue too.

I would suggest using HSV, since that is a bit more "singular" in terms of values for a given color. However, a known challenge is that red sits at roughly 0 in H but can wrap around to the other side. So, if you are working in OpenCV, H has the range 0 → 180 and red is roughly (I am guessing) 170 → 180, wrapping around to 0 → 10.
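In OpenCV terms, the usual way to handle that wrap-around is to threshold twice and OR the two masks together (the exact hue/saturation/value bounds below are only guesses to tune):

```python
import cv2
import numpy as np

def red_mask(bgr_frame):
    """Red straddles H = 0 in OpenCV, so combine a low-hue and a high-hue threshold."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    low_red  = cv2.inRange(hsv, np.array([0, 80, 60]),   np.array([10, 255, 255]))
    high_red = cv2.inRange(hsv, np.array([170, 80, 60]), np.array([180, 255, 255]))
    return cv2.bitwise_or(low_red, high_red)
```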


The starting configuration of the field, including where the cargo is placed, is more or less the same every match. What my driver is doing (and it's probably easier) is selecting from pre-defined paths depending on what our partners want to do. We're going from easier to harder autos (1 ball, 2 ball, 3 ball, etc.) and getting really good at each of those before advancing to the next harder auto.

If you're planning on autonomously chasing down missed shots/balls, I'd spend more time practicing not missing the shots you do take (the effort is probably lower and the reward higher). If you're using a vision system and you accidentally hit a high-performing partner in auto, that's going to be used against you in scouting.


The Pixy2 isn't going to work for tracking cargo, since it uses a color-based algorithm exclusively. You need some form of shape filtering to avoid selecting bumpers.

We experimented with HSV detection for cargo using Limelight and PhotonVision (on a Limelight and a Raspberry Pi) and just did not get great results (slow, and it couldn't distinguish bumpers). We have made good progress with a Raspberry Pi and WPILibPi/Axon using machine learning, but you really need a Google Coral for this to work well; the FPS is too low without it. If you can get a Raspberry Pi, a Google Coral, and a webcam, I would be happy to share our code and model.
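For context, the general shape of Coral-accelerated inference with tflite_runtime looks roughly like the sketch below. This is a generic example, not the team's actual code or model: the model file name is a placeholder, and the output-tensor order assumes a standard SSD-style detection export.

```python
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Placeholder model path; a real model would come from Axon or another trainer.
interpreter = Interpreter(
    model_path="cargo_detect_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],  # run on the Coral
)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()

def detect(bgr_frame, score_threshold=0.5):
    """Return [(box, score), ...] for detections above the threshold."""
    _, in_h, in_w, _ = input_detail["shape"]
    resized = cv2.resize(bgr_frame, (in_w, in_h))
    interpreter.set_tensor(input_detail["index"], np.expand_dims(resized, axis=0))
    interpreter.invoke()
    # Assumed SSD output layout: tensor 0 = boxes, tensor 2 = scores.
    boxes  = interpreter.get_tensor(output_details[0]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    return [(b, s) for b, s in zip(boxes, scores) if s >= score_threshold]
```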


From my testing, I had major issues with the Limelight's built-in camera because of lighting and saturation, both of which it handles poorly. USB cameras work a good bit better, but come with a different issue: the circle-detection algorithm isn't good enough to detect an imperfectly circular ball while ruling out a rectangle. For some reason, it draws a very elongated box around the bumper and doesn't see a problem with the "circle" being stretched so far from a square aspect ratio.

Using the other circle-detection filters (like how far the target's center is from the center of the contour, etc.), I was able to tune it a bit better, but it is still far from perfect, and I have serious doubts about reliability at competitions. We are planning to shift effort away from vision and toward path following, but we are not abandoning vision altogether.

Thank you for this advice; we will try using preset paths instead of vision tracking.


Additionally, the Pixy2 is awful to interface with and unreliable at best. The I2C command interface will sometimes simply not give you back any data. Not to mention that it can't be tuned without a USB cable, and it can't stream video to you.

I've made a few pipelines that can effectively track the balls, and the most helpful thing to know is how HSV (Hue, Saturation, Value) works, since HSV is what both Limelight and PhotonVision threshold on. I've also found that turning exposure and brightness up while keeping the LEDs off has been the most consistent option for me; however, I'm working in a space that is consistently lit with white light. Ultimately, try to tune your input to have as much contrast as possible so your thresholding is easier. Either way, a working knowledge of HSV will speed the process up.

In very quick terms, Hue is which color you are looking at (e.g., red), Saturation is how rich or full the color is (red with low saturation is more gray), and Value is how bright the color is (a value of 0 is black).

Since the red ball isn't a full red, but rather a duller red, the saturation will be more moderate, and since the ball is also fairly light, your value shouldn't be too high either. However, I find it best to keep the top of the saturation range at max and the lower bound at about 3/4, because depending on the lighting the ball will appear to have a more or less muted tone.

I've also found that, since the ball has a fairly light color, it's better to keep value set from max down to maybe 10 units below whatever lets you fully detect the ball. Hue can be pretty specific because the ball is a very pure shade of red; it's only the saturation and value that are severely impacted by ambient lighting. I've found that keeping the value constraints fairly tight helps filter out bumpers to some degree, because they tend to be a very full red.

Once your thresholding is giving you a clear image of the ball, it’s time to contour filter!
It's very important to keep these values as specific as possible while still leaving some room for error. Contour filtering applies to the rectangle that PhotonVision or Limelight thinks is a target. The two biggest factors here are the W/H ratio and the fullness of the bounding rectangle. Keep the area filter at 0-100% of the image so that you can detect at any range, but turn speckle rejection all the way up so that you filter out stray pixels.

First, the W/H ratio. Since the shape you should have after thresholding will be a fairly uniform circle, the width-to-height ratio can be kept pretty close to 1:1. However, depending on the ball's shadow, only a gibbous (partial) shape may make it through the threshold, so don't require strictly 1:1; close should do it.

For fullness of the rectangle, it's fairly simple. Since the filtered shape is a circle, the rectangle drawn around it won't be 100% full because of the corners (a perfect circle fills about 79% of its bounding square), so you don't need to set it all the way to 100%, but it should still be fairly high, because only the corners will be empty. Doing this will eliminate most rectangles and squares. Hopefully that will get rid of your tracking issues.
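Concretely, the fullness (sometimes called extent) of a contour's bounding box can be computed like this. It's an OpenCV-style sketch, and the acceptance bounds are placeholder assumptions bracketing the ideal circle value of π/4.

```python
import cv2

def bounding_box_fullness(contour):
    """Fraction of the bounding rectangle covered by the contour.

    A perfect circle fills pi/4 (about 79%) of its bounding square, while a
    solid rectangle such as a bumper blob fills nearly 100%, so capping the
    allowed fullness below 1.0 helps reject bumpers."""
    x, y, w, h = cv2.boundingRect(contour)
    return cv2.contourArea(contour) / float(w * h)

def passes_fullness_check(contour, low=0.6, high=0.9):
    # Placeholder bounds around pi/4; tune on real footage.
    return low <= bounding_box_fullness(contour) <= high
```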

However, if you are still having issues, PhotonLib comes with a pretty nifty tool that lets you get the best target, the one that most closely matches your parameters, so this can most likely also be handled in code.
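That selection step is conceptually just ranking the surviving candidates by how close they are to an ideal circle and taking the closest. Here is the idea in plain Python with hypothetical names (not the actual PhotonLib call):

```python
import math

def pick_best_candidate(candidates):
    """candidates: list of (x, y, w, h, fullness) tuples for blobs that
    passed the earlier filters; returns the one closest to an ideal circle."""
    def circle_error(c):
        x, y, w, h, fullness = c
        aspect_error = abs(w / float(h) - 1.0)        # ideal ball is 1:1
        fullness_error = abs(fullness - math.pi / 4)  # ideal ball fills ~79%
        return aspect_error + fullness_error
    return min(candidates, key=circle_error) if candidates else None
```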

Hopefully this helps you out! Good luck with your vision!


It'd be great if Limelight or PhotonVision could integrate this ML/Coral support into their code bases.

Why the thumbs down? Why can’t it just be a different type of pipeline?

If you want to do that, you should just do it from scratch. Image processing and machine learning are entirely separate worlds, and it's not practical to put ML in a Limelight- or PhotonVision-type system. If you are competent enough to use ML, you are also competent enough to use something like WPILibPi.

The quantity of work required to get a decent model, and the narrowness of the application even if you do, make it a really tough sell for a company like Limelight, which is essentially packaging a sequence of simple OpenCV functions with a GUI and putting it in a case (this isn't a bad thing!).


I think that is a really narrow viewpoint. To an external user, something like PhotonVision is a pipeline along with some options. It provides a good library that returns a list of targets with a yaw and a pitch. I didn't say it's 100% trivial. It'd be nice not to need a separate Raspberry Pi for each type of function.

Some teams can't justify that expense or complexity.
