Game Piece Tracking

Does anyone have a good way of tracking the game pieces using vision? I know deep learning seems to be the best option, but I’d rather not shell out hundreds of dollars on a coprocessor. I’d thought about using GRIP, but I have a feeling that will be tough since the cargo is the same color as the bumpers.

2 Likes

I suggest photonvision. You can use it on most cameras (limelight, gloworm, snake eyes) and it’s very customizable. Best thing: you usually don’t need a coprocessor.

In PhotonVision’s latest update, they released a feature that lets you filter by shape (you can use circle filtering for ball detection to combat the bumper-color problem).

[Image: circle-detection output from the PhotonVision update thread linked above]

Hope that helps!

11 Likes

One of our new students mentioned that he wanted to try tracking the Cargo. I mentioned the lighting problem to him… I wonder if you can get it to target a crescent, or an ellipse, since that’s what the Cargo might look like under uneven lighting?

What’s the point of tracking cargo? To automatically align to it? I guess it could be somewhat useful during teleop.

2 Likes

I just figured it was for hands-free chaos defense. Just let the robot pick the targets and patrol the zone.

But it could also improve accuracy when lining up during autonomous: if your driving is off a little, the robot could correct, and you’d be less reliant on placing your starting position exactly.

And last would be lining up on cargo in teleop when the cargo is far away or obstructed. While a camera on the robot can help in that situation too, having the driver switch focus isn’t great, and some cargo-tracking overlays could be a distraction as well, depending on the implementation. But the tracking camera and the driver camera can be the same one.
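On the lining-up point, a common pattern is to feed the tracked target’s horizontal offset (the yaw PhotonVision reports) into a proportional turn command. A minimal sketch — the gain is a made-up placeholder you’d tune on the robot:

```python
KP_TURN = 0.03  # proportional gain; a made-up starting value, tune on the robot


def turn_command(target_yaw_deg: float) -> float:
    """Map the target's horizontal offset (degrees, + = right of center)
    to a turn output in [-1, 1] that nudges the robot toward the cargo."""
    raw = KP_TURN * target_yaw_deg
    return max(-1.0, min(1.0, raw))


# Cargo 10 degrees to the right -> gentle right turn; large offsets saturate.
```

You’d add that output to the driver’s forward command in teleop, so the driver keeps control of speed while the correction handles heading.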

I think our student wants to do it because it’s the best challenge. He understands we need the rest of the robot programmed, but he also so far has shown that he can do that pretty easily.

5 Likes

There is no ellipse shape. The circle accuracy setting lets you enforce a more or less strict circle. I have found that lowering it slightly will let you find balls that aren’t fully illuminated, at the cost of more false positives (your teammates’ bumpers are the biggest one).
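To put rough numbers on that tradeoff, compare the ideal circularity 4πA/P² of a fully lit ball, a half-lit ball (modeled as a half-disk), and a bumper-like 3:1 rectangle — idealized geometry, not measured from PhotonVision:

```python
import math


def circularity(area: float, perimeter: float) -> float:
    """4*pi*A / P^2 -- equals 1.0 for a perfect circle."""
    return 4 * math.pi * area / perimeter ** 2


r = 1.0
full_ball = circularity(math.pi * r**2, 2 * math.pi * r)          # 1.0
half_ball = circularity(math.pi * r**2 / 2, math.pi * r + 2 * r)  # ~0.75
bumper = circularity(3.0 * 1.0, 2 * (3.0 + 1.0))                  # ~0.59
```

So a threshold loose enough to accept a half-lit ball (~0.75) is uncomfortably close to a wide bumper segment (~0.59), which matches the false-positive behavior described above.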

I believe @ngreen did some experimentation with using a “spotlight” (like the one people used to use for shooting) on the balls and that seemed to help.

2 Likes

You can see when the flashlight is turned on: the detected circle gets rounder, capturing the lower part of the ball better, and the tracking stabilizes. I haven’t experimented much beyond this, but it is my current plan rather than testing other LEDs. The flashlight zoom is set pretty much to its widest, but matching it to cover the camera’s FOV as closely as possible would probably work best. This cargo was about 7 ft away. Camera tracking on the Pi Zero 2 W with a v1.3 Pi camera ran at about 50 frames per second, though that may drop after tuning for more conditions.

The flashlight is just a tactical zoom flashlight. Wires are soldered to the battery carriage, and a small groove is cut to let them out. I happened to use a 9 V battery for testing. https://www.amazon.com/Adjustable-Tactical-Flashlight-Zoomable-Batteries/dp/B005FEGYCO?th=1

edit: I can recommend this Ethernet/passive-power adapter to connect the Pi Zero. I use this USB buck converter to power it from the PDP through a 5 A breaker.

3 Likes

This looks like a great solution! I’ll try it as soon as I get a chance. Thanks!

1 Like

I’m using photon vision, and I’m trying to use the contour controls to specifically target circles. I’m pretty sure when I created the pipeline, it defaulted to a “reflective” pipeline, but I need a “color shape” pipeline to control the shapes. I don’t know how to change the pipeline type or create a new one of a different type. Anyone able to help me on this?

3 Likes

I’m not familiar with PhotonVision specifically, but I would assume that they aren’t two separate pipelines. From experimenting with GRIP a long time ago, I was able to use basically the same setup for both reflective targets and any other object of a certain color (a yellow sticky note, for example). The only difference was that the reflective targets stand out much better from the rest of the image, making them easier to track. Take that with a grain of salt though; I haven’t messed with vision in a while, and I don’t have any experience with PhotonVision specifically, so I could be totally wrong.

3 Likes

I wish the Limelight had this feature. It’s already a great product with great software, it just needs more

1 Like

Just to follow up on this: We had a dev version flashed which didn’t show the menu item. Updating the jar/Photonvision fixed the lack of pipeline switching in the GUI! Probably should pin ourselves to a version :slight_smile:

In the photon vision software there is a drop down menu labeled ‘Type’. If you click on this you can change to shape detection.

https://www.youtube.com/watch?v=laG18zTiEgU


Axon has started to show a lot of promise now after throwing a little over 2000 labeled images at it

8 Likes

I bet it wasn’t fun making that dataset :slight_smile:
Assuming you’re running that on a computer and not a pi?

1 Like

PhotonVision allows you to change the exposure, brightness, hue, saturation, and value (how bright the pixel appears). It still isn’t very reliable, though. The best cargo detection I’ve gotten out of the software is with 15% circle accuracy, which still picks up a lot of bumpers. You also can’t currently select red very reliably, because red is split between both ends of the color slider.

4 Likes

Yeah, on the Pi it gets around 20 FPS with the Coral accelerator. I’ve just been using this video as a baseline to test all my models, and this one has been the best so far. Also, our team only labeled about 400 images in this dataset; the rest came from other teams posting their datasets here, as well as the dataset WPILib provided for us.

My team is mainly first-years, so there aren’t many experienced drivers; we’re trying to track the cargo to take some of the load off their shoulders.

1 Like