Note Detection with OpenCV

Recently, I decided to try to detect notes with just OpenCV. I know you can use a Limelight and neural networks for this, but that feels like killing a fly with a rocket launcher. Plus, getting a Limelight and a Coral is a lot more expensive than a USB camera and a Pi.

Using some color filtering, contour finding, and ellipse fitting, I was able to get it working really well!

Here are the results:
[video: note_detection]

I have it set to pick the closest note, and you can see it also has no problem with really beat-up notes (the tape is courtesy of our intake cutting a note in half).

One downside, though, is that we will have to tune the HSV values for the lighting conditions at each competition, but that shouldn't be too bad.

The code: GitHub - TitaniumTigers4829/camera-note-detection: Uses opencv running on a pi and a usb camera to detect notes

18 Likes

That’s really neat, thanks! We might try this technique out, our limelight will be on the wrong side of the bot, so a pi on the front for finding notes may be a good solution

3 Likes

You might try throwing out pixels above a white point, say RGB (240, 240, 240). That makes you more robust against sun from the skylights/windows. Otherwise, we find the lighting in arenas is better than the lighting in our robotics classroom.
I think you'd use Core.inRange with bounds of (0, 0, 0) to (240, 240, 240).

4 Likes

That’s a good point, I’ll make sure to tweak the upper bound for the color threshold so that doesn’t happen. That being said, we plan to have our camera angled down, so hopefully this won’t be a problem.

You’re the hero we need quite frankly.

5 Likes

This is extremely cool, thank you for doing this! My team is trying to set one up, but we are a bit lost on how to get it to work. Right now, we are connected to our Pi, which is not on the robot yet but has WPILibPi flashed onto it. We aren’t sure how to get the program working with the web interface first. Do you have any advice?

Our team is doing a similar kind of object detection too, and I will say it works. One suggestion: just choosing the biggest contour does not tend to work, as the big rack of notes behind the source gets detected A LOT. Granted, we are not trying to fit an ellipse, so that might be something that could fix the issue. Still, in general, we use the field-relative translation to figure out whether the detected note is outside the field, and throw it out if it is.
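That field-boundary check might be sketched like this (an illustration of the idea, not their actual code — the robot pose source, the robot-relative note translation, and the margin are all assumptions):

```python
import math

# 2024 (Crescendo) field dimensions in meters, approximate.
FIELD_LENGTH = 16.54
FIELD_WIDTH = 8.21

def note_on_field(robot_x, robot_y, robot_heading_rad,
                  note_forward, note_left, margin=0.5):
    """Rotate the robot-relative note translation (forward/left, meters)
    into field coordinates and reject detections outside the field,
    which filters out the note rack behind the source."""
    fx = (robot_x + note_forward * math.cos(robot_heading_rad)
          - note_left * math.sin(robot_heading_rad))
    fy = (robot_y + note_forward * math.sin(robot_heading_rad)
          + note_left * math.cos(robot_heading_rad))
    return (-margin <= fx <= FIELD_LENGTH + margin
            and -margin <= fy <= FIELD_WIDTH + margin)
```

The margin keeps notes resting right on the field border from being thrown out due to odometry drift.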

I’m 99% sure that because we do the ellipse fitting, it wouldn’t detect the rack, but I’ll try to do some testing and get back to you.

We are going to work on it this week, so I can get back to you once we actually have it running on the Pi, but in years past we have just followed this and it has worked: Vision with WPILibPi — FIRST Robotics Competition documentation. As for getting the position of the note relative to the robot, we are going to use the camera’s FOV to get the angle the robot needs to turn to face the note, and a lookup table from the note’s apparent size to estimate distance.
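The FOV-angle and size-lookup ideas could be sketched like this (a sketch under assumptions: the FOV, image width, and every entry in the lookup table are made-up values that would have to be measured for the real camera):

```python
import math

# Hypothetical camera parameters; substitute your camera's actual values.
HORIZONTAL_FOV_DEG = 60.0
IMAGE_WIDTH_PX = 640

def yaw_to_note(center_x_px):
    """Degrees the robot must turn to face the note, using a pinhole
    model: pixel offset from image center over the focal length."""
    focal_px = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))
    return math.degrees(math.atan((center_x_px - IMAGE_WIDTH_PX / 2) / focal_px))

# Hypothetical (ellipse width in px -> distance in m) calibration points;
# in practice these would be measured empirically on the field.
SIZE_TO_DISTANCE = [(300.0, 0.5), (150.0, 1.0), (75.0, 2.0), (38.0, 4.0)]

def distance_to_note(width_px):
    """Linearly interpolate distance between calibration points."""
    pts = sorted(SIZE_TO_DISTANCE)  # ascending pixel width
    for (w0, d0), (w1, d1) in zip(pts, pts[1:]):
        if w0 <= width_px <= w1:
            t = (width_px - w0) / (w1 - w0)
            return d0 + t * (d1 - d0)
    # Clamp outside the calibrated range.
    return pts[0][1] if width_px < pts[0][0] else pts[-1][1]
```

Feeding the fitted ellipse's center x into `yaw_to_note` and its width into `distance_to_note` gives a rough polar position of the note relative to the robot.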

Hello, I am also a programmer on 4829 and have been working with Traptricker on this project. In order to integrate the code in the repo with the WPILib image on the rPi, I used the example in my CD post: OpenCV w/ rPi 4 Note Detection (Python). Once you get the code working, you can view the results at wpilibpi.local/1182 (wpilibpi.local/1181 is the raw camera feed without contour lines).

1 Like

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.