Pre-made Vision System for 2019 Season: ChickenVision

Team 3997, the Screaming Chickens, would like to present ChickenVision. ChickenVision is an open source Python processing script that runs on the WPILib FRCVision Raspberry Pi image. It can currently track the closest 2019 vision target and calculate that target's yaw.

My goal is to make an almost plug-n-play vision tracking solution for teams new to vision who cannot afford higher-end vision systems. In order to achieve this goal, I need teams to try out the program and give feedback so I can continually improve it to serve teams better.

Feel free to shoot me any questions. I'm probably more active on Discord, though.

More info: https://github.com/team3997/ChickenVision


Will this work with LabVIEW? What about mecanum drive? I am very interested since we are going with a Pi this year.

While I don't have experience with either of those scenarios, this is independent of your robot program. Think of it as a sensor: it just calculates the horizontal angle from the camera to the target (it can easily be extended, though). I will try to get NetworkTables working. You can use the angle feedback in any way you choose.
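To make that concrete, here's a rough sketch of the usual pinhole-camera yaw math, not the script verbatim; the image width and field-of-view constants are assumptions you'd set for your own camera:

```python
import math

# Assumed camera parameters, not values from the script.
IMAGE_WIDTH = 320
H_FOV_DEGREES = 60.0  # horizontal field of view of your camera

CENTER_X = IMAGE_WIDTH / 2 - 0.5
H_FOCAL_LENGTH = IMAGE_WIDTH / (2 * math.tan(math.radians(H_FOV_DEGREES / 2)))

def calculate_yaw(target_pixel_x):
    """Degrees the robot must rotate for the camera to face the target."""
    return math.degrees(math.atan((target_pixel_x - CENTER_X) / H_FOCAL_LENGTH))
```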

P.S. My bad, I'm still getting used to Discourse.


We ran into a problem with our own code when the Raspberry Pi tries to process three or more pieces of reflective tape: in that case we do not know how to focus on the pair we need to process. Did you solve this in your code?

Yes, it handles that.
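Roughly, it works along these lines (a simplified sketch, not the code verbatim; the tilt sign convention here is just for illustration). The 2019 strips tilt about 14.5° toward their partner, so you pair a right-leaning strip with the next left-leaning one and keep the pair nearest the image center:

```python
def pair_targets(strips, image_center_x):
    """Pick the target pair nearest the image center.

    strips: list of (center_x, tilt_degrees) tuples; positive tilt
    meaning "leaning right" is an assumption for illustration.
    """
    strips = sorted(strips, key=lambda s: s[0])  # left to right
    pairs = []
    for left, right in zip(strips, strips[1:]):
        # A valid 2019 pair leans toward each other: the left strip
        # leans right and the right strip leans left.
        if left[1] > 0 and right[1] < 0:
            pairs.append((left, right))
    if not pairs:
        return None
    # Keep the pair whose midpoint is closest to the camera center.
    return min(pairs, key=lambda p: abs((p[0][0] + p[1][0]) / 2 - image_center_x))
```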

The hardest part about using my code is not the tracking, but tuning the HSV filter.
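If you're tuning, the thresholding step boils down to something like this; the bounds below are only a starting point for a green LED ring, and every camera and exposure combination needs its own values:

```python
import cv2
import numpy as np

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in for a camera frame

# Illustrative bounds for a green LED ring on retroreflective tape.
lower_hsv = np.array([60, 100, 100])
upper_hsv = np.array([90, 255, 255])

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_hsv, upper_hsv)  # white where the target is
```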

I'm not sure if it's just a "me" thing, but I was seeing errors from getEllipseRotation being passed arguments it wasn't expecting. E.g., it seemed like contours with zero (or near-zero) values were getting through the if block that had passed the contour check. This would result in the camera server and while loop resetting every 10 seconds or so.

Thanks for letting me know. Right now, checkContours is almost negligible (pxArea > 10). After some testing, I also got errors from getEllipseRotation, so I am currently adding a fallback that uses minAreaRect for rotation if getEllipseRotation throws an exception. I'm not sure what you mean by resets, though.
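The fallback will look something like this (a sketch; `cv2.fitEllipse` needs at least five contour points, which is probably how those near-empty contours caused exceptions):

```python
import cv2

def get_rotation(contour):
    """Contour rotation with a minAreaRect fallback for degenerate input."""
    # fitEllipse requires at least five points and can still raise
    # cv2.error on near-degenerate contours, so guard it.
    if len(contour) >= 5:
        try:
            _, _, angle = cv2.fitEllipse(contour)
            return angle
        except cv2.error:
            pass
    # minAreaRect accepts any non-empty contour.
    _, _, angle = cv2.minAreaRect(contour)
    return angle
```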

If you watch the console output on the Pi's web interface, you can see it bring up the camera server, then set resolutions from the configs. After that it falls into the main Python function with the comments found there. When it errors out, the `while True` loop fails and it resets the camera and starts all over again. It takes a good 5-10 seconds to bring all of that back online.

Oh, any chance you could commit that change? That would be awesome.

Any users of this should upgrade to FRCVision 2019.2.1 as it brings in the most recent robotpy-cscore and fixes Python console output to the web dashboard, amongst other features/fixes.

Sorry for my lack of knowledge on this. If I am correct, this file gets uploaded to the Pi via the Pi's Web Dashboard?

I won't be able to until after school, but in the meantime you can go back to the last commit, the "pull request" one.

I might even revert, because that one worked pretty well.

Yep. For now, don't use the latest commit; use the second latest. I'll try to fix it soon.

Can you update without reflashing the Pi?

Not Peter, but just throwing this out there: that seems unlikely, as OpenCV is the devil. At the beginning of fall I spent the better part of a week going through various tutorials to get OpenCV working at 4.x.x when all of them were geared toward 3.x.x. I ended up with a working image in the end, but once I turned the Pi loose on the students, the very first one managed to brick the microSD card. I had a 4-6 hour automated build script to install all the correct tools, so I'm guessing it's not a simple bash script to run.

Noted. I see the changes before that and will try them as soon as the 12V battery decides to come up to 12.9 again.

The only upgrade method at the moment is reflashing the SD card. I’ll look into trying to generate some kind of upgrade script, but it’s always going to be more reliable to simply reflash the card.

Cheers mate!
This is great! Just a suggestion, though: you could implement a system that calculates the robot rotation required when only one target is visible in the camera.

We got an rPi set up and ran this code, and it's super slow. I checked the CPU usage on the rPi from the web dashboard and it's only hitting 25%, but the vision tracking is running at ~8 fps. Meanwhile, we have student-written vision code running on our roboRIO at 30 fps. They don't have target grouping working yet, though. Could we maybe work on getting the performance up? Also, I will look into posting the yaw value to NetworkTables. You may see some pull requests from me over the season. Thanks for sharing.
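For the NetworkTables part, I'm thinking something like this minimal pynetworktables sketch; the table name and roboRIO address below are placeholders:

```python
from networktables import NetworkTables

# Placeholders: use your roboRIO's mDNS name or 10.TE.AM.2 address.
NetworkTables.initialize(server="10.39.97.2")
table = NetworkTables.getTable("ChickenVision")

def publish_yaw(yaw_degrees):
    """Push the latest yaw estimate to the robot."""
    table.putNumber("yaw", yaw_degrees)
    NetworkTables.flush()  # send now instead of waiting for the next batch
```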


Just figured out why it was so slow: I used an inefficient blurring algorithm, which slowed it down to about 2 fps.
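For anyone hitting the same thing, the difference looks roughly like this (kernel sizes are illustrative):

```python
import cv2
import numpy as np

frame = np.zeros((240, 320, 3), dtype=np.uint8)  # stand-in for a camera frame

# A large-kernel median blur is very expensive on a Pi and can cut the
# pipeline to a few fps:
slow = cv2.medianBlur(frame, 15)

# A small box blur is usually enough to suppress sensor noise before
# HSV thresholding, at a fraction of the cost:
fast = cv2.blur(frame, (3, 3))
```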

A major update will be released this week which minimizes that and also tracks cargo.
