How to Program Vision with Java and a Raspberry Pi?


Our team is trying to figure out how to do vision processing with a Raspberry Pi. We're having trouble finding example code to get started with, so we would really appreciate some help! Thank you!

A couple places to start:

  • FRCVision is an off-the-shelf Raspberry Pi image for camera streaming, and it comes with examples (in C++, Java, and Python) to use as the basis for a vision application (the examples don’t do any vision processing themselves; they just serve as a platform).
  • Use GRIP to experiment with different image-processing pipelines. GRIP has a code generator that outputs OpenCV code for the pipeline you create, which can be integrated into the FRCVision example.
  • If you’re resource-constrained, you might also want to look at more “canned” solutions like Chameleon Vision.

Do you have any simple examples? I’m looking at the FRCVision Java example and it’s really confusing.

Most of that code you don’t need to touch; it’s boilerplate to get it working with the FRCVision web dashboard (I should refactor it so it’s less confusing). Your vision processing code goes at lines 290 and 331. You can replace the entire MyPipeline with GRIP-generated code.
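Whatever pipeline you end up with, the listener’s job at that second spot is to turn the pipeline’s result into numbers the robot can use. As an illustration (all names here are hypothetical, and the 320-pixel width and 60° horizontal field of view are assumed camera specs, not values from the FRCVision example), here is a pinhole-camera sketch that converts a detected target’s pixel x-coordinate into a yaw angle:

```java
// Hypothetical helper: convert a target's pixel position into a yaw angle.
// The frame width and field of view below are placeholder values -- use
// your actual camera resolution and spec sheet.
public class TargetYaw {
  /**
   * Pinhole-camera approximation: positive yaw means the target is to the
   * right of the frame center.
   */
  public static double pixelToYawDegrees(double targetX, double imageWidth,
                                         double horizontalFovDegrees) {
    double centerX = imageWidth / 2.0;
    // Focal length in pixels, derived from the horizontal field of view
    double focalPx = centerX / Math.tan(Math.toRadians(horizontalFovDegrees / 2.0));
    return Math.toDegrees(Math.atan((targetX - centerX) / focalPx));
  }

  public static void main(String[] args) {
    // Target centered in a 320-px-wide frame from a ~60 degree FOV camera
    System.out.println(pixelToYawDegrees(160, 320, 60)); // 0.0 (dead ahead)
    System.out.println(pixelToYawDegrees(320, 320, 60)); // ~30 (right edge)
  }
}
```

The listener would compute something like this from the largest contour’s center and publish it (e.g. over NetworkTables) for the robot code to steer on.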

So in the robot code, do I need to include all of the extra stuff? If so, where do I put it? Can I put it in its own class file?

You can move it to a separate file if you want. The extra stuff is needed if you want it to work with the FRCVision webpage (e.g. adding/removing cameras, changing settings, etc.). If you don’t care about that, you can remove that code entirely.
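For a sense of scale, once the dashboard integration is stripped out, what remains is quite small. A rough sketch of the minimal shape, assuming WPILib’s cameraserver/vision libraries are on the classpath (exact package names and the `getInstance()` call vary by WPILib version), with `MyPipeline` standing in for your GRIP-generated or hand-written pipeline class:

```java
// Sketch only: a minimal FRCVision-style main without the web-dashboard
// boilerplate. Assumes WPILib's camera/vision classes and OpenCV natives
// are available; MyPipeline is your pipeline class.
import edu.wpi.first.cameraserver.CameraServer;
import edu.wpi.first.cscore.UsbCamera;
import edu.wpi.first.vision.VisionThread;

public class Main {
  public static void main(String[] args) {
    UsbCamera camera = CameraServer.getInstance().startAutomaticCapture();
    camera.setResolution(320, 240);

    // The listener lambda runs each time the pipeline processes a frame
    VisionThread visionThread = new VisionThread(camera, new MyPipeline(),
        pipeline -> {
          // Copy results out of the pipeline here
          // (e.g. publish target position to NetworkTables)
        });
    visionThread.start();

    // Keep the process alive while the vision thread runs
    for (;;) {
      try {
        Thread.sleep(10000);
      } catch (InterruptedException ex) {
        return;
      }
    }
  }
}
```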

There is also the option of Chameleon Vision if you want to do some vision tracking, similar to GRIP.

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.