JeVois Smart Machine Vision

The Pixy cam is, for all intents and purposes, a closed-source device*. You can use its desktop app to configure which colors to track, filter color blobs by size, and set up what the output should look like (SPI, analog signal, etc). The image resolution is small, and you have limited options for shape-based filtering. This makes things like determining which detection is the actual target and which is a giant screen displaying the same color as the target hard to do reliably. On top of this, we have found the persistent settings on the Pixy camera to be unreliable, and we have eliminated it from contention for use on our robots.

The JeVois camera is a system that is meant to be modified. You write (or find) code that can track the target and upload it to the camera's onboard memory. Once the camera boots, it runs your custom code and outputs its results over a hardware or USB serial connection. You control everything the camera code does, from setting camera hardware parameters (exposure, auto white balance, etc) to finding targets, filtering targets, and serializing the results to send over the serial connection.
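The filter-and-serialize stages of such a module can be sketched in plain Python. This is only an illustration of the idea, not the actual JeVois API: the blob tuple format, the threshold values, and the "T x y w h" message layout are all assumptions you would replace with whatever your own module and robot code agree on.

```python
# Hypothetical sketch of the "filter targets, then serialize results" steps
# a JeVois vision module performs each frame. Blob format (x, y, w, h) and
# the serial message layout are illustrative assumptions.

def filter_targets(blobs, min_area=100, min_aspect=1.5, max_aspect=4.0):
    """Keep only blobs whose size and width/height ratio match the target."""
    kept = []
    for (x, y, w, h) in blobs:
        area = w * h
        aspect = w / h
        if area >= min_area and min_aspect <= aspect <= max_aspect:
            kept.append((x, y, w, h))
    return kept

def serialize(blobs):
    """Format each surviving blob as one serial line, e.g. 'T 10 10 60 20'."""
    return ["T {} {} {} {}".format(x, y, w, h) for (x, y, w, h) in blobs]

# Example frame: one target-shaped blob, one noise speck, one square blob
# (e.g. a screen showing the target color) that fails the aspect filter.
blobs = [(10, 10, 60, 20),   # kept: large and wide
         (5, 5, 4, 4),       # dropped: area too small
         (0, 0, 30, 30)]     # dropped: aspect ratio 1.0 is too square
messages = serialize(filter_targets(blobs))
```

On the real camera, each string in `messages` would then be handed to the serial output instead of printed, and the robot-side code would parse the same agreed-upon format.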

Our current strategy, if the 2018 game requires vision tracking, is to have our students develop a JeVois camera module, and if that does not perform at the level it should (for whatever hardware reasons we have yet to find), to go ahead and buy a Limelight camera.

*The code that runs on the Pixy is indeed open to view, but the tracking code itself is part of a firmware image, so modifying it would require you to build a system image and understand most of their software stack. I believe this is outside the scope of what an FRC team should need to do.