Thread created automatically to discuss a document in CD-Media.
Using JeVois camera in FRC by: billbo911, Team 2073.
This “Quick Start Guide” will give you all the tools to get started running your Vision Tracking code on the JeVois camera in an FRC environment.
This document makes use of many of the tutorial steps provided by JeVois.org and adds to them a practical and straightforward approach to getting started with the JeVois camera in an FRC environment. The original upload is a preliminary version; as feedback is provided, updated versions will be uploaded. As it is now, this document is fairly complete and should be able to get you tracking with your code. It is written with the assumption that you will be using OpenCV and Python. Translating it for use with C, C++, etc. should be fairly simple, but that will be left up to the reader.
Please provide feedback so future versions can be even more complete.
Wow, that means more to me than you could possibly know.
Why? It was a paper produced by Spectrum that inspired me several years ago to dive headfirst into vision tracking! In fact, if you look at our code from as recently as two years ago, you can still see remnants of your code in there.
While this paper is not intended to show how to actually track an object, it is intended to show how to run a team's code on the JeVois.
Later this year, 2073 will release our actual code running on the JeVois. Even that will look a bit like what was originally presented by Spectrum, because identifying targets based on color, shape, and size is a commonly used approach.
I actually used many of the JeVois.org tutorials to build this guide, combining information spread out across multiple tutorials and examples. The tracking code we are running is a custom modified version of the code we have been using for the last three to four years on our robots.
The code we use to track runs at 60 FPS. It acquires frames from the camera in YUYV and sends the target location data to the roboRIO via USB serial (USB 2.0, so up to 480 Mb/s).
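On the JeVois side, the Python module typically ends each `process()` call by emitting one line of targeting data over the serial link. A minimal formatting helper is sketched below; the `TGT` keyword and field order are purely hypothetical placeholders (each team defines its own message format), and inside a real JeVois module you would hand the resulting string to `jevois.sendSerial()`.

```python
def format_target_message(x, y, area):
    """Build one serial line describing a target, e.g. 'TGT 160 120 2500.0'.

    The format is a placeholder; use whatever your roboRIO code expects.
    In a JeVois Python module, send it with jevois.sendSerial(msg).
    """
    return "TGT {:d} {:d} {:.1f}".format(x, y, area)
```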
There is a bogus concept in the JeVois engine of reporting the FPS based on the time it takes to process the image and create the targeting data. This method claims ~150 FPS. In reality, you can only track as quickly as the camera can acquire images, so 320x240 YUYV maxes out at 60 FPS.
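To make the distinction concrete: even if processing a frame takes only ~1/150 s, the pipeline still waits on the sensor, so real throughput is capped by the camera's frame rate. A quick back-of-the-envelope sketch:

```python
def effective_fps(camera_fps, processing_time_s):
    """Actual throughput is the slower of frame acquisition and processing."""
    processing_fps = 1.0 / processing_time_s
    return min(camera_fps, processing_fps)

# Camera delivers 320x240 YUYV at 60 FPS; processing takes ~1/150 s per frame.
# The "150 FPS" figure only measures processing; real throughput stays at 60.
```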
The one thing I have noticed about the Mini USB connector is that it is really snug at first. When it is repeatedly inserted and removed hundreds of times during testing, it tends to loosen up, so a method to restrain it is advisable. The TTL connector has shown no signs of loosening up.
Boot time is under 10 seconds, with reboots around 15.
As an additional note, there are four mounting holes in the bottom of the housing that provide a very convenient method of securing the camera.
As for durability, Marshal’s description says it all.
…and to complete the integration between the roboRIO and JeVois, on the roboRIO side you just use the normal WPILib SerialPort API. We’ve also tested it as part of our 2018 beta testing and it worked fine…great stuff!
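On the roboRIO, the integration amounts to reading a line from the WPILib SerialPort and parsing it into numbers. The parsing step is sketched below in Python for consistency with the rest of this guide (a Java version against the WPILib SerialPort API would follow the same logic); the `TGT <x> <y> <area>` format is a hypothetical placeholder, since each team defines its own message.

```python
def parse_target_message(line):
    """Parse a hypothetical 'TGT <x> <y> <area>' line sent by the JeVois.

    Returns (x, y, area), or None if the line is malformed. The 'TGT'
    keyword and field order are placeholders; match whatever format your
    JeVois module actually sends.
    """
    parts = line.strip().split()
    if len(parts) != 4 or parts[0] != "TGT":
        return None
    try:
        x, y = int(parts[1]), int(parts[2])
        area = float(parts[3])
    except ValueError:
        return None
    return (x, y, area)
```

Rejecting malformed lines matters in practice: serial reads can return partial lines, especially right after the camera boots.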
We had no intention of covering the roboRIO side in our original document, but maybe it would be a really good thing to add. The only question is, which language(s) to cover. We are a Java team, so that’s a no-brainer.
OK, I’ll add it to the list.
My philosophy is that it's better to share and document whatever you do and not worry about covering things you don't use. If there is demand, someone will publish a LabVIEW or C++ example later if you publish the Java code in your document.
Thanks for the replies, this product looks really promising. I went ahead and ordered one to try out, maybe it will make it to our robot next year! The Android phone is much more expensive and annoying to deploy to.