Our team is looking to try something we’ve never personally implemented before: an object/vision tracking coprocessor board, which should allow for better autonomous runs and hopefully easier handling. I don’t know much about the process involved or where to start. I’ve been researching various chips I’ve heard of, but beyond that I’d appreciate some suggestions. Which boards are recommended, and how easy are they to integrate into our programming?
Lots of teams use the Raspberry Pi or an Nvidia Jetson board.
The Pi is relatively easy to program on (in Python, and it’s easy to install the OpenCV bindings for Python).
I don’t have any experience with the Jetson, but I assume it’s more difficult to use than the Pi. However, the Jetson is also vastly more powerful.
The third option is an Android phone. A handful of teams used one last season. The advantage is that you get a high-quality camera and a powerful processor in a single package, and it’s easy to use OpenCV from Java. The caveat is that Android OpenCV doesn’t use the GPU by default: to enable it you need a phone with OpenCL support, and you have to use the T-API from native code, which is more difficult.
I know 1296 used the Pixy cam this year. I’ve heard it’s a little tricky to set up (wiring it to the RIO and everything), but it should be very straightforward from there.