Need help with the steps to get vision tracking working on our robot. We have been struggling. Any help is appreciated.
There are multiple ways of including vision on your robot to influence decisions made by both the control code and the driver. Here are the most common ones:
- Transmitting an image back to the driver station: Vision Processing
- Using an external processor (Raspberry Pi, Limelight, Graphics Card, etc.) to send data back to the roboRIO.
- Using the roboRIO's processor to perform image conditioning, detection, and measurement:
  - National Instruments includes the NI Vision Assistant, which can be used by both LabVIEW and Java/C++ teams. Here are some resources to learn how to use it:
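The coprocessor approach above boils down to one pattern: the external processor posts target measurements to NetworkTables, and the robot program reads them and turns them into a drive command. Here is a minimal sketch of that read-and-steer step. A real robot program would read the values through the NetworkTables library (ntcore / pynetworktables); a plain dict stands in for the table here so the logic is self-contained, and the gain `kp` is an arbitrary placeholder you would tune on your own robot.

```python
def steering_command(table, kp=0.03):
    """Turn a horizontal target offset into a proportional turn command.

    Uses the Limelight-style keys: tv = 1.0 when a target is visible,
    tx = horizontal offset to the target in degrees.
    """
    if not table.get("tv", 0.0):
        return 0.0  # no target in view: don't steer
    tx = table.get("tx", 0.0)
    # Clamp the proportional response to the [-1, 1] motor output range.
    return max(-1.0, min(1.0, kp * tx))

# Example: a target 10 degrees to the right produces a small right turn (~0.3).
limelight = {"tv": 1.0, "tx": 10.0}
print(steering_command(limelight))
```

This is just the innermost step; in a periodic robot loop you would feed the returned value to your drivetrain every iteration while a "track target" button is held.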
It will be easier for the community to help if you let us know which method you are currently using.
If you are using a Limelight, we just made a complete LabVIEW example program for 2019, which you can find here:
Using the Raspberry Pi image should be similar, since it also posts values to NetworkTables; you would just need to change the table name and value names to their equivalents.
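That renaming can be kept in one place so the rest of the robot code doesn't care which vision source is plugged in. A sketch of the idea: the Limelight publishes to a table named "limelight" with keys like "tv" and "tx", while the Pi-side names below ("hasTarget", "yawDegrees") are hypothetical stand-ins for whatever your own Pi program publishes. As above, a dict stands in for the NetworkTables table.

```python
# Map a logical name to the key each vision source actually publishes.
LIMELIGHT_KEYS = {"valid": "tv", "yaw": "tx"}
PI_KEYS = {"valid": "hasTarget", "yaw": "yawDegrees"}  # hypothetical names

def read_target(table, keys):
    """Return (target_visible, yaw_degrees) regardless of key naming."""
    return bool(table.get(keys["valid"], 0.0)), table.get(keys["yaw"], 0.0)

# The same downstream robot code works with either source:
print(read_target({"tv": 1.0, "tx": -4.5}, LIMELIGHT_KEYS))
print(read_target({"hasTarget": 1.0, "yawDegrees": 2.0}, PI_KEYS))
```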