Need help with vision tracking

Our team, 2183, has never been able to use vision tracking successfully. Every year our drivers line everything up manually. We need any advice to get vision tracking up and working — er, any help is appreciated.

Here’s a 3-step process that our team used to get started on our first ever vision program this fall.
https://wpilib.screenstepslive.com/s/currentCS/m/vision/l/463566-introduction-to-grip
https://wpilib.screenstepslive.com/s/currentCS/m/vision/l/672730-generating-code-from-grip
https://wpilib.screenstepslive.com/s/currentCS/m/vision/l/674733-using-generated-code-in-a-robot-program

Vision this year will be a bit more challenging because the targets come in pairs; however, the concept is the same.
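To illustrate the "pairs of targets" wrinkle, here's a toy sketch (not anything from the posts above): given the x-coordinates of each detected target's center, pair up adjacent targets and aim at the pair whose midpoint is closest to the image center. The function name, the naive left-to-right pairing, and the assumption that no stray blobs slipped through the filter are all mine — a real pipeline would also use the tape angle to tell left targets from right targets.

```python
def best_pair_midpoint(centers_x, image_width):
    """Pair up sorted target center x-coordinates (pixels) and return the
    midpoint of the pair closest to the center of the image."""
    xs = sorted(centers_x)
    # Naive pairing: assumes targets alternate left/right with no strays
    pairs = [(xs[i], xs[i + 1]) for i in range(0, len(xs) - 1, 2)]
    image_center = image_width / 2
    midpoints = [(a + b) / 2 for a, b in pairs]
    # Aim at the pair whose midpoint is nearest the middle of the frame
    return min(midpoints, key=lambda m: abs(m - image_center))

print(best_pair_midpoint([50, 90, 200, 240], 320))  # pair (200, 240) -> 220.0
```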

Use GRIP to filter out everything that isn't a target, draw bounding boxes around what remains, then get their center coordinates.
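GRIP/OpenCV handle the bounding-box step for you (findContours plus boundingRect in the generated pipeline), but the underlying idea can be sketched from scratch. This is a hypothetical illustration, not generated GRIP code: given a binary mask left over after filtering, flood-fill each white blob, track its bounding box, and report the box centers.

```python
from collections import deque

def bounding_box_centers(mask):
    """Find bounding boxes of connected 1-regions in a binary mask
    (list of rows) and return their center (x, y) coordinates."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Flood-fill this blob, tracking its bounding box
                min_x = max_x = x
                min_y = max_y = y
                queue = deque([(x, y)])
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    min_x, max_x = min(min_x, cx), max(max_x, cx)
                    min_y, max_y = min(min_y, cy), max(max_y, cy)
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                centers.append(((min_x + max_x) / 2, (min_y + max_y) / 2))
    return centers

mask = [
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [0, 0, 0, 0, 1, 1],
]
print(bounding_box_centers(mask))  # [(1.5, 0.5), (4.5, 1.5)]
```

In practice you'd never write this yourself — the GRIP-generated pipeline gives you the contours directly — but it shows what "bounding boxes, then centers" actually means.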

You can then use those coordinates to turn the robot to face the target, etc.
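As a hedged sketch of that last step (not anyone's actual team code): convert the target's pixel x-coordinate into an angle error using the camera's field of view, then feed it to a simple proportional controller. The resolution, FOV, and gain below are made-up placeholders you'd tune for your own camera and drivetrain.

```python
IMAGE_WIDTH = 320     # assumed camera resolution (pixels)
FOV_DEGREES = 60.0    # assumed horizontal field of view
KP = 0.02             # proportional gain, tune on the real robot

def turn_command(target_x):
    """Return a motor turn value in [-1, 1] from the target's pixel x."""
    # Pixel offset from image center, converted to an angle error
    pixel_error = target_x - IMAGE_WIDTH / 2
    degrees_per_pixel = FOV_DEGREES / IMAGE_WIDTH
    angle_error = pixel_error * degrees_per_pixel
    # Proportional control: turn harder the further off-center we are
    return max(-1.0, min(1.0, KP * angle_error))

print(turn_command(240))  # target right of center -> positive turn value
```

Feed that value to your drivetrain's turn input each loop and the robot steers toward the target; adding the full angle math also sets you up for distance estimation later.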

Finding a way to turn the camera's resolution down is a good idea for vision processing. It acts as a natural filter for almost everything else in the image, and with an LED ring nothing on the field will be as bright as the target.

Read up on WPILib's cscore docs.

Thank you, we appreciate the help.