I hypothesized that a Raspberry Pi won't have the graphics horsepower to accurately identify multiple objects, especially in autonomous mode. As I see it, I have three options: making vision tracking usable on the Pi as-is, buying a better single-board computer, or linking multiple Raspberry Pis. I believe vision tracking requires a decent GPU to precisely grab the hatch panel and place things on the upper levels of the rocket. Can anyone suggest hardware for vision tracking?
A Raspberry Pi is plenty fast. Graphics cards aren't very useful for FRC vision challenges: the targets are so structured that you can use traditional computer vision algorithms instead of the heavy linear algebra of neural networks, which is what actually needs a GPU to run quickly.
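To make that concrete, here's a minimal sketch of the traditional approach in Python with OpenCV: threshold for the brightly lit retroreflective tape, then find contours. The HSV range is a hypothetical starting point for a green LED ring; you'd tune it on images from your own camera.

```python
import cv2
import numpy as np

# Hypothetical HSV range for retroreflective tape lit by a green LED ring;
# tune these values against real images from your camera.
LOWER = np.array([55, 100, 100])
UPPER = np.array([85, 255, 255])

def find_target(frame):
    """Return the pixel center of the largest bright-green blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # [-2] keeps this working on both OpenCV 3 (3 return values) and 4 (2)
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return (x + w // 2, y + h // 2)
```

None of that needs a GPU; it's cheap per-pixel and contour work that an ARM CPU handles fine at 320x240.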
If the Pi 3 isn't fast enough for you, consider the Odroid C2 with a DietPi install. It's almost the exact same form factor as the Pi, costs $60, has twice the RAM, and should have on the order of 4x the performance of an RPi 3 running stock Raspbian. DietPi is just a more stripped-down Linux, so it boots faster and spends more processor cycles on things you want and fewer on things that don't matter for FRC, like fancy window managers.
You can use the new Raspberry Pi image from WPILib. I haven't looked into it very much, but it seems to simplify a lot of things.
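For reference, that image runs a user vision program built on robotpy-cscore, roughly along the lines of this sketch (following the robotpy-cscore quick-start pattern; the resolution is just an example):

```python
from cscore import CameraServer
import numpy as np

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()  # first USB camera plugged into the Pi
camera.setResolution(320, 240)

sink = cs.getVideo()                 # CvSink for grabbing frames as OpenCV images
img = np.zeros((240, 320, 3), dtype=np.uint8)

while True:
    t, img = sink.grabFrame(img)
    if t == 0:                       # frame grab failed; skip this iteration
        continue
    # run your OpenCV pipeline on img here, e.g. find_target(img) from above
```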
It really depends on what you want to do.
We got around 10-15 frames processed per second (image filtering, finding contours, angle and distance calculation, sending the results through NetworkTables) in 2017 on an RPi 3. At 10 frames per second there is a delay of at least 100 ms (realistically more like 200 ms) between something happening and the roboRIO receiving the information about it, so it can't really be used for closed-loop feedback (you could use a gyro for feedback instead). Camera systems like the Limelight can get up to 90 fps, and you might be able to get even more with an NVIDIA Jetson.
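As a sketch of the last two steps of that pipeline (angle calculation and publishing over NetworkTables), assuming pynetworktables, a 320-pixel-wide image, and a hypothetical 60° horizontal field of view:

```python
import math
from networktables import NetworkTables

IMG_WIDTH = 320   # processing resolution (example)
HFOV_DEG = 60.0   # hypothetical horizontal field of view; use your camera's spec

NetworkTables.initialize(server="10.12.34.2")  # hypothetical roboRIO address
table = NetworkTables.getTable("vision")

def target_angle_deg(cx):
    """Horizontal angle from image center to the target, pinhole camera model."""
    focal_px = (IMG_WIDTH / 2) / math.tan(math.radians(HFOV_DEG / 2))
    return math.degrees(math.atan((cx - IMG_WIDTH / 2) / focal_px))

# after find_target(frame) returns a center (cx, cy):
# table.putNumber("angle", target_angle_deg(cx))
```

Even if this code runs fast, keep in mind the frame itself is already tens of milliseconds old by the time the number lands in NetworkTables.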