Re: 30fps Vision Tracking on the RoboRIO without Coprocessor
The Zynq architecture puts a hard ARM CPU and an FPGA on a single chip. The ARM is completely open because the FRC safety code has been pushed into the hard-realtime FPGA. Current tools do not make partial FPGA updates easy, so the FPGA image is static for FRC during the regular season. If you want to use the tools to change it in the offseason, go for it.
The FRC FPGA doesn't currently have any vision processing code in it. It wasn't a priority compared to accumulators, PWM generators, and the I2C, SPI, and other bus implementations. If you get specific enough about how you want the images processed, I suspect there are some gates to devote to it. But often the advantage of using an FPGA is a highly parallel, highly pipelined implementation, and that can take many, many gates. And if the algorithm isn't exactly what you need, you are back to the CPU.
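To make "highly parallel, highly pipelined" concrete, here is a hedged sketch in plain C++ (no FPGA tooling involved, and the thresholds are made up for illustration) of the kind of per-pixel kernel an FPGA implementation would turn into a pipeline, classifying one pixel per clock:

```cpp
#include <cstdint>
#include <vector>

// A per-pixel classification kernel: the kind of operation an FPGA
// pipelines so one pixel is classified per clock cycle. On a CPU the
// same loop runs serially (or SIMD-vectorized at best).
// Threshold values are illustrative, not from any FRC release.
std::vector<uint8_t> thresholdGreen(const std::vector<uint8_t>& rgb,
                                    int width, int height) {
    std::vector<uint8_t> mask(width * height);
    for (int i = 0; i < width * height; ++i) {
        uint8_t r = rgb[3 * i + 0];
        uint8_t g = rgb[3 * i + 1];
        uint8_t b = rgb[3 * i + 2];
        // Each output pixel depends only on its own input pixel,
        // which is exactly what makes this easy to pipeline.
        mask[i] = (g > 128 && g > r + 40 && g > b + 40) ? 255 : 0;
    }
    return mask;
}
```

Because there are no data dependencies between pixels, an FPGA can stream an image through a kernel like this at camera line rate; an algorithm with global dependencies loses that advantage, which is the trade-off described above.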
So, with today's tools, CPU, GPU, and FPGA are all viable ways to do image processing. All have advantages, and all are challenging. There are many ways to solve the FRC image processing challenges, and none of them are bullet-proof. That is part of what makes it a good fit for the competition.
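As a point of reference for the CPU path, here is a minimal sketch of the threshold-and-contour approach many FRC teams use, assuming OpenCV is available on the processor (the HSV bounds and area cutoff are placeholders to tune, not values from any FRC code):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Minimal CPU vision pipeline: HSV threshold, then contour extraction.
// HSV bounds are placeholders; tune them for your lighting and target.
int main() {
    cv::VideoCapture cap(0);  // first attached camera
    cv::Mat frame, hsv, mask;
    std::vector<std::vector<cv::Point>> contours;

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(60, 100, 100),    // lower HSV bound
                    cv::Scalar(90, 255, 255), mask);  // upper HSV bound
        cv::findContours(mask, contours, cv::RETR_EXTERNAL,
                         cv::CHAIN_APPROX_SIMPLE);
        // Each contour is a candidate target; score by area, aspect, etc.
        for (const auto& c : contours) {
            if (cv::contourArea(c) > 100.0) {
                cv::Rect box = cv::boundingRect(c);
                // Use box position to compute an aiming error here.
            }
        }
    }
    return 0;
}
```

How fast this runs on the RoboRIO's ARM depends heavily on resolution and how much work happens per frame, which is the CPU-side challenge mentioned above.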
Greg McKaskle