Quote:
Originally Posted by yash101
I agree with the fact that driver-station vision processing is a great approach. However, there are still latencies.
This is true, but you haven't done any research to prove the latencies have any real effect. For instance, a 10-foot ethernet cable will be ever so slightly slower than a 9.9-foot ethernet cable, but at the scale of FRC, it doesn't matter.
I guess what I'm saying is that you need to try driver-station vision processing before you dismiss it as slow. It is slower, but the difference is barely measurable, and it definitely won't impact your robot's responsiveness, speed, or accuracy. You won't notice the difference.

In fact, I'd be willing to bet that the Pi will actually be slower. Most of the lost time isn't in transmitting, it's in encoding. The USB drivers for the Pi are pretty sketchy, and even if you're using the Axis cam, you'll need to play around with FFmpeg and compile it yourself for the Pi to get any decent response time. The Pi isn't really that much faster than the cRIO, especially running the Linux distro included with it. VxWorks (the OS of the cRIO) is pretty well optimized for networking with the camera/DS, unlike the sketchy USB and TCP stack on the Pi.
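If you want to actually measure it rather than argue about it, here's a minimal sketch of the kind of test I mean. It times a TCP round trip of one frame-sized payload over loopback; the 100 KB frame size and the loopback setup are my assumptions for illustration, not a real camera stream. Point the client at the robot/DS IP instead of 127.0.0.1 to get a number for your actual network.

```python
import socket
import threading
import time

FRAME_BYTES = 100_000  # rough size of one MJPEG frame (assumption)

def echo_server(server_sock):
    # Accept one connection, read a full frame-sized payload, ack it.
    conn, _ = server_sock.accept()
    with conn:
        remaining = FRAME_BYTES
        while remaining > 0:
            chunk = conn.recv(65536)
            if not chunk:
                break
            remaining -= len(chunk)
        conn.sendall(b"ok")

# Stand-in for the receiving end (driver station); real tests would
# point the client at the DS/robot IP instead of loopback.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket()
client.connect(("127.0.0.1", port))
payload = b"\x00" * FRAME_BYTES

start = time.perf_counter()
client.sendall(payload)   # "transmit one frame"
client.recv(2)            # wait for the ack
elapsed_ms = (time.perf_counter() - start) * 1000
client.close()
server.close()

print(f"round-trip for one frame-sized payload: {elapsed_ms:.2f} ms")
```

On loopback this comes back in well under a millisecond on any modern laptop; even over a real FRC link the transmit time is a small slice of the total pipeline compared to camera encoding.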