pyfrc 'camera simulation' support available

There’s an example here: https://github.com/robotpy/examples/tree/master/physics-camsim. With this example, anytime you hit the trigger, the robot will turn to center itself on a target if it can ‘see’ one.

The idea behind this is that if you have a working vision implementation, it’s useful to simulate what its output would be so you can test your various algorithms (autonomous and otherwise) without needing a robot. This simulation addition computes where your robot is relative to the specified targets, then passes that information back to your physics code, where you can send it to the robot via NetworkTables. The ‘target angle’ is delayed by a specified latency, since a real vision system wouldn’t compute the target angle instantaneously.
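To make the idea concrete, here’s a minimal sketch of the same mechanism: compute the robot-relative angle to a target, and only report it once a latency has elapsed. This is an illustrative stand-in, not the pyfrc API — the class and parameter names below are made up, and the real example linked above handles this for you.

```python
import math
from collections import deque


class SimulatedCamera:
    """Toy stand-in for the camera simulation: computes the angle from the
    robot to a fixed target and only reports it after a configurable latency,
    mimicking the processing delay of a real vision pipeline."""

    def __init__(self, target_x, target_y, fov_deg=60.0, latency=0.2):
        self.target = (target_x, target_y)
        self.half_fov = math.radians(fov_deg) / 2.0
        self.latency = latency
        self.pending = deque()  # (timestamp, angle) samples awaiting "processing"

    def update(self, now, robot_x, robot_y, robot_heading):
        """Call once per physics step with the simulated pose. Returns the
        delayed target angle (radians, relative to the robot heading), or
        None if nothing has been 'seen' recently enough."""
        dx = self.target[0] - robot_x
        dy = self.target[1] - robot_y
        angle = math.atan2(dy, dx) - robot_heading
        # Normalize to [-pi, pi] so the comparison against the FOV makes sense
        angle = math.atan2(math.sin(angle), math.cos(angle))

        # The camera only 'sees' the target when it is inside the field of view
        if abs(angle) <= self.half_fov:
            self.pending.append((now, angle))

        # Release the newest sample whose latency has elapsed
        result = None
        while self.pending and now - self.pending[0][0] >= self.latency:
            _, result = self.pending.popleft()
        return result
```

In your physics code you’d call update() each step with the simulated robot pose, and whenever it returns an angle, push it to the robot over NetworkTables (for example with putNumber on a shared table), just as your real vision code would.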

Hopefully you find this useful. Let me know if you have questions.