Hello everyone. One of team 6762’s offseason projects was to learn how to implement vision for this season, and we are pretty close. We have a PixyCam v2 with pan/tilt servos running on a BeagleBone Blue and are able to do object tracking using Python 3 (we did have to rewrite some of the code to get it working with Python 3).
Now we are at the stage of transferring this knowledge to our FRC bot. Since the Pixy has a pan/tilt servo mount, we want to get the pan and tilt info out of the Pixy as well. We can do this on our BeagleBot, but we’re wondering if anyone here has recommendations on how to go about it with the Rio. On the BeagleBot we connect to the camera over USB.
Our options as I see them are:
Option 1:
We could use the BeagleBone (or a Raspberry Pi) to export the data to the Rio.
Option 2:
Connect the Pixy directly to the Rio.
In either case, which protocol would you recommend we use to get the data across?
NetworkTables, or one of the serial protocols? (I assume analog is out because of the pan/tilt data.)
I have read through the various Pixy threads on CD but did not see any teams using Python.
I think the easiest way to go about it would be NetworkTables. The RobotPy project has created pynetworktables, which lets you communicate entirely in Python, and since the servo position of the Pixy won’t be changing very quickly, it should be more than fast enough.
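As a rough sketch of what the coprocessor side could look like (the table and key names, and the read_pixy() helper, are just placeholders for whatever your existing Pixy code produces):

```python
# Runs on the BeagleBone/Pi; assumes pynetworktables is installed
# (pip install pynetworktables).
import time
from networktables import NetworkTables


def read_pixy():
    """Hypothetical placeholder for your existing Pixy/servo code.
    Should return (target_x, target_y, pan, tilt)."""
    return 160, 100, 500, 500


# Point at the roboRIO's NetworkTables server (team mDNS name shown;
# the 10.67.62.2 static IP would work too)
NetworkTables.initialize(server='roborio-6762-frc.local')
table = NetworkTables.getTable('pixy')

while True:
    target_x, target_y, pan, tilt = read_pixy()
    table.putNumber('target_x', target_x)
    table.putNumber('target_y', target_y)
    table.putNumber('pan', pan)
    table.putNumber('tilt', tilt)
    time.sleep(0.02)  # ~50 Hz is plenty for servo/target data
```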
For Option 1, I’d definitely recommend NetworkTables; it’s really easy to set up and use, and it lets you use tools like Shuffleboard to do useful things with your data. Either a Pi or a Beagle would work.
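On the robot side a RobotPy program can just read those entries back; a minimal sketch, assuming the same placeholder table/key names as the coprocessor example above:

```python
import wpilib
from networktables import NetworkTables


class MyRobot(wpilib.TimedRobot):
    def robotInit(self):
        # Same table the coprocessor publishes to (placeholder name)
        self.pixy = NetworkTables.getTable('pixy')

    def teleopPeriodic(self):
        # Second argument is the default returned until the coprocessor publishes
        target_x = self.pixy.getNumber('target_x', -1)
        pan = self.pixy.getNumber('pan', 0)
        # ...use target_x / pan to aim, steer, etc.


if __name__ == '__main__':
    wpilib.run(MyRobot)
```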
For Option 2, you could definitely connect the Pixy directly to the Rio. The hard part would be compiling the library you need for the roboRIO. If you decide to go that route and can’t get the library built, file an issue at https://github.com/robotpy/roborio-packages/issues and we can probably build it for you and push it to the robotpy ipkg repo.
If you went this route on the Rio and the Pixy processing is CPU-bound, I’d recommend running it in a separate process, similar to how we run cscore processes.
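Just as a sketch of what that could look like in Python (read_pixy() is again a hypothetical stand-in for your Pixy polling code), you could keep the Pixy loop in its own process and hand the latest result back through a queue so the robot loop never blocks:

```python
import multiprocessing
import queue
import time


def pixy_worker(out_queue):
    """Runs in a child process so Pixy/USB polling can't stall the robot loop."""
    def read_pixy():
        # Hypothetical placeholder for your actual Pixy code
        time.sleep(0.02)
        return 160, 100
    while True:
        try:
            out_queue.put_nowait(read_pixy())
        except queue.Full:
            pass  # robot loop hasn't read the last value yet; drop this one


# In robotInit (or module setup):
results = multiprocessing.Queue(maxsize=1)
proc = multiprocessing.Process(target=pixy_worker, args=(results,), daemon=True)
proc.start()

# In the periodic robot loop: grab the newest result without blocking
try:
    target_x, target_y = results.get_nowait()
except queue.Empty:
    pass  # no new data this iteration
```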
… I’d probably lean towards Option 1 myself, as it forces you to separate your robot and vision logic.