Quote:
Originally Posted by MamaSpoldi
Team 230 also used a PixyCam... with awesome results, and we integrated it in one day. The PixyCam does the image processing on-board, so there is no need to transfer images. You can quickly train it to search for a specific color and report when it sees it. We selected the simplest interface option provided by the Pixy, which involves a single digital output (indicating "I see a target") and a single analog output (which provides feedback on where within the frame the target is located). This allowed us to provide a driver interface (and also program the autonomous code) that uses the digital output to tell us when the target is in view and then lets the analog value drive the robot rotation to center the goal.
We already had our tracking code written for the Axis camera, so our P loop needed only a tiny adjustment (instead of a range of 320 pixels, the input range was 0-5 volts), so the PixyCam swap was almost zero code change.
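To illustrate, here's a minimal sketch (not Team 1261's actual code) of what that P loop might look like after the swap. It assumes the Pixy's analog output spans 0-5 V across the frame, so 2.5 V is center; `KP` is an illustrative gain you'd tune on the robot:

```java
public class PixyTracker {
    static final double CENTER_VOLTS = 2.5; // assumed frame center (0-5 V range)
    static final double KP = 0.4;           // illustrative gain, tune on the robot

    // Returns a rotation command in [-1, 1] from the Pixy's analog voltage.
    // With the Axis camera this was (pixelX - 160) / 160; only the scaling changed.
    public static double rotationCommand(double volts) {
        double error = (volts - CENTER_VOLTS) / 2.5; // normalize error to [-1, 1]
        double cmd = KP * error;
        return Math.max(-1.0, Math.min(1.0, cmd));   // clamp to motor range
    }
}
```

On the robot you'd feed `rotationCommand()` the value from an analog input channel each loop iteration and pass the result to your drivetrain's rotate axis.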
We got our PixyCam hooked up and running in a few hours. We only used the analog output; we didn't have time to get the digital output working. So if it never saw a target (output value of around 0.43 volts, I believe), the robot would "track" to the right constantly. But that is easy enough to fix in code: if the "center" position doesn't update, you aren't actually tracking.
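That fix could be sketched like this (a hypothetical example, not the team's code): reject readings near the idle voltage, and treat a value that stops changing as "no target." The 0.43 V idle value comes from the post; the tolerance and stale-read limit are made-up numbers you'd tune:

```java
public class TargetPresence {
    static final double IDLE_VOLTS = 0.43; // reported output when no target is seen
    static final double TOLERANCE = 0.05;  // assumed dead band around idle
    static final int STALE_LIMIT = 10;     // assumed: ~10 unchanged reads = frozen

    private double lastVolts = Double.NaN;
    private int staleCount = 0;

    // Returns true only if the reading looks like a real, updating target.
    public boolean seesTarget(double volts) {
        boolean nearIdle = Math.abs(volts - IDLE_VOLTS) < TOLERANCE;
        if (volts == lastVolts) {
            staleCount++;          // value frozen: probably not tracking
        } else {
            staleCount = 0;
            lastVolts = volts;
        }
        return !nearIdle && staleCount < STALE_LIMIT;
    }
}
```

Gating the P loop on `seesTarget()` would stop the constant rightward "tracking" when the camera has nothing in view.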
If we had more time, we probably would have interfaced with the camera over I2C or SPI to get more data.
I know of at least 2 other teams from Georgia who used the PixyCam as well, adding it during or after the DCMP.