Pixy2 Camera help teaching targets and turning

We are trying vision processing for the first time, so there is a lot we don't know. We have a Pixy2 camera and we are able to communicate with it over the I2C connection to the roboRIO; we can get x and y data back.
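For context, a minimal sketch of the kind of I2C exchange involved, based on our reading of the Pixy2 serial protocol docs — the sync bytes, request type, and field offsets below are assumptions to verify against the Pixy2 wiki before relying on them:

```java
// Sketch of the Pixy2 "getBlocks" request and response parsing.
// Byte values and offsets are taken from the Pixy2 serial protocol
// docs as we understand them -- double-check against the wiki.
public class Pixy2Protocol {
    // Request: sync bytes (0xae 0xc1), type 0x20 (getBlocks),
    // payload length 2, then a signature bitmap and max block count.
    public static byte[] buildGetBlocksRequest(int sigmap, int maxBlocks) {
        return new byte[] {
            (byte) 0xae, (byte) 0xc1, (byte) 0x20, (byte) 0x02,
            (byte) sigmap, (byte) maxBlocks
        };
    }

    // Each block in the response payload is 14 bytes; x is a
    // little-endian u16 at offset 2 (right after the 2-byte signature).
    public static int blockX(byte[] payload, int blockIndex) {
        int base = blockIndex * 14;
        return (payload[base + 2] & 0xff) | ((payload[base + 3] & 0xff) << 8);
    }
}
```

On the roboRIO you would send the request and read the reply with WPILib's `I2C` class (`new I2C(I2C.Port.kOnboard, address)` and `transaction(...)`); check your Pixy2's configured I2C address in its settings.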

The next step is to teach the camera to recognize the targets. We are not sure how to do this; we have tried putting the rocket and the cargo ship right in front of the camera, but we are not able to capture just the reflective tape. Any ideas on how to accomplish this?

Let’s say we manage to teach the Pixy2 to recognize the target and we get (x, y) data back. How do we use the x value to make the robot turn so that it stays centered on the target?


There are two main ways to teach the Pixy2 to recognize objects by color. Both are described in this tutorial: https://docs.pixycam.com/wiki/doku.php?id=wiki:v2:teach_pixy_an_object_2

I’ve followed the instructions on their wiki with no luck. It keeps grabbing more than just the reflective tape.

Did you use the PixyMon software to see what the Pixy2 was seeing? Also, you will probably need some kind of uniquely colored LEDs to make the retroreflective tape stand out from the rest of the picture. Have you tried this as well? We use this LED ring from AndyMark: https://www.andymark.com/products/led-ring-green

Also, see the section on "Signature tuning" in the Pixy2 documentation. If a given color signature is picking up too much or too little, you can tune its values a bit.

I am using the PixyMon software. Thanks for the link to the LED ring; I am not using one. I’ll get one and give it a try.

Hey, wondering how this turned out, just seeing this and figured I’d ask.

Long-time programmer here, but quite new to vision. We’re debating between using a Limelight v1 and a Pixy2.

With the Pixy2, x = 160 pixels is the middle of the image. You can use a PID loop that powers the right/left sides of the drive until the target’s x value equals 160.
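A minimal sketch of the proportional part of that loop — `kP` and the output clamp are made-up values you would tune on the real robot, and 160 is the image-center pixel from above:

```java
// Proportional steering toward the target: turn power is proportional
// to how far the block's x is from the image center (160 px).
// kP and MAX_TURN are placeholder values to tune on the real robot.
public class PixyAim {
    static final double CENTER_X = 160.0;
    static final double kP = 0.005;      // motor power per pixel of error
    static final double MAX_TURN = 0.5;  // cap the turn output

    public static double turnPower(double blockX) {
        double error = blockX - CENTER_X;  // positive = target is to the right
        double turn = kP * error;
        return Math.max(-MAX_TURN, Math.min(MAX_TURN, turn));
    }
}
```

You would feed the result into your drivetrain each loop, e.g. `drive.arcadeDrive(forward, PixyAim.turnPower(x))`; a full PID adds integral and derivative terms, which WPILib’s `PIDController` can handle for you.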

The Pixy tends to have a pretty limited range compared to the Limelight, and it is not very good at detecting reflective tape without some modification. I would recommend just getting a Limelight v2: they are easy to set up, and there is already a lot of support for recognizing this year’s vision target with them.

Our team is using the Limelight v2 this year. Other than the documentation on the Limelight website, where can I find support for recognizing this year’s vision target?

The company that produces the Limelight recently released vision models for this year’s vision targets on their website, under "PnP target models".

https://limelightvision.io/pages/downloads
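For basic aiming (separate from the PnP models), the Limelight publishes the horizontal offset to the target as `tx` (in degrees) on the `limelight` NetworkTables table. A sketch of turning that into a steering command — `kP`, the deadband, and the minimum command are made-up values to tune:

```java
// Turn the Limelight's "tx" (horizontal offset in degrees) into a
// steering command. Gains and thresholds are placeholders to tune.
public class LimelightAim {
    static final double kP = 0.03;          // motor power per degree
    static final double DEADBAND = 1.0;     // degrees: close enough, stop
    static final double MIN_COMMAND = 0.05; // overcome drivetrain friction

    public static double steer(double txDegrees) {
        if (Math.abs(txDegrees) < DEADBAND) {
            return 0.0;  // on target
        }
        double cmd = kP * txDegrees;
        // Apply at least MIN_COMMAND in the direction of the target.
        return Math.copySign(Math.max(Math.abs(cmd), MIN_COMMAND), txDegrees);
    }
}
```

On the robot, `tx` comes from NetworkTables: `NetworkTableInstance.getDefault().getTable("limelight").getEntry("tx").getDouble(0.0)`, per the Limelight documentation.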

How do I use the pnp target models?

If you guys are using the Pixy purely for the reflective vision target, I suggest turning the camera brightness way down so it only captures bright light sources. This should let you capture the reflective tape consistently.

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.