Confused about the Pixy

First of all, I will preface this by saying that we already have a few working Autonomous builds, so don’t worry about this being super late in the season - it’s mostly a “add it if we can, scrap it if we can’t” kind of thing.

Anyways, we’re trying to hook up a Pixy cam for tracking the gear pegs. However, we have no real clue what that entails. We have read the porting guide, but any further advice (and especially examples) would be much appreciated. To start with, the cable that came with the Pixy is much too short and doesn’t appear to plug in to the roboRIO, so we need another cable. Should we use I2C or SPI? And, once it is connected, how do we pull the data for two objects in LabVIEW (in order to average the distance and find their center)?

Again, any help is much appreciated, but don’t spend too much time freaking out over how “late” this is. :stuck_out_tongue:


We found that the Pixy cam generated a pretty steady stream of data that included all targets seen. Using the RoboRio to parse it all didn’t work well because of the need to attend to other things, like running a robot.

We used an Arduino to read the cam data and sort out what we wanted, which is just the data belonging to the first target. We then sent that to the roboRIO over I2C, so the data for just the main target is always waiting to be used.
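For anyone curious what "sorting out" the stream involves: the Pixy1 sends each detected object as seven little-endian 16-bit words (sync, checksum, signature, x, y, width, height), with a double sync word marking a new frame. Here is a rough sketch of that parsing in Python rather than Arduino C or LabVIEW, just to show the logic; the dict field names are my own, and you should check the Pixy serial protocol docs before trusting the exact framing.

```python
import struct

SYNC = 0xAA55  # Pixy1 "normal object" sync word

def parse_blocks(buf):
    """Pull Pixy-style object blocks out of a raw byte buffer.

    Each block is seven little-endian uint16s:
    sync, checksum, signature, x, y, width, height.
    The checksum is the 16-bit sum of the last five words;
    blocks that fail it are skipped.
    """
    blocks = []
    i = 0
    while i + 14 <= len(buf):
        sync, checksum, sig, x, y, w, h = struct.unpack_from("<7H", buf, i)
        if sync != SYNC:
            i += 1          # not aligned yet; resync byte-by-byte
            continue
        if checksum == SYNC:
            i += 2          # two sync words in a row = start of a new frame
            continue
        if checksum == (sig + x + y + w + h) & 0xFFFF:
            blocks.append({"sig": sig, "x": x, "y": y, "w": w, "h": h})
            i += 14
        else:
            i += 2          # corrupt block; skip this sync and keep looking
    return blocks

def largest_block(blocks):
    """Return the block with the greatest pixel area, or None if empty."""
    return max(blocks, key=lambda b: b["w"] * b["h"], default=None)
```

The "keep only the main target" step the Arduino does is then just `largest_block(parse_blocks(buf))` once per frame.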

You can put the Pixy in analog/digital X mode (best for tracking pegs, I would think) or analog/digital Y mode, and it gives you a single digital output telling you whether it sees anything. If it does, an analog output tells you where in X or Y the centroid of the biggest object is centered. It does not get much easier than that.

Training the Pixy is a little tricky, but that is true no matter how you use it. And the cables require better-than-average cable-making skills and tools.

The problem with that is that the gear peg has two targets, and we need to find the center of those two.

Yeah, we are hoping it sees the leftmost.

Which means you have quite a bit of uncertainty, especially as the robot actually approaches the spring. I am sure there are ways to reduce that uncertainty, but I would personally prefer to remove it.

Rather than using the analog/digital X configuration, you could go with the regular SPI setup and have access to all the objects the Pixy cam sees. That also lets you look at the two largest objects, so you don’t have to avoid seeing the right-hand vision target. More info
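Once you have the full block list over SPI, the "center of the two targets" math the original poster asked about is just the midpoint of the two largest blocks' x-centroids. A minimal sketch (block dicts with `x`/`w`/`h` fields are my own convention, matching whatever your SPI parser produces):

```python
def peg_center_x(blocks):
    """Estimate the peg's x position from the two retroreflective targets.

    The 2017 gear peg sits between two pieces of tape, so the midpoint
    of the two largest detected blocks approximates the peg itself.
    Returns None until both targets are in view.
    """
    if len(blocks) < 2:
        return None
    two = sorted(blocks, key=lambda b: b["w"] * b["h"], reverse=True)[:2]
    return (two[0]["x"] + two[1]["x"]) / 2
```

Because it sorts by area, a third, smaller blob (reflection, other peg's tape at a distance) gets ignored automatically.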

Which is the plan… if we can figure out how to attach the dang thing to our rio. We aren’t exactly capable of making a custom cable.

If you aren’t capable of making a custom cable, I’d look into the USB library for Pixy cameras, as that might be your only choice. Most other options will need a custom cable setup because of how the roboRIO is set up.

Looking closer, our rio has an unused gyroscope in the SPI port. If I got an SPI cable, could I directly connect the Pixy to the rio? They use the same port.

Just an idea:

But you can change the focus of the Pixy by turning the lens. What if you defocus it enough that the two reflective targets blur together into one? Then you can use the analog X out to track it and simplify things.

That would be hilarious. I’ll try it in Pixymon right now.

Edit: Need a mini-usb cable, give me a minute.

Sadly, no dice. The image gets too blurry to sight the peg before the two targets merge into one.

Yes, you can definitely use the SPI port on the RoboRIO. We have had success with ~1m ribbon cable over SPI. See this post for more details: https://www.chiefdelphi.com/forums/showpost.php?p=1659781&postcount=30