Reliability of Pixy Camera?

Our team is considering purchasing a Pixy camera (link below) to hopefully use in next year’s game. For the teams that have used a Pixy in the past, how reliable has it been? As it uses hue-based filtering, have any teams had problems tracking greyscale objects (such as the 2016 boulders)?

Also, after reading through the docs it looks like the x, y locations it returns are pixel-based. This seems like it may be an issue, as the camera appears to have a fairly fish-eye lens (see photo linked below). Has this been an issue for any teams that do vision tracking by using vector-based calculations and moving by gyro angle rather than direct camera feedback?

Pixy: http://charmedlabs.com/default/pixy-cmucam5/
Fish-eye example: http://i74.photobucket.com/albums/i241/cmucam/Image209_zps1e87977b.jpg

Thanks!

You’re not going to be able to track the greyscale boulders, but the Pixy should be fine for stuff like the retroreflective tape on the goal. At least from our observations, the Pixy’s image has very little contrast and color saturation; everything’s pretty washed out, so you’ll only be able to track bright colors.

Fisheye shouldn’t be that much of a problem. Worst case, you can use OpenCV to determine the distortion coefficients, then apply a correction in software.
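If it helps, once OpenCV’s offline checkerboard calibration (Calib3d.calibrateCamera) has given you the coefficients, applying the correction to the Pixy’s reported pixel coordinates is only a few lines. Here’s a rough Java sketch of the usual radial-model fix; all the intrinsics below are placeholders you’d replace with your own calibration output:

```java
/** Corrects a distorted pixel coordinate with a simple radial model.
 *  All constants are placeholder values -- substitute the output of your
 *  own OpenCV calibration (Calib3d.calibrateCamera). */
public class LensCorrection {
    static final double FX = 275.0, FY = 275.0; // focal length in pixels (assumed)
    static final double CX = 160.0, CY = 100.0; // optical center (Pixy blocks are 320x200)
    static final double K1 = -0.30, K2 = 0.10;  // radial distortion coefficients (assumed)

    /** Returns {x, y} undistorted, given a distorted pixel measurement. */
    public static double[] undistort(double px, double py) {
        // Normalize to camera coordinates.
        double xd = (px - CX) / FX;
        double yd = (py - CY) / FY;
        // Invert x_d = x_u * (1 + k1*r^2 + k2*r^4) by fixed-point iteration.
        double xu = xd, yu = yd;
        for (int i = 0; i < 5; i++) {
            double r2 = xu * xu + yu * yu;
            double scale = 1.0 + K1 * r2 + K2 * r2 * r2;
            xu = xd / scale;
            yu = yd / scale;
        }
        return new double[] { xu * FX + CX, yu * FY + CY };
    }
}
```

You’d run this on each block position the Pixy reports, so the downstream vector math sees straightened coordinates.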

My opinion is that the Pixy would probably be better on an FTC- or VEX-scale field. Lighting variations and FRC field scale gave it problems when we tested it. It might be better to bite the bullet and invest resources in learning GRIP/OpenCV with some camera choices. Our students are working on a Pi 3 and OpenCV for next year; we used GRIP and a Pi 2B this year. I would say that GRIP was a good learning tool.

Hey, so my team bought the Pixy during the off-season. It is one of the easiest vision cameras to use. From what I have seen of my team’s Pixy, it doesn’t have too much of a fisheye effect; even if it does, our team didn’t do anything to correct it and it works great at our shop. Sadly we didn’t get to try it out at an off-season event. The only problem with the Pixy is that you can’t really get both x and y, at least yet. What is very helpful is that to make it work you just have to tell your Pixy what it is looking for once, in the computer interface, and then it sends you a voltage depending on where the target is, which you can code against.

I have been using the Pixy camera for some fun off-season projects. Here is our auto-aiming ping pong robot: https://www.youtube.com/watch?v=2pQD2WqQjCM

It is very impressive, but there are some big downsides to it:

  1. If you ever brown out the camera, it will lose its configuration.
  2. The camera does not have great resolution, so distance is a problem.
  3. It cannot do any advanced shape detection, which is almost a must in FRC.

Overall I would not recommend it for FRC. It is a lot of fun to play with and learn from, but just not great for FRC.

I’m grabbing your last two points because I have concerns unrelated to the Pixy.

  1. High resolution and long range aren’t really required for FRC; shooting from the defenses was very doable for us last year at reasonably low resolutions. I’ve also had good luck with the CMUcam3 in the past, which was probably comparable resolution. Typically the first step I do in any image processing is to lower the resolution of the image so I can process it faster (see the sketch after this list).

  2. Last year we did zero shape recognition; we relied almost exclusively on lowering the exposure on the camera.
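For anyone newer to vision, the general shape of that approach (downscale first, then threshold the bright tape) looks something like this. This is just a sketch using OpenCV’s Java bindings, not our actual code, and the HSV bounds are guesses you’d tune to your own camera and lighting:

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class TapeFinder {
    /** Downscales a frame, then isolates the lit retroreflective tape.
     *  Assumes a low camera exposure so the tape is the brightest thing in view. */
    public static Rect findTarget(Mat frame) {
        // Halve the resolution first so everything after runs roughly 4x faster.
        Mat small = new Mat();
        Imgproc.resize(frame, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA);

        // Threshold for the bright green glow (HSV bounds are placeholders to tune).
        Mat hsv = new Mat();
        Imgproc.cvtColor(small, hsv, Imgproc.COLOR_BGR2HSV);
        Mat mask = new Mat();
        Core.inRange(hsv, new Scalar(50, 100, 150), new Scalar(90, 255, 255), mask);

        // Keep the largest blob that survived the threshold.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        Rect best = new Rect();
        for (MatOfPoint c : contours) {
            Rect r = Imgproc.boundingRect(c);
            if (r.area() > best.area()) best = r;
        }
        return best; // a zero-area Rect means nothing was found
    }
}
```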

I like to think our camera tracking was pretty reliable last year. I guess I’m just disagreeing that you really need high resolution and shape recognition.

On 1296 this year we used the Pixy for our vision in both auton and teleop, and it worked great! We did try to use it for some boulder tracking, and that did not work well beyond ~6 in, due to the grey boulder against the grey carpet along with the boulder’s lack of reflectiveness. For a year that does not need shape recognition, like this year with the tower, the Pixy seems to be a relatively easy solution. My favorite part is PixyMon, which allows for super easy tuning.

*Disclaimer: I am not a programmer/software guy, and most of this info was relayed to me by our programmer.

This is probably a better question for your programmers, but how did you interface with the Pixy?

We used analog and DIO.

Agreed. We did basically the same and had 100% auto shot accuracy at champs. We did use a higher resolution for more precise off-center measurements, but it was working almost as well with a lower resolution.

We used the Pixy camera from week 5 on in Stronghold. It was the key to our autonomous high shot and a major contributor to our 9-11 high goal matches. The interface we chose to use was the simplest one (digital/analog X). I found it to be a bit touchy to set up, but once calibrated to the lighting in each venue it worked well.
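For anyone wondering what that interface looks like on the roboRIO side, it’s about as simple as vision gets: one DIO that goes high when the Pixy sees the taught signature, and one analog voltage proportional to the target’s X position. A minimal WPILib Java sketch; the channel numbers and the 0-3.3 V scaling are my assumptions, so check them against your own wiring and the Pixy docs:

```java
import edu.wpi.first.wpilibj.AnalogInput;
import edu.wpi.first.wpilibj.DigitalInput;

/** Reads the Pixy's digital/analog X interface.
 *  Channels and voltage range are assumptions -- verify against your wiring. */
public class PixyAnalog {
    private final DigitalInput targetSeen = new DigitalInput(0); // high when a signature is in view
    private final AnalogInput targetX = new AnalogInput(0);      // voltage proportional to X

    /** True when the Pixy has a lock on the taught signature. */
    public boolean hasTarget() {
        return targetSeen.get();
    }

    /** Target X scaled to -1.0 (far left) .. +1.0 (far right), assuming 0-3.3 V out. */
    public double getX() {
        return (targetX.getVoltage() / 3.3) * 2.0 - 1.0;
    }
}
```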

We did change the lens from the stock one (75 degree horizontal field of view) to, I think, a 51 degree field of view.

I could be wrong about using it on an FRC robot; sounds like lots of people have had good luck with it.

Has anyone else seen the problem where, if the board browns out, you lose your config? Maybe there is just something wrong with my board.

How are you able to use OpenCV with the Pixy? I was under the impression that there was no way to pull the raw feed from the camera.

Although I don’t know how, I would imagine there is some way of getting the raw feed from the Pixy over USB. I say this because in PixyMon you can view the live feed from the camera. Again, though, I’m not sure how one could access this through, say, OpenCV.

You aren’t. I think what they are describing is taking an image off the Pixy and running it through OpenCV’s distortion calibration, then in robot code applying the resulting correction to the position returned by the Pixy.

So the Pixy has pretty good resolution, though depending on the year it may not work well enough; we actually got ours to work out to about the mid-line. The boulders do seem to be a problem, though. Also, getting a live feed from the Pixy isn’t really possible yet. (I’m hoping Pixy releases an FRC version, because they have an FLL version.) Still, my team didn’t need to use anything other than what is provided by Pixy (the camera and PixyMon) to interface. We also didn’t have any brownout problems. I do believe the Pixy is well worth the cost, because on-board processing is better than off-board processing; to set up the off-board equivalent you would need a camera, a Kangaroo computer, and LED lights, which all comes to a much heftier price than the Pixy. (My team paid $100 for our setup, and the other setup I’m describing is nearly $200 or more.)

To add to that: 230 had one of the top scoring robots in the Carver division and was the first pick during alliance selections. Talking to them in the pits, I was surprised that such a simple targeting system could work so well. OP, you should bug them for details, as it can clearly be used reliably!

-not a programmer. :stuck_out_tongue:

After seeing the great results people have had, I started looking into the I2C communication. Any chance anyone has written the I2C communication for LabVIEW?
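I haven’t seen a LabVIEW version, but the block protocol is simple enough to port. Here’s a rough Java sketch of just the framing logic to show what’s involved; it’s untested and assumes the documented protocol details (0xaa55 sync words, default I2C address 0x54, and blocks of little-endian 16-bit fields), so double-check those against the Pixy serial protocol docs before porting:

```java
import edu.wpi.first.wpilibj.I2C;

/** Minimal Pixy block reader over I2C -- a porting reference, not production code.
 *  Assumed block layout: sync 0xaa55, then 16-bit little-endian checksum,
 *  signature, x, y, width, height. */
public class PixyI2C {
    private final I2C pixy = new I2C(I2C.Port.kOnboard, 0x54); // 0x54 = assumed default address

    /** Returns {signature, x, y, width, height} for one block, or null if none found. */
    public int[] readBlock() {
        byte[] buf = new byte[32];
        pixy.readOnly(buf, buf.length);

        // Scan for the sync word (arrives little-endian: 0x55 then 0xaa).
        for (int i = 0; i + 13 < buf.length; i++) {
            if ((buf[i] & 0xff) == 0x55 && (buf[i + 1] & 0xff) == 0xaa) {
                int checksum = word(buf, i + 2);
                int sig = word(buf, i + 4);
                int x = word(buf, i + 6);
                int y = word(buf, i + 8);
                int w = word(buf, i + 10);
                int h = word(buf, i + 12);
                // The checksum is the sum of the five data fields; a mismatch
                // usually means we caught a frame-start double sync, so keep scanning.
                if (checksum != 0 && checksum == sig + x + y + w + h) {
                    return new int[] { sig, x, y, w, h };
                }
            }
        }
        return null;
    }

    private static int word(byte[] b, int i) {
        return (b[i] & 0xff) | ((b[i + 1] & 0xff) << 8);
    }
}
```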

To teams that got the Pixy to work: where did you shoot from? Did you shoot from the defenses? Did you have to be centered and perpendicular to the target? What did you do when positioned to the side and two targets were in view?

We shot from anywhere along the outer works and from along the left edge of the field (as viewed from our driver station). Our preferred and most practiced spot was halfway between the low bar and the wall along the left side.

The two-target problem turned out to be no problem at all. Using the digital/analog X interface, the Pixy gave the X position for only the largest target it saw. In every case that target was the goal most perpendicular to our catapult, and therefore the goal we wanted to be aiming at.

For clarification, we used the two signals from the Pixy as follows:
The digital output told the robot when the Pixy saw what we taught it to look for anywhere within its visual field. At that point the robot removed yaw authority from our driver and aimed itself. A large indicator on the driver station told our operator when the robot was aligned and ready to shoot. At that same time, full drive authority was returned to our driver, just in case we needed to relocate due to being defended.
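For anyone wanting to replicate that handoff, here’s roughly what the pattern looks like in current WPILib Java. This is a sketch of the idea rather than their actual code; the channels, gain, and aligned threshold are all placeholders to tune:

```java
import edu.wpi.first.wpilibj.AnalogInput;
import edu.wpi.first.wpilibj.DigitalInput;
import edu.wpi.first.wpilibj.TimedRobot;
import edu.wpi.first.wpilibj.XboxController;
import edu.wpi.first.wpilibj.drive.DifferentialDrive;
import edu.wpi.first.wpilibj.motorcontrol.PWMSparkMax;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

/** Sketch of the yaw-handoff pattern: the driver keeps throttle, the Pixy steers.
 *  Channels, gain, and threshold are placeholders to tune on a real robot. */
public class Robot extends TimedRobot {
    private final DifferentialDrive drive =
            new DifferentialDrive(new PWMSparkMax(0), new PWMSparkMax(1));
    private final XboxController driver = new XboxController(0);
    private final DigitalInput targetSeen = new DigitalInput(0); // Pixy "target in view"
    private final AnalogInput targetX = new AnalogInput(0);      // Pixy analog X

    private static final double KP = 0.8;       // steering gain (tune)
    private static final double ALIGNED = 0.05; // "close enough" band (tune)

    @Override
    public void teleopPeriodic() {
        double throttle = -driver.getLeftY();
        // Scale the assumed 0-3.3 V output to a -1..+1 aiming error.
        double error = (targetX.getVoltage() / 3.3) * 2.0 - 1.0;

        if (targetSeen.get() && Math.abs(error) > ALIGNED) {
            // Pixy sees the goal: take yaw from the driver and steer onto it.
            drive.arcadeDrive(throttle, -KP * error);
            SmartDashboard.putBoolean("Aligned", false);
        } else {
            // Aligned (or no target): full authority back to the driver,
            // and light the "ready to shoot" indicator when locked on.
            drive.arcadeDrive(throttle, driver.getRightX());
            SmartDashboard.putBoolean("Aligned", targetSeen.get());
        }
    }
}
```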