Reliability of Pixy Camera?

Mittens (1296) shot from most places; the best/favorite spots were along the defenses in positions 3 and 4 for the safe shot, about 10 ft past the low bar (when cycling), or between the batter and the defenses in front of the center goal. The driver was trained to know where the Pixy would get confused (seeing two goals), and he would avoid those areas.

Two things:

  1. What settings would you alter in order to get the best performance? Actual numbers on specific settings would be very helpful; I'm curious to know what you did to get the best performance possible.

  2. What benefits did you find in changing the lens? What issues did you experience that made you look into changing the lens?

To anyone else who ran the Pixy this past year or has other experience, please feel free to chime in with your answers too.

Thanks everyone for all of the great responses!

I just have one other question. Is there a way to pull the raw footage from the camera in case more advanced vision processing is needed? It looks like it may be possible via a USB connection, but I don't see anything specifically referring to it in the USB API docs.

The Pixy is primarily a color sensor, so there is a balance to be found between consistently seeing what you want it to see and excluding everything else. There are settings within the Pixymon calibration program to teach it how close a color match must be to count as a target, plus camera exposure, white balance, etc. We had some control over potential interference sources because we were looking for the specific orange of our targeting LEDs. We chose orange because it was not present in the field lighting. Others use green; YMMV. I don't know specific calibration numbers, as the robot is hibernating at the school.

The lens choice was also made to reduce the potential sources of false targeting. Using the Pixymon program to see what the camera saw, we found the 51 degree lens gave us the best view of the field with the least view beyond it.

The system was not perfect and needed tweaking along the way. The scrolling LED display along the arena perimeter at champs was occasionally orange and had to be considered. We also found that the end wall diamond plate at champs was highly polished, far more so than at our district events. That caused our robot to shoot at its own reflection once in autonomous (funny in hindsight). After teaching it to turn 30 degrees to starboard before looking for a target, that was no longer an issue.

The Pixy can actually be set to look for 7 simultaneous colors. Who knows, that may be useful this year.

I haven’t used the pixy cam on an FRC robot, but I purchased one and I’ll be using it for another robot competition. It has worked well in my initial testing with controlled lighting.

As stated, it is really more of a color-proximity sensor: a highly optimized and specialized sensor that does one thing well. If that is useful in the game, make use of it. If not, it is actually a pretty good exercise to make your own pixy cam using a USB camera and a little bit of processing code, and extend it to do some additional measurements on the particles. But at that point, there isn't much need to incorporate the pixy cam. Just build your own. It is a great learning opportunity.

The optimization is the hard part, but I'm pretty sure most teams can make it fast enough for what they need on the robot. We don't need to measure the wing-beat speed of an unladen European swallow, do we?

Greg McKaskle
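To make the "build your own" idea concrete, here is a minimal sketch of that kind of color-blob pass using the OpenCV Java bindings (which ship with recent WPILib installs). The class, method name, and HSV thresholds are placeholders to tune, not anything from a specific team's code:

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public class ColorBlobFinder {

	// Returns the bounding box of the largest blob in the given HSV range, or null if none is found.
	// hsvMin/hsvMax are placeholder thresholds you would tune for your own target color.
	public static Rect findLargestBlob(Mat bgrFrame, Scalar hsvMin, Scalar hsvMax) {
		Mat hsv = new Mat();
		Mat mask = new Mat();

		// convert to HSV and threshold to a binary mask of "target colored" pixels
		Imgproc.cvtColor(bgrFrame, hsv, Imgproc.COLOR_BGR2HSV);
		Core.inRange(hsv, hsvMin, hsvMax, mask);

		// find connected particles (contours) in the mask
		List<MatOfPoint> contours = new ArrayList<>();
		Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

		// keep the largest one, like the Pixy's "biggest block first" behavior
		Rect best = null;
		double bestArea = 0;
		for (MatOfPoint contour : contours) {
			Rect box = Imgproc.boundingRect(contour);
			double area = box.width * box.height;
			if (area > bestArea) {
				bestArea = area;
				best = box;
			}
		}
		return best;
	}
}

From the returned bounding box you can compute the offset of its center from the image center and feed that into a turning controller, which is essentially what the Pixy's x value gives you for free.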

Ok, I think we may have to revisit the pixy cam; our judgement may have been premature. We went with GRIP on a Pi 2B and an IP camera. The programmers are working on OpenCV on a Pi 3 with an IP camera and a native Pi camera. The difference is color blob detection vs. object detection, and the Pixy cam is a much simpler solution. One cannot have too many tools in the toolbox for next season. It seems that machine vision is a requirement for FRC teams that aspire to play at a high level. Our best-trained operator could aim and shoot in 7 to 8 seconds at best; our auto aim-and-shoot averaged 3 to 4 seconds and was more consistent.

It’s nice to see how far the CMUcam platform has come. I remember trying to fuss with the old rs232 ones years ago.
Has anyone looked into the OpenMV camera? I don't hear as much about it, but I've been using one for a while and it's pretty nice; I wonder how the two compare with one another. (Buying more than one $60+ dev board at a time is somewhat difficult to justify to oneself and one's wallet.)

Do you know the update rate for the OpenMV camera?

I have it working in Java, and would be glad to share. It might be along the same lines as LabVIEW (maybe)?

That would be great. Are you just getting the largest value, or how do you have it working?

If you mean the price, it's $75. If you are referring to something like framerate or baud rate, then I'm pretty sure they're configurable to whatever you need.
<complaining>Tangentially, I purchased my own unit through the Kickstarter campaign, so it was at a discounted price of ~$60. They then told us that they had encountered production issues and had to spend the shipping fees they had charged on the problem, so we all needed to pay an extra $12 to get our already-bought boards actually sent to us, ultimately eliminating the discount we received for backing the campaign.</complaining>

Here is the code:
In robotMap:


public static I2C pixyi2c;

// this assignment belongs in an init method or constructor; 0x54 is the Pixy's default I2C address
pixyi2c = new I2C(Port.kOnboard, 0x54);

In Main:


public static void printPixyStuff() {
	byte[] pixyValues = new byte[64];
	pixyValues[0] = (byte) 0b01010101;
	pixyValues[1] = (byte) 0b10101010;

	// read a buffer of bytes from the Pixy over I2C
	RobotMap.pixyi2c.readOnly(pixyValues, 64);
	if (pixyValues != null) {
		int i = 0;
		// advance i to the first sync word (0x55 0xaa); 0xaa reads back as -86 in a signed byte
		while (!(pixyValues[i] == 85 && pixyValues[i + 1] == -86) && i < 50) {
			i++;
		}
		i++;
		if (i > 50)
			i = 49;
		// frames start with two sync words, so find the second one; i then points at the
		// start of the first (largest) object block
		while (!(pixyValues[i] == 85 && pixyValues[i + 1] == -86) && i < 50) {
			i++;
		}
		// object block layout: sync, checksum, signature, x, y, width, height
		// (16-bit words, low byte first)
		char xPosition = (char) (((pixyValues[i + 7] & 0xff) << 8) | (pixyValues[i + 6] & 0xff));
		char yPosition = (char) (((pixyValues[i + 9] & 0xff) << 8) | (pixyValues[i + 8] & 0xff));
		char width = (char) (((pixyValues[i + 11] & 0xff) << 8) | (pixyValues[i + 10] & 0xff));
		char height = (char) (((pixyValues[i + 13] & 0xff) << 8) | (pixyValues[i + 12] & 0xff));
		SmartDashboard.putNumber("xPosition", xPosition);
		SmartDashboard.putNumber("yPosition", yPosition);
		SmartDashboard.putNumber("width", width);
		SmartDashboard.putNumber("height", height);
		SmartDashboard.putNumber("Raw 5", pixyValues[5]);
	}
}

As for your question, we have it just detecting the largest value. There might be a way to access the other data (it might just be as simple as finding the right memory address), but we did not investigate that further. I will be happy to explain this code further (as it is a little complex), but even I forget and would have to look at it a lot. If you want more info, though, I would gladly dig into it. Also, so people don't get mad at me: if you use it, give us credit. ;)
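For anyone who does want the other blocks, here is a rough sketch of how the extra object blocks could be pulled out of the same buffer that readOnly fills above. It assumes the documented Pixy object block layout (a 0xaa55 sync word, then checksum, signature, x, y, width, and height as 16-bit little-endian words); the PixyBlock class and parseAllBlocks method are just illustrative names, not part of any library:

// illustrative container for one parsed Pixy object block (not part of any library)
public static class PixyBlock {
	public int signature, x, y, width, height;
}

// parse every complete object block in the buffer, not just the first one.
// assumes the documented block layout: sync (0xaa55), checksum, signature, x, y, width, height,
// each a 16-bit word sent low byte first
public static java.util.List<PixyBlock> parseAllBlocks(byte[] buf) {
	java.util.List<PixyBlock> blocks = new java.util.ArrayList<>();
	for (int i = 0; i + 13 < buf.length; i++) {
		// look for a sync word marking the start of a block
		if ((buf[i] & 0xff) == 0x55 && (buf[i + 1] & 0xff) == 0xaa) {
			// two sync words in a row mark a frame boundary; skip the first one and let a
			// later pass of the loop parse the block that starts at the second one
			if ((buf[i + 2] & 0xff) == 0x55 && (buf[i + 3] & 0xff) == 0xaa) {
				continue;
			}
			PixyBlock b = new PixyBlock();
			b.signature = word(buf, i + 4);
			b.x = word(buf, i + 6);
			b.y = word(buf, i + 8);
			b.width = word(buf, i + 10);
			b.height = word(buf, i + 12);
			blocks.add(b);
			i += 13;   // jump past this 14-byte block before scanning for the next sync
		}
	}
	return blocks;
}

// assemble a 16-bit little-endian word from two bytes
private static int word(byte[] buf, int index) {
	return ((buf[index + 1] & 0xff) << 8) | (buf[index] & 0xff);
}

Calling parseAllBlocks(pixyValues) right after the readOnly would return every complete block in the buffer, and the signature field is what distinguishes the up-to-seven taught colors mentioned earlier in the thread.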

So, I might be wrong, but I think that when it goes over 50, it is doing the same thing? That can't be right, but I will ask someone else about it. The two bytes at the beginning, I think, are to align the data so it is always in the same spot in the array. There is no writing because we just want what the Pixy sees. Pixymon, a separate program, configures and saves all the settings that you want on the Pixy, so there is no need for writing.

So, this is what I was told about the two loops.
"check if the index is getting so high that you can't align and see an entire frame." I think this means it would take too long to parse all the data, so we split it up? Looking back at the documentation, this is how the code should look. Now with comments!


// set the number of bytes to get from the pixycam each read cycle.  The pixycam outputs 14 byte
// blocks of data with an extra 2 bytes between frames per the Object Block Format figure
int maxBytes = 64;

// declare the object data variables
int xPosition = 0;
int yPosition = 0;
int width = 0;
int height = 0;

// declare a byte array to store the data from the camera
byte[] pixyValues = new byte[maxBytes];

// the remainder of this snippet should be placed in a loop where the data is also used.
// a while loop is suggested where the loop exits when the target is identified or a break button is
// depressed on the OI
boolean target = false;
boolean oiExit = false;

while (!target && !oiExit) {

	// read the array of data from the camera
	RobotMap.pixyi2c.readOnly(pixyValues, maxBytes);

	// check for a null array and don't try to parse bad data
	if (pixyValues != null) {
		int i = 0;
		// parse the data to move the index pointer (i) to the start of a frame
		// i is incremented until the two bytes at i and i+1 match the sync word (0x55 then 0xaa)
		// Note: in Java the "& 0xff" is key to matching 0xaa because bytes are signed, and sign
		//       extension would otherwise make the value -86
		while (!((pixyValues[i] & 0xff) == 0x55 && (pixyValues[i + 1] & 0xff) == 0xaa) && i < 50) { i++; }
		i++;
		// check if the index is getting so high that you can't align and see an entire frame.  Ensure it isn't
		if (i > 50) i = 49;
		// parse away the second set of sync bytes; i now points at the start of the first object block
		while (!((pixyValues[i] & 0xff) == 0x55 && (pixyValues[i + 1] & 0xff) == 0xaa) && i < 50) { i++; }

		// build the target data from the framed data
		// block layout: sync, checksum, signature, x, y, width, height (16-bit words, low byte first)
		xPosition = (char) (((pixyValues[i + 7] & 0xff) << 8) | (pixyValues[i + 6] & 0xff));
		yPosition = (char) (((pixyValues[i + 9] & 0xff) << 8) | (pixyValues[i + 8] & 0xff));
		width = (char) (((pixyValues[i + 11] & 0xff) << 8) | (pixyValues[i + 10] & 0xff));
		height = (char) (((pixyValues[i + 13] & 0xff) << 8) | (pixyValues[i + 12] & 0xff));
	}

	// use xPosition / yPosition / width / height here and set target (and oiExit from the OI)
	// so the loop can exit
}


Hope this helps :)

We had a regular pixy communicating through I2C.