How can I set my camera to output grayscale? I need to do this to reduce the bandwidth it uses for transmission. If it makes a difference, I am using a Microsoft LifeCam HD-3000.
I’m not an expert, but how would this decrease bandwidth usage? You’re still sending the same number of pixels; instead of a pixel being red, it’s just gray.
If you’re worried about bandwidth, I would look into capping the frames per second the camera transmits or the resolution. I know the Axis cameras had a configuration utility page that allowed this sort of customization.
If I’m not mistaken, converting to grayscale reduces bandwidth because of the number of channels transmitted. With a standard RGB image, you have three channels per pixel, each with its own value. A grayscale image has only one channel. Two extra channels on one pixel would not do much in terms of performance, but with two extra values per pixel across a whole frame it adds up, especially when you try to get that image to send almost instantaneously.
You are absolutely right. Converting to grayscale would reduce the amount of data being sent over the network by a factor of 3 if you are sending raw data.
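As a rough back-of-the-envelope check of that factor of 3 (assuming uncompressed frames and hypothetical 640×480 @ 30 FPS settings; a real MJPG stream is compressed and will be far smaller in absolute terms):

```python
# Raw-data estimate for an uncompressed stream (hypothetical numbers).
width, height, fps = 640, 480, 30

rgb_bytes_per_sec = width * height * 3 * fps   # 3 bytes per pixel (R, G, B)
gray_bytes_per_sec = width * height * 1 * fps  # 1 byte per pixel

print(rgb_bytes_per_sec)                        # 27648000 (~27.6 MB/s)
print(gray_bytes_per_sec)                       # 9216000  (~9.2 MB/s)
print(rgb_bytes_per_sec / gray_bytes_per_sec)   # 3.0
```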
There are a number of ways to convert from rgb to grayscale, what language/computer vision library are you using?
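For reference, most libraries do the conversion with a weighted luminance sum rather than a plain average of the three channels. A minimal pure-Python sketch (the 0.299/0.587/0.114 weights are the common ITU-R BT.601 ones, not anything specific to this camera):

```python
def rgb_to_gray(r, g, b):
    """Convert one RGB pixel (0-255 per channel) to a single gray value
    using the common BT.601 luminance weights."""
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_gray(255, 255, 255))  # 255 (white stays white)
print(rgb_to_gray(255, 0, 0))      # 76  (pure red maps to a darker gray)
```

The weights reflect the eye's greater sensitivity to green, which is why pure red and pure blue come out darker than pure green.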
The LabVIEW WPILib has a function to set the Color Enable property: true means color is enabled, false means the camera outputs grayscale. But the property is only supported for Axis cameras. JPEGs have a special encoding for grayscale images, and they are indeed smaller than color, but the savings aren’t a simple 3:1 ratio.
As mentioned by others, there are other effective ways to reduce bandwidth: scaling the image resolution down by X in each dimension cuts the data by X squared, while reducing the framerate by X cuts it linearly.
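To make the scaling concrete (raw-data arithmetic with assumed 640×480 @ 30 FPS settings; compressed streams only roughly follow these ratios):

```python
width, height, fps = 640, 480, 30
base = width * height * fps  # pixels per second at full settings

# Halving resolution in each dimension (X = 2) divides the data by X squared:
half_res = (width // 2) * (height // 2) * fps
print(base / half_res)  # 4.0

# Halving framerate (X = 2) divides the data by X:
half_fps = width * height * (fps // 2)
print(base / half_fps)  # 2.0
```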
Yes, that is what I am saying. You are assuming that whoever wrote the firmware chose the most logical option and decided to only transmit one channel instead of three. How do you know that for a gray pixel they are transmitting a single value like (128) rather than the full (128, 128, 128)? While the image may appear gray, there is still a real chance they are transmitting the information the same way as before, just with equal values in every channel.
It’s compressing to JPG before transmission across the network. That said, this will still reduce the bandwidth requirement (although how much would require testing).
Before going this route, I’d recommend looking at reducing the resolution and framerate if possible. I’m curious what is currently being used and what is actually required.
Another option would be to convert to HSV and then select which single channel you want to process. That way you still retain information derived from the RGB image at 1/3 the data.
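A minimal per-pixel sketch of that idea using Python’s standard-library colorsys (the pixel values here are arbitrary examples; a real pipeline would use a vectorized library, and as noted elsewhere in this thread, where the conversion runs matters for bandwidth):

```python
import colorsys

# Convert one RGB pixel to HSV, then keep only one channel (V here).
# colorsys works on floats in [0, 1], so scale the 8-bit values.
r, g, b = 30, 200, 60  # a greenish pixel (arbitrary example values)
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

value_channel = round(v * 255)  # keeping just V: 1/3 of the original data
print(value_channel)  # 200 -- V is simply max(r, g, b)
```

Which channel to keep depends on the task: H is useful for color-based target detection, V behaves much like a grayscale image.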
I’d still be leery about that. The HD-3000 can output JPG images directly using internal hardware compression. Any sort of post-processing would mean recompressing the image on the roboRIO, and that will use up significant processor resources.
To echo adciv: potential options are great, but test them, as the real world isn’t obliged to work the way you expect.
If you are trying to use the camera for vision tracking, I don’t know that it will still track reliably in grayscale, as it may confuse similarly bright objects with the reflected light. If this is a drive camera for the driver to see around obstacles, I would highly recommend keeping the camera in color so that you know exactly what you are looking at. The best options to reduce bandwidth would be dropping the framerate to about 15 FPS and lowering the resolution to about half of the area the camera takes up on your screen. That way you can still use and see with the camera, and still tell whether that is a blue or red robot charging at you and whether or not it will stop. If bandwidth is still an issue, you can always reduce the FPS or resolution further to keep the FTAs and others happy.
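A rough estimate of how much those two changes buy together (raw-data arithmetic with assumed 640×480 @ 30 FPS starting settings; MJPG compression changes the absolute numbers, but the ratio is a useful guide):

```python
full = 640 * 480 * 30     # pixels/sec at full resolution, 30 FPS
reduced = 320 * 240 * 15  # half resolution in each dimension, 15 FPS

print(full / reduced)  # 8.0 -- an 8x reduction before compression
```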
Feel free to try grayscale if you can figure it out, but I don’t think it will be as effective as reducing framerate and resolution.
But you’re not sending raw data, you’re sending an MJPG stream. It’s lossy compression, and the difference between color and grayscale will be negligible compared to bumping up the compression a bit.
I don’t have time to do the side-by-side comparison, but if you use an IP camera such as an Axis and flip color enable between true and false, you will see the difference. JPEG has a special definition for encoding Y-only (luminance) images, and it is another way to reduce size. You are deciding whether to lose color and retain detail, or keep color and lose detail by increasing compression. The other lever you can pull is the resolution or size of the image. The final lever is the framerate.
Experiment, and make sure the drivers understand how to pull the levers for themselves. The default dashboard has an LED indicator showing how their usage compares to the field limits. I personally don’t have much experience with the new radio and its bandwidth limiting, but I expect the radio will effectively impose framerate limiting if you don’t pull the levers to get your usage under the limit. I don’t expect the cameras to hog bandwidth the way they were able to in past years.