LifeCam USBCamera changing settings from Java

Hello, any more updates to this post? I was reading in another post that GRIP will not work with the LifeCam. Were you able to get past this without using the Axis camera, and produce targeting values in NetworkTables?

On the roboRIO, it would work with the LifeCam. However, I moved to the RPi using the instructions from the GRIP wiki and a lot of my own discovery. I need to post everything, but haven’t had the time. Bottom line - it works pretty well on the Pi. I have not actually tested connecting to a USB cam - I just assumed the instructions were correct that GRIP on the Pi does not work with USB cams.

I set up mjpg-streamer and configured GRIP to connect on port 5800 (I configured the streamer to stream on 5800, not 1180 like the instructions say, because the publish module in GRIP publishes on 1180 and creates a conflict). You can use v4l2-ctl to adjust all of the camera settings - lots of control, and it does it on the fly, without having to stop the stream. This means I can look at the stream in a browser and adjust the settings exactly as needed.

I have some improvements to make - I would like to get the USB cam working directly because I’d like to eliminate the lag of the streamer. That’s a summary - if you have any specific questions, post them and I’ll try to answer them.
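
Since the thread title asks about changing settings from Java, here is a minimal sketch of doing those same v4l2-ctl adjustments from Java on the Pi. It assumes the camera is /dev/video0 and that v4l2-ctl is installed; control names vary by camera, so run v4l2-ctl with --list-ctrls to see what your LifeCam actually exposes.

    import java.io.IOException;

    // Sketch: shell out to v4l2-ctl to change camera settings on the fly.
    // Assumes the camera is /dev/video0 and v4l2-ctl is installed.
    public class CameraSettings {
        public static void setControl(String name, int value)
                throws IOException, InterruptedException {
            Process p = new ProcessBuilder("v4l2-ctl", "-d", "/dev/video0",
                    "--set-ctrl", name + "=" + value)
                .inheritIO()
                .start();
            p.waitFor();
        }

        public static void main(String[] args) throws Exception {
            // Control names are camera-specific; these are typical for a LifeCam.
            setControl("brightness", 30);
            setControl("exposure_auto", 1);      // 1 = manual exposure on many UVC cams
            setControl("exposure_absolute", 10);
        }
    }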

Thanks for the quick reply. We are using the LifeCam with the roboRIO and having trouble getting NetworkTables to update after publishing from GRIP. We are getting “HSL Threshold needs a 3-channel input.” When we open Outline Viewer we see the report but no coordinates. We are publishing the contour report with images and trying with the webcam. Both result in no NetworkTables updates. Any feedback is appreciated.

I don’t understand the HSL threshold error - I still see it, but everything works fine. I think it’s a sequencing bug (the threshold step starts before it’s given a valid input image). You should see a message later in the traces indicating that the errors have cleared and everything is normal.

What do you mean “publishing the contour report WITH IMAGES”?

I did see that sometimes I would have to restart the Outline Viewer to get it to see the GRIP changes. NetworkTables weirdness.

Step back and do things one step at a time. Make a GRIP pipeline that just pulls in the LifeCam, publish the video and the frame rate, and make sure that works. Then add pieces and publish as you go.

BE CAREFUL ABOUT BANDWIDTH and CPU UTILIZATION.
320x240 was definitely slow on the roboRIO (all we did was an HSL threshold and a contour step), so we dropped to 160x120. One problem with GRIP is that you can’t get the camera itself to generate a smaller frame size; you have to do a resize in GRIP, which is wasteful and bad. GRIP pulls in a 720p or 640x480 image (not sure which) and then you resize in GRIP to something you can operate on and transmit to the driver station. Resize is an expensive operation, especially if you use one of the interpolations.
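
To put a number on that last point: GRIP’s resize is OpenCV’s resize under the hood, and the interpolation mode drives the cost. Here is a rough timing sketch, assuming the OpenCV Java bindings are on the classpath (this is an illustration, not GRIP’s actual code):

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.imgproc.Imgproc;

    public class ResizeCost {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

        public static void main(String[] args) {
            Mat src = new Mat(480, 640, CvType.CV_8UC3); // stand-in for a camera frame
            Mat dst = new Mat();

            // Nearest-neighbor is the cheapest mode; cubic costs much more per pixel.
            long t0 = System.nanoTime();
            Imgproc.resize(src, dst, new Size(160, 120), 0, 0, Imgproc.INTER_NEAREST);
            long t1 = System.nanoTime();
            Imgproc.resize(src, dst, new Size(160, 120), 0, 0, Imgproc.INTER_CUBIC);
            long t2 = System.nanoTime();

            System.out.printf("nearest: %.2f ms, cubic: %.2f ms%n",
                    (t1 - t0) / 1e6, (t2 - t1) / 1e6);
        }
    }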

You can check CPU Utilization by logging into the roboRIO (from a terminal / command line) and typing:
top

Look at the Java process running GRIP.

On the RPi I had the CPU pegged; GRIP was taking 2/3 of the processor and mjpg-streamer was taking 1/3. I dropped the frame rate and frame size to get the CPU utilization down to about 80%.

What we tried on the roboRIO —

What I was originally doing was:
HSL threshold
publish frame rate right out of the source (that let me know that it was actually generating frames)
contour
contour filter
publish contour report (reading it back from Java is sketched after this list)
created a mask with original image and contours
published the mask
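
Reading that contour report back from Java looked roughly like this with the 2016 NetworkTables API. The team number and “myContoursReport” are placeholders - GRIP publishes under GRIP/ using whatever name you gave the publish step:

    import edu.wpi.first.wpilibj.networktables.NetworkTable;

    // Sketch: poll the contour report that GRIP publishes to NetworkTables.
    public class ContourReader {
        public static void main(String[] args) throws InterruptedException {
            NetworkTable.setClientMode();
            NetworkTable.setTeam(0); // substitute your team number
            NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

            double[] none = new double[0];
            while (true) {
                double[] centerX = table.getNumberArray("centerX", none);
                double[] area = table.getNumberArray("area", none);
                System.out.println("contours seen: " + centerX.length);
                if (centerX.length > 0) {
                    System.out.println("first centerX=" + centerX[0] + " area=" + area[0]);
                }
                Thread.sleep(500);
            }
        }
    }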

It worked fine when we would run GRIP manually and deploy robot code (basically during testing). But when I added code to have the robot code launch GRIP, we started seeing memory issues, and fairly regularly it would crash the JVM, generating a core-dump file in the process. Core-dump files are HUGE, and they would eat up all the device storage space.

When the robot code tried to relaunch, it would hang, because it attempts to create some files (preferences, for example) and couldn’t, since the file system was out of space. This is a VERY bad situation - robot code won’t run. When this happened, I had to manually delete the core-dump files to get robot code running again. This is why I switched to the Pi - too risky to have the robot code / JVM crash in the middle of a match.
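
For reference, the launch-from-robot-code step was along these lines (a sketch; the /home/lvuser/grip path follows the GRIP deployment instructions and may differ on your setup):

    import java.io.IOException;

    import edu.wpi.first.wpilibj.IterativeRobot;

    public class Robot extends IterativeRobot {
        @Override
        public void robotInit() {
            try {
                // Start the deployed GRIP pipeline as a separate process.
                // If that JVM dies, it can core-dump and fill the roboRIO's storage,
                // which is the failure mode described above.
                new ProcessBuilder("/home/lvuser/grip").inheritIO().start();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }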

BTW, make sure you have the latest version of GRIP. They’re up to 1.3.1 now; we are currently using 1.2.1. I know 1.1.1 had problems.

Thanks again. When I said I updated the Rio with images, I meant the input source in GRIP was all of the Stronghold sample images. I tried to publish this way and then with the LifeCam as the source. Both gave me issues. Once we set up the hue, etc., should we publish GRIP with images or the webcam as the source? As you can tell, this is our first time using GRIP.

I found that problem as well. We received good info when connected to a laptop, but then nothing on the roboRIO. I found I had to hack into the project.grip file and update the settings there, and with trial and error got it working. I think solidity was one of the keys… make it 0-100 (full scale).

This makes me nervous… I haven’t filled the filesystem, but the executable crashes with an out-of-memory error. I didn’t see a core dump. Before GRIP, I see only 25M free, so I know it’s close.

Does anyone know how long it takes to process a pipeline? Is there a way to find out the publish rate to NetworkTables?
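
Not sure about total pipeline time, but one way to estimate the publish rate is to timestamp NetworkTables updates with a listener - a sketch against the 2016 API, with the same placeholder table name and team number as above:

    import edu.wpi.first.wpilibj.networktables.NetworkTable;
    import edu.wpi.first.wpilibj.tables.ITable;
    import edu.wpi.first.wpilibj.tables.ITableListener;

    // Sketch: print the interval between successive updates of one published key.
    public class PublishRate {
        public static void main(String[] args) throws InterruptedException {
            NetworkTable.setClientMode();
            NetworkTable.setTeam(0); // substitute your team number
            NetworkTable table = NetworkTable.getTable("GRIP/myContoursReport");

            table.addTableListener(new ITableListener() {
                private long last = System.nanoTime();

                @Override
                public void valueChanged(ITable source, String key, Object value,
                        boolean isNew) {
                    if (key.equals("centerX")) {
                        long now = System.nanoTime();
                        System.out.printf("update interval: %.1f ms%n",
                                (now - last) / 1e6);
                        last = now;
                    }
                }
            });

            Thread.sleep(Long.MAX_VALUE);
        }
    }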