USB camera with GRIP or Roborealm

Hello, has anybody who uses C++ been able to get images into software such as GRIP or RoboRealm for image processing? If so, how have you done it? Currently we here at 2172 are able to output from the camera through the RIO to the SmartDashboard, but with the normal CameraServer code we are unable to bring the images into GRIP or RoboRealm. All of our software/firmware is up to date. Any ideas on what could be wrong?

Thanks!
~Stew

Are you running GRIP on the roboRIO or the driver station?

If you’re not running it on the roboRIO, your best bet is probably to either use an Axis camera and connect to it over the network, or (if you’re using a vision coprocessor) to connect a USB camera directly to your vision coprocessor. The video protocol used by CameraServer is non-standard, and it eats up a bunch of CPU power on the roboRIO - it’s probably better to directly connect the camera to whatever is processing the images.

We have what I think is the same problem, although we are a Java team instead of C++. We are having trouble getting the image source set up.

We want to plug a USB camera into USB 0 on the roboRIO, but run GRIP on the driver station.

We can see the camera image on the SmartDashboard. Now we want to process it using GRIP on the driver station. How do we get the image that we are seeing into GRIP?

I’ve seen several articles explaining how to get the output of GRIP back to the roboRIO, but how do I get the output of the camera into GRIP?

GRIP doesn’t support the FRC dashboard protocol for inputs. Several people have asked how to do this, so I guess I might add support for it. It shouldn’t be too hard.

I am obviously missing the point on something. I am not completely alone in missing the point, but some people seem to have gotten it. I (and the students on my team) haven’t.

I have seen lots of references to “running grip on the driver station PC”.

We can run GRIP on a plain old PC, not running a driver station at the time, by plugging in a webcam, or just using the built-in webcam of the laptop. It works great. However, that doesn’t help during a match. During a match, the USB webcam is on the robot, plugged into the roboRIO. The roboRIO acquires the pictures from the USB webcam and communicates them to the driver station. The only place I have ever seen those pictures displayed is on the SmartDashboard. Is there some other place?

Meanwhile, it seems that somehow on the laptop that the drivers are looking at, we are able to run GRIP. If that’s what we do, how do we tell GRIP, “Use the webcam on the roboRIO as your source”?

Or am I missing the boat entirely? When I see people talking about “running GRIP on the driver station PC”, do they mean during testing, like we have been doing for the last few days with sample images in our lab? I assume that “the driver station PC” is the PC the drivers are staring at during a match, the one controlling and communicating with the robot. If so, how do I get the image from the robot, process it on the driver station PC, and then send the processed information back to the robot? That last part seems to involve NetworkTables, but where does GRIP get the image data to process?

Sorry, maybe we haven’t been completely clear on how camera streaming works in GRIP and FRC in general.

  • You can run GRIP on the driver station PC with a USB or built-in webcam for testing.
  • You can run GRIP on the driver station PC in actual competition if you use an IP camera (such as an Axis camera), since it can send video over a standard M-JPEG stream. To do this, use the “Add IP Camera” button.
  • You cannot currently run GRIP on the driver station with a USB camera plugged into the roboRIO. This is because the roboRIO uses a non-standard protocol to stream data to the dashboard. That protocol is pretty simple, so we may add support for it soon. The reason we haven’t yet is mostly that this method is inefficient and results in lower frame rates, even though it’s the cheapest method in terms of hardware.
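For context, M-JPEG over HTTP is just a standard multipart stream of JPEG images (for Axis cameras the stream URL is typically http://&lt;camera-ip&gt;/mjpg/video.mjpg). A rough sketch of how frames can be split out of such a stream once you have the raw bytes — this is illustrative only, not GRIP’s actual code, and it assumes the JPEG end marker doesn’t appear inside frame data:

```python
def extract_jpeg_frames(data: bytes):
    """Split raw M-JPEG stream bytes into individual JPEG frames.

    Each JPEG frame starts with the SOI marker (FF D8) and ends with
    the EOI marker (FF D9); the multipart boundaries and headers
    between frames are simply skipped over.
    """
    frames = []
    start = 0
    while True:
        soi = data.find(b"\xff\xd8", start)  # start-of-image marker
        if soi == -1:
            break
        eoi = data.find(b"\xff\xd9", soi)    # end-of-image marker
        if eoi == -1:
            break  # incomplete trailing frame; wait for more bytes
        frames.append(data[soi:eoi + 2])
        start = eoi + 2
    return frames
```

This is why an IP camera “just works” with the “Add IP Camera” button: any M-JPEG client can pull frames out of the stream this way, whereas the roboRIO’s dashboard protocol needs a custom client.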

Thanks. Perfectly answers the question. (At least until the next question.)

We have a very basic follow-up question about the Axis M1011 and GRIP. We have the camera connected directly to a laptop with a network cable. When we select “Add IP Camera” we only get one image. How do we get the video stream that was described in the previous post? Thank you.

v1.2.0 (which will probably come out tomorrow) fixes a lot of stuff with IP cameras. I would wait until then and see if the problem still happens.

We have a question similar to the one asked by David Lame, but I don’t think I saw an answer for how to do this. His question was “how do I get the output of the camera into GRIP?” We have figured out how to get a camera on the laptop into GRIP, but not the camera on the roboRIO into GRIP. If you could provide any information on how to do this, it would be very helpful. Thanks.

With GRIP running on the RIO? It should work the same.

If you’re using a USB camera, you either need to have it plugged into the device GRIP is running on, or use something like mjpg-streamer to “convert” it into an IP camera.
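If you go the mjpg-streamer route on a coprocessor (e.g. a Raspberry Pi), the invocation looks roughly like this — the device path, resolution, frame rate, and port are assumptions you would adjust for your own setup:

```shell
# Serve /dev/video0 as an HTTP M-JPEG stream on port 1180.
# (On the field, camera traffic is restricted to certain ports;
# check the current game manual for the allowed range.)
mjpg_streamer \
  -i "input_uvc.so -d /dev/video0 -r 640x480 -f 15" \
  -o "output_http.so -p 1180"
```

GRIP can then treat the coprocessor exactly like an IP camera via the “Add IP Camera” button.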

I am having success with RoboRealm using an Axis IP camera. To use a USB camera with RoboRealm, you would need a Windows machine ON the robot (like a Kangaroo).

We’ve been experimenting with GRIP for a while and are, in general, experiencing nothing but sadness and memory overflows.

Try putting -Xmx50m -XX:-OmitStackTraceInFastThrow -XX:+HeapDumpOnOutOfMemoryError in your deploy JVM options. -Xmx50m caps the heap at 50 MB, -XX:-OmitStackTraceInFastThrow keeps full stack traces on repeated exceptions, and -XX:+HeapDumpOnOutOfMemoryError writes a heap dump you can inspect if it still runs out of memory.

That fix is already there.

We’re not giving up on GRIP; we’ll keep trying it as new updates come out. But for the time being, RoboRealm is working.

I appreciate your hard work on GRIP! Keep it up!