Sending Image from Camera to GRIP

Hi Everyone!

Our team did not use vision tracking this year, so we are spending some time this summer figuring it out. We have learned how to process an image in GRIP and how to read NetworkTables in Eclipse. However, we do not understand how to send an image from a live camera feed to GRIP. We are using a Logitech C270 HD webcam and code in Java. Any tips or advice would be appreciated!
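For context, our NetworkTables reading so far is just pulling GRIP's parallel arrays (from the Publish ContoursReport step, e.g. `area` and `centerX`) and picking a contour to track. A minimal sketch of the selection logic, assuming the arrays have already been read from NetworkTables (the helper name is ours, not anything from GRIP):

```java
// Hypothetical helper: pick which contour to track from GRIP's parallel arrays.
// The arrays themselves would come from NetworkTables, e.g. (2016-era API):
//   double[] areas = table.getNumberArray("area", new double[0]);
public class ContourPicker {
    /** Returns the index of the contour with the largest area, or -1 if the report is empty. */
    public static int indexOfLargest(double[] areas) {
        int best = -1;
        double bestArea = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < areas.length; i++) {
            if (areas[i] > bestArea) {
                bestArea = areas[i];
                best = i;
            }
        }
        return best;
    }
}
```

The same index then works across all of the report's parallel arrays (`centerX`, `centerY`, etc.), since GRIP publishes one entry per contour in each.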

Thanks!
Greta
Team 5822 WolfByte


If you are running GRIP on the driver station laptop or roboRIO, you can add the camera by clicking on **Add Webcam** in the Sources section of GRIP, in the bottom left corner of the window.

If you are trying to run GRIP on a Raspberry Pi, you’ll have to follow the steps in this guide.

We tried that, but we cannot add the USB camera feed from the roboRIO. Is there another way to access it?

Have you been able to access the camera via the roboRIO web dashboard?

A couple of things:

  1. I don’t believe GRIP can use the camera if you’re also sending that feed to the driver station. So if your robot code is opening the camera, it will hold the device and GRIP will not be able to see it.
  2. Therefore, if your robot code would normally open webcamX, your GRIP pipeline should now open webcamX instead.
  3. In my opinion, running GRIP on the roboRIO is too slow for anything other than learning a little bit about vision processing. If you have the resources to get a Kangaroo PC (~$100), I would highly recommend that instead. Additionally, there are instructions for using a Raspberry Pi.
    3a) Some extra context: if I recall correctly, robot code wants about 60% of the processor just running fairly basic robot code, and GRIP also wants about 60% for what I would consider a reasonable-quality image for vision processing. In my experience, we couldn’t reduce the delay to much less than 2–3 seconds running GRIP on the roboRIO, even after reducing the image quality substantially.
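Wherever GRIP ends up running, the robot-side use of its output is the same: turn a contour's `centerX` (in pixels, read from NetworkTables) into a steering error. A minimal sketch, with names of our own choosing and the error normalized to [-1, 1] so it can feed a turn command directly:

```java
// Hypothetical aim-error helper: maps a contour's centerX (pixels) to a
// normalized steering error in [-1, 1], where 0 means the target is centered.
public class AimError {
    public static double steeringError(double centerX, double imageWidth) {
        double halfWidth = imageWidth / 2.0;
        // Negative = target left of center, positive = target right of center.
        return (centerX - halfWidth) / halfWidth;
    }
}
```

For a 320-pixel-wide image, a contour centered at x = 160 gives an error of 0, and one at the right edge gives 1.0. Given the 2–3 second latency noted above, this error is better used for coarse alignment than for a tight closed loop when GRIP runs on the roboRIO.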

Good luck

> Have you been able to access the camera via the roboRIO web dashboard?

Yes we have

> My opinion is that running GRIP on the roboRIO is too slow for anything other than learning a little bit about vision processing. If you have the resources to get a Kangaroo PC (~$100), I would highly recommend that instead. Additionally, there are instructions for using a Raspberry Pi.

Thank you so much for the advice! Would you recommend a Kangaroo PC or a Raspberry Pi?

I haven’t used the Raspberry Pi. The Kangaroo worked great, but there are a couple of gotchas, mainly that the Kangaroo will use the other Ethernet port on the 2016 radio. That means you either need an extra router or have to be OK with connecting to it only wirelessly.