Hello everyone!
We have a symmetric robot which requires camera vision for both sides. For that, we use OpenCV methods that run on the roboRIO and flip the image when needed. We have discovered that these methods take some time to process and cause us to get loop overrun warnings, even though we already run them in a separate thread, just like in the WPILib example (WPILib ScreenSteps - Using the CameraServer on the roboRIO).
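For reference, the pattern we run in the separate thread looks roughly like the following Python sketch — the frame source, flip flag, and output queue are stand-ins for CameraServer's sink/source and whatever signal selects the active side, not real WPILib API:

```python
import queue
import threading

import numpy as np

def vision_loop(grab_frame, out_queue, should_flip, stop):
    """Sketch of the separate-vision-thread pattern.

    grab_frame, should_flip, and out_queue are hypothetical stand-ins
    for CameraServer's CvSink/CvSource and a side-selection flag.
    """
    while not stop.is_set():
        frame = grab_frame()
        if frame is None:          # sentinel: no more frames
            break
        if should_flip():
            # numpy column reversal, same effect as cv2.flip(frame, 1)
            frame = frame[:, ::-1]
        out_queue.put(frame)

# Usage sketch with one fake 2x2 RGB frame instead of a real camera:
frames = iter([np.arange(12, dtype=np.uint8).reshape(2, 2, 3), None])
out = queue.Queue()
stop = threading.Event()
t = threading.Thread(target=vision_loop,
                     args=(lambda: next(frames), out, lambda: True, stop))
t.start()
t.join()
flipped = out.get_nowait()
```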
Is there a way to run the needed OpenCV methods on the Driver Station and display the manipulated stream to the drivers, instead of flipping the image on the roboRIO?
Many thanks,
Aric Radzin, Team Captain and Lead Programmer, BumbleB 3339.
What exactly do you mean by flip the image? Do you have two cameras and you're flipping which one is displayed on the dashboard? Or are you flipping the feed of a single camera (i.e. a horizontal/vertical flip)?
Once you answer that question I have follow up advice for each.
Sincerely,
Orian Leitersdorf
Lead Programmer & Electrical, Falcons 4338
In that case, can you check the camera feed through a web browser (i.e. 10.33.39.2:1181, or whatever your port may be) and see if there are any options for flipping the feed? Most cameras/streams have a built-in flip feature, and if that option appears in the browser stream then you'll be able to set the camera's flip property through a config JSON. This would, by far, be the easiest solution to the problem in my opinion.
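For example, such a config JSON fragment might look like the following — the property name depends entirely on what your camera's driver actually exposes (check the browser page for the exact name), so treat `horizontal_flip` here as a placeholder:

```json
{
  "properties": [
    { "name": "horizontal_flip", "value": 1 }
  ]
}
```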
If that is not the case, then I would develop a custom Shuffleboard/SmartDashboard widget (whichever you use) that flips the image according to a NetworkTables value. This could be developed outside of the bag and then quickly tested on the robot during bag-open time.
Another possibility, which may be preferable, would be connecting the camera to a Raspberry Pi and running the processing there. I would look at the FRCPiVision image for a head start (if you want to go this route I can send you more information). This would most likely require more bag-open time for wiring the Pi.
Hi, I will check what settings are available.
The question is also whether I can run my own custom OpenCV pipeline on the Driver Station to also draw some indicators for the drivers on the stream. We currently do it on the roboRIO.
As for connecting it through a Raspberry Pi, that is a bit problematic for us because we are very close to the weight limit, so we would prefer a solution that runs on the Driver Station.
Thanks for the quick reply!
Hi,
Yes, you can run OpenCV functions through a custom widget for the SmartDashboard/Shuffleboard; you'll just need to install the dependencies on the computer.
If the indicators you're drawing are static (i.e. they don't change during the match), then you could overlay a transparent PNG in the SmartDashboard via Add -> Image and resize it to sit over the CameraServer view. In that case, you wouldn't need to program the custom widget (provided the camera can flip in its settings as well).
Edit: Could you also check the CPU usage of the roboRIO while you're running the OpenCV operations in the separate thread? It's possible that the loop overrun error is not related.
Hey.
We need dynamic images…
Also, we’ve tried disabling the image flipping and the loop overrun warning stopped, so we are pretty sure that it is related.
Develop your own application that runs on the Driver Station during the match. You'll need to package OpenCV into that application and write some custom code to flip the images, etc. As for the programming language, I would go with either Java (because that's what you're already familiar with, if I remember correctly, and Eclipse makes adding the OpenCV dependency pretty simple) or C++/Python (for the simplicity of cv2.imshow instead of having to create a JPanel in Java). For receiving the images from the HTTP feed, see this. If you need to connect it to NetworkTables (for instructions from the rio on what to display), then see this.
Develop a custom widget for the SmartDashboard/Shuffleboard. I have never done this so I can't really help here; from what I could see, the documentation for how to do this is pretty minimal. Here you should be able to develop a Java application that takes in the image from the stream and manipulates it before displaying it on the dashboard.
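The first option (a standalone Driver Station app) could be sketched as follows in Python. The stream URL/port and the green indicator bar are assumptions — use whatever your browser check showed for the URL, and draw whatever indicators the rio actually requests:

```python
import numpy as np

def process_frame(frame, flip=True):
    """Flip the frame for the drivers and stamp a simple indicator.

    The green bar is a hypothetical placeholder for whatever dynamic
    indicators the rio requests (e.g. via NetworkTables).
    """
    out = frame[:, ::-1].copy() if flip else frame.copy()
    out[0:10, :, 1] = 255   # placeholder indicator: green bar across the top
    return out

if __name__ == "__main__":
    # The stream-reading half needs OpenCV installed on the Driver Station;
    # the URL below is an assumption, not a known-good address.
    import cv2
    cap = cv2.VideoCapture("http://10.33.39.2:1181/stream.mjpg")
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("Driver view", process_frame(frame))
        if cv2.waitKey(1) == 27:   # Esc closes the window
            break
    cap.release()
    cv2.destroyAllWindows()
```

Keeping the per-frame logic in a pure function like `process_frame` also lets you test the flip/overlay code on the bench without a camera attached.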