This is our first year using a command-based project and GRIP vision processing. We have made a GRIP project and have had success with the command-based framework, but now we are stuck on how to integrate the GRIP code we generated. We are able to send a camera feed to the driver station using StartAutomaticCapture() in our camera subsystem. We tried making the GRIP pipeline a command using the .cpp and .h files provided in the generated code. We also tried creating a GRIP pipeline object and calling its Process() method, passing in a matrix returned by GrabFrame(). That compiles but throws the error “device or resource busy”, and sometimes we lose robot code altogether. What is a good way to integrate our GRIP pipeline into our code?
You are trying to open the UsbCamera on port 0 in two places: VisionCommand.cpp line 24 and CameraSubsystem.cpp line 11. Whichever one opens it second will get the “device or resource busy” error.
My recommendation would be to move the entirety of your VisionCommand to the subsystem. It also needs to be in a separate thread; right now your VisionCommand will block the entire robot program. See VisionRunner for an easy way to do this, or this documentation page.
You can pass cs::UsbCamera around by value everywhere; it keeps an internal reference count.
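Roughly, the structure I’m suggesting looks like this. It’s a minimal sketch, not drop-in code: the header paths are from the 2019-era WPILib layout and may differ in your version, the function name is made up (call it once from your CameraSubsystem, e.g. its constructor), and I’m assuming the GRIP-generated grip::GripPipeline derives from frc::VisionPipeline, which the FRC C++ export does.

```cpp
// Minimal sketch of a vision thread started from the camera subsystem.
#include <thread>

#include <cameraserver/CameraServer.h>
#include <vision/VisionRunner.h>

#include "GripPipeline.h"  // the GRIP-generated pipeline

void StartVisionThread() {
  // Open the camera exactly once, here, and nowhere else.
  cs::UsbCamera camera =
      frc::CameraServer::GetInstance()->StartAutomaticCapture(0);
  camera.SetResolution(320, 240);

  // Run the pipeline on its own thread so it never blocks the robot loop.
  // The lambda is called after each processed frame.
  std::thread([camera] {
    grip::GripPipeline pipeline;
    frc::VisionRunner<grip::GripPipeline> runner(
        camera, &pipeline,
        [](grip::GripPipeline& p) {
          // Read the pipeline outputs here (e.g. the filtered contours)
          // and hand them off somewhere thread-safe: NetworkTables, a
          // mutex-guarded member of the subsystem, etc.
        });
    runner.RunForever();
  }).detach();
}
```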
I apologize; that was our code before GRIP. Our latest code is in the GitHub branch “grip_vision”.
I see what you are saying, but does that mean we should put the whole GRIP pipeline in our camera subsystem? Also, how would we multithread that? Our plan was either to make the GRIP pipeline a command and use command groups (we are still not sure whether a parallel command group actually runs commands on separate threads), which is what the latest code does, or to create a GRIP pipeline object and run it in a std::thread, though again we are not sure whether that will cause problems, since unlike Java, C++ does not have an FRC VisionThread class.
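To be concrete, the std::thread idea we had in mind looks roughly like this. It’s just a sketch with placeholder names, not our real code, and it assumes GetVideo() is the right way to reuse the camera that StartAutomaticCapture() already opened:

```cpp
// Rough sketch of the plain std::thread idea. The key point is to grab
// frames from a CvSink attached to the camera the CameraServer already
// opened, instead of opening /dev/video0 a second time.
#include <thread>

#include <cameraserver/CameraServer.h>
#include <opencv2/core/mat.hpp>

#include "GripPipeline.h"

void StartGripThread() {
  std::thread([] {
    // GetVideo() returns a CvSink on the camera already started with
    // StartAutomaticCapture(), so the device is only opened once.
    cs::CvSink sink = frc::CameraServer::GetInstance()->GetVideo();
    grip::GripPipeline pipeline;
    cv::Mat frame;

    while (true) {
      if (sink.GrabFrame(frame) == 0) {
        continue;  // timeout or error on this frame; try again
      }
      pipeline.Process(frame);
      // ...publish the pipeline outputs somewhere thread-safe...
    }
  }).detach();
}
```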
Our team was also considering using a Raspberry Pi to take the vision-processing load off the roboRIO. Would that be the better idea?
Yes, putting it on a Raspberry Pi is a good approach, especially if you’re uncomfortable with adding threading to your robot code. I recommend you take a look at the FRCVision image and its examples.
OK, so I looked into the Raspberry Pi. I have set up the FRCVision image on the Pi, I can access the frcvision.local web server, and I can see the camera stream. I read the WPILib instructions on how to upload code, and they say to upload an executable .cpp file and it should just run. Do I put the .cpp file that GRIP generates there, even though it also comes with a header file containing all of the necessary method and variable declarations? Or do I put all of the declarations in the .cpp file and upload that? We are a bit lost at this point. We looked at the example program, and it creates a GRIP pipeline object, but where do we put the GripPipeline class itself?
You have to build the .cpp file using make to create an executable, which is what you then upload to the Pi. See the instructions in the example; for C++ you need to install a cross-compiler and put it on your PATH, then run “make” to build the executable. You’ll need to edit the Makefile to add the generated .cpp file so it also gets built, and modify the example .cpp file to use the GripPipeline instead of the example pipeline.
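From memory of the example (double-check against the copy you downloaded), main.cpp starts a frc::VisionRunner with the placeholder MyPipeline near the bottom; that is the part you swap out. Roughly, the edit looks like this. It’s a sketch of the change, not the whole file, and it assumes the GRIP-generated GripPipeline derives from frc::VisionPipeline, which the FRC C++ export does:

```cpp
// Sketch of the change to the example's main.cpp (the surrounding code in
// your copy may differ slightly). GripPipeline.cpp/.h go next to main.cpp.
#include "GripPipeline.h"

// ...inside main(), after the cameras vector has been populated...
if (cameras.size() >= 1) {
  std::thread([camera = cameras[0]] {
    grip::GripPipeline pipeline;  // the GRIP-generated pipeline, not MyPipeline
    frc::VisionRunner<grip::GripPipeline> runner(
        camera, &pipeline,
        [](grip::GripPipeline& p) {
          // Use the pipeline outputs here, e.g. publish contour data to
          // NetworkTables so the roboRIO can read it.
        });
    runner.RunForever();
  }).detach();
}
```

In the Makefile, add the generated .cpp to whatever variable lists the files to build (the exact name depends on your copy, something like an OBJS or sources line alongside main), so GripPipeline gets compiled and linked into the executable.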
Do you have a “lib” directory in C:\Users\newton\Downloads\cpp-multiCameraServer\cpp-multiCameraServer that contains a bunch of .so files? There should be one, as it’s included in the zip file.