Objective:
Enable our driver to locate cargo that is on the far side of the field, obscured by the Hub.
Of the options available in January 2022:
A. What is a camera system that:
- has minimal lag
- has a wide field of view
- requires little to no added software/co-processor complexity?
Implementation Question: What is the simplest way to integrate it with
the roboRIO and
the driver station software?
We’ve used cameras in the past and had lag problems. (I know there are settings involved to improve this.)
We have a Limelight 2 in our shop. We would like to know if there is a cheap solution so we don’t need a Limelight for each of our practice bases and can have low-cost spare cameras to swap in if needed at a comp. We just want our driver to see what is happening out of view in real time - we are not pursuing a vision system that detects cargo.
Disclaimer: I know nothing about the subject beyond what I see as Drive Coach.
A USB camera can be used, and lag is highly dependent on image size, frame rate, and, mostly, how much bandwidth is available on the FMS at that particular time… which you don’t have much control over.
You have a max of 4 Mbps to send data back to your driver station. So, work backward: figure out what resolution and frame rate your driver needs first. Then you can look at either getting an IP-based camera (bypassing the need to relay through your roboRIO or a Pi/other co-processor) or running the stream through your roboRIO or a Pi/co-processor. [If you don’t have any spare Pis in house already, you might want to think twice before deciding on a Pi - they’re either crazy expensive right now or on long ship times across pretty much all vendors, legitimate or otherwise.]
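To make that backward calculation concrete, here is a rough back-of-the-envelope sketch. The bytes-per-pixel figure is an assumed MJPEG compression ratio (real output varies a lot with scene content), so treat the result as a sanity check, not a guarantee:

```python
# Rough MJPEG bandwidth estimate for a driver camera stream.
width, height, fps = 320, 240, 15
bytes_per_pixel = 0.15  # assumed JPEG compression ratio (~0.1-0.3 is typical)

frame_bytes = width * height * bytes_per_pixel  # ~11.5 KB per frame
mbps = frame_bytes * fps * 8 / 1_000_000        # bits per second -> Mbps
print(f"~{mbps:.1f} Mbps")                      # ~1.4 Mbps, under the 4 Mbps cap
```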
We had no issues with sandstorm lag in 2019. We were using Python and ran a simple wide-angle camera stream to the driver station. There was no processing done on the stream, though.
I feel your options are either a co-processor (even a Pixy, which may be reliable enough for ball detection, or a Jetson) or the roboRIO; but if you want processing on the roboRIO, I think OpenCV or LabVIEW Vision are the only options.
It would be insanely cool if there were a port of PhotonVision that could run on the roboRIO, but I am not sure that is even possible.
Last I checked, the Limelight can do a second “driver camera” stream from a USB camera no problem - maybe do that, since you have one in hand already? Just make sure, if you can, to lower the resolution to keep more bandwidth available.
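If you go the Limelight route, the stream layout is selected through the Limelight’s NetworkTables table. A minimal sketch with pynetworktables, assuming the documented “stream” key (the server address is team-specific):

```python
from networktables import NetworkTables

NetworkTables.initialize(server="10.TE.AM.2")  # your roboRIO address (team-specific)
limelight = NetworkTables.getTable("limelight")

# Stream modes per the Limelight docs:
#   0 = side-by-side, 1 = PiP with the Limelight feed large,
#   2 = PiP with the secondary (USB driver) camera large
limelight.putNumber("stream", 2)
```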
The roboRIO 1 and 2 are both just too slow to be useful with PhotonVision. CameraServer.startAutomaticCapture() is all you need to get a USB webcam serving a plain stream from the roboRIO.
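For a Python team, the RobotPy equivalent is only a few lines. A minimal sketch, assuming the 2022-era getInstance() style (newer RobotPy releases expose the same calls as static methods):

```python
from cscore import CameraServer

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()  # serves an MJPEG stream to the dashboard
camera.setResolution(320, 240)       # keep these low to stay inside the
camera.setFPS(15)                    # 4 Mbps FMS bandwidth budget
```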
Can you elaborate on this a bit? (as the author of cscore/CameraServer, I’m interested in hearing what problems you’re having)
Note that using raw OpenCV (cv2) to talk to cameras is not going to be as reliable as cscore (which talks directly to V4L and has a lot of failure handling; e.g., it properly handles USB connects/disconnects, while raw cv2 capture will just lock up).
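For teams that want OpenCV processing without cv2.VideoCapture, the cscore route looks roughly like this (RobotPy-style sketch; grabFrame returns a zero timestamp when a frame couldn’t be read, e.g. on a disconnect):

```python
import numpy as np
from cscore import CameraServer

cs = CameraServer.getInstance()
camera = cs.startAutomaticCapture()
camera.setResolution(320, 240)

sink = cs.getVideo()  # CvSink that hands frames to OpenCV as numpy arrays
frame = np.zeros((240, 320, 3), dtype=np.uint8)

while True:
    timestamp, frame = sink.grabFrame(frame)
    if timestamp == 0:
        # Camera error (e.g. USB disconnect) -- log and keep trying rather
        # than locking up the way a raw cv2 capture would.
        print("camera error:", sink.getError())
        continue
    # ... run OpenCV processing on `frame` here ...
```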
Thanks, it’s awesome you put so much care into the library.
We have had bugs reading from different cameras in Python using CameraServer. Because of this, we did not write that functionality in C++; we just read using OpenCV. Maybe I will try it in C++.
Our biggest problem was that the Python examples sometimes straight up didn’t work, using methods that didn’t exist, etc.
Now, I know the Python library is just bindings, so maybe that’s the big issue.
In C++, the example gave us some NetworkTables issues, but we fixed that when we copied over what worked in Python.
The main place the library could be improved is the examples in the docs, and our team actually plans to write some after the season.
Also, I have one question: where is the source code for the web dashboard? We want to write a script that uploads the binary to the website automatically, since drag-and-drop becomes a pain. I guess we could also do it over SSH.
All of the Python bindings for cscore/CameraServer come from RobotPy. The WPILib team doesn’t actually maintain those, but we bundled them into WPILibPi because we know a lot of teams like to use Python for vision processing. If there are specific issues you ran into with the Python wrappers, it would definitely be good to report those to the RobotPy maintainers (or, if there’s an issue with a Python example, report it on the wpilibpi repo). The 2022 release is imminent as well, so it’s possible that might fix a few things.
I agree the examples are a bit barebones and could use some love. Contributions welcome!
Regarding the web dashboard, the source code lives here: https://github.com/wpilibsuite/WPILibPi/tree/main/deps/tools/configServer . The server side is all C++. The actual transport for uploads is via WebSockets (a text message to start, binary messages for the contents, a text message to finish) so it should be relatively easy to script with Python.
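For anyone who wants to script that upload, here is a hypothetical outline using the websocket-client package. The endpoint path and JSON field names are guesses for illustration - check the configServer source above for the real message format before relying on this:

```python
import json
import websocket  # pip install websocket-client

def upload(host: str, filename: str) -> None:
    # Endpoint path is an assumption; find the real one in the configServer source.
    ws = websocket.create_connection(f"ws://{host}/")
    ws.send(json.dumps({"type": "uploadStart", "name": filename}))  # assumed fields
    with open(filename, "rb") as f:
        while chunk := f.read(64 * 1024):
            ws.send(chunk, opcode=websocket.ABNF.OPCODE_BINARY)
    ws.send(json.dumps({"type": "uploadFinish"}))  # assumed fields
    ws.close()

upload("wpilibpi.local", "myRobotProgram")  # hostname and filename are examples
```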