Over the course of our last two regionals, we had constant drivetrain issues, ranging from two motor controllers in separate gearboxes each running at only half of max speed, to both motors in one gearbox running at only half of max speed. Now that we have won a regional and qualified for Worlds, we would like to do more in sandstorm and during teleop. Our Limelight is not giving us any issues, but whenever we use the Microsoft LifeCam, we constantly have packet loss and high latency. We never go over 3.5 Mbps at any point during the match, and we only use one camera or the other at a time.
We are also having an interesting issue with our button box. It works flawlessly when tethered, but as soon as a match starts, the driver station no longer recognizes it. Could this be a driver issue on the computer?
Any feedback on possible causes of this would be appreciated.
Do you mind posting your code on a GitHub repo or similar? That way we can see your camera settings and drive methods.
Sure, the programmers are sending it to me now. I neglected to mention that our drive code is fixed, so now it's only camera issues. Do you still want our drive code, or just the camera code?
Do you have a co-processor?
There have been sporadic reports of problems on the field (but not in the pits) when you use a second Ethernet device and do not use an Ethernet switch (i.e., you plug the second device directly into the radio). We had some troubles that vaguely sound like this. We will probably run our next competition with a switch on the bot.
Are you sure about this? How are you switching cameras?
Wherever you bring up your cameras would be most useful, but including all of your code would be great too!
We are never sending both feeds to the driver station. When we want to use the USB camera, the programmers uncomment that code and unplug the Limelight; when we want to use the Limelight, they comment the code back out and plug the Limelight in. We would like to be able to use two cameras, but one is fine. Getting the USB camera working would be crucial because it is in line with our hatch mechanism, so we can line up better.
The programmers are working on that right now. We are thinking that compressing the stream and regulating resolution with a Raspberry Pi may be better.
We run two LifeCams streaming simultaneously without issues. Our team will also be using the Limelight for vision processing only, not driver vision.
No Raspberry Pi means you plug into the roboRIO directly, correct? If so, I can send you the code we use, but I'd also like to see the code you're running right now.
You might want to pull up the Driver Station logs and see if anything is reported there. It does log dropped packets. I'm not sure whether that covers the camera stream, but if you are dropping packets, the controls will suffer.
You should also consider turning down the camera bandwidth. If you are using the standard WPILib server, you can set compression (really, it is JPEG quality) in the stream URL. It will mean the roboRIO is compressing the JPEG, but it might work.
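For reference, the WPILib/cscore MJPEG server accepts query parameters on its stream URL, including a compression value. Here is a minimal sketch of building such a URL; the `roborio-TEAM-frc.local` hostname and port 1181 are the usual defaults for the first camera, but treat the exact parameter handling as an assumption and verify against your own stream:

```java
public class StreamUrl {
    /**
     * Builds an MJPEG stream URL for the WPILib camera server.
     * compression is a JPEG quality value (0-100); lower values
     * mean smaller frames and less bandwidth.
     */
    public static String build(int team, int port, int compression) {
        return String.format(
            "http://roborio-%d-frc.local:%d/?action=stream&compression=%d",
            team, port, compression);
    }

    public static void main(String[] args) {
        // Example: first camera server, fairly aggressive compression.
        System.out.println(build(254, 1181, 30));
    }
}
```

You can paste the resulting URL into a browser or your dashboard's camera widget to compare quality and bandwidth at different compression values.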
At the competition we did notice high packet loss and latency as high as 100 ms, and it was right around when we were having drive issues.
How are you measuring your bandwidth usage? Are you using logs from the FMS, or something on your dashboard?
So my guess is that you are actually saturating your network. Maybe the camera is sending more than is logged (doubtful, but possible). Maybe the FMS could not deliver 4 Mbps at the event. Poor radio placement could also limit your bandwidth.
I would try getting the camera bandwidth down to around 2 Mbps. That is certainly doable with an image size of 320x240 and "compression" set to around 25-30.
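As a rough sanity check on targets like this, you can estimate stream bandwidth from average frame size and frame rate. The bytes-per-frame figure below is a placeholder you would measure from your own stream (JPEG frame size varies a lot with scene content and quality setting); the arithmetic itself is just frames per second times bits per frame:

```java
public class BandwidthEstimate {
    /** Estimated stream bandwidth in Mbps for an average JPEG frame size. */
    public static double mbps(int bytesPerFrame, int fps) {
        return bytesPerFrame * fps * 8.0 / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Assumption: a 320x240 MJPEG frame at quality ~25-30 might land
        // around 8 KB. Measure your own frames rather than trusting this.
        System.out.println(mbps(8_000, 30));
    }
}
```

At 8 KB per frame and 30 fps this works out to about 1.92 Mbps, which is consistent with the 2 Mbps target above; halving the frame rate or frame size halves the estimate.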
Hmmm… you are able to use two LifeCams with no co-processor? We would prefer that, to keep down complexity. Our camera code was written Tuesday at competition and is not on GitHub yet. If you could send me your code so I can hand it over to the programmers, that would be awesome. Once they get ours on GitHub, I'll be sure to post it so people can view it and we can determine whether it's a code issue.
When we were having our issues, the FTAs told us that we were never near the bandwidth limit.
Ok, that’s pretty solid there. How was the CPU load on the driver station?
Never checked that. I will look through the logs again tomorrow for everything I have been told to watch for in this thread. From watching during the match, it doesn't seem like that was the issue, but I'll have a definitive answer tomorrow.
For anyone wondering, we are using the WPILib CameraServer code:
CameraServer.getInstance().startAutomaticCapture("CAMERA 1", "/dev/video0").setVideoMode(PixelFormat.kMJPEG, 160, 120, 20);
CameraServer.getInstance().startAutomaticCapture("CAMERA 2", "/dev/video1").setVideoMode(PixelFormat.kMJPEG, 160, 120, 20);
The key here is the .setVideoMode call. Remember that LifeCams only support certain resolution and frame-rate combinations, so you can't adjust them freely without looking up valid inputs. I threw this in our robotInit() and it streams to Shuffleboard just fine.
If you never go over your bandwidth limit, the issue may lie somewhere else, so I would give the above a shot, and if it doesn't work, post your code so we can rule that out.
We experienced the exact same issues at our event. The FTAs were mystified; it appeared that our joysticks would stop reporting to the driver station, but that was also a red herring as to the real problem.
What was (and is) happening is that the QoS rules are starving out your control packets, and they are being starved by the bandwidth your cameras are using. The FTAs don't see any of this in their event reports because the rules being enforced are the outbound rules local to the radio.
Here are the two things we did to fix our issue:
1) We re-wrote our control loop so that our default commands weren't the only place reading joystick values, because when those control packets get blocked you effectively "lose comms" without actually losing comms, and your driving is completely affected. We now poll the controller axis values in a matchPeriodic() method we wrote, which is called from both teleopPeriodic() and autonomousPeriodic(). This way, we aren't dependent on the scheduler getting updates from the driver station for joystick values. We store the axis values in variables that are then given directly to the driving commands, which kept driving under control.
2) We turned the stream down to low quality on the Limelight output page. At rest, in the pits, etc., our bandwidth usage was down near 0.5 Mbps, but while driving it was higher. We also had a secondary camera (a LifeCam) plugged into the Limelight, which increased bandwidth usage.
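A minimal sketch of the joystick-caching idea from step 1. The class below is plain Java; matchPeriodic() and the WPILib calls in the comments come from the description above, while AxisCache itself is a hypothetical helper name. Each loop you poll the axes once and store them, and the drive code reads the stored values rather than depending on the scheduler path:

```java
/**
 * Caches the most recent joystick axis values so drive code reads
 * locally stored numbers instead of depending on the command scheduler
 * receiving fresh driver-station packets.
 */
public class AxisCache {
    private double forward;
    private double rotation;

    /**
     * Call once per loop, e.g. from a matchPeriodic() method invoked by
     * both teleopPeriodic() and autonomousPeriodic(), something like:
     *   cache.update(joystick.getY(), joystick.getX());
     */
    public void update(double forward, double rotation) {
        this.forward = forward;
        this.rotation = rotation;
    }

    public double getForward() { return forward; }
    public double getRotation() { return rotation; }
}
```

The drive commands then call getForward()/getRotation() instead of reading the joystick objects themselves, so a burst of starved control packets degrades into briefly stale axis values rather than a scheduler stall.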
We got the same story from the FTAs, that our network events looked fine, but the problem was clearly related to bandwidth usage and the camera streams, and ONLY while on the field. Neither the CSAs nor the FTAs had any realistic troubleshooting ideas at the time (I get that they are volunteers doing their best), but we ultimately narrowed the problem down to camera-stream bandwidth, and once we made those tweaks we didn't have any other problems. That leads me to believe the QoS rules must be starving the control packets somehow and keeping them from getting through to the driver station.