LifeCam HD-3000 drops frames while moving

Edit: The issue seems to be related to camera motion and auto-exposure slowing the camera down. You can watch the frame rate change just by moving the camera. Even turning off auto-exposure didn’t fix it, only improved it… I think it’s just a cheap webcam processor dropping frames when it falls behind.

Original Post:
Hello all, I didn’t know whether I should make this a GitHub issue or a topic here, so I started here.

As with many others, I have been playing with AprilTags, and I think I came to a roadblock that should be looked at. I think the CameraServer CvSink.grabFrame() function has an issue, and I think it’s affecting anything using CameraServer (both WPILib and PhotonVision).

I was suspicious because while we are limited to ~7 FPS with AprilTag detection, we were only using ~40% CPU, and I wondered why we couldn’t make the data processing more parallel.

So, I made a VERY simple vision loop (posted below) in Python: it just grabs a frame and does nothing else. Then I took the time between loop iterations and logged it. Here is a plot of the FPS; notice how it clusters at 30, 15, and 7.5 FPS.

# some WPILib vision initialization before (startCamera and cameraConfigs
# come from the WPILibPi example template)
import time

import numpy as np
from cscore import CameraServer
from networktables import NetworkTables

csInst = CameraServer.getInstance()
vs = NetworkTables.getTable('Vision')  # Get the Vision NetworkTable

# start cameras (we grab from the last one started)
for config in cameraConfigs:
    cam = startCamera(config)
cvSink = csInst.getVideo(camera=cam)

# Begin image processing loop
frame = np.zeros(shape=(480, 640, 3), dtype=np.uint8)  # (height, width, channels) for 640x480
t0 = time.perf_counter()
while True:
    t1 = time.perf_counter()            # used for testing performance
    vs.putNumber("FPS", 1 / (t1 - t0))  # FPS = 1 / time between loop iterations
    frametime, frame = cvSink.grabFrame(frame)  # frametime == 0 indicates a grab error
    t0 = t1

I’m definitely not CPU-bound: this is the WPILibPi v2022.1.1 image on a Pi 4B with 2 GB of RAM and a LifeCam HD-3000, configured in WPILibPi as a 640x480, 30 FPS camera. No other streams were running on the Pi (verified by the Pi’s network traffic). Data was collected on my laptop using Robot Simulation (only to run a NetworkTables server) with WPILib logging, and plotted with AdvantageScope.
[Plot: measured FPS over time, clustering at 30, 15, and 7.5 FPS]


I had a PiCam lying around, so I plugged it into the CSI slot, configured WPILib with only that camera, and ran the same dummy code for ~5 min without seeing the issue, so it looks to be a problem with the UsbCamera path instead.


Have you tried any other USB cameras? The PiCam uses the exact same code in cscore as external USB cameras do (both go through the V4L layer).


I have an old Logitech C270 (capable of 720p) lying around, so I tried that.

For the first minute I hadn’t set a size/FPS in WPILib, so I fixed that. From 1:00 to 3:00 it ran at 640x480 @ 30 FPS, but I only got 15. I tried a hot replug between 4:00 and 5:00, then at 5:30 I bumped it to 1280x720 @ 30 (one of the supported modes). It stayed pretty steady while sitting idle, but at 6:00 I moved the camera, started flashing AprilTags, and watched the frame rate change when I showed a full-screen tag vs. my lit hallway (no decoder was running).

I wonder if this camera is CPU-limited and can’t push the frames.
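In case anyone wants to reproduce this, here’s a minimal sketch of forcing a specific mode from code instead of the dashboard. This is just a sketch; it assumes `cam` is the cscore UsbCamera started by the template code above, and that the camera actually supports the requested mode.

# A sketch: force MJPEG 1280x720 @ 30 FPS instead of relying on dashboard defaults.
# Assumes `cam` is the cscore.UsbCamera started earlier.
import cscore

cam.setVideoMode(cscore.VideoMode.PixelFormat.kMJPEG, 1280, 720, 30)

# List what the hardware claims to support, to pick a valid mode:
for mode in cam.enumerateVideoModes():
    print(mode.pixelFormat, mode.width, mode.height, mode.fps)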

I haven’t run this particular test, but over the years I’ve seen FPS be very sensitive to lighting conditions, especially when tied to the exposure setting. Pick a fixed low exposure, if you haven’t already.

What did OP mean by Left Axis, Discrete Fields, and Right Axis?

Edited to add: turn off auto white balance, too, if you are changing the lighting conditions by moving the camera or target.
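Something like this should pin both down in robotpy cscore. A sketch only: `cam` is assumed to be your UsbCamera, and the values are placeholders to tune for your lighting.

# A sketch: lock exposure and white balance so the camera stops re-metering.
# Assumes `cam` is a cscore.UsbCamera; the values are placeholders to tune.
cam.setExposureManual(10)        # fixed low exposure on cscore's 0-100 scale
cam.setWhiteBalanceManual(4500)  # fixed white balance (color temperature)
cam.setBrightness(50)            # optionally pin brightness as well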


The Left Axis/Discrete Fields/Right Axis labels are just part of the plotting tool; I cropped them out later. You can have separate scales for the left and right sides.

I really like the plotting tool; they just published a formal release supporting the WPILib log format: Release v2.0.0-beta-1 · Mechanical-Advantage/AdvantageScope (github.com)

So, after a little more playing, it does seem to be the camera realizing it can’t process a frame in time and just dropping it. I can move the camera quickly and watch the frame rate fall. Sometimes it recovers, sometimes it doesn’t. I tried shutting off all the auto-calibration features, but I still can’t get a stable 30 FPS off of it while moving it by hand. I will say, auto-exposure killed the camera; the frame rate just dropped to zero.

Here are my final test settings in case anyone cares. I’ll try to change the thread title to something more appropriate too.


Thanks for the tip about AdvantageScope. I can’t get it to connect to my robot, though. It says “Searching” no matter what variation of an address I use. I might have to read the documentation!

So, there are 2 parts to this:

  1. AdvantageScope was originally intended to work with AdvantageKit (another tool they developed). You needed to restructure your code to take advantage of their logging format. I tried it on a simulated robot and it worked; I just didn’t like their style. That’s what you tried to connect to.

  2. In March, WPILib released the DataLogManager class that will log data on the Rio (On-Robot Telemetry Recording Into Data Logs — FIRST Robotics Competition documentation). We used it all competition season and it worked wonders. You just put a couple of calls in robotInit (like DataLogManager.start()), and it will log all NetworkTables data for you; see the sketch after this list. Throw a flash drive on the robot and it works great. It even works with Robot Simulation.
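Here’s a minimal robotpy sketch of what that looks like (the Java version is nearly identical). The DriverStation call is optional; I believe it adds joystick/Driver Station data to the same log.

import wpilib

class MyRobot(wpilib.TimedRobot):
    def robotInit(self):
        # Start logging; writes to a USB drive if present, otherwise
        # internal storage. NetworkTables data is captured automatically.
        wpilib.DataLogManager.start()
        # Optional: also record joystick/Driver Station data
        wpilib.DriverStation.startDataLog(wpilib.DataLogManager.getLog())

if __name__ == "__main__":
    wpilib.run(MyRobot)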

The issue is that WPILib never really made a good viewer for the data (yet). There is a tool that will pull data off the Rio (if you don’t use a flash drive, it logs to the Rio’s internal storage, or the SD card on a Rio 2) and export it to CSV; it just doesn’t plot it. 6328 added support for WPILib log files this fall, and it has been amazing to use.

For the live connection, I think you’d be interested in this docs page specifically. Note that AdvantageScope only supports NetworkTables 4 (added in the 2023 beta of WPILib), not NetworkTables 3 (used by WPILib 2022 and older).

Thanks for the help.
I’m trying “Connect to Robot”. I have proper logs there. I think what you are saying is that “Connect to Robot” doesn’t fetch files; it’s only for live viewing, which I wasn’t trying to use. I have to move my log files from the robot to the PC myself.
I tried “Download Logs…” and that can’t find the roboRIO, either.

Okay - not what I meant to do.

Ah, I misunderstood. Have you checked the “roboRIO Log Folder” in preferences? The default is to read from “/media/sda1” (one of the possible USB drives), so if you’re saving to the RIO’s internal storage you’ll probably want “/home/lvuser”.

I just went to “Open…” and selected the .wpilog file, and it worked right away. Make sure you get their v2 beta release that just came out today.

Yes, I changed that to /home/lvuser. It still can’t find the roboRIO. Maybe I need to use the USB cable between the roboRIO and the PC instead of Ethernet?

It should work over Ethernet (though USB is always an option, of course). What address are you using? Based on your team number, I would expect it to be “10.42.37.2”.
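(For reference, the FRC convention is 10.TE.AM.2 built from the team number; a quick illustration in Python, just as a sanity check:)

def rio_address(team: int) -> str:
    # FRC static fallback address convention: 10.TE.AM.2
    return f"10.{team // 100}.{team % 100}.2"

print(rio_address(4237))  # -> 10.42.37.2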

That opens the PC’s folders. I thought I could get the files from the roboRIO directly. I’ll copy from the roboRIO to the PC to keep trying this out.
Thanks. I’m sorry I hijacked your thread.

Exactly right. I also tried roborio-4237-frc/ and roborio-4237-frc.local/ with and without the “/” at the end. These all work in the browser and PuTTY right now, so I know the addressing is right.

Glad you found a workaround for now. If you continue to have problems downloading logs, feel free to open an issue on the GitHub repository: Issues · Mechanical-Advantage/AdvantageScope · GitHub

Thanks. I did download the files with Data Log Tool and am making graphs!

Since you thought what you are seeing would also manifest in PhotonVision, I tried your experiment there on an RPi 4B and HD-3000. (I don’t know how to run your Python program on the WPILibPi.)

My results are different from yours, but with some similarities, in both Driver Mode and AprilTag mode.

First, the FPS is sensitive to exposure. I get 30 FPS with low exposures, which result in very dark images ranging from “black” to barely discernible blobs.

FPS drops to 15 and then 7 as the exposure is brightened (the same break points you see). That pattern makes sense if a longer exposure exceeds the frame interval: once a single exposure takes more than 1/30 s, the camera can only deliver every other frame, then every fourth.

Auto-exposure consumes an inordinate amount of time and runs frequently, so I agree that it is useless in this experiment; the camera apparently can’t take pictures and adjust exposure at the same time, nor do so quickly.

With AprilTag recognition, the frame rate increases with rapid movement. I don’t know the PhotonVision internals, so I can only guess that many frames are quickly rejected by a preliminary scan that finds no AprilTag candidates (because of the blur, which is why we are concerned with high-speed cameras and global shutter; mine should be here tomorrow).
