Limelight 2020.2 - More Zoom and Hardware Panning

Is it possible to change the pan through NT?

3 Likes

bump for info on Network Table zoom/pan

1 Like

Panning X and Y directly affects tX and tY. Panning can be used to counteract an offset in the physical placement of the Limelight. If I were to mount my camera to the left of the robot, resulting in an offset of 4 degrees, I would pan my X to the right to cancel that offset. tX would still be reported relative to the crosshair (in this hypothetical, panning would place the center of the target on the crosshair, changing tX to 0).

As for the second question, I haven’t been able to find any documentation on the new hardware zoom and panning features. It may just require the technician on the drive team to change those values on the fly.

1 Like

Does anyone know how to compensate for hardware zoom in distance calculations using the y-offset method (detailed here)?

2 Likes

So would you have to add your panX (or whatever it’s called) to the value you get for tx from NT in order to counteract the panning?

And I’m sort of confused: how is this different from calibrating, at least in the scenario you painted? If your LL were mounted at an offset but your shooter was pointing correctly at the target, then you could just click calibrate. Would you want to use the pan to look around the field in order to increase how much the LL can see?

So far I haven’t been able to find many NetworkTables entries for the new features, but if there were a value for pan X or Y, it would need to be offset in code. As for the next question, it’s different from calibration because panning can happen on the fly a lot more easily. If you have someone to manage the Limelight, you can line shots up more easily. If you don’t have someone to manage the Limelight, there aren’t any NetworkTables entries to control pan or zoom, so you would need to make multiple pipelines and switch between them in code (e.g. TrenchPipeline, InitlinePipeline, and TargetzonePipeline), as in the sketch below.
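If you do go the multiple-pipeline route, the switch itself is a single NetworkTables write. A minimal Java sketch, assuming the pipelines were saved at indices 0–2 in the web UI (the names and indices here are hypothetical; “pipeline” is the documented Limelight table key):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightPipelines {
    // Hypothetical indices -- match these to the order you saved
    // the pipelines in the web UI.
    public static final int TRENCH_PIPELINE = 0;
    public static final int INITLINE_PIPELINE = 1;
    public static final int TARGETZONE_PIPELINE = 2;

    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Switch the active pipeline; each pipeline can carry its own zoom/pan settings. */
    public void setPipeline(int index) {
        table.getEntry("pipeline").setNumber(index);
    }
}
```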

This is conjecture, but is it as simple as multiplying d (or h2) by the zoom factor?

[DistanceEstimation diagram from the Limelight docs]
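For reference, the y-offset method from the Limelight docs boils down to one line of trigonometry. A sketch in Java (the variable names are mine):

```java
public final class LimelightDistance {
    /**
     * y-offset distance method from the Limelight docs:
     *   d = (h2 - h1) / tan(a1 + a2)
     * h1 = lens height, h2 = target height (same units as h1),
     * a1 = camera mount angle and a2 = ty, both in degrees.
     */
    public static double estimateDistance(double h1, double h2,
                                          double a1, double tyDegrees) {
        return (h2 - h1) / Math.tan(Math.toRadians(a1 + tyDegrees));
    }
}
```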

I believe that’s the formula for finding the distance to the target when you can’t measure it directly. But possibly it could be applied to hardware panning. So far hardware panning and zoom can’t be changed through NetworkTables, but you can change pipelines in code, so that could potentially be used to change them dynamically based on a button input or tA values.

We just flipped from 1x to 2x to 3x zoom, recorded the change in ty, and subtracted it from our calculations in code. Seemed to work fine.
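In code, that approach could look something like the sketch below; the offset values are placeholders, since each team would record its own:

```java
public final class ZoomTyCorrection {
    // Empirically recorded ty shifts at 1x, 2x, and 3x zoom (placeholder
    // values -- record your own by flipping zoom levels on a fixed target).
    private static final double[] TY_OFFSET_BY_ZOOM = { 0.0, -1.3, -2.1 };

    /** zoomIndex: 0 = 1x, 1 = 2x, 2 = 3x. Returns ty on the unzoomed scale. */
    public static double correctedTy(double rawTy, int zoomIndex) {
        return rawTy - TY_OFFSET_BY_ZOOM[zoomIndex];
    }
}
```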

Yeah, it would be that simple. What I did was record the changes in tA at the different zoom levels, and in my getTA method I modified the values using an isZoomed boolean to decide what correction tA receives.
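Something like this sketch, where the scale factor is whatever ratio you measured between the zoomed and unzoomed pipelines (the 4.0 below is a made-up placeholder; “ta” is the documented Limelight key):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class ZoomAwareTa {
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");
    private boolean isZoomed = false;                 // update wherever you switch pipelines
    private static final double ZOOM_TA_SCALE = 4.0;  // placeholder; record your own ratio

    public void setZoomed(boolean zoomed) { isZoomed = zoomed; }

    /** Returns ta normalized back onto the unzoomed scale. */
    public double getTA() {
        double rawTa = table.getEntry("ta").getDouble(0.0);
        return isZoomed ? rawTa / ZOOM_TA_SCALE : rawTa;
    }
}
```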

1 Like

Can you programmatically change the hardware zoom during a match? Ex: if the Limelight is 30 ft away from the target, zoom 2x; if it is not 30 ft away, don’t zoom.

You would just do a pipeline switch similar to switching to driver camera mode.
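So the “zoom 2x when far” logic from the question reduces to picking a pipeline index from your distance estimate. A sketch, assuming a 1x pipeline at index 0 and a 2x pipeline at index 1 (both hypothetical):

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class AutoZoom {
    private static final int PIPELINE_1X = 0;       // hypothetical: unzoomed pipeline
    private static final int PIPELINE_2X = 1;       // hypothetical: 2x-zoom pipeline
    private static final double FAR_THRESHOLD_FT = 30.0;

    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    /** Call periodically with your current distance estimate. */
    public void updateZoom(double distanceFeet) {
        int pipeline = (distanceFeet >= FAR_THRESHOLD_FT) ? PIPELINE_2X : PIPELINE_1X;
        table.getEntry("pipeline").setNumber(pipeline);
    }
}
```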

@Brandon_Hjelstrom can you clarify here? If the zoom functionality isn’t moving the optics it’s not optical zoom, and calling it “hardware” zoom would be very misleading IMO.

1 Like

I am guessing here, but I think the way it works is something like this:
Digital zoom is only when you zoom beyond 100 percent of the actual image size.
This hardware zoom may use the magic of pixel size, physics, and advanced algorithms (interpolation) to combine the individual sensors into a larger image. I know I am throwing around a bunch of words that camera makers use to confuse people who do not know how computers, sensors, and physics work, and that this is not that crowd, but I do not actually understand much beyond the what, as opposed to the how. They are not the first to do this: most phone manufacturers do it, and recent Canon, Sony, and Nikon cameras do it as well.

@Brandon_Hjelstrom
As for the next update’s features, we would love to be able to adjust the LED brightness programmatically and not just through pipelines. This may be incredibly necessary, as we are already getting reports of teams having difficulty with their Limelights because the LEDs are too bright. It would be neat to do it with zoom too, but I would guess that is a much more complicated task.
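For what it’s worth, on/off/blink is already exposed through the documented ledMode entry; it’s the brightness level that currently lives only in the pipeline settings. A sketch of what exists today:

```java
import edu.wpi.first.networktables.NetworkTable;
import edu.wpi.first.networktables.NetworkTableInstance;

public class LimelightLeds {
    // Documented ledMode values: 0 = pipeline default, 1 = force off,
    // 2 = force blink, 3 = force on. Brightness itself is pipeline-only.
    private final NetworkTable table =
        NetworkTableInstance.getDefault().getTable("limelight");

    public void forceLedsOff() { table.getEntry("ledMode").setNumber(1); }
    public void forceLedsOn()  { table.getEntry("ledMode").setNumber(3); }
}
```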

It’s still digital…?

4 Likes

While it does seem that way, it is technically digital, but it is not digital zoom. It uses the power of computing, but it works at the hardware level rather than on the finished image. The term is an old distinction from before we had such powerful sensors.

" The optical zoom is achieved by using your camera’s lens. The digital zoom , on the other hand, is achieved by cropping and enlarging the image once it has been captured by the digital camera’s sensor" (University of South Florida).

While we tend to think of a zoom lens as one where multiple convex lenses move back and forth to enlarge an image, that is not the only way to achieve zoom with hardware. You could combine the power of two sensors. I think that is what they are doing (in this case, I assume, and could be wrong, that the sensors are the individual pixels, not the actual pixel array, which is what most manufacturers refer to as the “sensor”).

I am not a camera tech and do not work for Limelight; I am just an English teacher who enjoys tech (hence I read a lot). So I could be completely off-base here (Limelight could be doing something else altogether), but that is what most camera manufacturers do.

While some consider this dishonest, I don’t think it is. By definition, software zoom is lossy and therefore becomes useless when you zoom too far, while for the most part hardware zoom does not degrade in the same way (all zooming has consequences). That is not to say that using this kind of algorithm to give the Limelight the F.O.V. of a 700 mm lens, instead of actually using a 700 mm lens, would produce a perfect image, but that is not what the Limelight is doing. Also, a 700 mm lens on an FRC robot would be useless for many other reasons (too narrow an F.O.V. and too unstable an image are just a few).

I like that we can get reliable zoom without buying a relatively useless attachment (a magnifying glass) or a whole new camera (which may cost more than 400 dollars).

I get that the name is odd. That is why it is called hardware zoom instead of analog zoom. Another way to do this is how most cellphones do it (multiple prime lenses; actually, I think my phone does both). Some photo editing software is now so advanced it can do this stuff automatically after you load your shots into the computer.

1 Like

So…what if we just called it “enhance” mode?

4 Likes

I agree with that. However, I think the entire industry would rather not differentiate. It is also a lot more nuanced than we are making it. HTC got a lot of flak when they first released their UltraPixel, which is one example of how I think manufacturers accomplish this hardware zoom. Yet a few years later, everyone was doing it. I do believe there is a branding aspect to it. However, I am not faulting the people at Limelight for following suit. The nomenclature has been around for a long time, and I would not change it if I made a camera.

As it is, I feel they are one of the best customer-supporting tech companies I have seen. They have a clear roadmap, maintain as much backward compatibility as possible, ship frequent, transparent, and effective updates, and they are the only third-party FRC vendor I have seen with Python examples on their website (I do know this is a lot easier for them than for, say, REV or CTRE, since they use NetworkTables rather than interacting with the Rio directly).

1 Like

My understanding of how Limelight achieves zooming is that they crop, but they do not enlarge. That’s why they can say it’s not digital zoom.

On the input tab with no zoom, you get the option of 960x720 resolution; this means the optical sensor is capable of capturing at least 960x720. When processing the non-zoomed pipeline at 320x240, the image must be scaled down from 960x720 before going through the pipeline.

I assume that when zoom is used, they’re just cropping from the captured 960x720 image. In 2x zoom they’d be cropping to 480x360 and then still scaling; in 3x zoom they’d just crop outright (notice that 320x240 is exactly 1/3 of the dimensions of 960x720). The pan capability allows us to not just crop the image from the center, but choose a position on the captured image to use.
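If that guess is right, the crop window for a given zoom and pan is straightforward arithmetic. A sketch of my reading of it (this is speculation, not Limelight’s confirmed implementation; the pan fractions are hypothetical parameters):

```java
/**
 * Computes a centered-then-panned crop window on the full 960x720 capture.
 * Speculative: this mirrors the cropping explanation above, not a
 * confirmed Limelight implementation.
 */
public final class CropWindow {
    public final int x, y, width, height;

    public CropWindow(int x, int y, int width, int height) {
        this.x = x; this.y = y; this.width = width; this.height = height;
    }

    public static CropWindow forZoom(double zoom, double panXFrac, double panYFrac) {
        final int fullW = 960, fullH = 720;
        int w = (int) Math.round(fullW / zoom);   // 2x -> 480, 3x -> 320
        int h = (int) Math.round(fullH / zoom);   // 2x -> 360, 3x -> 240
        // Pan shifts the window away from center, clamped to stay on-sensor.
        int x = clamp((fullW - w) / 2 + (int) Math.round(panXFrac * fullW), 0, fullW - w);
        int y = clamp((fullH - h) / 2 + (int) Math.round(panYFrac * fullH), 0, fullH - h);
        return new CropWindow(x, y, w, h);
    }

    private static int clamp(int v, int lo, int hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}
```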

1 Like

So, I don’t mean to sound like a broken record, but I still think calling this hardware zoom is really misleading. You’re describing cropping the image. Just because you’re not then enlarging the image again doesn’t make it “hardware” zoom. It’s just cropping; it’s still a software modification to the original image captured by the camera. I would really appreciate a clarification from the Limelight team so we can all stop speculating and hear the reasoning behind this naming.