Limelights work great, but are a little bright

R9 section m

3 Likes

I am not 100% sure, but I believe teams using pynetworktables can control them in real time, even during a match, by sending and receiving values through the tables. Teams can turn them on only when necessary, for example during autonomous routines.

I believe there is a disabledInit() function that you can use to turn them off while the robot is disabled. If not, you could always start a WPILib Timer in robotInit() and, after 30 seconds (or however long you think the Limelight takes to boot up), shut the lights off until autonomous or teleop init.
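The timer idea above can be sketched as a pure decision function. This is a minimal sketch, not any team's actual code: `led_mode_for` and the 30-second boot wait are illustrative names/values, while the numeric modes come from the Limelight's NetworkTables interface (0 = pipeline default, 1 = force off, 2 = force blink, 3 = force on).

```python
# Limelight "ledMode" values from its NetworkTables interface.
LED_PIPELINE, LED_OFF, LED_BLINK, LED_ON = 0, 1, 2, 3

def led_mode_for(enabled: bool, seconds_since_boot: float,
                 boot_wait: float = 30.0) -> int:
    """Pick an LED mode for the current robot state.

    During the first `boot_wait` seconds the camera may still be
    booting, so we leave the pipeline default alone; after that the
    LEDs stay off whenever the robot is disabled.
    """
    if seconds_since_boot < boot_wait:
        return LED_PIPELINE
    return LED_ON if enabled else LED_OFF
```

In a periodic method you would then write the result out with something like `NetworkTables.getTable("limelight").putNumber("ledMode", mode)` (pynetworktables).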


“Purpose built for FRC”
400 lumens bright. Not allowed anymore???

1 Like

I don’t believe that the brightness itself is the issue. I think it is when the LEDs are left on for the entirety of the match, rather than being enabled for targeting and then disabled when not in use. According to rule R9, they can be used in this situation, as it states that any bright light “may only be illuminated for a brief time while targeting”. 2605 has used the Limelight in this manner for the past two years and hasn’t gotten any complaints about it.

2 Likes

I can answer that. R9, part m: High intensity light sources used on the ROBOT (e.g. super bright LED sources marketed as ‘military grade’ or ‘self-defense’) may only be illuminated for a brief time while targeting and may need to be shrouded to prevent any exposure to participants. Complaints about the use of such light sources will be followed by reinspection and possible disablement of the device.

The Limelight camera’s LEDs produce at least four times the illumination of a standard LED ring system. That is bright enough to provoke serious reactions from volunteers at the side of the field, and it can generally blind cameras on the field when the robot points at them. There is no need to be that bright when used with retroreflective tape. The LEDs can be turned on and off, and their brightness can be adjusted, through the interface software without powering down the camera. Please learn how to accomplish this before your next event.
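For reference, the on/off switching Al describes is a single NetworkTables write. A minimal sketch, assuming the camera publishes under its default "limelight" table; the helper name is mine, and the table is passed in so the function works with any NetworkTables client:

```python
def set_limelight_leds(table, on: bool) -> None:
    """Force the Limelight LEDs on or off without power-cycling.

    `table` is the "limelight" NetworkTables table; ledMode 3 forces
    the LEDs on and ledMode 1 forces them off.
    """
    table.putNumber("ledMode", 3.0 if on else 1.0)
```

With pynetworktables the table comes from `NetworkTables.getTable("limelight")`; the LED brightness level itself is set per pipeline in the camera’s web interface.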

1 Like

Thanks Al. We find that the potential for harm is the product of two factors: intensity × exposure. Our implementation meets R9m by minimizing the second factor, exposure. We reduce exposure along both the space and time axes: (1) we point the Limelight away from the eyes of field bystanders, and (2) we light the LEDs only while targeting the vision target, using the recommendations made by the helpful folks in this thread.

Is there an acceptable level of light intensity for a robot on an FRC field? If so what is it? And is there a commonly available method of measuring said intensity?

10 Likes

As discussed before, here are the photos I took of our “sunglasses” for the pits.

We used these magnets to attach to the mounting screws at the top of the limelight.

Below is the STL of the sunglasses if anyone else would like to print them.

sunglasses.stl (69.0 KB)

3 Likes

While true that intensity and exposure are both factors, the size of the source also affects how the eye responds to the light source.

Now that the season is over, bump.

9 Likes

Anyone who has done this, could you please PM me?

Some teams use info from the retro targets throughout the entire match. I wonder how that figures into the situation, given how the rules are written.

1 Like

We take this approach. The code is in our Drive command on GitHub.

Basically we enable the LEDs and vision tracking when the driver’s right stick is pushed forward.
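That stick-gated approach might look like this; a hedged sketch rather than the team’s actual code (`drive_periodic` and the 0.5 threshold are illustrative, and most gamepads report forward as negative Y):

```python
def drive_periodic(table, right_stick_y: float) -> None:
    """Run the LEDs and vision processing only while the driver
    pushes the right stick forward; otherwise fall back to the
    driver camera view with the LEDs off."""
    tracking = -right_stick_y > 0.5  # forward is negative Y
    table.putNumber("ledMode", 3.0 if tracking else 1.0)  # on / off
    table.putNumber("camMode", 0.0 if tracking else 1.0)  # vision / driver
```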

Then you need to angle or shield the light so it isn’t likely to shine straight at faces. Obviously that’s “easy for me to say” but that is what you need to do, and how is left as an exercise for the reader.

We implemented an “auto preview” button in our code. It was the first button in a chord of buttons pressed for the different auto modes. It allowed us to validate target acquisition before running a semi-auto hatch mode and worked very well. I’d highly recommend it.

The way we made it work was to create a “human view” pipeline and a “target view” pipeline and switch between them when the button was pressed and released. This worked great because the human-view pipeline was calibrated for humans, while the target view was completely blacked out except for the targets.
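Pipeline switching like that is also just one NetworkTables write. A minimal sketch; the slot numbers and function name are assumptions, while "pipeline" is the Limelight’s standard pipeline-selector key:

```python
HUMAN_PIPELINE = 0   # assumed slot: exposure calibrated for a human view
TARGET_PIPELINE = 1  # assumed slot: thresholded so only targets show

def update_preview(table, preview_button_down: bool) -> None:
    """Hold the preview button to see the targeting pipeline;
    release it to return to the human-view pipeline."""
    slot = TARGET_PIPELINE if preview_button_down else HUMAN_PIPELINE
    table.putNumber("pipeline", float(slot))
```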

In addition, we fashioned a small flexible visor out of cardboard and tape and mounted it above the camera. We mostly did that to exclude light fixtures from its view, but I’m sure it also helped shield the LEDs from bystanders.

Sorry to bump this thread again, but I have a question/suggestion regarding the Limelight. Currently, when in dual cross-hair mode and adjusting cross-hair offsets, a target must be present for you to calibrate the new cross-hair position. I think it would be useful if the user could manually move the cross-hair left and right with a slider, even without a target present. This would help in the pits when a replica target is not readily available, or in other situations where it is inconvenient to wheel out the robot for a calibration procedure. Does something like this already exist that I am unaware of? Thanks. (Also, sorry if this is in slightly the wrong thread; it was just the most popular recent one.)

4 Likes

I spoke with @Hjelstrom at the Detroit Championship about this, and it sounded like it would be pretty high on their list of things to implement before next year.

3 Likes

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.