LIDAR usage in FRC?

I’m interested in whether there is a team that uses the LIDAR sensor featured in the FRC docs?

I’m particularly interested in the 2-dimensional LIDAR; it seems really cool and I would love to hear about the benefits and use cases it provides.

The short answer: as with any LIDAR, the benefit is easy access to depth information.

In general, building a depth map with LIDAR can be much more accurate and less computationally expensive than vision systems (fiducial tags mitigate this).

In the real world, sensor fusion that uses LIDAR to generate depth maps for cameras is common practice; it combines accuracy and detail.

In FRC nowadays I would not try to localize robot position on the field with LIDAR; use AprilTags for that. Where LIDAR may be useful is in game piece acquisition (there are no AprilTags on cones and cubes). With some clever processing you can have autonomous assistance in directing the robot to a ball, milk crate, or tube.
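To make the “clever processing” a bit more concrete, here is a minimal, hypothetical sketch: given one 2D LIDAR scan as angle/distance pairs, find the bearing to the nearest return so the drivetrain can turn toward a game piece. The scan format, range limits, and class name are all assumptions for illustration, not any particular sensor’s API.

```java
// Hypothetical helper: pick the bearing of the closest lidar return so a
// drive command can turn toward a nearby game piece. Scan format is assumed.
public class NearestObjectFinder {
    /**
     * @param anglesDeg  angle of each ray in degrees, robot-relative
     * @param distancesM measured range of each ray in meters
     * @return bearing (degrees) of the closest return within range, or NaN if none
     */
    public static double nearestBearingDeg(double[] anglesDeg, double[] distancesM) {
        final double MIN_RANGE_M = 0.2; // ignore returns off our own bumper
        final double MAX_RANGE_M = 3.0; // ignore far-away walls

        double bestDist = Double.MAX_VALUE;
        double bestAngle = Double.NaN;
        for (int i = 0; i < anglesDeg.length; i++) {
            double d = distancesM[i];
            if (d > MIN_RANGE_M && d < MAX_RANGE_M && d < bestDist) {
                bestDist = d;
                bestAngle = anglesDeg[i];
            }
        }
        return bestAngle;
    }
}
```

The returned bearing could then feed a turn-to-angle controller (e.g. a PID on gyro heading) so the robot rotates toward the piece before driving in.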


I want to say 4678 had a lidar this year (and possibly in years past) for this exact purpose… could be mistaken though.

Perhaps @Commonaught can confirm this, and maybe provide some insight if I’m remembering things correctly.

The new Garmin LidarLite sensor looks pretty cool.

Yes, they also used lidar in 2018 https://www.youtube.com/watch?v=zzXm9PiJVRo

and in 2019 https://www.youtube.com/watch?v=M9FVBJjNXuI

Wisdom of the ancients - test this with polycarb and shiny aluminum. The real field doesn’t always act the same way as your shop’s walls or plywood.


Yeah, we have had issues in the past using all the different versions of the LidarLite. A long, long time ago we wrote some code to localize a robot on a known field using a few different lidar measurements. The problem we ran into is that polycarb tends to act transparent to the lidar, so it would give us very unreliable readings. We also found that there are slight variations between fields, so you can’t just rely on hitting the metal bars on top of the polycarb or the bottom metal brace.
Edit: corrected the point about polycarb being transparent at 905 nm IR.


Not for FRC, but where I use lidars is in IRL robotics. As @Skyehawk mentioned, it’s used for mapping and obstacle avoidance. The video below is of a simple chassis with front and rear 2D lidars in Gazebo and ROS. The red or blue lines on the right represent what the lidars (modeled after the RPLidar A1M8, $99, and what WPILib mentions) would see in a house. There’s a guest appearance by the 2015 recycling bin in one room.

In any other application where you can’t rely on having fiducial tags or markers (AprilTags, vision tape, colored or magnetic tape lines, RFID tags, QR codes, physical rails), you would use “natural navigation”. You create a SLAM (Simultaneous Localization and Mapping) map of the environment, ideally in its resting state (an empty FRC field with objects in their expected starting positions). The robot then traverses the field, building up chunks of the map. The result is an occupancy grid with black or white marks for occupied and free cells. Each cell can be as small as 0.01 m or less depending on the lidar used.
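If the occupancy grid idea is new, here is a toy sketch of one (the resolution, coordinate handling, and class name are made up for illustration; real SLAM packages such as ROS’s slam_toolbox also ray-trace free space, keep per-cell probabilities, and handle loop closure).

```java
// Toy occupancy grid: the field is divided into small square cells and each
// lidar hit marks the cell it lands in as occupied. Illustrative only.
public class OccupancyGrid {
    private final double resolutionM;   // size of one cell, e.g. 0.05 m
    private final boolean[][] occupied; // [col][row], true = obstacle seen here

    public OccupancyGrid(double widthM, double heightM, double resolutionM) {
        this.resolutionM = resolutionM;
        this.occupied = new boolean[(int) Math.ceil(widthM / resolutionM)]
                                   [(int) Math.ceil(heightM / resolutionM)];
    }

    /** Mark the cell containing a lidar hit at field coordinates (x, y). */
    public void markHit(double xM, double yM) {
        int col = (int) (xM / resolutionM);
        int row = (int) (yM / resolutionM);
        if (col >= 0 && col < occupied.length && row >= 0 && row < occupied[0].length) {
            occupied[col][row] = true;
        }
    }

    /** Query whether the cell at field coordinates (x, y) has an obstacle. */
    public boolean isOccupied(double xM, double yM) {
        int col = (int) (xM / resolutionM);
        int row = (int) (yM / resolutionM);
        return col >= 0 && col < occupied.length
            && row >= 0 && row < occupied[0].length
            && occupied[col][row];
    }
}
```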

After you have a good, full map of the space, you save that as your world map. It will always be the world origin, and when you boot up the robot you have to tell it where your initial pose is on the world map. After that, you track your own odometry and report it in real time against the map. But now you have stuff happening around you, and you need a secondary map for real-time local events.

The world map encompasses everywhere you can go. The local planner map is what you can see currently. You might want to tell the robot to go to the other side of the field, behind an obstacle, where it cannot currently see that location. The world map plans a general route using A*, Dijkstra, or whatever other path planner you have set; it knows what the field and its static obstacles should look like. As you start to move, your local planner looks at your current sensor data and sees two robots blocking the path you were going to take. The local planner redirects you around the obstacle that wasn’t known to the world map. The world map planner then says, oh, you’re over there now; here’s a revised route based on the static map.

I oversimplified it, but that was the best way to do field localization for a long time before AprilTags. Now… you could still do all of this, and I bet Marshall and the Zebros group use bits and pieces of that old system still.
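To show the global-planner half of that description in code, here is a small sketch that plans a route over an occupancy grid. It uses plain breadth-first search instead of A* or Dijkstra to stay short, and the grid layout and method names are assumptions for illustration, not part of any real navigation stack.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Sketch of a global planner over an occupancy grid (true = blocked cell).
// A real stack would use A* or Dijkstra with travel costs; BFS keeps it short.
public class GridPlanner {
    /** Returns a list of {col, row} cells from start to goal, or an empty list if unreachable. */
    public static List<int[]> plan(boolean[][] blocked, int[] start, int[] goal) {
        int cols = blocked.length, rows = blocked[0].length;
        int[][] cameFrom = new int[cols * rows][]; // predecessor cell of each visited cell
        boolean[] visited = new boolean[cols * rows];
        ArrayDeque<int[]> frontier = new ArrayDeque<>();
        frontier.add(start);
        visited[start[0] * rows + start[1]] = true;

        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!frontier.isEmpty()) {
            int[] cell = frontier.poll();
            if (cell[0] == goal[0] && cell[1] == goal[1]) {
                // Walk predecessors back to the start to recover the path.
                List<int[]> path = new ArrayList<>();
                for (int[] c = cell; c != null; c = cameFrom[c[0] * rows + c[1]]) {
                    path.add(0, c);
                }
                return path;
            }
            for (int[] m : moves) {
                int c = cell[0] + m[0], r = cell[1] + m[1];
                if (c >= 0 && c < cols && r >= 0 && r < rows
                        && !blocked[c][r] && !visited[c * rows + r]) {
                    visited[c * rows + r] = true;
                    cameFrom[c * rows + r] = cell;
                    frontier.add(new int[]{c, r});
                }
            }
        }
        return new ArrayList<>(); // no route found
    }
}
```

The local-planner step described above would then amount to marking newly seen obstacle cells as blocked and calling plan() again from the robot’s current cell.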

We used lidar in 2018 to pick up cubes in autonomous with considerable success. The straight edges of the cubes made them fairly easy to locate. We also had the same code available in teleop but I’m not sure if the drivers used it.

We used lidar in 2019 to locate balls as well as the scoring locations, with very little success. The balls rolled too fast for the scan rate of our scanner (a 360° lidar made by RPLidar that we sourced from robotshop.ca), and it didn’t have enough resolution to accurately identify the scoring locations (the geometry of the cargo ship was also fairly complicated).

We used lidar in 2023 to help center ourselves on cones (and maybe cubes?) with pretty decent success. The lidar was a fairly late addition and I expect we could have done better with more time to fine tune.

I agree with this:

The one caveat I would add is that when game pieces are pretty much rotationally symmetric (like this year’s “cubes”), use a vision setup like a Limelight to identify the position. There are some cases where I think lidar has potential (namely the tipped-over cones from this year; I worked extensively on a vision program to identify the orientation of tipped-over cones, but it never reached competition-ready success rates, and not having accurate distance data made it very hard). In general, lidar helps when the geometry of the game piece is more complicated than a circle and you can’t effectively determine the middle of the game piece from a bounding box.

I am working on a generalized program to identify the position and orientation of game pieces based on lidar data, and I’ll share it when it’s ready (read: “a very long time”).


Adding a few details to what Commonaught posted … The expensive 360° RPLidar used in 2018 and 2019 was a very interesting project. The main issue with it was the 10 Hz scan rate. Most of the teams having great success with vision and sensing systems aim for something that can capture and analyze data at better than 50 Hz (the standard periodic frequency of the roboRIO).

The lidar used on our 2023 robot consisted of 3 relatively inexpensive lidar sensors (Benewake TF-Luna modules) mounted 4" apart under the front bumper. These are single-point sensors capable of detecting distances from 20 cm to 800 cm, providing readings at up to 250 Hz. Being 4" apart, it was always possible to determine how far a cone or cube was from the front of the robot and whether it was straight ahead, to the left, or to the right. A Limelight camera was used to get the robot aimed at the game piece. When our claw was deployed for cone pickup, it would block the Limelight, so the lidar units were then used to guide the robot into position to grab the cone. For cubes, the claw position was different and we were able to use the Limelight to navigate to the cube. Lidar readings were used later in the season to provide better distance accuracy when picking up cubes.
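As a rough illustration of how three fixed single-point lidars could be turned into a left/center/right hint, here is a hedged sketch. The class, thresholds, and classification logic are assumptions for illustration, not 4678’s actual code, and reading the TF-Luna modules themselves (typically over I2C or UART) is omitted.

```java
// Hypothetical classifier: given three under-bumper distance readings spaced
// across the robot's front, report which way to nudge toward a game piece.
public class TripleLidarCentering {
    public enum Direction { LEFT, CENTERED, RIGHT, NOT_SEEN }

    /**
     * @param leftM   distance from the left sensor, meters
     * @param centerM distance from the center sensor, meters
     * @param rightM  distance from the right sensor, meters
     */
    public static Direction classify(double leftM, double centerM, double rightM) {
        final double PICKUP_RANGE_M = 1.0; // only react to nearby objects (assumed value)
        boolean left = leftM < PICKUP_RANGE_M;
        boolean center = centerM < PICKUP_RANGE_M;
        boolean right = rightM < PICKUP_RANGE_M;

        if (center) return Direction.CENTERED;      // game piece dead ahead
        if (left && !right) return Direction.LEFT;  // piece is off to the left
        if (right && !left) return Direction.RIGHT; // piece is off to the right
        return Direction.NOT_SEEN;                  // nothing close enough
    }
}
```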


Just remembered I should also mention another issue with the materials used on FIRST fields. You would think that the reflective aluminum checker plate sometimes used for walls and field elements would provide a great surface for LIDAR. It turns out it’s worse than polycarb: it can act like a mirror, and readings end up completely confused because the beam can bounce off all kinds of surfaces after hitting the highly reflective but imperfect aluminum.