Sweep Scanning LiDAR

Hey Everyone,

A good friend of mine is a partner in a new company that has designed an affordable scanning LiDAR called “Sweep”. I believe they have done an excellent job on it, and it could have a very positive impact on the FIRST community. They are doing initial sales through Kickstarter, with delivery scheduled for fall of 2016. The Kickstarter went up today, and a good number of early bird specials are still left. Team 2473 has reviewed one of the units and may have more to add about their experiences; there is a positive quote from the team on the Kickstarter page. Some key specs: 40 m range, $250 price, 5 VDC power, USB connection.

This looks really cool and could be useful to the FIRST community, but I have one question: what class of laser is being used? If it is higher than Class 1, that would make them illegal for FRC use. On another note, I am working on an autonomous RC car as a class project when not working with our FRC team, and I think I might have to get one of these to try on the car. Great startup; tell your friends to keep going.

The FAQ section at the bottom says it is Class 1.

Liking the price on that. Hope it compares well in practice to the current cheapest scanning lidar.

This morning I got some more info from Tyson: “Sweep uses a future version of the LiDAR-Lite sensor from Pulsed Light 3D (now Garmin). It has about six times the range of the current cheapest LiDAR and is quieter, lower power, and more reliable in sunlit environments.”

What would be great to see in the future is robot autonomous routines that progress past preset positional movements to actual field and alliance-member awareness. Also, if autonomous gets easier to implement and more teams demonstrate the ability, FIRST may start to give a little more time than 15 seconds for auto.

I’ve been waiting for LiDAR systems to drop in price enough for FRC use. From my experience with HOKUYO and SICK systems, I’m curious how they would handle the clear polycarb and highly reflective diamond plate used on a FIRST field. Filtering out false distances would be very doable (the field geometry is known and mostly straight lines), but too much reflection would absolutely degrade resolution and accuracy, as well as bog down coprocessor time.
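For what it's worth, the false-distance filtering mentioned above can be as simple as gating each return against what is physically plausible on the field. A minimal sketch in Python; the field-diagonal and minimum-range constants here are assumed values for illustration, not anything from the Sweep spec:

```python
# Hypothetical sketch: gate out implausible LiDAR returns using known field
# geometry. The constants below are assumptions, not real specs.
FIELD_DIAGONAL_M = 16.5   # assumed max plausible distance on an FRC field
MIN_RANGE_M = 0.1         # returns below this are usually sensor noise

def filter_scan(scan):
    """scan: list of (angle_deg, distance_m) tuples from one rotation.
    Keeps only returns whose distance is physically plausible."""
    return [(a, d) for (a, d) in scan
            if MIN_RANGE_M <= d <= FIELD_DIAGONAL_M]

raw = [(0.0, 3.2), (1.0, 0.0), (2.0, 42.0), (3.0, 5.1)]
print(filter_scan(raw))  # drops the 0.0 (no return) and 42.0 (reflection)
```

A reflection off diamond plate typically shows up as a range far longer than anything the field allows, which is why a simple gate like this catches most of them.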

Has the team that used one of these tested it on a field with official FRC materials? Any data or insight they could share regarding the potential reflection and clear-surface issues would be great to learn from.

I think we’re going to want a better localization method. LiDARs are great, but with so much moving on the field, SLAM with LiDAR alone is going to be problematic.

Maybe FIRST could publish some official offboard point-cloud processing tutorials so newer programmers can get into using Kinects and the like?

Yeah, this scanner can be really useful. I’ve written up an interview with co-founder Kent Williams about the Sweep LiDAR Scanner at Tech Stuffed. I hope their Kickstarter goes well!

Will this system have an issue with the rules about using a non-FIRST-approved motor and about actuators not being controlled by the roboRIO? Seems cool otherwise!

Kent from Scanse here:

@themagic8ball - We haven’t heard of this being an issue, but we’re looking into it. As for the roboRIO, we are developing a driver, so the unit will definitely be controllable from it.

Happy to answer any more questions!

We were doing a demo at Google last week and mentioned more intelligent vision. We ran through a number of ideas until we decided to do some work with LiDAR. One of the guys, who had worked on self-driving cars, said that having two LiDARs sweeping up and down like windshield wipers would be necessary to obtain a usable refresh rate.

GOFIRST ordered a couple to experiment with on our VEX U and Ri3D robots. It looks like a great product from a great group of people; we look forward to having them in hand!

Looking at their Kickstarter, I see that Team 2473 used it. I am curious what exactly they used it for. To me, this sensor seems great for obstacle avoidance in an unknown area, but I am not sure what it could do for you in a small, defined area.

If you had one for your Stronghold robot, what would you use it for exactly?

Obviously I have never used this, so I can only speculate. I see it as an alternative to a camera, with data that is easier for a computer to process. I could imagine it being used to find and center on the batter sections for auto-targeted shots. It could be a viable alternative to vision processing that can’t be fooled by changing lighting conditions. It would also give a more accurate distance estimate that doesn’t require being lined up square with the target (I’m sure some teams have algorithms that don’t require that, but the basic one does). I’m not necessarily saying it’s worth the cost or the development time, but it has applications.
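To illustrate the speculation above, here is a hypothetical sketch of using a scan to center on a nearby target: pick the closest valid return in a forward-facing window and treat its bearing as the steering error. The window width, angle convention, and sample scan are all assumptions made up for the example:

```python
# Hypothetical "center on the target" sketch: the nearest return inside a
# forward window gives both the steering error and the range to the target.
def aim_error_deg(scan, window_deg=30.0):
    """scan: list of (angle_deg, distance_m), angles in [-180, 180),
    0 deg = straight ahead. Returns (bearing_deg, distance_m) of the
    nearest valid return in the window, or None if nothing is seen."""
    candidates = [(d, a) for (a, d) in scan
                  if abs(a) <= window_deg / 2 and d > 0]
    if not candidates:
        return None
    d, a = min(candidates)
    return a, d  # turn by -a to center; use d as the shot distance

scan = [(-20.0, 4.0), (-2.0, 2.5), (5.0, 3.0), (90.0, 1.0)]
print(aim_error_deg(scan))  # → (-2.0, 2.5): turn 2 deg, target 2.5 m away
```

Because the range comes straight from the sensor rather than from target geometry in an image, it stays valid even when the robot is not square to the target, which is the point made above.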

Tyson from Scanse here: @Ari423 - LiDAR has a couple of big advantages over cameras when trying to determine where the robot is in a space. For one, it has a 360-degree field of view. Of course the robot will block some of this, but being able to see most of the field all at once allows you to keep track of landmarks while the robot moves and turns. Other advantages are immunity to lighting conditions and lower processing overhead. We are still waiting on test data from teams regarding how well our sensor works in the FIRST environment, with its clear polycarbonate and mirror-finish diamond plate walls. Both surfaces can be hard to see clearly with LiDAR, so teams will likely want to use other landmarks in the arena for determining the robot’s location.

This year we have an XV11 LiDAR unit on our robot.

One of the largest issues with doing any sort of localization on the FRC field is the polycarb walls. At least in our experience, they are completely invisible to most LiDAR units, because polycarbonate is very transparent to wavelengths in and near the visible spectrum. Our unit operated in the high 700 nm range, which put it just outside the visible spectrum but still close enough to pass right through the polycarb. We didn’t observe many issues with the diamond plate as long as we did some simple filtering to get rid of ridiculous values. The tower works well as a defining feature, making common localization algorithms such as AMCL work quite well.
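As one illustration of the kind of "simple filtering to get rid of ridiculous values" described above, a sliding median over neighboring beams is a common approach: it suppresses single-beam glints off the diamond plate without smearing real structure. This is only a sketch, not the team's actual code, and the window size is an arbitrary assumption:

```python
# Hypothetical sketch of simple glint filtering: a sliding median over
# consecutive beams. A lone wild value gets replaced by its neighborhood
# median; real multi-beam structure survives.
from statistics import median

def median_filter(ranges, window=3):
    """ranges: distances (m) from consecutive beams of one scan."""
    half = window // 2
    out = []
    for i in range(len(ranges)):
        lo, hi = max(0, i - half), min(len(ranges), i + half + 1)
        out.append(median(ranges[lo:hi]))
    return out

print(median_filter([2.0, 2.1, 9.9, 2.2, 2.3]))  # the lone 9.9 glint is gone
```

The trade-off is a slight blurring at real range discontinuities, which usually matters less on an FRC field than the spurious spikes do.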

Unfortunately, we ran into some issues with the housings holding our LiDAR units interfering with the rough terrain, so we had to move them to new locations that don’t have as broad a viewing angle, which makes localization difficult.

We are, however, still using one of the units to identify and align with boulders on the field, which helps when there isn’t a clear line of sight to a ball we are trying to grab. The LiDAR data is much more useful than cameras and CV for identifying balls and transforming a ball’s location into a usable coordinate frame.
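The coordinate-frame transform described above boils down to basic trigonometry once the robot's pose is known. A hypothetical sketch, where the function names and the angle convention (0 deg along the robot's +x axis, counterclockwise positive) are assumptions for illustration, not the team's actual code:

```python
# Hypothetical sketch: project a LiDAR return (bearing, range) into field
# coordinates given the robot's pose. Angle convention is an assumption.
import math

def ball_to_field(angle_deg, dist_m, robot_x, robot_y, robot_heading_deg):
    """Returns (x, y) field coordinates of a detected ball."""
    theta = math.radians(robot_heading_deg + angle_deg)
    return (robot_x + dist_m * math.cos(theta),
            robot_y + dist_m * math.sin(theta))

# Robot at (1, 2) facing 90 deg; ball seen 2 m away, dead ahead:
x, y = ball_to_field(0.0, 2.0, 1.0, 2.0, 90.0)
print(round(x, 3), round(y, 3))  # → 1.0 4.0
```

Once the ball is in the field frame, the drive code can chase the same point even after the ball leaves the sensor's view for a moment.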

Appreciate the feedback on these units. If this season’s test cases go well, we’ll look into getting a unit or two so the students can start learning a bit of localization with more real-world tools.

There is a 1.5in angle aluminum at the base of the polycarb walls. Just sayin’.