Actual RPLIDAR A2 Experience and Data

Now that the offseason is almost here, I am casting my net for projects to help my team in 2018.

This thread has definitely caught my attention: Slamtec RPLIDAR A2 Now Available for FRC Teams

So…

For teams that used the RPLIDAR A2, how’d it work? Can you share your experience with the CD community?

How about raw data from the sensor taken on a real FIRST field? I am very nervous about all the reflective and transparent surfaces that are on a FIRST field. Can it see the PC wall behind the pegs? Does it get confused by the diamond plate metal walls near the driver’s station?

Did you process the data on the RoboRIO or did you use a coprocessor? If you used a coprocessor which one? Any thought about processing the info with a Jetson TX1?

Love to hear what you have to say.

Please share.

Thanks,
Dr. Joe J.

I don’t know of anyone other than us who used it. We did not use it on a robot this season. We did do some actual field testing with it and should be able to provide some insight. I have to defer to Kevin, one of the other 900 mentors, on the specifics of what we found but I can say that it was kinda hit/miss.

Of note, I spoke with Kevin O’Connor in Houston about the sides of the field being transparent and there is a possibility of adding a vinyl border to help with this but it really needs to be tested to know if that would help.

Also of note, solid state lidar in the FRC price range is coming within the next year so I’m curious to see what that brings with it.

+1, I am also curious about teams’ experiences with this.

All of the software components needed to build a full SLAM solution for FRC are available and they have worked well for me in the past (albeit with more expensive sensors). It’s just a matter of figuring out the interfacing, compute (Jetson?), and whether the quality of the sensor/data in an FRC environment makes this project worthwhile right now.

I think that the day will come when an FRC team can strap a LIDAR to their robot, spend a practice match driving around building a map of the field, and then exploit that map for pinpoint drift-free localization during the match.

I can tell you that we are going down the ROS road to do exactly this. ROS is not 100% ideal for FRC, but it can be made good enough to do some truly interesting things, and the software libraries are hard to ignore.

One can dream.

Edit: Why don’t you? :stuck_out_tongue:

Is there interest in forming a FIRST FRC Jetson/RPLIDAR users group to work out a package that can do useful FIRSTy-type things?

Maybe we could get the WPI folks to help out a bit?

Anyone?

Dr. Joe J.

Kauai Labs is stepping into the fray with the open-source SF2 initiative (sf2.kauailabs.com). Framework development is underway, with IMU odometry next up. The rocket-science parts are already available open source (especially in ROS), but there’s lots of systems engineering to do. After IMU odometry, the next step is a bridge from pose on the RoboRIO to ROS for SLAM. If it makes sense to merge that into WPI we are open to discussing that…

One of WPI’s VexU teams this year (I guess last year now) was fully autonomous using a LIDAR from a Neato XV11 vacuum and a Raspberry Pi running ROS.

They just did a code release over on the Vex forum: https://www.vexforum.com/index.php/27203-wpi1-reveal

I love the Neato LIDAR, but I worry that the motor doesn’t obviously fall under rule R32:

Motors integral to a COTS sensor (e.g. LIDAR, scanning sonar, etc.), provided the device is not modified except to facilitate mounting

The motor on a Neato LIDAR is “integral” in that you buy them together, but it isn’t all that integral in that you have to supply power to the motor separately from the sensor.

I would probably allow it if I were king, but I’m not king, so it would feel better if we could get a semi-official ruling that using it passes muster before heading down the path of developing cool FRC libraries.

Anyone use the Neato this year and get it approved for use on an FRC robot?

Dr. Joe J.

We’re using the RPLIDAR Development Version (slightly different to the A2, but still fairly similar) on a robot for an Autonomous Robotics Competition in Australia (NI ARC).

Per its protocol specification, it sends back distance and angle (i.e. in polar form, which you can convert to Cartesian with x = dist·sin(θ), y = dist·cos(θ)). The unit communicates over USB serial at 115200 baud.
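
As a minimal sketch of that conversion in Python (the forward-axis angle convention and millimeter units here are assumptions; check the protocol document for your firmware):

```python
import math

def polar_to_cartesian(distance_mm, angle_deg):
    """Convert one LIDAR measurement to x/y.

    Assumes the angle is measured clockwise from the sensor's forward (y)
    axis and the distance is in millimeters -- verify both against the
    protocol spec for your firmware before trusting the signs.
    """
    theta = math.radians(angle_deg)
    x = distance_mm * math.sin(theta)   # right of the sensor
    y = distance_mm * math.cos(theta)   # ahead of the sensor
    return x, y
```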

We’re going to be using it with the myRIO (the roboRIO’s little brother), as per the competition requirements. We’re looking at processing as much data as possible in the FPGA to overcome the USB latency / buffer issues.

For object and collision detection, we’re looking at inflating each datapoint into a sphere (let’s call it a blob), and then using those to build an almost “bumpy” map of the environment. If you want to go one step further, you can find the intersections of these blobs and tessellate them into a flat surface. Obviously there are better ways around this, like using actual proper SLAM techniques and algorithms and whatnot, but this seems to fill our requirements so far and can be done fully in the FPGA. I’ll admit SLAM isn’t something I’m very well versed in.
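
To make the blob idea concrete, here is a rough sketch (in plain Python rather than FPGA code, and the radii are placeholder values) of inflating each scan point into a circle and checking whether a candidate robot position overlaps any of them:

```python
import math

ROBOT_RADIUS_M = 0.45   # placeholder: roughly half the robot footprint
BLOB_RADIUS_M = 0.10    # placeholder: how far each LIDAR hit is inflated

def build_blobs(scan_points_m):
    """scan_points_m: list of (x, y) LIDAR hits in meters, robot-relative."""
    return [(x, y, BLOB_RADIUS_M) for (x, y) in scan_points_m]

def collides(blobs, px, py):
    """True if a robot centered at (px, py) would overlap any inflated hit."""
    return any(math.hypot(px - bx, py - by) < r + ROBOT_RADIUS_M
               for (bx, by, r) in blobs)
```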

As for transparent and retroreflective materials, I can’t say anything for sure. The LIDAR unit is sitting on my desk at home and I don’t fly back until tonight so I might take a look sometime in the future if I can hunt down some RR tape and polycarb.

Are you planning to do any correction for your robot’s spin rate? The LIDAR spinning at 10Hz seems super fast but I would think that a robot can turn at rates sufficient to do a number on your map.

I am interested in your progress. Please keep us informed.

Dr. Joe J.

I believe the protocol sends data as the device rotates (i.e. does not buffer then send them all at once), which means offsetting polar angles of the LIDAR with the Gyroscope heading during initial processing should account for this. As your angular rotation increases, you may see LIDAR accuracy decrease in some regions and increase in others (i.e. the points ‘gather’ in one angle range), which is part of the reason we’re popping datapoints into blobs.
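
A rough sketch of that correction, assuming you can look up (or interpolate) a gyro heading for each LIDAR sample’s timestamp -- the heading_at callable here is hypothetical:

```python
import math

def deskew_scan(samples, heading_at):
    """Rotate each LIDAR sample into a gyro-fixed frame.

    samples    : list of (timestamp_s, distance_m, angle_rad) tuples, with
                 angle_rad measured counterclockwise in the sensor frame.
    heading_at : hypothetical callable returning the gyro heading (rad) at a
                 given timestamp, e.g. by interpolating logged gyro readings.
    Returns a list of (x, y) points expressed in the gyro's reference frame.
    """
    points = []
    for (t, dist, angle) in samples:
        corrected = angle + heading_at(t)   # offset sensor angle by robot heading
        points.append((dist * math.cos(corrected),
                       dist * math.sin(corrected)))
    return points
```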

I’ll dig a little deeper when I get back to Perth and catch up on missed work.

There is lots to cover here. We’ll roll this into a whitepaper once we’ve caught up on sleep but until then sorry about the stream of consciousness approach.

For testing, our vision team hacked up a mount for the LIDAR. We affectionately called it our 2018 robot : https://www.chiefdelphi.com/forums/showthread.php?t=156204&highlight=900+reveal. The main thing we focused on was getting the LIDAR at 19 or so inches high. This would give the best chance of seeing the metal parts of the field perimeter. Even so, any small deviation from perfectly level would mean we were probably hitting plastic instead…

I got to drive this around the field on a cart during calibration at one of our events (the students were doing more important things like making sure our actual robot connected to the field).

Are Pop Tarts and Dr. Pepper valid robot energy storage devices?

All we gathered from that was raw LIDAR data : https://drive.google.com/open?id=0B8hPVHrmVeDgUWlHWGxUSzVFSUU. We did some basic post-processing using the ROS implementation of Hector Mapping / SLAM (http://wiki.ros.org/hector_mapping) since that was a quick and dirty way to see what sort of data we would get. Hector SLAM is nice because it relies only on LIDAR data. That’s good, since that’s all we had. Eventually we’ll combine it with fused IMU data, but for now I just wanted to see if the data was horribly broken.

Given that, it looks sorta reasonable :

In this first image you can clearly see the driver station wall and a bit of the airship. You can also make out some students sitting with their robot near the side peg. The longer field walls do come through, kind of. It looks like there are enough hits to conclude there’s a straight wall there, but at the same time we’re also occasionally getting hits from outside the field. I think this means that, depending on where we are, the LIDAR sometimes misses the edge of the field. There’s also some weirdness stitching the field perimeter together, but that might be more a reconstruction problem than something inherently wrong with the data.

Subjective impressions : this LIDAR has good resolution. I ran it in the stands as a demo, and you could see people’s arms moving separately from their bodies as they walked by. Same with the data I recorded from the cart - my legs are several pixels wide and you can see them moving individually behind the cart. That’s impressive. Also, depending on position it could see the full width of the field … but probably not the full length of it.

The reconstructed maps aren’t great. I wish we had put a navX on the cart along with the LIDAR to get fused IMU data correlated to the LIDAR. Still, given that we didn’t even know if we could see the shorter sides of the field I’m pretty optimistic with these results.

Here’s the launch file I used after installing the ROS hector_slam package:

```
<?xml version="1.0"?>

<launch>
  <!-- Transform args: x y z yaw pitch roll parent_frame child_frame -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="laser" args="0 0 .19625 0 0 0 base_link laser" />

<node pkg="rosbag" type="play" name="player" output="screen" args="--clock /home/kjaget/RPLidar_2017-03-10-11-19-25_0.bag"/>

  <node name="rviz" pkg="rviz" type="rviz" args="-d $(find rplidar_ros)/rviz/rplidar.rviz" />

  <node pkg="hector_mapping" type="hector_mapping" name="hector_mapping" output="screen">
    <param name="map_frame" value="map" />
    <param name="base_frame" value="base_link" />
    <param name="odom_frame" value="base_link"/>
    <param name="output_timing" value="false"/>

    <param name="use_tf_scan_transformation" value="true"/>
    <param name="use_tf_pose_start_estimate" value="false"/>
    <param name="scan_topic" value="scan"/>

    <!-- Map size / start point -->
    <param name="map_resolution" value="0.025"/>
    <param name="map_size" value="2048"/>
    <param name="map_start_x" value="0.5"/>
    <param name="map_start_y" value="0.5" />

    <!-- Map update parameters -->
    <param name="update_factor_free" value="0.4"/>
    <param name="update_factor_occupied" value="0.9" />
    <param name="map_update_distance_thresh" value="0.4"/>
    <param name="map_update_angle_thresh" value="0.06" />

    <param name="laser_max_dist" value="9.5" />
  </node>

  <arg name="disable_poseupdate" default="false" />
  <group if="$(arg disable_poseupdate)">
    <param name="hector_mapping/pub_map_odom_transform" value="true"/>
    <remap from="poseupdate" to="poseupdate_disabled"/>
  </group>
  <group unless="$(arg disable_poseupdate)">
    <param name="hector_mapping/pub_map_odom_transform" value="false"/>
    <node pkg="tf" type="static_transform_publisher" name="map_nav_broadcaster" args="0 0 0 0 0 0 map nav 100"/>
    <node pkg="tf" type="static_transform_publisher" name="nav__baselink_broadcaster" args="0 0 0 0 0 0 nav base_link 100"/>
  </group>

</launch>
```

![better_photo_1024.jpg|690x500](upload://w21mo25OjuAw7Io1pnBLrCknceA.jpeg)
![Field_Map_1.png|690x500](upload://7ngwZ60tI0l2qB9KFqQeREcwZEJ.png)
![Field_Mapping_2.png|690x500](upload://uBYkSrT4lBIMoiRtT86HnGNIUPX.png.png)

It’ll be up to the robo-mom at your event.

So much to digest.

But the first question that came to my mind: why not put the LIDAR close to the ground (~1", say) rather than at 19"? Seems like bouncing lasers off the metal field border at the base is an easier shot than trying to hit that railing.

Just curious.

Dr. Joe J.

Good question!

It’s actually about the same amount of vertical target to hit either way (though it does depend on the field being used - the AM field and the other field are slightly different).

For our testing it would have been harder to mount so we went up instead of down. On a real robot, you could go either way depending on the field setup.

I think there might be a problem with this location due to interference from game pieces on the floor, such as fuel covering the floor this year.

Good point. It depends on the year, I guess. In Aerial Assist the floor seems better to me, but perhaps this year the Fuel would have messed up the SLAM algorithms.

If you want to do robot avoidance (or robot interference) 19" seems better.

Best case scenario, the LIDAR can see the PC sheets.

Can’t wait to find out.

On a related topic: the specs for the RPLIDAR are not that much better than the Neato LIDAR’s, and I like the cost of the Neato a lot better (granted, I have the ability to 3D print a nice housing, so that might not be a net win for a lot of teams).

What is the consensus regarding powering the rotation motor on the Neato as if it were a circuit rather than as a motor (i.e. not needing to power it via a motor controller)? (See the table under R32.) If I have to give up a motor controller slot on the PDP, and if I can’t actually spin up the LIDAR until the robot is enabled, I would argue that the RPLIDAR is a big step up. But if not, then what do folks think of using the Neato LIDAR for this offseason project?

Dr. Joe J.

My initial reaction : fuel is 5" in diameter, and pretty much everywhere in our matches since we hit the hoppers in auto and then keep on doing so. Plus I’d be a bit worried about reflections from the floor if the playing field isn’t perfectly level … it doesn’t take much to drop 1" over 10m.

But it is yet another thing to experiment with. Being on the bleeding edge is fun!

Oh, did I mention that the railing is different on each of the 2 types of fields…

The Neato sensor can be powered via a USB port from a COTS circuit and I personally feel that it doesn’t violate any rules. I’m on a phone right now and can’t find the link but the circuit is like $30. Trust me when I tell you that there is a world of difference between the two sensors and this one is more robust and a lot more reliable.

Getting the Neato sensor to talk to anything is a bit of a hack. It’s not exactly an open spec and has been reverse engineered to make it work.
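
For anyone curious what that hack looks like: the community-documented (reverse-engineered) format is a stream of 22-byte packets at 115200 baud, each starting with 0xFA and carrying four one-degree readings. Here is a minimal Python parser sketch based on that description -- treat the byte layout as an assumption and verify it against your own unit:

```python
def parse_neato_packet(pkt):
    """Parse one 22-byte Neato XV-11 LIDAR packet.

    Byte layout (per the XV-11 hacking community's reverse engineering;
    verify against your unit):
      0      : 0xFA start byte
      1      : index 0xA0-0xF9; packet covers degrees (index - 0xA0) * 4 .. +3
      2-3    : motor speed, little-endian, in 1/64 RPM
      4-19   : four 4-byte readings (14-bit distance in mm, flags, strength)
      20-21  : checksum (not verified in this sketch)
    Returns (base_angle_deg, rpm, readings) where readings holds a distance
    in mm, or None for samples flagged invalid.
    """
    if len(pkt) != 22 or pkt[0] != 0xFA:
        return None
    base_angle = (pkt[1] - 0xA0) * 4
    rpm = (pkt[2] | (pkt[3] << 8)) / 64.0
    readings = []
    for i in range(4):
        b0, b1 = pkt[4 + 4 * i], pkt[5 + 4 * i]
        invalid = bool(b1 & 0x80)            # bit 7: invalid-data flag
        distance = ((b1 & 0x3F) << 8) | b0   # 14-bit distance in millimeters
        readings.append(None if invalid else distance)
    return base_angle, rpm, readings
```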