
View Full Version : Autonomous Robotic Mapping


mreda
13-05-2016, 14:24
Does anyone know of any way to create an accurate mapping system for an autonomous robot working in a small outdoor area? I was thinking GPS, but that can be off by 3 meters. Then I was thinking of some sort of triangulation localization technique, but I figured I'd put it out there to see if anyone has any ideas.

Thanks,
mreda

jtrv
13-05-2016, 15:00
Perhaps this might be of interest to you https://github.com/JacisNonsense/Pathfinder

mreda
13-05-2016, 15:04
Perhaps this might be of interest to you https://github.com/JacisNonsense/Pathfinder

This can be added to a robot? I didn't see anywhere I could purchase one, but it does look promising.

RyanCahoon
13-05-2016, 16:24
Does anyone know of any way to create an accurate mapping system for an autonomous robot working in a small outdoor area? I was thinking GPS, but that can be off by 3 meters. Then I was thinking of some sort of triangulation localization technique, but I figured I'd put it out there to see if anyone has any ideas.

Are you asking about mapping (exploring an area and cataloging features and their positions) or are you talking about localization (finding the position of something within a known area)?

Some variables to consider:
- You said that 3m is too much error, but what sort of resolution are you looking for?
- How large of an area are you dealing with? 10s of meters, 100s of meters, 1000s of meters?
- Is it a fixed workspace (i.e. you can set up beacons) or are you moving into an unknown territory?
- What's the line-of-sight visibility like? Is it a fairly open space, or are there frequently objects which can occlude the robot's sensors? If there are occluders, are they large and metallic (i.e. would block radio frequencies)?
- How much money do you want to spend?


If it's an unknown space, you're going to need something like SLAM (https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping). There are many prebuilt libraries available for this (check out some of the ones that are available through ROS (https://www.google.com/search?q=slam%20site%3Aros.org) for example).

If you have a workspace that you control, here are a few solutions for various settings of the above parameters:
- Use a single camera and use the position and apparent size of a target(s) to determine 3D pose. This is how most FIRST vision systems work, and how the Oculus Rift tracking camera works. For a fairly out-of-the-box solution, take a look at some of the fiducial tracking libraries, such as AprilTags (https://april.eecs.umich.edu/wiki/AprilTags).
- Use several different cameras and track a target(s) and triangulate the pose. This is how motion capture cameras work.
- Use a Microsoft Kinect or other RGBD camera.
- Use one of the magnetic position-sensing systems from Sixense (http://sixense.com/razerhydra).
- Use one of the UWB (Ultra-wideband radio-frequency) localization systems. I have one of the kits from Decawave (http://www.decawave.com/) sitting on my desk waiting for when I have enough time to play with it more.
- Use a differential GPS (https://en.wikipedia.org/wiki/Differential_GPS) system.
- Place a whole bunch of RFID tags with unique IDs around the space and use an RFID reader to scan the nearest tag to determine your position.
- Mount several string pots (https://en.wikipedia.org/wiki/String_potentiometer) to your robot and connect the ends to various points around the space and trilaterate your position. As long as your workspace is very open, you have an instant, high accuracy estimate of your robot's position with very little processing required.
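The string-pot and UWB options both come down to trilateration from known anchor points. As a rough sketch (assuming three non-collinear 2D anchors and reasonably clean distance measurements; names are mine, not from any library), the position falls out of a small linear system:

```python
import math

def trilaterate_2d(anchors, distances):
    """Solve for (x, y) from distances to three known 2D anchor points.

    Subtracting the first anchor's circle equation from the other two
    cancels the quadratic terms, leaving a 2x2 linear system in (x, y).
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = distances
    # Linear system A [x, y]^T = b, derived from the circle equations.
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        raise ValueError("anchors are collinear; position is ambiguous")
    return ((b1 * a22 - b2 * a12) / det,
            (a11 * b2 - a21 * b1) / det)
```

This is the exact closed form for three anchors; with more anchors (or noisy ranges) you'd solve the same overdetermined system with least squares instead.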

Jaci
13-05-2016, 22:12
This can be added to a robot? I didn't see anywhere I could purchase one, but it does look promising.

It's a piece of code that runs on the robot so the robot can 'plan' its movements. Using encoders on the wheels and a gyroscope, you can plot a path using Pathfinder and the code will manipulate the motor outputs in an attempt to follow that path. With a bit of clever code, you could even use something like the NavX (or even still an encoder / gyro combination) to derive your position relative to the origin your robot started at.
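The position derivation Jaci mentions is plain dead reckoning: each loop, integrate the encoder distance along the gyro heading. A minimal sketch (assuming heading in radians and distance in meters per update; a real robot would sample at a fixed rate and watch for wheel slip and gyro drift):

```python
import math

class DeadReckoner:
    """Track (x, y) relative to the starting point from encoder + gyro data."""

    def __init__(self):
        self.x = 0.0  # meters, relative to where the robot started
        self.y = 0.0

    def update(self, distance, heading):
        """Integrate one step: distance from encoders (m), heading from gyro (rad)."""
        self.x += distance * math.cos(heading)
        self.y += distance * math.sin(heading)
        return self.x, self.y
```

For example, driving 1 m at heading 0 and then 1 m at heading pi/2 leaves the estimate at (1, 1). The error grows without bound, which is why you'd periodically correct it against an absolute reference like beacons or GPS.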

mreda
13-05-2016, 23:50
Are you asking about mapping (exploring an area and cataloging features and their positions) or are you talking about localization (finding the position of something within a known area)?

Some variables to consider:
- You said that 3m is too much error, but what sort of resolution are you looking for?
- How large of an area are you dealing with? 10s of meters, 100s of meters, 1000s of meters?
- Is it a fixed workspace (i.e. you can set up beacons) or are you moving into an unknown territory?
- What's the line-of-sight visibility like? Is it a fairly open space, or are there frequently objects which can occlude the robot's sensors? If there are occluders, are they large and metallic (i.e. would block radio frequencies)?
- How much money do you want to spend?


If it's an unknown space, you're going to need something like SLAM (https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping). There are many prebuilt libraries available for this (check out some of the ones that are available through ROS (https://www.google.com/search?q=slam%20site%3Aros.org) for example).

If you have a workspace that you control, here are a few solutions for various settings of the above parameters:
- Use a single camera and use the position and apparent size of a target(s) to determine 3D pose. This is how most FIRST vision systems work, and how the Oculus Rift tracking camera works. For a fairly out-of-the-box solution, take a look at some of the fiducial tracking libraries, such as AprilTags (https://april.eecs.umich.edu/wiki/AprilTags).
- Use several different cameras and track a target(s) and triangulate the pose. This is how motion capture cameras work.
- Use a Microsoft Kinect or other RGBD camera.
- Use one of the magnetic position-sensing systems from Sixense (http://sixense.com/razerhydra).
- Use one of the UWB (Ultra-wideband radio-frequency) localization systems. I have one of the kits from Decawave (http://www.decawave.com/) sitting on my desk waiting for when I have enough time to play with it more.
- Use a differential GPS (https://en.wikipedia.org/wiki/Differential_GPS) system.
- Place a whole bunch of RFID tags with unique IDs around the space and use an RFID reader to scan the nearest tag to determine your position.
- Mount several string pots (https://en.wikipedia.org/wiki/String_potentiometer) to your robot and connect the ends to various points around the space and trilaterate your position. As long as your workspace is very open, you have an instant, high accuracy estimate of your robot's position with very little processing required.

Thank you, these look pretty good. To answer your questions, it will be a known area with little to no variation. I was thinking something with beacons, sort of like the UWB setup you were talking about. I wasn't sure how cheap/readily available such a system is.

Thanks

RyanCahoon
14-05-2016, 00:30
the UWB setup you were talking about. I wasn't sure how cheap/readily available such a system is.

There are definitely more expensive (and correspondingly better performing) systems out there, but the reason why I got the Decawave system was because it's available on Digikey for under $1000. The TREK1000 (http://www.digikey.com/product-detail/en/decawave-limited/TREK1000/1479-1003-ND/5125656) kit appears to do localization as one of its featured example use cases (http://www.decawave.com/products/trek1000). (They only had the EVK1000 when I bought mine, so I have to get a couple of them to do full positioning.) The limited testing I've done so far suggests that they perform well within the advertised 10cm accuracy.

mreda
15-05-2016, 23:05
There are definitely more expensive (and correspondingly better performing) systems out there, but the reason why I got the Decawave system was because it's available on Digikey for under $1000. The TREK1000 (http://www.digikey.com/product-detail/en/decawave-limited/TREK1000/1479-1003-ND/5125656) kit appears to do localization as one of its featured example use cases (http://www.decawave.com/products/trek1000). (They only had the EVK1000 when I bought mine, so I have to get a couple of them to do full positioning.) The limited testing I've done so far suggests that they perform well within the advertised 10cm accuracy.

So if I wanted to record a course and play it back by any means, the only accurate way would be through extremely expensive localization sensors? I saw something out there about a GPS + landmark system for working in a predetermined area. Would you happen to know anything about that?

AustinSchuh
16-05-2016, 00:49
So if I wanted to record a course and play it back by any means, the only accurate way would be through extremely expensive localization sensors? I saw something out there about a GPS + landmark system for working in a predetermined area. Would you happen to know anything about that?

You may be thinking about differential GPS. For that to work, you need to be in an environment with little multipath. That means outside with a good view of the sky. Good GPS receivers are even more expensive than what was listed already. (You'll want to look into DRTK, which probably means a $6k+ receiver, and you'll need one for the bot and one for the base station.) You should be able to make cheaper receivers work, but that'll require you to solve the GPS solution yourself from the raw pseudoranges, and spend a lot of work getting the accuracy up.

Another method I've heard used is to point a camera at the ceiling and use identifiable features on the ceiling to localize. QR codes on the ceiling would make that even easier.
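To illustrate the ceiling-camera idea, here's a deliberately simplified sketch. It assumes a pinhole camera pointing straight up and aligned with the world axes (no rotation correction), a flat ceiling a known height above the camera, and hypothetical intrinsics fx, fy, cx, cy; a real implementation would get those from a camera calibration step and handle the robot's heading:

```python
def localize_from_ceiling_feature(u, v, feature_world, height, fx, fy, cx, cy):
    """Estimate robot (x, y) from the pixel position of a known ceiling feature.

    (u, v) is the feature's pixel location; feature_world is its surveyed
    (x, y) position on the ceiling; height is the ceiling's distance above
    the camera in meters; fx, fy, cx, cy are pinhole intrinsics.
    """
    # Back-project the pixel ray onto the ceiling plane: the feature's
    # offset from the camera, in meters.
    dx = (u - cx) * height / fx
    dy = (v - cy) * height / fy
    fwx, fwy = feature_world
    # The camera sits at the feature's world position minus that offset.
    return fwx - dx, fwy - dy
```

With QR codes or fiducial tags as the "identifiable features," each decoded tag ID looks up its surveyed position, and every tag in view gives you an independent position fix.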

mreda
16-05-2016, 18:15
You may be thinking about differential GPS. For that to work, you need to be in an environment with little multipath. That means outside with a good view of the sky. Good GPS receivers are even more expensive than what was listed already. (You'll want to look into DRTK, which probably means a $6k+ receiver, and you'll need one for the bot and one for the base station.) You should be able to make cheaper receivers work, but that'll require you to solve the GPS solution yourself from the raw pseudoranges, and spend a lot of work getting the accuracy up.

Another method I've heard used is to point a camera at the ceiling and use identifiable features on the ceiling to localize. QR codes on the ceiling would make that even easier.

This is for outside. Is there a similar method for outdoor use? I know how it would/should work, but actually making this work is what is throwing me for a loop.

AustinSchuh
16-05-2016, 18:43
This is for outside. Is there a similar method for outdoor use? I know how it would/should work, but actually making this work is what is throwing me for a loop.

DRTK works only outdoors. If you have $ to throw at the problem, contact Novatel and they can tell you all the parts to buy.

mreda
16-05-2016, 18:50
DRTK works only outdoors. If you have $ to throw at the problem, contact Novatel and they can tell you all the parts to buy.

And there is the problem: I don't have that much money to throw at it. My budget for this part of the project is rather small. If there is an outdoor version of the ceiling camera idea, that might work.

Alan Anderson
16-05-2016, 19:29
If there is an outdoor version of the ceiling camera idea, that might work.

Does your outdoor location have a ceiling? :confused:

I think your best bet might be to place visible beacons in known locations around the area, with a 360 degree camera view tracking the direction to each beacon in order to do inverse triangulation.
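The bearing-only idea above can be sketched as intersecting the sight lines to two beacons. This assumes the bearings are already in the world frame (camera bearing plus a gyro/compass heading), and it degrades badly when the robot and both beacons are nearly collinear, so a real system would use three or more beacons and a least-squares fit (the function name is mine, for illustration):

```python
import math

def triangulate_from_bearings(beacons, bearings):
    """Intersect the sight lines to two beacons at known 2D positions.

    bearings are world-frame angles (radians) from the robot to each beacon.
    The robot p satisfies p + r1*d1 = beacon1 and p + r2*d2 = beacon2 for
    unknown ranges r1, r2 along the unit bearing vectors d1, d2.
    """
    (bx1, by1), (bx2, by2) = beacons
    t1, t2 = bearings
    d1 = (math.cos(t1), math.sin(t1))
    d2 = (math.cos(t2), math.sin(t2))
    # Subtracting the two constraints: r1*d1 - r2*d2 = beacon1 - beacon2.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("beacons are on the same sight line; no unique fix")
    ex, ey = bx1 - bx2, by1 - by2
    r1 = (ex * (-d2[1]) - (-d2[0]) * ey) / det  # Cramer's rule for r1
    return bx1 - r1 * d1[0], by1 - r1 * d1[1]
```

Two beacons pin down position only when the robot's heading is known from another sensor; with bearings alone you'd need a third beacon to also solve for heading.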

tr6scott
17-05-2016, 15:43
http://www.seattlerobotics.org/encoder/200108/using_a_pid.html

mreda
17-05-2016, 23:07
Does your outdoor location have a ceiling? :confused:

I think your best bet might be to place visible beacons in known locations around the area, with a 360 degree camera view tracking the direction to each beacon in order to do inverse triangulation.

That's more along the lines of what I was thinking. Is there a system out there that does this already?

Brian Selle
17-05-2016, 23:43
Haven't tried these but they look promising.

http://navspark.mybigcommerce.com/ns-hp-rtk-capable-gnss-receiver/

Mechvet
18-05-2016, 00:09
I used to work at a company that provided out-of-the-box SLAM solutions to prospective PhDs. We'd deliver anything from top-end systems ($70k+) to quite budget-friendly options.

If you're able to share your budget constraints, it would greatly help us recommend tech to get you where you need to be.

AlexanderTheOK
18-05-2016, 00:40
You shouldn't need to spend too much money on a SLAM solution. And yes, SLAM works outdoors (https://www.youtube.com/watch?v=qpTS7kg9J3A).

Last I checked, you could get a used Xbox 360 Kinect for $25 a pop at GameStop. As long as whatever you're running can run ROS on Ubuntu, you should be good. ODROID makes good, cheap single-board computers, and while I don't have any experience with the Jetson boards, they should also be rather good, if a bit on the expensive side, going by what I've read on CD.

RyanCahoon
18-05-2016, 01:10
That's more along the lines of what I was thinking. Is there a system out there that does this already?

For the kind of accuracy (a few centimeters over a work area that I would guess is at least 10 meters on a side) and price point (less than $1000) you seem to be going for, you're probably not going to find some ready-made solution.

You may be able to find a cheaper option, but with 5-10 minutes of searching, the cheapest 360° camera that supports live streaming to a computer that I could find is the $449 VSN MOBIL V.360° (http://www.vsnmobil.com/products/v360) with a $299 HDMI converter (http://www.vsnmobil.com/collections/v-360-accessories/products/magewell-hdmi-to-usb-3-0-video-capture-dongle). If you want 360° coverage, another certainly cheaper option is to get several webcams pointed in different directions - the downside to this approach is you have to do all the extrinsic camera calibration yourself, which is generally a pain. I might recommend starting with just a single camera with a wide angle lens (be aware that you'll have to correct (http://photo.net/learn/fisheye/) for distortion (http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html)).

You can use one of the existing visual fiducial tracking libraries. I mentioned AprilTags already; ARToolKit is also widely-used. If you have a good view of the fiducial marker tag (place lots of tags around the area so the robot always has a good view of at least one), these libraries will give you a 6D pose estimate of the tag relative to the camera. You can invert this to give you the position of the robot relative to the tag, which then gives you the absolute pose of the robot when you add it to the known position and orientation of the tag.
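That inversion step is one matrix product, assuming 4x4 homogeneous transforms (the variable names below are mine, not from AprilTags or ARToolKit):

```python
import numpy as np

def robot_pose_in_world(T_world_tag, T_camera_tag):
    """Absolute camera/robot pose from one fiducial detection.

    T_world_tag:  4x4 pose of the tag in the world frame (surveyed in advance).
    T_camera_tag: 4x4 pose of the tag in the camera frame (the 6D estimate
                  returned by the fiducial library).
    Returns T_world_camera: the camera's pose in the world frame.
    """
    # Invert the detection to get camera-relative-to-tag, then chain it
    # onto the tag's known world pose.
    return T_world_tag @ np.linalg.inv(T_camera_tag)
```

For example, a tag surveyed at world position (5, 0, 2) seen 3 m straight ahead of the camera (both with identity rotation) places the camera at (5, 0, -1) in the world frame.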

Once you have position estimates derived from the individual observed beacons, you can fuse them to create a more accurate position estimate for the robot. There are fancier methods (https://www.google.com/search?q=particle+filter+localization) available, but a 90% solution could probably be achieved with a Kalman filter (https://en.wikipedia.org/wiki/Extended_Kalman_filter) and a few heuristics for resetting.
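For the fusion step, here's a deliberately minimal 1D Kalman filter (constant-position model; run one instance per axis). A real robot would fold the odometry in as a motion model, which is where the extended Kalman filter linked above comes in:

```python
class Kalman1D:
    """Minimal 1D Kalman filter with a constant-position process model.

    q is the process noise added per step (how fast the robot might move);
    r is the variance of each measurement (e.g. larger for beacons seen
    from far away or at a shallow angle).
    """

    def __init__(self, x0, p0, q):
        self.x, self.p, self.q = x0, p0, q

    def predict(self):
        self.p += self.q  # uncertainty grows between measurements

    def update(self, z, r):
        k = self.p / (self.p + r)   # Kalman gain: trust vs. measurement noise
        self.x += k * (z - self.x)  # blend the measurement into the estimate
        self.p *= (1.0 - k)         # uncertainty shrinks after the update
        return self.x
```

Starting from x = 0 with variance 1, a single measurement of 10 with variance 1 moves the estimate to 5 (equal trust in prior and measurement); repeated consistent measurements converge toward 10 while the variance shrinks.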

Here are a couple of projects that claim to do something similar to what you're asking (I haven't tried them personally):
- https://github.com/ProjectArtemis/aprilslam
- http://pharos.ece.utexas.edu/wiki/index.php/Using_ROS_Camera-based_Localization_with_the_SimonSays_Demo
- https://github.com/LofaroLabs/POLARIS / http://wiki.lofarolabs.com/index.php/POLARIS_-_Position_Orientation_Localization_ARTag_Recognition_Indoor_System