13-05-2016, 16:24
RyanCahoon
Disassembling my prior presumptions
FRC #0766 (M-A Bears)
Team Role: Engineer
 
Join Date: Dec 2007
Rookie Year: 2007
Location: Mountain View
Posts: 689
Re: Autonomous Robotic Mapping

Quote:
Originally Posted by mreda
Does anyone know of a way to create an accurate mapping system for a small-area outdoor autonomous robot? I was thinking GPS, but it can be off by 3 meters. Then I was thinking some sort of triangulation localization technique, but I figured I'd put it out there to see if anyone has any ideas.
Are you asking about mapping (exploring an area and cataloging features and their positions) or are you talking about localization (finding the position of something within a known area)?

Some variables to consider:
- You said that 3m is too much error, but what sort of resolution are you looking for?
- How large of an area are you dealing with? 10s of meters, 100s of meters, 1000s of meters?
- Is it a fixed workspace (i.e. you can set up beacons) or are you moving into unknown territory?
- What's the line-of-sight visibility like? Is it a fairly open space, or are there frequently objects which can occlude the robot's sensors? If there are occluders, are they large and metallic (i.e. would block radio frequencies)?
- How much money do you want to spend?


If it's an unknown space, you're going to need something like SLAM. There are many prebuilt libraries for this (check out some of the ones available through ROS, for example).
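
To make the mapping half of that concrete, here is a rough Python sketch of a log-odds occupancy grid update. It assumes the robot's pose is already known for each scan, which is exactly the part a real SLAM package (e.g. the ones available through ROS) solves jointly with the map; the grid size, cell size, and sensor-model constants below are made up for illustration.

[code]
import numpy as np

# Illustrative constants only -- pick values to suit your own robot and space.
CELL = 0.05                      # meters per grid cell
GRID = np.zeros((400, 400))      # log-odds map, 20 m x 20 m, 0 = "unknown"
L_OCC, L_FREE = 0.85, -0.4       # log-odds increments for a hit / pass-through

def to_cell(x, y):
    """World coordinates (meters) -> grid indices, origin at the grid center."""
    return int(x / CELL) + GRID.shape[0] // 2, int(y / CELL) + GRID.shape[1] // 2

def integrate_scan(pose, ranges, angles, max_range=8.0):
    """Fold one 2D range scan into the grid, given pose = (x, y, heading)."""
    px, py, ptheta = pose
    for r, a in zip(ranges, angles):
        r = min(r, max_range)
        hx = px + r * np.cos(ptheta + a)     # world position of the beam's end
        hy = py + r * np.sin(ptheta + a)
        # March along the beam, marking the cells it passed through as free
        for t in np.linspace(0.0, 1.0, int(r / CELL), endpoint=False):
            i, j = to_cell(px + t * (hx - px), py + t * (hy - py))
            GRID[i, j] += L_FREE
        if r < max_range:                    # only mark a hit if something reflected
            i, j = to_cell(hx, hy)
            GRID[i, j] += L_OCC
[/code]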

If you have a workspace that you control, here are a few solutions for several settings of the above parameters:
- Use a single camera and determine 3D pose from the position and apparent size of one or more targets. This is how most FIRST vision systems work, and how the Oculus Rift tracking camera works. For a fairly out-of-the-box solution, take a look at some of the fiducial tracking libraries, such as AprilTags. (A sketch of the single-camera geometry follows this list.)
- Use several cameras to track one or more targets and triangulate the pose. This is how motion capture systems work. (A triangulation sketch follows this list.)
- Use a Microsoft Kinect or other RGBD camera
- Use one of the magnetic position-sensing systems from Sixense
- Use one of the UWB (Ultra-wideband radio-frequency) localization systems. I have one of the kits from Decawave sitting on my desk waiting for when I have enough time to play with it more.
- Use a differential GPS system.
- Place a whole bunch of RFID tags with unique IDs around the space and use an RFID reader to scan the nearest tag to determine your position.
- Mount several string pots to your robot, connect their ends to fixed points around the space, and trilaterate your position (sketch after the list). As long as your workspace is very open, you get an instant, high-accuracy estimate of your robot's position with very little processing required.
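
For the single-camera option above, the core geometry is just the pinhole model: a target of known physical size that looks smaller in the image must be farther away. Here's a minimal Python sketch of that idea; the focal length and tag size are placeholder numbers, and a real fiducial library like AprilTags recovers the full 6-DOF pose for you rather than just range and bearing.

[code]
import math

# Placeholder calibration values -- substitute your own camera's numbers.
FOCAL_PX = 700.0        # focal length in pixels (from camera calibration)
IMAGE_WIDTH = 640       # horizontal resolution in pixels
TAG_SIZE_M = 0.16       # physical edge length of the target, meters

def range_and_bearing(target_width_px, target_center_x_px):
    """Estimate distance and horizontal bearing to a target from one camera.

    Pinhole model: apparent size scales inversely with distance, so
    distance = focal_length * real_size / apparent_size.
    """
    distance = FOCAL_PX * TAG_SIZE_M / target_width_px
    # Horizontal angle of the target center relative to the optical axis
    bearing = math.atan2(target_center_x_px - IMAGE_WIDTH / 2, FOCAL_PX)
    return distance, bearing

# Example: a 0.16 m target spanning 56 px, centered 40 px right of image center
print(range_and_bearing(56.0, 360.0))   # -> (2.0 m, ~0.057 rad)
[/code]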
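
For the multi-camera option, each calibrated camera turns a pixel measurement into a ray, and the target sits where the rays (nearly) intersect. Here is a bare-bones two-camera direct linear transform triangulation in Python; it assumes you already have 3x4 projection matrices from calibration, and a real motion-capture system uses many cameras and a more robust solver.

[code]
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares triangulation of one 3D point seen by two calibrated cameras.

    P1, P2  : 3x4 projection matrices (intrinsics @ [R | t]) from calibration.
    uv1, uv2: (u, v) pixel coordinates of the tracked target in each image.
    """
    # Each observation contributes two linear constraints on the homogeneous point X
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                       # null-space direction = best-fit point
    return X[:3] / X[3]              # homogeneous -> Euclidean 3D coordinates
[/code]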
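
And for the string-pot option, the position falls straight out of trilateration on the measured string lengths. A quick Python sketch with a made-up anchor layout (this is only the 2D math; a real setup would also calibrate the anchor positions and the attachment point on the robot):

[code]
import numpy as np

# Hypothetical string-pot anchor positions around the workspace, in meters.
ANCHORS = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 6.0], [0.0, 6.0]])

def trilaterate(lengths):
    """2D least-squares trilateration from the measured string lengths.

    Each pot gives (x - xi)^2 + (y - yi)^2 = di^2.  Subtracting the first
    anchor's equation from the rest cancels the quadratic terms, leaving a
    small linear system we solve with ordinary least squares.
    """
    x0, y0 = ANCHORS[0]
    d0 = lengths[0]
    A, b = [], []
    for (xi, yi), di in zip(ANCHORS[1:], lengths[1:]):
        A.append([2 * (x0 - xi), 2 * (y0 - yi)])
        b.append(di**2 - d0**2 + x0**2 - xi**2 + y0**2 - yi**2)
    xy, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return xy   # estimated (x, y) of the string attachment point on the robot

# Example: robot at roughly (3, 2) -> strings read about 3.61, 5.39, 6.40, 5.00 m
print(trilaterate([3.61, 5.39, 6.40, 5.00]))
[/code]
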
__________________
FRC 2046, 2007-2008, Student member
FRC 1708, 2009-2012, College mentor; 2013-2014, Mentor
FRC 766, 2015-, Mentor
