Re: Autonomous Robotic Mapping
I used to work at a company that provided out-of-the-box SLAM solutions to prospective PhDs. We'd deliver anything from top-end systems ($70k+) to quite budget-friendly options.
If you're able to share your budget constraints, it would greatly help us recommend tech to get you where you need to be.
Re: Autonomous Robotic Mapping
You shouldn't need to spend too much money on a SLAM solution. And yes, SLAM works outdoors.
Last I checked, you could get a used Xbox 360 Kinect for $25 a pop at GameStop. As long as whatever you're running can run ROS on Ubuntu, you should be good. ODROID makes good, cheap single-board computers, and while I don't have any experience with the Jetson boards, they should also be rather good, if a bit on the expensive side, going by what I've read on CD.
Re: Autonomous Robotic Mapping
You may be able to find a cheaper option, but with 5-10 minutes of searching, the cheapest 360° camera that supports live streaming to a computer that I could find is the $449 VSN MOBIL V.360° with a $299 HDMI converter. If you want 360° coverage, another certainly cheaper option is to get several webcams pointed in different directions; the downside to this approach is that you have to do all the extrinsic camera calibration yourself, which is generally a pain.

I might recommend starting with just a single camera with a wide-angle lens (be aware that you'll have to correct for distortion). You can use one of the existing visual fiducial tracking libraries. I mentioned AprilTags already; ARToolKit is also widely used. If you have a good view of a fiducial marker tag (place lots of tags around the area so the robot always has a good view of at least one), these libraries will give you a 6D pose estimate of the tag relative to the camera. You can invert this to get the pose of the robot relative to the tag, which then gives you the absolute pose of the robot when you compose it with the known position and orientation of the tag (see the first sketch below).

Once you have position estimates derived from the individual observed beacons, you can fuse them to create a more accurate position estimate for the robot. There are fancier methods available, but a 90% solution could probably be achieved with a Kalman filter and a few heuristics for resetting (see the second sketch below).

Here are a couple of projects that claim to do something similar to what you're asking (I haven't tried them personally):
- https://github.com/ProjectArtemis/aprilslam
- http://pharos.ece.utexas.edu/wiki/in...SimonSays_Demo
- https://github.com/LofaroLabs/POLARIS / http://wiki.lofarolabs.com/index.php..._Indoor_System
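To make the invert-and-compose step concrete, here's a minimal sketch using NumPy and 4x4 homogeneous transforms. The frame names and example values are made up for illustration; they aren't tied to any particular AprilTags or ARToolKit API, which each have their own conventions for reporting the detected pose.

```python
import numpy as np

def invert_transform(T):
    """Invert a 4x4 homogeneous transform [R|t], using the fact that R^-1 = R^T."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical detection: the fiducial library reports the tag 1 m straight
# ahead of the camera, with no relative rotation.
T_tag_in_cam = np.eye(4)
T_tag_in_cam[:3, 3] = [0.0, 0.0, 1.0]

# Surveyed (known) pose of that same tag in the world frame.
T_tag_in_world = np.eye(4)
T_tag_in_world[:3, 3] = [2.0, 3.0, 1.5]

# Camera pose in the world: (world <- tag) composed with (tag <- camera).
T_cam_in_world = T_tag_in_world @ invert_transform(T_tag_in_cam)
print(T_cam_in_world[:3, 3])  # camera position in world coordinates
```

Each tag the robot can see yields one such absolute pose estimate; those per-tag estimates are what get fused in the next step.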
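And here's a toy sketch of the fusion step with a constant-velocity Kalman filter. The state layout, noise values, and the `BeaconFuser` class are hypothetical choices for illustration, not taken from any of the linked projects; in practice you'd tune the noise matrices and add the reset heuristics mentioned above.

```python
import numpy as np

class BeaconFuser:
    """Fuse per-beacon 2D position fixes with a constant-velocity Kalman filter."""
    def __init__(self, dt=0.1):
        self.x = np.zeros(4)               # state: [px, py, vx, vy]
        self.P = np.eye(4)                 # state covariance
        self.F = np.eye(4)                 # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = dt
        self.Q = np.eye(4) * 0.01          # process noise (tune for your robot)
        self.H = np.zeros((2, 4))          # beacons measure position only
        self.H[0, 0] = self.H[1, 1] = 1.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, meas_var=0.05):
        R = np.eye(2) * meas_var           # measurement noise for this beacon
        y = z - self.H @ self.x            # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Each camera frame: predict once, then update with every visible tag's fix.
f = BeaconFuser()
f.predict()
for fix in [np.array([2.1, 3.0]), np.array([1.9, 3.1])]:
    f.update(fix)
print(f.x[:2])  # fused position estimate
```

Running all the per-tag updates through one filter is what buys you the accuracy improvement: each noisy fix shrinks the covariance a little, and the motion model smooths over frames where no tag is visible.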