Hello, I am working on integrating ROS into our robot for SLAM and fully autonomous navigation, and I wanted to ask if anyone has tried integrating ROS into their roboRIO alongside the WPI software?
We will take care of the advanced stuff; the only critical piece is getting a ROS node running on the roboRIO. Further information on the intended setup and usage can be found below. We would be following this tutorial: http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot#Kinect_.2B-Odometry.2B-_2D_laser
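To make the "ROS node on the roboRIO" question concrete, here is a minimal sketch of what that node might look like, assuming rospy can be cross-compiled for the roboRIO's ARM Linux (which WPILib does not officially support). The node and topic names, and the `read_gyro_radians()` helper, are placeholders, not part of any standard:

```python
#!/usr/bin/env python
"""Hypothetical minimal ROS node publishing the roboRIO's gyro heading.
Assumes a rospy environment exists on the roboRIO; names are placeholders."""
import math

def quaternion_from_yaw(yaw):
    """Convert a gyro heading (radians) into a z-axis quaternion
    (x, y, z, w), the orientation format nav_msgs/Odometry expects."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def read_gyro_radians():
    """Placeholder: replace with however the robot code reads the gyro."""
    return 0.0

def main():
    # ROS imports kept inside main() so the pure-math helper above
    # is usable even where rospy is not installed.
    import rospy
    from nav_msgs.msg import Odometry

    rospy.init_node("roborio_odom")              # placeholder node name
    pub = rospy.Publisher("odom", Odometry, queue_size=10)
    rate = rospy.Rate(50)                        # 50 Hz, a typical robot loop rate
    while not rospy.is_shutdown():
        msg = Odometry()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "odom"
        _, _, z, w = quaternion_from_yaw(read_gyro_radians())
        msg.pose.pose.orientation.z = z
        msg.pose.pose.orientation.w = w
        pub.publish(msg)
        rate.sleep()

# On the roboRIO, main() would be started from the robot code's init.
```

The same pattern (one publisher, a fixed-rate loop) extends to wheel odometry; the harder part is getting the ROS build environment onto the roboRIO in the first place.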
The basic hardware setup is like this:
- Jetson TX2 running ROS Kinetic (the computing powerhouse)
- Kinect used as an RGB-D sensor
- RPLidar for 360-degree laser scanning
- roboRIO reports gyro and wheel odometry to further improve accuracy
- Stereo camera utilizing the TX2’s GPU cores for stereo vision and target tracking
For software, ROS handles high-level functionality like SLAM and path planning:
- The roboRIO publishes the desired field position via a ROS node.
- The roboRIO sends gyro and wheel odometry data via a ROS node.
- Other ROS nodes take in sensor data from the Kinect and Lidar.
- RTAB-Map handles SLAM and path planning and determines the required robot velocities (X, Y, and R).
- A node on the roboRIO receives those velocities and passes them on to the normal WPILib robot code.
- The stereo camera is for dedicated target tracking.
- Possible deep learning on TX2 using TensorFlow?
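For the hand-off in the last step, the roboRIO node has to turn the planner's chassis velocities (X, Y, and R) into wheel speeds for the drive code. A minimal sketch, assuming a mecanum drivetrain (implied by the strafe velocity Y) and placeholder geometry constants:

```python
# Hypothetical sketch: standard mecanum inverse kinematics, converting
# the planner's chassis velocities into per-wheel speeds. The half_track
# and half_base values are placeholders for the real robot's geometry.

def mecanum_wheel_speeds(vx, vy, omega, half_track=0.3, half_base=0.3):
    """vx: forward m/s, vy: leftward m/s, omega: rad/s counter-clockwise.
    Returns (front_left, front_right, rear_left, rear_right) in m/s."""
    k = half_track + half_base
    fl = vx - vy - k * omega
    fr = vx + vy + k * omega
    rl = vx + vy - k * omega
    rr = vx - vy + k * omega
    return (fl, fr, rl, rr)
```

On the roboRIO this would run inside the subscriber callback for the planner's velocity topic, with the four resulting speeds fed into the normal WPILib drive code.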