Introducing LocalNet
Goal
Over the past few weeks I have been developing and researching the possibility of using a Deep Neural Net for localization, using raw input from a LIDAR sensor and heading information from an IMU. The goal is to have a DNN process this complex and often noisy data and produce an X,Y coordinate on the field, all while being stateless, meaning it can be run from anywhere on the field at any point in time without any prior information about starting position.
Simulation
Currently we are using a Unity-based simulator to both acquire this training data and run these tests. Although many may believe this won't translate to IRL performance (which may be the case), it has been a constant job for me to emulate noise and real-life behavior. Currently the simulated LIDAR models per-point noise, measurement errors, delays, and different mounting positions.
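The exact noise model lives in the Unity simulator, but here is a rough NumPy sketch of the kind of per-point corruption being applied; the sigma and dropout rate are illustrative assumptions, not the simulator's actual values:

```python
import numpy as np

def corrupt_scan(ranges, max_range=30.0, sigma=0.05, dropout=0.02):
    """Apply illustrative per-point noise to a clean LIDAR scan (in meters)."""
    noisy = ranges + np.random.normal(0.0, sigma, size=ranges.shape)  # per-point jitter
    missed = np.random.random(ranges.shape) < dropout                 # random missed returns
    noisy[missed] = max_range                                         # dropouts read as max range
    return np.clip(noisy, 0.0, max_range)
```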
Along with this, the concern of the walls being transparent was raised, so for these tests I have mounted the simulated LIDAR on the top of the bot, so it gets a view of the Scale/DS. However, other positions and environments are easy to train and test. This simulator, model, and testing can all be easily trained on new environments by just loading in new meshes, which has proven to work.
As seen in the GIF above, the bot sends this simulated data to the Jetson over UDP and gets a prediction in return, shown as the yellow post.
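The wire format isn't described in detail here, so this is only a guess at what that round trip could look like from the robot side; the address, port, and packet layout (little-endian float32 ranges followed by heading) are assumptions:

```python
import socket
import struct

JETSON = ("10.0.0.2", 5800)  # assumed Jetson address and port

def request_prediction(ranges, heading):
    """Send one scan plus heading over UDP and block for the predicted (x, y)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.1)
    payload = struct.pack(f"<{len(ranges)}f", *ranges) + struct.pack("<f", heading)
    sock.sendto(payload, JETSON)
    reply, _ = sock.recvfrom(8)        # two float32s back: x, y
    return struct.unpack("<2f", reply)
```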
The Data
The training data is gathered from a version of this simulation that includes 4-6 bots driving to random points on the field. Their X/Y position, LIDAR data, and heading are recorded periodically and saved to a file. This is usually run at 50x time scale.
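To picture the logging step, a minimal recorder might look like the following; the CSV layout is my assumption, as the actual file format isn't specified:

```python
import csv

def log_sample(writer, x, y, heading, ranges):
    """Append one training row: ground-truth pose, heading, then the raw scan."""
    writer.writerow([x, y, heading, *ranges])

with open("training_data.csv", "a", newline="") as f:
    writer = csv.writer(f)
    # called periodically for each of the 4-6 simulated bots
    log_sample(writer, 3.2, -1.7, 124.0, [5.1, 5.3, 6.0, 5.8])
```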
- LIDAR is scaled 0-1 based on max range (20-40m in my testing)
- Field coordinates are -1 to 1 based on max field width and length
- Heading is scaled 0-1 based on 0-360°
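Written out as a sketch, those three scalings look roughly like this; the max range and field dimensions are parameters for your setup, and I am assuming the field origin sits at the center:

```python
import numpy as np

def normalize(ranges, x, y, heading, max_range=30.0, field_w=8.23, field_l=16.46):
    """Scale raw samples into the ranges the net trains on."""
    lidar = np.clip(ranges, 0.0, max_range) / max_range  # 0..1
    coords = (2 * x / field_w, 2 * y / field_l)          # -1..1, origin at field center
    hdg = (heading % 360.0) / 360.0                      # 0..1
    return lidar, coords, hdg
```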
For LIDAR we have run anywhere from 64-196 data points with no issues besides increased training time; this is, however, one area we are still researching.
Usually for our training, we train on 3-5 million entries and test on 1-2 million.
The Model
Currently the model consists of a 1D conv net feeding into a fully connected net, which is also where the IMU data is input. Error is calculated as MSE. This was all written in Keras on top of TensorFlow.
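The post doesn't give layer sizes, so this is only a sketch of the described shape (a Conv1D stack over the scan, heading concatenated in at the fully connected stage, MSE loss); every filter count, kernel width, and the tanh output are placeholder assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

N_POINTS = 128  # LIDAR points per scan (64-196 in testing)

scan_in = keras.Input(shape=(N_POINTS, 1), name="lidar")
x = layers.Conv1D(32, 5, activation="relu")(scan_in)  # 1D conv stack over the scan
x = layers.Conv1D(64, 5, activation="relu")(x)
x = layers.Flatten()(x)

hdg_in = keras.Input(shape=(1,), name="heading")      # IMU heading joins at the FC stage
x = layers.concatenate([x, hdg_in])
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(2, activation="tanh", name="xy")(x)  # X/Y in -1..1

model = keras.Model([scan_in, hdg_in], out)
model.compile(optimizer="adam", loss="mse")             # error calculated as MSE
```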
I am still relatively new to ML and creating custom DNNs, so my explanation may be lacking.
I train this model on an Nvidia 1080 GPU, currently running 75 epochs, with each epoch taking 10+ minutes; however, this has been changing constantly with new hyperparameters and varying datasets. We have been averaging around 0.002-0.003 loss with noise.
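Continuing from the model sketch above, the training loop itself is a standard Keras fit; the batch size and validation split here are assumptions, and the random arrays are stand-ins for the logged data:

```python
import numpy as np

# stand-in arrays with the shapes the model expects; real runs load the simulator logs
X_lidar = np.random.rand(1000, N_POINTS, 1).astype("float32")
X_heading = np.random.rand(1000, 1).astype("float32")
Y = np.random.uniform(-1, 1, (1000, 2)).astype("float32")

model.fit([X_lidar, X_heading], Y,
          epochs=75,            # current run length
          batch_size=256,       # assumption; not stated in the post
          validation_split=0.1)
```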
Results and Plans
As seen in the GIF, the NN is currently successfully localizing based on LIDAR data, running on the Nvidia Jetson TX2 over UDP and Ethernet. Currently the TX2 runs the model at <4ms per inference, so I have been running my tests at 100Hz.
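On the Jetson side, the serving loop could be as simple as the following sketch, mirroring the guessed packet format from the earlier client snippet; the port and layout remain assumptions, and `model` is the trained Keras model from above:

```python
import socket
import struct
import numpy as np

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5800))  # assumed port, matching the client sketch

while True:
    data, addr = sock.recvfrom(4096)
    vals = struct.unpack(f"<{len(data) // 4}f", data)
    ranges, heading = np.array(vals[:-1], dtype="float32"), vals[-1]
    pred = model.predict([ranges.reshape(1, -1, 1),
                          np.array([[heading]], dtype="float32")], verbose=0)
    sock.sendto(struct.pack("<2f", *pred[0]), addr)  # reply with predicted x, y
```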
This net seems to handle noise and bad LIDAR data extremely well; however, IRL testing is required, which I plan to perform extensively once our club acquires a LIDAR unit or we partner with a team that has a Jetson and LIDAR.
For future plans, I am still actively developing and improving this model, while also stress-testing it. There are plans to attempt to add additional sensor inputs such as vision and ultrasonics. Overall this has been and will continue to be an active project, and hopefully it will prove to be an extremely powerful tool for FRC, with the goal of integrating it with additional NNs to bring AI into FRC.
THIS IS ALSO ALL GOING TO BE (AND PARTLY ALREADY IS) OPEN SOURCE. ONCE I MAKE THE REPOS AND SITE CLEANER I WILL POST THOSE LINKS.
Special thanks to Michael (aka Loveless / Turing's Ego) from Innovation DX for the mentorship and Jetson donation.