I created a neural network: 3x8x16x8x2, with tanh activation.
Its inputs were a delta encoder value for each drivetrain (dt) side, plus a millisecond clock value scaled so a 30-second auto runs from 0 to 1. The outputs were two values between -1 and 1, which I scaled up to voltage values between -12 and 12 to command each dt side.
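To make the shape of that inference path concrete, here is a minimal sketch (not my actual code; the layer struct, `forward`, and `toVolts` names are illustrative) of a 3-8-16-8-2 tanh feed-forward pass whose two outputs get mapped onto the -12 to 12 volt range:

```cpp
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// One fully-connected layer: w[out][in] weights plus a bias per output.
struct Layer {
    std::vector<std::vector<double>> w;
    std::vector<double> b;
};

// Plain feed-forward pass. For the network described above, `net` would
// hold four layers sized 3->8, 8->16, 16->8, and 8->2.
std::vector<double> forward(const std::vector<Layer>& net, std::vector<double> x) {
    for (const Layer& layer : net) {
        std::vector<double> y(layer.b);  // start from the biases
        for (std::size_t o = 0; o < y.size(); ++o) {
            for (std::size_t i = 0; i < x.size(); ++i)
                y[o] += layer.w[o][i] * x[i];
            y[o] = std::tanh(y[o]);      // tanh keeps each activation in (-1, 1)
        }
        x = std::move(y);
    }
    return x;
}

// Map a tanh output in [-1, 1] onto a motor voltage command in [-12, 12].
double toVolts(double out) { return out * 12.0; }
```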
For back-propagation, I wrote a recorder that captures the motor voltages at every watchdog interval using a custom thread-safe mutex timing lock, then scales the values and back-propagates them into the net.
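A recorder like that might look something like the sketch below. This is an assumption about the structure, not my actual implementation: each watchdog tick appends one sample (encoder deltas, clock, applied voltages) under a mutex, so the control thread and the training code never race on the buffer. The `Sample` and `Recorder` names are made up for illustration.

```cpp
#include <mutex>
#include <vector>

// One recorded tick: encoder deltas per dt side, the scaled clock,
// and the voltages that were applied at that moment.
struct Sample {
    double leftDelta, rightDelta;
    double clock;                   // auto time scaled to [0, 1]
    double leftVolts, rightVolts;
};

class Recorder {
public:
    // Called from the control loop at every watchdog interval.
    void record(const Sample& s) {
        std::lock_guard<std::mutex> lock(mtx_);  // thread-safe append
        samples_.push_back(s);
    }

    // Copy the buffer out for the training pass.
    std::vector<Sample> snapshot() const {
        std::lock_guard<std::mutex> lock(mtx_);
        return samples_;
    }

private:
    mutable std::mutex mtx_;
    std::vector<Sample> samples_;
};
```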
A recording of the dt values, the clock, and the voltage values was meant to be the equivalent of a "path". Then, after say 10,000 training iterations, the path would be "stored" in the brain of the network.
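Under those assumptions, the training pass reduces to a loop like the one below: each recorded tick becomes one training example whose inputs are the two encoder deltas plus the clock, and whose targets are the recorded voltages squashed back into tanh range. `backprop` here is a stand-in for the actual gradient-update step, which this sketch does not implement.

```cpp
#include <functional>
#include <vector>

// Inverse of the output scaling: map a voltage in [-12, 12] back to [-1, 1]
// so it can serve as a tanh training target.
double toTarget(double volts) { return volts / 12.0; }

// One recorded tick of the path (names illustrative).
struct Tick { double leftDelta, rightDelta, clock, leftVolts, rightVolts; };

// Replay the recorded path through the update step `backprop` for the
// given number of iterations (e.g. ~10,000 passes over the recording).
void trainPath(const std::vector<Tick>& path,
               int iterations,
               const std::function<void(const std::vector<double>&,
                                        const std::vector<double>&)>& backprop) {
    for (int it = 0; it < iterations; ++it) {
        for (const Tick& t : path) {
            std::vector<double> in  = {t.leftDelta, t.rightDelta, t.clock};
            std::vector<double> tgt = {toTarget(t.leftVolts), toTarget(t.rightVolts)};
            backprop(in, tgt);  // one gradient step on this sample
        }
    }
}
```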
So at runtime, all the auto would do is preload the brain .dat file, then during auto just pass in the delta values and a clock value and get back the voltages the TalonSRXs needed to stay on path.
It just spun in circles.
When playing back the recording directly (i.e., letting the robot replay the recorded voltages), I got smooth results.
The AI is supposed to help correct for variations along the path (floor condition, wheel wear, carpet threading), which is the one thing neither recording playback nor Pathfinder can do.
If anyone is interested in pairing on the side to help develop something like this, I have a baseline multi-threaded neural network in C++ that runs very smoothly even with large networks. Hit me up.