This is the ‘completed’ form of a Neural Network that team 1977’s programming team was throwing around in 2009. If you would like to contribute to the development, please PM me with your SourceForge account name so I can give you SVN access.
Miscellaneous notes
Controls the robot’s drive
Currently only supports 2 motors; code to allow for 4 or more motors will be added at some point
Unsure if the fitness algorithm works or not; it must be tested (but team 1977 can’t, because our 2009 robot is disassembled)
Genetic Algorithms (what’s used on the connections, and by far the largest section of code) generally take 500-5,000 generations to fully optimize; a rough sketch of what one such step could look like follows these notes
The drive function (Motor_Control() in the code) doesn’t include the setting of the motors; this is specific to the team’s robot (which Jaguars/Victors are connected where).
Requires camera for the fitness function
Camera code needs to be added
Currently for the 2009 game. The reason for this is that the 2009 game was conceptually easier to write a fitness function for, and thus easier to build a neural network for. If you can think of a fitness function for Breakaway, please tell me.
The robot currently tries to drive away from and avoid opposing robots. This can be reversed (the robot trying to go after opponents’ robots).
Released under GPL
Again, this is open source, so if you would like to contribute, PM me with your account name.
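As a rough illustration of the genetic-algorithm step mentioned in the notes above, here is a minimal mutation sketch. The function name, array layout, and rates are illustrative assumptions and are not taken from the actual team 1977 code:

```cpp
#include <cstdlib>

// Mutate each connection weight with a small probability, nudging it by a
// random amount in [-strength, strength]. Repeated over hundreds to
// thousands of generations, together with a fitness-based selection step,
// this is the slow optimization the note above refers to.
void Mutate(float weights[], int count, float rate, float strength)
{
    for (int i = 0; i < count; i++)
    {
        float roll = (float)rand() / (float)RAND_MAX;      // 0.0 .. 1.0
        if (roll < rate)
        {
            float nudge = ((float)rand() / (float)RAND_MAX) * 2.0f - 1.0f;
            weights[i] += nudge * strength;
        }
    }
}
```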
but there’s going to have to be some modifications before it’ll work with a given team’s robot. To be specific, where it says “set motor value here” in the function Motor_Call, that is replaced by code sending Motor_Left and Motor_Right to their corresponding Jaguars (whichever ones those are). Also, you have to add the camera tracking code into the function camera_track, plus a little bit of code to determine the angle (Y, vertical, or phi, whichever you prefer; it’s the angle the tilt servo is set to) and send it back. Once those modifications are done, just call Motor_Control() when you want to run the Neural Net.
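A hypothetical sketch of those modifications, assuming WPILib’s Jaguar class and made-up PWM channel numbers; only Motor_Left, Motor_Right, Motor_Call, camera_track, and Motor_Control() come from the posted code, and the signatures below are guesses:

```cpp
#include "WPILib.h"

// Typically these would live in your robot class; channel numbers depend
// on your wiring.
Jaguar leftJag(1);
Jaguar rightJag(2);

// Inside Motor_Call, where the posted code says "set motor value here":
void Motor_Call(float Motor_Left, float Motor_Right)
{
    leftJag.Set(Motor_Left);
    rightJag.Set(Motor_Right);
}

// Inside camera_track, after your own target-tracking code has run,
// send back the vertical (tilt) angle that the tilt servo is set to.
float camera_track()
{
    float tiltAngle = 0.0f;   // replace with the angle from your tracking code
    return tiltAngle;
}
```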
It could be used as a self-optimizing autonomous drive routine or really anything that involves driving.
Bob - No surprise in what I am going to suggest - can we marry this up with the 5th Gear simulator and let each of 6 neural networks (with 6 different fitness functions) learn to control one simulated robot? Obviously, connecting the two sets of code is feasible given enough time and money. I’m asking if you think it would be easy enough and rewarding enough to want to actually do it.
I wouldn’t know quite how easily it would port, mainly because I haven’t dealt with .NET all that much. It would almost be easier to write a fitness function for 5th Gear than it would be for the main FRC bot. How useful it would be would depend on how much of the actual robot is simulated within 5th Gear.
Just wondering, wouldn’t it make a bit more sense to use dynamic memory and float arrays instead of a bunch of named floats? It would have greatly reduced the amount of code that needed to be copy-pasted and changed…and I noticed you make a rnd(min,max) function, but use rand()%max the entire time in the code?
I’m really not sure what this is.
Is it a complete solution to an autonomous 'bot?
Is it a hardware abstraction to make perception and control easier to implement?
Is it implemented within the current FRC framework?
Why is it called a “neural network”? Is it for controlling of multiple robots from one AI?
What are the benefits of using it over simply using the WPI libraries directly?
Just wondering, wouldn’t it make a bit more sense to use dynamic memory and float arrays instead of a bunch of named floats? It would have greatly reduced the amount of code that needed to be copy-pasted and changed…and I noticed you make a rnd(min,max) function, but use rand()%max the entire time in the code?
It may. I’m pretty sure there are more efficient ways of coding it; I used the way that I was most comfortable with. rnd() is called within the initialization function to provide a random decimal (when running it for the first time).
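For anyone following along, the difference between the two calls looks roughly like this (a minimal sketch, not the actual initialization code):

```cpp
#include <cstdlib>

// rand() % max  -> a random *integer* in 0 .. max-1
// rnd(min, max) -> a random *decimal* in min .. max, which is what the
//                  initialization step needs for the starting weights.
float rnd(float min, float max)
{
    return min + (max - min) * (float)rand() / (float)RAND_MAX;
}

// e.g. an initial connection weight:
// float weight = rnd(-1.0f, 1.0f);
```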
I’m really not sure what this is.
Is it a complete solution to an autonomous 'bot?
Is it a hardware abstraction to make perception and control easier to implement?
Is it implemented within the current FRC framework?
Why is it called a “neural network”? Is it for controlling of multiple robots from one AI?
What are the benefits of using it over simply using the WPI libraries directly?
Not complete, but a move in that direction. It allows the robot to learn and drive itself according to a method of evaluating the fitness of certain options. How that fitness algorithm is written determines the efficiency and what the goals are (within reason: the goals must be able to be defined quantitatively).
Depends on your definition of hardware. It isn’t traditional hardware abstraction in the sense of silicon/PCBs/ICs/doped circuits, but you could classify it as abstracting the brain.
Not really. Right now, yes, the code uses some things from WPILib, but as far as being placed into a specific template and getting the vision code done (not added yet), that still remains to be done (I don’t have WPILib, Wind River, or any of the example code on my laptop right now).
Imagine how a brain functions: there is a network of neurons interconnected with synapses, and electrical pulses fire along them. A Neural Network emulates this design. There is a collection of neurons (some type of variable, usually floats) and their connections/weights (also usually floats). Each neuron stores the net sum of its inputs, where each input is a connection weight times the value of a neuron in the layer closer to the initial neurons. In this NN specifically, there are 2 initial inputs (the motor values); these 2 initial neurons then have their values, each multiplied by the respective connection or weight, summed across 18 intermediate neurons. That value times the weight (part of the ‘thought’, if you wish) is stored in the intermediate neurons. The process is repeated for the last ‘layer’, which is back down to 2 output neurons; these 2 output neurons are the values that you set to the motors. A good article on how NNs work is Neural Network Tutorial
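A minimal sketch of the forward pass described above, assuming the 2-18-2 layout; the array and parameter names are illustrative (the real code uses individually named floats):

```cpp
const int NUM_IN = 2, NUM_HID = 18, NUM_OUT = 2;

void FeedForward(const float in[NUM_IN],
                 const float wInHid[NUM_IN][NUM_HID],    // input->hidden weights
                 const float wHidOut[NUM_HID][NUM_OUT],  // hidden->output weights
                 float out[NUM_OUT])
{
    float hidden[NUM_HID];

    // Each hidden neuron stores the sum of (input value * connection weight).
    for (int h = 0; h < NUM_HID; h++)
    {
        hidden[h] = 0.0f;
        for (int i = 0; i < NUM_IN; i++)
            hidden[h] += in[i] * wInHid[i][h];
    }

    // Same process for the output layer; out[0] and out[1] become the
    // values you set the two drive motors to.
    for (int o = 0; o < NUM_OUT; o++)
    {
        out[o] = 0.0f;
        for (int h = 0; h < NUM_HID; h++)
            out[o] += hidden[h] * wHidOut[h][o];
    }
}
```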
This allows for an increase in autonomy, as well as letting the robot optimize how it drives across multiple matches, thanks to the weights being saved into a file.
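Saving and restoring those weights could look something like this (a hypothetical sketch; the file name and flat-array layout are assumptions, not taken from the posted code):

```cpp
#include <cstdio>

// Write the connection weights so the network keeps its learning between matches.
bool SaveWeights(const char *path, const float *weights, int count)
{
    FILE *f = fopen(path, "wb");
    if (!f) return false;
    size_t written = fwrite(weights, sizeof(float), count, f);
    fclose(f);
    return written == (size_t)count;
}

// Read them back on startup; on the first run (no file yet), fall back to
// rnd() initialization as described earlier in the thread.
bool LoadWeights(const char *path, float *weights, int count)
{
    FILE *f = fopen(path, "rb");
    if (!f) return false;
    size_t read = fread(weights, sizeof(float), count, f);
    fclose(f);
    return read == (size_t)count;
}
```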
I hope that answered your questions and didn’t increase your confusion.
Okay, I think I got it.
A Neural Network is a method of summing and thresholding the values from multiple inputs to determine degree of truth. It’s a bit like fuzzy logic.
In other words, the neural network is the “planning” part of an autonomous robot. (Perceive, Plan, Control)
Or you could use it for the P and D of IPDE (Identify, Predict, Decide, Execute).
Of the 22 neurons, how many are in each layer?
And how many layers do you have (input, hidden?, output)?
I would like to take a stab at rewriting the network, and maybe a fitness function, but without the layer information I don’t know how many neurons to allocate.
Also, is this the same network that was used on your 2009 bot, or is this rewritten code?
I did some neural-net stuff a while back and used a framework called “ECJ” to do the evolution part of the problem. It handles population dynamics / breeding / tournaments / etc. It was reasonably easy to use and reasonably well documented. It even made using multiple machines easy!
There are 2 input, 18 hidden, and 2 output neurons. It’s largely rewritten (I had to rewrite the code because it disappeared somewhere :/). It’s one of the options we were considering for 2009, but we never implemented it in competition.
In fact, as Eric pointed out, there are really two different levels to NNs. You can implement the low level portion and hope you get the bugs out, then start to train it and see what happens, or you can work at the high level to see what results you can achieve, and if successful, get it on the robot.
Should you model and prototype before you implement, or implement first and find out what it is good for?