Chief Delphi > Technical > Programming > Java
#1 | 12-03-2015, 08:14
Altainia
That one geeky guy...
FRC #5098 (Sting-R)
Team Role: Mentor
 
Join Date: Jan 2008
Rookie Year: 2007
Location: Kansas City, MO
Posts: 18
Neural Net

A while ago I saw some people asking about using neural nets to control robots, so I decided to write a small library. It has basic support for a multi-layered neural network, though a single perceptron should be enough for many mechanical tasks. Here is the link:

https://www.dropbox.com/s/6dmlzhl8yp...twork.jar?dl=0

Neural Network is a very, very broad term. Saying you want to build a neural net is like saying you want to build a vehicle, or a robot. It's very... generic. As a result, there are many different specializations. The one I present is intended for robots. At least, the first layer is.

A neural network typically has a collection of neurons. Each neuron receives a sum of values coming from other neurons and outputs some answer that is modified, conditional, or both. This one in particular has its neurons organized in layers, with options for sending a signal top-down based on modification, conditions, or both. Those are the ActivationFunctions you can choose from. The sigmoid option here outputs values from -1 to +1 (a bipolar sigmoid, in the style of tanh), which is ideal for motor control.

You set up a brain in this library by calling the constructor and passing something like this for the 'shape' argument:
Code:
new int[]{x,y,z}
where x is the number of neurons in the first (input) layer, y is the number of neurons in the hidden layer, and z is the number of neurons in the last (output) layer. There can be more than one hidden layer, or none at all, and the input layer can double as the output layer.

As the input propagates down the network, each value is copied to every neuron in the layer beneath it, and each copy is multiplied by a weight unique to that pair of neurons. The output layer, of course, has no layer beneath it, so it does not do that. Instead, each output neuron holds a bias value that gets added to its sum before it goes through the activation function.
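To make the layer-to-layer math concrete, here is a minimal sketch of one layer's forward pass with a tanh-style activation. The class and method names are my own for illustration, not the API of the posted library.

```java
public class LayerForward {
    // out[j] = activation(sum_i in[i] * w[i][j] + bias[j])
    public static double[] forward(double[] in, double[][] w, double[] bias) {
        double[] out = new double[bias.length];
        for (int j = 0; j < bias.length; j++) {
            double sum = bias[j]; // bias added before the activation function
            for (int i = 0; i < in.length; i++) {
                sum += in[i] * w[i][j]; // unique weight per neuron pair
            }
            out[j] = Math.tanh(sum); // squashes the sum into (-1, 1)
        }
        return out;
    }
}
```

Stacking calls to forward(), one per layer, gives the full top-down pass the post describes.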

What makes this network special is how it avoids over-compensating. During training, it pays less attention to the error value the closer its previous output was to 100% (1.0), so long as the error is in the same direction as that output. In other words, there is no reason to increase weights or biases in an attempt to get a better outcome when the previous outcome was already as good as it is going to get; that would only build up weights and biases that would take time to unwind when needed.
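One way to express that damping rule (my own formulation, not necessarily the library's exact math): scale the error toward zero as the previous output approaches saturation, but only when the error points the same way the output already does.

```java
public class DampedError {
    // Shrink the error as prevOutput approaches +/-1, but only when the
    // error pushes in the direction the output is already saturated toward.
    public static double damp(double error, double prevOutput) {
        if (Math.signum(error) == Math.signum(prevOutput)) {
            return error * (1.0 - Math.abs(prevOutput));
        }
        return error; // opposite direction: correct at full strength
    }
}
```

With prevOutput = 0.9 and an error of +0.5, the effective error drops to 0.05; an error of -0.5 would pass through untouched.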

For motor control, I recommend a single neuron in a single layer, {1}. Give it a desired speed normalized to -1 through +1 and grade it on that (its error should be the normalized desired speed minus the normalized actual speed). Note: if you use this to control position, I recommend sending the distance through a sigmoid function, which will give you values between -1 and +1, and feeding that in as the desired speed. You may have to adjust the learning rate in the constructor and make it a small fraction, depending on your machine.
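Here is a sketch of that single-neuron setup, including the squash for position control. The names and the simple delta-rule update are my own illustration, not the library's API.

```java
public class MotorNeuron {
    private double weight = 0.0;
    private double bias = 0.0;
    private final double learningRate;

    public MotorNeuron(double learningRate) {
        this.learningRate = learningRate; // keep this a small fraction
    }

    // For position control: squash a raw distance into a (-1, 1) desired speed.
    public static double desiredSpeedFromDistance(double distance) {
        return Math.tanh(distance);
    }

    public double output(double input) {
        return Math.tanh(weight * input + bias);
    }

    // error = normalized desired speed - normalized actual speed
    public void train(double input, double error) {
        weight += learningRate * error * input;
        bias += learningRate * error;
    }
}
```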

Also note that this works in simulations but has yet to be tested on a real robot.

Last edited by Altainia : 12-03-2015 at 11:20. Reason: Details about the neural net usage
#2 | 13-03-2015, 00:01
faust1706
Registered User
FRC #1706 (Ratchet Rockers)
Team Role: College Student
 
Join Date: Apr 2012
Rookie Year: 2011
Location: St Louis
Posts: 498
Re: Neural Net

I was the one who "asked" about NNs for FRC use. Really great post.

A few things I want to add and clarify (in no particular order):

The most common sigmoid function, the logistic function, has a range of (0,1). I typically use the tanh function to scale my outputs to (-1,1).
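The two ranges are easy to check numerically, and the two functions are closely related: tanh is just a shifted and rescaled logistic, tanh(x) = 2*logistic(2x) - 1.

```java
public class Squash {
    // Logistic sigmoid: range (0, 1), centered at 0.5
    public static double logistic(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }
    // For (-1, 1) outputs use Math.tanh, which satisfies
    // tanh(x) == 2 * logistic(2 * x) - 1
}
```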

Say you want to use a NN to drive a sensor reading to a target value x. Let's say that sensor is a gyro and you have tank drive. Then you have 1 node in your input layer, 2 nodes in your output layer (one per side of the drivetrain), and some number n of nodes in your hidden layer(s). For this problem I would probably use 6 nodes in one hidden layer. NN architecture is sort of a dark art. I'm currently researching the rate of convergence of different architectures in the lab, but that is a long-term project.

Moving on. In order to train your NN, you need to see how its output performs relative to your goal. But the output is motor values, and you cannot directly compare motor values to a distance.

Here is one solution to this problem: wait a finite amount of time for an updated sensor value based on your motor values, then use that updated sensor value to see whether what you wanted to happen actually did, and train accordingly through backprop.
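That wait-then-train loop can be shown with a toy simulation. All the numbers here are assumptions for illustration: a single trainable weight stands in for the whole net, and the "plant" is a fake drivetrain whose actual speed is 2.0 times the motor command (a gain the net does not know).

```java
public class DelayedFeedbackDemo {
    // Command the plant, observe the updated reading one step later,
    // compare it to the goal, and train on that observed error.
    public static double trainToTarget(double desired, int steps, double learningRate) {
        double weight = 0.1;
        for (int step = 0; step < steps; step++) {
            double command = weight * desired;        // net output -> motors
            double actual = 2.0 * command;            // updated sensor reading
            double error = desired - actual;          // did what we wanted happen?
            weight += learningRate * error * desired; // train on the result
        }
        return weight; // should approach 1 / gain = 0.5
    }
}
```

On a real robot the "wait" is a real delay (one control-loop period or more) before re-reading the sensor, and the single weight update becomes a full backprop pass.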

I have a very generic NN program, written in C++, on my computer in the lab; I'll add it to this post when I go there. My code allows for multiple hidden layers, but I feel anything more than 2 hidden layers would be unnecessary for any machine-control loop you need in FRC.
__________________
"You're a gentleman," they used to say to him. "You shouldn't have gone murdering people with a hatchet; that's no occupation for a gentleman."