#16
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
If you look at the readme (which is nowhere near complete), I talk about different transfer functions. You can use any function you want that maps an input x, where x ∈ ℝ, onto a range of [-1, 1] (or [0, 1]), as long as f is strictly increasing on any interval [a, b] (for any b > a, f(b) > f(a)), or, roughly speaking, f'(x) ≥ 0, where f'(x) is the first derivative of f with respect to x. There are a few transfer functions that I intend to code in, but I haven't gotten to it yet. One transfer function, called the step function, has only two outputs: -1 or 1. This is used for binary classification and can be adjusted to multiclass classification if you give it more discrete outputs, such as -2, -1, 0, 1, 2. To implement another transfer function, you must calculate its derivative with respect to your input. If you haven't taken calculus yet, you can look it up online or ask a mentor (or a fellow student who has taken calculus) to find the derivative for you.

As for the bias node question: that was a mistake, which is why. That line of code, layers.back().back().setOutputValue(1.0);, was supposed to be inside the for loop right above it... thanks for pointing this out to me. I'm sure there are other things wrong in this code base, as I haven't looked through it rigorously, but it works for the training data I gave it (the xor classifier), so it shouldn't be too devastatingly wrong.

Asking questions is the best way of learning, in my opinion. This was all rather new to me as well when I was a student in FRC. Then I went through a textbook on neural networks during my last semester of high school (Neural Networks and Learning Machines by Simon Haykin). If you have the math background (that is, Calculus III), then go for it; it could be a great way to fill your free time. Even if you don't have quite that extensive a math background, you can still learn a lot by trying to work through it. If you never expose yourself to new material, you will never learn. A great resource for me several years ago was an online class on machine learning taught by Andrew Ng (https://www.coursera.org/course/ml). He co-founded Google Brain, which in the end resulted in Android's speech recognition program. The guy knows what he's talking about.

Edit the following morning: I added the capacity to save the state of a network. It hasn't been tested and I wrote it in one continuous go, so there is most likely a bug in it. I'll look over it tonight, fix whatever I find, and then test it.

Last edited by faust1706 : 15-04-2015 at 09:14. Reason: grammar, typos.
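(For illustration, a minimal sketch of what adding another transfer function and its derivative might look like, following the rule described above. The names and the sigmoid choice are examples, not code taken from the repo.)

```cpp
#include <cmath>

// Example only: a logistic (sigmoid) transfer function with range (0, 1)
// and its derivative with respect to the input x.
double transferFunctionSigmoid(double x) {
    return 1.0 / (1.0 + std::exp(-x));
}

double transferFunctionSigmoidDerivative(double x) {
    double y = transferFunctionSigmoid(x);
    return y * (1.0 - y);  // d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x))
}
```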
#17
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
One thing I noticed, and which is confusing me, is that you seem to be treating the bias node just like any other neuron in the feed-forward operation. It seems to me that you would never want to feed the previous layer into the bias neuron, because its output is supposed to remain constant.

Also, this is causing the feedForward() method of neuron.cpp to loop out of bounds on the outputWeights vector, because it gets called on the bias neuron, which always has an index that is greater than the max index of outputWeights. This does not cause a crash in the C++ version because the [] operator does not do range checking, but if it is replaced with at() (which does do range checking), the code crashes. I'm not sure how well I explained this, but I hope you can understand it.
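(To make the point concrete, a minimal sketch of a feed-forward loop that skips the bias neuron; the struct and names here are placeholders, not the actual neuron.cpp code.)

```cpp
#include <cstddef>
#include <vector>

// Placeholder stand-in for the real Neuron class, just enough for the sketch.
struct Neuron {
    double output = 0.0;
    void feedForward(const std::vector<Neuron>& prevLayer) { /* weighted sum + transfer */ }
};

// Feed forward every neuron in a layer except the bias neuron (assumed to be
// the last element), so it is never asked for weights it does not have and
// its constant output of 1.0 is never overwritten.
void feedForwardLayer(std::vector<Neuron>& layer, const std::vector<Neuron>& prevLayer) {
    for (std::size_t n = 0; n + 1 < layer.size(); ++n) {
        layer[n].feedForward(prevLayer);
    }
    layer.back().output = 1.0;  // bias neuron stays constant
}
```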
#18
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
This happened because I had a thought, partially went through with it, and then forgot all about it later on. My thought was to feed forward into a bias node but force its output to be one. It seemed like a good idea at the time.

I changed how a network is constructed, which is in net.cpp. If I am not at the output layer, then the first node is the bias node. The bias nodes are added on top of the architecture given, meaning that if you give a topology of 2-4-4-2, the real topology will be 3-5-5-2. It still feels like I am missing something, however. I'll investigate further when I have less homework... Thank you so much for pointing this out.
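(A minimal sketch of that construction scheme, with a bias neuron prepended to every layer except the output layer; the names are illustrative, not taken from net.cpp.)

```cpp
#include <cstddef>
#include <vector>

struct Neuron { double output = 0.0; /* weights, gradients, etc. */ };

// Build layers from a user-supplied topology such as {2, 4, 4, 2}. Every
// layer except the output layer gets one extra bias neuron at index 0,
// so {2, 4, 4, 2} really becomes {3, 5, 5, 2}.
std::vector<std::vector<Neuron>> buildLayers(const std::vector<unsigned>& topology) {
    std::vector<std::vector<Neuron>> layers;
    for (std::size_t i = 0; i < topology.size(); ++i) {
        const bool isOutputLayer = (i == topology.size() - 1);
        std::vector<Neuron> layer(topology[i] + (isOutputLayer ? 0u : 1u));
        if (!isOutputLayer) {
            layer.front().output = 1.0;  // bias neuron, held constant at 1.0
        }
        layers.push_back(std::move(layer));
    }
    return layers;
}
```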
#19
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
I found something which greatly improves the performance of the learning process. In neuron.cpp, you calculate the derivative of the transfer function at the output value, when I am pretty sure it should be calculated using the sum of the inputs. Before I made that change, both your C++ version and my Java version would only find a good solution to xordata.txt maybe 10% of the time (I'm not sure if you had more success); after making that change, it works nearly 100% of the time. Also, I think you meant to call "transferFunctionTanHDerivative" instead of "transferFunctionTanH" on line 129 of neuron.cpp.
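(To illustrate the distinction, a minimal sketch assuming a tanh transfer function; outputGradient and the other names are placeholders, not the repo's actual code.)

```cpp
#include <cmath>

double transferFunctionTanH(double x)           { return std::tanh(x); }
double transferFunctionTanHDerivative(double x) { return 1.0 - std::tanh(x) * std::tanh(x); }

// Output-layer gradient for one neuron, with the derivative evaluated at the
// neuron's weighted input sum (what it received), not at its output value.
double outputGradient(double target, double outputValue, double inputSum) {
    double delta = target - outputValue;
    return delta * transferFunctionTanHDerivative(inputSum);
}
```

(For tanh specifically, the derivative can also be computed directly from the output as 1 - outputValue * outputValue, since tanh'(x) = 1 - tanh(x)²; the problem comes from feeding the output value back into a derivative function that applies tanh to it again.)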
On an unrelated note, you should probably include a .gitignore file in your repository so you don't accidentally commit backup and binary files.
#20
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
It was correct at one point... then I tried to add support for other logistic functions and messed it up. A large part of why I shared this code was so I could have more pairs of eyes look at it; I really only work on it when I have downtime in the lab, which isn't very often. It is slowly coming along. I'll try to think of more data to train on and make a file for it. (The logical 'and' wouldn't be hard to make. Once I add a visualizer I could do some classification or regression, but that's a little ways away.)

Yeah, I'm a git amateur, slowly learning.

Edit: I went ahead and added an anddata.txt file to give something else to train on.

Last edited by faust1706 : 22-04-2015 at 08:45.
#21
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
I also made some other test files, which can be found here.
I don't really understand why you moved the bias neuron to be the first one in each layer. The code does not work right anymore (lots of out-of-bounds accesses on the vectors and a weird crash in the destructor of Net). If you put the bias as the first neuron, you have to subtract 1 from the neuron index when getting its input weights from the previous layer, but only if you are not dealing with the output layer (because it does not have a bias). This creates (I think) unnecessary complexity.

Also, how do you get your test file (neural-net.cpp) to work? I had to make my own to be able to test your code.
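(A minimal sketch of the index bookkeeping being described, with the bias at index 0 of every non-output layer; the struct and names are illustrative, not taken from the repo.)

```cpp
#include <cstddef>
#include <vector>

struct Neuron {
    double output = 0.0;
    std::vector<double> outputWeights;  // one weight per non-bias neuron in the next layer
};

// With the bias neuron at index 0 of every layer except the output layer,
// neuron n in a hidden layer maps to weight index n - 1 in the previous
// layer (the bias itself has no incoming weights), while a neuron in the
// output layer maps directly to weight index n.
double weightedInputSum(const std::vector<Neuron>& prevLayer,
                        std::size_t neuronIndex, bool layerIsOutput) {
    const std::size_t weightIndex = layerIsOutput ? neuronIndex : neuronIndex - 1;
    double sum = 0.0;
    for (const Neuron& p : prevLayer) {
        sum += p.output * p.outputWeights[weightIndex];
    }
    return sum;
}
```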
#22
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
Sorry about the late response; I had my last round of tests before finals these past two weeks. I was having corruption problems with my git repo, so I deleted it and made a new one with the same name.

I got rid of neural-net.cpp and added a tester.cpp for easy testing. I also appear to have fixed the training issue, kind of: it worked 5 out of 5 times for me using tanh on the xor data. I'll have to look into sigmoid. It isn't optimized, but it will work; it's not intended to be used for deep learning or anything. The most I could see an FRC team using is 4 inputs, a couple of hidden layers, and 3 outputs. That is, current x, y, heading, and time as inputs and a desired x, y, and heading as outputs.

I also finished saving an entire network, that is, the architecture, transfer function, and weights. I have code to load a network that compiles, but I haven't tested it yet.
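(A minimal sketch of one way to serialize a topology, a transfer function name, and the weights to a text file; the layout here is a hypothetical example, not necessarily the format the repo actually writes.)

```cpp
#include <fstream>
#include <string>
#include <vector>

// Write the topology, the transfer function name, and the flattened weights
// to a plain text file, one group per line.
bool saveNetwork(const std::string& path,
                 const std::vector<unsigned>& topology,
                 const std::string& transferFunction,
                 const std::vector<double>& weights) {
    std::ofstream out(path);
    if (!out) return false;
    for (unsigned n : topology) out << n << ' ';
    out << '\n' << transferFunction << '\n';
    for (double w : weights) out << w << ' ';
    out << '\n';
    return static_cast<bool>(out);
}
```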
#23
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
Just to let you know that I haven't given up on this, I tested my code on a real robot yesterday and ... well it seems I must have written it too late at night.
I ended up getting the robot to oscillate around a setpoint, but then I changed something and the robot began to spin endlessly. After AP tests are over I'll hopefully be able to debug it some more (and get some video).
#25
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
Something you can do to see what is happening in terms of the NN is to save the error of the network after every iteration.
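(A minimal sketch of that idea: append the error to a CSV file after each training iteration so the learning curve can be plotted later. getRecentAverageError is a hypothetical accessor name, not necessarily what the repo uses.)

```cpp
#include <fstream>

// Append one "iteration,error" row per training pass so the learning curve
// can be opened in a spreadsheet or plotted afterwards.
void logError(std::ofstream& log, int iteration, double error) {
    log << iteration << ',' << error << '\n';
}

// Usage sketch inside a training loop:
//   std::ofstream log("error_log.csv");
//   for (int i = 0; i < numIterations; ++i) {
//       net.feedForward(inputs[i]);
//       net.backProp(targets[i]);
//       logError(log, i, net.getRecentAverageError());  // hypothetical accessor
//   }
```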
The code has a rather unfortunate bug right now that I don't have time to fix because of finals: it crashes when you try to save the state of the network (the weights). I'll try to find time to fix it as soon as I can. Let me know if you run into more problems / the same problem and I'll see if I can help further.
#26
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
I have gotten the saving and loading functionality working on my Java port. I modified your file format a little to make my parser simpler. I think most of my problems are caused by the robot code.
#27
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
That's good to hear, for me at least... It might be that there is too much influence from new data (alpha); that's something to look into, but it is unlikely if it is < 0.20. A simple cheat would be to dampen the motor values output by the net, if it comes down to that.
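(A minimal sketch of that dampening cheat: scale the network's output down and clamp it into the legal motor range before it reaches the motor controller. The 0.5 scale factor is just an example.)

```cpp
#include <algorithm>

// Dampen a raw network output before sending it to a motor controller:
// scale it down, then clamp it into the legal [-1, 1] motor range.
double dampenMotorValue(double netOutput, double scale = 0.5) {
    const double v = netOutput * scale;
    return std::max(-1.0, std::min(1.0, v));
}
```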
1706 is investigating swerve this summer. I can't wait to see what @cmastudios is able to do with it along with our swerve trajectory planner code.
#28
Re: Tired of tuning pesty PID loops for hours? Try Neural Networks!
Wow, I really want to have a chance to mess with all of this.