Will we see Artificial Neural Networks in FRC in 2010?

Hey guys,

When I heard that Java will be available as a programming option next year, I immediately thought of JOONE (http://www.jooneworld.com/). Here’s the question:

Will the cRIO and the Java virtual machine that will run on it be able to support JOONE? I’m really itching for the chance to apply ANNs in FIRST…

Thanks!
-Matt

Theoretically you could implement neural networks on any Turing-complete system (including LabVIEW, even :cool:), so I don’t think support for your specific library in the cRIO’s JVM has a lot to do with it. There are many free neural network libraries out there; the bigger questions are (1) whether the cRIO will have enough computing horsepower to evaluate a network in real time during a match, and (2) whether teams can put in the amount of work required to train a network in the short period of time we are given.
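Just to put “evaluating a network” in perspective, a small fixed network is only a handful of multiply-adds per neuron. A rough single-hidden-layer sketch in Java (made-up sizes, nothing to do with JOONE specifically) might look like this:

    // Hypothetical minimal feedforward net; layer sizes and weights are placeholders.
    public class TinyNet {
        private final double[][] wHidden; // [hidden neuron][input]
        private final double[][] wOutput; // [output neuron][hidden neuron]

        public TinyNet(double[][] wHidden, double[][] wOutput) {
            this.wHidden = wHidden;
            this.wOutput = wOutput;
        }

        // One forward pass: inputs (e.g. sensor readings) -> outputs (e.g. motor commands).
        public double[] evaluate(double[] inputs) {
            double[] hidden = new double[wHidden.length];
            for (int h = 0; h < hidden.length; h++) {
                double sum = 0.0;
                for (int i = 0; i < inputs.length; i++) {
                    sum += wHidden[h][i] * inputs[i];
                }
                hidden[h] = Math.tanh(sum); // squashing activation
            }
            double[] outputs = new double[wOutput.length];
            for (int o = 0; o < outputs.length; o++) {
                double sum = 0.0;
                for (int h = 0; h < hidden.length; h++) {
                    sum += wOutput[o][h] * hidden[h];
                }
                outputs[o] = Math.tanh(sum); // keeps motor commands in [-1, 1]
            }
            return outputs;
        }
    }

For a few dozen neurons that is only a few hundred floating-point operations per control loop, which is why I suspect training, not evaluation, is the real bottleneck.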

It is an interesting idea, however, and it sounds like a great off-season project, at least to start experimenting with. What kind of functionality were you thinking of implementing with ANNs?

–Ryan

If you Google neural networks and pretty much any language, including LabVIEW, you will find pages of theses, discussions, and even free code. As Ryan pointed out, the bigger issue is using it well. If they were magic, everyone else would already be using them.

Greg McKaskle

I don’t think we will see FIRST support neural networks in the near future. However, I do think we might start seeing teams try them out on their own, if they find them useful.

A reasonably sized network would easily run on the cRIO, for some definitions of reasonably sized. However, I’d be hard-pressed to find a way to train the network safely and quickly.

I co-authored a paper or two on neural networks designed for “long-term” memory tasks. Our NN was written in Java, and the execution time of a single net was quick enough to run on a cRIO. However, it took us tens of thousands of generations to evolve our solutions. Granted, our particular network was extremely flexible and therefore slower to evolve, but this would still be a significant hurdle for a team. Using a less powerful network would converge more quickly, but then it would be less powerful… Hand-coding the network is possible, but it really defeats the purpose: an ANN is supposed to teach itself tasks that you wouldn’t be able to code yourself.
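To give a feel for why training is the hurdle, the evolutionary loop itself is conceptually simple. A rough Java sketch (hypothetical fitness function and parameters, not our actual code) looks something like this:

    import java.util.Random;

    // Sketch of a simple (1+lambda) evolutionary loop over a flat weight vector.
    // evaluateFitness() is a placeholder: in practice it means running the robot
    // (or a simulator) with those weights, which is exactly the slow part.
    public class WeightEvolution {
        private static final Random RNG = new Random();

        static double evaluateFitness(double[] weights) {
            // Placeholder: score how well the network with these weights performs.
            return 0.0;
        }

        static double[] mutate(double[] parent, double stepSize) {
            double[] child = parent.clone();
            for (int i = 0; i < child.length; i++) {
                child[i] += stepSize * RNG.nextGaussian();
            }
            return child;
        }

        public static void main(String[] args) {
            int numWeights = 50;       // depends on network size
            int generations = 10000;   // "tens of thousands" is typical
            int childrenPerGen = 10;

            double[] best = mutate(new double[numWeights], 1.0); // random start
            double bestFitness = evaluateFitness(best);

            for (int g = 0; g < generations; g++) {
                for (int c = 0; c < childrenPerGen; c++) {
                    double[] child = mutate(best, 0.1);
                    double fitness = evaluateFitness(child);
                    if (fitness > bestFitness) {
                        best = child;
                        bestFitness = fitness;
                    }
                }
            }
            // 10,000 generations x 10 children is 100,000 fitness evaluations:
            // fine in simulation, hopeless on a real robot in a six-week build season.
        }
    }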

I’m sure you could do it in any of the languages we have to program with at this point. Case in point: I built a rudimentary one of about 24 neurons to control the drive motors within the first week of this build season, in C++, to show to the other programmers at the time.

What did the neural net do? Did it just randomly drive the motors, or was it trying to accomplish a goal on the field?

A goal. What we did was have the GA that optimized the NN’s weights push toward getting as close to the opposing alliance’s trailers as possible. We did that with a combination of the GetRobotAlliance() function and the camera. The problem was that we would either have to use two cameras to get the distance to another team’s trailer, or use our accelerometer to find another robot using a bunch of assumptions that weren’t necessarily true (using Newton’s law of gravitation and assuming the mass of the other robot). We couldn’t have two cameras due to limitations of the cRIO, and we couldn’t use the accelerometer because the crowd’s pull would distort things and it wasn’t sensitive enough for the precision required. So the system wasn’t going to fly at this point, but it could if next year’s KoP contains a new cRIO with 3 Ethernet ports and a new Axis camera, though I’m doubtful of that.

There are a few simple and easy ways to get a distance with one camera, though; we used the trig method.

Distance (in.) = (target height (in.) - camera height (in.)) / tan(camera angle)
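For example, with made-up numbers (37 in. target, camera mounted at 12 in., target seen 15° above horizontal), it works out to roughly 93 in.:

    public class CameraDistance {
        public static void main(String[] args) {
            // Hypothetical numbers just to show the formula in use.
            double targetHeightIn = 37.0;
            double cameraHeightIn = 12.0;
            double cameraAngleDeg = 15.0;
            double distanceIn = (targetHeightIn - cameraHeightIn)
                    / Math.tan(Math.toRadians(cameraAngleDeg));
            System.out.println(distanceIn); // ~93 inches
        }
    }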

But you got it to track trailers in the first week!? Ours still doesn’t even work properly (we had it working in the shop, but never at a comp) :o

(Also, breaking our driver station in the 2nd week didn’t help.)

There is no accelerometer in FIRST that can even come close to measuring the effect of universal gravitation from robots or people. The “crowd’s pull” did not affect your sensor, nor did you pick up robots.

I post this as a fun exercise in orders of magnitude, and do not intend it to be a performance in snobbery.

At 1 meter, a 150 lb robot will exert a 4.6 x 10^-10 g acceleration on your robot. The KoP accelerometer produces 300 mV/g, which means the signal you are reading would be 138 picovolts.

This would require the entire analog system to be good to at least 34 bits before you would get any signal. It would take the cRIO 2.3 minutes to oversample to this level (almost the entire match) if you focused an entire analog module on it. Realistically :slight_smile: , you would want another 6-10 bits so you could tell how far away the other robot is. This would bump the time up to the 1 hour to 4 hour range.
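For anyone who wants to check the arithmetic, the numbers fall out roughly like this (the 5 V full-scale value is just my assumption for the bit count):

    public class RobotGravity {
        public static void main(String[] args) {
            double G = 6.674e-11;               // gravitational constant (m^3 kg^-1 s^-2)
            double robotMassKg = 150 * 0.4536;  // 150 lb is about 68 kg
            double r = 1.0;                     // separation in meters

            double accel = G * robotMassKg / (r * r);   // ~4.5e-9 m/s^2
            double accelInG = accel / 9.81;             // ~4.6e-10 g

            double voltsPerG = 0.300;                   // KoP accelerometer: 300 mV/g
            double signalVolts = voltsPerG * accelInG;  // ~1.4e-10 V, i.e. ~140 pV

            // Bits needed to resolve that on an assumed 5 V full-scale input:
            double bits = Math.log(5.0 / signalVolts) / Math.log(2.0);  // ~35 bits
            System.out.println(signalVolts + " V, " + bits + " bits");
        }
    }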

Bear in mind that oversampling more than a couple extra bits isn’t really useful, as it requires the noise to be perfectly distributed. Oversampling 30 extra bits is pure fiction. Also, nothing could move during the sampling time.

Let’s assume that NI pulls through and donates a 40-bit, 100-samples-per-second A/D next year (these do not exist). Unfortunately, earth’s gravity will throw us off a bit. A change in the accelerometer’s inclination of more than 10^-10 radians would destroy our reading. Roughly speaking, if a straight stick as long as the earth is wide were attached to the accelerometer, a deflection of more than a millimeter would be unacceptable. For our extra 6-10 bits of range, our “earth-stick” could not deflect by more than the width of a human hair.

Edit: When Mars is at its closest to earth, its effect is 10 times that of the robot. At its farthest, its effect is a little less than double that of the other robot.

Edit 2: The moon’s effect is ~10,000 times stronger. The sun’s effect is ~6,000,000 times stronger. Considering the orbits, it takes 25 seconds for the moon and 90 milliseconds for the sun to change position enough to change the total acceleration vector by one “robot pull” unit.

Bobwrit, could you post your code, though? If you are going to IRI, the distance algorithms could be ironed out and your bot could be a great demonstration of what can be done with neural networks.

I’ll post it soon, but I have to recreate it (it’s on one of our school’s computers, which is locked for the summer, and it’s probably been deleted anyway). It’ll probably be a bit messy, though. We’re not going to IRI, unfortunately, but oh well. I’ll see if I can get the code out by then…

Edit: It won’t be ‘a bit’ messy; it’s really bad in terms of messiness, but hey, it’s coming together. The reason for the mess is that C++ won’t let you create dynamically named variables…
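You shouldn’t actually need dynamically named variables for that. The usual trick, in C++ or Java, is to index the weights in an array (or a vector) instead of naming each one separately; a rough Java sketch (placeholder sizes):

    import java.util.Random;

    public class WeightTable {
        public static void main(String[] args) {
            Random rng = new Random();
            // e.g. a 24-neuron fully connected layer instead of weight1, weight2, ...
            double[][] weights = new double[24][24];
            for (int i = 0; i < weights.length; i++) {
                for (int j = 0; j < weights[i].length; j++) {
                    weights[i][j] = rng.nextDouble() * 2.0 - 1.0; // random in [-1, 1]
                }
            }
            System.out.println("weight[3][7] = " + weights[3][7]);
        }
    }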