Autonomous mode... to the max

Hello again. My friend and I have been developing a unique program for a few months. There’s no real name for it, but it basically simulates simple life through Neural Networks. They are quite literally small digital forms of life. They are “born” with no knowledge, but through experimenting, they learn to move, eat, and survive.

Now, on to the real reason for this thread. The friend I mentioned has the ability to read pixel data from a webcam, and I have the ability to send signals through a serial port. Basically, what I’m wondering is: are we allowed to securely mount a laptop and webcam on the robot for autonomous mode? My thought is that we take this program and set it up so that when the Neural Net learns “Move Forward”, the serial port sends a signal to a digital input, which activates the wheels. I’m quite serious about this. I don’t know how well it would work learning-wise, but if it did, there would certainly be some impressed people.
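Here’s a rough sketch of the kind of thing I mean on the laptop side, assuming a POSIX serial port. The device path and the command bytes are just placeholders, not anything we’ve built yet:

```c
/* Rough sketch of the laptop side, assuming a POSIX serial port.
   The device path and the command bytes are placeholders. */
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

#define CMD_STOP    0x00
#define CMD_FORWARD 0x01   /* sent when the net's "move forward" output fires */

int open_serial(const char *path)
{
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tty;
    tcgetattr(fd, &tty);
    cfsetispeed(&tty, B9600);
    cfsetospeed(&tty, B9600);
    tty.c_cflag |= (CLOCAL | CREAD);   /* local line, enable receiver */
    tty.c_cflag &= ~PARENB;            /* 8N1 framing */
    tty.c_cflag &= ~CSTOPB;
    tty.c_cflag &= ~CSIZE;
    tty.c_cflag |= CS8;
    tcsetattr(fd, TCSANOW, &tty);
    return fd;
}

int main(void)
{
    int fd = open_serial("/dev/ttyS0");   /* hypothetical port */
    if (fd < 0)
        return 1;

    unsigned char cmd = CMD_FORWARD;      /* the net's decision */
    write(fd, &cmd, 1);                   /* one byte per decision */
    close(fd);
    return 0;
}
```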

Comments/Suggestions are welcome, but please try and keep it positive :wink:

Here’s the way I read the manual:

IF you can power the device from the robot battery,
IF you can interface it with the IFI Robot Controller,
IF the components meet the additional materials rules (namely the $200 limit on individual electronic components and the availability-to-all-teams rule), and
IF you can pull it off,

then it should be legal.

Even if it isn’t for the FRC, it’d still be a heck of a thing to try in the off-season. Best of luck!

As long as you only use it for testing or “walk-through” programming, I don’t think you will have a problem with the rules. If you try to do this in a competition, prepare to not pass inspection.
Can you have a camera on the robot? Yes, as long as it is within the rules.
Can you have another computer on the robot, in addition to the controller? Yes, but with the following restrictions (which may be wrong): The other computer must be a “slave” to the RC. The RC is the only thing that can directly control motors and other outputs.
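For illustration, the “slave” arrangement might look something like this on the RC side. The two helper functions here are made-up stand-ins, not the actual controller library:

```c
/* Hypothetical sketch of the "slave" arrangement: the RC polls a digital
   input that the laptop drives, and only the RC ever writes the motors.
   These two functions are made-up stand-ins for the real controller library. */
static int  read_digital_input(int pin)     { (void)pin; return 0; }        /* stub */
static void set_pwm(int channel, int value) { (void)channel; (void)value; } /* stub */

void user_loop(void)
{
    if (read_digital_input(1))    /* laptop raised its "move forward" line */
        set_pwm(1, 254);          /* full forward (IFI PWM range is 0-254) */
    else
        set_pwm(1, 127);          /* neutral */
}
```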

Now, if you could pull it off and explain how you did it (particularly if you involve students and they can explain it), you’d probably be in the running for some programming-type awards, as long as you stay within the rules.

Well, that much at least I’m pretty sure can be done. Look here: http://www.chiefdelphi.com/forums/showthread.php?t=39395&highlight=gumstix

lol - I am a student. Chief programmer for the team.

And don’t worry, the ONLY thing the computer would be doing is sending 1s and 0s to Digital IOs. Think of it as a super crazy-advanced sensor of ultimate mind-boggling win.

As for the price limit, suppose the laptop was a piece of crap that a company was throwing away, and we took it? Does that count as spending money? (What if you couldn’t sell the laptop for even $50?)

Also… we can’t use the Laptop’s built-in battery? …that could be a problem…

The price (value) of any part on your robot is the price at which anyone else would be able to purchase one just like it, so that every single team could obtain one (if they wanted to) at the same price.

So unless you know someone who is willing to ‘sell’ about 1000 laptop computers (used or otherwise) for the same price, the answer is no. Donated or dumpster-dived parts are not allowed.

You might be able to find a small single board computer (S100 for example), or possibly something like a PDA within the price constraints, then port your code to that platform.

Well, I’ve been talking with my friend here, and he seems quite confident that we can write a VB app with a webcam to teach the neural net, and then port it straight onto the robot’s controller to use the CMUcam.

[edit] Ok, we’re pretty sure we can do this; however, we need to read the pixel-by-pixel data from the camera. Does anyone know where that is stored?

Out of curiosity, could you provide more details about this neural network of yours? It seems really interesting. How does it learn? Are there any artificial-intelligence theories you have based it on?

Thanks :slight_smile:

Hey Total,

This sounds like a really interesting project. What language did you write the code in? Do you want to elaborate a bit further on how it works? I’m just awfully curious from what I’ve read so far.

The CMUcam is nothing like a webcam. It is designed to process the pixel data directly within the camera, and only pass data about where colors are located out of its serial port. You can access the pixel data directly; however, it takes about 5 seconds to transfer a single frame over the serial port. I don’t think this will do what you need it to.

Our team had an engineer who did the same kind of work for a robotics firm. He showed us some of his rifle-tracking software; it was pretty cool. It was a turret-mounted system that used a little camera on top, programmed to follow up to 90 different targets in a picture and determine whether each was friend or foe. He said the camera we used last year didn’t work for anything, and that a $30 webcam would do a million times better job. Hopefully this year’s camera is better.

Heh… You’re gonna shoot me.

The Neural Net was originally programmed in VB.

As far as I know, it learns based on trial and error. When it does something right, it gets a reward; something wrong means a punishment. For instance, in the original program, eating was rewarding because it gave them energy. Not eating was a punishment because they were losing energy. Moving was a punishment because it uses energy, but moving toward food outweighed the punishment with the potential for a reward. Some of the “organisms” actually never figure this out and just sit there and die. Natural selection also occurs: when an organism “breeds” (gets too big and splits), both of the new ones undergo a random mutation. Those that get good mutations go on; those with bad mutations die. Obviously it won’t be this complex on the robot, since we will have a very limited amount of RAM, but it will still be a basic variant of a backpropagation neural network.
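To give a rough idea, the reward/punishment loop boils down to something like this. This is a toy C sketch, not our actual VB code, and all the names and constants are made up:

```c
/* Toy sketch of the reward/punishment idea: each action's weight is nudged
   up when it gained energy and down when it lost energy. All names and
   constants are illustrative, not the actual simulator code. */
#include <stdlib.h>

#define NUM_ACTIONS 4          /* e.g. forward, back, left, right */
#define LEARN_RATE  0.1

double weight[NUM_ACTIONS];    /* learned preference for each action */

int pick_action(void)
{
    if (rand() % 10 == 0)               /* occasionally experiment */
        return rand() % NUM_ACTIONS;
    int best = 0;                       /* otherwise take the favorite */
    for (int a = 1; a < NUM_ACTIONS; a++)
        if (weight[a] > weight[best])
            best = a;
    return best;
}

void learn(int action, double energy_change)
{
    /* positive energy_change (found food) rewards the action,
       negative (wasted motion) punishes it */
    weight[action] += LEARN_RATE * energy_change;
}

int main(void)
{
    int a = pick_action();
    learn(a, 1.0);     /* pretend the move led to food */
    return 0;
}
```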

My friend SubQuantum will elaborate more tomorrow after he has gotten some sleep, and I can’t continue this post, since a massive thunderstorm just randomly started happening above my house.

[edit] On Second thought, I’m still here. I’m using our team’s Programming laptop :slight_smile:

What you describe sounds more like a genetic algorithm than a neural network, but either way I think it’s kind of silly. Both algorithmic paradigms work only if you very carefully design the starting conditions, and even then only for certain types of problems. You’ll need some sort of feedback for it to learn as well. I suppose you could do all of these things, but it would be much harder than implementing the behavior with your own neural network (ba dum tssh), which would almost certainly function an order of magnitude better.

That said, I want to see it done. :smiley:

The perfect processor for this is a Gumstix, which is a small Linux computer with lots of convenient inputs and outputs. It’s a piece of cake to interface to a robot controller, and perfectly FIRST-legal to the best of my knowledge.

Go for it!

Sounds like fun! I hope it works out for you guys, but I can’t say I can help you. If you get it to work, you’ll have a programming award for sure!

Actually, I believe this is by definition a neural network. It receives inputs to its input nodes and then, through a series of weights, determines which output nodes to trigger. That’s how it learns to walk: it experiments until it learns how to move accurately, and then experiments until it learns how to move intelligently. When I get a minute I’ll upload the demo of the environment for you. It still needs some work to keep a balance and have the life survive, but an older form of the simulation actually acted extremely intelligently. The organisms began to shrink in size to conserve food, so one food particle lasted them a long time, and then they even became multicellular organisms (colonies of single-celled organisms that moved and behaved together). In this case, their input nodes were driven by how much food was around.

For the purposes of this robot, we’ll have an input node for each sensor it can read, one (or more; SubQuantum will be able to decide better) neuron for the camera, and then 20-30 neurons in between, leading to the 5-6 output neurons, which control “forward”, “backward”, “left”, “right”, “turret up”, “turret down”, and probably others.
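In rough C terms, the forward pass would look something like this. The sizes and the sigmoid squashing function are guesses until SubQuantum settles them:

```c
/* Sketch of the forward pass for the topology described above: one input
   per sensor, ~30 hidden neurons, one output per drive command. The sizes
   and the sigmoid squashing function are assumptions. */
#include <math.h>

#define N_IN   8   /* one per sensor -- assumed count */
#define N_HID 30
#define N_OUT  6   /* forward, backward, left, right, turret up/down */

double w_ih[N_HID][N_IN];    /* input -> hidden weights */
double w_ho[N_OUT][N_HID];   /* hidden -> output weights */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

void forward(const double in[N_IN], double out[N_OUT])
{
    double hid[N_HID];
    for (int h = 0; h < N_HID; h++) {
        double sum = 0.0;
        for (int i = 0; i < N_IN; i++)
            sum += w_ih[h][i] * in[i];
        hid[h] = sigmoid(sum);
    }
    for (int o = 0; o < N_OUT; o++) {
        double sum = 0.0;
        for (int h = 0; h < N_HID; h++)
            sum += w_ho[o][h] * hid[h];
        out[o] = sigmoid(sum);   /* e.g. fire the motor when out[o] > 0.5 */
    }
}
```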

If possible, I’ll make it output the network activity to a laptop so we can actually see the neurons, where they are relative to each other, and what its brain is doing, exactly.

…I’d really like to make the robot operate entirely autonomously for the entire match… but I doubt that’ll happen. It’s possible though…

Congrats on learning enough about neural networks (NNs) to want to use them on your robot. I’ve done quite a bit of research with NNs and genetic algorithms (GAs). The life simulator you describe is actually a fairly common thing when studying NNs and GAs. Each NN is described by a sequence of DNA; the GA breeds these DNA sequences and selects only the best to continue breeding. The GA is a technique to train the NN. The main problem with controlling your robot with a NN is the size that will be needed (I will explain this later) to process all of the data that your sensors collect. The GA is a heuristic search that will eventually find the correct NN. However, a bad NN could easily cause your robot to damage itself during its training.
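As a toy example of how the GA half works (all sizes and rates here are arbitrary, and fitness would come from actually running each NN):

```c
/* Toy sketch of the GA-trains-NN idea: each genome is a flat array of NN
   weights, fitness comes from running that NN, and children are built by
   crossover plus small random mutations. All sizes and rates are arbitrary. */
#include <stdlib.h>

#define POP       20
#define GENES    100      /* flattened NN weight count */
#define MUT_RATE 0.05

double genome[POP][GENES];
double fitness[POP];      /* filled in by evaluating each NN on the task */

static double rand_unit(void) { return (double)rand() / RAND_MAX; }

int fittest(void)         /* index of the best genome so far */
{
    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness[i] > fitness[best])
            best = i;
    return best;
}

void breed(int parent_a, int parent_b, int child)
{
    for (int g = 0; g < GENES; g++) {
        /* uniform crossover: each gene comes from one parent at random */
        genome[child][g] = (rand() % 2) ? genome[parent_a][g]
                                        : genome[parent_b][g];
        if (rand_unit() < MUT_RATE)     /* occasional small mutation */
            genome[child][g] += (rand_unit() - 0.5) * 0.2;
    }
}
```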

Since a feedforward NN basically boils down to a long polynomial function of the inputs, there is no fancy mystery about how it works. Each layer of the network will divide the range of inputs into two groups, on and off. Hundreds of layers are needed to process the hundreds of pixels from a camera.
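For instance, a single unit is just a weighted sum pushed through a threshold. With the right weights (the values below are arbitrary), one unit behaves like an OR gate:

```c
/* One threshold unit: a weighted sum of inputs turned into "on" or "off".
   The weights and bias below are arbitrary; they happen to make an OR gate. */
#include <stdio.h>

int threshold_unit(const double *in, const double *w, int n, double bias)
{
    double sum = bias;
    for (int i = 0; i < n; i++)
        sum += w[i] * in[i];
    return sum > 0.0;    /* "on" if the weighted sum clears the threshold */
}

int main(void)
{
    double w[2] = {0.6, 0.6};
    double bias = -0.5;
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++) {
            double in[2] = {(double)a, (double)b};
            printf("%d OR %d = %d\n", a, b, threshold_unit(in, w, 2, bias));
        }
    return 0;
}
```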

That’s all I have time for now, but I would be happy to discuss and argue about NNs, GAs, and artificial intelligence.

P.S. If I haven’t convinced you to stop using neural networks on your robot, let me know, because I would love to help you succeed despite my predictions. Plus, I had the same dream during my first year in FIRST.

Wow… that seems pretty complex to code… are you sure the RC is capable of running something like that?