Programming Competition: Autonomous Simulator

NOTE: I created this thread to move this specific discussion out of the original thread, Interest in Programming Competition?.

Here is my idea to implement this:

You submit 4 files (or a zip containing them):

  • A hex with your compiled program
  • A binary DirectX file that describes the shape of your bot
  • An INI describing the positions and connections of motors, sensors, etc.
  • A zip containing source code. (Optional?)

To develop:

  • A program to stream the simulation (or just generate an animation)
  • A virtual machine (figuratively speaking) to run the hex (or any of the generated files?)
  • A simulator to actually take output values from the VM, apply them to the model, add physics, and give the VM its inputs (see the sketch after this list)
  • An optional OI web app? Many auto programs read values from the OI during the disabled period to set modes and such. It may also include an output window/dashboard.
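
To make the VM/simulator split concrete, here is a minimal sketch of what the main loop might look like. Everything in it is hypothetical: the VirtualMachine and PhysicsWorld classes, their methods, and the toy behavior are placeholders for discussion, not an existing design.

```cpp
// Hypothetical sketch of the simulator main loop: read outputs from the
// submitted program, step the physics model, feed sensor values back in.
#include <array>
#include <cstdint>
#include <iostream>

struct VmOutputs { std::array<uint8_t, 16> pwm{}; std::array<bool, 8> relay{}; };
struct VmInputs  { std::array<uint16_t, 16> analog{}; std::array<bool, 18> digital{}; };

// Placeholder for whatever actually executes the hex / user code.
struct VirtualMachine {
    void step(const VmInputs& in, VmOutputs& out) {
        // Toy "user program": drive both sides forward until the pot reads 512.
        uint8_t drive = (in.analog[0] < 512) ? 255 : 127;
        out.pwm[0] = drive;
        out.pwm[1] = drive;
    }
};

// Placeholder physics: one "mechanical train" turning a pot, no real dynamics.
struct PhysicsWorld {
    double potValue = 0.0;
    void apply(const VmOutputs& out, double dt) {
        double speed = (out.pwm[0] - 127) / 128.0; // -1..1
        potValue += speed * 100.0 * dt;            // arbitrary scaling
    }
    VmInputs readSensors() const {
        VmInputs in;
        in.analog[0] = static_cast<uint16_t>(potValue);
        return in;
    }
};

int main() {
    VirtualMachine vm;
    PhysicsWorld world;
    VmInputs in{};
    VmOutputs out{};
    const double dt = 0.02;                      // ~50 Hz loop
    for (int tick = 0; tick < 15 * 50; ++tick) { // 15-second autonomous period
        vm.step(in, out);         // run one pass of the user program
        world.apply(out, dt);     // motors -> mechanical trains -> model
        in = world.readSensors(); // pots, encoders, etc. back to the VM
    }
    std::cout << "final pot value: " << in.analog[0] << "\n";
    return 0;
}
```

The point is just the data flow: outputs from the VM go into the model, physics advances, and sensor readings come back as the next set of inputs.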

My basic idea for implementing the INI file is this (a rough example appears after the list):

  • Only kit parts (or parts entered by the organizers) may be used as the beginnings and ends of mechanical trains. (Our arm may be difficult; we’ll see.)
  • There is a series of ‘Connectors’ or ‘Hubs’ between mechanical train (MT) ends/beginnings.
  • The final ratio of a connection (between two connected items: Connector, wheel, motor, sensor, etc.) is given.
  • Also given for each connection are the direction, the driver, and the driven.
  • Every Connector has to be the driver in at least one connection and the driven in at least one connection.
  • Ideally, the drivers can be back-driven by the driven.
    To get specific with items:
  • Each motor has: Type, Polarity, Input (Byte), and Output (RPS)
  • Each potentiometer has: Turns, Input (RPS, a smart pot), Output (10-bit), and pin #
  • Each encoder has: EdgeCount, Input (RPS), Output (boolean), and pin #
  • Each Connector is a collection of objects with: Rate (RPS) and IO (Enum, Input or Output)
  • Each item is smart and sends/receives rates to handle for itself (a pot will increment, an encoder will count).
  • Other sensors (gyros, line followers) will be handled appropriately.
  • For simplification, gyros will return the known rate of turn, and the Banner sensors for lines will be perfect.
  • IR servo/sensor combos will have: Position, Value (Boolean)
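
To make this concrete, here is a rough, hypothetical example of what a couple of entries might look like. None of these section or key names are settled; they only illustrate the driver/driven/ratio idea described above.

```ini
; Hypothetical example only -- section and key names are illustrative
[Motor1]
Type=CIM
Polarity=Forward
Input=PWM01          ; byte value (0-255) supplied by the VM
Output=RPS           ; free speed scaled from the input byte

[Pot1]
Turns=10
Pin=1                ; analog input pin
; Input is a rate (RPS) from its connection; output is a 10-bit value

[Connection1]
Driver=Motor1
Driven=Pot1
Ratio=12.75          ; final reduction between driver and driven
Direction=Same
```

A loader would walk the [ConnectionN] sections to build the mechanical trains and then let each “smart” item convert rates for itself, as described above.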

This obviously needs to be revised and expanded (not to mention coded!). If we can choose a language, we can start coding.

I know this is going to be complex (no one said it would be easy!), but I think we can rise to the challenge. I’ll wait for responses before continuing.

I will voice my concerns once more. This seems unnecessarily complex. Sure, you could code all these things … but are they necessary? You could even add a screw class and put each individual screw on the robot, but that seems like overkill. It all goes back to the point of the competition: do you want it to be an actual simulation of a robotics match, or just a simple programming competition that is related to FIRST in some way?

I think it may be helpful to write down and share a sample problem, so that others can see exactly what you want the competition to be. If you want the problem to be something that requires a near-perfect simulation of a FIRST match, then, in my most humble opinion, that would take quite a while to program and debug (is the target to have it done this year, or is this a project you plan on implementing next year, using this year to work it out?).

I know I’ve already said all this … but I worked on such a simulation of a robotics match as part of my mentorship at my school for last year’s game, and unfortunately I could not complete it because of its sheer size (in particular, the ODE physics support in CrystalSpace, the cross-platform graphics engine I used, was still under development and “buggy,” to say the least!). I hope, therefore, that I can offer some perspective on just what is needed (I’m not sure whether you’ve undertaken a large programming project before … but it always seems a lot easier at the outset, I assure you ;))

You can feel free to ignore this (:)), as I probably won’t be able to participate, but I personally feel that it would be better to have one robot that everyone must program, instead of allowing them to make their own. It takes some of the challenge out of it if you can add a gazillion (:)) sensors around the edges of your robot, instead of having to design a way to work with only two sensors. Whatever, you can ignore me. :wink:

–EDIT–
At the very least, some rules regarding number of sensors and such should probably be added.

For an autonomous programming competition, I think it would be best to go simple like mtrawls said. The focus is on programming using sensors to achieve a certain goal. A setup like what Astronouth7303 suggested would be more like the Autodesk Inventor competition.

A maze would be a simple problem that I think we should try this year.

BUT, since Astronouth7303 started this discussion to deviate from the original programming competition, I’m going to return the competition discussion to the original thread I created.

Finally, my team began discussing an implementation for the competition structure. Check the original thread for continued discussion.

I know this would be extraordinarily complex; that’s why I’m not even trying to go solo on this. Obviously, there would be rules on sensors and such (16 PWMs, 18 digital I/O, 16 analog inputs, 8 relays).

Obviously, there would be a template bot for this; however, I feel that if a team wants to take the time to make their own bot, they should have the option to do that. Plus, it’s way cooler.

If you are afraid of someone programming something that’s impossible, that’s what people are for.

This is the simplest way I could think of without using AutoCAD (several thousand dollars per license). If you look at LDraw, it uses a text-based format with 5 line types. It is simple, but you can do a lot with it (though not curves). Same with this: you can’t do EVERYTHING, but you can come pretty close.

This will probably end up as a post-season project, but I think the FIRST programmer community can do this, especially if everyone pitches in a little bit. You know the hundreds/thousands of 3D games that contain a physics engine? We should be able to wrangle up something. Plus, you don’t have to have a human interface.

I’m surprised: no one mentioned the difficulty of a simulator; you’re all talking about the engine. Maybe it will be a set of intertwined DLLs developed in many languages.

I think about the rewards of even coming halfway to perfecting this: an autonomous simulator, something to test a bot on while it is being built (or shipped). Such a simulator would be the coolest thing I’ve seen since I registered (a mere 57 days ago!), and more than worth it.

Granted, this project will be HUGE, definitely going to live on SourceForge or someone’s web space (volunteers?), but the journey will be fun (plus, someone can tell me why I can’t init variables in the middle of a procedure). OK, please don’t quote me on that.

So maybe we can make an attempt at the 2nd-largest open source project (1st: Linux). Maybe we’ll fail miserably in the attempt. It isn’t impossible; such projects have been done before, and maybe with pooled intellect there will be enough brains to tackle this 1,000-pound quarterback (bad metaphor). Such projects aren’t impossible, just hard. But Gates didn’t write Windows, and Linus didn’t make Red Hat. We won’t know ’til we try.

So who’s with me?

Last fall, our team invented a mini-challenge which involved building a FIRST-scale robot for practice. (We’re a rookie team and didn’t want the six-week build period to be our first robot-building experience.) It was tremendously useful and greatly improved our performance in the real FIRST competition. You can read about our practice challenge on our web site: http://www.issaquahrobotics.org/challenge/index.html

Anyway, one thing this made pretty obvious is that the software team doesn’t get much time to program. So, while we were waiting for the robot to get built, we programmed some with a little EduKit robot. To try out some algorithms, I wrote a simple 2-D simulation in C++ which abstracted the playfield, sensors, and motors. It ran the code in user_routines.c from the IFI code without modification (except for changing the header files), although it would be difficult or impossible to simulate the bugs in the PIC compiler. I didn’t fully simulate the PIC hardware, so analog inputs and PWM outputs worked, but I didn’t do things like interrupts.
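
For anyone wondering how that kind of header swap can work, here is a bare-bones sketch of the general idea. The names are placeholders (sim_pwm, sim_analog, user_autonomous_step), not the real IFI identifiers, and the "plant model" is deliberately trivial.

```cpp
// Sketch of the header-swap idea: the simulator owns plain variables that
// stand in for the robot controller's I/O, and user code reads/writes them.
// All names here are placeholders, not the real IFI identifiers.
#include <cstdint>
#include <cstdio>

// --- sim_io.h (replaces the hardware-mapping header) ---
extern uint8_t  sim_pwm[16];     // PWM outputs the user code writes
extern uint16_t sim_analog[16];  // analog inputs the simulator fills in

// --- user code, compiled unchanged against the swapped header ---
void user_autonomous_step() {
    // Toy example: drive straight until the pot passes a threshold.
    uint8_t drive = (sim_analog[0] < 600) ? 254 : 127;
    sim_pwm[0] = drive;
    sim_pwm[1] = drive;
}

// --- simulator side ---
uint8_t  sim_pwm[16];
uint16_t sim_analog[16];

int main() {
    double pot = 0.0;
    for (int tick = 0; tick < 200; ++tick) {  // ~4 seconds at 50 Hz
        sim_analog[0] = static_cast<uint16_t>(pot);
        user_autonomous_step();               // same call the 2-D sim would make
        double speed = (sim_pwm[0] - 127) / 127.0;
        pot += speed * 10.0;                  // crude plant model
    }
    std::printf("pot ended at %d\n", static_cast<int>(sim_analog[0]));
    return 0;
}
```

The same trick should extend to digital inputs and relays; interrupts and compiler quirks are a different story, as noted above.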

Later, more for fun than anything else, I built a 3-D model of the 2004 playfield and our robot using OpenGL. I like OpenGL better than DirectX since it runs on multiple platforms. After I got it working on OS X, it was very quick and easy to get it running on NT/2000/XP. Porting to Linux should be equally easy; only the code that puts the OpenGL display into an OS window and the event code has to change. Of course, if you do a good job of keeping your physics, rendering, and event code separated and limit OS-specific code, it’s not too hard to port to another platform.
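
Just to illustrate that separation (this is a generic sketch of the usual pattern, not code from that project): keep the simulation loop against a small platform-neutral interface and let one class per OS own the windowing and event calls.

```cpp
#include <iostream>

// Platform-neutral interface; only concrete subclasses touch OS APIs.
class Window {
public:
    virtual ~Window() = default;
    virtual bool pumpEvents() = 0;  // false when the user closes the window
    virtual void swapBuffers() = 0; // present the OpenGL frame
};

// A do-nothing stand-in so the sketch runs; a real Win32Window, CocoaWindow,
// or X11Window would wrap the OS windowing and event calls.
class StubWindow : public Window {
    int framesLeft = 3;
public:
    bool pumpEvents() override { return framesLeft-- > 0; }
    void swapBuffers() override { std::cout << "frame presented\n"; }
};

// The physics/render loop never changes when a new platform is added.
void runLoop(Window& window) {
    while (window.pumpEvents()) {
        // physics.step(dt);   // platform-independent
        // renderer.draw();    // plain OpenGL calls, also portable
        window.swapBuffers();
    }
}

int main() {
    StubWindow window;
    runLoop(window);
    return 0;
}
```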

I did not do any physics modeling at all. The 2-D simulator would stop whenever the robot crossed any of the playfield boundaries, but the 3-D model didn’t do any hit detection at all, much less physical interactions.

Writing a 3-D physics engine is a LOT of work. I worked at a game company that created such an engine for creating Nintendo and PS2 games. It took many person-months to get it working with a specific set of objects interacting, on the same order of magnitude as what you’d need to handle a reasonable variety of robot designs on a given playfield. It’s easy to get a sphere bouncing on a flat rigid plane. It’s even pretty easy to get a spinning top to work correctly from pure physics principles without writing “spinning top” specific code. But, simulating friction and having multiple objects interact correctly is quite difficult. For example, having a bunch of blocks jumbled into a pile and behaving correctly is incredibly hard.
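
To give a sense of the gap: the “easy” end really is tiny. A sphere bouncing on a rigid floor is a few lines of Euler integration (this is just the textbook version, not anything from that engine), while friction and piles of interacting objects need far more machinery.

```cpp
#include <cstdio>

// The "easy" case: a sphere bouncing on a flat rigid plane at y = 0,
// integrated with simple Euler steps and a coefficient of restitution.
int main() {
    double y = 2.0;           // height in meters
    double v = 0.0;           // vertical velocity, m/s
    const double g = -9.81;   // gravity
    const double dt = 0.001;  // time step, s
    const double e = 0.8;     // coefficient of restitution
    const double radius = 0.1;

    for (int step = 0; step < 5000; ++step) {
        v += g * dt;
        y += v * dt;
        if (y < radius && v < 0.0) {  // hit the floor: reflect and damp
            y = radius;
            v = -e * v;
        }
        if (step % 500 == 0)
            std::printf("t=%.2fs  y=%.3fm\n", step * dt, y);
    }
    return 0;
}
```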

One of the most interesting challenges in writing real-time robot code is dealing with flaky sensors, non-linear motors, uneven friction, squishy wheels, chain slippage, and lumpy carpet. These things are all difficult to simulate realistically. It’s easy to write a line-following algorithm with perfect sensors; it’s much harder to write one that works with sensors that give uneven and inconsistent results and sometimes just flat-out lie.
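
If the simulator ever wants to capture even a little of that, one cheap first step (purely a sketch of the idea, not part of any plan above) is to pass every perfect reading through a noise-and-lies filter so the user code has to cope.

```cpp
#include <cstdio>
#include <random>

// Sketch: wrap a perfect simulated reading in jitter, clamping, and the
// occasional outright lie, so user code has to cope like it would on carpet.
class NoisySensor {
    std::mt19937 rng{42};
    std::normal_distribution<double> noise{0.0, 8.0};       // counts of jitter
    std::uniform_real_distribution<double> chance{0.0, 1.0};
public:
    int read(double trueValue) {
        if (chance(rng) < 0.02)            // 2% of readings just lie
            return static_cast<int>(chance(rng) * 1023);
        double v = trueValue + noise(rng); // Gaussian jitter
        if (v < 0) v = 0;
        if (v > 1023) v = 1023;            // clamp to the 10-bit range
        return static_cast<int>(v);        // quantize like an ADC would
    }
};

int main() {
    NoisySensor pot;
    for (int i = 0; i < 10; ++i)
        std::printf("true=500  read=%d\n", pot.read(500.0));
    return 0;
}
```

The numbers (2% lie rate, 8 counts of jitter) are made up; the point is only that code consuming the readings should never assume they are clean.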

I would be interested in looking at it.

That’s easy to do in an object-oriented language.

That is one of the questions: build for what? Win32? MacOS? OS X? Linux? Java?

Friction probably won’t come until version 5. And a robot won’t interact with itself.

Like I said, the first couple of versions will only do what’s expected, not what will actually happen. Our bot uses a CIM and a drill motor to drive each side, but without the drill, it can’t spin on carpet! In v0.5 of this, it would.

But the sooner v0.1 is made, the sooner we can get v5 made.

I’d say build for Win32 (:gaa: ) first, then MacOS (:)) and Linux (:)). I don’t know the number of people here who use Linux, but I’m sure that Win32 outnumbers everything else.

–EDIT–
Oh, yeah, and I agree with the “get the skeleton up and running soon” philosophy. :slight_smile: