Tactical information?

One of the things about this year’s setup that I thought was cool is how you can query the FMS and get information about things like your alliance. What this made me wonder about, though, is obtaining other information. It seems to me that the major barrier to developing some very cool expert systems, automatic strategy-processors, etc. is the lack of information such as “how many points has the blue alliance scored” or “where is the nearest opposing robot”. For our team, at least, that means that the robot’s ability to operate autonomously remains limited to robot “internals”–i.e., the robot can track a target, but has to be told where and when to do it. This in turn means that most of this “tactical-level” processing has to be done by the human drivers, which is obviously not a perfect process.
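
For concreteness, the alliance query I mean looks something like this. I’m sketching it in WPILib-style Java; the exact class and method names are my assumption, since they vary by season.

```java
import edu.wpi.first.wpilibj.DriverStation;

// Sketch: reading the per-match data the FMS already hands us.
// WPILib-style Java; exact method names vary by season, so treat as assumed.
public class AllianceInfo {
    public static void report() {
        DriverStation ds = DriverStation.getInstance();
        DriverStation.Alliance alliance = ds.getAlliance(); // red or blue
        int station = ds.getLocation();                     // player station 1-3
        System.out.println("Alliance: " + alliance + ", station: " + station);
    }
}
```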

The way that we’re trying to solve this is by employing our drivers as extremely sophisticated sensors who input information during the match, so that it can be interpreted by an expert system designed during our strategy sessions. However, it seems like this would duplicate a lot of the data the FMS already collects, such as real-time scoring. What would be really useful would be for the robot to be able to access RTS and possibly other data, such as robot positions on the field. Obviously, some parts of this could be game-specific, but it could be designed to use a number of flexible “channels” so that FIRST could publish specifications about how to use the system for any particular game.

Where’s the fun in that? :stuck_out_tongue:

I agree that it would be helpful to have access to info that FMS already tracks, such as RTS like you point out, but saying to FIRST “redesign FMS to track robots, and game pieces, and field elements, and …” sounds to me like you’re saying “do it for me, I don’t want to.” For me, most of the fun in programming is the challenge of figuring out how to make things work myself using systems that I’ve designed. Forgive me for being egocentric.

From another, often-taken point of view, FIRST is to some degree supposed to introduce real-life situations (granted, with some control over the environment) that robots might have to adapt to. Take unmanned search and rescue, for instance, a “hot topic” in the field of robotics right now. There’s no “God” system that automatically tells the robots “oh hey, there’s a person hidden under 20 ft. of rubble over there, go get him.” It’s the robot’s job to find that out.

If tracking the positions of other robots is something that you want to do, figure out some sensors that could accomplish that. FIRST didn’t give us a powerful new control system for nothing. I’m personally very interested in exploring what vision systems are capable of; we’ve barely scratched the surface of what they can do. In the off-season last year, I came up with a system (using a co-processor) that could very robustly identify the positions of the track balls when they were on the overpass. Yes, it took a lot of work, but it is possible. And that’s just with one sensor package; you get real power once you start combining multiple sensor readings.

To get you started, think about this year’s game. They gave you brightly colored targets to track robots, and the orbit balls are pretty brightly colored against a white playing field. But even if you didn’t have the targets on the robots, you could still do a pretty good job of tracking them: on the field, there are basically two classes of fast-moving objects, robots (I’m including trailers with robots) and orbit balls. So by tracking movement in the scene, you can isolate those parts. There will be issues because your robot is moving as well, but there are algorithms to track the ground plane, so you could use those to isolate the movement of other things. Then, because robots are much bigger than orbit balls, you can separate the fast-moving objects into classes. FIRST always requires an identifier in a relatively pre-defined location on the robots to show which alliance they’re on (trailer bumper color this year, flags in the previous couple of years), so you can even track that if you want to. Just my initial thoughts.
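
To make that a little more concrete, here’s a rough sketch of the differencing-and-size-classification step. All the thresholds are made-up placeholders, the connected-component pass that groups moving pixels into blobs is omitted, and I’m assuming ego-motion has already been compensated for:

```java
// Rough sketch of the motion-based classification idea above. Assumes two
// consecutive grayscale frames with ego-motion already compensated for;
// thresholds are made-up placeholders you'd tune on real footage.
public class MotionClassifier {
    static final int DIFF_THRESHOLD = 30;    // pixel change that counts as motion
    static final int ROBOT_MIN_PIXELS = 800; // blobs bigger than this => robot

    // Mark every pixel whose brightness changed noticeably between frames.
    static boolean[][] motionMask(int[][] prev, int[][] curr) {
        int h = curr.length, w = curr[0].length;
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y][x] = Math.abs(curr[y][x] - prev[y][x]) > DIFF_THRESHOLD;
        return mask;
    }

    // After grouping moving pixels into blobs (connected components, omitted
    // here for brevity), classify each blob by its size in pixels.
    static String classify(int blobPixelCount) {
        return blobPixelCount > ROBOT_MIN_PIXELS ? "robot (with trailer)" : "orbit ball";
    }
}
```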

And you now have essentially the whole offseason ahead of you, so you have plenty of time to experiment with these kinds of ideas. If you need ideas on how to get started on sensing other things in the environment, talk to your mentors; that’s what they’re there for. Or, if you want, feel free to start a discussion on CD. There are plenty of people with years of experience in FIRST and industry who would be glad to help you out.

Good luck,
–Ryan

IMO, a standardized field information system would actually provide a lot more room for innovation. Right now, any team that wants to do something like your trackball tracker has to develop it from scratch. That might be okay if you just want to look at one or two factors, like our tracking system this year, but I think that once you get past a certain critical number (which is very easily reached for an expert system), you end up spending all your time on the low-level sensor stuff and not enough time on actually developing the system that can use that data.

I have to agree with Ryan, though for a somewhat different reason. It sounds to me like you want the robot to be completely autonomous, which completely takes the fun out of the match. Even if you put a supercomputer on the field, it can’t make decisions based on what your alliance members say or ask of it, only on what it sees. Innovative? Yes. But fun? Not even for the programmers. Plus, it’s asking a lot of FIRST and its FMS.

The cRIO is good, but I doubt that a team could implement a full AI that would replace human drivers anytime in the next 10 years. On top of the programming prowess and processing power needed to anticipate, a robot has a limited field of vision. On top of THAT, there are some truly oddball strategies out there. In one match at one regional, a team attempted to throw a match to draw a triple G14 after an alliance partner didn’t make it onto the field on time. How would a robot anticipate that? :stuck_out_tongue:

On top of the programming prowess and processing power needed to anticipate, a robot has a limited field of vision.

That would be the point of using data from the FMS or using human drivers as “sensors”–your tactical data isn’t necessarily limited by your robot’s on-board devices or its physical location.

The cRIO is good, but I doubt that a team could implement a full AI that would replace human drivers anytime in the next 10 years.

You don’t need a full AI, you need a domain-specific expert system, since it doesn’t need to do anything other than play whatever the game of the year is. The ability of an expert system to control the robot is also going to depend on the game of the year. For example, since we had such a simple strategy for Overdrive (go as fast as we can around the track, avoiding stuff), the only barrier to having a fully autonomous robot was some instability in our Ackermann system, which was a signal-processing problem, not an AI one.

In one match at one regional, a team attempted to throw a match to draw a triple G14 after an alliance partner didn’t make it onto the field on time. How would a robot anticipate that?

Most likely a system of “match priorities” set before each match. This would basically be a weighted table, stored on the robot, that determines how much a particular event occurring influences the action the robot takes in response.
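
A minimal sketch of what I mean (the event names and weights are invented placeholders that the strategy team would fill in before each match):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a "match priorities" table. Event names and weights are
// hypothetical, set by the strategy team before each match.
public class MatchPriorities {
    private final Map<String, Double> weights = new HashMap<>();

    public MatchPriorities() {
        weights.put("opponent_near_our_trailer", 0.9); // evading takes priority
        weights.put("clear_shot_at_trailer", 0.6);     // scoring opportunity
        weights.put("partner_requests_defense", 0.4);
    }

    // Given the set of events currently observed, pick the highest-weighted one.
    public String highestPriorityEvent(Iterable<String> observedEvents) {
        String best = null;
        double bestWeight = Double.NEGATIVE_INFINITY;
        for (String event : observedEvents) {
            double w = weights.getOrDefault(event, 0.0);
            if (w > bestWeight) { bestWeight = w; best = event; }
        }
        return best; // an action table would map this event to a behavior
    }
}
```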

This is very similar to what is done in autonomous soccer competitions.

I was thinking such information could be useful in making an automatic autonomous-mode selector. It would be nice if the FMS broadcast the team numbers of the teams in the match to the robots on the field. From there, the robot could decide what would be the best autonomous path to take. This would require some good scouting and a database of the autonomous paths taken by other teams.
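
Something like this sketch, say. The FMS doesn’t broadcast opponent team numbers today, and the team numbers and path descriptions below are invented placeholders standing in for real scouting data:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an autonomous selector keyed on opponent team numbers.
// Team numbers and path descriptions are invented placeholders; the
// scouting data would come from your own database.
public class AutoSelector {
    private final Map<Integer, String> scoutedAutoPaths = new HashMap<>();

    public AutoSelector() {
        scoutedAutoPaths.put(1111, "drives straight to the center");
        scoutedAutoPaths.put(2222, "hugs the right wall");
    }

    // Pick our autonomous path based on what opponents are known to do.
    public String choosePath(int[] opponentTeams) {
        for (int team : opponentTeams) {
            String theirPath = scoutedAutoPaths.get(team);
            if ("drives straight to the center".equals(theirPath))
                return "start offset left to avoid the center rush";
        }
        return "default path";
    }
}
```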

Then again, this is what a lot of drive teams do when they put their bot on the field…

It’d still be cool though.

Isn’t it just simpler to have jumpers on the Driver Station or on the robot? While that’s cool, it seems convoluted and a ton of work when you can just flip a “turn right” switch.

Two responses: a lot of low-level sensor routines can be developed in the off-season. Task-specific data-mining routines, on the other hand, would most likely have to be developed during the season; unfortunately, this does include vision classifiers like track ball trackers. I agree that this is a challenging task; if your team wants to make an effort to do it, it would probably require several years of pre-planning and a whole team of programmers who work well together. Not impossible, but quite hard, especially given the time frame. FIRST knows this, which is why they don’t require it of teams.

+10 brownie points for the team that develops natural language processing for their FIRST robot :stuck_out_tongue:

Exactly my point: no fun if FIRST has already done all the programming for you.

If you’re already using humans as sensors, why not use them for processing as well? You save a lot by having a less complicated interface, and humans are way easier to program…:cool:

Yes, Overdrive was a computationally easy game. I joked with the other programmers on my team that we could probably develop a fully autonomous system if we had a little more time; obviously you’re one-upping us and actually going for it, for which I congratulate you. Note, however, that not all games are this easy. Considering that even the best research institutions have yet to develop domain-specific expert systems for operating vehicles in relatively controlled environments (a.k.a. autonomous cars; and no, the DARPA Urban Challenge wasn’t realistic road conditions), you’ll be hard-pressed to develop one that has to evade 5 other machines in a much smaller space.

Yes, this would probably be the easiest part, IMO. It becomes an exercise in game programming, i.e., glorified Chess Master. That would actually be rather fun to program, but it would require quite a lot of computing power.
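
For a sense of where the computing power goes, here’s the skeleton of a plain minimax search (GameState and its methods are hypothetical placeholders). With branching factor b and search depth d, it costs on the order of b^d evaluations, which is why “glorified Chess Master” planning gets expensive fast:

```java
// Skeleton of plain minimax search. GameState and its methods are
// hypothetical placeholders for a model of the match.
interface GameState {
    boolean isTerminal();
    double evaluate();                // score from our alliance's perspective
    Iterable<GameState> successors(); // states reachable by one action
}

class Minimax {
    // Explores O(b^d) states for branching factor b and depth d.
    static double search(GameState s, int depth, boolean maximizing) {
        if (depth == 0 || s.isTerminal()) return s.evaluate();
        double best = maximizing ? Double.NEGATIVE_INFINITY : Double.POSITIVE_INFINITY;
        for (GameState next : s.successors()) {
            double v = search(next, depth - 1, !maximizing);
            best = maximizing ? Math.max(best, v) : Math.min(best, v);
        }
        return best;
    }
}
```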

I’m going to run with this for a sec. If, instead of the FMS just handing you information that says “robot here, robot here, field element here, …”, teams were given an overhead image of the field, I think that would be rather realistic. Essentially it could be thought of as a shared sensor for all the robots on the field, since there are usually rules that would restrict a team from making a small detachable UAV part of their robot. I would personally love to see this in a FIRST game in the near future. It would make a pretty wicked angle to show to the audience as well. Hey GDC, you have my vote.

–Ryan

I was saying that a system like the one I described would be more or less for the cool factor and to push the envelope, rather than to gain a substantial advantage. As far as Lunacy is concerned, it’d be pretty pointless because most autonomous modes are random. Maybe one advantage a system like this would have is that the robot could “predict” where a target would be and only look in that direction, should it be tracking a target. Perhaps in a game with a different style of autonomous mode this would confer some sort of large advantage.

Humans are also prone to emotions, illogical actions, and mistakes. Humans get tired. Imagine, if you will, a sharpshooter: a human has small inconsistencies, whether from being tired, having a muscle spasm, etc. A robotic sharpshooter would have much higher repeatability. It’s the same reason we use factory automation nowadays. If your job is to tighten a bolt for 8 hours a day, say you can get through 480 bolts (one per minute); I would be willing to bet that at least 60 of those bolts are not tight enough or are too tight. A machine can run 24 hours a day, doing 3 times as much work as you, and can also do it so that nearly every single bolt is to spec. That is why you use a machine over a person.* An FRC robot would be capable of plotting the most effective route through congestion, or shooting the balls accurately. I’m just saying there are benefits to automating any process; there are also costs, such as the added complexity.

*This is not to say there are no situations a human is currently better suited for.

If you’re already using humans as sensors, why not use them for processing as well? You save a lot by having a less complicated interface, and humans are way easier to program.

Maybe our team has particularly bad drivers, but based on what we saw at our regional, if our drivers had been an actual part of the software, we would have said “we HAVE to replace this before we can have a stable version”.

As for ease of programming–for our first couple of matches, our drivers had trouble remembering how to turn the robot on (which led to it not moving for the first part of the match), or even remembering to show up for the matches (which led to me having to step in as a backup driver for the first two matches). :mad: :mad: If that’s easy to program, I shudder to think of what “hard to program” would be.

It sounds to me like your problem is with your team and its drivers, and that should not be a reason to bug the GDC for software which would/could/should take years to develop perfectly. I mean no offence to you, I just think you’re looking in the wrong place.

$.02

It sounds to me like your problem is with your team and its drivers, and that should not be a reason to bug the GDC for software which would/could/should take years to develop perfectly.

My point is that these are issues that are more or less generalizable to any human driver.

Actually, since we can already query the FMS for our alliance color and position, the ability to grab data from the FMS already exists. If we just wanted access to the real-time scoring system (assuming there is one next year), it should be relatively simple to allow that data too. Mind you, this is all speculation.

From what I understood, he wanted exact positions of each robot and the game pieces. I probably misread that, though; sorry if I did. Still, it seems like something that needs to be brought up with his team.

No, you read it correctly. The position information that the system provides now relates to position at the start of the game, so it isn’t useful for much except determining an autonomous mode to run.

And it would be cool if you could have a top-down video feed to all the robots (and easier than making the regional staff or your drivers “semantically” parse data). The only thing is, FIRST seems to be worried about the impact of streaming video data on their network, and in terms of bandwidth, transmitting simple x/y positions would be a lot easier on the network than a video stream where 95% of the data is stuff the robots are going to throw out.
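
To put rough numbers on it, a positions-only packet can be tiny. Here’s a sketch with an invented layout of (id, x, y) as 16-bit values:

```java
import java.nio.ByteBuffer;

// Sketch of how small a positions-only packet could be compared to video.
// The layout (id, x, y per object) is invented for illustration.
public class FieldPacket {
    // Pack the objects as (id, xCentimeters, yCentimeters), 6 bytes each.
    public static byte[] encode(short[] ids, short[] x, short[] y) {
        ByteBuffer buf = ByteBuffer.allocate(ids.length * 6);
        for (int i = 0; i < ids.length; i++) {
            buf.putShort(ids[i]);
            buf.putShort(x[i]);
            buf.putShort(y[i]);
        }
        return buf.array(); // ~72 bytes for 12 objects vs. kilobytes per video frame
    }
}
```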

I’d like to see an enhanced Dashboard-like data stream that could be read by any of your alliance partners. That way the robots could share their sensor data and intentions and potentially collaborate autonomously.
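
Something like this, sketched over UDP broadcast (the address, port, and message format are all assumptions on my part; the field rules would have to sanction it):

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Sketch of an alliance data stream over UDP broadcast. The broadcast
// address, port, and message format are assumptions for illustration.
public class AllianceStream {
    public static void send(String message) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            byte[] data = message.getBytes(StandardCharsets.UTF_8);
            DatagramPacket packet = new DatagramPacket(
                data, data.length, InetAddress.getByName("10.0.0.255"), 5800);
            socket.send(packet); // any alliance partner listening on 5800 gets it
        }
    }
}
```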

The information provided this year was in regards to which player station each team was at, not the starting location of the robots.

There were quite a few teams this year that showed that video can be transmitted to player stations given enough compression. And if the video were multicast, the system could handle it fine. The problem occurred when six separate video streams were being sent (at low or no compression) at the same time.
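
For the curious, the multicast idea is that one stream is sent once and received by every station that joins the group. Joining is straightforward; the group address and port below are invented for illustration:

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

// Sketch of a multicast receiver: one transmission serves all receivers
// that join the group. Group address and port are invented placeholders.
public class MulticastReceiver {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.0.0.1"); // assumed group
        try (MulticastSocket socket = new MulticastSocket(5805)) {
            socket.joinGroup(group);
            byte[] buf = new byte[1500];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet); // blocks until one datagram arrives
        }
    }
}
```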

It’d be interesting to see what kinds of protocols teams would come up with for sharing information universally.

–Ryan