Project ORB: A superb predictive scouting system!

Okay, so I’ve been waiting and waiting and waiting to be able to tell the FIRST community about this, and I’m beyond excited to finally share tonight. Even though it’s not quite public yet, I’m getting this out here so people can start asking questions and making feature suggestions for this week.

I’m Sam Weaver, Lead Programmer for Team 4534: The Wired Wizards (Archimedes division!). Early in the season, our team’s leadership sat down to discuss scouting this year and determined that we would not be able to dedicate a full-time scouting staff at events. Programming was tasked with solving this, and we came up with an idea that we think redefines scouting for FIRST Stronghold.

The project originated early in the season under the codename “Project Magic Wand”; the name later changed to Project ORB to better reflect what it does. What does it do, you may ask? Project ORB is an automatic predictive scouting system that uses a combination of neural networks and statistical analysis to make predictions about teams’ capabilities (specifically defense crossing, high and low goal shooting, climbing ability, and challenge consistency), as well as match predictions that are statistically more likely to be accurate than OPR. In essence, without entering or providing any data, any user can view a robot’s ability to cross defenses, how many goals it can score in a match, and whether it can climb, completely automatically, for all 3144 active teams in the 2016 FRC season. Cool, right?

How does it work?

Project ORB uses TheBlueAlliance’s API (thanks TBA!) to view and analyze all matches a team has competed in over the entire season. This match data includes values for defense crossings, challenges and climbs, and high and low goal scores. Unfortunately, the data that the Field Management System at events provides to TheBlueAlliance is not specific about which team completed which action, making the data alone not very valuable. Here’s where our friends the neural networks come in! In our training process, we create a feed-forward neural network for each and every team and train it on that team’s match data, with the goal of eliminating noise from random alliance partnerships during qualifications. The results are predictions of how many crossings of a defense a robot can make (ranging from zero to two). This was the first step in Project Magic Wand; we then refined it for accuracy and adapted it for goal scoring, climbing, and challenges.
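To give a rough idea of the shape of that per-team training step, here’s a minimal sketch in Python with scikit-learn. This is illustrative only, not our production code; the feature layout and the `match_rows` format are simplified stand-ins:

```python
# Minimal sketch of the per-team training idea (illustrative, not our
# production code): one small feed-forward regressor per team, trained on
# alliance-level match stats to estimate that team's own defense crossings.
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_team_network(match_rows):
    """match_rows: list of (alliance_features, observed_crossings) pairs.
    alliance_features is a fixed-length vector of alliance-level stats
    from TBA for one match; observed_crossings is the 0-2 crossing count
    credited to the whole alliance for one defense."""
    X = np.array([features for features, _ in match_rows])
    y = np.array([crossings for _, crossings in match_rows])
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X, y)
    return net

def predict_crossings(net, alliance_features):
    """Predicted crossings for one defense, clamped to the game's 0-2 range."""
    return float(np.clip(net.predict([alliance_features])[0], 0.0, 2.0))
```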

How can I use it?

We did a beta test with all the teams at the 2016 NC District Championship (thanks guys o/ ) within our scouting app platform, and have since moved the data out to a standalone site, available at http://orb.scoutfrc.com/. Our programming team is still putting finishing touches on the front end, so the site shows our coming-soon page at the moment, but we aim to have it up and running on Thursday.

What is the significance of these numbers? How accurate are the results?

Inside the system, you will see proficiency percentages for each defense on a team’s page. These percentages range from 0 to 100 and are mapped from the raw output of our networks, which ranges from zero to two (see the sketch below). Unfortunately, due to how we gather the data and trends in the game, some results are less accurate than others. The Low Bar defense, for example, is almost never below a value of 1.5, because there is a low bar robot in almost every match. As a result, the percentage for this defense is tweaked to reflect a more useful range, such as 1.5 to 2.

In contrast, you receive actual numbers for low and high goal shooting, which represent how many goals the system predicts a team can score during a match. These numbers are broken up into auto and teleop values, which can help you predict a team’s auto routine.

The strengths we’ve found in this system come in its ability to provide data in bulk, as a supplement, or before your scouting team can get it. In addition, because our system gives proficiency percentages for defenses, while a team might report that they can complete both defenses in a defense group, our system can report which of the two they are weaker at, trumping what would typically be a boolean flag.
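For concreteness, the rescaling is essentially linear. Here’s a minimal sketch (the 1.5 floor for the Low Bar is the example above; everything else is illustrative, not our exact constants):

```python
def proficiency_percent(raw, low=0.0, high=2.0):
    """Linearly map a raw network output in [low, high] to a 0-100 percentage.
    For most defenses low=0.0; for the Low Bar we use low=1.5 so the useful
    part of the range isn't crammed up against 100%."""
    clamped = min(max(raw, low), high)
    return 100.0 * (clamped - low) / (high - low)

proficiency_percent(1.8)            # 90.0 on a typical defense
proficiency_percent(1.8, low=1.5)   # ~60.0 on the Low Bar
```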

Please, please, ask questions. We want to get this system to the best possible state for this week’s competition. We rushed to put this post out tonight so that everyone could get the information as soon as possible; we can provide more details as we figure out what people need to know and as we keep tweaking the system.

Last but not least, I’d like to give special thanks to two of my programmers, Tom and Danny, for their extensive work on this project, and the rest of the programming team for their superb work this season. I wish all teams the best of luck, and we’ll see you at the competition!

Sam Weaver, Wired Wizards 4534

This is amazing. Thanks for the contribution! I’m going to take an in-depth look first thing tomorrow.

Good luck in Archimedes btw!

It’s like OPR calculation on steroids. Can’t wait to check it out tomorrow!

I eagerly await proof. I hope you’re looking at something more than goodness of fit.

How accurate were your results at NC DCMP? Did you compare predictions with actual results (e.g., ORB predicts a team should score 3 high goals, but in reality they score something different)?

Really interested to take a look at the system, and it sounds super exciting! I’ve actually been working on training a deep learning system for FRC for the last couple of years, using some of the techniques from my research here at Berkeley to do something quite similar. As of right now, all I’ve been doing is dumping data from TBA and tweaking the training sets and a few external heuristics; I just never got a chance to do anything in real time. It did really, really well for district championships, but St. Louis is always another beast of its own. Feel free to shoot me a PM after Champs or find me in our pit if you’re at all interested in using it next year. I’ve been trying to pair up with someone with a bit of UX skill, because I definitely have none!

EDIT: I also just remembered I’ll be working on really similar core tech @ Google this summer.

weaverSam8,

Sounds like a great idea and a great use of computer-thinking to figure things out.

But I think I see a gap: just because the statistics show that a robot ‘hasn’t’ done something doesn’t mean that it can’t.

For instance, in its whole life, our robot has only made like two low goals. There’s really seldom a reason to even try; we can make high goals at a high percentage. We only did those two in eliminations to beat defenders.

Same goes for the low bar and rough terrain. Yes, we can low-bar, but we have 10-inch wheels and good manipulators, so we can cross all the defenses without much trouble. Almost always there are robots on our alliance that would prefer the low bar or the RT, so we ‘give’ it to them.

But I can see how the statistics may still show that my team scored on the low bar or RT, because likely someone on the alliance did, so maybe we’d get the points?

I don’t see how we’d ever be given points for being able to low-goal shoot.

How will your neural network evaluate a robot that just hasn’t done these?

What additional data are you using that makes match predictions statistically more likely to be accurate than component OPR?

This sounds interesting, but I’m cautiously skeptical about the use of neural nets and their benefits in this application compared to a statistical model. Nonetheless, I’m excited to see the results!

This is really interesting; I can’t wait to see how well it predicts at CMP. In the meantime, is it possible you could release its output for some of the other competitions this season? I’d really appreciate if you could release its evaluation of the teams at MAR CMP in particular so I can see how accurate the results are.

I have thought that cool things could be done with scouting plus machine learning. It is cool to see someone exploring these tools. I am looking forward to seeing more details about your system.

I don’t think it is a stretch to say a neural network can learn a model of robot performance that is much better than OPR, but you have not yet shown whether your neural networks actually perform better.

Do you have any data on how well your predictions perform? Generally you perform a train-test split: predict a robot’s contribution on held-out matches and compare it to the robot’s actual contribution to see how well they match.
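Something like this, for concreteness (a minimal sketch with scikit-learn; the features, targets, and model here are synthetic stand-ins, not anyone’s real data):

```python
# Sketch of the evaluation I mean: hold out a slice of matches, predict on
# them, and compare the predictions to what actually happened on the field.
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 6))                      # stand-in per-match features
y = X.sum(axis=1) + rng.normal(0, 0.1, 200)   # stand-in "true" contribution

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("held-out MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```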

I might have a few more thoughts later, but I am phone posting and don’t want to type out too much more right now.

Regardless of how effective this tool turns out to be, it is still a cool project for learning about neural networks.

From a scouting/strategy perspective, for anything but the low bar I would not be comfortable trusting a team that says they can do something they have not demonstrated before. A team that has never done something, in my eyes, is almost equivalent to a robot that cannot do something, barring very rare circumstances.
Sometimes it works out to trust another team with it, and other times it doesn’t.

We don’t have metrics on the goal predictions specifically on hand; we’d have to retrain on the historical data and compare against the match data. What we can tell you is that our match prediction system (in a less accurate state than it is now) was able to predict 6 out of 7 of the advancements in the finals, including correctly predicting lower-seeded alliances triumphing over higher seeds, a result even we humans didn’t quite predict.

I regret not being able to respond to all the questions; our team has had a very busy day today. Please, in the meantime, accept this:

Update on ORB:
We’ve decided to release ORB tonight in its current working state, even though it has a handful of bugs. We hope to squash these over the week, but we wanted to make sure teams could use the data so that everyone is on a level playing field.

ORB Beta will be available around 11:00 PM CST at http://orb.scoutfrc.com/. Please report bugs and feedback in this thread; we’ll try to be available on CD to answer more questions tomorrow, but if you’re at championships, feel free to stop by our pit (we’re 4534 in Archimedes, Row U). Until we can answer some of your questions directly, we hope the results you’ll see tomorrow speak for themselves.

Good luck folks!

Looks like Curie isn’t showing up for me. Can anyone confirm?

We noticed the error in the logs before you reported it; we’re looking into it right now. It seems that we’re missing data points for 3 out of 3144 teams, and one of them happens to be in Curie…

The website is showing the “loading teams…” message for an awfully long time for me (it never loads the teams :(). Should I be worried?

EDIT: Carver loads fine, for some reason, although there is still a delay in opening.
DOUBLE EDIT: Also, what do the numbers mean? Are they all relative measurements?

We have discovered that we’re missing scaling data for Teams 4, 8, and 11. We’re going to inject artificial data for those teams (just for that single value) tonight, and update it tomorrow with the most accurate data we can get. That’s all we can do tonight.
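(For transparency, the stopgap is just a single-field imputation, roughly like the sketch below; the pandas code and field names are illustrative, not our actual pipeline.)

```python
# Stopgap sketch: fill the one missing scaling value for those teams with
# the median across all teams; real data replaces this tomorrow.
import pandas as pd

teams = pd.read_csv("team_stats.csv")  # hypothetical export of our team table
teams["scaling"] = teams["scaling"].fillna(teams["scaling"].median())
teams.to_csv("team_stats.csv", index=False)
```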

Looks like it is working now. Great job getting it up. Thanks for the work and dedication this took, and for making it usable for everyone.

This is incredible. I love the fact that I can still pull up data for teams that are not at champs. The data looks pretty darn accurate. I have two questions:

  1. Will these be updating as new matches are played during champs?
  2. Could you release a top 50/100/3000? I’d be interested to see how your program ranks the robots, and compare it to OPR and my personal favorites. I could (and might) compile this myself, but it’d take a while, and you might have some way of doing it much more quickly.

I second this like nobody’s business. I would love to compare ORB with 1712’s scouting data for MARCMP.