There are quite a few ways out there to predict matches numerically and mathematically. One method Team 20 has been using is generating a normal distribution for each match from our scouting data (as part of our collaborative scouting efforts, six scouts each scout one robot per match). We add the average scores of the robots on each alliance, take the difference between the two alliances' expected scores, sum the robots' variances, and take the square root of that sum to get the standard deviation of the score difference. From there we can figure out what percentage of the normal curve's area is past zero, which gives the probability of one alliance winning.
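Roughly, a minimal sketch of that calculation looks like this (not Team 20's actual code; the per-robot means and variances below are made-up example values):

```python
# A minimal sketch of the approach described above (not Team 20's actual code).
# Each robot is reduced to a (mean score, variance) pair from scouting data;
# the numbers below are made-up examples.
from math import sqrt
from statistics import NormalDist

def red_win_probability(red_robots, blue_robots):
    """Each argument is a list of (mean_score, variance) tuples, one per robot."""
    diff_mean = sum(m for m, _ in red_robots) - sum(m for m, _ in blue_robots)
    # Variance of the score difference is the sum of all six robots' variances
    # (assuming each robot performs independently).
    diff_var = sum(v for _, v in red_robots) + sum(v for _, v in blue_robots)
    # P(red wins) = area of the normal curve past zero.
    return 1.0 - NormalDist(diff_mean, sqrt(diff_var)).cdf(0)

red = [(20, 16), (18, 25), (14, 9)]
blue = [(22, 36), (13, 16), (10, 4)]
print(f"P(red wins) ~ {red_win_probability(red, blue):.2f}")
```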
I'm looking for other ways to predict matches. Besides going through each match and predicting it yourself, what are some of the ways you/your team predict match outcomes? How accurate are your models?
I like your method for match prediction, but fail to see the use. Could you enlighten me as to how and why 20 uses this data and match prediction to gain an advantage? Or is it all for fun statistics?
We’ll use our data for strategizing our individual matches, and for pick lists. The real usage we’ve found for match prediction is for rankings – and predicting where other teams (including us) will stand for eliminations. We like a reasonable picture of the next day – a forecast, if you will.
And it’s fun to make the occasional bet on a match on the basis of mathematical models.
I’m not going to speak for team 20 as to why they do it, but I shall tell you why my team does.
We scout the same way team 20 does, with 6 scouters each scouting a different robot each match. You never want to be biased when scouting, as it can throw off the data. The scouters then turn in the standard scouting sheets to the person who inputs the data into our Excel scouting program. It takes the data we collect and produces ranks and the like, depending on how heavily each aspect (e.g. scoring ability, defense ability, etc.) is “weighted”. The program can be easily customized each season to fit the game, so with some tweaking it works well every year.
Anyways, it is very sophisticated (sheets for separate teams, matches, etc.) and we use it to predict the upcoming matches and also for potential alliance members. Kinda for fun, but also to see which robots are the “game changers” in a match. If you know the game changers in a match, you know if you have to defend them, stay away from them, or ask your teammates to try a different strategy if you believe it will help the alliance win.
It is a very reliable system; I believe it was about 95% accurate this season in predicting matches, and very good at ranking the teams once tweaked right. Thank you scouters! A good scouting system can win you the regional! (It also gave some of our team members something to do if they were not cheering or working on the robot.)
By the way, I can’t really explain the specifics as to what it does because I did not make it and do not have access to it right now. I believe it uses the macros feature utilizing some 1500+ lines of code or something like that…
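Since the program's internals aren't available, here is a purely generic sketch of how that kind of weighted scouting rank could work; the aspect names, weights, and ratings are invented, not what their spreadsheet actually does:

```python
# Purely generic sketch of a weighted scouting rank; aspect names, weights,
# and ratings are invented and not taken from the spreadsheet described above.
WEIGHTS = {"scoring": 0.5, "defense": 0.3, "reliability": 0.2}

def weighted_score(team_stats):
    """team_stats maps each aspect name to a scouted 0-10 rating."""
    return sum(weight * team_stats.get(aspect, 0) for aspect, weight in WEIGHTS.items())

def rank_teams(all_stats):
    """all_stats maps team number -> {aspect: rating}; returns a best-first list."""
    return sorted(all_stats, key=lambda team: weighted_score(all_stats[team]), reverse=True)

example = {
    20:   {"scoring": 8, "defense": 5, "reliability": 9},
    1126: {"scoring": 6, "defense": 8, "reliability": 7},
    3138: {"scoring": 9, "defense": 3, "reliability": 6},
}
print(rank_teams(example))
```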
Exactly my thought. The best way to predict the match is to play it.
Consider this: in 2010, 469, 1114, and their third partner (I forget who it was) were favored so heavily to win Championship that very few people would have predicted them not winning. Predictions would probably have shown them winning. That was before Murphy's Law struck that alliance with a vengeance on Einstein – things just started failing for no apparent reason, and they did not win the Championship.
That said, this year OPR was supposed to be fairly good at predicting the outcomes. Some years it is, some years it isn’t.
3138 has a very good match prediction system. It’s good enough to correctly predict who is going to win the regional just after alliance selections, even when “underdog” #7 alliances beat #2 alliances (though that happens quite a bit now).
Absolutely nothing! I have to admit that it’s the most accurate way to find out (barring scoring errors).
I like your method for match prediction, but fail to see the use. Could you enlighten me as to how and why 20 uses this data and match prediction to gain an advantage? Or is it all for fun statistics?
As Team 20’s drive coach this season, I found that the information was most useful for looking at the individual robots in a match rather than the outcome of the match as a whole. We have a match outcome predictor program, but it was… less than accurate. It was never used, which is why (I assume) Brennon is looking for an alternate method.
As Brennon pointed out, knowing the projected outcome of a match was useful in figuring out who might seed, but that sort of information is generally more useful when trying to make a pick list on Friday, not during a match. Knowing ahead of time who might be picking allowed us to figure out who they might pick, and how to put together an alliance to beat that potential alliance. That way, I didn't have to try to make a split-second decision on the field with everyone singing the Jeopardy song. With that being said, there are upsets; we found that out when our Friday pick list suddenly became not as relevant as we thought it would be, about an hour before selections at Championships. It would be nice to have very accurate match predictions where surprises like that didn't happen. :rolleyes:
However, the information that we use to predict the match's outcome is also very useful in and of itself before/during matches, for all of the reasons that scouting is useful. With the type of data we were collecting, we could figure out the average score of each robot, and from that, the expected score of each alliance. By looking at the scouting data on our alliance partners and opponents for each match, I would know ahead of time that our two allies would probably score a combined total of about 15 points, and we could adjust our strategy accordingly (ask if they would be willing to play defense or feed). Or, if we were the lowest scorer on our alliance, we would consider playing defense ourselves if we could prevent the other alliance from scoring more points than we scored on average in teleop. You can make your own (favorable) upsets that way.
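For what it's worth, a hedged sketch of that pre-match reasoning might look like this; the averages, team labels, and the "we can deny about half the top scorer's output" assumption are all invented:

```python
# Hedged sketch of the pre-match reasoning above. The averages, team labels,
# and the "deny about half the top scorer's output" assumption are invented.
our_alliance = {"us": 6, "partner_a": 9, "partner_b": 12}   # avg teleop points
opponents    = {"opp_a": 18, "opp_b": 11, "opp_c": 5}

print(f"Expected score: {sum(our_alliance.values())} vs {sum(opponents.values())}")

# If we're the lowest scorer on our alliance, defense is worth considering when
# the points we expect to deny exceed what we'd score ourselves.
our_average = our_alliance["us"]
denial_estimate = 0.5 * max(opponents.values())
if our_average == min(our_alliance.values()) and denial_estimate > our_average:
    print("Consider playing defense on the top opposing scorer.")
```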
So, you’re right. The main value of prediction isn’t knowing the outcome of the match; it’s knowing how the match might be played.
I just looked at OPR to project the 2013 SVR Saturday match results and rankings. (See the regional thread for more details.) I accurately predicted a lot of the rankings Friday night, and the regional winners (the captain wasn't even ranked #1 at the time). However, as others have pointed out, this usually isn't all that difficult. Conventional wisdom, as well as my projections, would have put 254 and 118 as the regional winners.
As to projecting matches/seedings being pointless: it is and it isn't. As many have pointed out, you never know how a match will be played until it's actually played. However, it's useful to have a good guess about how the rankings will turn out in order to get ready for selections and matches on Saturday.
Maybe it isn't critical to know exactly how each match will turn out, but scouting as a whole is important. Projected rankings play an important role in scouting for alliance selection.
I understand that OPR related models have around 80% accuracy this year, which has been phenomenal. However, this calculation would have meant next to nothing for Rebound Rumble, for instance. What did you guys do that year?
For others, I’ll pose the question again. Regardless of the usefulness of the information, can anyone share their match prediction models? I’d love to know how they’re calculated, used, and how accurate they are.
When talking about predictors, I think understanding the usefulness is very important. 33 likes to use it as a way to gauge strength of schedule and predict rankings, as well as matches to watch out for. The first event is always the toughest, as there is no "going in" data.
Using a match predictor won't turn a 2/10 team into a 10/2 team, but it often helps go from 9/3 to 10/2. While winning an extra match doesn't sound that impressive, look at many of the standings and you will find that the difference between losing 2 and losing 3 matches is frequently the difference between being a captain or not.
As our data comes in, we will put predicted scoring ranges on the scouting card we give to the coach.
It sounds like Team 20 has a very neat method for doing their predictions, and the accuracy sounds pretty impressive, though you have to look at the accuracy relative to the known data you have. Post-predicting is way different than pre-predicting.
Overall, it isn't the most critical thing your team can do, but it may help you pay attention to the match you thought would be easy (and end up losing) or help you strategize turning a losing match into a win.
For all of these methods, a 75% accuracy level of who is going to win is probably close enough to pay attention to.
For 2012, we would use component OPR to help decide who should be doing the balancing for the Co-Op bridge.
Truth be told, we didn't. Our robot that year really wasn't good enough to need scouting data. That robot was a ~60-70% robot, with not much of a chance of seeding highly. This year's robot was an 80% or so robot, good enough that we'd have to worry about seeding. That's why I started looking at OPR predictions.
Like IKE said, it's about that match that brings you from 7-3 to 8-2 that makes the difference. Predicting match results is not particularly good for mediocre robots; it's only good for good or great ones.
While invalid data could always throw off a model, our current model’s accuracy is rather lacking. I’m hoping that someone would share a model they found useful and worked for them.
We used each team's "average" match performance leading into the match, and OPR Friday night for ranking predictions. We would then look at the top 16 or so and see if any close matches might go the other direction, and how those would affect the rankings.
For the Championship, we will frequently use previous best OPR to run the schedule as soon as it is available, looking for close matches to focus on and/or any potential "never going to win" matches where we could somehow pull a rabbit out of our hats.
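As a rough sketch, "running the schedule" with prior OPRs and flagging close matches could look something like this; the schedule format, OPR values, and closeness threshold are assumptions, not 33's actual tooling:

```python
# Rough sketch of "running the schedule" with prior OPRs and flagging close
# matches; the schedule format, OPR values, and threshold are assumptions.
opr = {33: 28.4, 469: 31.2, 254: 35.0, 118: 33.1, 20: 22.7, 1114: 34.5}

schedule = [
    {"red": [33, 20, 118], "blue": [254, 469, 1114]},
    {"red": [33, 254, 1114], "blue": [20, 118, 469]},
]

CLOSE_MARGIN = 10  # points; tune to taste

for i, match in enumerate(schedule, start=1):
    red = sum(opr[t] for t in match["red"])
    blue = sum(opr[t] for t in match["blue"])
    flag = "  <-- close, watch this one" if abs(red - blue) < CLOSE_MARGIN else ""
    print(f"Match {i}: red {red:.1f} vs blue {blue:.1f}{flag}")
```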
Even if your robot isn't in contention, I recommend working on the tools to have your team ready in case it is. We were easily out of the Top 3 on Archimedes (254, 469, and 987 were clearly the 3 best on Archi), but some luck with the schedule and skill in some key matches put us in the spot to be the #2 captain. I would put this in a similar tool-kit as making a pick-list. Even if you aren't a captain, you should make a pick-list just in case you are picked by someone without a list, or just to have practice to make a better one.
At the Bedford District Event my fellow programmers and I took to using the sum of the OPRs of an alliance's members to predict match results and rankings. We were about 85% accurate for the 25 or so matches we predicted on Saturday. We also predicted our ranking to be 11-13, and we ended up 12th. It was fun.
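For anyone curious, a minimal sketch of that kind of sum-of-OPR predictor, plus the accuracy check, looks something like this (the OPR values and match results below are invented for illustration):

```python
# Minimal sketch of a sum-of-OPR predictor plus an accuracy check; the OPR
# values and match results below are invented for illustration.
def predict_winner(opr, red_teams, blue_teams):
    red = sum(opr.get(t, 0) for t in red_teams)
    blue = sum(opr.get(t, 0) for t in blue_teams)
    return "red" if red > blue else "blue"

def accuracy(opr, matches):
    """matches: list of (red_teams, blue_teams, actual_winner)."""
    hits = sum(predict_winner(opr, r, b) == winner for r, b, winner in matches)
    return hits / len(matches)

opr = {33: 28.4, 469: 31.2, 254: 35.0, 118: 33.1, 20: 22.7, 3138: 19.8}
matches = [([33, 20, 118], [254, 469, 3138], "blue"),
           ([33, 254, 3138], [20, 118, 469], "red")]
print(f"Prediction accuracy: {accuracy(opr, matches):.0%}")
```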
I used Max OPR to predict matches at the Championships. (I have been analyzing the statistics and plan to do a white paper.) I adjusted the Max OPR for each team by adding a factor depending on how well each team did at each regional, starting with a couple of bonus points if a team went to more than one regional. Then, for each match, I figured the better teams would improve more than the average teams, so I increased the OPR of the best team on the alliance and decreased the OPR of the weakest team by percentages. Finally, I adjusted the sum of the OPRs up to equal the predicted (or later, actual) average points per alliance.
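In code form, the adjustment described above might look something like this sketch; the bonus values, boost/cut percentages, and team numbers weren't given, so the ones below are placeholders:

```python
# A sketch of the adjustment described above; the bonus values, boost/cut
# percentages, and team numbers are placeholders, not the author's actual ones.
def adjusted_alliance_oprs(max_oprs, regional_bonus, target_alliance_avg,
                           best_boost=0.05, worst_cut=0.05):
    """max_oprs: {team: max OPR}; regional_bonus: {team: bonus points}."""
    # Step 1: add per-team bonus for regional performance / extra regionals.
    adj = {t: max_oprs[t] + regional_bonus.get(t, 0) for t in max_oprs}
    # Step 2: boost the best robot on the alliance, trim the weakest.
    best, worst = max(adj, key=adj.get), min(adj, key=adj.get)
    adj[best] *= 1 + best_boost
    adj[worst] *= 1 - worst_cut
    # Step 3: rescale so the alliance total matches the predicted (or actual)
    # average points per alliance.
    scale = target_alliance_avg / sum(adj.values())
    return {t: v * scale for t, v in adj.items()}

alliance = {303: 38.0, 1640: 27.5, 1234: 24.0}   # hypothetical Max OPRs
bonuses = {303: 2, 1640: 2}                      # e.g. attended 2+ regionals
print(adjusted_alliance_oprs(alliance, bonuses, target_alliance_avg=95))
```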
If this predicted we were going to lose a match, then we had to change our alliance strategy from what was "normal". Maybe we would have one of our alliance partners play defense against the highest-OPR opponent. Or maybe use a robot for counter-defense to allow our Full Court Shooter partner to shoot disks without an opposing defender in the way. The prediction was based on past performance, so if it predicted we would lose, we had to do something different from what we had done in the past.
Since we predicted we would not be ranked in the top 8, if the prediction was to easily win a match, we would do things to show scouts that we had flexibility like shooting from different locations, climbing a different side or picking up disks from the floor.
We changed a 3 win 5 loss match schedule strength prediction into a 5 and 3 result. And we almost won 2 matches that we lost by a very small margin.
We ended up being picked #3 by Team 303 (also picked Team 1640) and our #3 alliance won the Newton Division Championship.
So we did something right.
Also, my predictions were 100% accurate for the Saturday matches involving the top ranked teams so we knew in advance which teams to talk to that we wanted to pick us.
Kinda funny- we predicted one match incorrectly, and that completely messed up our plan for the final divisional standings in Archimedes.
It certainly changed things.
One match put a team that (according to our data) was supposed to rank 2 above 33. Imagine now if another team had picked 469 instead of 33. Einstein would have looked a LOT different.
I imagine that could be a part of the reason my friend Brennon posted this thread.
I like your method (and your username!) Thunderchief, in that it accounts for teams getting better, but by what criteria did you account for a team getting better? Was it
“I’ve heard of this team, they probably got better.”
Or was it: “This team has been to three regionals, they continuously improved by about 6 points, therefore they’ll improve by another 6 for CMP”
A little more info on the exact criteria you used to differentiate between teams would be helpful.