I think others have covered the main reasons, so I’d like to explore some additional reasons why I believe match predictions can be helpful.
Plenty have already mentioned risky/conservative win strategies, and that's important for sure, but remember that there are also two other RPs available in each match from bonus objectives. As a rule of thumb, I would highly prioritize the bonus RPs in matches where the outcome is >70% certain. With that large of a skill gap between the alliances, there's not a lot to be gained by either alliance from focusing on winning, since the outcome has essentially already been decided by the schedule. In that case, just take the win/loss and go guarantee as many bonus RPs as you can.
The other thing I'd like to point out regarding match predictions is that they are the best validation tool you have for your scouting system. When you get to alliance selection, you are trying to pick the teams that give you the best chance of winning. I really don't care how high a team scores on all of your fancy scouting metrics; what matters is how well your scouting can actually predict who will win and lose.
Honestly, if you're not validating your scouting with match predictions or something similar, you are becoming dissociated from reality. It's easy to look back on matches that have already happened and explain why the outcome ended up the way it did; it's much harder, but much more enlightening imo, to look forward and predict what will happen. This can be a more painful process, as you'll be forced to face your own incorrect predictions head on, but that's what you gotta do if you really want to learn and improve. For reference, I can predict the winner of FRC matches about 70% of the time (year dependent) using Elo/OPR, so you should be hitting at least that using your scouting data, or you're probably doing something wrong.
Sure! Here's a good jumping-off point. You can use a simple logistic function to make win probabilities. Give each team a rating, which we'll denote q (I'll be assuming the rating has units of 2019 points, but any units can work). Sum the ratings for the red robots to get q_red, and likewise for the blue robots to get q_blue. Plug those ratings into the following formula:
WP_blue = 1/(1+10^((q_red - q_blue)/s))
where WP_blue is your predicted blue win probability and s is a scale factor that you have to determine. A good baseline for s in 2019 is 30: go higher if you have weaker scouting data or are early in the event, and go lower if you are more confident in your scouting data or have validated predictions at earlier events.
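If it helps, here's the formula above as a few lines of Python. This is just a direct transcription of the logistic function; the example numbers are made up:

```python
def win_probability_blue(q_red, q_blue, s=30):
    """Predicted blue win probability from summed alliance ratings.

    q_red, q_blue: summed team ratings for each alliance (2019 points).
    s: scale factor (30 is the 2019 baseline suggested above).
    """
    return 1 / (1 + 10 ** ((q_red - q_blue) / s))

# Example: blue's summed rating is 15 points higher than red's.
print(win_probability_blue(q_red=60, q_blue=75, s=30))  # ~0.76
```

Note the symmetry: evenly matched alliances (q_red == q_blue) come out to exactly 0.5, and raising s pulls every prediction back toward 0.5, which is why weaker data calls for a higher s.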
Play around with prior scouting data and match results to determine exactly how to make your team ratings. A good starting point is to sum a team's average points scored across all categories based on your scouting data, but you can add to or subtract from teams' ratings using whatever criteria you want. Understand that this is just a baseline; once you understand this formula well enough and it feels too restrictive, you can start playing around with alternative formulas or variants of it.
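A sketch of that starting point, with hypothetical scouting categories (the category names and data here are made up; substitute whatever your scouting system actually tracks):

```python
def team_rating(match_rows, categories=("hatch_points", "cargo_points", "climb_points")):
    """Average total points per match across scouting categories.

    match_rows: one dict per scouted match, mapping category name -> points.
    Category names are illustrative, not from any real scouting system.
    """
    if not match_rows:
        return 0.0
    totals = (sum(row.get(c, 0) for c in categories) for row in match_rows)
    return sum(totals) / len(match_rows)

# Made-up scouting data for one team across two matches:
rows = [
    {"hatch_points": 10, "cargo_points": 18, "climb_points": 12},
    {"hatch_points": 8,  "cargo_points": 21, "climb_points": 3},
]
print(team_rating(rows))  # (40 + 32) / 2 = 36.0
```

Sum three of these per alliance to get q_red and q_blue, and you have everything the formula needs.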
Finally, I know Eugene already mentioned it, but jump on in to some of the prediction contests. It's a fun way to compare your results to others, and you can ask questions and learn right along with everyone else. I'm happy to answer any questions about my event simulator or Elo if you have any.