CHS DCMP Insight Collective Match Prediction Breakdown

I’ve gotten a lot of questions about how I achieved up to 90% accuracy in my match predictions, so here’s a comprehensive breakdown of how I did it.

Here’s the analysis I pulled OPR from (marked as EPC) for these predictions.

I took an evening and put together this Google Sheet with summations and a few other bells and whistles.
https://docs.google.com/spreadsheets/d/1TebUG29V_zJKVBtLo1Y6Nb3kfhZBpcGAqOdb3_u1VLY/edit?usp=sharing

This is all from DCMP and is broken down by day. Overall, including playoffs, my system was only 73% accurate.

Thursday, I pulled my data from the master sheet of the other competitions this season, and it wasn’t great: the point margins between expected and actual scores were in the teens and twenties, and accuracy was only 73%.

Friday, I pulled solely from data collected on Thursday, the most recent and most accurate data I had. I was on the side of the field, checking everything in real time! 76% accurate.

Saturday, the qualifiers went almost exactly as expected. Only 3 matches were predicted incorrectly. 90.5% accuracy.

Playoffs, semis, and finals are hard to predict with my system, and I recognize that; they came in at 72% accuracy. The main, obvious flaws are the lack of accounting for compatibility between alliance partners, relying solely on OPR (for off-season comps we have found a way to quantify defense and add it to the formulas), and how climb points are calculated, since not every alliance can double climb.
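For anyone curious about the mechanics, here is a minimal sketch of the alliance-level prediction described above: sum each team’s expected point contribution (EPC) and compare the totals. The team numbers and EPC values are made up for illustration, and this ignores the climb-capping and defense issues mentioned.

```python
# Minimal sketch: predict a match by summing each alliance's EPC.
# Team numbers and values below are placeholders, not real data.
epc = {
    "frc1111": 28.4, "frc2222": 19.7, "frc3333": 12.1,
    "frc4444": 25.0, "frc5555": 17.3, "frc6666": 14.9,
}

def predict_match(red_teams, blue_teams, epc):
    """Sum per-team expected contributions and pick the higher alliance."""
    red_score = sum(epc[t] for t in red_teams)
    blue_score = sum(epc[t] for t in blue_teams)
    winner = "red" if red_score > blue_score else "blue"
    return red_score, blue_score, winner

red, blue, winner = predict_match(
    ["frc1111", "frc2222", "frc3333"],
    ["frc4444", "frc5555", "frc6666"],
    epc,
)
print(f"Predicted: red {red:.1f} - blue {blue:.1f}, winner: {winner}")
```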

I’d love any ideas or criticism, people. If you’re from CHS and want to learn how your team could join the collective, please email me at sslopey@gmail.com

So the sample of only 20 matches was 91% accurate. That seems like some crazy advertising when your actual prediction rate over a somewhat acceptable sample size is 73.9% (which is still very decent).

Also, your sheet shows 90.47%, so uhh… it’s 90% with correct rounding.

It seems like a cool project, and it’s good that you are interested in match predictions! Let’s just make sure we are posting things that make sense.


My old lead had been advertising it as such, so I continued. I’ll go back and edit other posts. Thank you!


Am I correct to understand that you are just taking:

(average # of game pieces * point value) for these predictions?

(avg cargo * 3) + (avg hatch * 2) + (avg starting points) + (avg climb points)

I attempted to account for everything, but was unable to quantify defense with our current app.
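For reference, here is a minimal sketch of the per-team EPC formula quoted above, assuming the 2019 Deep Space point values implied by it (3 points per cargo, 2 per hatch panel). The list-of-dicts scouting format is an assumption for illustration, not the actual app’s schema.

```python
from statistics import mean

# Sketch of: (avg cargo * 3) + (avg hatch * 2) + (avg starting points) + (avg climb points)
# The scouting rows below are placeholders for one team's matches.
matches = [
    {"cargo": 5, "hatches": 4, "starting_points": 3, "climb_points": 6},
    {"cargo": 7, "hatches": 3, "starting_points": 3, "climb_points": 12},
    {"cargo": 6, "hatches": 5, "starting_points": 6, "climb_points": 6},
]

def expected_point_contribution(matches):
    """Per-team EPC: average game pieces times point value, plus point averages."""
    return (
        mean(m["cargo"] for m in matches) * 3      # cargo = 3 points each
        + mean(m["hatches"] for m in matches) * 2  # hatch panels = 2 points each
        + mean(m["starting_points"] for m in matches)
        + mean(m["climb_points"] for m in matches)
    )

print(f"EPC: {expected_point_contribution(matches):.1f}")
```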

Were your predictions generally lower or higher than the actual score?

They were usually over; I’m working on better calculations for climb and defense to fix that. Scroll right, past the absolute column, and you’ll see the margins, which are either negative or positive. Negative means I overestimated, positive means I underestimated.
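In other words, the sign convention appears to be margin = actual − predicted; a tiny sketch of that, with made-up scores:

```python
# Assumed sign convention: margin = actual - predicted.
# Negative -> the prediction was too high, positive -> too low.
def prediction_margin(predicted, actual):
    return actual - predicted

print(prediction_margin(predicted=72.0, actual=65.0))  # -7.0 -> overestimated
print(prediction_margin(predicted=48.0, actual=55.0))  #  7.0 -> underestimated
```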


Did you by chance compare the results to just a straight prediction based on OPR?

I haven’t, but it’s pretty doable and I can post the results of that in a bit.
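A rough sketch of what that comparison could look like: score each match with a straight sum of published OPRs and check winner-prediction accuracy against the actual results. The OPR values and the match record format below are placeholders.

```python
# Rough sketch: winner-prediction accuracy for a straight OPR sum.
# OPR values and match records are placeholders, not real data.
opr = {"frc1111": 30.2, "frc2222": 18.5, "frc3333": 11.0,
       "frc4444": 24.8, "frc5555": 16.1, "frc6666": 15.4}

matches = [
    {"red": ["frc1111", "frc2222", "frc3333"],
     "blue": ["frc4444", "frc5555", "frc6666"],
     "winner": "red"},
    # ... more matches with their actual winners ...
]

def opr_accuracy(matches, opr):
    correct = 0
    for m in matches:
        red = sum(opr[t] for t in m["red"])
        blue = sum(opr[t] for t in m["blue"])
        predicted = "red" if red > blue else "blue"
        correct += predicted == m["winner"]
    return correct / len(matches)

print(f"Straight-OPR accuracy: {opr_accuracy(matches, opr):.1%}")
```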


This would be rather labor-intensive (or not, if you find a smarter way to do this than what I thought of), but you could look into comparing the EPC for each team to their actual PC (APC? :thinking:) and analyzing their robots to see if there are commonalities between teams that consistently perform significantly differently from what you predicted.
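A small sketch of that idea: flag teams whose observed point contribution differs from their EPC by more than some threshold, so you know which robots to look at. The team numbers, values, and threshold here are placeholders.

```python
# Sketch of the EPC-vs-actual comparison suggested above.
# Values and the threshold are placeholders for illustration.
epc = {"frc1111": 28.4, "frc2222": 19.7, "frc3333": 12.1}
actual_pc = {"frc1111": 21.0, "frc2222": 20.3, "frc3333": 18.9}  # avg observed points

THRESHOLD = 5.0  # points; arbitrary cutoff for "significantly different"

for team in epc:
    diff = actual_pc[team] - epc[team]
    if abs(diff) > THRESHOLD:
        print(f"{team}: EPC {epc[team]:.1f} vs actual {actual_pc[team]:.1f} "
              f"({diff:+.1f}) -> worth a closer look at this robot")
```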


On another note, looking through the Tableau analysis, I believe 2412 was listed instead of 2421. (I’m on 2412, and we’re located in PNW, not CHS.)


WHOOPS! I didn’t catch that, thank you!
