2018 raw component score data for all matches through 3/11/2018
Thanks again, Russ, for providing data that we can play with.
Some quick observations:
Typical match score: 346-191
Outlier match score: 456-81 (two-sigma away from typical)
Typical winning margin: 155
Outlier winning margin: 375
254’s average winning margin in qualifying: 253 (c’mon guys!)
254’s average winning margin, all matches: 279
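For anyone who wants to reproduce numbers like these from the raw spreadsheet, something along these lines should get close; the red_score / blue_score column names are placeholders for whatever the export actually uses:

```python
import pandas as pd

# Rough sketch for reproducing the summary numbers above.
# Column names are placeholders -- substitute whatever the raw export uses.
df = pd.read_csv("2018_raw_scores.csv")

winner = df[["red_score", "blue_score"]].max(axis=1)
loser = df[["red_score", "blue_score"]].min(axis=1)
margin = winner - loser

print(f"typical match score: {winner.mean():.0f}-{loser.mean():.0f}")
print(f"two-sigma match score: {winner.mean() + 2 * winner.std():.0f}"
      f"-{max(loser.mean() - 2 * loser.std(), 0):.0f}")
print(f"typical winning margin: {margin.mean():.0f}")
print(f"two-sigma winning margin: {margin.mean() + 2 * margin.std():.0f}")
```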
More related to interpreting this: has anyone looked into doing an OPR-style calculation, except using the win (or loss) margin as the output variable for the alliance? In other words, given a certain alliance member, what’s their expected contribution to the spread? I might give it a shot if I manage to find time this week, but I’m curious if anyone’s ahead of me already.
What you are describing sounds to me like CCWM, or calculated contribution to the winning margin. See the most recent presentation here for a more detailed description.
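For the curious, CCWM boils down to a least-squares solve in which each alliance’s winning margin is attributed to the sum of its members’ contributions. A minimal sketch of that idea, assuming a simple list-of-dicts match format rather than the actual spreadsheet layout:

```python
import numpy as np

def ccwm(matches, teams):
    """Least-squares CCWM sketch: one equation per alliance per match, where
    the sum of member contributions approximates that alliance's margin.

    matches: list of dicts like
        {"red": ["frc254", "frc1678", "frc973"],
         "blue": ["frc118", "frc148", "frc217"],
         "red_score": 415, "blue_score": 190}
    teams: list of team keys defining the order of the unknowns.
    """
    idx = {t: i for i, t in enumerate(teams)}
    rows, margins = [], []
    for m in matches:
        for us, them in (("red", "blue"), ("blue", "red")):
            row = np.zeros(len(teams))
            for t in m[us]:
                row[idx[t]] = 1.0
            rows.append(row)
            margins.append(m[us + "_score"] - m[them + "_score"])
    x, *_ = np.linalg.lstsq(np.array(rows), np.array(margins), rcond=None)
    return dict(zip(teams, x))
```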
Auto Quest

| Week | Blue Yes | Blue No | Blue % | Red Yes | Red No | Red % | Total Yes | Total No | Total % |
|------|----------|---------|--------|---------|--------|-------|-----------|----------|---------|
| 1 | 537 | 1439 | 27.18% | 521 | 1455 | 26.37% | 1058 | 2894 | 26.77% |
| 2 | 670 | 1701 | 28.26% | 654 | 1717 | 27.58% | 1324 | 3418 | 27.92% |

Face The Boss

| Week | Blue Yes | Blue No | Blue % | Red Yes | Red No | Red % | Total Yes | Total No | Total % |
|------|----------|---------|--------|---------|--------|-------|-----------|----------|---------|
| 1 | 59 | 1917 | 2.99% | 70 | 1906 | 3.54% | 129 | 3823 | 3.26% |
| 2 | 68 | 2303 | 2.87% | 67 | 2304 | 2.83% | 135 | 4607 | 2.85% |
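For reference, each percentage above is just yes / (yes + no) for that column, e.g. 537 / (537 + 1439) ≈ 27.18%. Here’s a sketch of tallying those counts straight from TBA match objects; the score_breakdown field names are from memory, so double-check them against the APIv3 docs:

```python
from collections import Counter

def rp_rates(matches):
    """Count Auto Quest / Face The Boss completions per alliance color."""
    counts = Counter()
    for m in matches:
        bd = m.get("score_breakdown")
        if not bd:
            continue  # some matches have no breakdown posted
        for color in ("red", "blue"):
            side = bd.get(color, {})
            counts[(color, "autoQuest", bool(side.get("autoQuestRankingPoint")))] += 1
            counts[(color, "faceTheBoss", bool(side.get("faceTheBossRankingPoint")))] += 1
    for color in ("red", "blue"):
        for rp in ("autoQuest", "faceTheBoss"):
            yes, no = counts[(color, rp, True)], counts[(color, rp, False)]
            total = yes + no
            if total:
                print(f"{color} {rp}: {yes} yes / {no} no ({yes / total:.2%})")
```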
Russ Ether is not a robot. He is a real systems engineer. He happens to reside in the same corner of Michigan that I do, and has volunteered many times at our local FRC kickoffs and district competitions.
Many others have expressed skepticism about the above, citing the uncanny resemblance between his CD posts and those that one might expect from an AI.
Google “calculated contribution to winning margin.”
You’ll find that a few here on CD are ahead of you, and Nate Silver is ahead of them.
haha, oh dear :). I have no doubt of his personhood - I had always assumed a level of mystery due to the lack of team #, location, etc., yet vast technical knowledge and wisdom… (To me, even the username “ether” implied some level of mystery). He has helped me greatly with many problems; I hope to meet him in person some day!
Rich & Caleb, Thanks also for the reference. I recall seeing that thread but hadn’t dug into it too much yet. Will do so!
4944’s robot this year is a switch/vault bot, and we did really well at the vault at Utah. I sorted teams by average vault score and thought I’d share the top 25 just for fun! We ended up at #11 out of the 1929 teams who have competed so far, which is pretty good. Also, interestingly, no team this season has ended every match with 9 cubes in the vault (which would be a perfect average of 45). I thought at least one would have accomplished this.
| Rank | Team | Avg Vault Score |
|------|------|-----------------|
| 1 | frc7225 | 44.33 |
| 2 | frc5846 | 43.57 |
| 3 | frc4557 | 42.86 |
| 4 | frc7179 | 41.79 |
| 5 | frc4272 | 41.43 |
| 6 | frc5720 | 40.91 |
| 7 | frc6411 | 40.91 |
| 8 | frc7041 | 40.56 |
| 9 | frc2016 | 40.31 |
| 10 | frc6618 | 40.00 |
| 11 | frc4944 | 39.62 |
| 12 | frc222 | 39.44 |
| 13 | frc5531 | 39.06 |
| 14 | frc6628 | 38.46 |
| 15 | frc6823 | 38.21 |
| 16 | frc573 | 37.94 |
| 17 | frc3397 | 37.86 |
| 18 | frc3276 | 37.81 |
| 19 | frc610 | 37.78 |
| 20 | frc5232 | 37.69 |
| 21 | frc1024 | 37.50 |
| 22 | frc5572 | 37.50 |
| 23 | frc1731 | 37.50 |
| 24 | frc4903 | 37.35 |
| 25 | frc2171 | 37.19 |
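In case it’s useful to anyone, a ranking like this can be computed with a few lines, assuming a flat export with one row per (match, alliance) carrying the alliance’s vault points and its team keys; the column names below are made up for illustration:

```python
import pandas as pd

# Hypothetical flat export: one row per (match, alliance) with the alliance's
# vault points and its three team keys.
df = pd.read_csv("2018_vault_by_alliance.csv")

# Attribute each alliance's vault points to every team on that alliance.
rows = [{"team": r[col], "vault": r["vault_points"]}
        for _, r in df.iterrows()
        for col in ("team1", "team2", "team3")]

ranking = (pd.DataFrame(rows)
           .groupby("team")["vault"]
           .mean()
           .sort_values(ascending=False))
print(ranking.head(25).round(2))
```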
2018 raw component score data for all matches through 3/13/2018
THANK YOU to Eugene Fang and Phil Lopreiato for their amazing TBA database and API
4593 matches, 49 events, 2002 teams
3831 qual matches
437 quarterfinal matches
219 semifinal matches
106 final matches
includes all of week 1, plus all of week 2 except 2018bcvi (Canadian Pacific Regional), which does not complete until March 16th.
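If you’d rather pull the data yourself, the standard TBA APIv3 endpoints get you there with a few lines (bring your own read key; note the API’s week field is 0-indexed):

```python
import requests

BASE = "https://www.thebluealliance.com/api/v3"
HEADERS = {"X-TBA-Auth-Key": "YOUR_TBA_READ_KEY"}  # generate one on your TBA account page

def get(path):
    r = requests.get(BASE + path, headers=HEADERS)
    r.raise_for_status()
    return r.json()

matches = []
for event in get("/events/2018"):
    if event.get("week") in (0, 1):  # weeks 1 and 2 (0-indexed in the API)
        matches.extend(get(f"/event/{event['key']}/matches"))

quals = [m for m in matches if m["comp_level"] == "qm"]
print(f"{len(matches)} matches, {len(quals)} qual matches")
```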
Just one off…
Note: I’ve edited this post to remove playoff matches. Higher seeds get assigned to red, which was throwing off data. No evidence of a red alliance conspiracy, as exciting as it would have been.
I was just doing some quick poking around vault points, and some interesting things popped up.
So, I took the difference between the unpenalized scores such that a positive difference was a blue win and a negative difference was a red win. Then, I made a matrix for each power-up, showing the average point difference for each combination of [red level played] vs [blue level played]. Color-coded, they look like this:
The boost table looks exactly how you would expect: alliances able to play more cubes scored more points. Easy correlation. The force table is interesting, though. If an alliance plays Force with 1 or 2 cubes when their opponent doesn’t play Force at all, they usually lose. If an alliance plays Force with 3 cubes when their opponent doesn’t, they usually win. Force with 1 or 2 cubes is a desperation play; Force with 3 is a “because we can” play.
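Building one of those matrices is mostly a pivot table, assuming the per-match power-up levels and unpenalized scores are already flattened into columns; the column names below are placeholders:

```python
import pandas as pd

# Placeholder column names: red_force_level / blue_force_level are the number of
# cubes each alliance played Force with (0 = not played), plus unpenalized scores.
df = pd.read_csv("2018_quals_with_powerups.csv")
df["diff"] = df["blue_score_unpenalized"] - df["red_score_unpenalized"]  # positive = blue win

force_matrix = df.pivot_table(index="red_force_level",
                              columns="blue_force_level",
                              values="diff",
                              aggfunc="mean")
print(force_matrix.round(1))  # repeat with the boost columns for the boost table
```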
Not sure if it accounts for all of the variation, but the red alliance is the higher seed in elims.
Yup yup yup. Just realized the same thing and dropped an edit into my post. I’m going to update the whole thing with just quals matches.
That’s why I put the “comp level” field (column) in the spreadsheet.
Wondering how the 49 events in Weeks 1 and 2 compare in terms of total scoring?*
[2018 Weeks 1 & 2 Alliance Scores Percentile XLS](https://www.chiefdelphi.com/media/papers/download/5357)
[2018 Weeks 1 & 2 Alliance Scores Percentile Plot](https://www.chiefdelphi.com/media/papers/download/5358)
- includes all of week 1, plus all of week 2 except 2018bcvi (Canadian Pacific Regional), which does not complete until March 16th.
*Weeks 1 & 2 Events Alliance Final Score Quartiles
```
Event   25%   50%   75%   100%
arli 201 277 351 526
ausc 202 293 365 506
azfl 199 275 361 594
casd 205 282 362 609
ctwat 223 302 383 587
flor 176 262 344 475
gadal 111 210 308 514
gagai 148 235 307 447
gush 183 250 329 552
inmis 229 325 378 524
isde1 182 268 345 557
isde2 227 300 356 517
isde3 217 299 373 518
mabri 195 279 356 790
mawor 208 271 356 616
micen 129 216 294 506
migib 218 282 355 467
mike2 204 284 376 565
miket 198 269 330 519
misjo 215 296 360 601
misou 153 246 339 480
mitvc 196 277 330 479
miwat 233 294 371 481
mndu 212 283 359 474
mndu2 200 281 364 686
mokc2 220 289 358 510
mosl 201 266 346 506
mxmo 147 215 293 485
ncgre 134 239 329 789
ndgf 202 287 360 565
nhgrs 159 249 321 505
njfla 221 291 369 530
nyut 235 295 363 746
ohmv 228 289 369 519
onbar 200 266 369 541
onosh 201 272 355 518
orore 172 247 354 601
orwil 195 271 354 455
pawch 230 305 370 572
qcmo 215 286 361 555
scmb 188 272 360 497
tuis 148 223 312 562
txda 162 248 338 494
txlu 164 273 370 472
utwv 188 263 350 499
vagdc 195 279 363 674
vagle 196 259 354 487
vahay 201 282 360 633
wamou 205 271 350 510
```
20-60-20 Table
20% of scores were less than Column A
20% of scores were greater than Column B
60% of scores were between A and B
```
Event     A       B
arli 177.7 368.3
ausc 182.5 374.5
azfl 174.8 373.1
casd 183.1 375
ctwat 210 396.5
flor 159.3 362.8
gadal 93 324
gagai 126.3 331.4
gush 170 343.7
inmis 213.2 391
isde1 165.9 359.9
isde2 209.3 376.4
isde3 198 383.5
mabri 184.5 372
mawor 187 371.8
micen 91.8 329.1
migib 199.9 361.1
mike2 189 391.5
miket 180 350.5
misjo 207.8 383.2
misou 121 359
mitvc 175.3 340.7
miwat 219.3 381.7
mndu 193.7 382.3
mndu2 173.6 382.7
mokc2 205.6 369.3
mosl 178 362.1
mxmo 138.7 316.9
ncgre 117 351.5
ndgf 181.4 375.6
nhgrs 144.9 337.1
njfla 205 375.9
nyut 223 378.6
ohmv 206.3 380
onbar 175.8 383.5
onosh 189.5 374
orore 150.2 366.8
orwil 183 369.5
pawch 214 387.5
qcmo 192.5 382
scmb 163.9 370.7
tuis 123 326.3
txda 135.8 361.1
txlu 147 387.9
utwv 170.3 368.4
vagdc 179 382
vagle 185.5 363
vahay 186.8 373
wamou 187.3 362.9
```
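For completeness, the tables above are straightforward per-event percentiles; a pandas sketch, again with placeholder column names:

```python
import pandas as pd

# Placeholder columns: event key plus red/blue final scores, one row per qual match.
df = pd.read_csv("2018_wk1_wk2_quals.csv")

# Each match contributes two alliance scores to its event's distribution.
scores = pd.concat([
    df[["event", "red_score"]].rename(columns={"red_score": "score"}),
    df[["event", "blue_score"]].rename(columns={"blue_score": "score"}),
])

quartiles = scores.groupby("event")["score"].quantile([0.25, 0.50, 0.75, 1.00]).unstack()
bounds = scores.groupby("event")["score"].quantile([0.20, 0.80]).unstack()  # columns A and B
print(quartiles.round(0))
print(bounds.round(1))
```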