Has anyone ever made an EPA-style metric that uses scouting data?

EPA/OPR and scouting data both have pros and cons, and when making a picklist it’s always useful to make use of both and consider them in context. EPA’s major con is that it doesn’t know who scores points in matches, while scouting data is usually unable to effectively relay which teams are enabling other teams to score. Has anyone ever attempted to create a metric similar to EPA that takes scouting data as its input? Just a thought

I’m not sure why you have this impression. It’s definitely something that is possible to scout for if you think it’s important.

EPA/OPR are used because of the limited amount of data available from the field, and are inherently less reliable because of the lack of granularity in the starting data. Using that type of regression with scouting data defeats the whole point of scouting, which is to ensure you have granular, accurate data to make decisions with.

2 Likes

Beyond feeding notes (which is something that 100% can and should be scouted), what is something in this game that a team can do on the field that enables others to score?

2 Likes

I think OPR/EPA is better at evaluating this game than it has been in recent years, because there are many actions other than scoring that contribute to success, though you need a larger sample size to see them accurately reflected.

Taking this one step further, perhaps looking at contributed win margin or point differential is even better than just offense, as that would also consider:

  • are you investing time playing some hybrid defense

  • are you stealing notes from the opponent’s zone

Scouting data, while it is the true measure of what a robot does, should perhaps be more focused on capability and how fast a robot can score, not just how many:

  • 10 notes scored that someone else shuttled are not the same as 10 full-field cycles

  • shuttlers don’t score but enable scoring; as you said, this shows up in EPA/OPR over time but not in scouting numbers
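The "contributed win margin" idea above can be sketched with the same least-squares setup OPR uses, just regressing alliance point differential instead of alliance score onto team membership. A minimal sketch with entirely made-up teams, alliances, and margins:

```python
import numpy as np

# Hypothetical data: 4 teams, 4 matches of 2v2 alliances.
# Each row marks which teams were on the alliance whose margin we record.
teams = ["111", "222", "333", "444"]
A = np.array([
    [1, 1, 0, 0],   # 111 + 222
    [0, 0, 1, 1],   # 333 + 444
    [1, 0, 1, 0],   # 111 + 333
    [0, 1, 0, 1],   # 222 + 444
], dtype=float)
margins = np.array([20.0, -20.0, 5.0, -5.0])  # alliance score minus opponent score

# Least-squares solve: each team's estimated contribution to win margin,
# which credits defense and note-stealing the same way it credits scoring.
contrib, *_ = np.linalg.lstsq(A, margins, rcond=None)
for t, c in zip(teams, contrib):
    print(f"{t}: {c:+.1f}")
```

With real event data you would have many more matches than teams, so the solution is a genuine regression rather than an exact fit.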

1 Like

Depends on who’s doing the scouting.

2 Likes

One other “note” on this year: I think overall offensive output has also been dependent on the quality of your partners and the event in general.

Since scoring this year involves a multiplier, having partners who can make use of that matters.

In 2023 the ability to score 10 pieces a match generally gave you the same output every match. This year it gives you wildly different scores depending on what your partners do. If all you do is shoot 5-point notes with good partners, you can score high. If you have to split time between amping and shooting unamplified because you are the only amp-capable robot on the alliance, you score lower.

Auto compatibility also has a big impact on individual output.
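A quick arithmetic sketch of the partner dependence described above. The 5-point figure comes from the post itself; the plain-speaker and amp values are assumed 2024 teleop numbers:

```python
# Same 10 notes per match, two different alliance contexts.
# Assumed 2024 teleop values: amplified speaker = 5, plain speaker = 2, amp = 1.
AMPLIFIED, SPEAKER, AMP = 5, 2, 1

with_good_partners = 10 * AMPLIFIED   # partners keep the speaker amplified
only_amp_bot = 5 * AMP + 5 * SPEAKER  # split time amping and shooting unamplified

print(with_good_partners, only_amp_bot)
```

Same 10 game pieces, a severalfold difference in points, which is exactly why raw piece counts in scouting data need context.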

1 Like

Two things:

  • Not being in the way (enabling alliance partners)
  • Defense (hindering opponents)

Being able to shoot from anywhere not directly in front of the subwoofer is a positive.

Reducing the opponent’s score is effectively the same as increasing your own, so defensive play (especially opportunistic) should be valued.

Totally agree, it was a poorly thought-out question. However, there is a difference between scouting a shuttling robot and scouting a robot that shoots shuttled notes. Shooting shuttled notes is not the same as running full-field cycles, and the difference is not easy to quantify in scouting, especially since shuttled notes can end up at varying distances from the scoring zone depending on the accuracy of your alliance partner. Qualitative scouting definitely solves most of the issues here, but in the interest of keeping the picklist meeting to a reasonable length, it's often not practical.

Both EPA and OPR are just regression strategies, and you could modify them or create your own algorithm to fit your individual scouting data. That will probably require a fair amount of math and statistics knowledge, though.
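As a sketch of that, the usual OPR solve generalizes to any alliance-level column: swap the FMS alliance score for a scouted alliance aggregate and the same least squares gives per-team contributions to that quantity. All names and numbers below are hypothetical:

```python
import numpy as np

def component_opr(membership, alliance_totals):
    """Least-squares per-team contribution to any alliance-level total.

    membership: (matches x teams) 0/1 matrix of who played each match.
    alliance_totals: one value per match -- the usual FMS score, or any
    scouted alliance aggregate (e.g. shuttled notes converted to points).
    """
    contrib, *_ = np.linalg.lstsq(membership, alliance_totals, rcond=None)
    return contrib

# Hypothetical example: 3 teams, 3 matches of 2-team alliances,
# with a scouted alliance total of "fed notes converted".
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
fed = np.array([8.0, 6.0, 4.0])
print(component_opr(A, fed))
```

The statistics knowledge comes in when deciding how much data you need before the estimates stabilize; with only a handful of matches the solve is noisy or underdetermined.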

So I think what you’re looking for is a lot simpler than you’re making it.
Take your scouting data (presumably in a spreadsheet) and make a pivot table. Assign a point value to the columns you care about, and you can weight what you want to look for. You can do this on the fly by altering the formulae once the pivot table is built. This involves having a conversation about what you care about as a team, but you’re effectively making your own version of OPR for your first pick. Then maybe you’re looking for a totally different set of criteria for a second pick: change the weights and make a different list.