#5
18-12-2014, 10:44
Nemo
Team 967 Mentor
AKA: Dan Niemitalo
FRC #0967 (Iron Lions)
Team Role: Coach
 
Join Date: Nov 2009
Rookie Year: 2009
Location: Iowa
Posts: 803
Re: Strongest regional competitions

Quote:
Originally Posted by Abhishek R
Personally, I want to see some kind of standardized z-score system that takes last year's OPRs of events and compares them with the mean score of that week of competition. Of course, all the qualms of OPR would apply, but I think it's a little more numerical and detailed compared to BBQ, and takes rankings to a minimal factor.
Before anyone reads the following and feels angst over the obvious shortcomings of using imperfect past metrics to evaluate a team's performance, remember: this is just for fun.

I've been toying with concepts like this in my head as well. I'd like to create an index that tries to compare performance at different events more fairly, similar to the way baseball stats can adjust for things like park effects, league effects, different levels of offense in different years, and so on.

For evaluating replacement level, rather than the event's average OPR, I'd probably favor something like the average OPR of teams 20-28 on the event's OPR list. And for evaluating how hard the event is to win, I'd probably look at the average OPR of the top 4-8 teams, or maybe even just the top 2-3.
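A minimal sketch of what I mean, in Python, assuming you already have a list of OPR values for one event (say, pulled from The Blue Alliance); the function names and exact slice boundaries are just placeholders for the guesses above:

Code:
def replacement_level(oprs):
    # Average OPR of the teams ranked 20-28 on the event's OPR list
    # (assumes a typical ~40-team event so the slice isn't empty).
    ranked = sorted(oprs, reverse=True)
    middle = ranked[19:28]
    return sum(middle) / len(middle)

def winning_bar(oprs, top_n=3):
    # Average OPR of the top few teams -- how high the bar is to win the event.
    ranked = sorted(oprs, reverse=True)
    return sum(ranked[:top_n]) / top_n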

It depends on what one is looking for. If you want to know how hard an event is to WIN, then you mainly need to look at the strength of the top two teams other than your own to know how high the bar is. If you want to gauge how hard it is to make the semifinals, on the other hand, you're probably looking at the strength of the top 10 or so teams, because you want to be in that group to have a good shot at landing on one of the top 5 or so alliances and avoiding being the underdog in the quarterfinals.
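To make that concrete, here's a loose sketch (same caveats as above) of "how high is the bar" for a specific team at a specific event; the data layout (a dict of team number to OPR) is just an assumption for illustration:

Code:
def bars_for_team(oprs_by_team, my_team):
    # OPRs of everyone at the event except your own team, best first.
    others = sorted((opr for team, opr in oprs_by_team.items() if team != my_team),
                    reverse=True)
    win_bar = sum(others[:2]) / 2     # strength of the top two teams besides you
    semi_bar = sum(others[:10]) / 10  # roughly the group you want to be in for a semis run
    return win_bar, semi_bar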

I'm just spitballing here, but I think it might make sense to weight the value of an event win by the average OPR of the top ~3 teams, compared to the average of the top ~3 across ALL events, then assign more or less value to a win based on how the event stacks up. That's for the 1st and 2nd robots on the alliance - I'd probably want to do something different for the 2nd pick or a backup robot. For finalists (robots 1 and 2), I'm probably looking at the average of robots ~3-5 compared to that average across all events, and so on. This has issues and it's still a loose idea in my mind, but I think it would provide a bit more of a basis for comparing a team's win at Event A to another team's semifinalist finish at Event B.
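Here's roughly what that weighting could look like in code. It's only a sketch of the loose idea above, not a finished metric; all_events (a dict of event key -> list of OPRs) and the 0-3 / 2-5 slices are assumptions standing in for the "top ~3" and "robots ~3-5" guesses:

Code:
def top_slice_avg(oprs, start, stop):
    ranked = sorted(oprs, reverse=True)[start:stop]
    return sum(ranked) / len(ranked)

def event_weights(all_events):
    # Strength of each event's likely winners and finalists.
    win_strength   = {e: top_slice_avg(o, 0, 3) for e, o in all_events.items()}
    final_strength = {e: top_slice_avg(o, 2, 5) for e, o in all_events.items()}
    # Baselines: the same statistic averaged across ALL events.
    win_baseline   = sum(win_strength.values()) / len(win_strength)
    final_baseline = sum(final_strength.values()) / len(final_strength)
    # A win at event e would then be worth win_strength[e] / win_baseline
    # "standard wins"; a finalist finish, final_strength[e] / final_baseline.
    return ({e: v / win_baseline for e, v in win_strength.items()},
            {e: v / final_baseline for e, v in final_strength.items()})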