07-09-2016, 20:06
Lil' Lavery
TSIMFD
AKA: Sean Lavery
FRC #1712 (DAWGMA)
Team Role: Mentor
 
Join Date: Nov 2003
Rookie Year: 2003
Location: Philadelphia, PA
Posts: 6,563
Re: paper: Stop the Stop Build

Great work gathering and presenting this data. It was a really eye-opening read in many ways.

However, I am given pause by some of the leaps taken when discussing the statistics presented in this paper. As engineers, I think we've all heard the oft-repeated phrase "Correlation does not equal causation." Some pretty dramatic leaps are taken in the analysis of points 3 and 5 that ignore a host of other factors.

On point 3, Fig(2) isn't as clear to me as the preceding paragraph claims it to be. The highest portions of the lost-teams curve correspond with the high portions of the 2015 teams' OPR distribution. That is to say, the most teams are lost from the OPR brackets that contain the most teams in total, which is to be expected. Admittedly the skew shifts between the two plots, but I would like to see the actual loss ratios for each bucket rather than just raw totals.
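To be concrete about what I mean by a loss ratio: lost teams in a bucket divided by total teams in that bucket. A quick sketch with invented numbers (the bucket boundaries and counts here are hypothetical, not from the paper's data):

```python
# Hypothetical OPR buckets -- counts are made up for illustration only,
# not taken from the paper.
opr_buckets = {
    "0-10":  {"teams": 400, "lost": 60},
    "10-20": {"teams": 900, "lost": 90},
    "20-30": {"teams": 700, "lost": 40},
    "30+":   {"teams": 300, "lost": 10},
}

for bucket, counts in opr_buckets.items():
    # The ratio, not the raw total, shows whether low-OPR teams
    # are disproportionately likely to be lost.
    ratio = counts["lost"] / counts["teams"]
    print(f"OPR {bucket}: {counts['lost']}/{counts['teams']} lost ({ratio:.1%})")
```

Note that in these made-up numbers the "10-20" bucket loses the most teams in raw terms (90), but the "0-10" bucket has the worse loss ratio (15% vs 10%) — exactly the distinction raw totals hide.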

Further still, while it's obvious from the tails of the plots that extremely poor performers are more likely to fail than extremely strong performers, there is a plethora of factors that could potentially explain that, rather than the teams failing because of their poor performance. Are these poor-performing teams particularly inexperienced, underfunded, under-resourced, or under-mentored? FiM clearly has some degree of feedback on this, but the dynamic and culture of FiM vary greatly from the rest of FRC given the levels of state sponsorship and funding. If FIRST HQ has similar surveying of teams lost to attrition, I would be very eager to see it. Given the other potential stressors on team retention among these extremely poor performers, I would be very cautious about leaping to the conclusion that stronger on-field performance would result in them surviving to future seasons.

On point 5, I would like to echo the previous concern voiced by Greg Woelki. Each population in point 5 is a subset of the previous one, but not a uniform sample of it. By removing 1-event teams from the 2nd-event population, you're narrowing the sample to the teams that had the resources to compete twice, introducing a selection bias. There are even stronger selection biases among multiple-event teams once you start factoring in teams that attended their district championships and/or the FRC championship.

This selection bias is demonstrated in fig(6). Teams playing 1 event have a lower OPR at their first event than teams playing 2 events. That suggests teams capable of competing multiple times are already at a higher level than those without the resources to compete multiple times. The upward trends of all five groupings do mitigate the selection-bias concern to an extent, as they show repeated plays do in fact help teams improve their performance, but the raw averages of the OPRs mirror much of what is argued in point 6 (the better-performing teams are already better and remain better). The average of the "Teams Playing 2" sample fails to reach the "Teams Playing 3" sample's beginning-of-season OPR, even after their 2nd event.
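To illustrate the selection-bias concern, here's a toy simulation (all numbers invented, not fit to FRC data) in which a team's resources drive both how many events it can afford and its baseline OPR, with no improvement effect at all. Grouping by events played still reproduces the pattern in fig(6), where multi-event teams start higher:

```python
import random

random.seed(0)

# Toy model: "resources" determines both events played and baseline OPR.
# Crucially, no team improves between events in this model.
teams = []
for _ in range(1000):
    resources = random.random()
    events_played = 1 if resources < 0.6 else (2 if resources < 0.9 else 3)
    baseline_opr = 20 + 30 * resources + random.gauss(0, 5)
    teams.append((events_played, baseline_opr))

for n in (1, 2, 3):
    oprs = [opr for played, opr in teams if played == n]
    print(f"teams playing {n}: mean first-event OPR = {sum(oprs) / len(oprs):.1f}")
```

Even though no team improves in this model, the "teams playing 3" group starts well above the "teams playing 1" group, purely because of who selects into multiple events.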

Most of all, both figures in point 5 are arguing that teams with more plays improve as the season progresses. There is a distinct difference between more plays (competition matches) and purely more robot access. While more competitions do mean more access, they also bring a plethora of other factors, namely driver experience and competition field access. It's hard to say whether more robot access alone would achieve the same positive trends (or whether the gaps that already exist in point 6 could even be widened further). I'd be willing to wager that access to competition fields is a huge resource and a giant factor in the improved performance of teams that get repeat plays. I'd also argue that fig(6) suggests this, as the steepest positive slopes in all four repeated-play samples are between event 1 and event 2 (when teams get to test their robot on a real field for the first time).

Do not take this post as a criticism of the concepts proposed in this paper or of the elimination of bag day. I have not formed a strong opinion on either of those issues at this point, as I see very valid arguments on both sides. Also do not take this as a criticism of Jim Zondag or the paper as a whole. I love the effort and dedication to the program Jim has shown, and the passion put into writing such a paper with the goal of moving FRC in a direction Jim feels is best for the program.