#1
Re: paper: Stop the Stop Build
Great work gathering and presenting this data. It was really an eye-opening read in many ways.
However, I am given pause by some of the leaps taken when discussing the presented statistics in this paper. As engineers, I think we've all heard the oft-repeated phrase "Correlation does not equal causation." There are some pretty dramatic leaps taken in the analysis of points 3 and 5 that ignore a host of other factors.

On point 3, Fig(2) isn't as clear to me as the preceding paragraph claims it to be. The highest portions of the lost-teams curve correspond with the high portions of the 2015 teams' OPR distribution. That is to say, the most teams are lost from the OPR brackets that have the most teams total, which is obviously to be expected. Admittedly the skew shifts between the two plots, but I would like to see the actual loss ratios for each bucket rather than just raw totals. Further still, while it's obvious from the tails of the plots that extremely poor performers are more likely to fail than extremely strong performers, there are a plethora of factors that could potentially explain that, rather than the teams failing because of their poor performance. Are these poor-performing teams particularly inexperienced, underfunded, under-resourced, or under-mentored? FiM clearly has some degree of feedback on this, but the dynamic and culture of FiM vary greatly compared to the rest of FRC, given the levels of state sponsorship and funding. If FIRST HQ has similar surveying of teams lost to attrition, I would be very eager to see it. Given the other potential stressors on team retention among these extremely poor performers, I would be very cautious about making any leap that stronger on-field performance would result in them surviving to future seasons.

On point 5, I would like to echo the previous concern voiced by Greg Woelki. Each population in point 5 is a subset of the previous one, but not a uniform sampling of it. By removing 1-event teams from the 2nd-event population, you're narrowing the sample to the teams that had the resources to compete twice, which introduces a selection bias. There are even stronger selection biases with multiple-event teams once you start factoring in teams that attended their district championships and/or the FRC championship. This selection bias is demonstrated in Fig(6): teams playing 1 event have a lower OPR at their first event than teams playing 2 events. That suggests that teams capable of competing multiple times are already at a higher level than those without the resources to do so. The upward trends of all five groupings do mitigate the selection-bias concern to an extent, as they show repeated plays do in fact help teams improve their performance, but the raw totals of the average OPRs mirror much of what is argued in point 6 (the better-performing teams are already better and remain better). The average of the "Teams Playing 2" sample fails to reach the "Teams Playing 3" sample's beginning-of-season OPR, even after their 2nd event.

Most of all, both figures in point 3 are arguing that teams with more plays improve as the season progresses. There is a distinct difference between more plays (competition matches) and purely more robot access. While more competitions do mean more access, they also bring a plethora of other factors, namely driver experience and competition-field access. It's hard to say whether more robot access alone would achieve the same positive trends (or whether the gaps that already exist in point 6 could even be widened further). I'd be willing to wager that access to competition fields is a huge resource and a giant factor in the improved performance of teams that get repeat plays. I'd also argue that Fig(6) even suggests this, as the steepest positive slopes in all four repeated-play samples are between event 1 and event 2 (as teams get to test their robot on a real field for the first time).

Do not take this post as a criticism of the concepts proposed in this paper or of the elimination of bag day. I have not yet formed a strong opinion on either of those issues, as I see very valid arguments on both sides. Also do not take this as a criticism of Jim Zondag or the paper as a whole. I love the effort and dedication Jim has given the program, and the passion put into writing such a paper with the goal of moving FRC in a direction Jim feels is best for the program.
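The loss-ratio suggestion above could be computed along these lines. This is only a minimal pandas sketch under stated assumptions: a per-team table with hypothetical columns `opr` (2015 season OPR) and `returned` (whether the team came back the next season), and a made-up bucket width.

```python
# Sketch of the suggested loss-ratio computation: instead of raw counts
# of lost teams per OPR bucket, divide by the total teams in each bucket.
# Column names ('opr', 'returned') and the bucket width are hypothetical.
import pandas as pd

def loss_ratios_by_opr_bucket(teams: pd.DataFrame,
                              bucket_width: float = 10.0) -> pd.DataFrame:
    """teams: one row per team, with columns 'opr' and 'returned'."""
    # Floor each OPR to the lower edge of its bucket.
    buckets = (teams["opr"] // bucket_width) * bucket_width
    grouped = teams.groupby(buckets)["returned"]
    out = pd.DataFrame({
        "teams_in_bucket": grouped.size(),
        "teams_lost": grouped.size() - grouped.sum(),  # not returned = lost
    })
    out["loss_ratio"] = out["teams_lost"] / out["teams_in_bucket"]
    return out
```

A ratio plot from this table would show directly whether low-OPR buckets lose a disproportionate share of their teams, rather than just the most teams in absolute terms.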
#2
Re: paper: Stop the Stop Build
Jim,
Thanks for all the great data analysis, tied together with great commentary! I have been a bit on the #keepthebag side, mostly from a "devil you know" philosophy. After a first quick read, I'm now squarely #onthefence, moving towards #banthebag.

Sean, thanks for pointing out all of the weak points I'd noticed as I read, and a couple more. Even given those, there can be no reasonable doubt that more time with hands on the robot and more drive practice (not necessarily in that order) means increased ability for a team to perform game functions and be competitive.

All, since reading about the poll this morning, I've been pondering the question of whether we'd still do a second robot if there were no bag, or (later) 8 hours of access per week. As background info: we're competing in the regional model, and this part of the country is still several years away from the team density to support districts. For the foreseeable future, we're looking at district registration plus full team travel and hotel costs for a second event. We managed to binge-fund a trip to CMP in 2015, and drew in a few more sponsors, but unless we get a mentor or student with a better talent (or at least drive) for drawing funding, we'll probably be able to afford a second regional about the same time we transition to districts.

At 8 hours per week, we would probably expand our Saturday build (currently six hours) to eight or nine, do a single unbagging each week where we did fabrication, drive practice, pit crew practice, and robot upgrades in a rush, and use our much shorter weekday evening sessions for planning, CADding, and working with a practice robot that we would definitely still build. At 20-168 hours of unbag per week, the question becomes a bit murkier, but I still think we would do two robots. Two robots are already part of our pre-bag process (swapping robots between project groups, including chassis, manipulators, programming, and drive team), so unless we lose a significant amount of resources (money, facilities, mentors, or students), we would probably tweak the second-robot process, but not cancel it.

The thing that excites me about a protracted unbagging each week is the possibility of a scrimmage. Currently, teams that do not build a second robot cannot even think about competing at a scrimmage, so there is no point in holding one in our area; I believe we are one of very few teams that could. With an 8-hour unbagging window each week, I could definitely see enough teams to support a 3-6 hour scrimmage every week or two between "initial bag" and the Bayou Regional, if we can identify a facility and carpet large enough to host the event.
#3
Re: paper: Stop the Stop Build
I'll also echo the desire for loss ratios by OPR bucket for Figure 2. That will probably have a lot of noise, though, and if possible, the case would likely be stronger after normalizing the OPRs and aggregating multiple years. I don't know what your database looks like, though, so this might be a pain.

I think there's also a way to address the questions that arise with Figure 6, but I'm not sure what it is yet. There should be a way to directly separate the relative difference in OPR between the populations from the changes in each over time (demonstrating the salience of each factor). Similar to what Sean mentioned, for 2- versus 3-event teams, the fact that you are a 3-event team appears to be almost as useful as, if not more useful than, actually playing your third event--I would guess largely because you're a team that's going to qualify for DCMP (or CMP) based on your prior performance. This is not to dismiss the paper's point 5, which the figure is supporting, but the data is interesting.

Overall, I think this case could benefit from talking more about the dataset. In the OPR progression, how many DCMP and CMP performances are in Figure 5's green 3rd-event line versus just a 3rd "normal" (district or regional) event? Is there enough data from "normal" 3rd events to look at this directly, or do we have another proxy adjustment available? Dropping teams that didn't qualify for DCMP is certainly going to shift the OPR distribution regardless of play number.
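The year-normalization idea mentioned above could look something like this; a minimal pandas sketch, assuming a hypothetical table with `year` and `opr` columns, using a per-season z-score so multiple years with different scoring scales can be pooled.

```python
# Sketch: z-score OPRs within each season so different years' scoring
# scales can be aggregated. Column names ('year', 'opr') are assumptions.
import pandas as pd

def normalize_opr_by_year(df: pd.DataFrame) -> pd.DataFrame:
    """Add an 'opr_z' column: (opr - season mean) / season std dev."""
    out = df.copy()
    by_year = out.groupby("year")["opr"]
    out["opr_z"] = (out["opr"] - by_year.transform("mean")) \
        / by_year.transform("std")
    return out
```

With every season on the same scale, loss ratios (or OPR progressions) from several years can be binned together, which should cut down the noise in any single year's buckets.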
#4
Re: paper: Stop the Stop Build
All,
I may have missed this point in the great number of thoughtful responses: if the B&T is modified, what impacts/advantages can be realized for the competition season schedule?

Just Wonderin'