Quote:
Originally Posted by Andrew Schreiber
I don't think this is as impossible as you think it is. Actually running thousands of matches may be impossible, but within a week I've often run simulations of thousands of matches (likely many more). That's me, with Excel/R. Seriously, it's not hard. Do I catch everything? Nah. But it would have told me that in 2011 the minibots were ridiculously overvalued. (2012 was the first year I started doing real models and running a bunch of scenarios; they've gotten more complicated every year.)
Quote:
Originally Posted by pfreivald
I'm not a computer game guy, but I've been paid in real life for design work on card, board, roleplaying, and tabletop miniatures wargames. I think the best parallel between any of these and FRC is the last one, because it's very open-ended in terms of strategy (including force selection) and tactics (in-game actions).
I've always had some nagging doubts about that comparison between FRC and other types of gaming, and a hypothesis to explain those doubts just occurred to me. In a tabletop game, while there may be a diversity of strategies, those strategies are unlikely to depend fundamentally on the execution of a complex task whose success is, as a practical matter, non-deterministic because of a combination of physical randomness (imprecision in control, variability in game pieces and the field, etc.), human perception (by drivers, officials, etc.), and tournament sorting (essentially random in qualification, but without enough iterations to represent all permutations). The tabletop game is essentially a very complicated set of strategic possibilities with well-defined randomness. An FRC game (or any physical sport) is only reducible to a set of strategies if you also have a way of reliably modeling the effectiveness of strategic execution in an unpredictably random environment. Computer games probably fall somewhere in between, depending on the nature of the simulation.
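To make that contrast concrete, here's a minimal sketch in Python (the thread mentions Excel/R; Python just keeps the example self-contained). Every number in it is a made-up illustration, not real scouting data. The tabletop-style score comes from well-defined randomness (dice), while the FRC-style score depends on a task-success probability that is itself uncertain:

Code:
import random

def tabletop_style_score(rng):
    # Well-defined randomness: known dice, as in a tabletop game.
    return sum(rng.randint(1, 6) for _ in range(3))

def frc_style_score(rng, cycles=10):
    # Execution-dependent randomness: the per-attempt success
    # probability is itself drawn from a distribution, standing in
    # for control imprecision, field variability, and driver skill.
    p_success = rng.betavariate(4, 2)   # uncertain capability
    made = sum(1 for _ in range(cycles) if rng.random() < p_success)
    return made * 3                     # 3 points per completed cycle

rng = random.Random(0)
for name, fn in (("tabletop", tabletop_style_score),
                 ("frc-style", frc_style_score)):
    scores = [fn(rng) for _ in range(10_000)]
    print(name, min(scores), round(sum(scores) / len(scores), 1),
          max(scores))

The point of the sketch is that the second distribution has a source of spread (uncertainty about capability itself) that no amount of averaging over dice rolls will reproduce.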
If those unpredictably random factors are major contributors to the outcome of an FRC tournament (as I suspect is the case), then designing the game on the basis of the straightforwardly predictable components may be insufficient. That's not to say I disagree with the idea of using a statistical model of an FRC game for game-design purposes, just that for it to have validity, we need to be clear about the limitations of the model.
Better experimental methods might go a long way toward eliminating those limitations, but the feasibility of some possible approaches is rather questionable. For example, we could gather input and output data from robot mechanisms (of the kind used in closed-loop feedback control) to get a sense of robots' ability to physically execute tasks (confounded by operator ability, of course). Or we could record everything spoken in the question box and code it for rule compliance (subject, of course, to the existence of a canonical interpretation) to assess the quality of officiating and the likelihood of teams obeying the rules.
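If we did manage to collect that kind of mechanism data, even crude analysis would be informative. As a hedged sketch (the attempt/success counts and the flat Beta(1, 1) prior below are purely hypothetical), logged attempts at a task can be turned into a distribution over a robot's true success rate, rather than a single point estimate:

Code:
import random
from statistics import mean

attempts, successes = 40, 28    # hypothetical logged data
# Beta posterior from a flat Beta(1, 1) prior over the success rate.
alpha, beta = 1 + successes, 1 + (attempts - successes)

rng = random.Random(1)
samples = sorted(rng.betavariate(alpha, beta) for _ in range(5_000))
print(f"mean success rate ~ {mean(samples):.2f}")
print(f"90% interval ~ ({samples[250]:.2f}, {samples[4750]:.2f})")

A distribution like that, rather than a single number per robot, is exactly the kind of input a simulation of "unpredictably random" execution needs.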
At the cost of analytical rigour, it's probably reasonably practical for the GDC to get much of the benefit of those formal methods by soliciting input from people who are intimately familiar with those factors across a spectrum of FRC events and similar competitions, and asking for quantitative estimates of the distribution of variance. If the GDC isn't already doing this, a good first step might be to try it on a selection of past games (for which the outcomes are known), to see whether it improves the predictive power of the models.
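That back-testing step could be as simple as the sketch below; all of the score means, standard deviations, and results are placeholders rather than data from real matches. The idea is to simulate a win probability from the elicited estimates for each past match, then score the predictions against the known outcomes with something like a Brier score:

Code:
import random

def win_prob(mu_red, sd_red, mu_blue, sd_blue,
             n=2_000, rng=random.Random(2)):
    # Monte Carlo estimate of P(red beats blue) from elicited
    # score means and standard deviations for each alliance.
    wins = sum(rng.gauss(mu_red, sd_red) > rng.gauss(mu_blue, sd_blue)
               for _ in range(n))
    return wins / n

# (mu_red, sd_red, mu_blue, sd_blue, red_actually_won) -- hypothetical
past = [(60, 15, 50, 20, True),
        (45, 10, 55, 12, False),
        (70, 25, 68, 8, True)]
brier = sum((win_prob(r, sr, b, sb) - won) ** 2
            for r, sr, b, sb, won in past) / len(past)
print(f"Brier score: {brier:.3f} (0 = perfect, 0.25 = no skill)")

If the expert-elicited variances make the Brier score on past seasons noticeably better than a naive model's, that's some evidence the elicitation is actually capturing the unpredictably random factors.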