Please post any feedback (bugs, feature requests, etc.) you might have in this thread or create an issue on GitHub for it. Development is a never-ending process, and we’re always trying to improve our products.
As with all of The Blue Alliance, this app as well as TBA for iOS (still in the works) are entirely open source. If you are a mobile developer and want to help out (or if you want to learn), take a look at the GitHub page and start contributing - we’ve still got a lot of awesome ideas we want to implement before the season starts, and the more people helping, the faster we get cool stuff. We’d love to have your help!
A couple bugs I noticed at first glance are attached. Under the awards tab for a team at an event, different teams’ awards are listed. A tie (match 54) is highlighted as a win.
Would also be nice to be able to pin teams or events to the tops of tabs. Love the summary pages for teams at events, maybe split quals/elims record from overall record. Could even list awards there instead of as a standalone tab (could go either way on that).
Thanks to everyone who worked on this, especially Phil Lopreiato and Nathan Walters for taking initiative and spearheading the project. The app will only get better with time – I can’t wait for some of the things we’re planning on implementing. :]
That’s difficult. Many teams have different in-house prediction algorithms. I do know of a probabilistic ranking prediction algorithm, but it’s not very lightweight – I recall it taking a couple of hours to process 15 or so matches.
Prediction is tricky. It’s easy to do a deterministic prediction (win/lose), but that’s almost useless. How close is the win? How likely is that outcome? What’s far more valuable in most situations is a probabilistic prediction.
For a simple OPR-based probability, you could do something like this:
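A minimal sketch of that idea (the OPR values here are made up for illustration; in practice you’d pull each team’s OPR from your own data, e.g. via The Blue Alliance API):

```python
# Hypothetical summed alliance OPRs for illustration only.
red_opr = 40.0 + 30.0 + 20.0   # red alliance's three teams
blue_opr = 15.0 + 10.0 + 5.0   # blue alliance's three teams

# Red's win probability is simply its share of the total OPR.
p_red_win = red_opr / (red_opr + blue_opr)

print(p_red_win)  # 0.75 - red has 75% of the OPR, so a 75% chance
```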
Basically, this says that Red has a 75% chance to win if Red has 75% of the OPR.
However, you also have to consider here that OPR – the basis for that probability we generated – is not very good for some games. This year was one, and 2012 comes to mind readily. Even in games where OPR is actually pretty good (like 2013), OPR should never be regarded as the ‘god metric’ that many people think it is. See my whitepaper for a more detailed numerical analysis.
A better method of prediction in games like 2013 or 2012 where scoring was more linear and separable (and where scores were approximately normally distributed) would be to calculate the average points teams were putting up and the standard deviation in those points. You could then generate a normal model for total red points, with:
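One way to sketch that normal model (the per-team means and standard deviations below are hypothetical; you’d estimate them from each team’s match history): the alliance total is the sum of independent normals, so its mean is the sum of the team means and its variance is the sum of the team variances. Red wins when the score margin (red minus blue) exceeds zero, which a normal CDF gives directly.

```python
import math

# Hypothetical (mean, standard deviation) of each team's points per match.
red_teams = [(30.0, 8.0), (25.0, 6.0), (20.0, 5.0)]
blue_teams = [(28.0, 7.0), (22.0, 9.0), (18.0, 4.0)]

def alliance_model(teams):
    """Sum of independent normals: add the means, add the variances."""
    mu = sum(m for m, _ in teams)
    sigma = math.sqrt(sum(s * s for _, s in teams))
    return mu, sigma

mu_r, sd_r = alliance_model(red_teams)
mu_b, sd_b = alliance_model(blue_teams)

# The margin (red - blue) is also normal; red wins when it is positive.
mu_diff = mu_r - mu_b
sd_diff = math.sqrt(sd_r ** 2 + sd_b ** 2)
p_red_win = 0.5 * (1.0 + math.erf(mu_diff / (sd_diff * math.sqrt(2.0))))

print(round(p_red_win, 3))
```

Unlike the OPR ratio, this gives a probability that reflects both how far apart the alliances are and how consistent each team is.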