Twitter decoding program
Well, while it is easy to read the Twitter results from the FMS (after getting used to it), I was wondering if someone has already made a program that will take the Twitter feed and organize it into a more orderly form.
The Twitter feed I am talking about is the one posted by the FMS at the regionals. http://twitter.com/#!/frcfms
Re: Twitter decoding program
Quote:
http://www.github.com/tombot/FmsGrabber
Re: Twitter decoding program
4 Attachment(s)
Decided to make my own Twitter decoding program that tries to gather the average points for the teams.
It outputs a CSV file with the teams in one column and the average foul points, score, etc. for the matches each team was in.
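Here is roughly what that averaging-and-CSV step could look like (a minimal sketch, not the actual program; MatchRecord and its fields are hypothetical placeholders, and the parsing is omitted):
Code:
// Sketch: average alliance-level stats over every match a team played,
// then dump one row per team to a CSV file. Hypothetical data layout.
#include <fstream>
#include <map>
#include <vector>

struct MatchRecord {
    std::vector<int> teams;  // the three team numbers on one alliance
    double score;            // that alliance's final score
    double foulPoints;       // foul points awarded to that alliance
};

int main() {
    std::vector<MatchRecord> records;  // parsed from the Twitter feed (omitted)
    std::map<int, double> scoreSum, foulSum;
    std::map<int, int> matchCount;

    for (const MatchRecord& r : records) {
        for (int team : r.teams) {
            scoreSum[team] += r.score;
            foulSum[team] += r.foulPoints;
            ++matchCount[team];
        }
    }

    std::ofstream csv("team_averages.csv");
    csv << "team,avg_score,avg_fouls\n";
    for (const auto& kv : matchCount) {
        csv << kv.first << ','
            << scoreSum[kv.first] / kv.second << ','
            << foulSum[kv.first] / kv.second << '\n';
    }
}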
Re: Twitter decoding program
Quote:
1) why there was (and is) no Twitter data for Traverse City, and 2) is this data lost forever?
Re: Twitter decoding program
While those sites provide the match results, the Twitter feed provides this as well as a breakdown of how the points were scored by each alliance (IIRC: bridge points, foul points, hybrid points, and tele-operated points). I have yet to find another source for this kind of data.
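For illustration, here is one way a parser could pull those per-alliance breakdowns out of a feed line, assuming a simple "KEY: value" layout. The field names in the example (RF, RB, RH, RT, etc.) are made-up placeholders, not the documented FMS format, so check the actual feed before relying on them:
Code:
// Sketch: extract "KEY: value" pairs from one feed line. The field names
// used in main() are illustrative assumptions, not the real FMS format.
#include <iostream>
#include <map>
#include <regex>
#include <string>

std::map<std::string, int> parseFields(const std::string& tweet) {
    std::map<std::string, int> fields;
    std::regex kv(R"((\w+):\s*(-?\d+))");
    for (auto it = std::sregex_iterator(tweet.begin(), tweet.end(), kv);
         it != std::sregex_iterator(); ++it)
        fields[(*it)[1].str()] = std::stoi((*it)[2].str());
    return fields;
}

int main() {
    // Hypothetical example line; substitute real tweets from the feed.
    auto f = parseFields("MC: 12 RF: 38 RB: 20 RH: 6 RT: 12");
    std::cout << "red final " << f["RF"] << ", red bridge " << f["RB"] << "\n";
}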
Re: Twitter decoding program
1 Attachment(s)
Quote:
Also, see attachment. Does anyone know how the 205 TeleOp points number was computed for Team 67 at Waterford? Unlike the Hybrid and Bridge points, it does not seem to equal the total of the alliance TeleOp points scored in the 12 qual matches by the alliances in which Team 67 was a member.
Re: Twitter decoding program
Quote:
It would be great if someone with a working Twitter parser could compare the ranking results when QS, hybrid, and bridge scores are all tied, to see which is being used as the tiebreaker. Looking at the Alamo regional, there were 2 cases of this: 2721 tied with 4162, and 2583 tied with 2969. Unfortunately, the differences between the teams' TP were large enough that including foul points would be unlikely to change the ranking. I did not see any such ties at Kansas City, BAE, or Smokey Mountain, but there are still a lot of events I didn't look at. I did ask about this on Q/A.
Re: Twitter decoding program
2 Attachment(s)
I have now added least-squares solving in order to better find the "impact" each team had on the score. The results are now much more accurate, and they better predict the results in the elimination matches (the total of the average scores from each alliance member is within about ±5 of that alliance's total score, excluding outliers like Team 93).
(I am only using data from the qualification matches to predict the elimination rounds.) While hand-recording the individual scores of each team would of course be more accurate, this should be a great help in determining which teams provide the most "positive" points to help in the finals.
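For anyone curious, here is roughly what that least-squares setup looks like (a sketch, not the poster's actual code; the AllianceRow struct is a hypothetical placeholder). Each row of A is one alliance in one qualification match, with a 1 in the column of each member team, and b holds that alliance's score; solving Ax ≈ b in the least-squares sense gives each team's estimated per-match point contribution:
Code:
// Least-squares "contribution" sketch using Eigen.
#include <Eigen/Dense>
#include <vector>

struct AllianceRow {        // hypothetical parsed record
    int teamIdx[3];         // column indices of the three member teams
    double score;           // that alliance's score in this match
};

Eigen::VectorXd estimateContributions(const std::vector<AllianceRow>& rows,
                                      int numTeams) {
    const int m = static_cast<int>(rows.size());
    Eigen::MatrixXd A = Eigen::MatrixXd::Zero(m, numTeams);
    Eigen::VectorXd b(m);
    for (int i = 0; i < m; ++i) {
        for (int t : rows[i].teamIdx) A(i, t) = 1.0;
        b(i) = rows[i].score;
    }
    // SVD-based solve, per the Eigen tutorial (LDLT is discussed below).
    return A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
}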
Re: Twitter decoding program
Quote:
http://www.chiefdelphi.com/forums/sh....php?p=1144595
http://www.chiefdelphi.com/forums/sh....php?p=1144727
Oh, and a couple of questions: what linear algebra library are you using, and is there a reason you are using SVD?
Re: Twitter decoding program
The math library is Eigen.
The reason why I am using SVD is that Eigen's tutorial describes it as the way to perform a least-squares solve (http://eigen.tuxfamily.org/api/Tutor...Leastsquares). I don't think missing scores are going to be that bad: as long as most of the scores are posted, there should be enough data to get a reasonably accurate result. If anything, the main problem with my model is that it is very limited, not accounting for defense, autonomous, etc.
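For reference, the tutorial's SVD-based solve looks like this (a minimal sketch with stand-in random data, not the program's actual matrices):
Code:
// SVD-based least squares, following Eigen's linear-algebra tutorial.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 3);  // stand-in match matrix
    Eigen::VectorXd b = Eigen::VectorXd::Random(6);     // stand-in scores
    Eigen::VectorXd x =
        A.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV).solve(b);
    std::cout << "least-squares solution:\n" << x << "\n";
}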
Re: Twitter decoding program
1 Attachment(s)
Quote:
For this application, LDLT would be far faster* and plenty accurate.
Quote:
* For computing least squares for single events, the matrix is small enough that the time difference is probably not even noticeable. But if you ever intend to expand the functionality to compute least squares for a matrix containing all the data from an entire year's worth of events, I believe there would be a very noticeable difference in speed. If you have the time and are so inclined, it would be interesting if you would try SVD with 2011's data and see what the computation time is. For reference, LDLT takes 12 seconds on my 8-year-old PC to do least squares on a matrix populated with all the qual data from all the events in 2011.
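For anyone following along, the LDLT route solves the normal equations instead of decomposing A itself (a sketch with the same stand-in data as above; Eigen's tutorial shows this same pattern):
Code:
// LDLT least squares via the normal equations: (A^T A) x = A^T b.
// A^T A is only teams-by-teams, which is why this is much faster than
// running SVD on the full, tall match matrix.
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 3);  // stand-in match matrix
    Eigen::VectorXd b = Eigen::VectorXd::Random(6);     // stand-in scores
    Eigen::VectorXd x = (A.transpose() * A).ldlt().solve(A.transpose() * b);
    std::cout << "least-squares solution:\n" << x << "\n";
}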