FRC Stats Hub - Open Source Scouting Web App

All,

The 2011 season is just around the corner, so I thought it was an appropriate time to introduce a project that I finished (whatever that means) sometime in September.

First I want to give credit to mobilegamer999 for writing his Pit Display Program (C#) that was extremely popular last year. I got this idea from him.

My version is built on the Ruby web framework Sinatra. Right now it lets users choose an event, pick a team, and view that team's rankings at the event. It's pretty basic, but I wanted to make it public in order to create interest and hopefully take it to the next level.

You can find the project on GitHub. It's open source.

Features

  • Loads all FRC events (including Michigan Districts)
  • Generates a team list for the event
  • Allows user to view team statistics for a given event

Cool Techie Stuff

  • Wildcard Routing
  • RSpec tests
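For anyone curious what wildcard routing means here: Sinatra lets a `*` in a route pattern match a path segment, and the captured pieces show up in `params[:splat]`. As a rough stdlib-only illustration of what that pattern matching does under the hood (the route strings below are hypothetical, not necessarily the app's actual routes):

```ruby
# Minimal sketch of how splat (*) route patterns match URL paths,
# roughly what a framework like Sinatra does internally. Stdlib only.

# Compile a route pattern like "/event/*/team/*" into a regex
# that captures each * as a single path segment.
def compile_route(pattern)
  body = pattern.split('/').map do |seg|
    seg == '*' ? '([^/]+)' : Regexp.escape(seg)
  end.join('/')
  Regexp.new("\\A#{body}\\z")
end

# Return the captured splat values, or nil if the path doesn't match.
def match_route(pattern, path)
  m = compile_route(pattern).match(path)
  m && m.captures
end

# Hypothetical routes, just for illustration:
splat = match_route('/event/*/team/*', '/event/GLR/team/27')
# splat is ["GLR", "27"] when the path matches, nil otherwise
```

In real Sinatra code you'd just write `get '/event/*/team/*' do ... end` and read `params[:splat]`.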

Todo

  • Make calls to the USFIRST site more efficient
  • Style the app

Good luck!

Many people have made similar sites over the years (myself included), so I know how difficult it can be.

May I ask how you’re collecting your information? Are you scraping it from the FIRST site/TBA/etc?

Yup, just using Ruby's open-uri to scrape all this data. I could use the TBA API, but from previous experience it doesn't get updated as quickly as the actual FIRST HTML pages.
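The scraping approach is simple enough to sketch. Since I don't know the actual markup of the USFIRST rankings pages, the URL and table layout below are made-up stand-ins; only the open-uri fetch pattern is the real technique.

```ruby
require 'open-uri'

# Sketch of the scraping approach: fetch a rankings page with open-uri
# and pull rank/team pairs out of the HTML. The URL and table markup
# here are hypothetical; the real USFIRST pages look different.
def parse_rankings(html)
  # Assume each row looks like <tr><td>rank</td><td>team</td></tr>
  html.scan(%r{<tr><td>(\d+)</td><td>(\d+)</td></tr>}).map do |rank, team|
    { rank: rank.to_i, team: team.to_i }
  end
end

# In the app this would be something like:
#   html = open('http://www2.usfirst.org/.../rankings.html').read
# For this sketch, use an inline sample instead of hitting the network.
sample = '<table><tr><td>1</td><td>469</td></tr>' \
         '<tr><td>2</td><td>27</td></tr></table>'
rankings = parse_rankings(sample)
# rankings => [{rank: 1, team: 469}, {rank: 2, team: 27}]
```

A regex works for a quick scrape, but anything fancier probably wants a real HTML parser.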

Have you thought about using the Twitter feed? It's updated within a minute of the end of every match, and the tweet includes all kinds of information: event, match number, teams involved (and the alliance they're on), final score, points scored per alliance, penalties per alliance, and much more. The only drawback is that the format changes each year with the change in game information.

If you're interested, send me a PM and I'll share what I have, or you can check out @FRCFMS on Twitter and look up the API on your own (it's really just parsing an XML or JSON feed).

Yes, I'm familiar with the Twitter feed. That's an option as well, and it's probably a heck of a lot easier to parse through the data, being JSON and everything.

Yeah, I used an 80-line Python script to parse the XML feed and post to a database I had set up. It ran as a cron task every 30 minutes during events to collect all the previous tweets.
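That workflow translates to Ruby pretty directly, using stdlib REXML. To be clear, the element and attribute names below are invented for illustration; the real @FRCFMS format is different and, as noted above, changes from season to season.

```ruby
require 'rexml/document'

# Sketch of the cron-driven collector: parse a batch of match results
# (here as XML) and accumulate them keyed by event and match number.
# The <feed>/<match> structure is made up; the real feed differs.
def collect_matches(xml, store = {})
  doc = REXML::Document.new(xml)
  doc.elements.each('feed/match') do |m|
    key = [m.attributes['event'], m.attributes['number']]
    store[key] = {
      red_score:  m.attributes['red'].to_i,
      blue_score: m.attributes['blue'].to_i
    }
  end
  store
end

sample = <<XML
<feed>
  <match event="GLR" number="12" red="45" blue="38"/>
  <match event="GLR" number="13" red="20" blue="51"/>
</feed>
XML

db = collect_matches(sample)
# db[["GLR", "12"]] => {red_score: 45, blue_score: 38}
```

A cron entry would then just run a script like this on a schedule and write `store` out to whatever database you're using.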

The program is almost ready for the 2011 season! I just posted the latest source (with a major UI change) to my github.

Alright, we’re all set to go for 2011!

You can find it live at:
http://statshub.heroku.com/

Check out this blog post about known issues and features to come.

Enjoy!