IR@Home All Scores

Hey Everyone,

Like many of you are likely doing, I was clicking through each of the events to try and get an idea of where scores were landing. And this was tedious. So instead of spending 5 minutes doing that, I spent significantly more time making this spreadsheet.

It does not run automatically. Too many events for me to want to do the busy work to hook that up. Instead, it’s run by an Apps Script that creates a sheet for each event and then copies each event’s values into the main sheet.

You are welcome to make a copy of the sheet for yourself; just know that I have deleted my API key from the script. You can get your own here. Just open up the scripts for this Google Sheet and paste it in at the top, then run the “RunMe” function. It might take a while to run. I hacked this together for myself, so it’s not the prettiest.

Enjoy!

21 Likes

This is MUCH easier than scrolling through each group individually looking for scores! Thank you!!!

1 Like

Thank you for sharing and taking the time to put it together!!
This is very useful and informative.

Something I would watch out for is that some of the scores for the AutoNav and Hyperdrive challenges may be incomplete. You can upload 1 of the 4 video files and still have it added to your raw score. You can tell because their calculated score is 0 in the category and they only have 2 challenges complete. Just a thing to look out for in the future.
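A quick way to flag those rows programmatically (a sketch; the parameter names are mine, not from the FRC pages) is to look for the pattern described above: a raw score is posted, but the calculated score is still 0.

```python
def looks_incomplete(raw_score, calculated_score):
    """Flag submissions that likely have missing videos: a raw
    score was posted, but the calculated score is still 0."""
    return raw_score > 0 and calculated_score == 0
```

This won’t catch every case, but it matches the symptom described above.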

Dope spreadsheet nonetheless

How are you getting raw scores? They’re not in the FRC API.

1 Like

Not OP, but I wrote a scraper to do this because I couldn’t figure out how to get raw times from the FIRST API. If anyone knows how to get the FIRST API to do this, please let me know; I would rather not have to scrape the data.

Code Snippet to Extract Raw Scores from an IR @ Home Event Code
import requests
from bs4 import BeautifulSoup

# Map the rankColumn CSS classes on frc-events.firstinspires.org
# to their IR@Home challenge names.
CHALLENGE_COLUMNS = {
    "rankColumn2": "galactic_search",
    "rankColumn3": "auto_nav",
    "rankColumn4": "hyper_drive",
    "rankColumn5": "interstellar_accuracy",
    "rankColumn6": "power_port",
}

def get_rankings(event_code):
    """Scrape each team's raw challenge scores from an event's rankings page."""
    url = "https://frc-events.firstinspires.org/2021/{}/rankings".format(event_code)
    bs = BeautifulSoup(requests.get(url).text, "html.parser")

    out = {}
    for tr in bs.find_all("tr"):
        team_num = None
        team_obj = {}
        for td in tr.find_all("td"):
            a_tag = td.find("a")
            if a_tag:
                # The team number is the link text in the first cell.
                team_num = int(float(a_tag.contents[0]))
            elif "rawScore" in str(td):
                for css_class, challenge in CHALLENGE_COLUMNS.items():
                    if css_class in str(td):
                        team_obj[challenge] = float(td.contents[0])
        if team_num:
            out[team_num] = team_obj
    return out
3 Likes

Then you have this for 5712 in Fluorine…

12.5 seconds for the sum of three paths? Looks like it’s complete and has a score posted. My concept of what is possible may have to be recalibrated if this is a complete score.

3 Likes

I think we’re all scraping at this point. The Google Sheet by the OP uses an FRC API call to get the event codes, but then scrapes the FRC Events pages to get the data. I tried every documented FRC API call to find the IRAH raw data, but it’s not there. I’m guessing that the FRC Events page is using an API call to get its data (just good IT practice), but if it is, that API is not currently documented.

Maybe if someone has contact information for Alex Herreid, they could inquire about a possible undocumented API.

1 Like

Yup. I’m checking whether a computed score exists in my top- and average-score logic. And just as was said, it seems FIRST has some bugs for a few teams.

Just scraping. I use the API to get all the events and then use Sheets’ IMPORTHTML function to pull the table from the web.
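Outside of Sheets, the same two-step flow can be sketched in Python. The Basic-auth style and the `Events` field in the response are my reading of the FRC Events API docs, so treat them as assumptions and check them against your own key:

```python
import requests

API_BASE = "https://frc-api.firstinspires.org/v3.0"

def get_event_codes(season, username, auth_key):
    """Fetch all event codes for a season from the FRC Events API.
    Assumes HTTP Basic auth and an 'Events' list in the JSON response."""
    resp = requests.get(
        "{}/{}/events".format(API_BASE, season),
        auth=(username, auth_key),
    )
    resp.raise_for_status()
    return [event["code"] for event in resp.json()["Events"]]

def rankings_url(event_code, season=2021):
    """Build the public rankings page URL that IMPORTHTML (or a scraper)
    can pull the table from."""
    return "https://frc-events.firstinspires.org/{}/{}/rankings".format(
        season, event_code
    )
```

`rankings_url(code)` is the address you’d hand to IMPORTHTML (or a scraper) for each event code the API returns.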

It’s not only entirely possible, but very likely they’re using an internal-only API to power their rankings content on FRC events. They don’t have anything in the FRC-Events API for raw score when I checked either, only calculated score, but they do have an errata for score calculation. Shame they didn’t give us the raw score and let us calculate the rest.

Funny how they still have a HTML comment that says “don’t scrape use the API!” and then don’t publish all the data in the API…

5 Likes

Looking at their YT, it may be just their Barrel Racing run.

2 Likes

Looks like some bugs have been fixed. 5712 now has more values in there, and the Power Port bug (not allowing scores above 45) also seems to be resolved. The highest score is now 73.

Wow that’s a high score! 9 cycles right? <7 seconds per cycle :open_mouth:

For sure!!
I’m really curious to see what the high end of the PPC is going to be!!

3 Likes

Some fun facts so far after updating this morning:

  • 61 teams have at least 1 computed score so far out of 1543 teams
  • The most popular challenge is the Hyperdrive Challenge with 44 submissions (not necessarily complete submissions though)
  • The least popular challenge is the Galactic Search Challenge with only 13 submissions
  • Out of 30 Interstellar Accuracy submissions, 5 are perfect scores
8 Likes

I hope more scores start coming up to make these last couple of weeks more fun! It’d be cool to know what the elite teams are doing and try to keep up.

Happy 2.5 weeks to go! Did another pull this morning, and not much has moved.

  • High Scores look like they did not move over the weekend

  • While there have been 15 more Hyperdrive submissions since before the weekend, there has only been 1 more Galactic Search submission, continuing the trend of Hyperdrive and Galactic Search being the most and least competitive challenges, respectively.

  • 10-15 seconds is the largest pool for the Galactic Search currently. Breaking that 10s barrier seems 100% doable judging by how close we’ve gotten with only 14 submissions.

  • I added Power Port Cycle Estimation to my graphs tab to try and visualize where teams are landing. As you would expect, the distribution is slowly shifting to the right (more cycles) over time. But 4 cycles is the most common. Of course, this graph is inherently biased towards the left because a robot that does 6 cycles of only 2’s will look like a 4 cycle run with how I did my math.
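For reference, that bias falls straight out of the arithmetic. Here is a sketch of the estimate; the 9-points-per-cycle figure is my assumption (3 balls per cycle, all in the 3-point inner port), not necessarily the exact math the sheet uses:

```python
def estimate_cycles(raw_score, points_per_cycle=9):
    """Estimate Power Port cycles from a raw score, assuming 3 balls
    per cycle all scored in the 3-point inner port (9 points/cycle).
    Runs of 2-point outer-port shots get under-counted."""
    return int(raw_score // points_per_cycle)
```

For example, six all-outer-port cycles score 36 points, which this estimate reports as only 4 cycles.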

3 Likes

Almost halfway through the submission window: are you (or anyone else) surprised by the lack of submissions so far?

1 Like

We opened our portal for the first time Wednesday of last week. We still have to pull the flagged runs from the hours-long videos we have been taking and trim them for submission (we really need to start and stop the camera for each run from now on). I’m guessing that a lot of teams won’t even worry about submitting until the last week or so, as there is no real reason to submit early.

Some folks might feel like they don’t want to put a target out there for other teams to aim for either. I know we tend to do better when we know what we want to beat and right now all we have to beat is our last best time.

1 Like