Average score per match and cycle times

During the beginning of the season, when working on strategy and prototyping, we try to predict possible robot cycle times and average points per match. Generally, at the end of the season, we look back on these estimates and laugh, knowing that they were totally unrealistic.

How do other teams predict cycle times and the average number of points scored per match? Is there a rule of thumb people like to use?

Cheers,
Devin

One good rule of thumb is that a good robot will perform tasks (very approximately) as quickly as a human who is moving at a normal, not rushed pace. So try setting up a field or just some scoring elements and timing different team members pretending to be a robot. Another thing that can help is comparing the cycle to previous games and guessing how much easier or harder the tasks will be.
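
As a rough sketch, a few stopwatch runs turn into a per-match estimate like this (the timings and teleop length below are placeholders, not real data):

```python
# Rough sketch: convert stopwatch times of a human "pretending to be a robot"
# into a ballpark cycles-per-match number. All numbers are placeholders.

human_runs_s = [22.0, 25.5, 24.0]   # timed runs of a team member doing one full cycle
teleop_s = 135                      # placeholder teleop length, adjust per game

avg_cycle_s = sum(human_runs_s) / len(human_runs_s)
cycles_per_match = teleop_s / avg_cycle_s

print(f"avg cycle ~{avg_cycle_s:.1f} s -> ~{cycles_per_match:.1f} cycles per match")
```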

I take it you mean unrealistic as in you expected robots to score faster and higher than they actually did during the competition season?

Also, just for the curiosity and observation of us all, would you be willing to share your predictions for recent games?

Imagine an average FRC robot doing the task and think through how long each action normally takes. Take your estimate and multiply it by 3; that is roughly how long an average robot will actually take to complete the task.
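
In other words, it's just one fudge factor (the numbers below are made up):

```python
# The "multiply by 3" rule of thumb applied to an imagined task time.
imagined_task_s = 8.0   # how long you picture an average robot taking
fudge_factor = 3.0      # rule of thumb: real robots take ~3x longer than you imagine
teleop_s = 135          # placeholder teleop length, adjust per game

realistic_task_s = imagined_task_s * fudge_factor
print(f"~{realistic_task_s:.0f} s per task, ~{teleop_s / realistic_task_s:.1f} tasks per match")
```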

Yeah, we overestimated quite a bit.

For the community’s enjoyment: a 12-high-goal-per-match average for one initial strategy, and 5-second cycles and 23 high goals per match for a catapult prototype. Keep in mind our robot’s performance was “modest” - I believe the best we ever ended up with was 3 or 4 high goals with our spring-powered shooter.

EDIT: And for RR (I was a freshman, so I could be misquoting), I think we were aiming for a few 4-tote stacks and never got past a couple of 2-3 tote stacks.

Obligatory relevant xkcd comic:
https://xkcd.com/1658/

I think this kind of overestimating probably comes mostly from misunderstanding the tasks and a lack of familiarity with robot performance. I’m not sure what kind of process you use for estimating, but having a seasoned team member consistently work on this each year helps a lot. So does studying robot performance in previous games as it develops over a season. This is something where a deep base of knowledge is very important.

That awkward moment when I thought you meant Rebound Rumble, then was confused when you said tote stack.

For most games that involve cycling between a loading area and a scoring area, a “good” (1st round pick) team will successfully accomplish the task roughly 3 times per match.

After watching the 2012 and 2013 games, we estimated that a fast robot could run a full-court scoring cycle about 4-5 times early in the year and up to 6-7 times by Champs. Worked like a charm for 2014. For 2015 we guessed that teams could put up 4 stacks (which turned out to be a reach) but alliances could put up 6 easily. For 2016, we looked at the earlier cycle times and guessed 6 full court, but noted that they could also poach.

It turned out that our high-score predictions for 2014, 2015, and 2016 using this method were dead on. (My son and I chose the break point on our over/under for Champs right at the eventual high score each year.)

I typically worry less about predicting absolute scores and prefer to focus on analyzing the impact that doing actions at different rates has on the outcomes. For example, in 2014, if you could pass about as fast as you could acquire a ball, there was a distinct advantage, but if it took you a lot longer, there was less advantage. By finding how long you CAN take to do things before they become less valuable, you can drive your strategic design.

Another good example is when you have multiple goals with differing point values (such as 2013 or 2016, but I only have the 2013 model built). Obviously hitting the 2-point goals was easier, so your accuracy increased, but we needed to figure out how much better that accuracy had to be to make up for the point difference. This is another case where you’re not looking for a raw cycle speed so much as for the points where the plot of scores reaches a local maximum.
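
As a toy version of that trade-off (the accuracies here are invented placeholders, not our prototype numbers):

```python
# Toy break-even check between a 2-point and a 3-point goal (2013-style values).
# Accuracies are made-up placeholders; swap in your own prototype data.

def points_per_cycle(shots_per_cycle: float, accuracy: float, value: int) -> float:
    """Expected points from one cycle of shooting."""
    return shots_per_cycle * accuracy * value

shots = 4.0
mid_acc, high_acc = 0.85, 0.60   # hypothetical accuracies at the 2-pt and 3-pt goals

mid = points_per_cycle(shots, mid_acc, 2)
high = points_per_cycle(shots, high_acc, 3)
breakeven_high_acc = mid_acc * 2 / 3  # accuracy needed at the 3-pt goal to match the 2-pt goal

print(f"2-pt goal: {mid:.1f} pts/cycle, 3-pt goal: {high:.1f} pts/cycle")
print(f"3-pt goal breaks even at {breakeven_high_acc:.0%} accuracy")
```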

I’ve recently started using an online tool called Guesstimate for building these models. It’s reasonably easy. Here are links to 2013, 2014, and 2015. Mind you, these are not really complete models; they were built to tell me what I needed to know about the game based on our discussions. Other folks may have different needs.
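
If you’d rather script it than use the web tool, the same spreadsheet-with-distributions idea only takes a few lines; this is a generic sketch with invented numbers, not one of the linked models:

```python
# Generic Monte-Carlo sketch of a "cycle time -> score" model.
# Distributions and point values are illustrative only.
import random

TELEOP_S = 135          # placeholder teleop length
POINTS_PER_CYCLE = 5    # placeholder value of a completed cycle
TRIALS = 10_000

scores = []
for _ in range(TRIALS):
    cycle_s = random.gauss(mu=27.5, sigma=2.5)   # ~95% of cycles between 22.5 and 32.5 s
    cycles = TELEOP_S / max(cycle_s, 1.0)
    scores.append(int(cycles) * POINTS_PER_CYCLE)

scores.sort()
print("median score:", scores[TRIALS // 2])
print("90th percentile:", scores[int(TRIALS * 0.9)])
```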

How did you come to these predictions? Some sort of math or model? “Engineering Judgement”?

At the start of every season, we try to calculate the score a given robot archetype would put up on an empty field in a match. For the time values, we take a rough approximation that’s partly heuristics and partly having a human perform the task.

These scores give us a baseline to work off of when we’re deciding what type of robot to build.
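
As a rough illustration of what that calculation looks like (the archetypes, cycle times, and point values below are invented placeholders):

```python
# Hypothetical "empty field" baseline: expected teleop points per robot archetype.
# Archetype names, task values, and times are illustrative, not real estimates.

TELEOP_S = 135

archetypes = {
    "low-goal cycler":   {"cycle_s": 20, "points": 2},
    "high-goal shooter": {"cycle_s": 35, "points": 5},
}

for name, a in archetypes.items():
    cycles = TELEOP_S // a["cycle_s"]
    print(f"{name}: ~{cycles} cycles, ~{cycles * a['points']} pts on an empty field")
```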

In addition to estimating how many points we contribute, we also create a short list of things we require from our alliance partners to get our 4RP every match (assuming a win).

One of the metrics we used this year was asking what game we would see on Einstein and when that style of play has generally been introduced at events.

In 2014 it was consistent three-assist cycles.

In 2015 it was consistent capped-and-littered six-tote stacks, using the cans from the step.

For 2016 we predicted:

Breaching, capturing the tower by driving its count to 0 or below using the high goal, autonomous scoring, and at least one scaling machine.

Using 2014 & 2015 as our main examples from the previous two seasons in districts, we noticed these prime strategies typically weren’t consistent until the later weeks of district play, with a few exceptions. That helped show that the simple concepts would work well in this game: getting a tower count of 0 with low goals would be much easier to accomplish in the earlier weeks than getting a tower count of 0 using high goals.

We also looked back at previous games and what the winning strategy was at our home regional in Week 1, knowing our first event in Week 2 would have a similar playing field of fresh teams.

When it came to assigning times, we said 3-4 cycles from the secret passage was our maximum output early in the season, potentially 5-6 later on. We assumed each cycle would conservatively take roughly 25-30 seconds, breaking our route down into 5-second increments to cross defenses, cross open sections of the field with limited visibility, line up to receive a ball, and line up to score, plus some extra buffer time for traffic or difficulty in one stage.
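
Spelled out, the budget is just the sum of those 5-second legs plus buffer (same rough figures as above, not measured times):

```python
# Rough cycle budget in 5-second legs, per the breakdown above.
legs_s = {
    "cross defenses": 5,
    "cross open field (limited visibility)": 5,
    "line up to receive a ball": 5,
    "line up to score": 5,
}
buffer_s = 5, 10   # extra time for traffic or a slow stage (low/high estimate)

base = sum(legs_s.values())
lo, hi = base + buffer_s[0], base + buffer_s[1]
teleop_s = 135
print(f"cycle: {lo}-{hi} s -> {teleop_s // hi}-{teleop_s // lo} cycles per match")
```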

Something you should always include is accuracy: plan on missing a few shots. When you crunch the numbers during the first week of build season in games like 2012 & 2013, you should plan on at least one missed shot each cycle, and potentially more when competing at your earlier events. With mid-tier goals like those in 2012 & 2013 you can see somewhat better accuracy compared to going higher. Low goals in any game will still see missed attempts, but they have the highest accuracy, which benefits early events.
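
Folding accuracy in is just one more multiplier on the cycle count; as a made-up example:

```python
# Made-up example of discounting cycles by shooting accuracy.
cycles_per_match = 5
shots_per_cycle = 1
accuracy_early = 0.6   # assume roughly one miss every few cycles at early events
accuracy_late = 0.8    # assume improvement by later events

for label, acc in (("early events", accuracy_early), ("later events", accuracy_late)):
    made = cycles_per_match * shots_per_cycle * acc
    print(f"{label}: ~{made:.1f} scored shots per match")
```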

Oooh, an easy way to do calculations with probabilistic distribution inputs? I like. Thank you!

Math and observations. We were counting the number of cycles, so we could see how many were happening at different levels of performance. In 2013 some teams got up to 7 and 8 cycles at Champs! So we see cycles of 20-30 seconds in most games.

How did you determine the different distributions and timing of various actions? Other folks have mentioned having humans simulate the action, so just curious.

Another thought someone on our team brought up was the idea that numbers in the game manual hint at how many cycles would be done per match. For example, a capture requires (required) 8 goals, which might point to an average of about 3 high goals per bot (which is about where we ended up, at least).

Gut feel and experience? Obviously for things like travel times you can compute that.

Distributions - again, gut feel. I really alternate between three: normal, in which I specify the 95th-percentile edges and the average falls between them; long tail, in which I assume it’ll take about the minimum amount but could have a longer tail out to the max; and equal, where it’s just a uniform distribution across all possible values (I don’t use this one much, for obvious reasons, though if I wanted a non-constant lift time for totes in 2015 I could have done that).
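
For the normal case, turning 95th-percentile edges into parameters is a small conversion; here’s a rough sketch with invented values (the long tail is approximated with a lognormal):

```python
# Sketch of the three distribution shapes described above (parameters invented).
import random

lo, hi = 20.0, 30.0              # treat as roughly the 5th and 95th percentile cycle times

# Normal: mean halfway between the edges, sigma from the interval width.
mu = (lo + hi) / 2
sigma = (hi - lo) / (2 * 1.645)  # 1.645 ~ z-score of the 95th percentile
normal_sample = random.gauss(mu, sigma)

# Long tail: usually near the minimum, occasionally much longer (lognormal-ish).
long_tail_sample = lo + random.lognormvariate(mu=0.0, sigma=0.75)

# Equal ("uniform"): every value between the edges equally likely.
uniform_sample = random.uniform(lo, hi)

print(normal_sample, long_tail_sample, uniform_sample)
```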

But, as I said, I don’t use them so much to compute scores; I focus more on the interactions between actions and outcomes.

Edit: Numbers in manuals are based on what the GDC thinks things should be worth. They may be based on their simulations but, let’s be honest, this is a group that didn’t see the rain of frisbees at the end of the match in 2013… so, uh, use with caution.

Ah, okay, that makes sense. I like your idea of looking at when a higher-scoring option becomes less effective. I could definitely see how that would have been very useful this year with the high/low goals.

Now that I think about it from that perspective, our offseason robot, which was a low goaler, cycled significantly quicker than our build-season high goaler, so it did end up doing better.

Rain of frisbees at the end of the match in 2013? Before my time :frowning:

So, it’s not all about what’s less effective. Sometimes it’s finding things that give you more bang for your time (2014 with passes), things that have more room for delays because you have to do them fewer times (2015 with cans), or even things with more tolerance for screwing up.

The way I try to evaluate games is to look at the inputs I can control (how fast I do things, accuracy, which scoring methods, etc.) and what outputs exist (usually raw score, but sometimes other things, like the limited cans in 2015). Then it’s all about finding how to change the inputs to get the outputs I want.

EDIT: Oh gosh, I keep forgetting that 2013 was a “long time ago” :frowning: