The use of the Kinect and Cheesy Vision in 2015 and beyond

It all started innocuously enough with a Q&A entry a week after kickoff:

Q. Are we allowed to use the Kinect as part of our driver station during autonomous mode this year?
A. There are no rules prohibiting this.

And was reiterated after build season:

Q. Per Q55, the Kinect is allowed as part of our driver station during autonomous. Please clarify: May a Driver, remaining compliant with G16 & G17, use the Kinect that is part of the driver station to control the Robot during Auto?
A. Yes.

These responses opened the door for the types of indirect robot control we saw in autonomous – most notably CheesyVision, but also the Kinect control used by us and 973. I have one simple question about all this: should indirect control of the robot during autonomous mode (i.e. CheesyVision and Kinect control) be allowed for the 2015 season? My personal opinion is that allowing these forms of control removes the autonomy from Autonomous Mode (we had close to complete operator control over our robot in Autonomous once we started using the Kinect). Regardless of what I think, I’m curious to see what the community thinks. Was the autonomous excitement on Einstein enough to justify this type of control, or would you prefer Autonomous Mode to remain autonomous?


I would prefer that the rules reflect the game. Aerial Assist and goalie-bots lent themselves to limited human control during the autonomous period, due to their reactive nature, and resulted in a great deal of excitement. A game like Rebound Rumble, even though it allowed, encouraged, and downright highlighted Kinect use, saw it practically never used, because robots never interacted with their opponents in the autonomous period and better accuracy could be achieved with pure autonomy.

For the record, I generally prefer mostly isolated autonomous periods with a high ceiling for performance like Rebound Rumble or Ultimate Ascent, but think that the “hybrid period” should come and go as the games require, rather than forcing one way to work for all games.


If we had a really long autonomous period (longer than 20 seconds), I would love to have some form of correction to avoid collisions and the like. But if 2015’s auto is like the years before (15 seconds or less), I don’t think we want Kinect control for another year. We already pushed the envelope, and I think it’s enough for now.

I’m really torn on this.

On one hand, the current rules basically take the “auto” out of autonomous. On the other hand, autonomous mode is usually really boring. The Einstein autonomous chess match between 1114 and 254 is maybe my favorite FRC memory of all time and maybe the most exciting thing that happened all year.

I believe that the use of indirect input is highly game dependent. For example, it was legal in 2012 and 2013 (Q198), but hardly used. It may be a few more years before there is a game design that makes indirect input useful again.

Normally, the only excitement in autonomous is whether a robot will fail. That’s not very exciting, or inspiring. The race to the bridge in 2012 was exciting, as was the chess match on Einstein this year. I’m in favor of giving teams the tools to make more interesting and exciting autonomous modes.

Full disclosure: We talked about a Kinect controlled blocker starting in late build season, and implemented it for our second regional and championships.

It all depends on how the game is set up and how this type of control flows with it. I don’t think that webcams, the Kinect, and similar devices should be banned, but the rules pertaining to them should be very specific and should limit certain types of control opportunities.

It depends on the game, but I think one should be able to tell whether this type of thing is allowed with a quick glance at the rules. If the GDC wants this period to take place without any human input, they should call it Autonomous mode. If they want to allow things like the Kinect and Cheesy Vision, they should call it Hybrid mode or something similar. Calling it Auto mode and allowing this type of thing just doesn’t make sense to me.

We likely would not have developed Cheesy Vision had the field implemented hot goal lighting properly.

I think it does cheapen the autonomous period, but it made for exciting matches on Einstein.

If it’s any consideration, the Kinect was allowed last year (though it had little utility overall) and in 2012.

The way I see it, you have a few levels of “autonomous” with increasing difficulty:

  • A script (what the majority of autons are)
  • Multiple scripts (pre-selected before match)
  • Indirect input (Kinect, etc)
  • Actual autonomous (decision trees, actually identifying objects on the field and making decisions based on that input)

Simply writing the script is hard enough for some teams: getting everything figured out on their robot well enough to consistently perform a given action. Maybe some teams do some error checking (is a ball loaded?) to keep from destroying their robot, but in general… you’re executing a series of commands blindly.
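To make that concrete, a bare-bones scripted auton is little more than a timed sequence. Here’s a minimal sketch against the 2014-era WPILib Java API – the channel numbers, timings, and the solenoid-fired shooter are all made up for illustration:

```java
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.RobotDrive;
import edu.wpi.first.wpilibj.Solenoid;
import edu.wpi.first.wpilibj.Timer;

public class ScriptedAuton extends IterativeRobot {
    RobotDrive drive = new RobotDrive(1, 2);   // PWM channels are assumptions
    Solenoid shooterRelease = new Solenoid(1); // hypothetical catapult release
    Timer timer = new Timer();

    public void autonomousInit() {
        timer.reset();
        timer.start();
    }

    public void autonomousPeriodic() {
        double t = timer.get();
        if (t < 2.0) {
            drive.drive(0.5, 0.0);    // drive forward blindly for two seconds
        } else if (t < 2.5) {
            drive.drive(0.0, 0.0);    // stop and take the shot,
            shooterRelease.set(true); // whether a ball is loaded or not
        } else {
            shooterRelease.set(false);
        }
    }
}
```

No sensors, no feedback – if the ball fell out at the start, the catapult fires anyway.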

The better teams have a playbook, which they can play against their opponent’s playbook or use in various situations. 1/2/3 ball auton, multiple locations, shot angles, goalie routines, etc.

Indirect input allows you to “trump” your opponent’s playbook if they have a static script, by essentially playing your robot in real time against their fixed routine.
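Part of why the Kinect route is so accessible is that WPILib has shipped Kinect support since 2012: if I remember right, the KinectStick class presents the driver’s arms to robot code as if they were joysticks. A rough sketch (channel numbers are assumptions, and this is not any particular team’s code):

```java
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.KinectStick;
import edu.wpi.first.wpilibj.RobotDrive;

public class KinectAuton extends IterativeRobot {
    RobotDrive drive = new RobotDrive(1, 2);  // PWM channels are assumptions
    KinectStick leftArm = new KinectStick(1); // driver's left arm as a "joystick"
    KinectStick rightArm = new KinectStick(2);

    public void autonomousPeriodic() {
        // Tank drive straight off the driver's arm angles: the robot is
        // "autonomous" only in the sense that no joystick is plugged in.
        drive.tankDrive(leftArm.getY(), rightArm.getY());
    }
}
```

Once something like that works, your whole teleop playbook is effectively available in auton.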

Actual autonomous mode only offers advantages over indirect input in the scenario where a computer can identify a situation and react better than a human.

I feel like the first 3 steps actually play out pretty well. Each step is incrementally harder and incrementally more rewarding. It’s a little awkward this year as you move from multiple scripts to indirect input, because there really isn’t THAT much additional work to develop it, and in some situations it really can be a trump card.

My biggest hang-up is that there is basically no incentive to move to full auton. Like… watch a video of the Google car and its automated driving, or any system that has really advanced sensors to detect objects, calculate trajectories, etc. I really think the evolution of FRC will include a lot more “driver assist” functions – like, say, an automated incoming-ball tracker and catcher this year, or being able to identify a goalie pole and shoot around it. The level of effort to pull something like this off is immense, though, and I don’t feel like it would really dominate over an “indirect input” robot in auton mode.

So my only real beef (and why I voted no) is that I feel the Kinect lessens the incentive to iterate toward full auton, but I don’t feel like it really broke anything this season. I’d also be OK with keeping it legal for a season or two more as teams push the boundary on auton, then weaning people off it, or giving extra bonuses for auton without the Kinect (or any indirect input from the driver).

Even though the strategy chess match on Einstein produced some of the most exciting robot matches I have ever witnessed, I still voted no.

We are what I would describe as a pretty low-tech team that keeps its designs simple to complete the task: we try to finish on time, iterate, and never break down.

To me the most important differentiator between the 2nd-tier teams and the rest of the pack is a consistent autonomous. Even better is a consistent multi-game-piece auto.

My first year in FRC as a mentor was Logomotion. My oldest son was a freshman, and our team had a very strong mechanical design group, but programming-wise the team was lacking mentor support. Coming to FRC as an FLL coach, I had no clue what teleop or auton was. The Logomotion auton was to follow a line and hang an ubertube. In FLL we followed lines all the time, so this looked like a pretty easy task for us – in FLL we only had one sensor, but in FRC they gave us three!

Long story short, week 1 at Kettering, our first match, we hung an ubertube. I screamed so long I almost passed out. My son and the rest of the programmers were jumping around going crazy. We won our first blue banner that day, and my son was completely hooked on robotics. That season was magical, and it ended with a loss on Galileo to the Cheesy Poofs, who would go on to be world champions.

Four years later, my son is lead programmer and on the drive team. No robot banners this year. I would have to go back and check, but from memory, our two-ball auto missed one ball twice all season. Our one-ball hot detect had significantly more failures, but this was mainly due to losing a second waiting for the field to indicate hot and then driving in high gear, which jostled the ball around.

It may be boring to watch a bot meander down a line and hang an ubertube every single time, but if you are programming that auto, it is the most exciting thing in the world. This year, it was also really inspiring to come out of your first district event with the third-best auto ranking in the world in week 4.

Coming from FLL, where you have a 2:30 auto, to FRC, where you may get 15 seconds or 10 (or an unpublished 7.5 seconds in week 1 and 9 seconds in week 4), I say let us have our auto. That is where some of the real programming challenges are; if not, us 2nd-tier teams might as well drop the default code on the bot and start training drivers instead.

That being said, I have watched the Einstein matches multiple times already, and I still watch the Cheesy Poofs’ “hybrid mode” montage on YouTube a few times a year, so at least we got that. :slight_smile:


I voted NO. FLL students know what autonomous means and we should keep the same meaning across platforms.

Obviously I will never hold it against a team for deciding to use any legal means to make their robot more competitive. But autonomous really ought to be autonomous IMO.

I don’t have an opinion, other than that the GDC should be able to make the appropriate rules however they want.

There’s no rule in FLL that prevents you from holding different colored cards in front of a color sensor (as long as you don’t touch the robot).

I never understood hybrid mode as a concept - all we are doing is using a less efficient, less ergonomic, and less effective controller. Maybe the case can be made that it’s an interesting challenge to program your own inferior interface, but we aren’t running low on interesting challenges, and they use joysticks on the space station.

I’m of the opinion that auto should be auto, but I like seeing robot interaction in auto, which is what made Einstein interesting. Check out the 2006 Lone Star regional finals:

If we had a game like that now, you know that there would be some sensor based tracking of robots, and the cat-and-mouse iterations of auto modes would be just as fun.

I’d define holding coloured cards in front of a colour sensor as “influencing.”

I don’t think it’s a good thing to let teams control their robot in autonomous, whether or not the game allows teams to interact with the opposing alliance in auto. Being able to react to something happening (somebody beating you to the bridge in auto, a blocker robot deploying, or a missed intake in a two-ball autonomous) takes away the whole point of having an autonomous mode. Give teams a few years, and they’ll come up with a way to control the entire robot in autonomous mode.

I don’t think allowing control of your robot with a Kinect or webcam will even make robots more competitive or more interesting in auto. There were plenty of teams this year that didn’t even bother moving forward in autonomous mode. I doubt there would be a significant increase in the number of moving robots in autonomous.

I loved the Einstein chess match. It’s definitely one of my favorite FRC moments of all time.

As far as the original question, I’ve actually been thinking about this for a while. One thing that I’ve noticed throughout my tenure in FRC is that it really is less of a programming competition and more of a mechanical competition. Let me clarify: programming, yes, is vital to the final outcome of how a team performs, and code plays a big role in teleop too.

However, the effort/reward ratio for programming in FRC is dissimilar to the effort/reward ratio for mechanical design, strategy, or even drive team training. As a programmer, I can name some robot code that’s been inspirational to me: 341’s auto-aim SmartDash widget, 254’s auto-climb sequence, and all of the crazy autonomous modes that are out there. 987’s centerline auto in 2013, which would gracefully degrade to the 2-point goal, was awesome, as was their autonomous scripting system. But I can name so many more robots, designs, or mechanical things that are just as inspirational: all of 67’s, 254’s, 469’s, and 1114’s robots. But when’s the last time we, as a community, celebrated true innovation in programming? Gordian is a fully implemented scripting language by 4334. 4334 also spent the time to completely rewrap WPILib in the form of ATALibJ. 1540 has their own custom robot framework that’s open source. When’s the last time I’ve heard anyone post in awe of any of those things?

It’s because the subset of FRC people who can appreciate them is a much smaller fraction of the total population; additionally, they’re hard to appreciate because they’re much more abstract than a linkage or a drivetrain. And they’re not directly convertible to points. Innovation in code does not guarantee better robot performance. In reality, programming innovation has a habit of blowing up in the face of a humble high school programmer. FRC just does not reward attempting top-notch programming.

This helps to define the effective ceiling for programming. Effective programming transforms the robot from an expensive paperweight into something controllable. Auto modes are just calls to the same code. Yes, teams can do better – but the marginal reward for doing so is much lower at higher levels. The same is true for anything in FRC, but I have the distinct feeling that the drop-off in reward is much sharper than in other aspects of FRC. Part of the reason for this is the defined floor that we have, too. WPILib makes it really, really hard to screw stuff up. This is intentional – a massive, expensive paperweight is not how I would like anyone to spend their FRC season. But at the same time, it makes programming easy, almost handed to you. The marginal effort for making a Talon spin a CIM is so low that everyone does it. And non-robot stuff like driver assist programs is great – but I’ve noticed that my drivers are just fine (and often prefer) going it solo.
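To put “so low” in perspective, this is essentially the whole program (a sketch against the 2014-era WPILib Java API; channel numbers are assumptions):

```java
import edu.wpi.first.wpilibj.IterativeRobot;
import edu.wpi.first.wpilibj.Joystick;
import edu.wpi.first.wpilibj.Talon;

public class SpinACim extends IterativeRobot {
    Talon motor = new Talon(1);       // Talon on PWM 1 driving a CIM (assumed wiring)
    Joystick stick = new Joystick(1);

    public void teleopPeriodic() {
        motor.set(stick.getY());      // stick position goes straight to motor output
    }
}
```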

I see ‘Hybrid modes’ as rewarding more complex programming with improved control during autonomous. As I think that bringing the programming floor down would be a major violation of GP, ethics, and morals, something has to be done to increase the rewards of programming instead. This is one such way.


I’m curious then. Why do you think that so many teams used a Kinect/webcam this year in auto?

My opinion is that autonomous mode should actually be autonomous, and it should exist to challenge programmers in FIRST. Once you have a functional Kinect-style setup working, you can do almost anything in autonomous with very little effort (amazing goalie bots). However, I prototyped an actually autonomous CV goalie bot shortly after Kickoff: I used image processing in Matlab to detect the red or blue bumpers of other robots, to either avoid enemy goalies or detect enemy shooters. This is a complicated system and would take much more tweaking to get right compared to Cheesy Vision, but if teams were to accomplish a programming task like this, I think it would in the end be a much more fulfilling and educational experience.
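To give a flavor of the approach, here is a rough OpenCV (Java bindings) sketch of the bumper-detection step – not my actual Matlab code, and the HSV thresholds and area cutoff are placeholders that would need heavy tuning for real field lighting:

```java
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class BumperFinder {
    /** Returns bounding boxes of candidate bumper regions in a BGR camera frame. */
    public static List<Rect> findBumpers(Mat bgr, boolean red) {
        Mat hsv = new Mat();
        Imgproc.cvtColor(bgr, hsv, Imgproc.COLOR_BGR2HSV);

        Mat mask = new Mat();
        if (red) {
            // Red wraps around the hue axis, so threshold two ranges and combine.
            Mat lo = new Mat(), hi = new Mat();
            Core.inRange(hsv, new Scalar(0, 120, 70), new Scalar(10, 255, 255), lo);
            Core.inRange(hsv, new Scalar(170, 120, 70), new Scalar(180, 255, 255), hi);
            Core.bitwise_or(lo, hi, mask);
        } else {
            Core.inRange(hsv, new Scalar(100, 120, 70), new Scalar(130, 255, 255), mask);
        }

        // Open the mask to knock out single-pixel noise before blob detection.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(5, 5));
        Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_OPEN, kernel);

        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        List<Rect> boxes = new ArrayList<Rect>();
        for (MatOfPoint c : contours) {
            if (Imgproc.contourArea(c) > 500) {  // area cutoff is a guess
                boxes.add(Imgproc.boundingRect(c));
            }
        }
        return boxes;
    }
}
```

From there you would still need to classify which blob is a goalie versus a shooter and turn that into drive commands, which is exactly where the extra tweaking comes in.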