The use of the Kinect and Cheesy Vision in 2015 and beyond

The excitement of Einstein speaks for itself. Whether it’s a Kinect, Cheesy Vision, or four buttons like in 2008, tools that foster an autonomous chess match make the game more fun to watch from the start.

Is that still autonomous? By strict definitions, no, but we already stretch the definition of “robot” at times too.

It could have been a chess match with or without vision-based inputs, though. 1114 could have programmed a mode that drove towards the hot goal, and all we used Cheesy Vision for was to select which side to drive towards. The actual path we drove was all autonomous.
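For anyone who hasn’t looked at how this kind of side selection works, the core trick is tiny: the driver covers one of two boxes in the driver-station webcam image, and the laptop reports which box went dark. A minimal sketch of that idea (not 254’s actual code; the box coordinates and the 50% darkness threshold here are invented):

```python
def region_mean(frame, box):
    """Mean pixel brightness inside box=(x0, y0, x1, y1).
    frame is a list of rows of grayscale pixel values."""
    x0, y0, x1, y1 = box
    pixels = [frame[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(pixels) / len(pixels)

def region_covered(frame, box, baseline, threshold=0.5):
    """True if the region is much darker than its uncovered baseline,
    which we take to mean a hand is covering it."""
    return region_mean(frame, box) < baseline * threshold

def select_goal(frame, left_box, right_box, baselines):
    """Covering the left box signals 'left', the right box 'right',
    neither (or both) means 'none'."""
    left = region_covered(frame, left_box, baselines["left"])
    right = region_covered(frame, right_box, baselines["right"])
    if left and not right:
        return "left"
    if right and not left:
        return "right"
    return "none"
```

In the real tool the result would be streamed to the robot over the network every frame; this just shows how little “control” the signal actually carries.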

Sorry, that’s not what I meant to write. We did use Cheesy Vision this year, and we always made a hot goal when we ran our 1-ball auto mode. Having control in autonomous mode is a competitive advantage; if we had it working at our first event, which we lost by 2 points, we would have won. I meant to say that robots that use Kinect/webcam control are more competitive, but I don’t think it will do much to solve the issue of robots just sitting there in auto.

With the exception of the Einstein finals, which are 3 of the 10,655 matches played this year, how many of them were more exciting to watch because of Kinect control? At least for this year’s game, shooting in a hot goal is not particularly more exciting than shooting in the not-hot goal for most spectators.

No problem. I thought that you were trying to say something along these lines. I agree that immobile auto robots are still a big issue in FRC, and that having or not having Kinect/webcam during auto will likely have no appreciable impact on this problem.

It was more consistent than letting the robot sense the objects on the field. Hot goals were shaky at best, and the way they were designed hampered even the best vision systems. Also, teams like 1114 and 973 wanted to react to other robots.

If we decide that any control is unacceptable in Autonomous, we also need to ensure that our robots can get all the information they need from the field. Vision targets need to be consistent.

We also need a consistent way to sense other robots, something that has to be put on every robot. Perhaps retro-reflective stickers on bumpers, motion-capture markers on the robot, or something like the trailers from 2009.

Goalie bots were OP with kinect.

Half our blocking was done by guessing where our opponents would shoot (from scouting data) and driving there with encoders and a number we selected on the driver station after the robots lined up.
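That scheme needs nothing fancier than a lookup table and a proportional loop: pick a lane number on the driver station before the match, then drive to that encoder distance. A rough sketch with hypothetical lane distances and gains (the real values would come from field measurements and tuning, and a real loop would run at a fixed rate):

```python
# Hypothetical lane numbers the drive team could pick on the driver
# station after seeing where the opposing shooters lined up.
LANE_TARGETS = {1: 36.0, 2: 60.0, 3: 84.0}  # inches of sideways travel

def drive_to_lane(lane, read_encoder, set_power,
                  kp=0.02, tolerance=0.5, max_power=0.6):
    """Proportional drive to the encoder distance for the chosen lane.
    read_encoder() returns inches traveled; set_power() commands the drive."""
    target = LANE_TARGETS[lane]
    while abs(target - read_encoder()) > tolerance:
        error = target - read_encoder()
        power = max(-max_power, min(max_power, kp * error))
        set_power(power)
    set_power(0.0)

class SimDrive:
    """Toy simulated drivetrain for trying the loop off-robot."""
    def __init__(self):
        self.position = 0.0
        self.power = 0.0
    def read_encoder(self):
        return self.position
    def set_power(self, p):
        self.power = p
        self.position += p * 10.0  # crude physics: 10 in per unit power per tick
```

The point is how little live “control” the driver exerts: one integer, chosen before the robots move.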

This would have been reasonable, and still resulted in the SAME Einstein chess match.

My feelings exactly. If they want to allow other “indirect” forms of control then call it hybrid mode. When I hear autonomous mode the first assumption I make is that the robots will be autonomous.

In principle, I think autonomous means autonomous and it should be that way. On the other hand, I like the incentive for teams to be working with stuff outside the norm (like Kinect and CV stuff). I’m divided over whether it should still be a part of autonomous/hybrid mode or a different part, but overall I liked that Einstein chess match as much as the next FIRSTer.

So I guess I would be in the “Yes, but don’t call it autonomous mode” category.

I disagree. Nowadays, robots are much less about the mechanical and a lot more about the electrical and software side. For example, every single design we prototyped this year for shooting the ball was used effectively by some top team; we essentially picked a catapult arbitrarily. Top teams are all very well engineered, but at some point it stops being effective to maximize mechanical strength and becomes more useful to program.
PID drive controls, automatic shifting, swerve drives, etc. are all primarily electrical and software now. Programming plays a very large part in how a robot performs, whereas mechanical design can only go so far. Top teams have good code.

I vote no on using Kinect/vision control for auton. If programmed well enough, autonomous can easily become just an extended teleop with little effort, especially for defensive autons.

We spent weeks integrating a camera solution for auto mode this year. We got it working on the practice field, but never had time to calibrate it on the field at champs (I wish they would let us calibrate every morning instead of Thursday only; we got it working Thursday around lunch). The idea that something as simple as open-source code and a webcam would beat a robot running truly autonomously galls me. Please remove the de facto hybrid mode for next year.

I’m not sure why or how a kinect/webcam makes autonomous more competitive (I assume you mean driver station based control).

I also generally disagree with the sentiment that hot goal targeting was an intractable timing problem. At our second regional we spoke to the field team after our first match (where we missed hot goal detection), and they indicated that the reflective tape indicator was taking some finite time to flip over. We re-timed our code at the competition and didn’t miss the hot goal after that, using only a robot-based webcam and image processing.
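That re-timing fix boils down to one rule: don’t trust the first frames of the match. A sketch of the pattern (the settle and timeout values below are placeholders, not the actual numbers we measured):

```python
import time

def wait_for_hot_goal(is_hot, settle_s=0.25, timeout_s=1.5, poll_s=0.02,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll the vision routine for the hot goal, but only trust readings
    taken after the field's indicator has had time to flip over."""
    start = clock()
    while clock() - start < settle_s:
        sleep(poll_s)          # discard early frames: indicator still moving
    while clock() - start < timeout_s:
        if is_hot():
            return True
        sleep(poll_s)
    return False               # give up: treat the goal as not hot
```

If the timeout expires, the robot can fall back to targeting the other goal instead of sitting there.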

To me, that seems like an absolutely plausible “real-life” engineering problem, and I thought it was a great learning experience.

I don’t disagree with the use of the Kinect after the ruling, but calling it “autonomous” while using a driver station webcam/Kinect seems to go against the spirit of the word (I’m not arguing it was inappropriate given the rule clarifications). I would have preferred that it remained more autonomous.

I don’t really think this is the case. Think of the Cheesy Poofs for example, a team that really has mechanical, electrical, and software systems that are second to none. I’d bet that they’d still win a lot of matches if they replaced their code with something hacked together overnight, and they’d still probably win if I decided I’d rewire their robot as a surprise (assuming I didn’t screw anything up on purpose). But on the other hand, if you took their same code and electronics and put it on a janky JVN catapult, I’d bet they’d win matches, but not as many.

In FRC, you can win just from having awesome mechanisms, even with only so-so wiring and code. I’ll give you that poor code and wiring can lose matches, but you sure can’t win without well-designed mechanisms.

I voted no, but I am also extremely torn here. Autonomous is designed for zero driver control whatsoever, whether direct or indirect. Using the Kinect or Cheesy Vision during auto seems to bend the rules a bit. Although it is allowed by the way they are written, it kinda takes away from the definition of autonomous.
I’ll admit my team used something very similar to Cheesy Vision at champs after our camera crapped out on us. At the time I was all for this, probably because I was so excited we were doing well and I wanted to keep it up.
On the other hand, the use of this created extremely exciting stand-offs, namely the Einstein finals. The 1114 and 254 stand-offs were exhilarating to watch, and I don’t know that that could happen with purely pre-coded instructions.

I don’t think it should be allowed. Although these systems were technically legal and a really smart way of using the specifics of the rules to increase a robot’s ability, I think it diminishes the idea of an “autonomous period”. It’s no longer autonomous; it is being controlled by a human, even if indirectly.
I can’t hold the use of this strategy against any team or person; it wasn’t illegal and, as I said earlier, it was an extremely smart way of doing things. In the future, however, I think the GDC should either make indirect, live in-match control illegal, or change it to a hybrid period permanently.

Software is increasingly important in today’s world. We need to challenge the programmers more, not less.

The Poofs’ auton performance on Einstein was inspirational, with or without Cheesy Vision.

I think that the use of indirect control of robots should be allowed. I also think the hybrid period should be brought back next year instead of auto, so that it promotes this kind of innovation.

For Auto mode this kind of stuff doesn’t really fit, but I think it should be a possibility.

Having scouted a number of the other division playoffs as well as our own, the goalies really became salient in those rounds. It was interesting how they weren’t important during qualifying.

We used it very effectively in quals in our division, winning some tough matches based on blocking.

I will agree that in elims it was more effective to block, for two reasons: more teams were shooting, and you generally had better partners to help defend against the missed balls.

This is essentially what 1712 was hoping to do with our autonomous modes (the original intent being to accommodate any partner’s potential 2/3-ball routine while still earning our 5 mobility points), but we kept introducing drive lag when we implemented it. Had we gotten this working, I would have pushed for adding a goalie pole and using it there as well.

It depends on the game. A bit of autonomy is nice, but I don’t think a strictly autonomous mode is a prerequisite to a successful FRC game.

With that in mind, if the GDC wants the robots to have absolute autonomy, then they should write that into the rules; if they want hybrid autonomy, then they can write that instead. If they want to provide different incentives for increasing degrees of autonomy, maybe they can do that too. Whatever they do, it would be better if it’s fairly clear from the start, instead of being enabled through open-ended rules and Q&As.