Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   The use of the Kinect and Cheesy Vision in 2015 and beyond (http://www.chiefdelphi.com/forums/showthread.php?t=129245)

Karthik 30-04-2014 18:13

The use of the Kinect and Cheesy Vision in 2015 and beyond
 
It all started innocuously enough with a Q&A entry a week after kickoff:

Q. Are we allowed to use the kinect as part of our driver station during autonomous mode this year?
A. There are no rules prohibiting this.

And was reiterated after build season:

Q. Per Q55, the Kinect is allowed as part of our driver station during autonomous. Please clarify: May a Driver, remaining compliant with G16 & G17, use the Kinect that is part of the driver station to control the Robot during Auto?
A. Yes.

These responses opened the door for the types of indirect control of the robots we saw in autonomous, most notably CheesyVision, but also the Kinect control used by us and 973. I have one simple question about all this: should indirect control of the robot during autonomous mode (e.g., CheesyVision and Kinect control) be allowed for the 2015 season? My personal opinion is that allowing these forms of control removes the autonomy from Autonomous Mode (we had close to complete operator control over our robot in Autonomous once we started using the Kinect). Regardless of what I think, I'm curious to see what the community thinks. Was the autonomous excitement on Einstein enough to justify this type of control, or would you prefer Autonomous Mode to remain autonomous?

Joe G. 30-04-2014 18:19

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I would prefer that the rules reflect the game. Aerial Assist and goalie-bots lent themselves to limited human control during the autonomous period, due to their reactionary nature, and that resulted in a great deal of excitement. A game like Rebound Rumble, even though it allowed, encouraged, and downright highlighted Kinect use, saw it practically never used, because robots never interacted with their opponents in the autonomous period and better accuracy could be achieved with pure autonomy.

For the record, I generally prefer mostly isolated autonomous periods with a high ceiling for performance like Rebound Rumble or Ultimate Ascent, but think that the "hybrid period" should come and go as the games require, rather than forcing one way to work for all games.

Mark Sheridan 30-04-2014 18:19

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
If we had some really long autonomous (longer than 20 seconds), I would love to have some form of corrections to avoid collisions, etc. But if 2015's auto is like the years before (15 seconds or less), I don't think we want Kinect control for another year. We already pushed the envelope, and I think it's enough for now.

Tom Bottiglieri 30-04-2014 18:25

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I'm really torn on this.

On one hand, the current rules basically take the "auto" out of autonomous. On the other hand, autonomous mode is usually really boring. The Einstein autonomous chess match between 1114 and 254 is maybe my favorite FRC memory of all time and maybe the most exciting thing that happened all year.

Joe Ross 30-04-2014 18:25

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I believe that the use of indirect input is highly game dependent. For example, it was legal in 2012 and 2013 (Q198), but hardly used. It may be a few more years before there is a game design that makes indirect input useful again.

Normally, the only excitement in autonomous is whether a robot will fail. That's not very exciting, or inspiring. The race to the bridge in 2012 was exciting, as was the chess match on Einstein this year. I'm in favor of giving teams the tools to make more interesting and exciting autonomous modes.


Full disclosure: We talked about a Kinect controlled blocker starting in late build season, and implemented it for our second regional and championships.

DevBal5012 30-04-2014 18:27

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
It all depends on how the game is set up and how this kind of control flows with a given game. I don't think that webcams, the Kinect, and similar devices should be banned, but the rules pertaining to them should be very specific and limit certain types of control opportunities.

PVCpirate 30-04-2014 18:38

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
It depends on the game, but I think one should be able to tell whether this type of thing is allowed with a quick glance at the rules. If the GDC desires for this period to take place without any human input, they should call it Autonomous mode. If they want to allow things like Kinect and Cheesy Vision, they should call it Hybrid mode or something similar. Calling it Auto mode and allowing this type of thing just doesn't make sense to me.

Cory 30-04-2014 18:46

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
We likely would not have developed Cheesy Vision had the field implemented hot goal lighting properly.

I think it does cheapen the autonomous period, but it made for exciting matches on Einstein.

Christopher149 30-04-2014 19:15

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
If it's any consideration, the Kinect was allowed last year (though it had little utility overall) and in 2012.

Steven Smith 30-04-2014 20:12

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
The way I see it you have a few levels of "autonomous" with increasing difficulty:

- A script (what the majority of autons are)
- Multiple scripts (pre-selected before match)
- Indirect input (Kinect, etc)
- Actual autonomous (decision trees, actually identifying objects on the field and making decisions based on that input)

Simply writing the script is hard enough for some teams: getting everything figured out on their robot well enough to consistently perform a given action. Maybe some teams do some error checking (is a ball loaded?) to keep from destroying their robot, but in general you're executing a series of commands blindly.

The better teams have a playbook, which they can play against their opponent's playbook or use in various situations. 1/2/3 ball auton, multiple locations, shot angles, goalie routines, etc.

Indirect input allows you to "trump" your opponent's playbook if they have a static script, by essentially playing your robot in real time against their more static script.

Actual autonomous mode only offers advantages over indirect input in the scenario where a computer can identify a situation and react better than a human.
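The progression above can be sketched in code. This is a minimal, hypothetical Python sketch, not actual FRC, WPILib, or CheesyVision code: the routine names, the playbook, and the driver-signal mechanism are all invented for illustration. It shows a pre-selected script (levels 1-2) being overridden in real time by an indirect driver signal (level 3):

```python
# Hypothetical sketch of the "levels of autonomous" described above.
# All names here are invented for illustration; this is not WPILib
# or CheesyVision code.

def two_ball_auto():
    # Level 1: a static pre-written script of blind commands.
    return ["shoot", "pickup", "shoot"]

def goalie_auto():
    return ["raise_blocker", "hold"]

# Level 2: a playbook of scripts, one of which is selected before the match.
PLAYBOOK = {
    "two_ball": two_ball_auto,
    "goalie": goalie_auto,
}

def run_autonomous(selected, driver_signal=None):
    """Level 3: an indirect input (e.g. a Kinect gesture decoded into a
    routine name) trumps the pre-selected static script in real time."""
    if driver_signal in PLAYBOOK:
        return PLAYBOOK[driver_signal]()   # driver override wins
    return PLAYBOOK[selected]()            # fall back to the pre-match choice
```

Under this framing, "actual autonomous" (level 4) would replace `driver_signal` with the robot's own sensing and decision-making, which is exactly the step the playbook-plus-override approach removes the incentive to build.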

I feel like the first 3 steps actually play out pretty well. Each step is incrementally harder and incrementally more rewarding. It's a little awkward this year as you move from multiple scripts to indirect input, because there really isn't THAT much additional work to develop it, and in some situations it really can be a trump card.

My biggest hang-up is that there is basically no incentive to move to full auton. Like... watch the video of the Google car and automated driver, or any system that has really advanced sensors to detect objects, calculate trajectories, etc. I really think the evolution of FRC will include a lot more "driver assist" functions, like say an automated incoming ball tracker and catcher this year... or being able to identify a goalie pole and shoot around it. The level of effort to pull something like this off is immense though, and I don't feel like it would really dominate over an "indirect" input robot in auton mode.

So my only real beef (and why I voted no), is I feel the Kinect lessens the incentive to iterate toward full auton, but I don't feel like it really broke anything this season. I'd also be ok with keeping it legal for a season or two more, as teams push the boundary on auton, then weaning people off it, or giving extra incentive bonuses for auton without the Kinect (or any indirect input from the driver).

tr6scott 30-04-2014 20:14

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Even though the strategy chess match on Einstein was probably one of the most exciting sets of robot matches I have ever witnessed, I still voted no.

We are what I would call a pretty low-tech team that keeps its designs simple to complete the task; we try to finish on time, iterate, and never break down.

To me, the most important differentiator between the 2nd-tier teams and the rest of the pack is a consistent autonomous. Even better is a consistent multi-game-piece auto.

My first year in FRC as a mentor was Logo Motion. My oldest son was a freshman, and our team had a very strong mechanical design group, but programming-wise, the team was lacking mentor support. Coming to FRC as an FLL coach, I had no clue what teleop or auton was. The Logo Motion auton was to follow a line and hang an uber tube. In FLL we followed lines all the time, so this looked like a pretty easy task for us; in FLL we only had one sensor, but in FRC they gave us 3!

Long story short: week 1 at Kettering, in our first match, we hung an uber tube. I screamed so long I almost passed out. My son and the rest of the programmers were jumping around going crazy. We won our first blue banner that day, and my son was completely hooked on robotics. That season was magical, and it ended with a loss to the Cheesy Poofs on Galileo, who would later be world champs.

Four years later, my son is lead programmer and on the drive team. No robot banners this year. I would have to go back and check, but from memory, our two-ball auto missed one ball twice all season. Our one-ball hot-detect auto had significantly more failures, but this was mainly due to losing a second waiting for the field to indicate hot and then driving in high gear, which jostled the ball around.

It may be boring to watch a bot meander down a line and hang an uber tube every single time, but if you are programming that auto, it is the most exciting thing in the world. This year, it was also really inspiring to come out of your first district with the third-best auto ranking in the world in week 4.

Coming from FLL, where you have a 2:30 auto, to FRC, where you may get 15 seconds or 10 seconds (or an unpublished 7.5 seconds in week 1, and 9 seconds in week 4), I say let us have our auto. That is where some real programming challenges are, and if not, us 2nd-tier teams might as well drop the default code on the bot and start training drivers to improve.

That being said, I have watched the Einstein matches multiple times already, and I still watch the Cheesy Poofs' "hybrid mode" montage on YouTube a few times a year, so at least we got that. :)

Al Skierkiewicz 30-04-2014 20:23

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted NO. FLL students know what autonomous means and we should keep the same meaning across platforms.

Jared Russell 30-04-2014 20:28

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Obviously I will never hold it against a team for deciding to use any legal means to make their robot more competitive. But autonomous really ought to be autonomous IMO.

cgmv123 30-04-2014 20:33

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I don't have an opinion, other than that the GDC should be able to make the appropriate rules however they want.

Quote:

Originally Posted by Al Skierkiewicz (Post 1381221)
FLL students know what autonomous means and we should keep the same meaning across platforms.

There's no rule in FLL that prevents you from holding different colored cards in front of a color sensor (as long as you don't touch the robot).

Kris Verdeyen 30-04-2014 20:45

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I never understood hybrid mode as a concept: all we are doing is using a less efficient, less ergonomic, and less effective controller. Maybe the case can be made that it's an interesting challenge to program your own inferior interface, but we aren't running low on interesting challenges, and they use joysticks on the space station.

I'm of the opinion that auto should be auto, but I like seeing robot interaction in auto, which is what made Einstein interesting. Check out the 2006 Lone Star regional finals:

https://www.youtube.com/watch?v=0oLgAC7rwGg

If we had a game like that now, you know that there would be some sensor based tracking of robots, and the cat-and-mouse iterations of auto modes would be just as fun.



Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi