View Poll Results: Should indirect (Kinect or CV) control of robots during Autonomous Mode be allowed?
Yes: 132 (40.12%)
No: 197 (59.88%)
Voters: 329. You may not vote on this poll
#1
The use of the Kinect and Cheesy Vision in 2015 and beyond
It all started innocuously enough with a Q&A entry a week after kickoff:
Q. Are we allowed to use the Kinect as part of our driver station during autonomous mode this year?
A. There are no rules prohibiting this.

And it was reiterated after build season:

Q. Per Q55, the Kinect is allowed as part of our driver station during autonomous. Please clarify: May a Driver, remaining compliant with G16 & G17, use the Kinect that is part of the driver station to control the Robot during Auto?
A. Yes.

These responses opened the door for the types of indirect control of robots we saw in autonomous, most notably CheesyVision, but also the Kinect control used by us and 973.

I have one simple question about all this: should indirect control of the robot during autonomous mode (i.e. CheesyVision and Kinect control) be allowed for the 2015 season?

My personal opinion is that allowing these forms of control removes the autonomy from Autonomous Mode (we had close to complete operator control over our robot in Autonomous once we started using the Kinect). Regardless of what I think, I'm curious to see what the community thinks. Was the autonomous excitement on Einstein enough to justify this type of control, or would you prefer Autonomous Mode to remain autonomous?
#2
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I would prefer that the rules reflect the game. Aerial Assist and goalie-bots lent themselves to limited human control during the autonomous period, due to their reactionary nature, and resulted in a great deal of excitement. A game like Rebound Rumble, even though it allowed, encouraged, and downright highlighted Kinect use, saw it practically never used, because robots never interacted with their opponents in the autonomous period and better accuracy could be achieved with pure autonomy.

For the record, I generally prefer mostly isolated autonomous periods with a high ceiling for performance, like Rebound Rumble or Ultimate Ascent, but I think the "hybrid period" should come and go as the games require, rather than forcing one format to work for all games.
#3
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
It all depends on how the game is set up and how these devices flow with a particular game. I don't think that webcams, the Kinect, and similar devices should be banned, but the rules pertaining to them should be very specific and limit certain types of control opportunities.
#4
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
It depends on the game, but I think one should be able to tell whether this type of thing is allowed with a quick glance at the rules. If the GDC desires for this period to take place without any human input, they should call it Autonomous mode. If they want to allow things like Kinect and Cheesy Vision, they should call it Hybrid mode or something similar. Calling it Auto mode and allowing this type of thing just doesn't make sense to me.
#5
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
We likely would not have developed Cheesy Vision had the field implemented hot goal lighting properly.
I think it does cheapen the autonomous period, but it made for exciting matches on Einstein.
#6
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
If it's any consideration, the Kinect was allowed last year (though it saw little use overall) and in 2012.
#7
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
The way I see it, there are a few levels of "autonomous," in increasing order of difficulty:

- A script (what the majority of autons are)
- Multiple scripts (pre-selected before the match)
- Indirect input (Kinect, etc.)
- Actual autonomy (decision trees, actually identifying objects on the field and making decisions based on that input)

Simply writing the script is hard enough for some teams: getting everything figured out on their robot well enough to consistently perform a given action. Maybe some teams do some error checking (is a ball loaded?) to keep from destroying their robot, but in general you're executing a series of commands blindly.

The better teams have a playbook, which they can play against their opponent's playbook or use in various situations: 1/2/3-ball auton, multiple locations, shot angles, goalie routines, etc.

Indirect input allows you to "trump" your opponent's playbook if they have a static script, by essentially playing your robot in real time against their more static script. Actual autonomy only offers advantages over indirect input in the scenario where a computer can identify a situation and react better than a human.

I feel like the first three steps actually play out pretty well. Each step is incrementally harder and incrementally more rewarding. It's a little awkward this year as you move from multiple scripts to indirect input, because there really isn't THAT much additional work to develop it, and in some situations it really can be a trump card.

My biggest hang-up is that there is basically no incentive to move to full auton. Watch the video of the Google car, or any system that has really advanced sensors to detect objects, calculate trajectories, etc. I really think the evolution of FRC will include a lot more "driver assist" functions, like an automated incoming-ball tracker and catcher this year, or being able to identify a goalie pole and shoot around it.

The level of effort to pull something like this off is immense, though, and I don't feel like it would really dominate over an "indirect input" robot in auton mode. So my only real beef (and why I voted no) is that I feel the Kinect lessens the incentive to iterate toward full auton; but I don't feel like it really broke anything this season. I'd also be OK with keeping it legal for a season or two more as teams push the boundary on auton, then weaning people off it, or giving extra incentive bonuses for auton without the Kinect (or any indirect input from the driver).
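To make the "indirect input" level concrete, here is a minimal pure-Python sketch of the CheesyVision idea: the driver covers or uncovers regions of the driver-station webcam image, and the dashboard turns that into a tiny signal for the robot. All names, the brightness heuristic, and the one-byte encoding here are illustrative assumptions, not the actual CheesyVision implementation (which is OpenCV-based and differs in detail).

```python
# Sketch of CheesyVision-style indirect input (illustrative only).
# A "region" is a sampled patch of the webcam frame, represented here
# as an iterable of (r, g, b) pixel tuples.
import struct

def mean_brightness(pixels):
    """Average brightness of a region; pixels is an iterable of (r, g, b)."""
    pixels = list(pixels)
    return sum(sum(p) / 3.0 for p in pixels) / len(pixels)

def hand_present(region, baseline, threshold=30.0):
    """True if the region's brightness deviates from its calibrated
    baseline by more than threshold -- a hand over the camera region
    shifts the brightness away from the calibrated value."""
    return abs(mean_brightness(region) - baseline) > threshold

def encode_signal(left, right):
    """Pack the two flags (e.g. left/right hot goal) into one byte,
    suitable for streaming to a robot-side socket listener."""
    return struct.pack("B", (int(left) << 1) | int(right))
```

A real version would grab frames with a webcam library, calibrate the baselines at the start of the match, sample fixed rectangles of each frame, and stream the encoded byte to the robot several times a second.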
#8
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Even though the strategy chess match on Einstein produced probably some of the most exciting robot matches I have ever witnessed, I still voted no.

We are what I would call a pretty low-tech team that keeps designs simple enough to complete the task; we try to finish on time, iterate, and never break down. To me, the most important differentiator between the 2nd-tier teams and the rest of the pack is a consistent autonomous. Even better is a consistent multi-game-piece auto.

My first year in FRC as a mentor was Logo Motion. My oldest son was a freshman, and our team had a very strong mechanical design group, but programming-wise the team was lacking mentor support. Coming to FRC as an FLL coach, I had no clue what teleop or auton was. The Logo Motion auton was to follow a line and hang an uber tube. In FLL we followed lines all the time, so this looked like a pretty easy task for us; in FLL we only had one sensor, but in FRC they gave us three! Long story short: week 1 at Kettering, our first match, we hung an uber tube. I screamed so long I almost passed out. My son and the rest of the programmers were jumping around going crazy. We won our first blue banner that day, and my son was completely hooked on robotics. That season was magical, and it ended losing to the Cheesy Poofs on Galileo, who would later be world champs.

Four years later, my son is lead programmer and on the drive team. No robot banners this year. I would have to go back and check, but from memory, our two-ball auto missed one ball twice all season. Our one-ball hot detect had significantly more failures, but this was mainly due to losing a second waiting for the field to indicate hot and driving in high gear, which jostled the ball around. It may be boring to watch a bot meander down a line and hang an uber tube every single time, but if you are programming that auto, it is the most exciting thing in the world. This year, it was also really inspiring to come out of your first district with the third-best auto ranking in the world in week 4.

Coming from FLL, where you have a 2:30 auto, to FRC, where you may get 15 seconds, or 10 seconds (or an unpublished 7.5 seconds week 1, and 9 seconds week 4), I say let us have our auto. That is where the real programming challenges are; if not, us 2nd-tier teams might as well drop the default code on the bot and start training drivers to improve. That being said, I have watched the Einstein matches multiple times already, and I still watch the Cheesy Poofs' "hybrid mode" montage on YouTube a few times a year, so at least we got that.
#9
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted NO. FLL students know what autonomous means and we should keep the same meaning across platforms.
#10
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Obviously I will never hold it against a team for deciding to use any legal means to make their robot more competitive. But autonomous really ought to be autonomous IMO.
#11
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
My feelings exactly. If they want to allow other "indirect" forms of control then call it hybrid mode. When I hear autonomous mode the first assumption I make is that the robots will be autonomous.
#12
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
In principle, I think autonomous means autonomous, and it should stay that way. On the other hand, I like the incentive for teams to work with stuff outside the norm (like the Kinect and CV). I'm divided over whether it should still be part of the autonomous/hybrid mode or a separate period, but overall I liked that Einstein chess match as much as the next FIRSTer.

So I guess I would be in the "Yes, but don't call it autonomous mode" category.
#13
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I don't have an opinion, other than that the GDC should be able to make the appropriate rules however they want.
There's no rule in FLL that prevents you from holding different colored cards in front of a color sensor (as long as you don't touch the robot).
#15
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I don't think it's a good thing to let teams control their robot in autonomous, whether or not the game allows teams to interact with the opposing alliance in auto. Being able to react to something happening, like somebody beating you to the bridge in auto, a blocker robot deploying, or a missed intake on a two-ball autonomous, takes away the whole point of having an autonomous mode. Give teams a few years, and they'll come up with a way to control the entire robot in autonomous mode.

I don't think allowing control of your robot with a Kinect or webcam will even make robots more competitive or more interesting in auto. There were plenty of teams this year that didn't even bother moving forward in autonomous mode. I doubt there would be a significant increase in the number of moving robots in autonomous mode.