Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   General Forum (http://www.chiefdelphi.com/forums/forumdisplay.php?f=16)
-   -   The use of the Kinect and Cheesy Vision in 2015 and beyond (http://www.chiefdelphi.com/forums/showthread.php?t=129245)

Karthik 30-04-2014 18:13

The use of the Kinect and Cheesy Vision in 2015 and beyond
 
It all started innocuously enough with a Q&A entry a week after kickoff:

Q. Are we allowed to use the kinect as part of our driver station during autonomous mode this year?
A. There are no rules prohibiting this.

And was reiterated after build season:

Q. Per Q55, the Kinect is allowed as part of our driver station during autonomous. Please clarify: May a Driver, remaining compliant with G16 & G17, use the Kinect that is part of the driver station to control the Robot during Auto?
A. Yes.

These responses opened the door for the types of indirect control of the robots we saw in autonomous, most notably CheesyVision, but also the Kinect control used by us and 973. I have one simple question about all this: should indirect control of the robot during autonomous mode (i.e. CheesyVision and Kinect control) be allowed for the 2015 season? My personal opinion is that allowing these forms of control removes the autonomy from Autonomous Mode (we had close to complete operator control over our robot in Autonomous once we started using the Kinect). Regardless of what I think, I'm curious to see what the community thinks. Was the autonomous excitement on Einstein enough to justify this type of control, or would you prefer Autonomous Mode to remain autonomous?
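For readers who haven't seen it, the core trick behind these setups is tiny: a driver station webcam (or Kinect) watches the driver, and a left/right signal is sent to the robot. A toy sketch of the classification step (pure Python on a fake grayscale frame; the real tools used OpenCV or the Kinect SDK on live camera data and streamed the result to the robot over the network, and the brightness threshold here is invented):

```python
# Toy sketch of the CheesyVision idea: decide which half of the frame the
# driver's hand covers. A "frame" here is just a 2D list of 0-255
# brightness values; the real tool processed live webcam frames and sent
# the result to the robot over the network.

def side_signal(frame, threshold=40):
    """Return 'left', 'right', 'both', or 'none' for the covered side."""
    mid = len(frame[0]) // 2
    left = [px for row in frame for px in row[:mid]]
    right = [px for row in frame for px in row[mid:]]
    left_covered = sum(left) / len(left) < threshold
    right_covered = sum(right) / len(right) < threshold
    if left_covered and right_covered:
        return "both"
    if left_covered:
        return "left"
    return "right" if right_covered else "none"

# Bright frame with the left half darkened by a hand:
frame = [[10] * 4 + [200] * 4 for _ in range(4)]
print(side_signal(frame))  # left
```

On the robot side, an auto routine would poll this signal each loop and branch on it (e.g. pick which goal to drive toward) — which is all the "control" such a setup really provides.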

Joe G. 30-04-2014 18:19

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I would prefer that the rules reflect the game. Aerial Assist and goalie bots lent themselves to limited human control during the autonomous period, due to their reactive nature, and resulted in a great deal of excitement. A game like Rebound Rumble, even though it allowed, encouraged, and downright highlighted Kinect use, saw it practically never used, because robots never interacted with their opponents in the autonomous period and better accuracy could be achieved with pure autonomy.

For the record, I generally prefer mostly isolated autonomous periods with a high ceiling for performance like Rebound Rumble or Ultimate Ascent, but think that the "hybrid period" should come and go as the games require, rather than forcing one way to work for all games.

Mark Sheridan 30-04-2014 18:19

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
If we had some really long autonomous (longer than 20 seconds), I would love to have some form of corrections to avoid collisions, etc. But if 2015's auto is like the years before (15 seconds or less), I don't think we want Kinect control for another year. We already pushed the envelope, and I think it's enough for now.

Tom Bottiglieri 30-04-2014 18:25

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I'm really torn on this.

On one hand, the current rules basically take the "auto" out of autonomous. On the other hand, autonomous mode is usually really boring. The Einstein autonomous chess match between 1114 and 254 is maybe my favorite FRC memory of all time and maybe the most exciting thing that happened all year.

Joe Ross 30-04-2014 18:25

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I believe that the use of indirect input is highly game dependent. For example, it was legal in 2012 and 2013 (Q198), but hardly used. It may be a few more years before there is a game design that makes indirect input useful again.

Normally, the only excitement in autonomous is whether a robot will fail. That's not very exciting, or inspiring. The race to the bridge in 2012 was exciting, as was the chess match on Einstein this year. I'm in favor of giving teams the tools to make more interesting and exciting autonomous modes.


Full disclosure: We talked about a Kinect controlled blocker starting in late build season, and implemented it for our second regional and championships.

DevBal5012 30-04-2014 18:27

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
It all depends on how the game is set up and how this kind of control flows with it. I don't think that webcams, the Kinect, and similar devices should be banned, but the rules pertaining to them should be very specific and limit certain types of control opportunities.

PVCpirate 30-04-2014 18:38

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
It depends on the game, but I think one should be able to tell whether this type of thing is allowed with a quick glance at the rules. If the GDC desires for this period to take place without any human input, they should call it Autonomous mode. If they want to allow things like Kinect and Cheesy Vision, they should call it Hybrid mode or something similar. Calling it Auto mode and allowing this type of thing just doesn't make sense to me.

Cory 30-04-2014 18:46

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
We likely would not have developed Cheesy Vision had the field implemented hot goal lighting properly.

I think it does cheapen the autonomous period, but it made for exciting matches on Einstein.

Christopher149 30-04-2014 19:15

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
If it's any consideration, the Kinect was allowed last year (though it had little utility overall) and in 2012.

Steven Smith 30-04-2014 20:12

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
The way I see it you have a few levels of "autonomous" with increasing difficulty:

- A script (what the majority of autons are)
- Multiple scripts (pre-selected before match)
- Indirect input (Kinect, etc)
- Actual autonomous (decision trees, actually identifying objects on the field and making decisions based on that input)

Simply writing the script is hard enough for some teams: getting everything figured out on their robot well enough to consistently perform a given action. Maybe some teams do some error checking (is a ball loaded?) to keep from destroying their robot, but in general, you're executing a series of commands blindly.

The better teams have a playbook, which they can play against their opponent's playbook or use in various situations. 1/2/3 ball auton, multiple locations, shot angles, goalie routines, etc.
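A playbook like that can be as thin as a table of pre-baked command scripts keyed by a mode name chosen before the match. A minimal sketch (the mode names and command steps are invented for illustration, not any team's real code):

```python
# Toy auton "playbook": each mode is a fixed list of (command, argument)
# steps selected before the match starts. A real robot would execute
# hardware actions here instead of logging strings.

PLAYBOOK = {
    "one_ball": [("drive", 3.0), ("shoot", 1)],
    "two_ball": [("intake", 1), ("drive", 3.0), ("shoot", 2)],
    "goalie":   [("drive", 1.5), ("raise_blocker", 0)],
}

def run_auton(mode):
    """Blindly execute the selected script, step by step."""
    log = []
    for command, arg in PLAYBOOK[mode]:
        log.append(f"{command}({arg})")
    return log

print(run_auton("two_ball"))  # ['intake(1)', 'drive(3.0)', 'shoot(2)']
```

The jump to indirect input is then just replacing the pre-match mode selection with a signal read live during auto.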

Indirect input allows you to "trump" your opponent's playbook if they have a static script, by essentially playing your robot in real time against their more static script.

Actual autonomous mode only offers advantages over indirect input in the scenario where a computer can identify a situation and react better than a human.

I feel like the first 3 steps actually play out pretty well. Each step is incrementally harder and is incrementally more rewarding. It's a little awkward this year as you move from multiple scripts to indirect, because there really isn't THAT much additional work to develop it, and in situations it really can be a trump card.

My biggest hang-up is that there is basically no incentive to move to full auton. Like... watch the video of the Google car and automated driver, or any system that has really advanced sensors to detect objects, calculate trajectories, etc. I really think the evolution of FRC will include a lot more "driver assist" functions, like say an automated incoming ball tracker and catcher this year... or being able to identify a goalie pole and shoot around it. The level of effort to pull something like this off is immense though, and I don't feel like it would really dominate over an "indirect" input robot in auton mode.

So my only real beef (and why I voted no), is I feel the Kinect lessens the incentive to iterate toward full auton, but I don't feel like it really broke anything this season. I'd also be ok with keeping it legal for a season or two more, as teams push the boundary on auton, then weaning people off it, or giving extra incentive bonuses for auton without the Kinect (or any indirect input from the driver).

tr6scott 30-04-2014 20:14

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Even though the strategy chess match on Einstein was probably among the most exciting robot matches I have ever witnessed, I still voted no.

We are what I would call a pretty low-tech team that keeps designs simple to complete the task; we try to finish on time, iterate, and never break down.

To me, the most important differentiator between the 2nd-tier teams and the rest of the pack is a consistent autonomous. Even better is a consistent multi-game-piece auto.

My first year in FRC as a mentor was Logomotion. My oldest son was a freshman, and our team had a very strong mechanical design group, but programming-wise, the team was lacking mentor support. Coming to FRC as an FLL coach, I had no clue what teleop or auton was. The Logomotion auton was to follow a line and hang an ubertube. In FLL we followed lines all the time, so this looked like a pretty easy task for us; in FLL we only had one sensor, but in FRC they gave us 3!

Long story short: week 1 at Kettering, our first match, we hung an ubertube. I screamed so long I almost passed out. My son and the rest of the programmers were jumping around going crazy. We won our first blue banner that day, and my son was completely hooked on robotics. That season was magical, and ended with a loss to the Cheesy Poofs on Galileo, who would later be world champs.

4 years later, my son is lead programmer and on the drive team. No robot banners this year. I would have to go back and check, but from memory, our two-ball auto missed one ball twice all season. Our one-ball hot detect had significantly more failures, but this was mainly due to losing a second waiting for the field to indicate hot and driving in high gear, which jostled the ball around.

It may be boring to watch a bot meander down a line and hang an ubertube every single time, but if you are programming that auto, it is the most exciting thing in the world. This year, it was also really inspiring to come out of your first district with the third-best auto ranking in the world in week 4.

Coming from FLL, where you have a 2:30 auto, to FRC, where you may get 15 seconds or 10 seconds (or an unpublished 7.5 seconds week 1, and 9 seconds week 4), I say let us have our auto. That is where some real programming challenges are, and if not, us 2nd-tier teams might as well drop the default code on the bot and start training drivers to improve.

That being said, I have watched the Einstein matches multiple times already, and I still watch the Cheesy Poofs' "hybrid mode" montage on YouTube a few times a year, so at least we got that. :)

Al Skierkiewicz 30-04-2014 20:23

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted NO. FLL students know what autonomous means and we should keep the same meaning across platforms.

Jared Russell 30-04-2014 20:28

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Obviously I will never hold it against a team for deciding to use any legal means to make their robot more competitive. But autonomous really ought to be autonomous IMO.

cgmv123 30-04-2014 20:33

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I don't have an opinion, other than that the GDC should be able to make the appropriate rules however they want.

Quote:

Originally Posted by Al Skierkiewicz (Post 1381221)
FLL students know what autonomous means and we should keep the same meaning across platforms.

There's no rule in FLL that prevents you from holding different colored cards in front of a color sensor (as long as you don't touch the robot).

Kris Verdeyen 30-04-2014 20:45

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I never understood hybrid mode as a concept: all we are doing is using a less efficient, less ergonomic, and less effective controller. Maybe the case can be made that it's an interesting challenge to program your own inferior interface, but we aren't running low on interesting challenges, and they use joysticks on the space station.

I'm of the opinion that auto should be auto, but I like seeing robot interaction in auto, which is what made Einstein interesting. Check out the 2006 Lone Star regional finals:

https://www.youtube.com/watch?v=0oLgAC7rwGg

If we had a game like that now, you know that there would be some sensor based tracking of robots, and the cat-and-mouse iterations of auto modes would be just as fun.

Gregor 30-04-2014 20:46

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by cgmv123 (Post 1381228)
There's no rule in FLL that prevents you from holding different colored cards in front of a color sensor (as long as you don't touch the robot).

Quote:

Originally Posted by Nature's Fury Manual
17
• After each start, the robot is considered "autonomous" and remains so until the next time you touch/influence it.

I'd define holding coloured cards in front of a colour sensor as "influencing."

Jared 30-04-2014 21:00

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I don't think it's a good thing to let teams control their robot in autonomous, whether the game allows teams to interact with the opposing alliance in auto or not. Being able to react to something happening, like somebody beating you to the bridge in auto, a blocker robot deploying, or a missed intake for a two-ball autonomous, takes away the whole point of having an autonomous mode. Give teams a few years, and they'll come up with a way to control the entire robot in autonomous mode.

I don't think allowing control of your robot with your kinect or webcam will even make robots more competitive/more interesting in auto. There were plenty of teams this year that didn't even bother moving forward in autonomous mode. I doubt there would be a significant increase in the number of moving robots in autonomous mode.

brennonbrimhall 30-04-2014 21:03

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I loved the Einstein chess match. It's definitely one of my favorite FRC moments of all time.

As far as the original question, I've actually been thinking about this for a while. One thing that I've noticed throughout my tenure in FRC is how FRC really is less of a programming competition, and more of a mechanical competition. Let me clarify: programming, yes, is vital to the final outcome of how a team performs. And don't forget the role of code in teleop either.

However, the effort/reward ratio for programming in FRC is dissimilar to the effort/reward ratio for mechanical design, strategy, or even drive team training. As a programmer, I can name off some robot code that's been inspirational to me: 341's auto-aim SmartDash widget, 254's auto-climb sequence, and all of the crazy autonomous modes that are out there. 987's centerline auto in 2013 that would gracefully degrade to the 2pt goal was awesome, as was their autonomous scripting system. But I can name so many more robots, designs, or mechanical things that are just as inspirational: all of 67's, 254's, 469's, and 1114's robots. But when's the last time we, as a community, celebrated true innovation in programming? Gordian is a fully implemented scripting language by 4334. 4334 also spent the time to completely rewrap WPILib in the form of ATALibJ. 1540 has their own custom robot framework that's open source. When's the last time I've heard anyone post in awe of any of those things?

It's because the subset of FRC people who can appreciate them is a much smaller fraction of the total population; additionally, it's hard to appreciate because it's much more abstract than a linkage or a drivetrain. And it's not directly convertible to points. Innovation in code does not guarantee better robot performance. In reality, programming innovation has a habit of blowing up in the face of a humble high school programmer. FRC just does not reward attempting top-notch programming.

This helps to define the effective ceiling for programming. Effective programming is transforming the robot from an expensive paperweight into something controllable. Auto modes are just calls to the same code. Yes, teams can do better -- but the marginal reward for doing so is much lower at higher levels. The same is true for anything in FRC, but I have the distinct feeling that it's a much sharper decrease in reward than in other aspects of FRC. Part of the reason for this is the defined floor that we have, too. WPILib makes it really, really hard to screw stuff up. This is intentional -- to have a massive, expensive paperweight is not how I would like anyone to spend their FRC season. But, at the same time, it makes programming easy, almost handed to you. The marginal effort for making a Talon spin a CIM is so low. Everyone does it. And non-robot stuff like driver assist programs is great -- but I've noticed that my drivers are just fine (and often prefer) going it solo.

I see 'Hybrid modes' as rewarding more complex programming for improved control during autonomous. As I think that bringing the programming floor down is a major violation of GP, ethics, and morals, something has to be done to increase the rewards of programming. This is one such way.

Caleb Sykes 30-04-2014 21:12

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Jared (Post 1381242)
I don't think allowing control of your robot with your kinect or webcam will even make robots more competitive/more interesting in auto.

I'm curious then. Why do you think that so many teams used a Kinect/webcam this year in auto?

donald_pinckney 30-04-2014 21:29

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
My opinion is that autonomous mode should actually be autonomous, and it should exist to challenge programmers in FIRST. Once you have a functional Kinect-style setup working, you can do almost anything in autonomous with very little effort (amazing goalie bots). However, I prototyped autonomous (actually autonomous) CV goalie-bot code shortly after Kickoff: I used image processing in Matlab to prototype detecting the red or blue bumpers of other robots, in order to either avoid enemy goalies or detect enemy shooters. This is a complicated system and would take much more tweaking to get it to work right compared to Cheesy Vision, but if teams were to accomplish a programming task such as this, I think it would in the end be a much more fulfilling and educational experience.
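The bumper-detection idea can be sketched without any CV library: classify pixels as red- or blue-dominant and require a minimum count before declaring a bumper in view. This is a simplified stand-in for the Matlab prototype described above; the thresholds and the pure-Python frame format are assumptions, and a real system would also filter on saturation and shape:

```python
def detect_bumper(frame, min_pixels=10):
    """Classify a frame as showing a 'red' or 'blue' bumper, or None.
    frame is a list of rows of (r, g, b) tuples; a real prototype would
    pull these from a camera rather than a hand-built list."""
    red = sum(1 for row in frame for (r, g, b) in row
              if r > 150 and r > 2 * max(g, b))
    blue = sum(1 for row in frame for (r, g, b) in row
               if b > 150 and b > 2 * max(r, g))
    if max(red, blue) < min_pixels:
        return None
    return "red" if red > blue else "blue"

# A frame dominated by strong red pixels:
frame = [[(200, 30, 30)] * 8 for _ in range(4)]
print(detect_bumper(frame))  # red
```

The hard part, as noted above, is not this classification but making it robust under field lighting — which is exactly the tweaking Cheesy Vision sidesteps.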

Billfred 30-04-2014 21:30

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
The excitement of Einstein speaks for itself. Whether it's a Kinect, or CheesyVision, or four buttons like in 2008, tools to foster an autonomous chess match make the game more fun to watch from the start.

Is that still autonomous? By strict definitions, no--but we already stretch the definition of "robot" at times too.

Jared Russell 30-04-2014 21:32

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Billfred (Post 1381255)
The excitement of Einstein speaks for itself. Whether it's a Kinect, or CheesyVision, or four buttons like in 2008, tools to foster an autonomous chess match make the game more fun to watch from the start.

Is that still autonomous? By strict definitions, no--but we already stretch the definition of "robot" at times too.

It could have been a chess match with or without vision-based inputs, though. 1114 could have programmed a mode that drove towards the hot goal, and all we used Cheesy Vision for was to select which side to drive towards. The actual path we drove was all autonomous.

Jared 30-04-2014 21:34

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by inkling16 (Post 1381249)
I'm curious then. Why do you think that so many teams used a Kinect/webcam this year in auto?

Sorry, that's not what I meant to write. We did use Cheesy Vision this year, and we always made the hot goal when we ran our 1-ball auto mode. Having control in autonomous mode is a competitive advantage; if we had it working at our first event, which we lost by 2 points, we would have won. I meant to say that robots that do use Kinect/webcam control are more competitive, but I don't think it will do much to solve the issue of robots just sitting there in auto.

With the exception of the Einstein finals, which were 3 of the 10,655 matches played this year, how many matches were more exciting to watch because of Kinect control? At least for this year's game, shooting into a hot goal is not particularly more exciting than shooting into the non-hot goal for most spectators.

Caleb Sykes 30-04-2014 21:41

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Jared (Post 1381259)
Sorry, that's not what I meant to write. We did use Cheesy Vision this year, and we always made the hot goal when we ran our 1-ball auto mode. Having control in autonomous mode is a competitive advantage; if we had it working at our first event, which we lost by 2 points, we would have won. I meant to say that robots that do use Kinect/webcam control are more competitive, but I don't think it will do much to solve the issue of robots just sitting there in auto.

With the exception of the Einstein finals, which were 3 of the 10,655 matches played this year, how many matches were more exciting to watch because of Kinect control? At least for this year's game, shooting into a hot goal is not particularly more exciting than shooting into the non-hot goal for most spectators.

No problem, I thought you were trying to say something along those lines. I agree that immobile robots in auto are still a big issue in FRC, and that having or not having a Kinect/webcam during auto will likely have no appreciable impact on this problem.

TheMadCADer 30-04-2014 21:44

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by inkling16 (Post 1381249)
I'm curious then. Why do you think that so many teams used a Kinect/webcam this year in auto?

It was more consistent than letting the robot sense the objects on the field. Hot goals were shaky at best, and the way they were designed hampered even the best vision systems. Also, teams like 1114 and 973 wanted to react to other robots.

If we decide that any control is unacceptable in Autonomous, we also need to ensure that our robots can get all the information they need from the field. Vision targets need to be consistent.

We also need a consistent way to sense other robots, something that has to be put on every robot. Perhaps retro-reflective stickers on bumpers, motion-capture markers on the robot, or something like the trailers from 2009.

AdamHeard 30-04-2014 21:46

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by TheMadCADer (Post 1381263)
It was more consistent than letting the robot sense the objects on the field. Hot goals were shaky at best, and the way they were designed hampered even the best vision systems. Also, teams like 1114 and 973 wanted to react to other robots.

If we decide that any control is unacceptable in Autonomous, we also need to ensure that our robots can get all the information they need from the field. Vision targets need to be consistent.

We also need a consistent way to sense other robots, something that has to be put on every robot. Perhaps retro-reflective stickers on bumpers, motion-capture markers on the robot, or something like the trailers from 2009.

Goalie bots were OP with the Kinect.

Half our blocking was done by guessing where our opponents would shoot (from scouting data) and driving there with encoders, using a number we selected on the driver station after the robots lined up.

This would have been reasonable, and still would have resulted in the SAME Einstein chess match.
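The encoder approach above reduces to converting a chosen distance into an encoder tick target. A rough sketch with invented wheel, encoder, and position numbers (not 973's actual values):

```python
import math

# Invented wheel/encoder numbers, for illustration only.
WHEEL_DIAMETER_IN = 4.0
TICKS_PER_REV = 360

def ticks_for_distance(distance_in):
    """Encoder ticks needed to roll distance_in inches."""
    return round(distance_in / (math.pi * WHEEL_DIAMETER_IN) * TICKS_PER_REV)

# The number picked on the driver station just indexes a table of
# pre-scouted blocking positions (these distances are made up):
BLOCK_POSITIONS_IN = {1: 60.0, 2: 96.0, 3: 132.0}

def blocking_target(selection):
    return ticks_for_distance(BLOCK_POSITIONS_IN[selection])

print(blocking_target(2))  # 2750
```

The auto routine then simply drives until the encoder count reaches the target — no camera or Kinect required, which is the point being made here.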

kylelanman 30-04-2014 22:06

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Jared Russell (Post 1381224)
Obviously I will never hold it against a team for deciding to use any legal means to make their robot more competitive. But autonomous really ought to be autonomous IMO.

My feelings exactly. If they want to allow other "indirect" forms of control then call it hybrid mode. When I hear autonomous mode the first assumption I make is that the robots will be autonomous.

cadandcookies 30-04-2014 22:10

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
In principle, I think autonomous means autonomous and it should be that way. On the other hand, I like the incentive for teams to be working with stuff outside the norm (like Kinect and CV stuff). I'm divided over whether it should still be a part of autonomous/hybrid mode or a different part, but overall I liked that Einstein chess match as much as the next FIRSTer.

So I guess I would be in the "Yes, but don't call it autonomous mode" category.

asid61 30-04-2014 22:15

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by brennonbrimhall (Post 1381245)
I loved the Einstein chess match. It's definitely one of my favorite FRC moments of all time.

As far as the original question, I've actually been thinking about this for a while. One thing that I've noticed throughout my tenure in FRC is how FRC really is less of a programming competition, and more of a mechanical competition. Let me clarify: programming, yes, is vital to the final outcome of how a team performs. And don't forget the role of code in teleop either.

However, the effort/reward ratio for programming in FRC is dissimilar to the effort/reward ratio for mechanical design, strategy, or even drive team training. As a programmer, I can name off some robot code that's been inspirational to me: 341's auto-aim SmartDash widget, 254's auto-climb sequence, and all of the crazy autonomous modes that are out there. 987's centerline auto in 2013 that would gracefully degrade to the 2pt goal was awesome, as was their autonomous scripting system. But I can name so many more robots, designs, or mechanical things that are just as inspirational: all of 67's, 254's, 469's, and 1114's robots. But when's the last time we, as a community, celebrated true innovation in programming? Gordian is a fully implemented scripting language by 4334. 4334 also spent the time to completely rewrap WPILib in the form of ATALibJ. 1540 has their own custom robot framework that's open source. When's the last time I've heard anyone post in awe of any of those things?

It's because the subset of FRC people who can appreciate them is a much smaller fraction of the total population; additionally, it's hard to appreciate because it's much more abstract than a linkage or a drivetrain. And it's not directly convertible to points. Innovation in code does not guarantee better robot performance. In reality, programming innovation has a habit of blowing up in the face of a humble high school programmer. FRC just does not reward attempting top-notch programming.

This helps to define the effective ceiling for programming. Effective programming is transforming the robot from an expensive paperweight into something controllable. Auto modes are just calls to the same code. Yes, teams can do better -- but the marginal reward for doing so is much lower at higher levels. The same is true for anything in FRC, but I have the distinct feeling that it's a much sharper decrease in reward than in other aspects of FRC. Part of the reason for this is the defined floor that we have, too. WPILib makes it really, really hard to screw stuff up. This is intentional -- to have a massive, expensive paperweight is not how I would like anyone to spend their FRC season. But, at the same time, it makes programming easy, almost handed to you. The marginal effort for making a Talon spin a CIM is so low. Everyone does it. And non-robot stuff like driver assist programs is great -- but I've noticed that my drivers are just fine (and often prefer) going it solo.

I see 'Hybrid modes' as rewarding more complex programming for improved control during autonomous. As I think that bringing the programming floor down is a major violation of GP, ethics, and morals, something has to be done to increase the rewards of programming. This is one such way.

I disagree. Nowadays, robots are much less mechanical and a lot more electrical. For example, every single design we prototyped this year for shooting the ball was used effectively by some top team; we essentially arbitrarily used a catapult. Top teams are all very well engineered; however, at some point it stops being effective to maximize strength and it becomes more useful to program.
Things like PID drive controls, automatic shifting, and swerve drives are all primarily electrical now. Programming plays a very large part in how a robot performs, whereas mechanical design can only go so far. Top teams have good code.

I vote no on using Kinect/vision control for auton. This is because, if programmed well enough, autonomous can easily become just an extended teleop with little effort, especially for defensive autons.

wilsonmw04 30-04-2014 22:19

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
We spent weeks integrating a camera solution for auto mode this year. We got it working on the practice field, but never had time to calibrate it on the field at champs (I wish they would let us calibrate every morning instead of Thursday only; we got it working Thursday around lunch). The idea that something as simple as open source code and a webcam would be better than a robot running truly autonomously galls me. Please, remove the de facto hybrid mode for next year.

dougwilliams 30-04-2014 22:29

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Jared (Post 1381259)
I meant to say that robots that do use kinect/webcam control are more competitive...

I'm not sure why or how a Kinect/webcam makes autonomous more competitive (I assume you mean driver-station-based control).

I also generally disagree with the sentiment that hot goal detection was an intractable timing problem. At our second regional, we spoke to the field team after our first match (where we missed hot goal detection), and they indicated that the reflective tape indicator was taking some finite time to flip over. We re-timed our code at the competition and didn't miss a hot goal after that, using only a robot-based webcam and image processing.

To me, that seems like an absolutely plausible "real-life" engineering type problem, and I thought it was great learning experience.
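The re-timing fix described above boils down to distrusting the first few camera samples until the tape indicator has had time to flip. A small sketch; the latency constant and sample period are invented numbers, not the actual field timing:

```python
# Sketch of a re-timed hot-goal check: ignore camera readings until the
# field's reflective-tape indicator has had time to flip over.
TAPE_FLIP_LATENCY_S = 0.5

def is_hot(samples, period_s=0.1):
    """samples[i] is the camera's hot/not-hot reading at time i * period_s.
    Return the first reading taken after the flip latency has elapsed."""
    first_valid = round(TAPE_FLIP_LATENCY_S / period_s)
    return samples[first_valid]

# Early frames read "not hot" while the tape is still flipping:
print(is_hot([False, False, False, False, False, True, True]))  # True
```

Measuring that latency at the field and baking it into the code is precisely the "real-life engineering problem" being described.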

I don't object to the use of the Kinect after the ruling, but calling it "autonomous" while using a driver station webcam/Kinect seems to be not in the spirit of the word (I'm not arguing it was inappropriate given the rule clarifications). I would have preferred if it had remained more autonomous.

DampRobot 30-04-2014 22:37

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by asid61 (Post 1381290)
I disagree. Nowadays, robots are much less mechanical and a lot more electrical. For example, every single design we prototyped this year for shooting the ball was used effectively by some top team; we essentially arbitrarily used a catapult. Top teams are all very well engineered; however, at some point it stops being effective to maximize strength and it becomes more useful to program.
Things like PID drive controls, automatic shifting, and swerve drives are all primarily electrical now. Programming plays a very large part in how a robot performs, whereas mechanical design can only go so far. Top teams have good code.

I don't really think this is the case. Think of the Cheesy Poofs for example, a team that really has mechanical, electrical, and software systems that are second to none. I'd bet that they'd still win a lot of matches if they replaced their code with something hacked together overnight, and they'd still probably win if I decided I'd rewire their robot as a surprise (assuming I didn't screw anything up on purpose). But on the other hand, if you took their same code and electronics and put it on a janky JVN catapult, I'd bet they'd win matches, but not as many.

In FRC, you can win from just having awesome mechanisms, even with only so-so wiring and code. I'll give you that poor code and wiring can lose matches, but you sure can't win without well designed mechanisms.

Charles Boehm 30-04-2014 22:50

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted no. But I am also extremely torn here. Autonomous is designed for zero driver control whatsoever, whether direct or indirect. Using the Kinect or CheesyVision during auto seems to bend the rules a bit. Although the way the rules are written allows it, it kind of takes away from the definition of autonomous.
I'll admit my team used something very similar to CheesyVision at champs after our camera crapped out on us. At the time I was all for this, probably because I was so excited we were doing well and I wanted to keep it up.
On the other hand, the use of this created extremely exciting stand-offs, namely the Einstein finals. The 1114 and 254 stand-offs were exhilarating to watch, and I don't know that that could happen with purely pre-coded instructions.

dudefise 30-04-2014 23:00

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I don't think it should be allowed. Although these systems were technically legal and a really smart way of using the specifics of the rules to increase a robot's ability, I think it diminishes the idea of an "autonomous period". It's no longer autonomous, it is being controlled by a human, even if indirectly.
I can't hold the use of this strategy against any team or person; it wasn't illegal and as I said earlier, an extremely smart way of doing things. In the future, however, I think that GDC should either make indirect, live in-match control illegal, or change it to a hybrid period permanently.

billylo 01-05-2014 00:27

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Software is increasingly important in today's world. We need to challenge the programmers more, not less.

The Poofs' auton performance on Einstein was inspirational, with or without Cheesy Vision.

orangemoore 01-05-2014 00:58

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I think that the use of indirect control of robots should be allowed. I also think the hybrid period should be brought back next year in place of auto, so that it promotes this kind of innovation.

For a pure Auto mode this kind of stuff doesn't really fit, but I think it should be a possibility.

Citrus Dad 01-05-2014 13:46

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Jared (Post 1381259)
With the exception of the Einstein finals, which are 3 out of the 10,655 matches played this year, how many of them were more exciting to watch because of kinect control? At least for this year's game, shooting in a hot goal is not particularly more exciting than shooting in the not hot goal for most spectators.

Having scouted a number of the other division playoffs as well as our own, the goalies really became salient in those rounds. It was interesting how they weren't important during qualifying.

AdamHeard 01-05-2014 13:49

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Citrus Dad (Post 1381545)
Having scouted a number of the other division playoffs as well as our own, the goalies really became salient in those rounds. It was interesting how they weren't important during qualifying.

We used it very effectively in quals in our division, winning some tough matches based on blocking.

I will agree that in elims it was more effective to block, for two reasons: more teams were shooting, and you generally had better partners to help defend the missed balls.

Lil' Lavery 01-05-2014 16:21

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by AdamHeard (Post 1381265)
Goalie bots were OP with kinect.

Half our blocking was done by guessing where our opponents would shoot (from scouting data), and driving there with encoders and a number we selected on driverstation after the robots lined up.

This would have been reasonable, and still resulted in the SAME einstein chess match.

This is essentially what 1712 was hoping to do with our autonomous modes (with the original intent being to accommodate any partner's potential 2/3-ball routine while still earning our 5 mobility points), but we kept introducing drive lag when we implemented it. Had we gotten this working, I would have pushed for adding a goalie pole and using it there as well.
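A minimal sketch of the quoted encoder approach — pick a blocking slot on the driver station before the match, then let the robot drive itself there. The slot spacing and proportional gain below are made-up numbers, purely for illustration:

```python
# Hypothetical constants -- real values depend on the drivetrain and encoders.
SLOT_SPACING_TICKS = 2048   # encoder ticks between adjacent blocking positions
KP = 0.0008                 # proportional gain on position error

def drive_output(selected_slot, encoder_ticks):
    """Proportional control toward the pre-selected blocking slot.

    `selected_slot` is chosen on the driver station BEFORE auto starts,
    so no live human input is needed once the match begins.
    Returns a motor command clamped to [-1.0, 1.0].
    """
    target = selected_slot * SLOT_SPACING_TICKS
    error = target - encoder_ticks
    return max(-1.0, min(1.0, KP * error))   # clamp to valid motor range

print(drive_output(1, 2048))   # already at slot 1 -> no output
print(drive_output(2, 0))      # far from target -> saturated output
```

In robot code this would run every control loop iteration until the error settles near zero.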

Tristan Lall 01-05-2014 17:42

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
It depends on the game. A bit of autonomy is nice, but I don't think a strictly autonomous mode is a prerequisite to a successful FRC game.

With that in mind, if the GDC wants the robots to have absolute autonomy, then they should write that into the rules; if they want hybrid autonomy, then they can write that instead. If they want to provide different incentives for increasing degrees of autonomy, maybe they can do that too. Whatever they do, it would be better if it's fairly clear from the start, instead of being enabled through open-ended rules and Q&As.

Mastonevich 01-05-2014 17:56

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I want to see fully autonomous robots some day so I voted no. If we allow hybrid it might stifle creativity because there is another way to achieve the objective.

AdamHeard 01-05-2014 17:57

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
There are reasonable compromises that could be done.

For example, kinect input is shut off after 2 seconds, or only open after 7 seconds.

Something where it essentially limits you to send signals that launch other actions, versus directly controlling motion.
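Adam's time-gating compromise could be sketched as follows (the window bounds and the "None means run the scripted routine" convention are illustrative assumptions, not anything from the rules):

```python
def gated_kinect_command(match_time_s, kinect_signal, window=(0.0, 2.0)):
    """Pass the Kinect signal through only inside the allowed time window.

    Outside the window the function returns None, meaning the robot
    must fall back to its pre-programmed autonomous routine.
    """
    lo, hi = window
    if lo <= match_time_s <= hi:
        return kinect_signal    # human hint allowed: trigger an action
    return None                 # window closed: fully autonomous

# First 2 seconds: the driver may pick a routine.
print(gated_kinect_command(1.0, "go_left"))    # signal passes through
# After that, Kinect input is ignored.
print(gated_kinect_command(5.0, "go_left"))    # None -> scripted auto
```

The same gate inverted (`window=(7.0, 10.0)`) gives the "only open after 7 seconds" variant.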

Mark Sheridan 01-05-2014 18:05

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by AdamHeard (Post 1381640)
There are reasonable compromises that could be done.

For example, kinect input is shut off after 2 seconds, or only open after 7 seconds.

Something where it essentially limits you to send signals that launch other actions, versus directly controlling motion.

Or a point bonus for not sending commands from the driver station during auto.

Emphasis on bonus instead of penalty. Bonus always sounds better.

Hjelstrom 01-05-2014 18:25

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted 'no'. The autonomous 'chess match' on Einstein was awesome! (Kudos to the poofs for being ready for that!) But there have been many examples of autonomous mode 'chess matches' over the years without needing hybrid control. In 2013 we had lots of them when fighting for the discs in the center of the field for example.

Abhishek R 01-05-2014 20:54

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Hjelstrom (Post 1381647)
I voted 'no'. The autonomous 'chess match' on Einstein was awesome! (Kudos to the poofs for being ready for that!) But there have been many examples of autonomous mode 'chess matches' over the years without needing hybrid control. In 2013 we had lots of them when fighting for the discs in the center of the field for example.

The 2012 fights over the balls, while maybe not as impressive as 1114 and 254 on Einstein this year, were still pretty exciting.

Jared Russell 01-05-2014 21:05

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Abhishek R (Post 1381688)
The 2012 fights over the balls, while maybe not as impressive as 1114 and 254 on Einstein this year, were still pretty exciting.

As a member of 341, we competed against 233 in Boston, at Champs, and then again at IRI. It became a game of chicken...who could get to the bridge first, push harder, and push for longer to get those balls. Good times.

rick.oliver 02-05-2014 08:11

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted yes. It provides another facet to the competition which can enhance robot performance. It has the potential to raise the floor, so to speak, and the ceiling.

Dave Campbell 02-05-2014 09:08

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Emphatically NO! I cannot believe that we allowed it this year. The intent of the rules was clear to my team: autonomous control uses pre-programmed instructions, with sensors creating the changes in robot actions, and no driver actions or human input to drive the machines. If we want hybrid control, then it should be called hybrid control. Thanks for bringing this up, Karthik.

KPSch 02-05-2014 09:32

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted no. The chess matches on Einstein were fun to watch and Cheesy Vision was a great way to work around the hot goal issues this year. However, as a programming mentor for my team I feel that a pure, non-hybrid autonomous presents a better challenge. The teams have to figure out ways for the robot to interact with its environment without human input.

I realize that 254 and 1114's hybrid autos were really just instructions to go left and go right, but this is a slippery slope. What's to stop a team from holding up colored cards to control a wider variety of actions?
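To illustrate how easily that slippery slope scales, here is a hedged sketch of the colored-card idea — the hue cutoffs (OpenCV's 0–179 hue convention) and the card-to-action mapping are entirely invented for the example:

```python
# Hypothetical mapping -- any number of cards could select among
# pre-programmed routines, turning "auto" into menu-driven teleop.
CARD_ACTIONS = {
    "red":    "shoot_left",
    "blue":   "shoot_right",
    "green":  "two_ball",
    "yellow": "mobility_only",
}

def action_for_card(dominant_hue):
    """Classify a dominant hue (0-179, OpenCV 8-bit convention) into a
    card color, then look up the routine that color selects."""
    if dominant_hue < 15 or dominant_hue >= 165:
        color = "red"       # hue wraps around at both ends of the range
    elif dominant_hue < 45:
        color = "yellow"
    elif dominant_hue < 90:
        color = "green"
    else:
        color = "blue"
    return CARD_ACTIONS[color]

print(action_for_card(5))    # a red card selects the left-goal routine
print(action_for_card(60))   # a green card selects the two-ball routine
```

In a real setup the dominant hue would come from an HSV histogram of the driver station webcam frame; the classification step itself is this trivial, which is exactly the concern.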

Chris Hibner 02-05-2014 09:34

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by rick.oliver (Post 1381822)
I voted yes. It provides another facet to the competition which can enhance robot performance. It has the potential to raise the floor, so to speak, and the ceiling.

Playing devil's advocate: Then why not just do away with autonomous altogether? With a little more programming, Kinect/Chezy Vizhun can completely replace the joystick so why not just let the teams use their joysticks?

Boe 02-05-2014 09:43

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Chris Hibner (Post 1381848)
Playing devil's advocate: Then why not just do away with autonomous altogether? With a little more programming, Kinect/Chezy Vizhun can completely replace the joystick so why not just let the teams use their joysticks?

This is why I believe it's going to go away next year. Back in 2012 we tested the Kinect for driving the robot and performing actions (there was even a team who balanced a bridge during build season with the Kinect), and with a bit of practice we were able to do most of what we would need for auton. Wanting a more reliable auton, and with a variety of robot problems, we never ended up using it. In games where you can play defense in auton, a defender with good programming can shut down all but the best teams, it seems.

Jared Russell 02-05-2014 10:24

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Chris Hibner (Post 1381848)
Playing devil's advocate: Then why not just do away with autonomous altogether? With a little more programming, Kinect/Chezy Vizhun can completely replace the joystick so why not just let the teams use their joysticks?

A lot of people don't realize that Team 254's 2012 Hybrid Mode was basically all teleoperation with the Kinect.

Evidence: www.youtube.com/watch?v=ZaOiaC0I8pY

/s

vgdude999 02-05-2014 10:42

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted no. I actually thought about indicating the hot goal using the kinect when I first started thinking about how to detect it, but I determined that autonomous meant autonomous, and this method would most likely be against the rules. I never checked the Q&A, but if I had, I don't think I would have used the kinect anyway.

Moral of the story: autonomous should be autonomous, nothing else.

mjc49 02-05-2014 10:50

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
I voted no, but I believe the game makers need to define one way or another if this is allowed in autonomous mode. It was magical to watch the finals and I think it can add a lot of strategy to a 'pre-teleop' period. I just don't think it is correct to say it is autonomous mode if drivers are able to manipulate the robot.

Lil' Lavery 02-05-2014 11:29

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Now, how do people's opinions change if the autonomous mode is not at the beginning of the match?

AdamHeard 02-05-2014 11:31

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by mjc49 (Post 1381882)
I voted no, but I believe the game makers need to define one way or another if this is allowed in autonomous mode. It was magical to watch the finals and I think it can add a lot of strategy to a 'pre-teleop' period. I just don't think it is correct to say it is autonomous mode if drivers are able to manipulate the robot.

I will reiterate that the finals on Einstein likely would have been the EXACT SAME sans kinect and cheesey vision.

billbo911 02-05-2014 11:50

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Lil' Lavery (Post 1381893)
Now, how do people's opinions change if the autonomous mode is not at the beginning of the match?

I love this question. Not that it changes whether a robot is a "Robot" or just a big "RC car", but it introduces a new dynamic to the game, and one I have been hoping would play out in one of these coming seasons.
Think about how moving Autonomous to the middle or end of a match would change both offensive and defensive strategies!

Quote:

Originally Posted by AdamHeard (Post 1381895)
I will reiterate that the finals on Einstein likely would have been the EXACT SAME sans kinect and cheesey vision.

Agreed!
When you get to that level of performance, there will always be solid ways of addressing the challenges at hand.

Personally, although I like the ability to use things like the Kinect and CheesyVision, I prefer to leave Autonomous fully autonomous.

rick.oliver 02-05-2014 13:27

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Chris Hibner (Post 1381848)
Playing devil's advocate: Then why not just do away with autonomous altogether? With a little more programming, Kinect/Chezy Vizhun can completely replace the joystick so why not just let the teams use their joysticks?

What is the difference between a Kinect unit or a webcam at the driver station reacting to input and influencing the robot's actions, as compared to a camera or other sensor mounted directly on the robot and reacting to input? One difference is how they each are counted (or not counted) in the weight limit ... that could be an argument for excluding their use.

Is it purely an autonomous mode or is it a hybrid or is it almost indistinguishable from teleoperated? How is the programming challenge that different? Why does that matter?

We used a Kinect to initiate the autonomous routine to launch the ball and then drive forward. It enabled us to detect a hot goal and hit it consistently. We employed a simple solution using available technology to meet a challenge.

JesseK 02-05-2014 14:06

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by Lil' Lavery (Post 1381893)
Now, how do people's opinions change if the autonomous mode is not at the beginning of the match?

We postulated that Autonomous would be at the end of the match this year, given some pre-season Dr. Who references. It would be an interesting twist to autonomous, but would also make it much harder to accomplish due to greater variance in starting conditions.

Generally speaking, I agree with Adam - some compromise is probably the best way forward. Either way, it will have to be addressed: non-wired robot interaction (i.e. cameras, etc.) will definitely increase as time moves on and technology gets better or simpler ideas get introduced. The video game market has come up with some really cool immersive ideas lately.

allen.mays 02-05-2014 14:32

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by rick.oliver (Post 1381955)
What is the difference between a Kinect unit or a webcam at the driver station reacting to input and influencing the robot's actions, as compared to a camera or other sensor mounted directly on the robot and reacting to input?

I think the biggest difference is who is processing data to make a differential decision. For "indirect" controls, the driver is making those differentiations, whereas in autonomous, those must be made by the robot according to the programming. Don't get me wrong, I think that the Kinect and CV controls are interesting and fun, and I credit teams like the Cheesy Poofs for being creative enough to develop them within the rules, but a true autonomous should be just that - autonomous.

Citrus Dad 02-05-2014 14:43

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
Quote:

Originally Posted by AdamHeard (Post 1381895)
I will reiterate that the finals on Einstein likely would have been the EXACT SAME sans kinect and cheesey vision.

I'm not sure about that. I think the human control element allowed for nuances that were critical and could not easily have been incorporated into the auto routines.

adciv 02-05-2014 18:57

Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
 
We programmed our auto the same way 254 did, as a switch in the auto code for going left/right. We just used the Kinect instead. We have done vision processing in the past, but looking at what was going on with the field and the way the vision sensors were placed, I'm glad we didn't.

If these systems are prohibited, then here's what we need:
(1) Fields that operate as stated; the lights this year were significantly off. I don't know about the vision targets, but the yellow lights were routinely switching at 5.5 to 6 seconds or later. I don't know about other teams, but this was routinely causing the first ball of our two-ball auto to be "early".

(2) Targets that are useful. We were considering using the lights around the goal over the vision targets due to their placement. With the kinect being available and so easy to implement, it became a no-brainer to use the mark 1 human eyeball. The vision targets this year were not in a good spot.

(3) An auto that eliminates the usefulness of having a human in the loop.

