The use of the Kinect and Cheesy Vision in 2015 and beyond
It all started innocuously enough with a Q&A entry a week after kickoff:
Quote:
Q. Are we allowed to use the Kinect as part of our driver station during autonomous mode this year?
A. There are no rules prohibiting this.

And was reiterated after build season:

Quote:
Q. Per Q55, the Kinect is allowed as part of our driver station during autonomous. Please clarify: May a Driver, remaining compliant with G16 & G17, use the Kinect that is part of the driver station to control the Robot during Auto?
A. Yes.

These responses opened the door for the types of indirect control of the robots we saw in autonomous, most notably CheesyVision, but also the Kinect control used by us and 973. I have one simple question about all this: should indirect control of the robot during autonomous mode (i.e. CheesyVision and Kinect control) be allowed for the 2015 season?

My personal opinion is that allowing these forms of control removes the autonomy from Autonomous Mode (we had close to complete operator control over our robot in Autonomous once we started using the Kinect). Regardless of what I think, I'm curious to see what the community thinks. Was the autonomous excitement on Einstein enough to justify this type of control, or would you prefer Autonomous Mode to remain autonomous?
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I would prefer that the rules reflect the game. Aerial Assist and goalie-bots lent themselves to limited human control during the autonomous period, due to their reactive nature, and resulted in a great deal of excitement. A game like Rebound Rumble, even though it allowed, encouraged, and downright highlighted Kinect use, saw it practically never used, because robots never interacted with their opponents in the autonomous period and better accuracy could be achieved with pure autonomy.
For the record, I generally prefer mostly isolated autonomous periods with a high ceiling for performance, like Rebound Rumble or Ultimate Ascent, but I think that the "hybrid period" should come and go as the games require, rather than forcing one way to work for all games.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
If we had some really long autonomous period (longer than 20 seconds), I would love to have some form of corrections to avoid collisions and the like. But if 2015's auto is like the years before (15 seconds or less), I don't think we want Kinect control for another year. We already pushed the envelope, and I think that's enough for now.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I'm really torn on this.
On one hand, the current rules basically take the "auto" out of autonomous. On the other hand, autonomous mode is usually really boring. The Einstein autonomous chess match between 1114 and 254 is maybe my favorite FRC memory of all time, and maybe the most exciting thing that happened all year.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I believe that the use of indirect input is highly game dependent. For example, it was legal in 2012 and 2013 (Q198), but hardly used. It may be a few more years before there is a game design that makes indirect input useful again.
Normally, the only excitement in autonomous is whether a robot will fail. That's not very exciting or inspiring. The race to the bridge in 2012 was exciting, as was the chess match on Einstein this year. I'm in favor of giving teams the tools to make more interesting and exciting autonomous modes. Full disclosure: we talked about a Kinect-controlled blocker starting in late build season, and implemented it for our second regional and championships.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
It all depends on how the game is set up and how this kind of control flows with a particular game. I don't think that webcams, the Kinect, and similar devices should be banned, but the rules pertaining to them should be very specific and limit certain types of control opportunities.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
It depends on the game, but I think one should be able to tell whether this type of thing is allowed with a quick glance at the rules. If the GDC desires for this period to take place without any human input, they should call it Autonomous mode. If they want to allow things like Kinect and Cheesy Vision, they should call it Hybrid mode or something similar. Calling it Auto mode and allowing this type of thing just doesn't make sense to me.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
We likely would not have developed Cheesy Vision had the field implemented hot goal lighting properly.
I think it does cheapen the autonomous period, but it made for exciting matches on Einstein.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
If it's any consideration, the Kinect was allowed last year (though it had little utility overall) and in 2012.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
The way I see it you have a few levels of "autonomous" with increasing difficulty:
- A script (what the majority of autons are)
- Multiple scripts (pre-selected before the match)
- Indirect input (Kinect, etc.)
- Actual autonomy (decision trees, actually identifying objects on the field and making decisions based on that input)

Simply writing the script is hard enough for some teams: getting everything figured out on their robot well enough to consistently perform a given action. Maybe some teams do some error checking (is a ball loaded?) to keep from destroying their robot, but in general you're executing a series of commands blindly.

The better teams have a playbook, which they can play against their opponent's playbook or use in various situations: 1/2/3-ball auton, multiple locations, shot angles, goalie routines, etc.

Indirect input allows you to "trump" your opponent's playbook if they have a static script, by essentially playing your robot in real time against their more static script. Actual autonomy only offers advantages over indirect input in the scenario where a computer can identify a situation and react better than a human.

I feel like the first three steps actually play out pretty well. Each step is incrementally harder and incrementally more rewarding. It's a little awkward this year as you move from multiple scripts to indirect input, because there really isn't THAT much additional work to develop it, and in some situations it really can be a trump card.

My biggest hang-up is that there is basically no incentive to move to full auton. Watch the video of the Google car and automated driver, or any system that has really advanced sensors to detect objects, calculate trajectories, etc. I really think the evolution of FRC will include a lot more "driver assist" functions, like an automated incoming-ball tracker and catcher this year, or being able to identify a goalie pole and shoot around it.

The level of effort to pull something like this off is immense, though, and I don't feel like it would really dominate over an "indirect input" robot in auton mode. So my only real beef (and why I voted no) is that I feel the Kinect lessens the incentive to iterate toward full auton, but I don't feel like it really broke anything this season. I'd also be OK with keeping it legal for a season or two more as teams push the boundary on auton, then weaning people off it, or giving extra incentive bonuses for auton without the Kinect (or any indirect input from the driver).
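The progression from a static script to indirect input can be sketched in a few lines. This is a purely illustrative Python sketch, not code from WPILib or any real FRC framework: a static script executes blindly, while the indirect-input version lets a human gesture override individual steps in real time.

```python
def scripted_auto(script):
    """Levels 1-2: blindly execute a pre-selected list of commands."""
    return [step() for step in script]

def indirect_auto(script, read_gesture):
    """Level 3: a human signal can trump the static script in real time."""
    actions = []
    for step in script:
        gesture = read_gesture()                   # e.g. "left", "right", or None
        if gesture is not None:
            actions.append("override:" + gesture)  # human trumps the script
        else:
            actions.append(step())                 # fall back to the script
    return actions

# Demo with stand-in commands and a canned gesture stream (all hypothetical).
script = [lambda: "drive_forward", lambda: "shoot", lambda: "back_up"]
gestures = iter([None, "left", None])
result = indirect_auto(script, lambda: next(gestures))
```

The point the post makes shows up in the code: the jump from `scripted_auto` to `indirect_auto` is a handful of lines, while "actual autonomy" would mean replacing `read_gesture` with sensing and decision logic of an entirely different scale.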
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Even though the strategy chess match on Einstein produced probably the most exciting robot matches I have ever witnessed, I still voted no.
We are what I would describe as a pretty low-tech team that keeps the designs simple to complete the task; we try to finish on time, iterate, and never break down. To me, the most important differentiator between the 2nd-tier teams and the rest of the pack is a consistent autonomous. Even better is a consistent multi-game-piece auto.

My first year in FRC as a mentor was Logomotion. My oldest son was a freshman, and our team had a very strong mechanical design group, but programming-wise, the team was lacking mentor support. Coming to FRC as an FLL coach, I had no clue what teleop or auton was. The Logomotion auton was to follow a line and hang an uber tube. In FLL we followed lines all the time, so this looked like a pretty easy task for us; in FLL we only had one sensor, but in FRC they gave us three!

Long story short: week 1 at Kettering, our first match, we hung an uber tube. I screamed so long, I almost passed out. My son and the rest of the programmers were jumping around going crazy. We won our first blue banner that day, and my son was completely hooked on robotics. That season was magical, and ended with a loss to the Cheesy Poofs on Galileo, who would later be world champs. Four years later, my son is lead programmer and on the drive team.

No robot banners this year. I would have to go back and check, but from memory, our two-ball auto missed one ball twice all season. Our one-ball hot detect had significantly more failures, but this was mainly due to losing a second waiting for the field to indicate hot and driving in high gear, which jostled the ball around. It may be boring to watch a bot meander down a line and hang an uber tube every single time, but if you are programming that auto, it is the most exciting thing in the world. This year, it was also really inspiring to come out of your first district with the third-best auto ranking in the world in week 4.
Coming from FLL, where you have a 2:30 auto, to FRC, where you may get 15 seconds or 10 seconds (or an unpublished 7.5 seconds week 1 and 9 seconds week 4), I say let us have our auto. That is where the real programming challenges are; if not, us 2nd-tier teams might as well drop the default code on the bot and start training drivers instead. That being said, I have watched the Einstein matches multiple times already, and I still watch the Cheesy Poofs' "hybrid mode" montage on YouTube a few times a year, so at least we got that. :)
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted NO. FLL students know what autonomous means and we should keep the same meaning across platforms.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Obviously I will never hold it against a team for deciding to use any legal means to make their robot more competitive. But autonomous really ought to be autonomous IMO.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I don't have an opinion, other than that the GDC should be able to make the appropriate rules however they want.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I never understood hybrid mode as a concept: all we are doing is using a less efficient, less ergonomic, and less effective controller. Maybe the case can be made that it's an interesting challenge to program your own inferior interface, but we aren't running low on interesting challenges, and they use joysticks on the space station.
I'm of the opinion that auto should be auto, but I like seeing robot interaction in auto, which is what made Einstein interesting. Check out the 2006 Lone Star regional finals: https://www.youtube.com/watch?v=0oLgAC7rwGg If we had a game like that now, you know that there would be some sensor-based tracking of robots, and the cat-and-mouse iterations of auto modes would be just as fun.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I don't think it's a good thing to let teams control their robot in autonomous, whether the game allows teams to interact with the opposing alliance in auto or not. Being able to react to something happening, like somebody beating you to the bridge in auto, a blocker robot deploying, or a missed intake for a two ball autonomous takes away the whole point of having autonomous mode. Give teams a few years, and they'll come up with a way to control the entire robot in autonomous mode.
I don't think allowing control of your robot with a Kinect or webcam will even make robots more competitive or more interesting in auto. There were plenty of teams this year that didn't even bother moving forward in autonomous mode. I doubt there would be a significant increase in the number of moving robots in autonomous mode.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I loved the Einstein chess match. It's definitely one of my favorite FRC moments of all time.
As far as the original question, I've actually been thinking about this for a while. One thing that I've noticed throughout my tenure in FRC is how FRC really is less of a programming competition and more of a mechanical competition. Let me clarify: programming, yes, is vital to the final outcome of how a team performs, and don't forget the role of code in teleop either. However, the effort/reward ratio for programming in FRC is dissimilar to the effort/reward ratio for mechanical design, strategy, or even drive team training.

As a programmer, I can name some robot code that's been inspirational to me: 341's auto-aim SmartDash widget, 254's auto-climb sequence, and all of the crazy autonomous modes that are out there. 987's centerline auto in 2013, which would gracefully degrade to the 2-point goal, was awesome, as was their autonomous scripting system. But I can name so many more robots, designs, or mechanical things that are just as inspirational: all of 67's, 254's, 469's, and 1114's robots.

When's the last time we, as a community, have celebrated true innovation in programming? Gordian is a fully implemented scripting language by 4334. 4334 also spent the time to completely rewrap WPILib in the form of ATALibJ. 1540 has their own custom robot framework that's open source. When's the last time I've heard anyone post in awe of any of those things? It's because the subset of FRC people who can appreciate them is a much smaller fraction of the total population; additionally, it's hard to appreciate because it's much more abstract than a linkage or a drivetrain. And it's not directly convertible to points. Innovation in code does not guarantee better robot performance. In reality, programming innovation has a habit of blowing up in the face of a humble high school programmer. FRC just does not reward attempting top-notch programming. This helps to define the effective ceiling for programming.
Effective programming is transforming the robot from an expensive paperweight into something controllable. Auto modes are just calls to the same code. Yes, teams can do better, but the marginal reward for doing so is much lower at higher levels. The same is true for anything in FRC, but I have the distinct feeling that it's a much sharper decrease in reward than in other aspects of FRC.

Part of the reason for this is the defined floor that we have, too. WPILib makes it really, really hard to screw stuff up. This is intentional: a massive, expensive paperweight is not how I would like anyone to spend their FRC season. But, at the same time, it makes programming easier, almost handed to you. The marginal effort for making a Talon spin a CIM is so low that everyone does it. And non-robot stuff like driver-assist programs is great, but I've noticed that my drivers are just fine (and often prefer) going it solo.

I see "hybrid modes" as rewarding more complex programming with improved control during autonomous. As I think that bringing the programming floor down would be a major violation of GP, ethics, and morals, something has to be done to increase the rewards of programming. This is one such way.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
My opinion is that autonomous mode should actually be autonomous, and it should exist to challenge programmers in FIRST. Once you have a functional Kinect-style setup working, you can do almost anything in autonomous with very little effort (amazing goalie bots). However, I prototyped autonomous (actually autonomous) CV goalie-bot code shortly after Kickoff: I used image processing in Matlab to prototype detection of either red or blue bumpers of other robots, to either avoid enemy goalies or detect enemy shooters. This is a complicated system and would take much more tweaking to get it to work right compared to Cheesy Vision, but if teams were to accomplish a programming task such as this, I think it would in the end be a much more fulfilling and educational experience.
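The core of the bumper-detection idea above is easy to prototype even outside Matlab. A minimal sketch, with made-up RGB thresholds and a toy 2x2 "image" (a real version would run on camera frames and, as the post says, need far more tuning):

```python
RED, BLUE = "red", "blue"

def classify_pixel(r, g, b, margin=60):
    """Call a pixel a bumper color if one channel dominates the others."""
    if r > b + margin and r > g + margin:
        return RED
    if b > r + margin and b > g + margin:
        return BLUE
    return None                       # background (carpet, field elements)

def detect_bumpers(image):
    """Return which bumper colors appear anywhere in a 2D pixel grid."""
    found = set()
    for row in image:
        for (r, g, b) in row:
            color = classify_pixel(r, g, b)
            if color:
                found.add(color)
    return found

frame = [
    [(200, 30, 20), (90, 90, 90)],    # a red bumper pixel and gray carpet
    [(90, 90, 90), (10, 40, 220)],    # gray carpet and a blue bumper pixel
]
```

The thresholding is the trivial part; the hard engineering the post alludes to is lighting robustness, localization of the detected robot, and reacting in time, which is exactly why this path is more work than Cheesy Vision.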
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
The excitement of Einstein speaks for itself. Whether it's a Kinect, or CheesyVision, or four buttons like in 2008, tools to foster an autonomous chess match make the game more fun to watch from the start.
Is that still autonomous? By strict definitions, no, but we already stretch the definition of "robot" at times too.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
With the exception of the Einstein finals, which were 3 out of the 10,655 matches played this year, how many matches were more exciting to watch because of Kinect control? At least for this year's game, shooting into the hot goal is not particularly more exciting than shooting into the non-hot goal for most spectators.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
If we decide that any control is unacceptable in Autonomous, we also need to ensure that our robots can get all the information they need from the field. Vision targets need to be consistent. We also need a consistent way to sense other robots, something that has to be put on every robot. Perhaps retro-reflective stickers on bumpers, motion-capture markers on the robot, or something like the trailers from 2009.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Half our blocking was done by guessing where our opponents would shoot (from scouting data), and driving there with encoders and a number we selected on the driver station after the robots lined up. This would have been reasonable, and would still have resulted in the SAME Einstein chess match.
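That encoder-based approach is easy to sketch. The following toy (gains, tick counts, and the one-line "plant" model are all invented for illustration, not from any team's code) drives a simulated robot to a pre-selected encoder setpoint with a clamped proportional controller:

```python
def drive_to_setpoint(setpoint_ticks, kp=0.2, tolerance=5, max_steps=200):
    """Proportional drive of a simulated robot to an encoder setpoint."""
    position = 0.0
    for _ in range(max_steps):
        error = setpoint_ticks - position
        if abs(error) <= tolerance:
            return position, True                     # arrived within tolerance
        output = max(-1.0, min(1.0, kp * error))      # clamp the motor command
        position += output * 50                       # toy plant: 50 ticks per full output
    return position, False                            # ran out of time

# Driver picks a blocking spot on the dashboard before auto; each spot maps
# to an encoder count (numbers are hypothetical).
BLOCKING_SPOTS = {1: 400, 2: 900, 3: 1400}
final_pos, arrived = drive_to_setpoint(BLOCKING_SPOTS[3])
```

Note that the only human input here happens before the match starts, which is the post's point: this stays within any reasonable definition of "autonomous" while still countering a scouted opponent.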
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
In principle, I think autonomous means autonomous and it should be that way. On the other hand, I like the incentive for teams to be working with stuff outside the norm (like Kinect and CV stuff). I'm divided over whether it should still be a part of autonomous/hybrid mode or a different part, but overall I liked that Einstein chess match as much as the next FIRSTer.
So I guess I would be in the "Yes, but don't call it autonomous mode" category.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
However, PID drive controls, automatic shifting, swerve drives, etc. are all primarily electrical now. Programming plays a very large part in how a robot performs, whereas mechanical design can only go so far; top teams have good code. I vote no on using Kinect/vision control for auton. This is because, if programmed well enough, autonomous can easily become just an extended teleop with little effort, especially for defensive autons.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
We spent weeks integrating a camera solution for auto mode this year. We got it working on the practice field, but never had time to calibrate it on the field at champs (I wish they would let us calibrate every morning instead of Thursday only; we got it working Thursday around lunch). The idea that something as simple as open-source code and a webcam would be better than a robot running truly autonomously galls me. Please remove the de facto hybrid mode for next year.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I also generally disagree with the sentiment that hot goal targeting due to timing was an intractable problem. At our second regional, we spoke to the field team after our first match (where we missed hot goal detection), and they indicated that the reflective tape indicator was taking some finite time to flip over. We re-timed our code at the competition and didn't miss the hot goal after that, using only a robot-based webcam and image processing. To me, that seems like an absolutely plausible real-life engineering problem, and I thought it was a great learning experience. I don't disagree with the use of the Kinect after the ruling, but to call it "autonomous" and use a driver station webcam/Kinect seems to be not in the spirit of the word (though I'm not arguing it was inappropriate given the rule clarifications). I would have preferred if it remained more autonomous.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
In FRC, you can win from just having awesome mechanisms, even with only so-so wiring and code. I'll give you that poor code and wiring can lose matches, but you sure can't win without well-designed mechanisms.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted no, but I am also extremely torn here. Autonomous is designed for zero driver control whatsoever, whether direct or indirect. Using the Kinect or CheesyVision during auto seems to bend the rules a bit. Although, by the way the rules are written, it is allowed, it kind of takes away from the definition of autonomous.
I'll admit my team used something very similar to CheesyVision at champs after our camera crapped out on us. At the time I was all for this, probably because I was so excited we were doing well and I wanted to keep it up. On the other hand, the use of this created extremely exciting stand-offs, namely the Einstein finals. The 1114 and 254 stand-offs were exhilarating to watch, and I don't know that that could happen with purely pre-coded instructions.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I don't think it should be allowed. Although these systems were technically legal and a really smart way of using the specifics of the rules to increase a robot's ability, I think it diminishes the idea of an "autonomous period". It's no longer autonomous, it is being controlled by a human, even if indirectly.
I can't hold the use of this strategy against any team or person; it wasn't illegal and, as I said earlier, it was an extremely smart way of doing things. In the future, however, I think the GDC should either make indirect, live in-match control illegal, or change the period to a hybrid period permanently.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Software is increasingly important in today's world. We need to challenge the programmers more, not less.
The Poofs' auton performance on Einstein was inspirational, with or without Cheesy Vision.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I think that the use of indirect control of robots should be allowed. I also think the hybrid period should be brought back next year instead of auto, so that it promotes this kind of innovation.
This kind of stuff doesn't really fit a pure Auto mode, but I think it should be a possibility.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I will agree that in elims it was more effective to block, for two reasons: more teams were shooting, and you generally had better partners to help handle the missed balls.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
It depends on the game. A bit of autonomy is nice, but I don't think a strictly autonomous mode is a prerequisite to a successful FRC game.
With that in mind, if the GDC wants the robots to have absolute autonomy, then they should write that into the rules; if they want hybrid autonomy, then they can write that instead. If they want to provide different incentives for increasing degrees of autonomy, maybe they can do that too. Whatever they do, it would be better if it's fairly clear from the start, instead of being enabled through open-ended rules and Q&As.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I want to see fully autonomous robots some day, so I voted no. If we allow hybrid control, it might stifle creativity, because it gives teams another way to achieve the objective.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
There are reasonable compromises that could be made.
For example, Kinect input could be shut off after 2 seconds, or only open after 7 seconds: something that essentially limits you to sending signals that launch other actions, versus directly controlling motion.
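Such a compromise is trivial to express in code. A hypothetical sketch (the window bounds and signal names are made up): the human's signal is only honored inside a fixed time window, so it can trigger an action but cannot provide continuous driving.

```python
def gated_signal(raw_signal, match_time, open_after=7.0, close_at=10.0):
    """Pass the human's discrete signal through only inside the time window."""
    if open_after <= match_time < close_at:
        return raw_signal        # e.g. "launch" or "abort"
    return None                  # outside the window, the human is ignored
```

The same shape handles the inverse rule (input shut off after 2 seconds) by changing the bounds, e.g. `open_after=0.0, close_at=2.0`.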
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Emphasis on bonus instead of penalty; bonus always sounds better.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted 'no'. The autonomous 'chess match' on Einstein was awesome! (Kudos to the Poofs for being ready for that!) But there have been many examples of autonomous-mode 'chess matches' over the years without needing hybrid control. In 2013, for example, we had lots of them when fighting for the discs in the center of the field.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted yes. It provides another facet to the competition which can enhance robot performance. It has the potential to raise the floor, so to speak, and the ceiling.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Emphatically NO! I cannot believe that we allowed it this year. The intent of the rules was clear to my team: autonomous control uses pre-programmed instructions, with sensors driving the changes in robot actions, and no driver actions or human input to drive the machines. If we want hybrid control, then it should be called hybrid control. Thanks for bringing this up, Karthik.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted no. The chess matches on Einstein were fun to watch, and Cheesy Vision was a great way to work around the hot goal issues this year. However, as a programming mentor for my team, I feel that a pure, non-hybrid autonomous presents a better challenge: the teams have to figure out ways for the robot to interact with its environment without human input.
I realize that 254 and 1114's hybrid autos were really just instructions to go left and go right, but this is a slippery slope. What's to stop a team from holding up colored cards to control a wider variety of actions?
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Evidence: www.youtube.com/watch?v=ZaOiaC0I8pY /s
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted no. I actually thought about indicating the hot goal using the Kinect when I first started thinking about how to detect it, but I decided that autonomous meant autonomous, and that this method would most likely be against the rules. I never checked the Q&A, but if I had, I don't think I would have used the Kinect anyway.
Moral of the story: autonomous should be autonomous, nothing else.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
I voted no, but I believe the game makers need to define one way or another whether this is allowed in autonomous mode. It was magical to watch the finals, and I think it can add a lot of strategy to a 'pre-teleop' period. I just don't think it is correct to call it autonomous mode if drivers are able to manipulate the robot.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Now, how do people's opinions change if the autonomous mode is not at the beginning of the match?
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Think about how moving Autonomous to the middle or end of a match would change both offensive and defensive strategies!
When you get to that level of performance, there will always be solid ways of addressing the challenges at hand. Personally, although I like the ability to use things like the Kinect and CheesyVision, I prefer to leave Autonomous fully autonomous.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Is it purely an autonomous mode, or is it a hybrid, or is it almost indistinguishable from teleoperated? How is the programming challenge that different? Why does that matter? We used a Kinect to initiate the autonomous routine to launch the ball and then drive forward. It enabled us to detect a hot goal and hit it consistently. We employed a simple solution using available technology to meet a challenge.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
Generally speaking, I agree with Adam: some compromise is probably the best way forward. Either way, it will have to be addressed; non-wired robot interaction (i.e. cameras, etc.) will definitely increase as time moves on and technology gets better or simpler ideas get introduced. The video game market has come up with some really cool immersive ideas lately.
Re: The use of the Kinect and Cheesy Vision in 2015 and beyond
We programmed our auto the same way 254 did, as a switch in the auto code for going left/right; we just used the Kinect instead. We have done vision processing in the past, but looking at what was going on with the field and the way the vision sensors were placed, I'm glad we didn't.
If these systems are prohibited, then here's what we need:
(1) Fields that operate as stated. The lights this year were significantly off; I don't know about the vision targets, but the yellow lights were routinely switching at 5.5 to 6 seconds or later. I don't know about other teams, but this routinely caused the first ball in our two-ball auto to be "early".
(2) Targets that are useful. We were considering using the lights around the goal over the vision targets due to their placement. With the Kinect being available and so easy to implement, it became a no-brainer to use the mark 1 human eyeball. The vision targets this year were not in a good spot.
(3) An auto that eliminates the usefulness of having a human in the loop.
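For what it's worth, the "switch in the auto code" several posters describe amounts to something this small. A hedged Python sketch (the command names are illustrative, not any team's actual code): the only human-influenced input is a single boolean, read once, whether it comes from a Kinect gesture, CheesyVision, or a dashboard toggle.

```python
def two_ball_auto(hot_side_is_left):
    """Build the fixed command list for a two-ball auto, aimed at the hot goal."""
    side = "left" if hot_side_is_left else "right"
    return [
        "aim_" + side,        # the single human-influenced decision
        "shoot_ball_1",
        "intake_ball_2",
        "shoot_ball_2",
        "drive_forward",
    ]
```

Everything after that one branch is a static script, which is why the debate in this thread centers on whether reading that one bit from a human mid-match still deserves the name "autonomous".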
Copyright © Chief Delphi