This year there was a clarification to the AUTONOMOUS MODE rules (which I’ll refer to as a change) that you may have noticed if you were following the Q&A, specifically Q55, Q363, and Q410. Essentially, you can do anything you want with your Driver Station webcam during autonomous mode. The Cheesy Poofs made an awesome program called CheesyVision which takes advantage of the rule change - you can find the thread about that here. I forked the Poofs’ repo and decided to see how far I could push the new rule - why not make a full joystick with buttons using only the webcam? So I did.
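For anyone curious how the webcam-as-joystick idea works in principle, here is a minimal sketch (not the project’s actual code) of the dashboard side: sample small regions of the webcam image with OpenCV and treat a hand covering a region as a pressed button. The region coordinates, the rough skin-tone hue band, and the 40% coverage threshold are all illustrative assumptions; the real project also maps regions to axes, not just buttons.

```python
# A minimal sketch of the idea, not the project's actual code: sample small
# regions of the Driver Station webcam image and treat "hand covering the
# region" as a pressed button. The region boxes, the skin-tone hue band,
# and the coverage threshold below are illustrative assumptions.
import cv2
import numpy as np

# (x, y, width, height) boxes in the webcam image, one per virtual button.
BUTTON_REGIONS = {
    "fire":  (50, 200, 60, 60),
    "abort": (530, 200, 60, 60),
}

def read_buttons(frame, min_fraction=0.4):
    """Return {button name: pressed?} for one BGR webcam frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    states = {}
    for name, (x, y, w, h) in BUTTON_REGIONS.items():
        roi = hsv[y:y + h, x:x + w]
        # Count pixels whose hue/saturation fall in a rough skin-tone band.
        mask = cv2.inRange(roi, np.array([0, 40, 60]), np.array([30, 255, 255]))
        fraction = np.count_nonzero(mask) / float(mask.size)
        states[name] = fraction > min_fraction
    return states

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        print(read_buttons(frame))
        cv2.imshow("webcam joystick sketch", frame)
        if cv2.waitKey(30) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```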
My reasoning for developing and releasing this code is twofold:
Demonstrate the extent to which this rule change can be exploited to gain full teleop control during AUTONOMOUS MODE.
Even the playing field so that all teams can take full advantage of the rule change.
Keep on roboting and good luck to all attending Championships!
If you have any questions or comments, or if you use the project or fork it and develop it further, I’d love to hear from you. My email is [email protected] (I don’t check Chief very often).
P.S. I don’t have access to a cRIO, so I haven’t tested the network communication / sockets stuff. In fact, I can’t even guarantee that the Java code will compile. If someone could test / debug this I would be very grateful.
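To give a sense of the kind of link involved, here is a rough Python sketch of a dashboard-side sender, assuming a plain TCP socket carrying one small packet per loop. The robot address, port, packet layout, and update rate are illustrative assumptions, not the project’s actual (and as yet untested) protocol.

```python
# A rough sketch of a dashboard-side sender, assuming a plain TCP socket
# carrying one small packet per loop. The robot IP, port, packet layout,
# and update rate are illustrative assumptions, not the project's protocol.
import socket
import struct
import time

ROBOT_HOST = "10.2.54.2"   # hypothetical 10.TE.AM.2-style robot address
ROBOT_PORT = 1180          # example port only

def send_loop(get_joystick_state, hz=20):
    """Stream (x_axis, y_axis, button_bitmask) packets to the robot."""
    sock = socket.create_connection((ROBOT_HOST, ROBOT_PORT), timeout=2.0)
    try:
        while True:
            x, y, buttons = get_joystick_state()
            # Two big-endian floats for the axes plus one byte of buttons;
            # the Java side would read the same nine bytes per update.
            sock.sendall(struct.pack(">ffB", x, y, buttons))
            time.sleep(1.0 / hz)
    finally:
        sock.close()
```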
This is a cool feature, though I don’t think the concerns over the sanctity of autonomous are really warranted.
In my opinion, autonomous programs are usually better at autonomous mode than people could ever be, which is why Hybrid mode was never really adopted by teams. Autonomous routines are generally constructed in a way that minimizes chaos, so time, rather than adaptability, becomes the limiting factor.
Introducing capabilities to autonomous mode in a way that mimics operator control would not be an advantage, in my opinion; the real power of these sorts of systems is in using human brains to fill the role of sensors that trigger certain pre-scripted actions.
Given the option, would you really want your driver to directly control your robot in autonomous mode?
DjScribbles makes a great point but misses one important aspect of the situation. Auto is risky, and this level of operator intervention provides a plan B that could easily grow into a robust capability for teams to recover from an auto routine that did not go as planned.
Is this good or bad? I don’t know. I have a feeling that it is not what the GDC had in mind when they made this allowance.
Although this is a cool demo, I don’t think it has major implications in practice. Until a team uses this to drive their robot in auto and hits shot percentages equal to or better than their teleop shot percentages, I’m not convinced.
Concerns:
Does your human know exactly where your robot needs to move to make shots?
Does your human know exactly how much the robot needs to turn to make shots?
Will your human make a mistake in their movement?
Is vision calibration at a real competition venue viable?
Does this get a high enough frame rate to actually drive without crashing? On a Classmate?