Visual Joystick - a full 2-axis, 4-button joystick during autonomous mode

Hello all,

This year there was a clarification to the AUTONOMOUS MODE rules (which I’ll refer to as a change) that you may have noticed if you were watching the Q&A, specifically Q55, Q363, and Q410. Basically, you can do anything you want with your Driver Station webcam during autonomous mode. The Cheesy Poofs made an awesome program called CheesyVision which takes advantage of the rule change - you can find the thread about that here. I forked the Poofs’ repo and decided to see how far I could push the new rule - why not make a full joystick with buttons using only the webcam? So I did.

http://i.imgur.com/F93rXAV.png

Here is a YouTube video demonstration

Here is the source
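To give a feel for how this kind of webcam control works, here is a minimal sketch of the region-based idea (the region layout, thresholds, and function names here are illustrative assumptions, not the project’s actual code; real frames would come from OpenCV’s VideoCapture):

```python
import numpy as np

# Hypothetical sketch: a hand held in front of the webcam darkens a
# sample region relative to a calibrated baseline brightness. A row of
# regions forms an "axis"; a single region forms a "button". In the
# real program, `gray` would be a grayscale frame from cv2.VideoCapture.

def region_mean(gray, rect):
    """Average brightness of one rectangular sample region."""
    x, y, w, h = rect
    return float(np.mean(gray[y:y + h, x:x + w]))

def axis_value(gray, regions, baseline):
    """Map hand-covered regions (darker than baseline) to [-1, 1].

    `regions` is an ordered row of (x, y, w, h) rectangles; the
    centroid of the covered ones gives the axis position.
    """
    covered = [i for i, r in enumerate(regions)
               if region_mean(gray, r) < 0.6 * baseline]
    if not covered:
        return 0.0
    center = sum(covered) / len(covered)
    return 2.0 * center / (len(regions) - 1) - 1.0

def button_pressed(gray, rect, baseline):
    """A button is 'pressed' while its region is covered."""
    return region_mean(gray, rect) < 0.6 * baseline
```

Two of those region rows give the two axes, and four standalone regions give the buttons; the baseline would be captured during a calibration step before the match.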

My reasoning for developing and releasing this code is twofold:

  1. Demonstrate the extent to which this rule change can be exploited to gain full teleop control during AUTONOMOUS MODE.
  2. Even the playing field so that all teams can take full advantage of the rule change.

Keep on roboting and good luck to all attending Championships!

If you have any questions or comments - or if you use the project, or fork it and develop it further - I’d love to hear from you. My email is [email protected] (I don’t check Chief very often).

P.S. I don’t have access to a cRIO, so I haven’t tested the network communication / sockets stuff. In fact, I can’t even guarantee that the Java code will compile. If someone could test / debug this, I would be very grateful.
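For illustration, the Driver Station side of such a link could look something like this sketch (the wire format, host, and port here are assumptions for the example, not the project’s actual protocol):

```python
import socket
import struct
import time

# Hypothetical wire format: pack each axis as a signed byte in
# [-127, 127] plus one byte holding a 4-button bitmask, and stream
# the packets to a listener on the robot over TCP.
ROBOT_HOST = "10.0.0.2"   # assumption: placeholder robot address
ROBOT_PORT = 1180         # assumption: placeholder team-use port

def encode_state(x_axis, y_axis, buttons):
    """Pack joystick state into 3 bytes: x, y, button bitmask."""
    def to_byte(v):
        return max(-127, min(127, int(round(v * 127))))
    mask = 0
    for i, pressed in enumerate(buttons[:4]):
        if pressed:
            mask |= 1 << i
    return struct.pack("bbB", to_byte(x_axis), to_byte(y_axis), mask)

def run(get_state, period=0.05):
    """Send the latest joystick state ~20 times/second until interrupted."""
    with socket.create_connection((ROBOT_HOST, ROBOT_PORT), timeout=2) as s:
        while True:
            x, y, buttons = get_state()
            s.sendall(encode_state(x, y, buttons))
            time.sleep(period)
```

The robot-side Java would just read 3-byte packets off the socket each loop and feed them to the drive code in place of a real joystick.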

Thanks for releasing your source, I’ll definitely have to look into it.

I officially rename this project Flow-Vision :p. Nice job, Mike.

And just because I can, here’s a video of the Visual Joystick controlling an AR.Drone 2.0 quadrotor.

This is pretty awesome. I really like it from a general OMI stance. Just don’t sneeze ;).

However, it completely removes autonomy from the autonomous period. I suspect FIRST will address this if it becomes more widespread.

Why is this any different than the Kinect?

It isn’t. But the Kinect’s functionality, with an implementation as capable as this, circumvents the point of autonomous just the same.

This is a cool feature, though I don’t think the concerns over the sanctity of autonomous are really warranted.

In my opinion, autonomous programs are usually better at autonomous mode than people could ever be, which is why Hybrid mode was never really adopted by teams. Autonomous routines are generally constructed so that chaos is minimized, making time the limiting factor rather than adaptability.

Introducing capabilities to autonomous mode in a way that mimics operator control would not be an advantage, in my opinion. The real power of these sorts of systems is in using human brains to fill the role of sensors that trigger certain pre-scripted actions.

Given the option, would you really want your driver to directly control your robot in autonomous mode?

DjScribbles makes a great point but misses one important aspect of the situation. Auto is risky, and this level of operator intervention provides a plan B that could easily grow into a genuinely robust capability for teams to recover from an auto routine that did not go as planned.

Is this good or bad? I don’t know. I have a feeling that it is not what the GDC had in mind when they made this allowance.

I have a feeling this is exactly the OP’s point, that allowing this kind of control in autonomous defeats the purpose of autonomous mode entirely.

Although this is a cool demo, I don’t think it has major implications in practice. Until a team uses this to drive their robot in auto and hits shot percentages equal to or better than their teleop shot percentages, I’m not convinced.

Concerns:

  - Does your human know exactly where your robot needs to move to make shots?
  - Does your human know exactly how much the robot needs to turn to make shots?
  - Will your human make a mistake in their movement?
  - Is vision calibration at a real competition venue viable?
  - Does this get a good enough frame rate to actually drive without crashing? On a Classmate?

Still a cool vision demo, thanks for sharing.

It (controlling the robot in autonomous) opens up a world of possibilities for autonomous defense, specifically goalie robots.