View Full Version : Oculus Rift Driver Controls?
Joey1939
03-01-2015, 21:45
People have been discussing that even a few high stacks could impede your view of the field. If you can't see the landfill, it would be impossible to position your robot to pick up additional totes. I imagine that on Einstein, the alliances will stack every tote, building up high walls that completely block the driver's view. I wonder if this is the first year that an Oculus Rift-like system could be advantageous, letting the driver see from the perspective of the robot. It wouldn't necessarily have to be an Oculus Rift; it could simply be watching the driver station screen instead of the field. Worst case scenario is knocking over your own totes because you can't see where you are going.
IMO, the problem with having the driver use a robot-mounted camera is that robots tend to move erratically and turn very quickly, so the driver would experience vertigo (I think that's the right term?) and probably be less effective as a driver.
That being said, I'm all for the idea of one day giving the driver some kind of robot-side vision.
There was a thread on this just a couple weeks ago. It's very hard to get good video.
AlexanderTheOK
04-01-2015, 02:09
Yup. Take a look if you have time at this (http://servo.texterity.com/servo/201411/?folio=43#pg43). It's not exactly feasible on the field, but it's a fantastic thing to do off the field.
cbale2000
04-01-2015, 02:42
Personally, I think the most practical way to utilize an Oculus Rift or similar device is as an augmented-reality heads-up display, rather than an all-out video feed from the robot.
It would have the advantage of allowing important information to be in your field of view at all times without having to look at a laptop screen, and if you were clever you might even be able to set up a picture-in-picture with a small camera feed from the robot in the corner of your view. The advantage to doing it this way is that you're not totally dependent on a real-time video feed from the robot, and the robot only needs one camera instead of two.
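The picture-in-picture idea above can be sketched in a few lines. This is a minimal illustration only: plain Python lists stand in for video frames (a real implementation would use NumPy/OpenCV image buffers), and the frame sizes and paste position are arbitrary assumptions.

```python
# Minimal sketch of picture-in-picture compositing: paste a small
# robot-camera thumbnail into a corner of the HUD frame. Frames here
# are plain row-major lists of pixel values, purely for illustration.

def composite_pip(hud, pip, x=0, y=0):
    """Return a copy of `hud` with `pip` pasted at column x, row y."""
    out = [row[:] for row in hud]  # copy so the HUD frame isn't mutated
    for r, pip_row in enumerate(pip):
        out[y + r][x:x + len(pip_row)] = pip_row
    return out

hud = [[0] * 8 for _ in range(6)]           # 8x6 "HUD" frame of zeros
pip = [[1] * 3 for _ in range(2)]           # 3x2 camera thumbnail of ones
framed = composite_pip(hud, pip, x=5, y=0)  # paste into the top-right corner
```

With real frames the same slicing works on NumPy arrays, which is why the single-camera PIP approach is cheap compared to driving the whole headset from a video feed.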
There are a couple of problems I can think of:
1) You'd have to stream some very high-quality video from the robot. It sounds like each team will have 5+ Mbps to work with, but 5 Mbps only helps you if you're sending compressed video. Uncompressed 1920x1080 video is roughly 1.5 Gbps at 30 fps (double that at 60 fps), or somewhere around 300-600x too much data.
The new control system probably can't compress video down to that bitrate in real time.
2) Assuming you could get high-res, high-quality video transmitted, I bet there'd be lots of situations where the camera just couldn't resolve something you needed to see in the distance. Is that darker gray blob a tote, or is it another robot? You'd face a constant tradeoff between field of view and distant detail: you'd want a wide FOV to maneuver the robot, but a narrow one to distinguish objects more than a couple of feet away.
Both those things said, it is probably possible to do and would be very neat, I just don't think the work required to make it happen would be worth it.
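Both constraints above can be put into rough numbers. This is a back-of-the-envelope sketch; the specific figures (24-bit RGB, 30 fps, a 1280-pixel-wide sensor, a 12-inch tote, and the two lens FOVs) are my assumptions, not anything from the control system documentation.

```python
import math

# --- Point 1: uncompressed bandwidth vs. the ~5 Mbps budget ---
WIDTH, HEIGHT = 1920, 1080
BITS_PER_PIXEL = 24        # assumed 8-bit RGB
BUDGET_BPS = 5_000_000     # the ~5 Mbps per-team figure mentioned above

def uncompressed_bps(fps):
    """Raw bitrate of uncompressed video at the given frame rate."""
    return WIDTH * HEIGHT * BITS_PER_PIXEL * fps

print(uncompressed_bps(30) / 1e9)         # ~1.5 Gbps at 30 fps
print(uncompressed_bps(30) / BUDGET_BPS)  # roughly 300x over budget

# --- Point 2: how many pixels a distant tote occupies at each FOV ---
SENSOR_WIDTH_PX = 1280     # assumed camera resolution
TOTE_WIDTH_IN = 12.0       # assumed target width

def pixels_on_target(fov_deg, distance_in):
    """Pixels a tote-width target spans for a given horizontal FOV."""
    angular_size = 2 * math.degrees(math.atan(TOTE_WIDTH_IN / (2 * distance_in)))
    return SENSOR_WIDTH_PX * angular_size / fov_deg

wide = pixels_on_target(fov_deg=120, distance_in=240)   # driving lens, tote 20 ft away
narrow = pixels_on_target(fov_deg=40, distance_in=240)  # zoomed lens, same tote
```

At a 120-degree FOV the tote spans only about 30 pixels versus about 90 at 40 degrees, which is exactly the zoom-versus-maneuvering tradeoff in point 2.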
Personally, I think the most practical way to utilize an Oculus Rift or similar device is as an augmented-reality heads-up display, rather than an all-out video feed from the robot.
The problem is that you'd be confining the driver to a 1280x800 (DK1) or 1920x1080 (DK2) view of the world. It'd be hard to distinguish robot details looking through an Oculus Rift. A HUD _would_ be good, but a better implementation might be to build a helmet with a phone mounted on top, and have a phone app that talks to your robot.
Here's a CAD model:
http://www.tourdegiro.com/personal/oculus-phone.png
techhelpbb
04-01-2015, 09:52
Generally, when the driver's normal field of view needs to remain intact, I'd suggest a flip-up/down monocular: a single display on a headband that you flip down in front of one eye.
At a technical level it ought to be compatible with FIRST hardware, but someone should check the rules before trying it.
The only savings it offers is that you don't have to look down at a laptop to use it, and to use it effectively you'll need to train your driver, because it will mess with depth perception.
Go one step further: put a gyro in the headband and add a mode where the robot turns to face the same direction as the driver.
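The gyro idea can be sketched as a simple proportional controller on heading error. Everything here is an assumption for illustration: degree-valued headings from both the headband gyro and the robot gyro, a made-up gain, and a turn output normalized to [-1, 1].

```python
# Minimal sketch of a "robot faces where the driver faces" mode:
# proportional control on the heading difference between a headband
# gyro and the robot's gyro. Gains and units are assumed.

def heading_error(target_deg, current_deg):
    """Smallest signed angle from current to target, in [-180, 180)."""
    return (target_deg - current_deg + 180.0) % 360.0 - 180.0

def turn_command(headband_deg, robot_deg, kp=0.01, max_out=1.0):
    """Turn rate in [-max_out, max_out] rotating the robot toward the headband heading."""
    err = heading_error(headband_deg, robot_deg)
    return max(-max_out, min(max_out, kp * err))
```

The wrap-around in `heading_error` matters: when the headband reads 350 degrees and the robot reads 10, the robot should turn 20 degrees clockwise, not 340 degrees the other way.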
virtuald
14-04-2015, 22:33
Personally, I think the most practical way to utilize an Oculus Rift or similar device is as an augmented-reality heads-up display, rather than an all-out video feed from the robot.
It would have the advantage of allowing important information to be in your field of view at all times without having to look at a laptop screen, and if you were clever you might even be able to set up a picture-in-picture with a small camera feed from the robot in the corner of your view. The advantage to doing it this way is that you're not totally dependent on a real-time video feed from the robot, and the robot only needs one camera instead of two.
We developed code for most of this, until it was disallowed by the rules. Check out the code release post (http://www.chiefdelphi.com/forums/showthread.php?t=136611) in the programming forum.
jijiglobe
14-04-2015, 22:57
As a driver, I feel that any robot with first-person driving would back into stacks all the time. One solution our team came up with is bringing a really, really tall pole with a camera on the end into the driver station; the coach holds it, giving the driver a top-down view of the entire field without any obstructions.