Has anyone ever thought of hooking something like an Oculus Rift/Google Cardboard to an onboard camera to create a sort of first-person driver view for the robot? You could augment the vision to aid in aiming, driving, etc. I know in some aspects it might be cumbersome, but I think it sounds like a good idea. (Is it even allowed in FRC?)
Sure it would be allowed. (and I have spent hours thinking about this)
The problem lies with the infrastructure of FRC itself. There is a bandwidth cap on robot-to-driver-station communication, and to get 3D you would need two cameras streaming to the DS. That means you have to choose (pick any two):
- Horrible Resolution
- Horrible Framerate
- Horrible Compression
I, personally, would get a pretty severe headache from that for even just a few minutes.
Were we able to send dual 480/720p 20+fps cameras, it would definitely be a viable control system IMO.
Combine the 2 images into a stereoscopic image on the robot, then send frames of that stereoscopic image. The viewer at the driver’s station can then wear either the old-fashioned red/blue or polarized lenses. It should be possible to do that well within the computing power offered by a laptop that fits within the COTS rules, if those do not change this year.
http://www.3dglassesonline.com/learn/how-do-3d-glasses-work
Oddly enough, in my previous suggestions I proposed sending full frames one at a time (i.e. key frames) instead of a stream of video changes. It should be easier to make the network handle that sort of work. If you snap the pictures from the 2 cameras the proper distance apart, process them, then send the result as quickly as you can, that will likely work out (see the sketch below). Even if you only get a single static image, it would still exhibit depth.
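Just to sketch the combine-on-the-robot step (assuming OpenCV on a coprocessor and two USB cameras; the camera indices and JPEG quality below are made-up placeholders), building a red/cyan key frame to send one at a time could look roughly like this:

```python
# Rough sketch: build a red/cyan anaglyph from two cameras and encode it
# as a single JPEG key frame. Camera indices and quality are assumptions.
import cv2
import numpy as np

left_cam = cv2.VideoCapture(0)    # left eye camera (index assumed)
right_cam = cv2.VideoCapture(1)   # right eye camera (index assumed)

ok_l, left = left_cam.read()
ok_r, right = right_cam.read()
if ok_l and ok_r:
    anaglyph = np.zeros_like(left)
    left_gray = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    anaglyph[:, :, 2] = left_gray    # red channel from the left eye
    anaglyph[:, :, 1] = right_gray   # green channel from the right eye
    anaglyph[:, :, 0] = right_gray   # blue channel from the right eye
    # One key frame, JPEG-compressed, ready to ship to the driver station
    ok, jpeg = cv2.imencode('.jpg', anaglyph, [cv2.IMWRITE_JPEG_QUALITY, 60])
```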
Remember that headsets need to be disconnected from the driver when the match starts, and if cables get tangled or something you will lose precious seconds.
If the glasses are merely standard polarized or red/blue lenses you wouldn’t have any cables to tangle. Another upside.
Extra amusement points though if someone gets some displays and retrofits a ViewMaster ;).
Now that is an interesting idea. Get a smallish polarized 3d tv and wear those glasses you get from the RealD 3d movies.
I think the OP was talking about integrating the gyroscopic capabilities of the Oculus to control the robot by steering the robot and raising/lowering a manipulating arm, which would independently be possible (via a TrackIR set or similar)
Oh. That’s a really cool idea! Would it still look okay through those glasses?
Also, as an alternative to using just tilt sensors in the Rift, you could use Kinect for the arms.
Using colored lenses you lose some color. For any really important colors, one could probably find the best colors for the lens set and simply swap the real colors for those during processing. The upside is that there is nothing unusual about the display; you could just use the existing laptop displays.
Using polarized lenses you get much better color reproduction, at the cost of needing a compatible display.
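For the color-swap idea above, a rough OpenCV sketch (the HSV range and the replacement color are just placeholders for whatever game color actually matters) might look like this:

```python
# Rough sketch: remap one game-critical color to a hue that survives the
# red/cyan glasses before the anaglyph is built. Thresholds are placeholders.
import cv2
import numpy as np

def remap_color(frame,
                lo=(100, 120, 60), hi=(130, 255, 255),   # assumed HSV band (blue-ish)
                new_bgr=(255, 255, 255)):                 # assumed replacement color
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    out = frame.copy()
    out[mask > 0] = new_bgr   # paint every matching pixel the lens-safe color
    return out
```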
This form of interface is normally used when you are miles away from the real device/environment and need to immerse yourself in the situation as if you were there. This is true whether it is a visualization cave, 3D glasses, etc.
When you have a view of the situation, I think a HUD or secondary monitor is a far better approach. This allows pilots to “see” what radar sees, but doesn’t generally remove and try to reproduce the things that are in front of the craft.
If you have a rift, I’d encourage you to experiment with it using any of the simulators available. Identify beneficial experiments and then try them on the robot. I highly doubt that the driver would benefit from wearing these, but perhaps the operator trying to manipulate the arms/rollers/etc would.
If you don’t have a rift and are looking to justify the team buying one, I’d encourage you to write up the details of how it would be used.
Greg McKaskle
Last year, on my vision system (which I didn’t finish), my goal was a driver interface where the image is converted to a vector representation containing only keypoints. That way you get a lot of information in front of your face, but it doesn’t look that bad, and you can run it at a higher rate without exceeding bandwidth restrictions.
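Something along those lines (just a sketch; ORB, the feature cap, and the packing format are only one way it could be done) might be:

```python
# Rough sketch: extract keypoints and send only their coordinates, which is
# a few hundred bytes per frame instead of a whole image.
import cv2
import struct

orb = cv2.ORB_create(nfeatures=200)   # arbitrary cap to bound the packet size

def frame_to_keypoint_packet(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints = orb.detect(gray, None)
    # Pack each (x, y) as a pair of 16-bit ints
    return b''.join(struct.pack('<HH', int(kp.pt[0]), int(kp.pt[1]))
                    for kp in keypoints)
```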
One practical idea that I thought of is a…HUD!
You get the video feed directly from the glasses, not the robot. That way you can look around and not get nauseated by the intense motion blur from the robot. A coprocessor on the robot calculates things such as distance to objects, or maybe even robot position on a minimap. That can be displayed in the corner of the display, and the driver merely has to move his or her eyes to get the measurement.
I think the OP was talking about integrating the gyroscopic capabilities of the Oculus to control the robot by steering the robot and raising/lowering a manipulating arm, which would independently be possible (via a TrackIR set or similar)
Yes, that is what I was getting at. However, I was not thinking of controlling steering via the glasses, because that would be impractical for a 360-degree turn (you would have to turn all the way around). Although now that I think of it, it would be interesting to control steering via the glasses; that’s something I would have to think about.
Also, as an alternative to using just tilt sensors in the Rift, you could use Kinect for the arms.
See, that is where I wanted to take the next step of this system: an interactive arm control system. But what if you built gloves and put some sort of button sensor/switch in the arm of the robot, so that when the robot successfully grabs an object, a small micro-vibration motor in the gloves turns on as added user feedback (just an idea…)?
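A rough sketch of that glove feedback loop (assuming the robot code publishes a hypothetical “gripperClosed” boolean over NetworkTables and the glove motor hangs off a microcontroller on a serial port; every name, address, and port here is a placeholder):

```python
# Rough sketch: pulse a vibration motor in the glove whenever the robot's
# gripper switch reports a grab. All keys, addresses, and ports are assumed.
import time
import serial                            # pyserial
from networktables import NetworkTables  # pynetworktables

NetworkTables.initialize(server='10.TE.AM.2')   # team-specific robot address
table = NetworkTables.getTable('SmartDashboard')
glove = serial.Serial('/dev/ttyUSB0', 9600)     # microcontroller driving the motor

was_closed = False
while True:
    is_closed = table.getBoolean('gripperClosed', False)
    if is_closed and not was_closed:
        glove.write(b'1')   # tell the microcontroller to buzz briefly
    was_closed = is_closed
    time.sleep(0.02)
```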
One practical idea that I thought of is a…HUD!
No need to make a whole Rift apparatus for that; you need something like this:
http://www.instructables.com/id/DIY-Google-Glasses-AKA-the-Beady-i/
That’s also an idea, but not as fun as a first-person display.
The problem lies with the infrastructure of FRC itself. There is a bandwidth cap on robot-to-driver-station communication, and to get 3D you would need two cameras streaming to the DS. That means you have to choose (pick any two)
That is the only problem that I saw with this idea. That will take some thinking! If anyone else has any ideas, I would love to continue this conversation.
Technology issues aside, I think that having a first-person view of the bot would be a huge disadvantage. When you’re driving the robot, you’re able to see the whole field and what’s going on where (i.e. map awareness). With a first-person view, you are limited to a small window of visibility. This may help with aiming, but this is a team game,* and it’s important to know what’s going on behind you.
*Going off of A.A.
In the future, virtual reality could be used to make a driver sim which, if done properly, would be just as adequate for driver training as driving a normal robot. It could be difficult to make, though.
Years ago I worked out a ‘virtual reality’ simulation of a FIRST robot in VRML.
LOL some of you probably were not alive then.
It had really poor physics modeling and I suspect that most modern game engines would put it to shame.
Considering it was for fun, I wasn’t really putting much into it.
That said, with a modern game engine and an Oculus Rift for immersion, it should be possible.
Considering what one can do with mere shutter glasses.
I am pretty sure DarkBASIC has Rift integration and real time accelerated physics support.
LOL I’ve had a DarkBASIC license since VB6; I bet that’s older than some of you as well.
We had a HUD last year, with a screen on the glasses. It was pretty sweet, but when the camera’s image was up it wasn’t nearly as useful as you’d think. We won the Innovation in Controls award on Galileo since the Rockwell people were so impressed by the solution we used to comply with the rules and still not lose much time at the beginning of the match. This year we won’t use the same setup, since the drivers do not want the cumbersomeness of it, yet they still want some of the info the HUD gave them. We’re working it from a different angle for this coming year.
A whitepaper for 2014’s HUD should be up sometime after Thanksgiving Day.
Virtual Reality would be something that I would be able to use quite well. I want some sort of HUD which displays distances to objects, updated at a really high rate (30+FPS). It would also be a good place for diagnostics such as robot speed, direction relative to start and much more.
It’s an amazing piece of technology. It’s a bit impractical, especially for FRC, though!
I feel like not the driver, but the guy/girl operating the manipulator/scoring device may have a competitive advantage by having a first-person view.
Take 2011 for example. A camera that gave the manipulating person that kind of view would be a great advantage. If you recall, the robot was scoring facing towards you, potentially 20 ft to your left/right, while your view is obstructed by tubes. Being able to see exactly what is going on from a sane orientation could help a lot.
Yes, definitely the person operating the manipulator should have the VR. Even if the driver operates both the driving and the manipulator, the second driver could have the glasses just to help with strategy, for example telling the driver the robot’s position during a dog pile. I’ve been working on implementing VR glasses, and I’ve gotten to the point where I can get the video from the robot to the glasses, but only at around 5 FPS; I have to find a better way to package the data.
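One way to package it better (just a sketch, assuming OpenCV and plain UDP on the robot side; the address, port, resolution, and JPEG quality are all guesses you would tune against the bandwidth cap) is to shrink and JPEG-compress each frame before sending:

```python
# Rough sketch: downscale and JPEG-compress each frame, then send it as a
# single UDP datagram. Address, port, size, and quality are assumptions.
import cv2
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
GLASSES_ADDR = ('10.0.0.5', 5800)   # display laptop address/port (placeholders)

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    small = cv2.resize(frame, (320, 240))   # cut resolution before compressing
    ok, jpeg = cv2.imencode('.jpg', small, [cv2.IMWRITE_JPEG_QUALITY, 40])
    if ok and len(jpeg) < 65000:            # keep it inside one UDP datagram
        sock.sendto(jpeg.tobytes(), GLASSES_ADDR)
```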
A few years back I actually played with this idea. We had two Axis cameras mounted on the bot for stereoscopic vision on the driver station. We used green/magenta glasses (could also use red/blue), and while it did work, it wasn’t responsive enough to really be usable for a match. The low framerate and horrible latency made it unusable. If the latency could be reduced (by at least two orders of magnitude) then it may have potential, but until then this type of system simply isn’t worth the trouble.
What about a joystick button? You could push the button to download frames from the robot, or something similar. That way you don’t have to always be stuck with one or the other (HUD / no HUD).
I’m sure that a HUD would be most useful with some sort of tracking system, where post-processed data could be displayed. Distances to known objects could be printed on the object. Alignment could also be displayed.
For example:
TARGET POSITION:
10 degrees right
12 feet forward
SHOOTING POSITION:
10 degrees right
Move forward 2 feet
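As a rough sketch of drawing that kind of guidance onto the video frame (assuming an OpenCV pipeline on the driver-station side; the function name and the numbers are placeholders matching the example above):

```python
# Rough sketch: overlay target/shooting guidance in the corner of the frame
# before it is shown in the glasses. All values here are placeholders.
import cv2

def draw_hud(frame, target_angle_deg, target_dist_ft, move_ft):
    lines = [
        'TARGET: %d deg %s, %d ft' % (abs(target_angle_deg),
                                      'right' if target_angle_deg >= 0 else 'left',
                                      target_dist_ft),
        'SHOOT:  move %d ft forward' % move_ft,
    ]
    for i, text in enumerate(lines):
        cv2.putText(frame, text, (10, 20 + 18 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame

# e.g. draw_hud(frame, 10, 12, 2) reproduces the readout above
```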