Chief Delphi

Chief Delphi (http://www.chiefdelphi.com/forums/index.php)
-   Technical Discussion (http://www.chiefdelphi.com/forums/forumdisplay.php?f=22)
-   -   Virtual Reality 1st Person Driver? (http://www.chiefdelphi.com/forums/showthread.php?t=131217)

iggy_gim 22-11-2014 17:39

Virtual Reality 1st Person Driver?
 
Has anyone ever thought of hooking something like an Oculus Rift/Google Cardboard up to an onboard camera on the robot to create a sort of 1st-person driver view? You could augment the vision to aid in aiming, driving, etc. I know in some respects it might be cumbersome, but I think it sounds like a good idea. (Is it even allowed in FRC?)

BBray_T1296 22-11-2014 22:14

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by iggy_gim (Post 1409573)
Has anyone ever thought of hooking something like an Oculus Rift/Google Cardboard up to an onboard camera on the robot to create a sort of 1st-person driver view? You could augment the vision to aid in aiming, driving, etc. I know in some respects it might be cumbersome, but I think it sounds like a good idea. (Is it even allowed in FRC?)

Sure it would be allowed. (and I have spent hours thinking about this)

The problem lies with the infrastructure of FRC itself. There is a bandwidth cap on robot-to-driver-station communication, and to get 3D you would need two cameras streaming to the DS. The problem is that you have to choose (pick any two):
  • Horrible Resolution
  • Horrible Framerate
  • Horrible Compression

I, personally, would get a pretty severe headache from that for even just a few minutes.

Were we able to send dual 480/720p 20+ fps camera streams, it would definitely be a viable control system IMO.
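To put rough numbers on that tradeoff (a back-of-envelope sketch; the ~7 Mbit/s cap and the bits-per-pixel figure for MJPEG are illustrative assumptions, not measurements):

```python
def mjpeg_bandwidth_mbps(width, height, fps, bits_per_pixel=1.0, cameras=2):
    """Approximate total MJPEG bandwidth in Mbit/s for `cameras` streams.

    1.0 bit per pixel is a rough figure for moderate JPEG compression.
    """
    return cameras * width * height * fps * bits_per_pixel / 1e6

# Two 640x480 streams at 20 fps: about 12.3 Mbit/s, well over a ~7 Mbit/s cap.
print(mjpeg_bandwidth_mbps(640, 480, 20))

# Dropping to 320x240 at 15 fps fits (about 2.3 Mbit/s), at the cost of
# resolution and framerate, which is exactly the "pick any two" tradeoff.
print(mjpeg_bandwidth_mbps(320, 240, 15))
```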

techhelpbb 23-11-2014 00:03

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by BBray_T1296 (Post 1409602)
.. and to get 3d you would need 2 cameras streaming to the DS to work.

Combine the 2 images into a stereoscopic image on the robot, then send frames of that stereoscopic image, so the viewer at the driver's station can wear either old-fashioned red/blue or polarized lenses. It should be doable well within the computing power offered by a laptop that fits within the COTS rules, if those do not change this year.

http://www.3dglassesonline.com/learn...d-glasses-work

Oddly enough, in my previous suggestions I suggested sending full frames one at a time (aka key frames) instead of a stream of video changes. It should be less difficult to make the network handle that sort of work. If you snap the pictures from the 2 cameras the proper distance apart, process them, then send the result as quickly as you can, that will likely work out. Even a single static image would still exhibit depth.
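The on-robot compositing step could be a few lines of array code. A minimal sketch of a red/cyan anaglyph, assuming the two camera frames arrive as HxWx3 RGB numpy arrays (e.g. from OpenCV with channels reordered):

```python
import numpy as np

def anaglyph(left, right):
    """Merge left/right camera frames into one red/cyan anaglyph frame.

    Red channel comes from the left eye; green and blue from the right eye,
    so only a single composite stream needs to cross the network.
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]    # red   <- left camera
    out[..., 1] = right[..., 1]   # green <- right camera
    out[..., 2] = right[..., 2]   # blue  <- right camera
    return out
```

The composite is then one frame to JPEG-encode and send, roughly halving the bandwidth of two independent streams.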

asid61 23-11-2014 00:36

Re: Virtual Reality 1st Person Driver?
 
Remember that headsets need to be disconnected from the driver when the match starts, and if cables get tangled or something, you will lose precious seconds.

techhelpbb 23-11-2014 00:58

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by asid61 (Post 1409615)
Remember that headsets need to be disconnected from the driver when the match starts, and if cables get tangled or something, you will lose precious seconds.

If the glasses are merely standard polarized or red/blue lenses you wouldn't have any cables to tangle. Another upside.
Extra amusement points though if someone gets some displays and retrofits a ViewMaster ;).

BBray_T1296 23-11-2014 01:04

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by techhelpbb (Post 1409612)
Combine the 2 images into a stereoscopic image on the robot, then send frames of that stereoscopic image, so the viewer at the driver's station can wear either old-fashioned red/blue or polarized lenses. It should be doable well within the computing power offered by a laptop that fits within the COTS rules, if those do not change this year.

http://www.3dglassesonline.com/learn...d-glasses-work

Oddly enough, in my previous suggestions I suggested sending full frames one at a time (aka key frames) instead of a stream of video changes. It should be less difficult to make the network handle that sort of work. If you snap the pictures from the 2 cameras the proper distance apart, process them, then send the result as quickly as you can, that will likely work out. Even a single static image would still exhibit depth.

Now that is an interesting idea. Get a smallish polarized 3D TV and wear the glasses you get from RealD 3D movies.

I think the OP was talking about integrating the gyroscopic capabilities of the Oculus to control the robot: steering it and raising/lowering a manipulator arm, which would independently be possible (via a TrackIR set or similar).

asid61 23-11-2014 02:17

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by techhelpbb (Post 1409620)
If the glasses are merely standard polarized or red/blue lenses you wouldn't have any cables to tangle. Another upside.
Extra amusement points though if someone gets some displays and retrofits a ViewMaster ;).

Oh. That's a really cool idea! Would it still look okay through those glasses?
Also, as an alternative to using just tilt sensors in the Rift, you could use Kinect for the arms.

techhelpbb 23-11-2014 07:38

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by asid61 (Post 1409631)
Oh. That's a really cool idea! Would it still look okay through those glasses?
Also, as an alternative to using just tilt sensors in the Rift, you could use Kinect for the arms.

Using colored lenses you lose some color. For any really important colors, one could probably find the best colors for the lens set and simply swap the real colors for those during processing. The upside to this is that there is nothing unusual about the display; you could just use the existing laptop displays.

Using polarized lenses you get much better color reproduction, at the cost of needing a compatible display.

Greg McKaskle 23-11-2014 08:20

Re: Virtual Reality 1st Person Driver?
 
This form of interface is normally used when you are miles away from the real device/environment and need to immerse yourself in the situation as if you were there. This is true whether it is a visualization cave, 3D glasses, etc.

When you have a direct view of the situation, I think a HUD or secondary monitor is a far better approach. This allows pilots to "see" what radar sees, but doesn't generally remove and try to reproduce the things that are in front of the craft.

If you have a rift, I'd encourage you to experiment with it using any of the simulators available. Identify beneficial experiments and then try them on the robot. I highly doubt that the driver would benefit from wearing these, but perhaps the operator trying to manipulate the arms/rollers/etc would.

If you don't have a rift and are looking to justify the team buying one, I'd encourage you to write up the details of how it would be used.

Greg McKaskle

yash101 23-11-2014 10:28

Re: Virtual Reality 1st Person Driver?
 
Last year, on my vision system (which I didn't finish), I had the goal of a driver interface where the image is converted to a vector containing only keypoints. This way you get a lot of information to your face, but it doesn't look that bad, and you can run it at a higher rate without exceeding bandwidth restrictions.
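A toy version of the keypoints-instead-of-pixels idea might look like this. A real system would use a proper detector such as FAST or ORB from OpenCV; this gradient-threshold extractor is only illustrative:

```python
import numpy as np

def keypoints_from_frame(gray, threshold=50, max_points=200):
    """Toy keypoint extractor: keep pixels with strong gradient magnitude.

    `gray` is a 2-D grayscale array. Returns up to `max_points` (x, y)
    pairs, strongest gradients first, as a compact uint16 array.
    """
    gy, gx = np.gradient(gray.astype(np.float32))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > threshold)
    order = np.argsort(mag[ys, xs])[::-1][:max_points]
    return np.stack([xs[order], ys[order]], axis=1).astype(np.uint16)

# Each keypoint is two uint16s = 4 bytes, so 200 points cost 800 bytes per
# frame versus ~300 kB for a raw 640x480 grayscale frame.
```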

One practical idea that I thought of is a...HUD!
You get the video feed directly from the glasses, not the robot, so you can look around and not get nauseated by the intense motion blur from the robot. A coprocessor on the robot calculates things such as distances to objects, or maybe even robot position on a minimap. This can be displayed in the corner of the display, and the driver merely has to move his or her eyes to read the measurement.
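The distance numbers such a coprocessor overlays are often computed with a simple pinhole-camera model from the target's known physical size. A sketch with illustrative numbers (the focal length and target width here are assumptions, not from any specific camera or game):

```python
def distance_to_target(target_px_width, target_real_width_m, focal_length_px):
    """Pinhole model: distance = real_width * focal_length / pixel_width."""
    return target_real_width_m * focal_length_px / target_px_width

# Example: a 0.5 m wide target spanning 100 px, with a 600 px focal length,
# sits about 3 m away.
print(distance_to_target(100, 0.5, 600))  # 3.0
```

The focal length in pixels is usually found once by calibration: place a known target at a known distance and solve the same equation for the focal length.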

iggy_gim 23-11-2014 11:53

Re: Virtual Reality 1st Person Driver?
 
Quote:

I think the OP was talking about integrating the gyroscopic capabilities of the Oculus to control the robot: steering it and raising/lowering a manipulator arm, which would independently be possible (via a TrackIR set or similar).
Yes, that is what I was getting at. However, I was not thinking of controlling steering via the glasses, because that would be impractical for a 360-degree turn (you would have to turn all the way around). Although now that I think of it, controlling steering via the glasses would be interesting; that's something I would have to think about.

Quote:

Also, as an alternative to using just tilt sensors in the Rift, you could use Kinect for the arms.
That is where I wanted to take the next step of this system: an interactive arm control system. But what if you build gloves and put some sort of button sensor/switch in the arm of the robot, so that when the robot successfully grabs an object, a small vibration motor in the gloves turns on as added user feedback (just an idea...)?
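That glove idea reduces to a small edge-triggered control loop. A hardware-agnostic sketch, where `read_switch` and `set_motor` are hypothetical hooks standing in for whatever real I/O is available (a DIO channel on the robot, gamepad rumble, a serial link to a glove microcontroller):

```python
class HapticGrabFeedback:
    """Pulse a glove vibration motor when the gripper switch first closes."""

    def __init__(self, read_switch, set_motor, pulse_ticks=10):
        self.read_switch = read_switch  # () -> bool: True when object grabbed
        self.set_motor = set_motor      # (float) -> None: motor power 0..1
        self.pulse_ticks = pulse_ticks  # pulse length in control loop ticks
        self.ticks_left = 0
        self.was_closed = False

    def update(self):
        """Call once per control loop iteration."""
        closed = self.read_switch()
        if closed and not self.was_closed:   # rising edge: just grabbed
            self.ticks_left = self.pulse_ticks
        self.was_closed = closed
        self.set_motor(1.0 if self.ticks_left > 0 else 0.0)
        if self.ticks_left > 0:
            self.ticks_left -= 1
```

Pulsing on the rising edge (rather than vibrating the whole time the switch is held) keeps the feedback informative without being annoying.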

Quote:

One practical idea that I thought of is a...HUD!
No need to make a whole Rift apparatus for that; you just need something like this:
http://www.instructables.com/id/DIY-...A-the-Beady-i/
That's also an idea, but not as fun as a 1st-person display.

Quote:

The problem lies with the infrastructure of FRC itself. There is a bandwidth cap for robot-driverstation communication, and to get 3d you would need 2 cameras streaming to the DS to work. The problem arises with the fact that you have to choose (Pick any two)
That is the only problem that I saw with this idea. It will take some thinking! If anyone else has any ideas, I would love to continue this conversation.

ColinHalter 24-11-2014 08:14

Re: Virtual Reality 1st Person Driver?
 
Technology issues aside, I think that having a first-person view of the bot would be a huge disadvantage. When you're driving the robot, you're able to see the whole field and what's going on where (i.e., map awareness). With a first-person view, you are limited to a small window of visibility. This may help with aiming, but this is a team game* and it's important to know what's going on behind you.

*Going off of A.A.

g_sawchuk 24-11-2014 08:22

Re: Virtual Reality 1st Person Driver?
 
In the future, virtual reality could be used to make a driver sim which, done properly, would be just as good for driver training as driving a real robot. It could be difficult to make, though.

techhelpbb 24-11-2014 08:33

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by GrifBot (Post 1409757)
In the future, virtual reality could be used to make a driver sim which, done properly, would be just as good for driver training as driving a real robot. It could be difficult to make, though.

Years ago I worked out a 'virtual reality' simulation of a FIRST robot in VRML.
LOL some of you probably were not alive then.

It had really poor physics modeling and I suspect that most modern game engines would put it to shame.
Considering it was for fun I wasn't really putting much into it.

That said, a modern game engine and an Oculus Rift for immersion should be possible, considering what one can do with mere shutter glasses.

I am pretty sure DarkBASIC has Rift integration and real time accelerated physics support.

LOL I've had a DarkBASIC license since VB6; I bet that's older than some of you as well ;)

JesseK 24-11-2014 09:37

Re: Virtual Reality 1st Person Driver?
 
We had a HUD last year, with a screen on the glasses. It was pretty sweet, but when the camera's image was up it wasn't nearly as useful as you'd think. We won the Innovation in Controls award on Galileo because the Rockwell people were so impressed by the solution we used to comply with the rules and still not lose much time at the beginning of the match. This year we won't use the same setup, since the drivers don't want the cumbersomeness of it, yet they still want some of the info the HUD gave them. We're working it from a different angle for this coming year.

A whitepaper for 2014's HUD should be up sometime after Thanksgiving Day.

yash101 24-11-2014 21:50

Re: Virtual Reality 1st Person Driver?
 
Virtual reality is something I would be able to use quite well. I want some sort of HUD which displays distances to objects, updated at a really high rate (30+ FPS). It would also be a good place for diagnostics such as robot speed, direction relative to start, and much more.

It's an amazing piece of technology. It's a bit impractical, especially for FRC, though!

BBray_T1296 25-11-2014 01:20

Re: Virtual Reality 1st Person Driver?
 
I feel like not the driver, but the person operating the manipulator/scoring device, may have a competitive advantage from a first-person view.

Take 2011 for example. A camera that gave the manipulator operator that kind of view would be a great advantage. If you recall, the robot scored facing toward you, potentially 20 ft to your left/right, while your view was obstructed by tubes. Being able to see exactly what is going on from a sane orientation could help a lot.

iggy_gim 25-11-2014 16:18

Re: Virtual Reality 1st Person Driver?
 
Yes, definitely the person operating the manipulator should have the VR. Even if the driver operates both the driving and the manipulator, the second driver could have the glasses just to help with strategy, for example telling the driver the robot's position during a dog pile. I've been working on implementing VR glasses and have gotten to the point where I can get video from the robot to the glasses, but at around 5 FPS; I have to find a better way to package the data.

sanelss 25-11-2014 17:28

Re: Virtual Reality 1st Person Driver?
 
A few years back I actually played with this idea. We had two Axis cameras mounted on the bot for stereoscopic vision on the driver station. We used green/magenta glasses (could also use red/blue), and while it did work, it wasn't responsive enough to really be usable in a match. The low framerate and horrible latency made it unusable. If the latency could be reduced (by at least two orders of magnitude) then it may have potential, but until then this type of system simply isn't worth the trouble.

yash101 26-11-2014 22:19

Re: Virtual Reality 1st Person Driver?
 
What about a joystick button? You could push the button to download frames from the robot or something similar. This way, you don't always have to be stuck with one or the other: HUD/no HUD.

I'm sure that a HUD would be most useful with some sort of tracking system, where post-processed data could be displayed. Distances to known objects could be printed on the object. Alignment could also be displayed.
For example:
Code:

TARGET POSITION:
10 degrees right
12 feet forward

SHOOTING POSITION:
10 degrees right
Move forward 2 feet
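Numbers like those could be derived from vision output with a little geometry. A sketch, assuming a linear field-of-view model; the FOV, image width, and ideal shooting range are illustrative values, not from any specific camera or game:

```python
def aim_hud(target_center_x, image_width=640, fov_deg=60.0,
            distance_ft=12.0, ideal_range_ft=10.0):
    """Return (degrees to turn, feet to drive) for the HUD overlay.

    Maps the target's pixel offset from image center to a bearing using a
    linear FOV approximation, and compares the measured distance against
    the ideal shooting range.
    """
    offset = target_center_x - image_width / 2
    bearing = offset / image_width * fov_deg
    drive = distance_ft - ideal_range_ft
    return bearing, drive

# A target centered at x=427 in a 640 px frame comes out to roughly
# 10 degrees right and 2 ft forward, matching the overlay above.
print(aim_hud(427))
```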


techhelpbb 26-11-2014 23:24

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by sanelss (Post 1410036)
A few years back I actually played with this idea. We had two Axis cameras mounted on the bot for stereoscopic vision on the driver station. We used green/magenta glasses (could also use red/blue), and while it did work, it wasn't responsive enough to really be usable in a match. The low framerate and horrible latency made it unusable. If the latency could be reduced (by at least two orders of magnitude) then it may have potential, but until then this type of system simply isn't worth the trouble.

How did you send the data across the network?
As a single preprocessed stream or as 2 cameras over the network?

sanelss 26-11-2014 23:44

Re: Virtual Reality 1st Person Driver?
 
Quote:

Originally Posted by techhelpbb (Post 1410275)
How did you send the data across the network?
As a single preprocessed stream or as 2 cameras over the network?

Two cameras over the network. The cRIO doesn't have the horsepower to pre-process the streams in any meaningful way. The main factors were, and still are, latency and bandwidth. If you're OK with pretty cruddy quality you can work within the bandwidth limit, but until the latency is addressed it's pretty much not worth dealing with for an FRC game.

AlexanderTheOK 01-12-2014 16:49

Re: Virtual Reality 1st Person Driver?
 
I happened to do something almost exactly along the lines of what this thread is about over the past year. I wrote a nice 6 page article in SERVO on it if you want to take a look.

To answer some questions that seem to be hanging:

Without more bandwidth on the field, this is not a feasible method. Over a direct LAN connection it was pulling a minimum of 18 Mbps, which was just barely enough to keep a steady 30 fps at 600x400 per eye.

With the limits the FMS puts on the driver stations it's going to be either impossible to see anything or gut-wrenchingly slow. It's already a tad bit nauseating at the speed it's running.

It also isn't easy to find a cheap IP camera with a high FOV. The cameras I found for 40 dollars are webcams, so they run through YAWCAM and hopefully, later on, MJPG-Streamer.

Still super fun to play with, but not the best idea for a fast paced FRC game with network constraints.


All times are GMT -5.

Powered by vBulletin® Version 3.6.4
Copyright ©2000 - 2017, Jelsoft Enterprises Ltd.
Copyright © Chief Delphi