#3
23-11-2014, 00:03
techhelpbb
Registered User
FRC #0011 (MORT - Team 11)
Team Role: Mentor
 
Join Date: Nov 2010
Rookie Year: 1997
Location: New Jersey
Posts: 1,620
Re: Virtual Reality 1st Person Driver?

Quote:
Originally Posted by BBray_T1296
.. and to get 3d you would need 2 cameras streaming to the DS to work.
Combine the two images into a stereoscopic image on the robot, then send frames of that stereoscopic image, so the viewer at the driver's station can wear either old-fashioned red/blue or polarized lenses. This should be doable well within the computing power of a laptop that fits within the COTS rules, if those do not change this year.

http://www.3dglassesonline.com/learn...d-glasses-work
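
A minimal sketch of that merge step, assuming Python and OpenCV on the robot-side laptop with two USB cameras on indices 0 and 1 (the camera indices and the red/cyan encoding are my assumptions for illustration, not anything specified in this thread):

Code:
import cv2
import numpy as np

left_cam = cv2.VideoCapture(0)    # assumed: left-eye camera on index 0
right_cam = cv2.VideoCapture(1)   # assumed: right-eye camera on index 1

def grab_anaglyph():
    ok_l, left = left_cam.read()
    ok_r, right = right_cam.read()
    if not (ok_l and ok_r):
        return None
    # Keep the red channel from the left eye and the green/blue (cyan)
    # channels from the right eye; red/blue glasses separate them again.
    anaglyph = np.zeros_like(left)
    anaglyph[:, :, 2] = left[:, :, 2]     # OpenCV is BGR, so index 2 is red
    anaglyph[:, :, 0] = right[:, :, 0]    # blue from the right eye
    anaglyph[:, :, 1] = right[:, :, 1]    # green from the right eye
    return anaglyph

if __name__ == "__main__":
    frame = grab_anaglyph()
    if frame is not None:
        cv2.imwrite("anaglyph.jpg", frame)

Polarized lenses would need an interleaved or side-by-side encoding plus a matching display instead of the simple channel merge above, so the red/blue route is the cheaper one to prototype.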

Oddly enough, in my previous suggestions I proposed sending full frames one at a time (i.e., key frames) instead of a stream of video changes; that should be an easier kind of load for the network to handle. If you snap the pictures from the two cameras the proper distance apart, process them, and then send the result as quickly as you can, that will likely work out. Even if you only get a single static image, it would still exhibit depth.
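
A sketch of the "one full frame at a time" idea, continuing the example above. The driver-station address, port, and JPEG quality are placeholders I made up, not values from the rules, and a real setup would still have to stay under the field bandwidth cap:

Code:
import socket
import cv2

DS_ADDRESS = ("10.0.11.5", 5800)   # hypothetical driver-station IP and port

def send_frame(frame, sock):
    # Encode the stereoscopic frame as a standalone JPEG so every packet
    # is a complete picture rather than a delta in a video stream.
    ok, jpeg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 60])
    if ok and len(jpeg) < 65000:           # must fit in one UDP datagram
        sock.sendto(jpeg.tobytes(), DS_ADDRESS)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frame = cv2.imread("anaglyph.jpg")     # reuse the frame from the sketch above
    if frame is not None:
        send_frame(frame, sock)

Because each datagram is a whole picture, a dropped packet just costs one frame; the viewer still gets depth from whichever complete frames arrive.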

Last edited by techhelpbb : 23-11-2014 at 07:40.