Re: 2009 - Live camera feed to drivers during a match?
Quote:
Does this mean they are implementing limitations on the data transfer rate? Using their Cisco equipment they could easily cap the bandwidth to make it impossible for a team to transfer live video to the dashboard.
Re: 2009 - Live camera feed to drivers during a match?
Quote:
I wonder if "limitations on the data transfer rate provided by the wireless communication system" will become less onerous throughout the competition season?
Re: 2009 - Live camera feed to drivers during a match?
Quote:
I think a lot of people don't realize (and FIRST has not made this very clear) that you will not be allowed to transmit whatever network traffic you like. You can do it at home for practice, but once you get to the field it will be blocked.
Re: 2009 - Live camera feed to drivers during a match?
Bandwidth from six robots while managing a field has always been the concern with moving to the new Wi-Fi system. Many factors weigh on this, including possible interference at venues and overall security, especially in situations like Atlanta with four fields running at once. While the new fields use the latest state-of-the-art Wi-Fi, there has not been enough characterization of what adding a real-time video stream from each robot would do to the overall performance of controlling the robots over Wi-Fi simultaneously. We will be collecting data from all regionals through this season to see what the possible impacts are.

Begrudgingly, the decision was made to put off streaming video from the robots for this regional season. I have faith that we will prove it can be done reliably after getting data from all venues this year and using robot streaming video in the OFF SEASON, so it can be utilized at future regional events. :cool:

BotBash
BOB Pitzer
4FX Design
Re: 2009 - Live camera feed to drivers during a match?
Streaming video to the teleoperators will only add to pilot workload and doesn't take advantage of the robot's onboard processing capability. My team plans to use a single-bit head-up display, as we did in 2006: when the robot is locked on (distance and azimuth) and ready to shoot, an LED in the co-pilot's safety glasses lights up and the co-pilot can fire away.
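A minimal C++ sketch of that single-bit approach (not the poster's actual code; the names and tolerance values are assumptions for illustration only):

#include <cmath>

// Reduce the targeting state to a single "locked on" bit.
// kRangeTolM and kAzTolDeg are illustrative tolerances, not real team values.
bool IsLockedOn(double rangeM, double desiredRangeM, double azimuthErrorDeg) {
    const double kRangeTolM = 0.15;  // assumed acceptable shooting-range window, meters
    const double kAzTolDeg  = 1.0;   // assumed acceptable aim error, degrees
    return std::fabs(rangeM - desiredRangeM) < kRangeTolM &&
           std::fabs(azimuthErrorDeg) < kAzTolDeg;
}

// On the robot, this boolean would simply drive a digital output wired to the
// LED in the co-pilot's glasses, so only one bit of "display" ever leaves the robot.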
Re: 2009 - Live camera feed to drivers during a match?
Quote:
If you want to send back sensor readings, go for it. If you want to send back internal state, go for it. If you want to send back a stream of robotics-related limericks generated programmatically in response to field conditions, go for it. You can even use it to send back video if you'd like. However, 984 bytes at 50 Hz is what you get. That won't fit full-motion video, but it might be able to fit heavily compressed / decimated video. For example, you could easily send back a black/green/pink thresholded stream. If you are a total nutter, you could send back info on object positions to reconstruct a picture of what things should be on your laptop. For reference, this is about 7 times faster than a 56k dial-up modem. Enough to do a lot of cool stuff, not enough to be wasteful.
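To make the "black/green/pink thresholded stream" idea concrete, here is a hedged C++ sketch (mine, not from the post above) of classifying pixels into four classes and packing them at 2 bits per pixel; the color thresholds are made-up placeholders. For scale, 984 bytes x 50 Hz = 49,200 bytes/s = 393,600 bit/s, which is roughly 7 times a 56 kbit/s modem, as the quote says.

#include <cstdint>
#include <vector>

enum PixelClass { BLACK = 0, GREEN = 1, PINK = 2, OTHER = 3 };

// Very rough color classifier; the threshold constants are placeholders only.
PixelClass Classify(uint8_t r, uint8_t g, uint8_t b) {
    if (r < 40 && g < 40 && b < 40)          return BLACK;
    if (g > 120 && g > r + 40 && g > b + 40) return GREEN;
    if (r > 150 && b > 100 && g + 40 < r)    return PINK;
    return OTHER;
}

// Pack an interleaved RGB frame (3 bytes per pixel) down to 2 bits per pixel.
// A 160x120 frame becomes 160*120*2/8 = 4800 bytes, which fits in five
// 984-byte dashboard packets with room left for headers.
std::vector<uint8_t> PackFrame(const uint8_t* rgb, int width, int height) {
    std::vector<uint8_t> out((width * height + 3) / 4, 0);
    for (int i = 0; i < width * height; ++i) {
        PixelClass c = Classify(rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]);
        out[i / 4] |= static_cast<uint8_t>(c) << ((i % 4) * 2);
    }
    return out;
}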
Re: 2009 - Live camera feed to drivers during a match?
This can lead to an interesting thought experiment.
* To be most useful, the images sent to the driver should arrive at reasonable video speeds (say 10 Hz or so).
* That means we need to fit a single frame into 5 * 984 bytes = 4,920 bytes.
* At 640x480 with 24 bits per pixel (standard RGB color), a single uncompressed image is 921,600 bytes.
* At 160x120 with 1 bit per pixel (binary black and white), a single uncompressed image needs only 2,400 bytes.

Hmm... we could even use 2 bits per pixel (4 shades of gray) and still fit under the limit.

Consider this one possible implementation. We have 5 different types of 984-byte packets. Each packet needs a header telling the dashboard PC which type it is, along with a frame number (to correlate out-of-order packets and drop old ones). It would also be helpful to include target positions (for, say, up to 3 targets we are tracking). Then comes the payload of raw image data (I want to avoid compression for now).

Implementation:

Packet 1:
* 1 byte for correlation (high 4 bits are the packet ID, here 1; low 4 bits are a frame number that rolls over every second and a half or so)
* 3 bytes for target #1:
+1 byte indicating which type of target, and whether or not it is visible
+1 byte indicating the Y position in the frame
+1 byte indicating the X position in the frame
* 3 bytes each for targets #2 and #3, with the same layout
* 974 bytes left over for payload data (the first 3,896 pixels of the image at 2 bits per pixel)...

Packets 2-5 would only need the 1-byte ID/frame number, leaving plenty of room for the rest of the raw image data (a sketch of this layout follows below).

There are various methods for generating your 2-bit thresholded image; many are fairly inexpensive in terms of processing time.

---

I'm not sure about the utility of it, but I think this shows that you might be able to get away with some "video" this year.
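As a rough C++ illustration of that packet layout (my own sketch with assumed names; the post above defines the byte budget, not this exact code):

#include <cstdint>
#include <cstring>

struct TargetInfo {          // 3 bytes per tracked target, as described above
    uint8_t typeAndVisible;  // which type of target, plus a visibility flag
    uint8_t y;               // Y position in the frame
    uint8_t x;               // X position in the frame
};

// Build "packet 1": 1 correlation byte + 3 targets (9 bytes) + 974 payload bytes = 984.
void BuildPacket1(uint8_t frameNumber,          // low 4 bits; rolls over every ~1.6 s at 10 Hz
                  const TargetInfo targets[3],
                  const uint8_t* imagePayload,  // 2-bit-per-pixel packed pixels
                  uint8_t packet[984]) {
    packet[0] = static_cast<uint8_t>((1u << 4) | (frameNumber & 0x0F));  // high nibble = packet ID, low = frame #
    std::memcpy(&packet[1], targets, 9);          // 3 targets x 3 bytes
    std::memcpy(&packet[10], imagePayload, 974);  // 974 bytes = 3,896 pixels at 2 bits/pixel
}

// Packets 2-5 would each carry just the 1-byte ID/frame header plus 983 bytes of
// image data; 974 + 4*983 = 4,906 bytes comfortably covers the 4,800-byte frame.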