#1
Live Video on Dashboard
On Friday of the Florida Regional, Team 2152 was prohibited from using its Dashboard, ostensibly because the live video would provide an unfair advantage over other teams.
We were not able to persuade the Chief Referee that the live video was legal as long as it was passed through the user data. The language of Update #5 was considered ambiguous because it did not explicitly say yes; Q&A answers that omit a simple "yes" or "no" before the explanation seem to remain open to interpretation. The Chief Referee did graciously agree to contact FIRST headquarters Friday night for clarification and presented the following letter: Quote:
As 2152 was doing well at the time (4/2), we made the decision not to make any changes to the cRIO software for the 3 remaining qualifiers. We are providing this information for the possible benefit of teams at future regionals and the championship.

If you have similar capabilities, you should be prepared to bring them to the attention of the inspectors on Thursday and make sure that you are able to demonstrate to them that your live video uses the user data transport and not some other path. Wireshark is a good tool for doing this. I thought the FMS was blocking all other ports, but this may not actually be the case.

2152 intends to post the live camera code and Dashboard source in the next few days so other teams may take advantage of this capability if they wish.
#2
Re: Live Video on Dashboard
Thank you so much! What type of screen did you use?
#3
Re: Live Video on Dashboard
We (1625) were running feedback to a laptop on our DS, and another team reported this to the refs. The refs then checked the Q&A and determined it was allowed.
#4
Re: Live Video on Dashboard
Quote:
The Florida refs initially allowed the telemetry feedback but not the live video.
#5
Re: Live Video on Dashboard
Live video feed from the Axis camera.
#6
Re: Live Video on Dashboard
It was legal for 1708 in Pittsburgh. 17" screen, and we got it up to ~5 fps, which made for very nice video feedback.
--Ryan
#7
Re: Live Video on Dashboard
In Wisconsin it was ruled legal following an examination of the Q&A quoted above. For those who are reading this, the video is low resolution (limited by the camera and data bandwidth) and delayed from real time by what appeared to be the greater part of a second.
#8
Re: Live Video on Dashboard
2077 did this last weekend at WI.
We sent an 80x60 image (later scaled up) split into 5 interlaced fields at 8 bits per pixel, with no compression in the usual sense. (Is there an API for compression?) We used the 8 bits as a 4-bit red channel and a 4-bit green channel, giving "sorta-color" sufficient to distinguish targets.

We got usable video at 10 fps with occasional dropped fields (jerkiness, smears). Latency seemed to be around half a second with our final setup, though some earlier tests did better at the price of less acceptable blurring. We're not sure what the limiting factor was.

The key things we found were 1) controlling the rate at which we sent frames into the Dashboard class (the cRIO->DS loop is asynchronous at 0.02 seconds) and 2) using the lower-level NI imaq APIs to access pixel data in the Image object quickly. I suspect considerably better performance is possible with some tweaking, especially if real compression were used, but that's where we got in the time available, and it was certainly usable.
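2077's "sorta-color" packing is easy to sketch: quantize the 8-bit red and green channels to 4 bits each and pack them into a single byte, so an 80x60 field costs exactly 4800 bytes. A minimal illustration (the function names are ours, not 2077's actual code):

```python
def pack_rg(red, green):
    """Pack 8-bit red and green samples into one byte:
    high nibble = 4-bit red, low nibble = 4-bit green."""
    return ((red >> 4) << 4) | (green >> 4)

def unpack_rg(byte):
    """Recover approximate 8-bit red and green from a packed byte."""
    red = byte & 0xF0            # high nibble back into the 8-bit range
    green = (byte & 0x0F) << 4   # low nibble back into the 8-bit range
    return red, green

# A full 80x60 field is then just 80 * 60 = 4800 packed bytes.
frame = bytes(pack_rg(r, g) for r, g in [(255, 0), (0, 255), (128, 128)])
```

The quantization loses the low 4 bits of each channel, but as noted above that is still enough color separation to distinguish the targets.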
#9
Re: Live Video on Dashboard
Question, and let me just preface this by saying I have no experience with this year's control system.

Isn't the field using the 802.11n Wi-Fi protocol this year? It's my understanding that wireless-N is capable of throughput somewhere in the 100 Mbps range, which makes me wonder why on earth you would be limited to 900-some K and have to compress video down to 160x120. It seems with N you ought to be able to do at least standard-definition video. I would think something like this would be the whole rationale for using wireless-N for field communications; otherwise it seems like way overkill if you're just going to move a few K worth of data around. I apologize in advance if my severe lack of knowledge makes this question a total waste of time.

Just curious,
Justin
#10
Re: Live Video on Dashboard
Quote:
The TCP protocol provides reliable streams at the expense of delay and jitter. TCP is unsuitable for real-time applications except where the error rates are extremely low, e.g. a local Ethernet loop. With wireless, there are retransmissions that occur at layer 2 to get around interference issues (these apply to both UDP and TCP), but the retransmissions at layer 3 for TCP would basically kill any hope of meeting the latency and jitter requirements.

There are solutions (QoS) that can prioritize UDP traffic and allow a smooth coexistence of UDP and TCP. It's a good bet that these will be deployed in future competitions. FIRST has wisely chosen to deploy slowly and move step by step as it rolls out the new system. The system seems to be working extremely well at the regionals; we still haven't seen how it performs with four fields in the Georgia Dome. The next logical step will likely be to deploy RTP/RTSP for the camera. The present camera sends MJPEG as multi-part MIME messages in an HTTP response carried over TCP/IP. This is really unsuitable for the field environment, although it may work well in the lab.

The biggest gain for the moment is that 802.11n operating in the 5 GHz band(s) provides many more channels, and they are largely uncluttered at the moment. The 2.4 GHz "junk bands" (as the FCC calls them) are filled with, well, junk. As mobile phones (and venues) begin deploying 802.11n in the 5 GHz bands, the interference is going to dramatically increase. Accordingly, it's wise to remain ultraconservative. At the moment we have about 400 kbps of reliable bandwidth in both directions. To those of us who started with 75 baud, this sure seems like a lot.

As for compression, the only way to get images under 2K bytes is to use low resolution (160x120) and very high compression. Remember, the camera produces MJPEG - not MPEG. MJPEG video is just a series of JPEG pictures at the frame rate. Each frame stands by itself (like an MPEG I-frame). There is no compression involving differences with forward and backward frames, and in the real-time environment those delays cannot always be tolerated.

And don't apologize for asking questions - that's what the forum is about.
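To make the MJPEG point concrete: because every frame is a self-contained JPEG, a receiver can recover frames from the raw byte stream just by scanning for the SOI (FF D8) and EOI (FF D9) markers. A hedged sketch of that idea - not the Axis camera's exact multipart framing, which also carries MIME boundaries and headers:

```python
def extract_jpegs(buffer: bytes):
    """Scan a byte buffer for complete JPEG frames delimited by
    SOI (FF D8) / EOI (FF D9) marker pairs.
    Returns (list of complete frames, leftover bytes)."""
    frames = []
    while True:
        start = buffer.find(b"\xff\xd8")
        if start < 0:
            return frames, b""               # no frame started yet
        end = buffer.find(b"\xff\xd9", start + 2)
        if end < 0:
            return frames, buffer[start:]    # partial frame: wait for more data
        frames.append(buffer[start:end + 2])
        buffer = buffer[end + 2:]
```

Each recovered frame decodes on its own, which is exactly why a lost frame costs you one picture and nothing more - there are no inter-frame dependencies to break.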
#11
Re: Live Video on Dashboard
Quote:
I find the JPEG encoder on the 206 to be pretty good. The attached example is 1.54 KB. When you put an image of this quality in motion at 25 fps and expand it to 320x240, the eye integrates the pixelation and fills in the gaps.

When the camera is set up to stream, the images arrive over the TCP/IP link as a multi-part HTTP response that essentially goes on forever, or until the TCP/IP connection is torn down. The key is not to decode the images. The decoder in the library is pretty slow, there is no encoder, and additional delay would be introduced by transcoding. Instead, the compressed JPEG image is just passed through the user data stream to the DS, which forwards it on to the Dashboard. If you want, you can also decode the image on the side into an HSL image for processing the tracking.

In order to control all the timing, we replaced the Dashboard, DriverStation, and RobotBase classes with our own variants. The main difficulty is that you cannot allow queues to form. You have to throw away stale images.

The system timing is all controlled by the DS. The DS sends a packet every 20 ms, and it passes the last received packet to the Dashboard every 20 ms. This is more or less synchronous. The MDC responds to each DS packet with an MDC packet. The MDC never originates a packet; it only responds. The DS never responds; it just streams a packet toward the MDC every 20 ms (and another to the Dashboard in a different task). There are better ways to do this - but that's a discussion for post-season. When packets are delayed and arrive bunched up at the MDC, it responds to each one with the latest buffer it has. When the DS receives these packets, it just places the data in the buffer, overwriting the previous content. Every 20 ms, the Dashboard gets the latest data from the MDC. The Dashboard will generally get a reliable 50 pps stream with close to zero jitter and no dropped packets because the connection is local. However, the CONTENT of these packets reflects upstream packet losses, delays, and jitter. So the pipeline is reliable transport -> unreliable transport -> reliable transport.

To summarize the timings:
1) The image is captured by the CCD (based on exposure time), always less than 33 ms.
2) The image pixels are dumped en masse to the shift CCDs and marched out of the CCD into the JPEG compressor. The image can be compressed on the fly since JPEG works with multiple scan lines. This delay is also well under 33 ms.
3) The JPEG image is sent over the TCP/IP connection. This has minimal delay.
4) The MDC TCP/IP stack receives the image, unblocking the waiting read task.
5) Once the complete image is read, it is passed on for formatting into the telemetry packets. This requires synchronization with the stream. If the image is too old, it is discarded here. Otherwise, it goes on to be packed within 2 or more telemetry frames, which introduces a 30 ms typical delay.
6) The actual packet transmission delays are nominally small. There is an additional 10 ms average delay from DS to Dashboard.
7) The packet arrives in the Windows IP stack, where it is passed to a waiting thread in the Python code.
8) The image is decoded (to RGB) and expanded (from 160x120 to 320x240) by Python PIL code and then written to the display. This has a delay of about 20 ms.
9) The image will not be seen by anybody until the next video refresh comes around. This is an average delay of 8.3 ms.

The entire process has an end-to-end delay of around 150 ms. Longer delays will occur if you allow a queue to develop anywhere along the pipeline. You must drain and discard to avoid this. The most common places for this to occur are in the IP stacks in the MDC (for data coming from the camera) and in the Dashboard (for packets coming from the DS). If you don't keep up with all of the processing, queues will build and you will start to see large delays.
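"Drain and discard" amounts to a one-slot mailbox: the producer overwrites any unread frame, and the consumer always takes the newest one, so no queue can form anywhere in the pipeline. A Python stand-in for the pattern (our own sketch, not taken from the actual cRIO or Dashboard code):

```python
import threading

class LatestFrame:
    """One-slot buffer: put() overwrites any unread frame, so a slow
    consumer sees the newest data instead of a growing backlog."""
    def __init__(self):
        self._lock = threading.Lock()
        self._frame = None
        self._dropped = 0

    def put(self, frame):
        with self._lock:
            if self._frame is not None:
                self._dropped += 1   # stale frame discarded, never queued
            self._frame = frame

    def take(self):
        """Return the newest frame, or None if nothing new arrived."""
        with self._lock:
            frame, self._frame = self._frame, None
            return frame

buf = LatestFrame()
for i in range(5):       # producer outruns the consumer
    buf.put(i)
assert buf.take() == 4   # consumer gets only the newest frame
```

The cost of this design is deliberate frame drops under load; the benefit is that latency stays bounded at roughly one frame instead of growing with the backlog.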
#12
Re: Live Video on Dashboard
We used the Flatten Image to String VI, which allows you to specify a compression algorithm (PNG/JPG/TIFF/BMP - though TIFF and BMP probably wouldn't be suited for this application) and, in the case of JPG, a quality setting which essentially controls the compression rate/lossiness (those who are familiar with the JPG format are aware of this setting).

This produces a string which we converted to a byte array using the usual String to Byte Array function, then packetized to send over the network. The one disadvantage of this approach is that the compression algorithm takes a little time to run, but this was compensated for by running parallel while loops with 1-element queues to pass the data around. Unfortunately I won't have access to the code for another week or so, but I can post it then if anyone is interested.

I do like the idea of just passing through the images from the camera. [writchie] I think you hinted at this, but does the Axis cam support changing JPEG quality levels?
--Ryan
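The parallel-loop/1-element-queue structure Ryan describes has a direct analogue in any language: a bounded queue where the producer replaces a stale item instead of blocking on it. A hedged sketch, with `offer_latest` being our name for the helper, not a LabVIEW VI:

```python
import queue

def offer_latest(q: "queue.Queue", item):
    """Put item into a 1-element queue; if it's full, discard the stale
    element first so the acquisition loop never blocks on the compressor loop."""
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()       # drop the unconsumed (stale) item
        except queue.Empty:
            pass                 # consumer beat us to it; that's fine
        q.put_nowait(item)

q = queue.Queue(maxsize=1)
offer_latest(q, "frame-1")
offer_latest(q, "frame-2")       # replaces frame-1
assert q.get_nowait() == "frame-2"
```

This decouples the loops: the slow compression stage always works on the freshest frame, and the fast acquisition stage never stalls waiting for it.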
#13
Re: Live Video on Dashboard
Quote:
I missed the imaqFlatten function (the C API version). It looks like there is a lot of good stuff in the vision library that time hasn't allowed us to explore yet. BTW, I believe that NIVision.h is proprietary - not open source. This is one of the few files we cannot modify and distribute; we are licensed only to use it, like the other proprietary components. National Instruments is being very nice to us, and we need to fully respect all of their IP rights.

Depending on how good the JPEG encoder is, i.e. its speed and file size at high compression settings, it looks like it might be possible to handle higher-quality 320x240 images (for better tracking), scale them to 160x120, compress them with a high setting, and ship them off to the Dashboard while maintaining 25 fps. Since the JPEG decoding speed is pretty slow, I would expect the encoding speed to have similar issues. Have you measured how much processor time is required to encode at high compression settings?
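One way to answer the encoding-time question offline is simply to average the compressor over many passes. The sketch below uses zlib as a stand-in for the JPEG encoder (the measurement method, not the codec, is the point; a real test would wrap the NI encode call the same way):

```python
import time
import zlib

def time_encode(data: bytes, level: int, iters: int = 50) -> float:
    """Return average seconds per compression pass at the given level."""
    start = time.perf_counter()
    for _ in range(iters):
        zlib.compress(data, level)
    return (time.perf_counter() - start) / iters

raw = bytes(range(256)) * 75   # ~19 KB stand-in for a 160x120 8bpp frame
fast = time_encode(raw, level=1)
best = time_encode(raw, level=9)
# Compare the averages against the per-frame budget (40 ms at 25 fps).
```

Averaging over many iterations matters on the cRIO, where a single-pass measurement is easily skewed by other tasks preempting the loop.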
#14
Re: Live Video on Dashboard
I would really like to get a look at the code when you get time to post it. I don't really understand what everything means yet, but I want to learn. Our programmers are going to start on code to try to get this working, and we are probably going to need all the help we can get.
#15
Re: Live Video on Dashboard
Everybody seems to be doing great on this thread. I only have a few things to add. I've seen the camera timing get flaky when compression is set too high. I'm not sure where the threshold is, or whether it involves image size or is simply affected by the compression factor. By flaky I mean that the image frame rate will drop because some frames take two periods to be sent.

Also, modifying the image and recompressing on the cRIO doesn't seem like a good use of CPU time. If you can find a single camera setting such that the image can be piped straight to the dashboard, that will have minimal impact on other tasks and will get the highest frame rate to the dashboard.

Greg McKaskle