New Camera Class

As opposed to the stock camera server, my server gets 20+ FPS at 160x120. Other image sizes are slightly buggy, but I’ll fix that soon. I’ll be adding more features later.

Awesome! Without poring through the code, how did you manage to do this?

Server

Step 1. Acquire Image
Step 2. Calculate how many packets will be required to send the image
Step 3. Send packet(s) via UDP (1032 bytes each), populate any outdated data
Step 4. Repeat 1 - 3
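The packet math in step 2 is just a ceiling division of the image size by the per-packet payload. Here is a minimal sketch of steps 2 and 3; the 8-byte header layout (frame id + packet index) and the 1024-byte payload split are my assumptions for illustration, not the actual wire format, and the real server would sendto() each packet instead of collecting them:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical layout: 1032-byte packet = 8-byte header + 1024-byte payload.
constexpr size_t kPacketSize  = 1032;
constexpr size_t kHeaderSize  = 8;     // assumed: 4-byte frame id + 4-byte packet index
constexpr size_t kPayloadSize = kPacketSize - kHeaderSize;

// Step 2: how many packets does this image need? (ceiling division)
size_t packetsNeeded(size_t imageBytes) {
    return (imageBytes + kPayloadSize - 1) / kPayloadSize;
}

// Step 3 (sketched): split the JPEG into fixed-size packets.
std::vector<std::vector<uint8_t>> packetize(const std::vector<uint8_t>& jpeg,
                                            uint32_t frameId) {
    std::vector<std::vector<uint8_t>> packets;
    for (size_t i = 0; i < packetsNeeded(jpeg.size()); ++i) {
        std::vector<uint8_t> pkt(kPacketSize, 0);
        uint32_t idx = static_cast<uint32_t>(i);
        std::memcpy(pkt.data(), &frameId, 4);      // header: frame id
        std::memcpy(pkt.data() + 4, &idx, 4);      // header: packet index
        size_t off = i * kPayloadSize;
        size_t len = std::min(kPayloadSize, jpeg.size() - off);
        std::memcpy(pkt.data() + kHeaderSize, jpeg.data() + off, len);
        packets.push_back(pkt);
    }
    return packets;
}
```

With these assumed sizes, a ~5 KB low-resolution JPEG fits in 5 packets.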

Receiver

Step 1. Acquire a 1032 byte packet
Step 2. Translate the packet into usable information (e.g., big endian to little endian)
Step 3. Populate “Status” with new information
Step 4. If the image is finished, present it
Step 5. Repeat 1 - 4
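For step 2, multi-byte header fields arrive in network byte order (big-endian) and have to be swapped on a little-endian desktop. A small sketch of that conversion, assuming a hypothetical header of frame id followed by packet index (not the actual format):

```cpp
#include <cassert>
#include <cstdint>

// Convert a 32-bit big-endian value (as it arrives off the wire) to host order
// by reassembling it byte by byte -- correct regardless of host endianness.
uint32_t be32toHost(const uint8_t* p) {
    return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16) |
           (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
}

// Hypothetical "Status" fields from step 3, pulled out of a received packet.
struct PacketStatus {
    uint32_t frameId;
    uint32_t packetIndex;
};

PacketStatus parseHeader(const uint8_t* packet) {
    return { be32toHost(packet), be32toHost(packet + 4) };
}
```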

This is, of course, a simplified overview.

So you segment the image into several packets and send it over UDP instead of TCP? I assume you have error-checking built in? I’ll have to check it out.

I care very little about errors. I throw them out and wait for a new image.

I’m curious - how does this affect the total bandwidth used by a single cRIO and dashboard?

In other words, have you increased 20-fold (or is it 20^2 for an image) the number of packets that are going to be sent to increase your framerate by 20 times?

For some reason I thought the communication protocols were sacrosanct - that is, I thought we weren’t allowed to touch them because of bandwidth concerns. I had thought that was why they limited the dashboard to a 1 Hz update, though I might be wrong: you know what they say about assuming.

I’m not sure either. The TCP stream tries to send an image pretty fast, but fails. It also implicitly fragments, but I do not know how this affects socket I/O. I know that I use ~1MB/s for 640x480@30Hz with JPEGs drawn on the computer, but those are much larger than the camera’s JPEGs at zero compression. I’ll post bandwidth usage for each image size tomorrow.
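For a rough sanity check on that ~1MB/s figure, the implied per-frame size is just throughput divided by frame rate - about 33 KB per desktop-drawn JPEG, which is indeed much larger than a low-resolution camera frame:

```cpp
#include <cassert>

// Back-of-envelope: bytes per frame implied by a throughput and a frame rate.
constexpr int bytesPerFrame(int bytesPerSec, int fps) {
    return bytesPerSec / fps;
}
```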

Bandwidth Usage:


160x120: 100 KB/s
360x240: 400 KB/s (peak; average was around 300 KB/s)
640x480: Not Tested

Good work. However, before you train drivers with this video feed available, you might want to ask FIRST Q&A if UDP traffic will be blocked by FMS at competition. Unfortunately, I assume it will be blocked, just like the PCVideoserver port was blocked last year. FIRST is obviously concerned about the bandwidth usage at competition. It would be nice if QoS was used so video could be sent at a lower priority but utilize all available bandwidth.

Already being done. Yesterday we submitted a question to the Q & A forums. This system won’t use any more bandwidth than the WPI version (in fact, it will probably use less, due to the lower overhead of UDP and the fact that this system doesn’t re-transmit packets that get dropped or lost).

The source code can be found by using svn.


svn checkout http://frc-video-collab.googlecode.com/svn/trunk/ frc-video-collab-read-only

The C++ code will be updated tomorrow morning at around 10:30 EST.

The GDC got back to us, and their answer is, in a word, no:

http://forums.usfirst.org/showthread.php?t=14284

We’ll have to see what this update brings before moving forward.

It looks like what they really said is no to UDP… can you make it work over TCP?

Sure, but we think that a lot of the slowness was caused by TCP. For example, if a frame of the video being transmitted gets screwed up, it’s not that big of a deal using UDP, especially if you have a high framerate. You can just drop the frame and wait for the next one (no big deal if you’re getting a good 20 of them every second - the user will barely notice). If we use TCP and the same thing happens to a packet, the receive() function stops the entire program execution, sends a request to resend the packet, and waits for the packet to get sent again, all for just one frame (or even just one part of a frame). You’re making the entire system, which could be better spending its time just displaying the next frame(s), wait on just one frame instead of moving on.
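The drop-the-frame policy described above can be sketched as a tiny receiver-side state machine: reassemble one frame at a time, and if a packet for a newer frame shows up before the current one completes, silently discard the partial frame rather than requesting a resend. (The class and field names here are mine, for illustration only.)

```cpp
#include <cassert>
#include <cstdint>

// Tracks reassembly of one frame at a time; anything incomplete is dropped
// as soon as a newer frame starts arriving -- no retransmit requests.
class FrameAssembler {
public:
    // Returns true when the frame identified by frameId is complete.
    bool onPacket(uint32_t frameId, uint32_t packetIndex, uint32_t packetCount) {
        (void)packetIndex;  // a real receiver would use this to place the payload
        if (frameId != currentFrame_) {   // newer frame: drop the partial one
            currentFrame_ = frameId;
            received_ = 0;
        }
        ++received_;
        return received_ == packetCount;  // all pieces of this frame seen
    }
private:
    uint32_t currentFrame_ = 0;
    uint32_t received_ = 0;
};
```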

We’re also aware that some of the latency is caused by the Classmate not being fast enough to keep up with the cRIO, but TCP doesn’t help in that respect.

They said no to UDP on port 1180. This makes perfect sense as our packets would corrupt the driver station’s packets. Currently, the video server works off of port 1234 (25FPS@160x120 resized to 640x480). However, I see a lot of erroneous packets as the robot gets farther away.

TCP hates not being acknowledged…

The GDC didn’t say that UDP can’t be used on another port. They do say custom TCP protocols are allowed, but they don’t mention UDP - it’s unclear whether the ruling covers all ports or just 1180.

We’d need to have clearance to use a port though, because the FMS firewalls off any ports not cleared for use by the robot and driver station.

Also, the video feed and the rest of the driver station data run on different ports. The video uses a TCP connection to port 1180, while the rest of the dashboard data sends 1018 byte packets through UDP on port 1165. They wouldn’t corrupt each other at all.

I’ll have to post the whitepaper I made about the rest of the dashboard data some time.

You should ask for official clarification on this, but my understanding is that UDP is not provisioned on any port on the FMS except for the official control and status packets, and you certainly can’t use those.

That would be great!

I believe you (or someone else) mentioned sending video over the UserData dashboard data mechanism last year… it is implemented as UDP transfers, with only marginally smaller packets than you get with raw datagrams. I’m not aware of any changes that would have made this stop working. Is this method no longer feasible or desirable for some reason? It has about 47 kB/s of throughput.

-Joe

Our team implemented this last year. This year we were trying to get a video stream at a high FPS with good quality. Each frame at 0 compression takes 5 packets, and that caps it at about 10 FPS. With a high compression setting, around 70-80, we got 16 FPS, but the quality couldn’t be used for image tracking.
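Those figures line up with the ~47 kB/s UserData throughput quoted above: dividing the channel rate by five 1018-byte packets per frame gives roughly 9 frames per second.

```cpp
#include <cassert>

// Rough FPS ceiling for a fixed-throughput channel:
// throughput / (packet size * packets per frame), integer-truncated.
constexpr int maxFps(int bytesPerSec, int packetBytes, int packetsPerFrame) {
    return bytesPerSec / (packetBytes * packetsPerFrame);
}
```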

The stream uses 100KB/s at 160x120 and we resize it to 640x480. It looks good actually.