Who used Driver Station for Vision?

I am just wondering who was using their Driver Station or a non-cRIO computer for vision processing in Rebound Rumble, and what you are going to do next year with the new field data limits.

On a side note, because of the $400 limit on parts on the robot and the new allowance for laptops on robots, would people like a higher price limit for laptops/alternative computing devices that go on the robot?

Where did you hear about new field data limits?

Page 23 of the Einstein Investigation Report: <http://www3.usfirst.org/node/2426>

Planned changes to the wireless system to increase robustness were confirmed by the feedback from the wireless
experts consulted as part of the investigation. These items do not directly address failures that occurred on
Einstein but aim to make the wireless network configuration more robust:

Quality of Service (QoS) – With a fixed bandwidth cap in place, it becomes critical to prioritize robot
control packets over other types of data such as video. QoS can be used to implement this prioritization
so that robot control packets will continue to flow even if a team exceeds the bandwidth cap with video
or other data.

Ideally each of the 6 teams on the field will get ~50Mbps, most of which will go to FMS/Robot Control data.

I’m pretty sure the robot control data doesn’t take more than about 1KB per update, at around 50 Hz. This gives you 400 Kbps for robot control data.

To transmit a single frame of video at 320x240 resolution and 24-bit color takes 320 × 240 × 24 ≈ 1.8 Mb. Note that the Axis cameras use MJPEG compression as well, so this is a gross overestimate. For targeting purposes, given the network lag that’s going to be inherent in the system, you shouldn’t need more than 10 fps. Maybe 15 if you want smoother-looking video for display to your drivers.

That’s still only 30 Mbps (even with uncompressed video).
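For anyone who wants to play with those numbers, here is the back-of-the-envelope math in plain Python. The packet size, frame rate, and resolution are just the assumptions from above, so change them to taste:

```python
# Rough bandwidth estimates; all constants are assumptions, not measured values.

CONTROL_PACKET_BYTES = 1000   # ~1 KB per control update
CONTROL_RATE_HZ = 50          # control updates per second

control_kbps = CONTROL_PACKET_BYTES * 8 * CONTROL_RATE_HZ / 1000
print(f"Robot control: ~{control_kbps:.0f} kbps")                        # ~400 kbps

WIDTH, HEIGHT, BITS_PER_PIXEL = 320, 240, 24
FPS = 15

frame_mbit = WIDTH * HEIGHT * BITS_PER_PIXEL / 1e6
print(f"Uncompressed frame: ~{frame_mbit:.1f} Mb")                       # ~1.8 Mb
print(f"Uncompressed video at {FPS} fps: ~{frame_mbit * FPS:.0f} Mbps")  # ~28 Mbps
```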

Note the word "ideally": 300 Mbps is the theoretical bandwidth for 802.11n with the FMS router and robot radio in channel-bonding mode (which doesn’t happen as far as I know), but it can drop to under 130 Mbps, which would give each team ~21 Mbps or less. Also, to do nice detailed image processing, my team used 640x480 resolution on two cameras in stereo, pushing us to ~14 megabits (7 Mb × 2) for both images at ~15 fps, excluding network overhead. According to the FMS people in Raleigh, this was getting close to taxing the FMS when we were the only team using vision on the field during a match. :confused: :eek:

Here’s what the Q&A said last year:

There are currently no bandwidth limits in place in the field network. In theory, each team has 50Mbits/second (300Mbits/6) available, but that’s not actually realistic. In reality, each team is likely to have ~10-12Mbits/s available. This rate will vary depending on the location of the radio on the Robot and the amount of wireless traffic present in the venue at 5GHz. While this information may help give teams an idea of what to expect, note that there is no guaranteed level of bandwidth on the playing field.

Acknowledged - just pointing out that the needed bandwidth is fairly close to the actually available bandwidth, not separated by many orders of magnitude. I’m actually hoping they can implement a robust QoS system; it will make our jobs much easier instead of having to implement throttling ourselves.

Did you do any testing to compare your accuracy against a lower resolution?

341 used off-board vision using Java/SmartDashboard. It worked really well for them - one of the top 5 teams in FRC this year. We used their code as well, and it made a huge difference for us.

They were kind enough to put their code up on Delphi, so take a look if you’re interested.

-RC

Yes, we tested at lower resolutions. Refer to my team’s white paper for examples of a 640x480 image and how far apart our cameras were. The reason for the large images is the calculations: low-resolution images give larger errors in our distance and angle calculations because each pixel covers more of the target.
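As a rough illustration of why resolution matters so much for stereo: the standard relation is Z = f·B/d (distance = focal length in pixels × baseline / disparity in pixels), so a one-pixel disparity error shifts the distance estimate by roughly Z²/(f·B). The numbers below are made up for illustration, not the values from our white paper:

```python
# One-pixel disparity error in stereo depth: dZ ~ Z**2 / (f * B).
# Halving the image width halves f (in pixels), doubling the error per pixel.

BASELINE_M = 0.30   # assumed distance between the two cameras
Z_M = 5.0           # assumed distance to the target

for width_px, f_px in [(640, 800.0), (320, 400.0)]:
    err_m = Z_M ** 2 / (f_px * BASELINE_M)
    print(f"{width_px}-px-wide image: ~{err_m * 100:.0f} cm error per pixel of disparity error")
```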

Edited the top question.

That’s what I had assumed, but I’m curious if you noted how much larger the errors were. I’d also be interested in the errors of the stereo distance measurement vs the perspective distance measurement as a function of distance, if you have them.

Due to the events on Einstein some of the rules regarding networks have changed. One of those is that there’s going to be a bandwidth limit for each robot and QoS to ensure the driver commands can make it through with as little latency as possible.

We ran our vision on Dashboard.

I set the frame-rate limit to 20 fps. Due to the design of the control algorithm we used, more frames are better, so we try for 20. We get images at 320x240 (we determined this was the smallest size that achieved adequate resolution with the fish-eye lens we used).

If we encounter any bandwidth limit in the future, we’ll either run smaller or slower. Running at 320x240 at 10 fps is still far better than what the cRIO on its own is capable of (I don’t think we could get more than 6 fps at any resolution, given the amount of processor used by non-vision code), and any solution that is weight-neutral on the robot is the solution we will almost always choose.

I would think a better plan would be to do the vision processing on the robot: use cheaper and better USB cameras with low image-capture lag plus a light but powerful laptop, and send only the relevant data to the Driver Station/robot. But alas, there is a dreaded $400 limit on ALL parts; if only laptops that go on the robot could be a little more expensive, the FMS would not have to worry about large images clogging the field network… :rolleyes: :wink:

If you’re worried about the price limit and weight, you shouldn’t even be looking at laptops. Build your own Mini-ITX system. You can buy/bill the components separately and to your needed specifications, and that way you don’t have to carry around the screen. There are power supplies on that site designed for car computers that tolerate voltage drop-outs, so you can even power the computer off the robot’s battery, saving you more weight.

For me the bigger question is, why doesn’t FIRST just ditch the cRIO and go with laptop-controlled robots?

All it would take is a USB motor controller + I/O board (and since we already have FIRST-specific electronics that ship with the cRIO, I don’t see this as being a huge issue) and almost any laptop, and you’d have a control system that can be easily upgraded/updated, has an independent backup battery (no worries about power loss to your controller), can be programmed in virtually any language, can run vision processing without eating network bandwidth, and, depending on what you get, can be cheaper and have more features than the cRIO.

Given how much timing jitter we have now, I can only imagine how bad it would be if we ran a non-RTOS.

Or how bad the boot times would be.

<rant>I would be very, very happy with an embedded system. Something like a PowerPC or ARM that has a good RTOS without any extra junk, just raw hardware access, and an Ethernet stack.</rant>

The key to getting good execution timing and RT performance is to reduce overhead. Going to a non-RTOS without a real-time coprocessor would just be terrible for timing and performance.

[EDIT: this first set of arguments is further supported by [apalrd]'s arguments for RTOSes] Start from FIRST’s requirement that the control system have a kill switch that can’t be blocked by user code (it’s the reason the IFI controllers had a "master" coprocessor, and a big reason why we can’t touch the FPGA on the cRIOs) and step into their paranoid mindset for a second. Either you accept that the laptop won’t be the primary control processor, in which case your I/O board expands to become a microcontroller board and you basically get back to where we are now*, or the laptop probably won’t be able to run just any commodity OS, and the user code will probably have to run in some sort of jail/VM/isolate. Those last two points combined mean you have no guarantee that it can be programmed in "virtually any language." Otherwise, I would argue that the cRIOs can be programmed in "virtually any language" as well: assuming the language is open source, you should be able to cross-compile for the PowerPC target. This is how RobotPy works.

Also, you have to find somebody to maintain and support this new platform. FIRST employs only a handful of control-system engineers; a lot of the work on the software for the current system is done by NI or the WPILib project. If you move to a laptop-based system, you lose at least half of that team. Who handles all the calls and emails when teams start having problems with the system? The laptops also have to come from somewhere, so you have to find a sponsor willing to donate, or sell at a greatly reduced price, 2300+ laptops. You may argue that already happens with the driver stations - and I’m not saying it’s impossible, just that it would have to happen.

If we ignore all the above and suppose they did allow teams to use their own laptops, there’s also the question of maintainability at the competitions. The ability of the FTAs and CSAs to help troubleshoot problems becomes greatly reduced when you open up such a critical part of the control system. FIRST is having a difficult enough time keeping the current system running, as evidenced by the communication problems and so on, even when there aren’t malicious parties involved. I’m not trying to insult FIRST at all - just saying the job is a difficult one already. It would become increasingly unclear whether a problem was in the field system or whether the team had messed something up themselves.

* Perhaps the argument here comes down to the fact that the cRIOs are bulkier than they need to be. And I would agree with you. I doubt FIRST needs controllers that are certified for 50G shock loads, etc. See above points on logistics, though. It might have been interesting (political issues notwithstanding) if we had kept the old IFI controllers but made it easier to interface them with a laptop.

Despite my arguments to the contrary, I think it would be a great opportunity if FIRST did move to a laptop-based system. I guess the last point is that I am encouraged by FIRST opening up the driver station in the last couple of years. Perhaps this is a sign of things to come (I hope).

Keep in mind that JPEG compression is sensitive to the image contents. We found that by using a very intense lighting rig (two Superbright LED rings) combined with a very short exposure time for the camera, most of every image we took was very, very dark except for the target (and other lighting sources). Dark areas of the image have low SNR and compress very well.

The 640x480 images getting transmitted to our laptop were almost always under 20KB because they were so underexposed. Even at 30 fps, that is less than a megabyte per second.
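If you want to sanity-check how well an underexposed frame compresses, a quick experiment along these lines works. This is just a sketch using OpenCV and NumPy on a synthetic frame (a near-black image with one bright green rectangle standing in for the lit target), not one of our actual camera captures:

```python
import cv2
import numpy as np

# Mostly-black 640x480 frame with a little sensor noise and one bright "target".
frame = np.random.randint(0, 6, (480, 640, 3), dtype=np.uint8)
frame[200:260, 280:360] = (0, 255, 0)

ok, jpeg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 30])
raw_kb = frame.nbytes / 1024
print(f"raw frame: {raw_kb:.0f} KB, JPEG: {len(jpeg) / 1024:.1f} KB")
print(f"at 30 fps: ~{len(jpeg) * 30 / 1024:.0f} KB/s")   # far below the raw ~27 MB/s
```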

See the attached file for an example of the images we were sending back for vision processing.

image (22).jpg

We wound up moving our vision code to the laptop, but only because an unbounded loop on the cRIO prevented it from running correctly. If we had found that loop, I think we would have had it working onboard. That said, I think this year we will go with a single-board system and a USB camera processed by a C app in Debian (mainly because the Axis hardware gave us more trouble than it was worth when trying to set things like exposure time and compression level). We didn’t see that much network usage - maybe 5 MB/s - and I think we were working with 8-bit color (not sure on that, but it wasn’t much).

Team 11 found and used an AMD dual-core netbook (it had a bigger screen than what most might consider a netbook) on our robot for Rebound Rumble. It came with an SSD. The screen was removed, and the original battery was used (we had also considered ITX boards, PC/104 boards, BeagleBones… we didn’t want to fight with power supply issues). It passed inspection at the 3 competitions it was used in. Later in the season it was removed (it worked fine; it was removed to adjust for driving styles). It was just under the $400 limit.

They had 2 USB cameras connected to it: one high resolution (1080p) but low speed (measured 5+ frames a second), and one high speed (measured 30+ frames a second… this was fun to watch and could swamp a single core) but low resolution (640x480). It was running Linux and using custom Java software written by the students to process video and send control signals to the cRIO over the robot Ethernet. We tried quite a few USB cameras (I’ve got a 1-cubic-foot box full of them now). Some had terrible white balance. Some didn’t work well in Video4Linux but were a little better in Windows (well, it was a Microsoft camera, LOL). Some had terrible or unexpectedly variable frame rates. We found, oddly, that several of the very cheap webcams on Amazon worked great ($5 webcam versus $125 webcam, and the $5 webcam works better for this… go figure). (I didn’t mention exactly which cameras because I don’t want to take all the challenge out of this.)
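Our code is Java, but for anyone curious about the general idea of pushing results from a coprocessor to the cRIO, a rough Python sketch looks like the one below. The packet format, port, and IP are placeholders, not our actual protocol:

```python
import socket
import time

# Placeholder address: the cRIO normally sits at 10.TE.AM.2; the port here is arbitrary.
CRIO_ADDR = ("10.0.11.2", 1180)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_target(distance_in, angle_deg):
    """Send one vision result to the cRIO as a small comma-separated UDP packet."""
    msg = f"{time.time():.3f},{distance_in:.1f},{angle_deg:.2f}"
    sock.sendto(msg.encode("ascii"), CRIO_ADDR)

# e.g. a target measured at 142 inches, 3.5 degrees right of center
send_target(142.0, 3.5)
```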

One of the original concerns that prompted this design, which has now spanned 2 years of competition (we actually thought about it the year before but didn’t have any weight to spare for it, though our soon-to-be programming captain ran some very impressive tests), was the bandwidth needed to send video to the driver’s station. We had a great deal of trouble locating working, clear samples of Java code for the cRIO that could process video, so this seemed like an idea worth testing (mind you, I know the cRIO can do this; we just couldn’t get the samples to work or to function in a way we preferred).

Though we didn’t use it, OpenCV is an extremely capable, professional vision library you can call from many languages. Our students actually talked to Video4Linux (V4L) directly, which OpenCV uses as well (though it can use other solutions to get its video sources).
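For anyone who does want to go the OpenCV route, a minimal Python sketch that grabs frames from a USB camera through V4L and thresholds for a bright green target is only a few lines. The camera index and HSV range below are placeholders that would need tuning:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # first V4L device, e.g. /dev/video0
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Rough green range for an LED-lit retroreflective target (needs tuning).
    mask = cv2.inRange(hsv, np.array([50, 100, 100]), np.array([90, 255, 255]))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        print(f"target at ({x + w // 2}, {y + h // 2}), {w}x{h} px")
```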

Our team uses a lot of Linux. The programmers who worked on this part were quite comfortable with it, and to my knowledge no mentor provided technical support because they didn’t need any. The netbook came with Windows 7, and we removed it. I’m quite sure from my own professional work that you could use Windows, Linux, BSD, or Mac OS X and get workable results even with a single-core Atom CPU (we originally tested with a Dell Mini 9, which is precisely that; at the time it was running Ubuntu 9). My advice (take it or leave it): try not to think you need to process every frame and every pixel of every frame.

Though we used Java (more precisely, OpenJDK), I personally tested Pygame and it worked just fine stand-alone.
If someone else is interested in trying it this shows you most everything you need to know:
http://www.pygame.org/docs/tut/camera/CameraIntro.html
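For reference, a minimal capture loop along the lines of that tutorial looks roughly like this (the device path and resolution are just examples):

```python
import pygame
import pygame.camera

pygame.camera.init()
print(pygame.camera.list_cameras())            # e.g. ['/dev/video0']

cam = pygame.camera.Camera("/dev/video0", (320, 240))
cam.start()

for _ in range(30):                            # grab a short burst of frames
    img = cam.get_image()                      # returns a pygame Surface
    r, g, b, _ = img.get_at((160, 120))        # sample the center pixel
    print("center pixel:", r, g, b)

cam.stop()
```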

I had that interfaced with an NXT controller for an experiment, and that was also controlled with Python code.

I’m confused by this (not to appear too argumentative).

A few people warned us this year about the netbook we used, but with proper mounting there are plenty of examples of our robot smashing over the bump in the center of the field at full throttle. We did that in practice on our own field and on the real field literally well over 150 times with no issues. Of course, we did have an SSD in it.

Also, doesn’t FIRST allow you to use other laptops for the driver’s station, and doesn’t that create, to some extent, the same support issue? I grant you the DS is basically Windows software, so that does somewhat reduce the variability. However, there’s nothing at all stopping FIRST from producing a Linux distro all their own. This would give them control over the boot times, the drivers, the interfaces, and the protocol stacks. It’s really much the same problem FIRST faces if they put DD-WRT or OpenWRT on the robot APs. I assure everyone that a laptop for processing video on the robot, or even entirely in lieu of the cRIO (with a replacement for the Digital Sidecar), can be done, and I have no problem proving it.