#1
Re: 7mb/s, Will it be an Issue?
The same as last year. Last year was the first year of an explicit bandwidth limit.
#2
Re: 7mb/s, Will it be an Issue?
Hey FIRST... Gigabit Technology... Just Sayin.
#3
Re: 7mb/s, Will it be an Issue?
I did a bandwidth test and found that when streaming from two cameras at 640x480 with 38% compression, about 600 KB/s was flowing through the network, which equals 4.8 Mbps. That means the 7 Mbps limit might be a problem if you are using two cameras. We used an M1013 and an M1011 for this test.
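For anyone checking the arithmetic, here is the conversion used above as a minimal sketch (1 KB is taken as 1000 bytes; with 1024-byte kilobytes the figure comes out closer to 4.9 Mbps):

```python
# Convert camera throughput from kilobytes/second to megabits/second.
def kbytes_to_mbps(kb_per_s):
    return kb_per_s * 8 / 1000  # 8 bits per byte, 1000 kilobits per megabit

total = kbytes_to_mbps(600)     # combined rate of both camera streams
print(total)                    # 4.8
```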
#4
Re: 7mb/s, Will it be an Issue?
My point is not that you are wrong, but you didn't include any qualifiers. If you limit your comments to only one configuration, image processing on the DS, then your comments are fair. That said, you may want to consider alternatives.

My example is going to be my own team. Two cameras this year. One USB, processed on-board with a PCDuino. (BTW, the total cost for this setup is almost half the cost of an Axis camera alone.) The other is an Axis 603, again processed on-board by the cRIO, but for only one or two 320x240 frames at 30% compression, then turned off. Essentially 0 Mb/s of bandwidth used with two cameras. There are ways to get around the 7 Mb/s bandwidth limit without disrupting the WiFi network.
#5
Re: 7mb/s, Will it be an Issue?
Mr. Bill, I was pointing out a typical situation, where a robot will have one or two cameras, with the feed going directly to the DS, and no processing happening. Certainly, in your situation the bandwidth would be low, but not every team processes images onboard. We won't be, because it is now too late to order an onboard PC like the PandaBoard. BTW, how long does the PandaBoard typically take to ship?
#6
Re: 7mb/s, Will it be an Issue?
Basing your comments on the configuration you just spelled out makes your comments 100% legit. Please understand, no offence was intended.

We are using a PCDuino sourced from SparkFun. Delivery is usually within 5 days. Based on the withholding rules, I believe you should be able to order one now, develop your code off the robot, and bring it to the first practice day of your first competition and install it there. That would buy you at least an extra week, at minimum. Please check me on the rules here; I don't want to mislead anyone.
#7
Re: 7mb/s, Will it be an Issue?
The issue is 802.11 WiFi; gigabit would not change anything there. They are already running 802.11n and cannot use more than one channel due to the number of fields running at the Championship.

If you honestly need more than 7 Mb/s, you are doing something wrong.
#8
I think it's time for the umbilicals again, this time for LAN.
#9
Re: 7mb/s, Will it be an Issue?
Or the overhead mesh used to power bumper cars! (Transmitting data would be tricky, though.)

We have used our camera at 360p, 30% compression, and 10 fps, and had nothing even close to an issue, as well as nice response time (low "lag"). The camera helped our drivers line up with the goal, which glowed thanks to our bright orange LEDs, and also fed our on-board processing. For most vision processing there is really not much need for high resolution, as it increases the computation time per frame, draining resources all around. This year we are outsourcing our vision to Raspberry Pis, which capture and process the camera data themselves, and due to the nature of this game a lot of things can likely be done better by eye.
#10
Re: 7mb/s, Will it be an Issue?
On the field in 2013 I witnessed the FMS peak at ~2 Mb/s by itself for some reason, even though the FMS whitepaper calls out ~1 Mb/s. This is included in the 7 Mb/s allocation. That meant we basically had 5 Mb/s for everything else, and even approaching 3 Mb/s with our own data would cause latency issues for all robots on the field.

We only focused on FMS bandwidth last year because we really wanted 640x480 at 10 Hz with 30% compression and wanted a baseline bandwidth to figure out the boundary conditions. I was trying to stitch telemetry data together with image data. Wasn't worth it, to be honest - we wound up having to go down to 6 Hz before the FTAs stopped complaining about latency. This year we're going to try 320x240 at 20 Hz with 30% compression. Hopefully our whiz-kid network programming student is making the net code efficient enough for the telemetry data to keep up.
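To see how those settings trade off against the ~5 Mb/s of headroom left after FMS overhead, here is a rough back-of-the-envelope sketch; the 0.25 bytes-per-pixel figure for an MJPEG stream at ~30% compression is an assumption, not a measured Axis value:

```python
# Rough MJPEG bandwidth estimate from resolution, frame rate, and an
# assumed average compressed size of 0.25 bytes per pixel.
def mjpeg_mbps(width, height, fps, bytes_per_pixel=0.25):
    bytes_per_frame = width * height * bytes_per_pixel
    return bytes_per_frame * fps * 8 / 1_000_000

FMS_OVERHEAD = 2.0   # observed FMS peak (Mb/s)
LIMIT = 7.0          # FRC bandwidth cap (Mb/s)

for w, h, fps in [(640, 480, 10), (320, 240, 20)]:
    used = mjpeg_mbps(w, h, fps)
    print(f"{w}x{h}@{fps}Hz: ~{used:.1f} Mb/s of "
          f"~{LIMIT - FMS_OVERHEAD:.1f} Mb/s headroom")
```

Under these assumptions the 640x480 at 10 Hz stream eats roughly all of the headroom (consistent with the latency complaints), while 320x240 at 20 Hz leaves a few Mb/s for telemetry.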
#11
Re: 7mb/s, Will it be an Issue?
Also, how do you guys get unlimited speed on the robot? Doesn't the data go through the FMS switches anyway, so isn't it limited to 7 Mbps? I think one problem might be that the Tx power is a bit low. On my DD-WRT router, I can set the power, and at full power I get up to a quarter-mile range.

I think the only way to reduce your bandwidth is to have the processing done onboard and only allow the basic FMS communications to run on the network. If I wrote an image processing script onboard with the ability to transmit images back, I'd probably write a bwm-ng script so I could monitor the bandwidth and cap the stream at a constant data rate, e.g. 1 Mbps. Another approach would be to skip streaming entirely and make the robot fully autonomous.
#12
Re: 7mb/s, Will it be an Issue?
If it weren't going to be an issue for somebody, FRC wouldn't have made a rule. If you run your network lean, or even fairly wide but smart, you shouldn't have any problem working with it. Rebound Rumble was our rookie year, and we couldn't effectively use the camera, and that was before explicit limits. (I was just a parent, not a mentor, that year, so I don't know the details.)

For Ultimate Ascent, Matt programmed a Raspberry Pi to analyze the image and just return the MBR (minimum bounding rectangle) of the reflector areas it saw. I doubt we would have had a problem at 100 kbps, and everything else worked better as well. Oh, yeah, Gixxy had to put the comms with the Raspberry Pi on a separate thread so it wouldn't lock up the main thread when the Pi went down due to brownouts.

I understand this year we're planning a camera feed for the driver, and that it will not only be compressed, but run at a lower network priority so that it doesn't kill the essential functions. The bottom line is: figure out whether it will be an issue for YOU and work around it if needed.
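That separate-thread trick can be sketched like this; the port and the one-line "x,y,w,h" reply format are illustrative assumptions, not the team's actual code:

```python
# Poll a vision coprocessor on a daemon thread so that a browned-out
# Pi (dropped connection) can never stall the robot's main loop.
import socket
import threading
import time

class PiLink:
    def __init__(self, host, port):
        self.host, self.port = host, port
        self._latest = None              # last rectangle received, or None
        self._lock = threading.Lock()
        threading.Thread(target=self._poll, daemon=True).start()

    def _poll(self):
        while True:
            try:
                with socket.create_connection((self.host, self.port),
                                              timeout=1.0) as conn:
                    for line in conn.makefile():
                        rect = tuple(int(v) for v in line.strip().split(","))
                        with self._lock:
                            self._latest = rect
            except OSError:
                time.sleep(0.5)          # Pi is down; retry shortly

    def latest(self):
        """Called from the main loop; never blocks on the network."""
        with self._lock:
            return self._latest
```

The main loop just calls `latest()` each iteration and gets either the newest rectangle or `None`; a dead Pi only ever costs the background thread its time.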
#14
Re: 7mb/s, Will it be an Issue?
I am writing this on behalf of the folks at Team 900, though they should feel free to chime in. We have experienced issues with the Axis cameras and streaming back to the driver station even when settings were lowered and compression was turned on.

From our observations, there is a fair amount of inconsistency among regional events in how the WiFi is configured (limits in place, bonding, etc.). Our 'expert' (and I use the term loosely) advice is to limit your streaming to an absolute minimum and instead grab single frames where appropriate. We did this this past year and it worked great. No bandwidth issues.

If you are using an on-board system (Jetson/Raspberry Pi/BeagleBoard/etc.), then you are probably fine, but keep in mind that it is possible to overwhelm the FMS traffic to the RoboRIO by sending too much data to it from the on-board system. Again, keep your transmission of data to a minimum. I.e., don't stream footage from the RoboRIO to the Raspberry Pi just to process it; keep your camera plugged into the Pi, process it locally, then send the data you need back to the RoboRIO. Design for minimum bandwidth and you don't have to worry about these limits.
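The "send data, not pictures" advice is easy to quantify. A sketch; the ~15 KB figure for a compressed 320x240 JPEG frame is an assumption, and packing four 16-bit ints is just an illustrative wire format:

```python
# Compare shipping a whole compressed frame versus just the processed
# result (target x, y, width, height packed as four 16-bit ints).
import struct

JPEG_FRAME_BYTES = 15_000                    # assumed compressed 320x240 frame

detection = struct.pack("<4h", 160, 120, 40, 25)
print(len(detection))                        # 8 bytes per detection
print(JPEG_FRAME_BYTES // len(detection))    # ~1875x less data on the wire
```

Even sending a detection every loop iteration, the processed-result path uses a vanishingly small slice of the 7 Mb/s budget.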
#15
Re: 7mb/s, Will it be an Issue?
A common use-case for an onboard camera is automated alignment to a vision target. In this case the target is not moving, and (in theory*) you only need a single image to compute the transform between your robot and the target. Once you have computed this transform, you can use other sensors (e.g. a gyro) to close the loop. Camera-in-the-loop control schemes suffer from latency, low (control) bandwidth, and timing jitter, whereas using the camera to derive a setpoint for faster sensors sidesteps these problems.

* In practice, multiple sources of error (intrinsic and extrinsic calibration of the camera, detection errors, robot odometry errors) mean that you probably want to take a sequence of frames as you turn/move towards the target and iteratively refine your estimate of where the target is.
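A minimal sketch of that "camera derives the setpoint, gyro closes the loop" idea; the proportional gain and all numbers are illustrative assumptions:

```python
# One vision frame yields the target's angular offset; convert that to
# an absolute heading setpoint, then let the gyro loop do the turning.
def heading_setpoint(current_heading_deg, target_offset_deg):
    return current_heading_deg + target_offset_deg

def gyro_p_turn(setpoint_deg, heading_deg, kp=0.02):
    """Proportional turn command from gyro feedback, clamped to [-1, 1]."""
    error = setpoint_deg - heading_deg
    return max(-1.0, min(1.0, kp * error))

# Camera sees the target 12.5 degrees to the right while heading 30.0:
sp = heading_setpoint(30.0, 12.5)   # 42.5 degree absolute setpoint
cmd = gyro_p_turn(sp, 30.0)         # initial turn command
```

After the single frame is processed, only the gyro is in the loop, so the latency and jitter of the camera stream no longer limit the controller.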