Quote:
|
1. Why was there a significant number of failures for just the red alliance teams?
|
Let's look for a mechanism by which the alliance color could influence the failure.
Is it actually the color of the bumpers? Will putting red bumpers on a robot make it fail, or will blue bumpers fix it? Of course not.
What about the wifi? It turns out that all traffic is sent out on the same wifi frequency and channel. Red and Blue robot traffic is literally chopped into small packets and transmitted one after the other in a big data stream. RF spectral noise cannot bias red or blue or any particular robot. Additional APs using the same channel will not bias based on color or robot.
The data is sent out over the same antennae. The AP has six antennae, but three are tuned for 2.4 GHz and three for 5 GHz. The three antennae for each band are used in MIMO fashion to modulate the bits of each packet. No bias there. The packets are the same size; the only difference is that the control packets carry either an ASCII R or B within an inner field. B is 0x42, R is 0x52. The adjacent field has an ASCII 1, 2, or 3 for the alliance station, by the way. Team ID is used as the SSID name. This resolves to a unique BSSID, which is one of the six MAC addresses of the router. The MAC is in the frame header, and nothing about team number, color, or anything else is. No opportunity for bias that I see.
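To make the point concrete, here is a minimal sketch in Python of how little color information the packets actually carry. The field offset is made up; the only facts from the report are the ASCII 'R'/'B' bytes and the adjacent station digit:

    # Hypothetical control-packet fragment. The real field offsets are not
    # public; the report only says an inner field holds ASCII 'R' (0x52) or
    # 'B' (0x42), with the alliance station '1'..'3' adjacent to it.
    def alliance_fields(packet, offset):
        color = chr(packet[offset])             # 'R' == 0x52, 'B' == 0x42
        station = int(chr(packet[offset + 1]))  # '1', '2', or '3'
        return color, station

    # One byte of color, one byte of station -- nothing the RF layer,
    # antennae, or switches ever interpret.
    print(alliance_fields(b"\x00\x01\x02\x03R2", 4))  # ('R', 2)

A single payload byte buried inside an otherwise identical packet gives the radio hardware nothing to discriminate on.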
That leaves us with the physical cables that deliver the data from the DS to the wifi AP and the switches that merge and route them. I don't have an explanation for how those would bias by color either.
Finally, the report gives details as to root cause. I don't see failures following colors or stations. I mostly see things following robots.
Please explain the mechanism or let this one die. It is a small sample size and robots in elims do not swap bumpers.
Quote:
|
2. Special interest in how much vision tracking contributes to network traffic, as we (beta team)... were concerned about overwhelming traffic from this for teams that wanted to process vision via driver station.
|
Vision processing that stays on the robot has no impact. Vision streams that are sent to the dashboard via TCP can be estimated, and the utilization of the particular teams on Einstein was measured and compared to the estimate. A color 640x480 image stream has 921,600 bytes of pixel data per image. With compression, it is likely 30 kB per image frame. If the team sets the framerate to the highest setting of 30 fps, that gives less than a MB per second, or about 10 Mbits. For six robots that is a respectable 60 Mbits for video alone. The rest of the field data takes about 6.5 Mbits. True, some robots could have multiple cameras, and they could set compression such that images were three or four times larger, but most teams will not use 30 fps, 640x480, and low compression all at once. The Einstein fields were usually measured to use 25 Mbits. I wouldn't want to use 802.11g networks, and 802.11n needs a clean channel to achieve lots of throughput, but the channel bandwidth is not taxed by vision cameras ... yet. MPEG-4 compression would work fine if both camera and laptop have codecs. The cRIO doesn't.
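For anyone who wants to check the arithmetic, here is the same estimate as a few lines of Python. The 30 kB compressed frame size is the assumption above, not a measured value:

    # Back-of-the-envelope bandwidth estimate from the figures above.
    raw_bytes   = 640 * 480 * 3            # 921,600 bytes of raw pixel data
    frame_bytes = 30_000                   # ~30 kB per compressed frame (assumed)
    fps         = 30                       # highest dashboard framerate setting
    per_robot   = frame_bytes * fps * 8    # ~7.2 Mbit/s, under the 10 Mbit figure
    six_robots  = 6 * per_robot            # ~43 Mbit/s; ~60 with the rounded 10
    field_data  = 6.5e6                    # rest of the field traffic

    print(f"per robot:  {per_robot / 1e6:.1f} Mbit/s")
    print(f"full field: {(six_robots + field_data) / 1e6:.1f} Mbit/s")

Even with the rounded-up 10 Mbit per-robot figure, six video streams plus field data stay well inside what a clean 802.11n channel can carry.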
Quote:
|
I don't want to talk about the political stuff but just want to throw some general things out there... 1. Is there a scapegoat political agenda going on? 2. No one can keep a secret, as all will be revealed to those who want to know about it... it is just a matter of time. Why? Friends tell friends, and those friends tell friends... and well, you get the picture... just like FB itself.
|
1. How about no. FIRST staff, experts, and the 12 Einstein teams in attendance were each asked if they approved of the report findings. All agreed. Why would they all have the same agenda?
2. Well, if it is true for Facebook, ...
Greg McKaskle