Posted 15-07-2012 by Todd (Software Engineer; Mentor, FRC #1071, Team Max)
Re: Lessons to be learned from Einstein reports

Quote:
Originally Posted by shawnz
The LabVIEW default code actually sends driver station updates 1/25th as often as it receives instruction packets, AFAIK. We've modified ours to have this factor be 1/5th instead, but haven't actually tested the change on a real field yet.
Looking at it again, you're right, my mistake.

Quote:
Originally Posted by shawnz
Either way, networking technology is pretty good. We should be able to stream 1080p video over the air easily in this day and age. This kind of data should be inconsequential.
Keep in mind that 1080p video from most network services is heavily compressed (YouTube, for instance, streams 1080p at only ~4-6 Mbps depending on the codec). If some team were to rip open the data structure of the images being processed on the cRIO and transfer them raw and uncompressed at 640x480, 30 fps, that single video stream alone would actually use almost 27 MB/s (over 220 Mbps). Their router might technically handle that, but it would (hopefully) get them an FTA's attention pretty quickly.
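For anyone who wants to check that arithmetic, here's the back-of-the-envelope version (in Python just for illustration; I'm assuming 24-bit RGB frames, which may not match the cRIO's native image format):

Code:
# Bandwidth of a raw, uncompressed 640x480 video stream.
# Assumption: 24-bit RGB, i.e. 3 bytes per pixel.
WIDTH, HEIGHT = 640, 480
BYTES_PER_PIXEL = 3
FPS = 30

bytes_per_sec = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
print("%.1f MB/s" % (bytes_per_sec / 1e6))      # ~27.6 MB/s
print("%.1f Mbps" % (bytes_per_sec * 8 / 1e6))  # ~221 Mbps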

To my understanding (taken from a post of Greg McKaskle's), the field network utilization during the Einstein matches was recorded at roughly 25 Mbit/s, or about 3.12 MB/s, which is already comparable to a high-bitrate 1080p video stream (Blu-ray video runs around 25-40 Mbps). That's not to say it's too much for the Cisco AP to handle, because it isn't, but with network demands growing Moore's-law style year over year, I think it's just something we need to keep in mind.

FIRST's report recommends adding QoS and bandwidth limiting to the FMS router configuration, which should head off most of the problems this could cause.
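I don't know what mechanism FIRST actually intends to use, but bandwidth limiting on routers usually boils down to something like a token bucket. Here's a toy sketch of the idea, purely illustrative (all names and numbers below are mine, not anything from the report):

Code:
import time

class TokenBucket:
    """Toy token-bucket limiter: a stream earns 'rate' bytes of
    allowance per second, up to a burst cap, and packets are only
    forwarded while allowance remains."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # within budget: forward the packet
        return False      # over budget: drop or queue it

# e.g. cap one robot's traffic at 7 Mbps with a 64 KB burst (made-up numbers)
limiter = TokenBucket(rate_bytes_per_sec=7e6 / 8, burst_bytes=64 * 1024)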

Quote:
Originally Posted by shawnz
It doesn't actually wait any minimum amount of time, but if other things are scheduled, it will go process them. This should be just as good IMO -- in this case, if you're hitting 100% CPU but still executing all of your code regularly, all that means is that you're being 100% efficient with the hardware.
This is very true on many architectures, including (I believe) ours. However, I'd argue that losing the ability to quickly spot an overrunning process by checking CPU utilization is a heavy price to pay just to avoid fixing the minimum timestep of a secondary thread that very likely isn't doing anything demanding all that CPU attention anyway.
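To show what I mean in text form (the LabVIEW equivalent is a Timed Loop; the function name and period below are placeholders): give the secondary loop a fixed timestep so it sleeps out the unused remainder of each period. Then sustained 100% CPU goes back to meaning "something is overrunning" instead of "the scheduler is busy-spinning".

Code:
import time

PERIOD = 0.02  # 20 ms timestep -- arbitrary; pick what the task really needs

def do_control_work():
    pass  # placeholder for whatever the secondary thread actually does

def secondary_task():
    next_tick = time.monotonic()
    while True:
        do_control_work()
        next_tick += PERIOD
        # Sleep out the remainder of the period instead of spinning.
        # If the work overruns, the sleep time drops to zero and CPU
        # utilization climbs -- which is exactly the signal we want back.
        time.sleep(max(0.0, next_tick - time.monotonic()))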

Quote:
Originally Posted by shawnz
In fact I think this is one area where the default template could be improved -- it could be friendlier to the idea of offloading whatever you can into other threads. Maybe this will be coming next year, along with the "more thorough documentation regarding threading".
Lately I've been encouraging teams to put control and I/O logic in separate tasks (structured like the 'Timed Tasks' in our LabVIEW implementation) and to have the packet-triggered teleop code do nothing but process the command packets it receives. I think we could all benefit from more thorough documentation, though.
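Roughly what that structure looks like, sketched in Python rather than LabVIEW (every name here is made up for illustration; the real thing would be a Timed Task plus the packet-triggered teleop VI):

Code:
import queue
import threading
import time

commands = queue.Queue()  # hand-off from teleop code to the control task

def parse_commands(packet):
    return packet  # stand-in: real code would decode the DS packet here

def apply_outputs(cmd):
    pass           # stand-in: real code would drive motors/solenoids here

def on_teleop_packet(packet):
    """Packet-triggered teleop code: parse and enqueue, nothing else."""
    commands.put(parse_commands(packet))

def control_task(period=0.02):
    """Separate timed task that owns all control and I/O logic."""
    latest = None
    while True:
        while not commands.empty():
            latest = commands.get_nowait()  # drain down to the newest commands
        if latest is not None:
            apply_outputs(latest)
        time.sleep(period)                  # fixed timestep, as argued above

threading.Thread(target=control_task, daemon=True).start()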