techhelpbb (Mentor, FRC #0011 - MORT, Team 11)
16-08-2013, 09:51
Re: NI Week Athena Announcement and Q&A Panel

Quote:
Originally Posted by jhersh
It may be something that an advanced team would want to pay attention to if they are trying to push the controller to its limits. However, given that a single core of the roboRIO is approximately 5x faster than the cRIO at basic tasks, a little inefficiency will likely go unnoticed for most teams.
I am a bit confused about this in relation to what was implemented in the roboRIO.

I work in highly parallel environments daily (8,000+ Linux servers globally running extremely high-speed tasks; I will avoid the term 'real time' here because it is often misunderstood as a metric).

I can see how LabVIEW's abstraction could make the dual cores less apparent to the end user. Unless there's a way for the students to bind their process to a particular core, I don't see any way they can easily deadlock themselves.

However, not all teams work in LabVIEW. If a team is using C or Java, can they create code that targets a specific processor? If so, they can create deadlocks, because they could spawn a 'serial process' split between the two cores and get stuck waiting between them.
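For concreteness, here is a minimal sketch of what that kind of core binding looks like in C on Linux. This is the generic glibc/pthreads affinity mechanism, not anything confirmed to be exposed by WPILib or the roboRIO image; whether students get access to it is exactly my question.

Code:
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to one core. Generic Linux, not a
 * roboRIO/WPILib API; whether student code can do this on the
 * shipped image is the open question above. */
static int pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *worker(void *arg)
{
    int core = *(int *)arg;
    if (pin_to_core(core) != 0)
        perror("pthread_setaffinity_np");
    /* If this thread now blocks waiting on a thread pinned to
     * the other core, and that thread is waiting on this one,
     * the pair deadlocks exactly as described above. */
    printf("worker bound to core %d\n", core);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    int c0 = 0, c1 = 1;
    pthread_create(&t0, NULL, worker, &c0);
    pthread_create(&t1, NULL, worker, &c1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}

(Compile with gcc -pthread. From Java the same binding would need JNI, or setting the whole process's affinity externally with something like taskset.)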

In any multiprocessor environment where the user can direct resources, this sort of complication can arise. Automated resource allocation can either fix this or can itself magnify the issue.

The control system we proposed to FIRST, for example, had Parallax Propellers (along with Atmel and ARM parts), and those chips have 8 'cogs' (cores). In that environment a student might create a set of interrelated tasks that must operate in a serial fashion, but because of the architecture they would be aware from the beginning that they had divided up the process. So, for example: if process B stalls in cog 2, perhaps they should debug process A in cog 1.

The design goal with the multiple processors in the proposed environment was not to centrally distribute the resources at execution time. It was to provide finite, deterministic resources as part of the initial design process, so that the result had direct predictability. Anything that could not be performed within that timing constraint could then get added processing or be delegated to external circuitry (an FPGA, currently a Xilinx Spartan 3, plus conditioned I/O). Added processing was a low-cost barrier: because of the way the entire system was architected, hundreds of processors from any vendors could operate cooperatively until the robot's power supply limits became an issue. (Yes, each processor does inherit additional processing cost as a result of this expansion capability, but it is a small cost considering the value of the capability.)

For those who don't understand the techno-speak about what we proposed:
You could decide to use 4 cogs in a single controller such that each cog controls a single wheel of the drive system.
You would know instantly which cog was tasked with which job and what to expect from it.
You could issue orders between these cogs, something like this (see the sketch after this list):
Cog_RightFront - Move forward at 10 RPM
Cog_LeftRear - Turn swerve 15 degrees
(I am not kidding about this sort of programming; I actually made it work. Robot Control Language (RCL) from General Robotics and Lego Logo are examples from previous decades of the language structure. The change here is the way it maps directly to physical processing. The cogs are each 20 MIPS; they have plenty of time to parse relatively natural language if that is what is really needed, and that can be further tokenized for a performance boost.)
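To make the one-cog-per-wheel idea concrete without Propeller hardware, here is a rough POSIX-threads analogy in C. This is not Propeller code and none of these names (WheelCog, order, and so on) come from the actual proposal; each "cog" is modeled as a thread with a one-slot command mailbox, so you always know which unit owns which wheel.

Code:
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* One "cog" per wheel: a dedicated worker with a one-slot
 * command mailbox. Illustrative analogy only. */
typedef struct {
    const char *name;        /* e.g. "Cog_RightFront" */
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    char cmd[64];            /* pending order, empty = none */
    int  shutdown;
} WheelCog;

static void *wheel_cog(void *arg)
{
    WheelCog *c = arg;
    for (;;) {
        pthread_mutex_lock(&c->lock);
        while (c->cmd[0] == '\0' && !c->shutdown)
            pthread_cond_wait(&c->ready, &c->lock);
        if (c->shutdown) { pthread_mutex_unlock(&c->lock); break; }
        /* "Parse" the relatively natural-language order here. */
        printf("%s executing: %s\n", c->name, c->cmd);
        c->cmd[0] = '\0';
        pthread_mutex_unlock(&c->lock);
    }
    return NULL;
}

static void order(WheelCog *c, const char *text)
{
    pthread_mutex_lock(&c->lock);
    strncpy(c->cmd, text, sizeof(c->cmd) - 1);
    pthread_cond_signal(&c->ready);
    pthread_mutex_unlock(&c->lock);
}

int main(void)
{
    WheelCog rf = { "Cog_RightFront", PTHREAD_MUTEX_INITIALIZER,
                    PTHREAD_COND_INITIALIZER, "", 0 };
    WheelCog lr = { "Cog_LeftRear",   PTHREAD_MUTEX_INITIALIZER,
                    PTHREAD_COND_INITIALIZER, "", 0 };
    pthread_t t1, t2;
    pthread_create(&t1, NULL, wheel_cog, &rf);
    pthread_create(&t2, NULL, wheel_cog, &lr);

    order(&rf, "Move forward at 10 RPM");
    order(&lr, "Turn swerve 15 degrees");

    /* Shut the cogs down cleanly. */
    pthread_mutex_lock(&rf.lock); rf.shutdown = 1;
    pthread_cond_signal(&rf.ready); pthread_mutex_unlock(&rf.lock);
    pthread_mutex_lock(&lr.lock); lr.shutdown = 1;
    pthread_cond_signal(&lr.ready); pthread_mutex_unlock(&lr.lock);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}

The point of the real design, of course, is that each cog is a physical core with fixed, predictable timing, which a time-sliced thread on a shared core is not.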

Obviously in Linux you can spawn processes or threads; this is something I exploit daily. What level of granularity is being exposed to students? Further, what tools are being offered to students to debug these circumstances if they are in fact able to control the distribution of resources? On Linux systems I have software I wrote that can monitor every process and its children and either report over secure channels to a central management infrastructure or perform forced local management of anything that goes bonkers. In this case the 'central management' is really the field and the DS.
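As a sketch of the kind of supervision I mean (not my actual tooling; the sleeping child task and the 2-second budget are invented for illustration), here is a parent process monitoring a spawned child and forcibly cleaning it up when it overruns:

Code:
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        /* Child: stand-in for a task that might go bonkers. */
        execlp("sleep", "sleep", "10", (char *)NULL);
        _exit(127);                 /* exec failed */
    }

    /* Parent: poll the child; kill it if it blows its budget. */
    for (int elapsed = 0; elapsed < 2; elapsed++) {
        int status;
        if (waitpid(child, &status, WNOHANG) == child) {
            printf("child exited with status %d\n", status);
            return 0;
        }
        sleep(1);
    }
    fprintf(stderr, "child overran its budget, killing it\n");
    kill(child, SIGKILL);
    waitpid(child, NULL, 0);        /* reap the zombie */
    return 0;
}

On the robot, the analogous supervisor would be whatever reports to, and takes orders from, the field and the DS.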
