#1
Re: NI Week Athena Announcement and Q&A Panel
I followed some of the links in this thread about NI Linux Real-Time and LabVIEW. What changes to our LabVIEW programming will we have to be concerned with? I see terms like mutex, blocking vs. non-blocking, schedulers, and other things that relate to running on a dual core. Do we have to deal with these issues ourselves, or will LabVIEW take care of them? With the current single-core cRIO, our programming mentor teaches the new programmers LabVIEW basics and the importance of real time, then follows up with some lessons on state machines. After that the kids are let loose to get hands-on experience. If we have to deal with the complexities of multiple cores, this is going to require a lot more formal instruction on our mentor's part, and a serious load for a first-time young programmer.
#2
Re: NI Week Athena Announcement and Q&A Panel
Quote:
#3
Re: NI Week Athena Announcement and Q&A Panel
Quote:
It’s also important to note that performance degradation can occur in both time critical and system tasks on multicore systems running NI Linux Real-Time if serially dependent tasks are allowed to run in parallel across processor cores. This is because of the inefficiency introduced in communicating information between the serially dependent tasks running simultaneously on different processor cores. To avoid any such performance degradation, follow the LabVIEW Real-Time programming best practice of segregating time-critical code and system tasks to different processor cores. You can accomplish this by setting a processor core to only handle time-critical functions, and specify the processor core to be used by any Timed Loop or Timed Sequence structure as illustrated in Figure 4. You can learn more about the best practices in LabVIEW Real-Time for optimizing on multicore systems at Configuring Settings of a Timed Structure.

Is the above something that teams need to be aware of and take into account in their programming efforts?
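For readers who don't use LabVIEW, the core-segregation idea in that quote can be pictured in plain Linux terms. The following is a minimal C++ sketch, an analogy rather than the LabVIEW Timed Loop mechanism itself, that pins a hypothetical time-critical thread to core 1 so the other core is left for system tasks:

```cpp
// Minimal sketch (assumed names), not the LabVIEW Timed Loop API:
// pin a time-critical thread to core 1 so core 0 is left for system tasks.
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

void* time_critical_loop(void*) {
    // deterministic control code would run here
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, time_critical_loop, nullptr);

    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);                            // reserve core 1 for this loop
    if (pthread_setaffinity_np(t, sizeof(set), &set) != 0)
        std::perror("pthread_setaffinity_np");

    pthread_join(t, nullptr);
    return 0;
}
```

In LabVIEW Real-Time, roughly the same effect is achieved declaratively through the Timed Loop's processor assignment, as the quoted best practice describes.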
#4
Re: NI Week Athena Announcement and Q&A Panel
Quote:
#5
Re: NI Week Athena Announcement and Q&A Panel
Quote:
I work in highly parallel environments daily (8,000+ Linux servers globally running extremely high-speed tasks; I'll avoid the term "real time" here since it is often misunderstood as a metric). I can see how the abstraction of LabVIEW could make the dual cores less apparent to the end user. Unless there's a way for the students to bind their process to a particular processor, I don't see any way they can easily deadlock themselves. However, not all teams work in LabVIEW. If a team is using C or Java, can they create code that targets a specific processor? If so, they can create deadlocks, because they could spawn a 'serial process' split between the two processors and get stuck between them. In any kind of multiprocessor environment where the user can direct resources, this sort of complication can arise. Automated resource allocation can either fix this or itself magnify the issue.

The control system we proposed to FIRST, for example, had Parallax Propellers (along with Atmel and ARM), and those chips have 8 'cogs' (cores). In that environment a student might create a set of interrelated tasks that must operate in a serial fashion, but because of the architecture they would be aware from the beginning that they've divided up the process. So, for example: if process B stalls in cog 2, perhaps they should debug process A in cog 1.

The design goal with the multiple processors in the proposed environment was not to centrally distribute the resources at execution time. It was to provide finite, deterministic resources as part of the initial design process so that the result had direct predictability. Anything that could not be performed within that timing constraint could then have added processing or be delegated to external circuitry (an FPGA, currently a Xilinx Spartan-3, and conditioned I/O). Adding processing was a low barrier because of the way the entire system was architected: hundreds of processors from any vendor could operate cooperatively until the robot power supply limits became an issue (yes, each processor does inherit additional processing cost as a result of this expansion capability, but it is a small cost considering the value of the capability).

For those that don't understand the techno-speak about what we proposed: you could decide to use 4 cogs in a single controller such that each cog controls a single tire of the drive system. You would know instantly which cog was tasked with what job and what to expect from it. You could issue orders between these cogs something like this: Cog_RightFront, move forward at 10 RPM; Cog_LeftRear, turn swerve 15 degrees. (I am not kidding about this sort of programming at all; I actually made it work. Robot Control Language, RCL, from General Robotics and Lego Logo are examples from previous decades of the language structure. The change here is the way it maps to physical processing directly. The cogs are each 20 MIPS; they have plenty of time to parse relatively natural language if that is what is really needed, and that can be further tokenized for a performance boost.)

Obviously in Linux you can spawn processes or threads; this is something I exploit daily. What level of granularity is being exposed to students? Further, what tools are being offered to the students to debug these circumstances if they are in fact able to control the distribution of resources?

On Linux systems I have software I wrote that is able to monitor every process and its children and either report over secure channels to a central management infrastructure or perform forced local management of anything that goes bonkers. In this case the 'central management' is really the field and DS.
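A minimal C++ sketch of the deadlock scenario described above (the lock names are hypothetical, and no core pinning is actually required; two threads taking two locks in opposite orders are enough):

```cpp
// Two tasks meant to run as one serial job, split across threads: each takes
// one lock, then waits forever for the other's. Lock names are hypothetical.
#include <chrono>
#include <mutex>
#include <thread>

std::mutex sensor_lock, actuator_lock;

void task_a() {
    std::lock_guard<std::mutex> hold_sensor(sensor_lock);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::lock_guard<std::mutex> hold_actuator(actuator_lock);  // blocks forever
}

void task_b() {
    std::lock_guard<std::mutex> hold_actuator(actuator_lock);
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
    std::lock_guard<std::mutex> hold_sensor(sensor_lock);      // blocks forever
}

int main() {
    std::thread t1(task_a), t2(task_b);
    t1.join();   // never returns: the two threads are deadlocked
    t2.join();
}
```

Always acquiring the locks in the same order, or taking both at once with std::lock, removes the deadlock.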
#6
Re: NI Week Athena Announcement and Q&A Panel
Students programming in LabVIEW don't have to be exposed to the multicore details at all. If they choose to, they can use Timed Loops and specify core affinity for each loop.

In C++, students are exposed directly to the Linux process/thread controls you would expect from a typical system, with the addition of real-time scheduler priorities. As for Java, I'm not sure how it's exposed.
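As a rough illustration of those real-time scheduler priorities (an assumption about how a team might use them, not an official WPILib example), a C++ thread can be promoted to the Linux SCHED_FIFO policy like this:

```cpp
// Assumed usage, not an official WPILib example: raise a control thread to
// the Linux SCHED_FIFO real-time policy (the priority value is illustrative).
#include <pthread.h>
#include <sched.h>
#include <cstdio>

void* control_loop(void*) {
    // periodic control code would run here
    return nullptr;
}

int main() {
    pthread_t t;
    pthread_create(&t, nullptr, control_loop, nullptr);

    sched_param sp{};
    sp.sched_priority = 50;                      // SCHED_FIFO range is 1..99
    if (pthread_setschedparam(t, SCHED_FIFO, &sp) != 0)
        std::perror("pthread_setschedparam");    // usually needs root/CAP_SYS_NICE

    pthread_join(t, nullptr);
    return 0;
}
```

Ordinary SCHED_OTHER work will then not preempt the control loop while it runs.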
#7
Re: NI Week Athena Announcement and Q&A Panel
Quote:
It does mean a challenge for the students, but I have great confidence they'll grasp it. What tools are being planned to help them debug the sorts of issues that may arise?
#8
Re: NI Week Athena Announcement and Q&A Panel
Quote:
Standard open-source tools apply to C++ and Java.
#9
Re: NI Week Athena Announcement and Q&A Panel
I don't think I've ever done anything in FRC that I couldn't do on a Vex Cortex, aside from streaming video back to the driver station laptop. In fact, I've had Cortex code run as fast as 100 Hz without complaining about CPU loading, and under RobotC (a terrible bytecode language with all the inefficiencies of Java and none of the benefits, which doesn't even really support the C language at all) I was able to run all of my code in 2 ms (measuring the time it took the task to execute).

I did come up a bit short on I/O (granted, I used 5 LEDs with individual GPIO pins), but I managed to live with the 12 DIO, 8 analog, and 10 PWM. I think an extra two of each would be nice for FRC, but it's perfect for Vex robots. It's also got I2C and two UART ports.

I would agree that the peak performance of the roboRIO vs. the Cortex provides more cost efficiency. But for 99% of teams, the Cortex would be just fine (in fact, way better, because it's so easy to set up and doesn't require default code), so the doubled cost doesn't provide doubled benefit, or even any benefit to them. Then there are the 5% who insist vision processing is important (it has not been important to me yet), and the 1% who might utilize the full benefits of the roboRIO and implement a good onboard vision system without compromising their RT controls.

We're still not doing anything controls-wise that we couldn't have done on the 2008 IFI controller. We now use floating-point math, LabVIEW, and other useful code features to do it, but we haven't found a challenge which we simply couldn't do previously. Our development times have stayed about the same; our calibration efficiency is up a bit during online calibration-heavy activity but way, way down for short between-match changes. We've also spent a lot more money on system components (cRIOs, more cRIOs, tons of digital sidecars, analog modules, solenoid modules, radios, more radios, more radios, ...) than with that system.

In fact, I would argue that our code has gotten more complicated because of all the stuff we've had to do to get development and calibration times down. We wrote Beescript because it took too long to write new autonomous code and deploy it, especially in competition, and we would never have done so (and possibly would have had more flexible autonomous modes using real programming features like math and if statements) if the compile and download times were short, or if we could modify calibrations without rebuilding. We've thought a lot about implementing a calibration system that reads cal files and such, but we can't get the design to a point where we can retain the current online debugging, cal storage, and efficiency. And the more code we write, the longer the compile times get.

I know I can't get a system with the flexibility I want and expect while retaining all of the performance I expect, and it's incredibly frustrating to see systems in the real world operate on far lower resources, running far more application code (way faster), with good development and calibration tools that try hard to streamline and optimize the process and can do it efficiently with such little overhead, while we're throwing more CPU speed and libraries at the problem and still nowhere near the performance on all fronts: RT performance, boot times, development times and overhead, calibration efficiency, etc.
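The cal-file idea mentioned above could start as simply as the following C++ sketch (the file name, format, and key names are hypothetical): values are read at startup, so a tuning change between matches is a file edit rather than a rebuild.

```cpp
// Hypothetical cal file reader: "key=value" lines loaded at startup so
// numbers can change between matches without recompiling or redeploying.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, double> load_cal(const std::string& path) {
    std::unordered_map<std::string, double> cal;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;    // skip comments
        std::istringstream fields(line);
        std::string key;
        double value = 0.0;
        if (std::getline(fields, key, '=') && (fields >> value))
            cal[key] = value;
    }
    return cal;
}

int main() {
    auto cal = load_cal("robot.cal");                    // hypothetical file name
    double shooter_rpm = cal.count("shooter_rpm") ? cal["shooter_rpm"] : 3000.0;
    std::cout << "shooter_rpm = " << shooter_rpm << "\n";
}
```

The harder part the post alludes to, keeping online debugging and cal storage as convenient as the current tools, is not something a plain file reader like this addresses.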
#10
Re: NI Week Athena Announcement and Q&A Panel
Also, so far the GDC has given us one game EVER that actually needed machine vision for an optimal auto mode: 2007.

And even then, lots of teams had successful dead-reckoned keeper autos. Any time the target doesn't move after you've placed your bot, AND you can start your robot where you want, dead reckoning will work. If no interaction between red and blue robots is allowed, dead reckoning can't be defended against.

2003 was the start of auto; you needed to be first to the top of that ramp.
2004: the target didn't move, but auto could be defended by cross-field ramming.
2005: I didn't compete, and my memory is fuzzy, but it was the first year we had the CMUcams. It was awful, as the targets were passive and the arena lighting varied wildly.
2006: they switched to the green cold-cathode boxes, which were much more reliable to detect, but the target didn't move, so there was no need to use them.
2007: the rack moved after robots were placed, but typically didn't move a whole lot.
2008: the IR remote could be used to tell your robot where the balls were; most teams just dead reckoned.
2009: trying to dump in auto usually meant you got your own trailer beat up on by an HP.
2010-2013: no game pieces, robots, or targets are moved before auto, AND red/blue interaction during auto is against the rules.
#11
Re: NI Week Athena Announcement and Q&A Panel
Quote:
Also, many other teams have used vision as part of their main strategy. In 2006, WildStang had a nifty turret that was always pointed at the goal whenever it was in range, so they could put the balls in at any time. And 118 used a camera very well in 2012 with their shooter, because it let them shoot from anywhere near the key without having to line up. The point is, for some games and some teams, vision is a huge part of the game. I know teams used vision to line up for a full-court shot this year, and teams also used vision to line up with the legs of the pyramid.
#12
Re: NI Week Athena Announcement and Q&A Panel
Quote:
My 2 cents.
#13
Re: NI Week Athena Announcement and Q&A Panel
Quote:
Personally, I think that in the current FRC environment a better way to handle video recognition is on the robot, not at the driver's station, and for this purpose I feel that an auxiliary device to process it is the more sensible choice. It hardly makes sense to try to find something faster than a general-purpose COTS PC for the price. The market for that general-purpose PC is huge compared to FIRST, so of course it will offer the greatest performance for the price, and without question each year that price will buy even more performance as long as it is allowed. Plus, if you break an old laptop, I doubt you'll spend more for the older model.

The other way is to integrate the camera with the video recognition system in the same package. I really look at the Raspberry Pi and other COTS systems (besides a general-purpose PC) as something a little more like an attempt to integrate the camera and the video recognition system (rough, I admit). (Not against the Raspberry Pi or anything like that, as has been demonstrated elsewhere on the forum.)

In any case, I think video recognition is one of those fantastic things that inspires people to think that the robot can adapt to its environment based on sight. Most people start thinking of the way they see and imprint that on the robot. In so many ways, the way humans use sight and the way a machine does are very different things. It is an ever-evolving piece of technology. On the plus side, that evolution drives jobs and innovation, which I'm sure students would love to have. On the other hand, video recognition is no PWM. There is a point at which you can implement PWM and there's no sense trying any harder. Video recognition has so many compromises that there is always something to try, and always a good opportunity to look at the robot as the vehicle and the camera / video recognition as a subsystem with ample opportunity for tinkering.

I am not sure it makes sense to sell the Apple product of FRC robot control systems. That model works great when people can afford to upgrade. Making those upgrades the entire control system seems a touch more expensive than necessary.
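A minimal sketch of the "process vision on an auxiliary device" split described above: the coprocessor does the image work and sends only a small result to the robot controller over UDP. The address, port, and message layout here are assumptions for illustration, not a defined FRC protocol.

```cpp
// Coprocessor side of the split: after processing a frame, send only the
// small result to the robot controller. IP, port, and message format are
// assumptions for illustration.
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { std::perror("socket"); return 1; }

    sockaddr_in robot{};
    robot.sin_family = AF_INET;
    robot.sin_port = htons(5800);                         // assumed port
    inet_pton(AF_INET, "10.0.0.2", &robot.sin_addr);      // assumed robot address

    // one processed frame's worth of output: bearing and distance to target
    char msg[64];
    std::snprintf(msg, sizeof(msg), "bearing=%.1f,distance=%.2f", -3.5, 2.41);
    sendto(sock, msg, std::strlen(msg), 0,
           reinterpret_cast<sockaddr*>(&robot), sizeof(robot));

    close(sock);
    return 0;
}
```

The robot-side code then only has to parse a short string per frame, which keeps the vision load off the real-time controller.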
#14
Re: NI Week Athena Announcement and Q&A Panel
Absolutely, I agree that the performance-per-unit-cost of a cRIO or roboRIO is orders of magnitude higher than for the Vex Cortex. I'm ALSO not suggesting that FRC use a Vex Cortex. It was simply an example of some of the other options.

In terms of performance, they're not even in the same ballpark, so yes, being 'just' double the cost is a good value, if you're going to use that performance. Otherwise, though, it's like buying a Bugatti Veyron and never taking it to a racetrack or Germany's autobahns. Buying performance you won't be using is frivolous.

Please don't misunderstand me. I'm not trying to take pot shots at NI. I'm a certified LabVIEW developer, and I work with NI equipment every day. The roboRIO is a huge improvement over the cRIO as an FRC control system, no contest, and I'm very excited to get my hands on it and see it in action. I'm just disappointed that a couple of the spots where quantum leaps could have been made seem to have fallen a bit short. BUT I'm also aware that much of the slow-boot problem with the cRIO-based control system we've had since '09 is NOT the boot time of the cRIO, but rather the radios. They're still working out what we're going to be using for radios, so maybe I'll be pleasantly surprised.

@Don: I don't know what 'alternative' I'm proposing. The FIRST community is collectively VERY smart, though. I've seen some neat 900 MHz Ethernet bridges capable of throughputs in the 2 Mbps range. I do know that 802.11 lacks the reliability factor I believe an FRC control system should have. Even my home 802.11 network, in a rural area with minimal interference on the 2.4 GHz band, frequently drops, hiccups, or does otherwise rude things to the communications. There has to be a better solution.

As to 1676 not being able to run their robot on a Cortex? That's cool; I wouldn't have guessed it. 1676, though, is definitely one of those top-tier teams that's good at making optimal use of the resources they're given. If 1676 had to choose between whatever functions it's doing that couldn't be achieved with a Cortex, and booting to a driveable state in under 10 s as Chris suggests, would you still want that extra performance? I can say with confidence that 4343's 2013 robot is the first robot I've been involved with that couldn't have been done with a 2008 IFI system, and that's only because it streamed video back to the DS.

I DO like the suggestion of multiple comms channels, so that mission-critical comms (FMS control, joystick values, etc.) could be transmitted on a low-bandwidth, extremely reliable, fast-linking channel, while extras like streaming video ride on typical 802.11.