Re: NI Week Athena Announcement and Q&A Panel
Quote:
There is no longer a choice in the matter. Choices were offered and choices were made. On the one hand, I do not think it is realistic to expect NI to comply with some of the changes. On the other hand, I would be disappointed in NI if merely stating some concerns were enough to make them go silent in this forum. However, I do not think that is NI's style. I admit to being often quite rough on them, and they are still here. Whoever won that RFQ needed to be prepared to face the public; I even put that in the proposal.

What the community thinks does matter. If what the community thinks does not matter, then there's something wrong. There are times when I think what the community thinks is ignored. Let us all strive to deal with that responsibly.

I have been avoiding this topic specifically because the proposal I worked on not being accepted makes this sound like sour grapes. The only reason I am posting in this topic now is that someone brought up the dual radios. Give credit where it is due, be that to NI or whoever else goes the extra mile to do the hard things. Further, know that there are other things no other control system offered that I helped propose, so there are ideas coming into focus now that were just within grasp during the process. If someone wants to see the proposal from U.S. Cybernetical, I will post it (I have approval from all parties). If not, oh well. I did what I felt was right with what I had. That is all anyone can do.

I even promised I would help Sasquatch out if their Kickstarter did not fund. I've ordered the unit from them and patiently await its arrival. This was the arrangement they preferred, and I am happy to accommodate it. In the end, what I got out of that proposal will end up being far more valuable than it appears.
Re: NI Week Athena Announcement and Q&A Panel
Quote:
:]
Re: NI Week Athena Announcement and Q&A Panel
Quote:
Community involvement should go with the territory for this RFQ. Sometimes community involvement means you get the criticism too. However, I am not being critical of Adam; I am just making my intent transparent.
Re: NI Week Athena Announcement and Q&A Panel
Brian, by not not-knocking your proposal, I wasn't implicitly knocking it either.
I know of it, but I haven't read it and don't know what to call it. I was attempting to convey that any robot controller that makes it to market is likely to inspire kids. There's this Danish company that I'm pretty involved with that wasn't not-knocked either.

As for NI's presence on CD, I'm here in large part because I value skepticism and alternative ideas. I also believe that FUD fills an information void, and I'd prefer to be direct and open when possible. CD also lets me observe and participate with kids working their way through issues. The CD village needs all sorts, and as long as the focus doesn't veer too far away from building future generations of leaders/engineers/artists, I'm willing to put up with quite a bit.

Greg McKaskle
Re: NI Week Athena Announcement and Q&A Panel
I followed some of the links in this thread about RT Linux and LabVIEW. What changes to our programming in LabVIEW will we have to be concerned with? I see the words mutex, blocking/non-blocking, two schedulers, and other things that relate to running on a dual core. Do we have to deal with these issues, or will LabVIEW take care of it? With the current single-core cRIO, our programming mentor teaches the new programmers LabVIEW basics and the importance of real time, then follows up with some lessons on state machines. After this the kids are let loose to get hands-on experience. If we have to deal with the complexities of multiple cores, this is going to require a lot more formal instruction on our mentors' part. A serious load for a first-time young programmer.
Re: NI Week Athena Announcement and Q&A Panel
Quote:
It's also important to note that performance degradation can occur in both time-critical and system tasks on multicore systems running NI Linux Real-Time if serially dependent tasks are allowed to run in parallel across processor cores. This is because of the inefficiency introduced in communicating information between the serially dependent tasks running simultaneously on different processor cores. To avoid any such performance degradation, follow the LabVIEW Real-Time programming best practice of segregating time-critical code and system tasks to different processor cores. You can accomplish this by setting a processor core to only handle time-critical functions, and specify the processor core to be used by any Timed Loop or Timed Sequence structure as illustrated in Figure 4. You can learn more about the best practices in LabVIEW Real-Time for optimizing on multicore systems at Configuring Settings of a Timed Structure.

Is the above something that teams need to be aware of and take into account in their programming efforts?
Re: NI Week Athena Announcement and Q&A Panel
Absolutely, I agree that the performance per unit cost value of a cRIO or roboRIO is orders of magnitude higher than for the Vex Cortex. I'm ALSO not suggesting that FRC use a Vex Cortex. It was simply an example of some of the other options.
In terms of performance, they're not even in the same ballpark, so yes, being 'just' double the cost is a good value. If you're going to use that performance. Otherwise, though, it's like buying a Bugatti Veyron and never taking it to a racetrack or Germany's autobahns. Buying performance you won't be using is frivolous.

Please don't misunderstand me. I'm not trying to take pot shots at NI. I'm a certified LabVIEW developer, and I work with NI equipment every day. roboRIO is a huge improvement over cRIO as an FRC control system. No contest. I'm very excited to get my hands on it and see it in action. I'm just disappointed that it seems like a couple of the spots where quantum leaps could have been made fell a bit short. BUT I'm also aware that much of the slow boot problem with the cRIO-based control system we've had since '09 is NOT the boot time of the cRIO, but rather the radios. They're still working out what we're going to be using for radios, so maybe I'll be pleasantly surprised.

@Don: I don't know what 'alternative' I'm proposing. The FIRST community is collectively VERY smart, though. I've seen some neat 900MHz ethernet bridges, capable of throughputs in the 2Mbps range. I do know that 802.11 lacks the reliability factor I believe an FRC control system should have. Even my home 802.11 network, in a rural area, with minimal interference on the 2.4GHz band, frequently drops, hiccups, or does otherwise rude things to the communications. There has to be a better solution.

As to 1676 not being able to run their robot on a Cortex? That's cool. I wouldn't have guessed it. 1676, though, is definitely one of those top-tier teams that's good at making optimal use of the resources they're given. If 1676 had to choose between whatever functions it's doing that couldn't be achieved with a Cortex and booting to a driveable state in under 10s, as Chris suggests, would you still want that extra performance?
I can say with confidence that 4343's 2013 robot is the first robot I've been involved with that couldn't have been done with a 2008 IFI system, and that's only because it streamed video back to the DS. I DO like this suggestion of multiple comms channels, so that mission-critical comms (FMS control, joystick values, etc.) could be transmitted on a low-bandwidth, extremely reliable, fast link-up channel, while the extras like streaming video ride on typical 802.11.
Re: NI Week Athena Announcement and Q&A Panel
Quote:
I work in highly parallel environments daily (8,000+ Linux servers globally running extremely high-speed tasks; I will avoid the term "real time" here because it is often misunderstood as a metric). I can see how the abstraction of LabVIEW could make the dual cores less apparent to the end user. Unless there's a way for the students to bind their process to a particular processor, I don't see any way they can easily deadlock themselves. However, not all teams work in LabVIEW. If a team is using C or Java, can they create code that targets a specific processor? If so, they can create deadlocks, because they could spawn up a 'serial process' between the two processors and get stuck between them. In any kind of multiple-processor environment with user ability to direct resources, this sort of complication can arise. Automated resource allocation can either fix this or itself magnify the issue.

The control system we proposed to FIRST, for example, had Parallax Propellers (along with Atmel and ARM), and those chips have 8 'cogs' (cores). In that environment a student might create a set of interrelated tasks that must operate in a serial fashion, but because of the architecture they would be aware from the beginning that they've divided up the process. So, for example: if process B stalls in cog 2, perhaps they should debug process A in cog 1. The design goal with the multiple processors in the proposed environment was not to centrally distribute the resources at execution time. It was to provide finite, deterministic resources as part of the initial design process so that the result had direct predictability. Anything that could not be performed within that timing constraint could then have added processing or be delegated to external circuitry (an FPGA, currently a Xilinx Spartan 3, and conditioned I/O).

Added processing was a low cost barrier: because of the way the entire system was architected, hundreds of processors from multiple vendors could operate cooperatively until the robot power supply limits became an issue (yes, each processor does inherit additional processing cost as a result of this expansion capability, but it is a small cost considering the value of the capability). For those who don't understand the techno-speak about what we proposed: you could decide to use 4 cogs in a single controller such that each cog controls a single tire of the drive system. You would know instantly which cog was tasked with what job and what to expect from it. You could issue orders between these cogs something like this:

Cog_RightFront - Move forward at 10 RPM
Cog_LeftRear - Turn swerve 15 degrees

(I am not kidding about this sort of programming at all; I actually made it work. Robot Control Language (RCL) from General Robotics and Lego Logo are examples from previous decades of the language structure. The change here is the way it maps to physical processing directly. The cogs are each 20 MIPS; they have plenty of time to parse relatively natural language if that is what is really needed, and that can be further tokenized for a performance boost.)

Obviously in Linux you can spawn processes or threads; this is something I exploit daily. What level of granularity is being exposed to students? Further, what tools are being offered to the students to debug these circumstances if they are in fact able to control the distribution of resources? On Linux systems I have software I wrote that is able to monitor every process and its children and either report over secure channels to a central management infrastructure or perform forced local management of anything that goes bonkers. In this case the 'central management' is really the field and DS.
Re: NI Week Athena Announcement and Q&A Panel
Quote:
In C++, students are exposed directly to the Linux process/thread controls you would expect from a typical system, with the addition of real-time scheduler priorities. As for Java, I'm not sure how it's exposed.
Re: NI Week Athena Announcement and Q&A Panel
Quote:
It does mean a challenge for the students, but I have great confidence they'll grasp it. What tools are being planned to help them debug the sorts of issues that may arise?
Re: NI Week Athena Announcement and Q&A Panel
Quote:
Standard open-source tools apply to C++ and Java.
Re: NI Week Athena Announcement and Q&A Panel
I don't think I've ever done anything in FRC that I couldn't do on a Vex Cortex, aside from streaming video back to the driver station laptop. In fact, I've had Cortex code run as fast as 100 Hz without complaining about CPU loading, and under RobotC (a terrible bytecode language with all the inefficiencies of Java and none of the benefits, one that doesn't really support the C language at all) I was able to run all of my code in 2 ms (measuring the time it took the task to execute).

I did come up a bit short on I/O (granted, I used 5 LEDs with individual GPIO pins), but I managed to live with the 12 DIO, 8 analog, and 10 PWM. I think an extra two of each would be nice for FRC, but it's perfect for Vex robots. It's also got I2C and two UART ports.

I would agree that the peak performance of the roboRIO vs. the Cortex provides more cost efficiency. But for 99% of teams, the Cortex would be just fine (in fact, way better, because it's so easy to set up and doesn't require default code), so the doubled cost doesn't provide doubled benefit, or even any benefit, to them. And then there are the 5% who insist vision processing is important (it has not been important to me yet), and the 1% who might utilize the full benefits of the roboRIO and implement a good onboard vision system without compromising their RT controls.

We're still not doing anything controls-wise that we couldn't have done on the 2008 IFI controller. We now use floating-point math, and LabVIEW, and other useful code features to do it, but we haven't found a challenge which we simply couldn't do previously. Our development times have stayed about the same; our calibration efficiency is up a bit during online calibration-heavy activity but way, way down for short between-match changes. We've also spent a lot more money on system components (cRIOs, more cRIOs, tons of digital sidecars, analog modules, solenoid modules, radios, more radios, more radios, ...) than with that system. In fact, I would argue that our code has gotten more complicated because of all the stuff we've had to do to get development and calibration times down. We wrote Beescript because it took too long to write new autonomous code and deploy it, especially in competition, and we would never have done so (and possibly would have had more flexible autonomous modes using real programming features like math and if statements) if the compile and download times were short, or if we could modify calibrations without rebuilding.

We've thought a lot about implementing a calibration system that reads cal files and such, but we can't get the design to a point where we can retain the current online debugging, cal storage, and efficiency. And the more code we write, the longer the compile times get. I know I can't get a system with the flexibility I want and expect while retaining all of the performance I expect, and it's incredibly frustrating to see systems in the real world operate on far fewer resources, running far more application code (way faster), with good development and calibration tools that try hard to streamline and optimize the process as much as possible and can do it efficiently with so little overhead, while we're throwing more CPU speed and libraries at the problem and still getting nowhere near the performance (on all fronts: RT performance, boot times, development times and overhead, calibration efficiency, etc.).
Copyright © Chief Delphi