View Full Version : NI releasing/designing new controller for FRC
Steven Donow
26-04-2013, 10:02
They announced this during opening ceremonies (apparently... I'm watching without sound and going by tweets I've seen).
The reveal will be streamed August 8th at ni.com/first
engunneer
26-04-2013, 10:03
During the opening ceremonies of Championship, they announced that the specs for the 2015-2019 control system will be revealed August 8th around 8:30 AM Central from the NI conference in Austin, TX.
I'm excited to hear that much of the code and knowledge will translate to the new system, and that it will be smaller and lighter than the cRio.
orangemoore
26-04-2013, 11:16
That is my Birthday!
I will be 15 on August 8th
;)
It seems from this article that NI has won the RFP given that the controller will be used in 2015.
http://spectrum.ieee.org/automaton/robotics/robotics-hardware/first-robotics-competition-national-instruments-athena-robot-controller
Joe Ross
26-04-2013, 12:48
Here is the press release
http://m.prnewswire.com/news-releases/national-instruments-technology-partnership-with-first-puts-real-world-engineering-tools-in-student-hands-204866181.html
And, Hsu adds, it’s also “super rugged.” That’s because one thing NI learned watching the FIRST teams using its controller is that, as Hsu puts it, “Kids will do anything to it.” The controller gets dropped onto the hard floor; tiny metal shavings get into its modules; some teams have even left it in the rain. Athena is designed to better handle this abuse.
Can't wait to see a swarf and water proof robot controller :rolleyes:
engunneer
26-04-2013, 15:14
Can't wait to see a swarf and water proof robot controller :rolleyes:
Ah, but how will the students learn to think about actions without blowing up a Sidecar or two?
MagiChau
26-04-2013, 15:16
Ah, but how will the students learn to think about actions without blowing up a Sidecar or two?
They can still wire the speed controllers backwards ::ouch::
MrRiedemanJACC
26-04-2013, 15:21
Looks like we are keeping LabVIEW, but a different controller. At least that's what it looks like from a mechanical guy's standpoint....
http://www.ni.com/newsroom/release/ni-technology-partnership-with-first-puts-real-world-engineering-tools-in-student-hands/en/?sf12086786=1
mman1506
26-04-2013, 16:34
Hopefully it's smaller. The cRIO + sidecar combo takes up more space than is necessary considering the size of the FPGA part of the cRIO.
F22Rapture
26-04-2013, 18:12
Wonder if other non-LV languages will still be possible on the new controller
I'm sure C++ probably will be, Java I'm not so sure of.
mman1506
26-04-2013, 18:28
Wonder if other non-LV languages will still be possible on the new controller
I'm sure C++ probably will be, Java I'm not so sure of.
In one of the press releases they say it will be compatible with C++ and Java
Can't wait to see a swarf and water proof robot controller :rolleyes:
Looking for a water game perhaps?
DonRotolo
26-04-2013, 19:51
I can say for certain that the FRC community had a lot of input into the RFP. The conclusion is that it will be even more awesome than the C-Rio.
My hunch is that the new controller will be a new product in the Rio line. It should be awesome, can't wait to see it!
I am somewhat sad that it's NI though. I would have liked to see a control system that is more non-labview friendly, and I would have liked to see what IFI (and other companies) would have come up with. Maybe it's still something that they will launch in the future as hobbyist tools...
Tom Line
26-04-2013, 21:01
My hunch is that the new controller will be a new product in the Rio line. It should be awesome, can't wait to see it!
I am somewhat sad that it's NI though. I would have liked to see a control system that is more non-labview friendly, and I would have liked to see what IFI (and other companies) would have come up with. Maybe it's still something that they will launch in the future as hobbyist tools...
I am unsure why you would want to exclude a language that nearly half the first teams use. How do you know that IFI didn't submit a proposal?
I am unsure why you would want to exclude a language that nearly half the first teams use. How do you know that IFI didn't submit a proposal?
I doubt that many (half of the) teams use labview. I don't want to start a fight about that here. My point was that I'd like for labview to be an option (like Java or C++), and not something that is heavily stressed because the control system is NI.
Also, I think I worded the second part of my comment badly. Here goes: I really want to see what IFI and other companies came up with. Some of them may not ever release what their idea was because they got turned down for this contract: that would be a shame.
And, can't wait to see what this partnership with Cross the Road brings. CAN Talons please? :ahh:
Joe Ross
26-04-2013, 21:44
I doubt that many (half of the) teams use labview. I don't want to start a fight about that here. My point was that I'd like for labview to be an option (like Java or C++), and not something that is heavily stressed because the control system is NI.
Actually, more than 50% of teams use LabVIEW. FIRST keeps track. http://www.chiefdelphi.com/forums/showpost.php?p=1261256&postcount=78
nickcvet89
26-04-2013, 21:46
NOOOOO, just as I was fully learning the potential of the crio..... just kidding, really excited to see what's in store for the future!
Meshbeard
27-04-2013, 00:38
I talked to the guys at the Cross the Road Electronics booth today. I can confirm that they are planning on having two versions of the Talon in the future: a pwm version and a CAN version.
I also took pictures of the informational posters they had out about the new control system (what they were allowed to say about it at least).
Control System Overview: http://i.imgur.com/eA3Bvfu.jpg
Power Distribution: http://i.imgur.com/bWDHpSt.jpg
Pneumatics: http://i.imgur.com/276hmyB.jpg
New things: http://i.imgur.com/YEtaHrp.jpg
I can try to elaborate more if someone asks for particular information.
Radical Pi
27-04-2013, 00:40
So Cross The Road had a table set up today and was answering questions about their part in the new control system. CTRE will be providing the new PD board and Pneumatics Control Module, along with the Talons and a new team-friendly configurable voltage regulator. I got some pictures of their info displays on my phone camera: http://imgur.com/a/pTtAL
The new controller will be called the Athena (the NI guy was supposed to mention this during Opening Ceremonies). CTRE didn't have much to say about it, since NI is doing the development for that. There is a small graphic in the album about its capabilities though (USB support!). The new system will be heavily based on CAN, as you'll see below. As far as I know, the current languages will be supported, and adding new ones is on the table.
The PD board is probably the most exciting part of this. At the lowest level it's identical to the current board, just a bit smaller physically (they had a plastic mockup at the table). They've added a microcontroller with the capability to monitor current, temperature, battery voltage, and breakers and save 60 matches worth of data on the board itself, along with data about the robot's state. It interfaces with the Athena via CAN, which can read out the data live or be viewed later for debugging.
The Pneumatics Control Module is also operated over CAN. It has 8 outputs, handles the compressor/pressure switch internally, and has a 24 volt boost regulator onboard. The outputs can be configured for either 12v or 24v operation (I neglected to ask if this was for the whole module or per output). Like the PD board, it also collects diagnostic data.
As for motor controllers, the only thing known for sure is that the current Talon SR will be available for use. They're hoping to make a CAN version of the Talon, but want to move away from the current RJ-11 connectors first. A version of the Talon with capabilities similar to the Jaguar is also on the table (sensor inputs, onboard closed-loop, etc).
If there's any other questions, I can swing by the table tomorrow and ask them. I'm also going to see what info I can get out of the NI booth about the Athena itself.
EDIT: oops, got ninja'd
If there's any other questions, I can swing by the table tomorrow and ask them. I'm also going to see what info I can get out of the NI booth about the Athena itself.
Thanks for the summary. Do you have any idea on weight and size of this new system? I really like this, particularly the pneumatics module.
Meshbeard
27-04-2013, 00:58
Thanks for the summary. Do you have any idea on weight and size of this new system? I really like this, particularly the pneumatics module.
I wasn't supposed to see this, but I might have seen a full size mockup of a prototype for the Athena board... If what I saw was correct, it was a black square about 5-6" on each side and about 1-1.5" tall. It looked like it integrated functionality of the cRIO and the sidecar into it, so it should be much much lighter than that combined weight. I don't remember much else since it was only flashed for a couple seconds and I wasn't expecting it at all.
Peter Johnson
27-04-2013, 04:32
I also took pictures of the informational posters they had out about the new control system (what they were allowed to say about it at least).
Control System Overview: http://i.imgur.com/eA3Bvfu.jpg
Power Distribution: http://i.imgur.com/bWDHpSt.jpg
Pneumatics: http://i.imgur.com/276hmyB.jpg
New things: http://i.imgur.com/YEtaHrp.jpg
Outstanding work by CTRE and a big thank you! This will make things like custom circuits/coprocessors significantly easier in the future (one of the ugly hurdles has always been the extra volume/weight of power conversion modules, the VRMs sound like the perfect solution). I love the additional CAN modules, particularly the pneumatics one and the addition of data logging. CAN has been a robust solution in the automotive industry for many years and it's good to see it gaining more traction in FRC.
I remain cautiously optimistic about NI's Athena... while the new form factor is a really good idea (merging digital sidecar + cRIO = great!), and having integrated USB and CAN available is excellent news, what I really want to see is (a) if it's Linux based (rather than VxWorks) and (b) if boot times have been significantly improved. Linux would make life so much easier for development (out-of-the-box excellent USB driver support, robust TCP/IP stack, easier porting of tools/languages, a decent interactive shell prompt, non-kernel-mode code for easier debugging, better memory management, code reloads in any language without rebooting--just kill the user process and restart it, the list goes on), and we all complain about the current cRIO boot time. However, I'm not holding out hope for either at this point given the "cRio platform" reporting so far.
MrForbes
27-04-2013, 06:13
I really need to get to the vendors booths today!
I like the idea of the size reduction of combining the processor and interface board. But the downside is that if you blow an interface, you have to replace/repair the whole thing. We've fried a couple sidecars, no damage to the cRio.....consider the relative cost of the two parts
Alan Anderson
28-04-2013, 18:24
I like the idea of the size reduction of combining the processor and interface board. But the downside is that if you blow an interface, you have to replace/repair the whole thing. We've fried a couple sidecars, no damage to the cRio.....consider the relative cost of the two parts
One common way to ruin a Digital Sidecar is to put battery voltage on any of its "ground" pins and fry its reverse power input protection. Integrating it could make that specific reverse power protection unnecessary* and thus remove a failure mode.
* It could be combined with a protection circuit for the entire device that is less susceptible to permanent damage.
cadandcookies
28-04-2013, 23:48
I wasn't supposed to see this, but I might have seen a full size mockup of a prototype for the Athena board... If what I saw was correct, it was a black square about 5-6" on each side and about 1-1.5" tall. It looked like it integrated functionality of the cRIO and the sidecar into it, so it should be much much lighter than that combined weight. I don't remember much else since it was only flashed for a couple seconds and I wasn't expecting it at all.
This excites me. Making an "electronics box" seems like it might be more feasible for some teams! Or at least easier.
Also, hopefully Jaguars will be repackaged or something to make them easier to deal with-- it's a minor thing, but their irregular shape makes them rather annoying to line up and place effectively-- or maybe I'm just missing something.
Brandon_L
29-04-2013, 00:27
I am somewhat sad that it's NI though. I would have liked to see a control system that is more non-labview friendly
Why? NI has done a wonderful job taking a product that wasn't FRC-specific and bringing it into FRC. It was a little clunky, but by far the most powerful system FRC has seen yet. What I loved about it - it's not FRC-specific. The cRIO and LabVIEW are used in real-world environments; it's a true hands-on experience. I can't wait to see what they come up with for the new system. From what I've heard so far it's absolutely amazing. My only concern - with it being all CAN based and the pneumatics module being the way it is - it's becoming too "plug-and-play" for my taste. There's no real electrical work.
As for the "non-labview friendly" statement, I don't know why you would want to limit your options. LabVIEW is built by NI and used with NI products, so it shouldn't have issues. If it does, you know exactly who to contact.
I doubt that many (half of the) teams use labview. I don't want to start a fight about that here. My point was that I'd like for labview to be an option (like Java or C++), and not something that is heavily stressed because the control system is NI.
LabVIEW is no more stressed than any other language. When you set up your control system in week 1, the manual offers setup instructions for each language with no bias.
And, can't wait to see what this partnership with Cross the Road brings. CAN Talons please? :ahh:
If I remember correctly, CAN Talons are coming; it's just a matter of when.
Billfred
29-04-2013, 00:51
I'm encouraged by what I saw at the CTRE booth, and I'm intrigued by the few things to come out of the NI camp. I'm hopeful that the net result will feature fewer hard-to-detect gotchas (current monitoring on the PD board could be HUGE for diagnosing electrical problems and preventing magic smoke!)
Bennett548
29-04-2013, 01:00
This new control system has the potential to make many robots more "robotic" rather than "RC cars with arms". Control systems are a very tricky concept, even for many in college, so I think that the move to make them more accessible to high schoolers is a great idea.
I had been pretty excited about AM's new shifter, but this definitely takes the cake.
Steven Sigley
29-04-2013, 03:00
So the pneumatics will be CAN, and the Power Distribution Panel, is there any way to integrate sensors like encoders into the CAN network without PWM cables in the future?
Chadfrom308
29-04-2013, 07:32
What if it is going to be arduino based :ahh:
probably not, but you can code arduinos in labview
mman1506
29-04-2013, 09:36
It's not, you cannot do any vision processing on an Arduino
The NI booth had a prototype. I almost got a picture of it. In addition to Ethernet, it has client/server USB ports & a high-speed CAN bus. It is running a dual-core processor & a bigger FPGA. Although the guts are based on standard NI products, it is specifically designed for FIRST. It is NI's hope that the programming tools for it will be backwards compatible with the cRIO. It looks to be novice friendly while having expansion opportunities for the teams with the resources to take advantage of them. No comment on whether or not the current cRIOs will be competition legal in 2015. Cost is predicted to be in line with the current cRIO.
A big thanks to National Instruments for continuing to support FIRST in the way they do. I hope we get to beta test it. (We are a Java team)
PS
This is way beyond an Arduino.
Tom Line
29-04-2013, 10:29
I talked to the guys at the Cross the Road Electronics booth today. I can confirm that they are planning on having two versions of the Talon in the future: a pwm version and a CAN version.
I also took pictures of the informational posters they had out about the new control system (what they were allowed to say about it at least).
Control System Overview: http://i.imgur.com/eA3Bvfu.jpg
Power Distribution: http://i.imgur.com/bWDHpSt.jpg
Pneumatics: http://i.imgur.com/276hmyB.jpg
New things: http://i.imgur.com/YEtaHrp.jpg
I can try to elaborate more if someone asks for particular information.
I'll preface this by saying that I am not a CSE, EE, or embedded engineer. This layout scares me.
Having the most failure prone component (digital sidecar) now built into the robot controller worries me. How often have shorted pins, miswired power leads, and other mistakes caused burned-out sidecars?
I sincerely hope the controller is over-engineered to a level that makes it virtually indestructible. Otherwise teams will be replacing their entire controller when someone shorts 24V power to a 5V jumper.
I sincerely hope the controller is over-engineered to a level that makes it virtually indestructible.
I would say that's not over-engineering - but engineering to spec :)
Andy Baker
29-04-2013, 14:57
This is a heads up from the peanut gallery:
Everyone interested in the new Athena controller system needs to pay attention to crake (aka Chris Rake) who posted directly above this post. Notice that his team number is "Athena". I can confirm that he is focused on making Athena great for FRC teams, and I am confident that this will be a wonderful system. Along with Chris, there are many other folks at National Instruments and other supporting companies who are working hard to make this a wonderful system.
I love what I see already, and I am excited to see the full roll out in August.
Sincerely,
Andy B.
Peter Johnson
29-04-2013, 15:01
Having the most failure prone component (digital sidecar) now built into the robot controller worries me. How often have shorted pins, miswired power leads, and other mistakes caused burned-out sidecars?
I sincerely hope the controller is over-engineered to a level that makes it virtually indestructible. Otherwise teams will be replacing their entire controller when someone shorts 24V power to a 5V jumper.
Was this an issue teams ran into with the pre-2009 IFI controller (which was also fully integrated)? You're absolutely correct that the I/O circuit design needs to have shorting, overvoltage, and inversion protections built in to avoid failures in our swarf-heavy and miswire-prone robots. The digital sidecar has indeed been problematic for a lot of teams (mine included) but I've not heard of a team damaging one of the cRio modules--we had an analog bumper get pretty hot and give incorrect results this year when we shorted the 5V and GND, but after getting rid of the short it worked again. I'm pretty confident that NI knows about these concerns and is capable of designing in appropriate protections for the Athena I/O.
Meshbeard
29-04-2013, 15:04
I sincerely hope the controller is over-engineered to a level that makes it virtually indestructible. Otherwise teams will be replacing their entire controller when someone shorts 24V power to a 5V jumper.
I expect that since the Athena is going to be around the same price as the crio, it should be about as robust as the crio. The digital sidecar is not meant to be put on robots like we use. I think NI realizes that our equipment can take a beating and will probably take that into account.
connor.worley
29-04-2013, 15:04
CAN Talons may just get us to make the switch. Cool stuff.
CAN Talons may just get us to make the switch. Cool stuff.
There were a lot of things that made me like CAN, but the questionable integrity of the Jaguars made us stray away from them. Having 8 working Victors at the end of a season was more appealing than a (literal) pile of dead Jaguars.
CAN talons could bring back my interest in CAN, I'm anxious to see what they come up with.
Meshbeard
29-04-2013, 15:33
So the pneumatics will be CAN, and the Power Distribution Panel, is there any way to integrate sensors like encoders into the CAN network without PWM cables in the future?
Most of the encoders used on FRC robots are quadrature encoders, which means they have four wires. They're usually four individual wires twisted together or a four-wire ribbon cable; they're not really PWM wires. The way the digital sidecar is set up, you need to use three-wire connectors, which is really inconvenient, but the new control system might have dedicated encoder inputs, which would be great.
AllenGregoryIV
29-04-2013, 15:36
The digital sidecar is not meant to be put on robots like we use.
I'm not sure I understand this statement. The DSC was made specifically for FRC robots.
Along with Chris, there are many other folks at National Instruments and other supporting companies who are working hard to make this a wonderful system.
Thanks Andy - As Ray announced at opening ceremonies this system is a result of collaboration between numerous companies and organizations - all of whom are dedicated to making this the best possible system for this program.
I also have to ask for some forgiveness from the forum. There are a lot of questions, and some of these may have to go unanswered for the time being. But folks won't have to wait for long - August will be here very soon!
Nate Laverdure
29-04-2013, 16:33
6mm lugs for the main power terminals on the next-gen PDB. WHY???
Jared Russell
29-04-2013, 16:52
Combining the functionality of the cRIO with the Digital Side Car is a great idea that will no doubt eliminate many current failure modes, and make wiring/fitting the control system easier than ever.
Looking Forward to the new system!
Why? NI has done a wonderful job taking a product that wasn't FRC-specific and bringing it into FRC. It was a little clunky, but by far the most powerful system FRC has seen yet. What I loved about it - it's not FRC-specific. The cRIO and LabVIEW are used in real-world environments; it's a true hands-on experience. I can't wait to see what they come up with for the new system. From what I've heard so far it's absolutely amazing. My only concern - with it being all CAN based and the pneumatics module being the way it is - it's becoming too "plug-and-play" for my taste. There's no real electrical work.
As for the "non-labview friendly" statement, I don't know why you would want to limit your options. LabVIEW is built by NI and used with NI products, so it shouldn't have issues. If it does, you know exactly who to contact.
LabVIEW is no more stressed than any other language. When you set up your control system in week 1, the manual offers setup instructions for each language with no bias.
If I remember correctly, CAN talons are coming its just a matter of when.
As long as they keep NON-CAN talons too.
Meshbeard
29-04-2013, 18:17
I'm not sure I understand this statement. The DSC was made specifically for FRC robots.
I guess what I meant was that they are not suited for use in robots that often get covered in swarf and have students plug things in backwards. They are much too prone to failure for use in FRC. I guess it does teach students not to screw up with electronics, but it should not break so easily in a learning environment.
Brandon_L
29-04-2013, 19:05
As long as they keep NON-CAN talons too.
I'm pretty sure cross the road as a company would, but from what I'm hearing it sounds like they won't be compatible.
6mm lugs for the main power terminals on the next-gen PDB. WHY???
I second and third that. 7/16th or 1/2 would be nice.
timytamy
29-04-2013, 19:24
6mm lugs for the main power terminals on the next-gen PDB. WHY???
Because the rest of the world uses metric and is forced to use imperial, it's only fair that you're forced to use metric every now and then ;)
Brandon_L
30-04-2013, 20:02
I'm pretty sure cross the road as a company would, but from what I'm hearing it sounds like they won't be compatible.
Correction, I've been told there may be a CAN controlled PWM "sidecar"
Correction, I've been told there may be a CAN controlled PWM "sidecar"
Where are you getting this information?
AllenGregoryIV
30-04-2013, 20:06
Correction, I've been told there may be a CAN controlled PWM "sidecar"
The Athena overview linked above says it will have PWM built in for motor controllers and servos.
Brandon_L
30-04-2013, 20:07
The Athena overview linked above says it will have PWM built in for motor controllers and servos.
ooooooo
EDIT: Went back looking for the link, I don't see it. Maybe I'm just blind.
EDITEDIT: http://i.imgur.com/eA3Bvfu.jpg
One thing I'd like to see would be onboard WiFi. It'd remove another thing to put on a robot, as well as removing another source of wiring error (speaking from experience, as I've let the smoke out of a router before).
To integrate WiFi into the Athena may not be a good idea. NI would have to deal with FCC certification, and a lot of the time the controller is buried in the bowels of the robot. Not the best location for RF. We lost one WiFi radio this year. Better than last year. Would be nice if we had a hardened WiFi solution with a better power connection. Could be done, but cost would be a big issue.
Putting WiFi in would actually be pretty easy. But that locks us into WiFi. I understand they (The big they, not just NI) are looking at other options than standard WIFI.
Anybody notice the big Qualcomm booth at worlds? Any idea what they do? :]
Also, I think I worded the second part of my comment badly. Here goes: I really want to see what IFI and other companies came up with. Some of them may not ever release what their idea was because they got turned down for this contract: that would be a shame.
You can see our submission here. (http://www.team221.com/robotopen/product.php?id=114)
We submitted a combined sidecar/controller concept based on an Arduino Mega 2560. The idea was to make an entry level, easy to use controller that would appeal to educators and makers. We met every FIRST requirement except USB host and CAN capabilities. The price point for the controller was less than $200 in mass production.
We were careful not to shoehorn in extra power or glossy features so we could keep costs down. We were hoping FIRST might consider allowing multiple main processors so teams could choose the best fit. It's likely that Sasquatch would become popular with rookie teams because of the friendly Arduino development community.
We are moving forward with the board for hobbyists and are using it as the basis for our new line of controls products.
The proposal process was exciting and disappointing. We never had high hopes of beating out the other major competitors, but we did dream. :)
Putting WiFi in would actually be pretty easy. But that locks us into WiFi. I understand they (The big they, not just NI) are looking at other options than standard WIFI.
Anybody notice the big Qualcomm booth at worlds? Any idea what they do? :]
I'm fully expecting Qualcomm is involved for the radio communications on Athena's platform.
IIRC, didn't Qualcomm make the radio chipsets in the old Electrowave radios (branded as IFI) that we used to use in the pre-2007 days? http://www.electrowave.com/products/screamer422.shtml
Didn't think about the RF issues, though those could be helped by using an antenna.
I also didn't consider different wireless systems. I assumed that it would be WiFi because the DS was shown to connect via Ethernet to the field. It's certainly possible that the Athena could use a different form of wireless. It'd certainly make some scouters happier, as they'd be free to set up wireless hotspots for tablet-based scouting.
*snip*
I saw the Sasquatch at the AndyMark booth, I loved the look of it, especially the web-based dashboard and the ability to program it as an ATMEGA. I'm a big fan of using generic microcontrollers, because they're incredibly versatile and used extensively in the real world.
The main problem I see with using it as an FRC Controller would be the lack of a system for FIRST to put their locked-down code. I believe they do this with the cRIO currently, based on all of the information that comes up when compiling code for it.
Can't they just make an Arduino an FRC-legal robot controller and be done with it?
mman1506
03-05-2013, 20:24
Can't they just make an Arduino an FRC-legal robot controller and be done with it?
As an Arduino lover myself, I can say that would suck. The cRIO's FPGA is much faster than an ATmega 2560. You would not be able to do on-board vision processing, and code would have to be optimized to run quickly. It also would alienate all current FRC programmers (Arduino code is not C++, BTW)
Hypnotoad
04-05-2013, 02:42
You can't do proper vision processing on a cRIO anyway, so nothing is lost by using an Arduino. I ended up just doing it on the driver station since the image had to go there at some point anyway.
So a little 8-bit controller with no FPU, no FPGA, no native LAN, no RTOS, etc., can replace what we have now? Every team has the expertise to implement a low-level ISR and can drop down and write directly to the hardware at the register level when needed? You're going to take a 9th grader and throw that at him? I think a lot of people do not realize the power we now have available to us. Even the old IFI controller had two 8-bit PICs in it. The VEX folks at least have an ARM chip to play with. The only way teams accomplished complex things with the old IFI solution was some excellent low-level code done by Kevin Watson. I can't believe the number of people that want to go backwards.
There is a place for an Arduino in FIRST. One of our students used an Uno and some LEDs to make a heads-up targeting system for the driver. Got an award for it. We are using an Uno to read a 3-axis accelerometer, gyro, and magnetometer to make an IMU. Reading the nine 16-bit values over I2C and doing a bunch of triangle math is saturating the Uno. Yes, Arduinos do have a place in FIRST. They make good coprocessors.
Correct me if I'm wrong, but does that poster say USB will be available on the "Athena"?
Naturally, hoping this unlocks the possibility of using the kinect without any extra computing hardware on the robot.
Correct me if I'm wrong, but does that poster say USB will be available on the "Athena"?
Naturally, hoping this unlocks the possibility of using the kinect without any extra computing hardware on the robot.
It does, and I talked to another person earlier in the year that claimed it did (yes, I would say he was reliable). Kinect's USB protocol is pretty open, so even if WPILib doesn't support it in 2015, I bet it will in a later year or someone will write a library to communicate with it.
Hjelstrom
04-05-2013, 15:39
Correct me if I'm wrong, but does that poster say USB will be available on the "Athena"?
Naturally, hoping this unlocks the possibility of using the kinect without any extra computing hardware on the robot.
Agreed! This was a big hope I had for the next controller, and it looks like it will be possible.
How about something like a BeagleBone or UDOO? They both have the i/o capabilities of an Arduino, but also have 1Ghz ARM CPUs (quad core in the latter) and Ethernet.
mman1506
04-05-2013, 21:46
How about something like a BeagleBone or UDOO? They both have the i/o capabilities of an Arduino, but also have 1Ghz ARM CPUs (quad core in the latter) and Ethernet.
The problem with the BeagleBone and others is that they do not have a real-time operating system. This means there is delay when reading inputs, and things don't always happen in real time. This becomes an issue for safety and a number of other things, as processing can become delayed and you cannot expect an input to be processed within a certain time frame.
While they do have I/O, doing something simple like PWM generation requires special scripting.
techhelpbb
05-05-2013, 06:34
The problem with the BeagleBone and others is that they do not have a real-time operating system. That means there is latency between input and processing, and things don't always happen in real time. This becomes an issue for safety, among other things, because processing can be delayed and you cannot expect an input to be processed within a guaranteed time frame.
While they do have I/O, doing something as simple as PWM generation requires special scripting.
There was an alternative system presented that could handle that. What FIRST wanted was a company essentially already in NI's/VEX's educational market with FIRST. So basically, from the release date of the RFP to the time they made their decision, you had to be in thriving production with a client base (less than six months).
I helped submit one of the bids and engineer the solution presented. The solution survived the bid. I would not waste this kind of time and money on something like this if I did not have other plans for it (especially since it entailed writing a graphical language framework). I've not heard back from the other team that put up the Arduino Kickstarter about my offer to simply hand them the same donation I had made through the now-defunct Kickstarter.
So a little 8-bit controller with no FPU, no FPGA, no native LAN, no RTOS, etc., can replace what we have now? Every team has the expertise to implement a low-level ISR and can drop down and write directly to the hardware at the register level when needed? You're going to take a 9th grader and throw that at him? I think a lot of people do not realize the power we now have available to us. Even the old IFI controller had two 8-bit PICs in it. The Vexters at least have an ARM chip to play with. The only way teams accomplished complex things with the old IFI solution was some excellent low-level code done by Kevin Watson. I can't believe the number of people that want to go backwards. There is a place for an Arduino in FIRST. One of our students used an Uno and some LEDs to make a heads-up targeting system for the driver. Got an award for it. We are using an Uno to read a 3-axis accelerometer, gyro, and magnetometer to make an IMU. Reading the nine 16-bit values over I2C and doing a bunch of trig math is saturating the Uno. Yes, Arduinos do have a place in FIRST. They make good coprocessors.
I've been programming since before I was 10, and one of my very first embedded platforms was the Intel 8051; yes, I spent many an hour writing assembly interrupt service routines. So yes, *if* someone had to do it, there are people out there who could step up with examples, as you also pointed out.
That being said, not every processor needs to have interrupts. In point of fact, I helped present a non-interrupt-centric solution to FIRST for this RFP. You could put interrupt-capable devices into the system, but you did not need to. It worked by polling, which normally would be very resource-intensive, but you could cheaply put so much processing into what we presented that it was not an issue. There was still plenty of opportunity to implement custom logic via programmable logic in the system if a very tight timing constraint arose.
Again, neither here nor there. The system we proposed is currently being prepared for use in a non-FIRST commercial hardened real-time application.
dtengineering
14-06-2013, 01:21
So a little 8-bit controller with no FPU, no FPGA, no native LAN, no RTOS, etc., can replace what we have now? ..... Yes, Arduinos do have a place in FIRST. They make good coprocessors.
Based on my observations, for the majority of FRC teams an Arduino... and I mean an Uno, not even a Mega or Due... would be more than sufficient to meet their programming needs.
The vast majority of teams would have difficulty convincing me that they really needed more processing power than a Due could provide.
Back in my day a 1 MHz 6510 CPU (http://en.wikipedia.org/wiki/MOS_Technology_6510) was just a good excuse to learn some assembler! What do you kids need all this new-fangled gadgetry for anyway? It just makes you lazy! Sheesh.... grump grump grump.
(Where's a balding, greying smiley when you need one?!?)
Jason
Michael Hill
14-06-2013, 06:24
Based on my observations, for the majority of FRC teams an Arduino... and I mean an Uno, not even a Mega or Due... would be more than sufficient to meet their programming needs.
The vast majority of teams would have difficulty convincing me that they really needed more processing power than a Due could provide.
Back in my day a 1 MHz 6510 CPU (http://en.wikipedia.org/wiki/MOS_Technology_6510) was just a good excuse to learn some assembler! What do you kids need all this new-fangled gadgetry for anyway? It just makes you lazy! Sheesh.... grump grump grump.
(Where's a balding, greying smiley when you need one?!?)
Jason
Except for that whole image processing thing... or all of the libraries that are needed. Arduinos don't have THAT much memory to hold sufficiently robust code.
Think of all the stuff WPILib has in it. None of that would be available, purely because of memory restrictions.
Jim Zondag
14-06-2013, 12:59
Back in my day a 1MHz 6510 CPU (http://en.wikipedia.org/wiki/MOS_Technology_6510)was just a good excuse to learn some assembler! What do you kids need all this new-fangled gadgetry for anyway?
My first robot used an Intel 8748 as the CPU. 64 Bytes of RAM...WooHoo.
This was cutting edge at the time.
http://i.imgur.com/GUH299B.jpg
Despite the modest hardware, we could do some pretty cool stuff...Sonar ranging, odometry, programmable course input, full autonomous functionality, etc. This was more complex than many FRC robots today.
Sorry this is a little long, it is basically my thoughts on the control system. I want to get this out in case it is helpful in some way or stimulates discussion. I hope this isn't too late to be relevant.
Personally, I'd like to see the control system become more open. This could only happen if safety was ensured in a way that couldn't be compromised and if it didn't complicate things for teams that didn't want or need to make things more involved.
The main requirement around safety is having a fail-safe way to guarantee all output devices go to a known state when there is loss of contact with the field or driver station or when either of these things is used to disable a robot. Any sort of failure between the field/driver station and the output device must result in the output device becoming disabled.
To keep things simple, you really want something close to plug-n-play for a minimal robot control system, but that doesn't constrain what you can do to expand the system. This has parallels to how some teams do vision processing, using the cRIO, adding an onboard dedicated system to do this, or running this on the driver station (or simply not doing it at all).
I'd approach this by having smart modules that communicate over CAN (I know there are a lot of people who are uneasy about CAN, but this is more a reflection of what has been available in FIRST rather than the technology itself -- another topic, as another technology could be substituted if it were determined to be a better fit).
The CAN-based pneumatics module presented by CTR is a good example of this approach. A big piece of the puzzle here would be an excellent replacement for the Jaguar, a CAN-based smart motor controller. More on this later...
To round things out, you'd also have either something like a CAN-based digital side car or perhaps a CAN-based PWM output module and another module to provide general purpose digital I/O and analog input.
The next piece of the puzzle would be something similar to the 2CAN, but with a little more to it. On the Ethernet side, this would connect to the radio and provide bandwidth monitoring and management (including prioritization, particularly for upstream data) and would have several ports for local Ethernet on the robot (camera, PC, NI Athena, Arduino, etc.).
It would also be the interface to CAN (like the 2CAN) but would additionally provide the safety function and an output for the robot signal light. This would be totally closed with no user code. Safety heartbeats would be sourced from the driver station (or the field through the driver station) and flow via CAN to all control modules, probably using a dedicated line, as described further on for the smart motor controller.
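To make the fail-safe behavior concrete, here is a minimal sketch of the module-side logic (in C; the names and the 100 ms timeout are my own illustrative assumptions, not part of any actual spec): outputs stay energized only while the safety heartbeat keeps arriving, and once they latch off, a returning heartbeat alone does not re-enable them.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch of the heartbeat scheme described above.  The
 * router/safety device sources a periodic heartbeat; each control
 * module latches its outputs off if the heartbeat goes stale.  The
 * names and the 100 ms timeout are illustrative assumptions. */

#define HEARTBEAT_TIMEOUT_MS 100u

typedef struct {
    uint32_t last_heartbeat_ms;  /* time of last heartbeat seen */
    bool outputs_enabled;
} safety_state_t;

/* Called whenever a heartbeat pulse arrives on the safety line. */
void safety_heartbeat(safety_state_t *s, uint32_t now_ms) {
    s->last_heartbeat_ms = now_ms;
}

/* Called every control cycle; returns true while outputs may stay
 * energized.  Once tripped, the latch holds until an explicit
 * re-enable from the driver station (not shown). */
bool safety_check(safety_state_t *s, uint32_t now_ms) {
    if (now_ms - s->last_heartbeat_ms > HEARTBEAT_TIMEOUT_MS) {
        s->outputs_enabled = false;  /* fail safe: latch off */
    }
    return s->outputs_enabled;
}
```

The latch is the important design point: a flaky link that drops and then restores the heartbeat should leave the robot disabled until the driver station deliberately re-enables it.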
This would allow the control module firmware on the smart modules to be opened up. A minimal system might only have a radio, the 2CAN-like router/safety device, and some number of controllers, plus power (you might even have power for the radio supplied by the router/safety device). The driver station could send commands that would be routed over CAN to the control modules to run such a minimal robot.
A more typical configuration would have a compute device of some sort (NI) and this would communicate with the driver station and the control modules. Other CAN-based modules that might be nice additions are an IMU and a high-current LED driver. The driver station would remain NI and not need to be changed much. In fact, the most typical robot configuration would be essentially as it is today, except that the I/O capabilities on the NI really wouldn't have to be used (they certainly could be though).
I'll skip to some detail on the smart motor control module, as this illustrates how the safety function is implemented. Again, sorry about the length! The rest of this is a fairly detailed proposal for what is essentially an improved Jaguar (including more detail on how safety is provided), starting with some requirements:
- Master/slave mode where more than one motor can effectively be run by a single controller acting as the master and sending messages that determine H-bridge duty-cycle for slave(s) (for things like drivetrain with more than one motor being controlled using a single encoder for velocity or position feedback, avoiding the need to send encoder data to more than one controller; this depends on safety scheme described below to be safe/legal)
- Return position, velocity, and acceleration and allow these to be used for closed-loop control; same for output current and voltage (also return input voltage and temperature, as read-only values)
- Properly handle indexed encoder (for position control with index providing position reference)
- Traction control (limited acceleration, cut power when slipping/too much acceleration or pulse power similar to ABS braking on command or when slipping)
- Can replace Spike relay module (might not be cost-effective, but should be able to control same loads and legal in these uses, including replacing two relays when reverse direction is not needed)
- Non-volatile configuration (remembers not only CAN ID but also mode and various settings so these are there from power up or reset; this replaces configuration jumpers)
- Setting that governs current limit that is based on list of legal motors (plus option for manual specification of this value, or no limit -- this protects the motors, the controller handles anything up to what it takes to trip the breaker)
- Consider reverse-polarity protection
- Support PID and bang-bang control algorithms (possibly others as well)
- Good status indicators (LEDs)
- Personally, I'd like to see the firmware opened up (again, safety considerations would require care here, see below)
- Really nice doc on theory, how it works, etc. (to educate and inspire users, plus allow people to work on the firmware)
- Consider using WAGO connectors (to match PDB)
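For illustration, the two control modes named in the requirements above could look like the sketch below (C; the names, gains, and loop period are invented for the example, not a proposal for the actual firmware). PID combines proportional, integral, and derivative terms; bang-bang simply drives full output below the setpoint and coasts above it, which suits flywheel speed control because it only ever adds energy.

```c
/* Illustrative-only sketches of the PID and bang-bang modes listed
 * above; names, gains, and loop period are invented for the example. */

typedef struct {
    double kp, ki, kd;   /* gains */
    double integral;     /* accumulated error */
    double prev_error;   /* error from the previous cycle */
} pid_ctrl_t;

/* One PID update; dt is the loop period in seconds. */
double pid_update(pid_ctrl_t *c, double setpoint, double measured, double dt) {
    double error = setpoint - measured;
    c->integral += error * dt;
    double derivative = (error - c->prev_error) / dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

/* Bang-bang: full throttle below the setpoint, coast above it. */
double bangbang_update(double setpoint, double measured) {
    return (measured < setpoint) ? 1.0 : 0.0;
}
```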
Some possible parts:
- Infineon TLE7182EM H-bridge controller (there's a nice evaluation board available for this part and also some great FETs and other parts from this supplier)
- Fairchild FOD0710 Optoisolator (for any input that could involve a ground loop, not needed for sensors that only connect to controller -- used for the safety input, for example)
- LSI/CSI LS7366R Encoder handling with SPI I/F
- I didn't get to the point of selecting uC, but PSoC or uC with support for generating PWM, CAN, and SPI would be a good fit, also needs inputs for sensors, etc. (preceding part handles encoder, leaving limit switch inputs, potentiometer, and internal needs -- I used something more expensive and powerful than required in a prototype, there is a lot of flexibility here, something like Microchip dsPIC30F4012 would do nicely)
Other thoughts:
- Consider not including PWM input (if other controllers in product line have this covered)
- Safety uses H/W watchdog chip and resets H-bridge controller only (but not uC, or just gates off H-bridge inputs) and uses either the PWM input only for heartbeat, or possibly include a dedicated safety line on the CAN cable for this purpose (this is the cleaner/preferred approach, the safety line just carries a square wave from the router/safety device that directly feeds the H/W watchdog, causing it to trip if the signal is lost for any reason)
- There are parts that do H/W watchdog, power-on reset, provide power for the uC and external sensors, etc. (automotive parts are a good fit because they run on 12V, are high volume, and solve a similar set of problems in a robust way)
Thanks for reading!
The only reason it's so darn hard to do stuff now is the inefficiency of the current system: the WPIlib in LabVIEW is so unoptimized that it's nearly impossible to run code that uses I/O calls faster than a 20 ms task time without saturating the CPU.
The fact that we're currently saturating a 400 MHz PowerPC amazes me. I don't like the idea that we should just throw more power at it, since there's no reason to need anything near a 400 MHz PowerPC.
I have a project I'm working on right now that uses a bunch of unoptimized floating-point math, a whole bunch of interpolations, and runs in two high-speed tasks (TDC and 200 Hz; TDC is ~5 ms at 12,000 rpm) plus a few CAN interrupts (Jaguars should learn about CAN interrupts...). CPU load on a 56 MHz PowerPC (MPC536) is extremely low; last time I checked it was under 30%. PWM I/O is done on a PWM/timer on-chip module (MIOS), and angular I/O and angular synchronization are done in another on-chip module (TPU), which I did not write code for; it is essentially an optimized match/compare/timer module with a microcode engine that autonomously reschedules matches and triggers ISRs. Between the MIOS and TPU, all of the current FRC FPGA non-analog functionality could be implemented with similar host-side overhead. The code includes some high-speed PI and bang-bang controllers, and LOTS of table interpolations. I did no real optimization on the math; it's about as much code/math as a complex FRC robot.
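The table interpolation mentioned above is typically just 1-D linear interpolation over an ascending breakpoint table. A minimal sketch (C; the function name, table layout, and end-clamping behavior are my assumptions for illustration, not the actual engine-control code):

```c
/* 1-D linear interpolation over an ascending breakpoint table,
 * clamping at both ends -- the workhorse lookup of engine-control
 * code.  Purely an illustration of the technique mentioned above. */
double interp1(const double *x, const double *y, int n, double v) {
    if (v <= x[0])     return y[0];       /* clamp below the table */
    if (v >= x[n - 1]) return y[n - 1];   /* clamp above the table */
    int i = 0;
    while (v > x[i + 1]) i++;             /* find bracketing segment */
    double t = (v - x[i]) / (x[i + 1] - x[i]);
    return y[i] + t * (y[i + 1] - y[i]);  /* blend the two endpoints */
}
```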
apples000
14-06-2013, 14:54
The only reason it's so darn hard to do stuff now is the inefficiency of the current system: the WPIlib in LabVIEW is so unoptimized that it's nearly impossible to run code that uses I/O calls faster than a 20 ms task time without saturating the CPU.
The fact that we're currently saturating a 400 MHz PowerPC amazes me. I don't like the idea that we should just throw more power at it, since there's no reason to need anything near a 400 MHz PowerPC.
In my opinion, the new control system should be less powerful than the current cRIO setup. I've looked at the great robots from 2008 and before, and they don't really lack anything that we have today. The CMUcam wasn't as great as the current Axis cam, but the only successful implementations of vision that I know of don't use the cRIO. In Java, the libraries are a little better than LV, but we still see high processor utilization when barely running anything. A slower, less powerful control system would force teams to come up with solutions that aren't completely inefficient (some of the code I see while helping at competitions is amazingly inefficient), which would lead to innovative control solutions.
Also, the current system is WAY overkill. The FPGA is a much higher-end model than what is needed, and there is no reason why the sidecar, digital module, analog module, the SSR module, the analog breakout, the pneumatics breakout, and the radio could not be integrated into one device. A good control system would have one enclosure for everything but power distribution. It would be MUCH cheaper than a $2,000 cRIO replacement.
Ricky Q.
14-06-2013, 15:14
Also, I think I worded the second part of my comment badly. Here goes: I really want to see what IFI and other companies came up with. Some of them may not ever release what their idea was because they got turned down for this contract: that would be a shame.
IFI / VEX did not submit a bid for the control system as a whole; we opted to focus on the motor controller portion of the RFP instead.
Best,
Ricky
dtengineering
14-06-2013, 21:38
Except for that whole image processing thing...or all of the libraries that are needed. Arduinos don't have THAT much memory to put robust enough code on.
Think of all the stuff WPILib has in it. None of that would be available just because of memory restrictions.
<Chuckling> Yes... you're correct. It was, actually, impossible to have a meaningful FRC robot before 2009 (http://team358.org/files/programming/ControlSystem2004-2008/).
While the old IFI controller did have more memory and power than an Arduino Uno, it pales in comparison to what an Arduino Due (http://arduino.cc/en/Main/arduinoBoardDue) can do, eh? ;)
While there are a few teams out there that will use every clock cycle that they are given, there are far, FAR more teams out there who are struggling to figure out how to get a limit switch to stop their arm from destroying itself, or how to get their robot to move forward for three seconds and stop in autonomous. "Simple and supported" is likely to benefit more teams than "sophisticated and speedy".
And I'll also suggest that just as we have limits on motor power that force us to design elegant mechanical systems, it may be that meaningful limits on processing power would force teams to design more elegant software systems.
Jason
While there are a few teams out there that will use every clock cycle that they are given, there are far, FAR more teams out there who are struggling to figure out how to get a limit switch to stop their arm from destroying itself, or how to get their robot to move forward for three seconds and stop in autonomous. "Simple and supported" is likely to benefit more teams than "sophisticated and speedy".
EXACTLY!
I've noticed over the years that the authors of the WPIlib seem to continuously pile on features with no concern for library cohesiveness or efficiency, while there are still issues (e.g. the execution cost of writing a motor value, or especially a relay value) in the core I/O access. We really don't need more features; we really need something that works reliably within the constraints of the 400 MHz processor. This doesn't even include all the CAN issues, which I'm sure you've all heard me rant about.
I did some testing a few days before kickoff 2013 and found that the DEFAULT CODE from 2012 (without Network Tables) ran at about 40% CPU utilization on the 4-slot cRio (Back in 2011 I measured the default code to be about 65% CPU utilization on the 8-slot), running a single task that runs at something around 25ms iteration time (nowhere near consistent) and does nothing but set two motors to the values of two joysticks. By comparison, I got around 20-25% utilization running an early PalLib (http://www.chiefdelphi.com/media/papers/2841) in a 10ms RT task with <20us average jitter, reading and writing an entire analog and digital card of IO. The processor is capable of far more than is possible due to pure library inefficiency.
Some numbers for efficiency comparison: Our 2012 code ran at ~80% utilization running a 10ms RT task for gun speed control only and ~22ms non-RT task for everything else, while our 2013 code was able to run in a single 10ms RT task with extremely solid timing. Our 2012 code had a LOT of WPIlib mods to improve efficiency to get it to run at all (mid-build season we hit 100% continuous loading before we even merged in about half of the code), including a Set Motor Simple VI which we released on CD. Our 2013 code never encountered any issues using a totally new library, in fact we were under 50% CPU load for almost all of build season.
I talked to a friend of mine who is a programmer on another local team, and they struggled to get their (relatively simple) code to run under 100% CPU utilization while getting the arm PID controller to run as fast as possible (they eventually got it to 15 ms by slowing some other tasks down to as much as 100 ms). 10 Hz control should never be considered an acceptable solution on a 400 MHz system.
I also worked with several other teams with electronics or software issues during various events, and I was amazed how s l o w the compile/download process STILL is; it's now quite a few minutes. I can do a full build of Chrysler PCM software (1.85 million lines, 1.3 million of code) in under 20 minutes on my laptop and flash it in under a minute on a Nexus or ETK. If it takes 5 minutes to compile a team project with ~15 team VIs and another few minutes to download, on 100 Mbit Ethernet, we've got a SERIOUS problem.
I see it all happening again with the new Athena controller. I really shouldn't say a lot about it, but IMHO the NI/Athena team really doesn't know what's important to the vast majority of teams, and they keep focusing on the expansion possibilities that <5% of teams will think about using. CTRE gets it, though: their solutions are fantastic, simple, efficient, and light.
Michael Hill
14-06-2013, 22:30
<Chuckling> Yes... you're correct. It was, actually, impossible to have a meaningful FRC robot before 2009 (http://team358.org/files/programming/ControlSystem2004-2008/).
While the old IFI controller did have more memory and power than an Arduino Uno, it pales in comparison to what an Arduino Due (http://arduino.cc/en/Main/arduinoBoardDue) can do, eh? ;)
While there are a few teams out there that will use every clock cycle that they are given, there are far, FAR more teams out there who are struggling to figure out how to get a limit switch to stop their arm from destroying itself, or how to get their robot to move forward for three seconds and stop in autonomous. "Simple and supported" is likely to benefit more teams than "sophisticated and speedy".
And I'll also suggest that just as we have limits on motor power that force us to design elegant mechanical systems, it may be that meaningful limits on processing power would force teams to design more elegant software systems.
Jason
In reality, they already do, by limiting the motors we can use.
"Simple and supported" is likely to benefit more teams than "sophisticated and speedy".
Genius of the AND: simple and supported and sophisticated and speedy.
Choosing between seemingly contradictory concepts, focusing on this or that, leads to missed opportunities.
Genius of the AND: simple and supported and sophisticated and speedy.
Choosing between seemingly contradictory concepts, focusing on this or that, leads to missed opportunities.
If the current cRio system is any example, choices have to be made to retain the Simple and Supported requirement. As much as we want everything, the cRio system clearly isn't anywhere close.
The VAST majority of teams want to be able to drive their robot and actuate their mechanisms with joysticks or buttons, and possibly do something in autonomous. For those teams, the current control system has a LOT of setup and puzzle pieces to fit together and configure separately, THEN they have to write code to make it do anything. I would estimate that at least half to two thirds of all FRC teams are in this place, maybe adding a limit switch or two. These are the teams that benefit most from any control system improvements.
The next class of teams uses sensors and feedback controls in some way. These teams want to be able to connect their analog potentiometers and quadrature encoders easily, read them easily, and execute their code. The current LV environment makes no attempt to maintain any sort of timing determinism, which makes even basic control loops with integral or derivative (calculus) terms hard to deal with. These teams spend a lot of time fighting this, and most LV teams in this category will also hit 100% CPU utilization trying to run their feedback controllers at a moderate speed using the 2013 libraries. I've talked to MANY teams and programming leaders who asked for advice on code optimization, trying to get their code to run at all, let alone in a reasonable execution time with reasonable determinism.
There is also the <1% of teams who design custom circuits (other than COTS computing devices) and complain about how hard the cRio is to interface to, because they want higher-speed SPI or LIN or some other protocol which it doesn't support. These are NOT the teams we should be focusing on, because we still haven't met the needs of the 99% (or come anywhere close). In fact, we were closer in 2010 than we are now - Code compile/download times and CPU hogging 'bonus' library features have gone up significantly since the cRio was released, and in many ways the usability has gone DOWN.
Teams REALLY want a controller that just works. They want to be able to hook it up and drive their robot without doing too much electrical and software work, configuring a whole bunch of separate devices using separate tools and instructions, and writing code. Anything else is secondary to this goal.
Speaking of this, why is there no default code for this control system like IFI and Vex provide? It's a HUGE help to Vex teams to be able to just drive and test things without writing code.
Greg McKaskle
15-06-2013, 11:11
More details about Athena will be made available in about seven weeks. I'll respond to just a few of the recent rants here, but this isn't the place for the details.
CPU usage:
I'm out of town attending a wedding, so I don't have a cRIO with me, but I do not believe that your measurements of the default system are accurate. On my computer, I have 50,000 log files from numerous teams acquired during the 2013 season. I do not see a rampant CPU usage problem in that data. Most teams are below 50% on their finished robot.
Default Code:
This was an intentional shift. Default code was replaced by default source code for the same reasons that the frame elements aren't preassembled, the electrical system isn't prewired, etc. Preassembling components to avoid a challenge will likely speed up one task, but may deny students the chance to learn by contributing.
WPILib:
I'm well aware that you do not like and do not use the higher level components of WPILib. As you found by digging deeper into WPILib, you are not forced to use floating point numbers, math, and fancy stuff. Features like NetworkTables do not take away your ability to do low-level UDP protocols and the majority of teams use them.
Getting it:
Lucky for you, CTRE and NI and WPI and AM are all contributing to Athena.
Greg McKaskle
CPU Usage:
I am good friends with the programming leaders from three local teams (51, 1718, 2337). All three use LabVIEW and felt severely limited by the compile/download cycle time, especially when making changes between matches, and by CPU utilization. I believe all three hit 100% CPU at some point; all asked me for advice on reducing CPU utilization and were forced to run control loops slower than they wanted, and at least one was unable to run even a single control loop faster than 15 ms. To my knowledge, none were using any sort of vision processing on-robot. I do not directly talk to other software team leaders who use LabVIEW, but I've seen similar sentiments on ChiefDelphi over the past two years. The fact that most teams are under 50% shows that most teams aren't doing anything reasonably advanced or using a lot of I/O.
Default code:
IFI provided the source for their default code, as well as binaries, and controllers shipped pre-imaged with it. You could use it as-is or modify it to your needs. Every test chassis we have built using an IFI control system has run default code, and we sometimes modify it to suit the specific chassis (e.g. when we built the DualDrive pre-2011, we implemented a C version of the auto-lift control, and only had to modify the lines where relay1 was set). Source-only default code is better than nothing, but the current source-only default code is nowhere near the IFI default code in functionality. The IFI default code had a table that mapped all joystick inputs to motors and all buttons to relays, and one relay and DIO pair were used for the compressor. You could wire it up and test it. The current default source only provides for a tank or arcade drive; even the compressor control isn't enabled by default.
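To illustrate what that IFI-style "table" amounted to, a sketch along these lines (C; the field names echo the old IFI conventions such as p1_y and pwm01, but this is a from-memory illustration, not the actual IFI source):

```c
#include <stdint.h>

/* From-memory illustration of an IFI-style default mapping: joystick
 * axes pass straight through to PWM outputs and a button drives a
 * relay, so a freshly wired robot moves with no custom code.  The
 * names mimic IFI conventions but are assumptions for this sketch. */

typedef struct {
    uint8_t p1_y, p1_x, p2_y;  /* joystick axes, 127 = center */
    uint8_t p1_trig;           /* trigger button, 0 or 1 */
} inputs_t;

typedef struct {
    uint8_t pwm01, pwm02, pwm03;  /* motor outputs, 127 = neutral */
    uint8_t relay1_fwd;           /* relay forward output */
} outputs_t;

void default_loop(const inputs_t *in, outputs_t *out) {
    out->pwm01 = in->p1_y;         /* left drive  <- stick 1 Y axis */
    out->pwm02 = in->p1_x;         /* right drive <- stick 1 X axis */
    out->pwm03 = in->p2_y;         /* arm motor   <- stick 2 Y axis */
    out->relay1_fwd = in->p1_trig; /* spike relay follows trigger   */
}
```

The point is the shape, not the specifics: a flat input-to-output table that a team can rewire in minutes, rather than a framework they must learn before the robot moves at all.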
WPIlib:
In 2012, I was able to reduce CPU usage on a 10-PWM robot by ~20% by rewriting a single VI and dependencies (this created Motor Set Simple), out of the hundreds of VI's in the WPIlib only ~10 were touched. Many team programmers personally thanked me at the Championships and via CD that year, as the 20% CPU utilization was significant to their robot. I do not believe that high CPU usage is limited to only a few teams. I think it's quite rampant with teams who control their robot using feedback controls, especially those who don't know how to optimize code to the LabVIEW execution system.
AllenGregoryIV
15-06-2013, 14:28
CPU Usage:
Default code:
IFI provided the source for their default code, as well as binaries, and pre-imaged controllers as shipped with it. You could use it as-is or modify it to your needs. Every test chassis we have built using an IFI control system has run default code, and we sometimes modify it to suit the specific chassis (e.g. when we built the DualDrive pre-2011, we implemented a C version of the auto-lift control, but we only modified the lines where relay1 was set to command it from something different). Source-only default code is better than nothing, but the current source-only default code is nowhere near the IFI default code in functionality. With the IFI default code, there was a table that maps all joystick inputs to motors, and all buttons to relays, and one relay and DIO pair were used for the compressor. You could wire it up and test it. The current default source only provides for a tank or arcade drive, even the compressor control isn't enabled by default.
This seems like something we as a community can help with, without the NI or WPILib people getting involved. A couple of people could make default code (LabVIEW, C++, and Java) at the level of the old IFI code and distribute it to teams. It would be easy to have on a thumb drive at competition, to fix things for teams that tried very hard to write their own code but failed.
Greg McKaskle
16-06-2013, 09:58
by the compile/download cycle time
This is not related to CPU usage and was caused by bugs in the latest LabVIEW release related to the compiler cache. Please understand that I'm not claiming WPILib or LabVIEW are perfect. Internal to NI I called a special meeting with VPs and the President in order to highlight these bugs, the impact they had on teams/customers, and to motivate that they not only get fixed, but that testing is improved to keep them fixed. I continue to use FRC as motivation to push various internal teams to improve areas of the product, but this has nothing to do with cpu usage.
I have two years logs for 1718. Their typical usage for matches in 2013 St Louis was under 30%. At earlier 2013 competitions it is several points lower, and in 2012 at the end of the season it was 40%. I don't have logs for the other teams on my laptop.
CPU usage is an interesting challenge. One loop with more work to do than time to do it in -- and the result is that the CPU will be pegged. Finding that loop can be a challenge, but they seem to have accomplished it and hopefully learned because of it. There are numerous tools to help professionals and FRC teams alike in monitoring and controlling CPU usage, loop rates, etc. I am more than happy to help here or by other means, but to me a CPU usage challenge doesn't mean WPILib is broken. It means that FRC isn't easy.
Defaults:
The decision not to have a default binary was made five years ago, and I've been approached only once since, by an alum, with whom I discussed the tradeoffs. To a large degree, test mode was added as a result of that previous discussion. Features should be added because they will increase the success of students in FRC, and that decision needs to include a variety of opinions. This forum thread is not the right place to design this ... but perhaps another thread?
In 2012, I was able to ... And that was a nice accomplishment. If WPILib were perfect, would you have learned more, or less? While WPILib isn't intentionally trying to get in your way, I don't want it to hide real-world programming issues from you either. I'm pretty sure that the changes you made in 2012 were already incorporated in the 2013 code along with a number of other performance improvements -- not all WPILib changes add on top and make it heavier. Other changes in 2013 were test mode and a newer, leaner, interop version of Network Tables. These were intentionally added to aid newer teams.
If you have strong feelings about WPILib, and I know you do, please don't rant in various threads all over CD. Create a thread that is dedicated to it, and we can get into discussions about how much it should do for teams, what it should not do for teams, etc.
Greg McKaskle
Greg McKaskle
16-06-2013, 10:16
to fix the teams that tried very hard to write their own code but failed.
This portion of the post jumps out at me, and again, I'd like to discuss it in another thread. I've also needed to "start over" a few times on Thursday at an event, and we do it by opening the template and writing the code together. I assist, but they drive. Starting over is not that common, and typically we just debug and fix their code. We just "finish" it.
It is not great that this happens at the event rather than within the team during the season, but this is no different than helping the teams with weight or wiring issues. Should we default those elements too?
If the template code needs other features, that is where I'd prefer to start. I'd like to hear other thoughts on this ... in another thread.
Greg McKaskle
I started a new thread for default/template discussion. See Here (http://www.chiefdelphi.com/forums/showthread.php?p=1279632#post1279632).
I also started a new thread for library discussions. See Here. (http://www.chiefdelphi.com/forums/showthread.php?p=1279635#post1279635)
Continuing on,
As for the LV issues, I didn't know that. I assumed it was library bloat, since the time goes up every year and compile/download times seem primarily related to the number of files. A Buzz18 build (2013) is a lot shorter than a Buzz17 build (2012) and about the same as a Buzz15 build (2010). Other teams' 2013 builds (from what I've seen, a fairly small sample size) take 5-10x as long. It's painfully slow to help teams work through their issues between matches when you can hardly deploy to look at anything. It's good to see it's being fixed, but the compile times have been on an upward trend for a while now.
Edit: Greg, was there ever a fix to the no-app issues (at high CPU load, bootload seemed to be starved and unable to download new code without no-app DIP)?
Jim Giacchi
17-06-2013, 23:27
In 2001, as a high school sophomore, I decided to teach myself the control system. I took the new system home with a motor and a battery, and within an hour had everything set up, over the radios, to make the motor spin back and forth. Later that season I wrote the code for the robot because no one else wanted to, and did so by downloading and installing the several-megabyte program (I'm pretty sure I may have even copied it onto a floppy disk; yes, I am that old. I also built my robots using candle light and hot wax burns!!!).
In 2009-2010, as a graduated and employed mechanical engineer who works every day on robots, I attempted, and I stress attempted, to get the control system to work. I failed miserably, giving up in frustration and turning it back over to the electrical advisor, telling him to fix it because I just didn't care anymore.
The funny thing is that our robots are the same complexity as what I did in 2001; they are no more complicated. But to get them up and running, the control system is orders of magnitude more complicated, and the compiler takes multiple DVDs (that's gigabytes, with a big old capital G). God help us when our computer crashed last year halfway through the season; it took the students and mentor over a week to get it back up and fully updated.
So basically what I am saying is: please, please, I beg you, and wholeheartedly agree with the others who have posted, that it needs to be less complicated and easier to set up. The metric should be: can a student get this running on their own? The bar may not be "can the student get this to use a camera, pick out a shape, and have the robot do a backflip frisbee throw into the top goal in autonomous," but a student should be able to get a robot to drive from start to finish within one after-school meeting. Right now the system is not even close to that.
Spending time troubleshooting something this complicated always eats away at the fun I have working on this, and I have longed for the days of that beautiful black IFI box; oh how I miss thee (and it's not the IFI part, it's the simplicity).
Thanks for reading my book of a post,
Jim
The funny thing is that our robots are the same complexity as what I did in 2001, they are no more complicated, but in order to get them up and running the control system is orders of magnitude more complicated
Interesting analogy.
Our 2001 robot ran two PI controllers, each had 2 potentiometers and a single motor (it actuated a complex cable/spring driven linkage so it switched sensor part way through the travel), plus an auto bridge balance program. All of this ran in 63 bytes of variable space on a BASIC Stamp, with code in PBASIC, in a 26ms task. (authors note: The linkage claws on the 2001 robot are some of the coolest things I have ever seen)
Our 2013 robot ran one PI controller, with one potentiometer, and several state machines. Autonomous has no closed-loop controls, just thresholds and sequences. Granted, we use interpolation tables (which include For loops, not entirely lightweight) and floating-point math, but we're running a 10 ms task on a 400 MHz PowerPC and using more than half of the available CPU time.
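For readers unfamiliar with the interpolation tables mentioned above, here is a minimal sketch: a sorted (input, output) table with linear interpolation between entries and clamping at the ends. The function name and clamping behavior are my own choices for illustration, not the poster's actual code.

```cpp
#include <cstddef>
#include <utility>

// Linear interpolation over a sorted (input, output) table. The For loop
// below is the "not entirely lightweight" part the post refers to.
double Interpolate(const std::pair<double, double>* table, std::size_t n,
                   double x) {
    if (x <= table[0].first) return table[0].second;         // clamp low
    if (x >= table[n - 1].first) return table[n - 1].second; // clamp high
    for (std::size_t i = 1; i < n; ++i) {
        if (x < table[i].first) {
            // Fraction of the way between the bracketing entries.
            const double t = (x - table[i - 1].first) /
                             (table[i].first - table[i - 1].first);
            return table[i - 1].second +
                   t * (table[i].second - table[i - 1].second);
        }
    }
    return table[n - 1].second;
}
```

A typical use is mapping a measured quantity (shooter distance, joystick position) to an output setpoint through a small hand-tuned table.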
Both robots were world finalists. The 2013 robot still has more feedback controls than most of FRC. FRC does NOT need more processing power.
It takes our 2013 robot a long time to boot up and find the field, and builds take a minute or so. The 2001 control system would have booted and established a radio link in 5s, under 1s on tether (~200ms I've heard).
Speaking of boot times, I've worked on programs which mandate a 250 ms boot time from power applied to ready-to-synchronize-and-start, and which can do a soft reset (the module stops executing code and starts from the beginning without powering down) without stalling the engine. These programs run over a million lines of C code, with control loops at 1 kHz, on a PowerPC core similar to the cRIO's but running half as fast.
I've heard VxWorks boots in a few seconds on the cRIO, so why does it take so long for user code to come up and init? I know some of the init code is even more inefficient than the runtime code (Encoder4x does a typedef conversion of the three lines A/B/Index 12 times or more at init), but it's still 30s or so before user code even begins to init.
apples000
18-06-2013, 06:59
Does anybody know where the extra time is actually coming from?
JamesTerm
19-06-2013, 07:32
The only reason it's so darn hard to do stuff now is because of inefficiency in the current system - The WPIlib in LabVIEW is so unoptimized that it's nearly impossible to run code that uses IO calls faster than 20ms task time without saturating the CPU.
I'm wondering if any C++ teams had any performance issues as stated here... From my Wind River experience, I have no complaints with WPILib performance, and we do some complex code. We run using a 10 ms sleep but could probably do 5 ms quite easily... the entire loop clocks in around 1-2 ms... I'll verify later today... but the CPU usage was under 30%.
P.S. it was amazing what could be done with 6502 assembly! C=
MrRoboSteve
19-06-2013, 10:34
I looked at a lot of C++ code at the regionals where I was CSA, and with the exception of vision processing didn't run across a team who had CPU utilization problems related to WPILib performance.
It would be interesting to see a sample project that demonstrates the issue.
Radical Pi
19-06-2013, 16:18
I'm wondering if any c++ teams had any performance issues as stated here... From my wind-river experience, I have no complaints with WPIlib performance, and we do some complex code. We run using a 10ms sleep but could probably do 5ms quite easily... the entire loop clocks around 1 - 2 ms... I'll verify later today... but the cpu usage was under 30%.
Same thing here. The only time we ever saw 100% CPU was due to a bug in SmartDashboard that I missed the patch for. Even then it didn't cause any significant issues. Same thing with Java from what I've heard.
Frank has done it again. He smuggled out a picture of the 2015 control board on his blog. It is chip-less!!
Tom Line
19-06-2013, 17:20
I have two years logs for 1718. Their typical usage for matches in 2013 St Louis was under 30%. At earlier 2013 competitions it is several points lower, and in 2012 at the end of the season it was 40%. I don't have logs for the other teams on my laptop.
To add information to this:
In 2012 the two biggest chunks of processor usage were our vision processing and our speed control on the shooter. The vision processing would increase CPU utilization to 100%, but only for a fraction of a second before we started shooting.
Our speed control ran in a timed loop, and that was the single biggest contributor to our CPU usage that year. We scaled it back from 5 ms to 10 or 15 (I can't remember the final value) to improve CPU utilization.
In 2013, our speed control loop was the biggest user again. We didn't use vision at all. During the year, we stopped using the old '09 Classmate because the lag was noticeably worse than with a new laptop with an i5 processor, even with the stock driver station.
I wouldn't say that CPU usage has ever limited what we've done in competition. Teams need to understand how the changes they make affect CPU usage, though. Perhaps it's time to flash a message in the diagnostic window when CPU usage is approaching 100%. That would at least let users know when something is wrong. Many new users don't know enough to look at the charting tab.
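As a rough sketch of the warning idea above, one simple proxy for "CPU approaching 100%" is detecting when a loop body overruns its period. This is illustrative C++, not part of the actual Driver Station or WPILib; the class name and message are made up.

```cpp
#include <cstdio>

// Times each pass of a periodic loop and complains when the body takes
// longer than the configured period, i.e. when determinism is being lost.
class LoopWatchdog {
public:
    explicit LoopWatchdog(double periodSeconds) : period_(periodSeconds) {}

    // Returns true (and prints a warning) when the measured loop body
    // exceeded the configured period.
    bool Check(double elapsedSeconds) {
        if (elapsedSeconds > period_) {
            std::printf("Loop overrun: %.1f ms body in a %.1f ms period\n",
                        elapsedSeconds * 1000.0, period_ * 1000.0);
            return true;
        }
        return false;
    }

private:
    double period_;
};
```

In practice the caller would time each pass of its control loop and feed the elapsed time to `Check()`, surfacing exactly the "something is wrong" signal new users currently have to dig out of the charting tab.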
There is no getting around it though: an FRC control system is not simple. Personally, I don't think that it's too complicated for high school students to utilize.
The single biggest issue I have with the cRIO and other systems is compile time. To completely remove compile-time issues from our robot, we now have every single constant or modifiable value stored in text files. Updating something is a matter of changing the text in the file and uploading it to the cRIO via FTP. It takes about 5 seconds after power-on, since you don't have to wait for code init or anything else; the cRIO operating system boots very quickly. The change was necessitated by the time at Worlds in 2011 when we weren't able to finish tweaking our two-tube auton because it was taking too long to compile. We can do it while the robot is live, too, and a single button press reloads all the constants from the text files.
The only time this year we actually reprogrammed anything was when we added a drive-to-mid-line-and-stop on the center discs in case we played against 469. On another note, this is the second time we specifically had to write an auto mode to try to stop them; the first time was in 2010.
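A minimal sketch of the constants-in-text-files approach described above, assuming a simple "name value" per-line format; the format and function name are assumptions for illustration, not 1718's actual implementation.

```cpp
#include <fstream>
#include <istream>
#include <map>
#include <sstream>
#include <string>

// Parse "name value" pairs, one per line, so tuning values can be changed
// by FTPing a new text file to the controller instead of recompiling.
// Lines that don't parse as a name followed by a number are skipped.
std::map<std::string, double> LoadConstants(std::istream& in) {
    std::map<std::string, double> constants;
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        std::string name;
        double value;
        if (fields >> name >> value) constants[name] = value;
    }
    return constants;
}
```

On the robot this would be called at init and again on a button press, e.g. `std::ifstream f("/constants.txt"); auto k = LoadConstants(f);`, making a tuning change a 5-second FTP upload rather than a multi-minute rebuild.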
Frank has done it again. He smuggled out a picture of the 2015 control board on his blog. It is chip-less!!
This is now part of the 6 week design process. Some assembly required.
JamesTerm
19-06-2013, 17:29
To add information to this:
Our speed control ran in a timed loop, and that was the single biggest contributor to our CPU usage that year. We scaled it back from 5 ms to 10 or 15 (I can't remember the final value) to improve CPU utilization.
Which platform/language are you using?
If it is c++, Do you use the PID functionality included in the WPI Libraries?
Tom Line
19-06-2013, 17:36
The CMUcam wasn't as great as the current axis cam, but the only successful implementations of vision that I know of don't use the cRIO.
There were many successful vision implementations that used the cRIO. It is important to understand the difference between real-time targeting and taking a single frame to aim.
Tom Line
19-06-2013, 17:39
Which platform/language are you using?
If it is c++, Do you use the PID functionality included in the WPI Libraries?
We use LabVIEW exclusively, and use Velocity PID loops that we wrote. We were using the 2011 banner light sensor with reflective tape on half the wheel just like Jared's implementation on 341 Miss Daisy.
The PID is not what eats the CPU. The timed loop in LabVIEW actually forces the loop to a certain timing, rather than just waiting or sleeping it. To simplify it somewhat, if you set a timed loop to 5ms, all the other tasks will take a back seat to that single loop running every 5ms. It really hurts CPU.
I don't know if c++ has an equivalent.
JamesTerm
19-06-2013, 18:19
The PID is not what eats the CPU. The timed loop in LabVIEW actually forces the loop to a certain timing, rather than just waiting or sleeping it. To simplify it somewhat, if you set a timed loop to 5ms, all the other tasks will take a back seat to that single loop running every 5ms. It really hurts CPU.
I don't know if c++ has an equivalent.
The WPILib PID code in PIDController.cpp (as of revision 3487) opens a Notifier object with a default period of 50 ms. As you describe, it forces the loop to a fixed timing; the reasoning is that the design does not want to calculate over uneven time slices within the PID algorithm. The results are good from a functionality standpoint, but it has the same context-switching issue, especially if one wishes to use a lower period. In C++ the programmer does not have to go this direction: they can write their own PID (http://www.termstech.com/files/Archives/PIDController.zip) and avoid the context switching altogether. I proposed this solution for the WPI library, but there was not enough expressed interest in going that direction. Really, when you think of it, a machine can only execute one instruction at a time; when you have tasks/threads, the context-switch overhead can really eat up CPU, and it doesn't have to be this way. I have spent several years on this issue, documented here: http://www.termstech.com/articles/PID_Kalman.html
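A minimal sketch of the caller-timed alternative described here: the caller measures the elapsed time and passes it in, so no Notifier thread (and no extra context switch) is needed. The class name and gain handling are illustrative, not the linked implementation.

```cpp
// A PID whose integral and derivative terms are normalized by the
// caller-supplied time step, so it can run inside the main loop at
// whatever rate the loop happens to achieve.
class DeltaTimePID {
public:
    DeltaTimePID(double p, double i, double d) : p_(p), i_(i), d_(d) {}

    // Called from the main loop with the elapsed time since the last call.
    double Calculate(double setpoint, double measured, double dt) {
        const double error = setpoint - measured;
        integral_ += error * dt;  // time-normalized integral
        const double derivative =
            (dt > 0.0) ? (error - lastError_) / dt : 0.0;
        lastError_ = error;
        return p_ * error + i_ * integral_ + d_ * derivative;
    }

private:
    double p_, i_, d_;
    double integral_ = 0.0;
    double lastError_ = 0.0;
};
```

Because the math scales by `dt`, a jittery loop period changes only the update granularity, not the controller's calibration, which is what makes the dedicated timing thread optional.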
Interesting,
The last time I ran the VI profiler with the full WPIlib the highest time VI's were the Relay Set and Motor Set, and the inefficiencies of them stacked up.
One of the real reasons LV is so inefficient even with seemingly simple code is the way it deals with execution. In C and most other languages a function is an almost purely logical construct which just segments code; LV does not separate VIs this way.
In LV, each VI is a node in the execution system. LV manages a smaller set of VxWorks tasks and an execution state for each VI. For this reason, by default each VI has a single instance and any local memory is retained between calls (you can use a VI for data storage by having a get/set input plus a data input/output and a shift register; WPILib does this a lot if you look). Any single VI can also only execute once at a given time, so an execution in one thread blocks the same VI from executing in a different thread. When all of the inputs necessary for a VI to execute are ready, the VI is scheduled into a task to run at the earliest opportunity; then its data outputs are set and the dependent nodes can execute.
This execution system is significantly more inefficient than a C function call, but virtually nobody realizes this. For this reason, a VI call is considered an expensive entity, not as expensive as another task entirely but not as cheap as a C function call. The ways around it are to set the VI as subroutined (this will not work if any contents are non-subroutined or blocking nodes) which cheapens it to near a C function call, or set the VI as inlined (this does not require non-subroutined subVI's but does require the VI to be re-entrant) which is inlined at compile time (this can reduce compile efficiency if you change a VI which is inlined as it has to rebuild all VI's which include it). Both subroutined and inlined VI's cannot display front panel data or probes in realtime when debugging, but you can still pass data through the connector as usual.
In a lot of ways the LV execution system helps a lot when you want to do multitasking (which is trivial in LV) and the single local variable set helps with data storage in quite a few cases, but if you don't understand it and set the subroutine and inlined properties rigorously for every VI in a project, the inefficiencies of the execution system stack up really fast, especially for a library the size of WPIlib plus a team project with over a hundred VI's. In 2012 I thoroughly went through my WPIlib copy to subroutine and inline VI's where possible, I believe some of this was later integrated to WPIlib.
I personally think it's quite reasonable in RT embedded systems to essentially cheat the OS. Every other RT embedded system I've worked on runs purely statically allocated RAM and uses processor ISR's to deal with tasks, so the OS kernel is a single function (timed ISR) and there are no context switches. There is then no penalty for doing context switches frequently, but we still try to optimize it.
In C++ the PID only runs at 50 ms (20 Hz)? That seems insanely slow! I would expect at least 20 ms to be considered a real-time control loop.
Tom Line
19-06-2013, 22:09
The WPILib PID code in PIDController.cpp (as of revision 3487) opens a Notifier object with a default period of 50 ms. As you describe, it forces the loop to a fixed timing; the reasoning is that the design does not want to calculate over uneven time slices within the PID algorithm. The results are good from a functionality standpoint, but it has the same context-switching issue, especially if one wishes to use a lower period. In C++ the programmer does not have to go this direction: they can write their own PID (http://www.termstech.com/files/Archives/PIDController.zip) and avoid the context switching altogether. I proposed this solution for the WPI library, but there was not enough expressed interest in going that direction. Really, when you think of it, a machine can only execute one instruction at a time; when you have tasks/threads, the context-switch overhead can really eat up CPU, and it doesn't have to be this way. I have spent several years on this issue, documented here: http://www.termstech.com/articles/PID_Kalman.html
Our homebrew PID normalizes for time, so the calculation itself does not need to be inside a timed loop. We place it in a timed loop so that we can attempt to match the speed controller update rate for the best 'idealized' performance. I'm not sure how you'd go about this without a timed loop.
JamesTerm
20-06-2013, 04:37
I personally think it's quite reasonable in RT embedded systems to essentially cheat the OS. Every other RT embedded system I've worked on runs purely statically allocated RAM and uses processor ISR's to deal with tasks, so the OS kernel is a single function (timed ISR) and there are no context switches. There is then no penalty for doing context switches frequently, but we still try to optimize it.
That is very cool regarding the ISRs dealing with tasks... I'll want to research this a little tomorrow. What I have found (using an Intel i7 processor) is to treat threads (aka tasks) as sparingly as possible, reserving them only for timing and I/O tasks. We tried some parallelization techniques and found that unless there is NO convergence (which is usually not the case), we don't come out ahead on time except in very rare cases that only benefit on some machines. Doing less work, simplified into one task, usually wins. I'm not saying optimal parallelization is impossible, but achieving it makes the code less readable and maintainable than we cared to pursue.
In C++ the PID only runs at 50ms (20hz)? That seems insanely slow! I would expect at least 20ms to be considered a realtime control loop.
This is what this looks like:
PIDController(float p, float i, float d,
PIDSource *source, PIDOutput *output,
float period = 0.05);
I am not sure why this was chosen as the default parameter, but as I recall from Brad, it was tested by several teams and worked well for many general cases. I think I may have an email on that somewhere. One can easily construct this class with a different rate, but teams that used RobotBuilder this season probably could have overlooked this in the auto-generated C++ code.
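To illustrate how the defaulted period gets picked up silently, here is a stand-in with the same constructor shape as the declaration quoted above. This is not the real WPILib class, just the signature shape; a team wanting a faster update simply passes the period explicitly.

```cpp
// Stand-in mirroring the PIDController constructor shape: if the caller
// omits the last argument, the 50 ms default applies without any warning.
struct PIDControllerSketch {
    float p, i, d, period;
    PIDControllerSketch(float p, float i, float d, float period = 0.05f)
        : p(p), i(i), d(d), period(period) {}
};

// Defaulted: updates every 50 ms, the case RobotBuilder users may miss.
// PIDControllerSketch slow(0.1f, 0.0f, 0.0f);
// Explicit: a 10 ms update rate.
// PIDControllerSketch fast(0.1f, 0.0f, 0.0f, 0.01f);
```

The point is simply that the period is an ordinary constructor argument, so "the PID runs at 50 ms" is only true for callers who never override it.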
JamesTerm
20-06-2013, 04:57
Our homebrew PID normalizes for time, so the calculation itself does not need to be inside a timed loop. We place it in a timed loop so that we can attempt to match the speed controller update rate for the best 'idealized' performance. I'm not sure how you'd go about this without a timed loop.
There are a few questions I have on this:
1. How well would this work mechanically if you used a 10ms timed loop?
2. How much improvement would the cpu usage be for this?
Unfortunately, I do not know much about LabVIEW at all, but in C++ you call GetTime() in your main OperatorControl() loop and pass the elapsed time as a parameter. For our code the entire loop consists of a "void TimeChange(double dTime_s)" call delegated out to various classes, including the PID controller class (one per rotary system).
Here is our main loop
double LastTime = GetTime();
while (IsOperatorControl() && !IsDisabled())
{
    const double CurrentTime = GetTime();
    //I'll keep this around as a synthetic time option for debug purposes
    //const double DeltaTime=0.020;
    const double DeltaTime = CurrentTime - LastTime;
    LastTime = CurrentTime;
    //Framework::Base::DebugOutput("%f\n",time),
    m_Manager.TimeChange(DeltaTime);
    Wait(0.010);
}
Greg McKaskle
20-06-2013, 08:01
Performance:
The attached image shows how performance has been tracked over a few years; times are in ms. While you may not feel that anything (or enough) has changed in WPILib, it shows the relative call timing for various WPILib functions across the WPILibs that were shipped. The tests have been in place, and the results were used to verify that the speeds were where we wanted them.
So, the numbers for 2012 sped up. Why? The default for LabVIEW WPILib was to leave debugging on and make the code easier to explore and run subVIs. After discussions and tests, we decided to change a few implementation details and to migrate to a newer built-in error merge node. We decided to turn off debugging on some core functions and to inline a few others. The libraries could be faster, but the goal is to strike a balance. This is not meant to be a maxed-out race car, but a training/learning car. The runtime performance of WPILib seems fine from the logs I've seen and from the performance tests we regularly monitor. Vision can easily max out the PPC or any other CPU, and that is a relatively independent issue. If WPILib were for professional engineers it would be written far differently, perhaps like the sensor classes that our commercial robotics product ships.
LV is inefficient:
LV subVIs offer many features, and when used, they naturally add overhead. "Inefficient" implies it isn't useful work. A subVI has a set of execution properties that affect how it executes. They are ...
Debugging: Allow for breakpoints, probes, and realtime panel viewing. The code is a bit larger, slower, and much friendlier for mentor/student debugging sessions often without even needing to recompile.
Reentrancy: Select between nonreentrant, preallocated, and shared pool reentrancy. Nonreentrant means that the subVI has a critical section around it to protect state data, a hardware session, etc. It is a parallel language, and this is the default to avoid race conditions. Preallocated is useful when state data needs to be per instance such as with a PID. Shared has a common set of pooled temporaries and the lack of critical section enables parallelism. For computational subVIs that are very small and called often, it is sometimes useful to make them reentrant to lower call time by losing the critical section.
Inlining: The function call overhead is jettisoned entirely and the code is compiled into callers. This trades off call overhead for debugability and potentially code size.
Priority: All but subroutine are implemented by choosing which priority the OS thread has when it executes this VI. Boosting is also used to avoid inversion. The subroutine is special in that it serializes code within the VI, doesn't generate cooperative yields, and has the least amount of debugging features. For high function count calls this is an optimization tradeoff you can make.
System: This is typically used to isolate I/O libraries that use thread-local storage, sleep, or otherwise block their thread.
------------
Execution System Explanation:
As discussed above, VIs can be low overhead or higher overhead. The execution system of LV allows the data flow of wires to trigger and suspend execution. It does this very efficiently and with simple syntax. Features have a cost; for example, C++ dynamic dispatch isn't as fast as static dispatch, and debugging code tends to be slower to execute than optimized code. Overly inlined VIs are just as hard to debug as C++ code. The fact that you are aware of these tradeoffs is great, and one of the benefits of projects such as FRC. I do wish that you could accept that WPILib needs to balance runtime performance and development experience. Even within your team, I'd expect that there are benefits to using floats and higher-level abstractions. Put another way, is the CPU usage wasted if it is used to clarify your thought process or to simplify how something is taught?
I'm happy to answer questions or look into performance issues you perceive in WPILib, but it would be great if the feedback was more direct and less of a rant that hops from subject to subject.
Greg McKaskle
Greg McKaskle
20-06-2013, 08:10
was there ever a fix to the no-app issues
It is better understood. It has less to do with high CPU usage and more to do with error handling, but since the current error hook is slow, they go together.
The issue is that the auto-error handling hook that we added which allows us to send errors to the console is not typically used on RT or even on desktop. It is not very optimized and it is difficult to modify. It also turns out that it can confuse the RT protocol that is trying to exit the current app and start the new one.
I have written code that will route errors more efficiently and will not cause issues with RT, but at the moment it is shelved waiting for the LV release to complete. It is expected to make it into the next version of LV, and there is potential for it to go into an FRC patch.
If this doesn't happen, we now know what causes it and will be able to work around it. So in effect, yes. It should get fixed this next year.
Greg McKaskle
MrRoboSteve
20-06-2013, 13:56
Real-world software developers have to make approachability/speed of development vs raw performance tradeoffs every day. Consider implementing a desktop app. Do you use a cross platform environment like Java, gaining the ability to run in many places but losing some fidelity to platform look and feel and less control over the memory footprint of your application? Or do you use C# and lose the ability to move across platforms, but get high productivity and platform fidelity if you're on Windows? Or write in C++, gaining the performance benefits it brings but also bringing lower developer productivity? Depends on the situation.
The FRC control system provides a range of options for control system development that let you trade off approachability and speed of development against raw performance.
Generally, the LV environment is more approachable and easier to get up and running, with some tradeoff in performance. Whether the tradeoff matters varies based on your scenario.
To beat up on the LV implementation because "it's slow" misses the point -- as Greg points out, the LV implementation is intended to support approachability first, while not unnecessarily giving up on performance. But, like every other environment, if you care about speed greatly (e.g., you're going to debate 10 vs 20 vs 50 ms update times), you're going to end up having to learn about the internals of the environment, and may end up replacing parts of it with custom coding that implement a different approachability/speed tradeoff.
Or you may decide that LV isn't for you and you should use Java or C++ because it's better suited to your scenario.
A few things,
First, I don't know of anyone else on a team who debugs the WPIlib. It would make sense to me to heavily optimize it, since every other LabVIEW team I've talked to treats it as a golden black box. They don't see it as something to be modified, they see it as something to use and assume it works as they expect. I acknowledge that it's improving over the years, but I still don't like it.
Second, I come from a real-time software world. We write our code in C or autocode it from Simulink/Stateflow, and assume a lot about timing determinism. It bugs me a lot to see a software environment that should be capable of real-time control running with such poor timing determinism, and everyone accepting this as a good or acceptable thing. LabVIEW itself has proven to be a great tool, but there is a lot of overhead in the FRC-specific stuff, which bogs it down too much.
Third, the RT software world assumes timing determinism; it's considered a total failure of the operating system and control software if timing goals cannot be achieved. I'm not saying we should demand such careful and extensive optimization, but if a 56 MHz PowerPC can run RT PID loops at 2 ms, we should be able to get 5 ms out of our 400 MHz PowerPC.
FRC is not a desktop app. We do have realtime control system performance to consider. Every year we have re-evaluated switching to C programming (procedural, not OO) for performance, and keep coming back to the debugging tools of LabVIEW vs what the C environment has to offer.
JamesTerm
21-06-2013, 08:10
FRC is not a desktop app. We do have realtime control system performance to consider. Every year we have re-evaluated switching to C programming (procedural, not OO) for performance, and keep coming back to the debugging tools of LabVIEW vs what the C environment has to offer.
Procedural, not OO, eh? Sounds like something I used to say ;)
I feel your pain brother,
When I was a kid making games in BASIC for my Commodore, and the sprite collision detection was so late... I got so frustrated. I had the need for speed. I was 11 at the time... BASIC got me into programming, but if I really wanted speed I had to take a leap of faith, as no one was going to make the BASIC go any faster... so I invested in a VICMON and started 6502 assembly. I got the speed I needed, and this decision ended up changing my life, helping me find the career I have today.
If you are like me and you have the need for speed... join the elite 18% of teams who have found it. You can do C-style programming in C++, and I believe it supports inline assembly as well. You can still make diagrams too... I make them in PowerPoint and then write the code from them afterward.
Don't complain and hope for change... be the change!
Greg McKaskle
21-06-2013, 09:50
This morning I created a new templated FRC project and ran it on my 4-slot cRIO. It still contains NetworkTables, empty teleop loops, etc. I did not run the dashboard.
CPU usage was 20%.
In Periodic Tasks, I dropped in a timed loop, set it to a 5 ms period, and placed four pot reads, four PIDs, and four motor sets in the loop.
CPU usage was 45%.
I split the code into four timed loops running in parallel, each with one pot read, one PID, and one motor set.
CPU usage was 50%.
I deleted three of these and set the remaining one to 1 ms timing.
CPU usage was 60%.
The only time the CPU usage went to 100% was when I made a mistake and was receiving error messages and when I was charting my time intervals to look at the determinism.
-------------------
If I were to place more than 5 ms of work inside a 5 ms loop, I would indeed lose determinism, peg the CPU, etc. In that case, I'd either need to up the loop period or simplify the processing, perhaps by using the lower-level layers of WPILib.
Given the table I posted earlier, you can calculate how many motor sets you can make per second, how many pot reads you can make, etc. Faster loops make more calls, and that is the major determining factor, not the loop or scheduling overhead.
The timed loop runs at time-critical priority, and its CPU load will be a bit higher than a normal loop's because it causes more context switches and runs on a different scheduler, but this is not a big overhead if you want good determinism. The default framework uses normal loops because of their simplicity; while not as deterministic, they seem fine for typical use.
Do you see an issue with my measurements?
To summarize, I concede that WPILib is not the ultimate fast library. It is a learning library meant to introduce people to programming and robotics. It also isn't perfect, and if you see an issue with it, please let us know, ideally with some sample code. And if you have outgrown it and choose not to use much of it, that is fine. It was written so that you can choose your level of abstraction, choose your level of challenge, etc.
pigpenguin
30-07-2013, 00:35
Am I reading the diagram correctly? Will there be no router needed either? (The diagram in question: http://i.imgur.com/eA3Bvfu.jpg)
wilsonmw04
30-07-2013, 00:59
Am I reading the diagram correctly? Will there be no router needed either? (The diagram in question: http://i.imgur.com/eA3Bvfu.jpg)
It still says "wireless communications" bridges the robot and the control station, so I would assume from the diagram that we are still using wireless. The pics of the board don't look like they have WiFi on them to me. I would assume a router of some sort will still be needed.
The prototype at worlds did not have onboard wireless. I got the impression that comms between the DS and Athena will still be Ethernet. It makes sense to leave the wireless part off the board; it makes it easier to change. There was a large Qualcomm booth at worlds...
But you will still need a router (actually more of a switch and a bridge)
ayeckley
30-07-2013, 18:33
Will we be seeing more of Athena at NI Week?
Steven Donow
30-07-2013, 18:35
Will we be seeing more of Athena at NI Week?
Per this tweet:
https://twitter.com/nifirstrobotics/status/357680410395820035
The event will be livestreamed on August 8th @ 8:30 CST (9:30 EST?) at NI.com/FIRST
https://pbs.twimg.com/media/BPa8cC2CQAE_3bX.png:large
ayeckley
30-07-2013, 19:08
Thanks - I'll put it on my agenda. Any other C-D folks going to be there in person?
Thanks - I'll put it on my agenda. Any other C-D folks going to be there in person?
Yup :)
s1900ahon
31-07-2013, 12:53
Any other C-D folks going to be there in person?
Absolutely.
jvriezen
31-07-2013, 16:30
We have a mentor and two students going to NI Week from our team.
Tom Line
01-08-2013, 15:20
We'll be there as well. Cameras and video cameras will be allowed into the event, so we'll get all the pictures and video we can.
Chris_Ely
01-08-2013, 18:20
Does anyone know if the release webcast will be archived afterward?
arizonafoxx
02-08-2013, 12:55
Is it August 8th yet?
Does anyone know if the release webcast will be archived afterward?
Yes - we will have both the keynote announcement and the Q&A session available for viewing after the event. We will try to get both links up on ni.com/first as quickly as possible.
Chris_Ely
02-08-2013, 17:20
Yes - we will have both the keynote announcement and the Q&A session available for viewing after the event. We will try to get both links up on ni.com/first as quickly as possible.
Thanks!