#91
Re: NI releasing/designing new controller for FRC
Does anybody know where the extra time is actually coming from?
#92
Re: NI releasing/designing new controller for FRC
Quote:
I'm wondering if any C++ teams had any performance issues as stated here. From my WindRiver experience, I have no complaints with WPILib performance, and we run some complex code. We run with a 10 ms sleep but could probably do 5 ms quite easily; the entire loop clocks in around 1-2 ms (I'll verify later today), and CPU usage was under 30%.

P.S. It was amazing what could be done with 6502 assembly! C=

Last edited by JamesTerm : 06-19-2013 at 10:45 AM.
#93
Re: NI releasing/designing new controller for FRC
I looked at a lot of C++ code at the regionals where I was a CSA, and with the exception of vision processing I didn't run across a team that had CPU utilization problems related to WPILib performance.

It would be interesting to see a sample project that demonstrates the issue.
#94
Re: NI releasing/designing new controller for FRC
Quote:
#95
Re: NI releasing/designing new controller for FRC
Frank has done it again. He smuggled out a picture of the 2015 control board on his blog. It is chip-less!!
#96
Re: NI releasing/designing new controller for FRC
Quote:
In 2012 the two biggest chunks of processor usage were our vision processing and our speed control on the shooter. The vision processing would push CPU utilization to 100%, but only for a fraction of a second before we started shooting. Our speed control ran in a timed loop, and that was the single biggest contributor to our CPU usage that year. We scaled it back from 5 ms to 10 or 15 (I can't remember the final value) to improve CPU utilization.

In 2013, our speed control loop was the biggest user again. We didn't use vision at all. During the year, we stopped using the old '09 Classmate because the lag was noticeably worse than with a new laptop with an i5 processor, even with the stock driver station.

I wouldn't say that CPU usage has ever limited what we've done in competition, but teams need to understand how the changes they make affect CPU usage. Perhaps it's time to flash a message in the diagnostic window when CPU usage is approaching 100%. That would at least let users know when something is wrong; many new users don't know enough to look at the charting tab. There is no getting around it, though: an FRC control system is not simple. Personally, I don't think it's too complicated for high school students to use.

The single biggest issue I have with the cRIO and other systems is compile time. To completely remove compile-time issues from our robot, we now have every constant or modifiable value stored in text files. Updating something is a matter of changing the text in the file and uploading it to the cRIO via FTP. It takes about 5 seconds after power-on, since you don't have to wait for code init or anything else; the cRIO operating system boots very quickly. The change was necessitated by the time at Worlds in 2011 when we weren't able to finish tweaking our two-tube auton because it was taking too long to compile. We can do it while the robot is live, too, and a single button press reloads all the constants from the text files. The only time this year we actually reprogrammed anything was when we added a drive-to-the-mid-line-and-stop on the center discs in case we played against 469. On another note, this is the second time we specifically had to write an auto mode to try to stop them; the first time was in 2010.

Last edited by Tom Line : 06-19-2013 at 05:33 PM.
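A minimal C++ sketch of the text-file-constants idea described above (not this team's actual code; the one-name=value-per-line format and the file path are assumptions): the file is pushed to the cRIO over FTP and re-read whenever a reload is requested, so no recompile or redeploy is needed.

Code:
#include <fstream>
#include <map>
#include <sstream>
#include <string>

// Load "name=value" pairs from a text file into a map.  Because the file lives
// on the cRIO's flash and is uploaded over FTP, changing a constant needs no
// rebuild.  Path and format here are illustrative only.
std::map<std::string, double> LoadConstants(const std::string &path)
{
    std::map<std::string, double> constants;
    std::ifstream file(path.c_str());
    std::string line;
    while (std::getline(file, line))
    {
        std::istringstream entry(line);
        std::string name;
        double value;
        // Split "ShooterSpeed=3200" into a name and a numeric value.
        if (std::getline(entry, name, '=') && (entry >> value))
            constants[name] = value;
    }
    return constants;
}

// Usage -- re-run this whenever a "reload constants" button is pressed:
//   std::map<std::string, double> k = LoadConstants("/constants.txt");
//   double shooterSpeed = k["ShooterSpeed"];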
#97
Re: NI releasing/designing new controller for FRC
This is now part of the 6-week design process. Some assembly required.
#98
Re: NI releasing/designing new controller for FRC
Quote:
If it is C++, do you use the PID functionality included in the WPI libraries?
#99
Re: NI releasing/designing new controller for FRC
There were many successful vision implementations that used the cRIO. It is important to understand the difference between real-time targeting and taking a single frame to aim.
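To make that distinction concrete, here is a rough C++ sketch. The vision and turret calls are hypothetical placeholders rather than real WPILib/NIVision APIs; only the loop structure matters. Single-frame aiming does one burst of image processing right before a shot, while real-time targeting repeats that work every iteration.

Code:
#include <WPILib.h>

// Hypothetical placeholders -- stand-ins for a team's own vision pipeline and
// turret control, NOT real WPILib/NIVision calls.
double FindTargetOffsetDegrees() { return 0.0; /* process one camera frame */ }
void TurnTurretBy(double degrees) { /* command the turret mechanism */ }

// Single-frame aiming: one burst of image processing right before the shot,
// then the CPU is free again.  Many successful cRIO teams aimed this way.
void AimOnce()
{
    TurnTurretBy(FindTargetOffsetDegrees());
}

// Real-time targeting: the same image processing on every loop iteration,
// which is where a sustained CPU load comes from.
void TrackWhileEnabled(volatile bool &trackingEnabled)
{
    while (trackingEnabled)
    {
        TurnTurretBy(FindTargetOffsetDegrees());
        Wait(0.1); // WPILib Wait(); even ~10 Hz vision was demanding on a cRIO
    }
}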
#100
Re: NI releasing/designing new controller for FRC
Quote:
The PID is not what eats the CPU. A timed loop in LabVIEW actually forces the loop to a fixed timing, rather than just waiting or sleeping. To simplify somewhat, if you set a timed loop to 5 ms, all the other tasks take a back seat to that single loop running every 5 ms. It really hurts the CPU. I don't know if C++ has an equivalent.
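A rough C++ illustration of the difference, using WPILib's GetTime() and Wait() (DoControlWork() is a made-up placeholder): a plain Wait(0.005) sleeps 5 ms after the work, so the real period is work time plus 5 ms, while a fixed-period loop subtracts the work time so an iteration starts every 5 ms regardless, which is closer to what a LabVIEW timed loop enforces.

Code:
#include <WPILib.h>

void DoControlWork() { /* placeholder for a team's actual control code */ }

// Sleep-based loop: the real period is (work time + 5 ms), and whatever CPU
// time is left over goes to other tasks.
void RunWithSleep(bool &running)
{
    while (running)
    {
        DoControlWork();
        Wait(0.005); // fixed sleep regardless of how long the work took
    }
}

// Fixed-period loop, closer in spirit to a LabVIEW timed loop: the sleep is
// shortened by however long the work took, so an iteration starts every 5 ms.
// The tighter the budget, the less slack is left for everything else.
void RunAtFixedPeriod(bool &running)
{
    const double period = 0.005;   // seconds
    double nextStart = GetTime();  // WPILib free function, returns seconds
    while (running)
    {
        DoControlWork();
        nextStart += period;
        const double remaining = nextStart - GetTime();
        if (remaining > 0.0)
            Wait(remaining);           // sleep only the leftover part of the period
        else
            nextStart = GetTime();     // overran the period; resynchronize
    }
}

Either way the work itself is unchanged; only how the idle time is scheduled differs.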
#101
Re: NI releasing/designing new controller for FRC
Quote:
#102
Re: NI releasing/designing new controller for FRC
Interesting,
The last time I ran the VI profiler with the full WPILib, the highest-time VIs were the Relay Set and Motor Set, and their inefficiencies stacked up.

One of the real reasons LabVIEW is so inefficient even with seemingly simple code is the way it deals with execution. In C and most other languages a function is essentially a logical construct that just segments code; LabVIEW does not separate VIs this way. In LabVIEW, each VI is a node in the execution system. LabVIEW manages a smaller set of VxWorks tasks and an execution state for each VI. By default each VI has a single instance and any local memory is retained between calls (you can use a VI for data storage by giving it a get/set input plus a data input/output and a shift register; WPILib does this a lot if you look). Any single VI can also only execute once at a given time, so an execution in one thread blocks the same VI from executing in a different thread. When all of the inputs a VI needs are ready, the VI is scheduled into a task to run at the earliest opportunity; then its outputs are set and the dependent nodes can execute. This execution system is significantly less efficient than a C function call, but virtually nobody realizes it.

For this reason, a VI call is an expensive entity: not as expensive as another task entirely, but not as cheap as a C function call. The ways around it are to set the VI as subroutined (this will not work if any of its contents are non-subroutined or blocking nodes), which cheapens it to near a C function call, or to set the VI as inlined (this does not require subroutined subVIs, but does require the VI to be re-entrant), which inlines it at compile time (this can reduce compile efficiency, because changing an inlined VI forces a rebuild of every VI that includes it). Subroutined and inlined VIs cannot display front-panel data or probes in real time when debugging, but you can still pass data through the connector as usual.

In a lot of ways the LabVIEW execution system helps when you want to do multitasking (which is trivial in LabVIEW), and the single retained local variable set helps with data storage in quite a few cases. But if you don't understand it and don't set the subroutine and inlined properties rigorously for every VI in a project, the inefficiencies of the execution system stack up really fast, especially for a library the size of WPILib plus a team project with over a hundred VIs. In 2012 I thoroughly went through my WPILib copy to subroutine and inline VIs where possible; I believe some of this was later integrated into WPILib.

I personally think it's quite reasonable in RT embedded systems to essentially cheat the OS. Every other RT embedded system I've worked on runs purely statically allocated RAM and uses processor ISRs to deal with tasks, so the OS kernel is a single function (a timed ISR) and there are no context switches. There is then no penalty for doing "context switches" frequently, but we still try to optimize.

In C++ the PID only runs at 50 ms (20 Hz)? That seems insanely slow! I would expect at least 20 ms to be considered a real-time control loop.
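For the text-language folks, here is a loose C++ analogy of the default VI behavior described above. The names are invented and this is only an analogy, not how LabVIEW is implemented: a non-reentrant VI acts like one shared object whose state persists between calls and whose calls serialize behind a lock, while a subroutined or inlined VI acts more like an ordinary, inlinable function.

Code:
#include <pthread.h>

// Analogy of a default (non-reentrant) VI: one shared instance of local state,
// and only one caller may execute it at a time.
class GetSetVI   // analogy of the "get/set with a shift register" data-storage VI
{
public:
    GetSetVI() : m_value(0.0) { pthread_mutex_init(&m_lock, NULL); }
    ~GetSetVI() { pthread_mutex_destroy(&m_lock); }

    // Every call goes through the same lock, like a VI that can only be
    // scheduled into one task at a time, and m_value persists between calls,
    // like a shift register on the VI's loop.
    double Call(bool set, double input)
    {
        pthread_mutex_lock(&m_lock);
        if (set)
            m_value = input;
        const double out = m_value;
        pthread_mutex_unlock(&m_lock);
        return out;
    }

private:
    double m_value;
    pthread_mutex_t m_lock;
};

// Analogy of a subroutined/inlined VI: no retained state, no lock, and the
// compiler is free to inline it -- roughly the cost of a plain C function call.
inline double Scale(double x, double gain) { return x * gain; }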
#103
Re: NI releasing/designing new controller for FRC
Quote:
#104
Re: NI releasing/designing new controller for FRC
Quote:
Quote:
This is what this looks like:

Code:
PIDController(float p, float i, float d, PIDSource *source, PIDOutput *output, float period = 0.05);
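Going by that signature, the loop rate is just the optional last argument. A minimal sketch of overriding it (the gains, channel numbers, and setpoint are placeholder values; Encoder and Jaguar are used here because they implement PIDSource and PIDOutput in the cRIO-era WPILib):

Code:
#include <WPILib.h>

// Not a drop-in program -- the only point is the last constructor argument,
// which overrides the default 0.05 s (20 Hz) period with 0.01 s (100 Hz).
class ShooterExample : public SimpleRobot
{
public:
    ShooterExample()
        : m_encoder(1, 2),   // digital inputs 1 and 2 (placeholders)
          m_motor(3),        // PWM channel 3 (placeholder)
          m_pid(0.002, 0.0, 0.0, &m_encoder, &m_motor, 0.010) // 10 ms period
    {
        m_encoder.Start();
    }

    void OperatorControl()
    {
        m_pid.SetSetpoint(3000.0);   // placeholder setpoint
        m_pid.Enable();              // PID loop now runs every 10 ms
        while (IsOperatorControl() && !IsDisabled())
            Wait(0.02);
    }

private:
    Encoder       m_encoder;   // PIDSource
    Jaguar        m_motor;     // PIDOutput
    PIDController m_pid;
};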
#105
Re: NI releasing/designing new controller for FRC
Quote:
1. How well would this work mechanically if you used a 10 ms timed loop?
2. How much improvement would the CPU usage be for this?

Unfortunately, I do not know much about LabVIEW at all, but in C++ you call GetTime() in your main OperatorControl() loop and pass the time along as a parameter. For our code the entire loop consists of a "void TimeChange(double dTime_s)" call delegated out to various classes, including the PID controller class (per rotary system). Here is our main loop:

Code:
double LastTime = GetTime();
while (IsOperatorControl() && !IsDisabled())
{
    const double CurrentTime = GetTime();
    //I'll keep this around as a synthetic time option for debug purposes
    //const double DeltaTime = 0.020;
    const double DeltaTime = CurrentTime - LastTime;
    LastTime = CurrentTime;
    //Framework::Base::DebugOutput("%f\n", DeltaTime);
    m_Manager.TimeChange(DeltaTime);
    Wait(0.010);
}
Last edited by JamesTerm : 06-20-2013 at 05:27 AM.