#1
Re: OPR-computation-related linear algebra problem
This matrix is quite small compared to those generally solved in finite element, CFD, or other common codes. As was mentioned a little earlier, the biggest speedup comes from processing everything as sparse matrices.
On my 2.0 GHz MacBook Air running MATLAB Student R2012a, I can run:

Code:
tic
d = load('d.dat');
N = load('N.dat');
toc
tic
output = N\d;
toc

and get the output:

Code:
Elapsed time is 2.768235 seconds.  <-- loading files into memory
Elapsed time is 0.404477 seconds.  <-- solving the matrix

If I now change the code to:

Code:
tic
d = load('d.dat');
N = load('N.dat');
toc
tic
Ns = sparse(N);
toc
tic
output = Ns\d;
toc

the output becomes:

Code:
Elapsed time is 2.723927 seconds.  <-- load files
Elapsed time is 0.040358 seconds.  <-- conversion to sparse
Elapsed time is 0.017368 seconds.  <-- solving

There are only 82267 nonzero elements in the N matrix (vs. 2509*2509 ~ 6.3 million), so the sparse solve runs much faster: it essentially skips over the entries that are zero, so it doesn't have to do that part of the solution process.

Here's an iterative method solving the problem. I haven't tuned any iteration parameters for bicgstab (biconjugate gradients, stabilized), so it could be a bit better, but the mean squared error is pretty small.

Code:
tic
d = load('d.dat');
N = load('N.dat');
toc
tic
Ns = sparse(N);
toc
tic
output = bicgstab(Ns,d);
toc
% compute a true output
output_true = Ns\d;
% compute mean squared error of OPR
output_mse = sum((output_true - output).^2)/length(output)

Code:
Elapsed time is 2.728844 seconds.
Elapsed time is 0.040895 seconds.
bicgstab stopped at iteration 20 without converging to the desired tolerance 1e-06 because the maximum number of iterations was reached. The iterate returned (number 20) has relative residual 2e-06.
Elapsed time is 0.015128 seconds.
output_mse = 9.0544e-07

There's not much benefit in the iterative method here; the matrix is quite small. The speedup is much more considerable when you are solving similarly sparse matrices that are huge. In my industry and research work, finite element models can produce matrices that are millions by millions or more; at that point you need sophisticated algorithms.
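To make concrete why the sparse solve is so much faster: sparse formats such as CSR store only the nonzero entries, so every loop runs over just those 82267 values instead of all ~6.3 million. Here is a minimal, hypothetical sketch of a CSR matrix-vector product in plain Python (the matrix data is made up; this is the storage idea, not MATLAB's internals):

```python
def csr_matvec(data, indices, indptr, x):
    """Multiply a CSR-stored sparse matrix by a dense vector x.
    Only the stored nonzeros are ever touched; zeros cost nothing."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(indptr) - 1):
        # indptr[row]..indptr[row+1] brackets this row's nonzeros
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# made-up 3x3 example: [[4, 0, 1], [0, 3, 0], [1, 0, 2]]
data    = [4.0, 1.0, 3.0, 1.0, 2.0]   # the 5 nonzero values
indices = [0,   2,   1,   0,   2]     # their column indices
indptr  = [0, 2, 3, 5]                # row boundaries into data/indices
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # [5.0, 3.0, 3.0]
```

The same idea is what `sparse()` in MATLAB/Octave and `scipy.sparse` set up for you behind the scenes.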
But for the size of the OPR matrix, unless we get TONS more FRC teams soon, just running it with sparse tools should be sufficient for it to run quite fast. Octave and MATLAB have them built in, and I believe NumPy/SciPy distributions do as well. There are also C++ and Java libraries for sparse computation.

A final suggestion: if you construct your matrices explicitly in sparse form from the get-go (not N, but the precursor to it), you can cut even the data loading time to a small fraction of what it is now.

Hope that helps.

Added: I did check the structure of N, and it is consistent with a sparse least squares matrix. It is also symmetric and positive definite. These properties are why I chose bicgstab instead of gmres or another iterative algorithm. If you don't want to solve it iteratively, Cholesky factorization is also very good for dealing with symmetric positive definite matrices.

Last edited by Nikhil Bajaj : 26-05-2013 at 11:01.
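As a rough illustration of the "construct it sparse from the get-go" suggestion: sparse builders (MATLAB's `sparse(i,j,v)`, SciPy's `coo_matrix`) take triplet lists and sum duplicate locations on assembly, so you can append one contribution per match without ever forming the dense matrix. A hypothetical plain-Python sketch of that convention, with made-up values:

```python
def assemble_coo(triplets):
    """Sum duplicate (row, col) contributions into one entry per location,
    the same convention sparse builders use for repeated indices."""
    entries = {}
    for r, c, v in triplets:
        entries[(r, c)] = entries.get((r, c), 0.0) + v
    return entries

# made-up contributions: two different matches both touch entry (0, 0)
triplets = [(0, 0, 1.0), (0, 1, 1.0), (0, 0, 1.0)]
A = assemble_coo(triplets)
print(A)  # {(0, 0): 2.0, (0, 1): 1.0}
```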
#2
Re: OPR-computation-related linear algebra problem
Sounds great. I actually had to code up some different solvers in C. We could use MATLAB, but we weren't allowed to use any functions more complicated than basic arithmetic. It's nice to see some of the MATLAB tools that do this. Just wondering, where do you work?
#3
Re: OPR-computation-related linear algebra problem
Thank you, Borna. I am currently a Ph.D. student in mechatronics and control systems at Purdue University. I did my Master's Degree in Heat Transfer and Design Optimization, and the tools I learned through that included finite element methods for structural, thermal, and fluid flow analysis, as well as the mathematical underpinnings of those methods and the numerical implementation. I also spent a lot of time looking at optimization algorithms. Some of my work was industry sponsored and so I got to help solve large problems that way.
I also did an internship at Alcatel-Lucent Bell Labs, where I did CFD modeling for electronics cooling, and I use finite elements often when designing parts for my current research. For coding some of these algorithms in C by hand, if you are interested, one of the best possible references is Matrix Computations by Golub and Van Loan, which will get you much of the way there.
#4
Re: OPR-computation-related linear algebra problem
Here's C code implementing a Cholesky-decomposition-based solver. With minimal optimization, the calculation runs in 3.02 seconds on my system.
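For readers without the attachment, here is a rough Python sketch of the same idea (not Ryan's actual C code): factor the symmetric positive definite matrix as A = L·L^T, then solve by forward and back substitution. This is the textbook algorithm on a tiny made-up system; a tuned C version does the same arithmetic over the full 2509x2509 matrix.

```python
import math

def cholesky_solve(A, b):
    """Solve A x = b for symmetric positive definite A via A = L * L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    # forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    # back substitution: L^T x = y
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x

# made-up SPD system for illustration; exact answer is [1.25, 1.5]
x = cholesky_solve([[4.0, 2.0], [2.0, 3.0]], [8.0, 7.0])
```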
#5
Re: OPR-computation-related linear algebra problem
Quote:
ryan.exe 2509 N.dat d.dat x.dat

So I dug up an old piece of code I wrote back in 1990 with a Cholesky factoring algorithm in it1 and modified it for this application and ran it. It took about 22.5 seconds:

Code:
Nx=d build 5/26/2013 921p

If your code took only 3 seconds to run on your machine, but 80 on mine, I'm wondering what the Rice algorithm would do on your machine.

1John Rischard Rice, Numerical Methods, Software, and Analysis, 1983, Page 139 (see attachments)
#6
Re: OPR-computation-related linear algebra problem
Hi Ether,
Makes me wonder what you may have done better in your coding of the algorithm.

EDIT: Changing the order of the summations got me down to 2.68 seconds, and changing to in-place computation like your code got me to 2.58 seconds. Beyond that, any improvements would seem to be in the way the Pascal compiler is generating code.

Best,

Last edited by RyanCahoon : 27-05-2013 at 22:32.
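A quick way to see why reordering summations changes speed but not the answer: both loop orders below compute the same double sum over made-up data, but on row-major storage the first walks memory contiguously while the second strides across rows. The timing effect only shows up meaningfully in compiled code like the C and Pascal versions discussed here; this Python toy just demonstrates the equivalence.

```python
# Two loop orders for the same double sum over a 200x200 array of made-up
# values. Results are identical; the difference is purely in memory access
# order (contiguous vs. strided), which is where the cache-friendliness of
# a reordered Cholesky summation comes from.
n = 200
A = [[(i * n + j) % 7 for j in range(n)] for i in range(n)]

row_major = sum(A[i][j] for i in range(n) for j in range(n))  # contiguous
col_major = sum(A[i][j] for j in range(n) for i in range(n))  # strided
assert row_major == col_major
```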
#7
Re: OPR-computation-related linear algebra problem
...
#8
Re: OPR-computation-related linear algebra problem
Finally had time to speak with the math guys.
The built-in LabVIEW linear algebra I was using links to an older version of Intel's MKL, but if I had used the SPD option on the solver, it would indeed have been faster than the general version. There is a toolkit called "Multicore Analysis and Sparse Matrix Toolkit", and they ran the numbers using that tool as well. Thanks to a newer version of MKL, its general solver is much faster. The right column converts the matrix into sparse form and uses a sparse solver.

Greg McKaskle
#9
Re: OPR-computation-related linear algebra problem
Thanks Greg. Are the "time" units in the attachment milliseconds?
#10
Re: OPR-computation-related linear algebra problem
Yes, they are in milliseconds. SPD stands for symmetric positive definite. Column three enables the algorithms to use more than one core, though that doesn't seem to help much.

Greg McKaskle
#11
Re: OPR-computation-related linear algebra problem
What machine & OS was used?
#12
Re: OPR-computation-related linear algebra problem
Interestingly, the Pascal and C++ compilers I used are essentially identical; only the front ends (for the different languages) are different. Is it possible that the difference in timing is due to differences in memory access from the data structures we used?
#13
Re: OPR-computation-related linear algebra problem
Nikhil,
Is there a reason why you are not using the "pcg" function, which assumes symmetric positive definite inputs? This should be faster. Also please consider using the diagonal as a preconditioner. Unfortunately I do not have access to MATLAB at the moment. Could you please try the following? Sorry in advance for any bugs:

Code:
Ns = sparse(N);
D = diag(Ns);
Ds = sparse(diag(D)); #This was a bug... maybe it still is!

# Reference Solution
tic
output = Ns\d;
toc

# CG Solution
tic
output = pcg(Ns,d)
toc

# Diagonal PCG Solution
tic
output = pcg(Ns,d,[],[],Ds)
toc

# Reverse Cutthill-McKee re-ordering
tic
p = symrcm(Ns); # permutation array
Nr = Ns(p,p);   # re-ordered problem
toc

# Re-ordered Solve
tic
output = Nr\d; # answer is stored in a permuted matrix indexed by 'p'
toc

Another advantage of the conjugate gradient methods is the concurrent form of the solution within each iteration (parallel processing).

Best regards

Last edited by James Critchley : 26-05-2013 at 15:43.
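For readers without MATLAB, here is a hypothetical plain-Python sketch of what pcg with a diagonal (Jacobi) preconditioner is doing under the hood. The tiny SPD system is made up for illustration, and a production code would of course use a sparse matrix-vector product rather than a dense list-of-lists:

```python
def pcg_jacobi(A, b, tol=1e-10, maxit=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner;
    a plain-Python sketch, dense A as list-of-lists for simplicity."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = list(b)                                # residual (x0 = 0)
    z = [r[i] / A[i][i] for i in range(n)]     # apply M^-1 = diag(A)^-1
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break                              # residual small enough
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# made-up 2x2 SPD system; exact answer is [1/11, 7/11]
x = pcg_jacobi([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

The preconditioner solve is just a division by the diagonal here, which is why the diagonal is such a cheap choice when the matrix is diagonally dominant.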
#14
Re: OPR-computation-related linear algebra problem
New code, based on what James put up (I just added some disp calls so that the results would be clearer; the disps are outside the tic/toc timings). I did not find any bugs, though I had to change the #'s to %'s.
Code:
clc
disp('Loading Data...')
tic
d = load('d.dat');
N = load('N.dat');
toc
Ns = sparse(N);
D = diag(Ns);
Ds = sparse(diag(D)); %This was a bug... maybe it still is!
% Reference Solution
disp('Reference Solution:')
tic
output1 = Ns\d;
toc
% CG Solution
disp('CG Solution:');
tic
output2 = pcg(Ns,d);
toc
% Diagonal PCG Solution
disp('Diagonal PCG Solution:');
tic
output3 = pcg(Ns,d,[],[],Ds);
toc
% Reverse Cutthill-McKee re-ordering
disp('Re-ordering (Reverse Cutthill-McKee:');
tic
p = symrcm(Ns); % permutation array
Nr = Ns(p,p); % re-ordered problem
toc
% Re-ordered Solve
disp('Re-ordered Solution:');
tic
output4 = Nr\d; %answer is stored in a permuted matrix indexed by 'p'
toc
Code:
Loading Data...
Elapsed time is 3.033846 seconds.
Reference Solution:
Elapsed time is 0.014136 seconds.
CG Solution:
pcg stopped at iteration 20 without converging to the desired tolerance 1e-06 because the maximum number of iterations was reached. The iterate returned (number 20) has relative residual 4.8e-05.
Elapsed time is 0.007545 seconds.
Diagonal PCG Solution:
pcg converged at iteration 17 to a solution with relative residual 8.9e-07.
Elapsed time is 0.009216 seconds.
Re-ordering (Reverse Cutthill-McKee:
Elapsed time is 0.004523 seconds.
Re-ordered Solution:
Elapsed time is 0.015021 seconds.

Thanks for calling me out. And you're right, I should have used pcg. Thanks for the suggestion.