OPR-computation-related linear algebra problem
1 Attachment(s)
Looking for interested parties to crunch the numbers and report how long it takes to solve Nx=d for x with the tools, libraries, and computing platforms you use. Attached ZIP file contains N and d. N is a symmetric positive definite 2509x2509 square matrix; d is a 2509-element column vector.
Re: OPR-computation-related linear algebra problem
My linear algebra is very rusty and isn't part of my day job, so nothing special, and I hope I did it right.
The time to invert and multiply is shown on the panel and is about 5.5 seconds, plus another 1.something seconds to load the data. This was in a VM on a MacBook Pro, and it was clearly running on a single CPU. If you are interested, I can talk to the math guys on Tuesday. CD seems to have problems uploading at the moment, so instead here are the first few elements of x: 10.1758, 29.6333, 11.1155.
Greg McKaskle
Re: OPR-computation-related linear algebra problem
1 Attachment(s)
Using MATLAB 2012a on an Intel Core i7-3615QM:
Using linear equation solver (backslash operator): 0.26977 seconds
Using invert-and-multiply: 2.4433 seconds
Code:
N = dlmread('N.dat');
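The attached script isn't reproduced here; a minimal sketch of the comparison being timed might look like this (an assumed reconstruction, not the poster's exact code):
Code:
% Assumed reconstruction: time backslash vs. explicit inversion
N = dlmread('N.dat');
d = dlmread('d.dat');

tic
x_solve = N\d;        % linear equation solver (backslash operator)
toc

tic
x_inv = inv(N)*d;     % invert-and-multiply
toc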
Re: OPR-computation-related linear algebra problem
Wouldn't it be much less computationally intensive to actually solve the system by reducing the matrix to reduced row echelon form?
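For context, solving via RREF of the augmented matrix would look roughly like this in MATLAB (illustrative only, not code from the thread):
Code:
N = dlmread('N.dat');
d = dlmread('d.dat');
aug = rref([N d]);    % row-reduce the augmented matrix [N | d]; for nonsingular N this gives [I | x]
x = aug(:, end);      % the last column holds the solution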
Re: OPR-computation-related linear algebra problem
3 Attachment(s)
Using Python with NumPy
System:
Code:
Ubuntu 12.04 32-bit
Code:
import sys
Standard Deviation: 5.1 seconds
The file output.txt contains versions and the solution for x. The file runs.txt contains the run data. Note that I was doing other work while letting this run in the background, which skews the times. I collected CPU usage data to try to account for this; one interesting note is that there are two different clusters of execution times. I believe this is from my laptop throttling the CPU when I unplugged it and ran off battery for a while (if you plot the runs over time, you will see three distinct sections where the execution times are consistently higher).
Re: OPR-computation-related linear algebra problem
1 Attachment(s)
Quote:
This test was run on my 6-year-old Core 2 Duo (T7200 @ 2.00GHz) laptop with MATLAB R2010a. Sometime later this week I'll see about running the matrix solve on a real computer, maybe one with a little extra horsepower.
Code:
sizes = floor(logspace(1, 2.5, 10));
EDIT: It's been pointed out to me that matrix inversion is also inherently O(n^3), so there's something else at work making rref slow. In this case, the catch is that rref() is written in plain MATLAB code (try "edit rref"), while inversion and solving are implemented as highly optimized C code. Gaussian elimination is not the fastest, but it's not nearly as bad as I made it out to be. Thanks to those who pointed this out. Obviously I need to go study some more linear algebra. :o That's on the schedule for the fall.
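The attached benchmark isn't reproduced here; a sketch of what such a scaling comparison might look like (an assumed reconstruction, variable names hypothetical):
Code:
% Time rref vs. backslash vs. explicit inverse on increasingly large random test systems
sizes   = floor(logspace(1, 2.5, 10));
t_rref  = zeros(size(sizes));
t_solve = zeros(size(sizes));
t_inv   = zeros(size(sizes));
for k = 1:numel(sizes)
    n = sizes(k);
    A = rand(n) + n*eye(n);    % diagonally dominant, so well-conditioned
    b = rand(n, 1);
    tic; rref([A b]);  t_rref(k)  = toc;   % Gaussian elimination in plain MATLAB code
    tic; A\b;          t_solve(k) = toc;   % optimized solver
    tic; inv(A)*b;     t_inv(k)   = toc;   % explicit inversion
end
loglog(sizes, [t_rref; t_solve; t_inv])
legend('rref', 'backslash', 'explicit inverse')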
Re: OPR-computation-related linear algebra problem
2 Attachment(s)
CD let me attach again. I attached the things I intended for the previous post.
As with many of the analysis functions, this calls into a DLL, to the function InvMatrixLUDri_head. So it seems to be using LU decomposition. I think the matrix qualifies as sparse, so that helps with performance. The direct solver was almost ten seconds.
Greg McKaskle
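For reference, the LU-factor-then-solve path looks like this in MATLAB terms (a sketch of the general technique, not the LabVIEW/DLL implementation):
Code:
N = dlmread('N.dat');
d = dlmread('d.dat');
[L, U, P] = lu(N);      % LU factorization with partial pivoting: P*N = L*U
x = U \ (L \ (P*d));    % two triangular solves instead of explicitly forming inv(N)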
Re: OPR-computation-related linear algebra problem
Quote:
Code:
tic;
PS - can someone with a working Octave installation please run this? Also SciLab and R.
Re: OPR-computation-related linear algebra problem
Couple of things.
In a PDE class I took for CFD, we had to solve really large sparse matrices. The trick was to never actually store the entire matrix. However, ours was much more structured and more sparse, so I'm not sure if I can apply something similar in this case. What is the accuracy you are looking for? We could use some iterative methods for much faster results. You can pick an accuracy of 1e-1 (inf norm) and be fine, I think, for OPRs. Loading it into my GTX 580 GPU right now to get some values. Will do that with and without the time taken to load it into GPU memory and back.
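As a sketch of the loose-tolerance iterative idea in MATLAB (pcg is a reasonable choice since N is symmetric positive definite; note that its tolerance is a relative residual, not the inf-norm error mentioned above, and the values here are illustrative):
Code:
N  = dlmread('N.dat');
d  = dlmread('d.dat');
Ns = sparse(N);
tol   = 1e-2;     % loose relative-residual tolerance (illustrative)
maxit = 100;
tic
[x, flag, relres, iter] = pcg(Ns, d, tol, maxit);   % preconditioned conjugate gradients
toc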
Re: OPR-computation-related linear algebra problem
Quote:
Linear solver (backslash): 0.19269 seconds
Invert and multiply: 1.8698 seconds
Code:
N = dlmread('N.dat');
Re: OPR-computation-related linear algebra problem
This matrix is quite small compared to those generally solved in finite elements, CFD, or other common codes. As was mentioned a little earlier, the biggest speedup comes from processing everything as sparse matrices.
On my 2.0 GHz MacBook Air running MATLAB Student R2012a, I can run:
Code:
tic
d = load('d.dat');
N = load('N.dat');
toc
tic
output = N\d;
toc
and get the output:
Code:
Elapsed time is 2.768235 seconds.   <-- loading files into memory
Elapsed time is 0.404477 seconds.   <-- solving the matrix
If I now change the code to:
Code:
tic
d = load('d.dat');
N = load('N.dat');
toc
tic
Ns = sparse(N);
toc
tic
output = Ns\d;
toc
With output:
Code:
Elapsed time is 2.723927 seconds.   <-- load files
Elapsed time is 0.040358 seconds.   <-- conversion to sparse
Elapsed time is 0.017368 seconds.   <-- solving
There are only 82267 nonzero elements in the N matrix (vs. 2509*2509 ~ 6.3 million), so the sparse solve runs much faster - it essentially skips over the entries that are zero, so it doesn't have to do that part of the solution process.

Here's an iterative method solving the problem. I haven't tuned any iteration parameters for bicgstab (biconjugate gradients, stabilized), so it could be a bit better, but the mean squared error is pretty small.
Code:
tic
d = load('d.dat');
N = load('N.dat');
toc
tic
Ns = sparse(N);
toc
tic
output = bicgstab(Ns,d);
toc
% compute a true output
output_true = Ns\d;
% compute mean squared error of OPR
output_mse = sum((output_true - output).^2)/length(output)
Code:
Elapsed time is 2.728844 seconds.
Elapsed time is 0.040895 seconds.
bicgstab stopped at iteration 20 without converging to the desired tolerance 1e-06
because the maximum number of iterations was reached.
The iterate returned (number 20) has relative residual 2e-06.
Elapsed time is 0.015128 seconds.
output_mse = 9.0544e-07
Not much benefit in the iterative method here...the matrix is quite small. The speedup is much more considerable when you are solving similarly sparse matrices that are huge. In industry and research in my career, my finite element models can get to matrices that are millions by millions or more...at that point you need sophisticated algorithms. But for the size of the OPR matrix, unless we get TONS more FRC teams soon, just running it with sparse tools should be sufficient for it to run quite fast. Octave and MATLAB have sparse support built in, and I believe NumPy/SciPy distributions do as well. There are also C++ and Java libraries for sparse computation.

A final suggestion: if you construct your matrices in sparse form explicitly from the get-go (not N itself, but the precursor to it), you can cut even the data loading time to a small fraction of what it is now. Hope that helps.

Added: I did check the structure of N, and it is consistent with a sparse least squares matrix. It is also symmetric and positive definite. These properties are why I chose bicgstab instead of gmres or another iterative algorithm. If you don't want to solve it iteratively, Cholesky factorization is also very good for dealing with symmetric positive definite matrices.
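Since Cholesky factorization is mentioned but not shown, here is a minimal MATLAB sketch of that route (an illustration, not code from the post):
Code:
d  = load('d.dat');
N  = load('N.dat');
Ns = sparse(N);
R  = chol(Ns);          % upper-triangular factor, Ns = R'*R (requires symmetric positive definite)
x  = R \ (R' \ d);      % forward substitution, then back substitution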
Re: OPR-computation-related linear algebra problem
Sounds great. I had to actually code up some different solvers in C. We could use MATLAB, but we were not allowed to use any functions more complicated than addition and the like.
Nice to see some of the MATLAB tools that do that. Just wondering, where do you work?
Re: OPR-computation-related linear algebra problem
Thank you, Borna. I am currently a Ph.D. student in mechatronics and control systems at Purdue University. I did my Master's Degree in Heat Transfer and Design Optimization, and the tools I learned through that included finite element methods for structural, thermal, and fluid flow analysis, as well as the mathematical underpinnings of those methods and the numerical implementation. I also spent a lot of time looking at optimization algorithms. Some of my work was industry sponsored and so I got to help solve large problems that way.
I also did an internship at Alcatel-Lucent Bell Labs where I did CFD modeling for electronics cooling. I also use finite elements often when designing parts for my current research. For coding some of these algorithms in C by hand, if you are interested, one of the best possible references is Matrix Computations by Golub and Van Loan, which will get you much of the way there.
Re: OPR-computation-related linear algebra problem
1 Attachment(s)
Attached is C code implementing a Cholesky decomposition-based solver. With minimal optimization, the calculation runs in 3.02 seconds on my system.
Re: OPR-computation-related linear algebra problem
The reason I say it's computationally intensive is this article: http://www.johndcook.com/blog/2010/0...t-that-matrix/