calling all Python gurus...
I have an M by N matrix A and an M by 1 column vector b. I want to find the N by 1 column vector x which minimizes sum(abs(A*x-b)). What is the recommended way to do this in Python?
Re: calling all Python gurus...
This can be formulated as a linear program and solved with something like SciPy's scipy.optimize.fmin_cobyla (http://docs.scipy.org/doc/scipy/refe...fmin_cobyla), PyGLPK (http://tfinley.net/software/pyglpk/), or cvxopt (http://cvxopt.org/).
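The reduction itself is short: introduce an auxiliary vector t in R^M and solve

minimize sum(t) subject to -t <= A*x - b <= t

which is a linear program in the stacked variable (x, t); at the optimum, t = abs(A*x - b) elementwise, so the LP's optimal value equals the minimum of sum(abs(A*x-b)).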
Re: calling all Python gurus...
I used the cvxopt example implementation of L1 norm minimization on this page.
I exported the A and b (revised) matrices to CSV files and used the following script:

Code:
import argparse
...

The solver's progress output:

Code:
     pcost       dcost       gap    pres   dres   k/t
...

And the start of the computed x vector:

Code:
[ 40.85566343  54.50291262  57.51132686  74.48562198  121.75464378 ...
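For reference, here is a minimal self-contained sketch of the same L1 fit done directly with cvxopt's LP solver, assuming the matrices were exported as A.csv and b.csv (those filenames, and the details below, are illustrative rather than the exact script above). The pcost/dcost/gap table is what cvxopt's LP solver prints as it iterates.

Code:
# L1 (least absolute deviation) fit via cvxopt's LP solver.
# Epigraph trick: minimize 1'*t subject to -t <= A*x - b <= t,
# with stacked variable z = [x; t].
import numpy as np
from cvxopt import matrix, solvers

A = np.loadtxt('A.csv', delimiter=',')    # M x N matrix (illustrative filename)
b = np.loadtxt('b.csv', delimiter=',')    # length-M vector (illustrative filename)
M, N = A.shape

c = matrix(np.concatenate([np.zeros(N), np.ones(M)]))   # cost: sum of t
G = matrix(np.vstack([np.hstack([ A, -np.eye(M)]),      #  A*x - t <= b
                      np.hstack([-A, -np.eye(M)])]))    # -A*x - t <= -b
h = matrix(np.concatenate([b, -b]))

sol = solvers.lp(c, G, h)
x = np.array(sol['x'][:N]).ravel()
print(x)    # the N-vector minimizing sum(abs(A*x - b))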
Re: calling all Python gurus...
Just from a cursory glance, it looks like you're talking about implementing linear least squares, no?
Re: calling all Python gurus...
1 Attachment(s)
Least Squares is L2 norm minimization, which is what is used for "OPR" computation here on CD. I am postulating that min L1 norm may be a better metric for FRC scouting purposes. In the attached example, only 19% of the min_L2_norm computed alliance scores fall within +/-10 points of the actual alliance scores, whereas 41% of the min_L1_norm computed ones do.
Re: calling all Python gurus...
As I mentioned to you privately, I was able to duplicate your Python/cvxopt results exactly on a Debian Linux installation of Python/cvxopt.

Since then I've had some free time and came back to the challenge of getting it to work in Octave (MATLAB). The solution turned out to be to convert the original nonlinear, nondifferentiable problem -- min(sum(abs(A*x-b))) -- into a linear program and solve it with Octave's glpk (or MATLAB's linprog) LP solver. According to tic/toc, Octave/glpk took 0.125 seconds to solve on my 8-year-old desktop running XP.

The resulting solution, however, is interestingly different from yours. Although the L1 norms of the residuals of the two solutions are virtually identical (to the 4th decimal place), the L2 norm of the glpk LAD residuals is 0.2 smaller than cvxopt's. The min L2 norm solution is mathematically unique, but the min L1 norm solution need not be; I assume that accounts for the difference.

From all the data I've looked at so far, min L1 norm of residuals (Least Absolute Deviation, or LAD) appears to be a slightly better metric than L2 (Least Squares). Now that efficient methods of computing LAD have been identified, perhaps that opens a whole new world of fun number crunching for the stats mavens here on CD. Are there any R, Mathematica, Scilab, etc. gurus out there who would be interested in doing the LAD computation and posting results?
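For anyone who wants to try this in SciPy as well, here is a sketch of that same conversion using scipy.optimize.linprog (the function name lad is made up here, and linprog is just one of several LP solvers that will do):

Code:
# Least absolute deviation via the epigraph LP described above,
# using scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

def lad(A, b):
    """Return x minimizing sum(abs(A @ x - b)) by solving an LP in (x, t)."""
    M, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(M)])    # objective: sum of t
    A_ub = np.block([[ A, -np.eye(M)],               #  A*x - t <= b
                     [-A, -np.eye(M)]])              # -A*x - t <= -b
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * N + [(0, None)] * M    # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:N]

Since the LAD solution need not be unique, glpk, cvxopt, and linprog can all legitimately return different x vectors with virtually the same L1 residual norm.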
Re: calling all Python gurus...
http://sourceforge.net/projects/winglpk/ It took 0.1 sec to generate the LP and solve it. Still looking for any gurus who might be able to help with the syntax for solving this using R.