 

Solve linear equations starting from approximate solution

In a problem I'm working on, there is a need to solve Ax = b, where A is an n × n square matrix (typically n is a few thousand) and b and x are vectors of size n. The trick is that it is necessary to do this many (billions of) times, where A and b change only very slightly between successive calculations.

Is there a way to reuse an existing approximate solution for x (or perhaps the inverse of A) from the previous calculation instead of solving the equations from scratch?

I'd also be interested in a way to get x to within some defined accuracy (e.g., the error in any element of x < maxerr, or norm(error in x) < maxerr) rather than computing an exact solution (again, reusing the previous calculations).

asked Oct 27 '25 by Alex I


1 Answer

If the individual problems can be solved by the conjugate gradient method (i.e., A is symmetric, positive-definite, and real), then I would suggest:

  • Before the first time you solve for x, and every Θ(n) solves thereafter, compute the Cholesky decomposition of A.
  • Use the Cholesky decomposition for the preconditioned conjugate gradient method.

The idea is: with a dense matrix, Θ(1) iterations of preconditioned conjugate gradient cost Θ(n²), while a Cholesky decomposition costs Θ(n³), so refactoring every Θ(n) solves keeps the amortized cost per solve at Θ(n²). If we used the exact A that we are trying to solve for as the preconditioner, then preconditioned conjugate gradient would converge in one step! The hope is that, however A is changing, it does so slowly enough that the stale factorization remains an effective preconditioner and a constant number of iterations suffices.
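A minimal sketch of this scheme in Python with NumPy/SciPy, assuming A is dense, symmetric positive-definite, and supplied as an ndarray; the generator name solve_sequence and the exact refactoring interval are my own illustrative choices, and newer SciPy spells the CG tolerance rtol (older releases call it tol):

    import numpy as np
    from scipy.linalg import cho_factor, cho_solve
    from scipy.sparse.linalg import cg, LinearOperator

    def solve_sequence(systems, n):
        """Solve a stream of slowly changing SPD systems A x = b.

        `systems` yields (A, b) pairs; n is the problem size.
        """
        factor = None   # Cholesky factors of a recent A
        x = None        # warm start: previous solution
        for k, (A, b) in enumerate(systems):
            if k % n == 0:
                # Theta(n^3) refactorization, amortized over the next n solves.
                factor = cho_factor(A)
            # Preconditioner: apply the (possibly stale) Cholesky factors.
            M = LinearOperator((n, n), matvec=lambda v, f=factor: cho_solve(f, v))
            # Warm-started PCG; each iteration costs Theta(n^2).
            x, info = cg(A, b, x0=x, M=M, rtol=1e-10)
            if info != 0:
                raise RuntimeError(f"CG did not converge (info={info})")
            yield x.copy()

The reason a stale factorization keeps working is that a small perturbation of A leaves the eigenvalues of M⁻¹A clustered near 1, which is the regime where conjugate gradient converges in a few iterations.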

To reach a defined accuracy (by whatever metric you choose), you can monitor the residual from preconditioned conjugate gradient and stop once it is small enough. (In that case I'd recompute the Cholesky decomposition every Θ(n) iterations of preconditioned conjugate gradient rather than every Θ(n) solves, since the iteration count per solve is no longer constant.)
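For concreteness: SciPy's cg stops when norm(b − A·x) ≤ max(rtol·norm(b), atol), so an absolute residual tolerance can stand in directly for maxerr; a bound on the error in x itself then follows from norm(x − x*) ≤ norm(A⁻¹)·norm(b − A·x), i.e. divide the residual tolerance by an estimate of A's smallest eigenvalue. A sketch under those assumptions (the helper name is mine):

    import numpy as np
    from scipy.sparse.linalg import cg

    def solve_to_residual(A, b, x0, M, maxerr):
        # rtol=0 disables the relative test, so CG stops once
        # norm(b - A @ x) <= maxerr.
        x, info = cg(A, b, x0=x0, M=M, rtol=0.0, atol=maxerr)
        if info != 0:
            raise RuntimeError(f"CG did not converge (info={info})")
        return x, np.linalg.norm(b - A @ x)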

answered Oct 30 '25 by David Eisenstat