In a problem I'm working on, there is a need to solve Ax=b where A is an n x n square matrix (typically n = a few thousand), and b and x are vectors of size n. The trick is that it is necessary to do this a very large number of times (billions), where A and b change only very slightly between successive calculations.
Is there a way to reuse an existing approximate solution for x (or perhaps inverse of A) from the previous calculation instead of solving the equations from scratch?
I'd also be interested in a way to get x to within some defined accuracy (e.g., error in any element of x < maxerr, or norm(error in x) < maxerr), rather than an exact solution (again, reusing the previous calculations).
If the individual problems can be solved by the conjugate gradient method (i.e., A is symmetric, positive-definite, and real), then I would suggest computing a Cholesky factorization of the current A every so often and reusing it as the preconditioner for the subsequent solves.
The idea is: with a dense matrix, the asymptotic cost of Θ(1) steps of preconditioned conjugate gradient is Θ(n²), while the asymptotic cost of a Cholesky factorization is Θ(n³), so every Θ(n) solutions for x cost Θ(n³), the same as a single factorization; amortized, each solve stays at Θ(n²). If we used the exact A that we are trying to solve for as the preconditioner, then preconditioned conjugate gradient would converge in one step! The hope is that, however A is changing, it does so slowly enough that the preconditioner remains effective and a constant number of iterations suffices.
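A minimal sketch of that idea (my own illustration, not code from this answer), assuming NumPy/SciPy and a dense symmetric positive-definite A; the names `pcg` and `chol_ref` are mine. The Cholesky factor of a recent A is the preconditioner, and each application is one forward/back substitution, i.e. Θ(n²) per iteration, the same order as the matrix-vector product:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve


def pcg(A, b, x0, chol_ref, tol=1e-8, maxiter=50):
    """Preconditioned conjugate gradient, warm-started at x0.

    chol_ref is the (c, lower) pair returned by cho_factor for a recent A;
    cho_solve(chol_ref, r) applies the preconditioner M^{-1} ~ A^{-1}.
    Returns (x, iterations used); stops when ||r|| <= tol * ||b||.
    """
    x = x0.copy()
    r = b - A @ x                  # residual of the warm start
    z = cho_solve(chol_ref, r)     # preconditioned residual
    p = z.copy()
    rz = r @ z
    bnorm = np.linalg.norm(b)
    for k in range(maxiter):
        if np.linalg.norm(r) <= tol * bnorm:
            return x, k
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = cho_solve(chol_ref, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter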
To reach a defined accuracy (by whatever metric you choose), you can look at the residual from preconditioned conjugate gradient. (I would then recompute the Cholesky decomposition whenever preconditioned conjugate gradient stops converging within Θ(1) iterations.)
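As a usage sketch built on the `pcg` function above (again my own illustration; `solve_sequence`, `problems`, and `iter_budget` are hypothetical names): each solve is warm-started from the previous x, stops on the residual tolerance, and the factorization is redone only when the stored factor has drifted so far that the iteration budget is exhausted.

```python
def solve_sequence(problems, tol=1e-8, iter_budget=10):
    """Solve a stream of slowly varying (A, b) problems, yielding each x."""
    chol_ref = None
    x = None
    for A, b in problems:
        if chol_ref is None:
            chol_ref = cho_factor(A)    # Theta(n^3), amortized over many solves
            x = cho_solve(chol_ref, b)  # direct solve with the fresh factor
        else:
            x, iters = pcg(A, b, x, chol_ref, tol=tol, maxiter=iter_budget)
            if iters >= iter_budget:    # preconditioner has gone stale: refactor
                chol_ref = cho_factor(A)
                x = cho_solve(chol_ref, b)
        yield x
```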