 

alternatives or speedups for mpmath matrix inversion

I'm writing some code in Python that requires frequently inverting large square matrices (100-200 rows/columns).

I'm hitting the limits of machine precision, so I have started trying to use mpmath for arbitrary-precision matrix inversion, but it is very slow, even with gmpy.

Inverting random matrices of size 20, 30, and 60 at 30-digit (decimal) precision takes about 0.19, 0.60, and 4.61 seconds, whereas the same operations in Mathematica take 0.0084, 0.015, and 0.055 seconds.

This is using Python 3 and mpmath 0.17 (not sure of the gmpy version) on an Arch Linux machine. I'm not sure why mpmath is so much slower, but is there any open-source library that approaches the speed Mathematica manages for this (even half as fast would be good)?

I don't need arbitrary precision -- 128-bit would probably be good enough. I also just don't understand how mpmath can be so much slower; it must be using a very different matrix inversion algorithm. To be specific, I'm using M**-1.

Is there a way to get it to use a faster algorithm or to speed it up?
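
For reference, the kind of timing I'm describing looks roughly like this (a minimal sketch -- randmatrix just stands in for however the random test matrices are generated, and exact numbers will vary by machine):

import time
from mpmath import mp, randmatrix

mp.dps = 30                      # 30 decimal digits, as above

for n in (20, 30, 60):
    M = randmatrix(n)            # random n x n mpmath matrix
    t0 = time.time()
    Minv = M**-1                 # the inversion being timed
    print(n, time.time() - t0)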

asked by user2153813

1 Answer

Linear algebra in mpmath is rather slow, unfortunately. There are many libraries that solve this problem much better (Sage, for example). That said, as a followup to Stuart's suggestion, it is fairly easy to do reasonably fast high-precision matrix multiplication in Python without installing any libraries, using fixed-point arithmetic. Here is a version using mpmath matrices for input and output:

from mpmath import mp, mpf

def fixmul(A, B, prec):
    """Multiply two mpmath matrices using fixed-point (big-integer) arithmetic."""
    m, p, n = A.rows, B.rows, B.cols     # A is m x p, B is p x n
    # Convert entries to integers scaled by 2**prec.
    A = [[A[i, j].to_fixed(prec) for j in range(p)] for i in range(m)]
    B = [[B[i, j].to_fixed(prec) for j in range(n)] for i in range(p)]
    C = [[0] * n for r in range(m)]
    # Plain triple loop; the work is done by Python's big-integer arithmetic.
    for i in range(m):
        for j in range(n):
            s = 0
            for k in range(p):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    # Each product carries a factor 2**(2*prec), so scale back down at the end.
    return mp.matrix(C) * mpf(2)**(-2*prec)
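
A quick sanity check against mpmath's built-in multiplication might look like this (just a sketch; the matrices here have entries of order 1, so no rescaling is needed):

from mpmath import mp, randmatrix, mnorm

mp.prec = 256                     # 256-bit working precision
A = randmatrix(200)
B = randmatrix(200)

C_fast = fixmul(A, B, mp.prec)    # fixed-point multiplication above
C_ref = A * B                     # mpmath's built-in multiplication
print(mnorm(C_fast - C_ref))      # should be tiny for well-scaled entries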

At 256-bit precision, this multiplies two 200x200 matrices 16 times faster than mpmath for me. It is also not difficult to write a matrix inversion routine directly this way. Of course if the matrix entries are very large or very small, you want to rescale them first. A more solid solution would be to write your own matrix functions using the floating-point types in gmpy, which should be essentially as fast.
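
As one illustration of that last point (a sketch only, not the only way to do it), a Newton-Schulz iteration built on fixmul refines an ordinary double-precision inverse up to higher precision using nothing but matrix multiplications. It assumes numpy is available for the initial guess, that mp.prec is set at least as high as prec, and that the matrix is well enough conditioned for the iteration to converge:

import numpy as np                       # assumed available for the initial guess
from mpmath import matrix, eye

def fixinv(A, prec, iters=5):
    """Sketch: refine an inverse with Newton-Schulz, X <- X(2I - AX),
    using fixmul for the multiplications. Correct digits roughly double
    each step, so a handful of iterations suffices at moderate precision."""
    n = A.rows
    # Double-precision inverse as the starting guess.
    A_np = np.array([[float(A[i, j]) for j in range(n)] for i in range(n)])
    X = matrix(np.linalg.inv(A_np).tolist())
    I2 = 2 * eye(n)
    for _ in range(iters):
        X = fixmul(X, I2 - fixmul(A, X, prec), prec)
    return X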

answered by Fredrik Johansson