
Python: how to speed up this function and make it more scalable?

I have the following function, which accepts an indicator matrix of shape (20,000 x 20,000), and I have to run it 20,000 x 20,000 = 400,000,000 times. Note that the indicator_Matrix has to be passed into the function as a pandas dataframe, because my actual problem's dataframe has a timeIndex and integer columns; I have simplified things a bit here for the sake of understanding the problem.

Pandas Implementation

import numpy as np
import pandas as pd

indicator_Matrix = pd.DataFrame(np.random.randint(0, 2, [20000, 20000]))

def operations(indicator_Matrix):
    s = indicator_Matrix.sum(axis=1)        # sum of each row
    d = indicator_Matrix.div(s, axis=0)     # divide each row by its sum
    res = d[d > 0].mean(axis=0)             # column means, ignoring values <= 0
    return res.iloc[-1]                     # keep only the last column's mean

I tried to improve it using NumPy, but it still takes ages to run. I also tried concurrent.futures.ThreadPoolExecutor, but it still takes a long time and is not much of an improvement over the list comprehension.

Numpy Implementation

indicator_Matrix = pd.DataFrame(np.random.randint(0, 2, [20000, 20000]))

def operations(indicator_Matrix):
    s = indicator_Matrix.to_numpy().sum(axis=1)
    d = (indicator_Matrix.to_numpy().T / s).T
    d = pd.DataFrame(d, index=indicator_Matrix.index, columns=indicator_Matrix.columns)
    res = d[d > 0].mean(axis=0)
    return res.iloc[-1]

output = [operations(indicator_Matrix) for i in range(0,20000**2)]

Note that the reason I convert d back to a dataframe is that I need to obtain the column means and retain only the last column's mean using .iloc[-1]. d[d > 0].mean(axis=0) returns the column means, i.e.

2478    1.0
0       1.0

Update: I am still stuck on this problem. I wonder if using GPU packages like cudf and CuPy on my local desktop would make any difference.

asked Oct 26 '25 by user1769197

2 Answers

Assuming the answer of @CrazyChucky is correct, one can implement a faster parallel Numba version. The idea is to use plain loops and to read the data contiguously: contiguous reads make the computation cache-friendly and memory-efficient. Here is an implementation:

import numba as nb
import numpy as np

@nb.njit(['(int_[:,:],)', '(int_[:,::1],)', '(int_[::1,:],)'], parallel=True)
def compute_fastest(matrix):
    n, m = matrix.shape
    sum_by_row = np.zeros(n, matrix.dtype)
    is_row_major = matrix.strides[0] >= matrix.strides[1]
    if is_row_major:
        # Row-major layout: each thread sums one full (contiguous) row.
        for i in nb.prange(n):
            s = 0
            for j in range(m):
                s += matrix[i, j]
            sum_by_row[i] = s
    else:
        # Column-major layout: walk down the columns, accumulating into
        # 64-row chunks so each thread writes to its own region.
        for chunk_id in nb.prange(0, (n + 63) // 64):
            start = chunk_id * 64
            end = min(start + 64, n)
            for j in range(m):
                for i2 in range(start, end):
                    sum_by_row[i2] += matrix[i2, j]
    # Mean of the positive values of the last column divided by the row sums.
    count = 0
    s = 0.0
    for i in range(n):
        value = matrix[i, -1] / sum_by_row[i]
        if value > 0:
            s += value
            count += 1
    return s / count

# output = [compute_fastest(indicator_Matrix.to_numpy()) for i in range(0,20000**2)]

Pandas dataframes can contain both row-major and column-major arrays. Depending on the memory layout, it is better to iterate over the rows or over the columns. This is why there are two implementations of the sum, selected via is_row_major. There are also 3 Numba signatures: one for row-major contiguous arrays, one for column-major contiguous arrays and one for non-contiguous arrays. Numba compiles all 3 function variants and automatically picks the best one at runtime. Numba's JIT compiler can generate a faster implementation (e.g. using SIMD instructions) when the input 2D array is known to be contiguous.
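
For reference, here is a small sketch (not from the answer) of how one can inspect the layout NumPy reports and force a row-major copy so that Numba dispatches the contiguous specialization:

import numpy as np

a = np.random.randint(0, 2, [2000, 2000])
print(a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS'])  # True, False: row-major by default

# A single-dtype dataframe's to_numpy() is typically column-major instead.
# Forcing a row-major copy costs one full copy of the array:
a_c = np.ascontiguousarray(a)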


Experimental Results

This computation is about 14.5 times faster than operations_simpler on my i5-9600KF processor (6 cores). It still takes a long time, but it is memory-bound and nearly optimal on my machine: it is limited by the speed at which the main memory can be read:

On a 2000x2000 dataframe with 32-bit integers:
 - operations:           86.310       ms/iter
 - operations_simpler:    5.450       ms/iter
 - compute_fastest:       0.375       ms/iter
 - optimal:               0.345-0.370 ms/iter
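
The answer does not include the timing harness; a minimal timeit sketch (assuming the functions above are defined) might look like this:

import timeit

import numpy as np
import pandas as pd

# Hypothetical harness; np.random.randint's default dtype (np.int_) matches
# the Numba signatures above.
df = pd.DataFrame(np.random.randint(0, 2, [2000, 2000]))
arr = df.to_numpy()

per_iter = timeit.timeit(lambda: compute_fastest(arr), number=100) / 100
print(f'compute_fastest: {per_iter * 1e3:.3f} ms/iter')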

If you want faster code, then you need to use more compact data types. For example, a uint8 is large enough to hold the values 0 and 1, and it is 4 times smaller in memory than the default 32-bit integers on Windows. This means the code can be up to 4 times faster in this case. The smaller the data type, the faster the program. One could even pack 8 columns into 1 byte using bit tricks, though this is generally significantly slower with Numba unless you have a lot of available cores.
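
As an illustration (not from the answer), here is what the dtype change looks like in NumPy; note that the Numba signatures above would also need uint8 variants (e.g. uint8[:,::1]):

import numpy as np

matrix_i32 = np.random.randint(0, 2, [2000, 2000], dtype=np.int32)
matrix_u8 = matrix_i32.astype(np.uint8)     # same values, 4x less memory to read

print(matrix_i32.nbytes // 1024**2, 'MiB')  # ~15 MiB
print(matrix_u8.nbytes // 1024**2, 'MiB')   # ~3 MiB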


Notes & Discussion

The above code works only with uniformly-typed columns. If this is not the case, you can split the dataframe into multiple column groups and convert each group to a Numpy array, then call the Numba function on each (modified to support groups). Note that @CrazyChucky's code has a similar issue: a dataframe with mixed-datatype columns converted to a Numpy array results in an object-based Numpy array, which is very inefficient (especially for a row-major Numpy array).

Note that using a GPU will not make the computation faster unless the input dataframe is already stored in GPU memory. Indeed, CPU-GPU data transfers are more expensive than just reading the RAM (due to the interconnect overhead, generally a rather slow PCIe link). Note also that GPU memory is quite limited compared to CPU memory. If the target dataframe(s) do not need to be transferred, then using cudf is relatively simple and should give a small speedup. For faster code, one needs to write a custom CUDA kernel, but this is clearly far from easy for dataframes with mixed datatypes. In the end, the resulting speedup should be gpu_ram_throughput / main_ram_throughput, assuming there is no data transfer; this factor is generally 5-12. Note also that CUDA and cudf require an Nvidia GPU.
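
For reference, here is a minimal CuPy sketch of the reduced computation from @CrazyChucky's answer, assuming the data already lives in GPU memory (illustrative and unbenchmarked, on a smaller matrix):

import cupy as cp

# Assumes an Nvidia GPU; the matrix is created directly in GPU memory.
matrix = cp.random.randint(0, 2, (2000, 2000))

def operations_gpu(matrix):
    sums = matrix.sum(axis=1)             # row sums, computed on the GPU
    last = matrix[:, -1] / sums           # only the last column is needed
    return float(last[last > 0].mean())   # single device-to-host transfer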

Finally, reducing the input data size, or simply the amount of computation, is certainly the best solution (as @zvone pointed out in a comment), since the problem is extremely computationally intensive.

answered Oct 28 '25 by Jérôme Richard


You're doing some extra math you don't have to. In plain English, what you're doing is:

  1. Summing each row
  2. Turning the list of sums "sideways" and dividing each column by it
  3. Taking the mean of each column, ignoring values ≤ 0
  4. Returning only the rightmost mean

After step one, you no longer need anything but the rightmost column; you can ignore the other columns, only dividing and averaging the one whose result you care about. Changing your code accordingly:

def operations_simpler(indicator_matrix):
    sums = indicator_matrix.sum(axis=1)           # row sums (step 1)
    last_column = indicator_matrix.iloc[:, -1]    # the only column we need
    divided = last_column / sums
    return divided[divided > 0].mean()

...yields the same result, and takes about a hundredth of the time. Extrapolating from shorter test runs, this cuts the time for 400,000,000 runs on my machine from about 114 years down to... about 324 days. Still not great. So far I've not managed to get it to run any faster by converting to NumPy, compiling with Numba, or employing multiprocessing, but I'll go ahead and post this for now in case it's helpful.
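
As a quick sanity check (illustrative, using the operations function from the question), both versions agree on random input:

import numpy as np
import pandas as pd

m = pd.DataFrame(np.random.randint(0, 2, [1000, 1000]))
assert np.isclose(operations(m), operations_simpler(m))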

Note: you're unlikely to see any improvement from threading with compute-heavy work like this; if anything, you'd want multiprocessing. concurrent.futures offers executors for both. Threads are mostly useful to avoid waiting around for I/O.
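
If you do want to try processes anyway, here is a minimal illustrative sketch (note that every dataframe is pickled and sent to a worker, which has its own cost):

from concurrent.futures import ProcessPoolExecutor

def run_many(matrices):
    # Each matrix is serialized to a worker process, so this only pays off
    # when the per-call work outweighs the transfer overhead.
    with ProcessPoolExecutor() as executor:
        return list(executor.map(operations_simpler, matrices))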

answered Oct 28 '25 by CrazyChucky