Fast image normalisation in Python [closed]

I am looking for a faster approach to normalising an image in Python. I want to convert all pixel values to lie between 0 and 1.

INPUT: 150x150 RGB images in JPEG format.

OS/HARDWARE: LINUX/P40 GPU with 8GB RAM

USE-CASE: Image Preprocessing for a real-time classification task.

Current time per image is ~5-10 milliseconds. I am looking for a method that can reduce this time.

I tried two approaches, with numpy and opencv.

Using numpy (Approx time: 8ms):

norm = (img - np.min(img)) / (np.max(img) - np.min(img))

Using opencv (Approx time: 3ms):

norm = cv2.normalize(img, None, alpha=0, beta=1, norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)

Both these methods are too slow for my use case. Can anyone suggest a faster method for image normalisation?

asked Oct 28 '25 by Pradip Gupta

1 Answer

Your timings seem very slow to me. Perhaps something is wrong with your install?

I tried this test program:

#!/usr/bin/python3

import sys
import numpy as np
import cv2
from PIL import Image
from profilehooks import profile

@profile
def try_numpy(img):
    ar = np.array(img).astype(np.float32)  # convert the PIL image to a float32 array once, outside the loop
    for i in range(1000):
        mn = np.min(ar)
        mx = np.max(ar)
        norm = (ar - mn) * (1.0 / (mx - mn))  # rescale to the [0, 1] range

@profile
def try_cv2(img):
    for i in range(1000):
        norm = cv2.normalize(img, None, alpha=0, beta=1,
                             norm_type=cv2.NORM_MINMAX, dtype=cv2.CV_32F)

img = Image.open(sys.argv[1])   # PIL image for the numpy version
try_numpy(img)

img = cv2.imread(sys.argv[1])   # uint8 BGR array for the cv2 version
try_cv2(img)

And on this modest 2015 i5 laptop running Ubuntu 19.04 I see:

$ ./try291.py ~/pics/150x150.png 
*** PROFILER RESULTS ***
try_cv2 (./try291.py:17)
function called 1 times

         1002 function calls in 0.119 seconds

   Ordered by: cumulative time, internal time, call count

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.001    0.001    0.119    0.119 try291.py:17(try_cv2)
     1000    0.118    0.000    0.118    0.000 {normalize}

*** PROFILER RESULTS ***
try_numpy (./try291.py:9)
function called 1 times

         10067 function calls in 0.113 seconds

   Ordered by: cumulative time, internal time, call count
   List reduced from 52 to 40 due to restriction <40>

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.064    0.064    0.113    0.113 try291.py:9(try_numpy)
     2000    0.004    0.000    0.045    0.000 fromnumeric.py:69(_wrapreduction)

So they both take about 0.1ms per call, ~50x faster than the numbers you see.

To speed it up further:

  • Do you have any a priori knowledge about the range of pixel values? Perhaps you could skip the search for the max and min (see the sketch after this list).
  • Depending on your sampling density, it could be faster to normalize the whole input image, then cut out your 150x150 patches afterwards.
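
For example, if you know the input is ordinary 8-bit JPEG data, the per-image min/max search can be skipped entirely and the normalisation collapses to a single cast and multiply. A minimal sketch of that idea (the filename is just a placeholder, and it assumes the usual uint8 range of 0-255):

import numpy as np
import cv2

img = cv2.imread("patch.jpg")                  # uint8 BGR, values in 0-255

# Known input range, so no min/max scan: one cast plus one multiply.
norm = img.astype(np.float32) * (1.0 / 255.0)  # values now in [0, 1]

Note this maps the fixed 0-255 range to 0-1 rather than stretching each patch to its own min and max, so check that it matches what your classifier expects.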
answered Oct 30 '25 by jcupitt


