From a statistical point of view, the standard deviation should be 0 when all values are equal.
For arr1 the result is 0 as expected, but for arr2 it is 1.3877787807814457e-17 - very small, but not 0, which leads to issues with e.g. zscore.
Is this proper behavior or a weird bug?
import numpy as np
arr1 = [20.0] * 3
#[20.0, 20.0, 20.0]
arr2 = [-0.087] * 3
#[-0.087, -0.087, -0.087]
np.std(arr1) #0.0
np.std(arr2) #1.3877787807814457e-17
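The tiny nonzero value comes from binary floating-point rounding: -0.087 is not exactly representable as a double, so the computed mean can differ from the stored elements by one unit in the last place (ulp), leaving tiny nonzero deviations. A minimal sketch of that diagnosis, assuming NumPy is installed (exact residuals may vary by platform):

import numpy as np

arr2 = np.array([-0.087] * 3)
m = arr2.mean()

# The computed mean may differ from -0.087 by one ulp,
# so the deviations from the mean are tiny but nonzero.
print(m == -0.087)        # may print False
print(arr2 - m)           # residuals on the order of 1e-17
print(np.spacing(0.087))  # ulp size near 0.087, roughly 1.4e-17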
The NumPy documentation for std states:
The standard deviation is the square root of the average of the squared deviations from the mean, i.e., std = sqrt(mean(abs(x - x.mean())**2)).

The average squared deviation is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead. In standard statistical practice, ddof=1 provides an unbiased estimator of the variance of the infinite population. ddof=0 provides a maximum likelihood estimate of the variance for normally distributed variables. The standard deviation computed in this function is the square root of the estimated variance, so even with ddof=1 it will not be an unbiased estimate of the standard deviation per se.

Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and nonnegative.
For floating-point input, the std is computed using the same precision the input has. Depending on the input data, this can cause the results to be inaccurate, especially for float32 (see example below). Specifying a higher-accuracy accumulator using the dtype keyword can alleviate this issue.
a = np.zeros((2, 512*512), dtype=np.float32)
a[0, :] = 1.0
a[1, :] = 0.1
np.std(a) #0.45000005

but for float64:

a = np.zeros((2, 512*512), dtype=np.float64)
a[0, :] = 1.0
a[1, :] = 0.1
np.std(a) #0.45
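If downstream code such as a z-score computation must not divide by these rounding-error-sized standard deviations, one pragmatic workaround is to treat any standard deviation below a small tolerance as exactly zero. The helper below, safe_zscore, is a hypothetical illustration of that idea, not a NumPy or SciPy function, and the tolerance value is an assumption:

import numpy as np

def safe_zscore(x, tol=1e-12):
    # Hypothetical helper: if the spread is effectively zero, return zeros
    # instead of dividing by a rounding-error-sized standard deviation.
    x = np.asarray(x, dtype=np.float64)
    s = x.std()
    if s < tol:
        return np.zeros_like(x)
    return (x - x.mean()) / s

print(safe_zscore([-0.087] * 3))  # [0. 0. 0.] rather than spurious nonzero scores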