I'm just curious to hear other people's thoughts on why this specific piece of code might run slower in Python 3.11 than in Python 3.10.6. Cross-posted from here. I'm new here - please kindly let me know if I'm doing something wrong.
test.py script:
import timeit
from random import random

def run():
    for i in range(100):
        j = random()

t = timeit.timeit(run, number=1000000)
print(t)
Commands:
(base) conda activate python_3_10_6
(python_3_10_6) python test.py
5.0430680999998
(python_3_10_6) conda activate python_3_11
(python_3_11) python test.py
5.801756700006081
This looks like it's probably the PEP 659 optimizations not paying off for random.random.
PEP 659 added a specializing adaptive interpreter: at runtime it replaces generic bytecode with specialized, faster variants of common operations. (Not JIT compilation, but definitely JIT-style optimization.) It pays off for most Python code, but I think random.random isn't covered.
random.random is a method (of a hidden random.Random instance) written in C, with no arguments other than self, so it should be using the METH_NOARGS calling convention. This calling convention has no specialized fast path. Both specialize_c_call and _Py_Specialize_Call just bail out instead of specializing the call.
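One way to see that random.random is a C-implemented bound method, with no Python-level code object the specializer could exploit, is to inspect it directly (an illustrative check, not part of the original benchmark):

```python
import random

# random.random is implemented in C, so it has no Python code object
print(type(random.random))    # <class 'builtin_function_or_method'>
print(hasattr(random.random, '__code__'))    # False

# It is bound to a hidden module-level random.Random instance
print(isinstance(random.random.__self__, random.Random))    # True
```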
When PEP 659 doesn't pay off, the work that goes into supporting it is pure overhead. I'm not sure how much each part contributes, but the bytecode is longer than before because each call now compiles to separate PRECALL and CALL instructions (although I think there's some work going on to improve that), and attempting specialization, plus tracking when to attempt it, has its own cost.
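You can inspect the longer call sequence yourself with dis; the exact opcode names vary by CPython version, so treat this as a sketch (on 3.11, look for a PRECALL/CALL pair where 3.10 emitted a single CALL_FUNCTION):

```python
import dis
from random import random

def run():
    for i in range(100):
        j = random()

# Print the bytecode for the benchmarked function; on CPython 3.11
# the random() call site compiles to PRECALL followed by CALL, while
# on 3.10 the same call is one CALL_FUNCTION instruction.
dis.dis(run)
```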