In Bash I can do:
python3 -OO -m py_compile myscript.py
and build a deployment zip with __pycache__ inside. For multiple scripts I can run:
python3 -OO -m compileall .
I would run this on the same underlying AMI image that Lambda uses.
Is this a worthwhile way to improve AWS Lambda performance?
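For reference, here is a rough Python equivalent of the build step I have in mind; the src/ directory and function.zip name are just placeholders:

```python
# Rough Python equivalent of the shell commands above; "src" and
# "function.zip" are placeholder names for the package directory and archive.
import compileall
import zipfile
from pathlib import Path

SRC = Path("src")

# Same as `python3 -OO -m compileall .` run inside src/ (optimize=2 == -OO).
# Note: the resulting *.opt-2.pyc caches are only used when the interpreter
# itself runs with -OO or PYTHONOPTIMIZE=2.
compileall.compile_dir(str(SRC), optimize=2, quiet=1)

# Zip the sources together with the generated __pycache__ directories.
with zipfile.ZipFile("function.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in SRC.rglob("*"):
        if path.is_file():
            zf.write(path, path.relative_to(SRC))
```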
The answer is yes, but it's probably a bit of a premature optimisation.
Lambda has two parts to its performance:

1. The "cold start" time, when a new execution environment is initialised and your code is loaded.
2. The execution time of each invocation once the environment is warm.
.pyc files offer you some optimisation of 1, the "cold start" time. This is because you can ship only the .pyc files, which tend to be smaller (reducing transfer time), and because the code is already compiled to bytecode, which removes that compilation step at start-up (note that the bytecode is still interpreted by the Python runtime, but it's an optimisation nonetheless).
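As a sketch of that "ship only the .pyc files" variant (the directory and archive names below are assumptions): compileall's legacy=True option writes module.pyc next to module.py instead of under __pycache__, which is the layout the import system accepts when no source is present, so you can then zip just the bytecode.

```python
# Sketch only: compile to legacy-layout bytecode and package just the .pyc
# files; "src" and "function-pyc-only.zip" are assumed names.
import compileall
import zipfile
from pathlib import Path

SRC = Path("src")

# legacy=True writes module.pyc beside module.py rather than into __pycache__,
# which is the layout Python can import when the .py file is absent.
compileall.compile_dir(str(SRC), legacy=True, quiet=1)

with zipfile.ZipFile("function-pyc-only.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for pyc in SRC.rglob("*.pyc"):
        if "__pycache__" not in pyc.parts:  # keep only the legacy-layout copies
            zf.write(pyc, pyc.relative_to(SRC))
```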
Frankly, I'd be surprised if this made enough of a difference to justify the added complexity at deployment and the resulting opaqueness of the code in the Lambda console. So I would challenge you to profile with something like AWS X-Ray before you commit to this optimisation over anything in your actual code.
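For example, a minimal way to get per-section timings out of a function with the X-Ray SDK for Python (aws-xray-sdk); the handler and do_work names are placeholders, and the function needs active tracing enabled:

```python
# Minimal tracing sketch using the AWS X-Ray SDK for Python; "handler" and
# "do_work" are hypothetical names, not part of any particular codebase.
from aws_xray_sdk.core import xray_recorder, patch_all

patch_all()  # auto-instrument supported libraries such as boto3 and requests


def do_work(event):
    # Placeholder for the real business logic.
    return {"statusCode": 200, "body": "ok"}


def handler(event, context):
    # Time a section you suspect is slow; the subsegment appears under the
    # invocation segment that Lambda creates when active tracing is enabled.
    with xray_recorder.in_subsegment("business_logic"):
        result = do_work(event)
    return result
```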
(n.b. MapBox have a good article about reducing size and discussing the effect of .pyc deployments: https://blog.mapbox.com/aws-lambda-python-magic-e0f6a407ffc6)