Around 8-9 years ago I saw a tool for Visual Studio (I don't remember its name) that could visualize function calls and their performance. I really liked it, so I am wondering whether anything similar exists for Python. Let's say you have three functions:
def first_func():
    ...

def second_func():
    ...
    for i in xrange(10):
        first_func()
    ...

def third_func():
    ...
    for i in xrange(5):
        second_func()
    ...
So, the final report of that tool was something like this (including connection diagrams):
first_func[avg 2ms] <--50 times--< second_func[avg 25ms] <--5 times--< third_func[avg 140ms]
A tool like this would make it easier to find the bottlenecks in a system, especially in large systems.
You could use cProfile, the profiler bundled with the Python installation (see the profile/cProfile modules in the standard library documentation).
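As a minimal sketch (assuming Python 3; the function names below simply mirror the example in the question, and time.sleep stands in for real work), you can run the code under cProfile and ask pstats for per-function timings and caller/callee counts:

import cProfile
import pstats
import time

def first_func():
    time.sleep(0.002)            # stand-in for real work

def second_func():
    for _ in range(10):
        first_func()

def third_func():
    for _ in range(5):
        second_func()

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    third_func()
    profiler.disable()

    # Per-function call counts and cumulative times, plus which
    # callers invoked each function and how many times.
    stats = pstats.Stats(profiler)
    stats.sort_stats("cumulative").print_stats()
    stats.print_callers()

This is not a graphical call diagram like the Visual Studio tool, but print_callers() gives the same "who calls whom, how often, and how long it takes" information in text form.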
Line-by-line timing and execution frequency with line_profiler:
First, install line_profiler (pip install line_profiler).
Second, modify your source code by decorating the function you want to measure with the @profile decorator.
Third, run kernprof -l -v yourscript.py
The -l option tells kernprof to inject the @profile decorator into your script's builtins, and -v tells kernprof to display timing information once your script finishes.
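For example, a minimal sketch of a script prepared for kernprof (the file name yourscript.py and the loop bodies are placeholders; @profile is injected by kernprof when you pass -l, so the script is meant to be run under kernprof rather than directly):

# yourscript.py  --  run with: kernprof -l -v yourscript.py
import time

@profile                         # provided by kernprof -l, not imported
def second_func():
    total = 0
    for i in range(10):
        time.sleep(0.002)        # stand-in for first_func()'s work
        total += i
    return total

if __name__ == "__main__":
    for _ in range(5):
        second_func()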
The output is a per-line table for each decorated function, showing the hit count, total time, time per hit, the percentage of total time, and the line contents.