I have some performance issues with my application. I found this answer on Stack Overflow: https://stackoverflow.com/a/378024/5363
which I like. One bit I don't really understand is the relation between code optimization and profiling. Obviously one wants to profile optimized code, but at the same time a lot of information is lost during optimization. So is it practical to run optimized code in a debugger and break into it, as suggested in the quoted answer?
I am using CMake with gcc under Linux, if this makes any difference.
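For reference, my understanding is that optimization and debug info are not mutually exclusive with gcc, so the binary can keep its symbols. This is a minimal sketch of how such a build could look (assuming a standard CMake project and CMake 3.13+ for the -B flag; myapp and main.cpp are placeholder names):

    # Configure an optimized build that still carries debug symbols;
    # RelWithDebInfo is CMake's built-in profile for roughly -O2 -g.
    cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -B build
    cmake --build build

    # Equivalent flags with gcc invoked directly:
    g++ -O2 -g -o myapp main.cpp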
The general law here is the Pareto principle, the 80/20 rule:
By profiling, you are going to identify the 20% of causes that matter most for making your application slow, memory-hungry, or whatever else. And if you fix that 20% of causes, you'll tackle 80% of the slowness, memory consumption, and so on.
Of course, the figures are just figures; they are there to give you the spirit of it.
Technically, with gcc under Linux, the answer you are referring to on "How can I profile C++ code running in Linux?" suggests, in a nutshell: run your program under a debugger, interrupt it at random moments, and look at the call stack each time. Whatever the program really spends its time on will show up in a large fraction of those samples.
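A minimal sketch of that workflow with gdb (assuming the binary is called myapp and was built with debug info, as in the build above):

    # Run under gdb, interrupt at a random moment, inspect the stack.
    gdb ./myapp
    (gdb) run
    # ... press Ctrl-C while the program is busy ...
    (gdb) bt          # print the call stack of the interrupted thread
    (gdb) continue
    # repeat the Ctrl-C / bt cycle a handful of times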
If you need to collect stack samples, why do it through a debugger? Run pstack at regular time intervals. You can redirect the output to a different file for each run and analyze those files later; by looking at the call stacks saved in them, you can figure out the hot functions. You do not need a debug build and can do the above on a fully optimized binary.
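A small sketch of that sampling loop (assuming the target's process ID is in $PID, and that pstack prints gdb-style stack frames starting with '#'):

    # Sample the running process once a second, 20 times.
    for i in $(seq 1 20); do
        pstack "$PID" > "stack_$i.txt"
        sleep 1
    done

    # Afterwards, count which frames keep showing up across samples.
    grep -h '^#' stack_*.txt | sort | uniq -c | sort -rn | head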
I would prefer using a profiler tool to doing the above, or to what is listed in the thread you refer to. Profilers quickly pinpoint the top hot functions, and you can understand the call structure by looking at the caller/callee graph. I would spend my time understanding the caller/callee graph rather than analyzing random stacks with the above method.
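This answer does not name a specific tool; one common choice on Linux is perf, which produces exactly that kind of caller/callee breakdown (a minimal sketch, assuming perf is installed and the binary keeps its symbols):

    # Record samples with call-graph information, then browse them.
    perf record -g ./myapp
    perf report        # interactive view with caller/callee expansion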