Low Overhead Method Profiling with Java Mission Control | Marcus Hirt
Once you've found your top methods, you either want to make these methods faster to execute, or call them less often. To find out how to call a method less, you normally walk up its predecessor stack traces, looking for the closest frame that you can actually change. It is quite common that some JDK core class and method is among the top methods.
Since you cannot rewrite, say, HashMap.getEntry(), you need to search along the path for something that you can change or control. In my recording, the next frame up is HashMap.get(), which does not help much either. The frame after that, however, might be a good candidate for optimization. An alternative would be to find somewhere along the entire path where you can reduce the number of times you need to call down into the HashMap to get whatever it is you need.
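As a sketch of that last alternative (the class, method and variable names below are made up for illustration, not taken from the recording discussed in the article), the classic case is an invariant HashMap lookup that can be hoisted out of a hot loop, so HashMap.get() is called once instead of once per element:

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupHoisting {

    // Hot path as the profiler might see it: the same key is looked up on every
    // iteration, so HashMap.get()/getEntry() climbs to the top of the method profile.
    static int sumBefore(Map<String, Integer> weights, List<String> items, String key) {
        int sum = 0;
        for (String item : items) {
            sum += item.length() * weights.get(key); // one lookup per element
        }
        return sum;
    }

    // Same result, but the invariant lookup is hoisted out of the loop,
    // reducing the number of calls down into the HashMap to one.
    static int sumAfter(Map<String, Integer> weights, List<String> items, String key) {
        int factor = weights.get(key); // single lookup
        int sum = 0;
        for (String item : items) {
            sum += item.length() * factor;
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = new HashMap<>();
        weights.put("default", 3);
        List<String> items = Arrays.asList("a", "bb", "ccc");
        System.out.println(sumBefore(weights, items, "default")); // 18
        System.out.println(sumAfter(weights, items, "default"));  // 18
    }
}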
After you've done your optimizations, you make a new recording to see if something else has popped up to the top. Notice that it really doesn't matter exactly how much faster the method itself became. The only interesting characteristic is the relative time you spend executing that part of the Java code, since it gives you a rough estimate of the maximum performance gain you can get from optimizing that method.
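To make that concrete with made-up numbers: if a method accounts for 25% of the samples in a recording, then even making it infinitely fast can remove at most about 25% of the Java execution time, which caps the speedup of that code path at roughly 1/(1 - 0.25) ≈ 1.33x (Amdahl's law); if a method only accounts for 2% of the samples, optimizing it is rarely worth your time.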
Command Line Flags
Aside from the normal command line flags to control the FlightRecorder (see this blog), there are some flags that are especially useful in the context of the method sampling events.
There is one flag you can use to control whether the method profiler should be available at all:
-XX:FlightRecorderOptions=samplethreads=[true|false]
There is also a flag to limit the stack depth for the stack traces. This is both a performance optimization and a safety measure, so that the performance hit doesn't run away if you have, say, an insane stack depth and a lot of deep recursive calls. I believe it is set to 64 by default:
-XX:FlightRecorderOptions=stackdepth=<the wanted stack depth>
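Putting the two together, a complete command line might look something like the following (a sketch for the Oracle JDK 7u40+/8 releases this post targets, where Flight Recorder also requires the commercial features unlock; myapp.jar is just a placeholder):
java -XX:+UnlockCommercialFeatures -XX:+FlightRecorder -XX:FlightRecorderOptions=samplethreads=true,stackdepth=128 -jar myapp.jar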
Limitations
The Flight Recorder method profiler is quite good at describing where the JVM is spending the most time executing Java code, at a very low overhead. There are, however, some limitations/caveats that can be useful to know about:
- If you have no CPU load, do not put too much weight on what the method profiling information tells you. You will get far fewer sample points, not to mention that the application may behave quite differently under load. If your application is under heavy load and you still aren't saturating the CPU, you should probably go check your latency events instead (see the sketch after this list).
- The method profiler will not show you, or care about, time spent in native code. If you see a very low JVM-generated CPU load but a high machine CPU load in your recording, you may be spending quite a lot of time in some native library.
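For the latency case, one way to get a recording that captures latency events with the more aggressive thresholds is to start it with the bundled profile settings template, which typically has lower thresholds than the default one. A sketch, assuming the target JVM was started with Flight Recorder enabled as above and that 12345 is its process id:
jcmd 12345 JFR.start name=latency settings=profile duration=60s filename=latency.jfr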