With the HotSpot JDK 7u40 there is a nifty new tool called Java Mission Control. Users of the nifty old tool JRockit Mission Control will recognize a lot of the features. This blog focuses on the Flight Recorder tool, and the method profiling information you can get from the flight recorder.
So you want to know why all your CPUs are saturated. Perhaps you even want to get some hints as to what changes can be done to your application to make it less CPU-hungry. Don’t despair – Java Mission Control to the rescue!
Built into the HotSpot JVM is something called the Java Flight Recorder. It records a lot of information about/from the JVM runtime, and can be thought of as similar to the flight data recorders you find in modern airplanes. You normally use the Flight Recorder to find out what was happening in your JVM when something went wrong, but it is also a pretty awesome tool for production-time profiling. Since Mission Control (using the default templates) normally doesn't cause more than a percent of overhead, you can use it on your production server.
Getting the Data
So, to do method profiling with JMC, simply go about producing a recording like you normally would. Here is an example:
- Start the application you want to profile with the following arguments to enable the flight recorder:
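The arguments themselves are not shown above. For an Oracle HotSpot 7u40 JVM they would typically be the following (quoted from memory; check the documentation for your particular JDK):

-XX:+UnlockCommercialFeatures -XX:+FlightRecorder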
- Next, start Mission Control. You can just double-click on jmc in the bin folder of your 7u40 JDK.
- (Close the Welcome screen if this is your first time starting JMC.)
- In the JVM browser, right-click on the JVM you wish to start a flight recording on and select Start Flight Recording.
- Leave all the default settings and select the ‘Profiling – on server’ template for your Event Settings. You can usually just hit Finish at this point, but I’d like to talk a bit about how you can control the method sampler.
- Click Next to go to the higher level event settings. These are groupings of named settings in the template. Here you can select how often you want JFR to sample methods by changing the Method Sampling setting.
- If this level of granularity is not enough, you can click next and modify the event settings on a per event type basis. Type Method in the filter text box.
Note that you can go back and forth between the wizard pages to find out what the high level settings really represent.
- When satisfied, click Finish. The recording will be downloaded automatically and displayed in Mission Control. Click the Code tab group to start visualizing your Method Profiling Sample events.
(Since Java 2D hardly produces any load on my machine, I actually switched to a SpecJBB recording here.)
On this tab you get an overview breakdown of where the JVM is spending the most time executing your application.
- Switch to the method profiling tab to find a top list of the hottest methods in your application.
The top ones are usually a good place to start optimizing.
Once you’ve found your top methods, you either want to make these methods faster to execute, or call them less often. To find out how to call a method less often, you normally look along its predecessor stack traces for the closest frame that you can change. It is quite common that some JDK core class and method is among the top methods.
Since you cannot rewrite, say, HashMap.getEntry(), you need to search along the path for something that you can change/control. In my recording, the next frame is HashMap.get(), which does not help much. The one after that might be a good candidate for optimization. An alternative would be to find somewhere along the entire path where we can reduce the number of times we need to call down into the HashMap to get whatever we need from it.
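To make the idea concrete, here is a hypothetical sketch (the class and method names are invented for illustration, not taken from the recording) of reducing calls down into a HashMap by hoisting a repeated lookup out of a loop:

```java
import java.util.HashMap;
import java.util.Map;

public class LookupHoisting {
    // Before: calls map.get(key) on every iteration,
    // so HashMap.get/getEntry dominates the profile.
    static long sumSlow(Map<String, Integer> map, String key, int iterations) {
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            sum += map.get(key);
        }
        return sum;
    }

    // After: looks the value up once, outside the loop.
    static long sumFast(Map<String, Integer> map, String key, int iterations) {
        long sum = 0;
        int value = map.get(key); // single lookup
        for (int i = 0; i < iterations; i++) {
            sum += value;
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("answer", 42);
        System.out.println(sumSlow(map, "answer", 1000)); // 42000
        System.out.println(sumFast(map, "answer", 1000)); // 42000
    }
}
```

The transformation is trivial here, but the same reasoning applies one or more frames up the stack: the fix rarely lives in the hot method itself, but in a caller you control.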
After you’ve done your optimizations, make a new recording to see if something else has risen to the top. Notice that it really doesn’t matter exactly how much faster the method itself became. The only interesting characteristic is the relative time you spend executing that part of the Java code, since it gives you a rough estimate of the maximum performance gain you can get from optimizing that method.
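That maximum-gain estimate is just Amdahl's law (my addition, not from the original text): if a method accounts for a fraction f of the samples, eliminating it entirely can speed up the whole application by at most 1/(1 - f). A quick sketch:

```java
public class AmdahlBound {
    // Upper bound on overall speedup when a fraction of total
    // execution time is removed entirely (Amdahl's law).
    static double maxSpeedup(double fraction) {
        return 1.0 / (1.0 - fraction);
    }

    public static void main(String[] args) {
        // A method showing up in 20% of samples caps the total
        // speedup at 1.25x, no matter how fast you make it.
        System.out.println(maxSpeedup(0.20)); // 1.25
        System.out.println(maxSpeedup(0.50)); // 2.0
    }
}
```

This is why chasing a method that accounts for two percent of the samples is rarely worth the effort.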
Command Line Flags
Aside from the normal command line flags to control the FlightRecorder (see this blog), there are some flags that are especially useful in the context of the method sampling events.
There is one flag you can use to control whether the method profiler should be available at all:
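The flag itself is missing above. Assuming it refers to the thread-sampling switch in FlightRecorderOptions (which is my reading, so verify against your JDK's documentation), it would be:

-XX:FlightRecorderOptions=samplethreads=<true|false>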
There is also a flag to limit the stack depth for the stack traces. This is both a performance optimization and a safety measure, so that the performance hit doesn’t run away if you have, say, an insane stack depth and a lot of deep recursive calls. I believe it is set to 64 by default:
-XX:FlightRecorderOptions=stackdepth=<the wanted stack depth>
The Flight Recorder method profiler is quite good at describing where the JVM is spending the most time executing Java code, at a very low overhead. There are, however, some limitations/caveats that can be useful to know about:
- If you have no CPU load, do not care too much about what the method profiling information tells you. You will get far fewer sample points, not to mention that the application may behave quite differently under load. If your application is under heavy load and you still aren’t saturating the CPU, you should probably go check your latency events instead.
- The method profiler will not show you, or care about, time spent in native code. If you see a very low JVM-generated CPU load but a high machine CPU load in your recording, you may be spending quite a lot of time in some native library.
Further reading and useful links
The Mission Control home page:
Mission Control Base update site for Eclipse:
Mission Control Experimental update site (Mission Control plug-ins):
The Mission Control Facebook Community Page (not kidding):
Mission Control on Twitter:
Me on Twitter: