Continuous Production Profiling and Diagnostics

I’ve gotten a lot of questions about continuous production profiling lately. Why would anyone want to profile in production, or, if production profiling seems reasonable, why the heck leave it on continuously? I thought I’d take a few moments and share my take on the problem and the success I’ve seen over the past few years applying continuous production profiling to real-world systems.

Trigger warning: this blog will keep code samples to a minimum – just a few small, illustrative sketches. 😉

Profiling?

So what is software profiling then? It’s the ancient black magic art of trying to figure out how something is performing, for some aspect of its performance. In American TV series, the profiler is usually some federal agent who is adept at understanding the psychology of the criminal mind. The profiler attempts to understand key aspects of the criminal to make it easier for the law enforcement agents to catch them. In software profiling we’re kind of doing the same thing, but for software – your code as well as all the third-party code you might be depending on.

We’re trying to build an accurate profile of what is going on in the software when it is being run, but in this case to find ways to improve a program. And to understand what is going on in your program, the profiler has to collect call traces and usually some additional context to make sense of it all.

In comparison to other observability tools, like metrics and logs, profilers will provide you with a holistic view of a running program, no matter the origin of the code and requiring no application-specific instrumentation. Profilers will provide you with detailed information about where in the actual code, down to the line and byte code index, things are going down. A concrete example would be learning which line in a function/method is using the most CPU, and how it was being called.

It used to take painting a red pentagram on the floor, and a healthy stock of black wax candles, to do profiling right. Especially in production. The overhead of early profilers wasn’t really a design criterion; it was assumed you’d run the process locally, and in development. And, since it was assumed you’d be running the profiling frontend on the same machine, profiling remote processes was somewhat tricky and not necessarily secure. Production profilers, like JFR/JMC, came along, but they usually focus on a single process, and since security is a bit tricky to set up properly, most people sidestep the problem altogether and run (yep, in production) with authentication and encryption off.

Different Kinds of Profiling

Profiling means different things to different people. There are various types of resources that you may be interested in knowing more about, such as CPU or locks, and there are different ways of profiling them.

Most people will implicitly assume that when talking about profiling, one means CPU profiling – the ancient art of collecting data about where in the code the most CPU time is spent. It’s a great place to start when you’re trying to figure out how to make your application consume less CPU. If you can optimize your application to do the same work with fewer resources, this of course directly translates into lowering the bill from your cloud provider, or being able to put off buying those extra servers for a while.

Any self-respecting modern profiling tool will be able to show more than just the CPU aspect of your application, for example allocation profiling or profiling thread halts. Profiling no longer implies just grabbing stack-traces, and assigning meaning to the stack trace depending on how it was sampled; some profilers collaborate closely with the runtime to provide more information than that. Some profilers even provide execution tracing capabilities.

Execution tracing is the capability to produce very specific events when something interesting happens. Execution tracing is available on different levels. Operating systems usually provide frameworks allowing you to listen on various operating system events, some even allowing you to write probe definitions to decide what data to get. Examples include ETW, DTrace and eBPF. Some runtimes, like the OpenJDK Java VM, provide support for integrating with these event systems, and/or have their own event system altogether. Java, being portable across operating systems, and wanting to provide context from the runtime itself, has a high performance event recorder built in, called the JDK Flight Recorder. Benefits include cheap access to information and emission of data and state already tracked by the runtime, not to mention an extensible and coherent data model.

Here are a few of my favourite kinds of profiling information:

  • CPU profiling
  • Wall-clock profiling
  • Allocation profiling
  • Lock / Thread halt / Stop-the-World profiling
  • Heap profiling

Let’s go through a few of them…

CPU Profiling

CPU profiling attempts to answer the question about which methods/functions are eating up all that CPU. If you can properly answer that question, and if you can do something about it (like optimizing the function or calling it less often) you will use less resources. If you want to reduce your cloud provider bill, this is a great place to start. Also, if you can scope the analysis down to a context that you care about, let’s say part of a distributed trace, you can target improving the performance of an individual API endpoint.

Wall-Clock Profiling

Wall-clock profiling attempts to answer the question about which method/function is taking all that time, no matter if on CPU or not. For runtimes supporting massively multithreaded applications, this information is much less useful without some context.

For example, let’s say you have a Java application with various thread pools running various kinds of operations. You may have hundreds of threads, all of them mostly parked, awaiting some work to do. Unless you have some context, all the wall-clock profiling will tell you is that most threads were parked. But if you do have some context, let’s say context around which span in a distributed trace is running when samples are taken, your wall-clock profiling data can tell you in which methods most of the time was spent during a particularly long lasting span. [1]

As a general rule of thumb, wall-clock profiling is useful for finding and optimizing away latencies, whereas CPU profiling is more suited for optimizing throughput. Also, execution tracing is a great complement to wall-clock profiling.

If you can tell where the wall-clock time is spent, you can help remove performance obstacles by seeing which method calls take time and optimize them, or reduce the number of calls to them.

Allocation Profiling

Allocation profiling tries to answer where all that allocation pressure is coming from, and what is being allocated. This is important, since all that allocated memory will usually have to be reclaimed at some point in time, and that uses CPU and possibly causes stop-the-world pauses from GC (though modern GC technologies, for example ZGC for the Java platform, are making this less of an issue for some types of services).

If you can properly answer where the allocation pressure comes from, you can bring down GC activity by optimizing the offending methods, or have your application call them less.
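
If you want to see what this data looks like on the Java platform, here is a minimal sketch using the JDK Flight Recorder API: it records the two TLAB-related allocation events (more on how those are sampled in the Sampling section further down) for a minute and dumps them to a file that can be opened in JDK Mission Control. In a continuous setup, the recording would of course be managed by the profiling agent rather than by hand.

import java.nio.file.Path;
import java.time.Duration;
import jdk.jfr.Recording;

// Capture allocation samples for a minute and dump them for offline analysis.
try (Recording recording = new Recording()) {
  recording.enable("jdk.ObjectAllocationInNewTLAB");
  recording.enable("jdk.ObjectAllocationOutsideTLAB");
  recording.start();
  Thread.sleep(Duration.ofMinutes(1).toMillis());
  recording.stop();
  recording.dump(Path.of("allocations.jfr"));
}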

Lock / Thread Halt / Stop-the-World (STW) Profiling

This kind of profiling tries to answer the question of why your thread didn’t get to run right there and then. This is typically what you would use the wall-clock profiler for, but the wall-clock profiler usually has some serious limitations, making it necessary to collaborate with the runtime to get some additional context. The wall-clock profiler typically only gets sampled stack traces showing you which method you spent time in, but without context it may be hard to know why.

Here are some examples:

  • Your thread is waiting on a monitor
    Context should probably include which thread is currently holding the monitor, which address the monitor has, the time you had to wait etc.
  • Your runtime is doing something runtimey requiring stopping the world, showing your method taking its own sweet time, but not offering any clues as to why
    • STW phase due to GC happening in the middle of running your method.
    • STW phase due to a heap dump
    • STW phase due to full thread stack dump
    • STW phase due to a badly behaving framework, or your well-meaning colleague(s), forcing full GCs all the time, since they “know that a GC really improves performance if done right there”, not quite realizing that it’s just a small part of a much bigger system.
  • Your thread is waiting for an I/O operation to complete
    Context should probably include the IP address (socket I/O) or file (file I/O), the bytes read/written etc.

There are plenty more examples: wait, sleep, park etc. To learn more, open JDK Mission Control and take a look at individual event types in the event browser.

Heap Profiling

This kind of profiling attempts to answer questions about what’s on your heap and, sometimes, why. This information can be used to reduce the amount of heap required to run your application, or help you solve memory leaks. Information may range from heap histograms showing you the number of instances of each type on the heap, to leak candidates, their allocation times and allocation stack traces, together with the reference chains still holding on to them.

Continuous Production Profiling

Assuming that your application always has the same performance profile, which implies always having exactly the same load and never being updated, with no edge cases or failure modes, and assuming perfectly random sampling, your profiler could simply take a few samples (let’s say 100 to get a nice distribution) over whichever time period you are interested in (let’s say 24 hours), and call it a day. You would have a very cheap breakdown over whatever profiling information you’re tracking.

These days, however, new versions of an application are deployed several times a day, evolving to meet new requirements at a break-neck speed. They are also subjected to rapidly changing load profiles. Sometimes there may be an edge case we didn’t foresee when writing the program. Being able to use profiling data to not only do high level performance profiling, but detailed problem resolution, is becoming more and more common, not to mention useful.

At Datadog, we’ve used continuous production profiling for our own services for many months now. The net result is that we’ve managed to lower the cost of running our services all over the company by quite large amounts of money. We’ve even used the profiler to improve our other components, like the tracer. I had the same experience at Oracle, where dedicated continuous profiling analysis was used to a great extent for problem resolution in production systems.

Aside from being incredibly convenient, there are many different reasons why you might want to have the profiler running continuously.

Change Analysis

These days new versions are deployed several times a day. This is certainly true for my team at Datadog. There is great value in being able to compare performance profiles, down to the line of code – across new releases, between specific time intervals, over other attributes like high vs low CPU load, and countless other facets.

Fine Grained Profiling

Some production profiling environments allow you to add context, for example custom events, providing the means to look at the profiling data in the light of something else happening in a thread at a certain time. This can be used for doing breakdowns of the profiling data for any context you put there, any time, anywhere.

Adding some contextual information can be quite powerful. For example, if we were able to extend the profiling data with information about what was actually going on in that thread, at that time, any other profiling data captured could be seen in the light of that context. As a concrete example, WebLogic Server produced Flight Recorder events for things like SQL calls, servlet invocations etc, making it much easier to attribute the low-level information provided by the profiler to higher level constructs. These events were also associated with an Execution Context ID which spanned processes, making it possible to follow along in distributed transactions.

With the advent of distributed tracing, this can be done in a fairly general way, so that profiling data can be associated with thread local activations of spans in a distributed trace (so called scopes). [1]

That said, with a general recording framework, there is no limit to the kinds of contexts you can invent and associate your profiling data with.
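
As an illustration, here is a minimal sketch of what such a custom context event could look like using the JDK Flight Recorder event API. The event name and fields are made up for this example – the point is simply that anything committed this way ends up time- and thread-correlated with the rest of the recorded profiling data.

import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

@Name("com.example.TraceContext") // hypothetical event type
@Label("Trace Context")
public class TraceContextEvent extends Event {
  @Label("Trace Id")
  String traceId;
  @Label("Span Id")
  String spanId;
}

// Wrap the work belonging to a span, so the event covers its duration.
TraceContextEvent event = new TraceContextEvent();
event.traceId = traceId; // ids obtained from the tracer (not shown)
event.spanId = spanId;
event.begin();
try {
  doWork(); // the actual work being profiled
} finally {
  event.commit();
}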

Diagnostics

It’s 2:03 a.m., all of a sudden some spans in your distributed trace end up taking a really long time. Looking at the spans, there is nothing indicating something is actually going wrong, or that the data is bad. From what is present in the tag data, nothing seems to be related between the spans. You decide to open up the profile.

The automated analysis informs you that a third-party library has initiated safe pointing VM operations from a certain thread, in this case for doing full heap dumps. The analysis text points you to more documentation about what a safe point is. You read up on safe pointing VM operations, and the library, and find out that under certain conditions, the library can initiate an emergency heap dump, but that the feature can be turned off. You turn it off, redeploy and go back to sleep.

Or, perhaps the automated analysis informs you that there is heavy lock contention on the apache logger, and links you to the lock profiling information. Looking at the lock profiling information, it seems most of the contention is being caused by the logging done on one particular line. You decide that the logging there is not essential, remove it, commit, redeploy and go back to sleep.

When something happens in production, you will always have data at hand with a continuous profiler. There is no need to try to reproduce the exact environment and conditions under which the problem occurs. You will always have actionable data readily available.

Of course, the cure must not be worse than the ailment. If the performance overhead you pay for the information costs you too much, it will not be worth it. Therefore this rather detailed information must be collected quite inexpensively for a continuous production profiler.

Low-overhead Production Profiling

So, how can one go about producing this information at a reasonable cost? Also, we can’t introduce too much observer effect, as that would skew the data so it no longer truly represents the application’s behaviour without the instrumentation.

There are plenty of different methods and techniques we can use. Let’s dig into a few.

Using Already Available Information

If the runtime is already collecting the data, exporting it can usually be done quite cheaply. For example, if the runtime is already collecting information about the various garbage collection phases, perhaps to drive decisions like when to start initiating the next concurrent GC-cycle, that information is already readily available. There is usually quite a bit of information that an adaptively optimizing runtime keeps track of, and some of that information can be quite useful for application developers.

Sampling

One technique we can use is to not record every single possible value, but do statistical sampling instead. In many cases this is the only way that makes sense. Take CPU profiling, for example. In most cases, we will be able to select an upper boundary for how much data we produce, either by selecting the CPU quanta between samples, or by selecting a fixed number of threads to look at, at any given time, together with the sampling period. There are also more advanced techniques for getting a fixed data rate.

An interesting example from Java is the new, upcoming allocation profiling event. Allocation in Java is, most of the time, approximately the cost of bumping a pointer. The allocation takes place in thread local allocation buffers (TLABs). There is no way to do anything in that code path without introducing unacceptable overhead. There are, however, two “slow” paths in the allocator: one for when the TLAB is full, and one for when the object is too large to fit in a TLAB (usually an enormous array), leading to the object being allocated directly on the heap. By sampling our allocations at these points, we get relatively cheap allocation events that are proportional to the allocation pressure. If we could configure how often to subsample, for example as the average amount of memory allocated between samples, we would be able to regulate the acceptable overhead. That said, what we’re really looking for is a constant data production rate, so regulating that is better left to a PID-style controller, giving us a relatively constant data production.
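
To make the idea concrete, here is a minimal sketch of a proportional-only controller (a real PID controller would add integral and derivative terms) that adjusts the sampling interval to keep the produced event rate near a target; all names and constants are made up for illustration:

// Proportional-only rate controller: nudges the sampling interval so that the
// observed event rate converges towards the target rate. Illustration only.
final class SamplingRateController {
  private final double targetEventsPerSecond;
  private double intervalMillis;

  SamplingRateController(double targetEventsPerSecond, double initialIntervalMillis) {
    this.targetEventsPerSecond = targetEventsPerSecond;
    this.intervalMillis = initialIntervalMillis;
  }

  // Called periodically with the event rate observed since the last adjustment.
  synchronized long adjust(double observedEventsPerSecond) {
    double ratio = observedEventsPerSecond / targetEventsPerSecond;
    // Producing too much data (ratio > 1) lengthens the interval; too little shortens it.
    intervalMillis = Math.max(1, Math.min(10_000, intervalMillis * ratio));
    return (long) intervalMillis;
  }
}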

Of course, the fewer sample points we have, the less we can say about the behaviour over very short periods of time.

Thresholding

One sort of sampling is to simply only collect outliers. For some situations, we really would like to get more information. One example might be thread halts that take longer than, say, 10 ms. Setting a threshold allows us to do a little bit more work, when it’s very much warranted. For example, I might only be interested in tracking blocking I/O reads/writes lasting longer than a certain threshold, but for those I’d like to know the amount of bytes read/written, the IP address read from/written to etc.
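
As a sketch of what this could look like with JDK Flight Recorder, assuming the built-in socket read and monitor wait events are reasonable stand-ins for the examples above:

import java.time.Duration;
import jdk.jfr.Recording;

// Only record socket reads and monitor waits that last longer than 10 ms.
try (Recording recording = new Recording()) {
  recording.enable("jdk.SocketRead").withThreshold(Duration.ofMillis(10));
  recording.enable("jdk.JavaMonitorWait").withThreshold(Duration.ofMillis(10));
  recording.start();
  // ... run the workload, then stop and dump as usual ...
}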

Of course, the higher the threshold, the more data we will miss (unless we have other means to account for that time). Also, thresholds make it harder to reason about the actual data production rate.

Protect Against Edge Cases

Edge cases which make it hard to reason about their potential overhead should be avoided, or at least handled. For example, when calculating reference chains, you may provide a time budget for which you can scan, and then only do it when absolutely needed. Or, since the cost of walking a stack trace can be proportional to the number of frames on the stack, you can set an upper limit to how many frames to walk, so that recursion gone wild won’t kill your performance. Be careful to identify these edge cases, and protect against them.
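
As a sketch of the stack-depth example, capping the walk in plain Java could look something like this (the limit of 64 frames is an arbitrary example value; JFR has a stackdepth option serving the same purpose):

import java.util.List;
import java.util.stream.Collectors;

final class CappedSampler {
  // Cap the number of frames collected, so runaway recursion cannot make a
  // single sample arbitrarily expensive. MAX_FRAMES is an arbitrary example value.
  private static final int MAX_FRAMES = 64;

  static List<StackWalker.StackFrame> sampleFrames() {
    return StackWalker.getInstance()
        .walk(frames -> frames.limit(MAX_FRAMES).collect(Collectors.toList()));
  }
}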

One recent example is the Exception event available in the Flight Recorder (Java), which can be configured to only capture Errors. The Java Language Specification defines an Error like this:

“Error is the superclass of all the exceptions from which ordinary programs are not ordinarily expected to recover.”

You would be excused for believing that Errors would happen very rarely, and that recording all of them would not be a problem. Well, a very popular Java framework, which will remain unnamed, subclassed Error in an exception class named LookAheadSuccess. That error was used in a parser for control flow, resulting in it being thrown about a gazillion times per minute. We ended up developing our own solution for exception profiling at Datadog, which records Datadog-specific events into the JDK Flight Recorder.

Some Assembly Required

These techniques, and more, can be used together to provide a best-of-all-worlds profiling environment. Just be careful, as with most things in life a balance must be found. Just like there is (trigger warning) no single energy source that will solve our energy problems in a carbon neutral way (we should use all at our disposal – including nuclear power – to have a chance to go carbon neutral in a reasonable time [2][3]), a balance must be struck between sampling and execution tracing, and a balance for how much data to capture for the various types of profiling you’re doing.

Continuous Profiling in Large Deployments
Or, Finding What You’re Looking For

In a way this part of the blog will be a shameless plug for the work I’ve been involved with at Datadog, but it may offer insights into what matters for a continuous profiler to be successful. Feel free to skip if you dislike me talking about a specific commercial solution.

So, you’ve managed to get all that juicy profiling data down to a reasonable volume (for Datadog / Java, on average about 100k events per minute, with context and stacktraces, or 2 MB per minute, at less than 2% CPU overhead), so that you can process and store it without going broke. What do you do next?

That amount of data will be overwhelming to most people, so you’ll need to offer a few different ways into the data. Here are a few that we’ve found useful at Datadog:

  • Monitoring
  • Aggregation
  • Searching
  • Association by Context
  • Analysis

Monitoring

All that detailed data that has been collected can, of course, be used to derive metrics. We differentiate between two kinds in the profiling team at Datadog:

  • Key Performance Metrics
  • High Cardinality Metrics

Key performance metrics are simple scalar metrics; you typically derive a value, periodically, per runtime – for example CPU utilization or allocation rate.

Here’s an example showing a typical key performance metric:

[Image: a key performance metric chart]

The graph above shows the allocation rate. It’s a simple number per runtime that can change over time. In this case the chart is an aggregate over the service, but it could just as well be a simple metric plotted for an individual runtime.

High cardinality metrics are metrics that can have an enormous number of different buckets with which the values are associated. An example would be the CPU time per method.

We use these kinds of metrics to support many different use cases, such as allowing you to see the hottest methods in your entire datacenter. The picture below shows the hottest allocation sites across a bunch of processes.

[Image: the hottest allocation sites across a set of processes]

Here are some contended methods. Yep, one is a demo…

[Image: a list of contended methods]

Metrics also allow you to monitor for certain conditions – for example having alerts / watchdogs fire when certain conditions, or changes in conditions, occur. That said, they aren’t worth that much unless you can, if you find something funny, go see what was going on – for example see how that contended method was reached when under contention.

Aggregation

Another use case is when you don’t care about any specific process at all – you just want to look at the big picture in your datacenter. You may, for example, want to see what the CPU profiling information looks like, on average, across all your hosts for a certain time range. This would be a great place to start if, for example, you’re looking for ways to lower the CPU usage for Friday nights, 7 to 10 p.m.

Here, for example, is an aggregation flame graph for the profiling data collected for a certain service (prof-analyzer), where there is some load (I set it to a range to filter out the profiles with very little load).

[Image: an aggregated flame graph for the prof-analyzer service]

A specific method can be selected to show how that specific method ended up being called:

[Image: the call tree for a selected method]

Searching

What if you just want to find the worst examples of using a butt-load of CPU? Or the worst example of a spike in allocation rate? Having indexed key performance metrics for the profiling data makes it possible to quickly search for profiling information matching certain criteria.

Here is an example of using the monitor enter wait time to filter for atypically high lock contention:

[Image: a search filtered on monitor enter wait time, showing atypically high lock contention]

Association by Context

Of course, if we can associate the profiling data with individual traces, it becomes possible to see what went on for an individual long-lasting span. When using information from the runtime, even things that are normally hidden from user applications (including profilers written purely in Java), like stop-the-world pauses, become visible.

[Image: profiling data broken down per span]

Analysis

With access to all that yummy, detailed, per-thread-and-time profiling data, it would be a shame not to go looking for some interesting patterns to highlight. The result of that analysis provides a means to focus on the most important parts of the profiling data.

[Image: automated analysis results for one of our services]

So, nothing terribly interesting going on in our services right now. The one below is from a silly demo app.

[Image: automated analysis results for a demo application]

That said, if you’re interested in the kind of patterns we can detect, check out the JDK Mission Control rules. The ones at Datadog are a superset, and work similarly.

Summary

Profiling these days is no longer limited to high-overhead development profilers. The capabilities of production-time profilers are steadily increasing and their value is becoming less controversial, with some preferring them for complex applications even during development. Today, having a continuous profiler enabled in production will offer unparalleled performance insights into your production environment, at an impressively low performance overhead. Data will always be at your fingertips when you need it.

Additional Reading

https://www.datadoghq.com/blog/datadog-continuous-profiler/
https://www.datadoghq.com/blog/engineering/how-we-wrote-a-python-profiler/

Many thanks to Alex Ciminian, Matt Perpick and Dan Benamy for feedback on this blog.


[1]: Deep Distributed Tracing blog: https://hirt.se/blog/?p=1081

Unrelated links regarding the very interesting and important de-carbonization debate:

[2]: https://theness.com/neurologicablog/index.php/there-is-no-one-energy-solution/

[3]: https://mediasite.engr.wisc.edu/Mediasite/Play/f77cfe80cdea45079cee72ac7e04469f1d
(No longer available, but this YouTube clip is related, and is also presented by Dr. Jesse Jenkins):
https://youtu.be/ZYfD1Z_zkfc

A Closer Look at JFR Streaming

By Marcus Hirt and JP Bempel

Since JDK 14 there is a new JFR kid on the block – JFR streaming. 🙂 This blog post will discuss some of the things that you can do with JFR streaming, as well as some of the things you may want to avoid.

An Introduction to JFR Streaming

In the most recent version of the JDK a new JFR-related feature was introduced – JFR streaming. It is a feature allowing a developer to subscribe to select JFR data and to decide what to do with that data in the host process. JFR events can also be consumed from a separate process by pointing to the file repo of a separate JVM process – the mechanism is the same.

JFR streaming works by allowing reads from the JFR file whilst it is being written, with the emissions to disk happening more frequently (usually every second, or when the in-memory buffers are full) than during your normal flight recordings, where the data is emitted only when the in-memory buffers are full. It does not support streaming directly from the in-memory buffers, and the events are not delivered synchronously as they occur in the JVM.

The new functionality mostly resides in jdk.jfr.consumer. This is how you would open an event stream and start consuming the CPU load at one-second intervals, as well as the class of the monitor whenever a thread is blocked for more than 10 ms trying to enter a monitor:

try (var rs = new RecordingStream()) {
  rs.enable("jdk.CPULoad").withPeriod(Duration.ofSeconds(1));
  rs.enable("jdk.JavaMonitorEnter").withThreshold(Duration.ofMillis(10));
  rs.onEvent("jdk.CPULoad", event -> {
    System.out.println(event.getFloat("machineTotal"));
  });
  rs.onEvent("jdk.JavaMonitorEnter", event -> {
    System.out.println(event.getClass("monitorClass"));
  });
  rs.start();
}

The RecordingStream is what you would use to control what is gathered from within the Java process, effectively also controlling the recorder.

Here is another example using the default recording template, and printing out the information for garbage collection events, CPU load and the JVM information:

Configuration c = Configuration.getConfiguration("default");
try (var rs = new RecordingStream(c)) {
  rs.onEvent("jdk.GarbageCollection", System.out::println);
  rs.onEvent("jdk.CPULoad", System.out::println);
  rs.onEvent("jdk.JVMInformation", System.out::println);
  rs.start();
}

The EventStream class can be used together with the standard flight recorder mechanisms to gather information from ongoing recordings, even ones being done in separate processes, or from an already recorded file. Here is an example using the EventStream to get some other attributes of the CPU load and information from garbage collections from within the Java process (needs an ongoing recording):

try (var es = EventStream.openRepository()) {
   es.onEvent("jdk.CPULoad", event -> {
     System.out.println("CPU Load " + event.getEndTime());
     System.out.println(" Machine total: " + 100 * event.getFloat("machineTotal") + "%");
     System.out.println(" JVM User: " + 100 * event.getFloat("jvmUser") + "%");
     System.out.println(" JVM System: " + 100 * event.getFloat("jvmSystem") + "%");
     System.out.println();
   });
   es.onEvent("jdk.GarbageCollection", event -> {
     System.out.println("Garbage collection: " + event.getLong("gcId"));
     System.out.println(" Cause: " + event.getString("cause"));
     System.out.println(" Total pause: " + event.getDuration("sumOfPauses"));
     System.out.println(" Longest pause: " + event.getDuration("longestPause"));
     System.out.println();
   });
   es.start();
 }

This is the EventStream interface used to consume and filter an event stream:

public interface EventStream extends AutoCloseable {
  public static EventStream openRepository();
  public static EventStream openRepository(Path directory);
  public static EventStream openFile(Path file);

  void setStartTime(Instant startTime);
  void setEndTime(Instant endTime);
  void setOrdered(boolean ordered);
  void setReuse(boolean reuse);

  void onEvent(Consumer<RecordedEvent> handler);
  void onEvent(String eventName, Consumer<RecordedEvent> handler);
  void onClose(Runnable handler);
  void onError(Runnable handler);
  void remove(Object handler);
  void start();
  void startAsync();
  void awaitTermination();
  void awaitTermination(Duration duration);
  void close();
}

The open* methods allow you to open a specific file or a specific file repository (for example from a different process). The set* methods allow you to filter on time and to select if you want to enforce that the events are delivered in time order. You can also allow the reuse of the event object that gets delivered, to get the memory pressure down a bit.

The onEvent* methods allow you to register a consumer for handling the events, either all of the events or by event name (type). The start method kicks off the processing in the current thread; startAsync is a convenience method for kicking off the processing in a single separate thread.
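
As a small sketch tying these together, here is what consuming an already recorded file asynchronously could look like – the file name is just an example:

import java.nio.file.Path;
import java.time.Duration;
import jdk.jfr.consumer.EventStream;

try (EventStream es = EventStream.openFile(Path.of("recording.jfr"))) {
  es.setOrdered(true);  // deliver events in time order
  es.setReuse(true);    // reuse the event object to keep allocation pressure down
  es.onEvent("jdk.CPULoad", e -> System.out.println(e.getFloat("machineTotal")));
  es.startAsync();                            // process on a separate thread
  es.awaitTermination(Duration.ofMinutes(1)); // wait at most a minute, then close
}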

Where to use streaming

There are several advantages to JFR event streaming. It is a great way to access JFR data for monitoring purposes. You get access to detailed information that was previously unavailable to you, even from different processes, should you want to.

Here are some examples:

  • Directly send monitoring data to your favourite monitoring service
    For example streaming select metrics over to Datadog. 😉 Not that you would need to – we already derive interesting performance metrics from the (full) flight recordings we capture. We even track complex metrics, like top hottest methods or top allocation sites over time, using what we internally call high cardinality metrics.
  • Pre-aggregating data before sending it off
    For example, you could get the CPU load every second, and then every five minutes roll it up to an average, median, min, max and a standard deviation, not having to send every single entry (see the sketch a little further down).
  • Act on profiling data in-process
    You could, for example, make decisions for controlling the normal flight recordings given some statistics you track, like enabling certain events when it looks like it could be interesting.
  • Expose JFR data through other management APIs
    For example, adding an MBean exposing select JFR data over JMX.
    That said, there might be an API to connect directly to an MBeanServerConnection in the future[1]:

    MBeanServerConnection conn = connect(host, port);
    try (EventStream es = new RemoteRecordingStream(conn)) {
      es.onEvent("jdk.GarbageCollection", e -> ... );
      es.onEvent("jdk.ExceptionThrown". e -> ...);
      es.onEvent("jdk.JavaMonitorBlocked", e-> ...);
      es.start();
    }
    

JFR streaming also allows you to skip the metadata part of a normal flight recording. The metadata in JFR contains the information about what was recorded, so that you can parse and view data that you may not even know about beforehand. In the case of monitoring a few well-known data points, this is redundant information to keep sending over and over again.
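
Here is a minimal sketch of the pre-aggregation idea from the list above – rolling the per-second CPU load up into five minute summaries before handing them off. The five minute interval, the summary fields (no median or standard deviation here) and the println are placeholders for whatever your monitoring pipeline expects, and the swap of the window ignores a small race for brevity:

import java.time.Duration;
import java.util.DoubleSummaryStatistics;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import jdk.jfr.consumer.RecordingStream;

public class CpuLoadRollup {
  public static void main(String[] args) {
    var window = new AtomicReference<>(new DoubleSummaryStatistics());
    var reporter = Executors.newSingleThreadScheduledExecutor();
    try (var rs = new RecordingStream()) {
      rs.enable("jdk.CPULoad").withPeriod(Duration.ofSeconds(1));
      // Every sample goes into the current five minute window.
      rs.onEvent("jdk.CPULoad", e -> window.get().accept(e.getFloat("machineTotal")));
      // Every five minutes, swap in a fresh window and report the old one.
      reporter.scheduleAtFixedRate(() -> {
        DoubleSummaryStatistics stats = window.getAndSet(new DoubleSummaryStatistics());
        // Replace the println with a call to your monitoring backend of choice.
        System.out.printf("cpu avg=%.3f min=%.3f max=%.3f samples=%d%n",
            stats.getAverage(), stats.getMin(), stats.getMax(), stats.getCount());
      }, 5, 5, TimeUnit.MINUTES);
      rs.start(); // blocks while events are being consumed
    } finally {
      reporter.shutdown();
    }
  }
}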

Erik Gahlin has a neat example for producing health reports using JFR streaming, displaying pre-aggregation of the top frames for execution samples and allocation, as well as doing stats for some common data points, like GC metrics and CPU load.

Where Not to Use Streaming

This is from JEP-349[2], the JEP where JFR streaming was introduced:

To consume the data today, a user must start a recording, stop it, dump the contents to disk and then parse the recording file. This works well for application profiling, where typically at least a minute of data is being recorded at a time, but not for monitoring purposes.

Let’s explore why the JEP differentiates between monitoring and profiling. Some events in JFR are simple data points in time. Some are more complex, containing plenty of constants. For example stack traces. JFR takes great care to record these complex data structures in a binary format that doesn’t take a lot of processing time to produce, and which is still compact.

Some of the JFR events occur quite frequently – for example, a typical one minute recording of data can contain hundreds of thousands of events. The file size for such a recording is typically only a couple of MB. There is a mix of techniques used to keep the size down, such as using constant pools to ensure that information like method names is not repeated, LEB128 encoding of integers etc.

For profiling you typically want quite a few of these events enabled. JFR was built to emit this data at a very low overhead, and the data is eminently useful to get detailed information about things like why your thread is halting. For example, the stack trace to a place where your code had to wait to enter a monitor, complete with the class of the monitor waited on, the exact duration of the wait, which thread was holding on to the monitor (making you not able to enter), the monitor address and more. Not only that, there may have been other events providing context about what was going on in that thread at the time of the monitor enter, shining further light on what was going on. Events that you may not even know about.

If using JFR streaming for profiling, you would either spend a lot of effort naively sending constant information over and over again in an inefficient way (say, serializing all of it to JSON), or spend a lot of effort reproducing the JFR format (introducing your own constant pools etc).

For example, the RecordedEvent class contains a method to get the RecordedStackTrace, which in turn holds a List of RecordedFrames. For each event you would walk through, the in-memory object model would be created.

You can externalize some of that cost, i.e. how the process you are monitoring is affected, by using another process to read the data as described above. That would, for example, lessen the allocation pressure in the process you’re monitoring. That is great, for example if you have a very latency-sensitive process. That said, you have now created another Java process and put the costs over there (including the CPU overhead of dealing with the memory pressure as well as the memory overhead of running another JVM), typically on the same host. If you can afford to dedicate the memory and pin the event stream reader process to its own processor (CPU affinity), this can be a good solution though. Note that the same can be done for normal flight recordings, i.e. you can stream the recorded data directly from the file repository from a separate process.
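
As a sketch of that out-of-process variant, the reader process can point an EventStream at the file repository of the JVM being monitored. The repository path below is made up – in practice you would use whatever was configured on the monitored process (e.g. via -XX:FlightRecorderOptions=repository=<dir>):

import java.nio.file.Path;
import jdk.jfr.consumer.EventStream;

// Hypothetical repository directory of the monitored JVM.
Path repository = Path.of("/var/jfr/monitored-app-repo");
try (EventStream es = EventStream.openRepository(repository)) {
  es.onEvent("jdk.JavaMonitorEnter", System.out::println);
  es.onEvent("jdk.GarbageCollection", System.out::println);
  es.start(); // this reader process now pays the processing cost, not the monitored one
}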

Benchmarks of Using JFR Streaming Wrongly

For laughs and giggles, here are some benchmarks using just standard JFR to get all the data in the profiling template, compared to getting the equivalent information and serializing it to a JSON-like format using JFR streaming. In other words, abusing JFR streaming as a streaming replacement for getting the full JFR dataset. This is of course not what you should be using JFR streaming for, but it exemplifies how wrong you can land if you use the technology in a way it was never intended to be used. We’ll look at the latency of HTTP requests, the CPU time spent and the allocation pressure. We’ll also look at the size of the payload of information extracted. The benchmark is admittedly being a bit extra mean as well, to explore edge cases.

Note that this is a simplified example – we’re not even bothering with extracting the full stacktrace information and re-encoding it for streaming, whilst the JFR recordings in comparison already contain the full stack traces for all events efficiently encoded in constant pools. We could trade (even more) CPU for trying to bring the streamed data back to a JFR-style format with constant pools again before storing/sending it. For serialization we’re simply doing toString() on the event objects, which gives us the events in a JSON-like format with only the top five frames of the stack trace. Including the full stack traces would add quite a bit more overhead to the streaming example.

The benchmark is available in this GitHub repository. It is based on the standard PetClinic application with some modifications to make it relevant to measure overhead in general.

Note: We are using an early access version of OpenJDK 15 in the benchmarks, since we discovered a bug whilst building this benchmark. See JDK-8245120.

The first chart shows the impact on http request latencies:

Note: Y-axis is in log scale to magnify the difference.

The next chart shows the CPU consumption. We measure it in ticks from /proc/<pid>/stat at the end of the benchmark. This way we have a single number that is easy to compare with other runs. It represents the CPU consumed when threads were scheduled on the CPU.

 

The following chart is the total amount of heap allocated during the benchmark. Numbers were extracted from GC logs.

The final chart represents the size of extracted information, as described above:

Note that the JFR file contains the full stacktraces, and that the jfr-streaming one is limited to the top 5 frames.

So what is the conclusion of these benchmarks? Not much, except for: “don’t use technology for things it was never intended for”. 😉

Summary

  • JFR streaming is a great new way to expose JFR data for monitoring purposes.
  • JFR streaming is available from JDK 14 and above.
  • JFR streaming is complementary to the already existing JFR APIs, which remain the go-to way to use JFR for low-overhead detailed information (i.e. profiling / execution tracing).
  • Knowing where and how to use JFR streaming is key to avoiding sad pandas.

[1]:https://www.reddit.com/r/java/comments/e97vos/jfr_event_streaming_with_jdk_14_in_outprocess/faiapm8/

[2]:https://openjdk.java.net/jeps/349

Fantastic JVMs and Where to Find Them

Since you’re reading this blog, chances are that you’re writing software which will eventually run on a JVM. Most of you are using the Java language. Many of you are using a variety of other languages that target the JVM, such as Scala, Kotlin, Clojure, Groovy, (J)Ruby etc. Eventually you’ll need to decide on which JDK/JRE to deploy your software on in production. This is much easier said than done. There are quite a few different vendors out there, providing support and taking responsibility for the binaries they produce. They can have different support lengths for specific versions, and whereas you can sometimes find a vendor providing extended support for a version that has been officially end-of-lifed at Oracle, you may not find builds with the latest fixes in them publicly available. You’ll need to get those directly from the vendor.

After trying to figure out what’s what, I thought I’d simply write a blog post on the various JDKs available out there. This is especially important, since you might be consuming your JDK from a container provided by a third party, e.g. Docker Hub, and you may not know exactly what you’re getting[1].

Release Version Chicken Race

Typically most companies will require that you keep your dependencies up-to-date. For example, if you’ve written something with a dependency on Tomcat, you are pretty likely to keep that dependency up-to-date. GitHub may even warn you if you’re running with a version that has known security implications. However, not everyone is keeping their JDKs/JVMs up-to-date. Which is funny, in a way, since everything you’ll be running could be affected.

Let’s take the Oracle JDK as an example. JDK 7 was GA in July 2011. Publicly available updates and fixes ceased in April 2015. Oracle’s Premier Support ended in July 2019, and even the Extended Support ends 2022.

Let’s say you’re running on JDK 7. If you got your JDKs from Oracle, without a support contract chances are that the latest version of JDK 7 you got was built in 2015. You are now five (5!) years behind on critical security patches.

In other words, if you’re still running your software on JDK 7, you may want to at least begin upgrading to 8. JDK 7 is dying and support is being dropped left and right. If you aren’t buying support from someone providing you with (security) patches, you might want to accelerate the effort. Also, this particular upgrade (7->8) should be relatively painless – in most cases it will be a drop-in replacement. Now, if you’re not running a JDK 7 with the latest patches (sanity check – was the JDK at least built this year?), you may not only be missing out on bug fixes, but you may also be missing out on security patches[2]!

The same arguments could be made for JDK 8 as well, on a slightly pushed out time-line. The good news is that there are still public (and free) updates coming from the OpenJDK 8 maintenance project. That said, there are plenty of advantages for upgrading to JDK 11+, better performance being one of them.

Now, when the new, faster, release schedule was announced, Oracle announced that every 3 years, there would be an LTS (Long Term Support) version of Java. The releases in between the LTS releases would only be supported until the next release came out. Most vendors have adopted the same support scheme, which means that, at the time of writing, you should not be running ANYTHING on JDK 9, 10, 12 or 13 (unless you’re using Azul distributions, see [3]). They are not supported. Running them will only mean that you are lacking bug and security fixes. To take a somewhat arbitrary example – if you stopped upgrading JDK 8 after 8u74, you are literally lacking thousands of fixes.

At the time of writing this blog, the new CPU (Critical Patch Update) releases have just been published, and these are the releases you should be running in July 2020 (sooner rather than later):

  • JDK 8u262
  • JDK 11.0.8
  • JDK 13.0.4 [3]
  • JDK 14.0.2

If you’re running anything else in production, without a support contract, it could be argued you’re not doing things quite right.

What’s what?

OpenJDK, being open sourced, has builds provided by plenty of vendors. Here is a non-exhaustive list of some vendors shipping supported versions of OpenJDK (in alphabetical order, distribution(s) in parenthesis):

  • AdoptOpenJDK
  • Amazon (Corretto)
  • Azul (Zulu, Zing)
  • Bell-Soft (Liberica)
  • Oracle (Oracle OpenJDK, Oracle JDK)
  • Red Hat (builds of OpenJDK)

These providers usually ship distributions with pretty much the same bits from the OpenJDK repository, sometimes differing in which features are enabled, for example a GC (Shenandoah / Red Hat), or by adding proprietary features like a new compiler (Falcon / Azul (Zing)). Some vendors have a free distribution (e.g. Oracle OpenJDK, Azul Zulu) and one that requires a commercial license (Oracle JDK, Azul Zing). Which vendor and distribution you should select depends on your demands – e.g. which vendor can provide reliable support to you (Oracle is one of the biggest contributors to OpenJDK), or which one provides the feature you need at a price point you can afford (e.g. JDK Flight Recorder on JDK 8 without the need for a commercial license, or support for a specific GC or compiler).

There are also upstream builds, not supported by anyone, built on Red Hat infrastructure and hosted by AdoptOpenJDK. For example, if you get a JDK 8 from Docker Hub (openjdk/jdk8u252, openjdk/jdk8), that is what you would get.

Where to get JFR – Public Service Announcement

As you probably know, JDK Flight Recorder, a technology close to my heart, has been backported to JDK 8. Since we’re talking about where to get your JVMs and versions, I thought I’d include a small table for which provider will be including JFR in what version of their JDK 8 builds.

Vendor               | First JDK 8 Version with JFR | Release Date | Docker Image
Azul (Zulu)          | u212* (u262+ recommended)    | 2019-04-16   | azul/zulu-openjdk/8
AdoptOpenJDK         | u262                         | 2020-07-16   | adoptopenjdk/8u262
Red Hat              | u262                         | 2020-07-15   | In Fedora and RHEL
Amazon (Corretto)    | u262                         | 2020-07-14   | amazoncorretto:8u262
Bell-Soft (Liberica) | u262 (separate binary)       | 2020-07-14   | N/A
Upstream builds [4]  | u272                         | 2020-10-20   | openjdk/jdk8u272, openjdk/jdk8

Summary

  • Use the latest update of an LTS version that is still supported, or the latest feature release
  • Use a supported build in production (even if you haven’t bought support)

Thanks to Mario Torre, JP Bempel and Gil Tene for feedback!

[1]: Mystery meat OpenJDK builds strike again: https://mail.openjdk.java.net/pipermail/jdk8u-dev/2019-May/009330.html

[2]: To check the vulnerabilities you may be exposed to, see e.g. https://www.cvedetails.com/version-list/93/19116/1/Oracle-JDK.html?sha=b856721542b66953c859bd95be067255dd4c6098&order=1&trc=188

[3]: Upstream JDK 13u is being supported, and Azul has announced 13 to be “Medium Term“ supported – you can keep getting updates for JDK 13 for Azul distributions.

[4]: These are built by Red Hat and hosted by AdoptOpenJDK, and are different from Red Hat’s and AdoptOpenJDK’s supported builds.

The “Best of the JDK” Tournament

Over the last few weeks, there has been a knock-out tournament raging on Twitter, where various Java technologies have battled out which JDK technology is the best. It’s all part of the activities taking place around the celebration of Java turning 25 years. And boy, have those years been interesting.

Like many languages in use today, Java started out with a simple interpreter. That is, by the way, how Java got a reputation for being slow. Today, Java peak performance can surpass that of statically compiled languages, owing to optimizations only possible when runtime information is available. But I digress.

As many of you know, I started out co-founding a company named Appeal – the company that created the JRockit JVM. We did quite a few cool things during that time; some of them relevant to the knock-out competition. We built the world’s first JVM management console, mostly since the application to become a Java licensee (so that JRockit could become a Sun certified JVM) required us to state a value-add. Our original application stated “better performance”, and was summarily turned down. 😉 With the work on the management console we eventually consolidated an API to monitor and manage the JVM – JMAPI (the JRockit Management API), which later inspired – and was superseded by – JSR-174 (java.lang.management)[1].

We also built a tool we called JRA (JRockit Runtime Analyzer). It really started out as a tool for finding out how the JVM was performing at customer installations – we needed information to better understand how to improve the JVM for real world usage. Customers, quite understandably, refused to let us borrow their applications to run them in our labs. To make it easy for them to understand and verify the data they were sharing, it was all emitted as text (XML). It didn’t take long for customers to see us use the tool and the (accidental) value it brought for optimizing their applications – was the tool perhaps for sale? As a startup, we of course said yes, and made it into a product. When we later introduced the JRockit DetGC (deterministic GC), there was a need to be able to prove that the GC was keeping the latency contract, and show where in the customer code any thread halts were caused (e.g. due to bad synchronization). So the JRockit Runtime Analyzer was extended with LAT (the Latency Analysis Tool), which now introduced a binary artifact for the latency data for better data density and less serialization cost. In the end, JRA and LAT were unified into a single model – JFR (JRockit Flight Recorder, later Java Flight Recorder, and finally re-dubbed JDK Flight Recorder when it was open sourced). We also created an impossibly cool on-line memory analysis tool (which was sadly never ported to hotspot), together with a slew of other little tools and utilities.

The good old JMC memleak tool

Some of these tools converged into Java Mission Control, which became the hub for the cool tools we were developing.

JMC Logo

I was happily surprised to see JDK Mission Control included in the “Best of the JDK” feature face off. I was doing little dad-dances (to the embarrassment of my kids) in total astonishment when JDK Mission Control went up against the runtime and language features and ultimately won the whole thing.

Competition Results

Tech Poetry Throw-Down

One of the best parts of this whole competition was when Erik Costlow wrote some poetry in support of JDK Mission Control. This sparked an epic tech-poetry throw-down with little poems in favour of various Java technologies.

Here are a few of my favourite entries for JMC & JFR (in no particular order):

Of JDK Mission Control

whose benefits I will extol:

It watches performance

while still in conformance

So therefore it should win this poll.

  – @costlow

(The one which started it all)

2 am in the morning, my mobile chimed,

The war room conf call had to be primed.

JVM’s are down, the helpdesk said,

Touch troubleshooting road ahead.

CPU? GC? Bad Code?, the questions abound,

The root cause was far from being found.

Tumultuous voiced from Dev to Ops, each one declaring the were clean

No path to the solution was to be seen.

With a prayer, I fired up the Java Flight Recorder,

Hoping this would restore some war room order.

Lo! And behold, the histogram revealed

‘Twas a code deadlock, the system could yet be healed!

Helpful NullPointer messages, I hear you say,

Who will alert you whilst you are away?

  – @perfclarity

To see or not to see (perf data)

That is the question (mission control answers).

Whether ‘tis nobler in the code

To suffer the zings and harrows of outrageous finger pointing

Or to stream events and by analyzing, end it

  – @costlow

I have never

had to deal

with NullPointer

Exceptions

and which

many people want

to have

better messages

Forgive me

but my vote goes to JMC

it is so sweet

and so cold

  – @stuartmarks

To think that I could ever see

A tool so lovely: JMC

A tool that streams events all day

Yet still performs without delay.

  – @costlow

If you need to control a mission

OpenJDK had an omission

And then JMC

Was suddenly free

Without even rights of rescission

  – @stuartmarks

So much value inside JMC

Yet usage was low, tis it wasn’t free

But low and behold

Oracle open sourced it in whole

And now productivity is as easy can be

  – @Sharat_Chander

As I stream through the events of my workload perf pain

I take a look post 8 life and realize this tool should reign

‘cause that’s just perfect for a coder like me

You know we love fancy things like JDK MC

Been spendin’ most our lives livin’ in a coder’s paradise

  – @costlow

Here are a few of my favourites for the other technologies:

Null pointer exception

Is a old familiar friend

And she wants to be

more helpful again

With deep information

I can only begin to extol

Love for NPE

For she should win

this Java poll

  – @manicode

There was a NullPointerException

Whose message needs amplification

To the VM some hacks

Add the relevant facts

And no longer is it an obsession

  – @stuartmarks

As I try to decipher my NPE in grails

The Greater Sage-Grouse wanders the sage brush

The grouse and I are one

For I can’t decipher less helpful NPE’s in grails

Any more than the sage-grouse knows why it wanders the sagebrush

  – @manicode

I’m on a boat motherf$%^r take a look at me

Straight floatin’ on a boat debugging NPE

Busting five knots, wind whipping out my coat

You can’t stop me motherf$%^r cause I’m debugging on a boat

  – @manicode

The usability of NullPointerExceptions

have historically been an issue

by adding static code to dynamic exceptions

our problems we can diffuse

Let go of your stack trace debugging hate

And vote for JEP Three Fifty Eight!

  – @manicode

Many thanks to @costlow, @manicode, @stuartmarks, @perfclarity and @Sharat_Chander for all the laughs! 🙂

Thanks!

Yes, I know this is a silly little Twitter competition. But, if nothing else, this silly little competition provides an excellent opportunity for me to give some overdue thanks:

  • Plenty of thanks and love to all of the users of JMC out there, using JMC to solve tricky problems in production systems on a daily basis.
  • Many thanks to everyone who voted for JMC. I didn’t think a tool would stand a chance against language and runtime features.
  • Huge thanks to all the developers on the JDK Mission Control team, and to all the developers on the JDK serviceability team. You’re a really awesome bunch, and it’s a privilege for me to be working with you.
  • Major kudos to Oracle for open sourcing JDK Mission Control and JDK Flight Recorder.
  • Many thanks to the main sponsors of the development of JDK Mission Control:

JRockit and Duke hanging

[1]: Sadly, not all of the features in JMAPI got rolled into the standardized API. JMAPI could, for example, change the CPU affinity of the JVM process on the fly, dynamically change the heap size target, and independently (and dynamically) switch the GC to use a nursery or not as well as switch between concurrent and parallel mark and/or sweep phases. Of course differences in GC capabilities etc required the standardized API to be limited to what made sense to most runtimes. That said, I’m still kinda bummed that it became a JMX API (java.lang.management depending on the javax.management specification), instead of a pure local Java API, which could also have been exposed through JMX. See, for example, the JFR APIs, where there is a local API and also a JMX API.

Oracle Releases JDK Mission Control 7 GA

Oracle just released their GA build of JDK Mission Control 7.0.0. I, of course, had to download it to give it a spin.

Here are my main takeaways:

  1. Compared to the early access builds, it no longer comes with an embedded JDK. This is actually nice, since you can run it on whichever JDK you’d like. That said, it does require you to have a JDK already installed. Since auto-discovery of locally running JVMs will not work unless JMC is running on a JDK (it does not work on a JRE), this also makes it a little bit easier to get things wrong.

    You may want to configure the jmc.ini file to point to a JDK manually. Simply add a -vm entry just before the -vmargs, like so:

    ...
    --launcher.appendVmargs
    -vm
    C:\Java\JDKs\jdk-11.0.5\bin
    -vmargs
    -XX:+UnlockDiagnosticVMOptions
    ...
  2. Oracle has put up a properly configured update site. This means that in Oracle’s builds of Mission Control, there are additional plug-ins that can be installed by going to Help | Install New Software…
    [Image: the Install New Software dialog showing the Mission Control update site]
  3. Everything, except for the Oracle specific optional plug-ins from the update site, is released under the very permissive UPL license. The Oracle ones are under a separate group named Mission Control (Oracle) on the update site, so they are easy to spot.
  4. Working my way back from the updatesite.properties file in the application, I found an Eclipse update site available here:
    https://download.oracle.com/technology/products/missioncontrol/updatesites/openjdk/7.0.0/ide/
    (Edit: After posting this blog, I noticed that reading the release notes would have been easier. ;))

TL;DR

Oracle releases a solid first (though a bit delayed) release of JMC 7. A notable difference to Oracle’s early access builds, is that there is no longer an embedded JDK. A notable difference to other JMC releases is that there are published update sites – both for the stand alone application, and for installing it all into the Eclipse IDE.

So, in short, yay!

Fetching and Building Mission Control 8+

As described in a previous post, Mission Control is now on GitHub. Since this changes how OpenJDK Mission Control is fetched and built, this is an updated version of my old post, covering JMC from version 8 and up.

Getting Git

The first step is to get Git, the SCM used for OpenJDK Mission Control. Installing Git differs between platforms, but here is a link on how to get started:

https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

Installing the Skara Tooling (Optional)

This step is optional, but it makes things easier if you want to contribute to Mission Control:

https://hirt.se/blog/?p=1186

Cloning the Source

Once Git is installed properly, getting the source is as easy as cloning the jmc repo. First change into the directory where you want to check out jmc. Then run:

git clone https://github.com/openjdk/jmc.git

Getting Maven

Since you likely have some Java experience, you probably already have Maven installed on your system. If not, install it now by following the instructions here:

https://maven.apache.org/install.html

Building Mission Control

First, we need to ensure that Java 8 is on our path, since some of the build components still use JDK 8:

java -version

This will show the Java version in use. If it is not a Java 8 JDK, change your path. Once that is done, we are ready to build Mission Control. Open up two terminals. Yep, two!

In the first one, go to where your cloned JMC resides and execute the following commands (on Windows, replace the forward slashes (/) with backslashes (\)):

cd releng/third-party
mvn p2:site
mvn jetty:run

Now, leave that terminal open with Jetty running. Do not touch.

In the second terminal, go to your cloned jmc directory. First we will need to build and install the core libraries:

cd core
mvn install

Next run maven in the jmc root:

mvn clean package

JMC should now be building. The first time you build, Maven will download all of the third-party dependencies, which will take some time. Subsequent builds will be less painful. On my system, the first build took 6:01 min; the subsequent clean package build took 2:38.

Running Mission Control

To start your recently built Mission Control, run:

Windows

target\products\org.openjdk.jmc\win32\win32\x86_64\jmc.exe -vm %JAVA_HOME%\bin

Mac OS X

target/products/org.openjdk.jmc/macosx/cocoa/x86_64/JDK\ Mission\ Control.app/Contents/MacOS/jmc -vm $JAVA_HOME/bin

Contributing to JDK Mission Control

To contribute to JDK Mission Control, you need to have signed an Oracle Contributor Agreement. More information can be found here:

http://openjdk.java.net/contribute/

Don’t forget to join the dev list:

http://mail.openjdk.java.net/mailman/listinfo/jmc-dev

We also have a Slack (for contributors), which you can join here:

https://join.slack.com/t/jdkmissioncontrol/signup

More Info

For more information on how to run the tests, use the APIs, etc., there is a README.md file in the root of the repo. Let me know in the comments section if there is something you think I should add to this blog post and/or the README!

Using the Skara Tooling

I’m writing this for myself as much as I’m writing this to share. After only a day of using JMC with Skara, I’ve fallen in love with it. I spend less time painstakingly putting together review e-mails, copying and pasting code to comment on certain lines, cloning separate repos to do parallel work efficiently, setting up new workspaces for these repos, and so on. Props to the Skara team for saving me time by cutting out a big chunk of the stuff not related to coding and a whole lot of ceremony.

Note that the Skara tooling can be used outside the scope of OpenJDK – git sync alone is reason enough for anyone who wants to reduce ceremony to benefit from it.

So, here are a few tips on how to get started:

  1. Clone Skara:
    git clone https://github.com/openjdk/skara
  2. Build it:
    gradlew (win) or sh gradlew (mac/linux)
  3. Install it:
    git config --global include.path "%CD%/skara.gitconfig" (win) or git config --global include.path "$PWD/skara.gitconfig" (mac/linux)
  4. Set where to sync your forks from:
    git config --global sync.from upstream

For folks on Red Hat distros, steps 2 and 3 can be replaced by make install. For more information on the installation, see the Skara wiki.

Some Examples

To sync your fork with upstream and pull the changes:
git sync --pull

Note: if the sync fails with the error message “No remote provided to fetch from, please set the --from flag”, remember to set the remote for your repo, e.g.
git remote add upstream https://github.com/openjdk/jmc

To list the open PRs:
git pr list

To create a PR:
git pr create

To push your committed changes in your branch to your fork, creating the remote branch:
git publish

JMC Workflow

Below is the typical work-flow for JMC.

First ensure that you have a fork of JMC. Either fork it on github.com, or on the command line:
git fork https://github.com/openjdk/jmc jmc

You typically just create that one fork and stick with it.

  1. (Optional) Sync up your fork with upstream:
    git sync --pull
  2. Create a branch to work on, with a name you pick, typically related to the work you plan on doing:
    git checkout -b <branchname>
  3. Make your changes / fix your bug / add amazing stuff
  4. (Optional) Run jcheck locally:
    git jcheck local
  5. Push your changes to the new branch on your fork:
    git publish (which is pretty much git push --set-upstream origin <branchname>)
  6. Create the PR, either on GitHub, or from the command line:
    git pr create

Summary / TL;DR

  • I ❤️ Skara

Mission Control is Now Officially on GitHub!

Since this morning, the JDK Mission Control (JMC) project has gone full Skara! This means that the next version (JMC 8.0) will be developed over at GitHub.

To contribute to JDK Mission Control, you (or the company you work for) need to have signed an OCA, like for any other OpenJDK project. If you already have an OpenJDK username, you can associate your GitHub account with it.

Just after we open sourced JMC, I created a temporary mirror on GitHub to experiment with working with JMC at GitHub. That mirror is now closed for business. Please use the official OpenJDK one from now on:

https://github.com/openjdk/jmc

If you forked or starred the old repo, please feel free to fork and/or star the new one!

Compressing Flight Recordings

Flight recordings are nifty binary recordings of what is going on in the runtime and the application running on it. A flight recording contains a wide variety of information, such as various kinds of profiling information, thread stall information and a whole host of other things. All of it adheres to a common event model, with the ability to dynamically add new event types.
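
To give a feel for that event model, here is a minimal sketch of a custom, application-defined event using the jdk.jfr API; the event name and fields are made up for the example:

import jdk.jfr.Event;
import jdk.jfr.Label;
import jdk.jfr.Name;

public class CustomEventSketch {
    // A made-up, application-specific event type.
    @Name("com.example.OrderProcessed")
    @Label("Order Processed")
    static class OrderProcessedEvent extends Event {
        @Label("Order Id")
        long orderId;

        @Label("Amount")
        double amount;
    }

    public static void main(String[] args) {
        OrderProcessedEvent event = new OrderProcessedEvent();
        event.begin();
        // ... do the work being measured ...
        event.orderId = 4711;
        event.amount = 99.95;
        event.commit(); // recorded (if enabled), including duration and event thread
    }
}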

In the versions of JFR since JDK 9, some care was taken to reduce the memory footprint by LEB128-encoding integers, noting that many values, like constant pool indices, usually occupy relatively low numbers. The memory footprint was cut roughly in half compared to previous versions of JFR.
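
For the curious, the idea behind LEB128 (a.k.a. varint encoding) is to spend bytes in proportion to the magnitude of the value: seven payload bits per byte, with the high bit signalling that more bytes follow. Below is a rough sketch of the unsigned variant (JFR’s actual serializer may differ in the details):

import java.io.ByteArrayOutputStream;

public class Leb128Sketch {
    // Unsigned LEB128: 7 payload bits per byte, high bit set while more bytes follow.
    static byte[] encodeUnsigned(long value) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        do {
            int b = (int) (value & 0x7F);
            value >>>= 7;
            if (value != 0) {
                b |= 0x80; // more bytes to come
            }
            out.write(b);
        } while (value != 0);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        // A small constant pool index fits in a single byte...
        System.out.println(encodeUnsigned(17).length);              // 1
        // ...while a large value still fits, just using more bytes.
        System.out.println(encodeUnsigned(1_000_000_000L).length);  // 5
    }
}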

Now, sometimes you may want to compress the JFR data even further. The question then is – how much can you save if you compress the recordings further, and what algorithms would be best suited for doing the compression? What if you want the compression activity to use as little CPU as possible?

My friend and colleague at Datadog, Jaroslav Bachorik, set out to answer that question for some typical recording shapes that we see at Datadog, using a set of compression algorithms from Apache Commons Compress (bzip2, LZMA, LZ4), the built-in GZip, a dedicated LZ4 library, XZ, and Snappy.

Below is a table of his findings for “small” (~1.5 MiB) and “large” (~5 MiB) recordings from one of our services. The benchmark was run on a MacBook Pro 2019. Now, you’d have to test on your own recordings to truly know, but I suspect that these results will hold up pretty well with other kinds of loads as well.

Algorithm      Recording Size   Throughput (recordings/s)   Compression Ratio   Utility
Gzip           small            24.299                       3.28                79.647
Gzip           large            5.762                        3.54                20.436
BZip2          small            6.616                        3.51                23.198
BZip2          large            1.518                        3.84                5.826
LZ4            small            133.115                      2.40                319.394
LZ4            large            38.901                       2.57                100.009
LZ4 (Apache)   small            0.055                        2.74                0.152
LZ4 (Apache)   large            0.013                        3.00                0.039
LZMA           small            1.828                        4.31                7.882
LZMA           large            0.351                        4.37                1.533
Snappy         small            134.598                      2.27                305.494
Snappy         large            35.986                       2.49                89.692
XZ             small            1.847                        4.31                7.964
XZ             large            0.349                        4.37                1.523

Throughput is measured in recordings/s. Utility is throughput × compression ratio, and is meant to capture the combination of compression strength and performance. Note that the numbers are not normalized – only compare numbers within the same size category.
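
If the balanced, no-extra-dependencies route is good enough for you, here is a minimal sketch of gzipping a recording with the built-in java.util.zip support; the file names are placeholders of mine, and it assumes JDK 9+ for InputStream.transferTo:

import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.GZIPOutputStream;

public class CompressRecordingSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder file names – point these at an actual flight recording.
        try (InputStream in = Files.newInputStream(Paths.get("recording.jfr"));
             OutputStream out = new GZIPOutputStream(
                     Files.newOutputStream(Paths.get("recording.jfr.gz")))) {
            in.transferTo(out); // streams the recording through the gzip compressor
        }
    }
}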

Summary / TL;DR

  • The built-in GZip is doing a fairly good/balanced job of compressing flight recordings
  • You can get the best utility out of LZ4, closely followed by Snappy, but you sacrifice some compression
  • If you’re prepared to pay for it, LZMA and XZ give a good compression ratio
  • All credz to Jaroslav for his JMH-benchmark and all the data

JFR is Coming to OpenJDK 8!

I recently realized that this isn’t common knowledge, so I thought I’d take the opportunity to talk about the JDK Flight Recorder coming to OpenJDK 8! The backport is a collaboration between Red Hat, Alibaba, Azul and Datadog. These are exciting times for production-time profiling nerds like me. 🙂

The repository for the backport is available here:

http://hg.openjdk.java.net/jdk8u/jdk8u-jfr-incubator/

The proposed CSR is available here:

https://bugs.openjdk.java.net/browse/JDK-8230764

The backport is keeping the same interfaces and pretty much the same implementation as is available in OpenJDK 11, and is fully compatible. There were a few security fixes, due to there not being any module system to rely upon for isolation of the internals. Also, some events will not be available (e.g. the Module-related events), but other than that, the API and tools work exactly the same.
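
In other words, code written against the jdk.jfr API should run unchanged on the backport. As a small illustration (just the standard API, nothing backport-specific), a check like the one below should behave the same on OpenJDK 8 with JFR as on OpenJDK 11:

import jdk.jfr.FlightRecorder;

public class JfrAvailabilityCheck {
    public static void main(String[] args) {
        // Same jdk.jfr API on OpenJDK 11 and on the OpenJDK 8 backport.
        System.out.println("JFR available: " + FlightRecorder.isAvailable());
    }
}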

JDK Mission Control will, of course, be updated to work flawlessly with the OpenJDK 8 version of JFR as well. The changes will be minute and are only necessary since Mission Control has some built-in assumptions that no longer hold true.

You can already build and try out OpenJDK 8 with JFR simply by building the JDK available in the repository mentioned above. Also, Aleksey Shipilev provides binaries – see here for details.

Have fun! 🙂