Reducing Deoptimizations

All Java Virtual Machines use optimistic optimizations to try to improve throughput. Deoptimization occurs when these optimistic optimizations need to be undone: the compiled code for a method, or part of a method, is thrown away, and the method is then executed either by the interpreter or by another compiled code blob produced by a less optimizing compiler. Unfortunately, deoptimization can be a problem for latency-sensitive applications.

Even during warmup, JIT-compiled code is deoptimized with some regularity. Deoptimization logs indicate that common causes are:

  • Unreached – a path is never taken during warmup
  • Unloaded/uninitialized class – a previously untaken path uses a class that has not been resolved in the calling context

Zing's ReadyNow! provides a set of flags and options to help reduce commonly occurring types of deoptimizations. These include:

  • DynamicBranchEliminationLevel
  • ImplicitNullChecks
  • UseEarlyClassLoading
  • EagerInitializeDuringEarlyClassLoading
  • UseEnhancedClassResolution

To reduce deoptimizations:

  1. Verify that deoptimizations are a problem.
  2. Apply the appropriate adjustments, either as a flag on the Java command line or as an option on the identified method.
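
The later sections give the details for each flag, but as a rough sketch of the flag form (the jar name below is a placeholder, and the -XX: prefix is assumed to follow the standard HotSpot-style flag syntax), a flag applies to the entire application from the Java command line, while a per-method option is written into a file referenced by CompileCommandFile:

# YourApp.jar is a placeholder for your application
java -XX:DynamicBranchEliminationLevel=2 -jar YourApp.jar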

Identifying Deoptimizations with PrintCompilation

Using the PrintCompilation flag helps identify if deoptimization is a problem.
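
A minimal way to produce a PrintCompilation log, assuming the standard -XX: flag syntax and a placeholder application jar name:

# capture the compilation log for later review; YourApp.jar is a placeholder
java -XX:+PrintCompilation -jar YourApp.jar > compilation.log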

To identify deoptimizations:

  1. Review the PrintCompilation log.
  2. Count the number of times each method was compiled at Tier1 and at Tier2.
  3. Ensure the count is accurate.
  • Exclude OSR compiles (lines containing %) from the count.
  • Check for overloaded methods.

In the compilation log, only the name of the method is shown, not the full signature. If the number of Tier1 compilations exceeds the number of Tier2 compilations, then the method is probably overloaded.

  4. Determine if deoptimization is occurring.

If the number of Tier2 compilations is higher than the number of Tier1 compilations, then deoptimization is probably occurring.

Identifying Deoptimizations with TraceDeoptimization

Running your Java application with the TraceDeoptimization flag provides information about the location of and reason for deoptimizations.
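
To produce such a log, add the flag to the launch command. The jar and log file names below are placeholders, and on some builds the flag may need to be unlocked as a diagnostic option with -XX:+UnlockDiagnosticVMOptions:

# YourApp.jar and deopt.log are placeholders
java -XX:+TraceDeoptimization -jar YourApp.jar > deopt.log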

Below is a sample output from a TraceDeoptimization log. Some variables and output files may contain "C1" and "C2" compiler names. They should be read as, respectively, "Tier 1" and "Tier 2/Falcon".

Figure. Top-Level Deoptimized Method Diagram 1

As shown in the output for thread[27434], each deoptimization may result in two or more lines in the console log.

The first line starts with Uncommon trap and is written when the compiled code encounters a trap. This line includes the top-level method being deoptimized and the reason for the deoptimization.

The following one or more DEOPT REPACK lines correspond to the interpreter frames created from the machine frame during deoptimization. Multiple interpreter frames are created if the deoptimization occurred inside an inlined method.

To identify deoptimization types:

  1. Review the TraceDeoptimization log.
  2. Search for Uncommon trap entries.
  3. Count the number of times each method was deoptimized. A command-line sketch for tallying these counts appears at the end of this section.

In the sample output, the Top-Level Deopted Method is java.lang.String::equalsIgnoreCase.

Figure. Top-Level Deoptimized Method Diagram 2

  4. Count the number of times each deoptimization reason occurs.

In the sample output, the Deopt Reason is unreached.

  5. Search for occurrences where the inlining assumption is violated due to class loading. For example:

Figure. Top-Level Deoptimized Method Sample 1

  6. Identify and apply a deoptimization remedy corresponding to the deoptimization type. See the following sections.

In Zing 5.10, the deoptimization log layout changed: each line has a timestamp at the beginning and a compile_id at the end, as shown in the example below. The compile_id references the ID from the PrintCompilation log.

Figure. Top-Level Deoptimized Method Sample 2
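
As a rough aid for the counting steps above, the Uncommon trap lines can be tallied with standard text tools. The log file name deopt.log is a placeholder, and the exact line layout and method-name format vary between Zing versions, so adjust the patterns to match your log:

# count all recorded uncommon traps
grep -c "Uncommon trap" deopt.log
# count deoptimizations of the equalsIgnoreCase example above
grep "Uncommon trap" deopt.log | grep -c "java.lang.String::equalsIgnoreCase"
# count deoptimizations with the unreached reason
grep "Uncommon trap" deopt.log | grep -c "unreached"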

Reducing Unreached Deoptimizations

The most common type of deoptimization for most applications is unreached. This deoptimization type usually accounts for more than 80% of all deoptimizations.

This deoptimization occurs when Tier2’s dynamic branch elimination decides not to compile a portion of Java code that is used later. In most applications, the majority of the unreached deoptimizations occur at start-up and are benign.

However, some applications have rarely taken but time-critical business logic. With these types of applications, dynamic branch elimination can result in ill-timed deoptimizations. To combat this problem, Zing provides the DynamicBranchEliminationLevel flag and option. The supported DynamicBranchEliminationLevel levels are:

  • 2, conservative – Untaken branches at pointer comparison sites are eliminated. This includes unvisited sites.
  • 3, heuristic – Not implemented.
  • 4, aggressive – Untaken branches at all comparison sites are eliminated. This includes unvisited sites.

Using DynamicBranchEliminationLevel as a Flag

The default for Zing's DynamicBranchEliminationLevel flag is level 4, aggressive, because aggressive branch elimination yields a significant increase in throughput.

Note:

Use caution if you reduce the elimination level to 2, conservative, across the entire application, because it might decrease throughput by anywhere from a small (1-3%) to a very large (20%) amount.

Using DynamicBranchEliminationLevel as an Option

If your application uses rarely executed but time-critical business logic, reducing the DynamicBranchEliminationLevel on a per-method basis to control compiler choices is recommended.

To help avoid unreached deoptimizations in business logic:

  • Use the DynamicBranchEliminationLevel option at a per method level in the CompileCommandFile. For example:

option package/SomeClass::topLevelMethod DynamicBranchEliminationLevel=2

  • When the DynamicBranchEliminationLevel option is used, you must also specify the top-level method being compiled in the file. Do this even if an inlined method triggered the deoptimization.
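
For per-method options such as this to take effect, the CompileCommandFile itself must be supplied on the Java command line. A sketch, where the file name compile_options.txt and the jar name are placeholders:

# compile_options.txt contains the option lines shown in this document
java -XX:CompileCommandFile=compile_options.txt -jar YourApp.jar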

Using UseOldBranchProfileAdjustmentAtDeopt as a Flag

If an application has repeated “unreached” deoptimizations at the same BCI, setting the UseOldBranchProfileAdjustmentAtDeopt flag, introduced in Zing 5.10, to false may allow the application to stabilize sooner.
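
Setting the flag to false on the command line might look like the following, assuming the standard -XX: boolean flag syntax and a placeholder jar name:

# disable the old branch profile adjustment at deopt
java -XX:-UseOldBranchProfileAdjustmentAtDeopt -jar YourApp.jar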

Reducing Null_Check Deoptimizations

A common type of deoptimization for many applications is a null_check deoptimization. This deoptimization usually occurs when an implicit null check fails and the null pointer is handled by the SEGV handler.

Using ImplicitNullChecks as a Flag

Zing provides an ImplicitNullChecks flag that turns null_check deoptimizations on and off.

Note:

Use caution when turning this flag off for the entire application, because it might decrease throughput considerably.

Fortunately in most applications, the majority of null_check deoptimizations occur during warm-up and are benign.
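
If, despite the caution above, implicit null checks need to be disabled for the entire application, the flag form would look roughly like this, again with a placeholder jar name and the standard -XX: boolean flag syntax assumed:

# disable implicit null checks globally (use with caution)
java -XX:-ImplicitNullChecks -jar YourApp.jar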

Using ImplicitNullCheck as an Option

If a small number of methods are experiencing post warm-up deoptimizations caused by null_check, then it is useful to disable ImplicitNullChecks for only those specific methods. This is done in the CompileCommandFile.

To disable ImplicitNullChecks for null_check deoptimizations:

  • Set the ImplicitNullChecks option at a per method level in the CompileCommandFile. For example:

option package/SomeClass::topLevelMethod -ImplicitNullChecks

  • When the ImplicitNullChecks option is used, you must also specify the top-level method being compiled in the file. Do this even if an inlined method triggered the deoptimization.

Reducing Unloaded Deoptimizations

In some applications, unloaded deoptimizations are common. This problem typically occurs in applications with a warm-up phase that does not compile all the necessary methods.

To reduce unloaded deoptimizations, Zing provides the UseEarlyClassLoading flag. This flag loads required classes before performing a Tier2 compilation.

Combining UseEarlyClassLoading and EagerInitializeDuringEarlyClassLoading can eliminate the need to run placeholder activity through the application during the warmup period to ensure that common paths are compiled just in time. See also Reducing Uninitialized Deoptimizations.

Using UseEarlyClassLoading as a Flag

In benchmarks, using UseEarlyClassLoading had no impact on throughput. However, UseEarlyClassLoading can simultaneously load a large number of classes.

The additional class loading can simultaneously invalidate a large number of class hierarchy analysis (CHA) inlining decisions, which, in turn, can result in a deoptimization storm. If the deoptimized methods are still hot, the deoptimization storm is followed by compilation and does impact throughput.
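
Enabling the flag on the command line might look like the following, assuming the standard -XX: boolean flag syntax and a placeholder jar name:

# load required classes before Tier2 compilation
java -XX:+UseEarlyClassLoading -jar YourApp.jar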

Using UseEarlyClassLoading as an Option

If a deoptimization storm is occurring, look for Safepoint for Deopt patching lines in the TraceDeoptimization log that contain a long method list. If compiling a specific method is causing the problem, a CompileCommandFile option can be applied to that method to disable the feature.

To disable UseEarlyClassLoading for deoptimization storms:

  • Set the UseEarlyClassLoading option at a per method level in the CompileCommandFile. For example:

option package/SomeClass::topLevelMethod -UseEarlyClassLoading

  • When the UseEarlyClassLoading option is used, you must also specify the top-level method being compiled in the file. Do this even if an inlined method triggered the deoptimization.

Reducing Uninitialized Deoptimizations

In some applications, uninitialized deoptimizations are common. This problem typically occurs in applications with a warm-up phase that does not compile all the necessary methods. Resolving dependencies in the Java thread as compilation is being scheduled is key: it removes many trapping situations, and by combining eager initialization with eager resolution, traps can often be eliminated completely, even for new objects and for static field and method access.

Using UseEnhancedClassResolution as a Flag

To reduce uninitialized deoptimizations, Zing provides the UseEnhancedClassResolution flag. This flag reduces uninitialized deoptimizations when the class is already loaded and initialized, but has not yet been resolved within a particular class.

This optimization only applies to allocation sites and might not help applications that are deliberately designed to avoid garbage collection.
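
A sketch of the flag form, again assuming the standard -XX: boolean flag syntax and a placeholder jar name:

# reduce uninitialized deoptimizations at allocation sites
java -XX:+UseEnhancedClassResolution -jar YourApp.jar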

Using UseEnhancedClassResolution as an Option

The UseEnhancedClassResolution option can be used on a per-method basis in the CompileCommandFile, but given the low impact on throughput, this is usually unnecessary.

Using EagerInitializeDuringEarlyClassLoading as a Flag

If UseEarlyClassLoading is already being used and UseEnhancedClassResolution has failed to reduce uninitialized deoptimizations, using the EagerInitializeDuringEarlyClassLoading flag may help.
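
Because this flag builds on UseEarlyClassLoading, the two are enabled together. A sketch, assuming the standard -XX: boolean flag syntax and a placeholder jar name:

# eagerly initialize classes loaded by early class loading
java -XX:+UseEarlyClassLoading -XX:+EagerInitializeDuringEarlyClassLoading -jar YourApp.jar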

 

 
