Issues Known in Previous Releases of Azul Zulu Prime Builds of OpenJDK

Looking for Zing? The Azul Zing Virtual Machine is now Azul Zulu Prime Builds of OpenJDK and part of Azul Platform Prime.

This section summarizes issues known in Azul Zulu Prime Builds of OpenJDK (Azul Zulu Prime JVM).

Large Amounts of Virtual Memory Shown by top and ps for Azul Zulu Prime Java Processes

Linux tools such as top and ps show large amounts of virtual memory address space, in the range of several TBytes, for all Azul Zulu Prime Java processes. This is expected and normal behavior: the Azul Zulu Prime JVM reserves a large address space for its GC algorithm. This does not take memory resources away from other processes, because the memory itself is not used; only the large address space is reserved.
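As a generic (not Azul-specific) sketch, you can compare the virtual size against the resident size of any process with ps; substituting the PID of an Azul Zulu Prime Java process shows the multi-TByte VSZ described above next to a much smaller RSS.

```shell
# Compare virtual size (VSZ) and resident set size (RSS), both in KB.
# $$ is the current shell; replace it with the PID of a Java process
# to observe the large virtual address space reservation.
ps -o vsz=,rss= -p $$
```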

Unexpected Resident Set Size Memory Metric Shown by top and ps for Azul Zulu Prime Java Processes

For the Azul Zulu Prime JVM without the Azul Zulu Prime System Tools (ZST) component (the default mode of the Azul Zulu Prime JVM), the Linux top, ps, and other tools overestimate the memory use of Azul Zulu Prime Java processes. As a result, resident set sizes (RSS, RES) of more than 3 times the -Xmx value are displayed. The reason for this is the Azul Zulu Prime JVM’s use of virtual memory multi-maps, i.e., multiple references to the same physical memory locations. To get precise metrics for the actual memory utilization, you can run the Linux smem -P java command, which shows the memory usage in the PSS (proportional set size) column. However, you should only use this tool interactively for diagnostic purposes, not in a scheduled way on production systems, to avoid a performance impact: there is a general latency impact on Linux systems when accessing the /proc/PID/smaps metric, which is read by smem and by similar tools such as atop and process-exporter, depending on their configuration. A better alternative for an overview of the complete system memory condition is the Linux free command or /proc/meminfo, as those show correct metrics without impacting performance.
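The overcounting can be illustrated with a small awk sketch over a hypothetical /proc/PID/smaps fragment in which two mappings alias the same physical pages: summing Rss counts those pages twice, while Pss apportions each page across its mappings.

```shell
# Hypothetical smaps fragment: two mappings backed by the SAME 4 MB of
# physical memory. Rss is reported per mapping; Pss divides each page
# by the number of mappings that share it.
cat > /tmp/smaps_sample <<'EOF'
Rss:                4096 kB
Pss:                2048 kB
Rss:                4096 kB
Pss:                2048 kB
EOF
rss_kb=$(awk '/^Rss:/ {s += $2} END {print s}' /tmp/smaps_sample)
pss_kb=$(awk '/^Pss:/ {s += $2} END {print s}' /tmp/smaps_sample)
# RSS double-counts the shared pages (8192 kB); PSS reflects the
# actual physical footprint (4096 kB).
echo "RSS=${rss_kb} kB  PSS=${pss_kb} kB"
```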

For the Azul Zulu Prime JVM with the installed ZST component, the resident set size memory metric (RSS, RES) does not include the memory used for the Java heap and thereby shows an unexpectedly small value of only a few hundred MBytes, even when you set multiple GBytes of -Xmx. The reason for this is the dedicated memory management by the ZST. For performance reasons, ZST handles the memory for Java heaps independently from the Linux default memory management. Only the total amount of memory used for Java heaps is visible to standard Linux tools, for example, as shown by the free command. To list the heap memory usage for Azul Zulu Prime Java processes when ZST is installed, use the zing-ps -s command. In its output, the heap memory managed by the ZST is shown in the Xmx column, the small Linux RSS in the LRSS column, and the memory used internally by the Azul Zulu Prime JVM and managed by the ZST in the ZRSS column.

Also on the Azul Zulu Prime JVM with ZST installed, a similar effect is visible for the Linux Mlocked metric in /proc/meminfo. While ZST-managed memory is protected from being swapped out, similar to a Linux mlock, it is not reported as Mlocked in /proc/meminfo.

Slow Process Restart for Large Heap Sizes on Small Linux Pages

In non-ZST mode on systems with the Linux standard configuration, i.e., with shared memory transparent huge pages disabled, restarting the Java process can take up to 10 seconds per 100 GB of heap size. Solution: Enable shared memory transparent huge pages as described in Enable Huge Pages, or install ZST. In addition, on Ubuntu, Amazon Linux, or similar systems with Linux kernel 4.19.7 and newer, upgrade to Azul Platform Prime or newer.
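As a sketch, shared memory transparent huge pages are typically controlled through the sysfs knob below on kernels that support them; treat the exact value as an assumption and follow the Enable Huge Pages instructions for the authoritative setting for your kernel and release.

```shell
# Inspect the current setting; the active value is shown in brackets.
cat /sys/kernel/mm/transparent_hugepage/shmem_enabled
# Enable THP for shared memory. "advise" is one accepted value
# (assumption: the Enable Huge Pages documentation gives the
# recommended value for your configuration). Requires root.
echo advise | sudo tee /sys/kernel/mm/transparent_hugepage/shmem_enabled
```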

Version-Specific Known Issues

The following table lists issues identified prior to the release of Azul Zulu Prime.

Issue ID Occurs With Release Description


AArch64 support is limited to Graviton 2 and Graviton 3. Graviton 1 is not yet supported.


ActiveMQ crashed with 'assert(false) failed: Should never reach here' when the async profiler was attached.


Wildfly app-server hangs when Async Java Profiler is attached.


The async profiler does not show the object type in "-e alloc" mode on Zulu Prime.


Applications that call munlockall() require -XX:-UseThreadStateNativeWrapperProtocol on the command line to avoid a crash or inconsistency in the rare situation where the application is swapped out after the munlockall() invocation.


A ZVM crash is seen when there is a mismatch between the ZST versions on the host and in containers. Workaround: Ensure that the ZST versions on the host and in containers match. Also note that since ZVM, Zing does not need the ZST component, which was required in earlier versions of Zing. Starting with ZVM, install only a ZVM package inside your container to run Zing within a container.


Heap dumps and JVMTI object tagging are not supported with UseEpsilonGC.


The combination of the KeepCodeEntrantOnAsyncExceptions command-line option with the Tick Profiler can have issues.


Test failures due to code cache exhaustion on configurations with heaps of 4 GB and below are more likely to happen on ZVM, as the default code cache size progressively shrinks for such configurations. Workaround: Increase the code cache size using the -XX:ReservedCodeCacheSize=<size> option (you can specify up to 1280 MB for <size>). Alternatively, switch off the FalconGenerateProfilerInfo flag, which is set to true by default to improve observability and to gather more accurate and complete information for debugging.
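For example, either workaround can be applied on the java command line; MyApp and the chosen size are illustrative, not values from this document.

```shell
# Option 1: enlarge the code cache (any size up to 1280 MB is accepted).
java -XX:ReservedCodeCacheSize=1024M MyApp

# Option 2: turn off profiler info generation to reduce code cache pressure,
# at the cost of less complete debugging information.
java -XX:-FalconGenerateProfilerInfo MyApp
```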


When used in -XX:-UseZST mode, applications that allocate significant amounts of large (multi-MB) objects at high rates and on a sustained basis may observe higher-than-normal time-to-safepoint behavior.


When Zing is used in the non-ZST mode, the RSS/RES values reported by Linux tools such as top or ps are known to be inaccurate, because multi-mapped regions are counted multiple times by those commands even though they point to the same physical memory. For more details, see Unexpected Resident Set Size Memory Metric Shown by top and ps above.


Zing license verification fails when /tmp is full or read-only. Workaround: Clean up the /tmp directory.


Zing 11 JFR events refer to null bootstrap class loader as the class loader for anonymous classes, which is not accurate.


The Live Objects Azul Mission Control representation does not show anything for Zing 11 JFR recordings as the JFR Leak Profiler is not yet enabled in Zing 11 JFR and the OldObjectSample event is not generated.


Zing releases from to

The -XX:+UseNUMA command-line option can cause reduced application performance when used on versions - of Zing VM.

If you have -XX:+UseNUMA active, the symptoms to look for are as follows:

* very high system CPU time, mostly spent in kernel ioctl calls
* thread allocation delays of several seconds, visible over GCLogAnalyzer in the gc.log
* high CPU load, even outside of GC cycles and without application activity

Workaround: Remove -XX:+UseNUMA from the command line or upgrade to ZVM.


In Java 11 testing, a rare ZVM 18.12 crash was observed when returning from the invocation of a MethodHandle routine. Analysis revealed the failure to be reproducible under rare circumstances in Java 8 with previous ZVM releases. Frequency of reproduction appears to be influenced by use of ReadyNow! with Java 11. It is recommended that users of ReadyNow! do not upgrade to Java 11 until this issue is resolved.


Handshake messages can be strictly ordered. Workaround: Use alpn-boot-8.1.13.v20181017.jar.


When run on ZST 5.21.x and older, ZVM cannot tick-profile native threads.


In and, the combination of ReadyNow!, compile stashing (enabled by default with ReadyNow!), and a profile which is fed from one run to another in a rolling manner can in some circumstances result in severe peak performance loss. If using ReadyNow! with a rolling profile on or, it is strongly recommended to upgrade to,, or a later release which include fixes for this issue. If a peak performance problem is observed, we recommend discarding the profile collected with or and recollecting a fresh one on an unaffected version.


When using reserve-at-launch, you may see an exception such as "MemoryUsage ERROR: initialReserved 0 size -4738568192 used -4736471040" in Zing MXBean calls that use MemoryUsage objects for System Linux Memory. As of ZVM, the probability of seeing this error has been reduced. If the error is seen, it can safely be ignored.


WebSphere fails to launch when using -XX:+UseZingMXBeans. Workaround: Set the following by using server.xml or WAS admin console (note the empty value for the system property):

genericJvmArguments="-XX:+UseZingMXBeans "


all Zing releases

Loop predicate and loop limit check code problem.

1. The Java spec says explicitly that operations on integers can overflow and no exception will be thrown. Indeed, unlike the C++ spec, which says integer overflow is undefined, Java says integer overflow is required to happen.
2. Java coding guidelines always point out that using Integer.MAX_VALUE in comparison expressions is dangerous programming, especially if it is used in a loop bound.
   a. This loop will terminate, because eventually i will be equal to Integer.MAX_VALUE: for (int i=0; i<Integer.MAX_VALUE; i+=1) { …​ }
   b. This loop will never terminate, because i will overflow: for (int i=0; i<Integer.MAX_VALUE; i+=2) { …​ }
3. Zing will sometimes terminate the loop in (2b).

Note: -XX:+DisableLoopOptimizations will often avoid the problem, but it is only recommended as a workaround on a per-method basis.



JBoss7 throws an exception on startup with -XX:+UseZingMXBeans flag. Workaround: Add the following lines to the standalone.conf file:

JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.manager=org.jboss.logmanager.LogManager"

JAVA_OPTS="$JAVA_OPTS -Xbootclasspath/p:../modules/org/jboss/logmanager/main/jboss-logmanager-1.2.0.GA.jar:../modules/org/jboss/logmanager/log4j/main/jboss-logmanager-log4j-1.0.0.GA.jar:../modules/org/apache/log4j/main/log4j-1.2.16.jar"


A ZVM running in an environment with multiple scheduling policies (RR, BATCH, and OTHER) encounters a checkpoint sync timeout in a thread running with the BATCH policy.