
Azul Platform Prime Errors


This section lists common errors and possible solutions when using Azul Platform Prime to run your Java applications:

Zing Error: Running without ZST requires memfd_create(2), which is not supported by this version of the operating system. Zing JVM Error: See documentation for more details.

The error message is displayed when running Azul Zing Builds of OpenJDK (Zing) on your machine with an unsupported kernel or operating system.


Install the ZST component. See Installing the Azul Zing System Tools for related installation details.

Zing Error: cannot open shared object file: No such file or directory

When running zing-licensed --configure on Ubuntu 16.04, the following error message displays:

zing-licensed --configure

Zing Error: cannot open shared object file: No such file or directory

ERROR: Please install libcurl package with '' or ''


  1. Install libcurl3 package.

  2. cd /usr/lib/zing

  3. ln -s /usr/lib/x86_64-linux-gnu/

  4. ldconfig

  5. zing-licensed --c

Cannot overwrite /proc/sys/kernel/core_pattern

When abrtd is running, zing-core-pattern cannot overwrite /proc/sys/kernel/core_pattern.


Scan the core_pattern file for any piped (|) core pattern reference. For example:

$ more /proc/sys/kernel/core_pattern
|/usr/libexec/abrt-hook-ccpp /var/spool/abrt %s %c %p %u %g %t %h %e 636f726500

If a piped (|) core_pattern is found:

  1. Stop the process included in core_pattern. For example:

    $ service abrtd stop

    For RHEL 6.3:

    # service abrt-ccpp stop

  2. Clear or change core_pattern so that it does not include any piped (|) core pattern.

    $ echo core > /proc/sys/kernel/core_pattern

  3. Restart zing-core-pattern.

    $ /etc/init.d/zing-core-pattern start

  4. Ensure the piped (|) core_pattern process is not restarted on boot.

    $ chkconfig abrtd off

    For RHEL 6.3:

    $ chkconfig abrt-ccpp off
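The check in the steps above can be sketched as a small shell helper. This is a minimal sketch, not part of the Zing tooling; the function name is hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: report whether a core_pattern string is piped
# (i.e., begins with '|', meaning cores are fed to a handler process).
is_piped_core_pattern() {
  case "$1" in
    \|*) return 0 ;;   # piped: a handler such as abrt-hook-ccpp owns cores
    *)   return 1 ;;   # plain pattern such as 'core'
  esac
}

# Example: the abrt pattern above is piped; the reset value 'core' is not.
is_piped_core_pattern '|/usr/libexec/abrt-hook-ccpp /var/spool/abrt %s' && echo "piped"
is_piped_core_pattern 'core' || echo "not piped"
```

On a live system you would feed it the contents of /proc/sys/kernel/core_pattern and, if it reports "piped", follow steps 1 through 4 above.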

The Event Tick Buffer Profiling system status is: "Shutdown."

The tick profiler shuts down automatically when an attempt is made to allocate more than the configured maximum number of tick profiling buffers. The reason is printed: Max Buffers Exceeded.


Increase the maximum number of permissible tick profiling buffers by increasing the value of EventTickBuffersMaxAllocatedBeforeShutoff in the command line arguments. To avoid data structure resizes, use EventTickBuffersAllocated to simultaneously increase the chunk size used at initialization and for expansion. For example:

java -XX:ARTAPort=9990 -XX:+UseTickProfiling -XX:EventTickBuffersAllocated=4096 -XX:EventTickBuffersMaxAllocatedBeforeShutoff=12288 <java-class>

For best performance, keep the number of threads in the application to a reasonable number based on the CPU resources available.

Crash When Using Tight Polling Loops

Application hangs, and eventually crashes with a checkpoint timeout, when real-time threads run tight loops.


In these conditions, lower priority threads are not getting sufficient CPU time to reach checkpoint.


When the parameter CheckpointBoostPriority is configured, Zing temporarily boosts the priority of real-time threads to the configured value.

For example, one way to use this safely includes:

  1. Configure -XX:CheckpointBoostPriority=<Y>

  2. Start Zing on node 0 with SCHED_RR and a priority, <X>, ensuring that the value for <X> is greater than <Y>.

  3. Using JNI, move the threads with relative real time priority requirements to node 1.

  4. Using JNI, assign relative priorities to the threads you moved to node 1, so that the highest among them is <Y>.
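Steps 1 and 2 above can be sketched as follows, assuming the standard numactl and chrt utilities and placeholder priorities (X=40, Y=30); the guard mirrors the requirement that <X> be greater than <Y>:

```shell
#!/bin/sh
# Sketch with assumed values: X is the SCHED_RR priority for the JVM process,
# Y is the CheckpointBoostPriority value; the steps above require X > Y.
X=40
Y=30

if [ "$X" -le "$Y" ]; then
  echo "error: SCHED_RR priority X must be greater than CheckpointBoostPriority Y" >&2
  exit 1
fi

# Launch command (printed here rather than executed): bind to node 0,
# run under SCHED_RR at priority X, and configure the boost ceiling Y.
echo "numactl --cpunodebind=0 chrt --rr $X java -XX:CheckpointBoostPriority=$Y <java-class>"
```

Steps 3 and 4 (moving the real-time threads to node 1 and assigning their relative priorities) are done from within the application via JNI, as described above.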

Out of Space KlassTable

The OOP (Ordinary Object Pointer) Table - KlassTable contains a unique ID for each loaded class. Each entry is 8 bytes long. The entries will be freed when the corresponding classes are unloaded.


The KlassTableSize option needs to be set appropriately for the number of live classes in the application, which can be monitored using PrintGCDetails.

The default value of KlassTableSize is based on the Java heap size. For example:

  • Java heaps less than 2 GB - default 2 MB

  • Java heaps 2 GB or greater - default 8 MB
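The default rule above can be expressed as a tiny helper (a hypothetical function for illustration; sizes in MB):

```shell
#!/bin/sh
# Hypothetical helper encoding the defaults above:
# Java heaps smaller than 2 GB get a 2 MB KlassTable; larger heaps get 8 MB.
default_klass_table_mb() {
  heap_gb=$1
  if [ "$heap_gb" -lt 2 ]; then
    echo 2
  else
    echo 8
  fi
}

default_klass_table_mb 1   # prints 2
default_klass_table_mb 16  # prints 8
```

If PrintGCDetails shows more live classes than the default table can hold, set KlassTableSize explicitly rather than relying on these defaults.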

Dependency on the JNIDetachThreadsIfNotDetached flag

The flag -XX:+JNIDetachThreadsIfNotDetached is needed by any application that attaches a JNI thread to the VM but does not detach the thread before calling pthread_exit, for example, IBM WebSphere MQ Low Latency Messaging (LLM) applications.

If this error occurs, an error report file with more information is saved in, for example, /home/<user>/tmp5/hs_err_pid16620.log.


If you receive this error, include the -XX:+JNIDetachThreadsIfNotDetached flag in your Zing commands.


Problems when uninstalling or upgrading Zing

You must stop all Zing Java processes before uninstalling or upgrading. Otherwise, you might see the following error message:

%preun(zing-zst-<version>) scriptlet failed, exit status 1


Run the zing-ps tool to list any running Zing processes. Stop the processes. Re-run zing-ps to verify all Zing processes are stopped. Reissue the uninstallation or upgrade commands.

Could not create the Java virtual machine

ZST is either not loaded properly or has not been configured yet.


Please confirm the zing-zst package is properly installed and configured.

Please run system-config-zing-memory as root. See man zing-installation for more information.

Fatal error: Not enough free memory

Running Linux processes are taking up a large amount of memory while you are attempting to configure Azul Platform Prime memory, or you have selected values that are too aggressive using the wizard or directly in the pmem.conf configuration file.

Solution 1:

Stop any processes that use significant resources while configuring Azul Platform Prime. You need to reevaluate your resource budget if you plan to run these concurrently with Azul Platform Prime.

Solution 2:

Try again with less aggressive memory settings.

ZST Fails to Allocate Requested Memory

Long after system startup (and after much memory has been consumed and released), ZST might fail to allocate the requested amount of memory, because the memory might (a) already be occupied or (b) be fragmented in a way that cannot be defragmented into 2 MB pages. Transparent huge pages on RHEL 6.x can help recover 2 MB pages, but are not absolutely reliable; on other systems (RHEL 5.x, SUSE 11, and other kernels prior to 2.6.38) it is unlikely that you can find 75% of system memory in 2 MB page form after the system has run loads for a while and cached files.


See Enhanced Linux Memory Defragmentation.

Cannot find kernel config/boot/config-<kernel version>

The kernel you are running has no associated configuration file in the system /boot directory. A normal kernel installation and build will always include such a file. This is a serious error indicating that you are running a kernel in a non-standard way.


You are likely using a non-supported system configuration. The solution is to use a standard configuration.

Running Azul Platform Prime on Paravirtualized Kernels

Kernel panic occurs when you are starting Azul Platform Prime on a paravirtualized instance.


In Xen virtualization and Amazon EC2 environments, paravirtualization is not supported by Azul Platform Prime. Only HVM (Hardware-assisted Virtual Machine) instances are supported. See the Azul Platform Prime System Requirements document for the complete list of supported operating systems and kernels.

UseLargePages Not Supported with ZST

Azul Platform Prime does not support the -XX:+UseLargePages option. Azul Platform Prime pages are always large pages and they come from the ZST.

ZST and HugePages conflict, as both reserve memory for themselves. The ZST memory service reserves and manages its own memory space, so Large Memory Pages configured by the operating system can make it difficult or impossible for the ZST memory service to reserve enough pages for Azul Platform Prime. Therefore, set the three HugePages variables to zero.


To set the huge variables to zero:

  1. Check if your system can support large page memory:

    # cat /proc/meminfo | grep Huge

    • If Large Pages are available but not configured or reserved, the response is similar to:

      # cat /proc/meminfo | grep Huge
      HugePages_Total: 0
      HugePages_Free: 0
      Hugepagesize: 2048 kB
    • If Large Pages are available, configured, and reserved, the response is similar to the following.

      # cat /proc/meminfo | grep Huge
      HugePages_Total: 30000
      HugePages_Free: 264
      HugePages_Rsvd: 88
      Hugepagesize: 2048 kB
  2. Set the operating system hugepages memory settings to zero.

    1. Log in as root.

    2. Reset the hugepages value:

      # echo 0 > /proc/sys/vm/nr_hugepages

  3. Optionally, because the /proc value resets after reboot, set the value in an init script, such as rc.local, or in sysctl.conf.
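For example, to make the setting from step 2 persist across reboots, an entry such as the following can be added to /etc/sysctl.conf:

```
vm.nr_hugepages = 0
```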

JVM Process is Killed by the Linux Out Of Memory Process Killer

The reason for the killed process is the memory pressure the VM Balloon Driver creates by over-expanding its virtual balloon and consuming the memory used by Azul Platform Prime.

Sample error message:

java invoked oom-killer
Out of memory: Kill process 6465 (java) score 38 or sacrifice child
Killed process 6465 (java) total-vm: ...

Solution: Because the VM Balloon Driver is running on most standard Linux systems by default, it is recommended to disable the Balloon Driver by either of the following methods:

  1. For VMware, add the following line to the VMX file and restart the VM:

    sched.mem.maxmemctl = "0"

  2. Disable the balloon driver within the VM:

    • Find the module name:

      /sbin/lsmod | grep balloon

    • Remove the module:

      sudo modprobe -r vmw_balloon

Problem running Azul Platform Prime on an unsupported kernel using reserve-at-launch

You might encounter the following error when running ZST on an unsupported kernel using reserve-at-launch:

Enter reserve-at-(c)onfig or reserve-at-(l)aunch [default 'c']: l
Zing supports only the 'reserve-at-config' policy with the OS/kernel version on this system. To override, please refer to the Zing documentation.

This error might occur because the system-config-zing-memory script needs to determine whether the kernel supports node compaction.


To bypass the corresponding check:

  1. Log in to your host system as root or use sudo.

  2. Create an empty file named force_reserve_at_launch in the /etc/zing directory.

Problem starting Azul Platform Prime memory (Missing packages)

You might encounter the following error when trying to start Azul Platform Prime memory on Azul Linux:

zing-memory: INFO: Starting...
Can't locate Sys/


Ensure the following packages are installed:

yum localinstall --nogpgcheck openssl binutils curl
yum install perl-Sys-Syslog.x86_64
Problem starting Azul Platform Prime memory (Kernel update)

You might encounter the following error when trying to start Azul Platform Prime memory:

zing-memory: INFO: Starting...
/usr/lib/zing/bin/tar: zm.o_<kernel>: Not found in archive
/usr/lib/zing/bin/tar: zm_linux.o_<kernel>: Not found in archive
/usr/lib/zing/bin/tar: Exiting with failure status due to previous errors
zing-memory: INFO: Zing ZST support for kernel <kernel> is unavailable. Please upgrade ZST.


Upgrade ZST as described in Upgrading Azul Zing System Tools.

Problem running Azul Platform Prime on CoreOS

You might encounter the following error when trying to run Azul Platform Prime on CoreOS:

Error response from daemon: linux runtime spec devices: error gathering device information while adding custom device "/dev/zing_mm0": lstat /dev/zing_mm0: no such file or directory
Error: failed to start containers: <container ID>


Run the following commands in the privileged container:

ln -s /run/systemd/journal/dev-log /dev/log
service zing-memory restart

and the following command on the CoreOS host:

/lib/modules/zing-memory create_devices
Fatal error: Checkpoint sync time longer than 50000 ms detected

You might encounter the following error when observing high time-to-checkpoint values or even time-to-checkpoint crashes:

fatal error: Checkpoint sync time longer than 50000 ms detected

Before this fatal error, you should have seen several warnings when either 1 second, 20%, or 80% of CheckpointTimeout was reached:

Detected TTSP issue: start: nnn.nnn wait: nnn.nnn

For details on how to modify the profiler configuration (for example, increase the timeout limit with -XX:CheckpointTimeoutDelay=100000 or turn off crashing with -XX:-DieOnSafepointTimeout), see Safepoint Profiler.


Use any of the following to fix the error:

  1. If CPU load is high, make sure that no non-JVM processes are affecting CPU availability for the JVM.

  2. If CPU load is high and JVM contributes a large portion of it, try -XX:+PromoteCheckpoints. Depending on the load specifics, forcing earlier promotions with additional -XX:CheckpointPromotionDelay=<ms> may give better results.

    However, setting too low a value may negatively affect latency, as working Java threads are stopped more frequently.

  3. If CPU utilization is far from saturation, contact [email protected].

<path> has not been configured as an alternative for java

You might encounter the following error when updating information in the alternatives system:

update-alternatives --set java /opt/zing/zing-jdk8/bin/java
/opt/zing/zing-jdk8/bin/java has not been configured as an alternative for java


Use the full path to the zing-jdk directory. For example:

update-alternatives --set java /opt/zing/zing-jdk1.8.0-

no version information available (required by … impalad)

The library depends on libstdc++ version 3.4.20 or higher.


Upgrade libstdc++ to version 3.4.20 or higher.

Alternatively, resolve the library dependency by using the library that is shipped with Azul Platform Prime: append the libstdc++ path in zing-jdk to the LD_LIBRARY_PATH environment variable (e.g., export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/zing/zing-jdk8//jre/lib/amd64/server/../../../../etc/libc++/) and relaunch your application.

Error: dl failure on line 1095
$ java -version
Error: dl failure on line 1095
Error: failed /opt/zing/zing-jdk1.8.0-, because cannot open shared object file: No such file or directory

This error message is displayed when running Azul Platform Prime on containers due to a missing pam library.


If you receive this error, install pam.x86_64:

$ sudo yum install pam.x86_64

Error: 'core-bundler' failed: gpg not found
$ core-bundler c core.<pid> --docker-container <container_id>
core-bundler version 0.3
Error: 'core-bundler' failed: gpg not found
Run 'core-bundler help' for more information

GPG must be installed in order to bundle Azul Platform Prime core files.


If you receive this error message, install GPG:

$ sudo apt install gpg

[-UseZST][ERROR] Could not find a suitable mount point for the backing storage
Error occurred during initialization of VM
Failed to initialize memory management system

This error can occur when migrating from a system where a JVM was configured to use static large pages.


To check whether static large pages were previously configured on this system, run the following command to list the number of 2 MB pages:

sysctl vm.nr_hugepages

If no large pages are configured, remove -XX:+UseLargePages from your java command line.

If large pages are configured and performance tests show a benefit for your application, refer to Using Huge Pages with Azul Platform Prime for information about enabling transparent huge pages.

If Using Huge Pages with Azul Platform Prime is not applicable to your use case, use -XX:GPGCNoZSTBackingStoragePath. See Using Azul Zing Builds of OpenJDK Command-Line Options for details.

Error occurred during initialization of VM: Memory management: unable to fund java heap account

The error message displays for both ZST and non-ZST modes of Azul Platform Prime at startup in the following cases:

  • In ZST reserve-at-config mode, when there is not enough memory left in the ZST partition to fund the whole of Xmx. The solution is to increase the ZST partition, reduce Xmx, or stop other running Azul Platform Prime processes.

  • In ZST reserve-at-launch mode, when ZST memory could not be extended sufficiently to fund the whole of Xmx. The solution is to add more memory, increase the overall limit for ZST memory, reduce Xmx, or stop other running processes.

  • In non-ZST mode with -XX:+AlwaysPreTouch, when the whole of Xmx cannot be funded from the system.


Add more memory, reduce Xmx, or stop other running processes.

[-UseZST][ERROR] Available physical memory is not enough to fund the backing storage
Error occurred during initialization of VM
Failed to initialize memory management system

This error message is displayed in the non-ZST mode if the JVM detects at startup that there is not enough memory available on the system to fund the whole of Xmx.


Add more memory, reduce Xmx, or stop other running processes.

[ERROR] Mapping to backing storage failed with errno 12
An unexpected error has been detected by Java Runtime Environment:
Internal Error (vmem_spaceManager.cpp:309), pid=63145, tid=63236
External Fatal Error: Multimap to aliases failed during multimap_from_space

Insufficient virtual memory areas (VMA) may lead to a crash in Azul Platform Prime.


Increase the VMA as described in Ensuring Sufficient Virtual Memory Areas.

Exception: Too many open files

The error is generated when the number of open files for a user or the system exceeds the configured limit.


Increase the open files limit on your system.

System-wide settings

  1. View the current limit value for maximum open files at the system level:

    # cat /proc/sys/fs/file-max

  2. To change the system-wide maximum open files limit, edit your /etc/sysctl.conf file as root and add the following line at the end of the file:

    fs.file-max = <num>

  3. To activate your change in the live system, run the following command:

    sysctl -p

User settings

  1. To check the current limit value for maximum open files for a user, run the following commands as root:

    $ su - <user>
    $ ulimit -n

    The default setting is usually 1024.

  2. Change the open files limit for either a specific user or all users as described below.

    • To change the open files limit for a specific user, edit /etc/security/limits.conf as root:

      user - nofile <num>

    • To increase the system-wide limit for all users, edit the /etc/security/limits.conf file as root:

      * - nofile <num>

  3. To apply your changes in the limits.conf file, close all active sessions, log back in, and restart your application.

Note: The <num> value varies according to the amount of RAM in the system. It is approximately 100000 files per GB of RAM.
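The sizing note above can be sketched as a helper that derives <num> from the system's RAM (a hypothetical function, using the approximate 100000-files-per-GB rule stated above):

```shell
#!/bin/sh
# Hypothetical helper: approximate the nofile limit from RAM size,
# per the note above (roughly 100000 files per GB of RAM).
recommended_nofile() {
  ram_gb=$1
  echo $(( ram_gb * 100000 ))
}

recommended_nofile 4   # prints 400000
recommended_nofile 16  # prints 1600000
```

Treat the result as a starting point; tune it against your application's actual descriptor usage.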