Training ReadyNow!

Since ReadyNow! learns from previous executions and incrementally improves warm-up and start-up times with each new invocation, you need to train ReadyNow! to improve your application's performance.

ReadyNow! gathers data from training runs and stores it in a profile log, which improves performance on the first and all subsequent runs.

Depending on your goals and constraints, different approaches to training the ReadyNow! profile can apply.

Optimal Approach (Pre-Production)

To produce a profile that minimizes or eliminates learning in production, follow this approach.

Train the profile across two separate runs in the pre-production environment with the following command-line options:

 -XX:ProfileLogIn=<file> -XX:ProfileLogOut=<file> 

That is, run the application until it reaches optimal performance, restart the JVM with the profile just generated, and run the application again until it reaches optimal performance.

Note that reaching an optimal profile requires 50,000 executions of every important application method.
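The two-run sequence might look like the following sketch. The application jar and profile file names are placeholders; the final line showing a production start that only reads the profile is an assumption, not part of the documented training steps.

```shell
# Training run 1: no input profile exists yet, so ReadyNow! only writes one
java -XX:ProfileLogIn=gen1.profile -XX:ProfileLogOut=gen1.profile -jar myapp.jar

# Training run 2: read the first-generation profile, write a refined one
java -XX:ProfileLogIn=gen1.profile -XX:ProfileLogOut=gen2.profile -jar myapp.jar

# Production (assumed usage): start from the refined profile
java -XX:ProfileLogIn=gen2.profile -jar myapp.jar
```

Using separate file names per generation, as above, makes it easy to keep and compare training generations, but is a convention rather than a requirement.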

Basic Approach (Pre-Production)

If time for training ReadyNow! is limited, perform one run of your application in the pre-production environment with the following command-line options:

 -XX:ProfileLogIn=<file> -XX:ProfileLogOut=<file> 

This captures a very good profile. Such a profile may produce an occasional outlier on the first day in production but improves with subsequent runs.
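A minimal sketch of the single training run, assuming a placeholder application jar and profile file name:

```shell
# Single pre-production training run: the input file does not exist yet,
# so ReadyNow! simply records a fresh profile to the output file
java -XX:ProfileLogIn=readynow.profile -XX:ProfileLogOut=readynow.profile -jar myapp.jar
```

The resulting profile file is then shipped to production and supplied via -XX:ProfileLogIn.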

No Profile Approach (Production)

If you cannot create a profile in a pre-production environment, let ReadyNow! learn in production with the following command-line options:

 -XX:ProfileLogIn=<file> -XX:ProfileLogOut=<file> 

This results in poor warm-up on the first day in production but better performance on subsequent days.
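In this approach every production start reads the latest profile and writes an improved one, so the profile refines itself across restarts. A sketch, assuming the profile lives on storage that persists between restarts (the path and jar name are placeholders):

```shell
# Every production start: read the current profile (if present) and
# write an improved one back to the same persistent location
java -XX:ProfileLogIn=/var/lib/myapp/readynow.profile \
     -XX:ProfileLogOut=/var/lib/myapp/readynow.profile \
     -jar myapp.jar
```

If the profile is written to ephemeral storage, the learning is lost on restart and every run behaves like the first day.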

Understanding if Training Was Enough

To validate that ReadyNow! training was sufficient, check for outliers. An absence of outliers indicates the training went well; reduced start-up latency also indicates that the profile log contains enough training data. Otherwise, perform a few more runs to enrich the profile and obtain a fully optimized profile log.