Sizing and scaling your Cloud Native Compiler installation

For Cloud Native Compiler to perform JIT compilation in time, you need to make sure your installation is sized correctly.

You scale Cloud Native Compiler by specifying the total number of vCores you wish to allocate to the service. The formula within the Helm chart sets individual component sizing according to recommended ratios.


We recommend using the following formula when doing your initial run:

CNC_vCores = 4 * JDK_vCores_requesting_compilation

That is, take the total number of vCores across the JVMs that will be concurrently requesting compilations, and provision the CNC service with four times that number of vCores. You can then evaluate performance in the CNC Grafana dashboard and adjust the sizing accordingly.
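As a quick sketch, the initial-sizing formula (with the 18-vCore provisioning minimum mentioned below) can be computed directly; the fleet size used here is a hypothetical example:

```python
def cnc_vcores(jdk_vcores_requesting_compilation: int, ratio: int = 4) -> int:
    """Initial CNC sizing: provision `ratio` CNC vCores for every JVM vCore
    concurrently requesting compilations, but never less than the 18-vCore
    provisioning minimum."""
    return max(18, ratio * jdk_vcores_requesting_compilation)

# Hypothetical fleet: 20 JVM vCores concurrently requesting compilations
print(cnc_vcores(20))  # 80
# A tiny fleet still gets the provisioning minimum
print(cnc_vcores(2))   # 18
```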

Depending on your autoscaling settings, there are three variables you will need to set:

simpleSizing:
  vCores: 18
  minVCores: 18
  maxVCores: 81
  • vCores - Total number of vCores allocated to the CNC service. This does NOT include resources required by monitoring, if you enable it. The minimum for provisioning CNC is 18 vCores.

  • minVCores - The minimum amount of resources that are always allocated when autoscaling is enabled.

  • maxVCores - The maximum amount of resources that can be allocated when autoscaling is enabled.
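For example, a values-override.yaml entry for a fleet of roughly 20 JVM vCores concurrently requesting compilations (4 × 20 = 80 CNC vCores) might look like the following; the exact numbers are illustrative and should be tuned against the CNC Grafana dashboard:

```yaml
# Illustrative sizing for ~20 JVM vCores requesting compilations.
# Steady-state target is 80 vCores, with autoscaling allowed to
# shrink to the 18-vCore minimum and grow to absorb compilation bursts.
simpleSizing:
  vCores: 80
  minVCores: 18
  maxVCores: 120
```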

Configuring Autoscaling

Since the Cloud Native Compiler (CNC) service uses a large amount of resources (recommended 4 CNC vCores for every JVM vCore), it is imperative to correctly configure autoscaling. Kubernetes Horizontal Pod Autoscaler (HPA) automatically increases/decreases the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization.

Autoscaling is enabled by default in the Helm chart. To disable autoscaling, add the following to values-override.yaml:

autoscaler: false
The compile-broker component is fairly fast to start, but the cache component takes several minutes to fully synchronize with the other existing cache nodes before it is ready to respond to requests.

If you use the Azul-provided cluster config file, the pre-defined node groups for the gateway, compile-broker, and cache components already contain instructions to work with the Autoscaler. If the Autoscaler sees any unused nodes, it deletes them. If a replication controller, deployment, or replica set tries to start a container and cannot due to lack of resources, the Autoscaler knows which node group is needed and adds a node to the Kubernetes cluster.

In order to use HPA autoscaling, you need to install the Metrics Server component in Kubernetes.
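As a sketch, assuming your cluster permissions allow it and you want the upstream release manifest, Metrics Server is commonly installed like this:

```shell
# Install the Kubernetes Metrics Server from the upstream release manifest
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify it is reporting metrics before relying on HPA
kubectl top nodes
```

If `kubectl top nodes` returns per-node CPU and memory figures, HPA can consume the metrics it needs.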

Manually Scaling Up Compile Broker and Cache Replicas

To see how many replicas of Compile Broker and Cache are currently running, run the following command:

kubectl -n compiler get deployment/compile-broker statefulset/cache

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/compile-broker   1/1     1            1           21d

NAME                     READY   AGE
statefulset.apps/cache   1/1     21d

To change the number of replicas, change the sizing values in the simple sizing or advanced sizing sections and run the following command:

helm upgrade compiler cnc-helm/prime-cnc -n compiler -f values-override.yaml