
Installing Cloud Native Compiler

Cloud Native Compiler (CNC) is shipped as a Kubernetes service that you provision and run on your own cloud or infrastructure. Install Cloud Native Compiler in a location to which the JVM machines have unauthenticated network access. You can run Cloud Native Compiler in the same Kubernetes cluster as the client VMs or in a separate cluster.

Note
By downloading and using Cloud Native Compiler you are agreeing to the Cloud Native Compiler Evaluation Agreement.

The basic steps are:

  1. Set up your Kubernetes cluster using kubeadm, kops, or Amazon Elastic Kubernetes Service (AWS EKS). For evaluation purposes, you can also install Cloud Native Compiler on minikube.

  2. Download cnc-install.zip, which contains the Kubernetes manifest files and other configuration artifacts.

  3. Edit the deployment YAML files to configure CNC capacity.

  4. If running in secure mode with SSL, register your SSL certificate in the CNC config files.

  5. Apply the YAML files.

Cloud Native Compiler Deployment Overview

Cloud Native Compiler is divided into three node types, each of which has its own YAML and is deployed as a unit. Each node has a role label used to set the affinity for the nodes. If you set up your cluster on AWS EKS using the Azul-provided cluster config file, nodes are created with these labels. If you manually create the CNC Service nodes, make sure you apply these labels to the nodes before applying the YAML files.
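
If you create the nodes manually, you can attach the role labels with kubectl label. The node names below are placeholders for your own node names:

$ kubectl label node <broker-node> role=cncserver
$ kubectl label node <cache-node> role=cnccache
$ kubectl label node <infra-node> role=cncinfra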

If you need to use different names for the roles, make sure you change the nodeSelector property in each of the YAML files:

 
nodeSelector:
  kubernetes.io/arch: amd64
  role: cncserver
Note
You can install Cloud Native Compiler on minikube if you want to try it without configuring a multi-node Kubernetes cluster. Never run Cloud Native Compiler on minikube in production or for any real-world testing. Because there is only one node, you must comment out the nodeSelector sections in each YAML file. Also ensure that the machine where you install minikube has enough CPU to satisfy the resource requirements listed in each of the config YAMLs.
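
Commenting out a nodeSelector section means prefixing each of its lines with #, for example:

# nodeSelector:
#   kubernetes.io/arch: amd64
#   role: cncserver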

The nodes in a Cloud Native Compiler instance are as follows:

  • Compile Broker - Performs JIT compilations.

    • role=cncserver

    • System Requirements: CPU 8, RAM 32GB, HDD 100GB

    • There is one pod per Compile Broker node. To scale up, you create more replicas. We recommend a minimum of 4 Compile Broker vCores for every JVM client vCore that is currently warming up (see the sizing sketch after this list).

  • Cache - Stores information about the JVM that the compiler needs to perform compilations.

    • role=cnccache

    • System Requirements: CPU 8, RAM 32GB, HDD 100GB

    • There is one pod per Cache node. To scale up, create more replicas. We recommend 1 Cache instance for every 15 Compile Broker instances.

  • Infrastructure - Provides supporting functionality.

    • role=cncinfra

    • System Requirements: CPU 8, RAM 32GB, HDD 100GB. Make sure the disk connection is fast (use SSD) and that the storage volume is persistent between runs.

    • The Infrastructure node usually does not need to scale, but it may eventually: the gateway component supports autoscaling and may need to scale up as the number of actively connected JVMs approaches 1000. The pods included in this node are:

      • db

      • gateway

      • storage

      • broker
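
To make the scaling recommendations above concrete, here is a back-of-the-envelope sizing sketch; the workload numbers are hypothetical. Suppose 20 JVM clients with 8 vCores each are warming up at the same time:

20 JVMs x 8 vCores   = 160 client vCores warming up
160 vCores x 4       = 640 Compile Broker vCores needed
640 / 8 vCores each  = 80 Compile Broker replicas
80 / 15, rounded up  = 6 Cache replicas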

Installing Cloud Native Compiler

Note
This section describes setting up an evaluation or developer version of the CNC service without SSL authentication. To set up a production version with full SSL authentication, see Configuring Cloud Native Compiler with SSL Authentication.

To install Cloud Native Compiler:

  1. Install Azul Zulu Prime Builds of OpenJDK 21.09.01 or later on your client machine.

  2. Download cnc-install.zip. This ZIP file contains all of the configuration files for the CNC cluster.

  3. Make sure your kubectl client version is v1.21.2 or later.

     
    $ kubectl version
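
    If you only need the client version, kubectl can print it without contacting the cluster:

    $ kubectl version --client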
  4. From the kubernetes directory of the cnc-install directory, apply the provided base/01-namespace.yaml file to create a namespace called compiler. You can change the namespace name by editing the YAML file.

     
    $ kubectl apply -f base/01-namespace.yaml
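
    Optionally, confirm that the namespace exists before continuing:

    $ kubectl get namespace compiler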
  5. Edit the service-dns value in base/03-cache.yaml: replace <my-namespace> with your namespace and <cluster.local> with your cluster domain name:

     
    data:
      cache.yaml: |
        hazelcast:
          network:
            join:
              auto-detection:
                enabled: false
              multicast:
                enabled: false
              kubernetes:
                enabled: true
                service-dns: cache.<my-namespace>.svc.<cluster.local>
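
    For example, with the compiler namespace created in the previous step and Kubernetes' default cluster.local cluster domain, the last line becomes:

    service-dns: cache.compiler.svc.cluster.local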
  6. Configure the number of Compile Broker replicas needed to give your Cloud Native Compiler enough CPU to perform JIT compilations in a timely manner. To set a fixed number of replicas, set the spec:replicas property accordingly.

     
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: compile-broker

    To configure autoscaling, see Configuring Cloud Native Compiler Autoscaling.

  7. Configure the number of compiler threads in each Compile Broker. We recommend 1.5 compiler threads for each vCore in your Compile Broker; with the recommended 8-vCore nodes, this works out to 12 threads, as in the example below. Set the number in the compiler.parallelism property in base/04-compile-broker.yaml.

     
    $ - "-Dcompiler.parallelism=12"
  8. Configure the number of Cache replicas needed in your Cloud Native Compiler in base/03-cache.yaml. We recommend 1 Cache instance for every 15 Compile Broker instances. To set a fixed number of replicas, set the spec:replicas property accordingly.

     
    spec:
      replicas: 1
      serviceName: cache
      selector:
        matchLabels:
          app: cache

    To configure autoscaling, see Configuring Cloud Native Compiler Autoscaling.

  9. If installing on minikube, comment out or delete the nodeSelector sections in all of the config YAMLs.

    Note
    Ensure that the machine running minikube has enough CPU capacity to power all of the nodes. You can also comment out the resources sections in the YAMLs to run the service without the recommended CPU resources, but note that CNC performance will be affected.
  10. Apply the YAML files with the following command:

     
    $ kubectl apply -n compiler -k base
  11. Verify that all started pods are ready:

     
    $ kubectl get all -n compiler
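
    Optionally, watch the pods until they all reach the Running state (press Ctrl+C to stop watching):

    $ kubectl get pods -n compiler -w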

Manually Scaling Up Compile Broker and Cache Replicas

To see how many replicas of Compile Broker and Cache are currently running, run the following command:

 
$ kubectl -n compiler get deployment/compile-broker statefulset/cache
NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/compile-broker   1/1     1            1           21d

NAME                     READY   AGE
statefulset.apps/cache   1/1     21d

To change the number of replicas, use the kubectl scale command:

 
$ kubectl -n compiler scale deployment/compile-broker --replicas=30
$ kubectl -n compiler scale statefulset/cache --replicas=2
$ kubectl -n compiler scale deployment/gateway --replicas=2

Cleaning Up

To delete the artifacts created by the steps above, run the following command:

 
$ kubectl delete -n compiler -k base