Installing Cloud Native Compiler on Kubernetes
Cloud Native Compiler (CNC) uses Helm as the deployment manifest package manager, so there is no need to manually edit any Kubernetes deployment manifests. You can configure the installation by overriding the default settings from values.yaml in a custom values file. In this guide we refer to the file as values-override.yaml, but you can give it any name.
|This section describes setting up an evaluation or developer version of CNC without SSL authentication. To set up a production version with full SSL authentication, see Configuring Cloud Native Compiler with SSL Authentication.|
You should install Cloud Native Compiler in a location to which the JVM machines have unauthenticated access. You can run Cloud Native Compiler in the same Kubernetes cluster as the client VMs or in a separate cluster.
|If you are upgrading an existing installation, make sure to check "Upgrading Cloud Native Compiler".|
Cloud Native Compiler Helm Charts
Azul provides the CNC Helm Charts on GitHub; you can download the full package as a zip.
|Since version 1.5 of CNC, the Helm charts include ServiceAccount, Role, and RoleBinding objects to allow CNC to directly call Kubernetes APIs. This is required for member discovery and scaling of compile-brokers managed by the CNC operator.|
Installing Cloud Native Compiler
Install Azul Zulu Prime Builds of OpenJDK 21.09.1.0 or newer on your client machine.
Make sure you are using a recent version of Helm 3.
Add the Azul Helm repository to your Helm environment:
helm repo add cnc-helm https://azulsystems.github.io/cnc-helm-charts/
helm repo update
Create a namespace (e.g. compiler) for CNC:
kubectl create namespace compiler
Create a values-override.yaml file in your local directory.
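As a starting point, a minimal values-override.yaml might contain just the sizing block described in the steps below. The values shown here are illustrative; adjust them per the sizing guide:

```yaml
# values-override.yaml -- minimal starting point (illustrative values)
simpleSizing:
  vCores: 29      # initial capacity
  minVCores: 29   # lower autoscaling bound
  maxVCores: 92   # upper autoscaling bound
```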
If you have a custom cluster domain name, specify it in values-override.yaml.
Configure sizing and autoscaling of the CNC components according to the sizing guide. By default, autoscaling is on and CNC can scale up to 10 Compile Brokers. For example, you could set the following:
simpleSizing:
  vCores: 29
  minVCores: 29
  maxVCores: 92
If needed, configure external access in your cluster. If your JVMs are running within the same cluster as CNC, you can skip this step. Otherwise, you need to configure an external load balancer in values-override.yaml.
For clusters running on AWS, an example configuration file is available on Azul’s GitHub.
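As a sketch, external access on AWS is typically achieved by exposing the CNC entry point through a Service of type LoadBalancer. The gateway.service keys below are hypothetical; the exact keys depend on your chart version, so refer to the example file on Azul’s GitHub:

```yaml
# Hypothetical values-override.yaml fragment -- confirm the exact keys
# against the AWS example file on Azul's GitHub for your chart version.
gateway:
  service:
    type: LoadBalancer
    annotations:
      # Standard AWS annotation requesting a Network Load Balancer
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```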
Install using Helm, passing in the values-override.yaml:
helm install compiler cnc-helm/prime-cnc -n compiler -f values-override.yaml
If you need a specific CNC version, use the --version <version> flag. The command should produce output similar to this:
NAME: compiler
LAST DEPLOYED: Thu Apr 7 19:21:10 2022
NAMESPACE: compiler
STATUS: deployed
REVISION: 1
TEST SUITE: None
Verify that all started pods are ready:
kubectl get all -n compiler
Configuring Persistent Storage
By default, CNC pods allocate data directories on the root disk or in an
emptyDir volume, both residing in the pod’s ephemeral storage. If the pod dies, all data is lost and has to be regenerated after restart.
When you move the pods' data directories to persistent volumes, the data survives pod crashes, restarts and even scale down/up events. Furthermore, this allows you to lower the local storage sizing of target Kubernetes worker nodes, since large data directories will be stored in separate volumes outside of these worker nodes.
When you use persistent volumes, you create two additional Kubernetes objects per pod:
a persistentVolumeClaim (PVC), whose name is derived from the parent pod
a persistentVolume (PV), which is allocated automatically by the chosen storage class and has an auto-generated name
The lifecycles of PV and PVC objects are separate from those of the other CNC Kubernetes objects. When you uninstall CNC using the Helm chart, these objects remain in the cluster for as long as the installation namespace exists. Removing the namespace, or manually deleting PVCs within the namespace, automatically removes their associated PVs from the Kubernetes cluster as well.
You can configure persistent volumes for the db and cache components. The configuration is the same for both components. Your target Kubernetes cluster needs to have at least one storage class configured. By default, CNC uses the cluster’s default storage class.
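If you do not want the default storage class, you can name one explicitly per component. The storageClassName key below is hypothetical; verify the exact key against the chart’s values.yaml:

```yaml
# Illustrative fragment: pin the data volume to a specific storage class
db:
  persistentDataVolume:
    enabled: true
    size: "100Gi"
    storageClassName: "gp3"  # hypothetical key; confirm in values.yaml
```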
|If you are using AWS EBS storage for your persistent storage, use gp3 volumes instead of gp2 volumes. gp2 volumes have limited IOPS, which can affect CNC performance.|
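If your cluster only offers gp2 by default, you can define a gp3-backed storage class yourself, assuming the AWS EBS CSI driver is installed. This is a standard Kubernetes StorageClass manifest, not part of the CNC charts:

```yaml
# StorageClass backed by gp3 EBS volumes (requires the AWS EBS CSI driver)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
```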
Configuration with Custom Resources Values
Example pod sizing with 10 GiB for the root volume and 100 GiB for the data volume:
db:
  resources:
    requests:
      cpu: "5"
      memory: "20Gi"
      ephemeral-storage: "10Gi"
    limits:
      cpu: "5"
      memory: "20Gi"
      ephemeral-storage: "10Gi"
  persistentDataVolume:
    enabled: true
    size: "100Gi"
If you want to use the recommended pod sizing, you still need to explicitly override the default ephemeral storage size. This avoids wasting resources and increases pod schedulability on smaller nodes.
db:
  resources:
    requests:
      ephemeral-storage: "10Gi"
    limits:
      ephemeral-storage: "10Gi"
  persistentDataVolume:
    enabled: true
    size: "100Gi"