Installing Cloud Native Compiler
Cloud Native Compiler (CNC) is shipped as a Kubernetes cluster which you provision and run on your cloud or on-premises servers. You can install CNC on any Kubernetes cluster:
- Kubernetes clusters that you manually configure with kubeadm
- A single-node minikube cluster. You should run CNC on minikube only for evaluation purposes. Make sure your minikube instance meets the 18 vCore minimum for running CNC. Alternatively, you can use a values override YAML file to disable all resource definitions (a sketch is shown after this list).
- Managed cloud Kubernetes services such as Amazon Web Services Elastic Kubernetes Service (EKS), Google Kubernetes Engine, and Microsoft Azure Managed Kubernetes Service. For EKS, we provide a cluster config file.
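For reference, a values override that disables resource definitions might look roughly like the following sketch. The top-level component keys shown here (compileBroker, cache, db, gateway) are assumptions based on the component names used elsewhere on this page; check the chart's values.yaml for the actual keys and prefer the file Azul supplies.

  # Hypothetical minikube-only override: clears the default CPU/memory
  # requests so all pods can be scheduled on a single small node.
  compileBroker:      # key name assumed
    resources: null
  cache:
    resources: null
  db:
    resources: null
  gateway:
    resources: null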
Note: By downloading and using the Cloud Native Compiler installer you are agreeing to the Cloud Native Compiler Evaluation Agreement.
Installing Cloud Native Compiler
The Cloud Native Compiler service uses Helm as its deployment package manager; there is no need to manually edit any Kubernetes deployment manifests. You configure the installation by overriding the default settings from values.yaml in a custom values file. Here we refer to that file as values-override.yaml, but you can give it any name.
Note: This section describes setting up an evaluation or developer version of the CNC service without SSL authentication. To set up a production version with full SSL authentication, see Configuring Cloud Native Compiler with SSL Authentication.
You should install Cloud Native Compiler in a location to which the JVM machines have unauthenticated access. You can run Cloud Native Compiler in the same Kubernetes cluster as the client VMs or in a separate cluster.
To install Cloud Native Compiler:
- Install Azul Zulu Prime Builds of OpenJDK 21.09.01 or later on your client machine.
- Make sure your Helm version is v3.8.0 or later.
- Add the Azul Helm repository to your Helm environment:

  $ helm repo add cnc-helm https://azulsystems.github.io/cnc-helm-charts/
  $ helm repo update
- Create a namespace (for example, compiler) for the CNC service:

  $ kubectl create namespace compiler

- Create the values-override.yaml file in your local directory.
- If you have a custom cluster domain name, specify it in values-override.yaml:

  clusterName: "example.org"

- Configure sizing and autoscaling of the CNC components according to the sizing guide. By default, autoscaling is on and the CNC service can scale up to 10 Compile Brokers. For example, you could set the following:

  simpleSizing:
    vCores: 18
    minVCores: 18
    maxVCores: 81
- If needed, configure external access in your cluster. If your JVMs are running within the same cluster as CNC, you can skip this step. Otherwise, configure an external load balancer in values-override.yaml:

  gateway:
    service:
      type: "LoadBalancer"
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
- Install using Helm, passing in values-override.yaml:

  $ helm install compiler cnc-helm/prime-cnc -n compiler -f values-override.yaml

  If you need a specific CNC version, use the --version <version> flag. The command should produce output similar to this:

  NAME: compiler
  LAST DEPLOYED: Thu Apr 7 19:21:10 2022
  NAMESPACE: compiler
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
- Verify that all started pods are ready:

  $ kubectl get all -n compiler
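Putting the optional settings from the steps above together, a complete values-override.yaml for an evaluation install might look like the following sketch. All values shown are the examples used above; adjust them to your environment.

  # Example values-override.yaml (values taken from the steps above)
  clusterName: "example.org"

  simpleSizing:
    vCores: 18
    minVCores: 18
    maxVCores: 81

  gateway:
    service:
      type: "LoadBalancer"
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"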
Configuring Persistent Storage
By default, CNC pods allocate data directories on the root disk or in an emptyDir volume, both residing in the pod's ephemeral storage. If the pod dies, all data is lost and has to be regenerated after restart.
When you move the pods' data directories to persistent volumes, the data survives pod crashes, restarts and even scale down/up events. Furthermore, this allows you to lower the local storage sizing of target Kubernetes worker nodes, since large data directories will be stored in separate volumes outside of these worker nodes.
When you use persistent volumes, you create two additional Kubernetes objects per pod:
- a persistentVolumeClaim (PVC), whose name is derived from the parent pod
- a persistentVolume (PV), which is allocated automatically by the chosen storage class and has an auto-generated name
The lifecycles of PV and PVC objects are separate from those of the other CNC Kubernetes objects. When you uninstall CNC using the Helm chart, these objects remain in the cluster for as long as the installation namespace exists. Removing the namespace, or manually deleting PVCs within the namespace, automatically removes their associated PVs from the Kubernetes cluster as well.
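For example, you can list the PVCs that remain after an uninstall and, if you no longer need the data, delete them explicitly (standard kubectl commands; the compiler namespace matches the installation steps above):

  $ kubectl get pvc -n compiler
  $ kubectl delete pvc --all -n compiler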
You can configure persistent volumes for the db and cache components; the configuration is the same for both. Your target Kubernetes cluster needs to have at least one storage class configured. By default, CNC uses the cluster's default storage class.
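To check which storage classes are available in your cluster, and which one is marked as the default, you can run:

  $ kubectl get storageclass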
Note: If you are using AWS EBS storage for your persistent storage, use gp3 volumes instead of gp2 volumes. gp2 volumes have limited IOPS, which can affect CNC performance.
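Below is a sketch of a gp3-backed StorageClass you could create on EKS and mark as the default (so CNC picks it up automatically). This assumes the AWS EBS CSI driver is installed in your cluster; the class name ebs-gp3 is illustrative.

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: ebs-gp3                  # illustrative name
    annotations:
      storageclass.kubernetes.io/is-default-class: "true"
  provisioner: ebs.csi.aws.com     # requires the AWS EBS CSI driver
  parameters:
    type: gp3
  volumeBindingMode: WaitForFirstConsumer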
Configuring for minikube
Because the supplied values file for minikube resets pod resources to null, you only need to add the persistent volume section:

  db:
    resources:
      persistentDataVolume:
        enabled: true
You can also set the volume size if the default 200Gi is too big for local testing:
  db:
    resources:
      persistentDataVolume:
        enabled: true
        size: "50Gi"
Configuration with Custom Resource Values
Example pod sizing with 10GiB for root volume and 100GiB for data volume:
  db:
    resources:
      requests:
        cpu: "5"
        memory: "20Gi"
        ephemeral-storage: "10Gi"
      limits:
        cpu: "5"
        memory: "20Gi"
        ephemeral-storage: "10Gi"
      persistentDataVolume:
        enabled: true
        size: "100Gi"
If you want to use the recommended pod sizing, you still need to explicitly override the default size of the ephemeral storage. This avoids wasting resources and increases pod schedulability on smaller nodes.
  db:
    resources:
      requests:
        ephemeral-storage: "10Gi"
      limits:
        ephemeral-storage: "10Gi"
      persistentDataVolume:
        enabled: true
        size: "100Gi"
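Because the cache component accepts the same configuration keys as db, the equivalent override for cache is the same block placed under the cache key. The sizes below are illustrative:

  cache:
    resources:
      requests:
        ephemeral-storage: "10Gi"
      limits:
        ephemeral-storage: "10Gi"
      persistentDataVolume:
        enabled: true
        size: "100Gi"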
Updating your Installation
To change a setting in your installation, modify values-override.yaml and use the helm upgrade command:
$ helm upgrade compiler cnc-helm/prime-cnc -n compiler -f values-override.yaml
If you are targeting a specific version, use the --version <target version> flag. Depending on where you are installing, do not forget to add any further values files you are using, such as values-eks.yaml, with additional -f <values file> flags. Keep all values you used with the original installation unless told otherwise.
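For example, an upgrade to a specific version on EKS that keeps both the EKS values file and your overrides might look like this (the version is a placeholder):

  $ helm upgrade compiler cnc-helm/prime-cnc -n compiler --version <target version> -f values-eks.yaml -f values-override.yaml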
Cleaning Up
To uninstall the deployed CNC service, run the following command:
$ helm uninstall compiler -n compiler
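As noted above, persistent volume claims remain for as long as the installation namespace exists. If you no longer need anything in the namespace, you can delete it to remove the remaining PVCs and their associated PVs:

  $ kubectl delete namespace compiler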
Cloud Native Compiler Deployment Overview
If you go with the default AWS setup (values-aws.yaml), the deployment is divided into three node types (four if you also want to use the optional monitoring stack). Each node has a role label used to set node affinity. If you set up your cluster on AWS EKS using the Azul-provided cluster config file, nodes are created with these labels.
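If you prepare nodes yourself rather than using the provided EKS config file, you can apply the role labels manually with kubectl; for example (the node name is a placeholder):

  $ kubectl label node <node-name> role=cncserver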
Note: Make sure that the instances you run your Cloud Native Compiler service on have enough CPU to handle your requests. For example, on AWS we use m5.2xlarge instances; on Google Cloud Platform, we use c2-standard-8 instances.
The nodes in a Cloud Native Compiler instance are as follows:
- Compile Broker - Performs JIT compilations.
  - AWS node type: role=cncserver
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB
- Cache - Stores information about the JVM that the compiler needs to perform compilations.
  - AWS node type: role=cnccache
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB
  - There is one pod per Cache node. To scale up, create more replicas.
- Infrastructure - Provides supporting functionality.
  - AWS node type: role=cncinfra
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB. Make sure the disk connection is fast (use SSD) and that the storage volume is persistent between runs.
  - The pods included in this node are:
    - db
    - gateway
    - storage
- Infrastructure - Non-CNC supporting functionality, such as monitoring.
  - AWS node type: role=infra
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB
  - The pods included in this node are:
    - grafana
    - prometheus
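For reference, a cluster config along these lines (an eksctl ClusterConfig) could create EKS node groups with the required role labels. This is an illustrative sketch, not the Azul-provided file; the cluster name, region, and node counts are placeholders.

  apiVersion: eksctl.io/v1alpha5
  kind: ClusterConfig
  metadata:
    name: cnc-cluster          # placeholder
    region: us-east-1          # placeholder
  nodeGroups:
    - name: cncserver
      instanceType: m5.2xlarge
      desiredCapacity: 2       # placeholder; scale with your compile load
      labels:
        role: cncserver
    - name: cnccache
      instanceType: m5.2xlarge
      desiredCapacity: 1
      labels:
        role: cnccache
    - name: cncinfra
      instanceType: m5.2xlarge
      desiredCapacity: 1
      labels:
        role: cncinfra
    - name: infra              # optional monitoring stack
      instanceType: m5.2xlarge
      desiredCapacity: 1
      labels:
        role: infra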