Installing Cloud Native Compiler
Cloud Native Compiler (CNC) is shipped as a Kubernetes cluster that you provision and run on your cloud or on-premises servers. You can install CNC on any Kubernetes cluster:
- Kubernetes clusters that you manually configure with kubeadm.
- A single-node minikube cluster. Run CNC on minikube for evaluation purposes only, and make sure your minikube instance meets the 18 vCore minimum for running CNC (see the example command after this list). Alternatively, you can use this values override YAML file to disable all resource definitions.
- Managed cloud Kubernetes services such as Amazon Web Services Elastic Kubernetes Service (EKS), Google Kubernetes Engine, and Microsoft Azure Managed Kubernetes Service. For EKS, we provide a cluster config file.
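If you are evaluating on minikube, a start command along the following lines reserves enough CPU for CNC. The memory value here is an assumption for illustration; adjust both values to match your sizing.
$ minikube start --cpus 18 --memory 32g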
Note: By downloading and using Cloud Native Installer you are agreeing to the Cloud Native Compiler Evaluation Agreement.
Installing Cloud Native Compiler
The Cloud Native Compiler service uses Helm as the deployment manifest package manager; there is no need to manually edit any Kubernetes deployment manifests. You can configure the installation by overriding the default settings from values.yaml in a custom values file. Here we refer to that file as values-override.yaml, but you can give it any name.
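To see which settings can be overridden, you can print the chart's default values.yaml with a standard Helm command once the Azul Helm repository has been added (the repository and chart names are the ones used in the installation steps below):
$ helm show values cnc-helm/prime-cnc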
Note: This section describes setting up an evaluation or developer version of the CNC service without SSL authentication. To set up a production version with full SSL authentication, see Configuring Cloud Native Compiler with SSL Authentication.
You should install Cloud Native Compiler in a location to which the JVM machines have unauthenticated access. You can run Cloud Native Compiler in the same Kubernetes cluster as the client VMs or in a separate cluster.
To install Cloud Native Compiler:
- Install Azul Zulu Prime Builds of OpenJDK 21.09.01 or later on your client machine.
- Make sure your Helm version is v3.8.0 or later.
- Add the Azul Helm repository to your Helm environment:
  $ helm repo add cnc-helm https://azulsystems.github.io/cnc-helm-charts/
  $ helm repo update
- Create a namespace (for example, compiler) for the CNC service:
  $ kubectl create namespace compiler
- Create the values-override.yaml file in your local directory.
- If you have a custom cluster domain name, specify it in values-override.yaml:
  clusterName: "example.org"
- Configure sizing and autoscaling of the CNC components according to the sizing guide. By default, autoscaling is on and the CNC service can scale up to 10 Compile Brokers. For example, you could set the following:
  simpleSizing:
    vCores: 18
    minVCores: 18
    maxVCores: 81
- If needed, configure external access in your cluster. If your JVMs run in the same cluster as CNC, you can skip this step. Otherwise, configure an external load balancer in values-override.yaml (a combined values-override.yaml example appears after these installation steps):
  gateway:
    service:
      type: "LoadBalancer"
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
- Install using Helm, passing in values-override.yaml:
  $ helm install compiler cnc-helm/prime-cnc -n compiler -f values-override.yaml
  If you need a specific CNC version, use the --version <version> flag. The command should produce output similar to this:
  NAME: compiler
  LAST DEPLOYED: Thu Apr 7 19:21:10 2022
  NAMESPACE: compiler
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
- Verify that all started pods are ready:
  $ kubectl get all -n compiler
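For reference, a values-override.yaml that combines the overrides shown in the steps above could look like this; include only the blocks your environment needs:
clusterName: "example.org"

simpleSizing:
  vCores: 18
  minVCores: 18
  maxVCores: 81

gateway:
  service:
    type: "LoadBalancer"
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"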
Updating your Installation
To change a setting in your installation, modify values-override.yaml and use the helm upgrade command:
$ helm upgrade compiler cnc-helm/prime-cnc -n compiler -f values-override.yaml
If you are targeting a specific version, use the --version <target version> flag. Depending on where you are installing, remember to pass any further values files you may be using, such as values-eks.yaml, with additional -f <values file> flags. Keep all values you used with the original installation unless told otherwise.
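For example, an upgrade that pins a chart version and combines a platform values file with your overrides might look like this (the version placeholder is not a real release number):
$ helm upgrade compiler cnc-helm/prime-cnc -n compiler --version <target version> -f values-eks.yaml -f values-override.yaml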
Cleaning Up
To uninstall the deployed CNC service, run the following command:
$ helm uninstall compiler -n compiler
Cloud Native Compiler Deployment Overview
The default AWS setup (values-aws.yaml) divides the deployment into three node types (four if you also want to use the optional monitoring stack). Each node has a role label that is used to set node affinity. If you set up your cluster on AWS EKS using the Azul-provided cluster config file, nodes are created with these labels.
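The Azul-provided cluster config file is the authoritative source for these labels. As a rough sketch of how an eksctl node group attaches such a label, consider the fragment below; the cluster name, region, node group name, and capacity are assumptions for illustration and are not taken from the Azul file.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cnc-eval          # hypothetical cluster name
  region: us-east-1       # hypothetical region
nodeGroups:
  - name: cncserver-nodes # hypothetical node group name
    instanceType: m5.2xlarge
    desiredCapacity: 1
    labels:
      role: cncserver     # role label referenced by the node list below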
Note: Make sure that the instances you run your Cloud Native Compiler service on have enough CPU to handle your requests. For example, on AWS we use m5.2xlarge instances; on Google Cloud Platform, we use c2-standard-8 instances.
The nodes in a Cloud Native Compiler instance are as follows:
- Compile Broker - Performs JIT compilations.
  - AWS node type: role=cncserver
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB
- Cache - Stores information about the JVM that the compiler needs to perform compilations.
  - AWS node type: role=cnccache
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB
  - There is one pod per Cache node. To scale up, create more replicas.
- Infrastructure - Provides supporting functionality.
  - AWS node type: role=cncinfra
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB. Make sure the disk connection is fast (use SSD) and that the storage volume is persistent between runs.
  - The pods included in this node are:
    - db
    - gateway
    - storage
- Infrastructure - Non-CNC supporting functionality, such as monitoring.
  - AWS node type: role=infra
  - System Requirements: CPU 8, RAM 32GB, HDD 100GB
  - Pods included in this node:
    - grafana
    - prometheus
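To illustrate how these role labels drive scheduling, the sketch below pins a pod to role=cncserver nodes with a plain Kubernetes nodeSelector. This is illustrative only; the prime-cnc chart defines its own affinity rules, and the pod and image names here are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: compile-broker-example        # hypothetical pod name
spec:
  nodeSelector:
    role: cncserver                   # matches the Compile Broker node label above
  containers:
    - name: example
      image: registry.example.com/example:latest   # placeholder image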