
Installing Cloud Native Compiler on Elastic Kubernetes Service


You can configure the Kubernetes cluster with kubeadm or kops. For evaluation purposes, you can also run Cloud Native Compiler on Minikube. If you’re using Amazon Web Services, however, you can simplify the process of starting and maintaining your cluster considerably by using the Elastic Kubernetes Service (EKS).

Provisioning on EKS

To provision Cloud Native Compiler on EKS:

  1. Install and configure the eksctl and aws command-line tools.

    If you don’t have permissions to set up networking components, have your administrator create the Virtual Private Cloud (VPC) for you.
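
    To confirm that both tools are installed and that your AWS credentials are in place, you can run the following checks (a quick sanity sketch; the exact version output will differ):

    $ eksctl version
    $ aws --version
    $ aws sts get-caller-identity   # verifies the credentials eksctl will use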

  2. Navigate to the cnc-install/eks directory. You can pass the cnc_eks.yaml file to eksctl to create the cluster. For more information, see the eksctl config file schema.

  3. In cnc_eks.yaml, replace the placeholders <your-cluster-name>, <your-region>, and <path-to-your-key> with the correct values.
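
    For orientation, these placeholders live in standard eksctl ClusterConfig fields. The excerpt below is an illustrative sketch following the eksctl schema, not the literal contents of cnc_eks.yaml:

    apiVersion: eksctl.io/v1alpha5
    kind: ClusterConfig
    metadata:
      name: <your-cluster-name>               # for example, eks-cc-cluster
      region: <your-region>                   # for example, eu-central-1
    nodeGroups:
      - name: infra
        ssh:
          publicKeyPath: <path-to-your-key>   # for example, ~/.ssh/id_rsa.pub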

  4. If you are working with an existing VPC and do not want eksctl to create one, uncomment the vpc section and replace <your-vpc> and <your-subnet> with the correct values.
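
    For reference, an existing-VPC section in an eksctl config file generally looks like this sketch (the IDs are placeholders, and your file may list one subnet per availability zone):

    vpc:
      id: <your-vpc>                  # for example, vpc-0123456789abcdef0
      subnets:
        private:
          eu-central-1a:
            id: <your-subnet>         # for example, subnet-0123456789abcdef0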

  5. Apply the file with the following command:

     
    $ eksctl create cluster -f cnc_eks.yaml

    This command takes several minutes to execute.

    Example output from a successful run:

     
    2021-08-20 20:09:53 [ℹ] eksctl version 0.60.0
    2021-08-20 20:09:53 [ℹ] using region eu-central-1
    2021-08-20 20:09:54 [ℹ] setting availability zones to [eu-central-1a eu-central-1b eu-central-1c]
    2021-08-20 20:09:54 [ℹ] subnets for eu-central-1a - public:192.168.0.0/19 private:192.168.96.0/19
    2021-08-20 20:09:54 [ℹ] subnets for eu-central-1b - public:192.168.32.0/19 private:192.168.128.0/19
    2021-08-20 20:09:54 [ℹ] subnets for eu-central-1c - public:192.168.64.0/19 private:192.168.160.0/19
    2021-08-20 20:09:54 [ℹ] nodegroup "infra" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-cc-cluster-nodegroup-infra-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:54 [ℹ] nodegroup "cncserver" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-cc-cluster-nodegroup-cncserver-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:54 [ℹ] nodegroup "cccache" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-cc-cluster-nodegroup-cccache-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:54 [ℹ] nodegroup "cncinfra" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-cc-cluster-nodegroup-cncinfra-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:55 [ℹ] using Kubernetes version 1.19
    2021-08-20 20:09:55 [ℹ] creating EKS cluster "eks-cc-cluster" in "eu-central-1" region with un-managed nodes
    2021-08-20 20:09:55 [ℹ] 4 nodegroups (cccache, cncinfra, cncserver, infra) were included (based on the include/exclude rules)
    2021-08-20 20:09:55 [ℹ] will create a CloudFormation stack for cluster itself and 4 nodegroup stack(s)
    2021-08-20 20:09:55 [ℹ] will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
    2021-08-20 20:09:55 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --cluster=eks-cc-cluster'
    2021-08-20 20:09:55 [ℹ] CloudWatch logging will not be enabled for cluster "eks-cc-cluster" in "eu-central-1"
    2021-08-20 20:09:55 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-central-1 --cluster=eks-cc-cluster'
    2021-08-20 20:09:55 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eks-cc-cluster" in "eu-central-1"
    2021-08-20 20:09:55 [ℹ] 2 sequential tasks: { create cluster control plane "eks-cc-cluster", 3 sequential sub-tasks: { wait for control plane to become ready, 1 task: { create addons }, 4 parallel sub-tasks: { create nodegroup "infra", create nodegroup "cncserver", create nodegroup "cccache", create nodegroup "cncinfra" } } }
    2021-08-20 20:09:55 [ℹ] building cluster stack "eksctl-eks-cc-cluster-cluster"
    2021-08-20 20:09:55 [ℹ] deploying stack "eksctl-eks-cc-cluster-cluster"
    2021-08-20 20:10:25 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-cluster"
    2021-08-20 20:10:55 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-cluster"
    2021-08-20 20:19:57 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-cluster"
    ...
    2021-08-20 20:20:58 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-cluster"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-cc-cluster-nodegroup-cncinfra"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-cc-cluster-nodegroup-cccache"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-cc-cluster-nodegroup-cncserver"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-cc-cluster-nodegroup-infra"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-cc-cluster-nodegroup-infra"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-nodegroup-infra"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-cc-cluster-nodegroup-cncserver"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-nodegroup-cncserver"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-cc-cluster-nodegroup-cccache"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-nodegroup-cccache"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-cc-cluster-nodegroup-cncinfra"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-nodegroup-cncinfra"
    2021-08-20 20:25:23 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-nodegroup-infra"
    2021-08-20 20:25:24 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-nodegroup-cccache"
    ...
    2021-08-20 20:32:16 [ℹ] waiting for CloudFormation stack "eksctl-eks-cc-cluster-nodegroup-cccache"
    2021-08-20 20:32:16 [ℹ] waiting for the control plane availability...
    2021-08-20 20:32:16 [✔] saved kubeconfig as "/Users/XXXXXXXX/.kube/config"
    2021-08-20 20:32:16 [ℹ] no tasks
    2021-08-20 20:32:16 [✔] all EKS cluster resources for "eks-cc-cluster" have been created
    2021-08-20 20:32:16 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-cc-cluster-nodegroup-infra-NodeInstanceRole-9VFWHMM30SSV" to auth ConfigMap
    2021-08-20 20:32:16 [ℹ] nodegroup "infra" has 0 node(s)
    2021-08-20 20:32:16 [ℹ] waiting for at least 1 node(s) to become ready in "infra"
    2021-08-20 20:32:49 [ℹ] nodegroup "infra" has 1 node(s)
    2021-08-20 20:32:49 [ℹ] node "ip-192-168-90-183.eu-central-1.compute.internal" is ready
    2021-08-20 20:32:49 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-cc-cluster-nodegroup-ccser-NodeInstanceRole-16JA2COTZHLWQ" to auth ConfigMap
    2021-08-20 20:32:49 [ℹ] nodegroup "cncserver" has 0 node(s)
    2021-08-20 20:32:49 [ℹ] waiting for at least 1 node(s) to become ready in "cncserver"
    2021-08-20 20:33:49 [ℹ] nodegroup "cncserver" has 1 node(s)
    2021-08-20 20:33:49 [ℹ] node "ip-192-168-90-115.eu-central-1.compute.internal" is ready
    2021-08-20 20:33:49 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-cc-cluster-nodegroup-cccac-NodeInstanceRole-5KIIEOTU3ELU" to auth ConfigMap
    2021-08-20 20:33:49 [ℹ] nodegroup "cccache" has 0 node(s)
    2021-08-20 20:33:49 [ℹ] waiting for at least 1 node(s) to become ready in "cccache"
    2021-08-20 20:34:21 [ℹ] nodegroup "cccache" has 1 node(s)
    2021-08-20 20:34:21 [ℹ] node "ip-192-168-70-66.eu-central-1.compute.internal" is ready
    2021-08-20 20:34:21 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-cc-cluster-nodegroup-cncinf-NodeInstanceRole-103G0W4M1XCZ7" to auth ConfigMap
    2021-08-20 20:34:21 [ℹ] nodegroup "cncinfra" has 0 node(s)
    2021-08-20 20:34:21 [ℹ] waiting for at least 1 node(s) to become ready in "cncinfra"
    2021-08-20 20:35:37 [ℹ] nodegroup "cncinfra" has 1 node(s)
    2021-08-20 20:35:37 [ℹ] node "ip-192-168-46-62.eu-central-1.compute.internal" is ready
    2021-08-20 20:37:39 [ℹ] kubectl command should work with "/Users/XXXXXXXX/.kube/config", try 'kubectl get nodes'
    2021-08-20 20:37:39 [✔] EKS cluster "eks-cc-cluster" in "eu-central-1" region is ready
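
    As the last log line suggests, you can confirm that the cluster and its nodegroups are ready before proceeding:

    $ kubectl get nodes
    $ eksctl get nodegroup --cluster <your-cluster-name> --region <your-region>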

Here is everything that cnc_eks.yaml creates in your AWS account:

  • CloudFormation stacks for the main EKS cluster and each of the nodegroups in the cluster.

  • A Virtual Private Cloud called eksctl-<cluster-name>-cluster/VPC. If you chose to use an existing VPC, this is not created. You can explore the VPC and its related networking components in the AWS VPC console. The VPC has all of the required networking components configured:

    • A set of three public subnets and three private subnets

    • An Internet Gateway

    • Route Tables for each of the subnets

    • An Elastic IP Address for the cluster

    • A NAT Gateway

  • An EKS cluster with four nodegroups, each provisioned with one m5.2xlarge instance:

    • infra - For running Grafana and Prometheus.

    • cncinfra - For running the Cloud Native Compiler infrastructure components.

    • cccache - For running the Cloud Native Compiler cache.

    • cncserver - For running the Cloud Native Compiler compile brokers. Note that each m5.2xlarge instance has 8 vCores, which affects how you size your compile broker deployment. For best performance, provision about four times as many compile broker vCores as the vCores running application logic on your clients. Because we also recommend oversubscribing compiler threads by 1.5x, an 8-vCore instance should run 12 compiler threads, so set -Dcompiler.parallelism=12. A worked sizing example follows this list.

  • IAM artifacts for the Autoscaling Groups:

    • Roles for the Autoscaler groups for the cluster and for each subnet

    • Policies for the EKS autoscaler
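
To make the compile broker sizing rule concrete, here is a worked example using the defaults above; the client vCore count is hypothetical:

# Each m5.2xlarge node in the cncserver nodegroup exposes 8 vCores.
# Oversubscribing compiler threads by 1.5x: 8 x 1.5 = 12,
# so each broker instance runs with -Dcompiler.parallelism=12.
# Total broker capacity should be about 4x the client application vCores:
# a client running application logic on 16 vCores calls for ~64 broker
# vCores, i.e. 64 / 8 = 8 m5.2xlarge instances.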

Cleaning Up

To delete the cluster and all of the resources it created, run the following command:

 
$ eksctl delete cluster -f cnc_eks.yaml
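
Deletion also takes several minutes. To confirm that the cluster is gone, you can list the clusters remaining in the region (using the same placeholder as above):

$ eksctl get cluster --region <your-region>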