Installing Optimizer Hub on AWS Elastic Kubernetes Service

If you are using Amazon Web Services, you can simplify the process of starting and maintaining your cluster considerably by using the Elastic Kubernetes Service (EKS).

Provisioning on EKS

To provision Optimizer Hub on EKS:

  1. Install and configure the eksctl and aws command-line tools.

    If you don’t have permissions to set up networking components, have your administrator create the Virtual Private Cloud (VPC).
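    For example, you can verify that both tools are installed and that your AWS credentials are configured with the following commands (output will differ per installation):

    eksctl version
    aws --version
    aws sts get-caller-identity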

  2. Download opthub-install-1.9.4.zip and navigate to the opthub-install/eks directory. You pass the opthub_eks.yaml file to eksctl to create the cluster. For more information, see the eksctl config file schema.

  3. Replace the placeholders {your-cluster-name}, {your-region}, and {path-to-your-key} in opthub_eks.yaml with the correct values.

  4. If you are working with an existing VPC and do not want eksctl to create one, uncomment the vpc section and replace {your-vpc} and {your-subnet} with the correct values.
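    For illustration, after steps 3 and 4 the relevant fragments of opthub_eks.yaml might look as follows. All names and IDs below are example values only, and the actual file contains additional settings:

    # Example values only; your cluster name, region, and IDs will differ.
    metadata:
      name: eks-opthub-cluster
      region: eu-central-1
    vpc:
      id: vpc-0123456789abcdef0
      subnets:
        private:
          eu-central-1a:
            id: subnet-0123456789abcdef0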

  5. Apply the file with the following command:

     
    eksctl create cluster -f opthub_eks.yaml

    This command takes several minutes to execute.

    Successful command output:

     
    2021-08-20 20:09:53 [ℹ] eksctl version 0.60.0
    2021-08-20 20:09:53 [ℹ] using region eu-central-1
    2021-08-20 20:09:54 [ℹ] setting availability zones to [eu-central-1a eu-central-1b eu-central-1c]
    2021-08-20 20:09:54 [ℹ] subnets for eu-central-1a - public:192.168.0.0/19 private:192.168.96.0/19
    2021-08-20 20:09:54 [ℹ] subnets for eu-central-1b - public:192.168.32.0/19 private:192.168.128.0/19
    2021-08-20 20:09:54 [ℹ] subnets for eu-central-1c - public:192.168.64.0/19 private:192.168.160.0/19
    2021-08-20 20:09:54 [ℹ] nodegroup "infra" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-opthub-cluster-nodegroup-infra-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:54 [ℹ] nodegroup "opthubservice" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-opthub-cluster-nodegroup-opthubserver-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:54 [ℹ] nodegroup "opthubcache" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-opthub-cluster-nodegroup-opthubcache-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:54 [ℹ] nodegroup "opthubinfra" will use "ami-05f67790af078876f" [AmazonLinux2/1.19]
    2021-08-20 20:09:54 [ℹ] using SSH public key "/Users/XXXXXXXX/.ssh/id_rsa.pub" as "eksctl-eks-opthub-cluster-nodegroup-opthubinfra-19:01:7b:fb:83:19:12:bb:17:59:40:37:22:dc:82:86"
    2021-08-20 20:09:55 [ℹ] using Kubernetes version 1.19
    2021-08-20 20:09:55 [ℹ] creating EKS cluster "eks-opthub-cluster" in "eu-central-1" region with un-managed nodes
    2021-08-20 20:09:55 [ℹ] 4 nodegroups (opthubcache, opthubinfra, opthubserver, infra) were included (based on the include/exclude rules)
    2021-08-20 20:09:55 [ℹ] will create a CloudFormation stack for cluster itself and 4 nodegroup stack(s)
    2021-08-20 20:09:55 [ℹ] will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
    2021-08-20 20:09:55 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=eu-central-1 --cluster=eks-opthub-cluster'
    2021-08-20 20:09:55 [ℹ] CloudWatch logging will not be enabled for cluster "eks-opthub-cluster" in "eu-central-1"
    2021-08-20 20:09:55 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=eu-central-1 --cluster=eks-opthub-cluster'
    2021-08-20 20:09:55 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "eks-opthub-cluster" in "eu-central-1"
    2021-08-20 20:09:55 [ℹ] 2 sequential tasks: { create cluster control plane "eks-opthub-cluster", 3 sequential sub-tasks: { wait for control plane to become ready, 1 task: { create addons }, 4 parallel sub-tasks: { create nodegroup "infra", create nodegroup "opthubserver", create nodegroup "opthubcache", create nodegroup "opthubinfra" } } }
    2021-08-20 20:09:55 [ℹ] building cluster stack "eksctl-eks-opthub-cluster-cluster"
    2021-08-20 20:09:55 [ℹ] deploying stack "eksctl-eks-opthub-cluster-cluster"
    2021-08-20 20:10:25 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-cluster"
    2021-08-20 20:10:55 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-cluster"
    2021-08-20 20:19:57 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-cluster"
    ...
    2021-08-20 20:20:58 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-cluster"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-opthub-cluster-nodegroup-opthubinfra"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-opthub-cluster-nodegroup-opthubcache"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-opthub-cluster-nodegroup-opthubserver"
    2021-08-20 20:25:06 [ℹ] building nodegroup stack "eksctl-eks-opthub-cluster-nodegroup-infra"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-opthub-cluster-nodegroup-infra"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-nodegroup-infra"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-opthub-cluster-nodegroup-opthubserver"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-nodegroup-opthubserver"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-opthub-cluster-nodegroup-opthubcache"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-nodegroup-opthubcache"
    2021-08-20 20:25:07 [ℹ] deploying stack "eksctl-eks-opthub-cluster-nodegroup-opthubinfra"
    2021-08-20 20:25:07 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-nodegroup-opthubinfra"
    2021-08-20 20:25:23 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-nodegroup-infra"
    2021-08-20 20:25:24 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-nodegroup-opthubcache"
    ...
    2021-08-20 20:32:16 [ℹ] waiting for CloudFormation stack "eksctl-eks-opthub-cluster-nodegroup-opthubcache"
    2021-08-20 20:32:16 [ℹ] waiting for the control plane availability...
    2021-08-20 20:32:16 [✔] saved kubeconfig as "/Users/XXXXXXXX/.kube/config"
    2021-08-20 20:32:16 [ℹ] no tasks
    2021-08-20 20:32:16 [✔] all EKS cluster resources for "eks-opthub-cluster" have been created
    2021-08-20 20:32:16 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-opthub-cluster-nodegroup-infra-NodeInstanceRole-9VFWHMM30SSV" to auth ConfigMap
    2021-08-20 20:32:16 [ℹ] nodegroup "infra" has 0 node(s)
    2021-08-20 20:32:16 [ℹ] waiting for at least 1 node(s) to become ready in "infra"
    2021-08-20 20:32:49 [ℹ] nodegroup "infra" has 1 node(s)
    2021-08-20 20:32:49 [ℹ] node "ip-192-168-90-183.eu-central-1.compute.internal" is ready
    2021-08-20 20:32:49 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-opthub-cluster-nodegroup-opthubser-NodeInstanceRole-16JA2COTZHLWQ" to auth ConfigMap
    2021-08-20 20:32:49 [ℹ] nodegroup "opthubserver" has 0 node(s)
    2021-08-20 20:32:49 [ℹ] waiting for at least 1 node(s) to become ready in "opthubserver"
    2021-08-20 20:33:49 [ℹ] nodegroup "opthubserver" has 1 node(s)
    2021-08-20 20:33:49 [ℹ] node "ip-192-168-90-115.eu-central-1.compute.internal" is ready
    2021-08-20 20:33:49 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-opthub-cluster-nodegroup-opthubcac-NodeInstanceRole-5KIIEOTU3ELU" to auth ConfigMap
    2021-08-20 20:33:49 [ℹ] nodegroup "opthubcache" has 0 node(s)
    2021-08-20 20:33:49 [ℹ] waiting for at least 1 node(s) to become ready in "opthubcache"
    2021-08-20 20:34:21 [ℹ] nodegroup "opthubcache" has 1 node(s)
    2021-08-20 20:34:21 [ℹ] node "ip-192-168-70-66.eu-central-1.compute.internal" is ready
    2021-08-20 20:34:21 [ℹ] adding identity "arn:aws:iam::912192438162:role/eksctl-eks-opthub-cluster-nodegroup-opthubinf-NodeInstanceRole-103G0W4M1XCZ7" to auth ConfigMap
    2021-08-20 20:34:21 [ℹ] nodegroup "opthubinfra" has 0 node(s)
    2021-08-20 20:34:21 [ℹ] waiting for at least 1 node(s) to become ready in "opthubinfra"
    2021-08-20 20:35:37 [ℹ] nodegroup "opthubinfra" has 1 node(s)
    2021-08-20 20:35:37 [ℹ] node "ip-192-168-46-62.eu-central-1.compute.internal" is ready
    2021-08-20 20:37:39 [ℹ] kubectl command should work with "/Users/XXXXXXXX/.kube/config", try 'kubectl get nodes'
    2021-08-20 20:37:39 [✔] EKS cluster "eks-opthub-cluster" in "eu-central-1" region is ready
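    Once the command completes, you can verify that the nodes have joined the cluster, as the log output itself suggests:

    kubectl get nodes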

Here is everything that opthub_eks.yaml creates in your AWS account:

  • CloudFormation stacks for the main EKS cluster and each of the NodeGroups in the cluster.

  • A Virtual Private Cloud called eksctl-{cluster-name}-cluster/VPC. If you chose to use an existing VPC, this is not created. You can explore the VPC and its related networking components in the AWS VPC console. The VPC has all of the required networking components configured:

    • A set of three public subnets and three private subnets

    • An Internet Gateway

    • Route Tables for each of the subnets

    • An Elastic IP Address for the cluster

    • A NAT Gateway

  • An EKS Cluster with four nodegroups, each provisioned with one m5.2xlarge instance:

    • infra - For running Grafana and Prometheus.

    • opthubinfra - For running the Optimizer Hub infrastructure components.

    • opthubcache - For running the Optimizer Hub cache.

    • opthubserver - For running the Optimizer Hub compile brokers.

  • IAM artifacts for the Autoscaling Groups:

    • Roles for the Autoscaler groups for the cluster and for each subnet

    • Policies for the EKS autoscaler
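If you want to inspect these resources, the following commands list the cluster and the CloudFormation stacks that eksctl created (stack names follow the eksctl-{cluster-name}-* pattern; placeholders as in the steps above):

eksctl get cluster --region {your-region}
aws cloudformation list-stacks --region {your-region}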

Setting Up an External Load Balancer

If you need to connect to Optimizer Hub from outside the Kubernetes cluster, you need to set up a load balancer in front of the gateway instances. To do so, follow the AWS documentation regarding load balancer controller setup.
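As a sketch, the AWS Load Balancer Controller is typically installed through Helm from the AWS eks-charts repository. The cluster name below is an example, and the controller additionally requires an IAM role and service account as described in the AWS documentation:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=eks-opthub-cluster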

Installing Optimizer Hub on EKS

Because the opthub_eks.yaml file creates the nodegroups in the cluster, you have to pass an additional configuration file when installing via Helm. This file, values-eks.yaml, is located at opthub-install/eks/values-eks.yaml and includes the nodegroup affinity settings and other settings that EKS expects.

To continue with the full installation instructions for Optimizer Hub, refer to "Installing Optimizer Hub on Kubernetes". If you want to install only a part of the services rather than the full Optimizer Hub, see "Configuring the Active Optimizer Hub Services".

To install using the values-eks.yaml config file, run the following command:

 
helm install opthub opthub-helm/azul-opthub -n my-opthub -f values-eks.yaml -f values-override.yaml

When adding multiple values files, remember the last one takes precedence.
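Note that the helm install command above assumes the Azul Helm repository has been added and the my-opthub namespace exists. As a minimal sketch (the repository URL is taken from the generic Kubernetes installation instructions; verify it there):

helm repo add opthub-helm https://azul.github.io/opthub-helm-charts
helm repo update
kubectl create namespace my-opthub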

Configuring AWS S3 Storage

You can configure Optimizer Hub to use AWS S3 storage instead of the internal blob storage provided by the builtInStorage pod. When you use AWS S3 storage, the builtInStorage pod is not created at all.

You can configure S3 storage by adding the following to values-override.yaml:

 
storage:
  blobStorageService: s3 # available options: builtin-storage, azure-blob, s3
  s3:
    # opthub-* bucket examples: opthub-sandbox, opthub-demo
    commonBucket: opthub-storage0
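The referenced bucket must already exist. As a minimal sketch, you could create it with the AWS CLI (bucket name and region are examples):

aws s3 mb s3://opthub-storage0 --region eu-central-1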

Using Kubernetes Nodes and Permissions

When using this approach, ensure that the Kubernetes nodes running opthub-compilebroker and opthub-gateway have read/write permissions to the S3 bucket(s), and that the target buckets exist.

A role with the policy below must be assigned to the instances (EC2, EC2 ASG, Fargate, etc.) that run the opthub-compilebroker and opthub-gateway pods.

 
{ "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:ListBucket" ], "Resource": [ "arn:aws:s3:::opthub-*" ], "Effect": "Allow" }, { "Action": [ "s3:*Object" ], "Resource": [ "arn:aws:s3:::opthub-*/*" ], "Effect": "Allow" } ] }

Using AWS Service Accounts

If your security practices do not allow you to give nodes access to S3 buckets, you can also grant access to just the key services in Optimizer Hub. You can do this by configuring AWS IAM, roles, and permissions as described in the AWS documentation.
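Before a service account can assume an IAM role, the cluster needs an associated IAM OIDC provider. With eksctl this can be done as follows (the cluster name is an example):

eksctl utils associate-iam-oidc-provider --cluster eks-opthub-cluster --approve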

In the next steps, Optimizer Hub assumes the role name is opthub-s3-role. The IAM role trust relationship entry needs the following additional settings in AWS (change the IDs in this example to align with your configuration):

 
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::163957972732:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/F7E8B430691CFE3B776B8CA663896762" }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringLike": { "oidc.eks.us-west-2.amazonaws.com/id/F7E8B430691CFE3B776B8CA663896762:sub": "system:serviceaccount:*:opthub*", "oidc.eks.us-west-2.amazonaws.com/id/F7E8B430691CFE3B776B8CA663896762:aud": "sts.amazonaws.com" } } } ] }

After creating the Service Accounts, add the following settings to your values-override.yaml file:

 
deployment:
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::<...>:role/opthub-s3-role

The Helm chart of Optimizer Hub creates the following Service Accounts:

  • opthub-cache

  • opthub-compile-broker

  • opthub-gateway

  • opthub-operator
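After installation, you can verify that the role annotation is present on these service accounts, for example:

kubectl get serviceaccount opthub-gateway -n my-opthub -o yaml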

Storage for ReadyNow Orchestrator

You can limit how much persistent storage ReadyNow Orchestrator uses by configuring the appropriate settings.

Cleaning Up

To delete the cluster and all the resources that were created for it, run the following command:

 
eksctl delete cluster -f opthub_eks.yaml