
Installing Optimizer Hub on Kubernetes

Optimizer Hub uses Helm as the deployment manifest package manager; there is no need to manually edit any Kubernetes deployment manifests. You can configure the installation by overriding the default settings from values.yaml in a custom values file. Here we refer to this file as values-override.yaml, but you can give it any name.
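
For example, a minimal values-override.yaml might contain only the settings you want to change from the defaults, such as the cluster domain name and sizing values shown later in this section:

 
clusterName: "example.org"
simpleSizing:
  vCores: 32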

Note
This section describes setting up an evaluation or developer version of Optimizer Hub without SSL authentication. To set up a production version with full SSL authentication, see Configuring Optimizer Hub with SSL Authentication.

You should install Optimizer Hub in a location to which the JVM machines have unauthenticated access. You can run Optimizer Hub in the same Kubernetes cluster as the client VMs or in a separate cluster.

Note
If you are upgrading an existing installation, make sure to check "Upgrading Optimizer Hub".

Optimizer Hub Helm Charts

Azul provides Optimizer Hub Helm Charts on GitHub. You can download the full package as a zip.
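
Alternatively, once you have added the Azul Helm repository as described in the installation steps below, you can pull the packaged chart locally to inspect its contents:

 
helm pull opthub-helm/azul-opthub --untar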

Installing Optimizer Hub

These instructions are for installing a full Optimizer Hub instance with both Cloud Native Compiler and ReadyNow Orchestrator. If you do not want to install the full Optimizer Hub, but only a part of the services, see "Configuring the Active Optimizer Hub Services".

  1. Install Azul Zulu Prime Builds of OpenJDK 21.09.1.0 or newer on your client machine.

  2. Make sure your Helm version is v3.8.0 or newer.

  3. Add the Azul Helm repository to your Helm environment:

     
    helm repo add opthub-helm https://azulsystems.github.io/opthub-helm-charts/
    helm repo update
  4. Create a namespace (e.g. my-opthub) for Optimizer Hub.

     
    kubectl create namespace my-opthub
  5. Create the values-override.yaml file in your local directory.

  6. If you have a custom cluster domain name, specify it in values-override.yaml:

     
    clusterName: "example.org"
  7. If you want specific labels to be added to your Kubernetes objects, define them in your values-override.yaml, for example as follows:

     
    gateway:
      applicationLabels: # Additional labels for Deployment/StatefulSet
      podTemplateLabels: # Additional labels for POD
      serviceLabels: # Additional labels for Service
  8. Configure sizing and autoscaling of the Optimizer Hub components according to the sizing guide. By default, autoscaling is on and Optimizer Hub can scale up to 10 Compile Brokers. For example, you could set the following:

     
    simpleSizing:
      vCores: 32
      minVCores: 32
      maxVCores: 106
  9. If needed, configure external access in your cluster. If your JVMs are running within the same cluster as Optimizer Hub, you can ignore this step. Otherwise, you need to configure an external load balancer in values-override.yaml, as shown in the sketch after these steps.

    For clusters running on AWS, an example configuration file is available on Azul’s GitHub.

  10. Install using Helm, passing in the values-override.yaml.

     
    helm install opthub opthub-helm/azul-opthub -n my-opthub -f values-override.yaml
    • In case you need a specific Optimizer Hub version, use the --version flag, for example --version 1.9.4.

    • The command should produce output similar to this:

       
      NAME: opthub
      LAST DEPLOYED: Wed Jan 31 12:19:58 2024
      NAMESPACE: my-opthub
      STATUS: deployed
      REVISION: 1
      TEST SUITE: None
  11. Verify that all started pods are ready:

     
    kubectl get all -n my-opthub
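
If you do need external access (step 9), the snippet below is a minimal sketch of what a load balancer override might look like on AWS. The gateway.service fields and the AWS annotation shown here are illustrative assumptions only; refer to the example configuration file on Azul’s GitHub for the exact values supported by the chart.

 
gateway:
  service:
    # hypothetical structure for illustration; check Azul's AWS example for the real keys
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"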

Configuring Persistent Storage

By default, Optimizer Hub pods allocate data directories on the root disk or in an emptyDir volume, both residing in the pod’s ephemeral storage. If the pod dies, all data is lost and has to be regenerated after restart.

You can move the pods' data directories to persistent volumes, so the data survives pod crashes, restarts and even scale down/up events. Furthermore, this allows you to lower the local storage sizing of target Kubernetes worker nodes, since large data directories are stored in separate volumes outside of these worker nodes.

When you use persistent volumes, two additional Kubernetes objects are created per pod:

  • a persistentVolumeClaim (PVC), whose name is derived from the parent pod

  • a persistentVolume (PV), which is allocated automatically by the chosen storage class and has an auto-generated name.

The lifecycles of the PV and PVC objects are separate from those of the other Optimizer Hub Kubernetes objects. When you uninstall Optimizer Hub using the Helm chart, these objects remain in the cluster for as long as the installation namespace exists. Removing the namespace, or manually deleting PVCs within the namespace, also removes their associated PVs from the Kubernetes cluster.
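
For example, after uninstalling the Helm release you can list the remaining claims and, if they are no longer needed, delete them yourself:

 
# list remaining claims in the installation namespace
kubectl get pvc -n my-opthub
# deleting a claim also removes its associated persistent volume
kubectl delete pvc <pvc-name> -n my-opthub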

You can configure persistent volumes for the db and builtinStorage components. The configuration is the same for both components. Your target Kubernetes cluster needs to have at least one storage class configured. By default, Optimizer Hub uses the default configured storage class.
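
To check which storage classes are available in your cluster, and which one is marked as the default, you can run:

 
kubectl get storageclass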

Note
If you are using AWS EBS Storage for your persistent storage, use gp3 volumes instead of gp2 volumes. gp2 volumes have limited IOPS which can affect Optimizer Hub performance. Additional configuration info for AWS S3 Storage is available here.
Note
If you are using Azure Blob Storage, please check "Installing Optimizer Hub on Azure" for additional settings.

Configuration with Custom Resources Values

Example pod sizing with 10GiB for root volume and 100GiB for data volume:

 
db:
  resources:
    requests:
      cpu: "5"
      memory: "20Gi"
      ephemeral-storage: "10Gi"
    limits:
      cpu: "5"
      memory: "20Gi"
      ephemeral-storage: "10Gi"
  persistentDataVolume:
    enabled: true
    size: "100Gi"

If you want to use the recommended sizing of pods, you still need to explicitly override the default size of the ephemeral storage. This avoids wasting resources and increases pod schedulability on smaller nodes.

 
db:
  resources:
    requests:
      ephemeral-storage: "10Gi"
    limits:
      ephemeral-storage: "10Gi"
  persistentDataVolume:
    enabled: true
    size: "100Gi"

Configuration with Custom Storage Class

If your cluster has multiple configured storage classes, and you want to use a non-default storage class, do the following:

 
db:
  persistentDataVolume:
    enabled: true
    storageClassName: "my-storage-class"

Enabling the Management Gateway

The Management Gateway enables two pieces of functionality:

  • Access to REST APIs for managing ReadyNow profiles

  • Cross-region synchronization of ReadyNow profiles

To enable the Management Gateway, set mgmtGateway.enabled to true in values-override.yaml. See Management Gateway Parameters for more information.
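
For example, in values-override.yaml:

 
mgmtGateway:
  enabled: true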

Cleaning Up

To uninstall a deployed Optimizer Hub, run the following command:

 
helm uninstall opthub -n my-opthub
kubectl delete namespace my-opthub