Optimizer Hub Architecture Overview

Optimizer Hub is shipped as a Helm chart and a set of Docker images to be deployed into a Kubernetes cluster. The Helm chart deploys different components based on the use case.
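As an illustration, a typical installation might look like the commands below. The repository URL, release name, namespace, and override file are placeholders; consult the installation instructions for the exact values for your version.

  helm repo add opthub-helm https://azulsystems.github.io/opthub-helm-charts
  helm repo update
  kubectl create namespace my-opthub
  helm install opthub opthub-helm/azul-opthub -n my-opthub -f values-override.yaml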

Architecture Overview

Optimizer Hub offers two deployment options: a full installation of all components or a ReadyNow Orchestrator-only installation.

Full Installation

In a full installation, all Optimizer Hub components are deployed, and the gateway, compile-broker, and cache components are scaled as needed.

Full installation diagram

Remarks:

  • All services use one pod, except Cache, which uses two pods by default.

  • The load balancer is either your own solution (recommended), or the optional gw-proxy included in Optimizer Hub. See Configuring Optimizer Hub Host for more info.
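JVMs then connect to that single load-balanced endpoint rather than to individual pods. As a sketch, assuming the -XX:OptHubHost JVM option of Azul Zing builds of OpenJDK (the host name and port here are placeholders):

  java -XX:OptHubHost=opthub.example.com:50051 -jar my-app.jar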

ReadyNow Orchestrator Only

When only ReadyNow Orchestrator is needed, a reduced set of the Optimizer Hub components is deployed in the Kubernetes cluster.
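A minimal sketch of such a deployment, assuming the chart provides a values override that disables the compilation components (the file name below is illustrative; see the installation instructions for the actual override):

  # deploy only the ReadyNow Orchestrator components
  helm install opthub opthub-helm/azul-opthub -n my-opthub \
    -f values-disable-compiler.yaml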

ReadyNow Orchestrator only diagram

Please follow these guidelines to set up your Kubernetes environment.

Kubernetes Pods

The pod sizes are specified in the values.yaml file that is part of the Helm chart:

  • Gateway: CPU 7, RAM 14GB

  • Compile Broker: CPU 7, RAM 28GB

  • Cache: CPU 7, RAM 28GB

  • gwProxy: CPU 7, RAM 1GB

  • Management Gateway: CPU 2, RAM 2GB

  • Operator: CPU 1, RAM 2GB

Requirements for ephemeral storage (temporary storage allocated to a Kubernetes pod; it is non-persistent and exists only for the lifetime of the pod):

  • Compile Broker: 8GB

  • All other pods: 1GB
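These figures map onto standard Kubernetes resource requests. For example, the Compile Broker sizing above corresponds roughly to the following pod resources fragment (the surrounding values.yaml keys may differ per chart version; resources/requests is the standard Kubernetes form):

  resources:
    requests:
      cpu: "7"
      memory: 28Gi
      ephemeral-storage: 8Gi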

Kubernetes Nodes

The underlying Kubernetes nodes (either cloud instances or physical computers) have to be large enough to fit one or more of the pods. This means they need to provide a multiple of 8 vCores with 4GB of RAM per vCore. For example: 8 vCores with 32GB, 16 vCores with 64GB, etc.

Note
Ensure the instances on which you run Optimizer Hub have enough CPU to handle your requests. For example, use m6 or m7 instances on AWS, and c2-standard-8 instances on Google Cloud Platform.
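For example, on Google Cloud Platform a dedicated node pool of c2-standard-8 instances (8 vCores, 32GB) could be created as follows; the cluster and pool names are placeholders:

  gcloud container node-pools create opthub-pool \
    --cluster=my-cluster \
    --machine-type=c2-standard-8 \
    --num-nodes=3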

High Availability of Optimizer Hub

Optimizer Hub is designed with High Availability (HA) as a fundamental architectural principle:

  • The system architecture prioritizes continuous uptime and service reliability.

  • Built-in redundancy at multiple levels ensures business continuity.

High Availability in Clusters

HA is guaranteed inside and between clusters.

Inside a Cluster

The nodes inside the Optimizer Hub service have failover built-in:

  • Automatic redistribution of workload when a node fails.

  • The system maintains full functionality even if individual nodes crash.

  • A seamless transition between nodes prevents service interruption.

Between Clusters

HA is also integrated in configurations with multiple clusters:

  • Clusters have health check endpoints to declare their readiness to accept traffic (see the sketch after this list).

  • You can add a DNS-based load balancer or service mesh to route the requests to the nearest available cluster.

  • Clusters can synchronize important information, such as ReadyNow profiles.
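As a sketch, a load balancer health probe can poll each cluster's readiness endpoint and route traffic only to clusters that respond successfully. The path below is illustrative; see the health check API documentation for the actual endpoint:

  # fails with a non-zero exit code unless the cluster reports ready (HTTP 2xx)
  curl -f http://opthub.us-east.example.com/q/health/ready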

High Availability Configuration

Follow these recommendations to ensure High Availability (HA) of Optimizer Hub.

  • Install multiple instances of the Optimizer Hub service, for example, one per region or availability zone.

  • Front the Optimizer Hub service with either a DNS-based load balancer or a service mesh.

  • Let the clients connect to the load balancer or service mesh.

  • Use the health check APIs to only route requests to instances that are ready to handle traffic.

  • Route the requests to the Optimizer Hub service that is nearest to the JVMs.

  • Set up synchronization of ReadyNow profiles (a configuration sketch follows the note below).

Note
Cloud Native Compiler artifacts are not synced. They can be easily regenerated without compromising application performance.
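As an illustration, profile synchronization between clusters could be configured with a values.yaml fragment along the following lines. These keys are illustrative only; consult the ReadyNow Orchestrator configuration reference for the supported settings:

  # illustrative only: sync ReadyNow profiles with a peer cluster
  readyNowOrchestrator:
    crossRegionSync:
      enabled: true
      peers:
        - https://opthub.eu-west.example.com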