
Optimizer Hub Architecture Overview

Optimizer Hub is shipped as a Helm chart and a set of docker images to be deployed into a Kubernetes cluster. The Helm chart deploys different components based on the use case.
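
For example, a typical installation adds the chart repository and installs the chart with a values file. The repository URL, chart name, and release name below are illustrative placeholders; check the Optimizer Hub installation instructions for the actual values.

  # Add the Helm repository and install Optimizer Hub into its own namespace.
  # Repository URL, chart name, and release name are illustrative placeholders.
  helm repo add opthub https://example.com/optimizer-hub-helm-charts
  helm repo update
  helm install opthub opthub/optimizer-hub \
    --namespace my-opthub --create-namespace \
    -f values-aws.yaml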

Architecture Overview

Optimizer Hub offers two deployment options: a full installation of all components or a ReadyNow Orchestrator-only installation.

Full Installation

In a full installation, all Optimizer Hub components are available, and the gateway, compile-broker, and cache components are scaled as needed.

Full installation diagram

Remarks:

  • The Management Gateway component is optional, as it depends on your use case. See Management Gateway Parameters for more info.

  • All services use one pod, except Cache, which uses two pods by default.

  • The load balancer is either the gw-proxy included in Optimizer Hub or your own solution. See Configuring Optimizer Hub Host for more info. A sample values sketch follows these remarks.
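
The following Helm values sketch illustrates these remarks. The key names (managementGateway.enabled, cache.replicas, gwProxy.enabled) are illustrative placeholders, not the chart's actual schema; consult the chart's values.yaml for the real keys.

  # Illustrative values overrides; key names are placeholders, check the
  # chart's values.yaml for the actual schema.
  managementGateway:
    enabled: false     # the Management Gateway is optional
  cache:
    replicas: 2        # Cache uses two pods by default
  gwProxy:
    enabled: true      # use the bundled gw-proxy as load balancer, or disable
                       # it and front Optimizer Hub with your own solution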

ReadyNow Orchestrator Only

When only ReadyNow Orchestrator is needed, a reduced set of the Optimizer Hub components is deployed in the Kubernetes cluster.

ReadyNow Orchestrator only diagram
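
As a rough sketch, such a reduced deployment is typically selected through the Helm values. The flag below is a hypothetical placeholder; see the installation instructions for how this mode is actually enabled.

  # Hypothetical values toggle for a ReadyNow Orchestrator-only deployment;
  # the real mechanism depends on the chart version.
  readyNowOrchestratorOnly: true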

Deployment Overview

With the default AWS setup (values-aws.yaml), the deployment is divided into three node types (four if you also want to use the optional monitoring stack). Each node has a role label that is used to set node affinity. If you set up your cluster on AWS EKS using the Azul-provided cluster config file, the nodes are created with these labels.
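
If you create the nodes yourself rather than with the Azul-provided cluster config file, you can apply the role labels manually. The node names below are placeholders; the role values match the node types listed further down.

  # Apply role labels so that the chart's affinity rules can schedule pods
  # on the intended nodes; node names are placeholders.
  kubectl label node ip-10-0-1-10.ec2.internal role=opthubinfra
  kubectl label node ip-10-0-1-11.ec2.internal role=opthubserver
  kubectl label node ip-10-0-1-12.ec2.internal role=opthubcache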

Note
Make sure that the instances on which you run Optimizer Hub have enough CPU to handle your requests. For example, you can use m5.2xlarge instances on AWS and c2-standard-8 instances on Google Cloud Platform.

The nodes in an Optimizer Hub instance are as follows (a sample node-group configuration follows the list):

  • Compile Broker - Performs JIT compilations.

    • AWS node type: role=opthubserver

    • System Requirements: CPU 8, RAM 32GB, HDD 100GB

  • Cache - Stores information about the JVM that the compiler needs to perform compilations.

    • AWS node type: role=opthubcache

    • System Requirements: CPU 8, RAM 32GB, HDD 100GB

    • There is one pod per Cache node. To scale up, create more replicas.

  • Infrastructure - Provides supporting functionality.

    • AWS node type: role=opthubinfra

    • System Requirements: CPU 8, RAM 32GB, HDD 100GB. Make sure the disk connection is fast (use SSD) and that the storage volume is persistent between runs.

    • The pods running on this node are:

      • db

      • gateway

      • storage
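
As an illustration of the node layout above, an eksctl cluster config could define one node group per role. The cluster name, region, and capacities below are placeholders; the Azul-provided cluster config file remains the authoritative source.

  # Illustrative eksctl node groups, one per Optimizer Hub node type.
  # Cluster name, region, and capacities are placeholders.
  apiVersion: eksctl.io/v1alpha5
  kind: ClusterConfig
  metadata:
    name: opthub-cluster
    region: us-east-1
  nodeGroups:
    - name: opthub-infra
      instanceType: m5.2xlarge    # 8 vCPU, 32 GB RAM
      desiredCapacity: 1
      volumeSize: 100             # GB; use fast, persistent SSD storage
      labels:
        role: opthubinfra
    - name: opthub-server
      instanceType: m5.2xlarge
      desiredCapacity: 1
      volumeSize: 100
      labels:
        role: opthubserver
    - name: opthub-cache
      instanceType: m5.2xlarge
      desiredCapacity: 1
      volumeSize: 100
      labels:
        role: opthubcache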

High Availability of Optimizer Hub

Optimizer Hub is designed with High Availability (HA) as a fundamental architectural principle:

  • The system architecture prioritizes continuous uptime and service reliability.

  • Built-in redundancy at multiple levels ensures business continuity.

High Availability in Clusters

HA is guaranteed inside and between clusters.

Inside a Cluster

The nodes inside the Optimizer Hub service have failover built in:

  • Automatic redistribution of workload when a node fails.

  • The system maintains full functionality even if individual nodes crash.

  • A seamless transition between nodes prevents service interruption.

Between Clusters

HA is also integrated in configurations with multiple clusters:

  • Clusters expose health check endpoints to declare their readiness to accept traffic (a sample probe follows this list).

  • You can add a DNS-based load balancer or service mesh to route the requests to the nearest available cluster.

  • Clusters can sync important information.
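
For example, an external routing layer can probe each cluster before sending traffic to it. The hostnames and endpoint path below are placeholders; see the health check API documentation for the actual endpoint.

  # Probe each cluster's health check endpoint before routing traffic to it.
  # Hostnames and the endpoint path are placeholders.
  curl -fsS http://opthub.cluster-a.example.com/health || echo "cluster-a is not ready"
  curl -fsS http://opthub.cluster-b.example.com/health || echo "cluster-b is not ready"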

High Availability Configuration

Follow these recommendations to ensure High Availability (HA) of Optimizer Hub.

  • Install multiple instances of the Optimizer Hub service, for example one per region or availability zone (a sample multi-region install sketch appears at the end of this section).

  • Front the Optimizer Hub service with either a DNS-based load balancer or a service mesh.

  • Let the clients connect to the load balancer or service mesh.

  • Use the health check APIs to route requests only to instances that are ready to handle traffic.

  • Route the requests to the Optimizer Hub service that is nearest to the JVMs.

  • Set up synchronization of ReadyNow profiles.

Note
Cloud Native Compiler artifacts are not synced. These artifacts can easily be regenerated without compromising application performance.
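
For example, the same chart can be installed once per region by targeting different kubectl contexts. Release names, contexts, and the chart reference are placeholders, consistent with the earlier installation sketch.

  # Install one Optimizer Hub instance per region; contexts, release names,
  # and the chart reference are placeholders.
  helm install opthub-us-east opthub/optimizer-hub -f values-aws.yaml \
    --kube-context eks-us-east-1
  helm install opthub-eu-west opthub/optimizer-hub -f values-aws.yaml \
    --kube-context eks-eu-west-1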