Network and Gateway Requirements
To achieve peak performance and seamless compilation offloading, a robust and high-speed network foundation is essential. The connection between your Optimizer Hub instance and the environments running your Java applications is the backbone of the optimization process.
General Network Requirements
Because Optimizer Hub relies on real-time, bidirectional communication to manage JIT compilation tasks, the stability and speed of this link directly impact application latency and throughput. A reliable network ensures that JVMs can maintain the long-lived gRPC streams necessary for continuous optimization without interruption.
JVMs that run your Java applications and want to use the Optimizer Hub services require unauthenticated access to the Optimizer Hub gateway.
Load Balancing
Use a load balancer or service mesh to set up a high-availability system, optionally with a secondary fallback system. JVMs connecting to Optimizer Hub need a stable, single entry point to communicate with the service.
Benefits of a Load Balancer
A load balancer provides this external access point while also potentially offering benefits like:
- SSL configuration in the load balancer
- Traffic distribution across Optimizer Hub components
- High availability
- Network isolation
- A consistent endpoint for clients regardless of internal pod IP changes
Load Balancer Requirements
- The load balancer must be an application-level load balancer, i.e., it must understand the gRPC protocol (built on top of HTTP/2) and load balance each gRPC request independently.
- The load balancer must not limit the duration of gRPC calls. Optimizer Hub uses streaming gRPC calls, which can last for hours, days, or as long as the JVM stays alive. These long-lived calls must not be terminated.
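As an illustration of these requirements, an NGINX-based load balancer can be configured to speak gRPC to the gateways and to keep streaming calls open. This is only a sketch: the upstream name, addresses, and timeout values are placeholders to adapt to your environment.

```nginx
# Hypothetical NGINX fragment fronting Optimizer Hub gateways.
# gRPC rides on HTTP/2, so the listener must enable it.
upstream opthub_gateways {
    server 10.0.0.11:50051;
    server 10.0.0.12:50051;
}

server {
    listen 50051 http2;

    location / {
        grpc_pass grpc://opthub_gateways;
        # Raise the default 60s idle timeouts so long-lived
        # streaming calls are not cut off.
        grpc_read_timeout 7d;
        grpc_send_timeout 7d;
    }
}
```

Each gRPC request is proxied to the upstream independently, which avoids pinning all traffic from one JVM to a single gateway.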
Configuring the Optimizer Hub Host
As an Optimizer Hub administrator, you must provide users the host (DNS or IP) and optional port of the Optimizer Hub service or the (DNS) load balancer the JVMs must connect to. The JVMs need this for the value in the -XX:OptHubHost=<host>[:<port>] option.
Host for Single Optimizer Hub service
In a setup with a single Optimizer Hub service, you can either add your own load balancer (recommended), or use the included gw-proxy component.
Using your Own Load Balancer
It’s recommended to use your own preferred load balancer, consistent with how you dispatch HTTP traffic to your other applications. In such a case, disable gw-proxy in Optimizer Hub and use your own instance, by adding the following to your values-override.yaml file:
gwProxy:
  enabled: false
Your load balancer must understand gRPC calls, must not pin all traffic to a single gateway instance, and must not interrupt long-lived calls.
If you correctly defined the load-balancer in values-override.yaml as described in Standard Optimizer Hub Installation Procedure on Kubernetes, you can discover the external IP of the service using the following command:
$ kubectl describe service gateway -n my-opthub | grep 'LoadBalancer Ingress:'
LoadBalancer Ingress: internal-add1ff3e1591e4f93a49af3523b68e3b-1321158844.us-west-2.elb.amazonaws.com
JVM customers then connect using the following command:
java -XX:OptHubHost=internal-add1ff3e1591e4f93a49af3523b68e3b-1321158844.us-west-2.elb.amazonaws.com \
-XX:+EnableRNO \
-jar my-app.jar
Using the Included gw-proxy
Note: We recommend using your own load balancer.
The gw-proxy pod, deployed in the Optimizer Hub namespace, is the default load balancer. It uses Envoy as the default gRPC proxy for optimal session balancing. You can find the endpoint of gw-proxy using the following steps:
- Run the following command:

  kubectl -n my-opthub get services

- Look for the gateway service and note the external port corresponding to port 50051 inside the container. This is the port to use for connecting JVMs to this Optimizer Hub cluster.

  service/gateway   NodePort   10.233.15.55   <none>   8080:31951/TCP,50051:30926/TCP   52d

  In this example the port is 30926.

  Note: Only the internal ports 8080 and 50051 in Optimizer Hub are fixed. The external port in each setup is a random value, so you need this lookup to find the port of your Optimizer Hub instance.

  You can change the gRPC port used by the Gateway pod in your values-override.yaml file in case you want to override the default values:

  gateway:
    ports:
      serviceGrpcPort: 50051
      internalGrpcPort: 50052
  cache:
    ports:
      internalGrpcPort: 50071

- Run the kubectl get nodes command and note the IP address or name of any node.

- Concatenate the node IP with the service port to get something like 10.22.20.131:30926. Do not prefix it with http://.

- JVM customers then set the -XX:OptHubHost=host:port flag using the port mapped to 50051:

  java -XX:OptHubHost=10.22.20.131:30926 \
  -XX:+EnableRNO \
  -jar my-app.jar
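If you want to script this lookup, the external port mapped to container port 50051 can be extracted from the service's port list. A minimal sketch, where the helper name and the sample port string are illustrative; on a live cluster you would feed it real kubectl output:

```shell
# Extract the external NodePort mapped to container port 50051 from a
# Kubernetes port mapping such as "8080:31951/TCP,50051:30926/TCP".
extract_opthub_port() {
  echo "$1" | tr ',' '\n' | grep '^50051:' | cut -d: -f2 | cut -d/ -f1
}

# On a live cluster you could also query the nodePort directly, e.g.:
#   kubectl -n my-opthub get service gateway \
#     -o jsonpath='{.spec.ports[?(@.port==50051)].nodePort}'
port=$(extract_opthub_port "8080:31951/TCP,50051:30926/TCP")
echo "$port"
```

With the sample mapping from the example above, this prints 30926.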
Host for High Availability and Failover
When you have multiple Optimizer Hub services to guarantee high-availability (HA) and provide a failover system, you can use the following approaches.
- Use a (DNS) load balancer of your choice, e.g., Route 53.
- Use the readiness state of each Optimizer Hub service by using the Kubernetes check available on /q/health, see Readiness (healthy) API.
- Configure your (DNS) load balancer with the host info of each Optimizer Hub service.
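The readiness check can also be scripted, for example to decide which host to hand out to JVMs. A sketch under assumed names: pick_healthy_host is a hypothetical helper, and the example host names are placeholders for your own deployments.

```shell
# Hypothetical failover helper: probe each Optimizer Hub service's
# readiness API and print the first host that reports healthy.
pick_healthy_host() {
  for host in "$@"; do
    if curl -fsS --max-time 2 "http://${host}/q/health" >/dev/null 2>&1; then
      echo "$host"
      return 0
    fi
  done
  return 1
}

# Example: prefer the primary service, fall back to the secondary.
# pick_healthy_host opthub-primary.example.com:8080 opthub-secondary.example.com:8080
```

If no host passes the check, the helper prints nothing and returns a non-zero status.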
Configuring the Gateway
You must expose the Optimizer Hub gRPC/HTTP endpoints outside the Kubernetes cluster, so that the client JVMs can reach them. Specify the Kubernetes Service type you use as a load balancer in your values-override.yaml file:
gateway:
service:
type: <your-service-type>
Available types:
- ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default, used if you don't explicitly specify a type for a Service. You can expose the Service to the public internet using an Ingress or a Gateway.
- NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.
- LoadBalancer: Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load balancing component. You must provide one, or you can integrate your Kubernetes cluster with a cloud provider.
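With the default ClusterIP type, one way to expose the gateway is an Ingress that understands gRPC. The following is only a sketch using the NGINX Ingress Controller; the host name, namespace, and annotation are assumptions to adapt to your own controller and environment:

```yaml
# Hypothetical Ingress for the Optimizer Hub gateway service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: opthub-gateway
  namespace: my-opthub
  annotations:
    # Tell the NGINX Ingress Controller to speak gRPC (HTTP/2) to the backend.
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  rules:
    - host: opthub.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway
                port:
                  number: 50051
```

JVMs would then connect with -XX:OptHubHost=opthub.example.com, subject to the long-lived gRPC call requirements described under Load Balancer Requirements.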
You may need additional settings, depending on the type of service and your environment. The following is an example for a LoadBalancer in an AWS EKS cluster:
gateway:
service:
type: 'LoadBalancer'
annotations:
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: 'ip'
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: 'preserve_client_ip.enabled=false'
service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'