Monitoring Optimizer Hub

You can monitor your Optimizer Hub using the standard Kubernetes monitoring tools: Prometheus and Grafana. Optimizer Hub components are already configured to expose key metrics for scraping by Prometheus.

In your production systems, you will likely want to use your existing Prometheus and Grafana instances to monitor Optimizer Hub. If you are only evaluating Optimizer Hub, you may want to install a separate Prometheus and Grafana instance dedicated to monitoring your test instance of Optimizer Hub.

The instructions in this section assume you have Prometheus and Grafana available, or that you install them within your Kubernetes cluster.
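Before wiring up Prometheus, you can spot-check that a component is actually exposing metrics. The sketch below assumes a namespace `my-opthub`, a service named `compile-broker`, and a metrics port of 8080 with the standard `/metrics` path; all of these are assumptions to adjust to your deployment.

```shell
# Port-forward a component's service and peek at its Prometheus metrics.
# Namespace, service name, and port are assumptions; adjust to your cluster.
check_metrics() {
  local ns="$1" svc="$2" port="$3"
  kubectl -n "$ns" port-forward "svc/$svc" "$port:$port" &
  local pf_pid=$!
  sleep 2
  # Metrics are served in the standard Prometheus text format.
  curl -s "http://localhost:$port/metrics" | head -n 5
  kill "$pf_pid"
}

# check_metrics my-opthub compile-broker 8080
```

If the command prints metric lines, the component is ready to be scraped by your Prometheus instance.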

Retrieving Optimizer Hub Logs

All Optimizer Hub components, including third-party ones, log some information to stdout. These logs are very important for diagnosing problems.

You can extract individual logs with the following command:

kubectl -n my-opthub logs {pod}

However, by default Kubernetes keeps only the last 10 MB of logs per container, so in a cluster under load important diagnostic information can quickly be overwritten by subsequent logs.
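If you need a quick snapshot before logs rotate away, you can dump the current logs of every pod in the namespace to files. This is a minimal sketch; the namespace `my-opthub` matches the example above, and the output directory is arbitrary.

```shell
# Snapshot the current logs of every pod in a namespace into one file
# per pod, so they survive container log rotation.
dump_logs() {
  local ns="$1" outdir="$2"
  mkdir -p "$outdir"
  for pod in $(kubectl -n "$ns" get pods -o name); do
    # "pod/compile-broker-xyz" -> "compile-broker-xyz.log"
    kubectl -n "$ns" logs "$pod" > "$outdir/${pod#pod/}.log"
  done
}

# dump_logs my-opthub ./opthub-logs
```

This is a one-off snapshot, not a replacement for proper log aggregation as described below.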

You should configure log aggregation for all Optimizer Hub components, so that logs are moved to persistent storage and can be extracted when an issue needs to be analyzed. You can use any log aggregation tool; one suggested option is Loki. You can query Loki logs using the logcli tool.

Here are some common commands you can run to retrieve logs:

  • Find the host and port where Loki is listening

    export LOKI_ADDR=http://{ip-address}:{port}
  • Get logs of all pods in the selected namespace

    logcli query --since 24h --forward --limit=10000 '{namespace="zvm-dev-3606"}'
  • Get logs of a single application in the selected namespace

    logcli query --since 24h --forward --limit=10000 '{namespace="zvm-dev-3606", app="compile-broker"}'
  • Get logs of a single pod in the selected namespace

    logcli query --since 24h --forward --limit=10000 '{namespace="zvm-dev-3606",pod="compile-broker-5fd956f44f-d5hb2"}'
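The queries above differ only in their label selector, so they are easy to wrap in a small helper. This sketch assumes `LOKI_ADDR` is already exported as shown earlier; the namespace and app names are just the examples from above.

```shell
# Convenience wrapper for the per-application logcli query shown above.
# Fetches the last 24h of logs for one app in a namespace.
# Assumes LOKI_ADDR is already exported.
opthub_logs() {
  local ns="$1" app="$2"
  logcli query --since 24h --forward --limit=10000 \
    "{namespace=\"$ns\", app=\"$app\"}"
}

# opthub_logs zvm-dev-3606 compile-broker
```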

Extracting Compilation Artifacts

Optimizer Hub stores a record of every compilation request and its processing result in a Compilation Index. It also stores logs of compiler engine executions; by default, these logs include information only about failed compilations.

You can retrieve information from the Compilation Index using a special REST endpoint on the gateway:

  • /testing/compilations — returns JSON containing all compilations performed in the life of the server

  • /testing/compilations?vmid={VM_ID} — returns the compilation history for a single VM identified by VM_ID

  • /testing/diagnostic-dump — returns a ZIP archive containing both compilation metadata and compiler engine logs (including crashdumps and coredumps if present) for all compilations that the service ever performed

  • /testing/diagnostic-dump?vmid={VM_ID} — the same as above, but only for a single VM identified by VM_ID
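These endpoints can be queried with any HTTP client. The sketch below uses curl against the gateway; `localhost:8080` is an assumption standing in for wherever your gateway is reachable (for example via kubectl port-forward), and the VM ID is the example value shown later in this section.

```shell
# Fetch the compilation history (JSON) and the diagnostic dump (ZIP)
# for one VM from the gateway's REST endpoints.
# The gateway host:port is an assumption; substitute your own.
fetch_compilation_data() {
  local gateway="$1" vm_id="$2"
  curl -s "http://$gateway/testing/compilations?vmid=$vm_id" -o compilations.json
  curl -s "http://$gateway/testing/diagnostic-dump?vmid=$vm_id" -o diagnostic-dump.zip
}

# fetch_compilation_data localhost:8080 4f762530-8389-4ae9-b64a-69b1adacccf2
```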

You can find the VM ID in connected-compiler-%p.log:

# Log command-line option
-Xlog:concomp=info:file=connected-compiler-%p.log::filesize=500M:filecount=20

# Example:
[0.647s][info ][concomp] [ConnectedCompiler] received new VM-Id: 4f762530-8389-4ae9-b64a-69b1adacccf2

It is recommended to query these endpoints only with the vmid={VM_ID} parameter, as returning information for the entire history of the service can degrade Optimizer Hub performance.
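Since the VM ID lives inside the connected-compiler log, a small helper can pull it out for use as the vmid= parameter. This is a sketch assuming the log line format shown above; the log file name is whatever `-Xlog` produced for your process.

```shell
# Extract the VM ID from a connected-compiler-%p.log file.
# Matches the "received new VM-Id: <uuid>" line shown above.
extract_vm_id() {
  grep -o 'received new VM-Id: [0-9a-f-]*' "$1" | head -n 1 | awk '{print $NF}'
}

# extract_vm_id connected-compiler-12345.log
```

The extracted ID can then be passed directly to the /testing endpoints as the vmid= parameter.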