Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
andrea_campo

Introduction

 

An OpenTelemetry Prometheus metric exporter has been available in SAP Focused Run since release 4.0 FP01. This pull-based metric exporter reports data passively: it responds to requests from metrics scrapers such as Prometheus.

Focused Run metrics can therefore be stored in Prometheus storage and used by external applications (such as Grafana) to display metrics from several use cases (System Monitoring, Health Monitoring, Real User Monitoring, etc.).

This approach can be used to:

  • Reduce the workload on the Focused Run system (periodic scrape requests from a single Prometheus instance instead of multiple concurrent requests from several users/applications)
  • Store historical data
  • Improve the performance of Grafana dashboards or third-party applications that need high volumes of raw data

In this article we will describe how to use and configure the new functionality using several examples.

Before that, we need to review the architecture of the Analytics API in Focused Run.

 

Focused Run Analytics API

 

The Focused Run Analytics API enables customers to build dashboards and reports aggregating all types of data managed by most SAP Focused Run applications.

The Analytics API comes with REST and OData endpoints exposing analytics data.

 

[Figure: Focused Run Analytics API with REST and OData endpoints]

In this article we will focus on the API relevant for the Prometheus exporter: REST.

 

Advanced Analytics REST API

 

The FRUN AAI REST interface (available since SAP Focused Run 3.0 FP01) provides external access to most of the SAP Focused Run metrics in time-series and, depending on the data source, table formats.

[Figure: Advanced Analytics REST API]

 

This interface can be used, among other things, by the SAP alm-plug-in-for-grafana to build Dashboards displaying metrics from SAP Focused Run (or SAP Cloud ALM).

The plugin is published as a public project on GitHub. The source code can be downloaded, and the plugin must be built before it can be installed on a Grafana instance.

 

[Figure: SAP alm-plug-in-for-grafana connecting Grafana to Focused Run]

When using the plugin, multiple requests are sent to the Focused Run REST endpoint. Depending on the type and number of dashboards, visualizations, queries, and users, performance issues or unwanted workload on the Focused Run System may arise.

For these reasons it might be appropriate to include an intermediate entity, acting as a sort of caching proxy, to store the metrics configured in the most complex (or most used) dashboards.

This intermediate entity can be a metric database like Prometheus.

 

Prometheus and Focused Run

 

Prometheus is an open-source metric database and monitoring tool, designed to collect and store data about the performance and behaviour of computer systems and applications.
Prometheus can periodically "scrape" Focused Run metrics and store them in its time series database.
The Prometheus data source can then be used to build dashboards in Grafana, as shown below:

 

[Figure: Grafana dashboard based on the Prometheus data source]

 

Before Focused Run 4.0 FP01, this did not work out of the box: a "proxy" of some kind was needed to read metrics from the Focused Run REST endpoint and translate them into a format that Prometheus could understand.

 

[Figure: Architecture with an intermediate "proxy" between Focused Run and Prometheus]

 

This "proxy" could be either an "OpenTelemetry Collector" (with a custom receiver) or a dedicated application.

Starting with Focused Run 4.0 FP01, no external collector is needed between the Prometheus server and the Focused Run system, because a dedicated exporter is now built into the REST API of Focused Run. The ALM plug-in for Grafana can, of course, still be used to display metrics directly from the Focused Run system.

 

[Figure: Architecture with the exporter built into the Focused Run REST API]

 

Focused Run Exporter

 

Overview

The outbound metrics API translates the results of the SAP Focused Run Analytics REST API into output compatible with the OpenTelemetry Prometheus Metrics Exporter format in PULL mode. The number of data points returned by the API depends on the data provider and filters (mainly drilldown options) used during the API calls.

Most of the Focused Run Advanced Analytics data providers are supported (System Monitoring, Real User Monitoring, Events, etc.).

The exporter supports the following OTel metric format:

https://opentelemetry.io/docs/reference/specification/metrics/data-model/#gauge

 

URL Format

Metric data is retrieved from SAP Focused Run via a dedicated HTTP API endpoint:

 

https://<frun host>:<frun port>/sap/frun/fi/dp/metrics

 

 

Authorizations

Access to the metrics API requires a role with the S_ICF authorization object and the following fields/values:

  • ICF_FIELD = Internet Communication Framework Service
  • ICF_VALUE = AAI_REST

The SAP_FRN_FI* standard roles include this authorization object by default.

Depending on the data provider accessed from the API, additional use-case-specific authorizations/roles might be required.

A request without proper authorization for the SAP Focused Run REST service always receives an authorization exception (HTTP 403).

 

Focused Run Exporter Details

Query Parameters

 

PROVIDER (Mandatory)

The data provider accessed through the metrics API.

Supported values:

  • DP_ALERT (Alert Reporting)
  • DP_CSA (Configuration & Security Analysis)
  • DP_AIM (Document Monitoring (AIM))
  • DP_OUTAGES (Downtime Monitoring)
  • DP_EVENTS (Events)
  • DP_EXCEPTION_MON (Exception Monitoring)
  • DP_HEALTH_MONITORING (Health Monitoring - Application Check)
  • DP_OCMON (Health Monitoring - Availability)
  • DP_HEALTH_MON_CLOUD (Health Monitoring - Cloud Services)
  • DP_JAM (Job & Automation Monitoring)
  • DP_JOBMON (Job Monitoring (ABAP Only))
  • DP_OPEN_KPI_STORE (Open KPI Store)
  • DP_RUM (Real User Monitoring)
  • DP_RUM_REQUEST_TYPES (Real User Monitoring - Request Type Overview)
  • DP_SAM (Service Availability Management)
  • DP_STAT_RECORDS (Statistical Records)
  • DP_SUM (Synthetic User Monitoring)
  • DP_SYSMON (System Monitoring)

FILTERS (Mandatory)

List of filters to apply when retrieving the metrics from the data provider. Filter names are passed as a comma-separated list in the filters parameter.

  • Example: filters=GUID,METRIC_NAMES

Filter values are passed as URL parameters and can themselves be comma-separated lists.

  • Example:
    • GUID=fa162e14-f818-1ed6-96c2-9fe1f3d76a19
    • METRIC_NAMES=DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL

Default: N/A

PERIOD (Mandatory)

Specifies the aggregation period (e.g. C2H), used in combination with the resolution to determine the number of data points returned. The response payload contains all data points for the selected period and resolution. If omitted, only the last data point is returned, according to the resolution and shift parameters.

  • Semantic format with the syntax "[Prefix][Number][Suffix]", where:
    • Prefix:
      • C: Current
      • L: Last
    • Number: any integer
    • Suffix:
      • H: Hours
      • D: Days
      • W: Weeks
      • M: Months
  • Examples:
    • C10D
    • L2H

RESOLUTION (Mandatory)

Specifies the aggregation resolution (e.g. 15 minutes) to apply to the selected period.

  • Granularity of the query:
    • R: use the smallest granularity available
    • <X>Mi for X (5, 10, 15) minutes
    • H: hour

Default: H

NAME (Mandatory)

Text used to prefix the name of the metrics returned in the API payload.

METHOD (Optional)

Specifies the aggregation function.

Supported methods:

  • avg (default value)
  • last

A timestamp is provided only with the "last" method.

SHIFT (Optional)

Shifts the start time and the end time by a 'shift' factor of the given resolution.

Default: 0
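Putting these parameters together, a metrics URL can be assembled programmatically. The following Python sketch uses only the standard library; the host name and parameter values are hypothetical placeholders:

```python
from urllib.parse import urlencode

# Hypothetical Focused Run host; replace with your own <frun host>:<frun port>
BASE_URL = "https://frun.example.com:443/sap/frun/fi/dp/metrics"

# Example query parameters as described above
params = {
    "provider": "DP_SYSMON",
    "filters": "GUID,METRIC_NAMES",
    "GUID": "fa163e24-f817-1ed6-96c2-9fe1f3d76a18",
    "METRIC_NAMES": "DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL",
    "period": "C2H",              # current two hours
    "resolution": "5mi",          # 5-minute resolution
    "name": "system_monitoring",  # prefix for the returned metric names
    "method": "last",
    "shift": "0",
}

# urlencode percent-encodes the commas, which is a valid equivalent of the literal URL
url = f"{BASE_URL}?{urlencode(params)}"
print(url)
```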

 

Example

The following call demonstrates access to the "System Monitoring" data provider using two metrics, DIALOG_RESPONSE_TIME and ABAP_INST_ASR_DIA_TOTAL, and one system represented by its global system ID (GUID). The time period is "current two hours" with a 5-minute resolution:

 

https://<frun host>:<frun port>/sap/frun/fi/dp/metrics?
provider=DP_SYSMON&
filters=GUID,METRIC_NAMES&
GUID=fa163e24-f817-1ed6-96c2-9fe1f3d76a18&
METRIC_NAMES=DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL&
period=C2H&
resolution=5min&
name=system_monitoring&
method=last&
shift=0

 

This is the typical response from the exporter:

 

# TYPE system_monitoring gauge
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",managed_obj_name="FRNADM (ABAP)",metric_id="4926BF3A91934B10E10000000A42193A",root_context_name="FRNADM (ABAP)",root_context_id="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",drilldown="DETAIL"} 127.22 1706865000000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",managed_obj_name="FRNADM (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FRNADM (ABAP)",root_context_id="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",drilldown="DETAIL"} 424.67 1706864400000
# EOF
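The response uses the Prometheus text exposition format. As a quick illustration (this is not part of the exporter, just a sketch), a few lines of Python can parse such a response into structured samples:

```python
import re

# One sample line looks like: name{label="value",...} value timestamp
SAMPLE_RE = re.compile(r'^(\w+)\{(.*)\}\s+([-\d.]+)(?:\s+(\d+))?$')
LABEL_RE = re.compile(r'(\w+)="([^"]*)"')

def parse_exposition(text):
    """Parse Prometheus text-format lines into (name, labels, value, timestamp) tuples."""
    samples = []
    for line in text.strip().splitlines():
        if line.startswith("#"):  # skip "# TYPE" and "# EOF" comment lines
            continue
        m = SAMPLE_RE.match(line.strip())
        if not m:
            continue
        name, raw_labels, value, ts = m.groups()
        labels = dict(LABEL_RE.findall(raw_labels))
        samples.append((name, labels, float(value), int(ts) if ts else None))
    return samples

# Shortened copy of the exporter response shown above
response = '''# TYPE system_monitoring gauge
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",managed_obj_name="FRNADM (ABAP)"} 127.22 1706865000000
# EOF'''

for name, labels, value, ts in parse_exposition(response):
    print(name, labels["metric_names"], value, ts)
```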

 

There are at least two ways to get the list of filters and values for a specific data provider:

  1. Configure a query in the OCC Dashboard and check the query configuration in "expert mode".
  2. Send a request to the REST interface to get the full list of filters and values, for example:

 

curl -X POST -H "Content-Type: application/json" -d '{"providerName": "DP_SYSMON", "providerVersion": ""}' https://<frunhost>:<port>/sap/frun/fi/dp/providers/filters

 

 

Prometheus Configuration

We won't go into detail about the full Prometheus setup, as extensive documentation, including a generic example of the scrape job configuration file, is available on the official Prometheus site.

Once a query is defined with all the needed parameters, as in the previous example, the corresponding URL can be scraped periodically by configuring jobs in Prometheus. All the parameters (FILTERS, PROVIDER, etc.) go into the "params:" section of the job definition.

Here is an example of a Job configuration:

 

  - job_name: alm_system_monitoring_job  # name of the scrape job, displayed as a target in Prometheus
    scrape_timeout: 15s
    scrape_interval: 1m
    static_configs:
      - targets: ['<frun host>:<frun port>']  # hostname and port of the Focused Run system
    metrics_path: /sap/frun/fi/dp/metrics
    params:
      provider: [DP_SYSMON]  # data provider used during the scrape, in this case "System Monitoring"
      filters: [GUID,METRIC_NAMES]  # parameters used for the query; they must be declared here before they can be used below. These are the same parameters shown in the expert view of the OCC Dashboard
      GUID: [fa163e24-f817-1ed6-96c2-9fe1f3d76a19]
      METRIC_NAMES: [DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL]
      period: [C2H]
      resolution: [5mi]
      name: [alm_monitoring]  # name under which the metrics are recorded in Prometheus
      method: [last]  # the last data point of the series is selected and the exporter returns a timestamp along with the value. With "avg", the average of the data points is calculated and no explicit timestamp is returned; Prometheus then uses the timestamp of the actual scrape
    scheme: https
    honor_timestamps: true  # the timestamps returned by the FRUN exporter are actually used by Prometheus
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: FRUN_USER  # FRUN user used for the connection
      password_file: /prometheus/scrape.password  # a password file is used for authentication

 

After restarting the Prometheus server (or forcing a configuration reload via the management API), the new job should be visible in the list of targets:

[Screenshot: the new job in the Prometheus targets list]

Metrics can be checked directly using the expression browser available at http(s)://<prometheus host>:<prometheus port>/graph:

[Screenshot: Prometheus expression browser]

 

Configuring Multiple Systems

In the previous example we used a single system to retrieve two metrics. What if we want to query a group of systems for the same data provider and metrics? In this case, a filter named SCOPE_VIEW can be used.

These views are created and saved from the Scope Selection in the Focused Run System Monitoring application.

[Screenshot: Scope Selection in System Monitoring]

They are simply a collection of LMDB attributes that can be used to build a dynamic list of systems, for example:

[Screenshot: scope view defined from LMDB attributes]

Once saved, they are accessible through the REST API by sending a POST request to the filters endpoint:

 

POST https://<frun host>:<frun port>/sap/frun/fi/dp/providers/filters
#BODY
{
     "providerName": "DP_SYSMON", "providerVersion": ""
}

 

The response will include all the possible values for the SCOPE_VIEW filter:

 

  {
    "key": "SCOPE_VIEW",
    "name": "Scope Views",
    "description": "",
    "isAttribute": true,
    "type": "attribute",
    "values": [
      {
        "key": "id_1505742290679_200_filterBar",
        "label": "Productive Systems (Active)"
      },
      {
        "key": "id_1698745198775_505_filterBar",
        "label": "Systems Starting with F"
      },
      {
        "key": "id_1101937076808_2563_filterBar",
        "label": "ABAP System scope F"
      },
      {
        "key": "id_1279999413032_276_filterBar",
        "label": "Private-Save-View"
      },
      {
        "key": "id_1680015015490_273_filterBar",
        "label": "ABAP Systems"
      },
      {
        "key": "id_1680010289524_324_filterBar",
        "label": "Java Systems"
      }
    ],
    "isMultiple": null,
    "triggerRefresh": null,
    "group": "Scope Selection"
  }
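For automation, the technical key that belongs to a given scope-view label can be extracted from this response. The following Python sketch (the function name is ours, and the JSON is a trimmed copy of the response above) shows one way to do it:

```python
import json

# Trimmed copy of the SCOPE_VIEW filter object returned by the filters endpoint
scope_view_filter = json.loads('''
{
  "key": "SCOPE_VIEW",
  "values": [
    {"key": "id_1505742290679_200_filterBar", "label": "Productive Systems (Active)"},
    {"key": "id_1698745198775_505_filterBar", "label": "Systems Starting with F"}
  ]
}
''')

def scope_view_key(filter_obj, label):
    """Return the technical key of the scope view with the given label, or None."""
    for entry in filter_obj["values"]:
        if entry["label"] == label:
            return entry["key"]
    return None

print(scope_view_key(scope_view_filter, "Systems Starting with F"))
# id_1698745198775_505_filterBar
```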

 

Here is an example of a query using the SCOPE_VIEW filter; the key corresponds to the "Systems Starting with F" view returned by the filters request above:

 

https://<frun host>:<frun port>/sap/frun/fi/dp/metrics?
provider=DP_SYSMON&
filters=SCOPE_VIEW,METRIC_NAMES&
SCOPE_VIEW=id_1698745198775_505_filterBar&
METRIC_NAMES=DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL&
period=C2H&
resolution=5min&
name=system_monitoring&
method=last&
shift=0

 

The result will be similar to this:

 

# TYPE system_monitoring gauge
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",managed_obj_name="FPDADM (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FPDADM (ABAP)",root_context_id="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",drilldown="DETAIL"} 103.84 1706881080000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",managed_obj_name="FPDADM (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FPDADM (ABAP)",root_context_id="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",drilldown="DETAIL"} 178.79 1706880780000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="98f2b303-2cd3-1edd-898e-687c0dac2a45",managed_obj_name="FPDADMPI (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FPDADMPI (ABAP)",root_context_id="98f2b303-2cd3-1edd-898e-687c0dac2a45",drilldown="DETAIL"} 3.04 1706880960000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",managed_obj_name="FPPADM (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FPPADM (ABAP)",root_context_id="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",drilldown="DETAIL"} 68.14 1706880960000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",managed_obj_name="FPPADM (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FPPADM (ABAP)",root_context_id="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",drilldown="DETAIL"} 38.94 1706880720000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",managed_obj_name="FFPABA (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FFPABA (ABAP)",root_context_id="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",drilldown="DETAIL"} 78.64 1706881080000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",managed_obj_name="FFPABA (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FFPABA (ABAP)",root_context_id="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",drilldown="DETAIL"} 86.49 1706880600000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",managed_obj_name="FQPSYS (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FQPSYS (ABAP)",root_context_id="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",drilldown="DETAIL"} 129.64 1706880900000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",managed_obj_name="FQPSYS (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FQPSYS (ABAP)",root_context_id="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",drilldown="DETAIL"} 540.13 1706880780000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="6cae8b74-a13e-1ee1-adde-09c881a053a6",managed_obj_name="FQTSYS (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FQTSYS (ABAP)",root_context_id="6cae8b74-a13e-1ee1-adde-09c881a053a6",drilldown="DETAIL"} 62.75 1706881020000
# EOF

 

The job configuration in Prometheus will include the SCOPE_VIEW filter in the "params:" section:

 

  - job_name: alm_system_monitoring_job  # name of the scrape job, displayed as a target in Prometheus
    scrape_timeout: 15s
    scrape_interval: 1m
    static_configs:
      - targets: ['<frun host>:<frun port>']  # hostname and port of the Focused Run system
    metrics_path: /sap/frun/fi/dp/metrics
    params:
      provider: [DP_SYSMON]  # data provider used during the scrape, in this case "System Monitoring"
      filters: [SCOPE_VIEW,METRIC_NAMES]  # parameters used for the query; they must be declared here before they can be used below. These are the same parameters shown in the expert view of the OCC Dashboard
      SCOPE_VIEW: [id_1698745198775_505_filterBar]
      METRIC_NAMES: [DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL]
      period: [C2H]
      resolution: [5mi]
      name: [alm_monitoring]  # name under which the metrics are recorded in Prometheus
      method: [last]  # the last data point of the series is selected and the exporter returns a timestamp along with the value. With "avg", the average of the data points is calculated and no explicit timestamp is returned; Prometheus then uses the timestamp of the actual scrape
    scheme: https
    honor_timestamps: true  # the timestamps returned by the FRUN exporter are actually used by Prometheus
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: FRUN_USER  # FRUN user used for the connection
      password_file: /prometheus/scrape.password  # a password file is used for authentication

 

 

Key Takeaways

 

  • A Prometheus exporter has been available in SAP Focused Run since release 4.0 FP01
  • The exporter can be used to store Focused Run metrics in a metric database like Prometheus
  • Metrics are exposed in a format that a Prometheus scraper can process without the help of external collectors
  • The exporter is available on the Focused Run system at the following endpoint: https://<frun host>:<frun port>/sap/frun/fi/dp/metrics
  • Metrics can be tested directly in a browser using the appropriate URL parameters (PROVIDER, FILTERS, RESOLUTION, etc.)
  • Jobs must be configured in Prometheus to scrape the data, using specific parameters in the "params:" section
  • Multiple systems can be covered by one job using the SCOPE_VIEW filter

 

 

1 Comment
wenjie_he
Hi andrea_campo,

Thanks for your blog. I have two questions:

1) How can all metrics be pulled into Prometheus?

2) In the future, will FRUN be able to store metric data directly in Prometheus or another time series database?