An OpenTelemetry Prometheus metric exporter is available in SAP Focused Run since release 4.0 FP01. This pull-based exporter reports metrics passively by responding to requests from scrapers such as Prometheus.
Focused Run metrics can therefore be stored in the Prometheus storage and used by external applications (such as Grafana) to display metrics from several use cases (System Monitoring, Health Monitoring, Real User Monitoring, etc.).
In this article we describe how to use and configure this new functionality, using several examples.
Before that, we need to review the architecture of the Analytics API in Focused Run.
The Focused Run Analytics API enables customers to build dashboards and reports aggregating all types of data managed by most of SAP Focused Run applications.
The Analytics API provides REST and OData endpoints exposing analytics data.
In this article we will focus on the relevant API for the Prometheus Exporter: REST.
The FRUN AAI REST interface (available since SAP Focused Run 3.0 FP01) provides external access to most of the SAP Focused Run metrics in time-series and, depending on the data source, table formats.
This interface can be used, among other things, by the SAP alm-plug-in-for-grafana to build Dashboards displaying metrics from SAP Focused Run (or SAP Cloud ALM).
The plugin is published as a public project on GitHub. The source code can be downloaded, and the plugin must be built before it can be installed on a Grafana instance.
When using the plugin, multiple requests are sent to the Focused Run REST endpoint. Depending on the type and number of dashboards, visualizations, queries, and users, performance issues or unwanted workload on the Focused Run System may arise.
For these reasons it might be appropriate to include an intermediate entity, acting as a sort of caching proxy, to store metrics configured in the most complex (or used) dashboards.
This intermediate entity can be a metric database like Prometheus.
Prometheus is an open-source metric database and monitoring tool. It is designed to collect and store data about the performance and behaviour of computer systems and applications.
Prometheus can periodically “scrape” Focused Run metrics and store them in its time series database.
The Prometheus data source can then be used to build dashboards in Grafana, as shown below:
Before Focused Run 4.0 FP01, this did not work out of the box, because some kind of "proxy" was needed to read metrics from the Focused Run REST endpoint and translate them into a format that Prometheus could "understand".
This "proxy" could be either an "Open Telemetry Collector" (with a custom receiver) or a dedicated application.
Starting with Focused Run 4.0 FP01, there is no need for an external collector between the Prometheus server and the Focused Run system, because a dedicated exporter is now built into the REST API of Focused Run. The ALM plug-in for Grafana can, of course, still be used to display metrics directly from the Focused Run system.
The outbound metrics API translates the results of the SAP Focused Run Analytics REST API into output compatible with the OpenTelemetry Prometheus Metrics Exporter format in PULL mode. The number of data points returned by the API depends on the data provider and filters (mainly drilldown options) used during the API calls.
Most of the Focused Run Advanced Analytics data providers are supported (System Monitoring, Real User Monitoring, Events etc).
The exporter supports the following OTel metrics format:
https://opentelemetry.io/docs/reference/specification/metrics/data-model/#gauge
Metric data are retrieved from SAP Focused Run via a dedicated HTTP API end-point:
https://<frun host>:<frun port>/sap/frun/fi/dp/metrics
Access to the metrics API requires a role with the S_ICF authorization object and the following fields/values:
SAP_FRN_FI* standard roles include the authorization object by default.
Depending on the data provider accessed through the API, additional use-case-specific authorizations or roles may be required.
A request without proper authorization for SAP Focused Run REST service always receives an authorization exception (HTTP 403).
| Name | Description | Comments |
|---|---|---|
| PROVIDER | The data provider accessed through the metrics API (e.g. DP_SYSMON). | Mandatory |
| FILTERS | List of filters to apply when retrieving the metrics from the data provider. Filter names are passed as a comma-separated list in the filters parameter; the corresponding values are passed as separate URL parameters and can themselves be comma-separated lists. | Mandatory |
| PERIOD | Aggregation period (e.g. C2H) used in combination with the resolution to determine the number of data points returned. The response payload contains all data points for the selected period and resolution. If omitted, only the last data point is returned, according to the resolution and shift parameters. | Mandatory |
| RESOLUTION | Aggregation resolution (e.g. 15min) applied to the selected period. | Default: H. Mandatory |
| NAME | Text used to prefix the names of the metrics returned in the API payload. | Mandatory |
| METHOD | Aggregation function. Supported methods include last and avg. A timestamp is provided only with the last method. | Optional |
| SHIFT | Shift the start time and the end time by a 'shift' factor of the given resolution. | Default: 0. Optional |
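To illustrate how these parameters combine into a request, here is a small Python sketch that builds the metrics URL; the host, GUID, and metric names are placeholders taken from the examples in this article:

```python
from urllib.parse import urlencode

# Placeholder base URL -- replace with your Focused Run host and port.
BASE = "https://frun.example.com:443/sap/frun/fi/dp/metrics"

params = {
    "provider": "DP_SYSMON",                  # mandatory: data provider
    "filters": "GUID,METRIC_NAMES",           # mandatory: declared filter names
    "GUID": "fa163e24-f817-1ed6-96c2-9fe1f3d76a18",
    "METRIC_NAMES": "DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL",
    "period": "C2H",                          # current two hours
    "resolution": "5min",
    "name": "system_monitoring",              # metric name prefix in the payload
    "method": "last",                         # last data point, with its timestamp
    "shift": "0",
}

# safe="," keeps the comma-separated values readable in the query string.
url = BASE + "?" + urlencode(params, safe=",")
print(url)
```

The resulting URL matches the example call shown below and can be pasted into a browser or a curl command for a quick test.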
The following call demonstrates access to the "System Monitoring" data provider using two metrics, DIALOG_RESPONSE_TIME and ABAP_INST_ASR_DIA_TOTAL, and one system represented by its Global System ID (GUID). The time period is "current two hours" with a 5-minute resolution:
https://<frun host>:<frun port>/sap/frun/fi/dp/metrics?
provider=DP_SYSMON&
filters=GUID,METRIC_NAMES&
GUID=fa163e24-f817-1ed6-96c2-9fe1f3d76a18&
METRIC_NAMES=DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL&
period=C2H&
resolution=5min&
name=system_monitoring&
method=last&
shift=0
This is the typical response from the exporter:
# TYPE system_monitoring gauge
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",managed_obj_name="FRNADM (ABAP)",metric_id="4926BF3A91934B10E10000000A42193A",root_context_name="FRNADM (ABAP)",root_context_id="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",drilldown="DETAIL"} 127.22 1706865000000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",managed_obj_name="FRNADM (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FRNADM (ABAP)",root_context_id="fa163e24-f817-1ed6-96c2-9fe1f3d76a19",drilldown="DETAIL"} 424.67 1706864400000
# EOF
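The response follows the Prometheus/OpenMetrics text exposition format: one sample per line, with the metric name, a label set in braces, the value, and a millisecond timestamp. As an illustration (not a full parser; a shortened sample line is hard-coded), a line can be dissected in Python like this:

```python
import re

# One shortened sample line from the exporter response (text exposition format).
line = ('system_monitoring{metric_names="DIALOG_RESPONSE_TIME",'
        'managed_obj_name="FRNADM (ABAP)"} 127.22 1706865000000')

# Minimal illustrative parser: metric name, label set, value, timestamp (ms).
m = re.match(r'(\w+)\{(.*)\}\s+([\d.]+)\s+(\d+)$', line)
name, raw_labels, value, ts_ms = m.groups()
labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))

print(name)                    # system_monitoring
print(labels["metric_names"])  # DIALOG_RESPONSE_TIME
print(float(value), int(ts_ms))
```

In practice Prometheus itself does this parsing during the scrape; the snippet is only meant to show how the labels, value, and timestamp relate to each other.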
❗There are at least a couple of ways to get the list of filters and valid values for a specific data provider; one of them is a direct call to the filters endpoint:
curl -X POST -H "Content-Type: application/json" -d '{"providerName": "DP_SYSMON", "providerVersion": ""}' https://<frunhost>:<port>/sap/frun/fi/dp/providers/filters
We won't go into details about the whole Prometheus setup as extensive documentation is available on the official Prometheus site (a generic file example for the Job Configuration can be found here).
Once a query is defined with all the needed parameters, as in the previous example, the related URL can be periodically scraped by configuring jobs in Prometheus. All the parameters (FILTERS, PROVIDER etc.) can be used in the "params:" section of the job definition.
Here is an example of a Job configuration:
- job_name: alm_system_monitoring_job        # name of the scrape job, displayed as a target in Prometheus
  scrape_timeout: 15s
  scrape_interval: 1m
  static_configs:
    - targets: ['<frun host>:<frun port>']   # hostname and port of the Focused Run system
  metrics_path: /sap/frun/fi/dp/metrics
  params:
    provider: [DP_SYSMON]                    # data provider used during the scrape, here "System Monitoring"
    filters: [GUID,METRIC_NAMES]             # filter parameters used for the query; they must be declared here before they can be used below. These are the same parameters shown in the expert view of the OCC Dashboard
    GUID: [fa163e24-f817-1ed6-96c2-9fe1f3d76a19]
    METRIC_NAMES: [DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL]
    period: [C2H]
    resolution: [5min]
    name: [alm_monitoring]                   # name prefix used to record the data in Prometheus
    method: [last]                           # the last data point of the series is selected and the exporter returns its timestamp along with the value. With avg, the average of the data points is calculated and no explicit timestamp is returned; Prometheus then uses the time of the actual scrape
  scheme: https
  honor_timestamps: true                     # the timestamps returned by the FRUN exporter are actually used by Prometheus
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: FRUN_USER                      # FRUN user used for the connection
    password_file: /prometheus/scrape.password   # a password file is used for authentication
After restarting the Prometheus server (or forcing a configuration reload via the management API), the new job should be visible in the list of targets:
Metrics can be checked directly using the expression browser available at http(s)://<prometheus host>:<prometheus port>/graph:
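Besides the expression browser, the stored metrics can also be retrieved programmatically through the standard Prometheus HTTP API (/api/v1/query). A small sketch that builds such a query URL; the Prometheus host is a placeholder:

```python
from urllib.parse import urlencode

PROM = "http://prometheus.example.com:9090"  # placeholder Prometheus host

# PromQL: current value of the scraped series for one Focused Run metric label.
query = 'system_monitoring{metric_names="DIALOG_RESPONSE_TIME"}'
url = f"{PROM}/api/v1/query?{urlencode({'query': query})}"
print(url)

# The JSON answer of this endpoint carries the samples under data.result, e.g.:
#   {"status": "success", "data": {"resultType": "vector", "result": [...]}}
```

Fetching this URL (with any HTTP client) returns the same data points visible in the expression browser, which is convenient for feeding further automation.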
In the previous example we retrieved two metrics for a single system. What if we want to apply the same data provider and metrics to a whole group of systems? In this case a filter named SCOPE_VIEW can be used.
These views are created and saved in the context of the Focused Run System Monitoring application from the Scope Selection.
They are just a collection of LMDB attributes that can be used to build a dynamic list of systems, like here:
Once saved, they are accessible using the REST API by sending a POST request to the filters endpoint:
POST https://<frun host>:<frun port>/sap/frun/fi/dp/providers/filters
#BODY
{
"providerName": "DP_SYSMON", "providerVersion": ""
}
The response will include all the possible values for the SCOPE_VIEW filter:
{
"key": "SCOPE_VIEW",
"name": "Scope Views",
"description": "",
"isAttribute": true,
"type": "attribute",
"values": [
{
"key": "id_1505742290679_200_filterBar",
"label": "Productive Systems (Active)"
},
{
"key": "id_1698745198775_505_filterBar",
"label": "Systems Starting with F"
},
{
"key": "id_1101937076808_2563_filterBar",
"label": "ABAP System scope F"
},
{
"key": "id_1279999413032_276_filterBar",
"label": "Private-Save-View"
},
{
"key": "id_1680015015490_273_filterBar",
"label": "ABAP Systems"
},
{
"key": "id_1680010289524_324_filterBar",
"label": "Java Systems"
}
],
"isMultiple": null,
"triggerRefresh": null,
"group": "Scope Selection"
}
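When automating the job setup, the technical key of a scope view can be picked out of this response by its human-readable label. A minimal sketch over an excerpt of the JSON shown above (the helper function is illustrative, not part of the API):

```python
# Excerpt of the filters response for the SCOPE_VIEW filter (as shown above).
scope_view_filter = {
    "key": "SCOPE_VIEW",
    "values": [
        {"key": "id_1505742290679_200_filterBar", "label": "Productive Systems (Active)"},
        {"key": "id_1698745198775_505_filterBar", "label": "Systems Starting with F"},
    ],
}

def scope_view_key(flt: dict, label: str) -> str:
    """Return the technical key of the scope view with the given label."""
    for entry in flt["values"]:
        if entry["label"] == label:
            return entry["key"]
    raise KeyError(label)

print(scope_view_key(scope_view_filter, "Systems Starting with F"))
# id_1698745198775_505_filterBar
```

The returned key is exactly what goes into the SCOPE_VIEW URL parameter or the Prometheus job configuration.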
Here is an example of a query using the SCOPE_VIEW filter:
https://<frun host>:<frun port>/sap/frun/fi/dp/metrics?
provider=DP_SYSMON&
filters=SCOPE_VIEW,METRIC_NAMES&
SCOPE_VIEW=id_1698745198775_505_filterBar& #Key corresponding to the label "Systems Starting with F" view from the filters request
METRIC_NAMES=DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL&
period=C2H&
resolution=5min&
name=system_monitoring&
method=last&
shift=0
The result will be similar to this:
# TYPE system_monitoring gauge
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",managed_obj_name="FPDADM (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FPDADM (ABAP)",root_context_id="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",drilldown="DETAIL"} 103.84 1706881080000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",managed_obj_name="FPDADM (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FPDADM (ABAP)",root_context_id="6cae8b74-a13e-1ee8-99a4-945a0d54ab92",drilldown="DETAIL"} 178.79 1706880780000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="98f2b303-2cd3-1edd-898e-687c0dac2a45",managed_obj_name="FPDADMPI (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FPDADMPI (ABAP)",root_context_id="98f2b303-2cd3-1edd-898e-687c0dac2a45",drilldown="DETAIL"} 3.04 1706880960000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",managed_obj_name="FPPADM (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FPPADM (ABAP)",root_context_id="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",drilldown="DETAIL"} 68.14 1706880960000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",managed_obj_name="FPPADM (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FPPADM (ABAP)",root_context_id="6cae8b74-a13c-1ed6-b3b5-e275fde3776e",drilldown="DETAIL"} 38.94 1706880720000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",managed_obj_name="FFPABA (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FFPABA (ABAP)",root_context_id="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",drilldown="DETAIL"} 78.64 1706881080000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",managed_obj_name="FFPABA (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FFPABA (ABAP)",root_context_id="98f2b303-2cd3-1edd-9ec3-111e44ed2e80",drilldown="DETAIL"} 86.49 1706880600000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",managed_obj_name="FQPSYS (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FQPSYS (ABAP)",root_context_id="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",drilldown="DETAIL"} 129.64 1706880900000
system_monitoring{metric_names="ABAP_INST_ASR_DIA_TOTAL",guid="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",managed_obj_name="FQPSYS (ABAP)",metric_id="089E014133DE1EE5A38FEBB2D2381063",root_context_name="FQPSYS (ABAP)",root_context_id="fa163e24-f817-1ed6-92c2-9fe1f3d76a18",drilldown="DETAIL"} 540.13 1706880780000
system_monitoring{metric_names="DIALOG_RESPONSE_TIME",guid="6cae8b74-a13e-1ee1-adde-09c881a053a6",managed_obj_name="FQTSYS (ABAP)",metric_id="4126BF3A91934B10E10000000A42193A",root_context_name="FQTSYS (ABAP)",root_context_id="6cae8b74-a13e-1ee1-adde-09c881a053a6",drilldown="DETAIL"} 62.75 1706881020000
# EOF
The job configuration in Prometheus then includes the SCOPE_VIEW in the params section:
- job_name: alm_system_monitoring_job        # name of the scrape job, displayed as a target in Prometheus
  scrape_timeout: 15s
  scrape_interval: 1m
  static_configs:
    - targets: ['<frun host>:<frun port>']   # hostname and port of the Focused Run system
  metrics_path: /sap/frun/fi/dp/metrics
  params:
    provider: [DP_SYSMON]                    # data provider used during the scrape, here "System Monitoring"
    filters: [SCOPE_VIEW,METRIC_NAMES]       # filter parameters used for the query; they must be declared here before they can be used below. These are the same parameters shown in the expert view of the OCC Dashboard
    SCOPE_VIEW: [id_1698745198775_505_filterBar]
    METRIC_NAMES: [DIALOG_RESPONSE_TIME,ABAP_INST_ASR_DIA_TOTAL]
    period: [C2H]
    resolution: [5min]
    name: [alm_monitoring]                   # name prefix used to record the data in Prometheus
    method: [last]                           # the last data point of the series is selected and the exporter returns its timestamp along with the value. With avg, the average of the data points is calculated and no explicit timestamp is returned; Prometheus then uses the time of the actual scrape
  scheme: https
  honor_timestamps: true                     # the timestamps returned by the FRUN exporter are actually used by Prometheus
  tls_config:
    insecure_skip_verify: true
  basic_auth:
    username: FRUN_USER                      # FRUN user used for the connection
    password_file: /prometheus/scrape.password   # a password file is used for authentication