gabbi
Product and Topic Expert

Welcome to the first blog in our three-part series, where we will explore how Kyma can seamlessly integrate with the SAP Cloud Logging service. By enabling the three pillars of observability - logs, traces, and metrics - Kyma developers and operators can effectively troubleshoot issues, identify root causes, investigate performance bottlenecks, and gain a comprehensive understanding of system behavior.

In this initial blog post, we will delve into the following topics:

  1. SAP Cloud Logging: An Overview

    • Learn about the SAP Cloud Logging service and its significance in the context of Kyma integration.
    • Discover how to provision an instance of SAP Cloud Logging.
  2. Shipping Logs to SAP Cloud Logging

    • Explore the step-by-step process of shipping logs from applications deployed on SAP BTP, Kyma runtime to SAP Cloud Logging.

In the subsequent blogs, we will continue our exploration by discussing the integration of traces and metrics.

What is SAP Cloud Logging?

The SAP Discovery Center description says:

SAP Cloud Logging service is an instance-based observability service that builds upon OpenSearch to store, visualize, and analyze application logs, metrics, and traces from SAP BTP Cloud Foundry, Kyma, Kubernetes, and other runtime environments.

For Cloud Foundry and Kyma, SAP Cloud Logging offers an easy integration by providing predefined content to investigate the load, latency, and error rates of the observed applications based on their requests and correlate them with additional data.

To get started with SAP Cloud Logging, visit the Discovery Center where you will find more detailed information about its features and capabilities.

Pricing for the SAP Cloud Logging service can be estimated with the SAP Cloud Logging Capacity Unit Estimator. Note that for Kyma, the "Ingest OTel" option must be enabled, because it is used for shipping traces and metrics; take this into account when estimating the price.

Provision an Instance of SAP Cloud Logging

Now, let's explore how we can leverage SAP Cloud Logging to ingest logs from applications deployed on SAP BTP, Kyma runtime.

Prerequisites

  • You have an SAP BTP, Kyma runtime instance with the Telemetry module added, and kubectl is configured to access it.
  • Your subaccount has an entitlement for the SAP Cloud Logging service.

Procedure

For details, you can refer to the official SAP documentation on creating an SAP Cloud Logging service instance.

  • Export your namespace's name as an environment variable and create the namespace:

# In these instructions, all resources are created in the cls namespace. If you want to use a different namespace, adjust the files accordingly
export NS=cls
kubectl create ns ${NS}

  • Create an instance of SAP Cloud Logging and a service binding:

kubectl -n ${NS} apply -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-extension-samples/main/sap-cloud-logging/k8s/cls-instance.yaml

For reference, this is the service instance specification:

apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: my-cls
spec:
  serviceOfferingName: cloud-logging
  servicePlanName: dev
  parameters:
    retentionPeriod: 7 # days the data is kept
    ingest_otlp:
      enabled: true # required for shipping traces and metrics via OTLP

This is the corresponding service binding.

apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: my-cls-binding
spec:
  serviceInstanceName: my-cls
  credentialsRotationPolicy:
    enabled: true
    rotationFrequency: "720h" # rotate the credentials every 30 days
    rotatedBindingTTL: "24h"  # keep the old credentials valid for one day after rotation

The service binding specifies the credentials' rotation policy. A nice developer experience: the Telemetry module switches to the new credentials automatically once they are rotated, with no action required from the developer.
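To verify that provisioning succeeded, you can check both resources from the command line (a minimal check; the SAP BTP service operator reports readiness in the resource status):

# Both resources should eventually report a ready/created state in their status
kubectl -n ${NS} get serviceinstances,servicebindings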

NOTE: The same instance will be reused for configuring tracing and monitoring.

The service binding also generates a Secret with the same name. It contains the details to access the dashboard of the SAP Cloud Logging instance previously created.

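You can also inspect the Secret from the command line. A minimal sketch; the key names, such as dashboards-endpoint, are assumptions based on a typical SAP Cloud Logging binding:

# List the keys available in the generated Secret (it has the same name as the binding)
kubectl -n ${NS} get secret my-cls-binding -o jsonpath='{.data}'
# Decode a single value, for example the dashboard endpoint (key name is an assumption)
kubectl -n ${NS} get secret my-cls-binding -o jsonpath='{.data.dashboards-endpoint}' | base64 -d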

Ship your application logs to SAP Cloud Logging

To ship your logs to SAP Cloud Logging, create LogPipeline custom resources (CRs).

Your application running in SAP BTP, Kyma runtime writes its logs to stdout. Based on the LogPipeline configuration, the Telemetry module captures them and ships them to SAP Cloud Logging.

Create a LogPipeline CR for Your Application Logs

To create the LogPipeline, run:

kubectl apply -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-extension-samples/main/sap-cloud-logging/k8s/logging/logs-pipeline-application-logs.yaml

In the LogPipeline, you configure the details of shipping the logs to SAP Cloud Logging. The major configuration sections include:

  • Input: From which applications, containers, and namespaces the logs should be shipped
  • Output: The access details of the SAP Cloud Logging instance to which logs will be shipped

You can learn about all the parameters in detail from the official Telemetry LogPipeline documentation.

This is an example of the LogPipeline configuration used for this blog post:

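(A sketch modeled on the Kyma Telemetry documentation for the SAP Cloud Logging integration; the pipeline name and the ingest-mtls-* Secret keys are assumptions, and the access details are read from the my-cls-binding Secret created earlier.)

apiVersion: telemetry.kyma-project.io/v1alpha1
kind: LogPipeline
metadata:
  name: app-logs
spec:
  input:
    application:
      containers:
        exclude:
          - istio-proxy # Istio access logs are shipped by a separate pipeline
  output:
    http:
      dedot: true
      host:
        valueFrom:
          secretKeyRef:
            name: my-cls-binding
            namespace: cls
            key: ingest-mtls-endpoint
      tls:
        cert:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-cert
        key:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-key
      uri: /customindex/kyma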

Create a LogPipeline CR for the Istio Access Logs

This section is based on the kyma-project.io documentation about Istio access logs.

Istio access logs provide fine-grained details about the traffic reaching the workloads that are part of the Istio service mesh. The only prerequisite is that Istio sidecar injection is enabled for your workloads. The access logs provide useful information related to the four golden signals (latency, traffic, errors, and saturation) and help when troubleshooting anomalies.
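In Kyma, the access logs themselves are typically activated through the Istio Telemetry API using the Kyma-provided stdout-json log provider. A minimal sketch, assuming access logging should be enabled for all workloads in the cls namespace (resource name and scope are illustrative):

apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: access-config
  namespace: cls # applies to workloads in this namespace
spec:
  accessLogging:
    - providers:
        - name: stdout-json # extension provider registered by the Kyma Istio module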

To create the LogPipeline, run:

kubectl apply -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-extension-samples/main/sap-cloud-logging/k8s/logging/logs-pipeline-istio-access-logs.yaml

This is an example of the LogPipeline configuration used for this blog post:

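(Again a sketch based on the Kyma Telemetry documentation; the essential difference to the application-logs pipeline is that only the istio-proxy containers are selected, and the uri value, which determines the target index, is an assumption.)

apiVersion: telemetry.kyma-project.io/v1alpha1
kind: LogPipeline
metadata:
  name: istio-access-logs
spec:
  input:
    application:
      containers:
        include:
          - istio-proxy # ship only the Envoy access logs emitted by the sidecars
  output:
    http:
      dedot: true
      host:
        valueFrom:
          secretKeyRef:
            name: my-cls-binding
            namespace: cls
            key: ingest-mtls-endpoint
      tls:
        cert:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-cert
        key:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-key
      uri: /customindex/istio-envoy-kyma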

View the logs

You can access the SAP Cloud Logging instance dashboard. The access details are available in the Secret generated by the service binding.


The simplest way to start exploring the logs is to navigate to Discover and choose the appropriate index.


You can choose the index pattern to view the relevant logs, apply a filter or search term to narrow down your search, or use other OpenSearch capabilities.
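For example, a Discover search that narrows the results down to one namespace and to error messages might look like this (DQL syntax; the field names are assumptions based on the Kubernetes metadata typically attached to the shipped logs):

kubernetes.namespace_name:cls and log:*error*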


We will talk more about metrics in one of the next blog posts. However, I would like to bring your attention to the Four Golden Signals dashboard. It is provided out of the box and is based on the Istio access logs that we configured previously.

For reference, check out the generic and latency dashboards.


Now you can start exploring your application logs as well as the Istio access logs.

Stay tuned for the next blog post about shipping traces from SAP BTP, Kyma runtime to SAP Cloud Logging.

8 Comments
sarbajeet
Explorer

Thank you for the post and clear steps.

Currently, we are using the managed SAP Kyma environment.

Does "SAP Cloud Logging" fall under the same license cost? If not, is there anything we can use for trial purposes?


gabbi
Product and Topic Expert

Hi Sarbajeet,

SAP Cloud Logging is not included in the Kyma pricing.

You can check the details in the Discovery Center.

Since you are already using the managed Kyma runtime, you are most likely on either the CPEA or the pay-as-you-go commercial model. SAP Cloud Logging falls under the same model, which means no separate license is required; you are charged for your usage.

You can consider starting with a dev license and later move to a standard license.


Thanks,

Gaurav Abbi

sarbajeet
Explorer

Hi Gaurav, 

Thank you for your response. Is there an estimator for the capacity units of SAP Cloud Logging?

How will the usage be calculated?

Regards,

Sarbajeet

gabbi
Product and Topic Expert

Hi Sarbajeet,

Please check this calculator: https://sap-cloud-logging-estimator.cfapps.us10.hana.ondemand.com/


BR,

Gaurav

sarbajeet
Explorer

Hi Gaurav,

We have completed the setup for traces, metrics, and logs.
As suggested, we began with the "dev" license, but it appears to retain the data for only a week, depending on size.

Consequently, I am curious about how we can monitor the usage of storage space and memory, even if we opt for a larger capacity option (standard license).

Thank you for your guidance.

Regards,
Sarbajeet


Hariharan-Gandhi
Associate

Dear @sarbajeet,

You can use the OpenSearch dashboard's Dev Tools to obtain this information:
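For example, listing all indices sorted by size is a quick way to spot-check storage consumption (a standard OpenSearch cat API call; the index names in your instance will differ):

GET _cat/indices?v&s=store.size:desc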

We understand that this is a spot check and less attractive than a visualization with history; we are currently looking into such a capability and will provide an update once the delivery roadmap is concrete.

We also recommend familiarizing yourself with the curation policy (mentioned under Caution).

Thanks,

Cloud Logging team


sarbajeet
Explorer

Thank you @Hariharan-Gandhi for the details.

I have a couple of follow-up queries on the curation policy.

For example, while setting up logging instances on Kyma, we set the retention period to 90 days. But with about 30 days' worth of logs, the instance reaches its maximum size and eventually ends up deleting application logs that are more than 30 days old.

My queries are:

  1. How could we avoid that?
  2. Can we set up an alert before the system triggers a delete operation on the index?

Regards,

Sarbajeet


juergen-walter
Product and Topic Expert

Hi @sarbajeet,

First of all, size-based curation is part of the contract [1]. For production service plans, service instances scale automatically within the configured limits. To avoid disk overflow, there is time-based as well as disk-utilization-based data curation.

>  How could we avoid that?

A standard instance can handle, depending on the scaling configuration, 5 to 25 times the storage capacity of a dev instance. This translates into a corresponding multiple of the retention you achieve compared to the dev plan (if the input volume remains the same). A large instance increases the storage by another factor of 10.

If your dev instance holds the data for 7 days, a standard instance can handle 90 days at roughly twice the load (maximum retention is currently limited to 90 days). A large instance would, as mentioned, increase the capacity by up to another factor of 10 (although you still cannot go beyond 90 days of retention).

To answer the question: go for a bigger service plan and allow your instances to scale to 10 data nodes.

>  Can we set up an alert before the system triggers a delete operation on the index?

Considering the above, it does not make much sense from my perspective. It is probably better to have an alert if the log volume breaches a threshold.
This can be done via the OpenSearch alerting feature (https://opensearch.org/docs/latest/observing-your-data/alerting/index/); unfortunately, I have not yet found time to publish blog posts on alerting. This way, you can detect potential incidents and act more quickly on data overload of your Cloud Logging service instance.
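For illustration, a query-level monitor that fires when more than a given number of log documents arrived in the last ten minutes could look roughly like this in Dev Tools (the index pattern, timestamp field, and threshold are assumptions):

POST _plugins/_alerting/monitors
{
  "type": "monitor",
  "name": "log-volume-threshold",
  "monitor_type": "query_level_monitor",
  "enabled": true,
  "schedule": { "period": { "interval": 10, "unit": "MINUTES" } },
  "inputs": [
    {
      "search": {
        "indices": ["logs-json-kyma-*"],
        "query": {
          "size": 0,
          "query": { "range": { "@timestamp": { "gte": "now-10m" } } }
        }
      }
    }
  ],
  "triggers": [
    {
      "name": "log-volume-above-threshold",
      "severity": "1",
      "condition": {
        "script": {
          "source": "ctx.results[0].hits.total.value > 1000000",
          "lang": "painless"
        }
      }
    }
  ]
}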

Best regards
Jürgen

[1] https://help.sap.com/docs/cloud-logging/cloud-logging/service-plans?version=Cloud