Welcome to the first blog in our three-part series, where we will explore how Kyma can seamlessly integrate with the SAP Cloud Logging service. By enabling the three pillars of observability - logs, traces, and metrics - Kyma developers and operators can effectively troubleshoot issues, identify root causes, investigate performance bottlenecks, and gain a comprehensive understanding of system behavior.
In this initial blog post, we will delve into the following topics:

- SAP Cloud Logging: An Overview
- Shipping Logs to SAP Cloud Logging
In the subsequent blogs, we will continue our exploration by discussing the integration of traces and metrics.
The SAP Discovery Center describes it as follows:

> SAP Cloud Logging service is an instance-based observability service that builds upon OpenSearch to store, visualize, and analyze application logs, metrics, and traces from SAP BTP Cloud Foundry, Kyma, Kubernetes, and other runtime environments.
>
> For Cloud Foundry and Kyma, SAP Cloud Logging offers an easy integration by providing predefined content to investigate the load, latency, and error rates of the observed applications based on their requests and correlate them with additional data.
To get started with SAP Cloud Logging, visit the Discovery Center where you will find more detailed information about its features and capabilities.
Pricing for the SAP Cloud Logging service can be estimated using the SAP Cloud Logging Capacity Unit Estimator. Note that for Kyma, the OTLP ingestion option (the `ingest_otlp` parameter) needs to be enabled, which should be taken into account when estimating pricing. This option is used for shipping traces and metrics.
Now, let's explore how we can leverage SAP Cloud Logging to ingest logs from applications deployed on SAP BTP, Kyma runtime.
For details on creating an SAP Cloud Logging service instance, refer to the official SAP documentation.
```shell
# In the instructions, all resources are created in the cls namespace.
# If you want to use a different namespace, adjust the files accordingly.
export NS=cls
kubectl create ns ${NS}
kubectl -n ${NS} apply -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-extension-samples/main/sap-cloud-logging/k8s/cls-instance.yaml
```
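Provisioning the instance can take a few minutes. A quick way to check on it, assuming the `Ready` condition set by the SAP BTP service operator:

```shell
# Wait until the SAP BTP service operator reports the instance as ready
kubectl -n ${NS} wait --for=condition=Ready serviceinstance/my-cls --timeout=10m

# Inspect the provisioning status
kubectl -n ${NS} get serviceinstance my-cls
```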
For reference, this is the service instance specification:
```yaml
apiVersion: services.cloud.sap.com/v1
kind: ServiceInstance
metadata:
  name: my-cls
spec:
  serviceOfferingName: cloud-logging
  servicePlanName: dev
  parameters:
    retentionPeriod: 7
    ingest_otlp:
      enabled: true
```
This is the corresponding service binding.
```yaml
apiVersion: services.cloud.sap.com/v1
kind: ServiceBinding
metadata:
  name: my-cls-binding
spec:
  serviceInstanceName: my-cls
  credentialsRotationPolicy:
    enabled: true
    rotationFrequency: "720h"
    rotatedBindingTTL: "24h"
```
The service binding specifies the credentials rotation policy. Conveniently, the Telemetry module automatically switches to the new credentials once they are rotated; this requires no action from the developer.
NOTE: The same instance will be reused for configuring tracing and monitoring.
The service binding also generates a Secret with the same name. It contains the details to access the dashboard of the SAP Cloud Logging instance previously created.
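To inspect those access details, you can read the Secret directly. As a sketch (the key name `dashboards-endpoint` is an assumption; list the keys first to see what your binding actually contains):

```shell
# List the keys available in the binding Secret (requires jq)
kubectl -n ${NS} get secret my-cls-binding -o jsonpath='{.data}' | jq 'keys'

# Decode a single key, e.g. the dashboard endpoint (key name assumed)
kubectl -n ${NS} get secret my-cls-binding \
  -o go-template='{{ index .data "dashboards-endpoint" | base64decode }}'
```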
To ship your logs to SAP Cloud Logging, create LogPipeline custom resources (CRs).
Your application running in SAP BTP, Kyma runtime will send logs to stdout. The Telemetry module based on the LogPipeline will capture and ship them to SAP Cloud Logging.
To create the LogPipeline, run:
```shell
kubectl apply -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-extension-samples/main/sap-cloud-logging/k8s/logging/logs-pipeline-application-logs.yaml
```
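You can verify that the Telemetry module has picked up the pipeline; `LogPipeline` is a cluster-scoped resource, so no namespace is needed:

```shell
# The STATUS columns indicate whether the pipeline is healthy and flushing logs
kubectl get logpipelines.telemetry.kyma-project.io
```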
In the LogPipeline, you configure the details of shipping the logs to SAP Cloud Logging, such as which inputs to capture, the output endpoint, and the credentials taken from the binding Secret. You can learn about all the parameters in detail from the official Telemetry LogPipeline documentation.
This is an example of the LogPipeline configuration used for this blog post:
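The exact file contents may evolve, so here is only a rough sketch based on the Kyma Telemetry documentation. The Secret key names (`ingest-mtls-endpoint`, `ingest-mtls-cert`, `ingest-mtls-key`) and the index path are assumptions about the binding Secret; check yours before applying:

```yaml
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: LogPipeline
metadata:
  name: sap-cloud-logging-application-logs
spec:
  input:
    application:
      containers:
        exclude:
          - istio-proxy   # Istio access logs are shipped by a separate pipeline
  output:
    http:
      host:
        valueFrom:
          secretKeyRef:
            name: my-cls-binding
            namespace: cls
            key: ingest-mtls-endpoint
      tls:
        cert:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-cert
        key:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-key
      uri: /customindex/kyma   # index path assumed from the Kyma docs
      dedot: true
```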
As the kyma-project.io documentation on Istio access logs explains, Istio access logs provide fine-grained details about the traffic reaching workloads that are part of the Istio service mesh. The only prerequisite is enabling Istio sidecar injection for your workloads. The access logs provide useful information related to the four golden signals (latency, traffic, errors, and saturation) and help in troubleshooting anomalies.
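Sidecar injection can be enabled for all workloads in a namespace with a single label (restart existing Pods afterwards so the sidecar gets injected):

```shell
# Enable Istio sidecar injection for every workload in the namespace
kubectl label namespace ${NS} istio-injection=enabled
```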
To create the LogPipeline, run:
```shell
kubectl apply -f https://raw.githubusercontent.com/SAP-samples/kyma-runtime-extension-samples/main/sap-cloud-logging/k8s/logging/logs-pipeline-istio-access-logs.yaml
```
This is an example of the LogPipeline configuration used for this blog post:
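Again, only as a sketch based on the Kyma Telemetry documentation: this pipeline captures just the `istio-proxy` containers and ships them to a dedicated index. The Secret key names and the `/customindex/istio-envoy-kyma` path are assumptions; the output section mirrors the application-log pipeline:

```yaml
apiVersion: telemetry.kyma-project.io/v1alpha1
kind: LogPipeline
metadata:
  name: sap-cloud-logging-istio-access-logs
spec:
  input:
    application:
      namespaces:
        system: true          # include system namespaces, where the mesh runs
      containers:
        include:
          - istio-proxy       # only the Envoy sidecar's access logs
  output:
    http:
      host:
        valueFrom:
          secretKeyRef:
            name: my-cls-binding
            namespace: cls
            key: ingest-mtls-endpoint
      tls:
        cert:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-cert
        key:
          valueFrom:
            secretKeyRef:
              name: my-cls-binding
              namespace: cls
              key: ingest-mtls-key
      uri: /customindex/istio-envoy-kyma   # index path assumed from the Kyma docs
      dedot: true
```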
You can access the SAP Cloud Logging instance dashboard. The access details are available in the Secret generated by the service binding.
The simplest way to start exploring the logs is to navigate to Discover and choose the appropriate index pattern to view the relevant logs. You can then apply a filter or search term to narrow down the results, or use other OpenSearch capabilities.
We will talk more about metrics in one of the next blog posts. However, I would like to draw your attention to the Four Golden Signals dashboard. It is provided out of the box and is based on the Istio access logs that we configured previously.
For reference, check out the generic and latency dashboards.
Now you can start exploring your application as well as the access logs.
Stay tuned for the next blog post about shipping traces from SAP BTP, Kyma runtime to SAP Cloud Logging.