
Introduction


Application Logging is a service provided by SAP in the Cloud Foundry environment that helps developers identify errors in an application and troubleshoot them down to the root cause. Loggregator is the Cloud Foundry component responsible for streaming logs from applications and from the system components that interact with them. Although Loggregator can be used to collect and manage logs, it is not very efficient at doing so: when the volume of logs gets too high, it tends to overwrite or drop log lines, which hampers log analysis. For this reason, a dedicated Application Logging service is provided to support writing and analyzing structured logs.

Repository for this blog: Application Logging POC

Writing Structured Logs


Getting Started with Logging


There are many libraries that support writing logs in Java applications, such as Logback and Log4j 2. In this blog, we will work with Log4j 2 only. To write logs in a Java application using Log4j 2, we first have to add the dependencies to the pom.xml file as follows.
<properties>
    <log4j2.version>2.8.2</log4j2.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-slf4j-impl</artifactId>
        <version>${log4j2.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-core</artifactId>
        <version>${log4j2.version}</version>
    </dependency>
</dependencies>


Next, we have to create an instance of the Logger class using the LoggerFactory class. In the getLogger() method of LoggerFactory, we pass the class for which we need the logger. The syntax is as follows.
private static final Logger LOGGER = LoggerFactory.getLogger(SecondController.class);
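Here, Logger and LoggerFactory come from the SLF4J API, which the log4j-slf4j-impl dependency binds to Log4j 2, so the corresponding imports are:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;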

Now, we can use the LOGGER object to write logs wherever required. For example, inside any method of LogController we can write a log as follows.
LOGGER.error("Inside /logs/demo");

This writes a log line whenever the "/logs/demo" endpoint of the API is hit and can therefore be used to track endpoint hits.
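Putting it together, a minimal controller sketch could look like the following (the class name and request mappings are illustrative and assume a Spring Boot web application):

@RestController
@RequestMapping("/logs")
public class LogController {

    private static final Logger LOGGER = LoggerFactory.getLogger(LogController.class);

    // Writes an error-level log line on every call to GET /logs/demo
    @GetMapping("/demo")
    public String demo() {
        LOGGER.error("Inside /logs/demo");
        return "Success";
    }
}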

Using Application Logging Service


The Application Logging service is available in the SAP Cloud Platform Cloud Foundry environment. We can create an instance of this service with the preferred plan and bind it to our application as shown in this guide. In this blog, we use the "lite" plan of the service. After the binding is done, we have to restart the application. Now, we can hit any endpoint that writes logs; for example, the "/logs/demo" endpoint writes a log at error level. To see the logs, we can open the application in the cockpit and go to the "Logs" section.



We can check that the most recent log comes from our request. However, the logs are not very readable here, so let's see how we can visualize them in a better way.

Using Kibana Dashboard


SAP Cloud Foundry uses the ELK stack (Elasticsearch, Logstash, and Kibana) for storing, analyzing, and visualizing application logs coming from Cloud Foundry applications. If we click on the "Open Kibana Dashboard" button, we are navigated to the Kibana dashboard on CF. The Kibana dashboard is available at "logs.<SAP_CF_DOMAIN>", for example https://logs.cf.sap.hana.ondemand.com/.

In Kibana, open the "Requests and Logs" dashboard. Here, we will be able to see both the request logs and the application logs as follows.



If we check the level of the application log, it is displayed as INFO although the log was written at error level. This happens because the log we have written is a default, unstructured log. Since Cloud Foundry uses the ELK stack for log analysis and Elasticsearch works with JSON data, CF converts the console log into a JSON log by default, but it cannot set many of the fields, such as the level and the logger. To get these values and display proper logs, we have to write structured logs, which we will do next.

Structuring the Logs


To write structured logs, we have to do a couple of things. First, we need to use the CF Java Logging Support libraries, which provide custom pattern layouts that we can use to structure our logs. In this blog, we will use the JsonPatternLayout. To get the pattern layouts, we have to add another dependency as follows.
<properties>
    <cf-logging-version>3.0.4</cf-logging-version>
</properties>
<dependencies>
    <dependency>
        <groupId>com.sap.hcp.cf.logging</groupId>
        <artifactId>cf-java-logging-support-log4j2</artifactId>
        <version>${cf-logging-version}</version>
    </dependency>
</dependencies>

Also, if you are using Spring Boot, make sure to exclude the default starter logging library.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Now, we have to define the log configuration by creating a new file called "log4j2.xml". For Spring Boot applications, we can create this file in the "src/main/resources" folder. The content of this file is as follows.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn" strict="true" packages="com.sap.hcp.cf.log4j2.converter,com.sap.hcp.cf.log4j2.layout">
    <Appenders>
        <Console name="STDOUT-JSON" target="SYSTEM_OUT" follow="true">
            <JsonPatternLayout charset="utf-8"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="${LOG_ROOT_LEVEL:-WARN}">
            <AppenderRef ref="STDOUT-JSON" />
        </Root>
    </Loggers>
</Configuration>

In this file, we have defined the JsonPatternLayout as the appender layout; it converts the standard logs into proper JSON logs that can be consumed by Elasticsearch. We have also set the root log level to WARN, so any logs below the WARN level are not written at all and never reach Logstash; only WARN and ERROR logs will be visible. After making these changes, we can rebuild and deploy the application. Now, if we hit the same "/logs/demo" endpoint, we can see the proper information for the log (level, logger, and so on) in the Kibana dashboard as follows.
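For reference, a structured record written with the JsonPatternLayout looks roughly like the following. The exact set of fields depends on the library version, and the logger and thread values here are made up, so treat this as an illustration rather than the precise output:

{
  "written_at": "2020-05-20T10:15:30.123Z",
  "level": "ERROR",
  "logger": "com.example.logging.SecondController",
  "thread": "http-nio-8080-exec-1",
  "msg": "Inside /logs/demo"
}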



 

Logging Custom Metrics


Sometimes, we need to trace the values of some custom metrics within the context of an application, so we tend to write the values of those metrics into the logs. But these logs are just text and are hard to analyze or visualize. The CF Java Logging Support library provides a way to write such custom metrics so that they are converted into proper JSON key-value pairs and can then be visualized in Kibana. To write these metrics, we follow a specific structure. First, we have to import a helper method that lets us write a custom field, as follows.
import static com.sap.hcp.cf.logging.common.customfields.CustomField.customField;

Now, we can write a log with a custom field called "message" that has the value "Custom Log Message", as follows.
String testString = "Custom Log Message";
LOGGER.error("{}", customField("message",testString));

Now, we can hit the corresponding endpoint and open the "Requests and Logs" dashboard in Kibana. Here we can easily find the application log corresponding to this request, and if we expand it, we can see the metric "message" under the custom fields.


Using Correlation IDs


Correlation IDs are unique transaction IDs that are assigned to each transaction. When a transaction is distributed among multiple services, the correlation ID can be passed from one service to the next, so that all the calls belonging to the transaction can be traced using this ID. This is really helpful: when something goes wrong in a specific transaction, we can trace every single call in that transaction and figure out the issue via the logs.

Single Request Correlation ID


First, we will start by adding a correlation ID to a single request. In this case the transaction consists of just one microservice call, and that call should be traceable via its correlation ID. To do that, we have added one endpoint in the "SecondController" which can accept an "X-CorrelationID" header. If this header is set while making the request, the value of the header is used as the correlation ID in all the logs; if it is not set, we generate a random correlation ID. To make the correlation ID visible in the logs, we have to initialize the "LogContext" with this correlation ID. Here is the complete snippet.
// Imports assumed for this snippet: Spring Web annotations, SLF4J, java.util.UUID,
// and LogContext from the CF Java Logging Support library
import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import com.sap.hcp.cf.logging.common.LogContext;

@RestController
@RequestMapping("/logs") // class-level mapping assumed, so the full endpoint path is /logs/demo
public class SecondController {

    private static final Logger LOGGER = LoggerFactory.getLogger(SecondController.class);
    private static final String CORRELATION_ID_HEADER_NAME = "X-CorrelationID";

    @GetMapping("/demo")
    public String getError(@RequestHeader(value = CORRELATION_ID_HEADER_NAME, required = false) String correlationId) {
        if (LogContext.getCorrelationId() == null) {
            // Use the caller-supplied correlation ID, or generate one if the header was not sent
            if (correlationId == null) {
                correlationId = generateUniqueCorrelationId();
            }
            LogContext.initializeContext(correlationId);
        }

        String testString = "Inside /logs/demo";
        LOGGER.error(testString);
        return "Success";
    }

    private String generateUniqueCorrelationId() {
        return UUID.randomUUID().toString();
    }
}

If the "/logs/demo" endpoint is hit, a log is written and the correlation ID of that log is either the ID sent with the request or a randomly generated one. This ID can be seen in the Kibana dashboard as well.


Tracing Service to Service Calls


When there are multiple service calls within a transaction, the correlation ID has to be forwarded from one service to the next to keep track of the calls. In this example, we are going to add another endpoint to the first service which in turn calls a second service and passes the correlation ID along. At the end, we will see two logs in the Kibana dashboard, both with the same correlation ID. The second service is the application we deployed earlier, which exposes the "/logs/demo" endpoint. The following code snippet achieves this.
@GetMapping("/second")
public String getSecond(@RequestHeader(value = CORRELATION_ID_HEADER_NAME, required = false) String correlationId) {
try {
// Use SAP Cloud SDK to get HTTP Client
final HttpClient httpClient = HttpClientAccessor.getHttpClient("secondapp");

// Call microservice hangman-words
final HttpGet httpUriRequest = new HttpGet();

if (correlationId == null) {
correlationId = generateUniqueCorrelationId();
}
LogContext.initializeContext(correlationId);
httpUriRequest.setHeader(CORRELATION_ID_HEADER_NAME, LogContext.getCorrelationId());

final HttpResponse response = httpClient.execute(httpUriRequest);
if (response.getStatusLine().getStatusCode() != 200)
throw new RuntimeException("Bad response from secondapp: " + response.getStatusLine());

HttpEntity entity = response.getEntity();
LOGGER.error("Inside /logs/second");
return EntityUtils.toString(entity);

} catch (IOException e) {
throw new RuntimeException("Could not connect to ms-hangman-words", e);
}
}

Now, if we hit the endpoint in the first service, we see two logs in Kibana with the same correlation ID, as follows.


Custom Kibana Dashboard


So far, we have been using the "Requests and Logs" dashboard already provided by Kibana on CF. But it is also possible to create custom dashboards based on our requirements, which is what we are going to do now. As we need some data to visualize on the dashboard, let's hit several different endpoints so that we have plenty of logs at our disposal. Make sure to hit the service-to-service endpoint as well, so that we can check the correlations.

Creating Dashboard


To create a dashboard, we go to the "Dashboard" section of Kibana and click on "Create New Dashboard". Once done, we can click on "Save" and provide a name ("TestDashboard"), and we will have an empty dashboard ready.

Adding Visualizations


Let's start adding some visualizations. The first visualization we will add is "Log Level Distribution"; it shows a bar chart with the count of logs at each level. To do this, click on "Edit" on the "TestDashboard" and then click on "Add". This opens a list of all the available visualizations as well as the ones created by you. We will create our own visualization by clicking on "Add new Visualization". From the visualization panel, select "Vertical Bar". Now, we have to select a data source to feed the chart. By default, there are three data sources available on the ELK stack of CF, as follows.



We are going to need the log level data, so we can go with the "logstash" source. Now we have the chart panel where we add the metrics and parameters of the chart. For the Y-axis we can keep "Count" as the metric, and for the X-axis let's add "level" as the bucket: choose "Terms" as the aggregation and "level" as the field. We can also provide a custom label if required. The configuration of the chart is as follows.



Once done, we can click on the play button to preview the visualization, and finally click on "Save" and provide a name for the visualization. Once confirmed, we will see the chart on our dashboard, and we can "Save" the dashboard again to persist the changes. Similarly, we can add multiple visualizations for different purposes, and they can use different data sources; for example, to show the "CPU load" metric we can select the "metrics" data source. As another example, let's create a table visualization that helps trace requests across services using correlation IDs.

For this visualization, the type is "Data Table" and the data source is "logstash". The configuration of the visualization is as follows.





We can now save the visualization and the dashboard as well. The visualization will look as follows.



We can see that the correlation IDs of the first two entries are the same but the components are different, which means that the "/logs/demo" endpoint was hit after we called the "/logs/second" endpoint.

In this way, we can add multiple visualizations and customize a dashboard for a specific use case. Follow this open-source guide for more information on Kibana dashboard creation.

References



  1. Application Logging Official Document: https://help.sap.com/viewer/product/APPLICATION_LOGGING/Cloud/en-US

  2. SAP CF Java Logging Support: https://github.com/SAP/cf-java-logging-support

  3. Kibana Visualizations: https://www.elastic.co/guide/en/kibana/current/tutorial-dashboard.html

  4. Repository: https://github.wdf.sap.corp/I353397/Application-Logging-POC


 