Introduction
There have already been a couple of posts about advanced logging. In one respect or another they all point out that SAP Cloud Platform Integration has pitfalls when it comes to logging. Some of them can really ruin your business day:
- the 1 GB per day MPL attachment limit - once enforced by the so-called "circuit breaker", it makes log analysis impossible
- automatic MPL deletion after 30 days
- message filtering and sorting in the monitoring dashboard could be better
Since there are reliable cloud logging solutions, there is no longer any reason to endure that situation.
One of them is the Elastic Stack, also known as ELK.
The scope of this article is to give an overview of what can be done with it; I do not go into every technical detail.
Install an Elastic Stack
The Elastic Stack has a Basic licence which makes the product available at no cost. It can also be used as a managed Elastic Cloud service.
I decided to try out a self-managed scenario in an Azure subscription by deploying a prepared Ubuntu virtual machine image with the complete Elastic Stack already installed. We could also move to containers in a Kubernetes service in the future - that depends on the experience we gain with this setup and on cost considerations.
The virtual machine only opens HTTP/HTTPS ports 80/443. A DNS name is assigned to its public IP.
Currently, it uses a 200GB disk.
There are two endpoints that have to be opened to the internet:
- Logstash - the API to send log messages from CPI flows
- Kibana - the front end to visualise log data
Their transport must be encrypted and clients have to authenticate.
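On the Logstash side, the endpoint that the CPI flows talk to is simply an HTTP input. A minimal pipeline sketch could look like the following - the port, the index name and the Elasticsearch address are assumptions and have to be adjusted to your installation:
input {
  http {
    port => 8080                       # local HTTP input that receives the CPI log messages
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "scpi-%{+YYYY.MM.dd}"     # one new index per day
  }
}
Both this HTTP input and Kibana then sit behind the reverse proxy described next.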
I installed Nginx as a reverse proxy that uses Let's Encrypt certificates with automatic renewal via a cron job. Authentication is done with basic authentication, the username and password simply being provided with htpasswd.
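For illustration, a minimal sketch of such an Nginx server block - the DNS name, the paths and the upstream ports are assumptions and depend on your installation:
server {
    listen 443 ssl;
    server_name elastic.example.com;                     # assumed DNS name of the VM

    ssl_certificate     /etc/letsencrypt/live/elastic.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/elastic.example.com/privkey.pem;

    auth_basic           "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;           # created with htpasswd

    location /logstash/ {
        proxy_pass http://127.0.0.1:8080/;               # Logstash HTTP input
    }
    location / {
        proxy_pass http://127.0.0.1:5601/;               # Kibana
    }
}
With that, CPI only ever talks to the proxy over TLS with basic authentication.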
Create a Search Index Template
The Kibana UI has a Stack Management / Index Management perspective that allows you to create index templates. With a template you can define settings that are inherited by the indexes which are automatically created on a daily basis. It can also reference a lifecycle policy that removes indexes after a defined period or moves them to less performant and therefore cheaper hardware.
To use the indexes for searching there must be an index pattern, which can be created in the same management UI. It is useful to create the pattern only after at least one document is in the index; otherwise the pattern has to be refreshed later so that it knows all the fields sent by CPI.
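Instead of the UI, the template can also be created in the Kibana Dev Tools console. A minimal sketch using the composable index template API (Elasticsearch 7.8 or later) - the index pattern, the shard count and the name of the lifecycle policy are assumptions:
PUT _index_template/scpi-logs
{
  "index_patterns": ["scpi-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "index.lifecycle.name": "scpi-retention"
    }
  }
}
The Kibana index pattern is then simply scpi-*, matching the daily indexes created by Logstash.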
Send log messages to the Elastic Stack
As with any other MPL attachment, where you use Groovy scripts like this one
Message logMessage(Message message) {
    def messageLog = messageLogFactory.getMessageLog(message)
    if (messageLog) {
        def body = message.getBody(String)
        // createAttachment is a helper of the script that formats the payload for the attachment
        def attachment = createAttachment(message, body)
        // drop null/empty parts so the attachment name stays readable without a custom log name
        def name = ["Log", message.getProperty("customLogName")]
        messageLog.addAttachmentAsString(name.findAll().collect { it.trim() }.join(" - "), attachment as String, "text/xml")
    }
    return message
}
you basically do the same but use some additional Camel knowledge.
First, there are two tasks to prepare the platform for sending to the Elastic Stack:
- Add the Let's Encrypt Root Certificate DST Root CA X3 to the platform keystore.
- Add the username and password that were used to protect the Logstash endpoint as a User Credential
Then, in the script there are the following steps:
- Prepare the request to send to the Logstash API.
// text and level are parameters of the surrounding log method; mapToString, getEnvironment
// and getCorrelationIdFromMpl are helpers of the script (see the sketch after these steps)
def metadata = ["beat"      : "scpi",
                "version"   : "1.0.0",
                // format the timestamp explicitly in UTC so the trailing 'Z' is correct
                "@timestamp": new Date().format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", TimeZone.getTimeZone("UTC"))]
def name = ["Log", text, message.getProperty("customLogName")]
def logs = ["name"                : name.findAll().collect { it.trim() }.join(" - "),
            "level"               : level,
            "body"                : message.getBody(String),
            "headers"             : mapToString(message.headers),
            "properties"          : mapToString(message.properties),
            "mplId"               : message.getProperty("SAP_MessageProcessingLogID"),
            "messageCorrelationId": getCorrelationIdFromMpl(message.exchange)
]
def logstashBody = ["@metadata"  : metadata,
                    "component"  : message.exchange.context.name,
                    "environment": getEnvironment(),
                    "logs"       : logs
]
- Send the request. (The credentials are fetched using the SecureStoreService API, see the helper sketch after these steps.)
// Requires imports of org.apache.camel.Exchange, org.apache.camel.builder.ExchangeBuilder,
// org.apache.camel.spi.Synchronization, groovy.json.JsonBuilder and java.nio.charset.StandardCharsets,
// plus the SAP-internal MplConfiguration / MplLogLevel classes.
def logstashUrl = message.getProperty("logstashUrl")
def credential = getCredential("Logstash")
def template = message.exchange.context.createProducerTemplate()

// Suppress message processing log entries for this internal call to Logstash
MplConfiguration mplConfig = new MplConfiguration()
mplConfig.setLogLevel(MplLogLevel.NONE)

def exchange = ExchangeBuilder.anExchange(message.exchange.context)
        .withHeader("CamelHttpMethod", "POST")
        .withHeader("Content-Type", "application/json")
        .withHeader("Authorization", "Basic " + Base64.encoder.encodeToString("${credential.username}:${credential.password as String}".getBytes(StandardCharsets.UTF_8)))
        .withBody(new JsonBuilder(logstashBody).toString())
        .withProperty("SAP_MessageProcessingLogConfiguration", mplConfig)
        .build()

// Send asynchronously via the Camel AHC component so the integration flow is not blocked
template.asyncCallback("ahc:${logstashUrl}", exchange, new Synchronization() {
    void onComplete(Exchange ex) {
        template.stop()
    }

    void onFailure(Exchange ex) {
        // log is a logging helper available to the script
        if (ex.exception)
            log.logErrors("Error sending to Logstash: " + ex.exception)
        else
            log.logErrors("Error response from Logstash: ${ex.out.headers['CamelHttpResponseCode']} - ${ex.out.headers['CamelHttpResponseText']}")
        template.stop()
    }
})
- That is it!
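For completeness, here is a sketch of how the helper methods used above could look. getCredential reads the deployed User Credential via the SecureStoreService API; mapToString and getEnvironment are deliberately simple; getCorrelationIdFromMpl is only stubbed, because how you read the correlation ID from the message processing log depends on your preferred approach. Treat this as an illustration, not as the one and only implementation:
import com.sap.it.api.ITApiFactory
import com.sap.it.api.securestore.SecureStoreService
import com.sap.it.api.securestore.UserCredential
import org.apache.camel.Exchange

// Read a deployed User Credential (e.g. "Logstash") from the secure store
UserCredential getCredential(String alias) {
    def secureStoreService = ITApiFactory.getApi(SecureStoreService.class, null)
    return secureStoreService.getUserCredential(alias)
}

// Render headers/properties as "key = value" lines for the log document
String mapToString(Map map) {
    return map.collect { k, v -> "$k = $v" }.join("\n")
}

// Identify the landscape the flow runs in; here simply hard-coded
// (could also come from an externalized parameter or an environment variable)
String getEnvironment() {
    return "test"
}

// Stub: return the correlation ID of the message processing log for this exchange;
// the implementation depends on how you access the MPL on your tenant
String getCorrelationIdFromMpl(Exchange exchange) {
    return null
}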
Inspect log messages in Kibana
Not only does this look pretty, it also comes with far more filtering features than the CPI monitoring.
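For example, with the document structure sent above and the default dynamic mapping, a KQL query in Discover could look like this (field names and values are illustrative):
environment : "production" and logs.level : "ERROR" and logs.name : *Order*
Saved searches and dashboards can then be built on top of such queries.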
It would also be possible to link Kibana to the CPI monitoring simply by rendering the mplId or correlationId fields as URLs.
Conclusion
With these relatively simple changes we can provide:
- more robust monitoring for the operations team
- a message history whose size only depends on what the customer is willing to pay for disk space
- search in log attachments at a level of granularity that CPI sorely lacks
- continuous logging, with no need to reduce logging by setting a logEnabled property to false in test or production environments for fear of the circuit breaker