MartyMcCormick
Product and Topic Expert
Customers often ask for ways to persist data longer than the default retention policies when using SAP Cloud Integration.  There are limits on how much data can be logged and for how long, for many reasons including performance and operations.

Data Archiving is a new feature in SAP Cloud Integration that allows customers to persist data from Cloud Integration to an external Content Management System (CMS).  For example, customers may have legal or reporting requirements, or may simply want data kept for a certain number of years for historical purposes.  This can now be accomplished using this feature.

This blog walks through the documentation available on help.sap.com (Archiving Data) and provides some screenshots and additional information.

In order to archive data, a customer needs to integrate their own Content Management System, which is external to the Cloud Integration tenant.

For the purposes of this blog, I created a new repository using the Document Management Service on a Neo tenant on SAP Business Technology Platform (BTP) (link).  I then developed a proxy-bridge Java application and deployed it to the Neo BTP tenant in order to connect to my repository.  Here is a good blog describing the process of developing this application, which also contains links to the SAP help documentation.

Then, using the proxy-bridge app, I can connect to the repository via the browser binding with clients such as OpenCMIS.  You'll need the repositoryId of the repository, which is required later in the configuration.
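If you want to look up the repositoryId programmatically, here is a minimal Groovy sketch using the Apache Chemistry OpenCMIS client library against the browser binding; the URL is my proxy-bridge endpoint and the credentials are placeholders.

    import org.apache.chemistry.opencmis.client.api.Repository
    import org.apache.chemistry.opencmis.client.api.SessionFactory
    import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl
    import org.apache.chemistry.opencmis.commons.SessionParameter
    import org.apache.chemistry.opencmis.commons.enums.BindingType

    // Connection parameters for the CMIS browser binding exposed by the proxy-bridge app
    def parameters = [
        (SessionParameter.BINDING_TYPE): BindingType.BROWSER.value(),
        (SessionParameter.BROWSER_URL) : 'https://<app name>/cmisproxy-application/cmis/json',
        (SessionParameter.USER)        : 'myUser',      // placeholder credentials
        (SessionParameter.PASSWORD)    : 'myPassword'
    ]

    SessionFactory factory = SessionFactoryImpl.newInstance()
    factory.getRepositories(parameters).each { Repository repo ->
        println "${repo.id} - ${repo.name}"             // repo.id is the repositoryId needed later
    }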


Now we are ready to configure archiving in SAP Cloud Integration.

The first step is to configure the destination.

In the BTP cockpit, navigate to the subaccount for the Cloud Integration tenant.  Select Destinations under Connectivity and click New Destination.


 

Complete the destination configuration using the URL to your repository.  The name needs to be CloudIntegration_LogArchive.

The URL should be the browser binding URL (if you want to use AtomPub instead, see the documentation for the additional property to set).  For example, I used the URL https://<app name>/cmisproxy-application/cmis/json

Provide the details for authentication (basic authentication in my case) and then add an additional property RepositoryId and enter its value.
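For reference, my destination ended up looking roughly like this (the name and the RepositoryId property key are fixed, everything else is specific to my setup):

    Name:            CloudIntegration_LogArchive
    Type:            HTTP
    URL:             https://<app name>/cmisproxy-application/cmis/json
    Proxy Type:      Internet
    Authentication:  BasicAuthentication
    User / Password: <repository credentials>
    Additional property RepositoryId = <your repositoryId>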


 

The next step is to activate archive logging on the Cloud Integration tenant using the OData APIs.  For these steps I'll use Postman.

URL: https://<<CloudIntegrationHost>>/api/v1/activateArchivingConfiguration

Before we can POST, we need to fetch an x-csrf-token.  Set a header property "x-csrf-token" to "Fetch" and issue a GET on the URL to retrieve the token.

Paste the returned token value into the x-csrf-token header and issue an HTTP POST.  You should get back a 200 OK HTTP code confirming that archiving has been enabled.  (Note: the first time I tried this, I received an error that the retrieved destination did not have the required data; it turned out I was missing the RepositoryId property.)
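If you prefer to script these two calls instead of using Postman, here is a rough Groovy sketch; the host and the basic-authentication credentials are placeholders.

    import java.net.CookieHandler
    import java.net.CookieManager

    CookieHandler.setDefault(new CookieManager())    // keep the session cookie between the two calls

    def host = 'https://<CloudIntegrationHost>'      // placeholder host
    def auth = 'Basic ' + 'user:password'.bytes.encodeBase64().toString()
    def url  = new URL("${host}/api/v1/activateArchivingConfiguration")

    // Step 1: GET with header x-csrf-token: Fetch to obtain the token
    def get = url.openConnection()
    get.setRequestProperty('Authorization', auth)
    get.setRequestProperty('x-csrf-token', 'Fetch')
    def token = get.getHeaderField('x-csrf-token')

    // Step 2: POST with the returned token to activate archiving
    def post = url.openConnection()
    post.requestMethod = 'POST'
    post.doOutput = true
    post.setRequestProperty('Authorization', auth)
    post.setRequestProperty('x-csrf-token', token)
    post.outputStream.close()                        // empty request body
    println post.responseCode                        // expect 200 OK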


Next, you need to assign your user the required roles in order to configure archiving.  The documentation refers to the roles ConfigurationService.RuntimeBusinessParameterRead and ConfigurationService.RuntimeBusinessParameterEdit, but these are the roles for Neo tenants.  If you are using Cloud Foundry for your Cloud Integration tenant, you need to assign the roles TraceConfigurationEdit and TraceConfigurationRead.


 

To assign the roles in Cloud Foundry, go into your subaccount in the BTP cockpit, navigate to Security->Roles, and assign the role(s) to a Role Collection that is assigned to your user.  If you are on Neo, the roles are also assigned in the BTP cockpit, but under Security->Authorizations.


Now when you go into the Monitoring view in the Cloud Integration WebUI and open your iFlow using the Manage Integration Content tile, you will see the Data Archiving link enabled.  By default, archiving is not activated.


Click on the Archive Data link and you are presented with the archiving options.  You can archive all sender and receiver channel payloads, or messages persisted to a data store.  In my case, I developed a simple iFlow with a Content Modifier to set a body and a Groovy script that logs this body as an attachment to the Message Processing Log (MPL), so I only selected the option "Log Attachments".
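For reference, the script step in my iFlow looks roughly like this (a simplified sketch of the standard Cloud Integration Groovy script pattern; the attachment name is just my log description):

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def body = message.getBody(String) as String
        // messageLogFactory is provided by the Cloud Integration script runtime
        def messageLog = messageLogFactory.getMessageLog(message)
        if (messageLog != null) {
            // Adds the body as an MPL attachment; this is what later ends up in the archive
            messageLog.addAttachmentAsString('SOAP payload sent', body, 'text/plain')
        }
        return message
    }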


After running the iFlow, the log viewer of the iFlow will show that Archiving is pending for the integration.  By default, the logs will be archived after 7 days according to the documentation.


 

You can then connect to your repository using your preferred tool to see the content.  I'll come back and update the blog once my content is archived.

Two other helpful URLs:

You can check the tenant's archiving configuration using the URL /api/v1/ArchivingConfigurations('s4hccpis')

and you can check archiving performance metrics using this URL:

/api/v1/ArchivingKeyPerformanceIndicators?$filter=MplsToBeArchived eq 5000
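Both endpoints can be called with the same basic authentication as above; here is a quick Groovy sketch (placeholder host and credentials, with the filter value URL-encoded):

    def host = 'https://<CloudIntegrationHost>'      // placeholder host
    def auth = 'Basic ' + 'user:password'.bytes.encodeBase64().toString()

    ["/api/v1/ArchivingConfigurations('s4hccpis')",
     '/api/v1/ArchivingKeyPerformanceIndicators?$filter=MplsToBeArchived%20eq%205000'].each { path ->
        def con = new URL(host + path).openConnection()
        con.setRequestProperty('Authorization', auth)
        con.setRequestProperty('Accept', 'application/json')
        println con.inputStream.text                 // prints the JSON response
    }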

 

Addition 10/10/2021

After one week, I do see the archive files in the repository.  I need to play around with the functionality a bit more, but it seems that a zip file is placed into the repository with the message ID as its name, i.e. <messageid>.zip.  Inside this zip file is some data regarding the integration and archiving configuration and, more importantly, the archived files themselves (my attachment was stored under my log description with a .bin extension, i.e. "SOAP payload sent_.bin").  I suspect all attachments stored during the iFlow run are placed into the same zip under different subfolders, but I will test this out as well.


Thanks,
Marty

 