The content server installation has two flavors:

  1. SAP Content Server and SAP Cache Server (legacy)

These versions depend on Microsoft IIS (Windows) or Apache HTTP Server (Unix) as the web engine.

  2. SAP Content Server and SAP Cache Server (7.5 or higher)

These newer versions of the Content Server are standalone and do not require any web engine.

The below approaches can be used to migrate (homogeneous/heterogeneous) and consolidate the content server data:

  1. Performing backup/restore with a full backup, incremental backups, and log backups.

  2. Performing migration of the repositories via export/import (loadercli).

  3. Performing migration of the documents via custom SAP reports.


Pre-requisites:

  1. For any content server migration, make sure that enough data files and log files are added on the target, taking the source as a reference. (This is required to accommodate the source data.)

  2. The target content server version must be higher than or equal to that of the source.

  3. Make sure the content servers on the source and target are of the same installation flavor.


Note: Always make sure that an initial vanilla backup has been taken on the target system before making any changes.

The Backup/Restore method

With this method, we take a full backup (as a single file or split files) on the source, plus multiple incremental backups and log backups during cutover, and restore them onto the target.
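As a sketch of the source side, the full backup can be prepared as a dbmcli command file and run non-interactively. The medium name backup_full and the backup path below are illustrative assumptions, not fixed names:

```shell
# Build a dbmcli command file that defines a backup medium and starts a
# complete data backup on the source (medium name and path are assumed).
cat > backup.cmd <<'EOF'
util_connect
medium_put backup_full /backup/CS_full.data FILE DATA 0 8
backup_start backup_full DATA
EOF

# It would then be executed with (not run here):
# ./dbmcli -d <SID> -u CONTROL,<Password> -i backup.cmd
```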

Below are the commands that can be used for the same.

  1. Log in to MaxDB with the below commands:


su - sqd<SID>

./dbmcli -d <SID> -u CONTROL,<Password>

  2. Execute the below command to check the db state.


db_state

  3. Create a medium from the database backups downloaded from the source.



  • Complete data backups are mapped to DATA file.

  • Incremental data backups are mapped to PAGES file.

  • Log backups are mapped to LOG file.


medium_put restore_dump <path to backup file/backup file name> FILE DATA 0 8 (Full backup part 1)

medium_put restore_dump1 <path to backup file/backup file name> FILE DATA 0 8 (Full backup part 2)

medium_put restore_dumpn <path to backup file/backup file name> FILE DATA 0 8 (Full backup part n)

medium_put restore_dump2 <path to backup file/backup file name> FILE PAGES 0 8 (Incremental backup)

medium_put restore_dump3 <path to backup file/backup file name> FILE LOG 0 8 (Log backups)
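For a full backup split into many parts, the medium_put lines can be generated rather than typed one by one. The part count and file naming below are assumptions for illustration:

```shell
# Generate one medium_put line per split part of the full backup and
# collect them in media.cmd for a single "dbmcli -i media.cmd" call.
N=4                                   # number of split parts (assumed)
for i in $(seq 0 $((N - 1))); do
  if [ "$i" -eq 0 ]; then
    name="restore_dump"               # first part keeps the plain name
  else
    name="restore_dump${i}"           # later parts get a numeric suffix
  fi
  echo "medium_put ${name} /restore/CS_full.data.${i} FILE DATA 0 8"
done > media.cmd
```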

  4. Once all the media have been added, use the below command to check the added backup files.


medium_getall

  5. Change the database to admin mode before restoring.


db_admin

Note: The database cannot be brought into online mode until all the files are recovered.

  6. Start the content server restore with the below command.


recover_start restore_dump

  7. The subsequent backup parts have to be restored with the below command.


recover_replace restore_dump1

  8. Make sure that the session remains the same between the restores. An alternative way to restore all the backups at once is with the below shell script.


Commands to build recovery.cmd:

echo db_admin >> recovery.cmd

echo db_connect >> recovery.cmd

echo recover_start restore_dump >> recovery.cmd

echo recover_replace restore_dump1 >> recovery.cmd

...

echo recover_replace restore_dump10 >> recovery.cmd

Commands to build run.cmd:

echo /sapdb/programs/bin/dbmcli -d <SID> -u control,<password> -i recovery.cmd >> run.cmd
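The same two command files can be produced with a short loop instead of repeated echo statements (a sketch; the part count of 10 follows the example above):

```shell
# Build recovery.cmd: start with the first full-backup part, then replace
# through parts restore_dump1 .. restore_dump10.
{
  echo "db_admin"
  echo "db_connect"
  echo "recover_start restore_dump"
  for i in $(seq 1 10); do
    echo "recover_replace restore_dump${i}"
  done
} > recovery.cmd

# Build run.cmd so the whole restore can be launched under nohup.
echo '/sapdb/programs/bin/dbmcli -d <SID> -u control,<password> -i recovery.cmd' > run.cmd
chmod +x run.cmd
```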

  9. Execute the below command to run the restoration in the background.


nohup ./run.cmd &

 10. You can check the restore status with the below command.


recover_state

 11. For the incremental backups, use the commands again.


recover_start restore_dump11

recover_replace restore_dump12

...

recover_replace restore_dump20

 12. Make sure to clear the log before restoring the log backups.


util_connect

util_execute clear log

recover_start restore_dump21

recover_replace restore_dump22

...

recover_replace restore_dump30

 13. Before bringing the database online, execute the below commands.


db_execute ALTER PASSWORD SUPERDBA <password>

load_systab -u SUPERDBA,<password>

 14. Now the database can be brought online.


db_online

 15. Now repoint the repositories to the migrated S/4HANA system via transaction OAC0 and check the connectivity via program RSCMST from SE38.


 

Export/Import method (loadercli):

Make sure that the repository is present on both source and target with the same name.

It also works for heterogeneous combinations (e.g. source on Windows and target on Linux).

  1. First, check the repository number by logging in with the SAPR3 user and executing the below statement on the source system.


select * from CONTREP

  2. Make note of the repository number (let's consider X in our case) and prepare the below text files with the mentioned content.


exportdocs1.txt

EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS000X" DATA OUTFILE 'DOCUMENTS000X_full.data' RECORDS

exportcomps1.txt

EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS000X" DATA OUTFILE 'COMPONENTS000X_full.data' RECORDS

  3. Export the data using the below commands.


./loadercli -d <SID> -u SAPR3,SAP -b /<path to file>/exportdocs1.txt

./loadercli -d <SID> -u SAPR3,SAP -b /<path to file>/exportcomps1.txt
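When several repositories are being consolidated, the export control files can be generated per repository number instead of being written by hand. The repository numbers 1 2 3 below are illustrative assumptions:

```shell
# Generate a pair of loadercli export control files per repository number.
for X in 1 2 3; do
  cat > "exportdocs${X}.txt" <<EOF
EXPORT COLUMNS * FROM "SAPR3"."DOCUMENTS000${X}" DATA OUTFILE 'DOCUMENTS000${X}_full.data' RECORDS
EOF
  cat > "exportcomps${X}.txt" <<EOF
EXPORT COLUMNS * FROM "SAPR3"."COMPONENTS000${X}" DATA OUTFILE 'COMPONENTS000${X}_full.data' RECORDS
EOF
done
# Each file is then run with: ./loadercli -d <SID> -u SAPR3,SAP -b <file>
```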

  4. Copy the export files to the target server.

  5. Similarly, check the repository number on the target.


select * from CONTREP

  6. Let's assume the repository number is again X in our case, and prepare the below import control files.


importdocs1.txt

IMPORT TABLE "SAPR3"."DOCUMENTS000X" IGNORE DUPLICATES data instream file 'DOCUMENTS000X_full.data' RECORDS

importcomps1.txt

IMPORT TABLE "SAPR3"."COMPONENTS000X" IGNORE DUPLICATES data instream file 'COMPONENTS000X_full.data' RECORDS

  7. Import the data using the below commands.


./loadercli -d <SID> -u SAPR3,SAP -b /<path to file>/importdocs1.txt

./loadercli -d <SID> -u SAPR3,SAP -b /<path to file>/importcomps1.txt
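The matching import control files on the target can be generated the same way (repository numbers 1 2 3 again illustrative):

```shell
# Generate a pair of loadercli import control files per repository number.
for X in 1 2 3; do
  cat > "importdocs${X}.txt" <<EOF
IMPORT TABLE "SAPR3"."DOCUMENTS000${X}" IGNORE DUPLICATES data instream file 'DOCUMENTS000${X}_full.data' RECORDS
EOF
  cat > "importcomps${X}.txt" <<EOF
IMPORT TABLE "SAPR3"."COMPONENTS000${X}" IGNORE DUPLICATES data instream file 'COMPONENTS000${X}_full.data' RECORDS
EOF
done
```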

 

Migration documents via custom Z programs (Z_DOC_COPY & Z_MIGRATE_ARCHIVELINK)

The advantage of using the mentioned reports is that they can be run periodically until the cutover.

Z_DOC_COPY (SAP Note 2774469)

This report can be used to copy the attachments from the source content server to the target server one by one.

The report compares the list of attachments between source and target; therefore it can be re-run/resumed anytime.

  1. Execute SE38 and run the report Z_DOC_COPY.


Provide the below inputs and click on Execute.

Destination Repository:

Source Repository:

Document List:

Z_MIGRATE_ARCHIVELINK (SAP Note 1043676)

This report can be used to copy link-based attachments from the source content server to the target.

The report changes the link table entries and updates them with the newly generated IDs in the target repository.

  1. Execute SE38 and run the report Z_MIGRATE_ARCHIVELINK.


Provide the below inputs, check the appropriate boxes, and click on Execute.

OLD_ARC:

NEW_ARC:

 

The below blog was used for the migration strategies mentioned above.

https://blogs.sap.com/2019/12/09/content-repository-migration-in-mind/

Special thanks to Alexander Kaiser, Ashok Dalai & B.T Praveen Kumar