Scenario: If you need to move ECC on DB2/AIX (on-premises) to ECC on DB2/Linux (AWS), or perform any other heterogeneous DB2 migration to the cloud, this blog addresses export data dump transfer delays to the cloud and shows how to better manage the migration with a parallel export/import process, reducing downtime.

Caveat: This method helps if you have limitations in using the Distribution Monitor, or if you need to reduce the downtime of a DB2 migration with a database larger than 10 TB.

Challenge: During a parallel export/import, the data (flat files) generated by the export process needs to be available on the target cloud in line with the signal (SIG) files that the SWPM export process writes for every package it finishes exporting. Because signal files are small and reach the cloud quickly, while export dump files are large and take longer to synchronize with the cloud directory, latency will repeatedly create situations where a signal file is already available but the corresponding DB flat files are not. As soon as a signal file appears, the import process tries to import that package, which leads to failures and costs time in restarts or continuous manual monitoring. *Mounting NFS on either side leads to the same issues.

Solution: To reduce downtime and achieve a clean import, the Python program below and AWS tools such as DataSync and EFS are used to address this challenge. In our case this reduced downtime by 35% compared to the total time a sequential export/import took (specific to this cloud migration).

[Figure: High-level solution representation]

Step 1 – The Python script checks for any new SIG file created in ‘N/W Exchange Dir_Source’.

Step 2 – If a new SIG file was created, it fetches all file names matching the SIG file from the source ‘Export DATA Directory’, along with their sizes, into a file [created at Temp Dir_OnPrem].

Step 3 – Same as Step 2, but executed against the ‘Import DATA Directory’ on the target; names and sizes are collected into a file [created at Temp Dir_AWS].

Step 4 – If both the names and sizes match, it moves the SIG file from ‘N/W Exchange Dir_Source’ to ‘N/W Exchange Dir_Target’, which is then picked up by the SWPM import channel. (A condensed sketch of this flow follows these steps.)
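
For illustration only, here is a condensed sketch of that gating logic in Python. All directory names are placeholders, and for simplicity it assumes both data directories are visible from the host running the check; the full program further below instead lists the source directory over SSH, since the export data lives on the on-premises host.

import os
import shutil

EXCHANGE_SRC = "/<EFS>/<SID>_Network_OnPrem"   # SIG files written by the export
EXCHANGE_TGT = "/<EFS>/<SID>_Network_AWS"      # SIG files read by the SWPM import
EXPORT_DIR = "<Export DATA Path>"              # dump files on the source side
IMPORT_DIR = "<Import DATA Path>"              # dump files replicated by DataSync

def package_files(directory, package):
    # Return {file name: size} for the dump files belonging to one package
    return {name: os.path.getsize(os.path.join(directory, name))
            for name in os.listdir(directory)
            if name.startswith(package + ".")}

for sig in os.listdir(EXCHANGE_SRC):                    # Step 1: SIG files waiting to be released
    package = sig.split(".")[0]
    source_side = package_files(EXPORT_DIR, package)    # Step 2: names and sizes on the source
    target_side = package_files(IMPORT_DIR, package)    # Step 3: names and sizes on the target
    if source_side and source_side == target_side:      # Step 4: everything matches, release the SIG file
        shutil.move(os.path.join(EXCHANGE_SRC, sig), EXCHANGE_TGT)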

Required file systems [change them accordingly in the script] (see the path constants sketch after this list):

<Export/Import DATA Path> - This should be a local file system on-premises (for faster export speeds) and EFS on AWS. The naming convention can be the same or different on each side.

/<EFS>/<Script location>/ - Path for the Python script.

/<EFS>/<Script location>/OnPrem/ - Directory for storing export temp files.

/<EFS>/<Script location>/AWS/ - Directory for storing import temp files.

/<EFS>/<SID>_Network_OnPrem - Network exchange directory for the export.

/<EFS>/<SID>_Network_AWS - Network exchange directory for the import.
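
As an illustration only, these locations could be collected near the top of the script as constants (all names here are placeholders to adjust to your landscape):

# Placeholder paths - adjust to your landscape before running the script
EXPORT_DATA_PATH = "<Export DATA Path>"                    # local FS on the source host
IMPORT_DATA_PATH = "<Import DATA Path>"                    # EFS path on the AWS host
SCRIPT_DIR = "/<EFS>/<Script location>"                    # where the script and its log live
TEMP_DIR_ONPREM = SCRIPT_DIR + "/OnPrem"                   # export name/size listings
TEMP_DIR_AWS = SCRIPT_DIR + "/AWS"                         # import name/size listings
EXCHANGE_DIR_SRC = "/<EFS>/<SID>_Network_OnPrem"           # SIG files from the export
EXCHANGE_DIR_TGT = "/<EFS>/<SID>_Network_AWS"              # SIG files for the import
LOGFILE = SCRIPT_DIR + "/NetworkExchangelog.txt"           # progress/error log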

 

Required setup:

- Set up password-less SSH from AWS to the on-premises source host, and execute the script from AWS.

- Update the source IP or hostname in the script.

- Run with Python 3.

- DataSync has a default periodicity of 1 hour; set up and schedule it with a 5 or 10 minute periodicity for better results (see the sketch after this list).

- Execute this from the Cloud host.

  • The log file is handy for checking errors; it is generated in the script location.

  • Run the program in a screen session so it does not get terminated.

  • A 15 minute delay is introduced in the program to ensure DataSync finishes its verification after each copy.
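
If the scheduling options available to you are too coarse for a 5–10 minute cycle, one possible approach (a sketch only, not part of the original procedure; the task ARN and intervals are placeholders) is to start DataSync task executions yourself with boto3:

import time
import boto3

# Placeholder ARN of the DataSync task that replicates the export dump directory to EFS
TASK_ARN = "arn:aws:datasync:<region>:<account-id>:task/<task-id>"

datasync = boto3.client("datasync")

while True:
    # Start a new task execution; DataSync copies only new or changed files
    execution = datasync.start_task_execution(TaskArn=TASK_ARN)
    arn = execution["TaskExecutionArn"]

    # Wait for this execution (including its verification phase) to finish
    while True:
        status = datasync.describe_task_execution(TaskExecutionArn=arn)["Status"]
        if status in ("SUCCESS", "ERROR"):
            break
        time.sleep(60)

    time.sleep(300)  # roughly a 5 minute gap between executions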


Python Program:
# Runs on the AWS (cloud) host. Compares the export-side and import-side
# network exchange directories and releases a SIG file to the import side
# only after the corresponding dump files have been fully synced by DataSync.
from filecmp import dircmp
import filecmp
import subprocess
from subprocess import Popen, PIPE
import time
import datetime

while True:
    FileLIST = []
    # SIG files present on the export side but not yet on the import side
    DIRCMP = dircmp('/<EFS>/<SID>_Network_OnPrem', '/<EFS>/<SID>_Network_AWS', ignore=None, hide=None)
    f = open("/<EFS>/<Script location>/NetworkExchangelog.txt", "a")
    now = datetime.datetime.now()
    print(now.strftime("%Y-%m-%d %H:%M:%S"), file=f)
    print(DIRCMP.left_only, file=f)
    AWKCMD = "awk '{print $5,$9}'"
    for name in DIRCMP.left_only:
        if name == "export_statistics.properties":
            # Final statistics file: copy it over only if the export finished without errors
            exportfile = open(r"/<EFS>/<SID>_Network_OnPrem/export_statistics.properties", "r")
            IS_Error = exportfile.readlines()[2]
            IS_str = IS_Error[0:7]
            if IS_str == 'error=0':
                AWSCOPY1 = subprocess.Popen(['cp -Rpf /<EFS>/<SID>_Network_OnPrem/' + name + ' /<EFS>/<SID>_Network_AWS/'], shell=True, stdout=f)
                AWSCOPY1.communicate()
            exportfile.close()
        else:
            # Package SIG file: list the package's dump files (size and name) on both sides
            SPLITFile = name.split(".")[0]
            OnPremRun = subprocess.Popen(['ssh', '<userID@SOURCE_IP/HOSTNAME>',
                                          'ls -ltr <Export DATA Path> | grep -e ' + SPLITFile + '.0 -e ' + SPLITFile + '.TOC | ' + AWKCMD +
                                          ' > /<EFS>/<Script location>/OnPrem/' + SPLITFile + '.log'], stdout=f)
            OnPremRun.communicate()
            time.sleep(5)
            AWSRun = subprocess.Popen(['ls -ltr <Import DATA Path> | grep -e ' + SPLITFile + '.0 -e ' + SPLITFile + '.TOC | ' + AWKCMD +
                                       ' > /<EFS>/<Script location>/AWS/' + SPLITFile + '.log'], shell=True, stdout=f)
            AWSRun.communicate()
            OnPremFile = "/<EFS>/<Script location>/OnPrem/" + SPLITFile + ".log"
            AWSFile = "/<EFS>/<Script location>/AWS/" + SPLITFile + ".log"
            time.sleep(2)
            # Release the SIG file only when source and target listings are identical
            if filecmp.cmp(OnPremFile, AWSFile):
                FileLIST.append(name)
            else:
                print("File {} not yet in SYNC in AWS".format(name), file=f)
    print(now.strftime("%Y-%m-%d %H:%M:%S"), file=f)
    print("Entering into 15 min wait loop", file=f)
    # Sleep time required for DataSync verification to finish after the copy
    time.sleep(900)
    for filename in FileLIST:
        AWSCOPY = subprocess.Popen(['cp -Rpf /<EFS>/<SID>_Network_OnPrem/' + filename + ' /<EFS>/<SID>_Network_AWS/'], shell=True, stdout=f)
        AWSCOPY.communicate()
        print("File {} is in SYNC and moved to Target Directory".format(filename), file=f)
    f.close()

There may well be better solutions than this program; it would be interesting to learn if you have come across any.