If we go back to part 2, this is where we ended: a filesystem full of config files that, by themselves, do nothing.
Still, there are already some advantages to this approach: every developer has a filesystem, so there's no need for special slow IDEs, network connectivity, dependencies, or deep technical knowledge. These are simply text files.
However, there are a couple of problems as well: the files live only on our own machines, there is no history of changes, no way to revert to a previous version, and no record of who changed what and when.
The answer to all of these problems was git. We commit this directory into a git repository as shown below.
If you're not comfortable with git, I would advise you to first create a git repository via the UI, clone it onto your system (using git clone), copy the files we created before into that folder, and then commit them in one go:
git add . && git commit -m "dynamic interface for demo purposes" && git push
OK, now we have our files on a git server, which means they're no longer stored only on our own system: we have auditing (the full commit history of every change that was committed), we can revert to previous versions, and we can see when these files were changed and by whom, so we never lose track of them.
Now, how do we define different values per environment? For that we use git branches. For those of you not familiar with branches, you can think of them almost as separate directories, disconnected from the original one, where you can have different files with different contents. In this case we'll have exactly the same files across all environments, but with different values for these parameters.
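As a quick illustration (the parameter name and URLs here are invented for the example; only the mechanism matters), the same file exists on every branch with environment-specific values:

```
# NUPEBPAPOC.txt on branch DEV
TargetEndpoint=https://dev.example.com/orders

# NUPEBPAPOC.txt on branch PROD
TargetEndpoint=https://prod.example.com/orders
```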
Now that we have our storage and design time artifacts ready, let's see how to sync from git into the partner directory. How does this sync happen?
We create one partner ID per config level/filename, and inside it we store everything defined in the file as string parameters for that partner ID. The only exception is the XSLT part, which we store as a binary parameter associated with that partner ID. This was mostly inspired by the SAP TPM implementation, which we deeply respect and admire!
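To make that mapping concrete (file and path names below are illustrative; XsltTransformation and LookupRules are the two specially handled keys you'll see in the sync code):

```
NUPEBPAPOC.txt in git                          Partner Directory (Pid FERDyn_NUPEBPAPOC)
XsltTransformation=NUPEBPAPOC.xsl          ->  binary parameter (file contents, base64)
LookupRules=NUPEBPAPOC_rules.txt           ->  string parameter (file contents, base64)
SAP_ProcessDirect_Address=/dyn/NUPEBPAPOC  ->  string parameter (value as-is)
```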
For this purpose we use Jenkins as our automation orchestrator, since we already have it in place for other purposes, but the same can be achieved with other approaches (bash/PowerShell, for instance). The code of our Jenkins pipeline:
import groovy.json.JsonSlurper

def packageResponsible = '********'
def GITBranch = "master"

pipeline {
    agent any
    options {
        // The checkout is handled explicitly inside the shared library
        skipDefaultCheckout()
    }
    stages {
        stage('Setup dynamic interface') {
            steps {
                script {
                    def Environment = params."CPI environment to sync"
                    def InterfaceId = params."InterfaceId to sync"
                    def Action = params."Action"
                    def EnvironmentSuffix = "_" + Environment
                    // Environment-specific settings win; otherwise fall back to the defaults
                    def CPIHost = env['CPI_HOST' + EnvironmentSuffix] ?: env['CPI_HOST']
                    def CPIOAuthCredentials = env['CPI_OAUTH_CRED' + EnvironmentSuffix] ?: env['CPI_OAUTH_CRED']
                    def CPIOAuthHost = env['CPI_OAUTH_HOST' + EnvironmentSuffix] ?: env['CPI_OAUTH_HOST']
                    def GITRepositoryURL = env['GIT_REPOSITORY_BINARIES_URL' + EnvironmentSuffix] ?: env['GIT_REPOSITORY_BINARIES_URL']
                    def GITCredentials = env['GIT_CRED' + EnvironmentSuffix] ?: env['GIT_CRED']
                    // The environment name doubles as the git branch to sync from
                    setupDynamicIntf(Environment, GITCredentials, InterfaceId, CPIOAuthCredentials, CPIOAuthHost, CPIHost, Action, Environment)
                }
            }
        }
    }
    post {
        failure {
            emailext mimeType: 'text/html', body: '${JELLY_SCRIPT,template="html"}',
                from: "******",
                to: packageResponsible,
                subject: 'Build failed in Jenkins: $PROJECT_NAME - #$BUILD_NUMBER'
        }
        unstable {
            emailext mimeType: 'text/html', body: '${JELLY_SCRIPT,template="html"}',
                from: "******",
                to: packageResponsible,
                subject: 'Build unstable in Jenkins: $PROJECT_NAME - #$BUILD_NUMBER'
        }
    }
}
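A note on the job parameters: the pipeline reads them via params."CPI environment to sync" and friends, but the parameters block itself isn't shown above. A minimal sketch of what it could look like, placed inside the pipeline block (the parameter names must match exactly; the concrete choices offered are assumptions based on the values used later in this post):

```groovy
parameters {
    // Also selects the git branch to sync from (see the shared library below)
    choice(name: 'CPI environment to sync', choices: ['DEV', 'TEST', 'PREPROD', 'PROD'],
            description: 'Target CPI runtime node')
    string(name: 'InterfaceId to sync', defaultValue: 'NUPEBPAPOC',
            description: 'Interface whose design time files should be synced')
    // "Cleanup and Prepare" is the exact value the shared library checks for
    choice(name: 'Action', choices: ['Cleanup and Prepare', 'Cleanup only'],
            description: 'Clean up existing partner directory data and, optionally, recreate it')
}
```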
You'll need a Jenkins shared library set up, containing the setupDynamicIntf file shown below.
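In case you haven't used shared libraries before, loading one from the Jenkinsfile looks roughly like this ('cpi-cicd-lib' is a hypothetical name; use whatever you registered under Manage Jenkins -> Global Pipeline Libraries). Because the function is invoked as setupDynamicIntf(...), the file below must live at vars/setupDynamicIntf.groovy and expose a call method:

```groovy
// Jenkinsfile: make the shared library (and therefore setupDynamicIntf) available.
// 'cpi-cicd-lib' is a placeholder for your registered library name.
@Library('cpi-cicd-lib') _
```

The shared library file itself: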
import groovy.json.JsonSlurper

def gitBranchCheckExist(String token, String repo, String branch) {
    try {
        def getBranchExist = httpRequest acceptType: 'APPLICATION_JSON',
            customHeaders: [[maskValue: false, name: 'Authorization', value: 'token ' + token]],
            ignoreSslErrors: true, httpMode: 'GET', validResponseCodes: '100:399, 404', timeout: 60,
            url: "https://${env.GIT_REPOSITORY_HOST}/api/v1/repos/SAP_CPI_CICD/${repo}/branches/${branch}"
        if (getBranchExist.status != 200) {
            return false
        }
        return true
    } catch (Exception e) {
        error("Unable to check if branch [${branch}] exists:\n${e}")
    }
    return false
}

def call(String GITBranch, String GITCredentials, String intfId, String oauthCredentials, String cpioauthHost, String cpiHost, String action, String environment) {
    script {
        println("InterfaceID associated to this run [${intfId}], action to do is [${action}] and target environment is [${environment}]. Requester for this execution is [${currentBuild.getBuildCauses()[0].shortDescription} / ${currentBuild.getBuildCauses()[0].userId}]")
        deleteDir()
        // Fetch the gitea API token from the Jenkins credentials store
        def giteatoken = ''
        withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: env.GITEA_API_KEY_CRED, usernameVariable: 'GiteaApiUsr', passwordVariable: 'GiteaApiKey']]) {
            giteatoken = GiteaApiKey
        }
        String gitRepo = 'CPIDynamicIntf_' + intfId
        boolean branchExist = gitBranchCheckExist(giteatoken, gitRepo, GITBranch)
        if (!branchExist) {
            error("Branch with name [${GITBranch}] does not exist for git repository [${gitRepo}]. Please create it in gitea")
            return
        } else {
            println("Branch with name [${GITBranch}] exists in git repository [${gitRepo}].")
        }
        checkout([$class: 'GitSCM',
            branches: [[name: GITBranch]],
            doGenerateSubmoduleConfigurations: false,
            extensions: [
                [$class: 'RelativeTargetDirectory', relativeTargetDir: "."]
            ],
            submoduleCfg: [],
            userRemoteConfigs: [[
                credentialsId: GITCredentials,
                url: 'https://<yourgitrepourl>:3000/SAP_CPI_CICD/' + gitRepo
            ]]
        ])
        def token = common.getToken(oauthCredentials, cpioauthHost)
        def namespace = "FERDyn_"
        dir("Preparation Steps") {
            int result = 0
            // Delete any previously synced binary parameters for this interface
            result = common.deleteAllLevelsBinaryParameterFromPD(token, cpiHost, namespace, intfId)
            if (result != 204 && result != 404) {
                error("Binary parameters for interface [${intfId}] were not deleted from partner directory")
            }
            // Reset the result code, then delete the string parameters as well
            result = 0
            result = common.deleteAllLevelsStringParameterFromPD(token, cpiHost, namespace, intfId)
            if (result != 204 && result != 404) {
                error("String parameters for interface [${intfId}] were not deleted from partner directory")
            }
        }
        if ("Cleanup and Prepare".equals(action)) {
            dir("Preparation Steps") {
                def notCreatedParams = []
                def partnerFiles = findFiles(glob: '**/*.txt')
                partnerFiles.each { partnerFile ->
                    // The partner ID is the file name, prefixed with our namespace
                    def partnerId = partnerFile.name
                    partnerId = partnerId.replaceAll(".txt", "")
                    partnerId = namespace + partnerId
                    if (partnerId.length() <= 60) {
                        def partnerContent = readFile partnerFile.path
                        def lines = partnerContent.readLines()
                        def keyValuePairs = [:]
                        for (line in lines) {
                            if (!line.contains('=')) { continue } // skip blank/malformed lines
                            def (key, value) = line.split('=', 2)
                            keyValuePairs[key.trim()] = value.trim()
                        }
                        keyValuePairs.each { paramname, paramvalue ->
                            def result = 0
                            if ("XsltTransformation".equalsIgnoreCase(paramname)) {
                                // XSLTs are stored as binary parameters
                                String filecontents = readFile paramvalue
                                def filebase64 = common.getBase64Content(filecontents)
                                println("Creating now a binary parameter on partner directory for partner Id [${partnerId}], parameter name [${paramname}] and param value [${filecontents}]")
                                result = common.createBinaryParameterOnPD(token, cpiHost, partnerId, "XsltTransformation", filebase64, "xsl")
                            } else if ("LookupRules".equalsIgnoreCase(paramname)) {
                                // Lookup rules are stored base64-encoded as a string parameter
                                String filecontents = readFile paramvalue
                                println("Creating now the LookupRules as a String parameter on partner directory for partner Id [${partnerId}], parameter name [${paramname}] and param value [${filecontents}]")
                                String filebase64 = common.getBase64Content(filecontents)
                                result = common.createStringParameterOnPD(token, cpiHost, partnerId, paramname, filebase64)
                            } else {
                                println("Creating now a String parameter on partner directory for partner Id [${partnerId}], parameter name [${paramname}] and param value [${paramvalue}]")
                                result = common.createStringParameterOnPD(token, cpiHost, partnerId, paramname, paramvalue)
                            }
                            if (result != 201) {
                                error("Parameter [${paramname}] was not created in partner directory")
                            }
                        }
                    } else {
                        notCreatedParams.add(partnerId)
                    }
                }
                if (notCreatedParams.size() > 0) {
                    error("Some parameters exceeded the maximum of 60 characters allowed for key definition. Alternative is to perform your logic either via xslt or redirecting from a level up into a process direct. List of problematic partnerIds [${notCreatedParams.join(",")}]")
                    return
                }
            }
        }
    }
}
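The common object used throughout (getToken, createStringParameterOnPD, deleteAllLevelsBinaryParameterFromPD, and so on) is another shared-library file wrapping the CPI OData APIs; its full implementation isn't part of this post. Purely as an illustration of the shape of these helpers, a getToken compatible with the calls above could look like this, assuming an OAuth client-credentials flow with the client ID/secret stored as a Jenkins username/password credential (the endpoint layout and credential handling are assumptions):

```groovy
// Sketch of common.getToken - fetch a bearer token for the CPI OData APIs.
// The token endpoint path and credential binding are assumptions for illustration.
def getToken(String oauthCredentialsId, String oauthHost) {
    def token = ''
    withCredentials([usernamePassword(credentialsId: oauthCredentialsId,
            usernameVariable: 'CLIENT_ID', passwordVariable: 'CLIENT_SECRET')]) {
        def basic = "${CLIENT_ID}:${CLIENT_SECRET}".getBytes('UTF-8').encodeBase64().toString()
        def response = httpRequest httpMode: 'POST', ignoreSslErrors: true, timeout: 60,
            customHeaders: [[maskValue: true, name: 'Authorization', value: 'Basic ' + basic]],
            url: "https://${oauthHost}/oauth/token?grant_type=client_credentials"
        def json = new groovy.json.JsonSlurper().parseText(response.content)
        // The rest of the code passes this value straight into the Authorization header
        token = 'Bearer ' + json.access_token
    }
    return token
}
```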
One thing to bear in mind is that each file we created on the filesystem will be created in the partner directory prefixed with the FERDyn_ namespace (to avoid collisions with other partners). There's also a limit: the partner ID cannot exceed 60 characters. What can you do once you face this?
In any of the files you can use the parameter
SAP_ProcessDirect_Address=<address of process direct inside your tenant>
This means that for really long, ultra-complex chains with many key fields involved, you might at some point be forced to fall back to a process direct, where you do the rest of your lookups if needed.
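For example (the file name and address below are invented for illustration), a level whose combined key would blow past the limit simply stops there and hands over:

```
# <very long key combination>.txt - instead of defining more lookup levels:
SAP_ProcessDirect_Address=/dynamic/NUPEBPAPOC/complex_chain
```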
Now, as an example, one of the common APIs referenced above:
/**
 * Update a binary parameter in the partner directory
 */
def updateBinaryParameterOnPD(String cpiToken, String cpiHost, String partnerId, String paramName, String paramValue, String contentType) {
    def bodyPayload = """{
        "ContentType": "${contentType}",
        "Value": "${paramValue}"
    }"""
    try {
        def pdParamUpdate = httpRequest customHeaders: [[maskValue: false, name: 'Authorization', value: cpiToken]],
            requestBody: bodyPayload, contentType: 'APPLICATION_JSON', acceptType: 'APPLICATION_JSON',
            ignoreSslErrors: true, validResponseCodes: '100:399, 400, 404', timeout: 600, httpMode: 'PUT',
            url: "https://${cpiHost}/api/v1/BinaryParameters(Pid='${partnerId}',Id='${paramName}')"
        if (pdParamUpdate.status != 204) {
            println("Error while updating binary parameter [${paramName}] with value [${paramValue}] on partner directory. Details on error [${pdParamUpdate.content}]. Details on body [${bodyPayload}] \n")
        }
        return pdParamUpdate.status
    } catch (Exception e) {
        error("Error while updating binary parameter [${paramName}] with value [${paramValue}] on partner directory. Details on error [${e.message}]. Details on body [${bodyPayload}] \n${e}")
    }
}
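Another one of those helpers, since it shows up several times in the sync code, is getBase64Content. It isn't shown in this post either; assuming it simply base64-encodes the UTF-8 text it receives, a minimal sketch would be:

```groovy
// Sketch of common.getBase64Content - base64-encode file contents for the
// partner directory payloads (the real helper isn't shown in the article).
def getBase64Content(String contents) {
    return contents.getBytes('UTF-8').encodeBase64().toString()
}
```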
So the pipeline accepts an interface ID to sync (the one from this demo is NUPEBPAPOC), then an environment (DEV, TEST, PREPROD or PROD), which is used to determine both the git branch and the CPI runtime node to deploy to. Finally, the action determines whether to clean up the existing data for this partner ID in the partner directory before setting it up.
If the execution above terminates successfully, you should see all the info synced into Monitor -> Manage Partner Directory on your CPI runtime node, with the respective string parameters specified in the text files, and the respective binary parameters (storing the XSLTs).
In this chapter we covered everything from a "deployment" perspective: picking the design time files up from the local filesystem into git, and then syncing from git into the partner directory. Now everything is ready for the generic iflow to take charge!