
In this one we are going to talk about serious things, so I'd really appreciate being challenged on the assumptions I make here or the solutions I suggest.

Quite possibly someone knows better why it should not be done this way.

Anyway, use it all at your own risk (if you somehow read this and decide to give it a try).

Link back to index: https://community.sap.com/t5/technology-blogs-by-members/blog-series-on-my-cpi-camel-learning-journe...

About property placeholders in Camel and CPI

As we need to have some Camel stuff in each post, we will start with property placeholders.

You might ask why. Well, the assumption I made, based on my expectations of how CPI works with properties (externalized parameters), forced me to re-implement things after it was proven wrong.

So, in short: what are property placeholders, and how are they potentially related to transporting our artifacts?

Well, as the documentation states: "Property placeholders are used to define a placeholder instead of the actual value. This is important as you would want to be able to make your applications external configurable, such as values for network addresses, port numbers, authentication credentials, login tokens, and configuration in general."

That might sound similar to the externalized parameters we have in CPI, and even the double curly brace syntax is the same.

But there is a big catch.

In Camel there is the Properties component, which deals with .properties files that act as the source of those placeholders.

And imagine if we could just have properties files for DEV, QA and PROD and then include them in our camelContext dynamically via an environment variable that Karaf gets from the OS (or as a JVM property) or defines itself.

Maybe even looking like this:

camel-properties.png
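
Something along these lines, as a hypothetical and untested sketch (the file names, the APP_HOME and STAGE variables, and the receiver_url property are all invented for illustration):

    <!-- hypothetical, untested sketch: pick a .properties file per environment
         via OS env variables (e.g. STAGE=dev|qa|prod); names are made up -->
    <camelContext xmlns="http://camel.apache.org/schema/blueprint">
      <propertyPlaceholder id="properties"
          location="file:${env:APP_HOME}/etc/iflow-${env:STAGE}.properties"/>
      <route>
        <from uri="timer:demo?period=5000"/>
        <!-- {{receiver_url}} would be resolved from the selected file -->
        <to uri="{{receiver_url}}"/>
      </route>
    </camelContext>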

But it does not look like this, because CPI does NOT use the Properties component (therefore I did not bother actually testing it, so I cannot 100% guarantee the syntax above is correct).

Instead we have a separate mechanism exposed via an API and also through the Externalize and Configure UIs.

So in the actual blueprints we are not going to find {{my_externalized_property}} expressions, but rather the actual parameter values that were substituted at "compile time" (in the .iflw artifacts in src we CAN see them, though).

And the almost fatal assumption I made was based on the fact that we actually still DO have a parameters.prop file and its counterpart parameters.propdef in our repo.

For example, here it is for the aforementioned content_enricher_json we covered last time (which in turn was a copy of content_enricher I made from the guidelines example).

And it looks like this (what's up, Dude? 😉)

cpi-properties.png
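
For those who cannot see the screenshot, a mock-up of the kind of content you would find in such a parameters.prop file (the names and values are invented; the date line is the standard header Java's Properties.store writes):

    #Fri Mar 01 10:15:00 UTC 2024
    receiver_url=https\://example.com/odata/demo
    user_name=Dude
    format=json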

Why does it matter? Well, because in this file, for some reason, we see properties that are not even used in the iflow.

And even though the creation date in the header is current (the date when I copied it from the guidelines sample), the content actually comes from that original iflow.

How come? Well, those are the default values we configured in the UI.

And not only that: each time we create an externalized token, it is instantly created in this "Configuration service" and kept there forever (unless you manually clean it up and rezip the artifact). And this explains why the Dude is still with us even though the author of that iflow created it just by accident.

So first of all, that means: be careful about what you might expose in your code if you use git to sync your design-time artifacts.

And then there was the wrong assumption I made: that those were the current values "dumped" to the prop file during "compilation".

Later I discovered that we also have externalconfig.prop in the src folder, which indeed has the value that was configured in the UI, but it looked like it was ignored during subsequent import/rezip.

 

About mtars and Content Agent Service

My great-grandfather used to tell me when I was a kid: "Don't you dare touch PROD with your dirty hands! Keep everything in code and only deploy the artifacts via CI/CD so that you control the state PROD is in!"

What I mean by that is: as developers, we want to make sure our stuff works before it even leaves DEV. And once it does, everything else happens automatically.

And this is exactly why that wrong assumption mattered so much: I wanted to prepare all the design-time artifacts in my DEV environment and then just deploy them in PROD (maybe even without manual intervention).

And now we need to talk a little bit about mtars, the archive format for MultiTARget applications (MTAs).

Mtars

In BTP those were first-class citizens even in Neo times (obviously it wasn't even called Neo back then).

And in CF they were introduced as well, since there was originally just the bare application concept (you can probably still find examples of single apps deployed via cf push with a simple manifest.yml).

But the point of an mtar is to define a container for more than one application (module, as they call it), the resources and services those modules need, and the relations between the modules (provide/require).

For example, this is the SAP CAP documentation regarding mtars, which leads to another one leading to SAP Help (and probably I will do one of the next blog parts covering "cloudification of the mpl app", where we will of course need to build and deploy our mtar to BTP).

The most important point here is that if we have the code locally, we can just use the mbt tool (which the first link points to) to get us the mtar file.

And back to those Camel properties files I mentioned earlier: it would be super cool if we were able to build the mtar for our package ourselves, locally or in a CI/CD pipeline, because we would have everything in git.

And then we would simply do the same thing we typically do in CAP: cf deploy (a cf plugin to deal with mtars in the Cloud Foundry environment).
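
In a regular CAP-style project that flow is just this (a minimal sketch, assuming an mta.yaml descriptor exists; the archive name is made up):

    # build the mtar locally from mta.yaml into the gen folder
    mbt build -t gen --mtar my_package.mtar
    # deploy it with the multiapps cf plugin
    cf deploy gen/my_package.mtar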

The only problem is: we cannot do that... Because we need a proper mta descriptor (and maybe even some build scripts) to tell mbt how to build the mtar for us.

Cas

I discovered this guy while trying to set up the mtar download option for CPI - obviously I was curious to see what was in there, whether I could just deploy it, and maybe even build one myself (like I described above).

Basically, if you are trying to set up CTMS (or just mtar export), you will find blogs (like this or that) or just SAP help docs telling you to create an instance of this guy with the "standard" plan and then to create a couple of destinations so that CPI and CAS (and TMS) can talk to each other.

For mtar download you'll just need to create the CloudIntegration (CAS talks to CPI) and ContentAssemblyService (CPI talks to CAS) destinations.

And indeed it works, although it looks like sometimes the BTP UI incorrectly saves destination params, so make sure you enter the token url first and then the clientSecret - at least that is what finally helped me get it working in a trial tenant (at some point it cleared the secret after I changed the token url).

But there's something else: there is a "free" plan (as a subscription) that gives you access to the nice UI to import/export packages (and even separate iflows as mtars), and an "application" plan that allows, well, using the APIs to perform import/export programmatically (I cheated a little bit and used the UI to trace requests/responses, which seem to be the same).

For this, we only need the CloudIntegration destination (to be able to pull/push data).
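
In case it helps, that destination boils down to an HTTP destination with OAuth2 client credentials taken from your service key; treat this as a sketch, since the exact URL and values depend on your tenant:

    Name:               CloudIntegration
    Type:               HTTP
    URL:                https://<your-cpi-host>/api/v1
    Authentication:     OAuth2ClientCredentials
    Client ID:          <clientid from the service key>
    Client Secret:      <clientsecret from the service key>
    Token Service URL:  <tokenurl from the service key>/oauth/token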

Failure and analysis

Now, finally, back to my assumption:

  • I hoped that I could implement some UI in my capic tool to do a "mass-change" of externalized parameters (via the API) based on a props file in my repo (locally, where capic runs), and then export the mtar via CAS so that it would have those parameters in its prop file.
  • So that later I (or some other responsible person, cuz we DON'T want to give developers access to PROD, remember?) could just import it using that nice CAS UI, where ALL the imports could later be traced/audited.

And that was looking almost pro-code enough to me...

Except for the fact that the assumption was wrong, so after I implemented all that stuff I found my mtar was exported with just those default values, and not the configured ones I had applied...

And indeed, after I checked the documentation for the iflow download option "Merged Configured and Default Values", it became clear that my Camel-inspired pro-code assumption was wrong: the export clearly either keeps the iflow as it was (original default values) or "dumps" the current ones into that parameters.prop file we discovered earlier.

"Merged Configured and Default Values: Downloads the integration flow with values that consists of configured and default values. This option would replace the default value with the configured value and accept it as a new default value"

Some inner mtar details

And at that point I felt like I was doomed, because in order to have the flow I envisioned I would have to reverse engineer the mtar build process for a package (and even though it is just a zip with zipped iflows, there is this magic base64 encoded json resources.cnt file and also a hash file with some content hashes).

The mtad.yaml (the deployment descriptor used during deployment of the mtar) is actually rather simple, and most surprisingly it only uses the well-known "SAP Process Integration Runtime" service instance:

cpi-mtad.png

In other words, when the CF deployment service gets this mtar file, it checks whether an "it-rt" service instance with plan "api" and this particular name "process_integration_transport_instance" exists, creates it automatically with the proper "WorkspacePackagesTransport" role if not, and then (I guess) just pushes the design-time content there to be processed (imported).
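
A hypothetical reconstruction of that descriptor (the IDs and the module type are guessed from the screenshot, so do not take it literally):

    # guessed sketch of the generated mtad.yaml, not a verbatim copy
    _schema-version: "3.2"
    ID: my_package
    version: 1.0.0
    modules:
      - name: my_package-content
        type: com.sap.application.content
        requires:
          - name: process_integration_transport_instance
    resources:
      - name: process_integration_transport_instance
        type: org.cloudfoundry.managed-service
        parameters:
          service: it-rt
          service-plan: api
          config:
            roles:
              - WorkspacePackagesTransport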

Also, as we don't actually see any runtime modules here (like modules of type nodejs or java), you can think of this mtar as something similar to an html5 repository content module (where our ui5 apps go in the managed approuter scenario).

And indeed, like I mentioned in the teaser, you can just cf deploy it to any tenant you want (iirc you must have the Space Developer role in CF assigned to you).

About resourceIds

There is one more thing to notice here: resourceIds ARE important (for whatever reason, SAP decided to make life harder for design-time artifacts). In the base64 encoded resources.cnt file there is a relation between the parent (the package, whose resourceId IS exposed via the public API) and the children, whose resourceIds are only visible in the workspace (or via the cas contentResources endpoint), so creating those files locally would be a pain in the back.

So changing those ids by deleting and re-creating iflows might not be a good idea if the "actual/technical id" stays the same (see the UniquenessViolationException case below).

mtar-resource.png mtar-relations.png
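
To make that parent/child relation more concrete, here is a purely illustrative mock of what the decoded resources.cnt could look like (all field names and ids are guessed, not taken from a real export):

    {
      "resources": [
        { "resourceId": "p-1111", "name": "MyPackage", "type": "package" },
        { "resourceId": "c-2222", "name": "Basics_Exception_Subprocess", "type": "iflow" }
      ],
      "relations": [
        { "parentId": "p-1111", "childId": "c-2222" }
      ]
    }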

 

What are the options though?

Well, I guess I figured out an approach that I am about to try applying on my current project.

Since I can only have default values in my mtar, those MUST already be configured for PROD.

Meaning, at some point, when I am ready to transport stuff to PROD, I might want to rezip the iflow to get rid of those "Dude credentials" (incorrect or obsolete externalised values) and then properly configure the default values in that Externalisation UI we have in CPI.

Afterwards I can generate the mtar file myself and deploy it the way I want (CI/CD cf deploy, manual CAS UI, semi-manual TMS).

Some things to consider:

  1. As soon as someone configures any parameter in the PROD CPI UI, it will take precedence over the default ones in the mtar.
  2. I still need to manually deploy the artifacts one by one in PROD (yeah... this is so-very-low-code, Dude).
  3. If I rezip (or just manually delete/recreate) any iflow, it will get a new resourceId, so I will get a UniquenessViolationException error in the PROD system (which I will show in the demo; also this snote says something about it).

BTW, you can even generate an mtar for pre-packaged content (both via the CPI UI and programmatically), but if you transport it like that, according to this note it will become marked as modified (meaning chargeable by SAP).

 

Stuff implemented in capic tool

In the tool, I initially developed a "Transport" dialog where you would mass-apply the parameters from a local prop file (DEV, QA, PROD) to prepare the mtar and then revert everything back.

But after I discovered it would not work like that, and that the only option I have is to deal with default values, I decided that the UI/UX could still stay somewhat the same.

Except for the fact that we are NOT going to have the PROD prop file - instead, it is going to be that parameters.prop (I refer to it as the "defaults" target), so that you can compare the parameters to the "current" ones.

And also, in case we have DEV/QA in one tenant, it could still be useful, so that we don't have to keep copies of the same iflows just for testing.

Maybe later I will also add some autocommit/tag feature for the mtar so that CI/CD hooks could be set up for deployment on push to remote (meaning the mtar is going to be part of the repo rather than being built with mbt in the pipeline).

Basically it looks like this, but I will show how it all works in the demo.

mtar-qa.png mtar-defaults.png

Currently there is no UI in the capic setup app to configure the cas service, but if you got this far, it should be rather trivial to get yourself an instance of it with the "application" plan, then create a service key and set the parameters in .cdsrc.json next to "cpi" and "iflow" (don't forget "/oauth/token" though).

cas-cdsrc.png
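
If the screenshot is hard to read, the entry is conceptually something like this (the exact key names of capic's config are my guess; the values come from the CAS service key):

    {
      "cpi":   { "...": "existing config" },
      "iflow": { "...": "existing config" },
      "cas": {
        "url": "<url from the service key>",
        "tokenurl": "<url from the service key>/oauth/token",
        "clientid": "<clientid from the service key>",
        "clientsecret": "<clientsecret from the service key>"
      }
    }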

Demo part

We are going to play with our old friend Basics_Exception_Subprocess:

  • We will rezip it together with Common_Generic_Receiver and basics_scripts and make export_v1.mtar
  • Then we will change the End event to Error and make export_v2.mtar
  • Then we will rezip it again to revert to End (and thus get a new resourceId) and make export_rz.mtar
  • After that we will delete the package and try deploying the mtars (export_v1 -> export_rz -> export_v2) via cf deploy and the CAS UI (as if it were PROD) and see what happens

I apologise, as it is not the best demo I could have done, but this part took too much time to make, so what we have here is the first and only take, recorded rather late at night.

00:00 Intro and scope of demo

02:27 Setup and prerequisites

Part1 Prepare mtars for "prod"

04:19 Prepare and deploy artifacts

05:51 Changes in Basics_Exception_Subprocess iflow - externalized odata $format parameter // this parameter will act as system-specific setting like urls for dev/prod

07:37 About Transport dialog, parameters and mtar generation

10:09 generate export_v1.mtar with End event

11:41 Change iflow to Error event

14:14 Test and generate export_v2.mtar

15:03 Introduce a resourceId conflict by rezipping the iflow with End event again from git

16:03 Forget I pressed rezip and spend 2 minutes figuring out what happened

17:34 Deploy v1.0.0 and apply DEV params

22:24 Test json and generate export_rz.mtar

Part2 Deploy mtars to "prod"

23:28 Delete package (only design-time) and explain motivation for using mtars

26:32 Actual cf deploy of export_v1.mtar (but have old runtime artifact still running in "prod")

28:57 Very-low-code manual deployment of prod artifacts

30:50 I messed up and wasted 2 minutes: the package Version attribute actually was there in the v2 resources, but it was 1.0.1 rather than the 1.1.0 I failed to find 🙂

32:45 Test v1 and try deploying export_rz.mtar that fails (also show cf mta-ops and cf deploy abort)

35:18 Compare resourceIds between mtar versions

36:15 Deploy export_v2.mtar via cf deploy and then deploy iflow (deploy is used 4 times in this sentence)

40:35 Wasted 3 more minutes

43:04 Delete package again and deploy all three mtars via CAS UI

46:59 Outro
