Technology Blogs by Members
This blog is part of a blog series; you can find the first page here. This is the agenda we're following:


Documentation is a crucial task for most people working in IT. Whether you write user manuals, functional specifications, technical specifications, architecture diagrams, or code comments, you should be able to transmit your knowledge in a concise, simple way so that your audience can absorb it (this blog is likely not concise enough).

Again, my opinion is very debatable, but the way I see it, your interface documentation should answer the following questions: who, what, how, but also why. In the context of integration, while the first three questions have some potential for automation, the last one, I think, should be manual input.


Who requested the interface? It is always good to have contacts identified for the requester of the interface, the integration developer, and your go-to persons on the source and target sides; basically, everyone involved in it.

If you or someone new to the interface has to revisit it later on, at least there are identified people who can be contacted for additional information. Since we have the link from the CPI package to the Jira user story, and we document the contact persons there, there are ways to automate the generation of this part of the document.
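As a rough illustration of that automation, the sketch below builds the "Who" chapter from data read out of the linked Jira user story. The field names (`requester`, `developer`, `counterparts`) are placeholders, not real Jira fields; map them to whatever custom fields your project uses.

```python
# Sketch: build the "Who" chapter of the interface document from the Jira
# user story linked in the CPI package. Field names are illustrative;
# adapt them to your Jira custom fields.

def build_who_section(story: dict) -> str:
    lines = ["## Who", ""]
    lines.append(f"- Requester: {story['requester']}")
    lines.append(f"- Integration developer: {story['developer']}")
    for side, person in story.get("counterparts", {}).items():
        lines.append(f"- {side} go-to person: {person}")
    return "\n".join(lines)

story = {
    "requester": "Jane Doe",
    "developer": "John Smith",
    "counterparts": {"Source": "Alice", "Target": "Bob"},
}
print(build_who_section(story))
```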



With regard to interfaces, the what is, in my eyes, the scope of your interface: what are the boundaries of what you're implementing, and which use cases will it cover? It would be nice if AI could read your iflow, interpret what it is doing, and describe it to non-technical people. I tried providing the iflow source to ChatGPT and asked what it was doing. Example below; it's not incorrect, but it's too technical. There's definitely room for improvement.

BPMN explanation by chat gpt


How did you implement the scope you committed to? Having your iflow screenshot, the list of external parameters, and the connections between the components in your iflow would allow an integration developer to understand how you accomplished the scope you promised to achieve.


This is, for me, the most important question (when applicable). Most developers can open an iflow and read what is there and how it was built, but why it was built that way is usually not expressed anywhere. Even when documenting code, I tend to see the same pattern repeated: the developer explains, line by line, what the code is doing in words that non-technical people can read and understand. I don't see much value in that. I see much more value in documenting a complex chunk of code in simple words, along with why you're doing it that way, or why you need it.

Understanding why you used one adapter instead of another, why you chose one integration pattern over another, why you consider only deltas or only full snapshots of data, why you have an extra facade layer, and so on, is in my eyes the missing knowledge that would allow a new developer to understand the history and context and successfully take ownership of an interface.

Even for your architecture, if you do the exercise of asking yourself why you have each layer/component/connection, and you can't explain it convincingly to yourself and to others, maybe you should question whether you really need that part.

Unfortunately, on Cloud Integration, as far as I know, there are not many places where you can add this information. You can comment in Groovy code, in the package description, and in the iflow description, but not at the component level.

Technical Documentation

All our interfaces rely on a document template that we need to fill in. We link it on the documentation tab of the package, always with the same name, so that our automations can check whether documentation is missing and report it if so. Some chapters of this documentation are whats and hows (iflow screenshot, external parameters), while others are whys (business motivation, process context). I hate doing repetitive work, so when I realized I would need to go through all the iflows we developed, take a screenshot of each, and create a table with all the external parameters, I lost the motivation to do it manually. So I had the idea to automate it as much as possible.
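The "report missing documentation" check can be reduced to a small piece of logic once you have the package metadata in hand. The sketch below works on a simplified structure; the dict layout mimics what you would assemble from the Cloud Integration APIs, and the field names are illustrative, not the real payload.

```python
# Sketch: report CPI packages whose documentation tab is missing the
# conventionally named document. The package/document dict layout is
# illustrative, not the real Cloud Integration API payload.

DOC_NAME = "Technical Documentation"  # the conventional file name we always use

def packages_missing_docs(packages: list) -> list:
    missing = []
    for pkg in packages:
        doc_names = {d["Name"] for d in pkg.get("Documents", [])}
        if DOC_NAME not in doc_names:
            missing.append(pkg["Id"])
    return missing

packages = [
    {"Id": "PKG_Orders", "Documents": [{"Name": "Technical Documentation"}]},
    {"Id": "PKG_Invoices", "Documents": []},
]
print(packages_missing_docs(packages))  # ['PKG_Invoices']
```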

Documentation tasks diagram

1. and 2. IFlow screenshot

To my knowledge, there's no official API to download your iflow screenshot. I found an interesting blog from @r_herrmann2 about using a Node.js library to generate a screenshot of the BPMN representation of the iflow. I implemented it on our Jenkins platform as a new pipeline that extracts, on a daily basis, the screenshots for all the iflows. The example below was taken from the bpmn.io site.

Example of a generation from bpmn io site

Unfortunately, the resulting screenshot is, in my opinion, an ugly black-and-white image that looks like a very old, outdated iflow, with a terrible look and feel.

I've worked with WebDriver in the past, and since there was no official API to grab screenshots of the iflows, I created a script that, on a daily basis, opens CPI DEV with a technical user, navigates through all the packages and all the iflows, takes a screenshot of each, and saves it to a directory. One detail you might miss: once headless Chrome is running your automation and you're on the iflow designer, your script should click the zoom-to-fit icon, so that the iflow is fitted to the window and all iflow components end up inside the screenshot. An example of such an image is below:
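A minimal sketch of that screenshot step with Selenium is shown below. The zoom-to-fit selector (`#zoom-fit`) and the designer URL are placeholders, not the real CPI element ids; inspect your tenant's iflow designer to find the actual ones. The import is guarded so the pure path helper can be reused (and tested) without a browser installed.

```python
# Sketch of the daily screenshot run. The CSS selector and URL are
# placeholders; inspect your tenant's iflow designer for the real ones.
from pathlib import Path

try:
    from selenium import webdriver
    from selenium.webdriver.common.by import By
except ImportError:  # allow using the pure helper without selenium installed
    webdriver = None

def screenshot_path(base_dir: str, package_id: str, iflow_id: str) -> Path:
    """Stable file location, so the HTTP server can serve predictable URLs."""
    return Path(base_dir) / package_id / f"{iflow_id}.png"

def capture_iflow(driver, designer_url: str, target: Path) -> None:
    driver.get(designer_url)
    # Click zoom-to-fit so every component is inside the visible canvas;
    # '#zoom-fit' is an assumed selector, not the real CPI element id.
    driver.find_element(By.CSS_SELECTOR, "#zoom-fit").click()
    target.parent.mkdir(parents=True, exist_ok=True)
    driver.save_screenshot(str(target))

if webdriver is not None:
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    # driver = webdriver.Chrome(options=options)
    # capture_iflow(driver, "https://<tenant>/itspaces/...",
    #               screenshot_path("shots", "PKG_Orders", "OrderCreate"))
```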

Automatically captured image of an iflow from Iflow designer

Then I created a simple HTTP server over that directory and published it as a Windows service, so that the screenshots are available as URLs, with the idea of including them in Git markdown later on.
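One simple way to serve such a directory (shown here as a command/configuration fragment; port and paths are examples) is Python's built-in HTTP server, which on Windows can then be wrapped as a service with a tool such as NSSM:

```shell
# Serve the screenshot directory over HTTP; the port and path are examples.
python -m http.server 8080 --directory C:\cpi\screenshots
```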

Value added

You have an HTTP server, automatically updated daily, where you can easily navigate through the screenshot of each iflow. You can use these images for documentation generation.

External Parameters

In the meantime, a colleague enhanced our pipeline so that it now generates the technical document (docx) using the screenshots downloaded earlier, as well as creating the list of external parameters for each iflow. I think he used Apache POI for Java (which is a very nice library), but there are also ways to do it with Node.js. Nice job, Flavio 🙂
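The parameter-listing part of that step is essentially a parse-and-format job. As a hedged sketch, the code below turns a Java-properties-style externalized-parameters file (one `name=value` per line, as found in exported iflows; adjust if your export format differs) into a markdown table for the document:

```python
# Sketch: turn an iflow's externalized parameters into a markdown table.
# Assumes a java-properties style file (name=value per line); adjust the
# parsing if your iflow export stores parameters differently.

def parameters_table(prop_text: str) -> str:
    rows = ["| Parameter | Default value |", "| --- | --- |"]
    for line in prop_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, value = line.partition("=")
        rows.append(f"| {name.strip()} | {value.strip()} |")
    return "\n".join(rows)

sample = """# externalized parameters
Receiver_Address=https://example.com/api
PageSize=100
"""
print(parameters_table(sample))
```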

Value added

Generate parts of the official documentation in an automated/always synced fashion

3. 4. and 5. Message Mapping Documentation

One part of your technical document is most likely the mapping documentation. There's an option in the UI to export it as Excel; however, it's somewhat limited, since:

  • it doesn't identify the source and target data types involved in each mapping line

  • it doesn't have keyword highlighting, so complex scripting logic is hard to read

  • if you copy it directly into your Word document, you still need to format it according to your template

Therefore, I created a Java application that parses the XML mapping file and generates an HTML page with syntax highlighting and indentation for functions, to make it easier to read. It reads the data types from the associated XSDs, so it only supports XML mappings at the moment. It also supports different "contexts" inside each mapping line. At the end, it makes the result available via the HTTP server as well. Example below:
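The highlighting step itself boils down to escaping the mapping expression and wrapping known function names in tags. Here is a minimal sketch (in Python rather than the Java of the actual tool); the keyword list and the CSS class are illustrative:

```python
# Sketch of the keyword-highlighting step: escape the mapping expression and
# wrap known function names in <span> tags. Keyword list and CSS class are
# illustrative; the real tool also reads data types from the XSDs.
import html
import re

KEYWORDS = ["concat", "substring", "ifElse", "exists"]

def highlight(expression: str) -> str:
    escaped = html.escape(expression)
    pattern = r"\b(" + "|".join(map(re.escape, KEYWORDS)) + r")\b"
    return re.sub(pattern, r'<span class="kw">\1</span>', escaped)

print(highlight('concat(substring($src, 0, 3), "-suffix")'))
```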

Message mapping documentation in html format with syntax highlighting

Value added

Without developer intervention, the automation creates HTML documentation for each mapping inside each iflow whenever the Jenkins pipeline for your package runs.

6. and 7. Markdown Generation

Since we now have an automatically generated Git repository for each CPI package source, the automatically generated README file was rather empty, so I had in mind to populate it with the information we already have when running the pipeline. I was able to generate a markdown file for each package containing the package description, tags, the list of all its iflows, the description of each iflow, the screenshot of each iflow, and the message mapping documentation (above) if the iflow has any. This is automatically updated on a daily basis, when the code for that package is synchronized. In the end, it's a way to have always up-to-date documentation for your interfaces. Example below:
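Assembling such a README is plain string building once the metadata is collected. The sketch below shows one possible shape; the dict layout and the screenshot URL scheme are assumptions for illustration:

```python
# Sketch: assemble a package README from metadata the pipeline already has.
# The dict layout and the screenshot URL scheme are illustrative.

def build_readme(pkg: dict) -> str:
    out = [f"# {pkg['name']}", "", pkg["description"], ""]
    if pkg.get("tags"):
        out += ["Tags: " + ", ".join(pkg["tags"]), ""]
    for iflow in pkg["iflows"]:
        out += [f"## {iflow['name']}", "", iflow["description"], ""]
        out += [f"![{iflow['name']}]({iflow['screenshot_url']})", ""]
    return "\n".join(out)

pkg = {
    "name": "PKG_Orders",
    "description": "Order integrations.",
    "tags": ["orders", "s4"],
    "iflows": [
        {"name": "OrderCreate",
         "description": "Creates orders in the target system.",
         "screenshot_url": "http://docs-server:8080/PKG_Orders/OrderCreate.png"},
    ],
}
print(build_readme(pkg))
```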

Example of a README file on your git generated repo

Example of the contents of a README file (continuation)

Value added

Without developer intervention, there's a Git repo created for each developer's package; if you access that repo, you see an up-to-date README file containing an overview of the list of iflows, screenshots, and extra info, totally self-sustained.


Next steps

I'm thinking about a way to include the "whys" required by our template inside the package of the interface, either by following some template convention in the package description or by other means, so that we can fully generate the documentation and keep it as closely coupled with the CPI package as possible. The motivation is that if you transport the package, or download it, it would be nice to have all that documentation as part of the unit you transported.
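One possible convention, sketched below, is to embed "why" entries in the package description as lines starting with a marker, and extract them at documentation-generation time. The `WHY:` marker is my own assumption for illustration, not an existing CPI feature:

```python
# Sketch of one possible convention: embed "why" entries in the package
# description as lines starting with "WHY:", then extract them when
# generating the documentation. The marker is an assumption, not a CPI feature.

def extract_whys(description: str) -> list:
    return [line[len("WHY:"):].strip()
            for line in description.splitlines()
            if line.startswith("WHY:")]

description = """Order integrations package.
WHY: SFTP adapter chosen because the legacy system cannot call APIs.
WHY: Delta loads only; full snapshots exceed the nightly window.
"""
print(extract_whys(description))
```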


In this blog, I've introduced documentation generation for CPI packages. We automated message mapping documentation and iflow screenshots, and we read meta-information from the package to include in the official documentation. This is a way to increase productivity and keep documentation always up to date.

I would also invite you to share feedback or thoughts in the comments section. I'm sure there are still improvements or ideas for new rules that would benefit the whole community. You can always get more information about Cloud Integration on the topic page for the product.
