
Introduction


Migrating from an older SAP Commerce version running on-premise to the SAP Commerce Cloud (CCV2) platform can be challenging on several levels, and there are already some great guidelines out there that help you sidestep many of the pitfalls waiting in the process.

In this article I would like to share some additional thoughts and observations that my team and I have gathered while working with CCV2 recently.

General approach to solution architecture


One of the characteristics of platforms such as SAP Commerce Cloud is that they come with a lot of guardrails, patterns and mechanisms that are used broadly across the platform. Some of them are more elegant and efficient than others, and in some cases it may be appealing to step off the path laid out by the SAP Commerce platform creators. While it is generally desirable to look for the best solution to a given problem, doing so may sometimes come at a cost.

As a more down-to-earth example, let's look at the possible implementations of master data replication to SAP Commerce Cloud from a non-SAP ERP. In such a scenario, apart from the ODATA APIs and Hot Folders, there are other options that may in some cases offer better performance characteristics, lower latency or lower resource consumption. It may sometimes be tempting to create your own non-ODATA integration API or to use the database directly for replicating large amounts of data.

However, when you think about deploying such "non-SAP standard" solutions to the cloud, other quality characteristics become more and more prominent: maintainability, observability and interoperability:

  • Maintainability may suffer because development and support teams must learn to operate, troubleshoot and extend another framework/library/protocol.

  • Observability may suffer because "non-SAP standard" solutions may not be covered by the standard observability tools, and in the cloud you have very limited flexibility to modify low-level workload configuration, for example to add a new monitoring agent.

  • Interoperability may suffer because such a solution limits your ability to use the predefined integration packages, which rely heavily on ODATA inbound interfaces.


Additionally, because the infrastructure (networking) layer is beyond your control, it may even be impossible to use some communication protocols in the cloud.

That is why, when deploying an SAP Commerce Cloud based solution, we tend to use as much out-of-the-box functionality as possible, even if it comes with some complexity or efficiency penalty.

Getting back to our master data replication example:

  • We use the ODATA interfaces exposed by the SAP Commerce Integration API Module wherever possible (a minimal sketch of such an integration object definition follows this list).

  • Alternatively, when large data sets are replicated (e.g. very large numbers of price rows, binary data, etc.), we use Cloud Hot Folders with additional customizations where required.
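As an illustration, here is a minimal sketch of how an inbound integration object for product data is typically declared for the Integration API Module. The object name and the mapped attributes below are illustrative; a real project usually maps many more attributes.

```
INSERT_UPDATE IntegrationObject; code[unique = true]
; InboundProduct

INSERT_UPDATE IntegrationObjectItem; integrationObject(code)[unique = true]; code[unique = true]; type(code); root[default = false]
; InboundProduct ; Product        ; Product        ; true
; InboundProduct ; CatalogVersion ; CatalogVersion ;
; InboundProduct ; Catalog        ; Catalog        ;

INSERT_UPDATE IntegrationObjectItemAttribute; integrationObjectItem(integrationObject(code), code)[unique = true]; attributeName[unique = true]; attributeDescriptor(enclosingType(code), qualifier); returnIntegrationObjectItem(integrationObject(code), code); unique[default = false]
; InboundProduct:Product        ; code           ; Product:code           ;                               ; true
; InboundProduct:Product        ; name           ; Product:name           ;                               ;
; InboundProduct:Product        ; catalogVersion ; Product:catalogVersion ; InboundProduct:CatalogVersion ;
; InboundProduct:CatalogVersion ; version        ; CatalogVersion:version ;                               ; true
; InboundProduct:CatalogVersion ; catalog        ; CatalogVersion:catalog ; InboundProduct:Catalog        ;
; InboundProduct:Catalog        ; id             ; Catalog:id             ;                               ; true
```

Once such an object is imported, the ERP can push payloads to the generated ODATA endpoint (by default under /odata2webservices/InboundProduct), and the requests show up in the standard Integration Module monitoring described later in this article.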


For a more in-depth look at SAP's integration strategy, have a look at this blog post: https://blogs.sap.com/2022/09/30/exploring-saps-integration-strategy-free-ebook-available-now/

Delivery pipeline


CCV2 comes with its own builder that can be used for deploying code to cloud environments, but it lacks support for more advanced CI/CD features. Therefore, every team aiming to keep better control over its delivery process must set up its own CI/CD tool.

There are two main goals that we aim to achieve with our own tooling:

  • Using your own CI/CD tool allows you to take advantage of automated testing and code analysis to make sure that the code delivered to the cloud passes your quality gates.

  • Using your own CI/CD tool allows you to manage the deployment process across environments more easily, especially when, besides the cloud environments, you have your own local environments for automated testing. For example, teams need to be able to easily deploy packages to a selected environment (be it local or cloud) and receive status notifications.


In one of our latest projects we implemented the following pipeline:

  1. The CI process is based on feature branches and merge requests/pull requests. One of the requirements for an MR/PR to be accepted is that the automated tests (backend, frontend, initialization, you name it) and static analysis pass. Feature branches are tested using isolated, ephemeral environments.

  2. After the MR/PR is accepted and merged to the main branch, the tests are repeated, this time against the target database engine, to ensure that the application does not contain any database-specific logic and will actually run correctly in the target environment. Once the changes are integrated and nothing is broken, they are tagged. This is also a good moment to trigger a build on CCV2 so that a package is already waiting for deployment later.

  3. The tagged code is then deployed to local environments or to the cloud via the Commerce Cloud build and deployment API (using the pre-built package).


My company mostly uses GitLab (https://gitlab.com/) as its CI/CD environment, so it was a natural choice for us, but this kind of pipeline can be built with any decent CI/CD tool. Building it on a platform the development team was already familiar with helped bring the team up to speed faster. Managing cloud deployments in particular became simpler, because our GitLab instance is already integrated with our internal SSO and we no longer need to log in to the Cloud Portal (especially with MFA) every time we want to run a deployment. A skeleton of such a pipeline is sketched below.
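The following .gitlab-ci.yml sketch maps the three steps onto pipeline stages. The job names and the scripts they call are placeholders for our internal tooling, not an SAP or GitLab standard:

```yaml
stages:
  - verify        # step 1: unit tests + static analysis on merge requests
  - integration   # step 2: tests on the target DB engine, tagging, CCV2 pre-build
  - deploy        # step 3: deployment of the tagged, pre-built package

unit-tests-and-analysis:
  stage: verify
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    - ./ci/run-unit-tests.sh          # backend, frontend and initialization tests
    - ./ci/run-static-analysis.sh

integration-on-target-db:
  stage: integration
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
  script:
    - ./ci/run-tests-target-db.sh     # same tests, but against the target database engine
    - ./ci/tag-release.sh
    - ./ci/trigger-ccv2-build.sh      # pre-build the deployment package on CCV2

deploy-package:
  stage: deploy
  when: manual                        # deployments are triggered on demand per environment
  script:
    - ./ci/deploy.sh "$TARGET_ENV"    # local environment or CCV2 via the deployment API
```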

See SAP help for details regarding Commerce Cloud API: https://help.sap.com/docs/SAP_COMMERCE_CLOUD_PUBLIC_CLOUD/452dcbb0e00f47e88a69cdaeb87a925d/66abfe678...
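To give an idea of what triggering a cloud deployment from a pipeline job can look like, here is a minimal sketch in plain Java. Treat the host, paths and payload fields as assumptions to be verified against the Commerce Cloud API documentation linked above; the environment variables are our own convention.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Requires Java 15+ (text blocks). Triggers a deployment of an already built
// package to a chosen CCV2 environment.
public class Ccv2DeployTrigger {

    public static void main(String[] args) throws Exception {
        String subscription = System.getenv("CCV2_SUBSCRIPTION_CODE"); // subscription id from Cloud Portal
        String apiToken = System.getenv("CCV2_API_TOKEN");             // API token generated in Cloud Portal
        String buildCode = args[0];                                    // the pre-built package from step 2
        String environment = args[1];                                  // e.g. "d1"

        String payload = """
                {
                  "buildCode": "%s",
                  "environmentCode": "%s",
                  "databaseUpdateMode": "UPDATE",
                  "strategy": "ROLLING_UPDATE"
                }""".formatted(buildCode, environment);

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://portalrotapi.hana.ondemand.com/v2/subscriptions/"
                        + subscription + "/deployments"))
                .header("Authorization", "Bearer " + apiToken)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The same API offers endpoints for creating builds and polling their progress, which is what the pre-build step of the pipeline relies on.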

Configuration management


Application configuration


Properly managing application configuration across environments may seem like a no-brainer, but it can backfire on you severely when done wrong.

First of all, we find it very important to keep the common, immutable application settings in a single location in the code repository and reuse this location in all environments: local development, test and cloud. In one of our latest projects we used a structure inspired by the work published by Markus Perndorfer (https://github.com/sap-commerce-tools/ccv2-project-template). We basically have a standard Hybris local.properties where we fix all of the immutable, shared settings. Everything mutable is filled with a <CHANGE_ME> placeholder that prevents the platform from starting if any of the required environment-specific settings is missing. Then, for each environment (including local developer environments), we keep additional properties files that override the required parameters.
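A minimal sketch of the pattern (the property keys below are purely illustrative):

```properties
# local.properties - shared, immutable settings committed to the repository
build.parallel=true

# environment-specific values are deliberately left unresolved, so the platform
# refuses to start if an environment forgets to override them
erp.integration.endpoint.url=<CHANGE_ME>
erp.integration.api.key=<CHANGE_ME>
```

```properties
# per-environment override file (one per environment, including local developer machines)
erp.integration.endpoint.url=https://erp-dev.example.internal/api
erp.integration.api.key=dev-only-secret
```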

Another issue we struggled with for quite some time was keeping the configuration of all nodes in a cluster consistent and easily manageable. To tackle this problem we developed a custom extension that keeps configuration parameters in the database and lets us manage them conveniently in Backoffice. This way we keep the parameters in one place and can modify them at runtime without going through every HAC instance in the cluster.
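The core idea is roughly the one sketched below: a cron job running on every node reads the parameters stored in the database and applies them to that node's runtime configuration. The DbConfigParameter item type and all class names are our own, hypothetical constructs; only ConfigurationService and the cron job plumbing are standard platform APIs.

```java
import de.hybris.platform.cronjob.enums.CronJobResult;
import de.hybris.platform.cronjob.enums.CronJobStatus;
import de.hybris.platform.cronjob.model.CronJobModel;
import de.hybris.platform.servicelayer.config.ConfigurationService;
import de.hybris.platform.servicelayer.cronjob.AbstractJobPerformable;
import de.hybris.platform.servicelayer.cronjob.PerformResult;
import de.hybris.platform.servicelayer.search.SearchResult;

public class DbConfigSyncJob extends AbstractJobPerformable<CronJobModel> {

    private ConfigurationService configurationService; // injected via Spring

    @Override
    public PerformResult perform(final CronJobModel cronJob) {
        // DbConfigParameter is our custom item type holding the key/value pairs
        // maintained in Backoffice
        final SearchResult<DbConfigParameterModel> parameters =
                flexibleSearchService.search("SELECT {pk} FROM {DbConfigParameter}");

        for (final DbConfigParameterModel parameter : parameters.getResult()) {
            // apply the value to this node's runtime configuration
            configurationService.getConfiguration()
                    .setProperty(parameter.getKey(), parameter.getValue());
        }
        return new PerformResult(CronJobResult.SUCCESS, CronJobStatus.FINISHED);
    }

    public void setConfigurationService(final ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }
}
```

Scheduled as a frequent cron job (or triggered after a save in Backoffice), this makes a value changed once in Backoffice reach every cluster node without touching HAC on any of them.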

Init/update


To keep all environments configured in the same way, we use a single init/update configuration file during deployments. You can store such a file in your repository and then use it when initializing or updating the platform:
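A rough illustration of such a file is shown below; the exact keys correspond to the options you can dump from the hAC update page, so treat the ones here as placeholders. Locally, such a file can be applied with ant updatesystem -DconfigFile=<path-to-file>.

```json
{
  "initmethod": "update",
  "essential": "true",
  "localizetypes": "true",
  "mycoreextension_sample": "true",
  "mystorefront_importCoreData": "yes"
}
```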

Translations


An important and quite hard to tackle aspect of content management is translations. There are two main UI-related areas that need to be translated when adding support for a new language: basic storefront UI components (e.g. button texts, navigation elements) and CMS content. I deliberately ignore master data here because in most cases it is translated elsewhere.

The hard part is that these translations are very difficult to externalize in a form that an external, non-technical translator can work with. In Spartacus, translations for UI elements are bundled with the app in the form of a multitude of .ts files, while CMS content is stored as ImpEx files. Both types of files are rather hard for a non-technical person to understand, so after struggling a bit with the out-of-the-box approach we came up with tools that convert the translation files stored alongside the code to XLS and vice versa. This way the translator works with a file that is very easy to handle; an example structure is shown below.
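To illustrate the export direction, here is a minimal sketch using Apache POI, assuming the translation files have already been flattened into key-to-value maps per locale (the class and the file layout are our own, not part of Spartacus or SAP Commerce):

```java
import java.io.FileOutputStream;
import java.util.List;
import java.util.Map;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;

public class TranslationXlsExporter {

    // translations: translation key -> (locale -> translated value)
    public void export(Map<String, Map<String, String>> translations,
                       List<String> locales, String targetFile) throws Exception {
        try (XSSFWorkbook workbook = new XSSFWorkbook();
             FileOutputStream out = new FileOutputStream(targetFile)) {
            Sheet sheet = workbook.createSheet("translations");

            // header row: key | en | de | ...
            Row header = sheet.createRow(0);
            header.createCell(0).setCellValue("key");
            for (int i = 0; i < locales.size(); i++) {
                header.createCell(i + 1).setCellValue(locales.get(i));
            }

            // one row per translation key, one column per locale
            int rowIndex = 1;
            for (Map.Entry<String, Map<String, String>> entry : translations.entrySet()) {
                Row row = sheet.createRow(rowIndex++);
                row.createCell(0).setCellValue(entry.getKey());
                for (int i = 0; i < locales.size(); i++) {
                    row.createCell(i + 1)
                       .setCellValue(entry.getValue().getOrDefault(locales.get(i), ""));
                }
            }
            workbook.write(out);
        }
    }
}
```

The import direction does the reverse: it reads the XLS back and regenerates the .ts and ImpEx files next to the code.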


(Image: Translation XLS file structure)



Cloud environment preparation and tuning


All of the above topics are not strictly related to the cloud configuration. Now we are going to scratch the surface of CCV2 a bit.

Endpoint configuration


One of our main concerns when deploying to the cloud was proper and secure endpoint configuration. We took the following approach:

  • Customer facing endpoints (JS storefront, OCC API) are served via public endpoints with optional IP filtering.

  • Backend administration endpoints (Backoffice, Solr, ...) are served using NAT endpoints, which is a more secure way of exposing SAP Commerce services. This of course requires setting up a VPN connection to the cloud from the networks the users are connected to.


There is, however, one thing worth remembering: when configuring access from an on-premise network to the cloud, you cannot create NAT rules using IP ranges, so if you want to give multiple workstations access to an environment, you have to configure a rule for each one of them. The solution is to configure NAT on your network's side. You can define several IP addresses for different host ranges in order to define more granular rules, for example:

  1. One IP for git - accessible from Builder.

  2. Second IP for developer workstations - allowed to access NAT endpoints of d1 environment.



(Image: NAT rules configuration)



Kubernetes workloads tuning


Even though the infrastructure layer is out of your control, that does not mean you can forget about it completely. There are still cases where Kubernetes expertise is required. In one of our recent projects, just after deploying a package to a new environment, we started observing that the API service was frequently unavailable. After analyzing the behavior of the Kubernetes workloads in Dynatrace, we saw that Kubernetes was regularly killing our API pods for exceeding their memory limit.

After some more analysis (including, of course, the suspicion of a memory leak) we came to the conclusion that no memory leak was actually happening - the ratio of the Kubernetes memory limit to the JVM heap size was simply too low. As we put some load on the pods, the JVMs started to use more native memory, and that triggered the OOMKill by Kubernetes even though the heap was nowhere near full. After contacting support it turned out that the SAP operations team uses a set of presets for this ratio, and the default one simply did not suit our scenario. After some discussion we managed to find the sweet spot and apply it to all environments.
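A quick, illustrative way to check how much headroom is left is to compare the JVM's maximum heap with the container's memory limit from inside the pod. The cgroup paths below depend on whether the node uses cgroup v1 or v2, so treat them as assumptions to verify:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class HeapVsContainerLimit {

    public static void main(String[] args) throws Exception {
        long heapMax = Runtime.getRuntime().maxMemory();
        long containerLimit = readFirstExisting(
                "/sys/fs/cgroup/memory.max",                      // cgroup v2
                "/sys/fs/cgroup/memory/memory.limit_in_bytes");   // cgroup v1

        System.out.printf("JVM max heap:    %d MiB%n", heapMax / (1024 * 1024));
        System.out.printf("Container limit: %d MiB%n", containerLimit / (1024 * 1024));
        System.out.printf("Heap / limit:    %.2f%n", (double) heapMax / containerLimit);
    }

    private static long readFirstExisting(String... paths) throws Exception {
        for (String p : paths) {
            Path path = Path.of(p);
            if (Files.exists(path)) {
                String value = Files.readString(path).trim();
                if (!"max".equals(value)) {           // cgroup v2 reports "max" when unlimited
                    return Long.parseLong(value);
                }
            }
        }
        return Long.MAX_VALUE;                         // no limit found
    }
}
```

Everything outside the heap (metaspace, thread stacks, direct buffers, the JIT) has to fit into the difference, which is exactly the headroom that was too small in our case.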

To make a long story short: it is important to have at least one developer on the team who knows Kubernetes well. Problems such as this one are hard to grasp unless you know how the platform works and which metrics to look at in Dynatrace.

Monitoring


When it comes to observing the behavior of SAP Commerce Cloud, there are three main tools that can be used:

  1. Dynatrace APM

  2. OpenSearch Dashboards - where CCV2 sends all the logs

  3. Backoffice


Each of them serves a different purpose, and put together these tools provide a lot of insight into how the system works.

More information can be found here:

Using OpenSearch Dashboards for analyzing data


OpenSearch Dashboards (OSD) is a powerful tool that can be used not only for issue analysis and browsing through the logs; it can also be used for analyzing data such as user behavior. For example, you can use it to check who logs in to the system and when, or what users search for most frequently. There is a lot of information in the logs, and OSD is really helpful at getting it out.

For example in our last project we used custom scripted fields to extract the information about user logins based on Spring events that appeared in the logs.


(Image: Scripted fields in OpenSearch Dashboards)


Then we were able to use that field in visualizations and dashboards in OSD.

Monitoring integration interfaces


As I mentioned before, we tend to use out-of-the-box features for integration purposes, partially because they come with built-in monitoring capabilities that can be used right away.

For example, Cloud Hot Folders come with AOP-based monitoring functionality that allows you to check the status of the jobs on the Backoffice Cloudcommons screen. When you customize Cloud Hot Folders it is vital to make full use of this - having the right information recorded here helps you pinpoint integration problems faster without resorting to log analysis.

You can read more about it here: https://help.sap.com/docs/SAP_COMMERCE_CLOUD_PUBLIC_CLOUD/403d43bf9c564f5a985913d1fbfbf8d7/00416b3b6...

ODATA communication monitoring is also covered by the Backoffice Integration UI Tool perspective, and it works fine until you start using ODATA batch requests. In this mode each ODATA HTTP request's payload contains multiple entities, e.g. one request can be used to send all variants within a single product family. The catch is that when you look through requests on the Integration Module monitoring screen in Backoffice, you cannot identify which request contained a given product. With batch requests, when you have for example a request with three products A, B, C (in this order), the monitoring tool will show only one request whose integration key is A. This makes it hard to troubleshoot integration issues and needs to be taken care of if you are using batch requests.

More information about Integration API Module inbound requests monitoring: https://help.sap.com/docs/SAP_COMMERCE_CLOUD_PUBLIC_CLOUD/bad9b0b66bac476f8a4a5c4a08e4ab6b/485b479c3...

Another option for integration monitoring is OpenSearch Dashboards. You can easily put together a saved search query and a visualization that gives a quick summary of integration failures within a time window - here is one simple example:


(Image: Replication request status visualization in OSD)


Additionally, you can define monitors that perform a certain action when a trigger fires; for example, you can set up OSD to send an email notification when a monitor detects a high number of failed ODATA requests, or when a Hot Folder fails to process a file:


(Image: Example of monitor in OSD)


I hope that this article will be valuable for all of you who already have a background in SAP Commerce on-premise and are starting to work with SAP Commerce Cloud.

In case you have any feedback or suggestions, please feel free to comment below or contact me directly.