When cluster ID auto-discovery is enabled with cluster.nodes.autodiscovery=true, each node receives a randomly assigned cluster ID on every startup. Frontend nodes usually have the task engine disabled with task.engine.loadonstartup=false.
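For reference, these are the two properties involved. The values below are only an illustrative sketch of a typical setup with separate frontend and backend aspects, not a recommendation to change your configuration:

    # Cluster IDs are assigned dynamically at startup
    cluster.nodes.autodiscovery=true

    # Typical for frontend/storefront nodes: the task engine does not start,
    # so tasks and CronJobs are never executed there
    task.engine.loadonstartup=false
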
If a nodeId is explicitly set on a CronJob or Job, the assigned nodeId can end up pointing to a node that cannot run tasks or CronJobs. For example, the screenshot below shows a Job with nodeId=13:
[Screenshot: a Job with its nodeId attribute set to 13]
In this example, nodeId=13 corresponds to a frontend node where the task engine is disabled. As a result, the CronJob will not run on this node.
This most often happens in systems that were migrated to SAP Commerce Cloud, where auto-discovery is enabled automatically. In the previous environment, the nodeId may have been explicitly set to a value that was always assigned to a dedicated backend server, for example one with cluster.id=13.
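To get an overview of which Jobs still carry an explicit node assignment, you can first run a read-only query like the following in the HAC SQL console (a sketch that uses the same jobs table and p_nodeid column as the update statement further below):

    SELECT PK, p_nodeid
    FROM jobs
    WHERE p_nodeid IS NOT NULL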

Resolution:

  • When using cluster ID auto-discovery, the nodeId associated with tasks should always be set to null. This ensures that CronJobs are scheduled dynamically based on the available nodes.
    The solution is to delete the problematic CronJob and recreate it. By default, when a new CronJob is created in SAP Commerce Cloud, the nodeId is set to null.
  • However, if there are too many CronJobs or Jobs with an explicitly set nodeId and you want to avoid recreating each one manually, you can use the following SQL query to bulk-update the nodeId for all Jobs where it is not null:

    UPDATE jobs
    SET p_nodeid = NULL
    WHERE p_nodeid IS NOT NULL

Make sure that Commit mode is enabled; otherwise the query will be rolled back and have no effect.

  • Important: After the SQL query has run successfully, the cache must be cleared on each pod. The update was made directly in the database, bypassing the platform cache, so stale nodeId values could otherwise still be served from it. To clear the cache:
  1. Go to the HAC (Administration Console)
  2. Open Monitoring
  3. Open Cache
  4. Click on Clean Cache
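
As an alternative to the direct SQL update, the same bulk change can be made through the service layer, which keeps the platform cache consistent and makes the manual cache cleanup unnecessary. The following is only a minimal Groovy sketch for the HAC scripting console; it assumes the standard flexibleSearchService and modelService beans are available in the script context:

    // Find all Jobs that still carry an explicitly assigned cluster node ID
    def query = new de.hybris.platform.servicelayer.search.FlexibleSearchQuery(
            "SELECT {pk} FROM {Job} WHERE {nodeID} IS NOT NULL")
    def affectedJobs = flexibleSearchService.search(query).result

    // Clear the nodeID through the service layer so the change is propagated to the cache
    affectedJobs.each { job ->
        job.nodeID = null
        modelService.save(job)
    }

    println "Cleared nodeID on ${affectedJobs.size()} job(s)"

If you use this approach, make sure the scripting console is not running in rollback mode, otherwise the changes will not be persisted.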