2023 Apr 27 2:53 PM
Most ABAP changes can be deployed into a running production system with minimal or no impact. There are, however, some operations - mainly DDic changes - that require downtime: table conversions or index changes, to name the most obvious, can take several hours to deploy.
In a high-availability setting such maintenance windows are usually not acceptable. For its own changes, SAP has devised mechanisms such as ZDO or its little sibling nZDM, which alleviate long downtimes in the event of system upgrades and the like.
But what about custom development? Are there rules, tools, or best practices that support deployment into live systems? I remember reading that the ABAP Test Cockpit is supposed to have rulesets to verify that transports are zero-downtime-compatible, but I haven't been able to figure out what those rules might be, or what to do if you really need to deploy changes that violate them.
2023 Apr 28 8:25 AM
Nearly-zero downtime (nZDM) is meant for upgrades and related provisioning scenarios. I don't know how long your custom-code deployments take - can you afford a zero-downtime process of your own, e.g. deploying the new version on a second server and switching over to it to avoid business downtime?
2023 Apr 28 8:29 AM
Includes in routines in a BW system can take hours as well, as can adjustments to domains. For the former, the simplest approach is "don't do it".
I did suggest that we use a static method call instead of an include, but apparently that was too difficult for our offshore partners, who were not cognisant of such techniques. Ahem.
2023 Apr 28 8:55 AM
In my experience it is not possible to guarantee zero downtime. The alternative is to impose a strict rule: no modifications, only new features. (And no bugs...)
2023 Apr 28 10:34 AM
There are numerous methods that can help achieve zero-downtime deployments, even for changes that would ordinarily require substantial downtime.
One example to illustrate what I'm thinking of: if you need to rename or change a DB table column, you could do it in incremental steps (the expand/contract pattern), roughly:
- Add the new column alongside the old one.
- Deploy code that writes to both columns but still reads from the old one.
- Backfill the new column from the old one with a background job.
- Deploy code that reads and writes only the new column.
- Finally, drop the old column.
All these steps can be deployed and executed without a (big) maintenance window, even for very large tables that would otherwise take hours or more to activate. This does, however, imply a paradigm shift away from the "old" way of deploying changes in a single step.
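As a rough sketch of those incremental steps, here is the pattern in miniature, using SQLite and hypothetical table/column names (in an ABAP system each step would be a DDic change plus transported code, spread across several transports rather than one script):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Starting point: a table whose column "cust" is to be renamed to "customer".
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust TEXT)")
cur.executemany("INSERT INTO orders (cust) VALUES (?)", [("A",), ("B",)])

# Step 1 (expand): add the new column alongside the old one.
cur.execute("ALTER TABLE orders ADD COLUMN customer TEXT")

# Step 2: the application now writes to both columns (dual writes),
# while readers still use the old column.
cur.execute("INSERT INTO orders (cust, customer) VALUES (?, ?)", ("C", "C"))

# Step 3: backfill the new column for pre-existing rows (a background job).
cur.execute("UPDATE orders SET customer = cust WHERE customer IS NULL")

# Step 4: the application reads and writes only the new column from now on.
rows = cur.execute("SELECT customer FROM orders ORDER BY id").fetchall()
print([r[0] for r in rows])  # -> ['A', 'B', 'C']

# Step 5 (contract): once no code references the old column, drop it.
# (DROP COLUMN needs SQLite >= 3.35; guarded here so the sketch runs anywhere.)
if sqlite3.sqlite_version_info >= (3, 35, 0):
    cur.execute("ALTER TABLE orders DROP COLUMN cust")
```

The point of the pattern is that every step is individually deployable and individually reversible, and at no point does a reader or writer find the column it expects missing.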
(I find "Migrating to Microservice Databases" by Edson Yanaga to be an excellent source for such patterns and strategies.)
Similarly, if you need to change or add an index on a large table - which would otherwise lock the table for hours until the index is activated - you can create or change the index directly on the database system (with Oracle, at least, e.g. via CREATE INDEX ... ONLINE), in the background and without impacting system availability, before you actually transport the DDic index definition; that way there is no downtime whatsoever.
Now, what I'm looking for is an ideally comprehensive set of rules and best practices for doing such things in an HA ABAP environment. Tool support would be welcome.
(BTW, I mentioned deployment into a running production system "with minimal or no impact". This is of course only true if you can accept that an occasional short dump for individual users during the deployment qualifies as minimal. If that is not tolerable, the whole argument is potentially moot.)