2016 Sep 22 1:04 PM
Hello,
Has anyone faced this situation: a single custom report needs to be transported to all SAP systems, because it will be run manually by users on those instances?
I was thinking of writing a "shell" on the destination system (where user would execute it), which copies the code from a central system via RFC, and executes it on the destination system. It's a report, no Z tables or other updates, so everything would be contained within the report object. The output would be an ALV which the user would download or whatever.
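To make the idea concrete, here is a minimal sketch of such a shell, assuming a hypothetical RFC-enabled function module Z_GET_REPORT_SOURCE on the central system that returns the source of a subroutine pool with a FORM main entry point (all Z names and the RFC destination are illustrative):

    REPORT zreport_shell.

    TYPES ty_line TYPE c LENGTH 255.

    DATA: lt_source TYPE STANDARD TABLE OF ty_line, " fetched source lines
          lv_prog   TYPE sy-repid,                  " generated program name
          lv_msg    TYPE c LENGTH 255.

    " fetch the current source from the central system (SM59 destination)
    CALL FUNCTION 'Z_GET_REPORT_SOURCE'
      DESTINATION 'CENTRAL_SYSTEM'
      EXPORTING
        iv_report = 'ZCENTRAL_LOGIC'
      TABLES
        et_source = lt_source
      EXCEPTIONS
        communication_failure = 1 MESSAGE lv_msg
        system_failure        = 2 MESSAGE lv_msg.
    IF sy-subrc <> 0.
      MESSAGE lv_msg TYPE 'E'.
    ENDIF.

    " compile the fetched source into a temporary subroutine pool
    GENERATE SUBROUTINE POOL lt_source NAME lv_prog MESSAGE lv_msg.
    IF sy-subrc <> 0.
      MESSAGE lv_msg TYPE 'E'.
    ENDIF.

    " run the entry form; it builds and displays the ALV itself
    PERFORM main IN PROGRAM (lv_prog) IF FOUND.

The obvious trade-off is that the destination system executes whatever the central system sends, which is exactly the security concern raised below.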
The main and obvious advantage is that changes are centralized and everyone automatically gets the latest version. Otherwise, each change means the program has to be transported dozens of times through the dev, test, and prod landscapes.
One disadvantage is that changes requiring new inputs would force every copy of the shell to be updated, negating that advantage.
But I would be interested in hearing people's thoughts on this approach, or other ideas.
Thank you.
2016 Sep 22 1:21 PM
The main issues are:
1. Security - preventing the possibility of malicious code being spread.
2. Safety - preventing a catastrophic bug from affecting all instances in one go.
One of my clients is a multinational with many regional ECC instances, each with its own D/C/P landscape. The code is always transported through the landscapes. It isn't onerous.
Other clients have a single D system, and from there the transport goes to the C/P of the target instances. I've never seen a centralised D and C system feeding straight into the Ps - probably because of point 2.
2016 Sep 22 1:28 PM
Thanks Matthew.
Yes, #1 was my other main concern.
In the centralized system, any program change would still go through the Dev-Test-Quality-Prod landscape and would be tested on each respective destination instance, hopefully reducing the chance of the global problem you mentioned.
2016 Sep 22 1:33 PM
Hello,
There are many possible solutions to this problem, so these are my thoughts:
1. Create a custom Z* transport route between the source development system and every development system that should receive the new version of the program. The transport layer is assigned at package level, so you can create a new Z* package and put the program there.
- Influences the STMS landscape,
+ Changes are propagated automatically via transports,
2. Manually extract the transport from the source system to any other system (or do it with OS-level scripts). Basically, you can copy a released transport's files to the transport directory of another system and add them to the STMS import queue (tp with the addtobuffer option; see the sketch after this list).
- Requires manual work or scripts,
+ Does not influence the STMS landscape.
3. Dynamic code generation using the GENERATE SUBROUTINE POOL statement. You can then replace the code with an updated version, but I don't recommend this approach due to security issues.
- High security risk.
Summarizing: whenever we face recurring updates across many systems, we usually use option 1. Option 2 is commonly used to apply SAP Notes delivered as SAR archives, and also whenever I have to transport a package with our own products to customer systems. Option 3, as mentioned, is not recommended.
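For option 2, a minimal sketch at the operating-system level, assuming a transport DEVK900123 released on a source system DEV, a target system QAS, and the standard /usr/sap/trans layout (all IDs and paths are illustrative):

    # copy the released transport files to the target's transport directory:
    #   data file  /usr/sap/trans/data/R900123.DEV
    #   cofile     /usr/sap/trans/cofiles/K900123.DEV
    # then register and import the transport on the target:
    tp addtobuffer DEVK900123 QAS pf=/usr/sap/trans/bin/TP_DOMAIN_QAS.PFL
    tp import DEVK900123 QAS client=100

Alternatively, the same can be done from within the STMS import queue via Extras > Other Requests > Add.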
2016 Sep 26 9:28 PM
In addition to what Matt noted:
3. Audit, especially in a public company (SOX-compliant in the US, for example). Usually the changes are subject to some kind of audit trail. E.g. we can't move any changes into our PRD unless they have a ticket number and the ticket has been approved by a user.
Moving the transports through the landscape is actually not that difficult. Where you could invest time more effectively is in reducing the potential for changes. For example, I had to write a corporate report that I knew would go into 3 other systems. Of course, they all have some quirks and differences. So what I did was put as many options on the selection screen as possible, so that others wouldn't have to come back to me for small changes. Just a thought.
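As a hypothetical sketch of that idea, system-specific behaviour is pushed onto the selection screen so that local quirks become selection variants rather than code changes (all names are illustrative):

    REPORT zcorp_report.

    DATA gv_bukrs TYPE bukrs.

    " core selections shared by every system
    SELECT-OPTIONS: s_bukrs FOR gv_bukrs,
                    s_date  FOR sy-datum.

    " switches that absorb per-system quirks without code changes
    PARAMETERS: p_curr  TYPE waers DEFAULT 'USD',    " reporting currency
                p_local AS CHECKBOX,                 " apply local rules
                p_vari  TYPE disvariant-variant.     " saved ALV layout

Each system then just maintains its own default variant instead of requesting code changes.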
2016 Sep 27 8:10 AM
Or add your own enhancement points or BADIs for localisation.
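A hypothetical sketch of the BAdI route: the central report calls a custom BAdI, and each system implements only its local deviations (the BAdI, spot, and method names are illustrative):

    " in the central report, after the core logic has filled lt_result
    DATA lo_local TYPE REF TO zbadi_report_local.  " hypothetical BAdI definition

    GET BADI lo_local.
    CALL BADI lo_local->adjust_result
      CHANGING
        ct_result = lt_result.

    " or, with an explicit enhancement point in the core code:
    ENHANCEMENT-POINT zep_local_logic SPOTS zes_zreport.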
2016 Sep 27 9:06 AM
Some solutions that we have used in the past:
1. We had one central RFC function module containing the main body of the logic. Programs in the other production systems send data to this RFC and get back the result.
2. We deploy the original code and one RFC on a single P system. The RFC transfers the entire source code to the caller, which can then generate and run the code locally. The code change then needs to be made in only one place.
Both of these have their own drawbacks and overheads.
For your requirement, I would ideally suggest separating out the core processing that you expect to change frequently and calling it dynamically; a sketch of approach 1 follows below.
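A minimal sketch of approach 1, assuming a hypothetical RFC-enabled function module Z_RUN_CORE_LOGIC on the central system and a DDIC structure ZRESULT_LINE for the result (all names are illustrative):

    REPORT zlocal_wrapper.

    PARAMETERS p_bukrs TYPE bukrs.

    DATA: lt_result TYPE STANDARD TABLE OF zresult_line, " hypothetical structure
          lv_msg    TYPE c LENGTH 255.

    " send the selections to the central system; get the finished result back
    CALL FUNCTION 'Z_RUN_CORE_LOGIC'
      DESTINATION 'CENTRAL_SYSTEM'
      EXPORTING
        iv_bukrs  = p_bukrs
      TABLES
        et_result = lt_result
      EXCEPTIONS
        communication_failure = 1 MESSAGE lv_msg
        system_failure        = 2 MESSAGE lv_msg.
    IF sy-subrc <> 0.
      MESSAGE lv_msg TYPE 'E'.
    ENDIF.

    " display locally
    TRY.
        cl_salv_table=>factory( IMPORTING r_salv_table = DATA(lo_alv)
                                CHANGING  t_table      = lt_result ).
        lo_alv->display( ).
      CATCH cx_salv_msg.
        MESSAGE 'Error displaying ALV' TYPE 'E'.
    ENDTRY.

The trade-off is that the central system must be reachable at runtime, and any data that exists only locally still has to be sent across.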
2016 Sep 27 9:59 AM
Hi,
I think in the long run you're better off just to transport and import it like you normally do.
The reason I say this is that at the moment you have one report which conforms to some common business requirement and is used by different users.
Over the next couple of years, users on system A will develop new requirements dissimilar to those of users on systems B and C.
The risk then becomes over-generalising the solution while still trying to cater for everyone, which makes it unnecessarily complex and in turn makes development and support more expensive.
At that point it is better to just split the solution into specific ones depending on business requirements.
Kind regards, Rob Dielemans