on 2015 Jan 21 8:57 AM
Hi,
I have a Write-Optimized DSO of huge volume (12 TB), holding close to 3 years of data.
Now I want to retain only 6 months of data, but all traditional deletion methods fail with short dumps.
Selective loads to another DSO aren't helping either.
I was thinking about whether I can create small tables out of this big active table at the Oracle DB level.
My questions are:
1) Is it possible to create these smaller tables at DB level?
2) If I succeed with the above, how can I load this into a brand-new DSO?
Thanks,
Prashanth
Sure will do.
Thanks a lot Matthew !
I'm closing this discussion.
I assume your DSO has some kind of period or date InfoObject.
A DSO is implemented as a transparent table (or a collection of tables). Therefore a competent ABAPper could certainly write a program that deletes all data with a date more than 6 months in the past. It may even be as simple as DELETE FROM /bic/aDSONAME00 WHERE date LT six_months_ago.
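A minimal sketch of such a one-shot program, assuming a hypothetical active table /BIC/AZWODSO00 and date field /BIC/ZCALDAY (replace with your DSO's actual names):

```abap
REPORT zdel_wodso_simple.

" Hypothetical names: /bic/azwodso00 (active table of the
" write-optimized DSO) and /bic/zcalday (date field).
DATA lv_cutoff TYPE d.

lv_cutoff = sy-datum - 180.    " roughly six months ago

" One-shot delete: the whole operation runs in a single
" database transaction, which matters on a table this size.
DELETE FROM /bic/azwodso00 WHERE /bic/zcalday < lv_cutoff.
COMMIT WORK.
```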
As informed by Basis, it ran out of memory and put a lot of stress on CPU utilization.
We have been instructed not to run this program again 😞
We also have some other approaches, like creating smaller flat files with UNIX tools (accessed via PuTTY), placing them on the application server, and then loading them back into a new DSO.
I'm looking for a more "out of the box" approach to tackle this.
Presumably this is a one off (once you're running regularly the volumes will be higher), and it's a business requirement. Basis cannot tell the business "you cannot run this program" - they need to find a solution. If my Basis team were to try to instruct me to do anything (which they wouldn't, because they're competent professionals), I'd tell them where to get off. The business instructs Basis, not the other way around - you should escalate if necessary.
By the way, an "out of the box" solution is one that is prepackaged. I think you're asking for an "outside of the box" solution.
The problem is probably due to rollback - the entire DB operation needs to be able to be rolled back - and that's what's causing the stress on the DB server.
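One common mitigation (a sketch only, reusing the same hypothetical table and field names as above) is to delete in packages and commit after each one, so the undo Oracle has to retain never covers more than one package:

```abap
REPORT zdel_wodso_packaged.

" Hypothetical names: /bic/azwodso00, /bic/zcalday.
CONSTANTS gc_pkg TYPE i VALUE 100000.   " rows per commit

DATA: lv_cutoff TYPE d,
      lt_rows   TYPE STANDARD TABLE OF /bic/azwodso00.

lv_cutoff = sy-datum - 180.

DO.
  " Fetch one package of candidate rows; full rows carry the
  " technical key (REQUEST/DATAPAKID/RECORD) of a W/O DSO.
  SELECT * FROM /bic/azwodso00
           INTO TABLE lt_rows
           UP TO gc_pkg ROWS
           WHERE /bic/zcalday < lv_cutoff.
  IF sy-subrc <> 0.
    EXIT.                               " nothing left to delete
  ENDIF.

  " Delete by primary key, then end the DB transaction so the
  " undo never exceeds one package.
  DELETE /bic/azwodso00 FROM TABLE lt_rows.
  COMMIT WORK.
ENDDO.
```

Even packaged, a delete of this size will take a long while on 12 TB, which is one reason to consider the copy-and-drop approach instead.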
Here's a simple solution.
1) Create a write optimised DSO
2) Load that DSO with the data you need from the old DSO
3) Drop the old DSO contents at DB level. A simple drop shouldn't require any rollback (a sketch follows after this list)
4) Load the old DSO from the new DSO.
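For step 3, if it really has to happen at DB level from within ABAP, one possibility (a sketch only; the table name is a placeholder, and bypassing the BW layer like this leaves the DSO's request administration entries behind) is native SQL via ADBC, since TRUNCATE is DDL and generates no undo on Oracle:

```abap
REPORT ztrunc_wodso.

DATA: lo_sql TYPE REF TO cl_sql_statement,
      lx_sql TYPE REF TO cx_sql_exception.

CREATE OBJECT lo_sql.
TRY.
    " TRUNCATE is DDL: Oracle executes it without rollback data.
    " Placeholder table name -- substitute your DSO's active table.
    lo_sql->execute_ddl( 'TRUNCATE TABLE "/BIC/AZWODSO00"' ).
  CATCH cx_sql_exception INTO lx_sql.
    WRITE: / 'Truncate failed:', lx_sql->get_text( ).
ENDTRY.
```

Deleting the DSO contents through the Data Warehousing Workbench instead should achieve the same drop while keeping the request administration consistent.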
The DSO-to-DSO solution is already in place. Each request is ~100 million records.
Selective loads from a W/O DSO, reading each request, take hours to even kick off the first data package.
If it were a standard DSO, it probably would have been easier.
I have already proposed one more option: re-create the required data from the Cube into a new DSO and drop the data from the old DSO. This looks like the safest and recommended route at the moment.
As I mentioned, I was just checking for more tested options.
Another way: create a Z table with the same structure as your /BIC/Adso00 table. Write an ABAP program that populates this Z table with the required data from the DSO (and that can also write back into the DSO from the Z table).
Run the program to populate the Z table.
Drop the DSO contents.
Run the program to repopulate the DSO.
The program should use OPEN CURSOR and SELECT to minimise server load; a rough sketch follows.
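A sketch of the first direction, assuming the same hypothetical names as above plus a Z table ZWODSO_KEEP with an identical structure. OPEN CURSOR streams the result set in packages instead of materialising everything in one array fetch, and WITH HOLD keeps the cursor open across the intermediate commits:

```abap
REPORT zcopy_dso_to_ztab.

" Hypothetical names: /bic/azwodso00 (DSO active table),
" zwodso_keep (Z table, same structure), /bic/zcalday (date field).
CONSTANTS gc_pkg TYPE i VALUE 50000.     " rows per package

DATA: lv_cursor TYPE cursor,
      lv_cutoff TYPE d,
      lt_buffer TYPE STANDARD TABLE OF /bic/azwodso00.

lv_cutoff = sy-datum - 180.              " keep the last ~6 months

" WITH HOLD keeps the cursor open across COMMIT WORK.
OPEN CURSOR WITH HOLD lv_cursor FOR
  SELECT * FROM /bic/azwodso00
           WHERE /bic/zcalday >= lv_cutoff.

DO.
  FETCH NEXT CURSOR lv_cursor
        INTO TABLE lt_buffer
        PACKAGE SIZE gc_pkg.
  IF sy-subrc <> 0.
    EXIT.                                " result set exhausted
  ENDIF.

  INSERT zwodso_keep FROM TABLE lt_buffer.
  COMMIT WORK.                           " bound undo per package
ENDDO.

CLOSE CURSOR lv_cursor.
```

The reverse direction (repopulating the DSO from the Z table) is the same loop with source and target swapped.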
You should also look at the various SAP Notes on reducing the tables that hold requests for loads; implement them and run the program for this DSO. (Sorry, I don't know the note numbers off the top of my head.)