Performance is a key factor in SAP BPC. For each transaction we perform in the system, many tables are updated with logs for that task.

Over time these tables grow very large and consume a lot of space, which also has a negative impact on performance.

Hence we need to archive or delete the log data after a certain period.

This post documents all the housekeeping jobs in one place.

1. BPC Statistics:

Always switch off the BPC_STATISTICS parameter in the SPRO settings after a BPC performance statistics trace.

Execute program UJ0_STATISTICS_DELETE via transaction SA38/SE38 to delete obsolete statistics, or schedule it as a background job as per SAP Note 1648137.
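If you prefer to set up the background job from ABAP rather than SM36, a minimal sketch is shown below. The wrapper report name, the job name BPC_STATS_CLEANUP and the variant STATS_CLEANUP are assumptions; create a variant for UJ0_STATISTICS_DELETE with your own retention selections first.

REPORT zbpc_schedule_stats_cleanup.

" Assumption: variant STATS_CLEANUP exists for UJ0_STATISTICS_DELETE
DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'BPC_STATS_CLEANUP',
      lv_jobcount TYPE tbtcjob-jobcount.

" Open a new background job
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

" Add the statistics deletion report as a job step
SUBMIT uj0_statistics_delete
  USING SELECTION-SET 'STATS_CLEANUP'
  VIA JOB lv_jobname NUMBER lv_jobcount
  AND RETURN.

" Release the job for an immediate start; for a recurring run,
" schedule it periodically via SM36/SM37 instead
CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.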

2. UJBR Backup and Restore:

As a best practice, take a backup with transaction UJBR once a week.

We can schedule this job weekly; if you have more than one environment and want to run backup jobs for all of them, you can create a process chain around the program UJT_BACKUP_RESTORE_UI.

By selecting the “Execute Backup” radio button we can take a full backup of the environment. In the event of data loss for any reason, we can restore the environment by selecting the “Execute Restore” radio button.
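If you want one weekly job to back up several environments, a minimal sketch is shown below. The variant names BACKUP_ENV1 and BACKUP_ENV2 are assumptions; create one variant per environment for UJT_BACKUP_RESTORE_UI with the “Execute Backup” option selected.

REPORT zbpc_weekly_backup.

TYPES ty_variant TYPE c LENGTH 14.

DATA: lt_variants TYPE STANDARD TABLE OF ty_variant,
      lv_variant  TYPE ty_variant.

" Assumption: one saved variant per environment, each with "Execute Backup" set
APPEND 'BACKUP_ENV1' TO lt_variants.
APPEND 'BACKUP_ENV2' TO lt_variants.

LOOP AT lt_variants INTO lv_variant.
  " Each SUBMIT runs a full backup with the selections saved in the variant
  SUBMIT ujt_backup_restore_ui
    USING SELECTION-SET lv_variant
    AND RETURN.
ENDLOOP.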

3. Export Jobs:

With UJBR we take a full backup weekly, but we usually work mostly on one category (Plan/Forecast). Most of the time we only need to restore data for a particular category for a month or two, for certain selections. However, UJBR restore does not offer the option to restore a particular set of data.

If we take exports and save them on the server, we can import based on our selections.

With "Export Transaction Data to File" DMP we can export the data.

We can import the data with either of the two DMPs below, per our requirement:

a. Import Transaction Data (Aggregate Overwrite Mode)

b. Import Transaction Data (Last Overwrite Mode)

4. Light Optimize (LO) Job:

The Light Optimize process moves transaction data from the F fact table to the E fact table, apart from other activities (rollup, statistics update, closing open requests).

This should be scheduled every night during off-business hours; it improves query performance.

We can switch on Zero elimination in the /CPMB/LIGHT_OPTIMIZE process chain.

Alternatively, we can select the “With Zero Elimination” checkbox on the cube’s Manage screen in the BW system.

The first option applies to all models in the system, whereas the second applies only to the specific cube (model).
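To see whether the nightly compression is actually keeping the F fact table small, a minimal monitoring sketch is shown below. The table names ZZ_F_FACT_TABLE and ZZ_E_FACT_TABLE are placeholders; look up the real F and E fact table names of your model’s InfoCube (for example in SE11 or on the cube’s Manage screen).

REPORT zbpc_check_compression.

" Placeholders - replace with the real F and E fact table names of your cube
CONSTANTS: gc_f_table TYPE tabname VALUE 'ZZ_F_FACT_TABLE',
           gc_e_table TYPE tabname VALUE 'ZZ_E_FACT_TABLE'.

DATA: lv_f_rows TYPE i,
      lv_e_rows TYPE i.

" Count the rows in each fact table using dynamic table names
SELECT COUNT(*) FROM (gc_f_table) INTO lv_f_rows.
SELECT COUNT(*) FROM (gc_e_table) INTO lv_e_rows.

WRITE: / 'Uncompressed (F) rows:', lv_f_rows,
       / 'Compressed (E) rows:  ', lv_e_rows.

" A persistently large F table suggests the Light Optimize job is not
" running (or not compressing) as expected.
IF lv_f_rows > lv_e_rows.
  WRITE: / 'F table larger than E table - check the LO job schedule.'.
ENDIF.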

5. Zero Elimination:

If zero elimination is not switched on for any reason and you want to eliminate zero records from the system, you may use the RSCDS_NULLELIM program.
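A minimal sketch for running it in the background is shown below. The variant ZERO_ELIM is an assumption; create it for RSCDS_NULLELIM with the InfoCube of your model, and only run it after all calculations have been validated.

REPORT zbpc_zero_elimination.

" Assumption: variant ZERO_ELIM restricts the run to the intended InfoCube
SUBMIT rscds_nullelim
  USING SELECTION-SET 'ZERO_ELIM'
  AND RETURN.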

6. Audit Tables Housekeeping:

In most cases we have audit logs enabled for administration activity and user activity.

These tables are not purged automatically, even though a purge frequency has been configured.

We need to schedule the DMP “BPC: Archive Data” (/CPMB/ARCHIVE_DATA) regularly. Based on the purge frequency configured in the audit functionality, audit data moves from the audit data table to the archive table.

From the archive table we can delete the data with the UJU_DELETE_AUDIT_DATA program.

For administration activity logs we need to use the “BPC: Archive Activity” (/CPMB/ARCHIVE_ACTIVITY) DMP.

This DMP moves data from UJU_AUDACTDET to UJU_AUDACTDET_A and from UJU_AUDACTHDR to UJU_AUDACTHDR_A, based on the selection given in the DMP. We can then delete the data from UJU_AUDACTDET_A and UJU_AUDACTHDR_A with the SE14 functionality.
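To judge when the archiving DMP and the SE14 cleanup are due, a small monitoring sketch is shown below; it simply reports the row counts of the audit tables named above.

REPORT zbpc_audit_table_sizes.

DATA: lt_tables TYPE STANDARD TABLE OF tabname,
      lv_table  TYPE tabname,
      lv_rows   TYPE i.

" Activity audit tables and their archive counterparts (from the text above)
APPEND 'UJU_AUDACTHDR'   TO lt_tables.
APPEND 'UJU_AUDACTDET'   TO lt_tables.
APPEND 'UJU_AUDACTHDR_A' TO lt_tables.
APPEND 'UJU_AUDACTDET_A' TO lt_tables.

LOOP AT lt_tables INTO lv_table.
  " Dynamic table name in the FROM clause
  SELECT COUNT(*) FROM (lv_table) INTO lv_rows.
  WRITE: / lv_table, lv_rows.
ENDLOOP.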

7. Comments and Journals Housekeeping:

If comments are enabled and you are using journal entries, you may use the “BPC: Clear Comments” (/CPMB/CLEARCOMMENTS) and “BPC: Clear Journal Tables” (/CPMB/CLEAR_JOURNALS) DMPs.

8. BALDAT, BALHDR, BAL_INDX:

With the SBAL_DELETE program or transaction SLG2 we can delete application logs that are older than one year, or per our requirement.
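Before scheduling the deletion, it can help to check how many logs are old enough to qualify. A minimal sketch, assuming a one-year retention, is shown below; the actual deletion is then done with SBAL_DELETE or SLG2 as described above.

REPORT zbpc_old_log_count.

DATA: lv_cutoff TYPE sy-datum,
      lv_count  TYPE i.

" Logs older than roughly one year (365 days) are candidates for deletion
lv_cutoff = sy-datum - 365.

SELECT COUNT(*) FROM balhdr INTO lv_count
  WHERE aldate < lv_cutoff.

WRITE: / 'Application log headers older than one year:', lv_count.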

9. UJF_DOC, UJF_DOC_CLUSTER, UJD_STATUS Tables:

The UJF_DOC table contains transformation files, conversion files, script logic files and other documents.

It also holds the flat files generated by export jobs, the files uploaded for import jobs, and the logs generated by DMP executions.

a. We can delete unwanted files and reports from the Data Manager/EPM tabs in Excel.

b. The UJF_DOC_CLUSTER and UJD_STATUS tables contain DMP execution log details. The programs UJF_FILE_SERVICE_DLT_DM_FILES and UJF_FILE_SERVICE_CLEAN_LOGS can be used to delete the data from these tables (see the sketch after this list).

c. Even if you select 'Script Logic logs' in the UJF_FILE_SERVICE_CLEAN_LOGS program, the logs with the suffix '.lgx' under the folder '/root/webfolders/<environment id>/adminapp/<model id>' are not deleted.

We need to implement SAP Note 2581931 (Add feature for cleaning script logic logs) to fix this problem.

d. We can also delete the entries manually in transaction UJFS.
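A minimal sketch for running both file service cleanup reports (from SAP Note 1908533) in one background step is shown below. The variants DM_FILES_90D and DM_LOGS_90D are assumptions; create them with the environment and retention selections appropriate for your system.

REPORT zbpc_file_service_cleanup.

" Delete old data manager files stored via the BPC file service
SUBMIT ujf_file_service_dlt_dm_files
  USING SELECTION-SET 'DM_FILES_90D'
  AND RETURN.

" Delete old DMP execution logs
SUBMIT ujf_file_service_clean_logs
  USING SELECTION-SET 'DM_LOGS_90D'
  AND RETURN.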

10. Work Status Tables:

Sometimes the system contains work status entries for obsolete transaction data. (For example, you locked data for the year 2010 and later deleted the transaction data, but the work status table still contains entries for 2010.)

Implement SAP Note 2053697 and run the UJW_WS_TEST program. Note that if you do not provide a selection here, all work status entries will be deleted.
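A minimal sketch for running the cleanup with a restricted selection is shown below. The variant WS_OBSOLETE_ONLY is an assumption; create it for UJW_WS_TEST so that only the obsolete data region (e.g. the year 2010 in the example above) is selected. Never run the report without a selection, since that would delete all work status entries.

REPORT zbpc_work_status_cleanup.

" Assumption: variant WS_OBSOLETE_ONLY restricts the run to the obsolete data region
SUBMIT ujw_ws_test
  USING SELECTION-SET 'WS_OBSOLETE_ONLY'
  AND RETURN.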

 

References:

1. SAP Note 1470209 - BW report RSCDS_NULLELIM on InfoCube without time dimension

2. SAP Note 1934038 - Housekeeping of table UJ0_STAT_DTL

3. SAP Note 1705431 - Planning and Consolidation 10.0 NW - Housekeeping

4. SAP Note 195157 - Application log: Deletion of logs

5. SAP Note 1908533 - BPC File Service Cleanup Tool

6. SAP Note 2053697 - ABAP report to remove obsolete work status for data region

7. SCN thread: http://scn.sap.com/thread/3887031

8. SAP Note 2581931 - Add feature for cleaning script logic logs

5 Comments
former_member228877

Hi Manohar,

Thanks for the document. All the housekeeping activities in one place - really helpful.

Best Regards

Sree

Former Member

Latest updates for the UJF_DOC_CLUSTER table: OSS Note 2314951 (UJF_FREE_UJF_DOC_CLUSTER),

and for the UJ0_CLUS_PARAM table: OSS Notes 2316677 and 2321462 (UJ0_CLEAR_UJ0_CLUS_PARAM).

And for BAL_INDX:

OSS Note 2289009 - Referred records still exist in table BAL_INDX after removing corresponding UJ logs in SLG2.

former_member187113
Hi Manohar,

Useful document. Thanks for putting this together.

Appreciated

Cheers

Nick
houtbyr

RE: Zero Elimination (points 4 and 5): Word of Warning

 

One thing to consider is whether you really want to have 0 values removed from the database.  During high volume data loads, multiple data requests can be created.  The database will only keep so many of the most recent data requests managed; after that, it compresses the data to simplify and remove as many data records as possible for performance.

e.g. there are 5 separate data records for +$100 each; after compression, there will be a single data record for +$500 (5 records compressed into 1).

 

This doesn’t seem like an issue on the surface but caused us a great deal of grief.  We have logic that fires after loads, but with zero elimination active, any removed 0 records would not be able to trigger logic to run.

 

For example, on Day 1, we load data with a transaction that places $1000 in ACCOUNTA.  Logic that runs after the load would also move that $1000 into consolidation accounts or other downstream accounts for other financial analysis (business logic).
On Day 2, we realize that was the wrong account, so we reverse out $1000 from ACCOUNTA and put it in ACCOUNTB. This creates a net $0 for ACCOUNTA when the data gets compressed (from a large number of data transactions), and the ACCOUNTA = $0 database record is removed with zero elimination active.

With no value and no record now associated with ACCOUNTA, the logic will not fire on that account and will not zero out any of the downstream accounts.  If there’s no source data, no logic will run on it.
At the same time, since there’s a value in ACCOUNTB, the logic will push those values downstream.  This creates a double-counting scenario and incorrect financial results;  downstream ACCOUNTA and ACCOUNTB both have the same $1000 which would show as a double-counted figure.

 

Again, this is only an issue if there is enough data throughput that multiple data requests are created, which would force data compression to happen before logic runs, and there were some reversal transactions that result in a net zero value (values that WERE there, but are now removed). In other words, potentially a rare scenario. However, it happened for us quite consistently based on the amount of data being loaded on a monthly basis and our system/database configuration.

 

Our solution was to turn zero elimination OFF so that the system would leave 0 records behind.  This way there would always be a record to initiate the logic and correct any downstream data records we were managing.

Yes, this would result in some extra database records, but I doubt enough to affect general performance on an ongoing basis.  I’d rather have a few extra database records than have incorrect financial results.

 

Everywhere I read suggests that zero elimination should be turned on.  Before you consider that option, understand your data requirements and the impacts that can result when database records are “removed” especially if you have custom logic that runs after large data loads.

 

Rich


Hi Richard,

You are correct. In some scenarios we can’t switch on zero elimination. We too have such a scenario, so we haven’t switched on zero elimination while compressing the records.

That’s why I have written it as below:

5. Zero Elimination:

If zero elimination is not switched on for any reason and you want to eliminate zero records from the system, you may use the RSCDS_NULLELIM program.

Usually we run all calculations and do the validation before we run the RSCDS_NULLELIM program.

Why we need to run Zero elimination:

1. As you mentioned, we can live with some performance degradation from some zeroed records.

2. We have another requirement: at the end of the planning cycle, once plan data is signed off, we retract the data to other systems.

Let me explain with your example.

A user entered $100 against ACCOUNTA when it was supposed to go to ACCOUNTB.

He then corrected the data by moving it from A to B, and ACCOUNTA no longer exists in the downstream systems.

But there are still records for ACCOUNTA (one with $100 and another with -$100, which net to zero). When we run the retraction process, it fails because ACCOUNTA is expired/no longer exists in that system.

So before we run the retraction process, we run the zero elimination program a couple of times per year, after thorough validation of all calculations:

1. We validate the data for all calculations.

2. Run zero elimination.

3. Run the retraction process.
