Here you can find the results of an investigation I did into the SAP BPC write-back parameters that can be maintained as model parameters for Planning and Consolidation in transaction SPRO (a general description can be found in the BPC admin guide).
RECLEVEL_NR specifies the maximum number of submitted records up to which record-based locking is always applied.
This is quite important, because a wrong setting of this parameter can hurt performance and overload the enqueue server by generating a huge number of locking selection ranges.
If the number of records submitted in a BPC input form, or produced as the result of a Data Manager package, exceeds the value of RECLEVEL_NR, one of two behaviors occurs.
In theory, the choice between these two scenarios should depend on the parameter SPARSITY_COEF, but in the implementation I checked, the sparse behavior was never triggered, no matter what value SPARSITY_COEF had.
The screenshot is taken from BPC release 801, SP level 12. As a result of these 2 scenarios we can have:
- record-based locking (the same locking we get when RECLEVEL_NR is greater than the number of submitted records)
- a BETWEEN range lock: instead of adding a record for every member to lock in a given dimension, a single record defines an interval from the lowest to the highest member. This avoids overloading the server with a huge locking selection, but on the other hand we risk locking too many members, especially in a sparse situation.
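The top-level decision described above can be sketched as follows. This is an illustrative Python sketch, not actual BPC code (the real logic lives in ABAP); the function name and signature are my own, and only the RECLEVEL_NR threshold behavior is taken from the text.

```python
# Hypothetical sketch of the top-level write-back locking decision.
# Only RECLEVEL_NR and the submitted record count are modeled here;
# the real implementation would also consult SPARSITY_COEF, though in
# my tests the sparse path was never taken.

def locking_mode(num_records: int, reclevel_nr: int = 10) -> str:
    """Return which locking strategy would be chosen for a save."""
    if num_records <= reclevel_nr:
        # One enqueue entry per submitted record.
        return "record-based"
    # Above the threshold, BPC switches to BETWEEN ranges.
    return "between-range"
```

With the default RECLEVEL_NR of 10, a save of 5 records keeps record-based locking, while a Data Manager package writing 40000 records falls back to range locking.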
The locking selection is created using the BPC parameter INTERVAL_NR. From the BPC admin guide we know that:
"In the situation where record level locking is not being implemented and the data set being saved is NOT sparse, any dimensions with less than this number of distinct member values in the dataset will be locked using their single values. If the dimension has more than this number of records, the range between the low to high values will be locked."
So if the number of distinct members of a dimension is less than or equal to INTERVAL_NR, record-based locking is applied for that dimension; otherwise the lock is a single interval that considers only the lowest and highest member.
If I write 3 members for Time (2016.01, 2016.02 and 2016.12) and INTERVAL_NR is 2, the interval will lock every Time member from 2016.01 to 2016.12. As a consequence:
- I have generated just 1 record instead of 3 in the locking table (positive)
- I have locked 12 months instead of the 3 actually changed (negative)
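The per-dimension rule and the Time example above can be sketched like this. Again a hypothetical Python illustration (names and the `EQ`/`BT` tuple shapes are my own, loosely modeled on ABAP select-option conventions), not BPC's actual code:

```python
# Illustrative sketch of how INTERVAL_NR decides, per dimension,
# between single-value locks and one BETWEEN range.

def dimension_lock(members: list[str], interval_nr: int):
    """Return the lock selection for one dimension's distinct members."""
    distinct = sorted(set(members))
    if len(distinct) <= interval_nr:
        # Record-based: one single-value lock per member.
        return [("EQ", m) for m in distinct]
    # Range lock: one record spanning lowest to highest member.
    return [("BT", distinct[0], distinct[-1])]

# The Time example from the text: 3 members, INTERVAL_NR = 2.
print(dimension_lock(["2016.01", "2016.02", "2016.12"], interval_nr=2))
# -> [('BT', '2016.01', '2016.12')]: one record, but 12 months locked
```

Raising INTERVAL_NR to 3 in this example would produce three `EQ` entries instead, locking exactly the months that changed.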
SPARSE data sets (a path that, in my current version, is never actually taken).
The locking selection is created using the BPC parameter MULTIPLY_COEF. From the BPC admin guide:
"In the situation where record-level locking is not being implemented and a sparse data set is being saved, this value specifies the maximum number of members for which you can implement record level locking (that is, when to swap to using a BETWEEN range in the lock table)."
In this case the range selection is applied by considering the number of distinct members in every dimension.
See the example below, where the model has 4 dimensions and MULTIPLY_COEF is equal to 15. Please note that the table is sorted by the number of members in each dimension: Category (1 member) comes first and Account (7 members) comes last.
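My reading of this behavior can be sketched as follows. This is an assumption on my part, not documented BPC code: I interpret "considering the number of members in every dimension" as keeping record-level locking while the running product of member counts (over dimensions sorted ascending) stays within MULTIPLY_COEF. The Category (1) and Account (7) counts come from the example; the Entity and Time counts are invented for the demo.

```python
# Hypothetical sketch of the MULTIPLY_COEF decision for sparse saves:
# dimensions are sorted by distinct-member count, and record-level
# locking is kept while the running product of those counts stays
# within MULTIPLY_COEF; remaining dimensions fall back to BETWEEN
# ranges. This is my interpretation, not verified BPC source code.

def classify_dimensions(member_counts: dict[str, int], multiply_coef: int):
    """Map each dimension to 'record' or 'range' locking."""
    result = {}
    product = 1
    # Sort ascending by member count, as in the example table.
    for dim, count in sorted(member_counts.items(), key=lambda kv: kv[1]):
        product *= count
        result[dim] = "record" if product <= multiply_coef else "range"
    return result

# 4-dimension model with MULTIPLY_COEF = 15; Entity/Time counts invented.
print(classify_dimensions(
    {"Category": 1, "Entity": 2, "Time": 3, "Account": 7},
    multiply_coef=15,
))
```

With these counts the running product is 1, 2, 6, 42: the first three dimensions keep record-level locking and Account (where the product exceeds 15) gets a BETWEEN range.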
I would advise keeping the parameter RECLEVEL_NR at its default value of 10 and not increasing it.
The type of locking implemented will then depend on the parameter INTERVAL_NR (assuming we are always in a NON-sparse scenario).
Impact on parallelisation
When a parallel BPC process (RUNLOGIC_PH or the BPC parallel framework) is implemented, it is highly recommended to keep RECLEVEL_NR low.
Running RUNLOGIC_PH with RECLEVEL_NR equal to PACKAGE_SIZE usually causes some processes to fail because the enqueue server gets overloaded.
Below are the results of some tests I ran on our BPC system:
- RECLEVEL_NR = 40000, INTERVAL_NR = 10 -> some processes failed; ~14 min
- RECLEVEL_NR = 10, INTERVAL_NR = 10 -> succeeded; ~8 min 30 s
- RECLEVEL_NR = 10, INTERVAL_NR = 1000 -> succeeded; ~8 min 40 s