Data reconciliation is a challenge for applications built on multiple systems: data is sent by one or more sender systems and received by one or more receiver systems. Data across systems has to be valid for it to be useful to enterprise processes, yet when data is sent across systems, errors inadvertently creep in. These errors must be tracked and rectified to maintain data consistency in enterprise systems, which may require manual or automated reconciliation of the disparate data across systems. Systems must be set up to ensure that accurate data is sent to and from production. Inadvertent errors can cause problems in production and production support, and processes that consume the inaccurate data downstream will encounter issues as well.
The banking industry typically has a heterogeneous system landscape, with many different systems involved in the banking process exchanging a huge number of transactions with each other. Each time data is sent to two or more systems, there is a potential risk of data inconsistency, and a reconciliation mechanism needs to be implemented to ensure data consistency. Usually this is solved with dedicated applications for each specific scenario, which leads to an even more heterogeneous landscape and multiple data silos. To adhere to the product standard ITSAM, we need to ensure data consistency across disparate systems.
SAP banking implementation projects always face the requirement to provide a solution for reconciling at least the involved SAP systems, if not all involved systems including third-party products and legacy systems. If no consistent reconciliation strategy exists, projects need to find a customer-specific solution, which can be a considerable cost driver. Instead of having a specific solution at each customer, which leads to more complexity, we now have a unified approach to solving the issue of reconciling data.
In the production system, customers face several inconsistencies between the Loans Management and FICAx (open item management) systems in the areas of billing, payment distribution, and open item clearing. Because billing document creation, payment distribution, and open item clearing are separate asynchronous processes, the differences cause inconsistent data across the systems. This in turn affects subsequent asynchronous processes such as invoicing.
The inconsistencies encountered are due either to technical issues, for instance a failed RFC call, or to functional issues, for instance inconsistent customizing that leads to missing or inconsistent objects on the receiver system. Reconciliation of data can be either numeric or qualitative. In numeric reconciliation, data can be aggregated based on numbers (for instance the billed amount) and compared. If there are no issues in the aggregated data, the data is consistent and no further action needs to be taken; if the aggregated data is inconsistent, the detailed data can then be examined to find the source of the inconsistency. In qualitative reconciliation, there are no numbers to reconcile; instead, specific non-numeric objects, for example business partners, need to be reconciled between the sender and receiver systems. In this type of reconciliation, aggregation cannot be applied and each object has to be examined individually. The approach we have adopted in our reconciliation model is numeric reconciliation, which can detect inconsistencies caused by both technical and functional issues.
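To make the numeric idea concrete, here is a minimal sketch in Python (the productive implementation runs in ABAP and BW; the field names and data below are invented for illustration). Amounts are summed per reconciliation key on each side, and only keys whose totals differ would be examined at the detail level:

```python
from collections import defaultdict
from decimal import Decimal

def aggregate(items):
    """Sum billed amounts per reconciliation key (numeric reconciliation)."""
    totals = defaultdict(Decimal)
    for item in items:
        totals[item["recon_key"]] += Decimal(item["amount"])
    return totals

def find_discrepancies(sender_items, receiver_items):
    """Return the reconciliation keys whose aggregated amounts differ."""
    sent, received = aggregate(sender_items), aggregate(receiver_items)
    zero = Decimal(0)
    return {key: (sent.get(key, zero), received.get(key, zero))
            for key in set(sent) | set(received)
            if sent.get(key, zero) != received.get(key, zero)}

# Invented data: one billing item never reached the receiver system.
sender = [{"recon_key": "RK01", "amount": "100.00"},
          {"recon_key": "RK01", "amount": "50.00"}]
receiver = [{"recon_key": "RK01", "amount": "100.00"}]
print(find_discrepancies(sender, receiver))
# {'RK01': (Decimal('150.00'), Decimal('100.00'))}
```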
As a first step towards data reconciliation, a unique reconciliation key is generated during the billing mass run in the banking services system (the sender system). The reconciliation key is generated in a user exit, which gives the customer the flexibility to choose how the key is generated, what its structure is, and for how many records it remains the same; the parameters used to generate the reconciliation key can be decided by the customer. This unique reconciliation key is associated with all the billing items; the billing items are aggregated and the aggregated data is stored in a separate outbound aggregated data table. The same reconciliation key is passed to the FICAx system (the receiver system), where the incoming data is likewise aggregated and stored in a separate inbound aggregated data table.
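The following hypothetical Python sketch shows one way such a user exit could behave (the real exit is an ABAP implementation; `ITEMS_PER_KEY`, the key format, and the table layout are all assumptions made for illustration). Here the key is derived from the run date plus a block number that changes every N items, and an outbound aggregate row is maintained per key:

```python
from collections import defaultdict
from datetime import date

# Assumption for illustration: the customer chooses to rotate the key
# every 1000 billing items and to build it from the billing run date.
ITEMS_PER_KEY = 1000

def generate_recon_key(run_date: date, item_index: int) -> str:
    """Customer-defined key logic (stand-in for the user exit)."""
    return f"{run_date:%Y%m%d}-{item_index // ITEMS_PER_KEY:06d}"

def tag_and_aggregate(billing_items, run_date):
    """Stamp each billing item with its reconciliation key and build the
    outbound aggregated data table (one row per key and date)."""
    outbound = defaultdict(lambda: {"count": 0, "amount": 0.0})
    for i, item in enumerate(billing_items):
        key = generate_recon_key(run_date, i)
        item["recon_key"] = key  # the key travels with the item to the receiver
        row = outbound[(key, run_date.isoformat())]
        row["count"] += 1
        row["amount"] += item["amount"]
    return outbound

items = [{"doc_no": n, "amount": 10.0} for n in range(2500)]
agg = tag_and_aggregate(items, date(2014, 7, 1))
# 2500 items yield three aggregate rows: blocks 000000, 000001, 000002.
```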
The aggregated data and the detailed data on both the sender and the receiver systems are used for reconciliation in the BW system once the billing mass run has completed successfully.
As stated above, for data reconciliation the sender and receiver systems each need to provide two BW data sources: one for aggregated data and one for detailed data. The following diagram illustrates the components on the sender and receiver systems that are required for data reconciliation.
Typically, the data source for aggregated data reads only from a specific reconciliation table, i.e. the inbound or outbound aggregated data table, where the data is stored after aggregation based on the reconciliation unit. The detailed data source, in contrast, reads the application data from a specific detail table, such as the billing items on the sender system and the billing documents on the receiver system in our scenario.
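A rough sketch of how these two kinds of data sources could select their data (again in Python, against in-memory stand-ins for the tables; all table and field names are invented): the aggregated extractor performs a delta read on a timestamp, while the detailed extractor selects application data for one reconciliation key / date pair:

```python
from datetime import datetime

# In-memory stand-ins for the outbound aggregated table and the billing
# item table; the field names are assumptions for illustration.
OUTBOUND_AGG = [
    {"recon_key": "RK01", "recon_date": "2014-07-01", "amount": 150.0,
     "changed_at": datetime(2014, 7, 1, 22, 5)},
]
BILLING_ITEMS = [
    {"recon_key": "RK01", "recon_date": "2014-07-01", "item_no": 1, "amount": 100.0},
    {"recon_key": "RK01", "recon_date": "2014-07-01", "item_no": 2, "amount": 50.0},
]

def extract_aggregated(since: datetime):
    """Aggregated data source: delta extraction on the timestamp, reading
    only the aggregated data table."""
    return [row for row in OUTBOUND_AGG if row["changed_at"] > since]

def extract_detailed(recon_key: str, recon_date: str):
    """Detailed data source: reads application data (here billing items)
    for one reconciliation key / reconciliation date pair."""
    return [row for row in BILLING_ITEMS
            if row["recon_key"] == recon_key and row["recon_date"] == recon_date]
```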
The first phase of data reconciliation involves activities on the ERP systems. In the second phase, the data is passed on to the BW system, where further processing is done. To this end, the concept of a BW reconciliation hub is used to reconcile data in a standardized manner: a reconciliation hub consists of BW objects which are used as templates for building the BW objects necessary for reconciling data. These BW objects include InfoObjects, InfoSources, DataStore objects, InfoCubes, MultiProviders, and transformations. The reconciliation hub is thus a framework that can be used to reconcile different kinds of objects.
The reconciliation hub also requires customizing to be created to maintain the various details of the reconciliation tracks. The customizing activities are described below.
Once the BW objects and the customizing are in place, a process chain is set up to perform the actual reconciliation. Aggregated data is extracted from the sender and receiver systems into the BW system by extractors built on those systems, using a timestamp as the selection criterion: the data changed between the last extraction and the current extraction is loaded into BW. The aggregated data extracted from the sender and receiver systems is then harmonized to bring it into the required common format and processed to determine whether there are discrepancies. If discrepancies are detected in the aggregated data, the detailed data corresponding to each discrepancy is extracted using the reconciliation key / reconciliation date pair; these two parameters are the link for drilling down from the aggregated data to the detailed data, and within a given day all reconciliation keys are unique. The detailed data is likewise harmonized and processed in the BW system. The end result of this process is the set of records and items that have discrepancies, which can then be reported to the required users in the required format using reporting tools such as BEx Analyzer.
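Put together, the two-step logic performed by the process chain can be sketched as follows (illustrative Python only; harmonization is reduced to assuming both sides already deliver a common format, and the detail readers stand in for the detailed data sources):

```python
def reconcile(sender_agg, receiver_agg, read_sender_detail, read_receiver_detail):
    """Step 1: compare aggregated totals per reconciliation key/date pair.
    Step 2: extract detailed data only for the pairs whose totals differ."""
    sent = {(r["recon_key"], r["recon_date"]): r["amount"] for r in sender_agg}
    recv = {(r["recon_key"], r["recon_date"]): r["amount"] for r in receiver_agg}
    report = {}
    for key, day in set(sent) | set(recv):
        if sent.get((key, day), 0.0) != recv.get((key, day), 0.0):
            report[(key, day)] = {  # targeted drill-down instead of a full scan
                "sender_total": sent.get((key, day), 0.0),
                "receiver_total": recv.get((key, day), 0.0),
                "sender_items": read_sender_detail(key, day),
                "receiver_items": read_receiver_detail(key, day),
            }
    return report

# Invented aggregates: the receiver is missing 50.00 for key RK01.
sender_agg = [{"recon_key": "RK01", "recon_date": "2014-07-01", "amount": 150.0}]
receiver_agg = [{"recon_key": "RK01", "recon_date": "2014-07-01", "amount": 100.0}]
no_detail = lambda key, day: []  # stand-in for the detailed data sources
print(reconcile(sender_agg, receiver_agg, no_detail, no_detail))
```

Because the aggregates are compared first, the expensive detailed extraction and comparison happen only for the small subset of reconciliation keys that actually show a difference.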
Below is an activity diagram which shows the flow of data in the BW system.
Figure 1: Overall flow of the two-step method of reconciliation
For further information and cookbooks related to the reconciliation hub, please refer to SAP Note 1986826.