
With the growing demand for SAP HANA among customers, SAP Planning and Consolidation, version for SAP NetWeaver - Powered by SAP HANA has also gained momentum and is being implemented by various customers. In this blog we will share a few of our experiences with BPC on HANA implementations: the issues faced during implementation, the approach we followed during the migration, best practices for BADI migration, and how to load roughly 125 million records of transaction data from the backend BW system.
1: Technical and Functional considerations while migrating to SAP Planning and Consolidation, version for SAP NetWeaver - Powered by SAP HANA
If a customer is planning to move to BPC on HANA, they need to make sure they meet the system requirements below; please also refer to the Product Availability Matrix for more details.
Note: SAP HANA typically runs as an appliance on certified hardware. For step-by-step instructions on how to migrate from BPC 7.5 to SAP Planning and Consolidation, version for SAP NetWeaver - Powered by SAP HANA, please refer to the How-To Guide (HTG) published by Rob Marshall (https://scn.sap.com/docs/DOC-33863), and also refer to the Technical Considerations for Migrating to BPC 10 on HANA HTG published by Bruno Ranchy (https://scn.sap.com/docs/DOC-34745).
Migration to SAP Planning and Consolidation, version for SAP NetWeaver - Powered by SAP HANA:
2: Commonly encountered errors during migration to BPC on HANA and after Migration steps:
Make sure the BPC content is properly installed on the BPC server, including the activation of the EnvironmentShell. Validate that BPC is installed properly by making a copy of the EnvironmentShell and testing the basic functionalities. If it is not installed properly, you might get the error message below while restoring your custom environment.
1. “Error occurred when creating Dimension Attributes via Admin API”. The root cause of this issue is that the EnvironmentShell in the system was not activated.
To fix this, run the program UJS_ACTIVATE_CONTENT, selecting only the "Activate BI content" option, and activate the EnvironmentShell (a sketch for triggering this from code follows this list). Once the EnvironmentShell is activated, transaction UJBR will successfully restore the custom environments.
2. Sometimes during the EnvironmentShell restore you may also encounter the error below with dimension formulas. In that case, make sure SAP Note 1782923 is implemented, or upgrade to BPC on HANA SP09, which already includes the mentioned note.
3. When restoring the transaction data with transaction code UJBR, make sure to disable the work status in the BPC Web Admin; otherwise you will get the error message below.
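As a side note on the first error above: if you want to trigger the content activation from your own ABAP code rather than starting UJS_ACTIVATE_CONTENT in SE38, a minimal sketch is shown below. The selection-screen parameter names are release-dependent and are deliberately not hard-coded here, so the screen is displayed for you to tick only "Activate BI content" manually.

```abap
* Minimal sketch: start UJS_ACTIVATE_CONTENT and show its selection
* screen so that only "Activate BI content" can be ticked manually.
* No WITH clauses are hard-coded because the selection-screen
* parameter names are release-dependent.
SUBMIT ujs_activate_content VIA SELECTION-SCREEN AND RETURN.
```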
Steps to follow after the custom environment is restored on SAP Planning and Consolidation, version for SAP NetWeaver - Powered by SAP HANA:
3. Functions and tasks which are handled/not handled by the migration programs UJT_MIGRATE_75_TO_10 and BPC_HANA_MIGRATE_FROM_10:
BADI Migration best practices and API mapping examples:
1. Use READ TABLE ... WITH TABLE KEY instead of READ TABLE ... WITH KEY.
2. Read from a hashed or sorted table instead of a standard table.
3. When using ASSIGN COMPONENT, use ASSIGN COMPONENT <number> instead of ASSIGN COMPONENT <component name>.
4. Use ASSIGN COMPONENT once before the loop together with LOOP ... INTO, instead of using ASSIGN COMPONENT inside the loop with LOOP ... ASSIGNING (see the sketch below).
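A minimal ABAP sketch of these four practices is shown below; the types, component names, and member values are purely illustrative and not taken from any real BPC model.

```abap
* Illustrative sketch of the four BADI performance practices above.
TYPES: BEGIN OF ty_rec,
         account    TYPE string,
         signeddata TYPE p LENGTH 15 DECIMALS 2,
       END OF ty_rec.

DATA: lt_sorted TYPE SORTED TABLE OF ty_rec WITH UNIQUE KEY account,
      ls_rec    TYPE ty_rec,
      lv_total  TYPE p LENGTH 15 DECIMALS 2.

FIELD-SYMBOLS <lv_value> TYPE any.

* (1) + (2): WITH TABLE KEY on a sorted/hashed table is a binary search
* or hash lookup; WITH KEY on a standard table is a linear scan.
READ TABLE lt_sorted INTO ls_rec WITH TABLE KEY account = 'CE0001000'.

* (3): addressing the component by position avoids the name lookup
* that ASSIGN COMPONENT 'SIGNEDDATA' would perform on every call.
ASSIGN COMPONENT 2 OF STRUCTURE ls_rec TO <lv_value>.

* (4): the component is assigned once against the work area; LOOP INTO
* copies each row into that same work area, so <lv_value> stays valid
* and no ASSIGN COMPONENT is needed inside the loop.
LOOP AT lt_sorted INTO ls_rec.
  lv_total = lv_total + <lv_value>.
ENDLOOP.
```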
API Mapping from BPC 7.5 to BPC 10:
BPC 7.5: IF_UJ_MODEL->CREATE_MD_DATA_REF → BPC 10: IF_UJA_MEMBER_MANAGER->CREATE_DATA_REF
BPC 7.5: IF_UJA_APPSET_DATA->GET_APPSET_INFO → BPC 10: IF_UJA_APPSET_MANAGER->GET
BPC 7.5: IF_UJA_DIM_DATA->GET_INFO → BPC 10: IF_UJA_DIMENSION_MANAGER->GET
BPC 7.5: IF_UJA_DIM_DATA->GET_DEFAULT_MBR → BPC 10: IF_UJA_MD_READER->GET_DEF_MBR
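For example, code that read dimension details via IF_UJA_DIM_DATA->GET_INFO in 7.5 now goes through IF_UJA_DIMENSION_MANAGER->GET. Only a declaration-level sketch is given below, because how the manager instance is obtained and the exact GET parameter list vary by support package; check the interface in SE24 before coding against it.

```abap
* Declaration-level sketch only: BPC 10 admin reads go through the
* IF_UJA_*_MANAGER interfaces instead of the 7.5 *_DATA interfaces.
* Instantiation of lo_dim_mgr and GET's signature depend on your
* support package - verify IF_UJA_DIMENSION_MANAGER in SE24.
DATA lo_dim_mgr TYPE REF TO if_uja_dimension_manager.
```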
4: How to move very large amounts of transaction data during the migration process:
If the transaction data volume is very large and UJBR is running out of memory or timing out, there is another method of getting the transaction data into the model. In this blog we will discuss a scenario where the transaction data is over 120 million records in one application/model, and how that data is moved to the SAP Planning and Consolidation, version for SAP NetWeaver - Powered by SAP HANA environment using Open Hub and flat file data loading.
Now we will discuss a scenario where just the environment is restored with metadata and master data, and we upload the transaction data to the model directly from the backend BW system.
Basically, there are three ways you can upload the transaction data into your environment.
Steps to Create Open Hub Destination:
Steps to load the transaction data into the BPC on HANA model/cube using flat file data loading:
Once you have your transaction data ready in the flat files, move the flat files to the BPC on HANA server and log in to the BW system.
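For illustration, a flat file for the load might look like the sample below. The dimension columns and member IDs are hypothetical; your file needs one column per dimension of the target model, with the key figure SIGNEDDATA as the data column.

```
ACCOUNT,CATEGORY,ENTITY,RPTCURRENCY,TIME,SIGNEDDATA
CE0001000,ACTUAL,US01,LC,2012.MAR,125000.00
CE0002000,ACTUAL,US01,LC,2012.MAR,-4300.50
```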
4. Create a DataSource for the Flat File source system.
5. When creating the DataSource, make sure that under the “Extraction” tab the Adapter and Data Format are selected properly. The Adapter specifies where the flat file is located, and for Data Format select “CSV”, as our flat files have the CSV extension.
6. Under the Fields tab, it is easiest to copy and paste the technical names of the dimensions into the InfoObject field and press Enter (you can find the technical names by going to the cube, right-clicking, choosing Display, and expanding the dimension folders). This way, when the DTPs are created, the transformation rules are generated with a 1:1 mapping done for you automatically, so you don't have to map the source and target fields manually.
7. Once the DataSource is created, right-click on the DataSource and select “Create InfoPackage”. The InfoPackage transfers the data from the flat file to the Persistent Staging Area (PSA). I would suggest running the InfoPackage as a background job.
8. Once the data is loaded into the PSA, create a DTP. To create the DTP, go to your cube, right-click, and select “Create Data Transfer Process”. Make sure the source and target are correct. The DTP will also create transformation rules for you, with all the source fields properly mapped to the target fields.
9. Once the DTP is created, under the Extraction tab make sure to use the “Parallel Extraction” option so that the data is loaded into the cube through parallel jobs. This way the data is loaded quickly.
10. Under the Execution tab, select the processing mode as shown in the screenshot.
11. Once the DTP job has finished successfully, go to the cube, right-click, select Manage, and under the Requests tab you can see all the requests and how many records were loaded by each request.
12. To see the total number of records in the cube, go to the cube, right-click, select Manage, select the Contents tab, click on Fact table, and select Number of Entries; it will show the number of records in the cube/model. Here we have loaded around 124 million records.
(Special thanks to my colleague Cloudy for sharing the BADI best practices info.)