Enterprise Resource Planning Blogs by Members
Hello Learners,

Hope you are all doing great!

Welcome back to my data migration blog series.

Purpose of this Blog :

In the previous blog we learnt about the SAP Migration Cockpit in S4 HANA, which is the latest DM tool for S4 implementation projects. You can recap it using the link below -


Today we will go deeper into one of its most commonly used data migration approaches - the File Based Approach.

What’s in it for you?

This blog explains the File Based data migration approach in detail. The approach is very useful and quick to set up for migration activities.

Let’s dive deep into this approach!

Note - This blog is based on the S4 2020 version using the LTMC tcode. The approach is slightly different in recent versions of S4. I will cover the important changes at the end of this blog and the details in the next blog 🙂

Overview -

The File Based approach can also be termed the Template based or Inbuilt content approach.

The reason we call it a template or inbuilt content based approach is that SAP provides standard templates for different sets of data objects; you simply add your data to them and then follow the data migration steps inside the cockpit to load the data into the system.

Most of the data added to the templates should already be transformed as per the target system requirements, though you can still do some value transformations/conversions in the Convert step of the cockpit.

Pros :

  • Easy to set up; less development/customization required

  • User friendly; can be performed by functional as well as business users

  • Covers many standard fields and also allows adding your own custom fields


Cons :

  • Size constraints - only a 100MB file can be uploaded to the cockpit with this approach

  • Additional splitting and management of files is required when data volumes are high

  • Manual pre-load data validation effort

  • Less automated, hence a higher risk of errors during the process

  • Template formatting and structure must be preserved to avoid upload issues

There could be more pros and cons depending on project requirements.

Now let's learn some important options and ways of handling this approach -


Assuming you are already inside the LTMC tcode, the first step is to set up a project. Perform the steps below for the same.

  1. Click on Create Project and give it a suitable name (eg - DM_File_Approach)

  2. Select the DM approach as - File Based Approach

  3. Make a note of the mass transfer ID generated, or generate one of your own. This is useful when migrating projects from one environment to another

Once you have completed all the above steps, you are all set!

(Screenshot: Project Setup)

How to use File Based Approach-

  1. Once you are inside your project, select the object for which you want to migrate data (eg - Product Master)

  2. When you select the object, you will see a screen with different buttons/options to carry out the migration

    • Download Template - using this option, you can download the standard template to your system and add data to it. Templates are XML files with an Excel-file look.

    • Each template has different sheets as per the object.

      • The first sheet is always the Instruction sheet, which has all the details on how to use the template and copy or add data to it. It is good practice to go through it before adding data to the template

      • The second sheet is the Fields sheet, which lists all the fields available in the template with details like name, data type, table name, sheet name etc.

      • Apart from these 2 common sheets, you will have data-object-specific sheets, which are further classified as mandatory and optional sheets

      • Mandatory sheets are highlighted in yellow and are required to be filled in. Optional sheets can be filled in if required or left blank

    • Upload Template - once you have added the required data as per the template requirements, you are ready to upload. Make sure you have filled in the mandatory fields on whichever sheets you have filled with data

    • The cockpit does validations while uploading the file, but it is good practice to perform some common validations prior to upload, to avoid going back and forth between the tool and the template

      • Mandatory fields check - make sure all the mandatory fields are populated on the sheets being used for the data object

      • Duplicate check - make sure the key column(s) of every sheet have unique values, else you will get an error during the validation step

      • Data type check - enter correct values as per the data types and follow length restrictions etc.

  3. Once you have uploaded the template filled with data, click on the Start Transfer button/option and follow the four-step data migration process.

  4. The allowed file size for a single upload is 100MB, or 160MB for a zipped file

  5. You can upload multiple files of 100MB or less and then process them together when working with high volumes. Splitting of files is covered in a later section of this blog

  6. The cockpit always creates a background job for each data migration step, which can be monitored in SM37 with the filters Job Name = /LT* and User = * (or you can use SAP_SYSTEM)

  7. Max Transfer Jobs - this option is useful when you are using multiple files and processing them together. For n files you can set this parameter up to n-1. It is set to 1 by default. It creates that many jobs, which can be monitored in SM37. It can also cause problems if you have data dependencies

  8. Each step gives you an option to download logs, limited in number

  9. Once you have completed all the steps, a delta file is automatically generated with the failed records. The errors can be fixed and the file reuploaded.

Above is the procedure to perform data migration for any object using file based approach.
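The pre-upload checks described above (mandatory fields, duplicates, data types/lengths) can be scripted before touching the cockpit. Below is a minimal Python sketch, assuming the template data has been exported to CSV; the column names, mandatory fields and length limits are illustrative only, not taken from any actual SAP template:

```python
import csv
import io

# Hypothetical sample rows, as if exported from one sheet of a Product
# Master template to CSV. Column names and values are illustrative only.
SAMPLE = """PRODUCT,PRODUCT_TYPE,BASE_UOM,DESCRIPTION
MAT-0001,FERT,EA,Finished product A
MAT-0002,FERT,EA,
MAT-0001,HALB,EA,Duplicate key on purpose
"""

MANDATORY = ["PRODUCT", "PRODUCT_TYPE", "BASE_UOM"]  # assumed mandatory fields
KEY = "PRODUCT"                                      # assumed key column
MAX_LEN = {"PRODUCT": 40}                            # assumed length limit

def validate(rows):
    """Return a list of (row_number, message) issues found in the data."""
    issues, seen = [], set()
    for i, row in enumerate(rows, start=2):  # row 1 is the header
        # Mandatory fields check
        for field in MANDATORY:
            if not row.get(field, "").strip():
                issues.append((i, f"missing mandatory field {field}"))
        # Duplicate key check
        key = row.get(KEY, "")
        if key in seen:
            issues.append((i, f"duplicate key {key}"))
        seen.add(key)
        # Length restriction check
        for field, limit in MAX_LEN.items():
            if len(row.get(field, "")) > limit:
                issues.append((i, f"{field} exceeds {limit} characters"))
    return issues

issues = validate(csv.DictReader(io.StringIO(SAMPLE)))
for rownum, msg in issues:
    print(f"row {rownum}: {msg}")  # flags the duplicate MAT-0001 in row 4
```

Catching these issues in a script saves the repeated upload/validate round trips mentioned above.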

(Screenshots: Selecting a Data Object; File Based Approach Options)

Splitting of Files-

  • Splitting of load files is required if your file is more than 100MB in size

  • You can split the XML/load files using a splitter tool created by SAP itself

  • The tool helps you split files so that each is less than 100MB in size

  • The number of split files required can be decided based on the total size of the main file (eg - if the file is 564MB, you would need to split it into 6 files; basically divide by 100 and round up)

  • The link to download the splitter tool is below; it also has details on how to run the tool.


Link to tool - https://github.com/SAP-samples/s4hana-mc-xml-file-splitter
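The sizing rule above (divide the total size by 100 and round up) is simple enough to sketch; the helper name and its use of a 100MB constant are my own, not part of the splitter tool:

```python
import math

MAX_MB = 100  # the cockpit's per-file upload limit in this approach

def split_count(file_size_mb: float) -> int:
    """Number of parts needed so that each part stays within 100MB."""
    return math.ceil(file_size_mb / MAX_MB)

print(split_count(564))  # the blog's example: a 564MB file needs 6 parts
```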


Handling Internal number range mappings with File based approach-

Many SAP implementation projects use internal number ranges for the product master, business partners etc. In those cases it is important to map the old source number to the new number generated by the system. Below are some important points to note while working with internal numbering.

  1. You can maintain a cross-reference file with old-to-new number mappings and then use a lookup wherever required to replace the old number with the new number. As this is a manual method, it is not recommended

  2. The Migration Cockpit handles this internally by itself. The cockpit stores the old-to-new number mapping internally and auto-populates the mapping during the Convert step of the migration process.

  3. Make sure you use the same project for dependent objects where these mappings are used, because the cockpit only stores old-to-new mappings for records created through that project
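The manual cross-reference lookup from point 1 above can be sketched in a few lines of Python; the file layout and the numbers are purely illustrative:

```python
import csv
import io

# Hypothetical cross-reference file: old source number -> new internal number.
XREF = """OLD_NUMBER,NEW_NUMBER
LEGACY-1001,200000001
LEGACY-1002,200000002
"""

# Build the lookup table once, then remap wherever the old number appears.
mapping = {row["OLD_NUMBER"]: row["NEW_NUMBER"]
           for row in csv.DictReader(io.StringIO(XREF))}

def remap(old_number: str) -> str:
    """Replace a legacy number with its newly generated internal number."""
    try:
        return mapping[old_number]
    except KeyError:
        raise KeyError(f"no new number recorded for {old_number}") from None

print(remap("LEGACY-1001"))  # -> 200000001
```

This also shows why the manual route is error-prone: every dependent load file has to pass through such a remapping step, whereas the cockpit (point 2) does it for you in the Convert step.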


Important Points to Pen down-

  • Do the common validations prior to upload, as otherwise it becomes a repetitive process and consumes more time for the initial steps (upload and validate) to run successfully

  • Limit the use of this approach based on data volumes. You can use it for high volumes, but you would need to manage multiple files and more manual effort, which can lead to errors

  • Follow naming conventions for all your templates/load files, with proper versioning

  • Avoid using multiple projects for the same objects to handle multiple files, as it will not preserve the mappings where internal number ranges are used (eg - product number)

  • When dealing with multiple files for an object, once the upload of all the files is complete, activate all the files first before you click on the Start Transfer option to process them together.

  • Add a description and version when uploading files to the cockpit; this helps in identifying the corresponding delta files when records fail.

  • You cannot upload a file with the same name if it has already been uploaded and exists in the object.

  • You can only delete uploaded files while their status is NOT FINISHED.
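The naming-convention point above can be supported with a small helper; the <object>_P<part>_V<version>_<date> pattern is just one example convention, not an SAP requirement:

```python
from datetime import date

def load_file_name(obj: str, part: int, version: int, day: date) -> str:
    """Build a versioned load-file name, eg PRODUCT_MASTER_P01_V02_20240131.xml.

    The pattern encodes the object, the split part, the version and the
    date, so delta files can be traced back to the file they came from.
    """
    return f"{obj.upper()}_P{part:02d}_V{version:02d}_{day:%Y%m%d}.xml"

print(load_file_name("product_master", 1, 2, date(2024, 1, 31)))
# -> PRODUCT_MASTER_P01_V02_20240131.xml
```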


File approach in Fiori-

As mentioned at the start of the blog, I covered the file based approach using the LTMC tcode, but in recent S4 versions this has been modified slightly and combined with the staging approach.

Below are some important changes/details in the Fiori app Migrate Your Data -

  • The file and staging approaches are merged in the Fiori app Migrate Your Data and are together called the Staging approach

  • During project setup, while selecting the DB connection, you can choose the Local or Remote DB option. The Local DB option is your file based approach.

  • Once you are inside the project, you can follow the same procedure listed above, with small changes to the UI as it is Fiori based

  • The data from the templates gets stored in staging tables

  • You can upload any number of files of up to 100MB each, and the data resides in staging tables which can be accessed directly in the S4 HANA system

  • If you try to upload the same data, the system gives you an option to skip or replace the existing data

  • You cannot upload a file with the same name if it has already been uploaded and exists in the object.


Conclusion -

  1. The File Based approach is very useful when you already have the data extracted and transformed as per the requirements of the target S4 system.

  2. You can handle large volumes with it, at the cost of manual effort and handling multiple files

  3. Overall it is a very useful approach and widely used in S4 implementation projects because of its ease of use.

I will cover more advanced topics in upcoming blogs so stay tuned!

Please do share and subscribe to the blog, and share your valuable feedback so that I can improve my content and help readers get a better learning experience around SAP data migration and data quality.

Happy Learning and reading!! 🙂
