Enterprise Resource Planning Blogs by Members
This blog is the continuation of Stumbling blocks in purchasing info record migration by LSMW IDOC method.

The project is outlined in the other blog, so I can start directly with the difficulties I had with the migration of the price conditions.

When you migrate info records by the IDoc method, the price conditions are not included; they have to be loaded in an extra step.

So again the question about the method: batch input vs. IDoc.

In my last blog I explained that the IDoc method had the advantage that I could work with preassigned numbers and create a mapping directly from the conversion step in LSMW. This advantage does not apply to the conditions of an info record, because there is no extra mapping that has to be given to the business; the mapping was already created with the info record migration.

Still I decided to go with the IDoc method, as it allows me to reload the data again and again if I do something wrong. With the batch input method I would need two LSMW objects in the worst case: one for the initial creation and one for a change if something goes wrong. Trust me, even I make mistakes :wink: , possibly already with the decision to use the IDoc method here. Feel free to leave a comment.

Now I knew how I was going to load the data, but I still had to solve the question of how to get the data from the legacy system. If you search SCN with keywords like "how to send COND_A", you get 70 hits but rarely an answer. I actually found just one correct answer in relation to info record conditions and will spread it here again: it is possible from transaction MEK3 via condition info. However, at the time of migration I did not know about that, and I still think it would not have been suitable for me, as MEK3 expects the condition type before you can continue to the condition info, and I had 28 different condition types in the source system. Furthermore, I cannot even get into the condition info if there is no access defined for the condition type.

In the meantime I got the idea to send the conditions together with the info record using serialization; I will try this in my next info record migration.

But in this migration I decided to download the conditions using SE16 and a simple SQVI QuickView.

The goal was to migrate only valid conditions, no historical conditions.

I started with SE16 and downloaded all valid conditions from tables A017 and A018, excluding the deleted conditions and those with a validity end date earlier than today. (We did not load conditions at material group level, hence no download from table A025.)
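The filter applied to the download can be sketched in Python for illustration; the field names DATBI (valid-to date) and LOEVM (deletion indicator) are assumptions for the sketch, not necessarily the exact column captions of the download file.

```python
from datetime import date

def keep_valid(rows, today):
    """Keep only condition records that are still valid today.

    Rows are dicts from the A017/A018 download; DATBI (valid-to)
    and LOEVM (deletion indicator) are illustrative field names.
    """
    return [r for r in rows
            if not r.get("LOEVM")        # skip deleted conditions
            and r["DATBI"] >= today]     # skip expired conditions

rows = [
    {"KNUMH": "0000000001", "DATBI": date(9999, 12, 31), "LOEVM": ""},
    {"KNUMH": "0000000002", "DATBI": date(2015, 1, 31),  "LOEVM": ""},
    {"KNUMH": "0000000003", "DATBI": date(9999, 12, 31), "LOEVM": "X"},
]
valid = keep_valid(rows, date(2016, 6, 1))
```

Only the first record survives: the second has expired and the third carries a deletion indicator.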

In that download file, the condition record number is listed at the end.

Next step was the development of the QuickView.

It is just a join of the tables KONH and KONP, since we decided to enter the few scales (table KONM) manually.
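For readers who want to see the join logic spelled out, here is a minimal Python sketch of what the QuickView does: an inner join of header (KONH) and item (KONP) records on the condition record number KNUMH. The field selection is illustrative, not the exact QuickView field list.

```python
def join_konh_konp(konh, konp):
    """Inner join of condition headers and items on KNUMH,
    mimicking the SQVI QuickView (illustrative field set)."""
    items_by_knumh = {}
    for item in konp:
        items_by_knumh.setdefault(item["KNUMH"], []).append(item)
    joined = []
    for header in konh:
        for item in items_by_knumh.get(header["KNUMH"], []):
            joined.append({**header, **item})   # header fields + item fields
    return joined

konh = [{"KNUMH": "01", "DATAB": "20160101"}]
konp = [{"KNUMH": "01", "KSCHL": "PB00", "KBETR": 12345},
        {"KNUMH": "01", "KSCHL": "RA01", "KBETR": 500}]
result = join_konh_konp(konh, konp)
```

One header with two items yields two joined rows, just as the QuickView output repeats the header fields per item line.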

The chosen fields for our migration were:

(Screenshot in the original post: the chosen list fields and selection fields of the QuickView.)
As the number of records exceeded the maximum that can be added in the multiple selection, I used the table number as the only selection criterion and excluded the records with a deletion indicator.

116,000 condition records resulted in a file of 44 MB, too big to output the result from the QuickView directly to Excel. So it had to be downloaded as a text file and then imported into Excel.

Here you have to be careful: if you have about 60,000 lines or more, you can find two empty lines at positions 59994 and 59995.

If you continue with sorting and filtering, you do not get all records; hence you need to remove those empty lines first.
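A quick way to drop those empty lines before further processing, sketched in Python:

```python
def drop_empty_lines(lines):
    """Remove completely empty lines from the downloaded text file
    so that later sorting and filtering see all records."""
    return [line for line in lines if line.strip()]

sample = ["record 59993", "", "", "record 59996"]
cleaned = drop_empty_lines(sample)
```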

As a next step the fields have to be formatted: all value fields as numbers with two decimals and without a thousands separator.

After this I entered a VLOOKUP formula to identify the condition records that match a condition record number from the A017 table download. Only those were kept for the next processing step.

If you wonder why this is needed, have a closer look at the validity end date from the KONH table: it is almost always 31.12.9999. If you enter a new price in the info record, you usually care only about the valid-from date, and the condition is valid until a new condition is entered. But when a new condition is entered, it does not change the valid-to date of the old condition; SAP selects the valid condition based on the valid-from date only.

And here I was lucky that I only had to load the currently valid conditions. I tried it a few times for all conditions, but this caused a bigger chaos, because I was not able to process the IDocs in the needed sequence: the oldest condition first, the newest condition at the end. Even though I had the correct sequence in the CONV file in LSMW, it became a mess when the IDocs were generated and processed.

As said, I kept only the latest conditions, identified with VLOOKUP, in the file. Be aware that such a VLOOKUP over that many records can take hours depending on your PC and the number of processors; with 24 processors this was a job of a little more than 5 minutes.

Still I had to do a bit more cosmetics to get the file into the shape needed for loading.

In this picture from LSMW you can see the IDoc structure: a header, the validity period, and the conditions.

The source file has to be prepared to deliver data for each of these segments in the structure.

The starting basis is my download file, which now contains only the valid records; it will be used for the detail, the KONP segment of the structure.

For the header segment I just copy the columns A to E into a new Excel tab.

For the validity period segment I copy the columns A to G into a new Excel tab.

From both new tabs I remove the duplicate records.
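The column copies and duplicate removal can be sketched as follows; the column counts (A to E for the header, A to G for the validity period) are taken from the steps above, and the sample rows are purely illustrative.

```python
def project_unique(rows, n_columns):
    """Take the first n_columns of every row and drop duplicates
    while keeping the original order, like Excel's Remove Duplicates
    after copying a column range into a new tab."""
    seen, result = set(), []
    for row in rows:
        key = tuple(row[:n_columns])
        if key not in seen:
            seen.add(key)
            result.append(list(key))
    return result

detail = [  # illustrative rows: key columns A..G first, then item data
    ["V1", "M1", "0001", "P1", "0", "20160101", "99991231", "PB00", 100],
    ["V1", "M1", "0001", "P1", "0", "20160101", "99991231", "RA01", 5],
]
header_tab   = project_unique(detail, 5)   # columns A..E
validity_tab = project_unique(detail, 7)   # columns A..G
```

Both item rows collapse into a single header row and a single validity row, which is exactly what the READ DATA join later expects.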

Now I save each tab as CSV file and assign the CSV files in LSMW to my source structures.

When you execute the step READ DATA, SAP joins all three files based on the common fields at the beginning of each structure.

The rest is just the standard process of executing LSMW.

Important in the field mapping is the VAKEY field. SAP uses a variable key (VAKEY) to access the conditions. This is a concatenated field with the vendor number at the beginning, followed by the material number, the purchasing organisation, the plant (only in case of A017 conditions) and the info record category. As usual in a migration, you have to replace the old values with new ones to enable SAP to find those conditions again.

This means you have to create a data declaration to split this VAKEY field into its parts, like this:

data: begin of ZZVARKEYA,
        ZLIFNR like LFM1-LIFNR,
        ZMATNR like MARC-MATNR,
        ZEKORG like LFM1-EKORG,
        ZWERKS like MARC-WERKS,
      end of ZZVARKEYA.

You may want to do this for the source field and for the target field.
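For illustration, the split-and-replace can be sketched in Python. The fixed field widths (vendor 10, material 18, purchasing organisation 4, plant 4) correspond to the data dictionary lengths behind the ABAP declaration, and the small mapping dict stands in for the Z-table lookup; both are assumptions of this sketch.

```python
def remap_vakey(vakey, mapping):
    """Split a fixed-width VAKEY into its parts, replace old values
    with new ones from the mapping table, and reassemble the key.
    Widths: LIFNR 10, MATNR 18, EKORG 4, WERKS 4, plus the trailing
    info record category, which is carried over unchanged."""
    widths = [("LIFNR", 10), ("MATNR", 18), ("EKORG", 4), ("WERKS", 4)]
    parts, pos = [], 0
    for name, width in widths:
        value = vakey[pos:pos + width]
        # (identifier, old value) -> new value; unmapped values stay as-is
        parts.append(mapping.get((name, value.strip()), value.strip()).ljust(width))
        pos += width
    return "".join(parts) + vakey[pos:]

# stand-in for the Z-table: (identifier, old value) -> new value
mapping = {("LIFNR", "OLDVEND"): "1000042", ("EKORG", "0001"): "1000"}
old_key = "OLDVEND".ljust(10) + "MAT-1".ljust(18) + "0001" + "0001" + "0"
new_key = remap_vakey(old_key, mapping)
```

The vendor and purchasing organisation are translated, the unmapped material and plant stay untouched, and the key keeps its original length.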

And then you need a little ABAP coding to replace the individual values. I have this old-to-new mapping in a Z-table with three fields: the identifier (e.g. EKORG for purchasing organisation), the old value and the new value. I use a SELECT statement to retrieve the new value from this Z-table.

The value field KBETR can cause some headaches as well, especially if you have conditions in currencies without decimals, like JPY, or where the value is a percentage. In both cases SAP stores the values in the table in an unexpected way.

The value field is defined in the data dictionary with 2 decimals. A currency like JPY does not get decimals when it is stored; the value is just moved as is into this field. A price of 123 JPY is then stored as 1.23 JPY.

A percentage is allowed to have 3 decimals, but stored in a field with 2 decimals it looks odd as well.

And last but not least you may have negative condition values in the source file because of discount conditions.

All these circumstances have to be taken care of in the conversion routine, which in my case looks like this:

g_amount = ZKONP-KBETR / 100.

if ZKONP-KONWA = '%'.
  g_amount = ZKONP-KBETR / 1000.
elseif ZKONP-KONWA = 'JPY'. " zero-decimal currency, value taken as is
  g_amount = ZKONP-KBETR.
endif.

if ZKONP-KSCHL1 = 'RA00'
or ZKONP-KSCHL1 = 'RA01'
or ZKONP-KSCHL1 = 'RB01'
or ZKONP-KSCHL1 = 'RC00'.
  g_amount = g_amount * -1.
endif.

write g_amount to E1KONP-KBETR decimals 2.



Because of the CSV file format I have to divide the value by 100 to get the normal condition value.

In case of a percentage the source value has to be divided by 1000.

In case of the zero-decimal currencies I can take the value as is from the source.

If the condition value is negative, I need to multiply it by -1 to get a positive value, as I can only carry positive values in this IDoc.
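The same conversion rules can be written as a small Python function for illustration; the set of zero-decimal currencies and the use of abs() to force discount amounts positive are simplifications of this sketch, not part of the original routine.

```python
def convert_kbetr(kbetr, konwa, kschl):
    """Convert a raw KBETR value from the table download into the
    amount to load: divide by 100 for normal currencies, by 1000
    for percentages, take as-is for zero-decimal currencies, and
    make discount condition types positive for the IDoc."""
    zero_decimal = {"JPY", "HUF", "KRW"}   # illustrative, not exhaustive
    if konwa == "%":
        amount = kbetr / 1000
    elif konwa in zero_decimal:
        amount = kbetr
    else:
        amount = kbetr / 100
    if kschl in {"RA00", "RA01", "RB01", "RC00"}:
        amount = abs(amount)               # IDoc carries only positive values
    return amount
```

So a stored 12345 in EUR becomes 123.45, a stored 2500 percentage becomes 2.5, a JPY value is taken as is, and a negative discount of -500 becomes 5.00.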

And the last lines take care of dots and commas as decimal separators. In Germany we use the comma as the decimal separator, while the Americans use the dot. And as I often get Excel files from all over the world for migrations, I always have to take care of the decimal separators.

And finally I want to share an issue that came up during our tests because I had not yet completed the condition migration.

The info records themselves got loaded as explained in the previous blog, including the information about the price and the effective price, even when they were zero.

You can see that both fields are protected, as they are when conditions exist. However, the conditions were not loaded at that time.

When the user clicked the condition button... nothing happened. An inconsistent situation: the info record itself carries the information that conditions exist while they are not actually there, hence the condition button does not work.

The only workaround is to change a value in another field and save the info record. Then go in again; now you can click the condition button as usual. (Don't forget to correct the previously changed field afterwards.)
