<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Processing High Data Volume in LSMW - Risks? in Application Development and Automation Discussions</title>
    <link>https://community.sap.com/t5/application-development-and-automation-discussions/processing-high-data-volume-in-lsmw-risks/m-p/8548278#M1657984</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have a requirement to process a high volume of data using LSMW.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Objects in scope:  1. HR - Associate Qualification (350,000 records)&lt;/P&gt;&lt;P&gt;                              2. Associate Absence (250,000 records)&lt;/P&gt;&lt;P&gt;                              3. Material Master (70,000 records), plus some other data objects as well.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;One common recommendation for processing high data volumes is to split the files into smaller chunks (30K-40K records each) and process them separately, to keep SAP memory usage, data management, etc. under control.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I want to list the risks we could face if we process each of these high-volume data objects as a single input file in LSMW.&lt;/P&gt;&lt;P&gt;I appreciate your input.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 22 Feb 2012 14:16:33 GMT</pubDate>
    <dc:creator>Former Member</dc:creator>
    <dc:date>2012-02-22T14:16:33Z</dc:date>
    <item>
      <title>Processing High Data Volume in LSMW - Risks?</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/processing-high-data-volume-in-lsmw-risks/m-p/8548278#M1657984</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have a requirement to process a high volume of data using LSMW.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Objects in scope:  1. HR - Associate Qualification (350,000 records)&lt;/P&gt;&lt;P&gt;                              2. Associate Absence (250,000 records)&lt;/P&gt;&lt;P&gt;                              3. Material Master (70,000 records), plus some other data objects as well.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;One common recommendation for processing high data volumes is to split the files into smaller chunks (30K-40K records each) and process them separately, to keep SAP memory usage, data management, etc. under control.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I want to list the risks we could face if we process each of these high-volume data objects as a single input file in LSMW.&lt;/P&gt;&lt;P&gt;I appreciate your input.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Manish&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 22 Feb 2012 14:16:33 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/processing-high-data-volume-in-lsmw-risks/m-p/8548278#M1657984</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2012-02-22T14:16:33Z</dc:date>
    </item>
    <item>
      <title>Re: Processing High Data Volume in LSMW - Risks?</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/processing-high-data-volume-in-lsmw-risks/m-p/8548279#M1657985</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;There are pros and cons, as always, and they depend on other factors too,&lt;/P&gt;&lt;P&gt;so you may encounter a problem that I never did.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I recently had 270,000 IDocs for classification, and it led to a lock table overflow, because the lock table could only hold 55,000 locks on that machine. I had never had this issue before on any other machine, even with more data.&lt;/P&gt;&lt;P&gt;The more data you have, the more memory the conversion process may need, and the longer it takes; it can take so long that the maximum runtime is exceeded and the activity is cancelled.&lt;/P&gt;&lt;P&gt;If you have an error (which should not happen after several tests and approved test loads), you may post 350,000 wrong records, whereas with smaller packages you would in theory catch that error when validating the first package, meaning you posted only 20,000 wrong records instead of 350,000.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;On the other hand, creating these smaller packages can greatly increase the preparation time,&lt;/P&gt;&lt;P&gt;and you could introduce an error when splitting the files that you would never have had if you had kept the files as they were.&lt;/P&gt;&lt;P&gt;Continuing with my classification experience: due to internal compliance rules, dynamic file names were not allowed on the application server. But because of the lock table problem I had to split my file into packages of 30,000 IDocs, so the file was created with the first 30,000 IDocs, then the second package was created... and overwrote my first... and the third package overwrote it again. In the end I had one file containing only the last 30,000.&lt;/P&gt;&lt;P&gt;So I had to change and test again. From my point of view, it created much more work for me and, because of all these failures and reruns, much more traffic than a single big run.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 22 Feb 2012 20:35:04 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/processing-high-data-volume-in-lsmw-risks/m-p/8548279#M1657985</guid>
      <dc:creator>JL23</dc:creator>
      <dc:date>2012-02-22T20:35:04Z</dc:date>
    </item>
  </channel>
</rss>

