<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Process files with huge amount of data using parallel processing in Application Development and Automation Discussions</title>
    <link>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943376#M943191</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;My requirement is to process a file with a huge amount of data and update a database table (standard or Z-table). There are more than 5 million records in each file.&lt;/P&gt;&lt;P&gt;I think parallel processing is one of the best approaches, but my question is how to handle 5 million records at the internal table level. I want to split the data at the internal table/file level into batches of 50-75k records and process them in parallel.&lt;/P&gt;&lt;P&gt;Please suggest the best approach.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 02 Jun 2008 17:07:49 GMT</pubDate>
    <dc:creator>Former Member</dc:creator>
    <dc:date>2008-06-02T17:07:49Z</dc:date>
    <item>
      <title>Process files with huge amount of data using parallel processing</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943376#M943191</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;My requirement is to process a file with a huge amount of data and update a database table (standard or Z-table). There are more than 5 million records in each file.&lt;/P&gt;&lt;P&gt;I think parallel processing is one of the best approaches, but my question is how to handle 5 million records at the internal table level. I want to split the data at the internal table/file level into batches of 50-75k records and process them in parallel.&lt;/P&gt;&lt;P&gt;Please suggest the best approach.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 02 Jun 2008 17:07:49 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943376#M943191</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-06-02T17:07:49Z</dc:date>
    </item>
    <item>
      <title>Re: Process files with huge amount of data using parallel processing</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943377#M943192</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;It doesn't matter if you have 100 processes in parallel: only one of them can hold a lock on the same DB table at a time, so the other processes will always be waiting.&lt;/P&gt;&lt;P&gt;I would create a main program which spawns 10 or 20 (or 50 or 100) sub-programs to process the data.&lt;/P&gt;&lt;P&gt;For instance, in your main program, upload all the records into an ITAB, divide up the records, and send them off to be processed.&lt;/P&gt;&lt;P&gt;We have a time-splitter program that will generate up to 50 jobs, but only 10 are processing at any one time; we've found that any more jobs cause a bottleneck in the system.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 02 Jun 2008 17:28:35 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943377#M943192</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-06-02T17:28:35Z</dc:date>
    </item>
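The batching idea in the reply above can be sketched in ABAP. This is a hedged sketch only: the table name LT_DATA, the batch size, and the job-submission step in the comments are illustrative assumptions, not code from the thread.

```abap
" Sketch only: LT_DATA, ZLOAD_BATCH and the batch size are assumptions.
DATA: lv_total TYPE i,
      lv_from  TYPE i VALUE 1,
      lv_to    TYPE i.
CONSTANTS lc_size TYPE i VALUE 50000.

DESCRIBE TABLE lt_data LINES lv_total.

WHILE lv_from LE lv_total.
  lv_to = lv_from + lc_size - 1.
  IF lv_to GT lv_total.
    lv_to = lv_total.          " last batch may be smaller
  ENDIF.
  " Hand the range lv_from..lv_to to one worker, for example:
  " SUBMIT zload_batch WITH p_start = lv_from WITH p_end = lv_to
  "   VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.
  lv_from = lv_to + 1.
ENDWHILE.
```

Capping lv_to at the table size keeps the final, partial batch in range, which the in-thread pseudocode glosses over.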
    <item>
      <title>Re: Process files with huge amount of data using parallel processing</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943378#M943193</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Robert,&lt;/P&gt;&lt;P&gt;Thank you very much for the quick response.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;For instance, in your main program, upload all the records into an ITAB, divide up the records, and send them off to be processed.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;How do I divide up the records in the internal table?&lt;/P&gt;&lt;P&gt;Do you have any sample code related to this requirement?&lt;/P&gt;&lt;P&gt;It would be helpful if you could send me the related documents, including the time-splitter program.&lt;/P&gt;&lt;P&gt;You can find my ID in my business card, as I can't provide it here.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Srini&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 02 Jun 2008 18:08:36 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943378#M943193</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-06-02T18:08:36Z</dc:date>
    </item>
    <item>
      <title>Re: Process files with huge amount of data using parallel processing</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943379#M943194</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;PRE&gt;&lt;CODE&gt;How to divide up the records in the internal table?&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt; &lt;/P&gt;&lt;P&gt;You would find out how many lines are in your itab by using the DESCRIBE statement.&lt;/P&gt;&lt;P&gt;Let's say there are 150,000 records and you want to split them up into 10,000-record batches.&lt;/P&gt;&lt;P&gt;You can:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;
rec1 = 1.
recN = 10000.
do 15 times.
  " copy the current batch into a second itab
  append lines of itab from rec1 to recN to itab2.
  " create and submit your spawn job for this batch,
  " then clear itab2 for the next one
  rec1 = recN + 1.
  recN = recN + 10000.
enddo.&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am not at liberty to give up our code, but I am happy to share ideas.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 02 Jun 2008 19:45:36 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943379#M943194</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-06-02T19:45:36Z</dc:date>
    </item>
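Besides background jobs, the same fan-out can be done with asynchronous RFC. A hedged sketch, assuming a hypothetical RFC-enabled function module ZPROCESS_BATCH with a table parameter and lt_batches holding the pre-split chunks; names are illustrative only.

```abap
" Hedged sketch: 'ZPROCESS_BATCH' is a hypothetical RFC-enabled
" function module; lt_batches / lt_batch are illustrative names.
DATA lv_task(10) TYPE c.

LOOP AT lt_batches INTO lt_batch.
  lv_task = sy-tabix.           " unique task name per batch
  CONDENSE lv_task.
  CALL FUNCTION 'ZPROCESS_BATCH'
    STARTING NEW TASK lv_task
    DESTINATION IN GROUP DEFAULT
    TABLES
      it_records            = lt_batch
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2
      resource_failure      = 3.
  IF sy-subrc NE 0.
    WAIT UP TO 1 SECONDS.       " no free work process; simple backoff
  ENDIF.
ENDLOOP.
```

DESTINATION IN GROUP lets the basis team cap how many dialog work processes the fan-out may consume, which addresses the "only 10 at a time" bottleneck mentioned above.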
    <item>
      <title>Re: Process files with huge amount of data using parallel processing</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943380#M943195</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Keep your large files on the application server.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Provide fields on the load program's selection screen to specify the line number range. Let us say the fields are p_start and p_end.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In the program:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;
num_lines = p_end - p_start + 1.
open dataset.
do p_start - 1 times.
  read dataset. "just skip these rows; do not append to the itab
enddo.
do num_lines times.
  read dataset.
  append to itab.
enddo.
insert db table from itab.
&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Then run your load program in parallel by specifying the following values for p_start and p_end:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;1 - 50000&lt;/P&gt;&lt;P&gt;50001 - 100000&lt;/P&gt;&lt;P&gt;100001 - 150000&lt;/P&gt;&lt;P&gt;and so on.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 02 Jun 2008 20:14:14 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943380#M943195</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-06-02T20:14:14Z</dc:date>
    </item>
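The skip-and-read pseudocode above can be fleshed out roughly as follows. A sketch under stated assumptions: P_FILE, P_START and P_END are selection-screen fields, and the text-mode read into a string table is illustrative.

```abap
" Sketch only: P_FILE / P_START / P_END are selection-screen fields;
" LT_LINES and the text-mode read are assumptions for illustration.
DATA: lv_line  TYPE string,
      lt_lines TYPE TABLE OF string,
      lv_count TYPE i.

lv_count = p_end - p_start + 1.

OPEN DATASET p_file FOR INPUT IN TEXT MODE ENCODING DEFAULT.
DO p_start - 1 TIMES.
  READ DATASET p_file INTO lv_line.   " skip rows before the range
ENDDO.
DO lv_count TIMES.
  READ DATASET p_file INTO lv_line.
  IF sy-subrc NE 0.
    EXIT.                             " end of file reached early
  ENDIF.
  APPEND lv_line TO lt_lines.
ENDDO.
CLOSE DATASET p_file.
" then: parse lt_lines into a typed itab and insert the DB table from it
```

Checking sy-subrc after each READ DATASET guards the last interval, whose p_end may lie past the actual end of the file.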
    <item>
      <title>Re: Process files with huge amount of data using parallel processing</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943381#M943196</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks Robert and Sudhir....&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 02 Jun 2008 20:17:57 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/process-files-with-huge-amount-of-data-using-parallel-processing/m-p/3943381#M943196</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-06-02T20:17:57Z</dc:date>
    </item>
  </channel>
</rss>

