<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Modify table from itab in Application Development and Automation Discussions</title>
    <link>https://community.sap.com/t5/application-development-and-automation-discussions/modify-table-from-itab/m-p/3821983#M919177</link>
<description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Whether or not the DB can handle it, it is not good practice to insert or update half a million records in one shot.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Having had too many unpleasant exchanges with DBAs, I have cultivated the habit of issuing a COMMIT WORK after every 100,000 changes to a DB table.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Apparently, the change log table fills up and becomes a big nuisance if a rollback occurs.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;  DESCRIBE TABLE t_zceb_fica_gl_ext LINES sy-tfill.&lt;/P&gt;&lt;P&gt;  IF sy-tfill &amp;lt; p_maxins.&lt;/P&gt;&lt;P&gt;    IF g_subrc = c_unprocessed_keys_found.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_zceb_fica_gl_ext&lt;/P&gt;&lt;P&gt;        ACCEPTING DUPLICATE KEYS.&lt;/P&gt;&lt;P&gt;    ELSE.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_zceb_fica_gl_ext.&lt;/P&gt;&lt;P&gt;    ENDIF.&lt;/P&gt;&lt;P&gt;  ELSE.&lt;/P&gt;&lt;P&gt;    REFRESH t_insert_chunk.&lt;/P&gt;&lt;P&gt;    LOOP AT t_zceb_fica_gl_ext ASSIGNING &amp;lt;fs_zceb_fica_gl_ext&amp;gt;.&lt;/P&gt;&lt;P&gt;      ADD 1 TO l_count.&lt;/P&gt;&lt;P&gt;      APPEND &amp;lt;fs_zceb_fica_gl_ext&amp;gt; TO t_insert_chunk.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;      IF l_count &amp;gt;= p_maxins.&lt;/P&gt;&lt;P&gt;        IF g_subrc = c_unprocessed_keys_found.&lt;/P&gt;&lt;P&gt;          INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk&lt;/P&gt;&lt;P&gt;            ACCEPTING DUPLICATE KEYS.&lt;/P&gt;&lt;P&gt;        ELSE.&lt;/P&gt;&lt;P&gt;          INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk.&lt;/P&gt;&lt;P&gt;        ENDIF.&lt;/P&gt;&lt;P&gt;        COMMIT WORK.&lt;/P&gt;&lt;P&gt;        CLEAR l_count.&lt;/P&gt;&lt;P&gt;        REFRESH t_insert_chunk.&lt;/P&gt;&lt;P&gt;      ENDIF.&lt;/P&gt;&lt;P&gt;    ENDLOOP.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;*&amp;amp; Now, insert the last remaining chunk of records&lt;/P&gt;&lt;P&gt;    IF g_subrc = c_unprocessed_keys_found.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk&lt;/P&gt;&lt;P&gt;        ACCEPTING DUPLICATE KEYS.&lt;/P&gt;&lt;P&gt;    ELSE.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk.&lt;/P&gt;&lt;P&gt;    ENDIF.&lt;/P&gt;&lt;P&gt;  ENDIF.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;  COMMIT WORK.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Reward if helpful...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Rakesh.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Sat, 17 May 2008 10:28:53 GMT</pubDate>
    <dc:creator>Former Member</dc:creator>
    <dc:date>2008-05-17T10:28:53Z</dc:date>
    <item>
      <title>Modify table from itab</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/modify-table-from-itab/m-p/3821982#M919176</link>
<description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have a MODIFY statement that is used to insert into a Z table:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;MODIFY Z_TABLE FROM ITAB. ITAB has 459,780 entries and has exactly the same structure as the Z table (7 fields in total) that I am trying to insert into. But I got a short dump:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Database error text........: "ORA-00604: error occurred at recursive SQL level&lt;/P&gt;&lt;P&gt;1#ORA-04031: unable to allocate 4120 bytes of shared memory ("shared&lt;/P&gt;&lt;P&gt;pool","select i.obj#,i.ts#,i.file#,...","Typecheck","kgghteInit")"&lt;/P&gt;&lt;P&gt;Internal call code.........: "[RSQL/178/ZBW_POS_CTRL ]"&lt;/P&gt;&lt;P&gt;Please check the entries in the system log (Transaction SM21).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;On reading the help for the MODIFY statement, I learnt that there is a limit on the number of entries that can be inserted or changed in a single LUW. But half a million records is hardly a huge number, is it? Does anyone have any ideas why?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 17 May 2008 10:24:50 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/modify-table-from-itab/m-p/3821982#M919176</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-05-17T10:24:50Z</dc:date>
    </item>
    <item>
      <title>Re: Modify table from itab</title>
      <link>https://community.sap.com/t5/application-development-and-automation-discussions/modify-table-from-itab/m-p/3821983#M919177</link>
<description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Whether or not the DB can handle it, it is not good practice to insert or update half a million records in one shot.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Having had too many unpleasant exchanges with DBAs, I have cultivated the habit of issuing a COMMIT WORK after every 100,000 changes to a DB table.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Apparently, the change log table fills up and becomes a big nuisance if a rollback occurs.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;  DESCRIBE TABLE t_zceb_fica_gl_ext LINES sy-tfill.&lt;/P&gt;&lt;P&gt;  IF sy-tfill &amp;lt; p_maxins.&lt;/P&gt;&lt;P&gt;    IF g_subrc = c_unprocessed_keys_found.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_zceb_fica_gl_ext&lt;/P&gt;&lt;P&gt;        ACCEPTING DUPLICATE KEYS.&lt;/P&gt;&lt;P&gt;    ELSE.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_zceb_fica_gl_ext.&lt;/P&gt;&lt;P&gt;    ENDIF.&lt;/P&gt;&lt;P&gt;  ELSE.&lt;/P&gt;&lt;P&gt;    REFRESH t_insert_chunk.&lt;/P&gt;&lt;P&gt;    LOOP AT t_zceb_fica_gl_ext ASSIGNING &amp;lt;fs_zceb_fica_gl_ext&amp;gt;.&lt;/P&gt;&lt;P&gt;      ADD 1 TO l_count.&lt;/P&gt;&lt;P&gt;      APPEND &amp;lt;fs_zceb_fica_gl_ext&amp;gt; TO t_insert_chunk.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;      IF l_count &amp;gt;= p_maxins.&lt;/P&gt;&lt;P&gt;        IF g_subrc = c_unprocessed_keys_found.&lt;/P&gt;&lt;P&gt;          INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk&lt;/P&gt;&lt;P&gt;            ACCEPTING DUPLICATE KEYS.&lt;/P&gt;&lt;P&gt;        ELSE.&lt;/P&gt;&lt;P&gt;          INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk.&lt;/P&gt;&lt;P&gt;        ENDIF.&lt;/P&gt;&lt;P&gt;        COMMIT WORK.&lt;/P&gt;&lt;P&gt;        CLEAR l_count.&lt;/P&gt;&lt;P&gt;        REFRESH t_insert_chunk.&lt;/P&gt;&lt;P&gt;      ENDIF.&lt;/P&gt;&lt;P&gt;    ENDLOOP.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;*&amp;amp; Now, insert the last remaining chunk of records&lt;/P&gt;&lt;P&gt;    IF g_subrc = c_unprocessed_keys_found.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk&lt;/P&gt;&lt;P&gt;        ACCEPTING DUPLICATE KEYS.&lt;/P&gt;&lt;P&gt;    ELSE.&lt;/P&gt;&lt;P&gt;      INSERT zceb_fica_gl_ext FROM TABLE t_insert_chunk.&lt;/P&gt;&lt;P&gt;    ENDIF.&lt;/P&gt;&lt;P&gt;  ENDIF.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;  COMMIT WORK.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Reward if helpful...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Rakesh.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sat, 17 May 2008 10:28:53 GMT</pubDate>
      <guid>https://community.sap.com/t5/application-development-and-automation-discussions/modify-table-from-itab/m-p/3821983#M919177</guid>
      <dc:creator>Former Member</dc:creator>
      <dc:date>2008-05-17T10:28:53Z</dc:date>
    </item>
  </channel>
</rss>

