SAP Business Warehouse (SAP BW) systems running on IBM Db2 for Linux, UNIX, and Windows can use column-organized (BLU Acceleration) tables for BW tables to speed up query processing. While SAP BW queries often benefit from column-organized tables, BW ETL processes often do not, because ABAP INSERT statements may be slower on column-organized tables. This blog post describes how to enable parallel inserts for ABAP INSERT statements on column-organized tables to speed up BW ETL processing.
Performance Benefit for BW ETL Processing
Parallel insert has been available since Db2 11.1, and its extension, sometimes called vectorized insert, is available as of Db2 11.5.
The diagram below shows results from SAP-internal tests of flat InfoCube data propagation in SAP BW. The tests were performed with 6 parallel-running SAP batch jobs. The tests were run in an SAP test system and, of course, are not necessarily representative of every customer system. However, the figures indicate that parallel and vectorized inserts can speed up the INSERT part of BW ETL processing.
The diagram shows the end-to-end runtime with different INSERT methods:
- Standard array INSERT (not parallelized)
- INSERT with MEMORY_TABLE and parallel insert
- INSERT with MEMORY_TABLE and vectorized insert
So, how do parallel and vectorized inserts work in detail? What’s the role of MEMORY_TABLE? What do you need to do to enable parallel inserts in your system?
In a Nutshell
ETL processes in SAP BW frequently use ABAP INSERT ... FROM TABLE ... statements, often with an ACCEPTING DUPLICATE KEYS clause. By default, the database shared library (DBSL) translates ABAP INSERT ... FROM TABLE ... statements into array INSERT statements.
With some SAP profile parameters, the DBSL can be forced to translate ABAP INSERT ... FROM TABLE ... statements into INSERT ... SELECT statements. In the SELECT part, the generic table function MEMORY_TABLE is used. This allows Db2 to use parallel INSERT, which is faster than the traditional array INSERT and which speeds up ETL processing if enough CPU resources are available.
As of IBM Db2 version 11.1.4.4 iFix1 SAP and with the setting DB2_WORKLOAD=SAP, the ABAP ACCEPTING DUPLICATE KEYS clause can now be mapped to the new Db2 INSERT ... IGNORE DUPLICATES clause, which was not possible before. As a result, parallel insert is now also possible for ABAP INSERT ... FROM TABLE ... ACCEPTING DUPLICATE KEYS statements. (Note: You need IBM Db2 version 11.1.4.4 iFix1 SAP or higher because it contains the fix for APAR IT28206, which is required to safely use the IGNORE DUPLICATES clause.)
In the following, I’ll outline how INSERT statements work in SAP systems, and what changes with parallel and vectorized insert in detail.
Note for the impatient: Eager to learn about the required settings first? Scroll down to the last sections of this blog post, where I describe the settings required to enable parallel and vectorized insert in SAP BW systems.
How ABAP INSERT Statements Work Without Parallelization
So, let me start from the beginning: how do ABAP INSERT statements usually work, and why can this be a problem for data transfer? To insert data from ABAP-internal tables into database tables, the following OPEN SQL statement type is often used in SAP BW:
INSERT <DBTAB> FROM TABLE <ITAB> [ ACCEPTING DUPLICATE KEYS ].
This is one of the ABAP OPEN SQL statement types that must transfer data quickly from an ABAP-internal table to the database. In addition, the statement processing on the database server itself should be as fast as possible.
By default, DBSL uses a Db2 client mechanism called execute chaining to execute these ABAP INSERT statements. This means that a Db2 “INSERT INTO DBTAB VALUES ( ?, … ? )” statement is prepared. During statement execution, the input tuples for this statement are transferred over the network from the application server to the database server in packages to save network roundtrips. However, the execution on the database server is less well-optimized: rows are inserted in a non-atomic way, row by row, and a separate error code is returned for each row.
The non-atomic execution is partly due to the ABAP “ACCEPTING DUPLICATE KEYS” clause. If this clause is specified, all INSERT input tuples that do not result in an SQL0803N (duplicate key) error must be inserted. Input tuples that result in an SQL0803N error are not inserted, and the SQL0803N errors are ignored. This clause therefore requires individual error codes, which has the advantage that a complete bulk INSERT operation does not need to be rolled back if an individual INSERT fails with SQL0803N. Only if one of the tuple inserts encounters a more severe database error does DBSL return an error. In this case, the ABAP application server rolls back the complete database transaction containing all inserts.
Single row INSERT statements on column-organized tables are slower compared to single row INSERT statements on row-organized tables because for column-organized tables many more pages need to be touched. This is because on column-organized tables, the values for each column for one table row reside in different pages in the tablespace.
What’s more, BW ETL processes often use ABAP INSERT statements with the “ACCEPTING DUPLICATE KEYS” clause. In many BW ETL processes that use BLU Acceleration as the default, the slower inserts cannot be fully compensated by the faster query processing.
Now what has been done about these issues? Let’s turn to the issue of slower inserts on column-organized tables first.
Parallel INSERT with BLU Acceleration
To compensate for slower inserts on column-organized tables, IBM introduced parallel insert as of Db2 version 11.1.2.2 for SQL INSERT statements of the form “INSERT INTO DBTAB ( SELECT … )”. If you have made the required settings for parallel insert in your system (see below), the Db2 cost-based optimizer estimates the number of rows returned by the SELECT part of the statement. If this cardinality is above a threshold, multiple agents are started in parallel to execute the INSERT statement. Parallel insert is only supported on column-organized tables (Db2 BLU Acceleration).
As with query parallelism, Db2 needs to have some free CPU resources to benefit from parallel insert.
With Db2 version 11.1, the parallel INSERT operations were still performed in a row-organized manner. With Db2 11.5, parallel insert has been further enhanced by performing part of the parallel operation in a vectorized (column by column) manner. Therefore, parallel insert becomes even more beneficial for customers with Db2 11.5.
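To make the distinction concrete, the following sketch (with hypothetical table names, not taken from an SAP system) contrasts a statement shape that is eligible for parallel insert with one that is not:

```sql
-- Hypothetical tables for illustration; ORGANIZE BY COLUMN creates a
-- column-organized (BLU) table, the only table type supported by parallel insert.
CREATE TABLE SALES_STAGE (ID INTEGER, AMOUNT DECIMAL(15,2)) ORGANIZE BY ROW;
CREATE TABLE SALES_CUBE  (ID INTEGER, AMOUNT DECIMAL(15,2)) ORGANIZE BY COLUMN;

-- Eligible for parallel insert: the atomic INSERT INTO ... ( SELECT ... ) form.
-- The optimizer parallelizes it if the estimated cardinality of the SELECT part
-- is above the internal threshold.
INSERT INTO SALES_CUBE ( SELECT ID, AMOUNT FROM SALES_STAGE );

-- Not eligible: the classic single-row/array INSERT used by the default DBSL path.
INSERT INTO SALES_CUBE VALUES (?, ?);
```

Note that eligibility of the statement shape is only a precondition; whether Db2 actually parallelizes is a cost-based decision.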
As the screenshot taken on Db2 11.5 shows, the planned degree of INSERT parallelism is clearly visible in the EXPLAIN utility of the SAP tool DBA Cockpit (transaction code DBACOCKPIT).
However, by default, parallel insert does not apply to ABAP “INSERT … FROM ITAB” statements because DBSL does not translate these statements into atomic “INSERT INTO DBTAB ( SELECT … )” statements, so Db2 cannot parallelize them.
So, what can be done about this?
New IGNORE DUPLICATES Clause
One of the reasons why DBSL never used atomic SQL statements for ABAP “INSERT … FROM ITAB” statements was that DBSL needed a way to ignore SQL0803N (SQL error code -803) errors for individual input rows without rolling back the complete atomic INSERT statement. By default, Db2 triggers a statement-level rollback for “INSERT INTO DBTAB ( SELECT … )” statements that return a negative SQL error code.
Starting with Db2 version 11.1.3.3, a new IGNORE DUPLICATES clause is supported for INSERT statements. With this clause, atomic INSERT statements that hit a duplicate key error during processing only return an SQL warning (SQL error code +803), and all rows that do not result in duplicates are inserted successfully.
The new IGNORE DUPLICATES clause for Db2 is not part of the SQL standard. However, it is not unique to Db2: similar clauses exist on other databases, for example, the IGNORE DUPLICATES clause on SAP MaxDB. The new clause is only available if the Db2 registry setting DB2_WORKLOAD=SAP is enabled.
With the new clause, atomic SQL statements can now be used for ABAP INSERT statements with “ACCEPTING DUPLICATE KEYS”.
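As a sketch (with the same hypothetical table names as above, and assuming the Db2 registry setting DB2_WORKLOAD=SAP), such an atomic statement looks like this:

```sql
-- Duplicate-key rows are skipped, the statement completes with SQL warning +803
-- instead of a statement-level rollback, and all non-duplicate rows are inserted.
INSERT INTO SALES_CUBE
  ( SELECT ID, AMOUNT FROM SALES_STAGE )
  IGNORE DUPLICATES;
```

Because the statement remains atomic, it keeps its eligibility for parallel insert while matching the semantics of ABAP’s ACCEPTING DUPLICATE KEYS.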
Putting It All Together …
Let me remind you where we started: To insert data from ABAP-internal tables into database tables, the following OPEN SQL statement type is often used in SAP BW ETL processing:
INSERT <DBTAB> FROM TABLE <ITAB> [ ACCEPTING DUPLICATE KEYS ]
These ABAP statements are now translated by DBSL to SQL statements of the form:
INSERT INTO "DBTAB"
( SELECT * FROM TABLE( SAPTOOLS.MEMORY_TABLE( CAST( ? AS BLOB( 2G )) )
CARDINALITY 200000 ) AS DB6_MEMORY_TABLE ( <column definition> ) )
IGNORE DUPLICATES
Thanks to the new IGNORE DUPLICATES clause, DBSL can now turn these statements into atomic SQL statements. For this purpose, DBSL makes use of the generic table function SAPTOOLS.MEMORY_TABLE: DBSL encodes the <ITAB> data into MEMORY table format and sends the data over to the database server as a BLOB value. On the database server, the SAPTOOLS.MEMORY_TABLE function decodes the BLOB input data and returns all value tuples from the ABAP internal table in its result set. Since these SQL INSERT statements are now atomic SQL statements, Db2 can use parallel insert if the target table is a column-organized table.
(Note for the curious: SAPTOOLS.MEMORY_TABLE is also used for fast data access. See my blogs on SAP Community for more:
Why Fast Data Access (FDA) for ABAP “FOR ALL ENTRIES” Statements on IBM Db2 Is Useful and
Fast Data Access (FDA), Part 2: How to detect and tune long-running FDA statements)
Because the table function is part of the SELECT statement, the Db2 optimizer cannot use table statistics to determine whether parallel insert is useful for SAPTOOLS.MEMORY_TABLE. Therefore, it relies on the CARDINALITY clause provided by DBSL. To avoid flooding the SQL cache, DBSL generates at most two SQL statements for one OPEN SQL INSERT statement: one with CARDINALITY 200000, which is used if the number of rows in <ITAB> exceeds a threshold, and a second one for small <ITAB>, where parallelization yields no performance benefit. For the latter, DBSL does not add a CARDINALITY clause, and Db2 does not trigger parallel insert for these smaller INSERT statements. INSERT statements in BW ETL processes typically get CARDINALITY 200000 and therefore benefit from parallel insert.
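Schematically, the two variants that DBSL generates for one OPEN SQL INSERT look as follows (the column definition is a placeholder, and the statement shape follows the example shown earlier):

```sql
-- Variant 1: used for large <ITAB>; the CARDINALITY 200000 clause lets the
-- optimizer choose parallel insert.
INSERT INTO "DBTAB"
  ( SELECT * FROM TABLE( SAPTOOLS.MEMORY_TABLE( CAST( ? AS BLOB( 2G )) )
    CARDINALITY 200000 ) AS DB6_MEMORY_TABLE ( <column definition> ) )
  IGNORE DUPLICATES

-- Variant 2: used for small <ITAB>; without a CARDINALITY clause,
-- Db2 does not trigger parallel insert.
INSERT INTO "DBTAB"
  ( SELECT * FROM TABLE( SAPTOOLS.MEMORY_TABLE( CAST( ? AS BLOB( 2G )) ) )
    AS DB6_MEMORY_TABLE ( <column definition> ) )
  IGNORE DUPLICATES
```

Keeping the statement texts stable (only two variants, fixed cardinality value) is what prevents the SQL cache from being flooded with one statement per internal table size.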
To conclude: If you configure SAP and Db2 correctly (see below), DBSL can now use atomic bulk SQL statements for all ABAP “INSERT … FROM ITAB” statements. As a result, parallel insert is used for INSERT statements in BW ETL processes, with the expected performance benefits.
Now let’s turn to the question of how you can enable parallel inserts in your SAP BW system.
How to Enable Parallel Inserts
SAP Profile Parameters
To enable the new DBSL feature, you need to use SAP kernel 7.49 or higher and need to set the following SAP profile parameters:
- dbs/db6/dbsl_use_memtab_for_atomic_modify=1
- dbs/db6/dbsl_use_insert_ignore_duplicates=1
With the first parameter, DBSL uses atomic INSERT statements for all ABAP “INSERT … FROM ITAB” statements without the “ACCEPTING DUPLICATE KEYS” clause. The second parameter enables the use of the IGNORE DUPLICATES clause, so that atomic INSERT statements are also used for ABAP “INSERT … FROM ITAB” statements with “ACCEPTING DUPLICATE KEYS”.
If enabled, DBSL uses the new feature on all tables, not only on column-organized tables. However, only column-organized tables benefit from parallel insert. On row-organized tables, atomic INSERT statements also work fine and may be somewhat faster, but the effect is less significant than with parallel insert.
Note: In Db2 11.1, an IGNORE DUPLICATES clause on an INSERT statement may have the unwanted side effect that INSERT triggers fire even for rows that were not inserted due to a duplicate key (SQL error code +803). This side effect has been removed with Db2 11.5. However, since Db2 currently does not support triggers on column-organized tables, this restriction is not relevant for column-organized tables in SAP BW.
Required IBM Db2 Versions
On the Db2 side, you must use Db2 version 11.1.4.5 or higher, or version 11.5 or higher. The potential performance benefit on Db2 11.5 will probably be much higher due to vectorized insert. You also need to enable INTRA_PARALLEL in your system to use parallel insert, but this is no extra effort because it is already required by BLU Acceleration.
With these Db2 versions, the Db2 registry setting DB2_EXTENDED_OPTIMIZATION contains the substring “COL_PARALLEL_IUD_THR 50000, CDE_RTABLE_INS_THR 190000”. This means that for INSERT statements with a CARDINALITY clause of 200,000 as shown above, the Db2 optimizer uses parallel insert with an INSERT parallelism of 4.
Tuning the Db2 Client Setting for Atomic ABAP INSERT Statements
The INSERT statements listed above have a BLOB input parameter, and the BLOB values can become very large for large ABAP input tables. By default, the Db2 client keeps only 25 MB in memory before it creates a temporary file in the TEMP directory on your SAP application server.
To prevent writes to temporary files from slowing down the execution of INSERT statements, it is helpful to increase the temporary file threshold. Currently, this needs to be done by adding an additional parameter to the db2cli.ini configuration file located in /usr/sap/SID/SYS/global/db6.
Add the parameter LOBFileThreshold to the COMMON section as follows:
...
[COMMON]
Diagpath=...
LOBFileThreshold=52428800
The setting above corresponds to 50 MB. The optimal setting may be specific to your system because the size of the data packages used in BW ETL processing is customer-specific. Note that work processes executing an ABAP INSERT statement may temporarily use this amount of memory for SQL statements with large LOB input parameters.
Required SAP and Db2 Versions and SAP Notes
To be able to use parallel insert for ABAP INSERT statements, you need:
- SAP BASIS 7.40 or higher
- SAP Kernel 7.49 or higher
We recommend that you use a DBSL version containing the patch from SAP Note 2864970 with patch text "DB6: INSERT statement aborts with 'invalid BOM'".
- Db2 version 11.1.4.5 or higher or Db2 version 11.5 or higher
We recommend that you use Db2 version 11.5 to benefit from vectorized insert.
You also need
SAP Note 2774648.
Your Feedback Wanted
If you own an SAP BW system running on IBM Db2 for Linux, UNIX, and Windows, the described mechanism may help you with your BW ETL performance. Technically, the feature is well-tested, and some pilot customers are using it successfully. With the availability of Db2 11.5 and 11.1.4.5, the feature may now be interesting for a larger number of customers.
If you try out this feature, we would be very interested in your feedback about performance results as well as any feedback about our documentation. Feel free to add a comment to the blog or send me an e-mail.