Application Development and Automation Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Open cursor with parallel processing

Former Member

Hi,

I need to update a huge amount of data (around 3 million records) in parallel processes, and FETCH NEXT CURSOR gives a dump because a database commit happens during parallel processing. I cannot load all the data into an internal table at once, hence I need to use FETCH NEXT CURSOR.

Please help.

regards,

Nilanjana

1 ACCEPTED SOLUTION

kilian_kilger
Product and Topic Expert

Hi Nilanjana,

What do you mean by "parallel process"? Please be a bit more specific about what you actually do, e.g. "I call a function module with the addition IN SEPARATE TASK; in this function module I do an OPEN CURSOR", etc.

Nevertheless, there are different options:

1. If you can control the type of COMMITs happening (for example, only COMMIT CONNECTION DEFAULT is done instead of COMMIT WORK), you could consider using an R/3* service connection and doing the OPEN CURSOR / FETCH on the secondary database connection. If a COMMIT WORK is done, this doesn't help you that much. If only native commits are made, you can also use the WITH HOLD addition of OPEN CURSOR.
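A minimal sketch of the WITH HOLD variant (assuming a hypothetical transparent table ZTAB; note that WITH HOLD only keeps the cursor open across native database commits, not across a COMMIT WORK):

```abap
DATA: cur    TYPE cursor,
      result TYPE STANDARD TABLE OF ztab.

" WITH HOLD keeps the cursor open across native database commits
" (a COMMIT WORK still closes it).
OPEN CURSOR WITH HOLD cur FOR
  SELECT * FROM ztab ORDER BY PRIMARY KEY.

DO.
  FETCH NEXT CURSOR cur INTO TABLE result PACKAGE SIZE 500.
  IF sy-subrc <> 0.
    EXIT.  " no more data
  ENDIF.
  " ... process the 500-row package here ...
ENDDO.

CLOSE CURSOR cur.
```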

2. Let's simulate paging. For this you don't necessarily need an OPEN CURSOR. Say your table has one key field. Then in the first SELECT you could do (this is not working code, just an example):

SELECT * FROM table ORDER BY key_field INTO TABLE @result PACKAGE SIZE 500.

  EXIT.

ENDSELECT.

In the second SELECT you can do:

SELECT * FROM table WHERE key_field > @last_line_of_result-key_field ORDER BY key_field INTO TABLE @result PACKAGE SIZE 500.

  EXIT.

ENDSELECT.

Here last_line_of_result is the work area obtained by taking the last line of the result of the first SELECT statement. Don't forget the ORDER BY clause ;-). The same approach works of course with OPEN CURSOR / FETCH NEXT CURSOR.

You can do similar things with an arbitrary number of key fields, like:

First SELECT: SELECT * FROM table ORDER BY key1, key2 INTO TABLE @result PACKAGE SIZE ...

Second SELECT: SELECT * FROM table WHERE key1 > @last_line_of_result-key1 OR ( key1 = @last_line_of_result-key1 AND key2 > @last_line_of_result-key2 ) ORDER BY key1, key2 ...

You have to be careful about NULL values, though. Of course this gets nasty if you have 500+ key fields; in that case you could generate a dynamic WHERE condition.
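Putting option 2 together, a keyset-paging loop over a single key field might look like this (a sketch only, assuming a hypothetical table ZTAB with one key field KEY_FIELD; requires a release with table expressions):

```abap
DATA: result TYPE STANDARD TABLE OF ztab,
      last   TYPE ztab.

" First package: no WHERE condition yet.
SELECT * FROM ztab ORDER BY key_field
  INTO TABLE @result PACKAGE SIZE 500.
  EXIT.  " keep only the first package
ENDSELECT.

WHILE result IS NOT INITIAL.
  " ... process the package; a COMMIT WORK is harmless here,
  " because no database cursor is kept open between packages ...

  " Remember the last key read, then fetch the next package after it.
  last = result[ lines( result ) ].

  SELECT * FROM ztab
    WHERE key_field > @last-key_field
    ORDER BY key_field
    INTO TABLE @result PACKAGE SIZE 500.
    EXIT.
  ENDSELECT.
ENDWHILE.
```

The ORDER BY on the key field is what makes the "greater than last key" condition a correct continuation point; without it the packages could overlap or skip rows.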

3. There is an ancient function module DB_SETGET. It is not recommended to use it anymore. This function module does a somewhat similar thing to option 2 automatically, using a very deprecated kernel interface. But really, don't use it anymore! And a function module is not Open SQL, of course.

4. If you can wait: there will probably be a LIMIT / OFFSET addition to the SELECT statement at some point in time ;-).

Of course, with solutions 2, 3 and 4 it is not guaranteed that the data inside the table didn't change after a COMMIT happened in the meantime, so you might not read exactly consistent data. You have to know the lifecycle of your table exactly and use lock objects, etc., to ensure that this doesn't happen.

Best regards,

Kilian.

Message was edited by: Kilian Kilger

4 REPLIES 4


Hi Kilian,

I am so sorry for not being specific; I posted it in a hurry.

Actually, I need to update one MARC field, and to do that I am trying to use the function module MATERIAL_MAINTAIN_DARK_RETAIL in a separate task to run parallel processes, as the number of records is in the millions and the time window is short.

Thank you so much for coming up with so many options. I had also thought of the second option as a backup in case nothing else was available. I will try what fits my requirement and get back in case of any issues; I hope that is fine with you.

Thank you again for the help.

Regards,

Nilanjana


Dear Nilanjana,

You can try a deep internal table to get all the data into an internal table.

For example, if you want to select MARC data by plant (WERKS) from the selection screen, then for every plant you can select the data as in the attachment.

Please modify your code accordingly.

I hope it works.

regards,

Pradeep Mishra.


Thanks Pradeep, but I think a deep internal table will also use memory to keep the data in the application layer. Please correct me if I am wrong.