‎2006 Jan 06 4:12 PM
Hi,
I have programmed a data extraction program in SAP IS-U. Due to the sheer size of the tables (900 million+ records) I had to use parallel processing to keep the runtime acceptable.
When running in the QA environment I see some odd behaviour. There are about 49 dialog processes available, but the program never uses more than around 15. In transaction SARFC the application server is configured to accept a load of up to 100% of its processes.
Furthermore, I see that in the first few minutes a lot of jobs are created. Then for a few minutes almost nothing happens (maybe 2 or 3 are running), and a few minutes later it's back to normal. This cycle repeats over and over and takes around 20 minutes. The Basis people say the system load is not very high, and neither is the DB load.
My questions are:
1) Why does the job counter never exceed the 15 jobs, even though there are plenty available (65 in total, 49 on my app server)?
2) Why is the performance so wobbly? I would expect that the slots in SM51 should always be filled with fresh jobs.
With kind regards,
Crispian Stones
P.S.
The mechanism I use is similar to the following:
loop at tb_todo into wa_todo.
  call function 'SPBT_GET_CURR_RESOURCE_INFO'
    importing
      free_pbt_wps = available.
  check available gt 1.
  call function 'Z_MY_EXTRACTOR'
    starting new task my_taskname
    destination my_destination
    performing my_callback on end of task
    exporting
      i_data = wa_todo.
  if sy-subrc eq 0.
    add 1 to created.
  endif.
endloop.

wait until returned ge created.

form my_callback using p_taskname.
  add 1 to returned.
  receive results from function 'Z_MY_EXTRACTOR'.
endform.
‎2006 Jan 11 8:24 AM
Hi Crispian!
Nice system installation, would love to work with it!
Ad 1) I think the use of a server group is necessary to get correct load balancing. The group 'parallel_generators' is delivered by SAP nowadays and should be available - have a look in transaction RZ12 at the settings for maximal use. The report for IDoc processing in background has a parallel option, too; maybe you can compare the techniques. There an init (SPBT_INITIALIZE) is done, but afterwards the current resource info is only queried, not used for explicit load handling.
I hope your example is just heavily shortened - otherwise the check on available would silently skip some todo lines.
Ad 2) Something is definitely happening in those first minutes. The question is just how to look at it / why you see no obvious action.
SM50 shows you database accesses - if no table is displayed, then your program is working in ABAP (or waiting for RFC, locks, number ranges...). Maybe in the beginning you have a lot of memory consumption and you need some swaps before every allocation can be fulfilled. Maybe ST02 will show some action during your 'idle' time.
I found a nice routine somewhere; it has some helpful comments:
FORM get_arfc_ressources.
*
* Optional call to SPBT_INITIALIZE to check the
* group in which parallel processing is to take place.
* Could be used to optimize sizing of work packets
* (work / WP_AVAILABLE).
*
  CALL FUNCTION 'SPBT_INITIALIZE'
    EXPORTING
      group_name                     = p_group
                                       "Name of group to check
    IMPORTING
      max_pbt_wps                    = wp_total
                                       "Total number of dialog work
                                       "processes available in group
                                       "for parallel processing
      free_pbt_wps                   = wp_available
                                       "Number of work processes
                                       "available in group for
                                       "parallel processing at this
                                       "moment
    EXCEPTIONS
      invalid_group_name             = 1
                                       "Incorrect group name; RFC
                                       "group not defined. See
                                       "transaction RZ12
      internal_error                 = 2
                                       "R/3 System error; see the
                                       "system log (transaction
                                       "SM21) for diagnostic info
      pbt_env_already_initialized    = 3
                                       "Function module may be
                                       "called only once; is called
                                       "automatically by R/3 if you
                                       "do not call before starting
                                       "parallel processing
      currently_no_resources_avail   = 4
                                       "No dialog work processes
                                       "in the group are available;
                                       "they are busy or server load
                                       "is too high
      no_pbt_resources_found         = 5
                                       "No servers in the group
                                       "met the criteria of >
                                       "two work processes
                                       "defined.
      cant_init_different_pbt_groups = 6
                                       "You have already initialized
                                       "one group and have now tried
                                       "to initialize a different group.
      OTHERS                         = 7.

* Check exit code of SPBT_INITIALIZE
  CASE sy-subrc.
    WHEN 0.
      "Everything's OK. Optionally set up for optimizing the size of
      "work packets.
      jobs = wp_available.
      IF max_proc < wp_available.
        wp_available = max_proc.
      ENDIF.
*     ...
    WHEN 1.
      "Non-existent group name. Stop report.
*     message e836. "Group not defined.
    WHEN 2.
      "System error. Stop and check the system log for error
      "analysis.
    WHEN 3.
      "Programming error. Stop and correct the program.
*     message e833. "PBT environment was already initialized.
    WHEN 4.
      "No resources: this may be a temporary problem. You
      "may wish to pause briefly and repeat the call. Otherwise
      "check your RFC group administration: is the group defined
      "in accordance with your requirements?
*     message e837. "All servers currently busy.
    WHEN 5.
      "Check your servers, network, operation modes.
    WHEN 6.
  ENDCASE.
ENDFORM.                    " GET_ARFC_RESSOURCES

Regards,
Christian
‎2006 Jan 13 1:16 PM
Hi again,
Nice system indeed, but the extraction job still takes more than 20 hours to complete... not fast enough yet
The example given was indeed severely simplified, by the way.
I was aware of the groups defined in RZ12. For testing purposes I left the group empty and made my alterations in transaction SARFC. But now I see that the settings per logon group can be altered in RZ12 (overriding SARFC?), and I may have to check which logon group the tester has used.
Most of my code I have based on comments/directions given in the SAP Help PDF RFC Programming in ABAP:
<a href="http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCFESDE2/BCFESDE2.pdf">http://help.sap.com/printdocu/core/Print46c/en/data/pdf/BCFESDE2/BCFESDE2.pdf</a>
What I meant was indeed the activity in SM50 & co. For a short while many jobs are created, which then extract the data. In SM50 you see many jobs with status "Inserting into DB".
Then, when these jobs complete, in the next cycle there are hardly new jobs running, maybe 5 at most, decreasing to zero.
After reaching zero, the cycle starts all over.
According to the Basis people the servers are not fully loaded, and they say I could even start more jobs.
So... what's going on? Are the Basis people wrong, and have they overlooked something? Is this behaviour normal, or might there be deficiencies in my code?
With kind regards,
Crispian
‎2006 Jan 13 1:53 PM
Hi Crispian!
As you may have noticed, I thought each parallel process itself would be lazy in its first minutes - but you were referring to this 'down to nothing - jump to full number of parallel sessions' behaviour(?).
That's in your coding. It's linked to the 'wait until returned ge created.' This condition is only fulfilled when all sent sessions have returned, and only afterwards is your do ... enddo loop triggered again.
I experimented a little with 'returned ge created - 10'. But I wanted a dynamic re-trigger (something like '50% are back') to be able to handle both very small parallel tests and very big productive runs. That turned out badly, because the number of sessions increases monotonically, so the fraction of open jobs got smaller and smaller towards the end of the run.
Maybe you just calculate a helper send = created - max_wp + 10. If you then 'wait until returned ge send.', you should get a refill as soon as 10 sessions are back.
But I built my own workload handling based on ideas from the assortment list (a Retail function); maybe I should have a look at the PDF to give you more detailed performance hints. Your solution might need the wait statements in different places.
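A rough sketch of this rolling refill, reusing the variable names from the simplified example above (max_wp, my_group and the offset of 10 are illustrative assumptions, not tested code):

```abap
* Sketch only: created/returned are the global counters from the
* simplified example, max_wp is the number of sessions to keep busy.
DATA: send TYPE i.

LOOP AT tb_todo INTO wa_todo.
  CALL FUNCTION 'Z_MY_EXTRACTOR'
    STARTING NEW TASK my_taskname
    DESTINATION IN GROUP my_group
    PERFORMING my_callback ON END OF TASK
    EXPORTING
      i_data = wa_todo
    EXCEPTIONS
      resource_failure      = 1
      system_failure        = 2
      communication_failure = 3.
  IF sy-subrc = 0.
    ADD 1 TO created.
  ENDIF.
* Refill point: block only until all but (max_wp - 10) of the
* started sessions are back, instead of draining them all to zero.
  send = created - max_wp + 10.
  IF send > 0.
    WAIT UNTIL returned GE send.
  ENDIF.
ENDLOOP.

* Final barrier for the stragglers.
WAIT UNTIL returned GE created.
```

The WAIT statement dispatches the pending callback routines while it blocks, so returned keeps increasing during the wait.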
Regards,
Christian
‎2006 Jan 13 2:42 PM
Hi,
> As you may have noticed, I thought each parallel
> process itself would be lazy in his first minutes -
> but you were referring to this 'down to nothing -
> jump to full number of parallel sessions'
> behavior(?).
Indeed, when a session runs, it goes well. The problem is that the number of concurrent sessions decreases to zero and then rises again.
> That's in your coding. It's linked to the 'wait until
> returned ge created.'
I don't understand. The wait statement is executed after completing the todo list. However, at the beginning of the loop I check the resources as well, using the following construction (simplified a bit):
LOOP AT tb_todo.
  DO.
    CALL FUNCTION 'SPBT_GET_CURR_RESOURCE_INFO'
      IMPORTING
        free_pbt_wps = available.
    IF available GT 1.
      EXIT.
    ELSE.
      WAIT UNTIL 1 = 2 UP TO '0.15' SECONDS.
    ENDIF.
  ENDDO.
  CALL FUNCTION 'Z_MY_EXTRACTOR'
    STARTING NEW TASK
    DESTINATION IN GROUP my_group
    PERFORMING my_form ON END OF TASK
    EXPORTING
      x_to_do = tb_todo
      ... etc
ENDLOOP.
WAIT UNTIL jobs_returned GE jobs_created AND
           jobs_running  EQ 0.
... etc
Message was edited by: C. Stones
Oops: IF available LT 1. --> IF available GT 1.
‎2006 Jan 13 3:08 PM
Hi,
I'm preparing for flight, so just a short reply:
I used a different way for waiting, something like this (from include LWBB_HPR2F01):
DO.
  REFRESH group.
  tkname = tkname + 1.
  APPEND LINES OF i_wind FROM i_ab TO i_bis TO group.
  curr_proc_nr = sende - empfang.
  IF curr_proc_nr >= max_proc.
    WAIT UNTIL empfang >= sende.
    curr_proc_nr = sende - empfang.
  ENDIF.
  IF curr_proc_nr < max_proc.
*   ALE
    CALL FUNCTION 'WBB_WIND_ANALYZE'
      STARTING NEW TASK tkname
      DESTINATION IN GROUP s_group
      PERFORMING ergebnis_wind ON END OF TASK
      EXPORTING
        ...
ENDDO.
WAIT UNTIL empfang >= sende.
Your code seems broken, by the way: the program exits the do - enddo when NO available processes are left?
Waits are crucial for correct load balancing; I'll have a closer look at the best strategy later.
Christian
‎2006 Jan 13 3:34 PM
At our site, our logic was as follows. Assume our restriction is to allow a maximum of 10 processes: V_ALLOWED = 10.
Get the number of currently available processes using the FM into V_AVAILABLE.
IF v_available > v_allowed.
  v_available = v_allowed.
ENDIF.
Every call to the parallel FM increments the counter V_SENT.
Then a WAIT command is coded after the FM call, to WAIT UNTIL V_ALLOWED > V_SENT.
In the callback subroutine, we decrement this counter V_SENT.
Let's assume that we have kicked off 10 processes and V_SENT = 10.
Once our logic reaches the maximum of 10 processes, the WAIT will not be successful unless at least one of the processes has completed. Once one of them completes, V_SENT goes down to 9 and we proceed with the loop. This way we do not wait for all 10 processes to complete...
‎2006 Jan 13 3:52 PM
Mmm... it seems quite a few people use this construction.
However, I have the feeling there can be some nasty surprises with this way of programming. Our production system is heavily loaded; quite often there are many jobs running concurrently.
My experience is that if I check for available processes only once at the beginning of the program, and this happens to be at an awkward moment, there might be only 5 processes available.
However, half an hour later things may be fine again, with 30+ free processes. I would like my program to respond to these changes in load, so that the extractor performs as optimally as possible. It would be a pity if only 5 processes were used throughout the extraction...
Do you have any advice on this matter?
‎2006 Jan 13 4:21 PM
Stones, please note that we do check for available processes every time we invoke the parallel FM, not just once... otherwise it is just a disaster waiting to happen.
If V_ALLOWED is the maximum number of allowed processes:

LOOP.
  check for an available process.
  IF a process is available.
    call your parallel FM.
    ADD 1 TO v_sent.
    WAIT UNTIL v_sent < v_allowed.
  ENDIF.
ENDLOOP.

In the callback:
  SUBTRACT 1 FROM v_sent.
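A self-contained sketch of this throttle in classic ABAP. The function name Z_PARALLEL_FM, the group name 'parallel_generators', the packet count and the variable names are illustrative assumptions, not from the original post; error handling for failed starts is omitted:

```abap
REPORT z_throttle_sketch.

CONSTANTS c_allowed TYPE i VALUE 10.      " max parallel sessions

DATA: gv_sent    TYPE i,                  " sessions currently running
      gv_task    TYPE i,
      gv_name(8) TYPE c.

START-OF-SELECTION.
  DO 100 TIMES.                           " one iteration per work packet
    gv_task = gv_task + 1.
    gv_name = gv_task.
    CONDENSE gv_name.                     " unique task name
    CALL FUNCTION 'Z_PARALLEL_FM'         " assumed RFC-enabled FM
      STARTING NEW TASK gv_name
      DESTINATION IN GROUP 'parallel_generators'
      PERFORMING on_task_done ON END OF TASK
      EXCEPTIONS
        resource_failure      = 1
        system_failure        = 2
        communication_failure = 3.
    IF sy-subrc = 0.
      gv_sent = gv_sent + 1.
    ENDIF.
*   Throttle: block here (callbacks are dispatched meanwhile) until
*   a slot is free again, instead of draining all sessions to zero.
    WAIT UNTIL gv_sent < c_allowed.
  ENDDO.

* Final barrier: let the last sessions come back.
  WAIT UNTIL gv_sent = 0.

FORM on_task_done USING p_taskname.
  RECEIVE RESULTS FROM FUNCTION 'Z_PARALLEL_FM'
    EXCEPTIONS
      communication_failure = 1
      system_failure        = 2.
  gv_sent = gv_sent - 1.                  " free the slot
ENDFORM.
```

Because WAIT yields to the aRFC callbacks, the loop tops the pool back up as soon as one session finishes, which avoids the drain-to-zero cycle described earlier in the thread.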
‎2006 Jan 17 8:40 AM
Hi!
I found an example program description (copyright SAP) which looks able to start new tasks immediately after a session becomes available again.
It is displayed in two columns, code on the left and explanation on the right - but here everything is placed one after the other. If it sounds more like ABAP than English, then it's a code line...
<b>Required Data</b>
<b>Input parameters:</b>
PARAMETERS MAX_PROC_NUM type i.
Limits the maximum number of
parallel jobs. Alternatively, the
system uses as many jobs as can be
started in the system.
PARAMETERS SERV_GROUP type rfcgr.
Name of the server group that is to be
used (required entry for parallel
processing)
PARAMETERS TIMEOUT type I
Maximum waiting time after last package
sent
<b>Further data</b>
DATA STARTED type i value 0.
Counter for jobs started so far
DATA RETURNED type i value 0.
Counter for jobs finished so far
DATA RUNNING type i value 0.
Counter for jobs currently running
DATA PROC_NUM_LIMIT type i.
Current maximum value for the number of
jobs running in parallel
DATA NUM_LINES type i.
Number of work items in worklist
DATA TAB_IDX type i.
Table index of current package
DATA RESULT like sy-subrc.
Return value of aRFC
DATA WORK_ITEMS like standard table
of ???? occurs 0.
Internal table of worklist; the data
type naturally depends on the task in
question.
DATA WA_PACKET like ????.
Work area for data of the current work
item; same data type as WORK_ITEMS
DATA TASKNAME(8).
Variable for a unique name for identifying the started job and thus the processed work item. This is needed if the job returns results data. A simple option is the table index TAB_IDX of the package.
<b>Initialization</b>
IF MAX_PROC_NUM <= 1.
......
ENDIF.
Maybe incorrect entry; possible program
reactions:
1) Use of the maximum number of parallel jobs in the system
2) Re-display selection screen with error message
3) End program with error message
4) At a value of 1, sequential instead
of parallel processing (makes sense for
tests, for example)
SELECT * FROM .... into TABLE
WORK_ITEMS WHERE...
Fill internal table WORK_ITEMS with
required worklist data
Note: This table must not become too
large (memory required!). If necessary,
only key information needs to be read,
or the data can be read in smaller
packages.
DESCRIBE TABLE WORK_ITEMS LINES
NUM_LINES.
Determine the size of the worklist in
table WORK_ITEMS. If the table is empty, nothing needs to be done.
TAB_IDX = 1.
CLEAR STARTED.
CLEAR RETURNED.
PROC_NUM_LIMIT = MAX_PROC_NUM.
Initialize variables that control flow.
Limit for the number of parallel jobs is currently the transferred value. This can be changed at runtime.
CALL FUNCTION 'SPBT_INITIALIZE'
EXPORTING group_name = SERV_GROUP.
Initialization of server group
<b>Processing the Worklist in a Loop</b>
WHILE TAB_IDX <= NUM_LINES.
Loop on complete worklist table
RUNNING = STARTED - RETURNED.
IF RUNNING >= PROC_NUM_LIMIT.
WAIT UNTIL RUNNING < PROC_NUM_LIMIT.
ENDIF.
Check whether the maximum number of parallel jobs has already been reached.
If so, wait until one has finished
PROC_NUM_LIMIT = MAX_PROC_NUM.
A new job can now be started. Program tries (again) to use the maximum number of jobs.
READ TABLE WORK_ITEMS INDEX TAB_IDX INTO WA_PACKET.
Read next work item from internal table with worklist in work area.
TASKNAME = TAB_IDX.
Define a unique name for the next job (if necessary). The table index of the work item can be used for this, for example.
CALL FUNCTION 'MY_PROCESSING_FUNCTION'
STARTING NEW TASK TASKNAME
DESTINATION IN GROUP SERV_GROUP
PERFORMING COME_BACK ON END OF TASK
EXPORTING ....
TABLES ...
EXCEPTIONS
RESOURCE_FAILURE = 1
SYSTEM_FAILURE = 2
COMMUNICATION_FAILURE = 3
OTHERS = 4.
RESULT = SY-SUBRC.
Start function module for processing as aRFC in server group SERV_GROUP. When aRFC finishes, form routine COME_BACK is activated.
It is important to record
any system errors at the start of the aRFC, in particular, the exception RESOURCE_FAILURE. The return value is stored in RESULT and then evaluated.
CASE RESULT.
Evaluation of return code.
WHEN 0.
STARTED = STARTED + 1.
RUNNING = RUNNING + 1.
TAB_IDX = TAB_IDX + 1.
Start of the aRFC was successful. Update the control variables and go on to the next work item.
Note: At this point, useful trace information can be collected for
subsequent analysis and tuning.
WHEN 1.
IF RUNNING = 0.
................
ENDIF.
PROC_NUM_LIMIT = RUNNING.
Resource problem; this should only be a temporary situation. Make a note in the trace file if necessary. The program waits until one of the jobs that is still running returns, and tries to start the job again later. The value for the maximum number of jobs running is temporarily reduced to the number of jobs currently running. If no job is running at all, the problem is with the load distribution; terminate the program with an error message if necessary.
WHEN OTHERS.
.....
Greater problem in RFC communication. Possible reactions:
1)Terminate program with error message
2)Use local instance only
3)Only use sequential processing in this process
ENDCASE.
ENDWHILE. " TAB_IDX
<b>Final Processing</b>
WAIT UNTIL RUNNING = 0.
The job for the last work item has now been started. However, processing of this package is not yet finished. Wait until all the jobs that were started have finished. This may be necessary for a follow-on process. In addition, if you do not do this,
COME_BACK will no longer find the context. The waiting time can be restricted as in some cases the end of
a job is not confirmed (e.g. termination of the process at operating system level).
......
All work items have been processed.
Final tasks, such as writing of a log
file, can be carried out, and the
program then finishes.
<b>COME_BACK Routine</b>
FORM COME_BACK USING NAME.
When a job ends, form COME_BACK is
activated.
NAME identifies the job. The system
gives the variable the name that was
defined at the start via ....STARTING
NEW TASK TASKNAME... .
RETURNED = RETURNED + 1.
RUNNING = RUNNING - 1.
Update of controlling variables.
RECEIVE RESULTS FROM FUNCTION
'MY_PROCESSING_FUNCTION'
IMPORTING ...
TABLES ...
EXCEPTIONS
COMMUNICATION_FAILURE = 1
SYSTEM_FAILURE = 2
OTHERS = 3.
....
ENDFORM.
If necessary, results data can be read
and evaluated. As at the start,
communication problems must be taken
into account.
‎2006 Jan 17 9:14 AM
Hi,
Thank you for your nice example. As soon as I get time again I will make a test and see what difference it will make.
Quite striking, though, is the fact that during runtime the program never checks whether sufficient resources are available (except via the RESOURCE_FAILURE exception, that is). I want to prevent that error, because otherwise I'll have to re-extract the data later on, which is awkward.
Have you experienced any problems in this area, or am I just unlucky to have such a heavily loaded system that every now and then all slots are full?
Crispian
‎2007 Sep 26 2:51 PM
Hello,
I am facing a similar issue in one of my parallel processing programs as well. The program, when executed with a data set of 10,000 records, takes 65 minutes to complete. One would expect it to take 650 minutes (or even less) to process a data set of approx. 100,000 records.
However, when I run the program for a file with approx. 100,000 records, the program runs OK initially (i.e. I can see multiple dialog processes being invoked in SM50), but after a while it starts running on ONLY ONE dialog process. I am not quite sure where, when and why this PARALLEL-to-SEQUENTIAL switch is happening. Due to this, the program drags on and on and on. I would highly appreciate your suggestions/tips to put this bug to sleep.
Here is a summary of the logic used...
w_group = 'BATCH_PARALLEL'.
w_task  = w_task + 1.

CALL FUNCTION 'SPBT_INITIALIZE'
  EXPORTING
    group_name                     = w_group
  IMPORTING
    max_pbt_wps                    = w_pr_total "Total processes
    free_pbt_wps                   = w_pr_avl   "Avail processes
  EXCEPTIONS
    invalid_group_name             = 1
    internal_error                 = 2
    pbt_env_already_initialized    = 3
    currently_no_resources_avail   = 4
    no_pbt_resources_found         = 5
    cant_init_different_pbt_groups = 6
    OTHERS                         = 7.

IF sy-subrc <> 0.
* Raise error message and quit
  w_wait = c_x.
ELSE.
* If everything went well, continue processing
  CLEAR: w_wait.
* The subroutine that receives results from the parallel FMs will reduce
* this counter and set the flag W_WAIT once the value is equal to ZERO
  w_count = LINES( data ).
* Refresh the temporary table that will be populated for every partner
  REFRESH: t_data.
  LOOP AT data.
*   Keep appending data to the temporary table
    APPEND data TO t_data.
    AT END OF partner.
      CLEAR: w_subrc.
      CALL FUNCTION 'Z_PARALLEL_FUNCTION'
        STARTING NEW TASK w_task
        DESTINATION IN GROUP w_group
        PERFORMING process_return ON END OF TASK
        TABLES
          data = t_data
        EXCEPTIONS
          communication_failure = 1 "Mandatory for || processing
          system_failure        = 2 "Mandatory for || processing
          resource_failure      = 3 "Mandatory for || processing
          OTHERS                = 4.
      w_subrc = sy-subrc.
*     Check if everything went well...
      CLEAR: w_rfcdest.
      CASE w_subrc.
        WHEN 0.
*         This variable keeps track of the number of threads initiated.
*         In case all the processes are busy, we should compare this
*         with the variable w_recd (set later in the subroutine
*         'PROCESS_RETURN'), and wait till w_sent >= w_recd.
          w_sent = w_sent + 1.
*         Track all the tasks initiated.
          CLEAR: wa_tasklist.
          wa_tasklist-taskname = w_task.
          APPEND wa_tasklist TO t_tasklist.
        WHEN 1 OR 2.
*         Populate the error log table and continue with the rest.
        WHEN OTHERS.
*         There might be a lack of resources. Wait till some processes
*         are freed again. Populate the records back to the main table
          CLEAR: wa_data.
          LOOP AT t_data INTO wa_data.
            APPEND wa_data TO data.
          ENDLOOP.
          WAIT UNTIL w_recd >= w_sent. "IS THIS THE CULPRIT?
      ENDCASE.
*     Increment the task number
      w_task = w_task + 1.
*     Refresh the temporary table
      REFRESH t_data.
    ENDAT.
  ENDLOOP.
ENDIF.

* Wait till all the records are returned.
WAIT UNTIL w_wait = c_x UP TO '120' SECONDS.
FORM process_return USING p_taskname.                       "#EC CALLED
  REFRESH: t_data_tmp.
  CLEAR:   w_subrc.
* Check the task for which this subroutine is processed!!!
  CLEAR: wa_tasklist.
  READ TABLE t_tasklist INTO wa_tasklist WITH KEY taskname = p_taskname.
* If the task wasn't already processed...
  IF sy-subrc EQ 0.
*   Delete the task from the table T_TASKLIST
    DELETE TABLE t_tasklist FROM wa_tasklist.
*   Receive the results back from the function module
    RECEIVE RESULTS FROM FUNCTION 'Z_PARALLEL_FUNCTION'
      TABLES
        address_data = t_data_tmp
      EXCEPTIONS
        communication_failure = 1 "Mandatory for || processing
        system_failure        = 2 "Mandatory for || processing
        resource_failure      = 3 "Mandatory for || processing
        OTHERS                = 4.
*   Store sy-subrc in a temporary variable.
    w_subrc = sy-subrc.
*   Update the counter (number of tasks/jobs/threads received)
    w_recd = w_recd + 1.
*   Check the returned values
    IF w_subrc EQ 0.
*     Do necessary processing!!!
    ENDIF.
*   Subtract the number of records that were returned from the
*   total number of records to be processed
    w_count = w_count - LINES( t_data_tmp ).
*   If the counter is ZERO, set W_WAIT.
    IF w_count = 0.
      w_wait = c_x.
    ENDIF.
  ENDIF.
ENDFORM.                    " process_return
Thanks,
Muthu