Data flows start but do not complete successfully while extracting/loading data

Active Participant
0 Kudos

Hello People,

We are facing abnormal behavior with the dataflows in a Data Services job.


We are extracting data from the CRM side in parallel. Please refer to the job design:

a. We have 5 main workflows:

   => Main WF1 has 6 sub-WFs, each of which has 1 or 2 DFs running in parallel.

   => Main WF2 has 21 DFs plus a WFa containing a DF and a WFb; WFb has 1 DF in parallel.

   => Main WF3 has 1 DF.

   => Main WF4 has 3 DFs in parallel.

   => Main WF5 has 1 WF and a DF in sequence.

b. The job usually works perfectly fine, but sometimes it gets stuck at the DFs without any error in the logs.

c. The job does not get stuck at a specific dataflow or on a specific day; it gets stuck at different DFs each time.

d. Observations in the Monitor Log:

Dataflow      State        RowCnt      LT         AT
DF1           PROCEED      234000      8.113      394.164
DF1           PROCEED      234000      8.159      394.242
DF1           PROCEED      234000      8.159      394.242

where LT is lapse time and AT is absolute time.

If you check the monitor log, the state of dataflow DF1 remains PROCEED until the end; ideally it should complete.

In successful jobs, the status for DF1 is STOP. This DF takes approximately 2 minutes to execute.

The total row count for the DF1 extraction is 234204, but it got stuck at 234000.

We then terminate the job after some time, but surprisingly it executes successfully the next day.

e. Analysis of all the failed jobs shows the same behavior across the different dataflows that got stuck during execution. The logic inside the dataflows is perfectly fine.
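Since the stuck runs produce no error log, one way to catch them earlier is a small watchdog that compares successive monitor-log snapshots. This is only a sketch under assumed inputs: DS does not expose the monitor log as tuples like this, so you would have to parse the log into this shape yourself.

```python
# Minimal watchdog sketch (the snapshot format is an assumption, not the
# real DS monitor-log layout): compare two monitor-log snapshots and flag
# any dataflow still in PROCEED whose row count has not advanced.

def stuck_dataflows(prev, curr):
    """prev/curr: lists of (dataflow, state, row_count) tuples."""
    prev_rows = {name: rows for name, state, rows in prev}
    return [name for name, state, rows in curr
            if state == "PROCEED" and prev_rows.get(name) == rows]

# Example using the figures observed above:
prev = [("DF1", "PROCEED", 234000), ("DF2", "PROCEED", 100)]
curr = [("DF1", "PROCEED", 234000), ("DF2", "STOP", 500)]
print(stuck_dataflows(prev, curr))  # -> ['DF1']
```

Run on snapshots taken a few minutes apart, such a check could alert (or kill the job) once a DF stalls for much longer than its normal ~2 minute runtime, instead of waiting until the next day.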

Observations in the Trace log:

DATAFLOW: Process to execute data flow <DF1> is started.

DATAFLOW: Data flow <DF1> is started.

ABAP: ABAP flow <ZABAPDF> is started.

ABAP: ABAP flow <ZABAPDF> is completed.

Cache statistics determined that data flow <DF1> uses <0> caches with a total size of <0> bytes. This is less than (or equal to) the virtual memory <1609564160> bytes available for caches.

Statistics is switching the cache type to IN MEMORY.

DATAFLOW: Data flow <DF1> using IN MEMORY Cache.

DATAFLOW: <DF1> is completed successfully.

The highlighted text in the trace log does not appear for the unsuccessful job; it appears only for the successful one.

Note: The cache type is pageable cache; the DS version is 3.2.

Please suggest.



Accepted Solutions (0)

Answers (2)

Active Contributor

A row count stuck very close to the actual number of records most probably means all rows have been processed by DS, but the database operations cannot complete. DS is waiting for an acknowledgement that never comes.

It might be a locking issue, a redo log file problem, a lack of disk space... Look for more information in the database trace log and error files.
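If the target database is Oracle, a first check for the locking hypothesis is the `v$session` view (the columns used below are standard in Oracle's documentation). This is only a sketch: run the query with your own DB client; the helper merely formats rows you have already fetched.

```python
# Diagnostic sketch for the locking hypothesis (assumes an Oracle target;
# sid, blocking_session, seconds_in_wait, and event are standard v$session
# columns, but fetch the rows with whatever DB client you normally use).
BLOCKER_SQL = """
SELECT sid, blocking_session, seconds_in_wait, event
FROM   v$session
WHERE  blocking_session IS NOT NULL
"""

def summarize_blockers(rows):
    """rows: (sid, blocking_session, seconds_in_wait, event) tuples
    as returned by BLOCKER_SQL."""
    return [f"session {sid} blocked by {blocker} for {wait}s on '{event}'"
            for sid, blocker, wait, event in rows]

# Example with a fabricated row, just to show the report shape:
print(summarize_blockers([(101, 57, 394, "enq: TX - row lock contention")]))
```

A blocked DS session showing a wait event like row lock contention, plus the DB alert/trace logs, would confirm that the hang is on the database side rather than in the dataflow.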

Active Participant
0 Kudos

Probably a product issue.

We upgraded to SAP DS 4.5, and in the 6 months since we have not faced any such issue.

Regards - Santosh G.

Former Member
0 Kudos

SAP DS 4.5 ??

Active Participant
0 Kudos

Thanks for pointing out: SAP DS 4.2 SP5

0 Kudos

Hi Santosh,

Just a wild guess.

Would you be able to replicate all the DFs/WFs, delete the original DFs/WFs, rename the replicated objects back to the original DF/WF names (for convenience), and execute the job?

Sometimes an object reference does not work correctly.

Hope this helps.


Shiva Sahu