SAP Datasphere, Dataflow read from CSV, random number of rows

albertosimeoni
Participant

After the last update, in SAP Datasphere, some of my dataflows that read CSV files from an SFTP server extract a random number of rows.

The same dataflows in a quality tenant (with the same connections) read every row from the files correctly.

In the production tenant, the same files yield a random number of rows.

The dataflows do not show any errors.

I tried to transport the dataflow from quality to production => did not solve the issue.

I tried to change the default separator from ',' to '\t' => did not solve the issue.

I tried to rewrite the dataflow from scratch => did not solve the issue.

I tried to change the batch size (on/off) => did not solve the issue.

 

Are there any known issues that could affect the Cloud Connector here?

This is a really bad issue: the flow had been working since the beginning of development, and now it reads a random number of rows without failing or throwing any error!
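
To make the silent row loss measurable, here is a minimal diagnostic sketch, assuming the paramiko and hdbcli Python packages; all hosts, credentials, paths, and table names are placeholders, not values from my real landscape. It compares the row count of the CSV on the SFTP server with the row count that actually landed in the dataflow's target table:

```python
# Hypothetical diagnostic: compare the source CSV row count (on SFTP) with
# the row count actually loaded into the target table in HANA Cloud.
# All hosts, credentials, paths, and table names below are placeholders.
import csv
import io

import paramiko           # third-party SFTP client
from hdbcli import dbapi  # SAP HANA client driver for Python


def count_csv_rows(host, user, password, path):
    """Count data rows (excluding the header) of a CSV on an SFTP server."""
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, password=password)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        with sftp.open(path, "rb") as handle:
            text = handle.read().decode("utf-8")
        reader = csv.reader(io.StringIO(text))
        next(reader, None)            # skip the header line
        return sum(1 for _ in reader)
    finally:
        sftp.close()
        transport.close()


def count_table_rows(address, user, password, table):
    """Count the rows that actually landed in the (fully quoted) target table."""
    conn = dbapi.connect(address=address, port=443, user=user,
                         password=password, encrypt=True)
    try:
        cursor = conn.cursor()
        cursor.execute(f"SELECT COUNT(*) FROM {table}")
        return cursor.fetchone()[0]
    finally:
        conn.close()


if __name__ == "__main__":
    src = count_csv_rows("sftp.example.com", "user", "secret",
                         "/inbound/sales.csv")
    tgt = count_table_rows("xxx.hana.prod-eu10.hanacloud.ondemand.com",
                           "DBUSER", "secret", '"MY_SPACE"."SALES_TARGET"')
    print(f"CSV rows: {src}, target table rows: {tgt}")
    if src != tgt:
        print("Mismatch: the dataflow silently dropped rows.")
```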

Accepted Solutions (0)

Answers (1)

TuncayKaraca
Active Contributor

Hello @albertosimeoni,

Did it happen after the latest release, 2024.5, of SAP Datasphere? When did it start, today? I don't see any data integration or data flow enhancements in the latest release; check out What's New in SAP Datasphere Version 2024.5 (Feb 27, 2024).

Have both your quality and production tenants been updated to the latest release already?

You are probably onto something! I suggest you go to https://me.sap.com/ and submit an incident.

Regards,
Tuncay

albertosimeoni
Participant

Hello,

The version (today) is 2024.5.61.

Data Integration Runtime Version: 2403.29 (1.1-6813.328a4882)
Deployment Service Version: 2024.5.2
 
From the Data Integration Monitor logs, I see a random number of rows starting from February 17.
The absurd thing is that both tenants have the same version: in the quality tenant there is no problem, in production there is.

Now I will try to recreate one of the dataflows from scratch and see if the problem persists.
albertosimeoni
Participant

The final solution in my case is:

- Set the target table as "in memory".

- Change a data type inside the dataflow, in the first block that contains the CSV file: I had a column with integer values in the CSV file that was being read as DECIMAL(13,3), so I changed the column type to int32 (see the sketch below).

Applying both of these options solved it. I don't know which components are behind this: maybe changing the source object inside the dataflow triggers a rebuild of links somewhere in the technical schemas in HANA Cloud, or the in-memory option avoids lost packets in some communication...
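
As an illustration of the type problem, here is a minimal sketch, assuming pandas and placeholder file and column names (my own reconstruction of the check, not anything Datasphere runs internally). It verifies that a column inferred as DECIMAL(13,3) really contains values that fit int32 before declaring it so in the source block:

```python
# Hypothetical check (pandas; file and column names are placeholders):
# verify that a column the dataflow inferred as DECIMAL(13,3) actually
# holds pure integer values, so declaring it as int32 is safe.
import pandas as pd

CSV_FILE = "sales.csv"     # placeholder
INT_COLUMN = "QUANTITY"    # placeholder

# Read everything as text first so no value is silently coerced.
df = pd.read_csv(CSV_FILE, dtype=str)

# Values that do not parse as numbers become NaN; fractional values
# are also flagged, since neither can be represented as int32.
parsed = pd.to_numeric(df[INT_COLUMN], errors="coerce")
bad = df[parsed.isna() | (parsed % 1 != 0)]

if not bad.empty:
    print(f"{INT_COLUMN}: {len(bad)} values are not plain integers")
    print(bad.head())
elif parsed.abs().max() > 2**31 - 1:
    print(f"{INT_COLUMN}: values overflow the int32 range")
else:
    df[INT_COLUMN] = parsed.astype("int32")
    print(f"{INT_COLUMN}: all {len(df)} values are valid int32")
```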

Best Regards

TuncayKaraca
Active Contributor

@albertosimeoni

It's good to hear you've resolved the issue.

Actually, it's a very meaningful point that changes in the CSV files (in theory there should not be any) may affect the data flow, for sure! So when comparing both tenants, it'd be better to use exactly the same file.
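
For instance, a quick way to confirm both tenants really processed byte-identical input is to compare file checksums; a minimal sketch with placeholder paths:

```python
# Minimal sketch (placeholder paths): confirm two tenants received
# byte-identical CSV files by comparing SHA-256 checksums.
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file paths for the two downloads being compared.
if sha256_of("quality/sales.csv") == sha256_of("production/sales.csv"):
    print("Byte-identical files: both tenants saw the same input.")
else:
    print("Files differ: the tenant comparison is not apples to apples.")
```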

Regards,
Tuncay