On Feb 26, 2024, 10:23 PM
After the latest update of SAP Datasphere, some of my dataflows that read CSV files from an SFTP server extract a random number of rows.
The same dataflows in the quality tenant (with the same connections) correctly read every row of the files.
In the production tenant, the same files yield a random number of rows.
The dataflows do not show any errors.
Things I have tried so far:
- Transporting the dataflow from quality to production => did not solve the issue.
- Changing the default separator from ',' to '\t' => did not solve the issue.
- Rewriting the dataflow from scratch => did not solve the issue.
- Changing the batch size (on/off) => did not solve the issue.
Are there any known issues that could affect the Cloud Connector?
This is a really serious problem: the dataflow worked correctly from the beginning of development, and now it reads a random number of rows without failing or throwing any error!
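One way to quantify the mismatch while investigating is to download the file from the SFTP source, count its data rows, and compare that number against the row count loaded into the target table. A minimal sketch (the file path and delimiter below are hypothetical, not part of the original setup):

```python
import csv

def count_csv_rows(path: str, delimiter: str = ",") -> int:
    """Count the data rows in a CSV file, excluding the header line."""
    with open(path, newline="") as f:
        reader = csv.reader(f, delimiter=delimiter)
        next(reader, None)  # skip the header row
        return sum(1 for _ in reader)

# Hypothetical usage: compare this number with SELECT COUNT(*) on the target table
# print(count_csv_rows("downloaded_from_sftp.csv"))
```

Running this after each dataflow execution would show whether the shortfall varies from run to run, as described above.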
Hello @albertosimeoni,
Did it happen after the latest release 2024.5 of SAP Datasphere, i.e., today? I don't see any data integration or dataflow enhancements in the latest release; check out What’s New in SAP Datasphere Version 2024.5 — Feb 27, 2024.
Have both your quality and production tenants already been updated to the latest release?
You are probably onto something! I suggest you go to https://me.sap.com/ and submit an incident.
Regards,
Tuncay
Hello,
The version (today) is 2024.5.61.
The final solution in my case was:
- Set the target table to "in memory".
- Change a data type inside the dataflow, in the first block that contains the CSV file: I had a column with integer values in the CSV file that was being read as DECIMAL(13,3), so I changed the column type to int32.
The issue was solved by applying both of these options. I don't know which components are behind this:
whether the change to the source object inside the dataflow triggers a rebuild of links somewhere in the technical schemas in HANA Cloud, or whether the in-memory option avoids packet loss in some communication...
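As an analogy outside Datasphere (this is not SAP's internal implementation), the type fix above resembles forcing an explicit integer dtype when reading a CSV, rather than letting the reader infer a wider decimal type. A minimal pandas sketch with hypothetical column names:

```python
import io
import pandas as pd

# Hypothetical payload standing in for the file read from SFTP;
# 'qty' holds integer values that a loose reader might type as
# a decimal (e.g. DECIMAL(13,3)) instead of an integer.
csv_data = "id,qty\n1,10\n2,20\n3,30\n"

# Explicitly cast 'qty' to int32, mirroring the dataflow fix.
df = pd.read_csv(io.StringIO(csv_data), dtype={"qty": "int32"})

print(df.dtypes["qty"])  # int32
print(len(df))           # 3 data rows read
```

Pinning the type up front also makes it easy to spot when a run silently loads fewer rows than the file contains.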
Best Regards
It's good to hear you've resolved the issue.
Actually, it's a very meaningful point that changes in the CSV files --there should not be any, in theory-- may affect the dataflow. So when comparing both tenants, it's better to use exactly the same file.
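One way to make "exactly the same file" verifiable is to compare checksums of the file as each tenant sees it. A minimal sketch, assuming local copies downloaded from each tenant's SFTP source (the file names are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: identical digests mean both tenants read the same bytes
# print(sha256_of("quality_copy.csv") == sha256_of("production_copy.csv"))
```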
Regards,
Tuncay