Hello,
I have a source CDS view in my S/4HANA on-premise system (I_GLACCOUNTLINEITEMRAWDATA) with data from 2021 onwards. The volume of data is huge, so I want to replicate the view into Google Cloud Storage files and then load those files into Google BigQuery. To keep the volume manageable, I want to split the replication into smaller flows using a FiscalYear filter.

I created a replication flow (Job_2021) with I_GLACCOUNTLINEITEMRAWDATA as the source, added a filter on FiscalYear = 2021, and renamed the default target object from I_GLACCOUNTLINEITEMRAWDATA to I_GLACCOUNTLINEITEMRAWDATA_2021. I saved and deployed the job, and it went through successfully. I then created the next job, Job_2022, with the same source, a filter on FiscalYear = 2022, and the target object within the flow renamed to I_GLACCOUNTLINEITEMRAWDATA_2022. When I try to deploy it, I get an error saying that the object I_GLACCOUNTLINEITEMRAWDATA already exists in another space. Is there a solution to this? I simply want to split the source data into multiple target tables/files.
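For illustration only: the replication flows themselves are configured in the Datasphere UI, not in code, but the per-fiscal-year split described above (one job, one FiscalYear filter, and one renamed target object per year) can be sketched as a small helper. The helper function and year range here are hypothetical; only the view and naming scheme come from the question.

```python
# Sketch of the per-fiscal-year split described in the question.
# SOURCE_VIEW and the Job_/target naming come from the post; the
# helper itself is illustrative, not any SAP or Google API.
SOURCE_VIEW = "I_GLACCOUNTLINEITEMRAWDATA"

def split_by_fiscal_year(first_year, last_year):
    """Return one (job name, filter, target object) entry per fiscal year."""
    jobs = []
    for year in range(first_year, last_year + 1):
        jobs.append({
            "job": f"Job_{year}",
            "filter": {"FiscalYear": str(year)},     # filter on the source view
            "target": f"{SOURCE_VIEW}_{year}",        # renamed target object
        })
    return jobs

for job in split_by_fiscal_year(2021, 2024):
    print(job["job"], "->", job["target"])
```

The deployment error arises because each flow still references the same source object name, even though the targets differ.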
Request clarification before answering.
If the volume is huge, you can use partitioning in the replication flow instead and load the data into a single target.
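If the data lands in a single target as suggested, a per-year split can still be produced downstream. As a rough, hedged illustration (the row layout is an assumption; in practice this grouping would be done by a partitioned BigQuery table or a load-time filter, not in application code), rows can be bucketed by their FiscalYear field:

```python
from collections import defaultdict

def partition_rows_by_fiscal_year(rows):
    """Group line-item rows (dicts with a 'FiscalYear' key) by fiscal year.

    One source, many per-year buckets -- the same effect the asker wants
    from multiple replication flows, achieved after replication instead.
    """
    parts = defaultdict(list)
    for row in rows:
        parts[row["FiscalYear"]].append(row)
    return dict(parts)
```

Each bucket could then be written to its own file or table (e.g. I_GLACCOUNTLINEITEMRAWDATA_2021), avoiding the duplicate-source-object conflict at deploy time.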