Posted on 09 May 2016, 3:40 PM
Dear Colleagues/Experts,
I'm trying to replicate a table with a very large number of entries. I've used reading type 5 (Sender Queue) and have also defined ranges for the initial load in the advanced settings.
The replication started with the initial load. However, towards the end, when the load was almost done with very few records left, I received an abort message stating 'Duplicate KEY'.
On checking the ranges, I found that the last range overlapped the previous one. I've corrected them now.
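(For reference, this kind of overlap can be caught with a quick sanity check before starting the load. A minimal sketch in Python; the range values are made-up examples, and SLT itself exposes no such API, this only illustrates the check.)

```python
# Illustrative sketch: verify that manually defined key ranges are
# mutually exclusive, which is what the 'Duplicate KEY' abort hinges on.
# All range bounds below are hypothetical example values.

def find_overlaps(ranges):
    """Return pairs of adjacent ranges whose inclusive (low, high) bounds overlap."""
    ordered = sorted(ranges)
    return [
        (a, b)
        for a, b in zip(ordered, ordered[1:])
        if b[0] <= a[1]  # next range starts before the previous one ends
    ]

# Faulty setup: the last range overlaps the second-last one.
faulty = [(1, 1_000_000), (1_000_001, 2_000_000), (1_900_000, 3_000_000)]
print(find_overlaps(faulty))  # reports the overlapping pair

# Corrected setup: mutually exclusive ranges.
fixed = [(1, 1_000_000), (1_000_001, 2_000_000), (2_000_001, 3_000_000)]
print(find_overlaps(fixed))   # []
```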
My question is: can the system take the changed ranges into account and resume the load, or do I need to stop the replication and start it again so that the logging tables are re-created? It seems that I'll have to stop and restart.
Seeking your advice.
Basically, since the majority of this large record set is already loaded, I'm trying to avoid stopping and restarting the replication.
Thanks.
Shushant
Hi Shushant,
I solved the duplicate-key error scenario as follows:
1. Split the number ranges manually and clicked Start Replication.
2. Some of the ranges overlapped, so I got the duplicate-key error.
3. I went to the Data Transfer Overview tab in transaction LTRC, selected the row of the failed table, and clicked the Edit (pencil) icon at the top left of that ALV table.
4. This opens another screen; maintain the value '3' in the Transfer Behavior field and save.
5. Go to the Expert Functions tab and reset the Failed status of this table replication.
6. This way the replication resumes. There is no need to stop and restart the replication.
Hi Amar,
Yes, that's pretty much what I executed.
I don't think we need to restart the replication at all if we are changing the ranges manually.
But make sure you change the advanced replication settings to 'No Action' for that table:
Transaction LTRS -> go to Table Settings -> add your table there -> maintain Refresh Behavior as 'N No Action'.
After that you can start your replication again. This way, the target-system data that is already loaded is not deleted.
Thanks, Sarang. I assume you meant to resume the load and not restart the replication, right?
Your inputs seem to solve the issue. There is no need to reload the existing records, and it has started adding further records; I'll monitor the load though. Besides, I didn't need to reset the Failed status, as the system takes that into consideration. I just changed the transfer behavior to 3 and maintained the refresh behavior as 'N No Action'.
One last follow-up question. When I started the replication (it began with the initial load), I had created ranges up to 3 billion. However, it now seems that the records have crossed 3 billion.
Do you think I should create an additional range, or can I assume the system will pick up the delta above 3 billion automatically?
After all, that is what the replication task does: it starts with the initial load, if not done already, and then goes into replication status. Would that be correct to assume?
Regards,
Shushant
Hi Sarang,
It seems the replication is on and I'm no longer getting any errors. However, when I check the counts, I see a difference between ECC and HANA. I don't see any unprocessed logging-table records, but in the load statistics I do observe that the numbers of records read and inserted are not the same as the number of records calculated. Records calculated is higher than records read and inserted (which matches what we see when we compare the counts between ECC and HANA). There are no portions in error or in process.
When I select the table and check the view errors, I see execution failed with RC 3 (error writing the data) and then RC 9 (logging-table update failed in the sender system).
This seems to be the reason. However, there is no Failed status.
Do you know how this can be sorted out, so that the system reads and writes all the records calculated?
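(The three counters mentioned above can be reconciled mechanically to see at which stage records went missing. A minimal sketch; the figures are made-up examples, not actual values from this system.)

```python
# Illustrative sketch: reconcile the SLT load statistics counters
# (records calculated vs. read vs. inserted) to locate the gap.

def reconcile(calculated, read, inserted):
    """Report at which stage records went missing."""
    report = {}
    if read < calculated:
        report["not read from sender"] = calculated - read
    if inserted < read:
        report["read but not inserted"] = read - inserted
    return report

print(reconcile(calculated=1000, read=940, inserted=940))
# -> {'not read from sender': 60}
```

A gap between 'calculated' and 'read' points at the sender side (matching RC 9, logging-table update failed in the sender system), while a gap between 'read' and 'inserted' points at the write to the target (matching RC 3, error writing the data).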
Thanks.
Shushant
Hello Shushant,
Please let me know the steps you executed to set up the ranges.
Also, as this is your initial load, the logging tables do not come into the picture: the logging tables are used to store the delta (changed records), not the initial load.
Please let me know if this helps.
Regards
Amar Ghuge
Hi Amar,
Many thanks. I used the performance options in IUUC_REPL_CONTENT and created the ranges manually; initially, the last range overlapped the second-last one.
I didn't do a separate initial load. Instead, I started with the Replication option directly, and as I understand it, this first does the load, after which it starts replicating. That means the status is 'Replication (Initial Load)' first and later becomes 'Replication'.
The issue popped up while the status was still 'Replication (Initial Load)', which, as I mentioned, was due to the overlapping ranges.
My question is: now that I've fixed the ranges and kept them all mutually exclusive of each other, can I resume the process without stopping and starting it again?
Regards,
Shushant
Hello Shushant,
As per my understanding, a restart of the table replication would be required. Once you define the ranges and start the replication, the system first executes the pre-calculation job, which creates an execution plan for the table. As this job runs at an interval, it might not pick up the new changes; thus a restart would be required, or at least preferable.
Also, restarting the table replication won't take the same time, as the data is already available in HANA; as far as I know, it keeps a pointer from which it will continue the next steps of the load.
Please let me know your feedback on the same.
Amar Ghuge