on 2019 May 01 6:57 AM
Preface: This was originally a sub-comment on this question: "DBREMOTE Messages 36^3 not enough". I think it's helpful to ask it as a separate question... // Volker
I found out that the reason for the large size of the messages is something like an endless loop: our application does not match the working principle of DBREMOTE. The situation is as follows:
We have one consolidated database replicating against 100 remotes. Each remote updates the value of a specific row in one table very frequently, and this table in turn replicates in both directions.
The statements are something like:
update TABLE1 set Col1 = 1;
update TABLE1 set Col1 = 2;
.
.
.
update TABLE1 set Col1 = 1000;
This block means that DBREMOTE replicates the 1000 UPDATE statements from Remote1 to the consolidated; the server in turn applies these updates itself and then replicates the same updates 99 times, once for each of the remaining 99 remotes.
The question is: is there any way to tell DBREMOTE to ignore the intermediate transactions and consider only the last one?
I know that with SYNCHRONIZE SUBSCRIPTION I can achieve what I want once, but are there perhaps better ideas?
How to fool SQL Remote: (1) Don't include the heavily-updated table1.column in the publication. Create a "shadow" table2.column that looks just like table1.column. Include table2.column in the publication. Just before synchronization occurs, update table2.column with the latest value in table1.column so that update will appear as a single operation in the upload... possibly do this in a hook procedure.
Caveat emptor: That technique has not been tested. Also, a lot more work must be done to solve (2) how the uploaded update is handled on the consolidated, (3) how downloaded updates are to be created on the consolidated and (4) how downloaded updates are to be handled on the remote.
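As the caveat above says, this is untested; the following is only a minimal sketch of step (1), the shadow-column idea. All names (table1, table2, pk, col1, CopyLatestToShadow) are hypothetical, and steps (2)-(4) are not addressed at all:

```sql
-- table1.col1 is heavily updated and NOT included in the publication.
-- table2 mirrors it and IS published.
CREATE TABLE table2 (
    pk   INTEGER NOT NULL PRIMARY KEY,
    col1 INTEGER
);

-- Run this immediately before dbremote builds the upload, so table2 shows
-- only the final value and the upload carries a single UPDATE per row.
CREATE PROCEDURE CopyLatestToShadow()
BEGIN
    UPDATE table2
       SET col1 = (SELECT t1.col1 FROM table1 t1 WHERE t1.pk = table2.pk);
END;
```

Whether this copy can live inside a SQL Remote hook procedure (as suggested above) or must run as a separate step before dbremote is started is exactly the kind of detail that would need testing.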
FWIW far more complex trickery has been done with MobiLink, but the upload update scripting feature makes it much easier than SQL Remote.
Playing tricks with SQL Remote can be very difficult to code, especially if performance problems pop up.
In this case I have to replicate table2 in the remote against table2 in the consolidated, and I should also have a trigger on the consolidated that transfers the changes from table2 back to table1 (within the consolidated); then table1 will be indirectly synchronized.
What about conflict resolution triggers (for inventory)? Is it enough if I apply the trigger only on the directly replicated table2?
I think it is enough, but I just want to make sure!
SQL Remote does replicate each operation on a published table as is, so AFAIK there is no way to "merge" those updates into one. That is a fundamental difference from MobiLink, which does this "merging" by default.
Two approaches:
1. Operations on published tables done within a SQL Remote hook procedure are not replicated at all.
2. Move the updates to an unpublished shadow table and apply only the final update to the real table.
Whether those approaches could work is beyond my knowledge, because I don't know why your app performs this sequence of updates on the same row...
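The second approach could be sketched roughly as follows; this is untested, and all names (work_table1, table1, pk, col1, ApplyFinalValue) are hypothetical:

```sql
-- Unpublished work table: the app performs its many intermediate
-- updates here, and none of them reaches the transaction-log-driven upload.
CREATE TABLE work_table1 (     -- NOT included in any publication
    pk   INTEGER NOT NULL PRIMARY KEY,
    col1 INTEGER
);

-- Once the value is final, apply it to the published table1 in a
-- single statement, so SQL Remote replicates exactly one UPDATE per row.
CREATE PROCEDURE ApplyFinalValue(IN in_pk INTEGER)
BEGIN
    UPDATE table1
       SET col1 = (SELECT w.col1 FROM work_table1 w WHERE w.pk = in_pk)
     WHERE table1.pk = in_pk;
END;
```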
I have learnt to think: if an operation has been written to the transaction log, SQL Remote will replicate it, no matter whether the corresponding row still exists in the database or not. In that respect SQL Remote is rather, hm, unsmart...
SQL Remote is different. If you update a one-byte column in a table with a 100M blob, SQL Remote will replicate the one-byte update but not the blob... whereas MobiLink will synchronize the entire row including the unchanged blob... so who's smart now? 🙂
It's not so much unsmart as by design. SQL Remote was designed to replicate every operation in the database. If it goes in the transaction log, it will be sent.
If that's not what you want, you should use MobiLink which synchronizes only the changes since the last synchronization.
The smartest people get the best of both worlds and use MobiLink but separate their large blobs into another table with a 1-1 relationship, so that the blob is only synchronized if it changes.
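That blob-separation design could look something like this; the table and column names (doc, doc_blob, doc_id, status, payload) are hypothetical:

```sql
-- The small, frequently updated columns live in the parent table.
CREATE TABLE doc (
    doc_id  INTEGER NOT NULL PRIMARY KEY,
    title   VARCHAR(200),
    status  INTEGER              -- updated often; cheap to synchronize
);

-- The large blob lives in a 1-1 child table, so MobiLink only uploads
-- this row when the blob itself actually changes.
CREATE TABLE doc_blob (
    doc_id  INTEGER NOT NULL PRIMARY KEY
            REFERENCES doc (doc_id),
    payload LONG BINARY          -- the 100M blob, rarely changed
);
```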
In our case (and, I think, for most other applications) the UPDATE statements originate from the remotes, and even with MobiLink, DBMLSYNC uses transaction log files, so I would think that even with MobiLink all the intermediate statements would be sent to the consolidated.
Does your explanation apply only to the consolidated (MLSRV10/17), or also to DBMLSYNC?
It's the MobiLink client we refer to. It "merges" row modifications, i.e. it compares the state of a row after the last download with the state when building the upload. If a row has been updated several times within that time span, only the last state is part of the upload, including all columns. If a row has been inserted and afterwards deleted, it is not part of the upload at all.
The consolidated's behaviour is based on scripts, so there is no general rule there.
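To illustrate that merging with a hypothetical example (row keys invented, and the upload description is conceptual, not actual dbmlsync output):

```sql
-- Executed on the remote between two synchronizations:
UPDATE TABLE1 SET Col1 = 1    WHERE pk = 42;
UPDATE TABLE1 SET Col1 = 2    WHERE pk = 42;
UPDATE TABLE1 SET Col1 = 1000 WHERE pk = 42;
INSERT INTO TABLE1 (pk, Col1) VALUES (43, 7);
DELETE FROM TABLE1 WHERE pk = 43;

-- The MobiLink upload then conceptually contains:
--   one UPDATE for row 42 carrying its final state (Col1 = 1000, all columns)
--   nothing at all for row 43 (the insert and delete cancel out)
```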
Oops, that is a rather uncommon design decision... Is there a particular reason for that?
I guess that relates to MobiLink, see my other current comment.