How can I "merge" multiple updates on the same row with SQL Remote?

Baron
Participant

Preface: This was originally a sub-comment on the question "DBREMOTE Messages 36^3 not enough". I think it's helpful to ask it as a separate question... // Volker

I found out that the reason for this big message size is something like an endless loop, in which our application does not match the working principle of DBREMOTE. So the situation is as follows:

We have one consolidated database replicating against 100 remotes. Each remote updates the value of a specific row in one table very frequently, and this table in turn replicates in both directions.

The statements are something like:

update TABLE1 set Col1 = 1;
update TABLE1 set Col1 = 2;
...
update TABLE1 set Col1 = 1000;

This block means that DBREMOTE replicates the 1000 UPDATE statements from Remote1 to the consolidated, where the server applies these updates itself and then replicates the same updates again for each of the remaining 99 remotes.

The question is: is there any way to tell DBREMOTE to ignore the intermediate updates and consider only the last one?

I know that with SYNCHRONIZE SUBSCRIPTION I can achieve what I want as a one-time action, but are there maybe better ideas?
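For reference, the statement meant here looks something like this, run at the consolidated (the publication and user names are placeholders only):

SYNCHRONIZE SUBSCRIPTION TO pub_table1 FOR remote_user_1;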

Accepted Solutions (0)

Answers (2)


Breck_Carter
Participant

How to fool SQL Remote: (1) Don't include the heavily-updated table1.column in the publication. Create a "shadow" table2.column that looks just like table1.column. Include table2.column in the publication. Just before synchronization occurs, update table2.column with the latest value in table1.column so that update will appear as a single operation in the upload... possibly do this in a hook procedure.
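A very rough, untested sketch of that idea (all table, column and publication names below are invented, and a single-column primary key is assumed since the real key of TABLE1 is not shown here):

-- shadow table holding only the key and the heavily-updated column
CREATE TABLE table2 (
    pk   INTEGER PRIMARY KEY,   -- same key as table1 (assumption)
    col1 INTEGER                -- shadow of the heavily-updated table1.col1
);

-- publish the shadow column instead of the original one
CREATE PUBLICATION pub_shadow (
    TABLE table2
);

-- shortly before the messages are built, copy over the latest value,
-- so only one UPDATE per row ends up in the upload
UPDATE table2
   SET col1 = (SELECT col1 FROM table1 WHERE table1.pk = table2.pk);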

Caveat emptor: That technique has not been tested. Also, a lot more work must be done to solve (2) how the uploaded update is handled on the consolidated, (3) how downloaded updates are to be created on the consolidated and (4) how downloaded updates are to be handled on the remote.

FWIW far more complex trickery has been done with MobiLink, but the upload update scripting feature makes it much easier than SQL Remote.

Playing tricks with SQL Remote can be very difficult to code, especially if performance problems pop up.

Baron
Participant

In this case I have to replicate table2 on the remote against table2 on the consolidated, and I also need a trigger on the consolidated that transfers the changes from table2 back to table1 (within the consolidated); then I will have an indirectly synchronized table1.

What about if I need conflict resolution triggers (inventory): is it enough if I define the trigger only on the directly replicated table2?

I think it is enough, but just to make sure!!

Breck_Carter
Participant

Since table1.column is not replicated any more, table2 is the only candidate for conflict resolution. What you will have to do is propagate the result of the conflict resolution to table1.column. HTH
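For illustration, the propagation on the consolidated could look something like this (untested; trigger, table and key names are invented, and a single-column key is assumed):

CREATE TRIGGER trg_table2_to_table1
AFTER UPDATE OF col1 ON table2
REFERENCING NEW AS new_row
FOR EACH ROW
BEGIN
    -- push the (possibly conflict-resolved) value into the unpublished table1
    UPDATE table1
       SET col1 = new_row.col1
     WHERE table1.pk = new_row.pk;
END;

(IIRC, SQL Remote replicates the actions of triggers and therefore does not fire triggers while applying messages by default, so check whether that behaviour has to be changed for the propagation to happen at all.)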

VolkerBarth
Contributor

SQL Remote does replicate each operation on a published table as is, so AFAIK there is no way to "merge" those updates into one. That is a fundamental difference from MobiLink, which does this "merging" by default.

Two approaches:

  1. Operations on published tables done within a SQL Remote hook procedure are not replicated at all.

  2. Move the updates to an unpublished shadow table and just apply the final update to the real table.

Whether those approaches could work is beyond my knowledge, because I don't know why your app does this sequence of updates on the same row...
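A rough, untested sketch of approach 2 (all names are invented, and a single-column key is assumed):

-- unpublished shadow table that receives the frequent intermediate updates
CREATE TABLE table1_shadow (
    pk   INTEGER PRIMARY KEY,
    col1 INTEGER
);

-- the application updates table1_shadow instead of the published table1, e.g.
-- UPDATE table1_shadow SET col1 = 42 WHERE pk = 1;

-- once, shortly before dbremote runs, apply only the final value to table1,
-- so a single UPDATE per row goes into the transaction log and into the messages
UPDATE table1
   SET col1 = (SELECT col1 FROM table1_shadow WHERE table1_shadow.pk = table1.pk)
 WHERE EXISTS (SELECT 1 FROM table1_shadow s
                WHERE s.pk = table1.pk AND s.col1 <> table1.col1);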

Baron
Participant

What about if I can manage to delete the row and re-insert it before starting DBREMOTE? In this case I think DBREMOTE will merge everything (1000 updates + 1 delete + 1 insert) into one (1 delete + 1 insert). Am I correct?

VolkerBarth
Contributor

No, any operation on a published table will be replicated by SQL Remote. If you insert a million rows and delete them afterwards, both operations will still be replicated...

VolkerBarth
Contributor

I have learnt to think: if an operation has been put into the transaction log, SQL Remote will replicate it, no matter whether the corresponding row is still in the database or not. In that respect SQL Remote is rather, hm, unsmart...

Breck_Carter
Participant

SQL Remote is different. If you update a one-byte column in a table with a 100M blob, SQL Remote will replicate the one-byte update but not the blob... whereas MobiLink will synchronize the entire row including the unchanged blob... so who's smart now? 🙂

regdomaratzki
Product and Topic Expert

It's not so much unsmart as by design. SQL Remote was designed to replicate every operation in the database. If it goes in the transaction log, it will be sent.

If that's not what you want, you should use MobiLink which synchronizes only the changes since the last synchronization.

regdomaratzki
Product and Topic Expert

The smartest people get the best of both worlds and use MobiLink but separate their large blobs into another table with a 1-1 relationship, so that the blob is only synchronized if it changes.
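For example, something like this (just a sketch; table and column names are invented):

CREATE TABLE document (
    doc_id     INTEGER PRIMARY KEY,
    title      VARCHAR(200),
    changed_at TIMESTAMP
);

-- the large blob lives in its own table, 1-1 with document,
-- so its row is only uploaded when the blob itself changes
CREATE TABLE document_blob (
    doc_id  INTEGER NOT NULL PRIMARY KEY REFERENCES document (doc_id),
    content LONG BINARY
);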

VolkerBarth
Contributor

Yes, that's why I used the term operations, not "modified rows" or the like..

Baron
Participant

What if the primary key of the table includes a 100M blob? Then DBREMOTE must include this 100M value in the message, and this is actually our case:

update TABLE1 set Col1 = 1 where VERYLONGCOLUMNNAME1 = verylongdoublevalue and VERYLONGCOLUMNNAME2 = verylongvarchar100value ....

Baron
Participant

I remember that I once read about some smart property of DBREMOTE: if several statements are applied to a specific row and the last statement is a DELETE, then DBREMOTE decides to ignore all the transient statements, since they end up being deleted anyway.

Maybe I am confusing things!!

Baron
Participant

In our case (and I think also in most other applications) the UPDATE statements originate from the remotes, and even in the case of MobiLink, DBMLSYNC uses the transaction log files, so I think even with MobiLink all the intermediate statements will be sent to the consolidated.

Does your explanation apply only to the consolidated (MLSRV10/17), or does it also apply to DBMLSYNC?

VolkerBarth
Contributor

It's the MobiLink client we refer to. It "merges" row modifications, i.e. it compares the state of a row after the last download with the state when building the upload. If a row has been updated several times within that time span, only the last state is part of the upload, including all columns. If a row has been inserted and afterwards deleted, it is not part of the upload at all.

The consolidated's behaviour is based on scripts, so there is no general rule there.

VolkerBarth
Contributor

Oops, that is a rather uncommon design decision... Is there a particular reason for that?

VolkerBarth
Contributor

I guess that relates to MobiLink, see my other current comment..

Baron
Participant

Yes, thank you

Baron
Participant

Not all the tables are optimally designed!!