on 2011 Dec 12 8:02 AM
We have a consolidated database and remote database(s) running SQL Anywhere 12 and using SQL Remote for replication. In this example both tables are identical on each site and are members of the publication.
TableA is used by our applications, so rows will be added directly to this table. In addition we have TableB, which is used by another application. Both tables reside in the same database.
My scenario is a trigger-based synchronization from TableB to TableA (comment: trigger-based synchronization might not be the best option, and I am open to other solutions). When a row is updated/deleted/inserted in TableB, the same change shall appear in TableA.
Problem: TableA's PK is a GLOBAL AUTOINCREMENT value, so an insert into this table will generate a new ID, say 1. When SQL Remote transfers this to the other database, the trigger on the other database will generate a new ID, say 2. We then have the situation where the same row has two different primary keys in those databases. Add 40 remote databases and the same row will have 42 different primary keys, one for each database. TableB does have its own primary key, but since this is generic integration logic we don't want to reuse primary keys from other systems.
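A minimal sketch of the kind of trigger this describes (the column names are made up for illustration; assume TableA carries a value column c1 that mirrors TableB's c2):

create trigger ai_tableB after insert on tableB
referencing new as nr for each row
begin
    -- tableA.pkA has DEFAULT GLOBAL AUTOINCREMENT; because pkA is omitted
    -- here, every database that fires this trigger generates its own,
    -- different pkA for what is logically the same row
    insert into tableA(c1) values (nr.c2);
end;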
I am aware of the option to replicate trigger-based transactions, but I cannot see how that will solve the problem; the only result, as far as I can see, is that both row 1 (from the trigger insert at the remote) and row 2 (from the trigger insert at the local site) will exist in the database.
I am trying to write down some logic that allows me to keep the AUTOINCREMENT value generated at the remote site and use this for the trigger-generated INSERT at the local site.
Replicating trigger-generated transactions (dbremote -t) and wrapping the trigger body in IF CURRENT REMOTE USER IS NULL did the trick 🙂
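A sketch of what this could look like, using the same made-up column names as above (dbremote must be run with -t so that the trigger actions themselves are replicated):

create trigger ai_tableB after insert on tableB
referencing new as nr for each row
begin
    -- CURRENT REMOTE USER is non-NULL while SQL Remote is applying received
    -- messages, so the insert into tableA only happens on the site where the
    -- tableB row was originally created; that insert (with its generated pkA)
    -- is then replicated by dbremote -t and arrives everywhere else with the
    -- same primary key
    if current remote user is null then
        insert into tableA(c1) values (nr.c2);
    end if;
end;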
On the assumption that TableA and TableB are both in the publication, the simplest solution might be to move the logic that inserts the row into TableA from the insert trigger of TableB into your application, or possibly into a stored procedure.
Old Schema:

create table tableA (
    pkA integer primary key default global autoincrement,
    c1  integer not null
);

create table tableB (
    pkB integer primary key default global autoincrement,
    pkA integer not null references tableA,
    c2  integer not null
);

create trigger ai_tableA after insert on tableA
referencing new as nr for each row
begin
    insert into tableB(pkA, c2) values (nr.pkA, nr.c1);
end;
New Schema (note: the table definitions are unchanged):

create table tableA (
    pkA integer primary key default global autoincrement,
    c1  integer not null
);

create table tableB (
    pkB integer primary key default global autoincrement,
    pkA integer not null references tableA,
    c2  integer not null
);

create procedure insert_tableA ( in @c1 integer )
begin
    insert into tableA(c1) values (@c1);
    insert into tableB(pkA, c2) values (@@identity, @c1);
end;
So, in your application, instead of executing "insert into TableA(c1) values (10)" you would execute "call insert_tableA(10)".
There seems to be a misunderstanding here:
(GLOBAL) AUTOINCREMENT is a column DEFAULT - i.e. if you add a new row and don't specify a value for that column (say, pkA), it will get a value automatically, and you will then usually have to use "SELECT @@identity" to get the freshly generated value. However, if you insert a row and specify a value for that column, exactly that value will be inserted (provided there's no PK violation, obviously!).
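A small illustration of that distinction (the table and values are just examples):

create table t (
    pk integer primary key default global autoincrement,
    c1 integer
);

insert into t(c1) values (10);           -- pk omitted: the default generates it
select @@identity;                       -- returns the freshly generated pk value

insert into t(pk, c1) values (500, 20);  -- pk supplied: exactly 500 is stored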
When SQL Remote runs, it sends complete SQL statements to the remote databases (unless you have omitted the pkA column from the article definition), including the generated default values, so the pkA value generated in the first database will be inserted into the remote database as-is. As a consequence, the row should be stored with identical values at each site it belongs to.
Your replication setup needs to have the same rows identified with the same PK; otherwise something strange is happening...
The reason to use DEFAULT GLOBAL AUTOINCREMENT with appropriate database IDs is just to ensure that two different databases won't create the same PK values for their rows. By design, rows created at different remotes are different rows.
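For the record, that partitioning comes from the GLOBAL AUTOINCREMENT partition size in the column definition plus the public global_database_id option; a made-up example:

-- each database gets a partition of 1,000,000 key values
create table t (
    pk integer primary key default global autoincrement (1000000),
    c1 integer
);

-- a remote whose database id is 5 will generate pk values 5,000,001 .. 6,000,000
set option public.global_database_id = 5;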
Just ignore most of my above statements (although they should be correct as such) - I have completely missed the "trigger-generated" part - sorry!
That being said, I have to note that trigger-generated actions are usually not replicated (unless you run DBREMOTE with -t).
So if I understand your description right, it's the trigger logic running at each remote that creates these unwanted distinct PKs?
Then you might delay the trigger logic on the originating site, e.g. by putting the logic into a SQL Remote hook procedure. That would mean you don't do the insert into TableA via a trigger; instead you somehow "make a note" of which TableB rows have to be "cloned", and then you use a SQL Remote hook procedure (like sp_hook_dbremote_receive_end) to actually build these missing rows in TableA.
As these operations are not done within a trigger, they are replicated as usual - and as wanted here.
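A very rough sketch of that idea, for the originating site only, with made-up names for the "note" table (just to show the shape, not a complete design - among other things, the note-making would have to be suppressed while dbremote applies messages at the receiving sites):

-- note table recording which tableB rows still need a tableA counterpart
-- (this table would not be part of the publication)
create table tableB_pending (
    pkB integer not null primary key references tableB
);

create trigger ai_tableB_note after insert on tableB
referencing new as nr for each row
begin
    -- the trigger only makes a note; it does not touch tableA
    insert into tableB_pending(pkB) values (nr.pkB);
end;

-- called by dbremote after messages have been received and applied;
-- the inserts happen outside of any trigger, so they (and their generated
-- pkA values) are replicated like ordinary operations
create procedure sp_hook_dbremote_receive_end()
begin
    insert into tableA(c1)
        select b.c2
          from tableB b
               join tableB_pending p on p.pkB = b.pkB;
    delete from tableB_pending;
end;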
Yes, it's the trigger logic at each site that will create one unwanted PK and ignore the wanted PK.
The SQL Remote hook procedures could be a viable option, though SQL Remote runs at certain intervals to ensure that all messages have arrived from the remote/consolidated site before replication is triggered. With a hook procedure, the data will not be present in our system until the next time SQL Remote has run, but that might not be a problem after all.
I will check this with the design team and see how they respond.
Thank you.
I think that using triggers one way or another is the only solution.
If we perform a table scan with a scheduled event or SQL Remote hook procedures, there is still a chance that the receiving database will get the insert for TableB but not for TableA due to missing messages (our clients are vessels on GSM/VSAT links, so that happens a lot). The insert for TableB will be accepted since it is a separate transaction, and the receiving database might then generate a new row for TableA on its own (on the next replication or event schedule) while waiting for the missing message to arrive again.