on 2020 Jun 06 6:42 PM
I have the following installation:
1x Cons. running SQL Anywhere 10 (SQL10)
50x remotes running SQL Anywhere 17 (SQL17).
SQL Remote is running on both sides (bidirectional replication).
Now we are going to upgrade the Cons. to SQL17, but we want to leave the remotes intact and also avoid downtime on the remotes.
My plan is as follows (a rough command sketch follows the list):
1- Run DBRemote -s (send only) on Cons10 in order to send its last messages (up to the current offset) before converting to SQL17.
2- Run DBLOG against Cons10 and note the start and relative offsets.
3- Shut down the Cons., unload it, and load it into a new SQL17 DB.
4- Run DBLOG against the new Cons17 (set the start offset to the value noted in step 2).
5- Delete the transaction log files of the Cons17 and start it as the productive Cons.
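Roughly in commands (just a sketch with made-up names and connection strings; the exact DBLOG/DBUNLOAD switches still have to be verified against the docs):

    REM 1) send-only DBRemote run on the v10 cons
    dbremote -c "UID=dba;PWD=sql;ENG=cons10" -s

    REM 2) note the starting and current relative offsets of the v10 log
    dblog cons10.db

    REM 3) shut down the v10 server, unload the data and load it into a new v17 DB
    dbunload -c "UID=dba;PWD=sql;DBF=cons10.db" -r reload.sql c:\unload
    dbinit cons17.db
    dbisql -c "UID=dba;PWD=sql;DBF=cons17.db" reload.sql

    REM 4) set the new DB to the offsets noted in step 2
    REM    (as far as I remember, -z is the starting and -x the current relative offset)
    dblog -z <start-offset> -x <relative-offset> cons17.db

    REM 5) delete the log created during the reload and start the new productive cons
    del cons17.log
    dbsrv17 cons17.db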
Theoretically everything should work fine, since SYSREMOTEUSER is taken 1:1 from Cons10, and all the transactions resulting from the conversion process (the load into the empty SQL17 DB) will be ignored.
I want to ask the experts here: do you see any risk in this plan? Keep in mind that during all of the above steps the remotes will continue to run in production, and DBRemote will periodically run against them (but DBRemote on the Cons will start only after step 5).
I do not think your last remark is true; dbremote certainly must be able to cope with (offline) logs from older versions, because it must allow a resend of older messages if they are requested. See, even if you had sent all necessary operations from the current log before shutting down the v10 database, there is no guarantee that all remotes will receive and apply these messages without a need for a resend.
That being said, I would recommend two points:
Have you considered using dbunload -ar, so you do not need to adjust log offsets manually? (Note, this requires that you do not need to make changes to page sizes or the like.)
I would always use a test environment with copies of the cons and some remotes to check the update steps and to make sure everything will work as expected. Even when your steps seem reasonable, some detail may show up, and you don't want to end up having to re-extract all remotes. I'm absolutely sure Reg as the true replication expert will agree on that...
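As to the first point, just as a sketch (connection parameters are placeholders, and AFAIK dbunload -ar has to be run on the machine where the database files reside):

    dbunload -c "UID=dba;PWD=sql;DBF=cons10.db" -ar

-ar rebuilds and replaces the database in place and takes care of the log offsets itself, so your steps 2, 4 and 5 would basically go away.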
Regarding my last remark, I think my statement is true according to this test, even if the database version is the same (each database can anyhow recognize whether a transaction log file belongs to it or not), and in my case I will have a different Cons DB and also a different version!
Yes, I know that there is no guarantee, and I am almost sure that some of those 50 remotes will request a resend of the same messages. But I am considering making a copy of the messages sent by Cons10, so that I can feed them again to each remote that requests a resend!
Actually I don't feel safe in this phase, and I am not sure whether copying the messages and re-feeding them will solve the problem!
Note: Of course I can't read the contents of each message to figure out which remote requests a resend, and I will try to prevent such a resend request from reaching the Cons17; I will rely on the confirm_sent values to figure out which remotes have received and applied all the Cons changes up to the shutdown of the v10 database.
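For that confirm_sent check I have something like the following query in mind (column names from memory, still to be verified against the SYSREMOTEUSER documentation):

    SELECT u.user_name, r.log_sent, r.confirm_sent
      FROM SYS.SYSREMOTEUSER r
      JOIN SYS.SYSUSER u ON u.user_id = r.user_id
     ORDER BY u.user_name;

My assumption is that a remote has received and applied everything when its confirm_sent has reached the last offset of the Cons10 log.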
Yes I will use dbunload -ar.
Yes, I have prepared a test environment, but again, a successful test doesn't guarantee success in production.
Well, there's a misunderstanding here. The current (aka active) transaction log must fit the database: the database version used to create the database file(s) and the log file must match. Also, the backup log files needed for recovery must fit the database file version. That is what your test has shown.
But here the v10 log will only act as an offline log for replication purposes, and offline logs are allowed to be of older versions. (Otherwise, one would simply not be able to update a SQL Remote setup.) (Just to add: when updating from v9 or older to v10 and above, a particular library was necessary so the "new" dbremote could still read the old log file format.)
As to messages needed in case of a resend: these must be re-created by dbremote; AFAIK it will not do to simply copy the original message files, because the new contents will be different. And no worry: resending messages is part of SQL Remote's guaranteed message delivery system, so you don't have to take care of that or check message contents. You only have to make sure the offline logs are available as long as they might be needed.
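To illustrate what I mean by keeping the offline logs available (a rough sketch, names are made up): keep the backed-up/renamed v10 log files in one directory and point the v17 dbremote at it via the optional directory argument, e.g.:

    dbremote -c "UID=dba;PWD=sql;ENG=cons17" c:\cons\offline_logs

When a remote requests operations that are no longer in the active log, dbremote looks them up in the offline logs in that directory and builds the resend messages from there.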
OK, so you mean that DBRemote(17) can make use of the offline logs from Cons10!!
Yes, that was exactly my doubt: I was afraid that plain copies of the messages would not help, and that some resend requests from remotes would require the Cons to generate new messages.
I will finish my test (in the test environment) tomorrow and post the results here.
Regarding "OK, so you mean that DBRemote (v17) can make use of the offline logs from Cons10" - well, just to add some proof on that point:
See this older FAQ on SQL Remote with a v8 to v12 migration and Yufei's answer: for pre-v10 offline logs, SQL Remote needs access to a particular DLL (dboftsp.dll on Windows), whereas for v10 and above offline logs SQL Remote can read them by default. So v17 SQL Remote can read offline logs from v10 and above.
The docs also explain that detail in the deployment section.
I finished one test successfully.
I ran DBUNLOAD without -ar, since our migration takes over only the data (without the structure) from SQL10 into SQL17 (i.e. DBUNLOAD -d).
In order to take over the SQL Remote settings, I just did a dummy migration (i.e. DBUNLOAD -s) on the same SQL10 DB and then copied the section marked as "Create SQL Remote definitions" from reload.sql.
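Just to illustrate what that copied section contains, it is statements along these lines (the names below are made-up examples; the real ones of course come from the SQL10 reload.sql):

    CREATE REMOTE MESSAGE TYPE "FILE" ADDRESS 'cons';
    GRANT PUBLISH TO "cons_publisher";
    GRANT REMOTE TO "remote01" TYPE "FILE" ADDRESS 'remote01';
    CREATE PUBLICATION "pub_all" ( TABLE "dba"."orders" );
    CREATE SUBSCRIPTION TO "pub_all" FOR "remote01";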
Of course I maintained the same offset range between SQL10 and SQL17, as described in the manuals, using DBLOG.
I am going to do further tests before applying it in productive.
What makes my test significant is that SQL10 had not been completely replicated to the remotes, so the SQL17 DBRemote had to create the old messages (after reading the old SQL10 transaction logs).
Actually, I posted this question because I was not expecting that the SQL17 DBRemote could create messages from the transaction logs of a SQL10 DB.
Thanks for your help....