
dbxtract from a copy of the consolidated db

Former Member

Hello,

We have worked with SQL Anywhere since the 5.5 versions and have also used DBREMOTE-style replication with very good success.

Now the databases have grown, and the extraction process for a new remote takes 1-2 hours, which most of the time can be done during off-hours. But more and more often we need to extract new remotes during normal working hours. This of course blocks the consolidated database, which is not exactly optimal 😉

We have now tested extraction against a copy of the db, and it works fine. But we had to do the following:

  1. Create/start the remote user
  2. Shut the consolidated database down
  3. Make a copy of the db files
  4. Restart the consolidated database

Of course this causes a problem in steps 2-4, since the consolidated DB is down during that time, which is not OK during working hours.
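For steps 2-4 this is roughly the following in batch form (server name, engine version, paths and credentials are just placeholders):

rem stop the consolidated database - this is the part that blocks the users
dbstop -y -c "ENG=cons;UID=DBA;PWD=placeholder"

rem copy the database and transaction log files to a working directory
copy D:\DATA\cons.db  D:\EXTRACT\cons.db
copy D:\DATA\cons.log D:\EXTRACT\cons.log

rem restart the consolidated database for normal work
dbsrv12 -n cons D:\DATA\cons.db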

My question now is: is it possible to use dbbackup to make the backup? We tried it 2-3 times, but then we always got missing message errors for the new remote.

I also looked at http://sqlanywhere-forum.sap.com/questions/7301/replication-extract-from-a-db-backup-or-similar and http://dcx.sybase.com/index.html#1201/en/sqlremote/sr-deploy-s-4842482.html

But it is not clear to me how we can get a copy of the db without shutting down the consolidated database, or which dbbackup options would be required...
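What I have in mind is a live image backup of the database and log along these lines (connection details are placeholders), but it is not clear which further options, if any, are needed so that the extracted remote does not report missing messages:

rem live image backup of database file and transaction log into a working directory
dbbackup -y -c "ENG=cons;UID=DBA;PWD=placeholder" D:\EXTRACT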

Accepted Solutions (1)


VolkerBarth
Contributor

In my experience, making a backup from the live database while leaving it running should work. We did it that way with v8.0.3 for several years before switching to a custom extraction process. The latter made the extraction much faster (sometimes by a factor of 100), so we could go back to extracting from the live database... (As such, running DBXTRACT -v to check the underlying extraction statements is recommended, to find out whether these statements may be suboptimal...)
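For example, a run along these lines (connection string, output directory and remote user name are placeholders) echoes the generated statements so you can judge them:

rem verbose extraction for remote user remote1; -v shows the extraction statements
dbxtract -v -c "ENG=cons;UID=DBA;PWD=placeholder" D:\EXTRACT\remote1 remote1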

What we did was not use DBBACKUP but the BACKUP DATABASE statement, and with v8 it had far fewer options than it has now, so I can't tell which options are required. Immediately afterwards, we did a REMOTE RESET for all newly created remotes, and then a commit.

We made sure that the consolidated database would not run SQL Remote in the meantime.

FWIW, here is a SQL script that performs just the necessary steps on the live database.

The newly created remotes are filtered here via a user-defined table called T_Replikat.

-- Create a copy of the consolidated database
-- as the basis for extracting remotes,
-- without disturbing normal operation of the original database.
--
-- The copy of the database is created at isolation level 3 (serializable).
set option ISOLATION_LEVEL = 3;
checkpoint;
backup database directory 'D:\\DATA\\BACKUP\\ZuDbExtract';

-- and then, in the current DB, perform a REMOTE RESET for all newly created/re-created remotes,
-- followed by a commit;
begin
   for forNeueReplikate as crsNeueReplikate cursor
      for select user_name
            from sysremoteusers SRU inner join T_Replikat R on SRU.user_name = R.ReplikatName
            where flagIstFreigegeben = 0 and R.datLetzteAenderung > current date
      for read only
   do
      message 'Performing REMOTE RESET for remote "' || user_name || '"'
         type info to client;
      execute immediate 'remote reset "' || user_name || '";';
   end for;
   commit;
end;

set option ISOLATION_LEVEL = 0;
commit;
-- Afterwards the current DB may replicate again; the new remotes are created
-- from the copy.
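The copy can then be started under its own server name and the new remotes extracted from it, roughly like this (engine version, file names and credentials are placeholders):

rem start the backup copy under a separate server name
dbsrv12 -n ExtractCopy D:\DATA\BACKUP\ZuDbExtract\cons.db

rem extract the new remote from the copy instead of the live consolidated database
dbxtract -v -c "ENG=ExtractCopy;UID=DBA;PWD=placeholder" D:\EXTRACT\remote1 remote1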
Former Member

Thanks, will give it a try.

What are the steps for creating our own extraction process, and why is it so much faster?

VolkerBarth
Contributor

Well, I don't think that a custom extraction process is generally faster. However, you might have a look at the statements DBXTRACT is using: They are built from the article and subscription definitions for that remote.
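If you want to see where those statements come from, the article definitions can be queried from the catalog, roughly like this (an untested sketch against the SYS views):

select p.publication_name, a.table_id, a.where_expr, a.subscribe_by_expr
  from sys.sysarticle a
       inner join sys.syspublication p on a.publication_id = p.publication_id
 order by p.publication_name;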

We had the situation that we used many subscriptions per table for the same remote user, and that turned into joins with WHERE clauses containing lots of "... OR ... OR ..." conditions, which made them perform very badly, particularly when there are several parent/child levels.

Looking at those statements made it very easy to understand why the extraction took so long, and to rewrite them to test improvements.

A custom extraction might simply use a temporary table to store the relevant "subscribe by" values and then join directly with the required tables.
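As a rough sketch (the Orders table and its Region column are made-up names, and the catalog query is an untested assumption), such a custom extraction could look like this:

-- collect the remote user's SUBSCRIBE BY values once in a temporary table
declare local temporary table SubscribeByValues (
    SubscribeByValue varchar(128) not null primary key
) on commit preserve rows;

insert into SubscribeByValues
    select distinct s.subscribe_by
      from sys.syssubscription s
           inner join sys.sysuserperm u on s.user_id = u.user_id
     where u.user_name = 'remote1';

-- then unload each article by joining against those values
-- instead of a long "... OR ... OR ..." WHERE clause
unload
    select O.*
      from Orders O
           inner join SubscribeByValues S on O.Region = S.SubscribeByValue
    to 'D:\\EXTRACT\\remote1\\Orders.dat';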

(I don't know if this is all too vague; I hope you get the idea.)


For further details, you may refer to this FAQ...

Former Member

Hello Volker,

yep, I got the idea of what can perhaps be optimized.

Thanks

André

Answers (0)