Best Practices for Migrating Sybase ASE 15.5 from SUN Solaris to RH6

Former Member

I am currently running: "Adaptive Server Enterprise/15.5/EBF 18657 SMP ESD#4/P/Sun_svr4/OS 5.8/asear155/2545/64-bit/FBO/Thu Jun 16 10:29:40 2011" with the following database sizes:

267,000.0 MB

122,000.0 MB

  78,500.0 MB

and, of course, many more that are much smaller in size. I am looking for best practices, detailed steps (and gotchas) for moving these databases from Sybase ASE 15.5 on SUN Solaris to ASE 15.7 on Linux (RH6 64-bit).  I used the cross-platform dump-and-load many years ago to migrate the same databases from Windows to SUN, and now need to move to the company's enterprise platform.  I will begin testing the process and procedures in a test environment next week.

By-the-way, what EBF/ESD/SP level should I install?

Thanks for any insight and direction.

Former Member


Thanks for your valuable input.  Many years ago I performed a cross-platform dump-and-load to move databases from Windows to SUN, and now I'm moving the same databases, albeit much larger, from SUN to Linux.  So I know some of the steps -- but thanks for the link.

I am not changing the dataserver page size, character set, or sort order, so the cross-platform dump-and-load process may be the best option.

I do have a replication environment, so any further details on how to minimize downtime/migration time would be great!

And YES, I am a glutton for punishment -- changing platforms and doing a minor upgrade (from 15.5 to 15.7) in the same migration.  I should start migrating the test environment next week.

Thanks again.



re: using replication to reduce cutover time ...

The gist of this approach is to get RS set up and working (from your 'old' Solaris PDB to your 'new' Linux RDB) prior to your cutover window. (The method outlined below may be performed while your PDB is up and running, ie, with little impact on PDB users.  NOTE: If you need to mark PDB blob data for replication then you may experience some performance hits and/or blocking in the PDB, ymmv.)

Once replication is up and functioning (ie, PDB activity is being replicated into the RDB), at cutover time you simply quiesce replication and point your applications/users to the Linux ASE.

------------- RS setup

I'm assuming you're running RS 15.6+.  RS 15.6 introduced some new features that make (re)syncing databases much easier than the older method of dropping/recreating the RDB's connection. [See RS 15.6 > New Features > Usability and Process Improvements > Replicate Database Resynchronization f... for details and example scenarios.]

NOTE: I don't know how you're using RS or how you've got your PDBs configured so modify the following to match your env; for migrations you typically want to replicate *all* DML, system procs (eg, sp_adduser, sp_addalias, etc), and possibly DDL (your call)

- (re)sync your logins (and roles) from the PDS to the RDS; if you could have a largish volume of activity in the master database (eg, adding/dropping logins, changing passwords) then you may want to consider setting up replication from the PDS master database to the RDS master database, ymmv

- create your RDB; don't bother with creating tables, ie, we just need somewhere to point a RS DSI connection

- add your PDB and RDB to replication (rsinit > add database connection; we'll be using MSA so don't worry about WS config req's)

- mark your entire PDB for replication (sp_reptostandby/ALL; keep in mind the performance issues for replicating blob data; if you upgrade to ASE 15.7 and RS 15.7.1 prior to the migration you can likely limit this overhead with improvements to blob replication)

- configure the PDB repagent to send warm standby xacts (to enable replication of DDL and system procs)

- create DB repdef for the PDB; enable system proc replication; enable DDL (if you wish to automatically replicate DDL)

- create DB subscription (w/out materialization) for RDB

- RDB's DSI should go down at some point due to missing tables ... this is ok

- stop the PDB repagent

- (suspend the RDB's DSI connection if not already suspended); resume the RDB's DSI connection with 'skip to resync marker'; 'admin who' should show the DSI connection with a status of 'SkipUntil Resync' (I believe the RS errorlog will also display a message about the DSI connection waiting for a resync marker)

- in the PDB generate the resync marker by running sp_start_rep_agent/resync; when the resync marker is received by the RDB's DSI you should see a message in the RS log about the RDB's DSI having received the resync marker and now waiting for the dump marker ('admin who' should also show the RDB's DSI as waiting for dump marker)

- in the PDS perform your database dump of the PDB; keep in mind that 'dump tran' (and 'trunc log on chkpt' ??) will also generate a dump marker so make sure you've (temporarily) disabled any jobs that periodically dump the log so as to keep from sending the 'wrong' dump marker to RS

- wait for RDB's DSI connection to receive the dump marker (ie, RDB's DSI connection is automatically suspended)

- load your database dump into the RDB; online the RDB database (and perform any post-XPDL operations as needed)

- once the RDB database is online make sure you re-add your maintenance user (eg, alias to dbo), disable the secondary trunc point in the RDB (brought over in the dump), and I would also suggest truncating the rs_lastcommit table to make sure you haven't brought across any unwanted/undesirable generation numbers that could cause your RDB's DSI to erroneously discard txns from the PDB

- resume the RDB's DSI

NOTE: You'll need to keep an eye on the RS stable queue as it will start to fill up while you're performing the XPDL; the amount of RS stable space needed will depend on the volume of activity coming out of the PDB ... so (obviously?) performing this initial setup/XPDL during a period of light activity in the PDB may be of benefit
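The resync-marker portion of the steps above might look roughly like the following from isql; the server name (LINUX_DS), database name (mydb), and dump path are placeholders, and the command names are as I recall them from the RS 15.6+ resync feature, so verify against your docs:

```sql
-- In RS: resume the RDB's DSI in resync mode (it then waits for a resync marker)
resume connection to LINUX_DS.mydb skip to resync marker
go

-- In the PDB (Solaris): restart the repagent so it sends the resync marker
sp_start_rep_agent mydb, 'resync'
go

-- In the PDS: take the dump that generates the matching dump marker
dump database mydb to '/dumps/mydb.dmp'
go
```

After the dump marker arrives, the RDB's DSI suspends automatically and you can load the dump on the Linux side.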

------------- RS verification

- run rs_ticket in the PDB and make sure the ticket shows up in the RDB; this indicates that you've (at a minimum) got a functioning replication path from the PDB to the RDB

Once you've got replication up and functioning I'd suggest you run some validation checks to make sure your data is in fact being replicated (eg, no problems with generation number issues, no processes running with 'set replication off', etc).  When and how (often) you run the checks is up to you (eg, simple row counts, rs_subcmp comparisons - though I'd recommend Data Assurance over rs_subcmp, etc).
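A minimal sketch of the rs_ticket check (the header string is arbitrary, and the rs_ticket_history query assumes the default maintenance-user schema in the RDB):

```sql
-- In the PDB: push a ticket down the replication path
exec rs_ticket 'migration_check'
go

-- In the RDB: the ticket should appear as the newest row in rs_ticket_history
select top 1 * from rs_ticket_history order by cnt desc
go
```

If the ticket row shows up in the RDB, you have (at a minimum) a functioning end-to-end replication path.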

------------- cutover

At the time you wish to perform the cutover ...

- make sure all user activity has been halted in the PDB; I typically like to shut down and bring up the PDS on a different port # using a custom/one-off interfaces file ... making sure that other DBA processes (eg, your isql/GUI login session, monitoring scripts, repserver) know about the new port number

- run rs_ticket in the PDB and make sure the ticket shows up in the RDB; this indicates that all transactional activity has made its way from the PDB to the RDB

- if you're not replicating the master database then consider one last (re)sync of the master database tables (eg, syslogins, syssrvroles, sysroles, sysloginroles, sysattributes for any login specific info??, etc)

- switch users/applications to the new RDS (ie, update their interfaces/sql.ini files)
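To illustrate the one-off interfaces trick from the first cutover bullet, the temporary DBA-only interfaces file entry might look like this (server name, host, and port are placeholders; the standard file keeps the old port so stray user connections fail):

```
# one-off interfaces file, used only by DBA tooling/RS during cutover;
# PDS has been restarted listening on 5999 instead of the usual 5000
SOL_DS
	master tcp ether solhost 5999
	query tcp ether solhost 5999
```

Users and applications still pointing at port 5000 simply can't connect, which guarantees no new PDB activity sneaks in during the final rs_ticket check.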

------------- fallback ?

If you're planning on being able to fallback from Linux to Solaris then you should be able to use RS for this, too.

- you should be able to setup the Linux DB repdefs and Solaris DB subscriptions beforehand (to include Linux master to Solaris master db replication?)

- after the cutover and before letting users/applications in to the Linux DBs ... enable your repagent/secondary-trunc-pt in the Linux DB; stop the repagent and disable the secondary-trunc-pt in the Solaris DB, make sure the Linux DB is marked for replication (sp_reptostandby, config repagent for warm standby replication); suspend the Solaris DB's DSI connection; truncate the Solaris DB's rs_lastcommit table; resume the Solaris DB's DSI connection; run rs_ticket in the Linux DB and verify receipt in the Solaris DB

If you need to fallback to the Solaris DS then perform the same cutover steps as already mentioned above


Keep in mind that replication performance will likely be degraded for the following reasons:

- without repdefs to define table PKs, the entire column set will be passed in the DSI's 'where' clause; prior to RS 15.6 this requires the DBA to create, at a minimum, PK-only repdefs for all heavily replicated tables; with ASE 15.7 and RS 15.7.1, the repagent automatically sends PK data to RS thus alleviating the need for the PK-only repdefs

- the RDS should be configured to utilize statement cache, otherwise the RDS will spend an inordinate amount of time on query compilation overhead
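A sketch of the RDS statement-cache configuration (the sizes are placeholders to tune for your workload; 'statement cache size' is in 2K pages):

```sql
-- Enable the statement cache so the DSI's language statements can reuse
-- compiled plans instead of recompiling every replicated DML statement
sp_configure 'statement cache size', 51200
go
-- Replace literals with parameters so similar statements hash to one plan
sp_configure 'enable literal autoparam', 1
go
```

Without this, every replicated insert/update/delete arriving at the RDS is compiled from scratch, which can easily become the replication bottleneck.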


Obviously (?) the use of RS to minimize cutover time may be overkill for your smaller databases in those cases where you determine that XPDL is sufficient for the cutover.

Obviously (?) test the above in your non-production environment first ... as the above is a quick-n-dirty regurgitation from memory and assorted notes. (Alternatively, consider contracting that Parsons guy to help out! 😉)

Former Member

From the version string in the original post, it appears that the source is a Solaris/SPARC system.  But for the benefit of anyone not using SPARC: a Solaris-to-Linux migration from non-SPARC (x86/x64) Solaris is not actually a byte-swapped cross-platform migration, since both platforms are little-endian.

I've done some testing from
SunOS pedb 5.10 Generic_137112-03 i86pc i386 i86pc

Adaptive Server Enterprise/15.0.2/EBF 15658 ESD#4/P/Solaris AMD64/OS 5.10/ase1502/2528/64-bit/FBO/Sat Apr  5 11:32:17 2008 


Linux pedb3 2.6.32-431.el6.x86_64 #1 SMP Sun Nov 10 22:19:54 EST 2013 x86_64 x86_64 x86_64 GNU/Linux

Adaptive Server Enterprise/16.0 GA PL03/EBF 23024 SMP/P/x86_64/Enterprise Linux/ase160sp00pl02/3572/64-bit/FBO/Wed Aug 13 05:46:56 2014

I had read all about the cross-platform issues involving Solaris to Linux migrations, and was surprised when sp_post_xpload told me that my database "is not a database that was loaded from a byte-swapped architecture".  Upon reflection, of course, I realized why (duh).  Nevertheless, I don't recall any of the documentation, blog posts, etc. discussing Solaris to Linux migration mentioning the significance of SPARC Solaris vs. non-SPARC Solaris.
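For reference, the load-side sequence where that message appears might look like this (server, database, and dump-file names are placeholders):

```sql
-- On the Linux ASE: load the dump taken on Solaris
load database mydb from '/dumps/mydb.dmp'
go
online database mydb
go
use mydb
go
-- Rebuilds indexes only if the dump came from a byte-swapped (big-endian)
-- platform such as SPARC; for x86 Solaris -> x86 Linux it instead reports
-- the "not ... loaded from a byte-swapped architecture" message
sp_post_xpload
go
```

So sp_post_xpload is still safe to run from non-SPARC Solaris; it simply has no index rebuilding to do.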



Answers (1)


Former Member


Read about:

sp_dumpoptimize "archive_space = maximum"


I started using the sp_dumpoptimize option so that all of the pages exist in the dump image.
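A sketch of using it ahead of the migration dump (database name and dump path are placeholders):

```sql
-- Tell Backup Server to write all allocated pages to the archive rather
-- than optimizing them away, so the dump image is complete
sp_dumpoptimize 'archive_space = maximum'
go
dump database mydb to '/dumps/mydb_full.dmp'
go
```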

Former Member

Thanks Cory!  I will read up on sp_dumpoptimize and the other thread you recommended.  Have you migrated a database cross-platform in the 200GB+ range?

Active Contributor

Hi Joe,

You can also use the R3load option if it is an SAP system you are running, but I suspect it is not, given the version.

On RHEL6 you can also use hugepages.

Have a look at this blog for migration and this one for hugepages.

Is it an OLTP or OLAP DB that you want to migrate?
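For anyone following the hugepages suggestion, a minimal sketch of reserving them on RHEL 6 x86_64 (the page count is a placeholder -- size it to cover ASE's 'max memory'; 2 MB pages on x86_64, root required):

```
# Reserve huge pages persistently, then apply without a reboot
echo "vm.nr_hugepages = 8192" >> /etc/sysctl.conf
sysctl -p

# Verify the reservation took effect
grep Huge /proc/meminfo
```

The blog linked above covers the ASE-side configuration needed to actually make the dataserver use the reserved pages.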



Former Member

Jen:  thanks so much for the heads-up on hugepages -- I use them on our Oracle installs, but had not considered using them for this migration -- so I will enable them based on the blog you provided.  I had already perused the other blog regarding the migration.

This migration is for OLTP databases.

thanks again,