
Question on BIGINT conversion exercise Sybase ASE 15.7

Former Member


We are currently in the process of converting production database tables from INT to BIGINT using the Sybase ALTER TABLE command.

One of the tables has 1.5 billion rows, which we were unable to convert within a single weekend time frame (the ALTER ran for almost 43 hours before we killed it and restored the database from dumps).

Is there any other way to convert this table to BIGINT quickly? The table has clustered, unique, and non-clustered indexes on it.

I am considering the solutions below. Can you suggest the best way to do this?

  1. Create a new column, update it with the old values, then drop the old column and rename the new one.
  2. BCP out the data, truncate the table, alter the table structure to the new data type, then BCP the data back in.
  3. Partition the table into smaller tables, alter each of them, then merge them back together.
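For reference, option 1 might be sketched like this through isql (server, login, database, table, and column names are all placeholders, not from the original post). Note two caveats: a single UPDATE over 1.5 billion rows will need enormous log space unless it is batched, and on ASE dropping a column is itself a data-copy operation, so this route may not beat ALTER in practice.

```sh
# Hypothetical sketch of option 1; all names are placeholders.
# In production the UPDATE would normally be batched (e.g. by key range)
# to keep the transaction log manageable.
isql -UMYUSER -SMYSERVER <<'EOF'
use mydb
go
alter table big_table add id_new bigint null
go
update big_table set id_new = id_old
go
alter table big_table drop id_old
go
sp_rename "big_table.id_new", id_old
go
EOF
```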



Accepted Solutions (0)

Answers (1)


Former Member


I'd go for BCP, but I wouldn't bcp the table out into a file and then bcp it back in, as that would take too long. Instead, bcp out and in at the same time.

Try this.

1) Create a new table (with the column as BIGINT).

2) Use sqsh (or Unix named FIFO pipes) to select the data and bcp it at the same time. (sqsh with "\bcp" is easier, and most places I've worked have it installed.)

3) Drop the old table.
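Step 2 hinges on a named pipe: a writer and a reader attached to the same FIFO run concurrently, so the data streams straight from the copy-out into the copy-in without ever landing in a flat file. A minimal runnable sketch of the mechanism follows, with printf/cat standing in for the two bcp processes; the real bcp command lines (with placeholder server and login names, not from the original post) are shown in the comments.

```shell
#!/bin/sh
# FIFO pattern: writer and reader run at the same time on one named pipe.
set -e
PIPE=/tmp/bcp_pipe.$$
mkfifo "$PIPE"

# Writer side -- in production, roughly (placeholder names):
#   bcp mydb..old_table out "$PIPE" -c -UMYUSER -SMYSERVER &
printf 'row1\nrow2\nrow3\n' > "$PIPE" &

# Reader side -- in production, roughly:
#   bcp mydb..new_table in "$PIPE" -c -b 10000 -UMYUSER -SMYSERVER
cat "$PIPE"

wait
rm -f "$PIPE"
```

The -c flag keeps both sides in character mode so the stream formats match, and -b on the bcp-in commits in batches rather than one giant transaction.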

How many rows can you bcp per second?

Assuming you can get 10,000 rows per second, the whole process would take about 1.7 days.

At 20,000 rows per second you're looking at around 21 hours, which is pretty comfortable.
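The arithmetic behind those estimates, for 1.5 billion rows at an assumed steady rate, is easy to check:

```shell
# Hours to move 1.5 billion rows at a given steady bcp rate
# (integer division, so the results round down).
ROWS=1500000000
echo $((ROWS / 10000 / 3600))   # 41 hours, i.e. about 1.7 days
echo $((ROWS / 20000 / 3600))   # 20 hours
```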

Personally, I'd partition the new table and run two or more BCPs in parallel.

What are your underlying disks based on, SAN or direct-attached? What are the write times per page? Note: it will likely be quicker at the weekend.

On a SAN, I'd suggest four parallel BCPs as optimal, depending on your setup.

If you're not using sqsh, give it a go; it's got some superb features.

Hope this helps.


Former Member

Hi Mike,

Thanks for your reply.

We have SAN at our side.

I think creating the indexes on the new table will take more time, as the table is huge. Do you think the total time for index recreation will be less than for the ALTER?

We have 1 clustered and 3 non-clustered indexes on this table.



Former Member

> We have SAN at our side.

Check the performance of the SAN: writes should be fast, i.e. faster than reads.

Personally, in my experience, I'd create the new table with all the indexes already in place; trying to create them afterwards will take too long. At the very minimum, create the clustered index.

Even with all the indexes, you should be able to BCP at over 20,000 rows/second (although it will depend on the speed of your server, SAN contention, and whether the SAN is using something like SRDF).

At that speed, using sqsh to bcp out and bcp in in parallel, you should be looking at well under 24 hours. To get the time down further, partition the input table and run two bcp's.
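A two-way parallel run might be sketched like this, splitting the copy-out by row range with bcp's -F (first row) and -L (last row) flags. All server, login, and table names are placeholders; this also assumes the two scans see a consistent row order, and if the new table is partitioned, each bcp-in can target its own partition.

```sh
# Hypothetical two-way parallel copy through two FIFOs (all names
# are placeholders). -F/-L split the source by row number; -b sets
# the bcp-in batch size so each batch commits separately.
mkfifo /tmp/p1 /tmp/p2
bcp mydb..old_table out /tmp/p1 -c -F 1         -L 750000000  -UMYUSER -SMYSERVER &
bcp mydb..old_table out /tmp/p2 -c -F 750000001 -L 1500000000 -UMYUSER -SMYSERVER &
bcp mydb..new_table in  /tmp/p1 -c -b 10000 -UMYUSER -SMYSERVER &
bcp mydb..new_table in  /tmp/p2 -c -b 10000 -UMYUSER -SMYSERVER &
wait
rm -f /tmp/p1 /tmp/p2
```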

Hope this helps.