Adding a column to a table took 5+ hours

Former Member

Hello,

We use SQL Anywhere 12. We have a table called reservation, defined below. I added a column, promotion_rn, using the ALTER TABLE statement below. The statement took an hour on my local development machine, but when I ran it in production it took more than 5 hours. Could anyone offer any advice on why this may have happened, and on what I can do to reduce the time when adding columns to this table in future? I ran the statement in Interactive SQL (iSQL).

The reservation table contains approximately 750,000 rows.

The result of "call sa_table_fragmentation( 'reservation' )" gives the following:

TableName    rows     row_segments  segs_per_row
reservation  751,254  752,554       1.00173044003759
alter table reservation add promotion_rn unsigned int references promotion(promotion_rn) null;

CREATE TABLE "reservation" (
"reservation_rn" UNSIGNED INT NOT NULL DEFAULT AUTOINCREMENT,
"reservation_id" VARCHAR(255) NOT NULL UNIQUE,
"reservation_dscrptn" "text" NOT NULL,
"reservation_cntct_nme_frst" "text" NOT NULL,
"reservation_cntct_nme_lst" "text" NOT NULL,
"reservation_cntct_eml" "text" NOT NULL,
"reservation_cntct_phne" "text" NOT NULL,
"reservation_dte" "datetime" NOT NULL,
"reservation_dte_crtd" "datetime" NOT NULL,
"reservation_rfrnce" "text" NOT NULL,
"reservation_actve" "BOOL" NOT NULL DEFAULT '1',
"activity_rn" UNSIGNED INT NOT NULL,
"webuser_rn" UNSIGNED INT NOT NULL,
"reservation_pckp_lctn" "text" NOT NULL,
"reservation_pckp_dte" VARCHAR(50) NOT NULL,
"reservation_dte_cnclld" "datetime" NOT NULL,
"reservation_gsts" INTEGER NOT NULL,
"reservation_pymnt_mthd" VARCHAR(30) NOT NULL,
"agent_rn" UNSIGNED INT NOT NULL,
"reservation_cntct_agnt_nme" VARCHAR(255) NULL,
"reservation_rte" VARCHAR(255) NULL,
"reservation_chldrn" VARCHAR(255) NULL,
"reservation_nts" "text" NULL,
"reservation_ntnlty" VARCHAR(255) NULL,
"reservation_usd" CHAR(1) NULL,
"operator_agency_rn" INTEGER NULL,
"reservation_cstmr_type" "text" NULL,
"referral_operator_id" "text" NULL,
"referral_activity_id" "text" NULL,
"reservation_othr_agncy" "text" NULL,
"reservation_rfrnce_extra" "text" NULL,
"reservation_duration" INTEGER NULL DEFAULT 0,
"reservation_srce" VARCHAR(100) NULL,
"reservation_dte_end" TIMESTAMP NULL,
"reservation_latitude" VARCHAR(32) NULL DEFAULT '0',
"reservation_longitude" VARCHAR(32) NULL DEFAULT '0',
"reservation_origin_rn" UNSIGNED INT NULL,
"account_rn" UNSIGNED INT NULL,
"reservation_hide_pricing_on_ticket" "BOOL" NOT NULL DEFAULT 0,
"pickup_rn" UNSIGNED INT NULL,
"reservation_promo_codes" "text" NULL,
"customer_rn" BIGINT NULL,
"itinerary_rn" UNSIGNED INT NULL,
"promotion_rn" UNSIGNED INT NULL,
"reservation_prmtn_dscnt_amnt" "MONETARY" NULL,
"reservation_superseded_on" "datetime" NULL,
PRIMARY KEY ( "reservation_rn" ASC )
);

VolkerBarth
Contributor
0 Kudos

How many users/connections were accessing that table during the ALTER TABLE? Altering a "normal" table (neither the number of rows nor the schema looks extraordinary) should certainly not take much time, particularly when simply adding a NULLable column with no default value. I would suspect the actual operation had to wait for other connections to give it a chance to alter the table, as that requires exclusive access, as the docs state here, for example...
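If that is the case, one way to confirm it next time (a minimal sketch, assuming you run everything on the same connection and that the default blocking behaviour is in effect) is to request the exclusive lock yourself with a short blocking timeout, so the attempt fails quickly with an error instead of waiting for hours:

-- Hedged sketch: make lock waits fail after ~5 seconds instead of
-- blocking indefinitely (the 5000 ms value is just an example).
SET TEMPORARY OPTION blocking_timeout = '5000';

-- This fails with an error if another connection holds locks that
-- prevent exclusive access to the table.
LOCK TABLE reservation IN EXCLUSIVE MODE;

-- If the lock was granted, the ALTER on this same connection should
-- not have to wait on other connections.
alter table reservation add promotion_rn unsigned int references promotion(promotion_rn) null;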

Former Member
0 Kudos

Thank you for your help. There were no customers using our web application at the time. The backup utility started about half an hour after I ran the SQL statements. The backup command was:

dbbackup.exe -c "dbn=x;eng=x;uid=x;pwd=x" -x -y f:\backups\

VolkerBarth
Contributor
0 Kudos

> after I ran the sql statements

So there were several statements? In the question, only one statement was mentioned.

Besides that, in a database accessed only by the DBA connection that does the schema update, such a small ALTER TABLE should take a really short time even on modest hardware: at most a few minutes, and quite possibly less.

Accepted Solutions (0)

Answers (2)

Breck_Carter
Participant

The Smart Money (i.e., Volker) says the ALTER TABLE is blocked by a schema lock.

The next time you do this, start a second ISQL session and run SELECT * FROM sa_conn_info() to see if the BlockedOn column is non-zero for the first ISQL session.

The following example shows that connection 2, 'database-maintenance', is blocked by connection 3.

SELECT * FROM sa_conn_info();

Number,Name,Userid,DBNumber,LastReqTime,ReqType,CommLink,NodeAddr,ClientPort,ServerPort,BlockedOn,LockRowID,LockIndexID,LockTable,UncommitOps,ParentConnection
4,'Foxhound-p001','DBA',0,'2014-10-28 07:38:44.294','COMMIT','local','',0,0,0,0,,'',0,
3,'adhoc-queries','k.delacruz',0,'2014-10-28 07:38:50.160','FETCH','local','',0,0,0,0,,'',0,
2,'database-maintenance','DBA',0,'2014-10-28 07:36:54.572','EXEC','local','',0,0,3,0,,'DBA.inventory',0,

To see what locks are being held by connection 3, run SELECT * FROM sa_locks( 3 ); in the case of an ALTER TABLE being blocked by a SELECT with no COMMIT, it will show a 'Schema' lock:

SELECT * FROM sa_locks ( 3 );

conn_name,conn_id,user_id,table_type,creator,table_name,index_id,lock_class,lock_duration,lock_type,row_identifier
'adhoc-queries',3,'k.delacruz','BASE','DBA','inventory',,'Schema','Transaction','Shared',

In cases like this, you can wait until everyone else is off the database, or you can ask them politely to get off the database (or stop working after doing a commit), or you can kick them off by calling p_drop_other_connections().
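p_drop_other_connections() is (as far as I know) a helper procedure rather than something built in, so here is a hedged sketch of the same idea using only documented pieces (sa_conn_info(), CONNECTION_PROPERTY() and DROP CONNECTION); it assumes DBA authority and that you really do want to disconnect everyone else:

-- Hedged sketch: disconnect every connection except the current one.
-- The loop itself is just an illustration, not a hardened utility.
BEGIN
    FOR other_conns AS curs CURSOR FOR
        SELECT Number AS conn_number
          FROM sa_conn_info()
         WHERE Number <> CONNECTION_PROPERTY( 'Number' )
    DO
        EXECUTE IMMEDIATE 'DROP CONNECTION ' || CAST( conn_number AS VARCHAR(20) );
    END FOR;
END;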

Unlike row locks, AFAIK there is no mechanism that can exactly pinpoint "this block is caused by that schema lock"... in this case sa_conn_info() and sa_locks() work because connection 3 is only holding one lock so that must be the one. In the real world, however, evil-doer connections hold millions of locks so it can be a challenge to diagnose blocks caused by non-row locks.
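For what it's worth, a hedged sketch of how one might at least narrow things down when connections hold many locks (this assumes sa_locks() with no arguments reports locks for all connections, and that its default cap on returned rows can be raised via the procedure's last parameter):

-- Hedged sketch: look only at schema locks held on the reservation
-- table, across all connections. On a busy system the result may be
-- truncated by sa_locks()'s default row limit unless a higher limit
-- is passed explicitly.
SELECT conn_id, conn_name, user_id, lock_duration, lock_type
  FROM sa_locks()
 WHERE lock_class = 'Schema'
   AND table_name = 'reservation'
 ORDER BY conn_id;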

MCMartin
Participant
0 Kudos

All rows need to be copied when a new column is added, so for a huge table with a lot of rows the copy will take some time. In my experience this happens even if the new column is nullable.

VolkerBarth
Contributor
0 Kudos

AFAIK, that's not generally true and can be influenced with the help of the PCTFREE setting:

> PCTFREE clause: Specifies the percentage of free space you want to reserve for each table page. The free space is used if rows increase in size when the data is updated. If there is no free space in a table page, every increase in the size of a row on that page requires the row to be split across multiple table pages, causing row fragmentation and possible performance degradation.
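For reference, a hedged sketch of how the setting could be adjusted (the value 20 is just an illustrative percentage, not a recommendation):

-- Hedged sketch: reserve about 20% free space in each page of the
-- reservation table, so rows that later grow (e.g. via UPDATE) are
-- less likely to be split across pages.
ALTER TABLE reservation PCTFREE 20;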

Furthermore, SQL Anywhere does not reserve space for NULL columns, so if you add a column without a value, no more space should be claimed.


The shown result of sa_table_fragmentation() (assuming this is from after the update) does not hint at a significant number of row splits, cf. that older FAQ - kudos to Glenn, once more :)
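To put rough numbers on that: with 751,254 rows and 752,554 row segments, at most about 752,554 - 751,254 = 1,300 rows (roughly 0.17%) can be split across more than one segment, which matches the reported segs_per_row of about 1.0017, so row splits look negligible here.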

Breck_Carter
Participant

> Furthermore, SQL Anywhere does not reserve space for NULL columns, so if you add a column without a value, no more space should be claimed.

Not exactly: "Column space utilization for NULL values is 1 bit per column and space is allocated in multiples of 8 bits. The NULL bit usage is fixed based on the number of columns in the table that allow NULL values."

...so, adding the ninth nullable column will require an extra byte per row. This is surely not the reason for the symptom described here, but it's worth mentioning... well, maybe it isn't 🙂
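A hedged sketch of how one could check whether this particular ALTER crossed such a byte boundary, assuming the SYS.SYSCOLUMNS compatibility view with its tname/nulls columns:

-- Hedged sketch: count the NULLable columns of reservation and derive
-- the per-row NULL-bit overhead (1 bit per NULLable column, rounded
-- up to whole bytes, per the passage quoted above).
SELECT COUNT(*)                  AS nullable_columns,
       CEILING( COUNT(*) / 8.0 ) AS null_bit_bytes_per_row
  FROM SYS.SYSCOLUMNS
 WHERE tname = 'reservation'
   AND nulls = 'Y';

Run before and after the ALTER, the second column would show whether the new column pushed the row format over a byte boundary.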
