Concurrent DML statements and speed limitations


Hello,

I have a general question about SQL Anywhere throughput limits when many insert/update statements run concurrently on different connections. There is a procedure P that updates many existing records across various tables and also inserts new data. The procedure processes one piece (interval) of data at a time. We launch it for N different (non-intersecting) data intervals concurrently, in separate dbisql instances, to increase the overall amount of data processed per minute.
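To make the setup concrete, here is a minimal sketch of what each dbisql instance runs; the parameter names and interval bounds are hypothetical, since the actual signature of P is not shown:

    -- Hypothetical call: each instance gets its own non-intersecting interval.
    CALL P( 1000001, 2000000 );  -- from_id, to_id (assumed parameters)
    COMMIT;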

While N is below 7-10, we see an almost proportional increase in the amount of data processed, i.e. it is almost N times what a single instance of procedure P achieves per minute. Each instance uses almost a full logical CPU core (max_query_tasks is set to 1 because setting it to 0 or any other value decreases the speed drastically in this case).
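For reference, that option can be pinned per connection with standard SQL Anywhere syntax:

    -- Disable intra-query parallelism for this connection only:
    SET TEMPORARY OPTION max_query_tasks = 1;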

Once N reaches or exceeds 7-10 instances, the amount of data processed per minute no longer increases; it stays the same even when we increase N to 17 instances. Each instance then processes less data per minute, and the usage of each logical CPU drops noticeably. There is enough free RAM for the cache, and the customer reports that disk usage is not very high either.

What can be the reasons for such a limitation? Are checkpoints the only possible cause, or is there something else we can configure or try? Thanks.

Version: 11.0.1.3158.
Platform: Windows Server 2008 R2 Standard.

Breck_Carter

How many processors are being used by the SQL Anywhere server?

Are all the dbisql processes running on the same computer? Is that computer different from the one running the SQL Anywhere server, or the same one?

Have you tried setting dbsrv11 -zt and running Foxhound to see if anything is blocking or waiting for other reasons?
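For reference, -zt enables request timing collection and is given on the server command line; a sketch with placeholder server and database names:

    dbsrv11 -n myserver -zt mydb.db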

MarkCulp

In addition to Breck's questions/suggestions, it sounds like you could be hitting a limit on the number of worker threads in the server - have you tried increasing the -gn value? Generally, increasing the number of worker threads is not a good idea because it raises the probability of contention, but if your workload is indeed non-overlapping (not touching the same data), then adding workers may improve throughput.
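A quick way to check the current limit from SQL; PROPERTY() and the MultiProgrammingLevel server property (which mirrors -gn) are standard SQL Anywhere:

    -- Current number of server worker threads, as set by -gn:
    SELECT PROPERTY( 'MultiProgrammingLevel' );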


@Breck: the SQL Anywhere server machine has 2 physical CPUs, 24 logical processors in total.
All dbisql processes are running on the same computer. The results do not differ whether that computer is the same as, or different from, the DB server machine.
The -zt option is turned on. Here are the results from sa_performance_diagnostics() when running 17 concurrent instances (the query itself is sketched after the list):

ReqTimeBlockLock - all zeros;
ReqTimeBlockContention is 12 to 18 percent of ReqTimeActive (15 percent average);
ReqTimeBlockIO is only about 0.4-0.75 percent of ReqTimeActive (0.5 percent average);
ReqTimeUnscheduled is near zero in all connections (e.g. ReqTimeUnscheduled = 0.001... when ReqTimeActive = 1520);
ReqStatus is Executing, BlockedContention, or BlockedIO.
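For completeness, a sketch of the query behind these numbers; the procedure and the Req* columns are named above, while the Number column (connection id) is an assumption and may need adjusting to your build:

    SELECT Number, ReqStatus, ReqTimeActive, ReqTimeBlockLock,
           ReqTimeBlockContention, ReqTimeBlockIO, ReqTimeUnscheduled
      FROM sa_performance_diagnostics();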


@Mark: the -gn option is set to 200. It has to be that high because WAITFOR statements are often running in many connections, though not at the time these procedure P instances are running.
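For context, a connection sitting in a statement like the one below holds a server worker for the whole wait, which is why -gn has to cover them; the delay value is illustrative:

    WAITFOR DELAY '00:00:30';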
