on 2014 Jan 10 11:31 AM
Our system is composed of two components plus SQL Anywhere: a back end that runs as a Windows service, the SQL Anywhere server itself, and a front-end program.
We know from watching Task Manager that, at certain points in operation, the CPU usage of dbsrv12 skyrockets, but only when both the back end and the front-end program are running. If we stop the front-end program, the back end runs along fine and dbsrv12's CPU usage stays low.
Thanks to the feedback from Jeff Albion and JBShuler, I've got the CPU usage of my database monitor code down to next to nothing, yet the CPU usage is still spiking, so I've proven that wasn't the cause.
I've got the SQL Console running right now on a laptop. I emptied its database of all data and started the download process over again. When the front-end program is running, I can see dbsrv12's CPU usage hit more than 80%. The console is logging requests, but nothing in there tells me how much CPU or any other resource the running query is using.
I really need to figure out which query is causing dbsrv12's CPU usage to increase. How do I narrow it down to a particular query or group of queries?
Hi Tony,
SQL Anywhere doesn't keep track of CPU usage per-query. It does however keep a cumulative time of CPU usage ('ApproximateCPUTime'), per-connection. To see a list of connections that have consumed the most CPU (example is taken from sa_conn_properties() and Breck Carter - thanks Breck!):
SELECT Number AS connection_number,
       CONNECTION_PROPERTY ( 'Name', Number )   AS connection_name,
       CONNECTION_PROPERTY ( 'Userid', Number ) AS user_id,
       CAST ( Value AS NUMERIC ( 30, 2 ) )      AS approx_cpu_time
  FROM sa_conn_properties()
 WHERE PropName = 'ApproximateCPUTime'
 ORDER BY approx_cpu_time DESC;
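Note that ApproximateCPUTime accumulates over the life of a connection, so a long-lived connection can outrank the one that is busy right now. A minimal sketch that snapshots the property twice and orders by the delta instead (the temporary table name and the 10-second delay are my own choices, not from the thread):

```sql
-- Snapshot 1: cumulative CPU time per connection
SELECT Number, CAST( Value AS NUMERIC( 30, 2 ) ) AS cpu_1
  INTO #snap
  FROM sa_conn_properties()
 WHERE PropName = 'ApproximateCPUTime';

-- Let the workload run for a while
WAITFOR DELAY '00:00:10';

-- Snapshot 2: connections that burned the most CPU during the interval
SELECT p.Number                                 AS connection_number,
       CONNECTION_PROPERTY( 'Name', p.Number )  AS connection_name,
       CAST( p.Value AS NUMERIC( 30, 2 ) ) - s.cpu_1 AS cpu_delta
  FROM sa_conn_properties() p
       JOIN #snap s ON s.Number = p.Number
 WHERE p.PropName = 'ApproximateCPUTime'
 ORDER BY cpu_delta DESC;
```

Connections that have disappeared between the two snapshots simply drop out of the join.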
You would then need to drill into each connection to see what it executed. If you first run the database server with request-level logging:
CALL sa_server_option( 'RequestLogFile', 'rll.txt' );
CALL sa_server_option( 'RequestLogging', 'ALL' );
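Request-level logging adds overhead of its own, so it's worth switching off once you have captured the spike. A small sketch, assuming I'm remembering the option names correctly (the connection id 42 is a placeholder):

```sql
-- Optionally restrict logging to a single suspect connection
CALL sa_server_option( 'RequestFilterConn', '42' );

-- ... reproduce the CPU spike, then turn logging back off
CALL sa_server_option( 'RequestLogging', 'NONE' );
```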
You can then parse the results, per connection, with query timings to see the top queries for that connection (ordered by total cursor open time):
CALL sa_get_request_times( 'rll.txt', conn_id );
SELECT * FROM satmp_request_time ORDER BY millisecs DESC;
Otherwise, diagnostic tracing is the best way to track down 'time' in the database (when all data is in cache, elapsed time should be a good indication of 'CPU time'). You can use the following query against the diagnostic tracing tables to examine the top 10 queries by both optimization time (duration_ms) and total cursor open time (total_fetch_time_ms):
SELECT TOP 10
       sdr.connection_number,
       sdr.request_id,
       sdc.cursor_id,
       sdq.query_id,
       sds.statement_id,
       sdc.total_fetch_time_ms,
       sdr.start_time,
       sdr.finish_time,
       sdr.duration_ms,
       sds.statement_text
  FROM sa_diagnostic_cursor sdc,
       sa_diagnostic_query sdq,
       sa_diagnostic_statement sds,
       sa_diagnostic_request sdr
 WHERE sdr.statement_id = sds.statement_id
   AND sdr.query_id = sdq.query_id
   AND sdr.cursor_id = sdc.cursor_id
 ORDER BY sdc.total_fetch_time_ms DESC;
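The sa_diagnostic_* tables are only populated after a tracing session has run. A minimal sketch of such a session, tracing into the same database (the tracing levels configured elsewhere determine what is captured; see the linked walkthrough for the full setup):

```sql
-- Start collecting diagnostic tracing data into this database
ATTACH TRACING TO LOCAL DATABASE;

-- ... reproduce the CPU spike with the front-end program running ...

-- Stop tracing and save the collected rows to the sa_diagnostic_* tables
DETACH TRACING WITH SAVE;
```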
See Database Tracing In 50 Easy Steps.
Also: See what's happening now with Foxhound.