Hi,
Moving to an Azure Cloud VM, our customer complained that the first execution of a (complex) stored procedure takes a long time (~15 seconds). From then on, it's fast (300 ms). When I look into the master..monCachedProcedures table, I can see the ExecutionCount go up when the same database connection (and other database connections that have already executed this proc once) execute this proc. So far so good. But every new database connection does not reuse this cached plan; a new plan is always created (which takes those 15 seconds). After that, all the other existing database connections use this new cached plan.
I can't seem to find any documentation on this behaviour; does this seem normal?
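For reference, a query along these lines (the proc name is a placeholder) shows what I'm watching in monCachedProcedures; a new PlanID/CompileDate appearing after a new connection's first call is what points at the recompile:

```sql
-- requires mon_role; the proc name below is a placeholder
select ObjectName, PlanID, CompileDate, MemUsageKB, ExecutionCount
from master..monCachedProcedures
where ObjectName = 'p_my_complex_proc'
```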
Greetz
Pieter
Some assumptions:
--------------------------
re: the lengthy 15-second run time for the 'initial' proc call
The initial lengthy proc call is likely related to an excessive (re)compile time, with subsequent calls being (relatively) fast due to not having to recompile the proc.
But why aren't you seeing this issue on the old (non-azure) instance?
Off the top of my head I'm guessing a difference in optimizer related configuration settings could be at play here with the primary culprit being optimization goal; consider:
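A quick comparison sketch, run on both instances (nothing here is specific to your setup):

```sql
-- compare the server-wide optimization goal on both instances
sp_configure "optimization goal"
go
-- if a login trigger or the application overrides it per session:
set plan optgoal allrows_mix   -- session-level setting; allrows_mix is the usual default
go
```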
My best guess on what's happening: the new instance is compiling under a more expensive optimization goal, driving up the compile time.
While this could explain the proc calls that take 15 seconds, it doesn't explain why all spids are incurring a 15-second run time the first time each spid calls the proc; spids that find an available entry in procedure cache should not incur the 15-second delay.
FWIW:
The next guess would be a change in the application's connection process and/or a change in an ASE login trigger/proc that's changing some compiler-related configs at the connection level. Short of doing some code reviews, you could try running sp_options show, null, null, <spid> against processes running on both (non-azure, azure) instances to see if there are any differences in compiler settings at the connection level.
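For instance (the spid value is a placeholder):

```sql
-- dump the session-level optimizer/compiler options for one process;
-- run against comparable spids on the non-azure and azure instances
sp_options show, null, null, 42   -- 42 = spid of the process to inspect
go
```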
--------------------------
A couple of scenarios come to mind where #temp tables can wreak havoc on performance by forcing excessive (re)compile operations; one is sketched below.
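A minimal sketch of one such scenario (names are made up): the proc references a #temp table created by the caller, so it has to bind to a different tempdb object for each session:

```sql
-- one-time: create the proc with a template #temp table in scope
create table #work (id int, val varchar(30))
go
create procedure p_uses_callers_temp
as
    select id, val from #work
go
drop table #work
go

-- each application connection then does:
create table #work (id int, val varchar(30))
exec p_uses_callers_temp   -- proc must re-resolve THIS session's #work
```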
One (obvious) problem with this idea (#temp table related recompilations) is that it would not be limited to just the initial proc call (for a given spid); in other words, issues with #temp table related recompilations should occur quite frequently throughout the life of a connection/spid where multiple spids are (re)using the same entry from procedure cache.
--------------------------
An additional idea that builds on the idea of proc recompilations based on #temp table structure changes, and which could explain why the only recompilation occurs with a spid's initial proc call: the proc creates an index on the #temp table only if it doesn't exist yet, which is true exactly once per session (see the sketch below).
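A sketch of that shape (made-up names; as in the earlier sketch, #work would need to be in scope when the proc is created, and the existence check is just one of several ways to write it):

```sql
create procedure p_cond_index @id int
as
begin
    -- ... populate this session's #work here ...

    -- first call per session: the index doesn't exist yet on THIS
    -- session's #work, so it is created; the schema change forces a
    -- recompile. Later calls in the same session skip the branch.
    if not exists (select 1 from tempdb..sysindexes
                   where id = object_id('tempdb..#work')
                     and name = 'ix_work')
        create index ix_work on #work(id)

    select val from #work where id = @id
end
```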
A variation on this idea would see the conditional create index replaced with a conditional update statistics call.
--------------------------
You've mentioned in a comment that the proc uses a global temporary table. I would expect a global temporary table to be treated more like a permanent table (as opposed to a #temp table), if only because we're dealing with a single object id and a fixed structure; all I've got at this point are questions.
Assuming this isn't an issue with the optimization goal configuration, and there are no performance issues with global temporary tables in the old (non-azure) instance, but there are differences (code changes, introduction of multiple tempdb databases) in the new (azure) instance, then I'd probably opt for opening a tech support case to see if there are any known issues with your 'new' environment.
Hi Mark,
Thx for the info. I have checked the config on the On-Premise and the Azure servers and they are exactly the same. I may have misled you by mentioning Azure: we see the recompile behaviour on the On-Premise server as well, but that is a very fast server so it was less noticeable (I didn't measure it, but it could be just half the time, like 6-7 seconds).
Moving to the (expensive) Azure VM, we could not specify the same specs (CPU- and disk-wise) for this new server, leading to somewhat slower performance.
I did not know the "optimize temp table resolution" option; I will definitely look into that one. I did know that a #temp table with an index means a recompile 😉
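For anyone following along, checking and enabling it looks like this (a sketch; verify the effect on your own version first):

```sql
-- current value
sp_configure "optimize temp table resolution"
go
-- 1 = procs that reference #temp tables created outside the proc are
-- not renormalized/recompiled per session just to re-resolve the table
sp_configure "optimize temp table resolution", 1
go
```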
As for the global temporary table, I can easily reproduce this issue just by creating a sample procedure with one line (a select from a global temporary table, as sketched below), and the recompile issue I described occurs. When the stored procedure is small, a user would not notice this, but sadly we have a couple of big stored procedures (~10,000 lines of SQL code) that all use global temporary tables, so a recompile takes time.
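The repro is essentially this (made-up names; "global temporary table" here meaning a shared table created in tempdb):

```sql
-- one-time setup
create table tempdb..gtt_demo (id int, val varchar(30))
go
create procedure p_gtt_demo
as
    select id, val from tempdb..gtt_demo
go

-- from any NEW connection: the first exec is slow (a new plan is
-- compiled); subsequent execs on the same connection are fast
exec p_gtt_demo
go
```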
Greetz
Ahhhh, so this performance issue is occurring in the current environment but is now being magnified by the migration (to azure).
The easy/fast recommendation would be to open a tech support case to find out if there are any known performance issues (and workarounds) related to global temporary tables.
The harder/slower recommendation would be a detailed P&T analysis of the stored proc(s) in question. Without knowing details about your env I'll just dump everything that comes to mind ...
Some additional comments re: abstract query plans (AQPs):
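A sketch of how that could look (group and proc names are placeholders; whether AQPs capture the statements inside your procs is something to verify on your version):

```sql
-- one-time: create a plan group and capture plans while running the proc
sp_add_qpgroup 'prod_plans'
go
set plan dump prod_plans on
exec p_big_proc        -- abstract plans for the statements are saved
set plan dump off
go

-- other sessions (e.g. from a login trigger) load the saved plans
-- instead of optimizing from scratch
set plan load prod_plans on
exec p_big_proc
go
```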
Hi Mark,
We used to be a longtime OEM Partner with Sybase, and then with SAP of course, but sadly it was not possible to maintain that OEM Partner status due to new restrictions from SAP.
We have however found a workaround for our 'global temporary table' recompile issue. It is not perfect, but we do not notice any performance loss using our workaround.
Workaround: replace the global temporary table with a real (permanent) table.
When using this real table, the stored procedure no longer recompiles on every new connection! We realize an extra function call is needed, but the data added to these tables is never that much, so performance-wise we don't see any issues.
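In rough outline it looks like this (all names are illustrative; the extra call in our case is a session-key lookup, sketched here with @@spid, which may differ from what we actually use):

```sql
-- permanent table replacing the shared tempdb table
create table work_data (
    spid int,            -- owning connection
    id   int,
    val  varchar(30)
)
go
create procedure p_workaround
as
    -- the extra call: restrict rows to the current session
    select id, val from work_data where spid = @@spid
go
```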
Greetz
Pieter
Hi Pieter,
Re: "Every new database connection does not reuse this cached plan, an new plan is always created. After that, all the other existing database connections are using this new cached plan".
If you think the problem is due to the query plan, check whether the new database connection runs sp_recompile before executing the sp. If so, a new query plan will be generated, and the existing connections (those executing the sp without sp_recompile in their code) will make use of the existing plan.
If you think the problem is the execution time while the existing plan is being used, check whether your cache (default data cache / user-defined cache) is configured properly, and also check whether any temporary tables (#) that the new database connection needs to execute the sp are present.
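A few quick checks along these lines (object names are placeholders):

```sql
-- mark a table so that procs referencing it recompile on next execution
exec sp_recompile 'work_table'
go
-- review the configured data caches and their sizes
exec sp_cacheconfig
go
-- check procedure cache sizing and reuse counters
exec sp_monitorconfig 'procedure cache size'
go
```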
Regards,
Dilip Voora
Hi Dilip,
Thx for the response. I have investigated further. It only happens with stored procedures that use global temporary tables. This behaviour even occurs on a very simple procedure with a few lines of code, if it uses a global temporary table.
I suppose this is internal behaviour of SAP ASE (we are working on version 16.0 SP04 PL01) and I'm afraid there is not much we can do about it.
Greetz
Pieter