on 2009 Nov 16 4:06 PM
Per Breck's suggestion, I am adding another question, this time directly about how SQLA passes parameters. Over the past couple of years the question has come up several times when moving inline code into a stored procedure that now requires passing parameters.
I did some testing and here are my results. My test consisted of a stored procedure that added the first two parameters together and sent the sum back as an OUT variable. The variations covered IN versus INOUT parameter types and INTEGER versus VARCHAR parameters. Results are in seconds for calling the procedure 1,000,000 times.
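For concreteness, the test procedures described above might look something like the following sketch in SQL Anywhere's Watcom-SQL dialect. The procedure and parameter names are my own for illustration, not taken from the original tests:

```sql
-- Hypothetical reconstruction of the test procedure: add the first
-- two parameters and return the sum through an OUT parameter.
CREATE PROCEDURE add_two_ints(
    IN  p1    INTEGER,
    IN  p2    INTEGER,
    OUT p_sum INTEGER )
BEGIN
    SET p_sum = p1 + p2;
END;

-- The INOUT variant: on return, the parameter values must also be
-- copied back into the caller's variables.
CREATE PROCEDURE add_two_ints_inout(
    INOUT p1    INTEGER,
    INOUT p2    INTEGER,
    OUT   p_sum INTEGER )
BEGIN
    SET p_sum = p1 + p2;
END;
```

The VARCHAR variants would be analogous, with string concatenation or assignment in place of the addition.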
As expected, the INOUT parameters are more costly. The number of parameters appears to have a significant impact on performance: with lots of parameters, strings seem to be faster for INOUT but slower for IN.
The process that is used by SA to pass parameters in a call to a stored procedure is exactly what you would expect:
A few additional things should be noted:
Therefore it is natural for the cost of INOUT parameters to be higher than that of IN parameters, since extra work is required to copy the values back to the caller's frame.
Excluding the overhead of invoking a procedure (allocating a new stack frame, initializing a new scope, etc.), the cost of an invocation will be roughly proportional to the number of parameters. The cost of each parameter will vary slightly with its type, but this variation will be negligible compared to the total cost of the procedure call.
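The question's methodology of timing 1,000,000 calls could be sketched like this in Watcom-SQL. The procedure name `add_two_ints` and its signature (two IN INTEGERs, one OUT INTEGER) are assumptions based on the test described in the question:

```sql
-- Sketch of a timing loop for the benchmark described in the question.
BEGIN
    DECLARE @sum   INTEGER;
    DECLARE @start TIMESTAMP;
    DECLARE @i     INTEGER;

    SET @start = CURRENT TIMESTAMP;
    SET @i = 0;
    WHILE @i < 1000000 LOOP
        CALL add_two_ints( 1, 2, @sum );
        SET @i = @i + 1;
    END LOOP;

    -- Report elapsed seconds for the million calls.
    MESSAGE 'elapsed seconds: ' ||
            ( DATEDIFF( millisecond, @start, CURRENT TIMESTAMP ) / 1000.0 )
            TO CLIENT;
END;
```

Swapping in the INOUT variant (and varying the parameter count and types) would reproduce the comparisons discussed above.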
Great stuff, thanks much Mark. A related question: as I modify someone else's code, I've been moving code that appears three or four times within a procedure into a separate stored procedure. What are your suggestions for keeping the performance of this the same?
We're trying to use console variables (CREATE VARIABLE) for this. Do they have overhead associated with them that makes this a bad practice? Besides being global and thus a bad practice already. 🙂
The cost of 'executing a sequence of statements' versus 'calling a procedure that executes the same sequence of statements' is obviously not identical, since there is the additional cost of making the call. Whether this additional cost is significant depends on how frequently it is incurred and on its size relative to the total cost of the complete operation. So you need to weigh the factors and determine what is best for your circumstance.
Whether or not to use global variables is a choice you will need to make; there is likely minimal performance difference apart from the obvious saving of not needing to pass the parameter(s). That is, each procedure is parsed and analyzed only once, so once a variable reference is bound to its (global or parameter) value there is no difference in how it is used.
If you have a long-lived connection then you may want to consider setting the global variable to NULL after you are finished using it so that its associated resources are minimized. For example, if you set a global variable to a very long string (e.g. during processing of a particular request) and then never use it again, the variable will continue to consume memory resources while the connection remains active. (Note that memory assigned to an inactive connection will eventually be paged to the db's temp file.)
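A minimal sketch of that advice, using a connection-scope variable created with CREATE VARIABLE (the variable name and payload are illustrative):

```sql
-- Create a connection-scope variable and give it a large value
-- while it is needed (variable name is hypothetical).
CREATE VARIABLE @request_payload LONG VARCHAR;
SET @request_payload = REPEAT( 'x', 100000 );

-- ... use @request_payload while processing the request ...

-- When finished, release the memory the value holds:
SET @request_payload = NULL;

-- Or remove the variable from the connection entirely:
DROP VARIABLE @request_payload;
```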