Stored procedures are a great way to execute SQL queries while adding a layer of security around database access. Whether you run dynamic query statements or maintenance queries triggered every week, stored procedures save time: they are compiled once, their plans are reused for the same session settings, and the processing happens at the database level itself. You can call stored procedures from your application code or execute them directly in the database.
What is a stored procedure in SQL Server?
A stored procedure is a named set of SQL statements that is precompiled. The precompilation makes execution faster, as the compiled plan remains in the cache unless the procedure is altered.
Example of a stored procedure:
CREATE PROCEDURE dbo.spEmployee
AS
BEGIN
    SELECT empl_name, empl_salary FROM EMPLOYEE;
END
The above example is a simple procedure named spEmployee. The body sits between BEGIN and END, and we can write as many SQL statements in between as required. If the stored procedure needs any change, we can alter it with the statement:
ALTER PROCEDURE dbo.spEmployee
AS
BEGIN
    -- modified statements go here
END
Note that we have used the naming convention of an ‘sp’ prefix on the procedure name to indicate that it is a stored procedure. The dbo here is the default schema. To execute the stored procedure, use EXEC dbo.spEmployee
Also read: Dynamic Data Masking in SQL Server
Why and when you should use stored procedures in SQL Server
Sometimes, your project may not require many database transactions. If it is a matter of one or two queries, you can create dynamic queries and hit the database directly. However, if your application accesses data often and performs a lot of transactions, stored procedures can save you a lot of execution time and cost.
Reduce compilation and execution time
Stored procedures are precompiled, as opposed to dynamic statements that are compiled every time your application code invokes them. Once you execute a stored procedure, its plan remains in the cache, saving compilation time on subsequent calls - so you get faster results.
Promotes reusability of code
If many parts of your application have to access similar parts of the database, writing a stored procedure once and reusing it across the application makes more sense than writing the same queries in different places at the application level.
Adds a layer of security
Stored procedures are secure - prefer them when your application is data critical. An application can be allowed to execute stored procedures without being granted read, write, and execute access to the entire database. With stored procedures, you can choose the exact procedures that a user/application (role) can execute or access.
This helps mitigate a major issue known as a SQL injection attack, where a hacker injects malicious statements into the query text to get results they shouldn’t get otherwise.
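A minimal sketch of this permission model (the role name app_readonly and user name app_user are illustrative):

```sql
-- Create a role that can only run approved procedures,
-- without any direct table access.
CREATE ROLE app_readonly;

-- Grant execute on a single procedure rather than
-- SELECT/INSERT rights on the underlying tables.
GRANT EXECUTE ON OBJECT::dbo.spEmployee TO app_readonly;

-- Add the application's database user to the role.
ALTER ROLE app_readonly ADD MEMBER app_user;
```

Members of app_readonly can now run dbo.spEmployee but cannot query EMPLOYEE directly.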
Ways to find out if your stored procedures are slow
In general, keep both the results and performance in mind while writing stored procedures. Stored procedures provide better execution time than ad hoc queries, but a procedure itself may become slow, either because it is waiting for resources to free up, or because it contains queries that run for a long duration. You should analyze and understand the execution plan, and constantly monitor performance in the following ways:
Check CPU time and elapsed time
By checking cpu_time and total_elapsed_time of currently running statements in the sys.dm_exec_requests dynamic management view, you can find out which queries are making the stored procedure slow. Similarly, for past executions, you can check last_elapsed_time and last_worker_time in the sys.dm_exec_query_stats view. These figures help you understand how long the queries take to execute, so you can optimize further if needed. You can view the same data in the execution plan.
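A sketch of such a check against the plan cache (the TOP filter and 200-character truncation are just for readability):

```sql
-- Slowest cached statements, by average elapsed time per execution.
SELECT TOP (10)
    qs.last_worker_time,            -- CPU time of the last run (microseconds)
    qs.last_elapsed_time,           -- wall-clock time of the last run (microseconds)
    qs.execution_count,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;
```

The times are reported in microseconds, so divide by 1,000 for milliseconds when comparing against application-side measurements.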
Optimize the database
You might be running your stored procedure against an old, inefficient schema. Find missing or unused columns, fine-tune the indexes, drop the indexes you don’t use, check whether all the columns the query returns are actually required - if not, stop selecting the ones it doesn’t need (for example, avoid SELECT *) - and check whether your stored procedures have long transactions or avoidable statements between BEGIN and END.
Check for missing column statistics and missing indexes
While migrating your database, it is possible that statistics are not auto-created in the new environment (make sure the AUTO_CREATE_STATISTICS option is on). If your queries filter or join on unindexed columns, missing column statistics can slow them down. Similarly, missing indexes can be a cause of low query performance. Both column statistics (on unindexed columns) and indexes help the optimizer determine the best query plan and optimize query performance.
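One way to spot candidates, sketched below, is to query the missing-index DMVs, which record index suggestions the optimizer has made since the last server restart (treat them as hints, not a to-do list):

```sql
-- Index suggestions the optimizer has recorded, most-requested first.
SELECT
    mid.statement AS table_name,
    mid.equality_columns,      -- columns compared with '='
    mid.inequality_columns,    -- columns compared with ranges
    mid.included_columns,      -- columns to INCLUDE for covering
    migs.user_seeks            -- how often the index would have helped
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mid.index_handle = mig.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks DESC;
```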
Top 10 stored procedure performance tuning tips
Optimizing stored procedures is not a one-time task. You have to keep checking performance over time and tune your statements accordingly. Here are some tips to do that, in no particular order:
Use "SET NOCOUNT ON" directive
Have you seen console output such as “2 row(s) affected” or “5 row(s) updated”? Whenever a DML query executes, SQL Server reports how many rows were affected. This consumes memory and time, especially if you are using cursors (where you may get a long series of “1 row(s) affected” messages), as each message adds load to the result stream. This information is more useful while debugging than in production. If you set the NOCOUNT directive ON, these messages are suppressed, reducing network traffic and boosting performance.
CREATE PROCEDURE dbo.spEmployee
AS
BEGIN
    SET NOCOUNT ON;
    SELECT empl_name, empl_salary FROM EMPLOYEE;
END
Use schema names before the table name
Using the schema name before the table or procedure name, particularly when you have more than one database schema, helps SQL Server resolve the object faster. For example, if you create a stored procedure as CREATE PROCEDURE spEmployee, SQL Server has to resolve the name against the caller’s default schema and then dbo on every reference, wasting time. Change this to
CREATE PROCEDURE mySchema.spEmployee
Avoid cursors
Cursors go through rows one at a time, which makes processing slow. Instead, you can use window functions, which perform aggregations like SUM(), AVG() and others over a group of rows without losing row identity, or recursive common table expressions for hierarchical queries. This way you process many records in one set-based statement and boost performance.
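As a sketch, an aggregate that would otherwise need a row-by-row cursor can be a single window-function query (the dept_id column is illustrative):

```sql
-- Each employee row keeps its own columns, plus per-department
-- aggregates computed in one set-based pass - no cursor needed.
SELECT
    empl_name,
    dept_id,
    empl_salary,
    AVG(empl_salary) OVER (PARTITION BY dept_id) AS dept_avg_salary,
    SUM(empl_salary) OVER (PARTITION BY dept_id
                           ORDER BY empl_name
                           ROWS UNBOUNDED PRECEDING) AS running_total
FROM EMPLOYEE;
```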
Do not use sp_<procname>
sp_ is a prefix that Microsoft reserves for built-in system stored procedures. If you want to distinguish a stored procedure, you can use sp<procedurename> without the underscore (_). The underscore indicates a system stored procedure, so SQL Server first looks for your procedure among the system procedures in the master database (of which there are a whole lot!), again wasting precious time.
Prefer sp_executesql over the EXECUTE command
Considering the varied use cases of today’s businesses, we do need dynamic SQL, i.e. a Transact-SQL statement (INSERT, UPDATE, etc.) or set of statements built at runtime to retrieve records based on certain parameters (like a WHERE clause). sp_executesql is faster and more efficient than the EXECUTE command, as it
- reuses a single cached plan,
- is less prone to SQL injection,
- supports parameterization,
- supports output variables
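A minimal sketch of a parameterized call (the table, column, and parameter names are illustrative):

```sql
-- The @name value is passed as a typed parameter, so the plan is
-- reused across calls and user input is never concatenated into the SQL.
DECLARE @sql nvarchar(200) =
    N'SELECT empl_name, empl_salary FROM EMPLOYEE WHERE empl_name = @name';

EXEC sp_executesql
    @sql,
    N'@name nvarchar(50)',   -- parameter definition
    @name = N'Geoffrey';     -- parameter value
```

With EXECUTE you would have to build the literal into the string, producing a new plan per value and opening the door to injection.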
Fetch only the required columns
Avoid the asterisk (*) operator that fetches every column of the table. If you want only certain columns, select just those. Even if you have multiple statements, add conditions so that the result set is as small as required. For example, if a user wants to browse through books, you may first fetch only the name, author, and short description, and fetch the full details only once the user asks for more. This can significantly reduce query time.
Use table variables instead of temporary tables
Avoid interleaving DDL and DML statements that involve tempdb. In some cases, temporary tables seem more natural: for example, when you continuously add or delete rows, need an index that cannot be created on a table variable, or want statistics and a tailored plan. However, generating and maintaining statistics has a significant cost, and if your optimal plan is unlikely to change, you don’t want the overhead of recompiles or statistics. Also, the metadata of temporary tables needs more maintenance than that of table variables. Further, table variables are not affected by transaction rollbacks, which makes them more efficient for keeping logs of transaction failures or notes about rollbacks.
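A sketch of a table variable used for a small, short-lived intermediate result (the @TopEarners name and salary threshold are illustrative):

```sql
-- A table variable has minimal metadata and no statistics;
-- a good fit for small row sets whose plan won't change.
DECLARE @TopEarners TABLE (
    empl_name   varchar(100),
    empl_salary decimal(10, 2)
);

INSERT INTO @TopEarners (empl_name, empl_salary)
SELECT empl_name, empl_salary
FROM EMPLOYEE
WHERE empl_salary > 100000;

SELECT empl_name, empl_salary FROM @TopEarners;
```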
Keep transactions lean - avoid derived tables, views, and JOINs between transaction statements
Transactions lock the table, and other transactions have to wait until the previous one completes. One way to minimize locking is to set the appropriate isolation level. Another effective way is to avoid statements like JOINs, view creation, derived tables, and other operations between two transaction statements, as they delay releasing the lock. For example, if you want to do a delete and then an insert based on some conditions, prepare the statements beforehand and execute them one after the other:
DELETE FROM EMPLOYEES ......
-- avoid any fluff here, like a JOIN, creating or deleting a view, aggregations, or any other operation that might delay the next transaction statement
INSERT INTO EMPLOYEES ....
Further, drop the temporary tables and variables when not in use.
Avoid costly operators, implicit and explicit functions
Avoid costly operators like LIKE and NOT LIKE in the WHERE clause; they badly hit performance. If LIKE is unavoidable, create a regular index on the column in the WHERE clause and anchor the pattern at the beginning of the string, so SQL Server can seek the index on that prefix, as in:
SELECT emp_name FROM employee WHERE emp_name LIKE 'geo%';
instead of ‘%geo%’. With the anchored pattern, the query can do an index seek rather than a scan, saving time. Be careful with functions such as LEFT(): wrapping the indexed column in a function, as below, makes the predicate non-sargable and forces a scan, so prefer the LIKE 'geo%' form.
SELECT column_name FROM employee WHERE LEFT(column_name, 3) = 'geo'  -- avoid: forces a scan
Similarly, implicit and explicit conversions and functions like CAST(), CONVERT(), GETDATE(), etc. applied to columns in the WHERE clause prevent index use and consume time, impacting performance.
Use EXISTS instead of COUNT in subqueries
Production databases are huge, and COUNT() counts every matching row. EXISTS(), on the other hand, stops at the first matching occurrence it finds, saving many further iterations. Sometimes we need the exact count of rows, but in cases where we need only a true/false (boolean) answer, we can switch to EXISTS(). For example, if we want to find whether any candidate who got selected for the job has not been sent an offer letter, we don’t need an exact count; we just need to know whether such a candidate exists.
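A sketch of that check (the CANDIDATES table and its columns are illustrative):

```sql
-- Stops scanning as soon as one matching candidate is found,
-- instead of counting all of them.
IF EXISTS (
    SELECT 1
    FROM CANDIDATES
    WHERE is_selected = 1
      AND offer_letter_sent = 0
)
    PRINT 'Some selected candidates have no offer letter yet';
```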
Boost your Database performance with our performance-tuning experts today!
We explored various ways to boost the performance of a stored procedure. In general, as an application developer or business analyst, you need not tackle deep optimization and performance tuning alone - you can leave that job to performance-tuning experts.