Data World

Archive for the ‘DBA’ Category

SQL Server – Defragment Indexes

Posted by Pramod Singla on December 17, 2015

Problem: You often need to defragment indexes on SQL Server. The following query is quite handy for this task: it generates an ALTER INDEX statement for each fragmented index, rebuilding the heavily fragmented ones and reorganizing the rest.


SELECT 'ALTER INDEX ' + Quotename(i.name) + ' ON '
       + Quotename(s.name) + '.' + Quotename(o.name)
       + CASE WHEN stats.avg_fragmentation_in_percent >= 60 THEN ' REBUILD'
              ELSE ' REORGANIZE'
         END
FROM   sys.dm_db_index_physical_stats(Db_id(), NULL, NULL, NULL, 'LIMITED') AS stats,
       sys.objects AS o,
       sys.schemas AS s,
       sys.indexes AS i
WHERE  o.object_id = stats.object_id
       AND s.schema_id = o.schema_id
       AND i.object_id = stats.object_id
       AND i.index_id = stats.index_id
       AND stats.avg_fragmentation_in_percent >= 20
       AND stats.page_count >= 1000
ORDER  BY stats.avg_fragmentation_in_percent DESC,
          stats.page_count DESC

Posted in DBA | Leave a Comment »

SQL Server – Find Blocking

Posted by Pramod Singla on December 17, 2015

Problem: DBAs and developers often need to identify blocking in SQL Server. The following queries will help you out.


SELECT [owt].[session_id],
       [owt].[wait_duration_ms],
       [owt].[wait_type],
       [owt].[blocking_session_id],
       [owt].[resource_description],
       CASE [owt].[wait_type]
         WHEN N'CXPACKET' THEN RIGHT ([owt].[resource_description],
                                      Charindex (N'=',
                                      Reverse ([owt].[resource_description])) - 1)
       END AS [Node ID],
       [est].[text],
       [eqp].[query_plan]
FROM   sys.dm_os_waiting_tasks [owt]
       INNER JOIN sys.dm_exec_sessions [es]
               ON [owt].[session_id] = [es].[session_id]
       INNER JOIN sys.dm_exec_requests [er]
               ON [es].[session_id] = [er].[session_id]
       OUTER APPLY sys.Dm_exec_sql_text ([er].[sql_handle]) [est]
       OUTER APPLY sys.Dm_exec_query_plan ([er].[plan_handle]) [eqp]
WHERE  [es].[is_user_process] = 1
       AND [es].[session_id] >= 50
       --AND [owt].[blocking_session_id] IS NULL
       --AND [owt].[blocking_exec_context_id] IS NULL
ORDER  BY [owt].[session_id]


SELECT TOP 10 Object_name(qt.objectid) AS [SP Name],
       Substring(qt.text, ( qs.statement_start_offset / 2 ) + 1,
                 ( ( CASE qs.statement_end_offset
                       WHEN -1 THEN Datalength(qt.text)
                       ELSE qs.statement_end_offset
                     END - qs.statement_start_offset ) / 2 ) + 1) AS statement_text,
       qp.query_plan
FROM   sys.dm_exec_requests AS qs
       CROSS APPLY sys.Dm_exec_sql_text(qs.sql_handle) AS qt
       OUTER APPLY sys.Dm_exec_query_plan(qs.plan_handle) qp
WHERE  qt.dbid = Db_id() -- Filter by current database

Posted in DBA | Leave a Comment »

PowerShell – Do Not use Write-Host!!!!!

Posted by Pramod Singla on November 20, 2015

Problem Statement: People often use Write-Host in their scripts, but it is considered harmful. Use the other options instead.

Solution: Write-Host is almost always the wrong thing to do when writing or reviewing PowerShell scripts, because it writes straight to the host and so interferes with automation. Rather than using Write-Host, try one of the following cmdlets, as per your requirement:

♦ Write-Debug
  • The Write-Debug cmdlet writes debug messages to the console from a script or command.
  • The text foreground color will be cyan.
  • Use it when you want to run your PowerShell script in debug mode. A prompt is displayed when a Write-Debug command is hit, and the script will not move forward until you choose one of the options given in the prompt.


  • A Write-Debug command in your script produces output only if you specify -Debug when calling the script.
  • Example:
    PS C:\>Write-Debug "Cannot open file." -debug
♦ Write-Error
  • Writes an object to the error stream, along with other information such as an error ID, object data, and suggested actions.
  • The -Category parameter specifies the category of the error. The default value is NotSpecified.
  • Example:
Write-Error -Message "Error: Too many input values." -Category InvalidArgument
♦ Write-Information
  • Write-Information lets you add an informational message to the stream and specifies how Windows PowerShell handles information stream data for a command.
  • The $InformationPreference preference variable value determines whether the message you provide to Write-Information is displayed at the expected point in a script’s operation.
  • Because the default value of this variable is SilentlyContinue, by default, informational messages are not shown.
  • Example:
function Test-Info {
         Get-Process P*
         Write-Information "Here you go"
}
Test-Info 6> Info.txt
♦ Write-Output
  • The Write-Output cmdlet sends the specified object down the pipeline to the next command. If the command is the last command in the pipeline, the object is displayed in the console.
  • This cmdlet is typically used in scripts to display strings and other objects on the console. However, because the default behavior is to display the objects at the end of a pipeline, it is generally not necessary to use the cmdlet. For example, “get-process | write-output” is equivalent to “get-process”.
  • Example:
write-output "test output" | get-member
♦ Write-Progress
  • The Write-Progress cmdlet displays a progress bar in a Windows PowerShell command window that depicts the status of a running command or script.
  • You can select the indicators that the bar reflects and the text that appears above and below the progress bar.
  • Example:
for ($i = 1; $i -le 100; $i++ )
{write-progress -activity "Search in Progress" -status "$i% Complete:" -percentcomplete $i;}
♦ Write-Verbose
  • The Write-Verbose cmdlet writes text to the verbose message stream in Windows PowerShell. Typically, the verbose message stream is used to deliver information about command processing that is used for debugging a command.
  • By default, the verbose message stream is not displayed, but you can display it by changing the value of the $VerbosePreference variable or using the Verbose common parameter in any command.
  • Example:
$VerbosePreference = "Continue"
PS C:\>Write-Verbose "Copying file $filename"
♦ Write-Warning
  • The Write-Warning cmdlet writes a warning message to the Windows PowerShell host.
  • The response to the warning depends on the value of the user’s $WarningPreference variable and the use of the WarningAction common parameter.
  • Example:
write-warning "This is only a test warning."

Posted in DBA | Leave a Comment »

SQL Server – Check whether the SQL server is configured for case sensitivity or not

Posted by Pramod Singla on June 10, 2015

Problem: How to check whether the SQL server is configured for case sensitivity or not?


  • Case sensitivity: By case sensitivity, we mean the case sensitivity of object names in the database, i.e. on a case-sensitive server the following two statements are not equal:-


SELECT * FROM table_name
SELECT * FROM TABLE_NAME

  • SQL Server determines case sensitivity by COLLATION.
  • Collation: Collation refers to a set of rules that determine how data is sorted and compared. Character data is sorted using rules that define the correct character sequence, with options for specifying case sensitivity, accent marks, kana character types, and character width.
  • COLLATION can be set at various levels:-

1. Server-level
2. Database-level
3. Column-level
4. Expression-level

  • Check database collation:-

          SELECT DATABASEPROPERTYEX (DB_NAME(), 'Collation')

          Result: SQL_Latin1_General_CP1_CI_AS (CI means case insensitive)

          Result: SQL_Latin1_General_CP1_CS_AS (CS means case sensitive)
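
  • Collation can be checked at the other levels too. A minimal sketch (SERVERPROPERTY reads the server-level collation; sys.columns exposes column-level collations; <table_name> is a placeholder):

          SELECT SERVERPROPERTY ('Collation')

          SELECT name, collation_name FROM sys.columns WHERE object_id = OBJECT_ID ('<table_name>')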

Posted in DBA | 1 Comment »

SQL Server – Performance Tuning Tips

Posted by Pramod Singla on August 28, 2013

Introduction: This document lists ways of improving the performance of T-SQL. It includes a few tips and techniques that we used in a live project.

Query tuning:

  1. If your SQL Server database is not configured to be case sensitive, you don’t need to use LOWER or UPPER to force the case of text to be equal for a comparison to be performed; e.g.

SELECT column_name FROM table_name
WHERE LOWER(column_name) = 'name' (Not preferred)

SELECT column_name FROM table_name
WHERE column_name = 'name' (Preferred)

Both queries fetch the same result as long as the SQL Server database is not configured to be case sensitive. To check case sensitivity, see here.

  2. Use '=' as much as possible, and '<>' as little as possible:

Operators at the top will produce results faster than those listed at the bottom.

  •       =
  •       >, >=, <, <=
  •       LIKE
  •        <>
  3. Avoid using AUTO_SHRINK:

Set the AUTO_SHRINK database option to OFF. Turning it ON may fragment your indexes. If you know that the space you are reclaiming will not be needed in the future, you can reclaim it by manually shrinking the database.
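
For example, a sketch of both settings (the database name MyDb is illustrative):

ALTER DATABASE [MyDb] SET AUTO_SHRINK OFF;
-- Reclaim space manually, only if it will not be needed again:
DBCC SHRINKDATABASE ([MyDb]);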

  4. Stop using "*" in queries:

It is advisable to list column names rather than using "*" in a SELECT query. "*" may cause an index scan or prevent the use of an index altogether.

  5. Avoid using the IN clause:

In some circumstances, you would do better to spell IN as EXISTS (especially when preceded by NOT). Be aware, though, that IN and EXISTS handle NULL values differently.

When dealing with NULL values, NOT IN will not give the same result as NOT EXISTS. Check here.
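
A minimal sketch of the NULL pitfall (the table variables are illustrative):

DECLARE @a TABLE (id INT);
DECLARE @b TABLE (id INT);
INSERT INTO @a VALUES (1), (2);
INSERT INTO @b VALUES (1), (NULL);

-- Returns no rows: 2 NOT IN (1, NULL) evaluates to UNKNOWN, not TRUE
SELECT id FROM @a WHERE id NOT IN (SELECT id FROM @b);

-- Returns 2: the NULL row in @b never matches, so NOT EXISTS holds
SELECT a.id FROM @a a WHERE NOT EXISTS (SELECT 1 FROM @b b WHERE b.id = a.id);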

  6. Avoid implicit conversion:

Implicit conversion can cause an index scan. Use the proper data types in predicates and parameters so that no conversion is needed.
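
A typical sketch of the problem (the Employees table and its VARCHAR LastName column are illustrative): comparing a VARCHAR column to an NVARCHAR literal forces the column to be converted, which can turn an index seek into a scan.

SELECT * FROM Employees WHERE LastName = N'Singla'  -- implicit conversion of LastName, may scan

SELECT * FROM Employees WHERE LastName = 'Singla'   -- matching type, can seek the index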

  7. Avoid use of SELECT COUNT(*):

There is an alternate way to get the row count of a table, which improves performance when the table contains a huge number of rows.

SELECT COUNT(*) FROM <table_name> (Not preferred)

SELECT rows FROM sysindexes WHERE id = OBJECT_ID('<table_name>') AND indid < 2 (Preferred)

  8. Avoid using the HAVING clause:

The HAVING clause is used to restrict the result set returned by the GROUP BY clause. When you use GROUP BY with the HAVING clause, the GROUP BY clause divides the rows into sets of grouped rows and aggregates their values, and then the HAVING clause eliminates undesired aggregated groups. In many cases, you can write your SELECT statement so that it contains only WHERE and GROUP BY clauses, without a HAVING clause. This can improve the performance of your query.
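
A minimal sketch (names illustrative): a condition that does not involve an aggregate belongs in WHERE, so rows are filtered before grouping instead of after.

SELECT dept, COUNT(*) FROM employees GROUP BY dept HAVING dept <> 'HR' (Not preferred)

SELECT dept, COUNT(*) FROM employees WHERE dept <> 'HR' GROUP BY dept (Preferred)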

  9. Avoid using the DISTINCT clause:

The DISTINCT clause has to eliminate duplicate rows, typically by sorting the result, which degrades performance. Use this clause only when it is necessary.

  10. Try to use the UNION ALL statement instead of UNION:

The UNION ALL statement is much faster than UNION, because UNION ALL does not look for duplicate rows, while the UNION statement looks for duplicates whether they exist or not.
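
For example (names illustrative), when the two inputs cannot overlap, or duplicates are acceptable:

SELECT id FROM current_orders UNION SELECT id FROM archived_orders (Not preferred)

SELECT id FROM current_orders UNION ALL SELECT id FROM archived_orders (Preferred)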

  11. Avoid use of the "NOT IN" clause:

If you have a query that uses NOT IN, try the following option for better performance:

  • Perform a LEFT OUTER JOIN and check for a NULL condition
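
A minimal sketch of the rewrite (names illustrative):

SELECT c.id FROM customers c
WHERE c.id NOT IN (SELECT o.customer_id FROM orders o) (Not preferred)

SELECT c.id FROM customers c
LEFT OUTER JOIN orders o ON o.customer_id = c.id
WHERE o.customer_id IS NULL (Preferred)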
  12. Try to replace IN with BETWEEN:

When you have a choice of using the IN or the BETWEEN clauses in your Transact-SQL, you will generally want to use the BETWEEN clause, as it is much more efficient.


WHERE customer_number in (1000, 1001, 1002, 1003, 1004) (Not preferred)

WHERE customer_number BETWEEN 1000 and 1004 (Preferred)

  13. Use 'IS NULL OR' over ISNULL():

Operators are normally faster than functions. In this case, 'IS NULL OR' will fetch the results faster than the ISNULL() function. E.g.

Let @in_name be the input variable.

SELECT column_names FROM table_name
WHERE ISNULL(@in_name, column_name) = column_name (Not preferred)

SELECT column_names FROM table_name
WHERE (@in_name IS NULL OR column_name = @in_name) (Preferred)

  14. In case of insufficient indexes, use a "UNION ALL" clause rather than an "OR" clause:

A query with one or more "OR" clauses can sometimes be rewritten as a series of queries combined with a "UNION ALL" statement, boosting the performance of the query.

SELECT employeeID, firstname, lastname FROM names
WHERE dept = 'retx' OR city = 'karnal' OR division = 'adidas'

(Not preferred, because this query has three separate conditions in the WHERE clause. For this query to use an index, there must be an index on all three columns found in the WHERE clause.)

SELECT employeeID, firstname, lastname FROM names WHERE dept = 'retx'
UNION ALL
SELECT employeeID, firstname, lastname FROM names WHERE city = 'karnal'
UNION ALL
SELECT employeeID, firstname, lastname FROM names WHERE division = 'adidas'

(Preferred, because if there is an index only on dept but not on the other columns in the WHERE clause, the first version cannot use any index and a table scan must be performed, while the second version can use the index for at least part of the query.)

  15. Rearrange WHERE clause conditions:

If you have a WHERE clause that includes expressions connected by two or more AND operators, SQL Server will evaluate them from left to right in the order they are written. Because of this, you may want to consider the following when using AND:

  • Locate the AND expression least likely to be true first. This way, if that expression is false, the clause ends immediately, saving time.
  • If both parts of an AND expression are equally likely to be false, put the less complex AND expression first. This way, if it is false, less work is needed to evaluate the expression.
  16. Avoid sorting the data:

Sort operations consume significant resources. Sorting often occurs when any of the following Transact-SQL statements is executed:

  • ORDER BY
  • GROUP BY
  • DISTINCT
  • UNION
  • CREATE INDEX (generally not as critical, as it happens much less often)

In many cases, these commands cannot be avoided. On the other hand, there are a few ways that sorting overhead can be reduced. These include:

  • Keep the number of rows to be sorted to a minimum, by returning only those rows that absolutely need to be sorted.
  • Keep the number of columns to be sorted to a minimum; in other words, do not sort more columns than required.
  • Keep the width (physical size) of the columns to be sorted to a minimum.
  • Sort columns with numeric datatypes instead of character datatypes.
  17. Try to avoid WHERE clauses that are non-sargable:

The term "sargable" (in effect a made-up word) comes from the pseudo-acronym "SARG", which stands for "Search ARGument" and refers to a WHERE clause that compares a column to a constant value. If a WHERE clause is sargable, it can take advantage of an index (assuming one is available).

WHERE SUBSTRING(login, 1, 7) = 'infosys'

(Non-sargable, so not preferred)

WHERE login LIKE 'infosys%'

(Sargable, so preferred)

WHERE DATEDIFF(yy, dateofbirth, GETDATE()) > 21

(Non-sargable, so not preferred)

WHERE dateofbirth < DATEADD(yy, -21, GETDATE())

(Sargable, because the function is not applied directly to the column, so preferred)

WHERE NOT column_name > 5

(Non-sargable, so not preferred)

WHERE column_name <= 5

(Sargable, so preferred)

WHERE column_name LIKE '%m'

(Non-sargable, so not preferred)

WHERE column_name LIKE 'm%'

(Sargable, so preferred.) If you use a leading character in your LIKE clause, the query optimizer can potentially use an index to perform the query, speeding performance and reducing the load on SQL Server.

  18. In a WHERE clause, the operands used directly affect how fast the query runs. Operands at the top of this list produce results faster than those at the bottom:
  • A single literal used by itself on one side of an operator.
  • A single column name used by itself on one side of an operator.
  • A single parameter used by itself on one side of an operator.
  • A multi-operand expression on one side of an operator.
  • A single exact number on one side of an operator.
  • Other numeric data (other than exact numbers), dates, and times.
  • Character data and NULLs.

Procedure Tuning:

  1. Include a SET NOCOUNT ON statement in your stored procedures to stop the message indicating the number of rows affected by a T-SQL statement:
    This can reduce network traffic, because your client will not receive those messages. Use SET NOCOUNT ON at the beginning of all your stored procedures, triggers, and functions.
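
A minimal sketch (the procedure and table names are illustrative):

CREATE PROCEDURE dbo.usp_GetOrders
AS
BEGIN
    SET NOCOUNT ON;  -- suppress "(n rows affected)" messages
    SELECT order_id, customer_id FROM orders;
END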
  2. Avoid using SQL Server cursors:

  • Cursors generally use many SQL Server resources and hurt performance.
  • Using cursors can reduce concurrency and lead to unnecessary locking and blocking.

As alternatives, consider:

  • Temp tables
  • Temp (table) variables
  3. Try using temp variables or temp tables to avoid referencing the main table more than once:

If a particular set of data is fetched by joining more than one table, and this data set is needed in more than one query, use a temp table or temp variable instead of referring to the main tables repeatedly.

Temporary tables and variables keep the data in a separate area, reducing main-table scans and avoiding locks.
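
A minimal sketch (names illustrative):

SELECT o.customer_id, SUM(o.amount) AS total
INTO   #customer_totals
FROM   orders o
       JOIN customers c ON c.id = o.customer_id
GROUP  BY o.customer_id;

-- Reuse the temp table in later queries instead of re-joining the main tables
SELECT customer_id FROM #customer_totals WHERE total > 1000;

DROP TABLE #customer_totals;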

Cursor Tuning:

  1. Try to use a FORWARD-ONLY or FAST-FORWARD, READ-ONLY cursor:
  • Generally, a FORWARD_ONLY cursor is the most performant, followed by DYNAMIC, KEYSET, and finally STATIC, which is generally the least performant.
  • When working with unidirectional, read-only data, use the FAST_FORWARD option instead of FORWARD_ONLY, as it has some internal optimizations that speed performance. E.g.

               DECLARE cursor_name CURSOR FAST_FORWARD FOR select_statement

  2. When you are done using a cursor, don't just CLOSE it; you must also DEALLOCATE it:

Deallocation is required to free up the SQL Server resources used by the cursor. E.g.

CLOSE cursor_name
DEALLOCATE cursor_name

Posted in DBA, Performance Tuning, SQL Sever 2008 | Leave a Comment »