RAISERROR:

RAISERROR raises an error with a user-defined error message. The error message is either created dynamically or stored in the system table sysmessages.

Example: Let's create a stored procedure that takes two numbers and raises an error if the second number is 0.

CREATE PROCEDURE spDivision @FirstNumber int, @SecondNumber int
AS
    IF @SecondNumber = 0
      BEGIN
        SELECT 'Error'
        RAISERROR ('Error: Division by zero.', 16, 1)
      END
    ELSE
      SELECT @FirstNumber/@SecondNumber

Running the SP as EXEC spDivision 5, 0 will show the error as:

(1 row(s) affected)
Msg 50000, Level 16, State 1, Procedure spDivision, Line 6
Error: Division by zero.

If the error message is used in many SPs, the message can be stored in sysmessages to avoid inconsistencies when it changes. The system SP sp_addmessage can be used to add the message and sp_dropmessage to drop it. Note that user-defined error messages must have a msg_id greater than or equal to 50001.

Example: To add an error message into sysmessages:

EXEC sp_addmessage 50001, 16, 'Error: Division by zero.'
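When the message is no longer needed, it can be removed again with sp_dropmessage:

```sql
-- Remove the user-defined message added above
EXEC sp_dropmessage 50001
```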

Now the above SP code will be as follows:

CREATE PROCEDURE spDivision @FirstNumber int, @SecondNumber int
AS
    IF @SecondNumber = 0
      BEGIN
        SELECT 'Error'
        RAISERROR (50001, 16, 1)
      END
    ELSE
      SELECT @FirstNumber/@SecondNumber

@@ERROR:

The system function @@ERROR returns an error code if an error was encountered by the T-SQL statement immediately preceding it; otherwise it returns 0, which means success. The value of @@ERROR changes after every T-SQL statement, so the only way to keep track of errors is to store the error code in a temporary variable. If there is no need to keep track of the error but simply act upon it, the value of @@ERROR can be checked immediately after the T-SQL statement to be tested.

CREATE PROCEDURE spDivision @FirstNumber int, @SecondNumber int
AS
    DECLARE @errnum int
    SELECT @FirstNumber/@SecondNumber
    SET @errnum = @@ERROR
    IF @errnum <> 0
      SELECT 'Error'

EXEC spDivision 5, 0

Msg 8134, Level 16, State 1, Procedure spDivision, Line 4
Divide by zero error encountered.
(1 row(s) affected)

The return code is changed automatically to store the latest @@Error value if no RETURN statement is present. Even if there are more statements after the error occurred, the error code is still preserved.
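As a side note, from SQL Server 2005 onward this pattern is usually written with TRY...CATCH instead of polling @@ERROR, which avoids the per-statement reset problem. A minimal sketch of the same division logic (the procedure name spDivisionTry is hypothetical):

```sql
-- Sketch only: the same check using TRY...CATCH instead of @@ERROR
CREATE PROCEDURE spDivisionTry @FirstNumber int, @SecondNumber int
AS
BEGIN TRY
    SELECT @FirstNumber / @SecondNumber
END TRY
BEGIN CATCH
    -- Unlike @@ERROR, these functions stay valid anywhere inside the CATCH block
    SELECT ERROR_NUMBER() AS ErrNum, ERROR_MESSAGE() AS ErrMsg
END CATCH
```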

Locking is a means of not allowing any other transaction to take place when one is already in progress. The data is locked, and no modification can take place until the transaction either succeeds or fails; the lock has to be acquired before the data is processed. Multi-Versioning is an alternative to locking for controlling concurrency. It provides an easy way to view and modify data, allowing two users to view and read the data while a transaction is in progress. Multiversion concurrency control is described in some detail in the 1981 paper “Concurrency Control in Distributed Database Systems” by Philip Bernstein and Nathan Goodman.
With Multi-Versioning, as the name implies, each record in the system might have multiple versions visible to different transactions. When a transaction modifies a record, a new version is written to the database, and a previous version, representing only the difference between the version of the record that was read by the transaction and the new value of the record, is written as a back version of that record.
Read committed isolation using row versioning is somewhere in between Locks and Multi-Versioning. Under this isolation level, read operations do not acquire locks against the active live data. However, with update operations the process is the same as for the default read committed isolation level: the selection of rows to update is done by using a blocking scan where an update lock is taken on the data row as data values are read.
Snapshot isolation uses purely Multi-Versioning because data that is to be modified is not actually locked in advance, but the data is locked when it is selected for modification. When a data row meets the update criteria, the snapshot transaction verifies that the data has not been modified by another transaction after the snapshot transaction started. If the data has not been modified by another transaction, the snapshot transaction locks the data, updates the data, releases the lock, and moves on. If the data has been modified by another transaction, an update conflict occurs and the snapshot transaction rolls back.
Although locking can be the best concurrency-control choice for applications that require data consistency and inter-transaction protection, it can cause writers to block readers. If a transaction changes a row, another transaction cannot read the row until the writer commits. There are cases where waiting for the change to complete is the correct response; however, there are cases where the previous transactionally consistent state of the row is enough.

As we know blocking occurs when one SPID holds a lock on a specific resource and a second SPID attempts to acquire a conflicting lock type on the same resource. Typically, the time frame for which the first SPID locks the resource is very small. When it releases the lock, the second connection is free to acquire its own lock on the resource and continue processing. This is normal behavior and may happen many times throughout the course of a day with no noticeable effect on system performance.
By using the read-committed snapshot isolation level introduced in Microsoft SQL Server 2005, we can reduce blocking and deadlocking issues caused by lock contention. Every time a row is modified by a specific transaction, the instance of the Database Engine stores a version of the previously committed image of the row in tempdb.

As per BOL, The read-committed snapshot isolation level provides the following benefits:

  • SELECT statements do not lock data during a read operation. Read transactions do not block write transactions and vice versa.
  • Since SELECT statements do not acquire locks in most cases, the number of locks required by a transaction is reduced, which reduces the system overhead required to manage locks.
  • The possibility of blocking is significantly reduced.
  • SELECT statements can access the last committed value of the row while other transactions are updating the row without getting blocked.
  • The number of blocks and deadlocks is reduced.
  • Fewer lock escalations occur.

A quick and easy way to remember how Read-Committed Snapshot Isolation works:

  • Readers DO NOT block Writers
  • Writers DO NOT block Readers
  • Writers DO block Writers

REMEMBER: TempDB is a critical resource for the performance of Read-Committed Snapshot Isolation. RCSI puts a copy of the records that are being modified in TempDB before the transaction is committed so you need to make sure that TempDB has appropriate space and is placed on fast drives that ideally are isolated away from your user database data and log files.

The syntax for setting the isolation level is:

SET TRANSACTION ISOLATION LEVEL
{ READ COMMITTED | READ UNCOMMITTED | REPEATABLE READ | SERIALIZABLE }
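Note that read-committed snapshot behavior is not chosen per session but enabled per database. A hedged sketch (the database name SalesDB is hypothetical):

```sql
-- Enable read-committed snapshot isolation (RCSI) for a database;
-- requires exclusive access to the database while the option is changed
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;

-- Snapshot isolation proper is enabled with a separate option
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
```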

As we know, to check what's going on at the server level we commonly use sp_who or sp_who2, but neither gives a clear picture of the exact commands currently running on the server; they show the command type, not the full text of the statement that is running. Here is a DMV query to find exactly that.

SELECT SPID = er.session_id
    ,BlkBy = er.blocking_session_id
    ,ElapsedMS = er.total_elapsed_time
    ,CPU = er.cpu_time
    ,IOReads = er.logical_reads + er.reads
    ,IOWrites = er.writes
    ,Executions = ec.execution_count
    ,CommandType = er.command
    ,ObjectName = OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) + '.' + OBJECT_NAME(qt.objectid, qt.dbid)
    ,SQLStatement = SUBSTRING(qt.text, er.statement_start_offset/2,
        (CASE WHEN er.statement_end_offset = -1
              THEN LEN(CONVERT(nvarchar(MAX), qt.text)) * 2
              ELSE er.statement_end_offset
         END - er.statement_start_offset)/2)
    ,Status = ses.status
    ,[Login] = ses.login_name
    ,Host = ses.host_name
    ,DBName = DB_Name(er.database_id)
    ,LastWaitType = er.last_wait_type
    ,StartTime = er.start_time
    ,Protocol = con.net_transport
    ,transaction_isolation = CASE ses.transaction_isolation_level
                                 WHEN 0 THEN 'Unspecified'
                                 WHEN 1 THEN 'Read Uncommitted'
                                 WHEN 2 THEN 'Read Committed'
                                 WHEN 3 THEN 'Repeatable'
                                 WHEN 4 THEN 'Serializable'
                                 WHEN 5 THEN 'Snapshot'
                             END
    ,ConnectionWrites = con.num_writes
    ,ConnectionReads = con.num_reads
    ,ClientAddress = con.client_net_address
    ,Authentication = con.auth_scheme
FROM sys.dm_exec_requests er
LEFT JOIN sys.dm_exec_sessions ses ON ses.session_id = er.session_id
LEFT JOIN sys.dm_exec_connections con ON con.session_id = ses.session_id
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
OUTER APPLY (SELECT execution_count = MAX(cp.usecounts)
             FROM sys.dm_exec_cached_plans cp
             WHERE cp.plan_handle = er.plan_handle) ec
ORDER BY er.blocking_session_id DESC, er.logical_reads + er.reads DESC, er.session_id

Similarity

Both commands delete only the data of the specified table; neither removes the table's structure.

Difference

  • TRUNCATE is a DDL (data definition language) command whereas DELETE is a DML (data manipulation language) command.
  • A DELETE trigger fires with the DELETE command, whereas TRUNCATE does not fire triggers.
  • TRUNCATE is faster than DELETE, because DELETE logs each deleted row individually in the transaction log, from where the data can be rolled back, while TRUNCATE only logs the deallocation of the data pages and does not write the deleted rows. Once committed, you can't get the deleted data back after a TRUNCATE.
  • We can use any condition in a WHERE clause with DELETE, but TRUNCATE does not accept a WHERE clause.
  • If the table is referenced by any foreign key constraint, TRUNCATE will not work.
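A minimal sketch of the two side by side (the Employee table is assumed from the later examples in this article):

```sql
-- DELETE: logged per row, fires DELETE triggers, accepts a WHERE clause
DELETE FROM Employee WHERE BasicSalary < 1000;

-- TRUNCATE: deallocates whole pages, no WHERE clause allowed,
-- and resets any identity seed on the table
TRUNCATE TABLE Employee;
```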

sp_executesql is used for parameterized statements, while EXEC does not support parameterized statements. Parameterized statements are more secure in terms of SQL injection and allow better reuse of the plan cache, which reduces overhead on the server and boosts performance. sp_executesql executes a string of Transact-SQL in its own self-contained batch. When it is run, SQL Server compiles the code in the string into an execution plan that is separate from the batch that contained sp_executesql and its string.

Suppose a query takes a parameter, e.g. “EmpID”. When we run the query with EmpID as 1 and then 2, it creates two different cache entries (one each for the values 1 and 2 respectively).

-- First clear the proc cache

DBCC FREEPROCCACHE
GO

-- Let's check

-- Using EXEC

DECLARE @strQuery nvarchar(1000)
SELECT @strQuery = 'Select * from Employee where EmpID = ''1'''
EXEC (@strQuery)
SELECT @strQuery = 'Select * from Employee where EmpID = ''2'''
EXEC (@strQuery)

-- Using sp_executesql
--DECLARE @strQuery nvarchar(1000)

SELECT @strQuery = 'Select * from Employee where EmpID = @EmpID'
EXEC sp_executesql @strQuery, N'@EmpID int', 1
EXEC sp_executesql @strQuery, N'@EmpID int', 2

-- Let's check the execution count for both

SELECT sqlTxt.[text], QS.execution_count
FROM sys.dm_exec_query_stats QS
CROSS APPLY (SELECT [text] FROM sys.dm_exec_sql_text(QS.sql_handle)) AS sqlTxt

This means that for unparameterized (EXEC) queries, the cached plan is reused only if we ask for the same ID again, so the cached plan is not of much use. In the case of sp_executesql, i.e. parameterized queries, the cached plan is created only once and reused 'n' number of times, similar to a stored procedure, so it gives better performance.

sp_executesql works as “Forced Statement Caching”, while EXEC works as “Dynamic String Execution”.

The fill factor option is provided for fine-tuning index data storage and performance. It is available on the CREATE INDEX and ALTER INDEX statements. The fill-factor value affects an index when it is created or rebuilt: it determines the percentage of space on each page filled with data, with the unused space reserved for future growth. For example, specifying a fill-factor value of 80 means that 20 percent of each page will be left empty (excluding the root page). As data fills the table, the index keys use this space. The fill-factor value is an integer from 0 to 100 representing the percentage; the server-wide default is 0, which is treated as 100.

The best fill factor depends on the purpose of the database and the type of clustered index. If the database is primarily for data retrieval, or the primary key is sequential, a high fill factor will pack as much as possible into each index page. If the clustered index is non-sequential (such as a natural primary key), the table is susceptible to page splits, so use a lower fill factor and defragment the pages often.
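For illustration, a fill factor can be set when an index is created or rebuilt; a sketch (the table and index names here are hypothetical):

```sql
-- Leave 20% free space on each leaf page for future inserts
CREATE INDEX IX_Employee_Name ON Employee(Name) WITH (FILLFACTOR = 80);

-- The value only takes effect again when the index is rebuilt
ALTER INDEX IX_Employee_Name ON Employee REBUILD WITH (FILLFACTOR = 80);
```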

Here is a way to retrieve the count of fill factors used in a database:

Select fill_factor, Count(1) as NoOfIndexes From Sys.Indexes Group by fill_factor

More details are explained in these articles: http://www.mssqltips.com/sqlservertip/1940/understanding-sql-server-index-fill-factor-setting/
OR
http://www.sqlskills.com/blogs/kimberly/category/FILLFACTOR.aspx


The GROUP BY statement, used to summarize data, is almost as old as SQL itself. Microsoft introduced the additional paradigms of ROLLUP and CUBE to add power to the GROUP BY clause back in SQL Server 6.5: the CUBE and ROLLUP operators, which are both part of the GROUP BY clause of the SELECT statement, and the COMPUTE and COMPUTE BY operators, which are also associated with GROUP BY.

A newer feature, introduced in SQL Server 2008, is the GROUPING SETS clause, which allows us to easily specify combinations of field groupings in our queries to see different levels of aggregated data.

These operators generate result sets that contain both detail rows for each item in the result set and summary rows for each group showing the aggregate totals for that group. The GROUP BY clause can be used to generate results that contain aggregates for each group, but no detail rows.

COMPUTE and COMPUTE BY are supported for backward compatibility. The ROLLUP operator is preferred over either COMPUTE or COMPUTE BY. The summary values generated by COMPUTE or COMPUTE BY are returned as separate result sets interleaved with the result sets returning the detail rows for each group, or as a result set containing the totals appended after the main result set. Handling these multiple result sets increases the complexity of the code in an application. Neither COMPUTE nor COMPUTE BY is supported with server cursors, whereas ROLLUP is. CUBE and ROLLUP generate a single result set containing embedded subtotal and total rows. The query optimizer can also sometimes generate more efficient execution plans for ROLLUP than it can for COMPUTE and COMPUTE BY. When GROUP BY is used without these operators, it returns a single result set with one row per group containing the aggregate subtotals for the group. There are no detail rows in the result set.

To demonstrate all these, let's create a sample table named SalesData:

-- Create a sample SalesData table

CREATE TABLE SalesData (EmpCode Char(5), SalesYear INT, SalesAmount MONEY)

-- Insert some rows into the SalesData table
INSERT SalesData VALUES (1, 2005, 10000),(1, 2006, 8000),(1, 2007, 17000),
             (2, 2005, 13000),(2, 2007, 16000),(3, 2006, 10000),
             (3, 2007, 14000),(1, 2008, 15000),(1, 2009, 15500),
             (2, 2008, 19000),(2, 2009, 16000),(3, 2008, 8000),
             (3, 2009, 7500),(1, 2010, 9000),(1, 2011, 15000),
             (2, 2011, 5000),(2, 2010, 16000),(3, 2010, 12000),
             (3, 2011, 14000),(1, 2012, 8000),(2, 2006, 6000),
             (3, 2012, 2000)

ROLLUP: The ROLLUP operator generates reports that contain subtotals and totals. The ROLLUP option, placed after the GROUP BY clause, instructs SQL Server to generate an additional total row. Let's see examples using our test data from the SalesData table.

Select EmpCode,sum(SalesAmount) as SalesAMT from SalesData Group by ROLLUP(EmpCode)
will show the result as:


Here we can clearly see the sum over all EmpCode values in a row titled NULL. Someone once asked how to give this NULL a name, such as 'Grand Total'. Here is a tricky manipulation with GROUPING() that gets the desired output; it can be adapted wherever required.

Select
    case when grouping(EmpCode) = 1 THEN 'Grand Total' else EmpCode end as EmpCode,
    sum(SalesAmount) as SalesAMT
from SalesData Group by ROLLUP(EmpCode)
will show the result as:


In the ROLLUP parameter we can also specify multiple grouping columns, like ROLLUP(EmpCode, SalesYear); ROLLUP then outputs a hierarchy of values over the selected columns.
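For example, a two-column ROLLUP over our SalesData table produces the per-year detail aggregates, a subtotal row per EmpCode, and one grand-total row (a sketch against the sample data above):

```sql
SELECT EmpCode, SalesYear, SUM(SalesAmount) AS SalesAMT
FROM SalesData
GROUP BY ROLLUP(EmpCode, SalesYear)
```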

CUBE: The CUBE operator is specified in the GROUP BY clause of a SELECT statement. The select list contains the dimension columns and aggregate function expressions, and the GROUP BY specifies the dimension columns. The result set contains all possible combinations of the values in the dimension columns, together with the aggregate values from the underlying rows that match each combination of dimension values. In other words, CUBE generates a result set that represents aggregates for all combinations of values in the selected columns. Let's see examples using our test data from the SalesData table.

Select EmpCode, SalesYear, sum(SalesAmount) as SalesAMT from SalesData Group by CUBE(SalesYear, EmpCode)

will show as:

Select SalesYear, EmpCode, sum(SalesAmount) as SalesAMT from SalesData Group by CUBE(SalesYear, EmpCode)

will show as:


The ROLLUP and CUBE aggregate functions generate subtotals and grand totals as separate rows, and supply a null in the GROUP BY column to indicate the grand total.

The difference between the CUBE and ROLLUP operators is that CUBE generates a result set that shows the aggregates for all combinations of values in the selected columns, whereas ROLLUP generates a result set that shows the aggregates only for a single hierarchy of values in the selected columns.

GROUPING SETS: The GROUPING SETS clause allows you to choose whether to see subtotals and grand totals in your result set. It gives us the versatility to:

  • include just one total
  • include different levels of subtotal
  • include a grand total
  • choose the position of the grand total in the result set
Let's see examples:

Select EmpCode, SalesYear, sum(SalesAmount) as SalesAMT from SalesData Group by GROUPING SETS (SalesYear, EmpCode)


Select EmpCode, SalesYear, sum(SalesAmount) as SalesAMT from SalesData Group by GROUPING SETS (EmpCode, SalesYear)
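As a sketch of the grand-total option, adding the empty grouping set () to the list appends one grand-total row to the two sets of subtotals:

```sql
-- The empty grouping set () contributes a single grand-total row
SELECT EmpCode, SalesYear, SUM(SalesAmount) AS SalesAMT
FROM SalesData
GROUP BY GROUPING SETS (EmpCode, SalesYear, ())
```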

COMPUTE & COMPUTE BY: The summary values generated by COMPUTE appear as separate result sets in the query results. The results of a query that includes a COMPUTE clause are like a control-break report, i.e. a report whose summary values are controlled by the groupings, or breaks, that you specify. You can produce summary values for groups, and you can also calculate more than one aggregate function for the same group.

When COMPUTE is specified with the optional BY clause, there are two result sets for each group that qualifies for the SELECT:

  • The first result set for each group has the set of detail rows that contain the select list information for that group.
  • The second result set for each group has one row that contains the subtotals of the aggregate functions specified in the COMPUTE clause for that group.

When COMPUTE is specified without the optional BY clause, there are two result sets for the SELECT:

  • The first result set for each group has all the detail rows that contain the select list information.
  • The second result set has one row that contains the totals of the aggregate functions specified in the COMPUTE clause.

Comparing COMPUTE to GROUP BY

To summarize the differences between COMPUTE and GROUP BY:

  • GROUP BY produces a single result set. There is one row for each group containing only the grouping columns and aggregate functions showing the subaggregate for that group. The select list can contain only the grouping columns and aggregate functions.
  • COMPUTE produces multiple result sets. One type of result set contains the detail rows for each group containing the expressions from the select list. The other type of result set contains the subaggregate for a group, or the total aggregate for the SELECT statement. The select list can contain expressions other than the grouping columns or aggregate functions. The aggregate functions are specified in the COMPUTE clause, not in the select list.

Let's see an example:

Select EmpCode,SalesYear,SalesAmount from SalesData order by EmpCode,SalesYear COMPUTE SUM(SalesAmount)

The COMPUTE and COMPUTE BY clauses were provided for backward compatibility only; both are discontinued from SQL Server 2012 onward. Use ROLLUP instead.

Magic Tables are invisible tables created by MS SQL Server during INSERT/UPDATE/DELETE operations on any table. These tables temporarily hold the affected values while the DML statement completes.
They are internal tables used by SQL Server to track recently inserted, deleted, and updated data in a database: when we insert into or delete from any table, the affected rows are also placed into the INSERTED or DELETED magic table, which lets us access the data a DML statement is modifying. There are two magic tables, INSERTED and DELETED, and an UPDATE is represented using both: the INSERTED table contains the new values and the DELETED table contains the old values. Magic tables do not contain columns of the data types text, ntext, or image. They are maintained by SQL Server for internal processing whenever an INSERT, UPDATE, or DELETE occurs on a table.
In SQL Server 6.5, 7.0 and 2000, these magic tables could be used only with triggers, while SQL Server 2005, 2008 and 2008 R2 can also use them without triggers: magic tables with non-trigger activities use the OUTPUT clause in SQL Server 2005, 2008 and 2008 R2.

Suppose we have an Employee table. We will create triggers to see the data in the virtual tables Inserted and Deleted, and then do the same without triggers.

For the INSERTED virtual table, using a trigger (Magic Tables with Triggers):

CREATE TRIGGER trg_Emp_Ins ON Employee
FOR INSERT
AS
BEGIN
/* Here you can write your required code; this is only a demonstration of the magic tables. */
PRINT 'Data in INSERTED Table'
SELECT * FROM INSERTED -- It will show data in the Inserted virtual table
PRINT 'Data in DELETED Table'
SELECT * FROM DELETED -- It will show data in the Deleted virtual table
END
-- Now insert a new record into the Employee table to see the data in the Inserted virtual table
INSERT INTO Employee(Name, BasicSalary) VALUES('Virendra', 2000)

For the INSERTED virtual table, without a trigger (Magic Tables with Non-Triggers):

-- Use the INSERTED magic table with the OUTPUT clause to insert values into a temp table or table variable

CREATE TABLE #CopyEMP(CName varchar(12), CDept varchar(12))
-- Now insert a new record into the Employee table to see the data in the Inserted virtual table

INSERT INTO Employee(Name, BasicSalary) OUTPUT INSERTED.Name, INSERTED.BasicSalary INTO #CopyEMP VALUES('Ram', 100)
SELECT * FROM #CopyEMP

For the DELETED virtual table, using a trigger (Magic Tables with Triggers):

CREATE TRIGGER trg_Emp_Del ON Employee
FOR DELETE
AS
BEGIN
/* Here you can write your required code; this is only a demonstration of the magic tables. */
PRINT 'INSERTED Table'
SELECT * FROM INSERTED -- It will show data in the Inserted virtual table
PRINT 'DELETED Table'
SELECT * FROM DELETED -- It will show data in the Deleted virtual table
END
-- Now delete a few records in the Employee table to see the data in the Deleted virtual table
DELETE Employee WHERE BasicSalary > 10000

For the DELETED virtual table, without a trigger (Magic Tables with Non-Triggers):

-- Use the DELETED magic table with the OUTPUT clause to insert values into a temp table or table variable

CREATE TABLE #CopyEMP(CName varchar(12), CDept varchar(12))

-- Now delete records in the Employee table to see the data in the Deleted virtual table

DELETE Employee OUTPUT DELETED.Name, DELETED.BasicSalary INTO #CopyEMP WHERE BasicSalary > 10000

SELECT * FROM #CopyEMP