Most SQL Server DBAs and developers have faced the scenario where they have to copy only the SQL Server database objects, without the data. To do this, DBAs/developers commonly script out the source database using the Generate Scripts wizard and run that script on the target instance. I have seen very few people who are aware of a feature named "DAC Package", which is available from SQL Server 2008 R2 onwards. Using a DAC package, we can back up all objects without data and restore them to any other instance. Here are the step-by-step details.

Step 1) Connect to the SQL Server instance using SSMS. From Object Explorer, right-click the database → Tasks → Extract Data-tier Application…
This opens the Extract Data-tier Application wizard. Click Next, and the Set Properties page will open.

Step 2) Set the DAC package file location and click Next. Check the Validation and Summary page, then click Next.

Step 3) Click Finish. The DAC package will be created at the specified location.

Now you can restore (deploy) this DAC package on any instance you want. The steps are as below.
Step 4) Using SSMS, connect to the SQL Server instance where you want to deploy this DAC package. Right-click Databases → Deploy Data-tier Application…. The deploy wizard will open; click Next.

Step 5) Browse to the DAC package and click Next. Change the database name as required, click Next, review the summary, and click Finish.

That's all. The new database is restored without data.

Enjoy!


I have personally observed, across various organisations and even within a single organisation where multiple development teams work on projects, that teams can end up following ambiguous naming conventions. It is one of the best development practices that when we create objects such as stored procedures, functions, views, or tables, a proper prefix should be in place to identify the object. Take stored procedures as an example: many developers start stored procedure names with 'sp_', which marks the name as a system stored procedure, so SQL Server will first look for it among the system databases, and this can cause a performance bottleneck. To avoid this kind of problem, and to manage these user-created objects, we can enforce an object naming convention policy at the server as well as the database level, depending on the requirement. We can manage this using Policy-Based Management (PBM).

Here, I am taking an example of enforcing a naming convention policy for stored procedures at the server level. Let's consider that we want to enforce a policy where every new stored procedure must be created with the prefix 'USP_'. Here are the detailed steps.

Step 1) Open SSMS, connect to the SQL Server instance, expand Management → Policy Management, right-click Conditions, and select New Condition…

Step 2) Define the values: Name = Stored Procedure Naming Conventions, Facet = Stored Procedure. In the Expression section, select @Name as the Field, LIKE as the Operator, and 'USP[_]%' as the Value, then click OK.
The details are as below.

The PBM condition has now been created; verify it from the Conditions node. It will look as below.

Step 3) Under Policy Management, right-click Policies and select New Policy…

Step 4) Set Name = Stored Procedure Naming, and for Check Condition select the condition created in the previous step (Stored Procedure Naming Conventions). Under Against targets, keep Every, set Evaluation Mode = On Change: Prevent, leave Server Restriction = None (the default), and click OK.

We have now created the policy for stored procedure names.

Step 5) Enable the policy.

We have created a policy to restrict stored procedure name prefixes. As per the policy, every stored procedure name must now start with USP_; other prefixes will not be accepted. In the example below I create a procedure named CustomerList, and PBM prevents it.

Now, if I change the procedure name to USP_CustomerList, it will be created.
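With the policy enabled and On Change: Prevent in effect, the behaviour above can be sketched as follows (the procedure bodies are hypothetical placeholders):

```sql
-- Violates the 'USP[_]%' naming condition: PBM rolls the CREATE back
CREATE PROCEDURE dbo.CustomerList
AS
    SELECT 1 AS Dummy;
GO

-- Complies with the naming condition: the CREATE succeeds
CREATE PROCEDURE dbo.USP_CustomerList
AS
    SELECT 1 AS Dummy;
GO
```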

Thanks, and keep reading.

As we know, a COMMIT operation saves the changes made in a transaction to the database, while a ROLLBACK undoes those changes. It is generally observed that COMMIT is a faster operation than ROLLBACK.
Now the question is: how is COMMIT faster?
First we need to understand what happens in the database when we issue a COMMIT or ROLLBACK.
When a transaction modifies data, three things happen:

  1. The data pages are changed in the buffer cache and marked as DIRTY pages.
  2. The changes are written to the transaction log (write-ahead logging).
  3. The information needed to undo the changes is also kept in the log.

When we COMMIT, SQL Server only has to write a commit record and harden the transaction's log records to disk; the dirty pages themselves are flushed later by a checkpoint or the lazy writer. In the case of a ROLLBACK, however, SQL Server must read the transaction's log records in reverse, undo each change, and write compensation log records for the undo work as well. In short, the COMMIT command simply removes any recourse to undoing the changes, while ROLLBACK executes everything in reverse, which is why a rollback typically takes at least as long as the original work.
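A quick way to observe this is to time a large modification followed by a rollback; the rollback has to undo every row. The table and row count below are only for illustration:

```sql
-- Illustration only: a rollback must undo each change,
-- so it typically takes about as long as the original work.
CREATE TABLE dbo.RollbackDemo
    (Id INT IDENTITY PRIMARY KEY, Payload CHAR(100) NOT NULL DEFAULT 'x');
GO

BEGIN TRANSACTION;

-- Insert 100,000 rows inside a single transaction
INSERT INTO dbo.RollbackDemo (Payload)
SELECT TOP (100000) 'x'
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- COMMIT here would return almost immediately (it just hardens the log);
-- ROLLBACK must undo every insert and write compensation log records too.
ROLLBACK TRANSACTION;
```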
Happy reading! Please comment your thoughts on this.

Checkpoint Operation

Posted: November 10, 2016 by Virendra Yaduvanshi in Database Administrator

A checkpoint operation flushes dirty data pages from the buffer cache of the current database to disk. This minimizes the active portion of the log that must be processed during a full recovery of the database. During a full recovery, the following types of actions are performed:

  • The log records of modifications not flushed to disk before the system stopped are rolled forward.
  • All modifications associated with incomplete transactions, such as transactions for which there is no COMMIT or ROLLBACK log record, are rolled back.

A checkpoint performs the following processes in the database:

  • Writes a record to the log file, marking the start of the checkpoint.
  • Stores information recorded for the checkpoint in a chain of checkpoint log records.
  • One piece of information recorded in the checkpoint is the log sequence number (LSN) of the first log record that must be present for a successful database-wide rollback. This LSN, called the Minimum Recovery LSN, is the minimum of:
    • the LSN of the start of the checkpoint;
    • the LSN of the start of the oldest active transaction.

The checkpoint records also contain a list of all the active transactions that have modified the database.

  • If the database uses the simple recovery model, marks for reuse the space that precedes the Minimum Recovery LSN.
  • Writes all dirty log and data pages to disk.
  • Writes a record marking the end of the checkpoint to the log file.
  • Writes the LSN of the start of this chain to the database boot page.

Activities That Cause a Checkpoint

Checkpoints occur in the following situations:

  • A CHECKPOINT statement is explicitly executed. A checkpoint occurs in the current database for the connection.
  • A minimally logged operation is performed in the database; for example, a bulk-copy operation is performed on a database that is using the Bulk-Logged recovery model.
  • Database files have been added or removed by using ALTER DATABASE.
  • An instance of SQL Server is stopped by a SHUTDOWN statement or by stopping the SQL Server (MSSQLSERVER) service. Either action causes a checkpoint in each database in the instance of SQL Server.
  • An instance of SQL Server periodically generates automatic checkpoints in each database to reduce the time that the instance would take to recover the database.
  • A database backup is taken.
  • An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
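A manual checkpoint can also be issued directly, and from SQL Server 2012 onwards the automatic (indirect) checkpoint frequency can be tuned per database. The 60-second target below is only an illustrative value:

```sql
-- Force a checkpoint in the current database
CHECKPOINT;

-- Optionally pass a target duration (in seconds) for the checkpoint to complete
CHECKPOINT 10;

-- SQL Server 2012+: enable indirect checkpoints with a recovery-time target
ALTER DATABASE CURRENT SET TARGET_RECOVERY_TIME = 60 SECONDS;
```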

As a DBA, it is a top priority to make sure that all database backups are usable in case of a disaster, and to know how to recover a database from one.
In real-world SQL Server environments we regularly take database backups to various backup devices. From SQL Server 2005 onwards, the Enterprise edition offers the option of mirrored backups. Mirroring a backup media set increases backup reliability by reducing the impact of backup-device failure. Such failures can become very serious, because backups are the last line of defence against data loss. In the current era database sizes are increasing drastically, and as a result the failure of a backup device or media can make a backup unrestorable. Mirroring backup media increases the reliability of backups by providing redundancy.

In the example below, I am taking the backup to two locations.
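A mirrored backup of this kind might look like the following sketch (the database name and file paths are hypothetical):

```sql
-- Write the same backup to two devices; either copy can be used for a restore
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backup\AdventureWorks.bak'
MIRROR TO DISK = 'E:\Backup\AdventureWorks_Mirror.bak'
WITH FORMAT;  -- FORMAT is required when creating a new mirrored media set
```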

The point to note here is that the number of media families and mirrors is defined when the media set is created by a BACKUP DATABASE statement that specifies the WITH FORMAT option.

Keep reading

Here, I am writing down some best practices for index creation on SQL Server database tables.

  • Periodically, run the Index Wizard or Database Engine Tuning Advisor against current Profiler traces to identify potentially missing indexes.
  • Check and remove unused indexes.
  • Creating indexes is not an automatic route to optimized performance. Check the existing indexes first and create a new index only where needed, instead of creating redundant indexes.
  • As a rule of thumb, every table should have at least a clustered index. Generally, but not always, the clustered index should be on a column that monotonically increases — such as an identity column, or some other column where the value is increasing — and is unique. In many cases, the primary key is the ideal column for a clustered index.
  • As we can have only one clustered index per table, take extra time to carefully consider how it will be used. Consider the type of queries that will be used against the table, and make an efficient judgement to which query is the most critical, and if this query will benefit from having a clustered index.
  • If a column in a table is not at least 95% unique, then most likely the query optimizer will not use a non-clustered index based on that column. Because of this, generally don’t want to add non-clustered indexes to columns that aren’t at least 95% unique.
  • Keep the width of indexes as narrow as possible. This reduces the size of the index and reduces the number of disk I/O reads required to read the index, boosting performance.
  • If possible, avoid adding a clustered index to a GUID column. GUIDs take up 16 bytes of storage, more than an identity column, which makes the index larger, which increases I/O reads, which can hurt performance.
  • Try to avoid indexing a timestamp column; it is an extra burden on I/O.
  • Indexes should be considered on all columns that are frequently accessed by the JOIN, WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.
  • Don’t automatically add indexes on a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table.
  • When creating indexes, try to make them unique indexes if at all possible. SQL Server can often search through a unique index faster than a non-unique index because in a unique index each row is unique, and once the needed record is found, SQL Server doesn't have to look any further.
  • If we are regularly using joins between two or more tables in your queries, performance will be optimized if each of the joined columns has appropriate indexes.
  • Don't automatically accept the default fill factor of 100 for your indexes. It may or may not best meet your needs. A high fill factor is good for seldom-changed data, but frequently modified data needs a lower fill factor to reduce page splitting.
  • Don't over-index OLTP tables, as every index increases the time it takes to perform INSERTs, UPDATEs, and DELETEs. There is a fine line between having the ideal number of indexes (for SELECTs) and the ideal number to minimize the overhead that indexes add during data modifications.
  • If it’s known, an application is performing the same query over and over on the same table, consider creating a non-clustered covering index on the table. A covering index, which is a form of a composite index, includes all of the columns referenced in SELECT, JOIN, and WHERE clauses of a query.
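As a sketch of the last point, a covering index for a hypothetical repeated query would put the searched columns in the key and the remaining referenced columns in an INCLUDE list (table and column names here are invented for illustration):

```sql
-- Hypothetical query run repeatedly by the application:
--   SELECT OrderId, OrderDate, TotalDue
--   FROM dbo.Orders
--   WHERE CustomerId = @CustomerId
--   ORDER BY OrderDate;

-- Covering index: seek on CustomerId/OrderDate, INCLUDE the remaining column
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
ON dbo.Orders (CustomerId, OrderDate)
INCLUDE (TotalDue);
```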

    Happy Reading: Please Comment !


Image  —  Posted: August 31, 2016 by Virendra Yaduvanshi in Database Administrator

SQL Server 2016 has introduced Row Level Security (RLS) – it’s an enhanced security feature that enables control over access to rows in a table. For general security purpose there are also many features like “Always Encrypted”, “Dynamic Data Masking”, “enhancement of Transparent Data Encryption”, but RLS allows us to easily control which users can access which data with complete transparency to the application. This enables us to easily restrict the data based on the user identity or security context. Row-Level Security in SQL Server 2016 helps us to maintain a consistent data access policy and reduce the risk of accidental data leakage.
Actually, Row Level Security (RLS) is a concept that provides security at the row level within the database layer, instead of at the application layer.  RLS may be implemented by using a function and a new security policy feature without even changing application code.

Before implementing RLS, we should first know a few new terms that are needed to understand this feature.

Security Predicate: This is not a new object type but an inline table-valued function (inline TVF) that contains the logic for filtering rows.

Security Policy: This is a new object type that can be created, altered, and dropped. It can be considered a container of predicates that can be applied to tables; one policy can contain security predicates for many tables. A policy can be in an ON or OFF state.

RLS supports two types of security predicates.

  • Filter predicates silently filter the rows available to read operations (SELECT, UPDATE, and DELETE).
  • Block predicates explicitly block write operations (AFTER INSERT, AFTER UPDATE, BEFORE UPDATE, BEFORE DELETE) that violate the predicate.
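For reference, a block predicate is attached with the same kind of statement as a filter predicate; the following is only a syntax sketch with hypothetical object names:

```sql
-- Hypothetical: block INSERTs whose EmployeeName does not satisfy the predicate
CREATE SECURITY POLICY dbo.EmployeeBlockPolicy
ADD BLOCK PREDICATE dbo.RLSPredicate(EmployeeName)
ON dbo.Employee AFTER INSERT
WITH (STATE = ON);
```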

Let’s take an example :

-- Create a database named "VirendraRLS_Test"
CREATE DATABASE VirendraRLS_Test;
GO

USE VirendraRLS_Test;
GO

-- Create users (without logins, for this demo)
CREATE USER Viren WITHOUT LOGIN;
CREATE USER Yaduvanshi WITHOUT LOGIN;
GO

-- Create a table with sample data and grant SELECT to the new users
CREATE TABLE dbo.Employee
([EmployeeName] VARCHAR(25), [BasicSalary] MONEY);

INSERT INTO dbo.Employee
VALUES ('Viren', 1000), ('Yaduvanshi', 1200), ('viren', 1450);

GRANT SELECT ON dbo.Employee TO Viren;
GRANT SELECT ON dbo.Employee TO Yaduvanshi;
GO

-- Create a filter predicate function as below
CREATE FUNCTION dbo.RLSPredicate (@EmployeeName AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS RLSPredicateResult
       WHERE @EmployeeName = USER_NAME();
GO

-- Create a security policy (name is arbitrary) that adds the filter predicate
CREATE SECURITY POLICY dbo.RLSPolicy
ADD FILTER PREDICATE dbo.RLSPredicate(EmployeeName)
ON dbo.Employee WITH (STATE = ON);
GO

-- Test: the user sees only the rows matching their own user name
EXECUTE ('SELECT * FROM dbo.Employee') AS USER = 'Yaduvanshi';

As per Microsoft, the company is planning to put one of its main products, MS SQL Server, on Linux for the first time.

Here are the complete details.


Waiting enthusiastically for the same!