Archive for the ‘Database Administrator’ Category

Across various organisations, and even within a single organisation where multiple development teams work on different projects, I have personally observed teams following ambiguous naming conventions. It is one of the best development practices that whenever we create objects such as stored procedures, functions, views, or tables, a proper prefix should be in place to identify the objects. Take stored procedures as an example: many developers start a stored procedure name with ‘SP_’. That prefix marks the name as a system stored procedure, so SQL Server will first try to find the procedure in the system databases, which can cause a performance bottleneck. To avoid this, and to manage user-created objects of this type in general, we can enforce an object naming convention policy at the server level as well as the database level, depending on requirements. We can manage this using Policy-Based Management (PBM).

Here, I am taking an example of enforcing a naming convention policy for stored procedures at the server level. Let's consider that we want to enforce a policy where every new stored procedure must be created with the prefix ‘USP_’. To do this, here are the detailed steps.

Step 1) Open SSMS, connect to the SQL Server instance, expand Management → Policy Management, right-click on Conditions and select New Condition, naming it “Stored Procedure Naming Conventions”.

Step 2) Define the values as: Name = Stored Procedure Naming Conventions, Facet = Stored Procedure; in the Expression section, select Field = @Name, Operator = LIKE and Value = ‘USP[_]%’, then click OK.
Details are as below.

Now the PBM condition has been created; verify it from the Conditions tab. It will look as below.

Step 3) From Policy Management, right-click on Policies and select New Policy.

Step 4) Set Name = Stored Procedure Naming, Check Condition = the condition created in the previous step (Stored Procedure Naming Conventions); under Against targets keep Every, set Evaluation Mode = On Change: Prevent, Server Restriction = None (default), and click OK.

We have created the policy for stored procedure names.

Step 5) Enable the policy.

We have now created a policy that restricts the stored procedure name prefix. As per the policy, every stored procedure name must start with USP_; any other prefix will be rejected. In the example below I have used the name CustomerList, and PBM blocks the create.

Now if I change the procedure name to USP_CustomerList, it will be created.
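The same enforcement can be reproduced from a query window. A minimal sketch (the procedure bodies are placeholders assumed for illustration):

```sql
-- Violates the 'USP[_]%' condition: with the policy enabled,
-- On Change: Prevent rolls this CREATE back with a policy-violation error.
CREATE PROCEDURE dbo.CustomerList
AS
    SELECT 1 AS Result;
GO

-- Complies with the policy, so the create succeeds.
CREATE PROCEDURE dbo.USP_CustomerList
AS
    SELECT 1 AS Result;
GO
```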

Thanks, and keep reading.

As we know, a COMMIT operation saves the changes made in a transaction to the database, while a ROLLBACK undoes those changes. It is generally observed that COMMIT is a faster process than ROLLBACK.
Now the question is: how is COMMIT faster?
First, we need to understand what happens in the database when we issue a COMMIT or ROLLBACK.
When we run a transaction, the data is written to the database and all affected pages are marked as DIRTY pages, meaning three operations are performed here:

  1. The database is changed.
  2. Log has been written
  3. Rollback is written.

But when we COMMIT, the dirty pages will simply be written to disk and the rollback (undo) information is discarded; in the case of a ROLLBACK, the rollback of every changed page must be written to the log file again and the changes removed from the database, because before a COMMIT or ROLLBACK all the changes are already sitting in the database. The COMMIT command simply removes any recourse to undoing the changes, while ROLLBACK has to execute everything in reverse.
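A quick way to see this asymmetry is a transaction that touches many rows: the commit returns almost immediately, while the rollback must undo each logged change. A sketch (the database and table names are hypothetical):

```sql
USE VirendraTest;  -- hypothetical database
GO
BEGIN TRANSACTION;
    -- Dirties many pages and writes both redo and undo log records.
    UPDATE dbo.Employee SET BasicSalary = BasicSalary * 1.10;
-- A COMMIT here would just harden the log and release the undo information;
-- ROLLBACK instead replays every logged change in reverse.
ROLLBACK TRANSACTION;
```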
Happy Reading! Please comment your thoughts on this.

Checkpoint Operation

Posted: November 10, 2016 by Virendra Yaduvanshi in Database Administrator

A checkpoint operation flushes dirty data pages from the buffer cache of the current database to disk. This minimizes the active portion of the log that must be processed during a full recovery of the database. During a full recovery, the following types of actions are performed:

  • The log records of modifications not flushed to disk before the system stopped are rolled forward.
  • All modifications associated with incomplete transactions, such as transactions for which there is no COMMIT or ROLLBACK log record, are rolled back.

A checkpoint performs the following processes in the database:

  • Writes a record to the log file, marking the start of the checkpoint.
  • Stores information recorded for the checkpoint in a chain of checkpoint log records.
  • One piece of information recorded in the checkpoint is the log sequence number (LSN) of the first log record that must be present for a successful database-wide rollback. This LSN is called the Minimum Recovery LSN, and it is the minimum of the:
    • LSN of the start of the checkpoint.
    • LSN of the start of the oldest active transaction.

The checkpoint records also contain a list of all the active transactions that have modified the database.

  • If the database uses the simple recovery model, marks for reuse the space that precedes the Minimum Recovery LSN.
  • Writes all dirty log and data pages to disk.
  • Writes a record marking the end of the checkpoint to the log file.
  • Writes the LSN of the start of this chain to the database boot page.

Activities That Cause a Checkpoint

Checkpoints occur in the following situations:

  • A CHECKPOINT statement is explicitly executed. A checkpoint occurs in the current database for the connection.
  • A minimally logged operation is performed in the database; for example, a bulk-copy operation is performed on a database that is using the Bulk-Logged recovery model.
  • Database files have been added or removed by using ALTER DATABASE.
  • An instance of SQL Server is stopped by a SHUTDOWN statement or by stopping the SQL Server (MSSQLSERVER) service. Either action causes a checkpoint in each database in the instance of SQL Server.
  • An instance of SQL Server periodically generates automatic checkpoints in each database to reduce the time that the instance would take to recover the database.
  • A database backup is taken.
  • An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
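Beyond the automatic triggers above, a checkpoint can also be issued manually; the optional argument requests an upper bound, in seconds, on how long the checkpoint should take. A small sketch (the database name is hypothetical):

```sql
USE VirendraTest;  -- CHECKPOINT acts on the current database
GO
-- Flush dirty pages now, letting SQL Server choose the I/O rate.
CHECKPOINT;
-- Ask SQL Server to try to complete the checkpoint within 10 seconds.
CHECKPOINT 10;
```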

As a DBA, it is a TOP PRIORITY to make sure that all database backups will be usable in case of a disaster, and to know how to recover a database from one.
In real time, in every SQL Server environment, we regularly take database backups to the various available backup devices. From SQL Server 2005 onwards, Enterprise Edition offers the option of mirrored backups. Mirroring a backup media set increases backup reliability by reducing the impact of backup-device failure. Such failures can become very serious because backups are the last line of defence against data loss. In the current era, database sizes are increasing drastically, and as a result the failure of a single backup device or medium can make a backup non-restorable. Mirroring the backup media increases the reliability of backups by providing redundancy.

As per the example below, here I am taking the backup at two locations.
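A minimal sketch of such a mirrored backup (the database name and file paths are hypothetical):

```sql
-- Write the same media set to two devices; if one medium is later
-- damaged, the restore can run from the surviving mirror.
BACKUP DATABASE VirendraTest
TO DISK = 'D:\Backup\VirendraTest_Full.bak'
MIRROR TO DISK = 'E:\MirrorBackup\VirendraTest_Full.bak'
WITH FORMAT;
```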

The point to be noted here is that the number of media families and mirrors is defined when the media set is created by a BACKUP DATABASE statement that specifies the WITH FORMAT keyword.

Keep reading

Here, I am writing down some of the best practices related to index creation on SQL Server database tables.

  • Periodically, run the Index Wizard or Database Engine Tuning Advisor against current Profiler traces to identify potentially missing indexes.
  • Check and remove unused indexes.
  • Creating indexes is not automatically a recipe for optimized performance. First check the existing indexes and create a new one only where needed, instead of creating redundant indexes.
  • As a rule of thumb, every table should have at least a clustered index. Generally, but not always, the clustered index should be on a column that monotonically increases — such as an identity column, or some other column where the value is increasing — and is unique. In many cases, the primary key is the ideal column for a clustered index.
  • As we can have only one clustered index per table, take extra time to carefully consider how it will be used. Consider the type of queries that will be used against the table, and make an efficient judgement to which query is the most critical, and if this query will benefit from having a clustered index.
  • If a column in a table is not at least 95% unique, then most likely the query optimizer will not use a non-clustered index based on that column. Because of this, you generally don't want to add non-clustered indexes to columns that aren't at least 95% unique.
  • Keep the width of indexes as narrow as possible. This reduces the size of the index and reduces the number of disk I/O reads required to read the index, boosting performance.
  • If possible, avoid adding a clustered index to a GUID column. GUIDs take up 16 bytes of storage, more than an identity column, which makes the index larger, which increases I/O reads, which can hurt performance.
  • Try to avoid indexing a timestamp column; it is an extra burden on I/O.
  • Indexes should be considered on all columns that are frequently accessed by the JOIN, WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.
  • Don’t automatically add indexes on a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table.
  • When creating indexes, try to make them unique indexes if at all possible. SQL Server can often search through a unique index faster than a non-unique index because in a unique index each row is unique, and once the needed record is found, SQL Server doesn't have to look any further.
  • If we are regularly using joins between two or more tables in our queries, performance will be optimized if each of the joined columns has an appropriate index.
  • Don't automatically accept the default value of 100 for the fill factor of your indexes. It may or may not best meet your needs. A high fill factor is good for seldom-changed data, but heavily modified data needs a lower fill factor to reduce page splitting.
  • Don't over-index OLTP tables, as every index increases the time it takes to perform INSERTs, UPDATEs, and DELETEs. There is a fine line between having the ideal number of indexes (for SELECTs) and the ideal number to minimize the overhead that indexes incur during data modifications.
  • If it is known that an application is performing the same query over and over on the same table, consider creating a non-clustered covering index on the table. A covering index, which is a form of composite index, includes all of the columns referenced in the SELECT, JOIN, and WHERE clauses of a query.
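As an illustration of the last point, a covering index for a hypothetical query that filters customers by country and returns only their name and city might look like this (the table and column names are assumptions):

```sql
-- Covers: SELECT CustomerName, City FROM dbo.Customer WHERE Country = @Country
-- The WHERE column is the index key; the selected columns ride along at the
-- leaf level via INCLUDE, so the query never has to touch the base table.
CREATE NONCLUSTERED INDEX IX_Customer_Country_Covering
    ON dbo.Customer (Country)
    INCLUDE (CustomerName, City);
```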

Happy Reading! Please comment!


SQL Server 2016 has introduced Row-Level Security (RLS), an enhanced security feature that enables control over access to the rows in a table. For general security purposes there are also features like Always Encrypted, Dynamic Data Masking, and enhancements to Transparent Data Encryption, but RLS allows us to easily control which users can access which data, with complete transparency to the application. This enables us to easily restrict data based on the user identity or security context. Row-Level Security in SQL Server 2016 helps us maintain a consistent data access policy and reduce the risk of accidental data leakage.
In essence, Row-Level Security (RLS) provides security at the row level within the database layer, instead of at the application layer. RLS can be implemented using a function and the new security policy feature, without even changing application code.

Before implementing RLS, we should first know a few new terms that are needed to learn and understand this feature.

Security Predicate: This is not a new object but an inline table-valued function (inline TVF) which contains the logic for filtering the rows.

Security Policy: This is a new object which can be created, altered, and dropped (CREATE, ALTER, and DROP). It may be considered a container of predicates which can be applied to tables; one policy can contain security predicates for many tables. A policy can be in an ON or OFF state.

RLS supports two types of security predicates.

  • Filter predicates silently filter the rows available to read operations (SELECT, UPDATE, and DELETE).
  • Block predicates explicitly block write operations (AFTER INSERT, AFTER UPDATE, BEFORE UPDATE, BEFORE DELETE) that violate the predicate.

Let's take an example:

-- Create a database named "VirendraRLS_Test"
CREATE DATABASE VirendraRLS_Test;
GO
USE VirendraRLS_Test;
GO

-- Create users (without logins, for this demo)
CREATE USER Viren WITHOUT LOGIN;
CREATE USER Yaduvanshi WITHOUT LOGIN;
GO

-- Create a table with sample data and grant SELECT to the new users
CREATE TABLE dbo.Employee
([EmployeeName] VARCHAR(25), [BasicSalary] MONEY);

INSERT INTO dbo.Employee
VALUES ('Viren', 1000), ('Yaduvanshi', 1200), ('viren', 1450);

GRANT SELECT ON dbo.Employee TO Viren;
GRANT SELECT ON dbo.Employee TO Yaduvanshi;
GO

-- Create a filter predicate function as below
CREATE FUNCTION dbo.RLSPredicate (@EmployeeName AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS RLSPredicateResult WHERE @EmployeeName = USER_NAME();
GO

-- Add the filter predicate to the table via a security policy
-- (the policy name RLSPolicy is chosen here for illustration)
CREATE SECURITY POLICY RLSPolicy
ADD FILTER PREDICATE dbo.RLSPredicate(EmployeeName)
ON dbo.Employee WITH (STATE = ON);
GO

-- Each user now sees only the rows where EmployeeName matches USER_NAME()
EXECUTE ('SELECT * FROM dbo.Employee') AS USER = 'Yaduvanshi';

As per Microsoft, MS is planning to put one of its main products, MS SQL Server, on Linux for the first time.

Here are the complete details.

Waiting enthusiastically for same !

The buffer pool extension was introduced in SQL Server 2014. It provides seamless integration of a nonvolatile storage extension to the Database Engine buffer pool to significantly improve I/O throughput. As we know, the primary purpose of a SQL Server database is to store and retrieve data. In this operation, data is commonly read from disk into memory; once the data is changed, the pages are marked as dirty, and the dirty pages are later written to disk and flagged as clean. Clean pages may be flushed from memory when the data cache, known as the buffer pool, comes under pressure; in this situation the data is evicted from memory, and SQL Server has to read it from disk again the next time a user runs a query that requires it.
There is no problem while the data size is smaller than memory, but once the data size exceeds memory, this starts creating issues. To address this, SQL Server 2014 introduced Buffer Pool Extensions (BPE) for the case where we do not have enough memory. With Buffer Pool Extensions, SQL Server uses disk space to store clean buffers, that is, unmodified data pages that no longer fit in RAM.
The buffer pool extension feature extends the buffer pool cache with nonvolatile storage (usually SSD). Because of this extension, the buffer pool can accommodate a larger database working set, which forces the paging of I/Os between RAM and the SSDs. This effectively offloads small random I/Os from mechanical disks to SSDs. Because of the lower latency and better random I/O performance of SSDs, the buffer pool extension significantly improves I/O throughput.

SSD storage is used as an extension to the memory subsystem rather than the disk storage subsystem. That is, the buffer pool extension file allows the buffer pool manager to use both DRAM and NAND-Flash memory to maintain a much larger buffer pool of lukewarm pages in nonvolatile random access memory backed by SSDs. This creates a multilevel caching hierarchy with level 1 (L1) as the DRAM and level 2 (L2) as the buffer pool extension file on the SSD. Only clean pages are written to the L2 cache, which helps maintain data safety. The buffer manager handles the movement of clean pages between the L1 and L2 caches.

The following illustration provides a high-level architectural overview of the buffer pool relative to other SQL Server components.

When enabled, the buffer pool extension specifies the size and file path of the buffer pool caching file on the SSD. This file is a contiguous extent of storage on the SSD and is statically configured during startup of the instance of SQL Server. Alterations to the file configuration parameters can only be done when the buffer pool extension feature is disabled. When the buffer pool extension is disabled, all related configuration settings are removed from the registry. The buffer pool extension file is deleted upon shutdown of the instance of SQL Server.
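The current state of the extension can be inspected through a DMV (this assumes SQL Server 2014 or later):

```sql
-- Shows whether BPE is enabled, the extension file path, and its current size.
SELECT [path], state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;
```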

Best Practices

  • The buffer pool extension size can be up to 32 times the value of max_server_memory.
  • Keep the ratio between the size of the physical memory (max_server_memory) and the size of the buffer pool extension at 1:16 or less. A lower ratio in the range of 1:4 to 1:8 may be optimal.


  • The BPE feature is available for 64-bit SQL Server only.
  • It is available with SQL Server 2014 Standard, Business Intelligence, and Enterprise editions only.
  • If BPE is enabled and you have to modify the size of the file used by BPE on the nonvolatile disk, you first need to disable BPE, set the new size, and then enable BPE again. If the new size is smaller than the previous one, SQL Server must be restarted for the change to take effect on the nonvolatile disk.

Implementing SSD Buffer Pool Extension

-- Enabling BPE as 50 GB
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
  (FILENAME = 'F:\SSD_Data\VirendraTest.BPE', SIZE = 50 GB);

-- Disable BPE
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION OFF;

-- To see the page state (Dirty/Clean/BPE) per database
SELECT
  (CASE WHEN ([is_modified] = 1 AND ([is_in_bpool_extension] IS NULL OR [is_in_bpool_extension] = 0)) THEN N'Dirty'
        WHEN ([is_modified] = 0 AND ([is_in_bpool_extension] IS NULL OR [is_in_bpool_extension] = 0)) THEN N'Clean'
        WHEN ([is_modified] = 0 AND [is_in_bpool_extension] = 1) THEN N'BPE' END) AS N'Page State',
  (CASE WHEN ([database_id] = 32767) THEN N'Resource Database' ELSE DB_NAME([database_id]) END) AS N'Database Name',
  COUNT(1) AS N'Page Count'
FROM sys.dm_os_buffer_descriptors
GROUP BY [database_id], [is_modified], [is_in_bpool_extension]
ORDER BY [database_id], [is_modified], [is_in_bpool_extension];

Happy Reading! Please comment!