Archive for the ‘Database Administrator’ Category

Hello!

I am excited to announce a free webinar on "SQL Server Optimization and Performance Tuning".

Please save the date: June 20th, 2020, at 6:00 PM IST / 7:30 AM CST / 8:30 AM EST.

Register: https://forms.gle/41iGy4UbLEHeMgDi9

Meeting link: meet.google.com/tpu-stbh-ryh

Happy Learning!

Temporary object caching in SQL Server was introduced in SQL Server 2005. This functionality caches temporary objects (temp tables, table variables and TVFs) across repeated calls of modules such as stored procedures, triggers and UDFs.

In short, when a stored procedure execution ends, SQL Server truncates (with a few exceptions) and renames the temp table, keeping only one IAM page and one data page. This structure is reused by subsequent calls instead of allocating new pages from scratch when the object is created again.

If the temp object is smaller than 8 MB, the truncation happens immediately after module execution ends; for larger temp objects, SQL Server performs a "deferred drop" and immediately returns control to the application.

The caching mechanism works for temp tables created using CREATE TABLE or SELECT INTO statements. Caching is not possible when there is explicit DDL on a temp table after it is created (such as ALTER TABLE #table ADD CONSTRAINT, or CREATE STATISTICS on table columns), or when there is a named constraint on the temp table. Temporary tables are also not cached if they are created in dynamic SQL or an ad-hoc batch.
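As a quick illustration, here is a hedged sketch (the procedure, table and column names are hypothetical): the first procedure keeps its temp table cacheable, while the named constraint in the second one is enough to disable caching.

-- usp_CachedTemp: plain CREATE TABLE, so the temp table can be cached
CREATE PROCEDURE usp_CachedTemp
AS
BEGIN
    CREATE TABLE #Orders (OrderId INT, Amount MONEY);
    INSERT INTO #Orders VALUES (1, 100);
END
GO

-- usp_NotCachedTemp: the named constraint (PK_Orders) disables caching
CREATE PROCEDURE usp_NotCachedTemp
AS
BEGIN
    CREATE TABLE #Orders (OrderId INT CONSTRAINT PK_Orders PRIMARY KEY, Amount MONEY);
    INSERT INTO #Orders VALUES (1, 100);
END
GO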

Statistics created using an explicit CREATE STATISTICS statement are not linked to a cached temporary object. Auto-Created statistics are linked to a cached temp object. UPDATE STATISTICS does not prevent temp table caching.

We can track the temp table by using SELECT OBJECT_ID('tempdb.dbo.#temp'); this shows that the temp table's object_id never changes across executions – an internal process renames the temp table to a hexadecimal form at the end of the stored procedure. This happens even if we explicitly drop the table.
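Here is a hedged sketch of that check (the procedure and temp table names are just examples); if the temp table is being cached, the object_id returned by the second execution matches the first one.

CREATE PROCEDURE usp_TempCacheDemo
AS
BEGIN
    CREATE TABLE #temp (Id INT);
    -- Return the object_id of the temp table created in this execution
    SELECT OBJECT_ID('tempdb.dbo.#temp') AS TempObjectId;
END
GO

EXEC usp_TempCacheDemo;   -- first call creates the cached structure
EXEC usp_TempCacheDemo;   -- second call reuses it and returns the same TempObjectId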

If a stored procedure is executed concurrently, multiple separate cached objects may be created in tempdb. There is a cached temp object per execution context.


For a Database Administrator, managing log file growth is a consequential task, and I believe every DBA has faced this issue on a frequent basis. Depending on the coding pattern and business logic implementation, there can be any number of causes for log file growth. A DBA typically wants to be notified when a defined threshold for file growth is breached: a system-generated alert should be triggered and the related team members should be notified via email.

To do this, we can create an alert in SSMS with the following steps:

Step 1) Right-click on "Alerts" under the SQL Server Agent section and select "New Alert".

Step 2) On the General page, define the parameters as below:

a) Define a Name as per your convenience; I have used LogFileSize5GB, because I want an alert when the log file grows beyond 5 GB.
b) Select Type as "SQL Server performance condition alert".
c) Set the parameters in the "Performance condition alert definition" section as:
Object = Databases
Counter = Log File(s) Size (KB), alert if the counter rises above 5242880 KB (= 5 GB)
Instance = the database whose log file growth needs to be monitored; in this case I have selected my test DB – VirendraTest

Step 3) On the Response page, define the parameters as below.
Note: We need to create operator(s) specifying to whom we want to send the emails.

Step 4) On the Options page, define the parameters as below.

Step 5) Click OK. The alert is now configured, and whenever the log file size grows beyond 5 GB, the respective team members will be notified.
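For reference, the same alert can also be scripted with the SQL Server Agent stored procedures in msdb. This is only a hedged sketch: the operator name, email address and database name (VirendraTest) are examples, Database Mail and a mail profile for SQL Server Agent are assumed to be configured, and on a named instance the performance object is typically prefixed with MSSQL$InstanceName: instead of SQLServer:.

USE msdb;
GO

-- Operator who will receive the email notification (example values)
EXEC dbo.sp_add_operator
     @name          = N'DBA Team',
     @enabled       = 1,
     @email_address = N'dba-team@yourcompany.com';

-- Performance condition alert: Log File(s) Size (KB) above 5242880 KB (5 GB)
EXEC dbo.sp_add_alert
     @name                  = N'LogFileSize5GB',
     @enabled               = 1,
     @performance_condition = N'SQLServer:Databases|Log File(s) Size (KB)|VirendraTest|>|5242880';

-- Send the alert to the operator by email
EXEC dbo.sp_add_notification
     @alert_name          = N'LogFileSize5GB',
     @operator_name       = N'DBA Team',
     @notification_method = 1;   -- 1 = email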

Today, my dev team was looking for a solution where they wanted to access a development server by an alternate name instead of the server's original name, because the application's connection string used the server name and the team did not want to change the application configuration file. SQL Server Configuration Manager has a feature named "Alias" – it is simply an alternative name given to the server. An alias can also make a server easier for users to remember: instead of a complicated name like DESKTOP-0P6BOHT (my laptop name, for example), we can simplify it to something like Virendra. The same approach can be applied in the case of a server movement/migration from one server to another. Once an alias has been created and all relevant systems have been updated to reference the alias, renaming or moving the server becomes a much less tedious process: only the alias needs to be updated to reference the new server name, which can save a huge amount of time.

To configure an alias, I am using my PC, whose name is "DESKTOP-0P6BOHT".

Step 1) Open SQL Server Configuration Manager and select "Aliases" under SQL Native Client Configuration.

Step 2) A new dialog box, "Alias – New", will open.


Set the Alias Name as per your convenience (for example, Virendra), the SQL Server port number (here the default, 1433), and the Server name as per your system name (here, my PC name).

Click Apply/OK.

Great, now we should be able to connect to the SQL Server instance using the newly created alias. To check this, launch SQL Server Management Studio, enter the alias name as the server name, and try to connect.
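For a quick check from the command line as well (a hedged sketch, assuming Windows authentication and the Virendra alias created above):

sqlcmd -S Virendra -E -Q "SELECT @@SERVERNAME AS ConnectedServer;"

Note that @@SERVERNAME still reports the original machine name (DESKTOP-0P6BOHT in my case); the alias only changes the name the client uses to reach the instance.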


Good news, you are connected to your server. This approach works as-is if you are testing on the same machine, but if you want to access the server remotely by its alias, you also need to add the alias name to your organization's DNS server.


I believe every SQL Server DBA or developer has faced the scenario where they have to copy only SQL Server database objects, without data. To do this, a DBA/developer commonly scripts out the source database using the Generate Scripts wizard and runs that script on the target instance. I have seen that very few people are aware of a feature named "DAC Package" (data-tier application package), which is available from SQL Server 2008 R2 onwards. Using a DAC package, we can take a backup of all objects without data and restore it to any other instance. Here are the step-by-step details.

Step 1) Connect to the SQL Server instance using SSMS. From Object Explorer, right-click on the database → Tasks → Extract Data-tier Application…
This opens the Extract Data-tier Application wizard. Click Next, and the Set Properties page will open.


Step 2) Set the DAC package file location and click Next. Check the Validation and Summary page, then click Next.

Step 3) Click Finish; the DAC package will be created at the specified location.

Now you can restore this DAC package wherever you want. The steps are as below.
Step 4) Using SSMS, connect to the SQL instance where you want to restore this DAC package. Right-click on Databases → Deploy Data-tier Application…; the Deploy wizard will open. Click Next.

Step 5) Browse to the DAC package and click Next. Change the database name as required, click Next, review the summary, and click Finish.

That's all – the new database is restored without data.
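As a side note, the same extract/deploy can be done from the command line with SqlPackage.exe (part of SQL Server Data Tools / DacFx). This is only a hedged sketch; the server names, file paths and database names are examples:

REM Extract a schema-only DAC package from the source database
SqlPackage /Action:Extract /SourceServerName:DESKTOP-0P6BOHT /SourceDatabaseName:VirendraTest /TargetFile:"C:\DAC\VirendraTest.dacpac"

REM Deploy the DAC package to the target instance as a new database
SqlPackage /Action:Publish /SourceFile:"C:\DAC\VirendraTest.dacpac" /TargetServerName:TargetServer /TargetDatabaseName:VirendraTest_Copy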

Enjoy!

 

I have personally observed, across various organisations (or even within one organisation where multiple development teams work on projects), that development teams may follow ambiguous naming conventions. It is one of the best development practices that, when we create objects such as stored procedures, functions, views or tables, proper prefixing should be in place to identify the objects. For example, suppose we have to create a stored procedure: many developers start the stored procedure name with 'SP_', which makes it look like a system stored procedure, so SQL Server first tries to find it in the system databases, and this can become a performance bottleneck. To avoid this, and to manage these types of user-created objects, we can enforce an object naming convention policy at the server level as well as the database level, depending on requirements. We can manage this using Policy-Based Management (PBM).

Here, I am taking an example of enforcing a naming convention policy for stored procedures at the server level. Let's consider that we want to enforce a policy where all new stored procedures must be created with the prefix 'USP_'. To do this, here are the detailed steps.

Step 1) Open SSMS, connect to the SQL Server instance, expand Management → Policy Management, right-click on Conditions, and select New Condition to create the condition "Stored Procedure Naming Conventions".

Step 2) Define the values as: Name = Stored Procedure Naming Convention, Facet = Stored Procedure; in the Expression section, select @Name as the Field, LIKE as the Operator and 'USP[_]%' as the Value, then click OK.
Details are as below.

Now the PBM condition has been created; verify it from the Conditions node, where it will look as below.

Step 3) Under Policy Management, right-click on Policies and select New Policy.

Step 4) Set Name = Stored Procedure Naming, Check Condition = the condition created in the previous steps (Stored Procedure Naming Conventions), keep "Every" under Against targets, Evaluation Mode = On Change: Prevent, Server Restriction = None (default), and click OK.

We have created the policy for stored procedure names.

Step 5) Enable the policy.

We have created a policy to restrict the stored procedure name prefix. As per the policy, every stored procedure name must now start with USP_; other prefixes will not be accepted. In the example below I have used the SP name CustomerList, and PBM enforces the policy by rejecting it.

Now if I change the SP name to USP_CustomerList, it will be created.
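A hedged sketch of that behaviour (CustomerList is just an example name):

-- Violates the 'USP[_]%' condition, so the On Change: Prevent policy
-- raises an error and rolls the CREATE PROCEDURE back
CREATE PROCEDURE dbo.CustomerList
AS
    SELECT 1 AS Dummy;
GO

-- Matches the condition, so it is created successfully
CREATE PROCEDURE dbo.USP_CustomerList
AS
    SELECT 1 AS Dummy;
GO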

Thanks, and keep reading.

As we know, the COMMIT operation saves the changes made in a transaction to the database, while ROLLBACK undoes those changes. It is generally observed that COMMIT is a faster operation than ROLLBACK.
Now the question is: why is COMMIT faster?
First, we need to understand what occurs in the database when we issue a COMMIT or ROLLBACK.
When we run a transaction, the data is written to the database and the affected pages are marked as DIRTY pages; three things happen here:

  1. The database is changed.
  2. The log is written.
  3. The rollback (undo) information is written.

When we COMMIT, the dirty pages will simply be written to disk in due course and the rollback information is discarded. In the case of a ROLLBACK, all the changed pages have to be undone, and that undo activity is written to the log file again, because before a COMMIT or ROLLBACK the changes are already stored in the database. The COMMIT command simply removes any recourse to undoing the changes, while ROLLBACK has to execute everything in reverse.
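To see this in practice, here is a hedged sketch; dbo.BigTable is a hypothetical, reasonably large table used only to make the timing difference visible.

SET STATISTICS TIME ON;

BEGIN TRAN;
    UPDATE dbo.BigTable SET Amount = Amount + 1;   -- large modification
COMMIT;      -- normally returns almost immediately

BEGIN TRAN;
    UPDATE dbo.BigTable SET Amount = Amount + 1;
ROLLBACK;    -- must undo every change, so it usually takes roughly
             -- as long as the update itself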
Happy reading! Please comment your thoughts on this.

Checkpoint Operation

Posted: November 10, 2016 by Virendra Yaduvanshi in Database Administrator

The checkpoint operation flushes dirty data pages from the buffer cache of the current database to disk. This minimizes the active portion of the log that must be processed during a full recovery of the database. During a full recovery, the following types of actions are performed:

  • The log records of modifications not flushed to disk before the system stopped are rolled forward.
  • All modifications associated with incomplete transactions, such as transactions for which there is no COMMIT or ROLLBACK log record, are rolled back.

A checkpoint performs the following processes in the database:

  • Writes a record to the log file, marking the start of the checkpoint.
  • Stores information recorded for the checkpoint in a chain of checkpoint log records. One piece of information recorded in the checkpoint is the log sequence number (LSN) of the first log record that must be present for a successful database-wide rollback. This LSN is called the Minimum Recovery LSN, and it is the minimum of:
    • the LSN of the start of the checkpoint.
    • the LSN of the start of the oldest active transaction.
    The checkpoint records also contain a list of all the active transactions that have modified the database.
  • If the database uses the simple recovery model, marks for reuse the space that precedes the Minimum Recovery LSN.
  • Writes all dirty log and data pages to disk.
  • Writes a record marking the end of the checkpoint to the log file.
  • Writes the LSN of the start of this chain to the database boot page.

Activities That Cause a Checkpoint

Checkpoints occur in the following situations:

  • A CHECKPOINT statement is explicitly executed (see the sketch after this list). A checkpoint occurs in the current database for the connection.
  • A minimally logged operation is performed in the database; for example, a bulk-copy operation is performed on a database that is using the Bulk-Logged recovery model.
  • Database files have been added or removed by using ALTER DATABASE.
  • An instance of SQL Server is stopped by a SHUTDOWN statement or by stopping the SQL Server (MSSQLSERVER) service. Either action causes a checkpoint in each database in the instance of SQL Server.
  • An instance of SQL Server periodically generates automatic checkpoints in each database to reduce the time that the instance would take to recover the database.
  • A database backup is taken.
  • An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
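For reference, the first case in the list above (an explicit CHECKPOINT statement) is a single T-SQL command. This is only a hedged sketch, and VirendraTest is just the test database name used earlier in these posts.

USE VirendraTest;
GO
CHECKPOINT;      -- checkpoint the current database
GO
CHECKPOINT 10;   -- optionally request a target duration of about 10 seconds
GO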

As a DBA, it is a top priority to make sure that all database backups are usable in case of a disaster, and to know how to recover a database from one.
In real-world SQL Server environments we regularly take database backups on the various available backup devices. From SQL Server 2005 onwards, Enterprise Edition includes the option of a mirrored backup. Mirroring a backup media set increases backup reliability by reducing the impact of backup-device failure. Such failures can become very serious, because backups are the last line of defence against data loss. In the current era database sizes are increasing drastically, and as a result the failure of a backup device or media can make a backup non-restorable. Mirroring the backup media increases the reliability of backups by providing redundancy.

In the example below, I am taking the backup at two locations:
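Here is a hedged sketch of such a mirrored backup; the database name and file paths are examples only.

BACKUP DATABASE VirendraTest
TO DISK = N'D:\Backup\VirendraTest_Full.bak'            -- first copy
MIRROR TO DISK = N'E:\Backup\VirendraTest_Full.bak'     -- mirrored copy
WITH FORMAT,                                            -- required when creating a new mirrored media set
     NAME = N'VirendraTest mirrored full backup';
GO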


A point to note here is that the number of media families and mirrors is defined when the media set is created, by a BACKUP DATABASE statement that specifies the WITH FORMAT keyword.

Keep reading!