Archive for the ‘Database Administrator’ Category

SQL query performance is a much-debated topic between developers and the wider user community. Users always want a fast response when retrieving data, and developers put forth their best efforts to deliver that data in the minimum time span; however, there is no straightforward definition of best performance. What counts as good or bad query performance is sometimes debatable, but if we follow best practices during development, a good developer can provide the best query response to users and avoid such discussions. There are multiple ways to improve SQL query performance, falling under various categories: rewriting the SQL query, creating and using indexes, proper management of statistics, and other best practices. Here are some top tips in this regard.

Avoid Correlated Subqueries

Subqueries are a very powerful and useful feature of the SQL standard. Subqueries can be categorized as either correlated or uncorrelated. A correlated subquery is one that depends on the outer query and must be processed once for each outer row. These subqueries can be very inefficient and should be avoided. Instead, we can load the intermediate result into a temp table and join it with the outer query, or use another set-based rewrite.
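As a sketch of such a rewrite (the table and column names here are hypothetical), the correlated form can usually be expressed as a join against a pre-aggregated derived table:

```sql
-- Correlated form: the subquery runs once per row of Orders
SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o
WHERE o.Amount > (SELECT AVG(i.Amount)
                  FROM dbo.Orders AS i
                  WHERE i.CustomerID = o.CustomerID);

-- Set-based rewrite: aggregate once, then join
SELECT o.OrderID, o.CustomerID
FROM dbo.Orders AS o
INNER JOIN (SELECT CustomerID, AVG(Amount) AS AvgAmount
            FROM dbo.Orders
            GROUP BY CustomerID) AS a
        ON a.CustomerID = o.CustomerID
WHERE o.Amount > a.AvgAmount;
```

Both queries return the same rows, but the rewritten form lets the optimizer aggregate the table once instead of re-running the inner query per outer row.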

Eliminate Cursors from the Query

Try to remove cursors from the query and use a set-based query instead; a set-based query is more efficient than a cursor-based one. A good SQL programmer must develop the mental discipline to explore set-based possibilities thoroughly before falling back on the intuitive procedural solution. If there is a genuine need for a cursor, then avoid dynamic cursors, as they tend to limit the choice of plans available to the query optimizer. For example, a dynamic cursor limits the optimizer to using nested loop joins.
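To illustrate the difference (with a hypothetical Products table), the row-by-row cursor loop below collapses into a single set-based statement:

```sql
-- Cursor-based, row-by-row price update: avoid this pattern
DECLARE @ProductID int;
DECLARE price_cur CURSOR STATIC FOR
    SELECT ProductID FROM dbo.Products WHERE Discontinued = 0;
OPEN price_cur;
FETCH NEXT FROM price_cur INTO @ProductID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Products SET Price = Price * 1.10
    WHERE ProductID = @ProductID;
    FETCH NEXT FROM price_cur INTO @ProductID;
END
CLOSE price_cur;
DEALLOCATE price_cur;

-- Equivalent set-based statement: one pass, one plan
UPDATE dbo.Products
SET Price = Price * 1.10
WHERE Discontinued = 0;
```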

Avoid Multiple Joins in a Single Query

Try to avoid writing a SQL query with many joins that mixes outer joins, CROSS APPLY, OUTER APPLY, and other complex subqueries. It reduces the choices available to the optimizer when deciding the join order and join type. Sometimes the optimizer is forced to use nested loop joins, irrespective of the performance consequences, for queries with excessively complex CROSS APPLY operators or subqueries.
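One way to give the optimizer simpler units of work (illustrative names only) is to materialize an intermediate result in a temp table and join against it:

```sql
-- Stage the complex intermediate result first
SELECT c.CustomerID, SUM(o.Amount) AS TotalAmount
INTO #CustomerTotals
FROM dbo.Customers AS c
LEFT JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
GROUP BY c.CustomerID;

-- The final query now needs only one simple join
SELECT c.CustomerName, t.TotalAmount
FROM #CustomerTotals AS t
INNER JOIN dbo.Customers AS c ON c.CustomerID = t.CustomerID
WHERE t.TotalAmount > 1000;

DROP TABLE #CustomerTotals;
```

Each statement is simple enough for the optimizer to estimate accurately, at the cost of writing the intermediate rows to tempdb.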

Avoid Use of Non-correlated Scalar Sub Query

We can rewrite our query to run a non-correlated scalar subquery as a separate statement, store its output in a variable, and refer to that variable in the main query or a later part of the batch, instead of embedding the subquery in the main query. This gives the optimizer better options and may help it produce accurate cardinality estimates along with a better plan.
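A minimal sketch of this rewrite (the table name is hypothetical):

```sql
-- Instead of: SELECT ... WHERE Amount > (SELECT AVG(Amount) FROM dbo.Orders)
DECLARE @AvgAmount decimal(18, 2);

-- Evaluate the scalar subquery once, on its own
SELECT @AvgAmount = AVG(Amount) FROM dbo.Orders;

-- The main query now has a simple variable predicate
SELECT OrderID, Amount
FROM dbo.Orders
WHERE Amount > @AvgAmount;
```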

Avoid Multi-statement Table Valued Functions (TVFs)

Multi-statement TVFs are more costly than inline TVFs. SQL Server expands inline TVFs into the main query, as it does views, but it evaluates multi-statement TVFs in a separate context from the main query and materializes their results into temporary work tables. The separate context and the work table are what make multi-statement TVFs costly.
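For contrast, here are the two forms side by side (hypothetical schema); the inline version is a single RETURN of a SELECT, while the multi-statement version fills a table variable:

```sql
-- Inline TVF: expanded into the calling query like a view
CREATE FUNCTION dbo.fn_OrdersByCustomer_Inline (@CustomerID int)
RETURNS TABLE
AS
RETURN (SELECT OrderID, Amount
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID);
GO

-- Multi-statement TVF: evaluated separately, results materialized
CREATE FUNCTION dbo.fn_OrdersByCustomer_Multi (@CustomerID int)
RETURNS @Result TABLE (OrderID int, Amount decimal(18, 2))
AS
BEGIN
    INSERT INTO @Result (OrderID, Amount)
    SELECT OrderID, Amount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END
GO
```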

Creation and Use of Indexes

Many DBAs and developers create indexes expecting them to magically reduce data retrieval time, but indexes also have a reverse effect on DML operations, which may degrade query performance. Given this trade-off, indexing is a challenging task, but done well it improves SQL query performance and gives us the best query response time.

Index on Highly Selective Columns

Selectivity defines the percentage of qualifying rows in the table (qualifying rows / total rows). If the ratio of qualifying rows to total rows is low, the index is highly selective and is most useful. A non-clustered index is most useful if the ratio is around 5% or less, meaning the index can eliminate 95% of the rows from consideration. If an index returns more than 5% of the rows in a table, it probably will not be used; either a different index will be chosen or created, or the table will be scanned. Here is a query that lists missing-index suggestions, including the columns involved.
SELECT mid.statement AS [database.schema.table],
mic.column_id, mic.column_name, mic.column_usage,
migs.user_seeks, migs.user_scans,
migs.last_user_seek, migs.avg_total_user_cost,
migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns(mid.index_handle) AS mic
INNER JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS migs ON mig.index_group_handle = migs.group_handle
ORDER BY migs.user_seeks DESC, migs.user_scans DESC
GO

Column order in an Index

The order, or position, of a column in an index also plays a vital role in SQL query performance. An index can help only if the query's criteria match the leftmost columns of the index key. As a best practice, the most selective columns should be placed leftmost in the key of a non-clustered index. The query above also reports column positioning.
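A sketch with hypothetical columns, placing the more selective column first in the key:

```sql
-- Email is nearly unique (highly selective); Status has only a few distinct values
CREATE NONCLUSTERED INDEX IX_Customers_Email_Status
ON dbo.Customers (Email, Status);

-- A filter on the leading column can seek on this index...
SELECT CustomerID FROM dbo.Customers WHERE Email = 'a@example.com';

-- ...but a filter on Status alone cannot seek on it
SELECT CustomerID FROM dbo.Customers WHERE Status = 'Active';
```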

Drop Unused Indexes

Dropping unused indexes can speed up data modifications without affecting data retrieval. We also need a strategy for batch processes that run infrequently and rely on certain indexes: creating those indexes just before the batch process runs and dropping them when it is done helps reduce the ongoing overhead on the database. Here is a query to find unused indexes.

SELECT '[' + DB_NAME() + '].[' + su.[name] + '].[' + o.[name] + ']' AS [statement], i.[name] AS [index_name],
ddius.[user_seeks] + ddius.[user_scans] + ddius.[user_lookups] AS [user_reads], ddius.[user_updates] AS [user_writes],
SUM(SP.rows) AS [total_rows]
FROM sys.dm_db_index_usage_stats ddius
INNER JOIN sys.indexes i ON ddius.[object_id] = i.[object_id] AND i.[index_id] = ddius.[index_id]
INNER JOIN sys.partitions SP ON ddius.[object_id] = SP.[object_id] AND SP.[index_id] = ddius.[index_id]
INNER JOIN sys.objects o ON ddius.[object_id] = o.[object_id]
INNER JOIN sys.sysusers su ON o.[schema_id] = su.[uid]
WHERE ddius.[database_id] = DB_ID() -- current database only
AND OBJECTPROPERTY(ddius.[object_id], 'IsUserTable') = 1
AND ddius.[index_id] > 0
GROUP BY su.[name], o.[name], i.[name], ddius.[user_seeks] + ddius.[user_scans] + ddius.[user_lookups], ddius.[user_updates]
HAVING ddius.[user_seeks] + ddius.[user_scans] + ddius.[user_lookups] = 0
ORDER BY ddius.[user_updates] DESC, su.[name], o.[name], i.[name]

Statistics Creation and Updates

We have to take care of statistics creation and regular updates for computed columns and for multi-column predicates referenced in queries. The query optimizer uses statistics about the distribution of values in one or more columns of a table to estimate the cardinality, or number of rows, in the query result. These cardinality estimates enable the query optimizer to create a high-quality query plan.
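For example (hypothetical table), multi-column statistics can be created and refreshed like this:

```sql
-- Multi-column statistics on a column pair the optimizer cannot derive
-- from the automatically created single-column statistics
CREATE STATISTICS st_Orders_Customer_Status
ON dbo.Orders (CustomerID, Status);

-- Refresh with a full scan after large data changes
UPDATE STATISTICS dbo.Orders st_Orders_Customer_Status WITH FULLSCAN;
```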

Happy reading! I'd like to hear your valuable comments…

Suppose your environment has many SQL users and, as a DBA, you have to grant a specific user permission to truncate a table only. The user has no SELECT/DELETE/INSERT/UPDATE permission on the table.

In this scenario we have to grant ALTER permission to that user.

USE [VirendraTest] -- VirendraTest is the database name here
GO

GRANT ALTER ON [dbo].[TestTable] TO [Virendra] -- Virendra is a user
GO
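A quick sketch to verify the behavior (assuming the user above exists in the database):

```sql
-- Impersonate the user to confirm the granted permission works
EXECUTE AS USER = 'Virendra';

TRUNCATE TABLE [dbo].[TestTable];  -- succeeds: TRUNCATE requires ALTER on the table
-- A SELECT here would fail, since no SELECT permission was granted

REVERT;
```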

 

Hi guys, here I am starting a How To series; I hope it will be helpful for SQL lovers.

It's a daily routine to review system reports and check whether server health, DB status, job status, disk space, etc. meet the defined standard. I expect you have also worked with or managed an SSRS report server, and have received queries or complaints from internal or external clients stating that reports are slow, reports are not being generated, the report server is not working, reports are taking a long time to generate, and so on.
Here is a query that shows where the problem lies: whether it is related to data retrieval, to data processing, or to rendering. The query below shows all the details.

USE ReportServer -- use the configured report server DB name
GO

SELECT EL.ReportID AS 'Report ID',
CT.Name AS 'Report Name',
CT.Path AS 'Report Path',
CASE
WHEN EL.RequestType = 0 THEN 'Interactive'
WHEN EL.RequestType = 1 THEN 'Subscription'
WHEN EL.RequestType = 2 THEN 'Refresh Cache'
END AS 'Request Type',
EL.Format AS 'Report Format',
EL.TimeStart,
EL.TimeEnd,
DATEDIFF(ss, EL.TimeStart, EL.TimeEnd) AS 'TotalDuration(Sec)',
(EL.TimeDataRetrieval / 1000.00) AS 'Data Retrieval Time (Sec)',
(EL.TimeProcessing / 1000.00) AS 'Processing Time(Sec)',
(EL.TimeRendering / 1000.00) AS 'Rendering Time(Sec)',
CASE
WHEN EL.Source = 1 THEN 'Live'
WHEN EL.Source = 2 THEN 'Cache'
WHEN EL.Source = 3 THEN 'Snapshot'
WHEN EL.Source = 4 THEN 'History'
WHEN EL.Source = 5 THEN 'Ad hoc (Report Builder)'
WHEN EL.Source = 6 THEN 'Session'
WHEN EL.Source = 7 THEN 'Report Definition Customization Extension (RDCE)'
END AS 'Source',
EL.Status,
EL.ByteCount / 1024.00 AS 'Size(KB)',
EL.[RowCount] AS 'Number of Records'
FROM ExecutionLog EL
INNER JOIN [Catalog] CT ON CT.ItemID = EL.ReportID
ORDER BY EL.TimeStart DESC

Use it and please share your comments, views, and findings. Happy reading!

It's very common for a database administrator to have two logins on a database box: a Windows-authenticated login and a SQL sysadmin login. A DBA usually connects to the DB server from their own workstation (desktop/laptop) via SQL Server Management Studio (SSMS). Sometimes we have to connect to the server from a different workstation, and in that situation we have to use our own Windows credentials instead of the local workstation's. To connect to SQL Server through SSMS with a different login, we use the "Run as" feature. There are many ways to do this; a few of them follow.

1) Run As Different User:

Locate SSMS.exe, press and hold the Shift key, right-click the executable, and select the "Run as different user" option.
or
Click the Start button, navigate to SSMS, then press and hold the Shift key and right-click on SSMS.
It will look like the screenshot below.

It will then prompt for a user name and password; enter them, and it will connect to the database server with the given credentials.

2) Run As from the Command Prompt:

Just open the command prompt (Start -> Run -> CMD).

At the command prompt, type:

Runas /user:VirendraRnD\Virendra "C:\Program Files\Microsoft SQL Server\120\Tools\Binn\ManagementStudio\Ssms.exe"
or
Runas /user:VirendraRnD\Virendra SSMS.EXE

It will ask for the given user's password.
Note: The SSMS.EXE path may vary with your installation or SQL Server version. Use RUNAS /NETONLY if the credentials specified are for remote access only.

3) Run As from the RUN Command:

Step 1: Go to Start -> Run.
Step 2: Type the command in the text box and press Enter; this launches a command prompt that asks for the user's password:

Happy reading! Please suggest other options.

Hi guys, today while doing some R&D with SQL Server 2016 CTP2, I found a new feature in the "SQL Server Import and Export Wizard". While importing a CSV file, I noticed new options on the Advanced tab: New, Delete, and Suggest Types.

The detailed step-by-step screenshots are below.

Step 1 – Start the "SQL Server Import and Export Wizard"; it shows the welcome screen.

Click Next.

Step 2 – Select "Flat File Source" as the data source, then browse to the CSV file.

Step 3 – Click Columns; it displays the details of all columns.

Step 4 – Click Advanced; it shows the advanced details, with three new buttons, marked in red below.

Step 5 – Clicking the New button adds new column options at the currently selected column.

The remaining steps are the same as before.


Please share your findings and observations.

 

Hi guys, it was a very pleasant surprise this morning: when I opened my mailbox, I found a subscribed mail from the SQL Server Blog on technet.com. It says Mr. Satya Nadella, CEO of Microsoft, announced SQL Server 2016, an intelligent platform for a mobile-first, cloud-first world. The next major release of Microsoft's flagship database and analytics platform provides breakthrough performance for mission-critical applications and deeper insights on data across on-premises and cloud. Top capabilities of the release include:

Always Encrypted – a new capability that protects data at rest and in motion
Stretch Database – new technology that lets you dynamically stretch your warm and cold transactional data to Microsoft Azure
Row-level Security and Dynamic Data Masking
Built-in Advanced Analytics, PolyBase and Mobile BI
In-database analytics – enhancements to Microsoft's industry-leading in-memory technologies for real-time analytics on top of breakthrough transactional performance
…and many more.
Always Encrypted

Data security is top of mind, especially for mission-critical applications, and SQL Server has been the enterprise database with the fewest security vulnerabilities six years running. To help customers with data security and compliance when using SQL Server on-premises or in the cloud, SQL Server 2016 introduces Always Encrypted. Always Encrypted, based on technology from Microsoft Research, protects data at rest and in motion. With Always Encrypted, SQL Server can perform operations on encrypted data, and best of all, the encryption key resides with the application in the customer's trusted environment. Encryption and decryption of data happen transparently inside the application, which minimizes the changes that have to be made to existing applications.


 

Stretch Database

As announced in the Ignite keynote, we can gain the benefits of hyper-scale cloud in the box with new hybrid scenarios, including Stretch Database. As core transactional tables grow in size, we may need to archive historical data to lower cost and maintain fast performance. This unique technology allows us to dynamically stretch our warm and cold transactional data to Microsoft Azure, so our operational data is always at hand, no matter the size, while we benefit from the low cost of Microsoft Azure. We can combine Always Encrypted with Stretch Database to extend our data in a more secure manner for greater peace of mind.


Real-time Operational Analytics & In-Memory OLTP

Building on Microsoft's industry-leading and proven in-memory technologies, customers will benefit from the combination of real-time operational analytics with blazing-fast transactional performance – a first among enterprise vendors. In-Memory OLTP, which customers today use for up to 30x faster transactions than disk-based systems, can now be applied to a significantly greater number of applications, with the benefit of increased concurrency. With these enhancements, the in-memory columnstore combined with In-Memory OLTP will deliver queries up to 100x faster alongside real-time operational analytics.


Built-in Advanced Analytics, PolyBase and Mobile BI

For deeper insights into data, SQL Server 2016 expands its scope beyond transaction processing, data warehousing, and business intelligence to deliver advanced analytics as an additional workload, using proven technology from Revolution Analytics. It will make advanced analytics more accessible and faster by bringing R processing closer to the data and building advanced analytic capabilities right into SQL Server. Additionally, PolyBase is built into SQL Server, expanding the power to extract value from unstructured and structured data using existing T-SQL skills. With this wave, we can then gain faster insights through rich visualizations on many devices, including mobile applications on Windows, iOS, and Android.

Additional capabilities in SQL Server 2016 include:

  • Additional security enhancements for Row-level Security and Dynamic Data Masking to round out our security investments with Always Encrypted.
  • Improvements to AlwaysOn for more robust availability and disaster recovery with multiple synchronous replicas and secondary load balancing.
  • Native JSON support to offer better performance and support for your many types of data.
  • SQL Server Enterprise Information Management (EIM) tools and Analysis Services get an upgrade in performance, usability and scalability.
  • Faster hybrid backups, high availability and disaster recovery scenarios to backup and restore your on-premises databases to Azure and place your SQL Server AlwaysOn secondaries in Azure.

In addition, there are many more capabilities coming with SQL Server 2016 that deliver mission critical performance, deeper insights on your data and allow you to reap the benefits of hyper-scale cloud.

SQL Server 2016 can capture, transform, and analyze any data, of any size, at any scale, in its native format —using the tools, languages and frameworks in a trusted environment on-premises and in the cloud.

A vulnerability discovered recently could prove to be the final nail in the coffin of SSL (Secure Sockets Layer). The latest vulnerability to hit SSL, called 'Factoring attack on RSA-EXPORT Keys' (FREAK), is claimed to affect as many as 36 percent of all websites trusted by browsers. According to reports, several highly ranked websites, including those of the FBI and the NSA, are vulnerable to the flaw. Microsoft has confirmed that all versions of Windows that implement SSL/TLS are vulnerable to FREAK. This implies that if an enterprise is using the Windows operating system, an attack on the network can force all software that uses the Windows Secure Channel component to use weak encryption over the Internet.

Although widely adopted, SSL has had a chequered history. Conceived as a means of secure transmission between two points, SSL was quickly adopted by the industry to ensure secure data transmission. However, SSL was continually bogged down by vulnerabilities. SSL 2.0 was open to length extension attacks, had no protection for the handshake, and was vulnerable to truncation attacks. SSL 3.0, which replaced version 2.0, also had several flaws. In October 2014, researchers from Google identified a flaw in the design of SSL 3.0 that exposed it to a padding attack. Called POODLE (an acronym for Padding Oracle On Downgraded Legacy Encryption), the attack required only 256 SSL requests to disclose one byte of encrypted information. SSL was also at the receiving end of the Heartbleed bug. Given the large number of flaws in SSL, the PCI SSC (Payment Card Industry Security Standards Council) recently issued a special bulletin calling for imminent amendments to the PA-DSS (Payment Application Data Security Standard) and the PCI DSS (Payment Card Industry Data Security Standard). The bulletin warned payment card industry members that SSL could no longer be relied upon for data protection.

To overcome the shortcomings of SSL, a newer protocol called Transport Layer Security (TLS) has come to the fore. Based on an open standard, it is not only more extensible but also assured of future support. TLS is also scalable enough to secure connections with clients that have only SSL. The bottom line is that enterprise IT practitioners can no longer bank on SSL 3.0 to ensure the privacy and protection of their data. It is time to ensure that your websites and browsers move away from SSL to more modern, state-of-the-art security protocols like TLS. Meanwhile, to counter FREAK, OpenSSL has published a patch, and other vendors are in the midst of issuing patches to plug the vulnerability.

Source : ITNEXT

Nowadays it's common for database sizes of 100-150 GB and more to be considered normal, and many databases are crossing the terabyte range. In common practice a DBA, or sometimes a developer, starts the backup as one process that takes a long time; as the database continues to grow, this backup time increases accordingly. To reduce backup time, it's always suggested to back up to disk instead of tape during off hours. In my personal opinion, always back up to SAN disk first and then archive to tape. Disk capacity matters because it is directly proportional to cost, and to reduce backup size we can use compression. As we know, taking a full backup is a very I/O-intensive process, since every data page in the database must be read and then written to another file. In my observation, with multiple physical disks we can achieve higher I/O rates and complete the backup more quickly; in addition to writing to multiple files, writing to different physical disks helps alleviate the I/O bottleneck.

Here are steps, How to take Backup on multiple disks / multiple files using SSMS.

Step 1) Right-click on the database -> Tasks -> Back Up

It will show a screen like this:

Step 2) Click Add in the Destination section and set multiple files on multiple physical disks by clicking the Add button repeatedly.

Suppose we have configured 2 disks with 2 files; it will look like this:

Step 3) Click OK; the backup will start against the 2 physical disks.

The sample script for this process will be as below,

BACKUP DATABASE [VirendraTest] TO
DISK = N'C:\VirendraTestOnCdrive.bak',
DISK = N'E:\VirendraTestOnEdrive.bak'
WITH NOFORMAT, NOINIT,
NAME = N'VirendraTest-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 1
GO

 

Please share your findings.

Hi guys, the Payment Card Industry Data Security Standard (PCI DSS) is a preventative standard intended to reduce the risk of payment card fraud and information theft. In my view, after installing SQL Server on a production box, the settings below should be reviewed (many of them are set via sp_configure):

  • The ‘Ad Hoc Distributed Queries’ Server Configuration Option should be set to ‘0’
  • The ‘CLR Enabled’ Server Configuration Option should be set to ‘0’
  • The ‘Cross DB Ownership Chaining’ Server Configuration Option should be set to ‘0’
  • The ‘Database Mail XPs’ Server Configuration Option should be set to ‘0’.
  • The ‘Ole Automation Procedures’ Server Configuration Option should be set to ‘0’
  • The ‘Remote Access’ Server Configuration Option should be set to ‘0’
  • The ‘Remote Admin Connections’ Server Configuration Option should be set to ‘0’.
  • The ‘Scan For Startup Procs’ Server Configuration Option should be set to ‘0’.
  • The ‘SQL Mail XPs’ Server Configuration Option should be set to ‘0’
  • The ‘Trustworthy’ Database Property should be set to Off
  • Unnecessary SQL Server Protocols should be disabled.
  • SQL Server should be configured to use non-standard ports
  • The ‘Hide Instance’ option should be set to ‘Yes’ for production SQL Server instances.
  • Non-clustered SQL Server instances within production environments should be designated as hidden to prevent advertisement by the SQL Server Browser service.

  • The ‘sa’ login account should be disabled or renamed.
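A sketch of applying one of the sp_configure items above; the pattern is the same for the others, though some require 'show advanced options' first:

```sql
-- Disable Ole Automation Procedures via sp_configure
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'Ole Automation Procedures', 0;
RECONFIGURE;

-- Optionally hide the advanced options again
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
```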

Extended Stored Procedures

The following extended stored procedures should not be used by any application or maintenance script.

  • Execute on ‘xp_availablemedia’ to PUBLIC should be revoked.
  • The ‘xp_cmdshell’ option should be set to disabled
  • Execute on ‘xp_dirtree’ to PUBLIC should be revoked.
  • Execute on ‘xp_enumgroups’ to PUBLIC should be revoked.
  • Execute on ‘xp_fixeddrives’ to PUBLIC should be revoked.
  • Execute on ‘xp_servicecontrol’ to PUBLIC should be revoked.
  • Execute on ‘xp_subdirs’ set to PUBLIC should be revoked.
  • Execute on ‘xp_regaddmultistring’ to PUBLIC should be revoked.
  • Execute on ‘xp_regdeletekey’ to PUBLIC should be revoked.
  • Execute on ‘xp_regdeletevalue’ to PUBLIC should be revoked
  • Execute on ‘xp_regenumvalues’ to PUBLIC should be revoked.
  • Execute on ‘xp_regremovemultistring’ to PUBLIC should be revoked
  • Execute on ‘xp_regwrite’ to PUBLIC should be revoked
  • Execute on ‘xp_regread’ to PUBLIC should be revoked.
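A sketch of revoking one of these from PUBLIC; repeat the REVOKE for each procedure listed above:

```sql
USE master;
GO
-- Remove PUBLIC's right to execute xp_dirtree
REVOKE EXECUTE ON [xp_dirtree] FROM PUBLIC;
GO
```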

Authentication and Authorization

  • The ‘Server Authentication’ Property should be set to Windows Authentication mode
  • CONNECT permissions on the ‘guest user’ should be revoked within all SQL Server databases excluding the master, msdb and tempdb
  • Orphaned Users should be dropped from SQL Server database. A database user for which the corresponding SQL Server login is undefined or is incorrectly defined on a server instance cannot log in to the instance and is referred to as orphaned and should be removed.
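A common sketch for finding orphaned users in the current database (SQL-authenticated users whose SID has no matching server login; the `authentication_type` column assumes SQL Server 2012 or later):

```sql
-- List database users with no matching server login
SELECT dp.name AS orphaned_user, dp.type_desc
FROM sys.database_principals AS dp
LEFT JOIN sys.server_principals AS sp ON dp.sid = sp.sid
WHERE sp.sid IS NULL
  AND dp.type = 'S'                 -- SQL users only
  AND dp.authentication_type = 1    -- instance-authenticated
  AND dp.name NOT IN ('guest', 'dbo', 'INFORMATION_SCHEMA', 'sys');
```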

Password Policies

  • The ‘MUST_CHANGE’ Option should be set to ‘ON’ for all SQL authenticated logins
  • The ‘CHECK_EXPIRATION’ Option should be set to ‘ON’ for all SQL authenticated logins within the ‘Sysadmin’ Role
  • The ‘CHECK_POLICY’ Option should be set to ‘ON’ for all SQL authenticated logins
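These options can be applied per login, for example (hypothetical login name; MUST_CHANGE requires CHECK_POLICY and CHECK_EXPIRATION to be ON):

```sql
-- Enforce the Windows password policy and expiration,
-- and force a password change at the next login
ALTER LOGIN [AppUser] WITH PASSWORD = 'N3w$tr0ngP@ssw0rd' MUST_CHANGE,
    CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
```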

Auditing and Logging

  • The 'Maximum number of error log files' setting should be set to greater than or equal to 12.
  • The 'Default Trace Enabled' Server Configuration option should be set to 1.
  • 'Login Auditing' should be set to capture both failed and successful logins.
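One way the login-auditing setting is commonly scripted (a sketch; it can also be changed on the Server Properties > Security page in SSMS, and a service restart is required either way):

```sql
-- AuditLevel: 0 = none, 1 = successful only, 2 = failed only, 3 = both
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'SOFTWARE\Microsoft\MSSQLServer\MSSQLServer',
     N'AuditLevel', REG_DWORD, 3;
```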

 

Thanks for Reading, Keep smiling, keep learning

Just as we use PERFMON (Performance Monitor) to see what is happening on SQL Server in real time, the DMV sys.dm_os_performance_counters can be used to calculate the number of transactions on SQL Server, as follows:

DECLARE @Counter bigint

SELECT @Counter = cntr_value FROM sys.dm_os_performance_counters
WHERE counter_name = 'Transactions/sec'
AND object_name = 'SQLServer:Databases'
AND instance_name = 'VirendraTest'   -- your DB name

-- Wait for 1 second

WAITFOR DELAY '00:00:01'

SELECT cntr_value - @Counter AS [Transactions in the last second]
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Transactions/sec'
AND object_name = 'SQLServer:Databases'
AND instance_name = 'VirendraTest'   -- your DB name