• Using INSERT INTO…SELECT to Bulk Import Data with Minimal Logging

INSERT INTO <target_table> SELECT <columns> FROM <source_table> is an efficient way to transfer a large number of rows from one table, such as a staging table, to another with minimal logging. Minimal logging can improve performance and keeps the amount of transaction log space consumed during the operation to a minimum. Minimal logging for this statement has the following requirements:
• The recovery model of the database is set to simple or bulk-logged.
• The target table is an empty or nonempty heap.
• The target table is not used in replication.
• The TABLOCK hint is specified for the target table.

Rows that are inserted into a heap as the result of an insert action in a MERGE statement may also be minimally logged. Unlike the BULK INSERT statement, which holds a less restrictive Bulk Update (BU) lock, INSERT INTO…SELECT with the TABLOCK hint holds an exclusive (X) lock on the table. This means that rows cannot be inserted using parallel insert operations.
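As a minimal sketch of such a load (the table names are hypothetical, and the database recovery model is assumed to be simple or bulk-logged):

-- Assumed objects: dbo.SalesStaging as the source and dbo.Sales as a heap target,
-- in a database whose recovery model is SIMPLE or BULK_LOGGED
INSERT INTO dbo.Sales WITH (TABLOCK)   -- TABLOCK is required for minimal logging
SELECT SaleId, SaleDate, Amount
FROM dbo.SalesStaging;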

  • Using OPENROWSET and BULK to Bulk Import Data

The OPENROWSET function can accept the following table hints, which provide bulk-load optimizations with the INSERT statement:
• The TABLOCK hint can minimize the number of log records for the insert operation. The recovery model of the database must be set to simple or bulk-logged and the target table cannot be used in replication.
• The IGNORE_CONSTRAINTS hint can temporarily disable FOREIGN KEY and CHECK constraint checking.
• The IGNORE_TRIGGERS hint can temporarily disable trigger execution.
• The KEEPDEFAULTS hint allows the insertion of a table column's default value, if any, instead of NULL when the data record lacks a value for the column.
• The KEEPIDENTITY hint allows the identity values in the imported data file to be used for the identity column in the target table.
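As a rough sketch, a bulk import through OPENROWSET(BULK…) with some of these hints might look like the following (the target table, data file, and format file paths are hypothetical):

-- Hypothetical target table and file paths
INSERT INTO dbo.Sales WITH (TABLOCK, KEEPDEFAULTS, IGNORE_TRIGGERS)
SELECT s.SaleId, s.SaleDate, s.Amount
FROM OPENROWSET(BULK N'D:\Import\sales.dat',
                FORMATFILE = N'D:\Import\sales.fmt') AS s;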

Here are Database Engine features that are no longer available in SQL Server 2012.

• Backup and Restore: BACKUP { DATABASE | LOG } WITH PASSWORD and BACKUP { DATABASE | LOG } WITH MEDIAPASSWORD are discontinued. RESTORE { DATABASE | LOG } WITH [MEDIA]PASSWORD continues to be deprecated. Replacement: None.
• Backup and Restore: RESTORE { DATABASE | LOG } … WITH DBO_ONLY. Replacement: RESTORE { DATABASE | LOG } … WITH RESTRICTED_USER.
• Compatibility level: 80 compatibility level. Replacement: Databases must be set to at least compatibility level 90.
• Configuration options: sp_configure 'user instance timeout' and 'user instances enabled'. Replacement: Use the LocalDB feature. For more information, see SqlLocalDB Utility.
• Connection protocols: Support for the VIA protocol is discontinued. Replacement: Use TCP instead.
• Database objects: WITH APPEND clause on triggers. Replacement: Re-create the whole trigger.
• Database options: sp_dboption. Replacement: ALTER DATABASE.
• Mail: SQL Mail. Replacement: Use Database Mail. For more information, see Database Mail and Use Database Mail Instead of SQL Mail.
• Memory management: 32-bit Address Windowing Extensions (AWE) and 32-bit Hot Add memory support. Replacement: Use a 64-bit operating system.
• Metadata: DATABASEPROPERTY. Replacement: DATABASEPROPERTYEX.
• Programmability: SQL Server Distributed Management Objects (SQL-DMO). Replacement: SQL Server Management Objects (SMO).
• Query hints: FASTFIRSTROW hint. Replacement: OPTION (FAST n).
• Remote servers: The ability for users to create new remote servers by using sp_addserver is discontinued. sp_addserver with the 'local' option remains available. Remote servers preserved during upgrade or created by replication can still be used. Replacement: Replace remote servers by using linked servers.
• Security: sp_dropalias. Replacement: Replace aliases with a combination of user accounts and database roles. Use sp_dropalias to remove aliases in upgraded databases.
• Security: The version parameter of PWDCOMPARE representing a value from a login earlier than SQL Server 2000 is discontinued. Replacement: None.
• Service Broker programmability in SMO: The Microsoft.SqlServer.Management.Smo.Broker.BrokerPriority class no longer implements the Microsoft.SqlServer.Management.Smo.IObjectPermission interface.
• SET options: SET DISABLE_DEF_CNST_CHK. Replacement: None.
• System tables: sys.database_principal_aliases. Replacement: Use roles instead of aliases.
• Transact-SQL: RAISERROR in the format RAISERROR integer 'string' is discontinued. Replacement: Rewrite the statement using the current RAISERROR(…) syntax.
• Transact-SQL syntax: COMPUTE / COMPUTE BY. Replacement: Use ROLLUP.
• Transact-SQL syntax: Use of *= and =*. Replacement: Use ANSI join syntax. For more information, see FROM (Transact-SQL).
• XEvents: databases_data_file_size_changed and databases_log_file_size_changed events. Replacement: database_file_size_change event.
• XEvents: databases_log_file_used_size_changed event. Replacement: database_file_size_change event.
• XEvents: locks_lock_timeouts_greater_than_0 event. Replacement: lock_timeout_greater_than_0 event.
• XEvents: locks_lock_timeouts event. Replacement: lock_timeout event.
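To make two of the Transact-SQL items above concrete, here is a hedged sketch (the table and column names are hypothetical) showing the discontinued syntax next to its replacement:

-- Old-style outer join using *= (discontinued); hypothetical Orders/Customers tables
-- SELECT o.OrderId, c.Name FROM Orders o, Customers c WHERE o.CustomerId *= c.CustomerId

-- Replacement: ANSI join syntax
SELECT o.OrderId, c.Name
FROM Orders AS o
LEFT JOIN Customers AS c ON o.CustomerId = c.CustomerId;

-- Old RAISERROR integer 'string' format (discontinued)
-- RAISERROR 50001 'Something went wrong'

-- Replacement: current RAISERROR(...) syntax
RAISERROR ('Something went wrong', 16, 1);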

A data-tier application is an entity that contains all of the database and instance objects used by an application. It provides a single unit for authoring, deploying, and managing the data-tier objects, instead of having to manage them separately through scripts. A data-tier application (DAC) allows tighter integration of data-tier development and gives administrators an application-level view of resource usage in their systems.

Data-tier applications can be developed using Visual Studio and then deployed to an instance using the Deploy Data-tier Application wizard.

You can also extract a data-tier application for use in Visual Studio, or easily deploy it to a different instance of SQL Server, including a SQL Azure instance.


Let's see a step-by-step example.

A) Extracting a Data-Tier Application

  1. Suppose the first server is named VIRENDRA1_SQL2012 and the database name is VIRENDRATEST.
  2. From SQL Server Management Studio Object Explorer, right-click VIRENDRATEST on VIRENDRA1_SQL2012.
  3. Click Tasks | Extract Data-tier Application.
  4. When the Extract Data-tier Application wizard displays, click Next.
  5. In the Set Properties window, specify C:\VIRENDRATEST.dacpac as the file name under "Save to DAC package file".
  6. If prompted that the file already exists, choose to replace the existing file.
  7. Click Next.
  8. On the Validation and Summary page, click Next.
  9. When the data extraction completes, click Finish.

You have now successfully exported a Database as a Data-Tier Application.

B) Deploying a Data-Tier Application

  1. Suppose the second server is named VIRENDRA2_SQL2012, and the database name will again be VIRENDRATEST.
  2. In SQL Server Management Studio Object Explorer, right-click VIRENDRA2_SQL2012.
  3. Select Deploy Data-tier Application.
  4. When the Introduction page displays, click Next.
  5. In the Select Package window, browse to C:\VIRENDRATEST.dacpac.
  6. Select the data-tier application package you previously exported.
  7. Click Next.
  8. When package validation completes, click Next on the Update Configuration page.
  9. Click Next on the Summary page.
  10. When deployment completes, click Finish.

You have now successfully deployed a Data Tier Application to a SQL Server Instance running on a different server.

Here are some new SSMS 2012 features:

  1. Intellisense enhancement: Enhanced Intellisense now provides partial-match completion, while in previous versions Intellisense would only display the objects that start with the text you typed. For example, typing "TABLE" shows all objects whose names contain "TABLE".

  2. Zoom: At the bottom-left corner of the query window, notice there is a zoom control that displays 100%; change this to 150%. The ability to zoom in the query window lets you read queries more easily, and it works for the results and messages windows too.

  3. Code snippets: Code snippets are an enhancement to the template feature introduced in earlier versions of SQL Server Management Studio. You can also create and insert your own code snippets, or import them from a pre-created snippet library.

Here are some differences between SQL Server 2008/R2 and SQL Server 2012.

1. SQL Server 2008: Exception handling using TRY…CATCH. SQL Server 2012: Exception handling enhanced with the new THROW statement.
2. SQL Server 2008: High-availability features such as log shipping, replication, mirroring, and clustering. SQL Server 2012: New AlwaysOn feature introduced in addition to the 2008 features.
3. SQL Server 2008: Web development and business intelligence features, including Excel PowerPivot. SQL Server 2012: In addition to the SQL Server 2008 capabilities, business intelligence is enhanced further, with Excel PowerPivot adding more drill-through and KPIs.
4. SQL Server 2008: Not supported on Windows Server Core. SQL Server 2012: Supported on Windows Server Core.
5. SQL Server 2008: Columnstore indexes not supported. SQL Server 2012: New columnstore index feature. These are a special type of read-only index designed to be used with data warehouse queries; data is grouped and stored in a flat, compressed column index, greatly reducing I/O and memory utilization on large queries.
6. SQL Server 2008: PowerShell supported. SQL Server 2012: Enhanced PowerShell support.
7. SQL Server 2008: Distributed Replay feature not available. SQL Server 2012: Distributed Replay allows you to capture a workload on a production server and replay it on another machine, so changes in underlying schemas, service packs, or hardware can be tested under production-like conditions.
8. SQL Server 2008: Power View not available among the BI features. SQL Server 2012: Power View is a fairly powerful self-service BI toolkit that allows users to create mash-ups of BI reports from all over the enterprise.
9. SQL Server 2008: EXECUTE … WITH RECOMPILE. SQL Server 2012: EXECUTE enhanced with many options, such as WITH RESULT SETS.
10. SQL Server 2008: The maximum number of concurrent connections is 32,767. SQL Server 2012: Unlimited concurrent connections.
11. SQL Server 2008: Uses 27-bit precision for spatial calculations. SQL Server 2012: Uses 48-bit precision for spatial calculations.
12. SQL Server 2008: TRY_CONVERT() and FORMAT() functions are not available. SQL Server 2012: TRY_CONVERT() and FORMAT() functions are newly included.
13. SQL Server 2008: The ORDER BY clause does not have OFFSET / FETCH options for paging. SQL Server 2012: The ORDER BY clause has OFFSET / FETCH options for paging.
14. SQL Server 2008: Code-named Katmai. SQL Server 2012: Code-named Denali.
15. SQL Server 2008: Audit is an Enterprise-only feature, available only in Enterprise, Evaluation, and Developer editions. SQL Server 2012: Support for server auditing is expanded to include all editions of SQL Server.
16. SQL Server 2008: SEQUENCE is not available. SQL Server 2012: SEQUENCE is included; a sequence is a user-defined object that generates a sequence of numbers.
17. SQL Server 2008: Full-Text Search does not allow us to search and index data stored in extended properties or metadata. SQL Server 2012: Full-Text Search has been enhanced to allow searching and indexing of data stored in extended properties or metadata. Consider a PDF document with properties filled in, such as Name, Type, Folder path, Size, and Date Created: in the newest release, this data can be indexed and searched along with the data in the document itself. The data does have to be exposed to work, but it is possible now.
18. SQL Server 2008: Analysis Services does not have the BI Semantic Model (BISM) concept. SQL Server 2012: Analysis Services includes a new BI Semantic Model (BISM), a three-layer model that includes the data model, business logic, and data access.
19. SQL Server 2008: BCP does not support the -K option. SQL Server 2012: The BCP and SQLCMD utilities are enhanced with the -K option, which allows you to specify read-only access to a secondary replica in an AlwaysOn availability group.
20. SQL Server 2008: sys.dm_exec_query_stats. SQL Server 2012: sys.dm_exec_query_stats adds four columns to help troubleshoot long-running queries: you can use the total_rows, min_rows, max_rows, and last_rows aggregate row-count columns to separate queries that return a large number of rows from problematic queries that may be missing an index or have a bad query plan.
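To make a few of the new Transact-SQL items above concrete, here is a hedged sketch (the table and sequence names are hypothetical) showing THROW, OFFSET/FETCH paging, SEQUENCE, and TRY_CONVERT:

-- THROW: re-raise an error from a CATCH block
BEGIN TRY
    SELECT 1 / 0;   -- force a divide-by-zero error
END TRY
BEGIN CATCH
    THROW;          -- new in SQL Server 2012
END CATCH;

-- OFFSET / FETCH: paging with ORDER BY (hypothetical dbo.Orders table)
SELECT OrderId, OrderDate
FROM dbo.Orders
ORDER BY OrderId
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;

-- SEQUENCE: a user-defined object that generates numbers
CREATE SEQUENCE dbo.OrderNumbers START WITH 1 INCREMENT BY 1;
SELECT NEXT VALUE FOR dbo.OrderNumbers;

-- TRY_CONVERT: returns NULL instead of raising an error when the conversion fails
SELECT TRY_CONVERT(int, 'abc') AS BadValue, TRY_CONVERT(int, '123') AS GoodValue;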

More features will be added to this document very soon.

As DBAs, we have to maintain data dictionaries for all our databases. Here is a script that generates a full view of table structure details.

-- Lists every user table with column data type, length, nullability, PK/FK flags, and description
SELECT a.name        AS [Table Name],
       b.name        AS [Column Name],
       c.name        AS [Data Type],
       b.length      AS [Column Length],
       b.isnullable  AS [Allow Nulls],
       CASE WHEN d.name IS NULL THEN 0 ELSE 1 END AS [Primary Key],
       CASE WHEN e.parent_object_id IS NULL THEN 0 ELSE 1 END AS [Foreign Key],
       CASE WHEN e.parent_object_id IS NULL THEN '-' ELSE g.name END AS [Reference Table],
       CASE WHEN h.value IS NULL THEN '-' ELSE h.value END AS [Description]
FROM sysobjects AS a
JOIN syscolumns AS b ON a.id = b.id
JOIN systypes   AS c ON b.xtype = c.xtype
LEFT JOIN (SELECT so.id, sc.colid, sc.name
           FROM syscolumns sc
           JOIN sysobjects so ON so.id = sc.id
           JOIN sysindexkeys si ON so.id = si.id AND sc.colid = si.colid
           WHERE si.indid = 1) AS d ON a.id = d.id AND b.colid = d.colid
LEFT JOIN sys.foreign_key_columns AS e ON a.id = e.parent_object_id
                                      AND b.colid = e.parent_column_id
LEFT JOIN sys.objects AS g ON e.referenced_object_id = g.object_id
LEFT JOIN sys.extended_properties AS h ON a.id = h.major_id
                                      AND b.colid = h.minor_id
WHERE a.type = 'U'
ORDER BY a.name

We can also get similar, though less detailed, information using:

SELECT * FROM INFORMATION_SCHEMA.COLUMNS


Here are some initial tips for writing efficient, cost-effective queries:

  • When using AND, put the condition least likely to be true first. The database system evaluates conditions from left to right, subject to operator precedence. If you have two or more AND operators in a condition, the one on the left is evaluated first, and only if it is true is the next condition evaluated. You can save the database system work, and hence increase speed, by putting the least likely condition first. For example, if you were looking for all members living in Delhi and born before January 1, 1960, you could write the following query:
    SELECT FirstName, LastName FROM MemberDetails WHERE State = 'Delhi' AND DateOfBirth < '1960-01-01';
    The query would work fine; however, the number of members born before that date is very small, whereas plenty of members live in Delhi. This means that State = 'Delhi' will be true many times, and the database system will go on to check the second condition, DateOfBirth < '1960-01-01'. If you swap the conditions around, the least likely condition (DateOfBirth < '1960-01-01') is evaluated first:
    SELECT FirstName, LastName FROM MemberDetails WHERE DateOfBirth < '1960-01-01' AND State = 'Delhi';
    Because the first condition is mostly going to be false, the second condition will rarely be evaluated, which saves time. It is not a big deal when there are few records, but it is when there are a lot of them.
  • When using OR, put the condition most likely to be true first. Whereas AND needs both sides to be true for the overall condition to be true, OR needs only one side to be true. If the left-hand side is true, there is no need for OR to check the other condition, so you can save time by putting the most likely condition first. Consider the following statement:
    SELECT FirstName, LastName FROM MemberDetails WHERE State = 'Delhi' OR DateOfBirth < '1960-01-01';
    If State = 'Delhi' is true, and it is true more often than DateOfBirth < '1960-01-01', then there is no need for the database system to evaluate the other condition, thus saving time.
  • DISTINCT can be faster than GROUP BY. DISTINCT and GROUP BY often do the same thing: limit results to unique rows. However, DISTINCT is often faster than GROUP BY on some database systems. For example, examine the following GROUP BY:
    SELECT MemberId FROM Orders GROUP BY MemberId;
    The GROUP BY could be rewritten using the DISTINCT keyword:
    SELECT DISTINCT MemberId FROM Orders;

  • Use IN with your subqueries. When you write a query similar to the following, the database system has to get all the results from the subquery to make sure that it returns only one value:
    SELECT FirstName, LastName FROM EMPLOYEE WHERE EMPID = (SELECT EMPID FROM Orders WHERE OrderId = 2);
    If you rewrite the query using the IN operator, the database system only needs to get results until there is a match with the values returned by the subquery; it does not necessarily have to get all the values:
    SELECT FirstName, LastName FROM EMPLOYEE WHERE EMPID IN (SELECT EMPID FROM Orders WHERE OrderId = 2);

  • Avoid using SELECT * FROM. Specifying the columns you need has a few advantages, not all of them about efficiency. First, it makes clear which columns you are actually using. If you use SELECT * and actually use only two out of seven columns, it is hard to guess from the SQL alone which ones you are using; if you write SELECT FirstName, LastName, … then it is quite obvious. From an efficiency standpoint, specifying columns reduces the amount of data that has to pass between the database and the application connecting to it, which is especially important when the database is accessed over a network.
  • Search on integer columns. If you have a choice, and often you don't, search on integer columns. For example, if you are looking for the member whose name is VIRENDRA YADUVANSHI and whose MemberId is 101, it makes sense to search via MemberId because an integer comparison is much faster, as in the sketch below.
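A minimal sketch of the two approaches, assuming a hypothetical MemberDetails table:

-- Faster: lookup on the integer key column
SELECT FirstName, LastName FROM MemberDetails WHERE MemberId = 101;

-- Slower: lookup on character columns
SELECT FirstName, LastName FROM MemberDetails WHERE FirstName = 'VIRENDRA' AND LastName = 'YADUVANSHI';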

Here is the complete list of Deprecated Database Engine Features in SQL Server 2012. This topic describes the deprecated SQL Server Database Engine features that are still available in SQL Server 2012. These features are scheduled to be removed in a future release of SQL Server. Deprecated features should not be used in new applications.
For more, please visit
http://msdn.microsoft.com/en-us/library/ms143729.aspx

It sometimes happens that our DC (the main primary DB instance) goes down for some reason, and the log-shipped secondary server at the DR location has to act as the primary server to keep production running smoothly.

In my example, the primary server is named 'PRIMARY', the secondary server is named 'SECONDARY', and the database used for log shipping is VIRENDRATEST.

Now suppose I would like to make the PRIMARY server act as the secondary and the SECONDARY server act as the primary. Below are the steps to do this.

  1. First, be sure all application hits to the DB server have stopped. Contact the application/production team to stop or disable the services/applications; after their confirmation, go ahead.
  2. Take a full backup of the PRIMARY server's database.
  3. Run the LS-Backup job on the PRIMARY server, then disable it.
  4. Run the LS-Copy job on the SECONDARY server; after it completes, disable it.
  5. Run the LS-Restore job on the SECONDARY server; after it completes, disable it.
  6. Take a log backup on the PRIMARY server using a command such as:
    BACKUP LOG VIRENDRATEST TO DISK = N'D:\LS_BACKUP\VIRENDRATESTLOG.BAK' WITH NORECOVERY
  7. Copy the log backup file from step 6 to the SECONDARY server manually.
  8. On the SECONDARY server, restore that last log backup file with the WITH RECOVERY option (see the sketch after this list).
  9. Now the SECONDARY server's database is up for production and the PRIMARY server's database is in RESTORING mode.
  10. Configure log shipping again, with SECONDARY as the primary and PRIMARY as the secondary.
  11. Run the LS-Backup job on the new primary server.
  12. Refresh both server instances; both should be in sync with the proper status.
  13. Check whether log shipping is working properly.
  14. Inform the production team to start/enable all applications/services.
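As a rough sketch of steps 6 and 8 (the backup path and file name are only examples), the tail-log backup on PRIMARY and the final restore on SECONDARY might look like this:

-- Step 6, run on PRIMARY: tail-log backup that leaves the database in RESTORING state
BACKUP LOG VIRENDRATEST
TO DISK = N'D:\LS_BACKUP\VIRENDRATESTLOG.BAK'
WITH NORECOVERY;

-- Step 8, run on SECONDARY after copying the file: bring the database online
RESTORE LOG VIRENDRATEST
FROM DISK = N'D:\LS_BACKUP\VIRENDRATESTLOG.BAK'
WITH RECOVERY;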

I found a very powerful SSMS add-in called SSMSBoost.
The SSMSBoost add-in is designed to improve productivity when working with Microsoft SQL Server in SQL Server Management Studio. The main goal of the project is to speed up the daily tasks of SQL DBAs and T-SQL developers. Please visit http://www.ssmsboost.com/ for more details, download it, and enjoy!