Posts Tagged ‘Bulk Import’

  • Using INSERT INTO…SELECT to Bulk Import Data with Minimal Logging

INSERT INTO <target_table> SELECT <columns> FROM <source_table> is an efficient way to transfer a large number of rows from one table, such as a staging table, to another with minimal logging. Minimal logging improves performance and consumes the least possible transaction log space during the transaction. Minimal logging for this statement has the following requirements:
• The recovery model of the database is set to simple or bulk-logged.
• The target table is an empty or nonempty heap.
• The target table is not used in replication.
• The TABLOCK hint is specified for the target table.

Rows that are inserted into a heap as the result of an insert action in a MERGE statement may also be minimally logged. Unlike the BULK INSERT statement, which holds a less restrictive bulk update (BU) lock, INSERT INTO…SELECT with the TABLOCK hint holds an exclusive (X) lock on the table. This means that rows cannot be inserted by concurrent sessions in parallel.
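
A minimal sketch of the pattern described above, assuming an illustrative staging table dbo.StagingSales and a heap target dbo.Sales (the table and column names are hypothetical), in a database set to the SIMPLE or BULK_LOGGED recovery model:

```sql
-- The TABLOCK hint on the target table is required for minimal logging;
-- it takes an exclusive (X) lock for the duration of the insert.
INSERT INTO dbo.Sales WITH (TABLOCK)
    (OrderID, CustomerID, Amount)
SELECT OrderID, CustomerID, Amount
FROM dbo.StagingSales;
```

Because of the exclusive lock, this statement should be scheduled when no other session needs to write to dbo.Sales.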

  • Using OPENROWSET and BULK to Bulk Import Data

The OPENROWSET function can accept the following table hints, which provide bulk-load optimizations with the INSERT statement:
• The TABLOCK hint can minimize the number of log records for the insert operation. The recovery model of the database must be set to simple or bulk-logged, and the target table cannot be used in replication.
• The IGNORE_CONSTRAINTS hint can temporarily disable FOREIGN KEY and CHECK constraint checking.
• The IGNORE_TRIGGERS hint can temporarily disable trigger execution.
• The KEEPDEFAULTS hint allows the insertion of a table column’s default value, if any, instead of NULL when the data record lacks a value for the column.
• The KEEPIDENTITY hint allows the identity values in the imported data file to be used for the identity column in the target table.
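
The hints above can be combined in the WITH clause of the INSERT statement when the source is OPENROWSET with the BULK option. A sketch, assuming a hypothetical data file and format file (the paths, file names, and table name are illustrative):

```sql
-- Bulk import from a data file, applying several of the bulk-load hints.
-- KEEPIDENTITY could be added to the WITH list to preserve identity values
-- from the file.
INSERT INTO dbo.Sales
    WITH (IGNORE_CONSTRAINTS, IGNORE_TRIGGERS, KEEPDEFAULTS, TABLOCK)
SELECT *
FROM OPENROWSET(
        BULK 'C:\data\sales.dat',
        FORMATFILE = 'C:\data\sales.fmt') AS src;
```

Note that IGNORE_CONSTRAINTS leaves the affected constraints marked as not trusted, so they may need to be re-validated with ALTER TABLE … WITH CHECK CHECK CONSTRAINT after the load.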