
Here are some best practices to keep in mind when creating indexes on SQL Server database tables.

  • Periodically, run the Index Tuning Wizard or Database Engine Tuning Advisor against current Profiler traces to identify potentially missing indexes.
  • Check for and remove unused indexes.
  • Creating indexes is not an automatic route to better performance. Before creating a new index, check whether an existing one already serves the query, rather than creating redundant indexes.
  • As a rule of thumb, every table should have at least a clustered index. Generally, but not always, the clustered index should be on a column that monotonically increases — such as an identity column, or some other column where the value is increasing — and is unique. In many cases, the primary key is the ideal column for a clustered index.
  • As we can have only one clustered index per table, take extra time to carefully consider how it will be used. Consider the types of queries that will run against the table, judge which query is the most critical, and decide whether that query will benefit from a clustered index.
  • If a column in a table is not at least 95% unique, the query optimizer will most likely not use a non-clustered index based on that column. For this reason, you generally don’t want to add non-clustered indexes to columns that aren’t at least 95% unique.
  • Keep the width of indexes as narrow as possible. This reduces the size of the index and reduces the number of disk I/O reads required to read the index, boosting performance.
  • If possible, avoid adding a clustered index to a GUID column. GUIDs take up 16 bytes of storage, more than an identity column, which makes the index larger, increases I/O reads, and can hurt performance.
  • Try to avoid indexing timestamp (rowversion) columns; the value changes on every update, so maintaining such an index is an extra I/O burden.
  • Indexes should be considered on all columns that are frequently accessed by the JOIN, WHERE, ORDER BY, GROUP BY, TOP, and DISTINCT clauses.
  • Don’t automatically add indexes on a table because it seems like the right thing to do. Only add indexes if you know that they will be used by the queries run against the table.
  • When creating indexes, try to make them unique indexes if at all possible. SQL Server can often search through a unique index faster than a non-unique index because in a unique index each row is unique, and once the needed record is found, SQL Server doesn’t have to look any further.
  • If you regularly join two or more tables in your queries, performance will improve if each of the joined columns has an appropriate index.
  • Don’t automatically accept the default value of 100 for the fill factor of your indexes. It may or may not best meet your needs. A high fill factor is good for seldom-changed data, but frequently modified data needs a lower fill factor to reduce page splitting.
  • Don’t over-index OLTP tables, as every index increases the time it takes to perform INSERTs, UPDATEs, and DELETEs. There is a fine line between having the ideal number of indexes (for SELECTs) and keeping the count low enough to minimize the overhead indexes incur during data modifications.
  • If you know an application performs the same query over and over on the same table, consider creating a non-clustered covering index on that table. A covering index, which is a form of composite index, includes all of the columns referenced in the SELECT, JOIN, and WHERE clauses of the query.
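
As a sketch of the first point above, the missing-index DMVs (`sys.dm_db_missing_index_details` and friends, available since SQL Server 2005) record index suggestions the optimizer generated while compiling queries. Treat the output as hints to review against your workload, not as indexes to create blindly:

```sql
-- Index suggestions recorded by the optimizer since the last service restart.
SELECT  mid.statement                 AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks,              -- how often this index would have helped
        migs.avg_user_impact          -- estimated % cost reduction
FROM    sys.dm_db_missing_index_details      AS mid
JOIN    sys.dm_db_missing_index_groups       AS mig
        ON mig.index_handle = mid.index_handle
JOIN    sys.dm_db_missing_index_group_stats  AS migs
        ON migs.group_handle = mig.index_group_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;
```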
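
To illustrate the point about unused indexes, `sys.dm_db_index_usage_stats` shows how often each index has been read versus maintained. A non-clustered index with no seeks, scans, or lookups since the last restart is a removal candidate, but verify over a full business cycle first, since the counters reset when the service restarts:

```sql
-- Non-clustered indexes in the current database with no recorded reads.
SELECT  OBJECT_NAME(i.object_id) AS table_name,
        i.name                   AS index_name,
        us.user_updates          -- maintenance cost with no read benefit
FROM    sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS us
       ON  us.object_id   = i.object_id
       AND us.index_id    = i.index_id
       AND us.database_id = DB_ID()
WHERE   i.type_desc = 'NONCLUSTERED'
  AND   (us.index_id IS NULL
         OR us.user_seeks + us.user_scans + us.user_lookups = 0);
```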
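
As a minimal sketch of the clustered-index advice, here is a hypothetical `Orders` table (the table and column names are invented for illustration) whose clustered index sits on a narrow, unique, monotonically increasing identity column, so new rows always append at the logical end of the table:

```sql
-- Hypothetical table: the identity primary key gives a narrow, unique,
-- ever-increasing clustered key, which keeps page splits to a minimum.
CREATE TABLE dbo.Orders
(
    OrderID     INT IDENTITY(1,1) NOT NULL,
    CustomerID  INT               NOT NULL,
    OrderDate   DATETIME          NOT NULL,
    TotalAmount DECIMAL(10, 2)    NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);
```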
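
The fill-factor point can be sketched as follows, using a hypothetical index name: for frequently modified data, creating or rebuilding an index with a lower fill factor leaves free space on each leaf page to absorb changes and reduce page splitting.

```sql
-- Leave 20% free space per leaf page on a heavily modified column.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH (FILLFACTOR = 80);

-- The same option applies when rebuilding an existing index:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80);
```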
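
Finally, the covering-index point can be illustrated with a hypothetical query that filters on `CustomerID` and returns `OrderDate` and `TotalAmount`. The key column serves the WHERE clause, and the `INCLUDE` clause copies the SELECT columns into the index leaf level, so the query never has to touch the base table at all:

```sql
-- Query to cover:
--   SELECT OrderDate, TotalAmount
--   FROM dbo.Orders
--   WHERE CustomerID = @CustomerID;
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalAmount);
```

Included columns add to index size without being part of the key, so keep the INCLUDE list to just the columns the query actually returns.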

    Happy Reading! Please comment.
