SQL Server A to Z – Isolation Levels

In SQL Server, isolation levels determine how concurrent transactions interact with one another.  They control the locking behavior of a transaction, including what kinds of locks, if any, are requested on a resource, as well as how long those locks are held.  The isolation level also controls how we see data that is being modified by another process.
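The level is set per session with SET TRANSACTION ISOLATION LEVEL.  Here's a minimal sketch (the dbo.Orders table and its columns are made up for illustration):

```sql
-- Set the isolation level for the current session, then read inside
-- a transaction.  READ COMMITTED is the SQL Server default.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;
    SELECT OrderID, Status
    FROM dbo.Orders
    WHERE CustomerID = 42;
COMMIT TRANSACTION;
```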

Concurrency side effects

Ideally, you want an isolation level that incurs the minimum amount of locking possible, and still gets you committed data in an acceptable timeframe.  With potentially hundreds of processes all accessing the same data, this is no small request.  There are a few potential side effects to concurrency, such as dirty reads, phantom reads, and blocking.

Dirty reads – reads of data that has been modified by another transaction but not yet committed.  If the modifying transaction ends up getting rolled back, the reading transaction has invalid (aka dirty) data.

Phantom reads – when one transaction accesses the same set of data more than once and between those reads another transaction modifies that same data.  This can cause records to appear/disappear from one read to another.

Blocking – when one transaction holds a lock on a resource and another transaction tries to obtain an incompatible lock on the same resource.  For example, if both transactions try to lock a record exclusively.  The second transaction will be blocked by the first, and will have to wait until the first lock is released.
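To see blocking in action, you can run something like the following in two separate sessions (again, the dbo.Orders table is a made-up example):

```sql
-- Session 1: take an exclusive lock and hold it by leaving the
-- transaction open.
BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderID = 1;
-- no COMMIT yet

-- Session 2: this update requests an incompatible lock on the same
-- row, so it waits until Session 1 commits or rolls back.
UPDATE dbo.Orders SET Status = 'Cancelled' WHERE OrderID = 1;
```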

Isolation Levels

Let’s start with an overview of the isolation levels available in SQL Server.  The first is Read Uncommitted.  Read uncommitted is the lowest isolation level, and the only thing it ensures is that the data you’re reading isn’t physically corrupt.  Read uncommitted transactions do not issue any shared locks on the data being read.  Therefore they do not block other transactions, nor are they blocked, even by other transactions’ exclusive locks.  This means that with read uncommitted isolation you’re susceptible to dirty and phantom reads, but if you have a long-running query, this level is the least likely to cause blocking problems.
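You can request read uncommitted behavior for the whole session, or per table with the NOLOCK hint.  A quick sketch (table name made up):

```sql
-- Session-wide:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Status FROM dbo.Orders;

-- Or per table, which behaves the same way for that one query:
SELECT OrderID, Status FROM dbo.Orders WITH (NOLOCK);
```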

The next isolation level is Read Committed.  Read committed is the default isolation level in SQL Server.  It ensures that the data being read is committed, thereby eliminating the possibility of dirty reads.  However, it does not eliminate phantom reads.  Shared locks are issued on rows being read, so this level can cause blocking for other transactions attempting to modify that data.

Repeatable read isolation eliminates dirty and, to a degree, phantom reads.  Shared locks are placed on all data being read and are maintained until the transaction is committed or rolled back.  This prevents data from disappearing or changing between individual reads.  However another transaction can always insert new records that meet your criteria, and you’ll suddenly have additional rows in your dataset.  You want to be careful to use repeatable read only where necessary, as long-running transactions can end up blocking other processes for quite a while.
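Here’s a sketch of the phantom-read window that repeatable read leaves open (made-up table):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
    -- Shared locks are taken on the rows read and held until the
    -- transaction ends, so these rows can't change underneath us.
    SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = 42;

    -- But another session can still INSERT a new row for
    -- CustomerID 42 here, so the second read may return more rows.
    SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = 42;
COMMIT TRANSACTION;
```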

If you’re familiar with Oracle databases, Snapshot isolation will sound familiar.  Snapshot isolation ensures that all data read by a transaction will be a consistent version of the data as it existed at the start of the transaction.  It does not issue locks on read requests, so it doesn’t block other transactions.  With snapshot isolation, there are no dirty reads and no phantom reads, either.  How does SQL Server do it?  Row versioning.  Row versions of changed data are maintained in tempdb, along with a transaction sequence number.  So make sure you have adequate space in tempdb if you decide snapshot isolation is right for you.
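Snapshot isolation has to be enabled at the database level before any session can use it.  A sketch (the SalesDB database name is made up):

```sql
-- One-time setup per database:
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, in a session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    -- Reads see the committed version of the data as of the start
    -- of the transaction, without taking shared locks.
    SELECT OrderID, Status FROM dbo.Orders;
COMMIT TRANSACTION;
```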

The last, and most restrictive, isolation level is Serializable.  With Serializable isolation, uncommitted data isn’t read (so no dirty reads), data being read cannot be modified by other transactions, and new data cannot be inserted if it would fall inside the dataset of this transaction (so no phantom reads).  SQL Server accomplishes this by issuing range locks on the data being used by the transaction.  Like repeatable read, this level should only be used when absolutely necessary.
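A sketch of serializable preventing phantoms (made-up table and dates):

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

BEGIN TRANSACTION;
    -- Key-range locks cover this predicate, so other sessions can't
    -- insert an order dated in January 2010 until we commit.
    SELECT OrderID
    FROM dbo.Orders
    WHERE OrderDate BETWEEN '2010-01-01' AND '2010-01-31';
COMMIT TRANSACTION;
```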

Further reading

For more information on the isolation levels available in SQL Server check out the following:


Understanding Row-Versioning-Based Isolation Levels

Working with Snapshot Isolation

