1. Basic locking
2. Two-phase locking
3. Types of locks
4. Starvation
5. Deadlock
   5.1. Timeout
   5.2. Avoidance through ordering elements
   5.3. Wait-for graph

1. Basic locking

To address serializability, we add the notion of locking to our system. In particular, each database element has a lock, with the following rules: A transaction may read or write an element only while it holds that element's lock, and at most one transaction may hold the lock on a given element at any time. (We write li(X) for transaction i acquiring the lock on X, ui(X) for releasing it, and ri(X) and wi(X) for reading and writing X.)

Unfortunately, this system alone doesn't even work. Consider the two transactions below.

Transaction 1 (copy A into B): l1(A), r1(A), u1(A), l1(B), w1(B), u1(B)
Transaction 2 (copy B into A): l2(B), r2(B), u2(B), l2(A), w2(A), u2(A)

One possible schedule is the following:

l1(A), l2(B), r1(A), r2(B), u1(A), u2(B), l1(B), l2(A), w1(B), w2(A), u1(B), u2(A)

This transaction has no locking violations: Each transaction operates only on data elements for which it holds a lock, and at no time do two transactions hold locks for the same data element. Let's trace what happens if A starts out at 2 and B at 3:

init:  A = 2, B = 3
r1(A): transaction 1 reads A's value of 2
r2(B): transaction 2 reads B's value of 3
w1(B): transaction 1 writes the 2 it read, so B = 2
w2(A): transaction 2 writes the 3 it read, so A = 3

So A ends up at 3 and B ends up at 2. This outcome wouldn't happen with any serial schedule: If our serial schedule does transaction 1 first, then transaction 2, then both A and B end up at 2; and if it does transaction 2 first, then transaction 1, then they both end up at 3. Since the outcome of our proposed schedule doesn't match any of the possible serial schedules' outcomes, this schedule is not serializable.
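We can confirm this trace mechanically. Below is a small sketch (not from the original text) that simulates the interleaved schedule, with assertions checking that every step obeys the locking rules even though the final state matches no serial outcome.

```python
# Simulate the interleaved schedule. Locks are a simple ownership table;
# each step asserts the locking rules hold, yet the result is A=3, B=2.

db = {"A": 2, "B": 3}
locks = {}    # element -> transaction currently holding its lock
regs = {}     # per-transaction register holding the value last read

def lock(t, x):
    assert x not in locks, f"{x} already locked"
    locks[x] = t

def unlock(t, x):
    assert locks.get(x) == t
    del locks[x]

def read(t, x):
    assert locks.get(x) == t    # may only read while holding the lock
    regs[t] = db[x]

def write(t, x):
    assert locks.get(x) == t    # may only write while holding the lock
    db[x] = regs[t]

# l1(A), l2(B), r1(A), r2(B), u1(A), u2(B), l1(B), l2(A), w1(B), w2(A), u1(B), u2(A)
schedule = [(lock,1,"A"), (lock,2,"B"), (read,1,"A"), (read,2,"B"),
            (unlock,1,"A"), (unlock,2,"B"), (lock,1,"B"), (lock,2,"A"),
            (write,1,"B"), (write,2,"A"), (unlock,1,"B"), (unlock,2,"A")]
for op, t, x in schedule:
    op(t, x)

print(db)   # {'A': 3, 'B': 2}: matches neither serial outcome
```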

Valid locking, therefore, does not ensure a serializable schedule. That's a big problem, which we need to address.

2. Two-phase locking

In two-phase locking, we require that each transaction finish acquiring all its locks before it releases any of them. That is, each transaction can be split into two phases: In the first phase, it accumulates locks without releasing any of them; and in the second phase, it releases locks without acquiring any. Of course, the real operations for the transaction (reads and writes) can happen in either phase, as long as the transaction operates on a data element only during the time that it holds the lock for that element.

Our earlier example violated two-phase locking: Transaction 1 released its lock on A before acquiring a lock on B. However, we can modify the transactions slightly so they do conform to two-phase locking.

Transaction 1 (copy A into B): l1(A), r1(A), l1(B), u1(A), w1(B), u1(B)
Transaction 2 (copy B into A): l2(B), r2(B), l2(A), u2(B), w2(A), u2(A)

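Checking whether a transaction's lock sequence is two-phase is mechanical: once the first unlock appears, no further lock may follow. Here is a small illustrative checker (not from the text) applied to both versions of Transaction 1.

```python
# Verify that a transaction's operation sequence is two-phase: after its
# first unlock, it must never acquire another lock.

def is_two_phase(ops):
    """ops: list of (action, element), action in {'l', 'r', 'w', 'u'}."""
    unlocked = False
    for action, _elem in ops:
        if action == "u":
            unlocked = True
        elif action == "l" and unlocked:
            return False        # a lock after an unlock violates 2PL
    return True

# Transaction 1 as first written: releases A before locking B.
print(is_two_phase([("l","A"), ("r","A"), ("u","A"),
                    ("l","B"), ("w","B"), ("u","B")]))   # False
# Transaction 1 as rewritten: all locks precede the first unlock.
print(is_two_phase([("l","A"), ("r","A"), ("l","B"),
                    ("u","A"), ("w","B"), ("u","B")]))   # True
```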
The claim is that any valid schedule for a set of transactions using two-phase locking must necessarily be conflict-serializable. In fact, we can argue this formally.

Theorem: For a set of transactions using two-phase locking, any schedule conforming to the locking rules must necessarily be conflict-serializable.

Proof: First, let us take our schedule and use it to establish an ordering of transactions: We'll say Transaction i comes before Transaction j if the first unlock by Transaction i in the schedule comes before the first unlock by Transaction j.

Now we'll consider the algorithm for verifying conflict-serializability, where we build a graph containing a vertex for each transaction. You can visualize this graph where we've arranged the vertices in a line according to the transaction ordering of the previous paragraph. Our claim will be that every edge in this graph will be pointing forward in the ordering, and so the graph cannot possibly contain any cycles.

So consider two operations (reads/writes) on the same data element X by different transactions, i and j. Since they involve the same element, these operations may lead to an edge in our graph, and we want to show that this edge must be pointing forward in the ordering — that i must precede j in our ordering on transactions. (An edge may not result from this operation pair, since both may be reads — but we're claiming that the edge will point forward if it exists.)

Suppose i's operation comes first in the schedule. Since i and j must each hold the lock while operating on X, there must be an unlock by i and a lock by j between the two operations. Our schedule then includes this sequence:

oi(X) … ui(X) … lj(X) … oj(X)

Note that i must have its first unlock before or at the ui(X) operation, while j must have its first unlock after the lj(X) operation. Thus, i has its first unlock before j's first unlock, and so it precedes j in our ordering. Any edge from i to j would therefore be pointing forward in the ordering.

Since all edges point forward in the linear ordering, there cannot possibly be any cycle in our graph. Thus the schedule must be conflict-serializable.
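The conflict-serializability test from the proof can be sketched directly: build a graph with an edge i → j whenever an operation of transaction i conflicts with a later operation of transaction j on the same element (at least one of them a write), then look for a cycle. The names below are illustrative.

```python
# Build the precedence graph for a schedule and test it for cycles.
from itertools import combinations

def precedence_edges(schedule):
    """schedule: list of (txn, action, element), action in {'r', 'w'}."""
    edges = set()
    for (ta, aa, xa), (tb, ab, xb) in combinations(schedule, 2):
        # Two ops conflict if different txns touch the same element
        # and at least one operation is a write.
        if ta != tb and xa == xb and (aa == "w" or ab == "w"):
            edges.add((ta, tb))    # earlier op's txn must come first
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visited, on_stack = set(), set()
    def visit(v):
        if v in on_stack:
            return True
        if v in visited:
            return False
        visited.add(v); on_stack.add(v)
        if any(visit(w) for w in graph.get(v, ())):
            return True
        on_stack.discard(v)
        return False
    return any(visit(v) for v in list(graph))

# The non-serializable schedule from section 1: r1(A) r2(B) w1(B) w2(A).
sched = [(1,"r","A"), (2,"r","B"), (1,"w","B"), (2,"w","A")]
print(has_cycle(precedence_edges(sched)))   # True: 1->2 on A, 2->1 on B
```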

3. Types of locks

Most systems use two types of locks: A transaction requests a shared lock when it only wishes to read an element, and it requests an exclusive lock when it also wants the ability to modify an element.

The idea here is that we can allow multiple transactions to hold a shared lock on the same element, since all such transactions are simply reading the same value of the element and so don't conflict with one another; allowing this increases concurrency. But a shared lock prevents another transaction from acquiring an exclusive lock; and an exclusive lock prevents any other transaction from acquiring a lock of either type. This is summarized by the following table.

                     requested
current      shared     exclusive
shared       allow      deny
exclusive    deny       deny

Adding a third type of lock called an update lock is occasionally useful: An update lock starts out as a shared lock, but it carries with it the right to be later upgraded to an exclusive lock. We can allow a transaction to acquire an update lock on an element while others hold shared locks, but we can only allow one such transaction since we can't grant the right to upgrade to two transactions simultaneously.

                     requested
current      shared     update     exclusive
shared       allow      allow      deny
update       deny       deny       deny
exclusive    deny       deny       deny
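The three-lock compatibility table translates directly into a lookup: a request can be granted only if it is compatible with every lock currently held by other transactions. This is an illustrative encoding, not from the text.

```python
# COMPAT[held][requested] is True when the request can be granted
# alongside a lock of the held mode (the table above, verbatim).
COMPAT = {
    "shared":    {"shared": True,  "update": True,  "exclusive": False},
    "update":    {"shared": False, "update": False, "exclusive": False},
    "exclusive": {"shared": False, "update": False, "exclusive": False},
}

def can_grant(held_modes, requested):
    """held_modes: lock modes currently held by *other* transactions."""
    return all(COMPAT[h][requested] for h in held_modes)

print(can_grant(["shared", "shared"], "update"))   # True: one upgrader OK
print(can_grant(["update"], "shared"))             # False: update blocks all
```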

4. Starvation

If we have locks, then we must consider carefully the issue of starvation. In particular, we need a policy that guarantees that no transaction will be waiting on a lock indefinitely.

As an example of this, suppose that transaction Ω requests an exclusive lock on an element, but the request cannot be granted immediately because another transaction A holds a shared lock. Before A releases its lock, suppose another transaction B is granted a shared lock on this same element. And before B releases its lock, another transaction C is granted a shared lock. As long as the DBMS continues granting more shared locks before the earlier holders release theirs, Ω will never be granted its exclusive lock: it will simply starve.

Getting around starvation isn't very difficult, though. We simply need a more refined policy than always granting shared locks whenever possible. One possible policy is a first-come-first-serve policy: When a transaction requests a lock that can't immediately be granted, we grant no other locks on that element until the first request is granted.

An alternative policy is oldest-first: We track the time when each transaction starts. Whenever two transactions desire a lock on the same element, we ensure that the one that started earlier gets served first.
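The first-come-first-served policy can be sketched with a queue per element: once any request is waiting, later requests line up behind it, so the writer in the example above cannot be passed over indefinitely by new readers. The class and names here are illustrative, not part of any real DBMS API.

```python
# A single element's lock with FCFS granting: requests that can't be
# granted wait in arrival order, and no later request may jump the queue.
from collections import deque

class FCFSLock:
    def __init__(self):
        self.holders = []        # (txn, mode) pairs currently granted
        self.queue = deque()     # pending (txn, mode) requests, in order

    def compatible(self, mode):
        if mode == "exclusive":
            return not self.holders
        return all(m == "shared" for _, m in self.holders)

    def request(self, txn, mode):
        # Grant only if compatible AND nobody is already waiting ahead.
        if not self.queue and self.compatible(mode):
            self.holders.append((txn, mode))
            return True
        self.queue.append((txn, mode))
        return False

    def release(self, txn):
        self.holders = [(t, m) for t, m in self.holders if t != txn]
        # Wake queued requests in arrival order, stopping at the first
        # that still can't be granted (this preserves FCFS).
        while self.queue and self.compatible(self.queue[0][1]):
            self.holders.append(self.queue.popleft())

lk = FCFSLock()
lk.request("A", "shared")            # granted immediately
lk.request("W", "exclusive")         # queued behind A's shared lock
print(lk.request("B", "shared"))     # False: must wait behind W
lk.release("A")                      # W is granted; B keeps waiting
print(lk.holders)                    # [('W', 'exclusive')]
```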

5. Deadlock

Another major issue with using locks is the possibility of deadlock: One transaction may obtain a lock on A, then another obtains a lock on B, then the first transaction requests a lock on B and the second requests a lock on A. Neither can be satisfied.

More generally, deadlock occurs when there is any cycle of transactions: One transaction obtains a lock on A and requests a lock on B; another has already obtained a lock on B and would like a lock on C; another has already obtained a lock on C and would like a lock on D; and so on, until we reach a transaction that has already obtained a lock on Z and is requesting a lock on A. None of the transactions will proceed.

Any system involving locks will have to deal with deadlock somehow.

5.1. Timeout

The simplest technique, and the most widely used, is this: whenever any transaction has spent more than a fixed time (say, a minute) waiting on a lock request, we kill the transaction and release its locks.

This approach has the advantages of simplicity and minimal overhead as long as deadlocks do not occur (the more common case). The disadvantages are that it sometimes kills a transaction spuriously, and that when deadlock does occur, there is quite a delay before it is finally recognized.
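The mechanism maps naturally onto lock primitives that support a timeout. Below is a minimal illustration using Python's threading locks; the function name and the roles of the two locks are invented for the example.

```python
# If the second lock can't be obtained within the time limit, the
# "transaction" assumes deadlock, gives up, and releases what it holds.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transaction_with_timeout(first, second, timeout=0.1):
    first.acquire()
    try:
        if not second.acquire(timeout=timeout):
            return "aborted"     # timed out: treat as deadlock, give up
        try:
            return "committed"   # both locks held: do the real work here
        finally:
            second.release()
    finally:
        first.release()

lock_b.acquire()   # simulate some other transaction holding lock B
print(transaction_with_timeout(lock_a, lock_b))   # aborted (B never freed)
lock_b.release()
print(transaction_with_timeout(lock_a, lock_b))   # committed
```

Note the spurious-kill disadvantage is visible here: the first call aborts even though the holder of B might simply have been slow rather than deadlocked.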

5.2. Avoidance through ordering elements

Another approach is to identify an ordering on our database elements and to require each transaction to acquire its locks in this order. Thus, a transaction that will end up desiring locks on both A and B must obtain its lock on A before obtaining a lock on B.

With this rule in place, deadlock simply won't happen: Any cycle that might lead to deadlock would require some transaction to hold a lock on an element later in the ordering while requesting a lock on an earlier element, which the rule forbids.

While simple, this approach is generally impractical for database systems, since it presumes that the DBMS will know before a transaction begins which locks the transaction will require. A developer using a DBMS, then, would need to report this up front before the transaction begins.
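When the lock set is known up front, the technique is a few lines: sort the needed elements by the global order and acquire in that order. This sketch assumes a fixed set of named elements; the function name is illustrative.

```python
# Deadlock avoidance by ordered acquisition: every transaction locks
# elements in sorted-name order, so no cycle of waiting can form.
import threading

element_locks = {name: threading.Lock() for name in ["A", "B", "C"]}

def run_with_ordered_locks(needed, body):
    acquired = []
    try:
        for name in sorted(needed):     # global order: sorted element names
            element_locks[name].acquire()
            acquired.append(name)
        return body()
    finally:
        for name in reversed(acquired):
            element_locks[name].release()

# Both transactions lock A before B, however they were originally written.
print(run_with_ordered_locks({"B", "A"}, lambda: "copied A into B"))
print(run_with_ordered_locks({"A", "B"}, lambda: "copied B into A"))
```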

5.3. Wait-for graph

Here, we maintain a directed graph called a wait-for graph, with a vertex for each transaction. Whenever one transaction requests a lock that can't immediately be fulfilled, we insert a directed edge from that transaction's vertex to the vertex of each transaction holding a lock on that element. If a cycle were to creep into this graph, it would imply deadlock.

In fact, though, we simply won't allow cycles to arise: If a transaction's lock request would add an edge completing a cycle, we immediately abort the requesting transaction rather than allow the cycle to form.
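The cycle check amounts to a reachability query before each edge insertion: adding waiter → holder closes a cycle exactly when holder can already reach waiter. The class below is an illustrative sketch of this bookkeeping.

```python
# A wait-for graph that refuses edges which would complete a cycle;
# the caller aborts the requesting transaction when add_wait fails.
class WaitForGraph:
    def __init__(self):
        self.edges = {}          # txn -> set of txns it waits for

    def _reaches(self, start, target):
        seen, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v == target:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(self.edges.get(v, ()))
        return False

    def add_wait(self, waiter, holder):
        # If holder can already reach waiter, this edge closes a cycle.
        if self._reaches(holder, waiter):
            return False         # caller should abort `waiter`
        self.edges.setdefault(waiter, set()).add(holder)
        return True

    def remove(self, txn):
        # Called when txn commits or aborts: drop all its edges.
        self.edges.pop(txn, None)
        for waits in self.edges.values():
            waits.discard(txn)

g = WaitForGraph()
print(g.add_wait("T1", "T2"))    # True: T1 waits for T2
print(g.add_wait("T2", "T3"))    # True: T2 waits for T3
print(g.add_wait("T3", "T1"))    # False: would close the cycle; abort T3
```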