Concurrent Transactions in DBMS
A transaction in DBMS is a sequence of operations performed as a single logical unit of work. When multiple transactions execute simultaneously, it is called concurrency. Modern database systems support concurrent transactions to improve performance, throughput, and system utilization.
However, concurrency also introduces several problems related to data consistency, which DBMS handles through concurrency control techniques.
What Are Concurrent Transactions?
Concurrent transactions are transactions executed at the same time in a multi-user database environment.
DBMS allows concurrent execution to:
✔ Improve system performance
✔ Reduce waiting time
✔ Utilize CPU and I/O efficiently
✔ Support multiple users simultaneously
Benefits of Concurrent Transactions
- Higher Throughput – More transactions complete in less time.
- Reduced Response Time – Users do not wait for others to finish.
- Better Resource Utilization – CPU and disk remain busy.
- Scalability – Supports large numbers of users and applications.
Problems Due to Concurrent Transactions
Without proper control, concurrency can lead to inconsistency. The main problems are:
1️⃣ Dirty Read
A transaction reads uncommitted data written by another transaction.
Example:
T1 updates balance → T2 reads updated value → T1 rolls back.
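The sequence above can be sketched in a few lines of Python. This is a minimal simulation, not a real DBMS: `db` stands in for committed storage, `undo_log` for T1's rollback information, and there is deliberately no concurrency control, which is exactly why the dirty read happens.

```python
# Simulated dirty read: T2 reads a value T1 wrote but never committed.
db = {"balance": 100}          # "committed" state, visible to everyone
undo_log = {}                  # old values saved so T1 can roll back

# T1 updates the balance but has not committed yet
undo_log["balance"] = db["balance"]
db["balance"] = 500            # uncommitted write, immediately visible

# T2 reads while T1 is still in flight: this is the dirty read
dirty_value = db["balance"]    # T2 sees 500

# T1 aborts and rolls back using its undo log
db["balance"] = undo_log.pop("balance")

print(dirty_value)             # 500 -- a value that was never committed
print(db["balance"])           # 100 -- the only value ever committed
```

T2 is now acting on a balance of 500 that, after the rollback, never officially existed.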
2️⃣ Unrepeatable Read
A transaction reads the same data twice and gets different values because another transaction modified it in between.
3️⃣ Lost Update
Two transactions update the same data; one update overwrites the other.
4️⃣ Phantom Read
A transaction re-executes a query and finds new rows inserted by another transaction.
5️⃣ Inconsistent Retrieval
A transaction reads several related values while another transaction is updating them, so the values it sees never coexisted in any consistent state.
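The lost update anomaly (problem 3 above) is the easiest to demonstrate in code. The sketch below simulates the classic interleaving with a plain dict as the database; the variable names and the fixed interleaving are purely illustrative.

```python
# Simulated lost update: both transactions read, then both write back,
# and the later write silently overwrites the earlier one.
db = {"balance": 100}

# Both transactions read the current balance first (interleaved reads)
t1_read = db["balance"]        # T1 sees 100
t2_read = db["balance"]        # T2 sees 100

# T1 deposits 50 and writes back
db["balance"] = t1_read + 50   # balance is now 150

# T2 withdraws 30 and writes back, using its stale read of 100
db["balance"] = t2_read - 30   # balance is now 70 -- T1's deposit is lost

print(db["balance"])           # 70, although 100 + 50 - 30 should be 120
```

With locking, T2's read would have been blocked until T1 committed, and the final balance would be the correct 120.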
How Does a DBMS Handle Concurrent Transactions?
DBMS uses Concurrency Control Protocols to maintain consistency.
Concurrency Control Mechanisms
1️⃣ Locks (Locking Protocols)
Locks prevent two transactions from accessing the same data simultaneously.
Types:
- Shared Lock (S-lock): For reading
- Exclusive Lock (X-lock): For writing
Protocols:
- Two-Phase Locking (2PL)
- Strict Two-Phase Locking
- Rigorous 2PL
These ensure serializability and recoverability.
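The lock types and the two phases of 2PL can be sketched as follows. `LockManager` and its methods are hypothetical names for illustration; a real lock manager also has wait queues, deadlock detection, and lock upgrades.

```python
# Minimal sketch of S/X locks plus (strict) two-phase locking.
class LockManager:
    def __init__(self):
        self.table = {}  # item -> (mode, set of holding transactions)

    def acquire(self, txn, item, mode):
        held = self.table.get(item)
        if held is None:
            self.table[item] = (mode, {txn})
            return True
        held_mode, holders = held
        # Only S+S is compatible; an X lock conflicts with everything
        if mode == "S" and held_mode == "S":
            holders.add(txn)
            return True
        return False  # a real DBMS would make the caller wait here

    def release_all(self, txn):
        for item in list(self.table):
            mode, holders = self.table[item]
            holders.discard(txn)
            if not holders:
                del self.table[item]

lm = LockManager()

# Growing phase: T1 takes every lock it needs before releasing any
assert lm.acquire("T1", "A", "S")       # shared lock to read A
assert lm.acquire("T1", "B", "X")       # exclusive lock to write B

# T2 conflicts: it can neither write A (S held) nor read B (X held)...
assert not lm.acquire("T2", "A", "X")
assert not lm.acquire("T2", "B", "S")
# ...but a second shared lock on A is compatible
assert lm.acquire("T2", "A", "S")

# Shrinking phase: T1 releases everything at commit (strict 2PL)
lm.release_all("T1")
assert lm.acquire("T2", "B", "S")       # B is free now
```

Holding all locks until commit (the strict variant) is what prevents other transactions from ever seeing T1's uncommitted writes.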
2️⃣ Timestamp Ordering
Each transaction gets a timestamp.
Operations are ordered by timestamps to avoid conflicts.
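A minimal sketch of the basic timestamp-ordering rule: each item tracks the largest read and write timestamps, and an operation arriving "too late" (from an older transaction) aborts. The dict layout and function names are illustrative.

```python
# Basic timestamp ordering: per-item read/write timestamps.
items = {"A": {"rts": 0, "wts": 0}}

def read(ts, item):
    meta = items[item]
    if ts < meta["wts"]:                      # a younger txn already wrote it
        return "abort"
    meta["rts"] = max(meta["rts"], ts)
    return "ok"

def write(ts, item):
    meta = items[item]
    if ts < meta["rts"] or ts < meta["wts"]:  # a younger txn already read/wrote
        return "abort"
    meta["wts"] = ts
    return "ok"

print(read(5, "A"))    # ok    -- T5 reads A, rts becomes 5
print(write(3, "A"))   # abort -- T3 is older than a reader (rts = 5)
print(write(7, "A"))   # ok    -- T7 writes A, wts becomes 7
print(read(6, "A"))    # abort -- T6 is older than the writer (wts = 7)
```

Because every conflict is resolved in timestamp order, the resulting schedule is equivalent to running the transactions serially by timestamp.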
3️⃣ Optimistic Concurrency Control
Assumes conflicts are rare.
Validates transactions before commit.
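The read-validate-write cycle can be sketched like this. The version-per-item scheme is one common way to implement validation; the helper names (`begin`, `occ_read`, `commit`, etc.) are made up for the example.

```python
# Optimistic concurrency control: run without locks, validate at commit.
db = {"A": (100, 0)}               # item -> (value, version)

def begin():
    return {"reads": {}, "writes": {}}

def occ_read(txn, item):
    value, version = db[item]
    txn["reads"][item] = version   # remember the version we saw
    return value

def occ_write(txn, item, value):
    txn["writes"][item] = value    # buffer the write locally

def commit(txn):
    # Validation: every item read must still be at the version we saw
    for item, seen in txn["reads"].items():
        if db[item][1] != seen:
            return False           # conflict -> abort (and typically retry)
    for item, value in txn["writes"].items():
        db[item] = (value, db[item][1] + 1)
    return True

t1, t2 = begin(), begin()
occ_write(t1, "A", occ_read(t1, "A") + 50)   # T1: A += 50
occ_write(t2, "A", occ_read(t2, "A") - 30)   # T2: A -= 30

print(commit(t1))   # True  -- first committer wins, A = 150, version 1
print(commit(t2))   # False -- T2 read version 0, which is now stale
```

Note that this turns the lost-update anomaly into an explicit abort: T2's stale write is rejected rather than silently overwriting T1's.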
4️⃣ Multi-Version Concurrency Control (MVCC)
Keeps multiple versions of data.
Used by PostgreSQL, MySQL (InnoDB), and Oracle.
Benefits:
✔ No blocking reads
✔ High performance
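The core idea of MVCC fits in a few lines: every write appends a new version tagged with a commit timestamp, and a reader sees the newest version committed at or before its own snapshot. This is a toy model; PostgreSQL and InnoDB realize the same idea with tuple visibility rules and undo logs.

```python
# Toy MVCC: versioned items plus snapshot reads.
versions = {"A": [(0, 100)]}        # item -> list of (commit_ts, value)

def write(item, value, commit_ts):
    versions[item].append((commit_ts, value))

def read(item, snapshot_ts):
    # Newest version whose commit_ts <= the reader's snapshot
    visible = [v for ts, v in versions[item] if ts <= snapshot_ts]
    return visible[-1]

write("A", 150, commit_ts=5)        # a writer commits at ts = 5

print(read("A", snapshot_ts=3))     # 100 -- old snapshot still sees old value
print(read("A", snapshot_ts=7))     # 150 -- newer snapshot sees the new value
```

This is why MVCC reads never block: a reader never waits for a writer, it simply uses an older version.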
Transaction Isolation Levels
DBMS defines isolation levels to balance consistency and performance:
| Isolation Level | Problems Prevented |
|---|---|
| Read Uncommitted | — |
| Read Committed | Dirty Reads |
| Repeatable Read | Dirty + Unrepeatable Reads (phantoms still possible) |
| Serializable | All problems prevented |
Serializability
To ensure correctness, concurrent transactions must be serializable, meaning the result is the same as if transactions ran one after another.
Types:
- Conflict Serializability
- View Serializability
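Conflict serializability has a mechanical test: build a precedence graph with an edge Ti → Tj whenever an operation of Ti conflicts with a later operation of Tj on the same item (two operations conflict if at least one is a write), then check for cycles. A simple sketch, with made-up helper names:

```python
# Precedence-graph test for conflict serializability.
def precedence_edges(schedule):
    # schedule: list of (txn, op, item), op in {"R", "W"}
    edges = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[i + 1:]:
            if ti != tj and x == y and "W" in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    def reachable(start, target, seen=()):
        return any(n == target or (n not in seen
                   and reachable(n, target, seen + (n,)))
                   for n in graph.get(start, ()))
    return any(reachable(t, t) for t in graph)

# T1 reads A, T2 writes A, T2 reads B, T1 writes B: a classic cycle
s1 = [("T1", "R", "A"), ("T2", "W", "A"), ("T2", "R", "B"), ("T1", "W", "B")]
print(has_cycle(precedence_edges(s1)))   # True  -> not conflict-serializable

# All of T1's operations before all of T2's: effectively serial
s2 = [("T1", "R", "A"), ("T1", "W", "A"), ("T2", "R", "A"), ("T2", "W", "A")]
print(has_cycle(precedence_edges(s2)))   # False -> conflict-serializable
```

An acyclic precedence graph means the schedule is equivalent to some serial order (any topological order of the graph).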
Conclusion
Concurrent transactions are essential for multi-user DBMS environments to improve performance and throughput. However, concurrency can introduce several anomalies, which are managed using locking, timestamp ordering, MVCC, and isolation levels. Understanding concurrency and its control mechanisms ensures data correctness, consistency, and reliability in database systems.