EF Core Optimistic Concurrency vs Pessimistic Locking in .NET: Which Conflict Strategy Should Your Team Use in 2026?

Concurrency conflicts are silent killers in enterprise .NET applications. Two users update the same order record simultaneously; one wins, one loses data, and your application has no idea. EF Core gives you two primary weapons to fight this: optimistic concurrency and pessimistic locking. But picking the wrong one for the wrong scenario costs you either performance or data integrity. This guide breaks down both strategies, adds a third option most teams overlook, and gives you a clear decision matrix so you can stop guessing.
Want implementation-ready .NET source code you can drop straight into your project? Join Coding Droplets on Patreon for exclusive tutorials, premium code samples, and early access to new content: https://www.patreon.com/CodingDroplets
What Is the Concurrency Problem in EF Core?
When two or more requests read the same database row, modify it independently, and then try to save their changes, only one of those writes can be correct. The other is working from stale data. This is a lost update, and it happens constantly in multi-user systems, microservices with shared databases, and any API that handles inventory, reservations, financial balances, or collaborative documents.
EF Core does not prevent lost updates by default. If two threads load the same entity and both call SaveChangesAsync, the second write silently overwrites the first. You need an explicit concurrency strategy.
Optimistic Concurrency in EF Core
Optimistic concurrency operates on a trust-first assumption: collisions are rare, so we do not lock data up front. Instead, EF Core records the state of the row at read time and checks whether it has changed when the write occurs. If someone else modified the record in the meantime, EF Core throws a DbUpdateConcurrencyException rather than saving stale data.
The mechanism works through a concurrency token: typically a RowVersion column (SQL Server timestamp/rowversion type) or a [ConcurrencyCheck] property on a specific field. EF Core includes this token in the WHERE clause of every UPDATE statement it generates. If zero rows are affected, the token changed, and EF Core raises the exception.
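As a minimal sketch of this setup (the `Order` entity and its properties are illustrative names, not part of EF Core), a rowversion token can be declared with the `[Timestamp]` attribute:

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Illustrative entity with a rowversion concurrency token.
public class Order
{
    public int Id { get; set; }
    public string Status { get; set; } = "";

    // SQL Server fills and bumps this 8-byte value on every UPDATE.
    [Timestamp]
    public byte[] RowVersion { get; set; } = Array.Empty<byte>();
}

// EF Core then generates roughly:
//   UPDATE Orders SET Status = @p0
//   WHERE Id = @p1 AND RowVersion = @p2;
// Zero rows affected => DbUpdateConcurrencyException.
```

The same effect can be configured fluently via `modelBuilder.Entity<Order>().Property(o => o.RowVersion).IsRowVersion()` if you prefer to keep attributes off your entities.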
Key characteristics:
- No database locks held between read and write
- Scales well under high read volume
- Requires application-level conflict detection and retry logic
- Best suited to scenarios where conflicts are infrequent: reads far outnumber writes on the same row
When optimistic concurrency works well:
- High-traffic read APIs where most requests never modify the same record simultaneously
- E-commerce product catalogue updates where conflicts are occasional
- User profile edits (low collision probability)
- Any workload where a retry-on-conflict policy is acceptable
Where optimistic concurrency breaks down:
- Inventory decrement under high concurrency: many requests compete for the same stock quantity, causing retry cascades
- Financial transfers where a missed conflict means an incorrect balance
- Reservation systems with a narrow availability window: optimistic conflicts under load require aggressive retry logic that can spiral into retry storms
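When optimistic concurrency does fit, the conflict handler matters as much as the token. A sketch of a bounded retry loop (the `db` context, `Orders` set, and `ApplyBusinessLogic` helper are placeholder names for your own code):

```csharp
using Microsoft.EntityFrameworkCore;

// Bounded retry on DbUpdateConcurrencyException with simple backoff.
const int maxAttempts = 3;
for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        var order = await db.Orders.SingleAsync(o => o.Id == orderId);
        ApplyBusinessLogic(order);   // re-run the decision against fresh data
        await db.SaveChangesAsync();
        break;                       // success
    }
    catch (DbUpdateConcurrencyException ex) when (attempt < maxAttempts)
    {
        // Refresh the stale entries so the next attempt sees current values.
        foreach (var entry in ex.Entries)
            await entry.ReloadAsync();

        await Task.Delay(TimeSpan.FromMilliseconds(50 * attempt)); // backoff
    }
}
```

Note that the business logic runs again on every attempt, which is why the surrounding handler must be idempotent.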
Pessimistic Locking in EF Core
Pessimistic locking takes the opposite assumption: conflicts are likely or the cost of a conflict is too high to tolerate. Rather than checking at write time, it prevents concurrent access entirely by acquiring a database-level lock before reading. Other transactions attempting to modify the same row are blocked until the lock is released.
EF Core does not have a first-class pessimistic lock API (unlike some ORMs). You implement it via raw SQL hints inside a transaction. For SQL Server, that means SELECT ... WITH (UPDLOCK, ROWLOCK). For PostgreSQL, it is SELECT ... FOR UPDATE. Both patterns take a row-level lock that blocks competing writers for the duration of the transaction.
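On SQL Server, the pattern looks roughly like this (the `db` context and `Products` entity are illustrative; the lock is released when the transaction completes):

```csharp
using Microsoft.EntityFrameworkCore;

// Pessimistic row lock via a raw SQL hint inside an explicit transaction.
await using var tx = await db.Database.BeginTransactionAsync();

var product = await db.Products
    .FromSqlInterpolated(
        $"SELECT * FROM Products WITH (UPDLOCK, ROWLOCK) WHERE Id = {productId}")
    .SingleAsync();

if (product.Stock > 0)
{
    product.Stock--;               // safe: no other writer can touch this row
    await db.SaveChangesAsync();
}

await tx.CommitAsync();            // lock released here
```

`FromSqlInterpolated` parameterizes the interpolated values, so this remains safe against SQL injection. On PostgreSQL the hint would be replaced with `FOR UPDATE` at the end of the SELECT.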
Key characteristics:
- Lock is held from read to write, guaranteeing mutual exclusion
- No conflict exceptions; serialization is enforced at the database layer
- Does not scale as well under high concurrency: threads queue waiting for the lock
- Transaction duration is critical: long-held locks become bottlenecks fast
When pessimistic locking is the right call:
- Payment processing and ledger updates where a conflict means financial loss
- Seat or appointment reservation where exactly one allocation must succeed
- Counter decrement for limited-availability resources (flash sales, license seat allocation)
- Any scenario where retrying a failed operation carries unacceptable side effects
Where pessimistic locking becomes a liability:
- High-throughput APIs processing thousands of requests per second: serialized locks create a queue and kill latency
- Operations that span multiple tables: deadlock risk increases significantly
- Microservice boundaries: holding a database lock across a network call to another service is a recipe for cascading stalls
The Third Option: Application-Level Distributed Locking
Many teams treat this as a binary choice and miss a third strategy that often fits better in distributed systems: application-level locking using a distributed lock manager such as Redis (via the Redlock algorithm or the DistributedLock NuGet package).
Instead of relying on the database to serialize access, the application acquires a named lock on a specific resource key before reading or writing. Only one instance holds the lock at a time. The database layer handles no locking at all.
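A sketch using the DistributedLock.Redis package (check the API against the version you install; `ReserveStockAsync` is a placeholder for your own data access):

```csharp
using Medallion.Threading.Redis;
using StackExchange.Redis;

var redis = await ConnectionMultiplexer.ConnectAsync("localhost:6379");

// One named lock per resource key, e.g. per SKU.
var @lock = new RedisDistributedLock($"inventory:{sku}", redis.GetDatabase());

// TryAcquireAsync returns null if the lock is not obtained within the timeout.
await using var handle = await @lock.TryAcquireAsync(TimeSpan.FromSeconds(5));
if (handle is null)
    throw new TimeoutException("Could not acquire inventory lock.");

// Critical section: only one instance across the cluster runs this at a time.
await ReserveStockAsync(sku);
```

Disposing the handle releases the lock; the library also manages lock expiry under the hood so a crashed holder does not stall the system forever.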
When distributed application-level locks make sense:
- Multi-instance deployments (Kubernetes, Azure App Service multiple instances) where database-level pessimistic locks are difficult to coordinate
- Cross-service coordination: you need to guard a logical operation that spans multiple databases or services
- You want lock timeout control at the application layer without worrying about database connection pooling effects
Trade-offs to accept:
- Introduces Redis (or another distributed cache) as a dependency
- Network latency for lock acquisition adds to every operation in the hot path
- Lock expiry tuning requires care: too short and you get false releases; too long and failures stall the system
Side-By-Side Comparison
| Dimension | Optimistic Concurrency | Pessimistic Locking | Distributed App Lock |
|---|---|---|---|
| Lock held at DB | No | Yes | No |
| Failure mode | Exception on save | Blocked wait | Exception on timeout |
| Throughput | High | Lower under contention | Medium |
| Retry logic required | Yes | No | Yes (timeout case) |
| Deadlock risk | None | Yes (multi-row) | Low (with timeouts) |
| Multi-instance safe | Yes | Yes (DB-level) | Yes (Redis-level) |
| Complexity | Low | Medium | Medium-High |
| Best fit | Low-collision reads/writes | Critical financial ops | Cross-service coordination |
Real-World Trade-Offs
The Inventory Problem
A product has 1 unit of stock. Ten concurrent requests try to reserve it. With optimistic concurrency, all ten read stock = 1, all ten generate an UPDATE WHERE rowversion = X statement, and nine will fail with DbUpdateConcurrencyException. If your retry policy re-checks stock after the exception, those nine requests correctly see stock = 0 and stop. This works, but you need robust retry logic and idempotent handlers.
With pessimistic locking, all ten requests queue at the database. The first acquires the lock, decrements stock to 0, and releases it. Request two reads stock = 0 and exits cleanly without an exception. Simpler outcome, but requests two through ten waited in line. At 100 requests per second on the same SKU, that queue is a problem.
The Balance Transfer Problem
A bank transfer debits account A and credits account B. This is a classic two-row operation. Optimistic concurrency can fail on either row independently, creating a partial retry scenario that requires careful transaction coordination. Pessimistic locking with a properly scoped transaction and row-level locks on both accounts is the safer default. The serialization overhead is acceptable for financial operations; correctness is the constraint, not throughput.
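One detail the transfer case adds: lock the two account rows in a consistent order (e.g. ascending Id), or two concurrent transfers in opposite directions can deadlock. A sketch under the same assumptions as before (illustrative `db` context and `Accounts` entity, SQL Server hints):

```csharp
using Microsoft.EntityFrameworkCore;

// Always lock rows in ascending Id order to avoid deadlocks.
var (first, second) = fromId < toId ? (fromId, toId) : (toId, fromId);

await using var tx = await db.Database.BeginTransactionAsync();

var a = await db.Accounts
    .FromSqlInterpolated(
        $"SELECT * FROM Accounts WITH (UPDLOCK, ROWLOCK) WHERE Id = {first}")
    .SingleAsync();
var b = await db.Accounts
    .FromSqlInterpolated(
        $"SELECT * FROM Accounts WITH (UPDLOCK, ROWLOCK) WHERE Id = {second}")
    .SingleAsync();

var from = a.Id == fromId ? a : b;   // map back to logical roles
var to   = a.Id == fromId ? b : a;

if (from.Balance < amount)
    throw new InvalidOperationException("Insufficient funds.");

from.Balance -= amount;
to.Balance   += amount;

await db.SaveChangesAsync();
await tx.CommitAsync();              // both locks released together
```

Because both locks live inside one database transaction, the debit and credit commit or roll back as a unit.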
Decision Matrix: Which Strategy Fits Your Scenario
| Scenario | Recommended Strategy | Reason |
|---|---|---|
| User profile updates, low collision | Optimistic | Conflicts rare; simple retry is fine |
| Product catalogue edits | Optimistic | Infrequent same-row writes |
| Inventory decrement at scale | Pessimistic or distributed lock | Collision probability is high |
| Payment / ledger update | Pessimistic | Correctness > throughput |
| Seat/appointment reservation | Pessimistic | Exactly-once allocation required |
| Cross-service resource guard | Distributed app lock | Spans services/databases |
| High-read, low-write API | Optimistic | Locks are pure overhead |
| Flash sale / limited availability | Pessimistic or distributed lock | High contention, correctness critical |
Anti-Patterns to Avoid
Optimistic concurrency without retry logic. Throwing DbUpdateConcurrencyException to the caller and returning a 500 is not a strategy. Your application must catch the exception, reload the entity, re-apply the business logic, and retry, with a bounded attempt count and backoff.
Pessimistic locking on tables with high fan-out. Locking a parent row that is touched by hundreds of child operations creates a sequential bottleneck. Scope your locks as narrowly as possible: to the specific row, not the table.
Holding pessimistic locks across network calls. Acquiring a database lock, calling an external HTTP service, then releasing the lock is asking for trouble. External calls may time out or hang. Lock duration should cover only the data access, not downstream dependencies.
Using optimistic concurrency for financial operations without understanding the retry semantics. A retry is not idempotent by default. If your SaveChangesAsync retry path double-charges a customer because the business logic re-ran, the concurrency strategy is correct but the retry implementation is wrong.
Skipping concurrency entirely because "it probably won't happen." It will. Under load, it always does.
Recommendation: What Your .NET Team Should Standardize On
Start with optimistic concurrency as the default. It is the correct choice for the majority of enterprise workloads: lower complexity, no lock contention, and straightforward exception handling. Configure a RowVersion column on entities that are likely to be contested, wire up a DbUpdateConcurrencyException handler with bounded retries, and ship.
Switch to pessimistic locking โ scoped tightly, within short transactions โ for operations where the business cost of a conflict exceeds the performance cost of serialization. Financial operations, allocation of scarce resources, and anywhere "retry" has an observable side effect belong in this bucket.
Introduce distributed application-level locking when you are operating across service boundaries or need lock semantics that outlive a single database transaction.
The teams that get into trouble are the ones that apply one strategy globally. Concurrency management is not a project-wide setting; it is a per-operation decision.
Prefer a one-time tip? Buy us a coffee; every bit helps keep the content coming!
Frequently Asked Questions
What is the difference between optimistic concurrency and pessimistic locking in EF Core?
Optimistic concurrency does not hold a database lock. It records a concurrency token (such as a RowVersion) at read time and checks it at write time. If the token changed, EF Core throws DbUpdateConcurrencyException. Pessimistic locking acquires a database-level lock (via SQL hints like UPDLOCK or FOR UPDATE) before the read, blocking any other transaction from modifying the row until the lock is released.
Does EF Core support pessimistic locking natively?
EF Core does not have a built-in pessimistic lock API. You implement it using FromSqlRaw or ExecuteSqlRaw with database-specific lock hints inside an explicit transaction. SQL Server uses WITH (UPDLOCK, ROWLOCK); PostgreSQL uses FOR UPDATE.
When should I use optimistic concurrency in ASP.NET Core APIs?
Use optimistic concurrency when conflicts are infrequent: high-read, low-write workloads such as user profile edits, content management, or product catalogues. It avoids lock overhead and scales well. Ensure you have a DbUpdateConcurrencyException handler with retry logic.
Can optimistic concurrency cause a lost update?
No, that is exactly what it prevents. Without any concurrency control, a lost update occurs silently. With optimistic concurrency, the second writer receives a DbUpdateConcurrencyException, signaling that the data changed since it was read. Your application must handle this exception and decide whether to retry, merge, or reject the operation.
What is a RowVersion column in EF Core?
A RowVersion column is an 8-byte timestamp value that SQL Server automatically increments every time a row is updated. EF Core uses it as a concurrency token: it includes the original RowVersion value in the WHERE clause of UPDATE statements. If the row was modified by another transaction, the RowVersion will have changed and the UPDATE will affect zero rows, triggering DbUpdateConcurrencyException.
Is pessimistic locking safe in multi-instance deployments?
Yes. Database-level pessimistic locks work across application instances because the lock lives in the database, not in memory. All instances connecting to the same database server will be serialized correctly. However, connection pool pressure and lock duration become critical factors at scale.
What is the risk of using pessimistic locking with long-running transactions?
The primary risks are deadlocks (if multiple rows are locked in inconsistent order) and throughput degradation (as concurrent requests queue waiting for the lock). Best practice is to keep pessimistic lock transactions as short as possible: acquire the lock, read, write, release. Never hold a database lock while calling an external service or performing a slow operation.
Should I use optimistic or pessimistic concurrency for inventory management?
It depends on expected contention. For moderate concurrency, optimistic concurrency with a robust retry handler works. For high-throughput flash sale inventory (hundreds of requests per second on the same SKU), pessimistic locking or a distributed application-level lock provides more predictable behaviour with fewer retry cascades.




