
EF Core Bulk Operations in .NET: SaveChanges vs ExecuteUpdate/ExecuteDelete vs EFCore.BulkExtensions – Which Should Your Team Use in 2026?


EF Core bulk operations have quietly become one of the most consequential performance decisions a .NET team makes. With EF Core 7 introducing ExecuteUpdate and ExecuteDelete, and EF Core 10 refining them further, the landscape has shifted, but many teams are still defaulting to SaveChanges for everything.

๐ŸŽ Want implementation-ready .NET source code you can drop straight into your project? Join Coding Droplets on Patreon for exclusive tutorials, premium code samples, and early access to new content. ๐Ÿ‘‰ https://www.patreon.com/CodingDroplets

The Three Contenders: What Each Strategy Actually Does

Before picking a winner, you need to understand what each approach is doing under the hood.

SaveChanges (with AddRange/RemoveRange) is EF Core's default persistence path. Every entity goes through the change tracker: EF inspects it, detects modifications, generates SQL per entity, and batches those SQL commands into a single round-trip (since EF Core 5). It is thorough, safe, and developer-friendly. It also carries the full overhead of change tracking, entity materialisation, and interceptor execution.

ExecuteUpdate and ExecuteDelete bypass the change tracker entirely. Introduced in EF Core 7 and enhanced in EF Core 10, these methods translate directly to a single UPDATE ... WHERE or DELETE ... WHERE SQL statement. No entities are loaded into memory. No change-tracking events fire. The database handles everything in one operation.
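In code, the shape is straightforward. A minimal sketch, assuming a context with hypothetical Users and Sessions sets where User exposes Status and LastLoginDate:

```csharp
var cutoff = DateTime.UtcNow.AddYears(-1);

// Translates to a single UPDATE ... WHERE statement; no entities
// are loaded into memory and nothing is tracked.
int updated = await context.Users
    .Where(u => u.LastLoginDate < cutoff)
    .ExecuteUpdateAsync(setters => setters
        .SetProperty(u => u.Status, "Inactive"));

// Translates to a single DELETE ... WHERE statement.
int deleted = await context.Sessions
    .Where(s => s.ExpiresAt < DateTime.UtcNow)
    .ExecuteDeleteAsync();
```

Both methods return only the number of affected rows, which is all the information the database sends back.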

EFCore.BulkExtensions (and similar third-party libraries like Z.EntityFramework.Extensions) use native database bulk-copy mechanisms (SqlBulkCopy for SQL Server, COPY for PostgreSQL) to push large datasets with near-zero per-row overhead. They excel at mass inserts where neither SaveChanges nor ExecuteUpdate/ExecuteDelete can compete.
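A minimal sketch of the bulk-insert path with EFCore.BulkExtensions, assuming a hypothetical LogEntry entity; on SQL Server this streams rows through SqlBulkCopy rather than generating per-row INSERT statements:

```csharp
using EFCore.BulkExtensions;

var entries = Enumerable.Range(0, 100_000)
    .Select(i => new LogEntry
    {
        Message = $"event {i}",
        CreatedAt = DateTime.UtcNow
    })
    .ToList();

// One bulk-copy operation instead of 100,000 tracked inserts.
await context.BulkInsertAsync(entries);
```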

How Does the Performance Actually Stack Up?

Benchmarks published across the community for EF Core 10 on SQL Server show a striking picture:

For UPDATE operations on 10,000 rows:

  • SaveChanges + change tracking: baseline (each row tracked, N SQL calls batched)
  • ExecuteUpdate: roughly 300–550× faster: single SQL statement, no entity materialisation
  • Third-party bulk libraries: comparable to ExecuteUpdate for simple set-based updates

For DELETE operations on 10,000 rows:

  • RemoveRange + SaveChanges: baseline
  • ExecuteDelete: roughly 250–360× faster: single DELETE WHERE statement
  • Third-party libraries: marginally ahead when row count exceeds hundreds of thousands

For INSERT operations on 10,000 rows:

  • AddRange + SaveChanges: reasonable โ€” EF Core's internal batching is efficient up to ~50,000 rows
  • Third-party bulk insert (SqlBulkCopy-backed): 10–50× faster for very large datasets
  • ExecuteUpdate/ExecuteDelete: not applicable for inserts

The gap is real, not a synthetic benchmark artefact. ExecuteUpdate and ExecuteDelete issue a single SQL statement regardless of row count. SaveChanges generates one statement per entity, and while those statements are batched into a single round-trip, every entity must first pass through the change tracker.

What Do You Lose with ExecuteUpdate and ExecuteDelete?

The performance wins are not free. ExecuteUpdate and ExecuteDelete deliberately skip the change tracker. That means:

No interceptor execution. If you have ISaveChangesInterceptor implementations for audit logging, soft-delete enforcement, or domain event dispatch, they do not fire. Your audit trail has a gap.

No optimistic concurrency checks. SaveChanges respects concurrency tokens (row version, timestamp). ExecuteUpdate ignores them unless you manually add a WHERE clause condition.
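If you still need a concurrency guard, it has to be written into the predicate by hand. A sketch, assuming a hypothetical Order entity with a RowVersion token:

```csharp
// The RowVersion comparison is added to the WHERE clause manually;
// ExecuteUpdate does not apply concurrency tokens for you.
int affected = await context.Orders
    .Where(o => o.Id == orderId && o.RowVersion == expectedRowVersion)
    .ExecuteUpdateAsync(s => s.SetProperty(o => o.Status, "Shipped"));

if (affected == 0)
{
    // The row was changed or deleted by someone else. ExecuteUpdate will
    // not throw DbUpdateConcurrencyException, so handle the conflict here.
    throw new InvalidOperationException("Concurrency conflict on order update.");
}
```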

No generated value return. When inserting via SaveChanges, EF Core reads back generated identity columns and populates them on the entity. ExecuteUpdate and ExecuteDelete return a row count, nothing more.

No in-memory state update. If you have entities already tracked in the DbContext, executing ExecuteDelete against the same rows does not remove them from the change tracker. Your in-memory state diverges from the database.
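When a set-based operation might overlap with tracked entities, clearing the tracker afterwards is the simplest defence. A sketch, assuming a hypothetical Sessions set:

```csharp
await context.Sessions
    .Where(s => s.ExpiresAt < DateTime.UtcNow)
    .ExecuteDeleteAsync();

// Any Session instances tracked by this context may now point at rows
// that no longer exist; drop them so a later SaveChanges cannot try to
// update deleted rows.
context.ChangeTracker.Clear();
```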

No domain event dispatch at the application level. Teams using the Outbox pattern or dispatching domain events from SaveChanges overrides will find those events silently skipped.

Is the .NET Team Aware of This? Yes, Deliberately

This is not a bug or oversight. The EF Core team's documentation explicitly frames ExecuteUpdate/ExecuteDelete as the right choice when you do not need change-tracking semantics, and SaveChanges as the right choice when you do. The two methods exist in different parts of the contract for good reason.

When Does SaveChanges Win?

Despite the performance headlines, SaveChanges remains the correct default for the majority of enterprise API endpoints:

  • When you need interceptors to fire (auditing, domain events, soft-delete enforcement)
  • When you need optimistic concurrency to be enforced without writing custom SQL predicates
  • When row count is small (dozens to low thousands) and the batch overhead is negligible
  • When you need generated column values (identity, computed, row version) returned to the application
  • When entities are already tracked and you want EF Core to handle state management

For most CRUD endpoints serving HTTP requests, SaveChanges is correct. Premature optimisation here causes real correctness bugs.

When Should Your Team Use ExecuteUpdate and ExecuteDelete?

These methods belong in a specific, well-defined set of scenarios:

Batch status updates driven by criteria. Archiving inactive users, marking overdue invoices, bulk-flagging records: any scenario where you know the criteria and do not need per-entity side effects. The SQL WHERE clause is your friend here.
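A sketch of such a criteria-driven update, assuming a hypothetical Invoice entity; note that SetProperty calls chain into one UPDATE statement, and a setter can compute its value from the existing column:

```csharp
int flagged = await context.Invoices
    .Where(i => i.DueDate < DateTime.UtcNow && !i.IsPaid)
    .ExecuteUpdateAsync(s => s
        // Constant value for every matched row.
        .SetProperty(i => i.Status, "Overdue")
        // Value computed from the row itself, entirely in SQL.
        .SetProperty(i => i.ReminderCount, i => i.ReminderCount + 1));
```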

Cascading maintenance operations. Expiring sessions, clearing stale cache tokens, resetting retry counters: infrastructure operations where domain logic does not apply.

Scheduled background jobs. Jobs that process large sets of rows on a timer, where interceptors firing per-row would create unacceptable overhead or nonsensical audit entries.

Reporting-adjacent cleanup. Pre-aggregating or denormalising data in scheduled tasks where the operation is intentionally low-level.

The common thread: these are infrastructure or administrative operations, not user-driven domain operations. The moment your bulk operation has business rules (conditional logic per entity, domain events, concurrency requirements), reach for SaveChanges or at minimum wrap ExecuteUpdate with explicit compensating logic.

Where Do Third-Party Bulk Libraries Fit In?

EFCore.BulkExtensions and Z.EntityFramework.Extensions remain relevant in specific niches:

Bulk inserts at scale. If you are inserting 100,000+ rows (ETL pipelines, data migration, log ingestion), SqlBulkCopy is still the fastest mechanism for SQL Server. EF Core's AddRange + SaveChanges batching handles tens of thousands well, but beyond that, the per-row cost adds up.

Upserts (merge semantics). ExecuteUpdate requires the rows to already exist. For true upsert scenarios (insert if new, update if exists), EFCore.BulkExtensions provides BulkInsertOrUpdate, which maps to a SQL MERGE or ON CONFLICT DO UPDATE.
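A sketch of the upsert path, assuming a hypothetical Product entity matched on a natural Sku key rather than the primary key; the match column is configured via BulkConfig.UpdateByProperties:

```csharp
using EFCore.BulkExtensions;

// Existing rows with a matching Sku are updated; new Skus are inserted.
// On SQL Server this is executed as a MERGE against a staging table.
await context.BulkInsertOrUpdateAsync(products, new BulkConfig
{
    UpdateByProperties = new List<string> { nameof(Product.Sku) }
});
```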

Cross-database bulk patterns. For PostgreSQL, MySQL, or Oracle-specific bulk mechanisms, third-party libraries handle dialect differences so your application code stays generic.

The licensing consideration. EFCore.BulkExtensions is MIT-licensed and free. Z.EntityFramework.Extensions requires a commercial licence for production use. Evaluate this before standardising on it in enterprise projects.

Side-by-Side Comparison

| Criterion | SaveChanges | ExecuteUpdate/Delete | EFCore.BulkExtensions |
| --- | --- | --- | --- |
| Change tracker fires | ✅ Yes | ❌ No | ❌ No |
| Interceptors fire | ✅ Yes | ❌ No | ❌ No |
| Concurrency tokens respected | ✅ Yes | Manual | Manual |
| Generated values returned | ✅ Yes | ❌ No | Partial |
| Single SQL statement | ❌ Batched | ✅ Yes | ✅ Yes |
| Insert performance at scale | Moderate | N/A | ✅ Highest |
| Update performance at scale | Baseline | ✅ 300–550× | ✅ Comparable |
| Licence cost | Free | Free | Free (MIT) / Commercial |
| EF Core version required | Any | EF Core 7+ | Any |

What Does an Enterprise Decision Look Like?

A practical decision tree for most .NET teams:

  1. Is this a domain operation driven by a user action? → Use SaveChanges. Interceptors, events, and concurrency matter.
  2. Is this a criteria-based bulk update or delete (no per-entity logic)? → Use ExecuteUpdate/ExecuteDelete. Add explicit concurrency conditions if needed.
  3. Are you inserting 50,000+ rows in a single operation? → Evaluate EFCore.BulkExtensions for insert scenarios.
  4. Do you need upsert semantics? → Use EFCore.BulkExtensions BulkInsertOrUpdate.
  5. Is your team already using SaveChanges overrides or interceptors for audit/events? → Document explicitly which code paths bypass them and ensure compensating logging for ExecuteUpdate/ExecuteDelete paths.

Anti-Patterns to Avoid

Replacing all SaveChanges calls with ExecuteUpdate for performance. This breaks audit trails, domain events, and concurrency enforcement โ€” often silently. Correctness regressions from this pattern are rarely caught in unit tests.

Using SaveChanges for background jobs processing millions of rows. The change tracker overhead is real at scale. Background jobs that process large datasets should use ExecuteUpdate/ExecuteDelete or bulk libraries from the start.

Mixing tracked entities and ExecuteDelete without clearing the tracker. If an entity is tracked in the current DbContext scope and you ExecuteDelete it by criteria, the tracked entity remains in memory as if it still exists. Subsequent SaveChanges calls may attempt to update a row that no longer exists.

Assuming third-party bulk libraries respect your domain model. EFCore.BulkExtensions at its core maps to SqlBulkCopy. Your value converters, owned entity configurations, and shadow property mappings may not behave as expected. Test bulk paths explicitly with your domain model configuration.

The Clear Winner Recommendation

There is no single winner โ€” the right tool depends on the operation:

  • For user-facing domain operations: SaveChanges is correct. Do not compromise domain semantics for performance you do not need.
  • For criteria-based batch updates and deletes in background jobs or maintenance tasks: ExecuteUpdate and ExecuteDelete are the right choice in EF Core 7+. They are built-in, maintainable, and dramatically faster.
  • For high-volume bulk inserts or upserts in ETL/migration/ingestion pipelines: EFCore.BulkExtensions adds real value and its MIT licence removes friction.

The most common mistake is treating this as a permanent architectural choice. The correct approach is routing each operation to the right tool based on its requirements, not standardising the entire application on one path.

☕ Prefer a one-time tip? Buy us a coffee; every bit helps keep the content coming!

Frequently Asked Questions

Does ExecuteUpdate fire EF Core interceptors? No. ExecuteUpdate and ExecuteDelete bypass the change tracker and the SaveChanges pipeline entirely, so ISaveChangesInterceptor and IDbCommandInterceptor implementations registered on the DbContext do not fire for these operations. If you depend on interceptors for auditing or domain event dispatch, you must handle those explicitly alongside ExecuteUpdate/ExecuteDelete calls.

Is ExecuteUpdate safe to use with optimistic concurrency tokens? Not automatically. SaveChanges respects [ConcurrencyCheck] attributes and IsRowVersion() configuration and throws DbUpdateConcurrencyException when a conflict is detected. ExecuteUpdate generates a plain UPDATE WHERE statement without a concurrency token condition unless you manually add one to the .Where() predicate. Teams using row versioning must add explicit version checks to ExecuteUpdate calls.

Can I use ExecuteUpdate and SaveChanges in the same transaction? Yes. You can enlist both in a single IDbContextTransaction using BeginTransactionAsync(). This is the recommended pattern when you need fast set-based updates alongside change-tracked operations in the same logical unit of work: wrap both in a transaction and commit together.
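A sketch of that pattern, assuming hypothetical Notifications and AuditEntries sets; both operations commit or roll back together:

```csharp
await using var tx = await context.Database.BeginTransactionAsync();

// Fast set-based delete: no tracking, no interceptors.
await context.Notifications
    .Where(n => n.UserId == userId)
    .ExecuteDeleteAsync();

// Tracked insert: interceptors and SaveChanges overrides fire normally.
context.AuditEntries.Add(new AuditEntry
{
    Action = "NotificationsPurged",
    UserId = userId,
    Timestamp = DateTime.UtcNow
});
await context.SaveChangesAsync();

await tx.CommitAsync();
```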

When should I choose EFCore.BulkExtensions over ExecuteUpdate for updates? For set-based updates where all rows share the same new value (e.g., "set Status = 'Archived' where LastActiveDate < cutoff"), ExecuteUpdate is sufficient and requires no third-party dependency. For updates where each row needs a different value from an in-memory collection (e.g., batch-updating prices from a CSV import), EFCore.BulkExtensions' BulkUpdate method is more practical: it can match and update each row individually via SqlBulkCopy staging, which ExecuteUpdate cannot do natively.

Does EFCore.BulkExtensions respect value converters and owned entities? Partially. EFCore.BulkExtensions reads your DbContext model metadata and applies many configurations automatically, including value converters and column mappings. However, complex owned entity hierarchies, TPH/TPT inheritance, and certain custom conventions may not be handled correctly. Always integration-test bulk operation paths against your actual model configuration before shipping to production.

How do I audit bulk operations that bypass the change tracker? Common approaches include: (1) adding explicit log writes before or after ExecuteUpdate/ExecuteDelete calls in application service code; (2) using database-side triggers for tables that require an immutable audit trail; (3) creating a thin application-level wrapper that logs the operation metadata (affected criteria, row count, timestamp, actor) to a separate audit table independently of EF Core's change-tracking pipeline.
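A sketch of approach (3): a thin wrapper that records operation metadata to a separate audit table. BulkAuditEntry, BulkAudits, and AppDbContext are hypothetical names to adapt to your own audit schema:

```csharp
async Task<int> ExecuteDeleteAuditedAsync<T>(
    IQueryable<T> query, AppDbContext context, string actor, string reason)
{
    // Set-based delete; returns only the affected row count.
    int rowCount = await query.ExecuteDeleteAsync();

    // Record what happened, independently of the change-tracking pipeline.
    context.BulkAudits.Add(new BulkAuditEntry
    {
        Operation = "ExecuteDelete",
        EntityType = typeof(T).Name,
        RowCount = rowCount,
        Actor = actor,
        Reason = reason,
        Timestamp = DateTime.UtcNow
    });
    await context.SaveChangesAsync();

    return rowCount;
}
```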

Is SaveChanges batching still relevant in EF Core 10? Yes. EF Core 10 continues to batch SaveChanges SQL commands into a single round-trip when inserting, updating, or deleting multiple rows, up to the configured MaxBatchSize. For typical API workloads handling dozens to a few thousand rows per request, this batching makes SaveChanges highly efficient. The performance gap with ExecuteUpdate/ExecuteDelete only becomes operationally significant at tens of thousands of rows or in tight loops.