
C# Multithreading and Concurrency Interview Questions for Senior .NET Developers (2026)

Published • 15 min read

Multithreading and concurrency remain among the most heavily tested topics in senior .NET interviews. Whether you are interviewing at a fintech firm building high-throughput payment processors or an e-commerce platform handling thousands of concurrent orders, interviewers use C# concurrency questions to separate developers who have merely used async/await from those who genuinely understand what happens inside the CLR's threading infrastructure. This guide covers the questions that matter most in 2026, updated for .NET 10 and C# 14, grouped from foundational concepts through advanced production-grade patterns.

๐ŸŽ Want implementation-ready .NET source code you can drop straight into your project? Join Coding Droplets on Patreon for exclusive tutorials, premium code samples, and early access to new content. ๐Ÿ‘‰ https://www.patreon.com/CodingDroplets


Basic-Level Questions

What Is the Difference Between Concurrency and Parallelism in .NET?

Concurrency means structuring a program so that multiple tasks can be in progress at the same time, not necessarily executing simultaneously. Parallelism means multiple tasks are actually executing simultaneously on multiple CPU cores.

In .NET terms: async/await is concurrency — a single thread can handle many I/O-bound tasks by yielding control while waiting. Parallel.ForEach and Task.WhenAll over CPU-bound work are parallelism — multiple thread pool threads execute code at the same time on separate cores.

The practical distinction matters because misapplying parallelism to I/O-bound work wastes threads, while applying only concurrency to CPU-bound work underutilises available cores.
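The distinction can be seen in a short sketch (the timings in the comments are illustrative, not guaranteed):

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Concurrency (I/O-bound): 100 overlapping 200 ms waits share threads,
// so the whole batch finishes in roughly 200 ms, not 20 seconds.
var sw = Stopwatch.StartNew();
await Task.WhenAll(Enumerable.Range(0, 100).Select(_ => Task.Delay(200)));
Console.WriteLine($"I/O-bound batch: {sw.ElapsedMilliseconds} ms");

// Parallelism (CPU-bound): Parallel.For partitions the range across cores,
// each thread keeps a private subtotal, and the subtotals are merged once.
long total = 0;
Parallel.For(0, 1_000_000, () => 0L,
    (i, _, subtotal) => subtotal + i,                   // per-thread partial sum
    subtotal => Interlocked.Add(ref total, subtotal));  // merge once per thread
Console.WriteLine(total); // 499999500000
```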


What Is a Race Condition and How Do You Prevent It?

A race condition occurs when two or more threads read and write shared state in an uncoordinated way, producing results that depend on the unpredictable order of execution.

Prevention strategies in C#:

  • lock statement — mutual exclusion using a monitor; the most common approach for short critical sections
  • Interlocked class — atomic operations (Increment, CompareExchange) for simple counter or flag updates without a lock
  • Immutable data — threads that never write to shared state cannot race
  • Thread-local storage — [ThreadStatic] or ThreadLocal<T> gives each thread its own copy
  • Concurrent collections — ConcurrentDictionary<TKey, TValue> and ConcurrentQueue<T> manage internal synchronisation for you

Interviewers want to hear that you reach for the lightest mechanism first: Interlocked for counters, lock for short critical sections, and concurrent collections for shared data structures.
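To make the "lightest mechanism first" point concrete, here is a sketch contrasting an unsynchronised counter with an Interlocked one under the same contention:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

int unsafeCount = 0, safeCount = 0;

// Eight tasks each increment both counters 100,000 times.
await Task.WhenAll(Enumerable.Range(0, 8).Select(_ => Task.Run(() =>
{
    for (int i = 0; i < 100_000; i++)
    {
        unsafeCount++;                        // read-modify-write race
        Interlocked.Increment(ref safeCount); // atomic increment
    }
})));

// safeCount is always 800000; unsafeCount is usually lower because
// concurrent read-modify-write cycles silently lose updates.
Console.WriteLine($"unsafe: {unsafeCount}  safe: {safeCount}");
```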


What Is the Difference Between a Thread and a Task in C#?

A Thread (System.Threading.Thread) is a low-level OS construct — creating one allocates roughly 1 MB of stack memory and kernel resources. You manage its lifetime explicitly.

A Task (System.Threading.Tasks.Task) is a higher-level abstraction that represents a unit of work, typically backed by a thread pool thread managed by the CLR. Tasks support:

  • Composability (ContinueWith, Task.WhenAll, Task.WhenAny)
  • Cancellation via CancellationToken
  • Exception propagation through AggregateException
  • async/await integration

In 2026 .NET applications, raw Thread creation is rare. You use Task or async/await for nearly everything. Reserve Thread for scenarios where you need a dedicated long-running thread with a specific priority or apartment state (e.g., COM interop).
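The division of labour can be sketched in a few lines (the thread body here is just a placeholder):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Everyday work goes to the pool via Task.Run / async-await.
int answer = await Task.Run(() => 21 * 2);
Console.WriteLine(answer); // 42

// A raw Thread is reserved for dedicated long-running work that needs
// explicit lifetime control, priority, or apartment state.
var dedicated = new Thread(() => Console.WriteLine("dedicated worker running"))
{
    IsBackground = true,
    Priority = ThreadPriority.BelowNormal,
};
dedicated.Start();
dedicated.Join();   // explicit lifetime management: wait for it to finish
```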


What Is the ThreadPool and How Does It Work?

The CLR ThreadPool is a shared pool of worker threads managed by the runtime. Instead of creating and destroying threads for each work item — which is expensive — you queue work to the pool, and available threads pick it up.

Key behaviours:

  • Minimum threads: The pool maintains a minimum number of idle threads. When all are busy, new threads are injected using an adaptive algorithm (hill-climbing) that measures throughput and adds threads when beneficial.
  • Maximum threads: Configurable via ThreadPool.SetMaxThreads. Defaults are very high (hundreds to thousands depending on platform).
  • Thread injection delay: When the pool is saturated and new work arrives, there is a deliberate delay before injecting new threads (500 ms by default). This is why blocking thread pool threads during high load causes latency spikes — the pool does not immediately compensate.

Senior interviewers often follow up: "Why should you never block a thread pool thread with Thread.Sleep or synchronous I/O inside a Task?" The answer: you starve the pool, trigger injection delays, and degrade throughput across the entire process.


What Does volatile Do in C# and When Should You Use It?

The volatile keyword tells the compiler and JIT not to cache the field's value in a register and not to reorder reads and writes across accesses to it (reads get acquire semantics, writes get release semantics). In practice, every read of a volatile field observes the most recent write by any thread.

It is suitable for simple flag fields (e.g., bool _cancelling) read by one thread and written by another, where you only need visibility, not atomicity of compound operations.

volatile is often misunderstood as a general synchronisation tool. It is not. It does not prevent race conditions on compound operations (read-modify-write). For those, use Interlocked or lock.
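A sketch of the visibility point. volatile applies only to fields, so this example uses the Volatile class, which provides the equivalent read/write semantics for a captured local:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Visibility-only flag: one thread writes it, another polls it.
bool cancelling = false;
long iterations = 0;

var worker = Task.Run(() =>
{
    // The volatile read stops the JIT from hoisting the flag check out of
    // the loop and caching it in a register.
    while (!Volatile.Read(ref cancelling))
        Interlocked.Increment(ref iterations);  // atomicity still needs Interlocked
});

await Task.Delay(100);                 // let the worker spin briefly
Volatile.Write(ref cancelling, true);  // publish the stop signal
await worker;                          // the worker observes it and exits

Console.WriteLine(iterations > 0); // True
```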


Intermediate-Level Questions

What Is the Difference Between lock, Monitor, Mutex, and SemaphoreSlim?

  • lock — in-process; not async-compatible. Short critical sections; syntactic sugar for Monitor
  • Monitor — in-process; not async-compatible. Same as lock but with TryEnter and Wait/Pulse for signalling
  • Mutex — cross-process; not async-compatible. Global named locks, e.g. ensuring a single instance of a process
  • SemaphoreSlim — in-process; async-compatible via WaitAsync. Rate-limiting concurrent access, e.g. limiting to N concurrent DB connections
  • Semaphore — cross-process; not async-compatible. Cross-process semaphore (rare)

The key senior answer here: prefer SemaphoreSlim with WaitAsync inside async code because lock cannot be held across await boundaries (the compiler enforces this). For cross-process coordination, use Mutex or a distributed lock.
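A sketch of the async-compatible gate, capping simulated I/O at two concurrent operations (the 50 ms delay stands in for real work):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var gate = new SemaphoreSlim(2, 2);   // at most 2 concurrent entrants
int concurrent = 0;
bool limitExceeded = false;

async Task WorkAsync()
{
    await gate.WaitAsync();           // awaitable, unlike lock/Monitor
    try
    {
        if (Interlocked.Increment(ref concurrent) > 2)
            limitExceeded = true;     // would mean the gate is broken
        await Task.Delay(50);         // simulated I/O under the gate
    }
    finally
    {
        Interlocked.Decrement(ref concurrent);
        gate.Release();
    }
}

await Task.WhenAll(Enumerable.Range(0, 10).Select(_ => WorkAsync()));
Console.WriteLine(limitExceeded); // False: never more than 2 in flight
```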


How Do You Use CancellationToken Correctly?

CancellationToken enables cooperative cancellation. The caller creates a CancellationTokenSource, passes the token to async methods, and can cancel via cts.Cancel().

Correct usage patterns:

  • Pass it everywhere — every async method in the call chain should accept and forward the token
  • Check ct.IsCancellationRequested in CPU-bound loops rather than calling ct.ThrowIfCancellationRequested() inside tight inner loops (reduces exception overhead)
  • Register cleanup — use ct.Register(() => ...) to release resources when cancellation occurs
  • Linked tokens — combine an external cancellation with a timeout using CancellationTokenSource.CreateLinkedTokenSource(externalToken, timeoutCts.Token)

Common anti-pattern: catching OperationCanceledException at every level and swallowing it. Only the outermost caller should decide whether a cancellation is an error or expected behaviour.
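Linking an external token with a timeout looks like this (the 100 ms timeout and the 5-second "work" are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

using var external = new CancellationTokenSource();
using var timeout = new CancellationTokenSource(TimeSpan.FromMilliseconds(100));
using var linked = CancellationTokenSource.CreateLinkedTokenSource(
    external.Token, timeout.Token);

bool cancelled = false;
try
{
    // Stand-in for real work; forwards the linked token as usual.
    await Task.Delay(TimeSpan.FromSeconds(5), linked.Token);
}
catch (OperationCanceledException)
{
    cancelled = true;   // the outermost caller decides how to surface this
}

Console.WriteLine($"cancelled: {cancelled}, timed out: {timeout.IsCancellationRequested}");
// cancelled: True, timed out: True
```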


What Are Concurrent Collections and When Should You Use Them?

The System.Collections.Concurrent namespace provides thread-safe collection types:

  • ConcurrentDictionary<TKey, TValue> — shared caches; producer-consumer with key lookups
  • ConcurrentQueue<T> — FIFO work queues; multiple producers, multiple consumers
  • ConcurrentStack<T> — LIFO work, e.g. undo stacks in multi-threaded editors
  • ConcurrentBag<T> — unordered producer-consumer where the same thread often consumes what it produced
  • BlockingCollection<T> — bounded producer-consumer pipelines with back-pressure

ConcurrentDictionary is the most commonly misused. Its AddOrUpdate and GetOrAdd methods are atomic for the dictionary state, but the factory delegate you pass in may be called multiple times under contention. Never put side-effect code (like database writes) inside the factory delegate.
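A common mitigation is to store Lazy<T> values, so the expensive work runs once even if GetOrAdd's delegate races (the load here is simulated):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var cache = new ConcurrentDictionary<string, Lazy<string>>();
int loads = 0;

// Losing threads may create extra Lazy wrappers, but every caller gets the
// stored Lazy back from GetOrAdd, so .Value runs the expensive load once.
string GetOrLoad(string key) =>
    cache.GetOrAdd(key, k => new Lazy<string>(() =>
    {
        Interlocked.Increment(ref loads);
        return $"value-for-{k}";      // stand-in for an expensive load
    })).Value;

await Task.WhenAll(Enumerable.Range(0, 16)
    .Select(_ => Task.Run(() => GetOrLoad("config"))));
Console.WriteLine(loads); // 1: the load ran exactly once
```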


What Is the Task Parallel Library (TPL) and What Are Its Key Types?

The TPL simplifies parallel and concurrent programming by abstracting thread management. Key types:

  • Task / Task<T> — represents a unit of asynchronous or parallel work
  • Parallel class — Parallel.For, Parallel.ForEach, Parallel.Invoke for data parallelism; automatically partitions work across thread pool threads
  • Task.Factory — exposes advanced task creation options (e.g., LongRunning to request a dedicated thread instead of a pool thread)
  • TaskCompletionSource<T> — bridges event/callback APIs into the Task-based world
  • Dataflow (TPL Dataflow) — pipeline and actor-model style processing with bounded buffers

Parallel.ForEach is frequently over-applied. It is correct for CPU-bound work on large collections. It is wrong for I/O-bound operations — use async/await with Task.WhenAll instead.


What Is a Deadlock in .NET and How Do You Diagnose and Prevent It?

A deadlock occurs when two or more threads are each waiting for a resource held by the other, creating a circular dependency that prevents any of them from making progress.

Classic .NET deadlock scenario: calling .Result or .Wait() on a Task inside code that runs on a SynchronizationContext (e.g., legacy ASP.NET or WinForms). The blocking call holds the context thread while the continuation tries to resume on the same context, producing a deadlock.

Prevention:

  • Never block on async code — use await all the way up the call stack
  • Use ConfigureAwait(false) in library code to avoid capturing the sync context
  • Apply timeouts — SemaphoreSlim.WaitAsync(timeout) instead of unbounded waits
  • Lock ordering — when multiple locks are necessary, always acquire them in the same global order

Diagnosis: use WinDbg with the SOS extension, or the Parallel Stacks window in Visual Studio, or dotnet-dump with clrthreads + clrstack to identify threads blocked waiting for each other.
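The synchronisation-context deadlock cannot be reproduced in a console app (there is no context to capture), so this sketch instead demonstrates the timeout rule: a bounded wait turns a would-be hang into a diagnosable failure.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var gate = new SemaphoreSlim(1, 1);
await gate.WaitAsync();   // acquired and, due to a simulated bug, never released

// An unbounded WaitAsync() here would await forever. The bounded overload
// reports failure instead, which can be logged and investigated.
bool acquired = await gate.WaitAsync(TimeSpan.FromMilliseconds(200));
Console.WriteLine(acquired); // False: the stuck lock is detected, not hung on
```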


Advanced-Level Questions

How Does the async/await State Machine Work Internally?

When the C# compiler encounters an async method, it rewrites it into a state machine type — a struct in Release builds that lives on the stack until the method first suspends, at which point it is moved (boxed) to the heap so it can outlive the call. Each await point becomes a state transition:

  1. The method executes synchronously until it reaches an await on an incomplete Task
  2. The continuation (what follows the await) is registered as a callback on the awaited task
  3. The method returns an incomplete Task to its caller
  4. When the awaited operation completes, the callback resumes the state machine from the correct state

The key insight for senior candidates: there is no dedicated thread blocked waiting during an I/O-bound await. The thread returns to the pool. This is the scalability advantage of async/await for I/O-heavy workloads — a single thread can handle thousands of concurrent in-flight requests.

In .NET 10, the JIT further optimises simple async state machines to avoid heap allocations when the task completes synchronously โ€” an important performance detail for hot paths.


What Is ValueTask<T> and When Should You Prefer It Over Task<T>?

Task<T> always allocates a heap object. For methods that frequently complete synchronously (e.g., a cache hit path), this allocation happens on every call even though no async work occurs.

ValueTask<T> avoids that allocation when the result is available immediately:

  • If the operation completes synchronously, ValueTask<T> is a struct wrapping the result — zero heap allocation
  • If the operation is truly async, it wraps the underlying Task<T> or a pooled IValueTaskSource<T> implementation

Rules for senior use:

  • Do not await a ValueTask<T> more than once — the behaviour is undefined
  • Do not call .Result or .GetAwaiter().GetResult() on an incomplete ValueTask<T> — unlike Task<T>, this is not a supported blocking wait; the behaviour is undefined
  • Use ValueTask<T> on hot paths where the synchronous path is common (e.g., caching layers, struct-backed async iterators)
  • Do not use it by default everywhere — for general APIs where async work is always needed, Task<T> is simpler and safer
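A sketch of the hot-path pattern: a cache hit completes synchronously and avoids the Task<T> allocation, while a miss falls back to real async work (the Task.Delay stands in for a database call):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

var cache = new ConcurrentDictionary<int, string>();

async ValueTask<string> GetRowAsync(int id)
{
    if (cache.TryGetValue(id, out var hit))
        return hit;                    // synchronous completion, no Task allocated

    await Task.Delay(10);              // simulated database call
    return cache.GetOrAdd(id, i => $"row-{i}");
}

string miss = await GetRowAsync(42);   // slow path populates the cache
string cached = await GetRowAsync(42); // fast path completes synchronously
Console.WriteLine($"{miss} / {cached}"); // row-42 / row-42
```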

What Are IAsyncEnumerable<T> and Async Streams?

IAsyncEnumerable<T> (C# 8 / .NET Core 3.0+) allows you to produce and consume sequences of data asynchronously, one item at a time, without buffering the entire collection.

In 2026 senior interviews, expect questions on:

  • Back-pressure: IAsyncEnumerable<T> is pull-based — the consumer controls the pace. Contrast with IObservable<T> (push-based, Rx.NET), where the producer drives the pace.
  • Cancellation: pass a CancellationToken via the [EnumeratorCancellation] parameter pattern
  • WithCancellation: use on the consumer side via await foreach (var item in source.WithCancellation(ct))
  • Interaction with Channel<T>: for producer-consumer pipelines in ASP.NET Core, System.Threading.Channels is the preferred approach; IAsyncEnumerable<T> is well-suited for streaming query results from EF Core or HTTP responses.
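A minimal async stream wiring [EnumeratorCancellation] to WithCancellation (the paged "query" is simulated with a delay):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

// Pull-based: each iteration of await foreach requests one more page.
async IAsyncEnumerable<int> ReadPagesAsync(
    [EnumeratorCancellation] CancellationToken ct = default)
{
    for (int page = 1; ; page++)
    {
        ct.ThrowIfCancellationRequested();
        await Task.Delay(10, ct);     // stand-in for one paged query
        yield return page;
    }
}

using var cts = new CancellationTokenSource();
var received = new List<int>();
try
{
    // WithCancellation flows cts.Token into the [EnumeratorCancellation] parameter.
    await foreach (var page in ReadPagesAsync().WithCancellation(cts.Token))
    {
        received.Add(page);
        if (page == 3) cts.Cancel();  // the consumer decides when to stop
    }
}
catch (OperationCanceledException) { }

Console.WriteLine(string.Join(",", received)); // 1,2,3
```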

How Do System.Threading.Channels Work and When Are They Preferable to BlockingCollection<T>?

Channel<T> (introduced in .NET Core 2.1) is a high-performance, async-first producer-consumer primitive. It supports:

  • Unbounded channels — producers never block; suitable when you trust producers not to flood memory
  • Bounded channels — producers wait (or drop, or throw) when the buffer is full; provides back-pressure

Comparison with BlockingCollection<T>:

  • Async support — BlockingCollection<T>: blocking calls only; Channel<T>: ReadAsync/WriteAsync
  • Back-pressure — BlockingCollection<T> blocks the producing thread; Channel<T> waits asynchronously via WaitToWriteAsync
  • Allocation efficiency — moderate for BlockingCollection<T>; very low for Channel<T>
  • Availability — BlockingCollection<T> on .NET Framework and later; Channel<T> on .NET Core 2.1+ (System.Threading.Channels)

For modern ASP.NET Core background processing pipelines, Channel<T> is the correct choice. It pairs naturally with IHostedService/BackgroundService for in-process work queues.
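A bounded-channel pipeline in miniature; in a real service the consumer loop would live inside a BackgroundService:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Capacity 8 with Wait mode: a full buffer suspends the producer
// asynchronously instead of blocking a thread pool thread.
var channel = Channel.CreateBounded<int>(new BoundedChannelOptions(8)
{
    FullMode = BoundedChannelFullMode.Wait,
    SingleReader = true,
});

var producer = Task.Run(async () =>
{
    for (int i = 1; i <= 100; i++)
        await channel.Writer.WriteAsync(i);   // back-pressure happens here
    channel.Writer.Complete();                // signals "no more items"
});

long sum = 0;
await foreach (var item in channel.Reader.ReadAllAsync())
    sum += item;                              // consumer drains at its own pace

await producer;
Console.WriteLine(sum); // 5050
```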


What Is the Lock Type in C# 13+ and How Does It Differ from lock (object)?

C# 13 introduced language support for System.Threading.Lock (new in .NET 9) — a dedicated synchronisation type that the lock statement recognises, designed to replace the lock (object) pattern with clearer semantics and better performance.

Key differences:

  • Lock has an explicit EnterScope() method returning a Lock.Scope (a ref struct), enabling the using pattern
  • When the lock statement targets a Lock, the compiler emits EnterScope/Dispose instead of the Monitor.Enter/Monitor.Exit pattern of classic lock, which is measurably cheaper
  • Because Lock is a dedicated type rather than an arbitrary object, it prevents accidental locking on publicly visible objects and is more amenable to future runtime intrinsics

For .NET 9 and .NET 10 codebases, migrating hot-path critical sections from lock (object) to the new Lock type is a low-risk, measurable throughput improvement worth noting when interviewers ask what is new in .NET threading in 2026.
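A sketch on .NET 9 or later (the Lock type is unavailable on earlier runtimes):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var gate = new Lock();   // System.Threading.Lock, .NET 9+
int counter = 0;

void Increment()
{
    // EnterScope returns a ref struct whose Dispose releases the lock;
    // `lock (gate) { ... }` compiles to the same pattern in C# 13.
    using (gate.EnterScope())
        counter++;
}

Parallel.For(0, 100_000, _ => Increment());
Console.WriteLine(counter); // 100000
```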


How Do You Write Thread-Safe Singletons in .NET Without Locking?

The correct approach is Lazy<T> with LazyThreadSafetyMode.ExecutionAndPublication (the default):
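A minimal sketch (the AppConfig name is hypothetical):

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Every concurrent caller observes the same instance.
var instances = await Task.WhenAll(
    Enumerable.Range(0, 16).Select(_ => Task.Run(() => AppConfig.Instance)));
Console.WriteLine(instances.Distinct().Count()); // 1

public sealed class AppConfig
{
    // ExecutionAndPublication is the default; shown explicitly for clarity.
    private static readonly Lazy<AppConfig> _lazy =
        new(() => new AppConfig(), LazyThreadSafetyMode.ExecutionAndPublication);

    public static AppConfig Instance => _lazy.Value;

    private AppConfig() { }   // no external construction
}
```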

Lazy<T> uses a lightweight double-checked locking implementation internally. The factory is called exactly once, even under concurrent access, and subsequent calls are lock-free reads.

Alternative for static class scenarios: the CLR's type-initializer guarantee — a static field initialised at declaration is guaranteed to be initialised exactly once before first use, under the CLR's type-load lock. This is the "static holder" pattern, where a private static nested class holds the singleton instance.

Interviewers test this because naive double-checked locking without volatile is a classic .NET gotcha that produces incorrect results on hardware with weak memory models.


FAQ

What Is the Difference Between async/await and Multithreading in C#?

async/await is primarily a concurrency model for I/O-bound work — it allows a single thread to handle many concurrent operations by releasing the thread while waiting for I/O. Multithreading uses multiple threads running in parallel, typically for CPU-bound work. They are complementary: use async/await for I/O, use Task.Run or the TPL for CPU-bound parallelism.

Why Should You Avoid Task.Run Inside ASP.NET Core Controllers?

ASP.NET Core is already async. Wrapping synchronous work in Task.Run inside a controller adds unnecessary thread pool overhead without improving throughput. It is appropriate only for genuinely CPU-bound work (e.g., image processing) to avoid blocking the request thread — but even then, consider offloading to a dedicated background queue.

What Is Thread Starvation and How Do You Prevent It in .NET?

Thread starvation occurs when all thread pool threads are blocked (e.g., waiting on synchronous I/O or .Result calls), and new work cannot execute because the pool cannot inject threads fast enough. Prevention: always use async/await for I/O-bound operations, set ThreadPool.SetMinThreads appropriately for workloads with many concurrent I/O-bound tasks, and avoid synchronous blocking on async code.

What Does ConfigureAwait(false) Do and When Should You Use It?

ConfigureAwait(false) tells the awaiter not to capture the current SynchronizationContext and resume the continuation on it. Instead, the continuation runs on whatever thread completes the awaited operation. Use it in library code and non-UI code to avoid deadlocks and improve performance by not marshalling back to a specific context. In ASP.NET Core, there is no SynchronizationContext, so ConfigureAwait(false) has no functional effect — but it is still good practice in shared libraries for compatibility with framework-hosting contexts.

What Are the Most Common C# Multithreading Mistakes Senior Developers Still Make?

  1. Blocking on async code (task.Result, task.Wait()) in synchronous code paths — causes deadlocks
  2. Shared mutable state without synchronisation — race conditions that are hard to reproduce
  3. Using lock across await — the compiler prevents awaiting inside a lock statement, but developers work around it incorrectly
  4. Abusing Parallel.ForEach for I/O-bound loops — creates excessive thread pool pressure
  5. Ignoring OperationCanceledException propagation — hiding cancellation state from the caller
  6. Side effects inside ConcurrentDictionary factory delegates — executing non-idempotent code that may run multiple times under contention

How Does the .NET Memory Model Affect Multithreaded C# Code?

The .NET memory model (specified in ECMA-335, the CLI specification, with stronger guarantees documented for the modern .NET runtime) guarantees that volatile reads and writes, lock acquisitions and releases, and Interlocked operations establish happens-before relationships. Without these, the CPU and JIT are free to reorder instructions and cache values in registers, meaning one thread may not see another thread's writes. In practice, x86/x64 hardware is relatively strongly ordered, but on ARM (common for cloud VMs and mobile), weaker guarantees mean that missing synchronisation which "works fine" on x64 can fail on ARM. This is why volatile, Interlocked, and the System.Threading.Lock type matter even when code appears to work without them.


☕ Prefer a one-time tip? Buy us a coffee — every bit helps keep the content coming!


For more .NET deep-dives, visit codingdroplets.com or subscribe to the Coding Droplets YouTube channel.
