ASP.NET Core Deployment & Docker Interview Questions for Senior .NET Developers (2026)
Deployment and containerisation have moved from "ops concerns" to core developer competency. In 2026, senior .NET engineers are routinely questioned on Dockerfile structure, Kubernetes readiness probes, CI/CD pipeline design, and production-readiness thinking - not just on C# syntax and design patterns. Whether you are preparing for a staff engineer interview or evaluating a candidate, this guide covers the questions that consistently appear in senior-level technical rounds, along with clear, substantive answers that separate strong candidates from surface-level ones.
If you want to go deeper than theory - with annotated, production-ready Dockerfiles, GitHub Actions workflows, and fully wired deployment setups you can clone and adapt - the complete implementations are available on Patreon. The patterns covered here are exactly what enterprise .NET teams ship.
Understanding deployment in isolation is useful, but seeing it wired into a complete ASP.NET Core Web API - alongside authentication, caching, error handling, and observability - is what makes it click. Chapter 15 of the Zero to Production course walks through the full production Dockerfile, GitHub Actions CI/CD pipeline, and Docker Compose setup inside a real codebase you can run immediately.
Basic: Foundational Docker & Containerisation Questions
What is a multi-stage Docker build, and why is it required for .NET applications?
A multi-stage Docker build uses multiple FROM statements in a single Dockerfile. Each stage produces an intermediate image; subsequent stages can copy specific artefacts from earlier ones. For .NET applications, the standard pattern is:
- Stage 1 (build): Uses the full .NET SDK image to restore packages, build the project, and publish the release output.
- Stage 2 (runtime): Uses a smaller ASP.NET runtime image and copies only the published output from stage 1.
The result is a final image that contains no SDK, no source files, and no build tooling - only the runtime and the compiled application. This typically reduces image size from several hundred megabytes (SDK image) to roughly 100–250 MB (runtime image), depending on the application.
Without a multi-stage build, the image either bloats with build tools or requires a separate build pipeline outside Docker to produce the publish output.
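A minimal sketch of that two-stage pattern, assuming a single project named `MyApi` (the project name, paths, and .NET version are placeholders to adapt):

```dockerfile
# Stage 1: build with the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src

# Copy the project file first so the NuGet restore layer can be cached
COPY MyApi/MyApi.csproj MyApi/
RUN dotnet restore MyApi/MyApi.csproj

# Copy the remaining source and publish the release output
COPY . .
RUN dotnet publish MyApi/MyApi.csproj -c Release -o /app/publish

# Stage 2: run on the smaller ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:10.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyApi.dll"]
```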
What is the difference between the .NET SDK image and the ASP.NET runtime image?
| Image | Purpose | Contains | Typical Size |
|---|---|---|---|
| `mcr.microsoft.com/dotnet/sdk` | Build, test, publish | Full .NET SDK, build tools, CLI | ~700 MB |
| `mcr.microsoft.com/dotnet/aspnet` | Run ASP.NET Core apps | ASP.NET Core runtime only | ~200 MB |
| `mcr.microsoft.com/dotnet/runtime` | Run non-web .NET apps | .NET runtime only (no ASP.NET) | ~120 MB |
The SDK image should only appear in build stages, never in the final image. Strong candidates know this distinction immediately and explain the security motivation: a smaller attack surface in production.
What is a .dockerignore file, and why does it matter for .NET builds?
.dockerignore works like .gitignore - it tells the Docker CLI which files to exclude from the build context before sending it to the Docker daemon. For .NET projects, the most important exclusions are:
- `**/bin/` and `**/obj/` - local build artefacts that should not interfere with the container's build
- `.git/` - version control history adds unnecessary size
- `*.user` and `launchSettings.json` - developer-machine-specific files
Without a .dockerignore, the Docker build context includes all local build output and tooling files, which can slow builds significantly and pollute the container's restore/build steps with stale artefacts.
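A reasonable starting point for a .NET repository (extend to suit your project layout):

```
**/bin/
**/obj/
.git/
*.user
**/launchSettings.json
```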
How should ASP.NET Core configuration work inside a Docker container?
ASP.NET Core reads configuration through a layered provider hierarchy: appsettings.json, appsettings.{Environment}.json, environment variables, and command-line arguments - in that order, with later providers overriding earlier ones.
In a containerised deployment:
- `appsettings.json` contains non-sensitive defaults that are baked into the image
- Sensitive values (connection strings, API keys) are injected at runtime as environment variables - either from `docker run -e`, Docker Compose `environment:` keys, Kubernetes Secrets mounted as env vars, or a secrets manager like Azure Key Vault
- The `ASPNETCORE_ENVIRONMENT` environment variable controls which `appsettings.{Environment}.json` is loaded
A strong candidate warns against baking secrets into images - any secret in an image layer is permanently accessible via `docker history`.
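For example, assuming a connection string named `Default` in `appsettings.json`, the value can be overridden at container start using the double-underscore key separator (image name and values here are illustrative):

```bash
# Double underscores map to ':' in the configuration key (ConnectionStrings:Default)
docker run \
  -e ASPNETCORE_ENVIRONMENT=Production \
  -e ASPNETCORE_URLS=http://+:8080 \
  -e ConnectionStrings__Default="Host=db;Database=app;Username=api;Password=<from-secret-store>" \
  ghcr.io/org/myapi:3f8a2c1
```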
What is the non-root user pattern in Docker, and why is it a production requirement?
By default, Docker containers run as root. This is a security risk: if an attacker exploits a vulnerability in the application, they gain root access inside the container and, depending on the host configuration, potentially on the host itself.
The non-root pattern involves creating a dedicated system user in the Dockerfile and switching to that user before the ENTRYPOINT. Microsoft's .NET sample Dockerfiles include this pattern. Enterprise security policies and container platform compliance checks typically flag containers running as root. Candidates who mention this demonstrate production security awareness rather than just "it works on my machine" thinking.
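A sketch of the runtime stage with a dedicated user, assuming a Debian-based base image (newer .NET base images also ship a pre-created non-root `app` user you can switch to instead):

```dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:10.0-bookworm-slim AS runtime
WORKDIR /app
COPY --from=build /app/publish .

# Create an unprivileged system user and give it ownership of the app directory
RUN adduser --system --group --no-create-home appuser \
    && chown -R appuser:appuser /app

# Listen on a non-privileged port, since ports below 1024 require root
ENV ASPNETCORE_URLS=http://+:8080
USER appuser
ENTRYPOINT ["dotnet", "MyApi.dll"]
```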
Intermediate: CI/CD, Kubernetes, and Production Readiness
What does a production-ready GitHub Actions workflow for a .NET API look like?
A complete workflow has two distinct jobs:
Job 1 - Build and Test:
- Checkout code
- Setup .NET SDK (specify the exact version, e.g., `dotnet-version: '10.x'`)
- Restore packages
- Build in Release configuration
- Run unit and integration tests
- Fail the pipeline on any test failure

Job 2 - Build and Push Docker Image (only on the main branch):
- Login to the container registry (e.g., GitHub Container Registry `ghcr.io` or ACR)
- Build the Docker image with a versioned tag (Git SHA is preferred over `latest`)
- Push the image to the registry
Key senior-level points (a trimmed workflow sketch follows this list):
- Never tag production images as `latest` - it makes rollbacks and audits impossible
- SHA tagging (`ghcr.io/org/app:abc1234`) gives you an immutable, traceable artefact
- Separation of build-and-test from image push is intentional: tests gate the release
- Layer caching (using `actions/cache` or Docker's `--cache-from`) significantly reduces CI build times
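Something along these lines, with the registry path, image name, and .NET version as placeholders to adapt:

```yaml
name: ci

on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '10.x'
      - run: dotnet restore
      - run: dotnet build --configuration Release --no-restore
      - run: dotnet test --configuration Release --no-build

  docker-image:
    needs: build-and-test                       # tests gate the release
    if: github.ref == 'refs/heads/main'         # only push images from main
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - run: |
          docker build -t ghcr.io/org/app:${{ github.sha }} .
          docker push ghcr.io/org/app:${{ github.sha }}
```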
What are the three Kubernetes probe types, and how do they map to ASP.NET Core health checks?
| Probe | Kubernetes Meaning | ASP.NET Core Health Check |
|---|---|---|
| Liveness | Is the container alive? Restart if failing. | `/health/live` - checks if the process is running correctly (not deadlocked or unrecoverable) |
| Readiness | Is the container ready to receive traffic? Remove from Service if failing. | `/health/ready` - checks if downstream dependencies (DB, Redis, external services) are reachable |
| Startup | Has the container finished initialising? | `/health/startup` - allows slow-starting apps to delay liveness/readiness checks |
A common mistake is configuring a liveness probe that checks the database. If the database goes down, Kubernetes will restart all pods - compounding an outage rather than isolating it. Liveness should only check the application process itself. Readiness checks dependencies.
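One way to wire those endpoints in `Program.cs`, sketched with the built-in health check middleware and a `ready` tag to separate dependency checks (the `database` check below is a placeholder for a real one, such as those in the `AspNetCore.HealthChecks.*` packages):

```csharp
using Microsoft.AspNetCore.Diagnostics.HealthChecks;
using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

// Checks tagged "ready" are only run by the readiness endpoint
builder.Services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy())
    // Placeholder for real dependency checks (database, Redis, downstream services)
    .AddCheck("database", () => HealthCheckResult.Healthy(), tags: new[] { "ready" });

var app = builder.Build();

// Liveness: process-only, runs no registered checks and never touches dependencies
app.MapHealthChecks("/health/live", new HealthCheckOptions { Predicate = _ => false });

// Readiness: runs every check tagged "ready"
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("ready")
});

app.Run();
```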
How do you handle database migrations in a Dockerised ASP.NET Core application?
There are three common approaches, each with different trade-offs:
1. MigrateAsync at startup (context.Database.MigrateAsync() in Program.cs)
- Simple, self-contained
- Risk: every pod instance runs migrations on startup - race conditions in multi-replica deployments if EF Core migrations are not idempotent
- Acceptable for single-instance or carefully designed low-risk scenarios
2. Kubernetes init container
- A separate container runs `dotnet ef database update` before the main container starts
- Kubernetes ensures the init container completes successfully before starting the application container
- Clean separation of migration from application runtime
3. Separate migration job
- A Kubernetes Job or CI/CD pipeline step runs migrations as a one-time task
- Most controlled approach for production: migrations are an explicit, auditable step, not an implicit startup side effect
Strong candidates can articulate the race condition risk in approach 1 and explain why teams with multiple replicas or strict change management prefer approaches 2 or 3.
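A sketch of approach 2, assuming the migrations are packaged as an EF Core migration bundle (`efbundle`) in a dedicated image - the image names, secret, and connection wiring are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapi
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapi
  template:
    metadata:
      labels:
        app: myapi
    spec:
      initContainers:
        # Must complete successfully before the application container starts
        - name: db-migrations
          image: ghcr.io/org/myapi-migrations:3f8a2c1
          command: ["./efbundle", "--connection", "$(DB_CONNECTION)"]
          env:
            - name: DB_CONNECTION
              valueFrom:
                secretKeyRef:
                  name: myapi-secrets
                  key: db-connection
      containers:
        - name: myapi
          image: ghcr.io/org/myapi:3f8a2c1
          ports:
            - containerPort: 8080
```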
What is layer caching in Docker, and how do you optimise a .NET Dockerfile to take advantage of it?
Docker caches each layer produced by a RUN, COPY, or ADD instruction. A cache hit means Docker skips re-executing that layer - dramatically speeding up rebuilds.
The key optimisation for .NET is to copy and restore the project files before copying application source code. The pattern is: copy .sln and .csproj files first, run dotnet restore, then copy the full source and run dotnet publish. This way, as long as project files haven't changed, Docker reuses the NuGet restore layer from cache - saving minutes per build in large solutions.
If the restore only runs after the full source has been copied (or both happen in a single step), every source code change invalidates the restore cache and forces a full package download on every build. Separating these into distinct layers is the single highest-value Dockerfile optimisation for .NET teams.
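In a multi-project solution, the build stage might be ordered like this (solution and project paths are illustrative); everything above the full source copy stays cached across day-to-day source changes:

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:10.0 AS build
WORKDIR /src

# Project files change rarely, so this restore layer is usually a cache hit
COPY MyApp.sln .
COPY src/MyApi/MyApi.csproj src/MyApi/
COPY src/MyApi.Core/MyApi.Core.csproj src/MyApi.Core/
RUN dotnet restore MyApp.sln

# Source changes only invalidate the layers from here down
COPY . .
RUN dotnet publish src/MyApi/MyApi.csproj -c Release -o /app/publish --no-restore
```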
How should you pass environment-specific configuration to an ASP.NET Core container at runtime?
The correct pattern for containerised .NET apps (a Kubernetes fragment illustrating the wiring follows this list):

- Non-sensitive configuration: Bake `appsettings.json` defaults into the image. For environment overrides, mount `appsettings.Production.json` as a ConfigMap volume in Kubernetes (appropriate for non-sensitive environment flags, feature toggles, external URLs).
- Sensitive configuration: Inject as environment variables from Kubernetes Secrets, Azure Key Vault (via the Key Vault provider), AWS Secrets Manager, or HashiCorp Vault. Never mount secrets as files in Kubernetes unless they are explicitly encrypted at rest.
- `ASPNETCORE_URLS`: Set this to `http://+:8080` (a non-privileged port) in the Dockerfile or compose file - required when running as a non-root user, since ports below 1024 require elevated privileges.
- `ASPNETCORE_ENVIRONMENT`: Always set it explicitly (`Production`, `Staging`, etc.) - do not rely on defaults.
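A container spec fragment showing that split, with illustrative names:

```yaml
spec:
  containers:
    - name: myapi
      image: ghcr.io/org/myapi:3f8a2c1
      ports:
        - containerPort: 8080
      env:
        - name: ASPNETCORE_ENVIRONMENT
          value: "Production"
        - name: ASPNETCORE_URLS
          value: "http://+:8080"
        # Double underscores map to ':' - this overrides ConnectionStrings:Default
        - name: ConnectionStrings__Default
          valueFrom:
            secretKeyRef:
              name: myapi-secrets
              key: db-connection
```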
What role does Docker Compose play in .NET development, and what should it not be used for?
Use in development:
- Spin up the full local stack (API + PostgreSQL + Redis + any messaging infrastructure) with a single `docker compose up`
- Ensures every developer has an identical local environment
- Eliminates "works on my machine" dependency issues
What Compose is not for:
- Production deployment - Compose lacks Kubernetes-grade health management, rolling updates, autoscaling, and multi-node orchestration
- CI/CD artefact management - use a proper registry and Kubernetes manifests

A well-architected project maintains a docker-compose.yml (local dev) and separate Kubernetes YAML manifests (production) - Compose is a developer productivity tool, not a production orchestrator.
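A minimal local-development compose file along these lines (service names, ports, and credentials are placeholders intended for local use only):

```yaml
services:
  api:
    build: .
    ports:
      - "8080:8080"
    environment:
      ASPNETCORE_ENVIRONMENT: Development
      ASPNETCORE_URLS: http://+:8080
      ConnectionStrings__Default: Host=db;Database=app;Username=postgres;Password=localdev
      Redis__ConnectionString: redis:6379
    depends_on:
      - db
      - redis

  db:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: localdev
      POSTGRES_DB: app
    ports:
      - "5432:5432"

  redis:
    image: redis:7
```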
Advanced: Production Operations, Security, and Architecture
How do you implement zero-downtime deployments for an ASP.NET Core API in Kubernetes?
Zero-downtime deployments rely on:
- Rolling update strategy: Kubernetes gradually replaces old pods with new ones (`maxUnavailable: 0`, `maxSurge: 1` is a common production setting). No old pod is terminated until the new pod passes its readiness probe.
- Correct readiness probe configuration: The new pod must not receive traffic until it is genuinely ready - including database migrations, connection pool warmup, and cache preloading.
- Graceful shutdown handling in ASP.NET Core: When Kubernetes sends `SIGTERM`, ASP.NET Core has a configurable grace period (`ASPNETCORE_SHUTDOWNTIMEOUTSECONDS`) to finish in-flight requests. The `IHostedService.StopAsync` method should clean up background workers without abruptly dropping requests.
- Connection draining: If behind a load balancer or Ingress, ensure the load balancer removes the pod from rotation before the pod terminates (Kubernetes handles this with pod lifecycle hooks - a `preStop` sleep is a common workaround for timing gaps).
A candidate who only mentions "rolling update" without addressing readiness probes or graceful shutdown is describing the theory without the operational reality.
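Pulled together, the Deployment pieces that make this work might look like the following sketch (names, timings, and thresholds are illustrative and should be tuned per service):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapi
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # bring up one extra pod at a time
  selector:
    matchLabels:
      app: myapi
  template:
    metadata:
      labels:
        app: myapi
    spec:
      terminationGracePeriodSeconds: 30
      containers:
        - name: myapi
          image: ghcr.io/org/myapi:3f8a2c1
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /health/ready
              port: 8080
            periodSeconds: 10
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /health/live
              port: 8080
            periodSeconds: 15
          lifecycle:
            preStop:
              exec:
                # Give the load balancer time to stop routing before SIGTERM arrives
                command: ["sleep", "5"]
```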
What are the security hardening steps you apply to a .NET Docker image before production?
- Non-root user (covered above - mandatory)
- Use specific image tags, not `latest` - `mcr.microsoft.com/dotnet/aspnet:10.0-bookworm-slim`, not `aspnet:latest` - prevents unexpected runtime changes from upstream updates
- Distroless or minimal base images - consider `mcr.microsoft.com/dotnet/aspnet:10.0-alpine` to further reduce attack surface (fewer packages = fewer CVEs)
- No secrets in the image - verified via image scanning (Trivy, Snyk, Docker Scout)
- Read-only filesystem where possible - mark the container root as read-only and explicitly mount only the writable paths (temp, logs)
- Container vulnerability scanning in CI - run Trivy or equivalent on every image before pushing to the registry; fail the pipeline on CRITICAL CVEs
- Drop Linux capabilities in Kubernetes: `securityContext.capabilities.drop: ["ALL"]`
Strong candidates can name specific tools (Trivy, Snyk) and reference Kubernetes securityContext fields - indicating they have shipped to production, not just built demos.
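A pod spec fragment capturing several of these controls (values are a reasonable starting point, not a universal policy):

```yaml
spec:
  securityContext:
    runAsNonRoot: true
  containers:
    - name: myapi
      image: ghcr.io/org/myapi:3f8a2c1
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        # Writable scratch space, since the root filesystem is read-only
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir: {}
```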
How do you handle secrets rotation in a containerised .NET application without redeployment?
Three patterns are commonly used:
1. Kubernetes Secrets with volume mounts + reloading:
Mount the secret as a file. Kubernetes updates the mounted file when the Secret is updated (with a configurable sync delay). ASP.NET Core's IOptionsMonitor<T> with a file-based configuration provider can hot-reload without restart.
2. Azure Key Vault with ASP.NET Core provider:
The Azure.Extensions.AspNetCore.Configuration.Secrets package supports periodic reload intervals. Set ReloadInterval to poll the vault and pick up rotated values without pod restarts.
3. Sidecar/agent pattern: Vault Agent (HashiCorp) or the Secrets Store CSI Driver can inject refreshed secrets as files or environment variables, triggering application reload through signal or file-watching.
The anti-pattern to flag: baking a specific secret value into the image at build time. Any rotation requires a full image rebuild and redeployment - operationally fragile for credentials that rotate frequently.
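A sketch of pattern 2, assuming the `Azure.Extensions.AspNetCore.Configuration.Secrets` and `Azure.Identity` packages (the vault name and interval are illustrative):

```csharp
using Azure.Extensions.AspNetCore.Configuration.Secrets;
using Azure.Identity;

var builder = WebApplication.CreateBuilder(args);

// Poll the vault periodically so rotated secrets are picked up without a restart
builder.Configuration.AddAzureKeyVault(
    new Uri("https://my-vault.vault.azure.net/"),
    new DefaultAzureCredential(),
    new AzureKeyVaultConfigurationOptions
    {
        ReloadInterval = TimeSpan.FromMinutes(15)
    });

var app = builder.Build();
app.Run();
```

Consumers that read rotated values through `IOptionsMonitor<T>` (rather than a cached `IOptions<T>`) see the refreshed configuration after the next reload cycle.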
What production-readiness checks do you apply before declaring a .NET API ready for live traffic?
A production readiness checklist for a Dockerised ASP.NET Core API:
Container:
- Multi-stage build, non-root user, minimal base image
- No secrets baked in, vulnerability scan passing
Application:
- Structured logging (Serilog or equivalent) with correlation IDs on every log entry
- OpenTelemetry traces and metrics exported to the observability backend
- `/health/live` and `/health/ready` endpoints responding correctly
- `ASPNETCORE_ENVIRONMENT=Production` confirmed
- Request timeout middleware configured
- Global error handler returning Problem Details (RFC 7807) - no raw exceptions exposed
Kubernetes:
- Liveness, readiness, and startup probes configured with appropriate intervals and thresholds
- Resource requests and limits set (missing limits lead to noisy-neighbour problems; missing requests lead to incorrect scheduling)
- PodDisruptionBudget defined for high-availability services
- HorizontalPodAutoscaler configured for traffic-sensitive endpoints
Pipeline:
- Build, test, and security scan all passing
- SHA-tagged image in registry
- Deployment tracked (ArgoCD, Flux, or explicit GitOps)
Senior engineers who run through this kind of checklist fluently signal that they own operational quality, not just feature delivery.
Prefer a one-time tip? Buy us a coffee - every bit helps keep the content coming!
Frequently Asked Questions
What Docker base image should I use for an ASP.NET Core application in 2026?
Use mcr.microsoft.com/dotnet/aspnet:10.0-bookworm-slim for the runtime stage of a multi-stage build. For production security, consider 10.0-alpine if you can handle Alpine's musl libc implications. Always pin to a specific version tag rather than latest to avoid unexpected upstream changes. The SDK image (mcr.microsoft.com/dotnet/sdk:10.0) is used only in the build stage and never ships to production.
What port should an ASP.NET Core container listen on?
Non-privileged ports (1024+) are required when the container runs as a non-root user. The convention in .NET 10 is port 8080, set via the ASPNETCORE_HTTP_PORTS or ASPNETCORE_URLS=http://+:8080 environment variable. In Kubernetes, the container port is 8080 and the Service exposes it on the desired external port.
How do I run Entity Framework Core migrations in a Kubernetes deployment?
The safest pattern for production is a Kubernetes init container or a dedicated Kubernetes Job that runs dotnet ef database update (or a custom migration runner) before the main application pods start. This ensures migrations are applied exactly once, in sequence, before any application instance begins serving traffic. Running MigrateAsync inside each pod's startup is simpler but introduces a race condition risk in multi-replica deployments.
What is the difference between a Kubernetes liveness probe and a readiness probe for ASP.NET Core?
A liveness probe answers: "Is the process still functioning?" If it fails, Kubernetes restarts the pod. Map it to a lightweight /health/live endpoint that only checks the application process. A readiness probe answers: "Is this pod ready to handle requests?" If it fails, the pod is removed from the Service load balancer - but not restarted. Map it to /health/ready, which checks external dependencies (database, Redis, downstream services). Never check external dependencies in a liveness probe - a failing database would trigger cascading pod restarts across all replicas.
How do I pass different appsettings to a Docker container per environment?
Three approaches in order of preference: (1) Inject environment-specific values as environment variables at runtime - the ASP.NET Core configuration system automatically overrides matching keys. (2) Mount a Kubernetes ConfigMap as a volume containing appsettings.Production.json. (3) For secrets, inject values from Kubernetes Secrets or a secrets manager (Azure Key Vault, HashiCorp Vault) - never bake them into the image. The application code should be identical across environments; only the configuration injected at runtime changes.
Why is the latest Docker tag considered harmful in production?
The latest tag is mutable - any push to the repository can overwrite it. In production, this means: (1) rollbacks become unreliable because you cannot guarantee which image version latest points to; (2) audits and incident investigation are impaired because you cannot trace which exact image was running at a given time; (3) CI/CD pipelines may silently deploy a different version than intended. The recommended practice is SHA-based tagging (e.g., ghcr.io/org/app:3f8a2c1) for production deployments, with semantic version tags (v1.4.2) as human-readable aliases.
What is ASPNETCORE_ENVIRONMENT and why must it be set explicitly in containers?
ASPNETCORE_ENVIRONMENT tells ASP.NET Core which environment-specific configuration to load (appsettings.Production.json, appsettings.Staging.json) and controls framework behaviours like detailed error pages and developer exception pages. Without it, the default is Production - but relying on defaults is fragile. Always set it explicitly in the Dockerfile or Kubernetes Deployment manifest. An application running with Development settings in a production container can expose exception details to end users - a direct security exposure.




