This list contains over 50 challenging questions for experienced ASP.NET Core developers, covering architecture, performance, security, and the framework’s internals.
Architecture & Design Patterns
- Question: Explain the “Vertical Slice Architecture.” How does it differ from traditional N-tier or Onion architecture, and what are its benefits in a modern ASP.NET Core application?
- Answer: Vertical Slice Architecture structures an application around features or “vertical slices” rather than technical layers (UI, BLL, DAL). Each slice is self-contained, handling its own logic from request to response, often including its own specific data access and business rules. This contrasts with N-tier architecture where a feature’s logic is spread across multiple horizontal layers.
- Benefits:
- High Cohesion, Low Coupling: Code for a single feature is located together, making it easier to understand, modify, and delete.
- Improved Scalability: You can optimize individual slices without affecting others.
- Flexibility: Different slices can use different data persistence strategies or patterns if needed. It avoids “one size fits all” architectural constraints.
- Better for Microservices: It’s a natural fit for evolving a monolith into microservices, as each slice is a potential service boundary.
- Question: How would you implement the “Outbox Pattern” in an ASP.NET Core application to ensure reliable message delivery in a distributed system?
- Answer: The Outbox Pattern ensures that an operation that spans a database transaction and a message broker publish is atomic. You can’t use a distributed transaction (like `TransactionScope`) with most modern message brokers.
- Implementation:
- Within the same database transaction as your business logic (e.g., creating an order), you insert a message/event into an “Outbox” table in the same database.
- The business transaction commits. Now the state change and the intent to publish an event are durably saved.
- A separate background process (e.g., a `BackgroundService` or a dedicated worker) polls the Outbox table for new messages.
- This process reads the messages and publishes them to the message broker (e.g., RabbitMQ, Kafka).
- Upon successful publication, the process marks the message in the Outbox table as processed or deletes it. This prevents duplicate message sending.
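The polling step above can be sketched as a `BackgroundService`. This is an illustrative outline only: `AppDbContext`, the `OutboxMessages` table, and the `IMessageBus` broker abstraction are assumed names, not framework types.

```csharp
// Hypothetical sketch of the outbox polling publisher.
public class OutboxPublisher : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public OutboxPublisher(IServiceScopeFactory scopeFactory) => _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // A singleton service cannot inject scoped services directly,
            // so create a scope per polling iteration.
            using var scope = _scopeFactory.CreateScope();
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            var bus = scope.ServiceProvider.GetRequiredService<IMessageBus>(); // assumed abstraction

            var pending = await db.OutboxMessages
                .Where(m => m.ProcessedAt == null)
                .OrderBy(m => m.Id)
                .Take(20)
                .ToListAsync(stoppingToken);

            foreach (var message in pending)
            {
                await bus.PublishAsync(message.Type, message.Payload, stoppingToken);
                message.ProcessedAt = DateTime.UtcNow; // mark only after a successful publish
            }

            await db.SaveChangesAsync(stoppingToken);
            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}
```

Note that this gives at-least-once delivery: a crash between publish and `SaveChangesAsync` can cause a re-publish, so consumers should be idempotent.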
- Question: Describe the role of `IHostApplicationLifetime` and how you would use it to manage graceful shutdown. Provide a code example.
- Answer: `IHostApplicationLifetime` provides events to hook into the application’s lifecycle: `ApplicationStarted`, `ApplicationStopping`, and `ApplicationStopped`. It’s crucial for graceful shutdown, allowing you to finish ongoing work, release resources, or notify other systems before the application terminates.
```csharp
public class MyGracefulShutdownService : IHostedService
{
    private readonly ILogger<MyGracefulShutdownService> _logger;
    private readonly IHostApplicationLifetime _appLifetime;

    public MyGracefulShutdownService(
        ILogger<MyGracefulShutdownService> logger,
        IHostApplicationLifetime appLifetime)
    {
        _logger = logger;
        _appLifetime = appLifetime;
    }

    public Task StartAsync(CancellationToken cancellationToken)
    {
        _appLifetime.ApplicationStarted.Register(() =>
            _logger.LogInformation("Application has started."));
        _appLifetime.ApplicationStopping.Register(OnShutdown);
        return Task.CompletedTask;
    }

    private void OnShutdown()
    {
        _logger.LogInformation("Application is shutting down. Cleaning up resources...");
        // Simulate cleanup work
        Thread.Sleep(2000);
        _logger.LogInformation("Cleanup complete.");
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}
```
- Question: What is idempotency and why is it critical in REST API design, especially for `POST` or `PUT` operations? How would you implement it?
- Answer: Idempotency means that making the same request multiple times produces the same result as making it once. `PUT` and `DELETE` are naturally idempotent; `POST` is not. In scenarios like payment processing, a client might retry a `POST` request due to a network error, potentially causing a duplicate charge.
- Implementation:
- The client generates a unique key (e.g., a GUID) called an `Idempotency-Key` and sends it in the request header.
- The server-side middleware or action filter checks if it has seen this key before. It maintains a cache (e.g., in Redis or a database table) of processed keys.
- If the key is new, the server processes the request, stores the result and the key in the cache, and returns the result.
- If the key has been seen, the server does not re-process the request but instead returns the cached response from the previous successful operation.
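As a rough sketch under stated assumptions, the steps above could be implemented as an endpoint filter backed by `IDistributedCache`. The `Idempotency-Key` header name, the cache key prefix, and directly serializing the handler's return value are all simplifications for illustration; a production version would capture the actual serialized HTTP response and status code.

```csharp
// Illustrative idempotency filter; not a built-in framework feature.
public class IdempotencyFilter : IEndpointFilter
{
    private readonly IDistributedCache _cache;

    public IdempotencyFilter(IDistributedCache cache) => _cache = cache;

    public async ValueTask<object?> InvokeAsync(
        EndpointFilterInvocationContext context, EndpointFilterDelegate next)
    {
        var key = context.HttpContext.Request.Headers["Idempotency-Key"].ToString();
        if (string.IsNullOrEmpty(key))
            return Results.BadRequest("Idempotency-Key header is required.");

        // Seen before? Replay the stored response instead of re-processing.
        var cached = await _cache.GetStringAsync($"idem:{key}");
        if (cached is not null)
            return Results.Text(cached, "application/json");

        var result = await next(context);

        // Simplification: store the handler's return value as JSON so retries
        // with the same key receive the same body.
        await _cache.SetStringAsync(
            $"idem:{key}",
            System.Text.Json.JsonSerializer.Serialize(result),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(24)
            });

        return result;
    }
}
```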
- Question: How does ASP.NET Core’s support for `ActivitySource` and `Activity` relate to distributed tracing and OpenTelemetry?
- Answer: `ActivitySource` and `Activity` are .NET’s built-in APIs for creating and managing activities, which represent a unit of work in a distributed trace. OpenTelemetry is a vendor-neutral standard for observability (tracing, metrics, logs). The .NET OpenTelemetry libraries act as a bridge, listening to `ActivitySource` events and exporting them to an OpenTelemetry-compatible backend (like Jaeger, Zipkin, or Honeycomb). This decouples your application’s instrumentation code from the specific observability platform you use.
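A minimal sketch of the bridge described above, assuming the `OpenTelemetry.Extensions.Hosting` package plus the ASP.NET Core instrumentation and OTLP exporter packages; the source name `"MyApp.Orders"` is illustrative.

```csharp
// A custom ActivitySource defined once and shared by the app.
public static class Telemetry
{
    public static readonly ActivitySource Source = new("MyApp.Orders");
}

// In Program.cs: wire the OpenTelemetry SDK to listen to that source.
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddSource("MyApp.Orders")      // listen to our custom source
        .AddAspNetCoreInstrumentation() // built-in server request spans
        .AddOtlpExporter());            // ship spans to an OTLP backend

// In application code: create a span for a unit of work.
using (var activity = Telemetry.Source.StartActivity("ProcessOrder"))
{
    activity?.SetTag("order.id", 42);
    // ... do the work; the span ends when the activity is disposed.
    // StartActivity returns null if no listener is attached, hence the "?.".
}
```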
Middleware & Pipeline
- Question: Explain the difference between `app.Use()` and `app.Run()`. Why would you choose one over the other?
- Answer: Both are used to add middleware to the pipeline.
- `app.Use(Func<HttpContext, Func<Task>, Task> middleware)`: Chains middleware components. It takes a delegate that receives the `HttpContext` and the `next` middleware in the pipeline. It can perform work before and after calling `await next()`. If it doesn’t call `next()`, it short-circuits the pipeline.
- `app.Run(RequestDelegate handler)`: A terminal middleware. It takes a delegate that only receives the `HttpContext`. It has no `next` parameter because it’s expected to terminate the request pipeline and generate a response. Anything registered after `app.Run()` will never be executed. You use `app.Run()` for handlers that should always handle the request, like a simple health check endpoint.
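A minimal sketch contrasting the two (the `X-Request-Id` header and "Healthy" body are illustrative choices):

```csharp
var app = WebApplication.CreateBuilder(args).Build();

app.Use(async (context, next) =>
{
    // Runs before the rest of the pipeline
    context.Response.Headers["X-Request-Id"] = Guid.NewGuid().ToString();
    await next();
    // Runs after downstream middleware has produced a response
});

app.Run(async context =>
{
    // Terminal middleware: nothing registered after this executes
    await context.Response.WriteAsync("Healthy");
});

app.Run(); // the parameterless overload starts the host
```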
- Question: How would you write a custom middleware that modifies the response body? What are the challenges involved?
- Answer: Modifying the response body is tricky because the original `Response.Body` stream is write-only and often not seekable. You must replace it with a `MemoryStream` to buffer the response, modify it, and then write the modified content back to the original stream.
```csharp
public class ResponseRewritingMiddleware
{
    private readonly RequestDelegate _next;

    public ResponseRewritingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var originalBodyStream = context.Response.Body;
        await using var responseBody = new MemoryStream();
        context.Response.Body = responseBody;

        try
        {
            await _next(context); // Let the rest of the pipeline run

            // Rewind the memory stream and read the buffered response
            responseBody.Seek(0, SeekOrigin.Begin);
            var responseText = await new StreamReader(responseBody).ReadToEndAsync();

            // Modify the text (simple example)
            var modifiedText = responseText.Replace("Hello", "Goodbye");

            // Write the modified content back to the original stream
            var modifiedBytes = System.Text.Encoding.UTF8.GetBytes(modifiedText);
            context.Response.ContentLength = modifiedBytes.Length;
            await originalBodyStream.WriteAsync(modifiedBytes);
        }
        finally
        {
            // Always restore the original stream for later middleware/disposal
            context.Response.Body = originalBodyStream;
        }
    }
}
```
- Challenges: Performance overhead due to buffering the entire response in memory, and potential issues with very large responses.
- Question: What is the purpose of `UseWhen()`? Provide a scenario where it’s more suitable than a simple `if` block inside a middleware’s `InvokeAsync` method.
- Answer: `UseWhen()` conditionally branches the middleware pipeline. It takes a predicate that evaluates the `HttpContext`. If the predicate is true, it configures a separate, branched pipeline for that request. This is more efficient than an `if` block inside a middleware because it avoids adding the middleware to the main pipeline for requests that don’t match the predicate.
- Scenario: You want to apply a set of authentication and authorization middleware only to requests whose path starts with `/api`.
```csharp
app.UseWhen(
    context => context.Request.Path.StartsWithSegments("/api"),
    apiBranch =>
    {
        apiBranch.UseAuthentication();
        apiBranch.UseAuthorization();
        // Other API-specific middleware
    }
);
```
- Question: How does factory-activated middleware via `IMiddlewareFactory` and `IMiddleware` (available since ASP.NET Core 2.0) differ from conventional middleware registration, and what problems does it solve?
- Answer: Conventional middleware can be registered as a type (`app.UseMiddleware<MyMiddleware>()`) or as a delegate. A conventional middleware instance is created once at application startup via `ActivatorUtilities`, so it cannot receive scoped dependencies through its constructor; they must instead be passed as parameters to `InvokeAsync`. With factory-activated middleware, the class implements `IMiddleware` and an `IMiddlewareFactory` (backed by the DI container by default) creates and releases an instance per request, enabling constructor injection of scoped services. Implementing a custom `IMiddlewareFactory` gives you full control over the creation and disposal of middleware instances, which helps in advanced scenarios:
- Pooling: You can implement middleware pooling to reuse instances and reduce GC pressure.
- Custom Lifetime: You can manage the middleware’s lifetime in a way that doesn’t align with standard DI scopes.
- Integration with other DI containers: It provides a hook to integrate with third-party DI containers more cleanly.
- Question: With the introduction of `IExceptionHandler` in .NET 8, how has global exception handling changed? Is `UseExceptionHandler` middleware now obsolete?
- Answer: `IExceptionHandler` provides a new, strongly-typed, and more testable way to handle exceptions globally. It’s a service you register in DI.
- How it works: You implement the `IExceptionHandler` interface, which has a single method: `ValueTask<bool> TryHandleAsync(...)`. Inside this method, you inspect the exception and can write a response. If you handle the exception, you return `true`.
- Changes:
- It’s cleaner than the `UseExceptionHandler` lambda, which can become cluttered.
- It can be registered in DI, so it can have its own dependencies injected.
- Multiple handlers can be registered, and they will be executed in order until one returns `true`.
- Is `UseExceptionHandler` obsolete? No. `IExceptionHandler` is now the recommended approach, but `UseExceptionHandler` is still fully supported; in fact, handlers registered via `AddExceptionHandler` are invoked by the `UseExceptionHandler` middleware. The new approach is superior for complex error handling logic and testability.
```csharp
// In Program.cs
builder.Services.AddExceptionHandler<GlobalExceptionHandler>();
builder.Services.AddProblemDetails(); // Recommended to use with exception handlers

// The handler implementation
public class GlobalExceptionHandler : IExceptionHandler
{
    public async ValueTask<bool> TryHandleAsync(
        HttpContext httpContext,
        Exception exception,
        CancellationToken cancellationToken)
    {
        // Log the exception, etc.
        httpContext.Response.StatusCode = StatusCodes.Status500InternalServerError;
        await httpContext.Response.WriteAsJsonAsync(
            new { Error = "An unexpected error occurred." }, cancellationToken);
        return true; // We handled it
    }
}
```
Dependency Injection (DI)
- Question: Explain the difference between `TryAddScoped`, `AddScoped`, and `Replace`. When would you use each?
- Answer:
- `AddScoped<TService, TImplementation>()`: Registers `TImplementation` for `TService` with a scoped lifetime. If a registration for `TService` already exists, this adds another one. When `TService` is resolved, you get the last one registered.
- `TryAddScoped<TService, TImplementation>()`: Only registers the service if no other registration for `TService` already exists. This is useful for library authors who want to provide a default implementation but allow the user to easily override it.
- `services.Replace(ServiceDescriptor.Scoped<TService, TNewImplementation>())`: Finds an existing registration for `TService` and replaces it with the new one. This is useful in testing or when you need to definitively override a framework’s default service.
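A quick sketch of the three behaviors side by side; the `IMyService` interface and implementation names are illustrative.

```csharp
// TryAddScoped and Replace live in this namespace:
using Microsoft.Extensions.DependencyInjection.Extensions;

builder.Services.AddScoped<IMyService, DefaultService>();

// No-op: a registration for IMyService already exists.
builder.Services.TryAddScoped<IMyService, FallbackService>();

// Adds a second registration. Resolving IMyService now yields AlternateService
// (the last one registered), while IEnumerable<IMyService> yields both.
builder.Services.AddScoped<IMyService, AlternateService>();

// Removes the first matching registration and substitutes the new descriptor,
// e.g. to swap in a test double.
builder.Services.Replace(ServiceDescriptor.Scoped<IMyService, TestDoubleService>());
```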
- Question: How does the “Captive Dependency” problem manifest in ASP.NET Core’s DI container? Provide an example.
- Answer: A captive dependency occurs when a service with a longer lifetime holds a reference to a service with a shorter lifetime. The most common example is injecting a `Scoped` service into a `Singleton` service. The `Scoped` service becomes “captive” in the singleton. For the entire lifetime of the singleton, it will hold onto the same instance of the scoped service that was created when the singleton was first instantiated. This can lead to unexpected behavior and memory leaks, as the scoped service is never disposed of until the application shuts down.
```csharp
// Scoped service
public class MyScopedService { /* ... */ }

// Singleton service that captures the scoped service
public class MySingletonService
{
    // This is a captive dependency!
    private readonly MyScopedService _scopedService;

    public MySingletonService(MyScopedService scopedService)
    {
        _scopedService = scopedService;
    }
}
```
- Solution: Instead of injecting `MyScopedService` directly, inject `IServiceProvider` or `IServiceScopeFactory` into the singleton. Then, within a method call, create a new scope, resolve `MyScopedService` from that scope, use it, and then dispose of the scope.
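The solution can be sketched like this (the `DoWork` method name is illustrative):

```csharp
// The singleton creates a short-lived scope per operation
// instead of capturing the scoped service for its whole lifetime.
public class MySingletonService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public MySingletonService(IServiceScopeFactory scopeFactory)
        => _scopeFactory = scopeFactory;

    public void DoWork()
    {
        using var scope = _scopeFactory.CreateScope();
        var scoped = scope.ServiceProvider.GetRequiredService<MyScopedService>();
        // use the scoped service here; it is disposed along with the scope
    }
}
```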
- Question: What is keyed service injection (introduced in .NET 8)? How does it solve problems that were difficult to handle before?
- Answer: Keyed service injection allows you to register multiple implementations of the same service interface, each associated with a unique key (a string or enum). When resolving the service, you specify the key to get the specific implementation you need.
- Problem Solved: Before .NET 8, if you needed multiple implementations of an interface (e.g., different payment gateway providers for `IPaymentGateway`), the common pattern was to register them all as an `IEnumerable<IPaymentGateway>` and then use some property on the implementation to find the right one. This was clumsy. Keyed services make this explicit and clean.
```csharp
// Registration in Program.cs
builder.Services.AddKeyedScoped<IPaymentGateway, StripeGateway>("stripe");
builder.Services.AddKeyedScoped<IPaymentGateway, PayPalGateway>("paypal");

// Consumption in a service
public class PaymentService
{
    private readonly IPaymentGateway _gateway;

    // Use the [FromKeyedServices] attribute to inject
    public PaymentService([FromKeyedServices("stripe")] IPaymentGateway gateway)
    {
        _gateway = gateway;
    }
}
```
- Question: Explain how you can resolve a service from the DI container manually without using constructor injection. What are the potential pitfalls of this approach?
- Answer: You can get an instance of `IServiceProvider` from the `HttpContext` via `HttpContext.RequestServices`. This is the Service Locator pattern, which is often viewed as an anti-pattern.
- Example:
```csharp
public void MyMethod(HttpContext context)
{
    var myService = context.RequestServices.GetRequiredService<IMyService>();
    myService.DoWork();
}
```
- Pitfalls:
- Hides Dependencies: The class’s dependencies are no longer clear from its constructor signature.
- Harder to Test: You need to mock `HttpContext` and `IServiceProvider` to unit test the class.
- Violates DI Principles: It tightly couples your code to the DI container’s infrastructure.
- It should only be used as a last resort in places where constructor injection is not possible (e.g., static methods, some legacy code integration).
- Question: What is the difference between `IServiceScope` and `IServiceProvider`?
- Answer:
- `IServiceProvider`: The core interface for resolving services from the container. It has a single method, `GetService(Type serviceType)`.
- `IServiceScope`: Represents a scope for resolved services. It has two main members:
- `ServiceProvider`: An `IServiceProvider` that is specific to this scope. When you resolve a scoped service from it, you get the same instance for the lifetime of that scope.
- `Dispose()`: When `Dispose()` is called on the scope, all `IDisposable` services that were resolved from that scope are also disposed. In ASP.NET Core, a new scope is created for each HTTP request.
Performance Optimization
- Question: What is `IAsyncEnumerable<T>` and how can it be used in an ASP.NET Core API to improve performance and reduce memory usage, especially with large datasets?
- Answer: `IAsyncEnumerable<T>` allows you to represent a stream of data that is retrieved asynchronously. In an API controller, you can return an `IAsyncEnumerable<T>`, and ASP.NET Core will stream the results to the client as they become available, rather than buffering the entire collection in memory first. This is incredibly powerful for large database queries or when generating large files.
```csharp
[HttpGet("stream-products")]
public async IAsyncEnumerable<Product> StreamProducts()
{
    // AsAsyncEnumerable() comes from EF Core
    await foreach (var product in _dbContext.Products.AsNoTracking().AsAsyncEnumerable())
    {
        // Some potential processing
        await Task.Delay(10); // Simulate work
        yield return product;
    }
}
```
- The client receives the first product almost immediately, and the server’s memory usage remains low and constant regardless of the total number of products.
- Question: Explain the concept of response caching vs. output caching in ASP.NET Core. How has output caching evolved in recent versions (.NET 7+)?
- Answer:
- Response Caching: A client-side/proxy caching mechanism. The server adds HTTP headers like `Cache-Control`, `Expires`, and `ETag` to the response, telling the browser or a CDN that it’s safe to cache the response for a certain period. The server still executes the controller action on subsequent requests to check if the cache is valid (e.g., via the `If-None-Match` header).
- Output Caching: A server-side caching mechanism. The server caches the entire response (including headers and body) in its own memory (or a distributed cache like Redis). On a subsequent request, the output caching middleware intercepts the request, finds the cached response, and sends it directly to the client, completely bypassing the execution of the controller action and the rest of the pipeline.
- Evolution: Output caching was a core feature in the old ASP.NET Framework but was absent in early .NET Core. It was reintroduced in .NET 7 as a modern, flexible middleware. It supports caching based on query strings, headers, and custom logic, as well as cache invalidation using tags.
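A sketch of the .NET 7+ output caching middleware, including tag-based invalidation; the policy name `"Products"`, the `"products"` tag, and the routes are illustrative.

```csharp
builder.Services.AddOutputCache(options =>
{
    options.AddPolicy("Products", policy => policy
        .Expire(TimeSpan.FromMinutes(5))
        .SetVaryByQuery("page")   // cache separately per ?page= value
        .Tag("products"));        // tag entries for group invalidation
});

var app = builder.Build();
app.UseOutputCache();

app.MapGet("/products", () => Results.Ok(/* ... */))
   .CacheOutput("Products");

// After a write, evict everything tagged "products":
app.MapPost("/products", async (IOutputCacheStore store, CancellationToken ct) =>
{
    // ... save the new product ...
    await store.EvictByTagAsync("products", ct);
    return Results.Created("/products/1", null);
});
```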
- Question: What is `SocketsHttpHandler` and how can you configure it to optimize `HttpClient` performance?
- Answer: `SocketsHttpHandler` is the default handler used by `HttpClient` in modern .NET. It’s a high-performance implementation written in managed code. You can optimize it by configuring its properties when creating an `HttpClient`.
- Key Optimizations:
- `PooledConnectionLifetime`: Controls how long a connection can be pooled and reused. This is critical for picking up DNS changes, as `HttpClient` will otherwise hold onto connections to old IP addresses indefinitely. A common value is 5-10 minutes.
- `PooledConnectionIdleTimeout`: How long an idle connection remains in the pool.
- `MaxConnectionsPerServer`: The maximum number of concurrent connections to a single server endpoint.
- Configuration with `IHttpClientFactory`:
```csharp
builder.Services.AddHttpClient("MyClient")
    .ConfigurePrimaryHttpMessageHandler(() => new SocketsHttpHandler
    {
        PooledConnectionLifetime = TimeSpan.FromMinutes(5),
        MaxConnectionsPerServer = 10
    });
```
- Question: Explain how using `struct`s instead of `class`es for DTOs can sometimes improve performance. What are the trade-offs?
- Answer: Using `struct`s can improve performance by reducing memory allocations and garbage collection pressure. Since `struct`s are value types, they are typically allocated on the stack (if they are local variables) or inline within their containing object, rather than on the heap. For a high-throughput API processing millions of small DTOs, this can lead to a significant performance win.
- Trade-offs:
- Copying: `struct`s are copied by value. Passing a large `struct` to a method can be more expensive than passing a reference to a class. You can mitigate this using `in` parameters or by making them `readonly struct`.
- Mutability: Mutable structs are notoriously problematic and should almost always be avoided.
- Boxing: If a `struct` needs to be treated as an object (e.g., put in a non-generic collection), it will be “boxed,” which involves a heap allocation and copy, negating the performance benefit.
- Complexity: They are generally more complex to reason about than classes. This is a micro-optimization that should only be applied after profiling reveals a bottleneck in DTO allocation.
- Question: What are some strategies for reducing memory allocations in a hot path of an ASP.NET Core application?
- Answer:
- Pooling: Use `ArrayPool<T>` for arrays and `ObjectPool<T>` for complex objects to reuse them instead of creating new ones.
- Use `Span<T>` and `Memory<T>`: These types provide a type-safe way to work with contiguous regions of memory without allocation (e.g., slicing a string without creating a new substring).
- Avoid LINQ in hot paths: Many LINQ methods allocate enumerators and closures. A `for` or `foreach` loop is often more performant.
- String Concatenation: Use `StringBuilder` or `string.Create` for complex string manipulation instead of repeated `+` operations.
- Value Types: Use `struct`s where appropriate (see previous question).
- Cache: Cache frequently accessed data that is expensive to compute or retrieve.
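The pooling strategy above can be sketched with `ArrayPool<T>`: rent a buffer from the shared pool instead of allocating a fresh array on each call (the `SumBytes` helper is an illustrative example, not from the source).

```csharp
using System.Buffers;

public static class HotPath
{
    public static int SumBytes(Stream stream)
    {
        // Rent may return an array larger than requested; use the read count.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(8192);
        try
        {
            int total = 0, read;
            while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                for (int i = 0; i < read; i++)
                    total += buffer[i];
            }
            return total;
        }
        finally
        {
            // Always return the buffer, even if an exception is thrown,
            // or the pool slowly drains and you allocate anyway.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```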
Security
- Question: Explain the difference between Authentication and Authorization. How are they configured in the ASP.NET Core pipeline?
- Answer:
- Authentication (“Who are you?”): The process of verifying a user’s identity. It involves validating credentials (like a username/password, JWT, or API key) and creating a `ClaimsPrincipal` that represents the authenticated user.
- Authorization (“What are you allowed to do?”): The process of determining if an authenticated user has permission to perform a specific action or access a resource.
- Configuration: They are configured as middleware, and the order is critical.
```csharp
// In Program.cs
var app = builder.Build();
// ...

// 1. Routing must come first so the endpoint is known
app.UseRouting();

// 2. Authentication must come before Authorization.
//    It establishes the user's identity (ClaimsPrincipal).
app.UseAuthentication();

// 3. Authorization uses the identity to make access decisions.
app.UseAuthorization();

app.MapControllers();
```
- Question: What is a JSON Web Token (JWT)? Describe its structure and the flow of JWT-based authentication in an ASP.NET Core API.
- Answer: A JWT is a compact, URL-safe means of representing claims to be transferred between two parties.
- Structure: It consists of three parts separated by dots (`.`):
- Header: Contains the token type (`JWT`) and the signing algorithm (e.g., `HS256`). Base64Url encoded.
- Payload: Contains the claims (e.g., `sub` (subject/user ID), `exp` (expiration time), `iss` (issuer), and custom claims). Base64Url encoded.
- Signature: A cryptographic signature of the encoded header and payload, signed with a secret key (for symmetric algorithms) or a private key (for asymmetric algorithms). This verifies the token’s integrity.
- Flow:
- User sends credentials to a login endpoint.
- Server validates credentials.
- Server creates a JWT containing user claims and signs it with a secret key.
- Server sends the JWT back to the client.
- Client stores the JWT (e.g., in `localStorage` or a secure cookie).
- For subsequent requests to protected APIs, the client includes the JWT in the `Authorization` header (`Authorization: Bearer <token>`).
- The `JwtBearer` authentication middleware on the server validates the token’s signature, expiration, and issuer. If valid, it deserializes the claims into a `ClaimsPrincipal` and attaches it to `HttpContext.User`.
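The server-side validation step of the flow above can be sketched as follows, assuming the `Microsoft.AspNetCore.Authentication.JwtBearer` package; the issuer, audience, and `Jwt:Key` configuration key are placeholders.

```csharp
using System.Text;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.IdentityModel.Tokens;

builder.Services
    .AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(options =>
    {
        options.TokenValidationParameters = new TokenValidationParameters
        {
            ValidateIssuer = true,
            ValidIssuer = "https://issuer.example.com",   // placeholder
            ValidateAudience = true,
            ValidAudience = "my-api",                     // placeholder
            ValidateLifetime = true,                      // reject expired tokens
            IssuerSigningKey = new SymmetricSecurityKey(
                Encoding.UTF8.GetBytes(builder.Configuration["Jwt:Key"]!))
        };
    });

builder.Services.AddAuthorization();
```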
- Question: How would you implement a custom policy-based authorization requirement? For example, a policy that requires a user to be a certain age.
- Answer: Policy-based authorization decouples authorization logic from roles. You create requirements and handlers.
- Step 1: Create the Requirement:
```csharp
public class MinimumAgeRequirement : IAuthorizationRequirement
{
    public int MinimumAge { get; }

    public MinimumAgeRequirement(int minimumAge) => MinimumAge = minimumAge;
}
```
- Step 2: Create the Handler:
```csharp
public class MinimumAgeHandler : AuthorizationHandler<MinimumAgeRequirement>
{
    protected override Task HandleRequirementAsync(
        AuthorizationHandlerContext context,
        MinimumAgeRequirement requirement)
    {
        var dateOfBirthClaim = context.User.FindFirst(c => c.Type == ClaimTypes.DateOfBirth);
        if (dateOfBirthClaim == null)
        {
            return Task.CompletedTask; // No claim, can't satisfy
        }

        var dateOfBirth = Convert.ToDateTime(dateOfBirthClaim.Value);
        int calculatedAge = DateTime.Today.Year - dateOfBirth.Year;
        if (dateOfBirth > DateTime.Today.AddYears(-calculatedAge))
        {
            calculatedAge--;
        }

        if (calculatedAge >= requirement.MinimumAge)
        {
            context.Succeed(requirement); // Requirement met
        }

        return Task.CompletedTask;
    }
}
```
- Step 3: Register everything:
```csharp
// In Program.cs
builder.Services.AddSingleton<IAuthorizationHandler, MinimumAgeHandler>();
builder.Services.AddAuthorization(options =>
{
    options.AddPolicy("Over18", policy =>
        policy.Requirements.Add(new MinimumAgeRequirement(18)));
});
```
- Step 4: Use the policy:
```csharp
[HttpGet]
[Authorize(Policy = "Over18")]
public IActionResult GetRestrictedContent() { /* ... */ }
```
- Question: What is Cross-Site Request Forgery (CSRF/XSRF) and how does ASP.NET Core’s antiforgery system work to prevent it?
- Answer: CSRF is an attack where a malicious website tricks a user’s browser into making an unintended request to another site where the user is authenticated (e.g., submitting a form to transfer money). The browser automatically includes authentication cookies with the request.
- ASP.NET Core’s Protection:
- When a view with a form is rendered, the server generates a unique, random antiforgery token.
- It sends one part of this token to the client as a cookie (e.g., `.AspNetCore.Antiforgery.xxxx`).
- It embeds the other part of the token in the HTML form as a hidden field (`__RequestVerificationToken`).
- When the user submits the form, both the cookie and the hidden field value are sent back to the server.
- The antiforgery middleware validates that the cookie and the form field token match. If they don’t, or if one is missing, it rejects the request. An attacker cannot forge the hidden field token, so the attack fails.
- Question: How do you protect sensitive configuration data like API keys and connection strings in an ASP.NET Core application?
- Answer: You should never store secrets in `appsettings.json` or check them into source control.
- Recommended Practices:
- Development: Use the “Secret Manager” tool (`dotnet user-secrets set`). It stores secrets in a JSON file in the user’s profile directory, outside the project folder.
- Production: Use a secure external store. The best options are:
- Azure Key Vault: The standard for Azure deployments. You use the `Azure.Extensions.AspNetCore.Configuration.Secrets` package to load secrets from Key Vault into the configuration system.
- AWS Secrets Manager / Parameter Store: The equivalent for AWS deployments.
- HashiCorp Vault: A popular cloud-agnostic solution.
- Environment Variables: A simple, platform-agnostic way to provide secrets, especially in containerized environments.
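The development-time workflow with the Secret Manager tool looks like this (run from the project directory; the connection string value is a placeholder):

```shell
# Adds a UserSecretsId element to the .csproj
dotnet user-secrets init

# Stores the secret outside the repository, in the user profile directory
dotnet user-secrets set "ConnectionStrings:Default" "Server=localhost;Database=app"

# Lists the secrets for this project
dotnet user-secrets list
```

The stored values are merged into `IConfiguration` at runtime in the Development environment, so code reads them exactly as it would read `appsettings.json` entries.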
Minimal APIs & Routing
- Question: What are route groups in Minimal APIs? How do they help in organizing endpoints?
- Answer: Route groups, introduced in .NET 7, allow you to group endpoints that share a common prefix and/or common configuration (like authorization policies, CORS settings, etc.). This significantly reduces code duplication.
```csharp
// Before route groups
app.MapGet("/todos/{id}", ...).RequireAuthorization();
app.MapPost("/todos", ...).RequireAuthorization();
app.MapPut("/todos/{id}", ...).RequireAuthorization();

// With route groups
var todosGroup = app.MapGroup("/todos").RequireAuthorization();
todosGroup.MapGet("/{id}", ...);
todosGroup.MapPost("/", ...);
todosGroup.MapPut("/{id}", ...);
```
- Question: How do you implement endpoint filters in Minimal APIs? Provide an example of a validation filter.
- Answer: Endpoint filters (similar to action filters in MVC) are functions that can run code before and after a route handler. They are great for cross-cutting concerns like validation, logging, or transforming requests/responses.
```csharp
// A simple validation filter for a Todo item
public class ValidationFilter<T> : IEndpointFilter where T : class
{
    private readonly IValidator<T> _validator; // Assuming FluentValidation

    public ValidationFilter(IValidator<T> validator) => _validator = validator;

    public async ValueTask<object?> InvokeAsync(
        EndpointFilterInvocationContext context, EndpointFilterDelegate next)
    {
        var argument = context.GetArgument<T>(0); // Get the first argument of type T
        var validationResult = await _validator.ValidateAsync(argument);
        if (!validationResult.IsValid)
        {
            return Results.ValidationProblem(validationResult.ToDictionary());
        }
        return await next(context);
    }
}

// Usage:
app.MapPost("/todos", (Todo todo) => { ... })
   .AddEndpointFilter<ValidationFilter<Todo>>();
```
- Question: Explain the difference between `Results.Ok()`, `Results.Json()`, and `Results.Content()`.
- Answer:
- `Results.Ok(object? data)`: Produces a `200 OK` response. If `data` is provided, it serializes it to JSON. It’s the most common success result.
- `Results.Json(object? data, ...)`: Also serializes data to JSON but gives you more control over the serialization process, allowing you to specify `JsonSerializerOptions`, `contentType`, and the `statusCode`.
- `Results.Content(string content, ...)`: Produces a response with a raw string body. You can specify the `Content-Type` and `Encoding`. This is useful for returning non-JSON content like XML, CSV, or plain text.
- Question: How does link generation work in Minimal APIs? How would you generate a URL for a “created” resource in a `POST` handler?
- Answer: Link generation allows you to create URLs pointing to other endpoints by name. You use `Results.CreatedAtRoute` or `Results.Created`. You must first name your “get” endpoint using `.WithName()`.
```csharp
app.MapGet("/products/{id}", (int id) => { /* get product */ })
   .WithName("GetProduct"); // Name the endpoint

app.MapPost("/products", (Product product) =>
{
    // ... save the product, assume it gets id = 123
    product.Id = 123;

    // Generates a 201 Created response with a "Location" header
    // pointing to "/products/123"
    return Results.CreatedAtRoute("GetProduct", new { id = product.Id }, product);
});
```
- Question: Can you mix Minimal APIs and MVC controllers in the same application? What are the considerations?
- Answer: Yes, you absolutely can. They are designed to coexist. The routing system handles both seamlessly.
- Considerations:
- Shared Services: Both can use the same services from the DI container.
- Middleware: The middleware pipeline applies to both.
- Use Cases: It’s a common pattern to use Minimal APIs for simple, performance-critical data endpoints (like CRUD operations) and use MVC controllers for more complex endpoints that require view rendering, complex model binding, or the structure that action filters provide.
- Consistency: You need to be mindful of maintaining consistent conventions for routing, error handling, and response formats between the two styles.
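A minimal sketch of both styles registered in the same `Program.cs` (the endpoint paths are illustrative):

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();       // MVC controller support

var app = builder.Build();

app.MapGet("/api/ping", () => "pong");   // Minimal API endpoint
app.MapControllers();                    // Attribute-routed MVC controllers

app.Run();
```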
Blazor & Frontend
- Question: Explain the key differences between the hosting models of Blazor Server and Blazor WebAssembly (Wasm), including their impact on performance, security, and scalability.
- Answer:
- Blazor Server:
- Architecture: The component logic runs on the server. UI updates, event handling, and JavaScript calls are passed over a real-time SignalR connection.
- Performance: Initial load is very fast as only a small JS file is downloaded. Latency can be an issue as every user interaction requires a server roundtrip.
- Security: The component code remains on the server, so it’s more secure. It can directly access server resources and secrets.
- Scalability: Each active user maintains a persistent connection and state on the server, consuming server memory and CPU. This can limit scalability. Requires sticky sessions if deployed behind a load balancer.
- Blazor WebAssembly:
- Architecture: The entire application (component logic, .NET runtime) is downloaded to the browser and runs on a WebAssembly-based .NET runtime.
- Performance: Initial load can be slow due to the large download size. After loading, UI interactions are instant as there are no server roundtrips.
- Security: The code runs in the browser’s sandbox. It cannot directly access server resources; it must call backend APIs just like any other SPA framework. Secrets cannot be stored in the client-side code.
- Scalability: Highly scalable, as the server’s only job is to serve the initial files and then act as a backend API.
- Question: What is the new Blazor United / “Auto” render mode in .NET 8? How does it aim to provide the best of both worlds?
- Answer: The “Auto” render mode is a new feature in .NET 8 that intelligently switches between Blazor Server and Blazor WebAssembly.
- How it works:
- When a user first visits a page using “Auto” mode, it initially renders using the Blazor Server model. This provides a very fast initial load time.
- In the background, the browser starts downloading the .NET runtime and the application’s WebAssembly bundle.
- Once the download is complete, the application “flips” from the Server connection to running entirely on the client via WebAssembly for all subsequent interactions.
- Benefit: It combines the fast initial page load of Blazor Server with the low-latency interactivity and offline capabilities of Blazor WebAssembly, providing a better user experience.
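In .NET 8 this behavior is opted into with the `InteractiveAuto` render mode. The sketch below shows the standard Blazor Web App registrations that enable it (the `App` root component name is the template default):

```csharp
// Program.cs: enable both Server and WebAssembly interactivity
builder.Services.AddRazorComponents()
    .AddInteractiveServerComponents()
    .AddInteractiveWebAssemblyComponents();

var app = builder.Build();

app.MapRazorComponents<App>()
    .AddInteractiveServerRenderMode()
    .AddInteractiveWebAssemblyRenderMode();
```

An individual component then declares `@rendermode InteractiveAuto` at the top of its `.razor` file to use the Auto mode.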
- Question: How do you manage state in a complex Blazor application? Compare component parameters, cascading values, and dedicated state container libraries.
- Answer:
- Component Parameters: For passing state down from a parent to a direct child. Simple and effective for local state.
- Cascading Values: For passing state down through a component hierarchy without having to pass it through every intermediate component. Useful for “ambient” state like the current theme or user information. Can become hard to track where the data comes from in deep hierarchies.
- State Container Libraries (e.g., Fluxor, Blazor-State): The most robust solution for managing complex, application-wide state. They implement patterns like Redux/Flux.
- Centralized Store: State is held in a single, immutable store.
- Actions/Reducers: Components dispatch “actions” to express an intent to change state. “Reducers” are pure functions that take the current state and an action and produce the new state.
- Benefits: Predictable state management, easy debugging (can log all actions), and decouples components from each other. This is the recommended approach for any non-trivial application.
- Question: What is prerendering in Blazor and what problem does it solve?
- Answer: Prerendering is the process of rendering a Blazor component on the server into static HTML during the initial HTTP request. This static HTML is sent to the browser immediately.
- Problem Solved: It dramatically improves perceived performance and is crucial for Search Engine Optimization (SEO). Without prerendering, a Blazor Wasm app would just serve an empty `<div>` and a loading message, which is bad for SEO and user experience. With prerendering, the user sees the fully rendered page content instantly, while the Blazor runtime and application bundle download in the background to make the page interactive.
- Question: How would you implement a custom authentication state provider for a Blazor Wasm application that uses a third-party identity provider?
- Answer: You need to create a class that inherits from `AuthenticationStateProvider` and override the `GetAuthenticationStateAsync` method.
- Steps:
- Create Custom Provider:
```csharp
public class CustomAuthStateProvider : AuthenticationStateProvider
{
    private readonly ILocalStorageService _localStorage; // Using a local storage wrapper
    private readonly HttpClient _httpClient;

    public CustomAuthStateProvider(ILocalStorageService localStorage, HttpClient httpClient)
    {
        _localStorage = localStorage;
        _httpClient = httpClient;
    }

    public override async Task<AuthenticationState> GetAuthenticationStateAsync()
    {
        var savedToken = await _localStorage.GetItemAsync<string>("authToken");
        if (string.IsNullOrWhiteSpace(savedToken))
        {
            return new AuthenticationState(new ClaimsPrincipal(new ClaimsIdentity())); // Anonymous user
        }

        // Attach token to HttpClient for future requests
        _httpClient.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("bearer", savedToken);

        // Parse claims from the token (don't trust the payload, just use it for UI)
        // In a real app, you might call a /userinfo endpoint to get fresh claims
        var claims = ParseClaimsFromJwt(savedToken);
        var identity = new ClaimsIdentity(claims, "jwt");
        var user = new ClaimsPrincipal(identity);
        return new AuthenticationState(user);
    }

    public void NotifyUserAuthentication(string token)
    {
        var authenticatedUser = new ClaimsPrincipal(new ClaimsIdentity(ParseClaimsFromJwt(token), "jwt"));
        var authState = Task.FromResult(new AuthenticationState(authenticatedUser));
        NotifyAuthenticationStateChanged(authState);
    }

    // ... other methods for login/logout
}
```
- Register it: In `Program.cs`, register your provider and the `AuthorizationCore` services.

```csharp
builder.Services.AddScoped<AuthenticationStateProvider, CustomAuthStateProvider>();
builder.Services.AddAuthorizationCore();
```
- Login Logic: After a successful login with the third-party provider, you would store the received JWT in local storage and call `NotifyUserAuthentication` on your provider to update the application's auth state.
Entity Framework Core
- Question: Explain the difference between `AsNoTracking()`, `AsNoTrackingWithIdentityResolution()`, and `AsTracking()`. When should each be used?
- Answer:
- `AsTracking()`: This is the default behavior. EF Core keeps track of the entities it queries. Any changes you make to these entities will be detected by `SaveChanges()` and persisted to the database. Use this for write operations (update, delete).
- `AsNoTracking()`: EF Core does not track the queried entities. They are plain objects. This is significantly faster and uses less memory because the overhead of change tracking is eliminated. This should be the default for all read-only queries.
- `AsNoTrackingWithIdentityResolution()`: A hybrid introduced in EF Core 5. It does not track changes, but it does track entity instances by their primary key. This ensures that if the same database row appears multiple times in a result set (e.g., in a query with joins), EF Core materializes only one object instance for it. This is useful for read-only queries that return complex object graphs, preventing data duplication while still being faster than full tracking.
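A short sketch of the usual split (assumes a `db` context with `Products` and `Blogs` sets):

```csharp
// Read-only query: no change-tracker overhead
var catalog = await db.Products.AsNoTracking().ToListAsync();

// Write path: default tracking so SaveChanges() detects the edit
var product = await db.Products.FirstAsync(p => p.Id == id);
product.Name = "Updated name";
await db.SaveChangesAsync();

// Read-only object graph: one instance per row identity, still untracked
var blogs = await db.Blogs
    .Include(b => b.Posts)
    .AsNoTrackingWithIdentityResolution()
    .ToListAsync();
```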
- Question: What is a “cartesian explosion” in an EF Core query, and how do you prevent it using split queries?
- Answer: A cartesian explosion happens when you use multiple `Include()` calls on one-to-many relationships in a single query. For example, if you query for `Blogs` and `Include` their `Posts` and `Include` their `Tags`, the generated SQL will join all three tables. If a blog has 10 posts and each post has 5 tags, you get 10 * 5 = 50 rows of data for that single blog, with the blog and post data duplicated in every row. This is highly inefficient.
- Prevention with Split Queries: Using `.AsSplitQuery()` tells EF Core to generate multiple SQL queries instead of one massive one. In the example above, it would generate three queries: one for `Blogs`, one for `Posts`, and one for `Tags`. EF Core then stitches the relationships together in memory. This avoids data duplication and is often much more performant for queries with multiple one-to-many includes.

```csharp
var blogs = await dbContext.Blogs
    .Include(b => b.Posts)
    .Include(b => b.Contributors)
    .AsSplitQuery() // The magic happens here
    .ToListAsync();
```
- Question: How do you implement optimistic concurrency control in EF Core?
- Answer: Optimistic concurrency assumes that conflicts are rare and doesn’t lock the data. Instead, it checks if the data has changed before saving.
- Implementation:
- Mark a property on your entity with the `[Timestamp]` attribute (for SQL Server, this maps to a `rowversion` column) or the `[ConcurrencyCheck]` attribute.

```csharp
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; }
}
```
- When you fetch a `Product` and then call `SaveChanges()` to update it, EF Core includes the `RowVersion` in the `WHERE` clause of the `UPDATE` statement (`WHERE Id = @p0 AND RowVersion = @p1`).
- If another user has modified the row in the meantime, their update will have changed the `RowVersion`. Your `UPDATE` statement will find no rows to update (it returns 0 rows affected).
- EF Core detects this and throws a `DbUpdateConcurrencyException`. You must `catch` this exception and decide how to resolve the conflict (e.g., notify the user, retry, or merge changes).
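Handling the conflict might look like this sketch of a "database wins" resolution (`db` is assumed to be your context):

```csharp
try
{
    await db.SaveChangesAsync();
}
catch (DbUpdateConcurrencyException ex)
{
    var entry = ex.Entries.Single();
    var databaseValues = await entry.GetDatabaseValuesAsync();

    if (databaseValues is null)
    {
        // The row was deleted by another user; decide how to surface that.
    }
    else
    {
        // "Database wins": refresh the original values (including RowVersion)
        // so that a retry of SaveChangesAsync() can succeed.
        entry.OriginalValues.SetValues(databaseValues);
    }
}
```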
- Question: What are owned entity types (complex types) and when are they useful?
- Answer: Owned entity types allow you to map types that do not have their own primary key but are conceptually part of a "parent" or "owner" entity. For example, an `Address` class (`Street`, `City`, `ZipCode`) can be an owned type of a `Customer` entity. In the database, the `Address` properties are mapped as columns in the `Customers` table.
- Usefulness:
- Domain Modeling: They allow you to create richer domain models by grouping related properties into their own objects, improving encapsulation and clarity.
- Value Objects: They are the primary way to implement the Value Object pattern from Domain-Driven Design.
- Code Reusability: The same owned type (like `Address`) can be reused by multiple owner entities.
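A minimal configuration sketch (the `Customer`/`Address` names mirror the example above):

```csharp
public class Customer
{
    public int Id { get; set; }
    public Address Address { get; set; }   // owned type, no key of its own
}

public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
    public string ZipCode { get; set; }
}

// In your DbContext:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Maps the Address properties as columns on the Customers table
    modelBuilder.Entity<Customer>().OwnsOne(c => c.Address);
}
```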
- Question: Explain the purpose of `DbConnectionInterceptor` and provide a use case.
- Answer: `DbConnectionInterceptor` is an EF Core interceptor that allows you to intercept low-level database operations on the `DbConnection`, such as opening the connection, beginning a transaction, or executing a command.
- Use Case: A common use case is to set a session-level context variable in the database for auditing or multi-tenancy. For example, on Azure SQL, you can use `SESSION_CONTEXT` to pass the current user's ID to the database, which can then be used by row-level security policies.

```csharp
public class SessionContextInterceptor : DbConnectionInterceptor
{
    // Runs after the connection has been opened, so commands can execute on it
    public override void ConnectionOpened(DbConnection connection, ConnectionEndEventData eventData)
    {
        // Set a session context variable on the database connection
        var command = connection.CreateCommand();
        command.CommandText = @"exec sp_set_session_context @key=N'UserId', @value=@userId";
        command.Parameters.Add(new SqlParameter("@userId", GetCurrentUserId())); // Logic to get user ID
        command.ExecuteNonQuery();
    }
}
```
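Registering the interceptor happens when configuring the context (a sketch; `AppDbContext` and the connection string are placeholders):

```csharp
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(connectionString)
           .AddInterceptors(new SessionContextInterceptor()));
```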
Concurrency & Asynchronous Programming
- Question: Why should you almost never use `.Result` or `.Wait()` on a `Task` in an ASP.NET Core application? What is the underlying problem?
- Answer: Using `.Result` or `.Wait()` can cause deadlocks in synchronization-context-aware environments. In classic ASP.NET, there was a one-thread-per-request `SynchronizationContext`. If you blocked a thread waiting for a `Task` to complete (`.Result`), and that `Task`'s continuation needed to run on the same captured context (thread), you would have a deadlock: the thread is blocked waiting for the task, and the task is waiting for the thread to become free.
- While ASP.NET Core does not have a `SynchronizationContext`, blocking the thread is still a very bad practice. It ties up a thread pool thread, preventing it from serving other requests, which harms scalability. This is known as "thread pool starvation." The correct approach is to use `await` all the way up the call stack.
- Question: What is the purpose of `ConfigureAwait(false)`? Is it still necessary in ASP.NET Core?
- Answer: `ConfigureAwait(false)` tells the `await` keyword that it does not need to resume the continuation on the original captured context. This was critical for avoiding the deadlocks described in the previous question in UI and classic ASP.NET applications.
- In ASP.NET Core: Since there is no `SynchronizationContext`, using `ConfigureAwait(false)` is not strictly necessary to prevent deadlocks. However, it is still considered a good practice in general-purpose library code, because you don't know whether your library will be consumed by a UI app or a classic ASP.NET app that does have a context. In your own ASP.NET Core application code (e.g., controllers, services), it provides a minor performance benefit by avoiding an unnecessary check for a context, but it's not critical for correctness. The general consensus is to use it in libraries and omit it in application-level code for clarity.
- Question: Explain the difference between `Task.Run()` and just making a method `async`. When would you use `Task.Run()` in an ASP.NET Core context?
- Answer:
- `async Task MyMethodAsync()`: Makes a method asynchronous. It does not necessarily run the code on a new thread. It allows the use of `await`, which frees up the current thread while waiting for an I/O-bound operation to complete. This is for I/O-bound work.
- `Task.Run(() => MyMethod())`: Explicitly queues the specified work to run on a thread pool thread. This is for offloading CPU-bound work.
- When to use `Task.Run()` in ASP.NET Core: Almost never. An ASP.NET Core request is already running on a thread pool thread. Offloading work with `Task.Run()` just shuffles the work from one thread pool thread to another, adding unnecessary overhead. The only valid (and rare) use case is calling a long-running, synchronous, CPU-bound legacy library that you cannot make asynchronous; in that case, `Task.Run()` can prevent the blocking call from starving the request-handling thread.
- Question: What is a `ValueTask<T>` and when should it be used instead of `Task<T>`?
- Answer: `ValueTask<T>` is a `struct` that wraps either a `T` result (for synchronous completion) or a `Task<T>` (for asynchronous completion). Its purpose is to avoid allocating a `Task` object on the heap in cases where an async method is likely to complete synchronously.
- When to use it: Use `ValueTask<T>` when you have an async method that you expect will frequently complete synchronously from a cache or a simple check. A good example is reading from a buffered stream; the data might already be in the buffer, so no async I/O is needed.
- Caveats: Because it's a `struct`, it has limitations. You should not `await` a `ValueTask<T>` more than once, and you should not block on it with `.Result`. If you need to do those things, call `.AsTask()` on it first. In general, stick to `Task<T>` unless profiling shows that `Task` allocations are a performance bottleneck in a specific hot path.
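The cache scenario can be sketched like this (the `_cache` field and `LoadFromDbAsync` method are hypothetical):

```csharp
private readonly ConcurrentDictionary<int, string> _cache = new();

public ValueTask<string> GetNameAsync(int id)
{
    // Hot path: completes synchronously, no Task allocation
    if (_cache.TryGetValue(id, out var name))
        return new ValueTask<string>(name);

    // Cold path: falls back to a real async operation
    return new ValueTask<string>(LoadAndCacheAsync(id));
}

private async Task<string> LoadAndCacheAsync(int id)
{
    var name = await LoadFromDbAsync(id); // hypothetical data-access call
    _cache[id] = name;
    return name;
}
```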
- Question: How would you use a `SemaphoreSlim` to limit the degree of concurrency for a specific resource-intensive operation within an ASP.NET Core application?
- Answer: `SemaphoreSlim` is a lightweight semaphore that limits the number of threads that can access a resource or a block of code concurrently. This is useful for throttling calls to an external API with a rate limit or controlling access to a limited resource like a memory-intensive library.

```csharp
public class RateLimitedService
{
    // Allow only 5 concurrent calls to the expensive operation
    private static readonly SemaphoreSlim _semaphore = new SemaphoreSlim(5, 5);

    public async Task<string> DoExpensiveWorkAsync()
    {
        await _semaphore.WaitAsync(); // Wait for an open slot
        try
        {
            // Access the resource-intensive code
            // e.g., call a third-party API
            await Task.Delay(1000); // Simulate work
            return "Work done";
        }
        finally
        {
            _semaphore.Release(); // Release the slot
        }
    }
}
```
Internals & Hosting
- Question: Describe the ASP.NET Core hosting model. What is the relationship between Kestrel, IIS, and the in-process vs. out-of-process hosting models?
- Answer:
- Kestrel: A cross-platform, high-performance, in-process web server for ASP.NET Core. It’s the default server and is responsible for handling HTTP requests.
- Reverse Proxy (IIS, Nginx, Apache): In a production environment, you typically run a mature web server like IIS or Nginx in front of Kestrel. The reverse proxy receives requests from the internet and forwards them to Kestrel. It provides features Kestrel doesn’t, like request filtering, load balancing, SSL termination, and port sharing.
- Hosting Models (with IIS):
- In-Process (Default & Recommended): The ASP.NET Core app runs in the same process as the IIS worker process (`w3wp.exe`). The ASP.NET Core Module for IIS (ANCM) is a native IIS module that loads the .NET runtime and your app into the worker process. This offers the best performance as requests are not proxied over a network loopback.
- Out-of-Process: The ASP.NET Core app runs in a separate process (e.g., `dotnet.exe`), and Kestrel is used. The ASP.NET Core Module acts as a reverse proxy, forwarding requests from IIS to the Kestrel process. This model is more flexible if you need to run multiple apps in the same app pool but is slightly less performant.
- Question: What is `IStartupFilter` and how does it differ from middleware?
- Answer: `IStartupFilter` allows you to add middleware to the beginning or end of the entire application's middleware pipeline from within a library or another part of your code. It's a way to "wrap" the entire pipeline configured by the user in `Program.cs`.
- How it works: It has a single method, `Configure(Action<IApplicationBuilder> next)`. You return a new `Action<IApplicationBuilder>` that can add middleware before or after calling the original `next` action.
- Difference: Regular middleware is added sequentially in `Program.cs`. An `IStartupFilter` can force a piece of middleware to be the absolute first or last thing to run, regardless of where `app.UseMiddleware<T>()` was called. This is useful for diagnostics or monitoring tools that need to time the entire request pipeline.
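A sketch of a filter that times every request by wrapping the whole pipeline (the class name and response header are illustrative):

```csharp
public class RequestTimingStartupFilter : IStartupFilter
{
    public Action<IApplicationBuilder> Configure(Action<IApplicationBuilder> next)
    {
        return app =>
        {
            // This middleware runs before anything the user adds in Program.cs
            app.Use(async (context, nextMiddleware) =>
            {
                var sw = Stopwatch.StartNew();
                // Set the header just before the response starts, while headers are still writable
                context.Response.OnStarting(() =>
                {
                    context.Response.Headers["X-Elapsed-Ms"] = sw.ElapsedMilliseconds.ToString();
                    return Task.CompletedTask;
                });
                await nextMiddleware();
            });

            next(app); // then the rest of the user-configured pipeline
        };
    }
}

// Registration:
builder.Services.AddTransient<IStartupFilter, RequestTimingStartupFilter>();
```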
- Question: How does ASP.NET Core handle configuration from multiple sources (e.g., `appsettings.json`, environment variables, user secrets)? Explain the override system.
- Answer: ASP.NET Core has a layered configuration system. You register multiple configuration providers, and they are layered on top of each other. Each subsequent provider overrides the values from the previous ones. The default order is:
- `appsettings.json`
- `appsettings.{Environment}.json` (e.g., `appsettings.Development.json`)
- User Secrets (in Development)
- Environment Variables
- Command-line arguments
- This means a value set as an environment variable will override a value in `appsettings.json`. This is a powerful system that allows you to have default settings in code and override them for different environments without changing the code itself.
- Question: What is Native AOT (Ahead-of-Time compilation) in .NET 8, and what are its implications for ASP.NET Core applications? What are the trade-offs?
- Answer: Native AOT compiles a .NET application directly into native machine code at build time, rather than using a JIT (Just-In-Time) compiler at runtime.
- Implications/Benefits for ASP.NET Core:
- Startup Time: Drastically faster startup time because there’s no JIT compilation to do.
- Memory Usage: Significantly lower memory footprint.
- Smaller Size: The final published output is a single, self-contained executable with no .NET runtime dependency, making it ideal for containers and serverless functions.
- Trade-offs/Limitations:
- Reflection: AOT has limited support for reflection. A lot of dynamic code generation, which some libraries rely on, is not supported. The ASP.NET Core framework has been made mostly AOT-compatible, but many third-party libraries are not.
- Longer Build Times: The build process is much slower.
- Platform Specific: You must compile specifically for your target OS and architecture (e.g., `linux-x64`).
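An AOT-friendly Minimal API typically uses the slim builder and source-generated JSON to avoid runtime reflection. A sketch (the `Todo` record and JSON context are illustrative):

```csharp
var builder = WebApplication.CreateSlimBuilder(args);

// Source-generated serialization: no reflection at runtime
builder.Services.ConfigureHttpJsonOptions(options =>
    options.SerializerOptions.TypeInfoResolverChain.Insert(0, AppJsonContext.Default));

var app = builder.Build();
app.MapGet("/todos", () => new[] { new Todo(1, "walk dog") });
app.Run();

public record Todo(int Id, string Title);

[JsonSerializable(typeof(Todo[]))]
internal partial class AppJsonContext : JsonSerializerContext { }
```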
- Question: Explain the role of `WebApplicationFactory<TEntryPoint>` in integration testing.
- Answer: `WebApplicationFactory<TEntryPoint>` is the cornerstone of integration testing in ASP.NET Core. It's a factory provided by the `Microsoft.AspNetCore.Mvc.Testing` package that bootstraps your application in memory for testing purposes.
- Role & Benefits:
- It starts up your web application's host, including the DI container, configuration, and middleware pipeline, without actually listening on a real network port.
- It allows you to create an `HttpClient` that sends requests directly to the in-memory server, bypassing the network stack for fast and reliable tests.
- You can use its `WithWebHostBuilder` method to override or mock services in the DI container for a specific test run. For example, you can replace the real database context with an in-memory database or mock an external API client.

```csharp
// Example Test
public class ApiTests : IClassFixture<WebApplicationFactory<Program>> // 'Program' is the entry point
{
    private readonly WebApplicationFactory<Program> _factory;

    public ApiTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task Get_EndpointsReturnSuccess()
    {
        // Arrange
        var client = _factory.CreateClient();

        // Act
        var response = await client.GetAsync("/health");

        // Assert
        response.EnsureSuccessStatusCode(); // Status Code 200-299
        Assert.Equal("text/plain", response.Content.Headers.ContentType.ToString());
    }
}
```
- Question: What is the `IProblemDetailsService` and how does it help standardize error responses in APIs?
- Answer: RFC 7807 ("Problem Details for HTTP APIs") defines a standard JSON format for returning error details from an API. `IProblemDetailsService` is a service in ASP.NET Core (.NET 7+) that helps generate these standard error responses. When you call `app.UseExceptionHandler()` or `app.UseStatusCodePages()`, this service is invoked to create a consistent `ProblemDetails` object for the response body. This provides clients with a predictable error structure, including fields like `type`, `title`, `status`, and `detail`, making error handling on the client side much more robust. You can customize its behavior to add your own extensions, like a `traceId`.
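Wiring it up with a custom `traceId` extension might look like this sketch:

```csharp
builder.Services.AddProblemDetails(options =>
    options.CustomizeProblemDetails = context =>
        context.ProblemDetails.Extensions["traceId"] = context.HttpContext.TraceIdentifier);

var app = builder.Build();

app.UseExceptionHandler();  // unhandled exceptions -> ProblemDetails response body
app.UseStatusCodePages();   // bare 4xx/5xx status codes -> ProblemDetails response body
```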