Containerising an ASP.NET Core Web API Application with Docker

Introduction

Containerisation has become a cornerstone of modern software development, enabling developers to package and deploy applications consistently and safely.

In this article, we’ll explore how to containerise an ASP.NET Core Web API application using Docker. Specifically, we’ll create a Dockerfile that builds a container image for the application and maps a volume to the host system, allowing us to persist data between container runs.

Before We Begin

Before diving into the specifics of containerisation, ensure you have the following prerequisites:

  1. Docker Engine: Install the Docker Engine on your system. This allows you to manage and run Docker containers.
  2. ASP.NET Core Web API Application: Create an ASP.NET Core Web API project using your preferred IDE or command-line tools.
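If you don't already have a project to hand, one can be scaffolded from the command line. MyWebApi is a placeholder name used here for illustration; substitute your own project name:

```shell
# Scaffold a minimal ASP.NET Core Web API project (requires the .NET 6 SDK).
# "MyWebApi" is a placeholder -- use your own project name.
dotnet new webapi -n MyWebApi
cd MyWebApi

# Run it locally first to confirm the API works before containerising it.
dotnet run
```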

Creating the Dockerfile

The Dockerfile is a text file that contains instructions for building a Docker image. Here’s the Dockerfile for our ASP.NET Core Web API application:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS builder

WORKDIR /app

COPY . .

RUN dotnet restore
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0

WORKDIR /app

ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000

COPY --from=builder /app/publish .

# Replace MyWebApi.dll with the name of your project's output assembly
ENTRYPOINT ["dotnet", "MyWebApi.dll"]

This Dockerfile defines two stages:

  1. Builder Stage: It starts with the official .NET SDK image, which includes the tooling needed to restore, build, and publish ASP.NET Core applications. The dotnet publish command produces a self-contained set of deployable files in /app/publish.
  2. Runner Stage: It uses the official ASP.NET Core runtime image, which is much smaller than the SDK image because it contains only the runtime and libraries. It copies the published output from the builder stage and sets the entry point to the application's assembly. The EXPOSE instruction documents that the application listens on port 5000; the actual port mapping to the host is done with -p when the container is run.
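Because the builder stage copies the entire build context with COPY . ., it is also worth placing a .dockerignore file next to the Dockerfile so that local build output and other noise are not sent to the Docker daemon. A typical minimal version might look like this (adjust to taste):

```
bin/
obj/
.git/
.vs/
Dockerfile
*.md
```

This keeps the build context small and avoids stale locally built binaries leaking into the image.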

Mapping a Volume

To persist data between container runs, we'll declare a volume mount point in the image and map it to the host system at run time. A volume is storage managed outside the container's writable layer, so its contents survive when the container is stopped, removed, and recreated.

Here's the Dockerfile with a VOLUME instruction added:

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS builder

WORKDIR /app

COPY . .

RUN dotnet restore
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0

WORKDIR /app

ENV ASPNETCORE_URLS=http://+:5000
EXPOSE 5000

VOLUME ["/app/data"]

COPY --from=builder /app/publish .

# Replace MyWebApi.dll with the name of your project's output assembly
ENTRYPOINT ["dotnet", "MyWebApi.dll"]

In this example, the VOLUME instruction declares /app/data as a mount point. Note that VOLUME alone does not bind the directory to a specific host path; if no mapping is supplied at run time, Docker creates an anonymous volume for it. To make the data accessible at a known location on the host, we pass a bind mount with the -v flag when running the container, as we'll see shortly. Files the application writes to /app/data then persist on the host after the container stops.
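To see the persistence behaviour in isolation, you can write a file through a bind mount and confirm it survives on the host after the container exits. The alpine image and the paths here are purely illustrative:

```shell
# Write a file into a bind-mounted directory from inside a short-lived container.
docker run --rm -v "$(pwd)/data:/app/data" alpine \
    sh -c 'echo hello > /app/data/test.txt'

# The container is gone (--rm removed it), but the file remains on the host.
cat ./data/test.txt
```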

Building and Running the Container

Once you have the Dockerfile in place, you can build the Docker image using the following command:

docker build -t my-aspnet-webapi .

This command builds an image named my-aspnet-webapi from the context of the current directory, which includes the Dockerfile and the ASP.NET Core Web API application files.

To run the container, use the following command:

docker run -d -p 5000:5000 -v /path/to/host/data:/app/data my-aspnet-webapi

This command starts a container in detached mode (-d), maps the host port 5000 to port 5000 inside the container, and maps the /path/to/host/data directory on the host system to the /app/data directory inside the container.

Now, your ASP.NET Core Web API application is running in a containerised environment. You can access the application at http://localhost:5000 in your web browser.
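You can also verify the API from the command line. The default .NET 6 Web API template exposes a /weatherforecast endpoint; adjust the path to match your own controllers:

```shell
# Confirm the container is running, then hit the default template endpoint.
docker ps --filter ancestor=my-aspnet-webapi
curl http://localhost:5000/weatherforecast
```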

Conclusion

Containerising an ASP.NET Core Web API application with Docker offers several benefits, including portability, consistency, and resource isolation.

By creating a Dockerfile and mapping a volume to the host system, you can ensure your application data persists between container runs and easily deploy it to different environments.

Stephen

Hi, my name is Stephen Finchett. I have been a software engineer for over 30 years and worked on complex, business critical, multi-user systems for all of my career. For the last 15 years, I have been concentrating on web based solutions using the Microsoft Stack including ASP.Net, C#, TypeScript, SQL Server and running everything at scale within Kubernetes.