In my opinion the autogenerated Dockerfile for a .NET Core web application is larger than it needs to be, but why? Why did Microsoft decide to structure it like this?
This is the Dockerfile that is autogenerated when we check "Add Docker support" during app creation:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster AS build
WORKDIR /src
COPY ["app/app.csproj", "app/"]
RUN dotnet restore "app/app.csproj"
COPY . .
WORKDIR "/src/app"
RUN dotnet build "app.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "app.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "app.dll"]
In my opinion it could look like this instead:
FROM mcr.microsoft.com/dotnet/core/sdk:3.0-buster as build
WORKDIR /src
COPY ["app/app.csproj", "app/"]
RUN dotnet restore "app/app.csproj"
COPY . .
WORKDIR "/src/app"
RUN dotnet build "app.csproj" -c Release -o /app/build
RUN dotnet publish "app.csproj" -c Release -o /app/publish
FROM mcr.microsoft.com/dotnet/core/aspnet:3.0-buster-slim
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "app.dll"]
Why did Microsoft decide to first pull aspnet:3.0-buster-slim just to expose ports, and only use it later as the final stage? It would be much shorter to pull that image as the last step, as in my example. Also, do we need two FROM lines for sdk:3.0-buster (the first named build, the second publish)? It's possible to add multiple RUN instructions one after another, as in my example.
Maybe there are technical reasons why they decided to do it that way? Thanks!
A Dockerfile is a series of steps used by the docker build . command. At a minimum, three steps are required:
FROM some-base-image
COPY some-code-or-local-content
CMD the-entrypoint-command
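A minimal concrete version of those three steps might look like this (a hypothetical Node.js app; the image tag and file names are illustrative):

```dockerfile
# Base image providing the runtime
FROM node:alpine
WORKDIR /app
# Copy local source into the image
COPY . .
# Command to run when the container starts
CMD ["node", "index.js"]
```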
As our application becomes more and more complex, additional steps are added, such as restoring packages and dependencies. Commands like the ones below are used for that:
RUN dotnet restore
-or-
RUN npm install
Or the like. As the application grows more complex, both the image build time and the image size increase.
Docker build steps generate multiple intermediate images and cache them. Notice the output below:
$ docker build .
Sending build context to Docker daemon 310.7MB
Step 1/9 : FROM node:alpine
---> 4c6406de22fd
Step 2/9 : WORKDIR /app
---> Using cache
---> a6d9fba502f3
Step 3/9 : COPY ./package.json ./
---> dc39d95064cf
Step 4/9 : RUN npm install
---> Running in 7ccc864c268c
Notice how step 2 says Using cache: Docker realized that everything up to step 2 is the same as in the previous build, so it is safe to reuse the layer cached by the previous build.
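This is also why the package manifest is copied before the rest of the source: editing application code then invalidates only the later COPY layer, while the npm install layer stays cached. A sketch of that ordering (file names assumed):

```dockerfile
FROM node:alpine
WORKDIR /app
# Copy only the manifest first, so this layer changes only when dependencies change
COPY package.json ./
# Cached on rebuilds as long as package.json is unchanged
RUN npm install
# Source edits invalidate the cache only from this point onward
COPY . .
CMD ["npm", "start"]
```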
One of the focuses of this template is building efficient images. Efficiency can be achieved in two ways:
1. Faster build times
2. Smaller final images
For #1, cached images from previous builds are leveraged. Structuring the Dockerfile so later steps rely more and more on earlier, unchanged ones makes the build process faster. Relying on the cache is only possible if the Dockerfile is written efficiently.
By separating the build and publish stages, the docker build . command is better able to reuse cached layers from previous steps in the Dockerfile.
For #2, avoid installing packages that are not required. For example, the template's final stage is based on the slim aspnet runtime image rather than the much larger SDK image used for building.
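A side benefit of naming the stages (base, build, publish) is that an individual stage can be built on its own with docker build's --target flag. For example (image tags are illustrative):

```shell
# Build and tag only the compile stage, stopping before publish/final
docker build --target build -t app:build .

# Full build; the final image is based on the slim aspnet runtime, not the SDK
docker build -t app:latest .
```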
Refer to the Docker documentation for more details.