Lab Guide

Containers

Containers are a way to package software (e.g. a web server, proxy, or batch process worker) so that you can run your code and all of its dependencies in a resource-isolated process. You might be thinking, “Wait, isn’t that a virtual machine (VM)?” Not quite: VMs virtualize the hardware, while containers virtualize the operating system. Containers provide isolation, portability, and repeatability, so developers can easily spin up an environment and start building without the heavy lifting. More importantly, containers ensure your code runs the same way everywhere: if it works on your laptop, it will also work in production.
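For example, assuming Docker is installed on both your laptop and a server, the same image produces the same runtime environment on either host:

    # Run a throwaway Ubuntu container and print the OS it sees; the result is
    # the same on any machine with a Docker engine, regardless of the host OS
    $ docker run --rm ubuntu:latest cat /etc/os-release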

Dockerfiles

Review the draft Dockerfile and add the missing instructions indicated by comments in the file:

One of the previous catsndogs.lol developers started working on a Dockerfile in her free time, but she left to farm yaks in Tibet before finishing it.

  1. In the Cloud9 file tree, navigate to workshop-1/app/monolith-service, and double-click on Dockerfile.draft to open the file for editing.

Note: If you would prefer to use the bash shell and a text editor like vi or emacs instead, you’re welcome to do so.
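For example, one way to open the draft from the terminal (assuming your shell’s working directory contains the workshop-1 folder; adjust the path to wherever the workshop files live in your workspace):

    $ cd workshop-1/app/monolith-service
    $ vi Dockerfile.draft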

Review the contents, and you’ll see a few comments at the end of the file noting what still needs to be done. Comments are denoted by a “#”.

Docker builds container images by stepping through the instructions listed in the Dockerfile. Docker is built on the idea of layers: starting from a base image, each instruction that introduces a change is executed as a new layer. Docker caches each layer, so as you develop and rebuild the image, it will reuse layers (often referred to as intermediate layers) from cache if no modifications were made. Once it reaches the layer where edits are introduced, it will build a new intermediate layer and associate it with this particular build. This makes tasks like rebuilding an image very efficient, and you can easily maintain multiple build versions.

For example, in the draft file, the first line - FROM ubuntu:latest - specifies a base image as a starting point. The next instruction - RUN apt-get -y update - creates a new layer where Docker updates the package lists from the Ubuntu repositories. This continues until you reach the last instruction, which in most cases is an ENTRYPOINT (hint, hint) or an executable being run.
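Once you have built an image (you will do that in the next step), you can see the layer that each instruction produced with docker history; the image name is whatever you pass to the -t flag of docker build:

    # Show the layers of a built image, newest first, along with the
    # instruction that created each one
    $ docker history monolith-service:latest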

Add the remaining instructions to Dockerfile.draft.
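If you get stuck, the sketch below shows the general shape of a finished Dockerfile for a small Python service. It is only an illustration: the OS packages, the COPY paths, the requirements.txt file, and the exposed port are assumptions, so treat the comments in Dockerfile.draft and the actual contents of the monolith-service directory as the source of truth.

    FROM ubuntu:latest
    RUN apt-get -y update
    # Install a Python runtime and pip (exact package names may differ)
    RUN apt-get -y install python python-pip
    # Copy the application source into the image (destination path is illustrative)
    COPY . /app
    WORKDIR /app
    # Install the app's Python dependencies, assuming a requirements.txt exists
    RUN pip install -r requirements.txt
    # Expose whatever port the service actually listens on (80 is an assumption)
    EXPOSE 80
    ENTRYPOINT ["python"]
    CMD ["hello-world.py"]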

  2. Build the image using the docker build command.

    This command needs to be run in the same directory as your Dockerfile. Note the trailing period, which tells the build command to look in the current directory for the Dockerfile.

    $ docker build -t monolith-service .
    

    You’ll see a bunch of output as Docker builds all the layers of the image. If there is a problem along the way, the build will fail and stop (red text and warnings are fine, as long as the build itself does not fail). Otherwise, you’ll see a success message at the end of the build output like this:

    Step 9/10 : ENTRYPOINT ["python"]
     ---> Running in 7abf5edefb36
    Removing intermediate container 7abf5edefb36
     ---> 653ccee71620
    Step 10/10 : CMD ["hello-world.py"]
     ---> Running in 291edf3d5a6f
    Removing intermediate container 291edf3d5a6f
     ---> a8d2aabc6a7b
    Successfully built a8d2aabc6a7b
    Successfully tagged monolith-service:latest
    

    Note: Your output will not be exactly like this, but it will be similar.
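    You can also confirm that the image now exists locally by listing your images; you should see monolith-service with the latest tag:

    $ docker images monolith-service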

    Awesome, your Dockerfile built successfully, but our previous developer didn’t optimize the Dockerfile for the microservices effort to come. Since you’ll be breaking the monolith codebase apart into microservices, you will be editing the source code (e.g. hello-world.py) often and rebuilding this image a few times. Looking at your existing Dockerfile, what is one thing you can do to improve build times? One common approach is sketched below.
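    One common cache-friendly pattern, shown below under the same illustrative assumptions as before (requirements.txt, paths, and port are placeholders), is to order the instructions so the layers that change least often come first: install the dependencies before copying the application source, so editing hello-world.py only invalidates the layers from the final COPY onward instead of forcing the package installation to run again on every build.

    # Layers that rarely change (base image, OS packages, Python dependencies) come first
    FROM ubuntu:latest
    RUN apt-get -y update
    RUN apt-get -y install python python-pip
    # Copy only the dependency manifest first so the pip install layer stays cached
    COPY requirements.txt /app/requirements.txt
    WORKDIR /app
    RUN pip install -r requirements.txt
    # Copy the frequently edited source code last; only the layers from here down rebuild
    COPY . /app
    EXPOSE 80
    ENTRYPOINT ["python"]
    CMD ["hello-world.py"]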