Use AWS Fargate to deploy your Expressjs app (3/3)

CodePipeline

Objective: create a CI/CD environment for a node.js + express application

AWS CodePipeline is a CI/CD service that lets you visualize and automate all the steps required to release an application.

To understand how this service works, take a look at the following diagram:

Pipeline architecture example

In general, a pipeline can be divided into 3 stages:

  1. Source: the repository where the code is stored; a run of the pipeline is triggered when a change is detected here.
  2. Build: the build stage prepares all the configuration needed before staging or deploying the code.
  3. Staging: the final stage deploys the application to the target location.

Between each stage, an artifact is generated that is used as input for the next stage. To store these artifacts temporarily, Amazon creates S3 buckets and passes the artifacts between stages through them.

Creating roles

As a prerequisite before creating the pipeline, we will need to create 2 IAM roles: one to execute the build portion of our pipeline, and one for a Lambda function that will be used to do some cleanup.

The first role to create will be the one used during the build phase.

On the Amazon console, go to the Security section and click on IAM. On the left tab go to Roles and click on the create button.

Select AWS Service as the type of trusted entity and select CodeBuild as the service that will use the role.

Role creation template

In the next screen, we won't select any policy yet. We will let CodeBuild create a policy and then modify it to include all the permissions required.

In the last screen, type the role name and a description of what the role will be used for.

The second role to create will be the one used for the Lambda function. Follow the same steps, except that when selecting the service that will use the role, select Lambda.
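
If you prefer the command line, here's a minimal sketch of the same role creation using the AWS CLI; the role name and file name are placeholders, and the console flow above achieves the same result:

# Trust policy that lets CodeBuild assume the role
cat > codebuild-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "Service": "codebuild.amazonaws.com" },
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name my-codebuild-role \
    --assume-role-policy-document file://codebuild-trust.json

# The Lambda role is identical except for the trusted service:
#   "Principal": { "Service": "lambda.amazonaws.com" }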

Create Pipeline

The pipeline for our project will have 4 stages: source, build, staging and cleanup. 

To start creating the pipeline, in the Amazon console, go to the Developer Tools section, click on CodePipeline and then click on create pipeline.

In the first step, type a name for the pipeline.

CodePipeline creation step 1

Configure Source

In the second step, select the source provider for the code. For this tutorial, we will be using CodeCommit; however, GitHub is also supported as a provider.

After selecting CodeCommit as the source provider, type the repository and branch name that will be used. Amazon automatically generates CloudWatch Events rules that will trigger the pipeline when a change is detected; however, you can change the detection options to have CodePipeline check for changes periodically instead.

CodePipeline creation step 2
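
If you want to double-check that the repository and branch exist before wiring them into the pipeline, a quick sanity check with the AWS CLI (the names are placeholders):

aws codecommit get-repository --repository-name my-app-repo
aws codecommit get-branch --repository-name my-app-repo --branch-name master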

Configure Build

In the third step, select the build provider for building the containers. For this tutorial, we will be using CodeBuild; however, Jenkins is also supported as a provider.

Create a new build project by selecting the radio button on the screen, and type a name and description for the build project.

CodePipeline creation step 3 - section 1

In the environment section, select the option to use an image managed by AWS CodeBuild. Choose Ubuntu as the OS, Docker as the runtime, and 17.09 as the version. Also, leave buildspec.yml as the build specification. Note: if your yml file is named differently, still select this option and continue with the tutorial, but remember to go to the CodeBuild section in AWS afterwards and update the specification file name there.

CodePipeline creation step 3 - section 2

For our application, the yml file will contain the following instructions:

version: 0.2
 
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=610373893044.dkr.ecr.us-east-1.amazonaws.com/owi-trainer
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...   
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - echo Writing image definitions file...
      - printf '[{"name":"node","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - echo Finish post build tasks
artifacts:
  files:
    - imagedefinitions.json

This file tells CodeBuild to run 3 phases:

  1. In the pre_build, it will establish a connection with the ECR service and set some variables.
  2. In the build, it will build the docker image from the uploaded code and create a new tag to keep a history of images.
  3. In the post_build, it will push both image tags to the repository and write the name of the container that will handle the created image into the imagedefinitions.json file. Note: the imagedefinitions.json file is an empty file in the code source.

The REPOSITORY_URI variable must be changed to the correct repository URI, and in line 24 (the printf command), change the word "node" to the actual name of the container.
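
If you want to verify these steps before the pipeline runs them, the same commands can be executed locally; a sketch, assuming Docker and the AWS CLI (v1) are installed and the placeholders are replaced with your own values:

# Log in to ECR the same way the pre_build phase does (AWS CLI v1 syntax)
$(aws ecr get-login --region us-east-1 --no-include-email)

# Build, tag and push, mirroring the build and post_build phases
REPOSITORY_URI=<your-account-id>.dkr.ecr.us-east-1.amazonaws.com/<your-repo>
docker build -t $REPOSITORY_URI:latest .
docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:test
docker push $REPOSITORY_URI:latest
docker push $REPOSITORY_URI:test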

Continuing with the build project, the Cache and VPC sections don't need to be configured for our use case.

As for the service role, search for the role created earlier and select it. Finally, click on Save Build Project. This will automatically create the build project and assign it to the current pipeline.

Configure Deploy

In the fourth step, select Amazon ECS as the deployment provider for deploying the containers.

Then, in the section that appears below, select the cluster and the service that will be used to run the tasks. Also, type imagedefinitions.json as the image filename; it's the file configured in the buildspec to indicate the image to use.

CodePipeline creation step 4

Finish Setup

In the fifth step, select the role for CodePipeline if you have one created; if not, click on create role.

Finally, review all the changes done and click on create.

Next, we will update the policies for the CodeBuild role.

On the Amazon console, go to the Security section and click on IAM. On the left tab go to Roles and search for the role used in CodeBuild. 

In the details screen for the role, search for the policy created by CodeBuild under the permissions tab, and click on edit policy.

To edit the policy, you can use the visual editor or modify the JSON directly; either way, include the following actions (a sketch of the resulting statement follows the list):

  • ecr:GetDownloadUrlForLayer
  • ecr:BatchGetImage
  • ecr:CompleteLayerUpload
  • ecr:DescribeImages
  • ecr:GetAuthorizationToken
  • ecr:UploadLayerPart
  • ecr:BatchDeleteImage
  • ecr:InitiateLayerUpload
  • ecr:BatchCheckLayerAvailability
  • ecr:GetRepositoryPolicy
  • ecr:PutImage
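
This is roughly the statement you would end up with if you edit the JSON directly; a sketch, with the resource left open (scope it down to your repository if you prefer):

{
  "Effect": "Allow",
  "Action": [
    "ecr:GetAuthorizationToken",
    "ecr:BatchCheckLayerAvailability",
    "ecr:GetDownloadUrlForLayer",
    "ecr:BatchGetImage",
    "ecr:GetRepositoryPolicy",
    "ecr:DescribeImages",
    "ecr:InitiateLayerUpload",
    "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload",
    "ecr:PutImage",
    "ecr:BatchDeleteImage"
  ],
  "Resource": "*"
}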

Configure Cleanup

We'll now add the final phase of the CodePipeline.

In the Amazon console, go to the Compute section and click on Lambda. Then click on the create function button in the top right corner.

In the wizard displayed, select the Author from scratch template, type a function name, select Node.js 8.10 as the runtime environment, and select the existing role created earlier to execute the Lambda function.

Lambda creation template

In the editor that will appear, the content of the index.js file should be the following:

// Note: lodash is not available in the Lambda runtime by default;
// bundle it with the function's deployment package (node_modules).
const _ = require('lodash');
const AWS = require('aws-sdk');

// Region and ECR repository to clean up; change these to match your setup.
let config = {
  region: 'us-east-1',
  repo: 'owi-trainer'
}

exports.handler = (event, context) => {
    const ecr = new AWS.ECR({region: config.region})
    const pipeline = new AWS.CodePipeline();
    
    const jobId = event['CodePipeline.job']['id'];
    var putJobSuccess = function(message) {
        var params = {
            jobId: jobId
        };
        pipeline.putJobSuccessResult(params, function(err, data) {
            if(err) {
                context.fail(err);      
            } else {
                context.succeed(message);      
            }
        });
    };
    
    var putJobFailure = function(message) {
        var params = {
            jobId: jobId,
            failureDetails: {
                message: JSON.stringify(message),
                type: 'JobFailed',
                externalExecutionId: context.invokeid
            }
        };
        pipeline.putJobFailureResult(params, function(err, data) {
            if(err) context.fail(err.stack);
            else context.fail(message);
        });
    };
    
    // List all tagged images in the repository.
    ecr.describeImages({ repositoryName: config.repo, filter: { tagStatus: 'TAGGED'} }, function(err, data){
        console.log("Started");
        if (err){
          putJobFailure(err.stack);
        } else{
            // Sort by push date (newest first); keep only the 3 most recent images.
            var images = _.orderBy(data.imageDetails, ['imagePushedAt'], ['desc']);
            if(images.length > 3){
                var imagesToDelete = _.map(_.slice(images, 3), function(element){
                    return {
                        imageDigest: element.imageDigest
                    };
                });
                ecr.batchDeleteImage({ repositoryName: config.repo, imageIds: imagesToDelete }, function(err, data){
                    if(err){
                        putJobFailure(err.stack);
                    } else{
                        putJobSuccess("Removed " + imagesToDelete.length + " image(s)");
                    }
                });
            }else{
                putJobSuccess("Nothing to delete");
            }
        }
    });
};

This function lists all the images in a repository and orders them by the date they were pushed. If there are more than 3 images, it deletes the older ones; otherwise, it does nothing.
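
Once the role has the ECR permissions described below, you can smoke-test the function manually; a sketch using AWS CLI v1 syntax, where the function name is a placeholder (the fake job id will make the final putJobSuccessResult call fail, but the ECR logic still runs):

aws lambda invoke \
    --function-name ecr-cleanup \
    --payload '{"CodePipeline.job":{"id":"00000000-0000-0000-0000-000000000000"}}' \
    out.json
cat out.json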

Click on save, and now let's update the Lambda policy associated with the role.

On the Amazon console, go to the Security section and click on IAM. On the left tab go to Roles and click on the role.

In the details screen, search for the policy created by the Lambda function under the permissions tab, and click on edit policy.

Same as before, update the role using the visual editor or the JSON directly and include the following actions (a sketch follows the list):

  • ecr:DescribeImages
  • ecr:DescribeRepositories
  • ecr:BatchDeleteImage
  • ecr:ListImages
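
Roughly the equivalent statement in the JSON editor (again a sketch; scope the resource down if you prefer):

{
  "Effect": "Allow",
  "Action": [
    "ecr:DescribeImages",
    "ecr:DescribeRepositories",
    "ecr:ListImages",
    "ecr:BatchDeleteImage"
  ],
  "Resource": "*"
}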

Lastly, let's add the lambda function to the CodePipeline.

In the Amazon console, go to the Developer Tools section and click on CodePipeline. Search for the pipeline that was created and click on it to go to the details screen.

Click on edit and add a new stage at the end. Type a name for it (in our case, "CleanUp") and then add an action using the button right below the stage name.

In the right pane that pops up, select Invoke as the Action Category, type an action name, and select Lambda as the provider.

Then, in the AWS Lambda section that appears, select the function you created and click on Add Action.

Add Lambda function to pipeline

Summary

After finishing all 3 parts of this tutorial, you should have completed the configuration to use Docker containers and AWS CodePipeline to deploy a node.js + express application.

Use AWS Fargate to deploy your Expressjs app (2/3)

Fargate

Objective: configure the infrastructure used to host Docker containers in AWS

AWS Fargate is a technology that allows you to run applications without provisioning or managing the compute infrastructure. In other words, Fargate executes the instances without user involvement. As of now, the only orchestration tool supported is Amazon ECS, but support for Kubernetes is expected in the near future.

To understand how the infrastructure works, take a look at the following diagram:

Fargate infrastructure

Amazon uses 4 objects to run Docker instances, starting from the center of the diagram:

  1. Container: this object is the host that will "contain" the docker image. 
  2. Task definition: the task definition defines from which docker image the instances will be created. Also, we can define here port mappings, volumes to access or environment variables that will be used by the container.
  3. Service: the service is the master that controls the tasks that will be run by Fargate. From here we can set the load balancer, the number of tasks that will be executed, and the auto scaling rules if needed.
  4. Cluster: the final piece is the cluster, this is just a collection of services. It also manages the VPC used for the services.

The final piece, which does not appear in the diagram, is the ECR or Elastic Container Registry. This will be the repository used to store the images created by Docker, and our first step in configuring the infrastructure.

Create Elastic Container Repository 

On the Amazon console, go to the compute section and click on Elastic Container Service. On the left tab go to Amazon ECR > Repositories and click on Create Repository.

In the wizard displayed, type the name of the repository.

ECR repository creation

After clicking "Next Step", the repository will be created and you will receive a success indicator.

ECR repository creation success

Copy the repository URI created by Amazon, since we will be using it later when configuring the CodePipeline.
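
The CLI equivalent is a single call; a sketch, using the repository name from this tutorial:

aws ecr create-repository --repository-name owi-trainer
# The repositoryUri field in the output is the value to copy.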

Create Cluster

On the Amazon console, go to the Compute section and click on Elastic Container Service. On the left tab go to Amazon ECS > Clusters and click on Create Cluster. Amazon provides a wizard to create the whole infrastructure in just a few steps; however, at the time of writing this tutorial, the options provided were limited and additional configuration was needed, which required creating the objects individually.

In the wizard displayed, select the Networking only template (Powered by AWS Fargate) and click the next step button.

Cluster template

Type the name of the cluster and mark the checkbox to create the VPC. This VPC will be used by the load balancer.

Cluster creation details

Click on create and you will receive a success message indicating that the cluster has been created.
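
For reference, the bare cluster can also be created from the CLI; a sketch (the name is a placeholder), noting that unlike the console wizard this call does not create the VPC:

aws ecs create-cluster --cluster-name my-fargate-cluster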

Create Load Balancer

On the Amazon console, go to the Compute section and click on EC2. On the left tab go to Load Balancing > Load Balancers and click on Create Load Balancer.

In the wizard displayed, select the Application Load Balancer template and click on create.

Load balancer template

Type the name of the load balancer and make sure it's marked as internet-facing, using ipv4 addresses. In the Listeners section you can include the HTTPS protocol; by default it's not added. Note: if HTTPS is added, a certificate will be required later; this is not covered in this tutorial.

Lastly, select the VPC created by the cluster in the dropdown at the bottom of the screen, and at least two subnets from it.

Load balancer step 1

We will skip step 2 since it's meant to configure HTTPS details. On step 3, we will select the security groups used by the load balancer. Selecting the default one plus the security group created by the cluster is enough.

Load balancer step 3

On step 4, we will create a new target group. Indicate a name and the health check path; the rest of the parameters can be left at their defaults. Note: the health check path must be a valid path of the service being accessed, and it shouldn't have any authentication associated. As a recommendation, create a route called "health" that simply returns a success message. If no health check is configured, or the path is not valid, Fargate will keep deregistering the tasks marked as unhealthy after a period of time.

Load balancer step 4

On step 5, we should register targets, but we don't have any yet, so we can leave this as it is and continue to the last step. 

On step 6, review that all the information displayed is correct; if not, make the corrections and come back to this step to finish.

Click on create and you will receive a success message indicating that the load balancer has been created.
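
The CLI equivalent for both the load balancer and the target group looks roughly like this; a sketch, where every ID is a placeholder (note --target-type ip, which Fargate tasks require):

aws elbv2 create-load-balancer --name my-alb --type application \
    --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-cccc3333

aws elbv2 create-target-group --name my-targets --protocol HTTP --port 80 \
    --vpc-id vpc-dddd4444 --target-type ip --health-check-path /health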

Create Task Definition and Container

On the Amazon console, go to the compute section and click on Elastic Container Service. On the left tab go to Amazon ECS > Task Definitions and click on the create button.

In the wizard displayed, select the Fargate template and click on next step.

Task definition template

In the next screen, under the first section, type the definition name and select the ecsTaskExecutionRole from the Task Role dropdown.

Task definition details

Under the Task size section, select the amount of memory and CPU used by the tasks. Then, in the container definitions subsection, click on add container.

Task definition details

In the container popup, type the name of the container, a base image, memory limits and port mappings.

Container details

Review all the configuration done and click on create. You will receive a success message indicating that the task definition has been created.
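
The same task definition can be registered from the CLI; a sketch, where the family, sizes, account id and image URI are placeholders:

aws ecs register-task-definition \
    --family my-task \
    --requires-compatibilities FARGATE \
    --network-mode awsvpc \
    --cpu 256 --memory 512 \
    --execution-role-arn arn:aws:iam::<account-id>:role/ecsTaskExecutionRole \
    --container-definitions '[{"name":"node","image":"<repository-uri>:latest","portMappings":[{"containerPort":80,"protocol":"tcp"}]}]'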

Create Service

Our last step in Fargate is to create the service that will run the tasks. 

On the Amazon console, go to the compute section and click on Elastic Container Service. On the left tab go to Amazon ECS > Clusters and click on the cluster created before. In the details screen displayed, under the Services tab, click on Create.

On the screen displayed, select Fargate as the Launch Type, plus the task definition and cluster created before. Type a service name and enter 1 in the number of tasks input. Note: the number of tasks will be the number of instances run by Fargate.

Service creation step 1

After clicking on the next step button, you will configure the most critical parts of the service. Let's go section by section. 

In the VPC and security groups section, select the cluster VPC and all the subnets available in the VPC, and mark auto-assign public IP as enabled.

Service creation step 2 - section 1

In the Load Balancing section, select Application Load Balancer and search for the load balancer created before in the dropdown displayed. The section above it (Health check grace period) will be enabled, and you can type a grace period for the health check. Note: when using a load balancer, the health check grace period determines how long Fargate ignores failing health checks after a task starts before terminating it and instantiating a new one.

Service creation step 2 - section 2

In the Container to load balance section, click on the add to load balancer button. Select the port listener for port 80 and the target group configured when creating the load balancer.

Service creation step 2 - section 3

In the Service Discovery section, enable the service discovery integration, select to create a new private namespace and provide a name for it. 

Mark to create a new service discovery service and provide a name for it, and leave the Task Health Propagation checked.

Service creation step 2 - section 4

For the last section of this screen, type the TTL value for DNS resolvers.

Service creation step 2 - section 5

After all these configurations are done, click on Next Step. We will skip step 3 since it configures auto scaling, which won't be necessary for our application. If you foresee needing this feature, set the auto scaling rules here.

Finally, review all the configuration done and click on create. You will receive a success message indicating that the service has been created.
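
For completeness, the CLI version of the same service creation; a sketch, with every ID, ARN and name a placeholder:

aws ecs create-service \
    --cluster my-fargate-cluster \
    --service-name my-service \
    --task-definition my-task \
    --desired-count 1 \
    --launch-type FARGATE \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-cccc3333],assignPublicIp=ENABLED}' \
    --load-balancers 'targetGroupArn=<target-group-arn>,containerName=node,containerPort=80' \
    --health-check-grace-period-seconds 60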

Continue with the third part of this tutorial here

Use AWS Fargate to deploy your Expressjs app (1/3)

Steps

The following guide helps you configure CodePipeline services in AWS to run Docker containers for an ExpressJS application. The whole process can be divided into 3 phases:

  1. Docker

    1. Create Docker account

    2. Install Docker CE

    3. Download Docker Image

    4. Write Docker File

    5. Build and Run Docker Container

    6. Cleanup

  2. Fargate

    1. Create Elastic Container Repository

    2. Create Cluster

    3. Create Load Balancer

    4. Create Task Definition and Container

    5. Create Service

  3. CodePipeline

    1. Creating roles

    2. Create pipeline

      1. Configure source

      2. Configure build

      3. Configure deploy

      4. Finish setup

      5. Configure cleanup

Docker

Objective: write a Dockerfile which will be used to create each container in CodePipeline

Create Docker account

To create a docker account, navigate to https://hub.docker.com/ and fill in the requested information.

Docker sign in form page

Install Docker CE

After the account has been created, download Docker CE for your platform from this link: https://www.docker.com/get-docker. This tutorial does not depend on the platform used for development.

When the download has finished, the installation will take a couple of minutes and will prompt you to restart the computer.

Download Docker Image

After installing Docker, use any command prompt to execute Docker commands. The first one we will run verifies that the installation completed successfully: "docker version". If everything went well, it should show output similar to this:

Docker version

Then, to run Docker locally with our application, we will need to download an image using the following command: "docker pull [docker-image]", replacing the docker-image tag with the image version that is currently in LTS. At the time of writing this tutorial, the current image version is "node:8.11.1-alpine". The alpine keyword identifies one flavor among the multiple variants of node for a particular version; others are slim, carbon and wheezy. They differ in the tools installed by default or in the Linux distribution used to create the image. The alpine variant is based on Alpine Linux, which is much smaller than the others.
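
For example, pulling the image used in this tutorial:

docker pull node:8.11.1-alpine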

After running the command, the prompt will be updated with 3 bars that indicate the progress of the download and when finished, it will look like this:

Docker pull image

Write Docker file

The Dockerfile is used by Docker to build a container image. When a build is executed, Docker reads the image used as base (the one we downloaded before), sets up environment variables like the port in which the service will run, copies all the files used by the application, and runs commands to prepare the application.

For our project, the contents of the Dockerfile are the following:

# Base Image
FROM node:8.11.1-alpine
 
# Environment variables
ENV PORT=80
ENV NODE_ENV=PROD
 
# Files to copy
COPY /helpers /src/helpers
COPY /repData /src/repData
COPY /routes /src/routes
COPY app.js /src/
COPY package-lock.json /src/
COPY package.json /src/
 
# Commands
RUN cd /src; npm install
CMD [ "node", "/src/app.js" ]

Build and Run Docker Container

Now that everything has been configured, we can build the docker image and run it locally. For that, we will use two commands:

  • "docker build -t [image-name] ." (Notice the dot at the end)

    • Replace the tag image-name with a meaningful name; this will be the name used to create the image of the dockerized application.

Docker build image

  • "docker run -p 80:80 --name [container-name] [image-name]"

    • Replace the tag image-name with the name typed before.

    • Replace the tag [container-name] with a name that will be used to reference the docker container once it's running.

Docker run image

After the container is running, you can access the service by opening localhost in any browser.
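
Putting the whole local loop together; a sketch, where the image and container names are placeholders:

docker build -t my-express-app .
docker run -d -p 80:80 --name my-express-container my-express-app
curl -i http://localhost/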

Cleanup

After we have successfully created the Dockerfile, built a container, and tested that the application runs without problems, we need to clean up the local environment. For that, we will use the following commands (a consolidated sketch follows the list):

  1. "docker ps -a": this command will list all the containers. Look at the ContainerId column and search for the container that you want to remove.

  2. "docker stop [container-id]": this command will stop the container, replace the tag [container-id] with the container id found in the previous command. Note: docker does not need the full identifier. If the 3 first characters are unique in the containers, that's enough for it to recognize which contianer to stop.

  3. "docker rm [container-id]": this command will remove the container, same as before, replace the tag [container-id].

  4. "docker images": this command will list all the images existing locally. There should be at least 2 images: the base image and the one we created. Look at the ImageId column and search for the image that you want to remove.

  5. "docker rmi [image-id]": this command will remove the image we indicate, replace the tag [image-id] with the image id found in the previous command. Note: same concept as container-id applies for the image-id tag

Continue with the second part of this tutorial here