Google Authentication using AWS-Amplify (+ deployment)

Authentication using AWS is a process I covered in a previous post; this time, however, we are going to use a tool provided by Amazon called Amplify.

For this tutorial, we are going to create a simple application using Facebook’s create-react-app. Then, we will add the authentication layer using AWS-amplify and, finally, add hosting in an S3 bucket. But before that, let’s cover some basic concepts.

What is AWS-Amplify?

According to its own site, AWS-amplify is a library for frontend and mobile developers building cloud-based applications, and it provides them with the tools needed to add multiple cloud features. For this tutorial, we will focus on two main features, storage and authentication, but Amplify provides many more, such as:

  • Analytics

  • API integration

  • Push notifications

  • Cache

  • Among others

What is create-react-app and how to install it?

Create-react-app is the best tool to use whenever you want to start building a web application with React, and for someone like me who likes TypeScript, it now has the built-in capability to create apps using it.

Installing it on your machine is like installing any global package from npm. Just type “npm install -g create-react-app“ and voilà!

There are some prerequisites, though; for example, you must have at least Node version 6. This library also allows you to focus on creating your application instead of dealing with webpack or Babel configuration.

Now, let’s start with the real deal and work on our Google-authenticated application.

Create the app

For this tutorial, I will use the following versions:

  • node: 10.15.0

  • npm: 6.4.1

  • create-react-app: 2.1.3

  • aws-amplify: 1.1.19

To create the app, run the following command in your preferred terminal: “create-react-app google-auth-tuto --typescript“. This will generate all the code you need to start working.

Running the app

To start using the application, run “npm install” in your terminal to verify that you have all the necessary packages installed. Then, look at the generated package.json file: some scripts have been created by default, and this time we will use the “start” script. Simply run “npm start”, and a tab will open in your browser after the application finishes compiling your code.
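
If you are curious, the scripts section of the generated package.json should look roughly like this (exact entries may vary by create-react-app version):

"scripts": {
  "start": "react-scripts start",
  "build": "react-scripts build",
  "test": "react-scripts test",
  "eject": "react-scripts eject"
}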

npm start

Now that our application is running, we can start using AWS-amplify to add some cloud features. But first, we will need to configure amplify on your machine. For that, you can follow the video below, which explains how to do it (taken from the aws-amplify main site).

Configuring amplify in the project

Now that amplify is configured on your machine, we can add it to our application by running “amplify init” in your project root folder. It will prompt you with several questions, and after answering them it will start creating some resources in your account. Here is an example of what you will see in your terminal.

amplify init configuration

At the end, if this is the first time you are running aws-amplify, it will create a new profile instead of using an existing one. In this example, I’ve used my profile named ljapp-amplify, so this section might be different for you.

Important: always create different profiles for your AWS accounts. In my case, I have to use multiple accounts for my company’s clients, so separate profiles make my work much easier.
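
For reference, named profiles live in the ~/.aws/credentials file on your machine and look something like this (the keys below are placeholders):

[default]
aws_access_key_id = <ACCESS-KEY-1>
aws_secret_access_key = <SECRET-KEY-1>

[ljapp-amplify]
aws_access_key_id = <ACCESS-KEY-2>
aws_secret_access_key = <SECRET-KEY-2>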

After the AWS resources have been created, let’s add the authentication layer to our app. AWS-amplify has different categories of resources, and authentication is one of them, so let’s add it by running “amplify auth add“. As before, amplify will ask you some configuration questions; here is a summary of what you will see.

amplify auth add

The only piece of information you might be wondering how to get is the Google Web Client ID. For that, please follow the instructions found here, under the “Create a client ID and client secret” section.

Finally, run “amplify push” and this will start creating all the authentication resources in your account.

amplify push

Important: AWS-amplify uses identity pools for 3rd-party integration instead of user pools. Since identity pools don’t manage groups, we can only authenticate users. So, if we need to provide specific permissions or roles, we need to use claims (or switch to user pools manually) and configure them manually in the AWS console.

Modifying React code

Up to this point, we have set up all the foundations in the AWS account via amplify, but we still need to add logic to our React application. For that, we will install two npm packages:

  • npm install aws-amplify

  • npm install aws-amplify-react

Then, we will modify our App.tsx file with the following code.

import React, { Component } from 'react';
import Amplify from 'aws-amplify';
import { withAuthenticator } from 'aws-amplify-react';

import logo from './logo.svg';
import aws_exports from './aws-exports';
import './App.css';

Amplify.configure(aws_exports);

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <p>
            Edit <code>src/App.tsx</code> and save to reload.
          </p>
          <a
            className="App-link"
            href="https://reactjs.org"
            target="_blank"
            rel="noopener noreferrer"
          >
            Learn React
          </a>
        </header>
      </div>
    );
  }
}

const federated = {
  google_client_id: 'SOME_NUMBER_HERE.apps.googleusercontent.com',
};

export default withAuthenticator(App, true, [], federated);

The second parameter of the “withAuthenticator” higher-order component will create a header for our application with some minimal information, like the name of the logged-in user, and also renders the sign-out button.

Important: By default, aws-amplify provides some default screens that can be customized, but it also allows us to create our own components for login, registration, and so on. This will not be covered in today’s tutorial; we will be using the default screens.

As of today, the package aws-amplify-react doesn’t ship with TypeScript definitions, so we need to add a file that declares it as a module (named aws-amplify-react.d.ts) to avoid TypeScript errors during development. The contents of the file are:

declare module 'aws-amplify-react';

Now that everything is set, we can run our application again, and we will see the following screen.

Amplify login screen

Then, we can log in using the Google button, and after verifying our account, we will get into the application.

User logged into the application

Hosting the application

Now that everything is set up, we can host our application in the cloud with amplify. For that, we will add the hosting feature by running the following command: “amplify hosting add“. As before, some configuration is required.

amplify hosting add

Shortly after, it will ask you to run “amplify publish”; this will create the S3 bucket if it doesn’t exist and immediately open a browser tab with the application hosted on that bucket.

Summary

Now that all sections have been covered, I hope the application has been a success and that you have created a React application that uses Google authentication and is easily hosted in S3 buckets on AWS. In an upcoming tutorial, I will talk about using Cognito User Pools for 3rd-party authentication.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

How to provide temporary access to S3 Buckets?

There are times when we store assets like images or videos in S3 buckets to be displayed on our websites. But what happens when we want to secure those assets so that only authenticated users can see them?

Well, there are many ways to provide security. One of the most common is using the "Referer" header, but this can be spoofed, so we lose the security we wanted. Another is using CloudFront and creating signed URLs, but that requires a lot of development work. The last option is to use API Gateway to return binary data. After analyzing all these options, I determined that none of them provided the security we needed or satisfied all of our use cases. Finally, I came up with another solution using a little bit of each of the approaches mentioned before.

In order to secure the S3 folder, we are going to use signed URLs to provide temporary access to the bucket where the assets are hosted. To create the signed URLs, we use two Lambda functions: the first runs under an IAM role that can create the signed URLs, and the second is an authorizer for the first that verifies whether the user making the request has the proper credentials. Here is a diagram of how the security flow for the S3 bucket works:

S3 Security Architecture

The first step to accomplish this is to remove the bucket's public policy; we want the bucket to be as closed as possible.

The second step is to create the Lambda function that will generate the signed URLs. For that, we create a function called resolver with the code provided below:

const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
    AWS.config.update({
        region: "us-east-2"
    });

    // Disable the signature cache so every request gets a fresh signature.
    const s3 = new AWS.S3({signatureVersion: 'v4', signatureCache: false});
    // The object key is passed by the client as a query string parameter.
    var key = event["queryStringParameters"]["key"];
    s3.getSignedUrl('getObject', {
        Bucket: "owi-trainer-assets",
        Key: key,
        Expires: 7200 // the URL is valid for two hours
    }, function(error, data){
        if(error) {
            callback(error);
        }else{
            // Redirect the client straight to the signed URL.
            var response = {
                statusCode: 301,
                headers: {
                    "Location" : data
                },
                body: null
            };
            callback(null, response);
        }
    })
};

The getSignedUrl function from the SDK receives three parameters: the name of the operation the generated URL will allow, an object containing the configuration (bucket, key of the object in the bucket, and the expiration time in seconds), and lastly, the callback executed once the URL is generated. As you can see, we return a 301 code in the response to force the client to redirect the request to the generated URL.
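
As a side note, if you omit the callback, getSignedUrl can also be called synchronously (it returns the URL directly, as long as the credentials have already been resolved). A minimal sketch of that variant:

// Same parameters as above, but the URL is returned instead of
// being passed to a callback.
const url = s3.getSignedUrl('getObject', {
    Bucket: "owi-trainer-assets",
    Key: key,
    Expires: 7200
});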

The third step is to create an API Gateway endpoint that works as a proxy to the Lambda function. The only important thing here is to grab the ID of the API, because we will need it in the next step. The ID can be obtained from the UI when the endpoint is created; in the next image, the text highlighted in yellow is the ID we need.

Gateway ID

The fourth step is to create the validator Lambda function, which will verify that the client requesting an asset is a valid one. For that, follow these steps:

  1. The validator function requires two npm packages that are not provided by default in the Lambda runtime, so we will need to upload a zip file that contains all the necessary libraries.
  2. To accomplish that, create a folder named validator and navigate to it in a command window. In there, type "npm init" to create a package.json file and install these two packages:
    1. aws-auth-policy: contains the AuthPolicy class that a Gateway authorizer needs to allow or deny actions.
    2. jsonwebtoken: this library is used to validate the JWT tokens sent in the query string from the client.
  3. Inside the validator folder created before, add an index.js file that will contain the logic to validate the tokens (the code is provided below).
  4. Finally, create a Lambda function named validator and upload the folder as a zip file.

var jwt = require('jsonwebtoken');
var AuthPolicy = require("aws-auth-policy");

exports.handler = (event, context) => {
    // Verify the token sent by the client against the shared secret.
    jwt.verify(event.queryStringParameters.token, "<SECRET TOKEN TO AUTHENTICATE JWT>",
    function(err, decoded){
        if(err) {
            console.log(err);
            context.fail("Unable to load encryption key");
        }
        else{
            console.log("Decoded: " + JSON.stringify(decoded));

            // Build an IAM policy that lets the caller invoke GET on
            // any resource of this API.
            var policy = new AuthPolicy(decoded.sub, "<AWS-ACCOUNT-ID>", {
                region: "<REGION>",
                restApiId: "<API GATEWAY ID>",
                stage: "<STAGE>"
            });
            policy.allowMethod(AuthPolicy.HttpVerb.GET, "*");

            context.succeed(policy.build());
        }
    });
};
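
How the client obtains a token in the first place is outside the scope of this post. For reference, here is a minimal sketch of how a backend could issue one with the same jsonwebtoken library and the same shared secret (the payload below is a hypothetical example):

var jwt = require('jsonwebtoken');

// Sign a token for an authenticated user; "sub" becomes the principal ID
// that the validator passes to AuthPolicy.
var token = jwt.sign({ sub: "user-123" }, "<SECRET TOKEN TO AUTHENTICATE JWT>", {
    expiresIn: "2h"
});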

Finally, the fifth and last step is to add the authorizer in API Gateway. For that, go to the Authorizers section of the Gateway you created and click "Create New Authorizer". Configure it as follows:

Authorizer Configuration

As you can see, the token will be sent as part of the query string; other options are to send the token as a header or a stage variable.
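
Putting it all together, a request for a protected asset ends up looking something like this (every bracketed value is a placeholder for your own setup):

GET https://<API-ID>.execute-api.<REGION>.amazonaws.com/<STAGE>?key=<OBJECT-KEY>&token=<JWT>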

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on @cannyengineer to get updates on every new post.

Uploading files to AWS S3 Buckets

Well... I said that I was going to step away from AWS a little bit, but it seems like it wasn't that easy. Today, I'm writing about uploading files to S3 using a Node.js backend service.

This will be a short tutorial divided into two parts: first the front-end development, and then the backend work. So let's start.

Web site

I'm using an Angular application created with the Angular CLI and Bootstrap as the front-end framework to design the website; however, in this tutorial I'm not going to focus on how to set all of this up. For UI notifications, we are using ngx-toastr (if you don't know about it, look at my review here).

To create the file upload component and give it some styles, I used the following code:

<div class="custom-file" style="width: auto;">
  <input type="file" class="custom-file-input" 
         accept="application/pdf"
         (change)="upload($event.target.files, fileInput)" 
         id="customPreReadFile" #fileInput/>
  <label class="custom-file-label" for="customPreReadFile">{{getFileName()}}</label>
</div>

As you can see, we are allowing only PDF files, but this restriction can be disabled or modified to meet your needs.

On the component side, we created two methods: "upload", called on the change event, and "getFileName", which displays an instruction text or the name of the file if one was already selected. The code for both methods is as follows:

upload(files: FileList, fileInput: any) {
  if(files[0].type.indexOf("pdf") === -1){
    this.toastr.error("The file selected is not a PDF.", "Error");
    fileInput.value = "";
    return;
  }
  // Keep a reference so getFileName() can display the selected file.
  this.file = files[0];
  this.toastr.info("Uploading file...");
  this.uploadService.uploadFile(files[0], this.identifier).subscribe(data => {
    this.toastr.success("File has been uploaded.", "Success");
  });
}

getFileName(): string {
  return this.file ? this.file.name : 'Upload File';
}

The service method is the one that prepares the file to be sent to the Node.js service, as follows:

uploadFile(file: File, id: string): Observable<any> {
  const formData: FormData = new FormData();
  // Note the backticks: the id becomes part of the S3 object key.
  formData.append("file", file, `${id}/${file.name}`);
  return this.httpClient.post(environment.apiEndPoint
    + '/admin/upload/'
    + id, formData);
}

Node JS Service

Having configured all the required parts in the front-end code, we need to adapt our Node.js service to receive the file. The service uses Express to configure the REST API, but we also use a package called formidable to easily process the form data sent from the Angular application. As in the previous section, I'm not focusing on how to set up the Node service, but rather on the exact code to process the file upload.

Before digging into the code, I'll explain a little bit about what formidable does. In short, formidable parses the content of the form sent in the request and saves it to a temporary local location; from there, we can grab the file and do whatever we want with it.

The express endpoint code looks like this:

var IncomingForm = require('formidable').IncomingForm;
var fs = require('fs');
router.post('/admin/upload/:id', function (req, res) {
    var id = req.params.id;
    var s3Uploader = new S3Uploader(req);
    var form = new IncomingForm();
    var fileName = "";
    var buffer = null;
    form.on('file', (field, file) => {
        fileName = file.name;
        buffer = fs.readFileSync(file.path);
    });
    form.on('end', () => {
        s3Uploader.uploadFile(fileName, buffer).then(fileData => {
          res.json({
            successful: true,
            fileData
          });
        }).catch(err => {
            console.log(err);
            res.sendStatus(500);
        });
    });
    form.parse(req);
});

Before moving on to the code that uploads the file to S3, let's review what we are doing here. After importing the necessary dependencies, inside the request handler we are doing multiple things:

  1. Creating an instance of an "S3Uploader" helper to send the files to S3.
  2. Configuring the "IncomingForm" instance from formidable.
    1. Defining an event handler for when a file is processed by formidable, which retrieves the file name and creates a buffer that we will send to the S3 service.
    2. Defining an event handler for when the form has been fully processed, which calls the upload method on the S3 helper.
  3. Calling the parse method from formidable to start the whole process.

The "S3Uploader" object has the following code:

var AWS = require('aws-sdk');
function S3Uploader(request) {
  // The Angular client sends the Cognito ID token in a custom header.
  var jwtToken = request ? request.headers.cognitoauthorization : null;
  let credentials = {
    IdentityPoolId: "<IDENTITY POOL ID>",
    Logins: {}
  };
  credentials.Logins['cognito-idp.<COGNITO REGION>.amazonaws.com/<USER POOL ID>'] = jwtToken;

  // Exchange the user pool token for temporary credentials scoped to
  // the permissions of the authenticated user.
  AWS.config.update({
    credentials: new AWS.CognitoIdentityCredentials(credentials, {
      region: "<COGNITO REGION>"
    }),
    region: "<S3 BUCKET REGION>"
  });

  let s3 = new AWS.S3();
  function uploadFile(key, file) {
    var s3Config = {
      Bucket: "<BUCKET NAME>",
      Key: key,
      Body: file
    };
    return new Promise((resolve, reject) => {
      s3.putObject(s3Config, (err, resp) => {
        if (err) {
          console.log(err);
          reject({success: false, data: err});
          return; // don't resolve after a rejection
        }
        resolve({success: true, data: resp});
      })
    });
  }

  // Expose the method so the endpoint can call s3Uploader.uploadFile(...).
  this.uploadFile = uploadFile;
}

// Export the helper so the router can require it.
module.exports = S3Uploader;

If the first part, about configuring the AWS SDK to use the proper credentials, seems confusing, I invite you to read my post on how to manage credentials properly using Cognito, or even an older post where I explain how to use Cognito and Federated Identities to create users with roles that can access AWS resources.

In short, what we are doing is retrieving the authentication token generated by Cognito when the user logs in, so that we can configure the AWS SDK to use the permissions of that user.
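
For completeness, here is a minimal sketch of how the Angular service from the first section might attach that token, assuming a hypothetical getIdToken() helper that returns the Cognito ID token of the logged-in user (HttpHeaders comes from @angular/common/http):

uploadFile(file: File, id: string): Observable<any> {
  const formData: FormData = new FormData();
  formData.append("file", file, `${id}/${file.name}`);
  // The Node service reads this header to build the Cognito credentials.
  const headers = new HttpHeaders({ cognitoauthorization: this.getIdToken() });
  return this.httpClient.post(environment.apiEndPoint + '/admin/upload/' + id, formData, { headers });
}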

After all that, we just need to instantiate an object to use the S3 API and send the data to the bucket.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on @cannyengineer to get updates on every new post.