Tech Review: Angular Stack (2/many)

The second entry in this series covers two common problem areas in JavaScript. If you have ever experienced problems with dates, the first library I'm reviewing here will make your development experience much easier; the second library displays a loading indicator while an async request is being executed. So let's dive in:

Dates

Let's start with the date management library. According to npmjs, at the time of writing this post, it averages more than 953,559 downloads per week, it's MIT licensed, and it's currently on version 1.29.0. Also, according to GitHub, the repository has over 11,000 stars, 98 contributors, and a pretty active community (I hope all of those stats give enough background about how big it is). Bottom line, I present to you, if you didn't know about it, date-fns.

According to their home page, date-fns is like lodash but for dates. With over 140 functions to process dates, it doesn't fall short, and I'm pretty sure it will cover every use case you can think of.

So, why and how do I use it? I'm answering both questions of my review from the start because they are tightly related. The main features I look for when handling dates are formatting a date to specific customer requirements and calculating dates in the future.

Formatting a date is really simple; for example, you can have some code like this one:

import { format } from 'date-fns';

export class Utils {
  getDateFormatted(date: Date) {
    // Note: DD/YYYY are valid tokens in date-fns v1; v2 renamed them to dd/yyyy.
    return format(date, "MM/DD/YYYY hh:mm:ss");
  }
}

Date calculation becomes really simple too. Among the 140+ functions, they export some useful ones like those below:

import { addDays, setSeconds } from 'date-fns';

export class Utils {
  getEmailTriggerDate(date: Date) {
    // One week from the given date, with the seconds zeroed out.
    return setSeconds(addDays(date, 7), 0);
  }
}

This example only mentions the functions addDays and setSeconds, but there are many more, and I invite you to read the documentation over here.
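To give a flavor of what else is in there, here is a small hedged sketch using two more real date-fns functions, differenceInCalendarDays and isAfter (the due-date scenario is just an invented example):

import { differenceInCalendarDays, isAfter } from 'date-fns';

export class DateChecks {
  // How many whole calendar days remain until the due date.
  daysUntilDue(dueDate: Date) {
    return differenceInCalendarDays(dueDate, new Date());
  }

  // True when the due date has already passed.
  isOverdue(dueDate: Date) {
    return isAfter(new Date(), dueDate);
  }
}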

Loading Indicator

Our second library has around 1,500 downloads per week, is also MIT licensed, and is on version 2.3.0. According to GitHub, the repository has 170 stars and 2 contributors. I present to you mpalourdio's ng-http-loader.

Depending on the version of Angular you are using, a different version of the library must be installed. This is the description from their home page:

"The latest compatible version with angular 4 is 0.3.4. If you want to use Angular 5, use versions 0.4.0 and above. The latest compatible version with angular 5 is version 0.9.1. Versions 1.0.0+ and 2.0.0+ are angular 6 / RxJS 6 compatible only."

First question, why do I use it? Because, basically, I don't have to do anything other than configure the loader icon following the simple steps in the instructions they provide. This library was built to intercept every HTTP request performed through Angular's HttpClientModule; in other words, as long as you use that module, the loader will appear automatically.

Also, for the cases where you use a third-party tool that does not go through that module, for example the AWS-SDK, you can manually trigger the loader using an Angular service they provide.

So, how do I use it? First, let's discuss the configuration a little. The most important thing is to import the library's main module into our app module so that we can use it from our application.

import { NgHttpLoaderModule } from 'ng-http-loader/ng-http-loader.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [
    /** DECLARATIONS */
  ],
  imports: [
    /** OTHER IMPORTS */
    NgHttpLoaderModule
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }

Some customization is provided out of the box, like different spinners, via the following configuration in the app.component.ts file.

import { Component } from '@angular/core';
import { Spinkit } from 'ng-http-loader/spinkits';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  // Pick one of the bundled Spinkit spinners.
  spinnerType = Spinkit.skWanderingCubes;
}

And a simple addition in app.component.html:

<spinner [spinner]="spinnerType"></spinner>

With this default configuration, the spinner will appear every time a request is made. However, I also mentioned that there is a way to show the spinner manually. Here is how:

import { Component } from '@angular/core';
import { SpinnerVisibilityService } from 'ng-http-loader/services/spinner-visibility.service';

@Component({
    selector: 'my-component',
    templateUrl: 'my-component.component.html'
})
export class MyComponent {
  constructor(private spinner: SpinnerVisibilityService) {
  }

  someAction() {
    this.spinner.show();
    // Some more logic
    this.spinner.hide();
  }
}
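For instance, going back to the AWS-SDK case mentioned earlier, you could wrap any promise-based call that bypasses HttpClient. A hedged sketch — this.s3 stands in for a configured AWS.S3 client, and the bucket name is made up:

async listAssets() {
  this.spinner.show();
  try {
    // Any async work that does not go through HttpClientModule.
    const result = await this.s3.listObjectsV2({ Bucket: 'my-assets-bucket' }).promise();
    return result.Contents;
  } finally {
    // Hide the spinner even if the request fails.
    this.spinner.hide();
  }
}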

Summary

I've nothing else to say except that I love these components and I'll keep using them as long as I can. Both of them simplify my work a lot, and I definitely like using things that make my life easier.

If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updated on every new post.

How to provide temporary access to S3 Buckets?

There are times when we store assets like images or videos in S3 buckets to be displayed on our websites. But what happens when we want to secure those assets so that only authenticated users can see them?

Well, there are many ways to provide security. One of the most common is using the "Referer" header, but it can be spoofed, so we lose the security we wanted. Another is using CloudFront and creating signed URLs, but that requires a lot of development work; the last option is using API Gateway to return binary data. After analyzing all these options, I determined that none of them provided the security we needed nor satisfied all of our use cases. Finally, I came up with another solution using a little bit of all the approaches mentioned before.

To secure the S3 folder, we are going to use signed URLs to provide temporary access to the bucket where the assets are hosted. To create the signed URLs we use two Lambda functions: the first runs under an IAM role that can create the signed URLs, and the second is an authorizer for the first that verifies whether the user making the request has the proper credentials. Here is a diagram of how the security flow for the S3 bucket works:

S3 Security Architecture

The first step is to remove the public policy from the bucket; we want the bucket to be as locked down as possible.

The second step is to create a Lambda function that generates the signed URLs. For that, create a Lambda function called resolver and type the code provided below:

const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
    AWS.config.update({
        region: "us-east-2"
    });

    const s3 = new AWS.S3({signatureVersion: 'v4', signatureCache: false});
    var key = event["queryStringParameters"]["key"];
    s3.getSignedUrl('getObject', {
        Bucket: "owi-trainer-assets",
        Key: key,
        Expires: 7200 // the URL stays valid for 2 hours
    }, function(error, data){
        if(error) {
            context.done(error);
        } else {
            // Redirect the client straight to the signed URL.
            var response = {
                statusCode: 301,
                headers: {
                    "Location": data
                },
                body: null
            };
            callback(null, response);
        }
    });
};

The getSignedUrl function from the SDK receives 3 parameters: the name of the operation the generated URL will allow, an object containing the configuration (bucket, key of the object in the bucket, and the expiration time in seconds), and lastly the callback executed once the URL is generated. As you can see, we return a 301 status code in the response to force the client to redirect to the generated URL.
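On the client side, that 301 means the endpoint can be consumed directly, for example as an image source, since the browser follows the redirect to the signed URL transparently. A small sketch — the /assets resource path and the placeholders are assumptions about how the gateway from the next step is configured:

// Builds the gateway URL for an asset; the token is consumed by the
// authorizer described in the fourth step below.
getAssetUrl(key: string, token: string): string {
  const base = 'https://<API GATEWAY ID>.execute-api.us-east-2.amazonaws.com/<STAGE>';
  return `${base}/assets?key=${encodeURIComponent(key)}&token=${encodeURIComponent(token)}`;
}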

The third step is to create an API Gateway endpoint that works as a proxy to the Lambda function. The only important aspect here is to grab the ID of the API because we will need it in the next step. The ID can be obtained from the UI when the endpoint is created; in the next image, the text highlighted in yellow is the ID we need.

Gateway ID

The fourth step is to create the validator Lambda function that verifies that the client requesting an asset is valid. For that, follow these steps:

  1. The validator function requires 2 NPM packages that are not provided by default in the Lambda ecosystem, so we will need to upload a zip file that contains all the necessary libraries.
  2. To accomplish that, create a folder named validator and navigate to it in a command window. In there, run "npm init" to create a package.json file and install these two packages:
    1. aws-auth-policy: contains the AuthPolicy class that is required for a Gateway authorizer to perform actions.
    2. jsonwebtoken: this library is going to be used to validate the JWT tokens sent in the query string from the client.
  3. Inside of the validator folder created before, add an index.js file that will contain the logic to validate the tokens. The code will be provided below.
  4. Finally, create a Lambda function named validator and upload the folder as a zip file.

var jwt = require('jsonwebtoken');
var AuthPolicy = require("aws-auth-policy");

exports.handler = (event, context) => {
    // Verify the JWT sent in the query string against our shared secret.
    jwt.verify(event.queryStringParameters.token, "<SECRET TOKEN TO AUTHENTICATE JWT>",
    function(err, decoded){
        if(err) {
            console.log(err);
            context.fail("Unauthorized");
        }
        else {
            console.log("Decoded: " + JSON.stringify(decoded));

            // Build an IAM policy that allows GET on every resource of this API stage.
            var policy = new AuthPolicy(decoded.sub, "<AWS-ACCOUNT-ID>", {
                region: "<REGION>",
                restApiId: "<API GATEWAY ID>",
                stage: "<STAGE>"
            });
            policy.allowMethod(AuthPolicy.HttpVerb.GET, "*");

            context.succeed(policy.build());
        }
    });
};
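If you want to test the authorizer before wiring everything up, you can sign a token with the same secret. A minimal sketch using the same jsonwebtoken package (the payload is arbitrary — only the sub claim is read by the policy above):

var jwt = require('jsonwebtoken');

// Sign a short-lived test token with the shared secret.
var token = jwt.sign(
    { sub: "test-user" },
    "<SECRET TOKEN TO AUTHENTICATE JWT>",
    { expiresIn: "1h" }
);
console.log(token); // append as ?token=... when requesting an asset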

Finally, the fifth and last step is to add the authorizer in API Gateway. For that, go to the Authorizers section of the gateway you created and click "Create New Authorizer". Configure it as shown below:

Authorizer Configuration

As you can see, the token is sent as part of the query string; other options are to send it as a header or a stage variable.

If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on @cannyengineer to get updated on every new post.

Uploading files to AWS S3 Buckets

Well... I said I was going to step away from AWS a little, but it seems it wasn't that easy. Today, I'm writing about uploading files to S3 using a Node.js backend service.

This will be a short tutorial divided into two steps: first the front-end development, then the backend work. So let's start.

Web site

I'm using an Angular application created with the Angular CLI and Bootstrap as the front-end framework to design the website; however, in this tutorial I'm not going to focus on how to set all of this up. For UI notifications, we are using ngx-toastr (if you don't know about it, take a look at my review here).

To create the file upload component and give it some styles, I used the following code:

<div class="custom-file" style="width: auto;">
  <input type="file" class="custom-file-input" 
         accept="application/pdf"
         (change)="upload($event.target.files, fileInput)" 
         id="customPreReadFile" #fileInput/>
  <label class="custom-file-label" for="customPreReadFile">{{getFileName()}}</label>
</div>

As you can see, we are allowing only PDF files, but this restriction can be disabled or modified to meet your needs.

On the component side, we created two methods: the "upload" method called on the change event, and "getFileName", which displays an instruction text or the name of the file if one was already selected. The code for both methods is as follows:

upload(files: FileList, fileInput: any) {
  if (files[0].type.indexOf("pdf") === -1) {
    this.toastr.error("The file selected is not a PDF.", "Error");
    fileInput.value = "";
    return;
  }
  this.file = files[0]; // keep a reference so getFileName() can display the name
  this.toastr.info("Uploading file...");
  this.uploadService.uploadFile(files[0], this.identifier).subscribe(data => {
    this.toastr.success("File has been uploaded.", "Success");
  });
}

getFileName(): string {
  return this.file ? this.file.name : 'Upload File';
}

The service method is the one that prepares the file to be sent to the Node.js service, as follows:

uploadFile(file: File, id: string): Observable<any> {
  const formData: FormData = new FormData();
  // Backticks are required here so the template literal actually interpolates.
  formData.append("file", file, `${id}/${file.name}`);
  return this.httpClient.post(environment.apiEndPoint
    + '/admin/upload/'
    + id, formData);
}

Node JS Service

Having configured all the required parts in the front-end code, we need to adapt our Node.js service to receive the file. The service uses Express to configure the REST API, but we also use a package called formidable to easily process the form data sent from the Angular application. As in the Web Site section, I'm not focusing on how to set up the Node service, but rather on the exact code that processes the file upload.

Before digging into the code, I'll explain a little about what formidable does. In short, formidable parses the content of the form sent in the request and saves it to a temporary local location; from there, we can grab the file and apply any logic we want to it.

The Express endpoint code looks like this:

var IncomingForm = require('formidable').IncomingForm;
var fs = require('fs');
// The S3Uploader helper is shown further below; adjust the path to wherever you keep it.
var S3Uploader = require('./s3-uploader');
router.post('/admin/upload/:id', function (req, res) {
    var id = req.params.id;
    var s3Uploader = new S3Uploader(req);
    var form = new IncomingForm();
    var fileName = "";
    var buffer = null;
    form.on('file', (field, file) => {
        fileName = file.name;
        buffer = fs.readFileSync(file.path);
    });
    form.on('end', () => {
        s3Uploader.uploadFile(fileName, buffer).then(fileData => {
          res.json({
            successful: true,
            fileData
          });
        }).catch(err => {
            console.log(err);
            res.sendStatus(500);
        });
    });
    form.parse(req);
});

Before moving to the next part, uploading the file to S3, let's recap what we are doing here. After importing the necessary dependencies, inside the request handler we are doing multiple things:

  1. Creating an instance of an "S3Uploader" helper to send the files to S3.
  2. Configuring the "IncomingForm" instance from formidable.
    1. Define an event handler when a file is processed by formidable that retrieves the file name and creates a buffer that we will send to the S3 service.
    2. Define an event handler when the form has been processed to call the upload file method in the S3 helper.
  3. Calling the parse method from Formidable to start the whole process.

The "S3Uploader" object has the following code:

var AWS = require('aws-sdk');
function S3Uploader(request) {
  var jwtToken = request ? request.headers.cognitoauthorization : null;
  let credentials = {
    IdentityPoolId: "<IDENTITY POOL ID>",
    Logins: {}
  };
  credentials.Logins['cognito-idp.<COGNITO REGION>.amazonaws.com/<USER POOL ID>'] = jwtToken;

  AWS.config.update({
    credentials: new AWS.CognitoIdentityCredentials(credentials, {
      region: "<COGNITO REGION>"
    }),
    region: "<S3 BUCKET REGION>"
  });

  let s3 = new AWS.S3();

  function uploadFile(key, file) {
    var s3Config = {
      Bucket: "<BUCKET NAME>",
      Key: key,
      Body: file
    };
    return new Promise((resolve, reject) => {
      s3.putObject(s3Config, (err, resp) => {
        if (err) {
          console.log(err);
          return reject({ success: false, data: err });
        }
        resolve({ success: true, data: resp });
      });
    });
  }

  // Expose the upload method so the route handler can call it.
  this.uploadFile = uploadFile;
}

module.exports = S3Uploader;

If the first part, about configuring the AWS SDK to use proper credentials, is not clear, I invite you to read my post on how to manage credentials properly using Cognito, or even an older post where I explain how to use Cognito and Federated Identities to create users with roles that can access AWS resources.

In short, what we are doing is retrieving the authentication token generated by Cognito when the user logs in, so that we can configure the AWS SDK to use the user's permissions.

After all that, we just need to instantiate an object to use the S3 APIs and send the data to the bucket.
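For completeness, here is a hedged sketch of what the Angular side looks like when attaching that token. The idToken property is an assumption (it would come from your Cognito session after login); the header name matches what the service reads above:

import { HttpClient, HttpHeaders } from '@angular/common/http';

// A variant of the earlier uploadFile method that attaches the Cognito ID token.
uploadFile(file: File, id: string): Observable<any> {
  const formData: FormData = new FormData();
  formData.append("file", file, `${id}/${file.name}`);
  // idToken is assumed to be stored on the service after the Cognito login flow.
  const headers = new HttpHeaders({ cognitoauthorization: this.idToken });
  return this.httpClient.post(
    environment.apiEndPoint + '/admin/upload/' + id,
    formData,
    { headers });
}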

If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on @cannyengineer to get updated on every new post.

Tech Review: Angular Stack (1/many)

This being the first post in my Tech Review series, I wanted to start with something I use often, so I'm going to write about two components that are really common in enterprise applications: a grid and a UI notification mechanism.

Also, this time I'm not doing any type of comparison between the components I'm reviewing and others, nor am I going to give a rating; this post focuses on the good things about each one, with some examples of using them.

Grid Component

Let's start with the grid component. According to npmjs, at the time of writing this post, it has had more than 50,000 downloads in the last week, it's MIT licensed, and it's currently on version 13.0.1. Also, according to GitHub, the repository has over 2,700 stars, 111 contributors, and a pretty active community (I hope all of those stats give enough background about how big it is). Bottom line, I present to you, if you didn't know about it, Swimlane's grid component.

First question in my review, why do I use it? To answer that I have two words only: features and simplicity. I like components that are simple to use for the most basic requirements I need to fulfill; and for more complex things, I use them as the base for my custom-made components.

For this grid in particular, here are some of the features available out of the box (taken from their main page) [1]:

  • Handle large data sets ( Virtual DOM )
  • Expressive Header and Cell Templates
  • Horizontal & Vertical Scrolling
  • Column Reordering & Resizing
  • Client/Server side Pagination & Sorting
  • Intelligent Column Width Algorithms ( Force-fill & Flex-grow )
  • Cell & Row Selection ( Single, Multi, Keyboard, Checkbox )
  • Fixed AND Fluid height
  • Row Detail View

The features I like the most are column reordering, pagination, sorting, custom commands, and the flexible fluid layout. Those are the ones I care about the most, and they cover almost all the principal use cases I've seen.

Second question, how do I use it? Well, normally if you use a grid in an application, I'm pretty sure you will end up using it in more places, so the first thing I do is create a custom component that wraps the grid and enables more features that do not come out of the box. For example, this is the current version of the GridComponent I'm using:

import { Component, Input, Output, EventEmitter, OnInit, OnChanges, ViewChild, SimpleChanges } from '@angular/core';
import { DatatableComponent } from '@swimlane/ngx-datatable';
import { Md5 } from 'ts-md5/dist/md5';

@Component({
  selector: 'grid',
  templateUrl: './grid.component.html'
})
export class GridComponent implements OnInit, OnChanges {
    @Input() filtered: boolean = false;
    @Input() data: any;
    @Input() columns: any;
    @Input() rowClass: any;
    @Input() selectionType: string = 'single';
    @Output() rowSelected: EventEmitter<any> = new EventEmitter();
    @ViewChild('table') table: DatatableComponent;

    // Referenced by the template bindings shown further below.
    loadingIndicator = false;
    reorderable = true;

    rows = [];
    lastSelectedRowHash;
    selectedRow = [];

    constructor() { }

    onRowSelected($event) {
        if(this.selectedRow == null) return;
        if(this.selectedRow.length <= 0) return;
        var stringifyObject = JSON.stringify(this.selectedRow[0]);
        var selectedObjectHash = Md5.hashStr(stringifyObject);
        if(this.lastSelectedRowHash == selectedObjectHash)
            return;
        this.lastSelectedRowHash = selectedObjectHash;
        this.rowSelected.emit(this.selectedRow);
    }

    cancelSelection(){
        this.selectedRow = [];
        this.lastSelectedRowHash = "";
    }

    updateFilter(event) {
        if(!this.filtered) return;
        const val = event.target.value.toLowerCase();
        const temp = this.data.filter((d) => {
            var matches = false;
            for(var index = 0; index < this.columns.length; index++){
                var property = this.columns[index].prop;
                var data = d[property] == null ? "" : d[property].toString();
                matches = data.toLowerCase().indexOf(val) !== -1 || !val;
                if(matches) break;
            }
            return matches;
        });
        this.rows = temp;
        this.table.offset = 0;
    }

    ngOnInit() {
        if(this.data){
            this.rows = this.data;
        }
    }

    ngOnChanges(changes: SimpleChanges){
        for(var prop in changes){
            if(prop == 'data'){
                this.rows = changes[prop].currentValue;
            }
        }
    }
}

As you can see, I've added things such as saving the last selected element's state to avoid firing the event unnecessarily, an easy way to unselect all elements, and filtering. Obviously, this has an HTML counterpart:

<div class="row mb-3">
    <div class="col-md-4">
        <input type='text'
            class="form-control"
            placeholder='Type to filter...'
            (keyup)='updateFilter($event)'
            *ngIf="filtered"/>
    </div>
</div>
<div class='panel-body'>
    <ngx-datatable #table
        class="material selection-cell"
        [rows]="rows"
        [loadingIndicator]="loadingIndicator"
        [columns]="columns"
        [columnMode]="'force'"
        [headerHeight]="40"
        [footerHeight]="40"
        [limit]="10"
        [rowHeight]="'auto'"
        [reorderable]="reorderable"
        [selectionType]="selectionType"
        [selected]="selectedRow"
        (select)='onRowSelected($event)'
        [rowClass]="rowClass">
    </ngx-datatable>
</div>

Creating an HTML wrapper allowed me to centralize common configuration like the column mode for the fluid layout I mentioned before, or the height of the rows, among other things.
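To give an idea of the end result, using the wrapper from a consuming template looks something like this (the column definitions follow ngx-datatable's prop/name convention; the users list and handler are made-up names):

<grid [data]="users"
      [columns]="[{ prop: 'name', name: 'Name' }, { prop: 'email', name: 'Email' }]"
      [filtered]="true"
      (rowSelected)="onUserSelected($event)">
</grid>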

UI Notification Mechanism

Continuing with the second component of this review, let's give some stats like before. According to npmjs, at the time of writing this post, it has had more than 20,000 downloads in the last week, it's MIT licensed, and it's currently on version 8.8.0. Also, according to GitHub, the repository has 545 stars and 17 contributors. Bottom line, I present to you, if you didn't know about it, Scott Cooper's ngx-toastr.

So, why do I use it? Again, simplicity is something I looked for, but more importantly this time, I needed a component that satisfied all my use cases from the start and matched the UI style without interfering with the layout design. For all those reasons, ngx-toastr met all my requirements and exceeded my expectations.

Here are some of the features available (again, taken from their main page) [2]:

  • Toast Component Injection without being passed ViewContainerRef
  • No use of *ngFor. Fewer dirty checks and higher performance.
  • AoT compilation and lazy loading compatible
  • Component inheritance for custom toasts
  • SystemJS/UMD rollup bundle
  • Animations using Angular's Web Animations API
  • Output toasts to an optional target directive

Then, how do I use it? Apart from following the installation and configuration instructions, I've done nothing. This is an awesome plug-and-play component, but if you prefer, you can customize the UI design simply by following a few extra steps.
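For reference, the setup really is that small; here is a minimal sketch of the standard configuration, following the steps their docs describe:

import { NgModule } from '@angular/core';
import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
import { ToastrModule } from 'ngx-toastr';

@NgModule({
  imports: [
    BrowserAnimationsModule, // ngx-toastr relies on Angular animations
    ToastrModule.forRoot()
  ]
})
export class AppModule { }

After that, injecting ToastrService into a component and calling this.toastr.success('Saved!', 'Success') is all it takes.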

Summary

Having reached this point, I've nothing else to say except that both are excellent components that will work great in whatever project you need them for. And hopefully, this little post makes you want to try them.

If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updated on every new post.

Technical Debt: A Practical Approach

Today, I'm gonna step away from the Amazon tutorials; I think I have spent enough words on them, so it's time to move to other areas, and as you can see from the title, this post is going to be all about technical debt.

For the last couple of years, this topic has been on a rollercoaster (take a look at the Google Trends chart here), but rather than just writing about what it is, as many people already do, I will write about how it can be integrated into a regular Scrum development process.

What's Technical Debt?

Before actually diving into the main topic, I need to give a little background. If you are not familiar with the concept, this section might interest you; otherwise, it's still a good read.

The term technical debt is not something that started appearing on the web recently, nor is it something I created. It was coined by Ward Cunningham around 1992 in his article titled "The WyCash portfolio management system" [1].

If I had to summarize the definition, I would say that it's the result of any decision you make in software development that has a visible result and a secondary, not-so-visible effect.

In its financial counterpart, any debt generates interest that makes the debt increase over time. At some point you might pay the debt off, but not before incurring a lot of expense. In this scenario, the decision was to take out a loan, and the secondary effect was the interest that made you pay more and more over time.

Coming back to software development, a perfect example is when you are comparing two solutions: the first one is quick and dirty, and the other one takes more time but provides a cleaner result. If you choose the first one because it reduces the time to publish a product, or because there is a hard deadline, then you are taking out a loan and the interest starts to run. The interest can appear in many forms; for example, it could be the "spaghetti code" that is created, or the high complexity of a method that requires extra time during any support activity, among other possible outcomes.

Incurring technical debt is not bad by definition; the bad part is when the debt is not handled properly. So, how can we handle it?

Managing Technical Debt

In 2014, a group of researchers ran a simulation to determine the best strategy for managing technical debt in an agile environment. They came up with a few strategies and concluded that the best approach is to list the technical aspects that matter most to the developers, monitor those aspects using automated tools, and remediate any failure based on threshold limits. Sounds like a lot, I know, and that's why they explicitly state that, even though this is the best approach, it requires resources and time that are not always available. If you want to read the full article, you can purchase it from here or download it if you have an IEEE account.

Last year, to complete my master's degree, I had to lead a project, and I decided to focus it on managing technical debt. I already had a strategy in place, based on what these researchers simulated; I just had to prove that it works.

So, I started by doing some research to see if others were making the same effort, and chose a static code analysis tool from the ones on the market. We opted for SonarQube to run that analysis continuously, and created custom alerts for whenever a metric failed based on threshold limits. SonarQube, in my opinion, leads the market not only because it is free and open source, but because the number of functionalities and languages it supports exceeds any of its competitors.

After that, we altered our development process. We added weekly code review meetings to check progress on the technical debt metrics, allocated time (no more than 20% of one resource per week) to remediate failing metrics, and established a more standardized code review strategy.

We evaluated these changes over 5 one-week sprints, in which we added new features and worked on remediating failing metrics. At the end of those sprints, we obtained the following results:

  1. Defects reduced by 30%, according to SonarQube.
  2. Days of technical debt reduced by 25%.
  3. Security vulnerabilities reduced by 86%.
  4. Cyclomatic complexity reduced by 129 possible flows across the application, resulting in simpler classes and functions.

Also, 4 months after the changes were applied, we noticed a 60% reduction in the number of support tickets, and resolution time improved by 50%, saving time in support activities and reducing costs.

Even though the whole project took other areas into consideration, like process engineering and automated testing, we effectively ended up adding technical debt management activities to our development process. If you want to read the complete document about this project (although it is in Spanish), it's publicly available here.

If you reached this point, I hope you enjoyed this post, and if you are interested in applying something like this in your organization, don't hesitate to contact me for more information.