Hosting websites in Amazon - Setting up URLs and certificates

Developing an application is the part that draws all the interest from any developer; I mean, the name kind of implies that, right? However, a complete development process does not finish when the team writes the last line of code: we still have to deploy.

During most of the process, we can use simpler configurations. For example, we can use the default links provided when we host in an S3 bucket; if we are using an EC2 instance, we can use the direct IP; or we can even use a load balancer that points to a set of Docker instances. However, in all of these cases, the URLs are not friendly, since they are autogenerated by Amazon.

This post will explain how to do a proper deployment of an application for a production-level environment. For this, we will use the following technologies:

  1. S3 bucket: where the application code will be hosted. We won't walk through this in this post, but you can check my Amplify posts to see how to do it.

  2. CloudFront: content delivery network service

  3. Route 53: DNS and domain registration service

  4. Certificate Manager: SSL certificate provider

Requesting the certificate

As a prerequisite for this part, we will need a hosted zone in Route 53 with which the certificate will be associated. For that, go to the Route 53 service, select Hosted zones in the left menu, and create one if you don't have one already.

Route 53 - hosted zone

The first step that we will cover is requesting the SSL certificate. For this, go to the Certificate Manager service in AWS, click on "Request a Certificate", and select a public certificate.

Certificate - Step 1

The next step will ask us to register all the domain names that we want the certificate to be used for.

Certificate - step 2

Important: we can use a wildcard to cover multiple subdomains. For example, you can type *.mydomain.com, and it will cover cases like admin.mydomain.com.

After setting the domain names, the wizard will ask us to select the validation method for the certificate. Since we are going to use only Amazon services, let's select DNS validation.

Certificate - step 3

Finally, we review all the selections and finish the wizard. After the request has been created, we need to create the validation records for the certificate in Route 53; fortunately, this is an automated step that Certificate Manager can do for us.

Certificate - step 4

In the Certificate Manager dashboard, you will see the certificate that you just created; inside of it, you will be prompted to create the DNS records in Route 53.

Certificate - step 5
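As an alternative to the console wizard, the same request can be made with the AWS CLI. Here is a minimal sketch, using mydomain.com as a placeholder; note that certificates used by CloudFront must be requested in the us-east-1 region:

# Request a public certificate validated through DNS (mydomain.com is a placeholder).
# CloudFront only accepts certificates issued in us-east-1.
aws acm request-certificate \
  --domain-name mydomain.com \
  --subject-alternative-names "*.mydomain.com" \
  --validation-method DNS \
  --region us-east-1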

Creating the CloudFront distribution

Now that we have our certificate set up and ready to be used, we need to create the CloudFront distribution that will point to our S3 bucket. For that, go to the CloudFront service dashboard and click on "Create distribution".

For our use case, we will select "Web" as the delivery method (this is the first screen that will show). From there, we will be presented with a big form full of options; we will leave the default values for most of them, except in three specific areas.

First, we need to select the S3 bucket where the application is hosted as the “Origin Domain Name” (this field is a dropdown, even though it doesn’t look like it).

Cloudfront - step 1

The second change that we will make to the form is under the “Default Cache Behavior Settings” section. In there, we will select “Redirect HTTP to HTTPS” in the viewer protocol policy option. This will automatically redirect the user to the HTTPS site, even if they don’t specify it in the address bar.

Cloudfront - step 2

Finally, under the “Distribution Settings” section, we will select the certificate created previously.

Cloudfront - step 3

After all of that, click on the "Create Distribution" button at the end of the screen and wait while Amazon provisions all the resources for it.

Important: for React or Angular applications, you need to set the default root object to the index.html file. It can be specified in the same form, a couple of fields after the certificate selection. If you didn't set it the first time, you can modify the distribution after its creation.

Create the actual URL

Previously, we briefly used Route 53 to validate the certificate request; this time, we will create the real URL that our application will use. So, go to the Route 53 service in AWS and click on Hosted zones in the left menu.

In the hosted zone that you created before, we will add a new record set. If all the previous steps were executed successfully, there should already be a record set there for the SSL certificate validation.

In the new record set, type the name of the URL that we want to use (for example, admin.mydomain.com), set the record set to be an alias, and from the input that appears, select the CloudFront distribution created earlier.

Route 53 - record set
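For reference, the equivalent record can also be created with the AWS CLI. This is a sketch with placeholder values for the hosted zone ID and the distribution's domain name; Z2FDTNDATAQYW2 is the fixed hosted zone ID that AWS uses for every CloudFront distribution:

aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_HOSTED_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "admin.mydomain.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'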

After creating it, we will need to wait while Amazon registers the record in the DNS servers and propagates the changes. Shortly after, you will be able to use your new URL to access your website.

Bonus track: what about APIs?

As a small tip: if you want to do this for an API hosted in ECS or Elastic Beanstalk, instead of creating a CloudFront distribution, you can create the record set in Route 53 directly and point it to the load balancer associated with those instances. The first two steps are still required, though. :)

Summary

Now that all sections have been covered, I hope that your application has been deployed successfully and that your customers will be happy to have a pretty URL that is easy to remember (hopefully). If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Deploying a REST API using Express to Elastic Beanstalk

A couple of weeks ago, I was trying to deploy a small service, but I didn't want to start creating the full infrastructure suite on Fargate yet; hence, the best option was Elastic Beanstalk. For a couple of days I did the deployment manually, but after reading a bit more about Beanstalk, I found a CLI that simplified a lot of the deployment process.

Here is some documentation if you want to read about it, but in the meantime, I will write a little bit about how I used it. It might be useful to you if, like me, you are looking for a way to do a quick deployment and don't want to invest time in configuring a CI/CD pipeline or a more complex setup.

Prerequisites

The Express application was built using:

  1. Node version 10.15.0

  2. generator-express-no-stress-typescript version 4.2.1

Installing the CLI

There are two approaches to installing the CLI: using Homebrew or using pip. In my case, I used Brew because it's much simpler (and also because the other way failed for me :).

Just type the following and brew should take care of everything for you:

brew install awsebcli

Configuring the CLI

I'm going to assume you are already using AWS profiles on your machine; if not, I recommend reading about them here.

To start the configuration, navigate to the root folder of your project and type the following command:

eb init --profile [YOUR PROFILE NAME]

It will ask a series of questions in the following sequence:

  1. Region: in our case, we selected us-east-1

  2. Elastic Beanstalk Application: it will prompt you to create a new app if none is created yet.

  3. Platform: in our case, we selected node.js

  4. SSH configuration

  5. Key pair: to connect to the instance via SSH, you will need a key pair. The CLI will help you create one as well.

Important: there is a step asking if you want to use CodeCommit, which can help you create a pipeline; however, since we are using other source control tools, we ignored it.

Once the CLI finishes, you will see a new folder in your project called .elasticbeanstalk (notice the dot at the beginning). If you want to read more about configuration options, go here and here.
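For reference, the generated config.yml inside that folder looks roughly like this sketch (the exact keys and values depend on your answers during eb init; the application name, key pair, and profile below are placeholders):

branch-defaults:
  master:
    environment: null
global:
  application_name: my-express-api
  default_ec2_keyname: my-keypair
  default_platform: node.js
  default_region: us-east-1
  profile: my-profile
  sc: git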

First deployment

Now that our app is configured (at least for starters), we need to do a deployment. For that, we need to create an environment with the command:

eb create [ENVIRONMENT NAME]

We used DEV, and after it finishes, we need to update some configuration in the AWS console: the Node command, which is the command the environment will run to start the application whenever a new deployment is done.

Elastic Beanstalk environment configuration

Just one more thing: as you can see, the environment uses nginx as its base server, so make sure that the application is listening on port 8081 (look at the documentation here).
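A minimal sketch of what that bootstrap could look like with a plain Express app (the generator already wires this up for you, so treat it as illustrative):

import express from 'express';

const app = express();

// Elastic Beanstalk's nginx proxy forwards traffic to the port exposed in
// process.env.PORT (8081 on the Node.js platform), so honor it here.
const port = Number(process.env.PORT) || 8081;
app.listen(port, () => console.log(`Listening on port ${port}`));

Finally, we can run: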

eb deploy

Important: I ran into a really weird issue when I started making changes and deploying them: the deployment never seemed to grab the latest changes. After reading on the internet for some time, I found this post; in one of his IMPORTANT notes, Jared mentions that the changes must be committed to Git for them to be deployed (the EB CLI packages the latest Git commit by default, rather than your working directory). It seems important to note.

If you are using plain JavaScript, this could be all: the CLI will give you the URL to connect to, and whatever you have built should be ready for use. However, as in many of my other posts, I like TypeScript, so this isn't the end of our journey.

Deploying TypeScript

With TypeScript, the deployment is not as straightforward, so we need to add one more step to our config.yml file and to the package.json.

Important: these steps are necessary because Elastic Beanstalk does not install dev dependencies on its machines, so we need to work around that with this new process.

First, we need something that packages the compiled JavaScript for deployment. For that, we will add a shell script at the root of our app (named dist.sh, as it is referenced from the package.json below) with the following contents:

#!/bin/bash
zip dist/$npm_package_name.zip -r dist package.json package-lock.json .env

Then, we will modify the package.json’s scripts with the following:

{
  ...
  "scripts": {
      "compile": "ts-node build.ts && tsc && sh dist.sh",
       ...
  },
  ...
}

And finally, in our config.yml file under the .elasticbeanstalk folder, we will add the following content before the global declarations:

deploy:
  artifact: dist/npm_package_name.zip

Now, let’s explain a little bit about what we are doing.

The scripts in the package.json file were updated to run the shell script after the compilation is finished.

The shell script grabs everything that was compiled and moves it into a zip file.

Important: $npm_package_name in the shell script refers to the name attribute of package.json. Make sure you type that same name literally in the config.yml file, since the variable will not be expanded there.

Next, in the config.yml, we specify the artifact that we will be deploying to the Elastic Beanstalk environment, so the EB CLI will only grab the zip file and send it. Under the hood, Elastic Beanstalk will unzip it and run the command specified in the environment.
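Putting it all together, a TypeScript deployment now boils down to two commands:

npm run compile   # build, compile, and zip the output into dist/ (via dist.sh)
eb deploy         # upload only the artifact declared in config.yml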

And with that, our TypeScript Express API will be running in the cloud.

Summary

Now that all sections have been covered, I hope everything has been a success and you have deployed your app to the cloud. If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Google Authentication using AWS-Amplify (+ deployment)

Authentication using AWS is a process I covered in a previous post; however, this time we are going to use a tool provided by Amazon called Amplify.

For this tutorial, we are going to create a simple application using Facebook's create-react-app. Then, we will add the authentication layer using AWS-amplify and, finally, add hosting in an S3 bucket. But before that, let's go over some basic concepts.

What is AWS-Amplify?

According to its own site, AWS-amplify is a library for frontend and mobile developers building cloud-based applications, giving them the necessary tools to add multiple cloud features. For this tutorial, we will focus on two main features, storage and authentication, but Amplify provides many more, such as:

  • Analytics

  • API integration

  • Push notifications

  • Cache

  • Among others

What is create-react-app and how to install it?

Create-react-app is the best tool to use whenever you want to start creating a web application with React, and for someone like me who likes TypeScript, it now has the built-in capability to create apps using it.

Installing it on your machine is like installing any global package from npm. Just type "npm install -g create-react-app" and voilà!

There are some dependencies needed, though; for example, you must have at least Node version 6. This library also lets you focus on creating your application instead of dealing with webpack or babel configuration.

Now, let’s start with the real deal, and work on our Google authenticated application.

Create the app

For this tutorial, I will use the following versions:

  • node: 10.15.0

  • npm: 6.4.1

  • create-react-app: 2.1.3

  • aws-amplify: 1.1.19

To create the app, run the following line in your preferred terminal: "create-react-app google-auth-tuto --typescript". This will generate all the necessary code you need to start working.

Running the app

To start using the application, run "npm install" in your terminal to make sure you have all the necessary packages installed. The generated package.json file includes some default scripts; this time we will use the "start" script, so simply run "npm start" and a browser tab will open after the application finishes compiling your code.

npm start

Now that our application is running, we can start using AWS-amplify to add some cloud features, but first, we need to configure amplify on your machine. For that, you can follow the next video, which explains how to do it (taken from the aws-amplify main site).

Configuring amplify in the project

Now that amplify is configured on your machine, we can add it to our application by running "amplify init" in the project root folder. It will prompt you with several questions and then start creating some resources in your account; here is an example of what you will see in your terminal.

amplify init configuration

At the end, if this is the first time you are running aws-amplify, it will create a new profile instead of using an existing one. In this example, I’ve used my profile named ljapp-amplify, so this section might be different for you.

Important: always create different profiles for your AWS accounts. In my case, I have to use multiple accounts for my company's clients, and it makes my work a lot easier.

After the AWS resources have been created, let's add the authentication layer to our app. AWS-amplify has different categories of resources, and authentication is one of them. So, let's add it by running "amplify auth add". Same as before, amplify will ask for some configuration; here is a summary of what you will see.

amplify auth add

The only information that you might be wondering how to get is the Google Web Client Id. For that, please follow the instructions found here, under the “Create a client ID and client secret” section.

Finally, run “amplify push” and this will start creating all the authentication resources in your account.

amplify push

Important: AWS-amplify uses identity pools for 3rd-party integration instead of user pools. Since identity pools don't manage groups, we can only authenticate users. So, if we need to provide specific permissions or roles, we need to use claims (or switch to user pools) and configure everything manually in the AWS console.

Modifying React code

Up until now, we have set up all the foundation in the AWS account via amplify, but we still need to add logic to our React application. For that, we will install two npm packages:

  • npm install aws-amplify

  • npm install aws-amplify-react

Then, we will modify our App.tsx file with the following code.

import React, { Component } from 'react';
import Amplify from 'aws-amplify';
import { withAuthenticator } from 'aws-amplify-react';

import logo from './logo.svg';
import aws_exports from './aws-exports';
import './App.css';

Amplify.configure(aws_exports);

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <p>
            Edit <code>src/App.tsx</code> and save to reload.
          </p>
          <a
            className="App-link"
            href="https://reactjs.org"
            target="_blank"
            rel="noopener noreferrer"
          >
            Learn React
          </a>
        </header>
      </div>
    );
  }
}

const federated = {
  google_client_id: 'SOME_NUMBER_HERE.apps.googleusercontent.com',
};

export default withAuthenticator(App, true, [], federated);

The second parameter of the "withAuthenticator" higher-order component creates a header for our application with some minimal information, like the name of the logged-in user, and also renders the log-out button.

Important: aws-amplify provides some default screens that can be customized, but it also allows us to create our own components for login, registration, and more. This will not be covered in today's tutorial; we will be using the default screens.

As of today, the aws-amplify-react package hasn't been updated with a TypeScript definition, so we will need to add a file that declares it as a module (named aws-amplify-react.d.ts) to avoid TypeScript errors during development. The contents of the file are:

declare module 'aws-amplify-react';

Now that everything is set, we can run our application again, and we will see the following screen.

Amplify login screen

Then, we can log in using Google's button, and after verifying our account, we will get into the application.

User logged into the application

Hosting the application

Now that everything is set up, we can host our application in the cloud with amplify. For that, we will add the hosting feature by running "amplify hosting add"; same as before, some configuration is required.

amplify hosting add

At the end, it will ask you to run "amplify publish", which will create the S3 bucket if it doesn't exist and immediately open a browser tab with the application hosted on that bucket.

Summary

Now that all sections have been covered, I hope the application has been a success and you have created a React application that uses Google authentication and is easily hosted in an S3 bucket in AWS. In an upcoming tutorial, I will talk about using Cognito user pools for 3rd-party authentication.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Creating cron jobs in node.js: a real-life example using BambooHR

Do you have a requirement to run some kind of process every X hours? Wondering how to create scheduled jobs in Node? If that's why you are here, then this post is for you.

This time, I'm going to write about node-cron, an npm package used to schedule tasks that execute at intervals defined by cron expressions. Let's start with some basics:

What’s a cron expression?

A cron expression is a string containing subexpressions that describe the details of the schedule you want to create. Subexpressions are separated by white space, and each has a limited set of allowed values. The cron expression is defined from left to right, and it can contain from 5 to 7 subexpressions (fields, from now on).

The library we selected works with cron expressions of 5 or 6 fields, which break down like this:

  • The first field is for seconds. It is optional and only used when the cron expression has 6 fields. It accepts values from 0 to 59, or the wildcard ( * ).

  • The second field is for minutes. It accepts values from 0 to 59, or the wildcard ( * ).

  • The third is for hours. It accepts values from 0 to 23, or the wildcard ( * ).

  • The fourth is for the day of the month. It accepts values from 1 to 31, or the wildcard ( * ).

  • The fifth is for the month. It accepts values from 1 to 12, the names of the months, or the wildcard ( * ).

  • The sixth and last is for the day of the week. It accepts values from 0 to 7, the name of each day, or the wildcard ( * ).

Besides the accepted values, each field supports special operators that allow for more complex scenarios, for example:

  • Run at the 10th and 20th minute of every hour: 10,20 * * * *

  • Run every 2 hours, on the hour: 0 */2 * * *

  • Run every minute on Sundays: * * * * Sunday
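If you want to double-check an expression before scheduling it, node-cron exposes a validate helper. A quick sketch:

import { validate } from 'node-cron';

// validate returns true when node-cron can parse the expression
console.log(validate('10,20 * * * *')); // true
console.log(validate('61 * * * *'));    // false: minutes only go up to 59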

About node-cron

As mentioned before, we are using node-cron, an npm package with more than 50,000 weekly downloads, currently (at the time of this post) on version 2.0.3. On GitHub, it has 713 stars, 10 contributors, and 20 releases since the first one in February 2016.

Since we are going to work in TypeScript, I also suggest installing the types package for node-cron. You can install it by running:

npm install @types/node-cron --save-dev

Setting up node-cron

Creating a scheduled task with node-cron is really easy, and the basic examples from the package's documentation explain it well. Here is one of them:

var cron = require('node-cron');

cron.schedule('0 1 * * *', () => {
  console.log('Running a job at 01:00 at America/Sao_Paulo timezone');
}, {
  scheduled: true,
  timezone: "America/Sao_Paulo"
});

However, this example falls short in a more real-life scenario, like retrieving information from a data source, manipulating it, and then inserting it into another database. This is a typical process when you are doing some kind of synchronization between two systems, and actually, that is exactly the example we are building today.

Our use case will be to retrieve information from a system called BambooHR (used to manage a company's employees, salaries, vacations, etc.), compare it with data from another system, and then insert, update, or delete the differences. So let's start with the cron job.

The cron job

First, we are going to create a class that contains all the logic of the tasks to be run; in our case, it is called BambooCron. Here is the code for it:

import { schedule, ScheduleOptions, ScheduledTask } from 'node-cron';
import { parseExpression } from 'cron-parser';
import _ from 'lodash';
import moment from 'moment';
import { BambooService } from '../data-access/bamboo/bamboo.service';
import { UserService } from '../api/services/user.service';
import { TimeOffService } from '../api/services/timeOff.service';
import { IHumanResourceManagerService } from '../data-access/IHumanResourceManagerService';

export default class BambooCron {
    private options: ScheduleOptions = {
        scheduled: false
    };
    private task: ScheduledTask;
    private bambooService: IHumanResourceManagerService;
    private usersService: UserService;
    private timeOffsService: TimeOffService;

    constructor() {
        this.task = schedule(process.env.CRON_EXPRESSION
            , this.executeCronJob
            , this.options);
    }

    public startJob() {
        this.task.start();
    }

    private executeCronJob = async () => {
        const format = 'YYYY-MM-DD hh:mm:ss';
        console.info(`Starting cron job at: ${moment().format(format)}`);

        this.usersService = new UserService();
        this.bambooService = new BambooService();
        this.timeOffsService = new TimeOffService();
        await this.processEmployees();
        await this.processTimeOff();

        const cronDate = parseExpression(process.env.CRON_EXPRESSION).next();
        console.info(`Finished cron job. Next iteration at: ${moment(cronDate.toDate()).format(format)}`);
    }

    private async processEmployees() {
        const employees = await this.bambooService.getEmployees();
        const users = await this.usersService.getAllUser();
        const usersToAdd = _.differenceWith(employees, users, (employee, user) => {
            return employee.id === user.bambooId;
        });
        const usersToDelete = _.differenceWith(users, employees, (user, employee) => {
            return employee.id === user.bambooId;
        });
        usersToAdd.forEach(async (employee) => {
            await this.usersService.saveUser(employee);
        });
        usersToDelete.forEach(async (user) => {
            await this.usersService.removeUser(user);
        });
    }

    private async processTimeOff() {
        const bambooTimeOffs = await this.bambooService.getTimeOffs();
        const dbTimeOffs = await this.timeOffsService.getAllFromProvider('bamboo');
        const users = await this.usersService.getAllUser();
        const timeOffsToAdd = _.differenceWith(bambooTimeOffs, dbTimeOffs, (bambooTimeOff, dbTimeOff) => {
            return bambooTimeOff.id === dbTimeOff.bambooId;
        });
        const timeOffsToDelete = _.differenceWith(dbTimeOffs, bambooTimeOffs, (dbTimeOff, bambooTimeOff) => {
            return bambooTimeOff.id === dbTimeOff.bambooId;
        });
        timeOffsToAdd.forEach(async (timeOff) => {
            const user = users.find(x => x.bambooId === timeOff.employeeId);
            if (user)
                await this.timeOffsService.saveTimeOff(timeOff, user.userNm);
        });
        timeOffsToDelete.forEach(async (user) => {
            await this.timeOffsService.removeTimeOff(user);
        });
    }
}

Let's explain this class by sections. First, the constructor is where the task is scheduled. The schedule method, imported from node-cron, receives 3 parameters: the cron expression, retrieved from the environment file; the callback with the job code; and lastly, the scheduler options (in our case, the only option we set is that the job won't start immediately).

The startJob method is a simple one: since we specified that the job is not going to start as soon as we schedule it, we need a way to start it programmatically.

The following method is executeCronJob; this is where everything happens, at least at a high level. From here, we initialize all the services used to retrieve or insert information, and we also print some informational messages to the console, like the time the task started and when the job will run next.

The next two methods are similar but work on different entities, so let's explain the shared flow. The first step is to retrieve all the information needed by calling methods on the services instantiated in executeCronJob. Then, we compare the data using lodash's differenceWith method (another famous package). Finally, from the resulting arrays, we either add or delete information in the database by calling the services again (no updates are managed in this example).

A big design improvement

As I'm writing this post, I'm noticing that the processEmployees and processTimeOff methods are, in essence, the same thing, so they can be abstracted into another method that encompasses both implementations. Feel free to design it differently.
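Here is a rough sketch of what that abstraction could look like (syncEntities and its parameter names are hypothetical, not part of the actual codebase):

import _ from 'lodash';

// Both flows reduce to: diff two collections with a comparator, then
// apply an "add" handler and a "remove" handler to the differences.
async function syncEntities<TSource, TTarget>(
    source: TSource[],
    target: TTarget[],
    matches: (s: TSource, t: TTarget) => boolean,
    add: (s: TSource) => Promise<void>,
    remove: (t: TTarget) => Promise<void>,
) {
    const toAdd = _.differenceWith(source, target, matches);
    const toRemove = _.differenceWith(target, source, (t, s) => matches(s, t));
    for (const item of toAdd) await add(item);
    for (const item of toRemove) await remove(item);
}

With this in place, processEmployees becomes a single call that passes the comparator (employee, user) => employee.id === user.bambooId together with the save and remove callbacks, and processTimeOff does the same with its own comparator.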

The Bamboo service

Now, we are going to work with the service that retrieves information from Bamboo.

import fetch from 'node-fetch';
import moment from 'moment';
import { IHumanResourceManagerService } from '../IHumanResourceManagerService';
import { Employee } from './employee';
import { VacationTimeOff } from './vacationTimeOff';

export class BambooService implements IHumanResourceManagerService {
    private bambooHeaders = {
        method: 'GET',
        headers: { 'Accept': 'application/json' }
    };

    private getBaseUrl(endpoint) {
        const key = process.env.bambooKey;
        const baseEndpoint = ':x@api.bamboohr.com/api/gateway.php';
        const subdomain = process.env.bambooSubDomain;
        return `https://${key}${baseEndpoint}/${subdomain}/v1/${endpoint}`;
    }

    public async getEmployees(): Promise<Employee[]> {
        const url: string = this.getBaseUrl('employees/directory');
        try {
            const response = await fetch(url, this.bambooHeaders);
            const directory = await response.json();
            return directory.employees
                .filter(employee => employee.workEmail)
                .map((employee) => {
                return {
                    ...employee,
                    id: parseInt(employee.id),
                }
            });
        } catch (error) {
            throw error;
        }
    }

    public async getTimeOffs(): Promise<VacationTimeOff[]> {
        const today = moment();
        const startDate = today.format('YYYY-MM-DD');
        const endDate = today.add(3, 'M').startOf('month').format('YYYY-MM-DD');
        const url: string = this.getBaseUrl(`time_off/requests/?status=approved&start=${startDate}&end=${endDate}`);
        try {
            const response = await fetch(url, this.bambooHeaders);
            const timesOff = await response.json();
            return timesOff.map((timeOff) => {
                return {
                    ...timeOff,
                    id: parseInt(timeOff.id),
                    employeeId: parseInt(timeOff.employeeId),
                };
            });
        } catch (error) {
            throw error;
        }
    }
}

Again, let’s review this by sections. First, we create some reusable headers and a getBaseUrl method. This method will create the URL that will be used to connect to Bamboo; this URL is created by reading some configurations from an environment file.

Then, we have two methods that get the information: one for the employees and another for the time offs from Bamboo. Some logic is applied here to limit the information retrieved; for example, for the time offs we only want the requests for the upcoming 3 months, since anything prior is not needed by our target system.

The database services

From the BambooCron class, we also use services that connect to our database. In our system, we are using TypeORM (which I talked about previously here), an ORM with MySQL integration and out-of-the-box TypeScript support. For this post, I'm only going to show the service that manages users; all of them follow a similar approach, so you can extrapolate to the rest of the entities.

import { BaseService } from "./base.service";
import { Employee } from "../../data-access/bamboo/employee";
import { User } from "../../data-access/entity/user";


export class UserService extends BaseService {
  public getAllUser = async () => {
    return this.dbContext.users.find({
      where: { statusTxt: 'active' }
    });
  }

  public async saveUser(employee: Employee): Promise<User> {
    let newUser = this.createUser(employee);
    try {
      await this.dbContext.users.insert(newUser);
      return newUser;
    } catch (error) {
      throw error;
    }
  }

  public async removeUser(user: User) {
    try {
      user.statusTxt = <any>{ statusTxt: 'inactive' };
      await this.dbContext.users.save(user);
    } catch (error) {
      throw error;
    }
  }

  private createUser(employee: Employee): User {
    const userNm = employee.workEmail.substring(0, employee.workEmail.indexOf('@'));
    const user: User = this.dbContext.users.create({
      bambooId: employee.id,
      email: employee.workEmail,
      fullNm: employee.displayName,
      userNm: userNm,
      statusTxt: <any>{ statusTxt: 'active' }
    });
    return user;
  }
}

The user service is pretty straightforward. It has some CRUD operations: getting active users, saving new users, and removing them (a soft delete by changing the status). It extends a BaseService class, which looks like this:

import { DbContext } from "../../data-access/dbcontext";

export class BaseService {
  protected dbContext: DbContext = new DbContext();
}

This one is even simpler, since it only exposes a DbContext property. This property is available to every service that inherits from the class, and it basically grants the ability to use TypeORM connections to execute queries or transactions against the database. Finally, this is what the DbContext class looks like:

import { Connection, createConnection, EntityManager, Repository } from "typeorm";
import { User } from "./entity/user";

export class DbContext {
    private connection: Connection;
    constructor (){
        this.init();
    }

    private async init(){
        try {
            this.connection = await createConnection({
                "name": `connection-${new Date().getTime()}`,
                "type": "mysql",
                "host": ANY_HOST_HERE,
                "port": 3306,
                "username": ANY_USERNAME_HERE,
                "password": ANY_PASSWORD_HERE,
                "database": ANY_DATABASE_HERE,
                "synchronize": false,
                "logging": true,
                "entities": [
                    User
                ]
            });
        } catch (error) {
            throw error;
        }
    }
    
    public get manager() : EntityManager {
        return this.connection.manager;
    }

    public get users(): Repository<User>{
        return this.manager.getRepository(User);
    }
}

The DbContext class shown here is a reduced version of the one I use; mine has more entities, but the rest of the design is the same. First, we have an init method that creates a connection every time the DbContext is instantiated; this connection receives all the entities and database information needed to create it.

Then, for every entity, we expose a getter property that returns the repository TypeORM will map it to.

Finally, where do we execute all of this code? Since it needs to be started as soon as the Node service starts, we add the code to the Express index.ts file, like this:

// imports and other setup here

const cron = new BambooCron();
// some logic here to prepare the service or other things
cron.startJob();

const port = parseInt(process.env.PORT);
export default new Server()
  .router(routes)
  .listen(port);

Summary

Finally, we have arrived at the end, and if you are still here, it means you have created all the code necessary to run a scheduled task using node-cron and TypeORM. This is just one of the many use cases this design can cover, so adapt it however best fits the case you have to solve.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Companies Agility: A report by Scrum Alliance

A couple of days ago, the Scrum Alliance published a report called “The Elusive Agile Enterprise: How the Right Leadership Mindset, Workforce and Culture Can Transform Your Organization”.

In this report, Scrum Alliance and Forbes surveyed more than 1,000 executives to determine how important agility is to an organization, the degree of success of transformation efforts, and how much progress companies have made implementing these kinds of frameworks. Among the respondents, 10% were from Latin America (I'm from Costa Rica), so I would have liked to read results specific to this region, but the overall results are equally interesting. Also, those executives weren't only from technology-oriented companies but also from other areas; even so, I would like to comment on some aspects of the report from an IT perspective.

Geography distribution

Personal creation based on report data

One key thing to notice is that they want to measure the agility of a company, not the agile framework it uses (still, 77% of these companies leverage Scrum as their main framework). But what are agility and being agile? Agility is an organization's ability to respond to market changes and still deliver value to customers, whereas agile is an organizational approach and mindset defined by the values and principles detailed in the Agile Manifesto.

Based on this premise, the report defines several benefits that are obtained after achieving a high level of agility within the company, such as:

  1. Faster time to market

  2. Faster innovation

  3. Improved financial results

  4. Improved employee morale

Organizational changes

To achieve these benefits, the respondents affirmed that several changes needed to be applied to redefine the company's processes and reflect the agile mindset. In the report, Scrum Alliance mentions the top 7 changes; in order not to spoil the article, I will comment on just some of them, along with experiences I've seen applied first-hand and succeed.

  • Introduce an agile mindset

Being agile is not only about using Scrum (or any other flavor), performing the required ceremonies or events, and delivering software. The processes are definitely important, but the people are equally important, or more so.

Having upper management that believes in agile methodologies and promotes them facilitates the transition immensely. Implementing a change from the bottom up is nearly impossible, but when it comes from the top, it's a quick, fluid, and flexible process.

I worked for one organization that had Scrum partially implemented, but not the agile mindset. They performed some of the ceremonies when possible and defined their sprints to produce incremental products but, on the other hand, had a requirements process that closely resembled waterfall, and releases weren't done until the end of development. The development process was quite flexible and we were able to adapt to some changes, but there were still some gears that didn't feel right.

The main problem was that the customers weren't involved in the development process at all, yet they expected us to exceed their expectations. This wasn't going to work, and in fact, it didn't!

To remedy this, we tried to create backlogs that allowed customers to give their input and define what needed to be done. It worked for some things, but there still wasn't much involvement from the customers, even though we invited them. At that point, there wasn't a single point of authority, someone who could act as the Product Owner (something fundamental), so we had to facilitate creating one.

To do this, we talked with the managers of each area involved in the application and explained to them what changes were needed and the possible benefits. They trusted us, and a new group was formed that acted as the Product Owner; this group consisted of a representative from each area, and even though this is not the standard Scrum process, it worked much better and we got far more feedback than before.

The agile mindset was introduced, little by little, to obtain success.

  • Create incentives that promote agility

In another organization, the agile mindset was much stronger. Some processes were already defined, customers agreed with the methodology and got involved, the ceremonies were executed, and the benefits were visible. Even so, this organization needed some optimization, because the processes weren't applied uniformly across all development teams.

To solve this, people from the organization created a group to lead all the agile efforts, and its first big task was to standardize the process across every team. Among many options, the winning idea was to create a contest, but not a simple one.

The contest consisted of having all teams follow the organization's process and Scrum best practices. There were 4 phases, each with a common goal, and each team earned points depending on how well the practices and process were followed. For example, in the first phase the DSUs (daily stand-ups) were the main goal: a team earned one point for every DSU done in less than 10 minutes, and using the parking-lot technique granted extra points. In the second phase, backlog grooming, sprint planning, and sprint retrospective events were evaluated. The remaining phases evaluated customer involvement and product deliveries.

At the end of the contest, a winner was selected from all teams and some prizes were given, but the real outcome was that all the teams managed to follow the same process and practices.

  • Train workforce

As I mentioned before, people are the most important factor in any organizational change. People determine how quickly the change is applied, but there will always be blockers that need to be managed, for example: resistance to change, lack of communication, and ignorance.

In my opinion, training people to work with Scrum is mandatory, and there are really clever activities that embody the agile mindset, demonstrate how Scrum is supposed to work, and make people enjoy the time spent learning about it. For example, I've been in trainings that used Lego to build a city or a tower made of marshmallows and spaghetti, but the most recent training I had used stacks of cards to simulate a development process.

Key findings in the report

The report presents some key findings from the survey after it was executed and analyzed. All of them are interesting but, as with the organizational changes, I'll comment on just a few.

  • “Many organizations are adopting an ad-hoc approach to agile: 21% of respondents use Agile when/where needed, and 23% use it within specific functions. However, adoption needs to be enterprise-wide (and consistent) to realize real results.”

I agree that adoption must be enterprise-wide, and I want to believe it, but the reality is different. As the same survey shows, fewer than 10% of companies have adopted agile in every area, and that's because it's not a simple process. An ad-hoc approach is a middle-ground solution that reduces costs while still obtaining benefits.

Agile Adoption

Personal creation based on report data
  • “Not everyone eagerly embraces agility: Longtime employees (29%) are the biggest detractors of organizational agility and may stand in the way of widespread adoption. This is a prime opportunity for senior-level executives to address employee concerns and shift mindset.”

It's true that longtime employees are one of the biggest detractors, but that's because the resistance to change is stronger in them (Star Wars jokes aside :). However, I wouldn't limit this to just one group of people. At one time, I had someone assigned who didn't believe in Scrum due to a previous bad experience where the execution was done incorrectly; for example, Sprint Plannings of 3 hours, DSUs of more than 30 minutes, and other bad practices.

  • “Many organizations eliminate hierarchy in the hopes of increasing agility: 44% of survey respondents have introduced a flatter structure to become more Agile. But that may be premature; Agile is about creating the right dynamics for teams to iterate quickly, not simply moving boxes around on organizational charts.”

For this one, I believe that changing the structure is good: simplifying it and making it easier to work with. However, I don't believe you need to flatten every structure. There are frameworks like LeSS (Large-Scale Scrum) that help make organizations leaner and scale Scrum across all of them.

Summary

Moving to an agile process is not easy, as evidenced by this survey. There will always be changes required, training needed, and a real need for good management. If you are interested, read the whole report from the Scrum Alliance; there are really good insights to incorporate into your own company.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.