Deploying a REST API using Express to Elastic Beanstalk

A couple of weeks ago, I was trying to deploy a small service, but I didn’t want to start building the full infrastructure suite on Fargate yet, so the best option was Elastic Beanstalk. For a couple of days I did the deployment manually, but after reading a bit more about Beanstalk, I found a CLI that simplifies a lot of the deployment process.

Here is some documentation if you want to read about it, but in the meantime, I will write a bit about how I used it. It might be useful to you if, like me, you are looking for a quick deployment and don’t want to invest time in configuring a CI/CD pipeline or a more complex setup.

Prerequisites

The Express application was built using:

  1. Node version 10.15.0

  2. generator-express-no-stress-typescript version 4.2.1

Installing the CLI

To install the CLI, there are two approaches: Homebrew or pip. In my case, I used Brew because it’s so much simpler (and also because the other way failed for me :).

Just type the following and brew should take care of everything for you:

brew install awsebcli

Configuring the CLI

I’m going to assume you are already using AWS profiles on your machine; if not, I recommend reading about them here.

To start the configuration in your project, navigate to its root folder and type the following command:

eb init --profile [YOUR PROFILE NAME]

It will start a series of questions in the following sequence:

  1. Region: in our case, we selected us-east-1

  2. Elastic Beanstalk Application: it will prompt you to create a new app if none exists yet.

  3. Platform: in our case, we selected node.js

  4. SSH configuration

  5. Keypair: to connect to the instance via SSH, you will need a key pair. The CLI will help you create one as well.

Important: there is a step asking if you want to use CodeCommit, which would help you create a pipeline; however, since we are using other source control tools, we ignored it.

Once the CLI finishes, you will see a new folder in your project called .elasticbeanstalk (notice the dot at the beginning). If you want to read more about configuration options, go here and here.

First deployment

Now that our app is configured (at least for starters), we need to do a deployment. For that, we create an environment with the command:

eb create [ENVIRONMENT NAME]

We used DEV, and after it finished, we need to update some configuration in the AWS console: the Node command, which is the command the server will run whenever a new deployment is done.

Elastic Beanstalk environment configuration

Just one more thing: as you can see, the environment uses nginx as its base server, so make sure that the application is listening on port 8081 (look at the documentation here).
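In case it helps, here is a minimal sketch of how the Express entry point can pick up the port; the names and the fallback value are illustrative, not taken from my actual project:

import express from 'express';

const app = express();

// Beanstalk's nginx proxy forwards traffic to the app on port 8081 by default,
// so fall back to it when no PORT variable is provided.
const port = parseInt(process.env.PORT || '8081', 10);

app.listen(port, () => console.log(`Listening on port ${port}`));

With that in place, we can finally run: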

eb deploy

Important: I had a really weird issue when I started making changes and deploying them: it seemed like the deployment never picked up the latest changes. After reading on the internet for some time, I found this post; in one of his IMPORTANT notes, Jared mentions that the changes must be committed to Git for them to be deployed. I don’t have the answer to why this happens, but it seems important to note.
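So the deployment loop I ended up using, assuming the project is under Git, looks like this:

git add -A
git commit -m "Describe your change"
eb deploy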

If you are using plain JavaScript, this could be all: the CLI will give you the URL to connect to, and whatever you have built should be ready for use. However, as in many of my other posts, I like TypeScript, and this isn’t the end of our journey.

Deploying Typescript

With TypeScript, the deployment is not as straightforward, so we need to add one more step to our config.yml file and to the package.json.

Important: these steps are necessary because Elastic Beanstalk does not install dev dependencies on its machines, so we need to work around that with this new process.

First, we need something that compiles the TypeScript into JavaScript and packages the result. For that, we will add a shell script (dist.sh, referenced below) in the root of our app with the following contents:

zip dist/$npm_package_name.zip -r dist package.json package-lock.json .env

Then, we will modify the package.json’s scripts with the following:

{
  ....
  "scripts": {
      "compile": "ts-node build.ts && tsc && sh dist.sh",
       ...
  },
  ...
}

And finally, in our config.yml file under the .elasticbeanstalk folder, we will add the following content before the global declarations:

deploy:
  artifact: dist/[YOUR PACKAGE NAME].zip

Now, let’s explain a little bit about what we are doing.

The scripts in the package.json file were updated to run the shell script after the compilation finishes.

The shell file grabs everything that was compiled and moves it into a zip file.

Important: the “npm_package_name” variable in the shell file refers to the name attribute in package.json. Make sure that in the config.yml file you type that same name.

Next, in the config.yml, we specify the file that we will be deploying to the Elastic Beanstalk environment. That way, the EB CLI will only grab the zip file and send it. Under the hood, Elastic Beanstalk will unzip it and run the command specified in the environment.
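For context, this is roughly the shape of the resulting config.yml, trimmed to the relevant keys; every value below is a placeholder and will differ per project:

deploy:
  artifact: dist/[YOUR PACKAGE NAME].zip
global:
  application_name: [YOUR APP NAME]
  default_platform: Node.js
  default_region: us-east-1
  profile: [YOUR PROFILE NAME]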

Finally, run the eb deploy command again, and now our TypeScript Express API will run in the cloud.

Summary

Now that all sections have been covered, I hope the deployment has been a success and your app is now in the cloud. If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Google Authentication using AWS-Amplify (+ deployment)

Authentication using AWS is a process I covered in a previous post; this time, however, we are going to use a tool provided by Amazon called Amplify.

For this tutorial, we are going to create a simple application using Facebook’s create-react-app. Then, we will add the authentication layer using AWS-amplify and, finally, add hosting in S3 buckets. But before that, let’s cover some basic concepts.

What is AWS-Amplify?

According to their own site, AWS-amplify is a library for frontend and mobile developers building cloud-based applications, providing them with the tools to add multiple cloud features. For this tutorial, we will focus on two main features, storage and authentication, but Amplify provides many more, like:

  • Analytics

  • API integration

  • Push notifications

  • Cache

  • Among others

What is create-react-app and how to install it?

Create-react-app is the best tool to use whenever you want to start a web application with React, and for someone like me who likes TypeScript, it now has built-in support for it.

Installing it on your machine is like installing any global package from npm. Just type “npm install -g create-react-app“ and voilà!

There are some dependencies, though; for example, you must have at least Node version 6. This library also allows you to focus on creating your application instead of dealing with webpack or babel configuration.

Now, let’s start with the real deal, and work on our Google authenticated application.

Create the app

For this tutorial, I will use the following versions:

  • node: 10.15.0

  • npm: 6.4.1

  • create-react-app: 2.1.3

  • aws-amplify: 1.1.19

To create the app, run the following line in your preferred terminal: “create-react-app google-auth-tuto --typescript“. This will generate all the code you need to start working.

Running the app

To start using the application, run “npm install” in your terminal to make sure all the necessary packages are installed. Then, in the generated package.json file, some scripts have been created by default; this time we will use the “start” script, so simply run “npm start” and it will open a tab in your browser once the application finishes compiling your code.

npm start

Now that our application is running, we can start using AWS-amplify to add some cloud features, but first, we need to configure amplify on your machine. For that, you can follow the next video, which explains how to do it (taken from the aws-amplify main site).

Configuring amplify in the project

Now that amplify is configured on your machine, we can add it to our application by running “amplify init” in your project root folder. It will prompt you with several questions, and after that it will start creating some resources in your account. Here is an example of what you will see in your terminal.

amplify init configuration

At the end, if this is the first time you are running aws-amplify, it will create a new profile instead of using an existing one. In this example, I’ve used my profile named ljapp-amplify, so this section might look different for you.

Important: always create different profiles for your AWS accounts. In my case, I have to use multiple accounts for my company’s clients, so it makes my work a lot easier.

After the AWS resources have been created, let’s add the authentication layer to our app. AWS-amplify has different categories of resources, and authentication is one of them. So, let’s add it by running “amplify auth add“. Same as before, amplify will ask for some configuration; here is a summary of what you will see.

amplify auth add

The only piece of information you might be wondering how to get is the Google Web Client Id. For that, please follow the instructions found here, under the “Create a client ID and client secret” section.

Finally, run “amplify push”, and this will start creating all the authentication resources in your account.

amplify push

Important: AWS-amplify uses identity pools for 3rd-party integration instead of user pools. Since identity pools don’t manage groups, we can only authenticate users. So, if we need to provide specific permissions or roles, we need to use claims (or switch to user pools) and configure it manually in the AWS console.

Modifying React code

Up to now, we have set up all the foundations in the AWS account via amplify, but we still need to add logic to our React application. For that, we will install two npm packages:

  • npm install aws-amplify

  • npm install aws-amplify-react

Then, we will modify our App.tsx file with the following code.

import React, { Component } from 'react';
import Amplify from 'aws-amplify';
import { withAuthenticator } from 'aws-amplify-react';

import logo from './logo.svg';
import aws_exports from './aws-exports';
import './App.css';

Amplify.configure(aws_exports);

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <p>
            Edit <code>src/App.tsx</code> and save to reload.
          </p>
          <a
            className="App-link"
            href="https://reactjs.org"
            target="_blank"
            rel="noopener noreferrer"
          >
            Learn React
          </a>
        </header>
      </div>
    );
  }
}

const federated = {
  // Client ID obtained from the Google developer console (see the section linked above)
  google_client_id: 'SOME_NUMBER_HERE.apps.googleusercontent.com',
};

// The second argument (true) enables the greetings header with the sign-out button
export default withAuthenticator(App, true, [], federated);

The second parameter of the “withAuthenticator” higher-order component will create a header for our application with some minimal information, like the name of the logged-in user, and also renders the sign-out button.

Important: aws-amplify provides some default screens that can be customized, but it also allows us to create our own components for login, registration, and so on. That won’t be covered in today’s tutorial; we will use the default screens.

As of today, the package aws-amplify-react doesn’t ship with TypeScript definitions, so we need to add a file that declares it as a module (named aws-amplify-react.d.ts) to avoid TypeScript errors during development. The contents of the file are:

declare module 'aws-amplify-react';

Now that everything is set, we can run our application again and we will see the following screen.

Amplify login screen

And then, we can log in using the Google button, and after verifying our account, we will get into the application.

User logged into the application

Hosting the application

Now that everything is set up, we can host our application in the cloud with amplify. For that, we will add the hosting feature by running “amplify hosting add“, and same as before, some configuration is required.

amplify hosting add

Shortly after, it will ask you to run amplify publish, which will create the S3 bucket if it doesn’t exist and immediately open a browser tab with the application hosted on that bucket.

Summary

Now that all sections have been covered, I hope everything has been a success and you have created a React application that uses Google authentication and is easily hosted in S3 buckets on AWS. In an upcoming tutorial, I will talk about using Cognito User Pools to do 3rd-party authentication.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Creating cron jobs in node.js: a real-life example using BambooHR

Do you have a requirement to run some kind of process every X number of hours? Wondering how to create scheduled jobs in Node? If that’s why you are here, then this post is for you.

This time, I’m going to write about node-cron, an npm package used to schedule tasks that execute at intervals defined by cron expressions. Let’s start with some basics:

What’s a cron expression?

A cron expression is a string containing subexpressions that describe the details of the schedule you want to create. Every subexpression is separated by a white space and has a limited set of options. The cron expression is read from left to right, and it can contain from 5 to 7 subexpressions (fields from now on).

The library we selected works with cron expressions of 5 or 6 fields, like this:

  • The first field is for seconds. It is optional and only used when the cron expression has 6 fields. It accepts values from 0 to 59, or the wildcard ( * ).

  • The second field is for minutes. It accepts values from 0 to 59, or the wildcard ( * ).

  • The third is for hours. It accepts values from 0 to 23, or the wildcard ( * ).

  • The fourth is for day of month. It accepts values from 1 to 31, or the wildcard ( * ).

  • The fifth is for month. It accepts values from 1 to 12, the names of the months, or the wildcard ( * ).

  • The sixth and last is for day of week. It accepts values from 0 to 7, the names of the days, or the wildcard ( * ).

Besides the accepted values, each field can use special operators that allow for more complex scenarios, for example (you can sanity-check these with node-cron, as shown right after the list):

  • Run at the 10th and 20th minute of every hour: 10,20 * * * *

  • Run every 2 hours, at minute 0: 0 */2 * * *

  • Run at midnight every Sunday: 0 0 * * Sunday
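node-cron ships a validate function that comes in handy for checking expressions like the ones above; a quick sketch:

import { validate } from 'node-cron';

// Each call prints true if node-cron accepts the expression
console.log(validate('10,20 * * * *'));  // minutes 10 and 20 of every hour
console.log(validate('0 */2 * * *'));    // minute 0, every 2 hours
console.log(validate('0 0 * * Sunday')); // midnight every Sunday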

About node-cron

As mentioned before, we are using node-cron, an npm package with more than 50,000 weekly downloads, currently (as of the time of this post) on version 2.0.3. On GitHub, it has 713 stars, 10 contributors and 20 releases since February 2016, when the first release came out.

Since we are going to work in TypeScript, I also suggest installing the types package for node-cron. You can install it by running:

npm install @types/node-cron --save-dev

Setting up node-cron

Creating a scheduled task with node-cron is really easy, and the basic examples from the package’s documentation explain it well. Here is one of the examples from the page:

var cron = require('node-cron');

cron.schedule('0 1 * * *', () => {
  console.log('Running a job at 01:00 at America/Sao_Paulo timezone');
}, {
  scheduled: true,
  timezone: "America/Sao_Paulo"
});

However, this example falls short in a more real-life scenario, like retrieving information from a data source, manipulating it, and then inserting it into another database. This is a typical case when you are doing some kind of synchronization between two systems, and it’s exactly the example we are building today.

Our use case will be to retrieve information from a system called BambooHR (used to manage a company’s employees, salaries, vacations, etc.), compare it with data from another system, and then insert, update or delete the differences. So let’s start with the cron job.

The cron job

First, we are going to create a class that contains all the logic of the tasks to be run; in our case it’s called BambooCron. Here is the code for it:

import { schedule, ScheduleOptions, ScheduledTask } from 'node-cron';
import { parseExpression } from 'cron-parser';
import _ from 'lodash';
import moment from 'moment';
import { BambooService } from '../data-access/bamboo/bamboo.service';
import { UserService } from '../api/services/user.service';
import { TimeOffService } from '../api/services/timeOff.service';
import { IHumanResourceManagerService } from '../data-access/IHumanResourceManagerService';

export default class BambooCron {
    private options: ScheduleOptions = {
        scheduled: false
    };
    private task: ScheduledTask;
    private bambooService: IHumanResourceManagerService;
    private usersService: UserService;
    private timeOffsService: TimeOffService;

    constructor() {
        this.task = schedule(process.env.CRON_EXPRESSION
            , this.executeCronJob
            , this.options);
    }

    public startJob() {
        this.task.start();
    }

    private executeCronJob = async () => {
        const format = 'YYYY-MM-DD hh:mm:ss';
        console.info(`Starting cron job at: ${moment().format(format)}`);

        this.usersService = new UserService();
        this.bambooService = new BambooService();
        this.timeOffsService = new TimeOffService();
        await this.processEmployees();
        await this.processTimeOff();

        const cronDate = parseExpression(process.env.CRON_EXPRESSION).next();
        console.info(`Finished cron job. Next iteration at: ${moment(cronDate.toDate()).format(format)}`);
    }

    private async processEmployees() {
        const employees = await this.bambooService.getEmployees();
        const users = await this.usersService.getAllUser();
        const usersToAdd = _.differenceWith(employees, users, (employee, user) => {
            return employee.id === user.bambooId;
        });
        const usersToDelete = _.differenceWith(users, employees, (user, employee) => {
            return employee.id === user.bambooId;
        });
        usersToAdd.forEach(async (employee) => {
            await this.usersService.saveUser(employee);
        });
        usersToDelete.forEach(async (user) => {
            await this.usersService.removeUser(user);
        });
    }

    private async processTimeOff() {
        const bambooTimeOffs = await this.bambooService.getTimeOffs();
        const dbTimeOffs = await this.timeOffsService.getAllFromProvider('bamboo');
        const users = await this.usersService.getAllUser();
        const timeOffsToAdd = _.differenceWith(bambooTimeOffs, dbTimeOffs, (bambooTimeOff, dbTimeOff) => {
            return bambooTimeOff.id === dbTimeOff.bambooId;
        });
        const timeOffsToDelete = _.differenceWith(dbTimeOffs, bambooTimeOffs, (dbTimeOff, bambooTimeOff) => {
            return bambooTimeOff.id === dbTimeOff.bambooId;
        });
        timeOffsToAdd.forEach(async (timeOff) => {
            const user = users.find(x => x.bambooId === timeOff.employeeId);
            if (user)
                await this.timeOffsService.saveTimeOff(timeOff, user.userNm);
        });
        timeOffsToDelete.forEach(async (user) => {
            await this.timeOffsService.removeTimeOff(user);
        });
    }
}

Let’s explain this class by sections. First, the constructor is where the task gets scheduled. The schedule method, imported from node-cron, receives 3 parameters: the cron expression (retrieved from the environment file), the callback with the job code and, lastly, some scheduler options (in our case, the only option we set is that it won’t start immediately).

The startJob method is a simple one: since we specify that the job is not going to start as soon as we schedule it, we need a way to start it programmatically.

The next method is executeCronJob; this is where everything happens, at least at a high level. From here, we initialize all the services used to retrieve or insert information, and we also print some informational messages to the console, like the time the task is running and when the next run will be.

The last two methods are similar but work on different entities, so let’s explain the flow for each one. The first step is to retrieve all the information needed by calling methods on the services instantiated in executeCronJob. Then, we compare the data using lodash’s differenceWith method (another famous package). And finally, from the resulting arrays, we either delete or add information in the database by calling the services again (no updates are managed in this example).
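If you haven’t used differenceWith before, here is a tiny illustration with made-up data, independent of the project:

import _ from 'lodash';

const employees = [{ id: 1 }, { id: 2 }];
const users = [{ bambooId: 2 }];

// Items from the first array that have no match in the second one
const usersToAdd = _.differenceWith(employees, users,
    (employee, user) => employee.id === user.bambooId);

console.log(usersToAdd); // [ { id: 1 } ]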

A big design improvement

As I’m writing this post, I’m noticing that the methods processEmployees and processTimeOff are, in essence, the same thing, so they could be abstracted into a single method that encompasses both implementations. Feel free to design it differently.
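Here is a sketch of what that abstraction could look like; the generics and names are mine, not what the project actually uses:

import _ from 'lodash';

// Generic sync: anything in `source` without a match in `target` is added,
// anything in `target` without a match in `source` is removed.
async function syncEntities<S, T>(
    source: S[],
    target: T[],
    matches: (s: S, t: T) => boolean,
    add: (s: S) => Promise<void>,
    remove: (t: T) => Promise<void>,
): Promise<void> {
    const toAdd = _.differenceWith(source, target, matches);
    const toRemove = _.differenceWith(target, source, (t, s) => matches(s, t));
    // Unlike forEach with async callbacks, Promise.all actually waits for every call
    await Promise.all(toAdd.map(add));
    await Promise.all(toRemove.map(remove));
}

With it, processEmployees would reduce to a single call passing the employees, the users, the id/bambooId comparison, and the save/remove service calls.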

The bamboo service

Now, we are going to work with the service that retrieves information from Bamboo.

import fetch from 'node-fetch';
import moment from 'moment';
import { IHumanResourceManagerService } from '../IHumanResourceManagerService';
import { Employee } from './employee';
import { VacationTimeOff } from './vacationTimeOff';

export class BambooService implements IHumanResourceManagerService {
    private bambooHeaders = {
        method: 'GET',
        headers: { 'Accept': 'application/json' }
    };

    private getBaseUrl(endpoint) {
        const key = process.env.bambooKey;
        const baseEndpoint = ':x@api.bamboohr.com/api/gateway.php';
        const subdomain = process.env.bambooSubDomain;
        return `https://${key}${baseEndpoint}/${subdomain}/v1/${endpoint}`;
    }

    public async getEmployees(): Promise<Employee[]> {
        const url: string = this.getBaseUrl('employees/directory');
        try {
            const response = await fetch(url, this.bambooHeaders);
            const directory = await response.json();
            return directory.employees
                .filter(employee => employee.workEmail)
                .map((employee) => {
                return {
                    ...employee,
                    id: parseInt(employee.id),
                }
            });
        } catch (error) {
            throw error;
        }
    }

    public async getTimeOffs(): Promise<VacationTimeOff[]> {
        const today = moment();
        const startDate = today.format('YYYY-MM-DD');
        const endDate = today.add(3, 'M').startOf('month').format('YYYY-MM-DD');
        const url: string = this.getBaseUrl(`time_off/requests/?status=approved&start=${startDate}&end=${endDate}`);
        try {
            const response = await fetch(url, this.bambooHeaders);
            const timesOff = await response.json();
            return timesOff.map((timeOff) => {
                return {
                    ...timeOff,
                    id: parseInt(timeOff.id),
                    employeeId: parseInt(timeOff.employeeId),
                };
            });
        } catch (error) {
            throw error;
        }
    }
}

Again, let’s review this by sections. First, we create some reusable headers and a getBaseUrl method. This method builds the URL used to connect to Bamboo, reading some configuration from an environment file.

Then, we have two methods that get the information: one for the employees and another for the time offs from Bamboo. Some logic is applied here to limit the information retrieved; for example, for the time offs we only want the requests for the upcoming 3 months, since anything prior to that is not needed in our target system.

The database services

From the BambooCron class, we also use services that connect to our database. In our system, we are using typeorm (which I talked about previously here), an ORM with MySQL integration that supports TypeScript out of the box. For this post, I’m just going to show the service that manages users; however, all of them follow a similar approach, so you can extrapolate to the rest of the entities.

import { BaseService } from "./base.service";
import { Employee } from "../../data-access/bamboo/employee";
import { User } from "../../data-access/entity/user";


export class UserService extends BaseService {
  public getAllUser = async () => {
    return this.dbContext.users.find({
      where: { statusTxt: 'active' }
    });
  }

  public async saveUser(employee: Employee): Promise<User> {
    let newUser = this.createUser(employee);
    try {
      await this.dbContext.users.insert(newUser);
      return newUser;
    } catch (error) {
      throw error;
    }
  }

  public async removeUser(user: User) {
    try {
      user.statusTxt = <any>{ statusTxt: 'inactive' };
      await this.dbContext.users.save(user);
    } catch (error) {
      throw error;
    }
  }

  private createUser(employee: Employee): User {
    const userNm = employee.workEmail.substring(0, employee.workEmail.indexOf('@'));
    const user: User = this.dbContext.users.create({
      bambooId: employee.id,
      email: employee.workEmail,
      fullNm: employee.displayName,
      userNm: userNm,
      statusTxt: <any>{ statusTxt: 'active' }
    });
    return user;
  }
}

The User service is pretty straightforward. It has some CRUD operations: getting active users, saving new users and, finally, removing them (a soft delete by changing the status). It extends a BaseService class, which looks like this:

import { DbContext } from "../../data-access/dbcontext";

export class BaseService {
  protected dbContext: DbContext = new DbContext();
}

This one is even simpler: it only exposes a dbContext property to every service that inherits from it, granting them the ability to use typeorm connections to execute queries or transactions against the database. Finally, this is what the DbContext class looks like:

import { Connection, createConnection, EntityManager, Repository } from "typeorm";
import { User } from "./entity/user";

export class DbContext {
    private connection: Connection;
    constructor() {
        this.init();
    }

    private async init() {
        try {
            this.connection = await createConnection({
                "name": `connection-${new Date().getTime()}`,
                "type": "mysql",
                "host": ANY_HOST_HERE,
                "port": 3306,
                "username": ANY_USERNAME_HERE,
                "password": ANY_PASSWORD_HERE,
                "database": ANY_DATABASE_HERE,
                "synchronize": false,
                "logging": true,
                "entities": [
                    User
                ]
            });
        } catch (error) {
            throw error;
        }
    }
    
    public get manager() : EntityManager {
        return this.connection.manager;
    }

    public get users(): Repository<User>{
        return this.manager.getRepository(User);
    }
}

The DbContext class here is a reduced version of the one I use (mine has more entities), but the rest of the design is the same. First, we have an init method that creates a connection every time DbContext is instantiated; this connection receives all the entities and the database information needed to create it.

Then, for every entity, we expose a getter property that returns the repository typeorm maps it to.
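For example, adding a hypothetical TimeOff entity would just mean registering it in the entities array of the connection and exposing one more getter:

// Sketch: assumes a TimeOff entity imported and registered in the connection above
public get timeOffs(): Repository<TimeOff> {
    return this.manager.getRepository(TimeOff);
}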

Finally, where do we execute all of this code? Since it needs to be started as soon as the Node service starts, we add the code to the index.ts file of express.js, like this:

...IMPORTS AND OTHER STUFF HERE

const cron = new BambooCron();
...SOME LOGIC HERE TO PREPARE THE SERVICE OR OTHER THINGS
cron.startJob();

const port = parseInt(process.env.PORT);
export default new Server()
  .router(routes)
  .listen(port);

Summary

Finally, we have arrived at the end, and if you made it here, it means that you have all the code needed to run a scheduled task using node-cron and typeorm. This is just one of the many use cases that can be covered with this design, so adapt it as you see fit to whatever case you have to solve.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Companies Agility: A report by Scrum Alliance

A couple of days ago, the Scrum Alliance published a report called “The Elusive Agile Enterprise: How the Right Leadership Mindset, Workforce and Culture Can Transform Your Organization”.

In this report, Scrum Alliance and Forbes surveyed more than 1,000 executives to determine how important agility is in an organization, the degree of success of transformation efforts, and how much progress companies have made implementing these kinds of frameworks. Among all the respondents, 10% were from Latin America (I’m from Costa Rica), so I would have liked to read the results for this area, but the overall results are equally interesting. Also, the executives weren’t only from technology-oriented companies but from other areas as well; still, I would like to comment on some aspects of the report from an IT perspective.

Geography distribution

Personal creation based on report data

One key thing to notice is that they wanted to measure the agility of a company, not the agile framework being used (still, 77% of these companies leverage Scrum as their main framework). But what is agility, and what is being agile? Agility is the ability of an organization to respond to market changes and still deliver value to its customers, whereas agile is an organizational approach and mindset defined by the values and principles detailed in the Agile Manifesto.

Based on this premise, the report defines several benefits that are obtained after achieving a high level of agility within the company, such as:

  1. Faster time to market

  2. Faster innovation

  3. Improved financial results

  4. Improved employee morale

Organizational changes

To achieve these benefits, the respondents affirmed that several changes needed to be applied in order to redefine the company’s processes and reflect the agile mindset. In the report, Scrum Alliance mentions the top 7 changes, but in order not to spoil the article, I will comment on just some of them, along with experiences where I’ve seen them applied first-hand and succeed.

  • Introduce an agile mindset

Being agile is not only about using Scrum (or any other flavor), performing some of the required ceremonies or events, and delivering software. The processes are definitely important, but the people are equally, or more, important.

Having upper management that believes in agile methodologies and promotes them facilitates the transition heavily. Implementing a change from the bottom up is nearly impossible, but when it comes from the higher grounds, it’s a quick, fluid and flexible process.

I worked for one organization that had partially implemented Scrum, but not the agile mindset. They performed some of the ceremonies when possible and defined their sprints to have incremental products but, on the other hand, had a requirements process that closely resembled the waterfall scheme, and releases weren’t done until the end of development. The development process was really flexible, and we were able to adapt to some changes, but there were still some gears that didn’t feel right.

The main problem was that the customers weren’t involved in the development process at all, yet they expected us to exceed their expectations. This wasn’t going to work, and in fact, it didn’t!

To remedy this, we tried to create backlogs that allowed customers to give their input and define what needed to be done. It worked for some things, but there still wasn’t much involvement from the customers, even though we invited them. At this point, there wasn’t a single point of authority, someone who could work as the Product Owner (something fundamental), so we had to facilitate its creation.

To do this, we talked with the managers of each area involved in the application and explained to them what changes were needed and the possible benefits. They trusted us, and a new group was formed to act as the Product Owner; this group consisted of a representative from each area, and even though this is not the regular Scrum process, it worked much better and we got much more feedback than before.

The agile mindset was introduced, little by little, to obtain success.

  • Create incentives that promote agility

In another organization, the agile mindset was in much better shape. Some processes were already defined, customers agreed with the methodology and got involved in it, the ceremonies were executed, and the benefits were visible. Even so, this organization needed some optimization, because the processes weren’t applied uniformly across all development teams.

To solve this, some people from the organization formed a group to lead all the Agile efforts, and its first big task was to standardize the process across every team. Among many options, the one that won was to create a contest, but not a simple one.

The contest consisted of having all teams follow the organization’s process and the Scrum best practices. There were 4 phases and, for every phase, a common goal. Each team earned points depending on how well the practices and process were followed. For example, in the first phase the DSUs were the main goal, and a team earned one point for every DSU done in less than 10 minutes, with extra points for using the parking-lot technique. In the second phase, backlog grooming, sprint planning and sprint retrospective events were evaluated. The next phases evaluated customer involvement and product deliveries.

At the end of the contest, a winner was selected from all the teams and some prizes were given, but the real outcome was that all the teams managed to follow the same process and practices.

  • Train workforce

As I mentioned before, people are the most important factor whenever there are changes in any organization. People determine how quickly the change is applied, but there will always be blockers that need to be managed, for example: resistance to change, lack of communication, ignorance.

In my opinion, training people to work with Scrum is mandatory, and there are really clever activities that embody the agile mindset, demonstrate how Scrum is supposed to work, and make people enjoy the time spent learning about it. For example, I’ve been in trainings that use Lego to create a city, or build a tower with marshmallows and spaghetti, but the most recent training I had used stacks of cards to simulate a development process.

Key Findings in Report

In the report, there are some key findings from the survey after it was executed and analyzed. All of them are interesting, and as with the organizational changes, I’ll comment on just a few.

  • “Many organizations are adopting an ad-hoc approach to agile: 21% of respondents use Agile when/where needed, and 23% use it within specific functions. However, adoption needs to be enterprise-wide (and consistent) to realize real results.”

I agree that the adoption must be enterprise-wide, and I want to believe it, but the reality is different. As the same survey shows, the number of companies that have adopted agile in every area is less than 10%, and that’s because it’s not a simple process. Implementing an ad-hoc approach is a middle-ground solution that reduces costs while still obtaining benefits.

Agile Adoption

Personal creation based on report data
  • “Not everyone eagerly embraces agility: Longtime employees (29%) are the biggest detractors of organizational agility and may stand in the way of widespread adoption. This is a prime opportunity for senior-level executives to address employee concerns and shift mindset.”

It’s true that longtime employees are one of the biggest detractors, but that’s because the resistance to change is stronger in them (Star Wars jokes aside :). However, I wouldn’t limit this to just one group of people. There was one time I had someone assigned who didn’t believe in Scrum, and it was due to a previous bad experience where the execution was done incorrectly; for example, sprint plannings of 3 hours, DSUs of more than 30 minutes, and other bad practices.

  • “Many organizations eliminate hierarchy in the hopes of increasing agility: 44% of survey respondents have introduced a flatter structure to become more Agile. But that may be premature; Agile is about creating the right dynamics for teams to iterate quickly, not simply moving boxes around on organizational charts.”

For this one, I believe that changing the structure is good: simplifying it and making it easier to work with. However, I don’t believe you need to flatten every structure. There are frameworks like LeSS (Large Scale Scrum) that help make organizations leaner and scale Scrum across the whole company.

Summary

Moving to an agile process is not easy, as evidenced by this survey. There will always be changes required, training needed, and a real need for good management. If you are interested, read the whole report from the Scrum Alliance; there are really good insights to incorporate into your own company.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.

Node.js and ORMs? TypeORM at your service

As I’ve mentioned in other posts, I’m working on more Node.js projects than ever and my exposure to .NET applications is shrinking (not that I’m complaining, if you were wondering). However, most of those projects have used NoSQL databases like DynamoDB, and I haven’t had the need for an ORM, even though there are options.

Recently, I was assigned a project that needed a big rescue. It’s an internal company application, rewritten many times but never completed, and the design itself wasn’t as good as you would like it to be. My assignment: design the architecture, define the technologies to be used, estimate it, and assign tasks to a group of developers. So far, so good.

I decided to use React on the front end and one main REST service in ExpressJS, both written in TypeScript. But what about the database? Well, MySQL was my choice; if you ask me the reason, it’s because the business logic makes more sense in a relational environment, but also because I want my team (and myself) to use an ORM to connect to a relational database like MySQL.

A quick search on Google will give you many results about ORMs, but I found an interesting article that compares many of them and ranks them; take a look at it here. The only comment I would make is that mongoose is limited to only one database, whereas TypeORM supports multiple ones, so in my mind they should switch positions.

Ok, enough chit-chat, let’s start with the tech-y comments. First, a simple definition of an ORM.

What’s an ORM?

ORM stands for Object Relational Mapping, and it’s a mechanism that enables developers to manipulate data straight from the source without the hassle it would normally take. ORMs map the data sources to objects in code that can be queried, and transform the actions over those objects into the specific commands of the underlying data source. In other words, they abstract the data access layer away from developers and serve a “virtual object database” to be used within the programming language.
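To make that concrete, here is the difference in spirit, sketched with typeorm (the library we cover below); the entity, column and property names are illustrative:

import { createConnection } from "typeorm";
import { User } from "../data-access/entity/User";

createConnection().then(async connection => {
    // Without an ORM: hand-written SQL and manual row handling
    const rows = await connection.query("SELECT * FROM User WHERE StatusTxt = 'active'");
    console.log(rows);

    // With an ORM: query the mapped object and let the library write the SQL
    const users = await connection.manager.find(User, { where: { statusTxt: "active" } });
    console.log(users);
}).catch(error => console.log(error));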

There are many ORM tools in the community; here are some of the most famous ones per programming language:

  1. Hibernate -> Java

  2. Entity Framework -> C#

  3. CakePHP -> PHP

  4. Django -> Python

  5. ActiveRecord -> Ruby

What’s TypeORM?

As the name suggests, and as we have mentioned several times in this post, it is an ORM that runs on NodeJS; however, it supports other environments like PhoneGap, React Native, NativeScript, etc.

It’s built to be used with TypeScript or the latest versions of JavaScript (ES5 through ES8). The current version is 0.2.9, but don’t be fooled by that: it has over 3,000 commits on GitHub, more than 40 thousand downloads per week and, last but not least, over 9,000 stars!

The first version shipped on December 6th, 2016, and there have been 36 releases since then. Being a young tool, it has been influenced by others like Hibernate and Entity Framework, so if you notice things that feel familiar, it’s because they are.

From their website, here are some of the main features it provides:

  1. Both DataMapper and ActiveRecord

  2. Eager and Lazy relations

  3. Multiple inheritance patterns

  4. Transactions

  5. Cross-database queries

  6. Query caching

  7. Support for 8 different databases

  8. And many more here

Is there a model generator?

If you have worked with Entity Framework, you know you can reverse-engineer an existing database to create all the POCO classes. Well, for TypeORM there is something similar.

Kononnable’s typeorm-model-generator package solves all of this for you. It can create all the object classes that you need in your application, and it supports 6 databases, leaving Mongo and sql.js out of the equation.

This package is even younger than typeorm: its first release was in July 2017, and it has had 24 releases since then. It’s far less known in the community, with not many weekly downloads according to npmjs (around 300).

Still, this package works like a charm and the configuration is as simple as you can imagine. Take a look at the next line:

typeorm-model-generator -e mysql -h [HOSTNAME] -d [DATABASE] -u [USER] -x [PASSWORD] -p 3306 --noConfig -o . --cf camel --ce pascal --cp camel --lazy

With it you can specify all the parameters necessary to establish a connection to the database, but also some configuration for the generated classes, like the naming conventions and the “laziness” of the relationships between them. If you want to see all the available configuration options, click here.

Using TypeORM

After creating the classes with the typeorm-model-generator package, I ended up with classes that look something like this one:

import { Index, Entity, Column, OneToOne, OneToMany, ManyToOne, JoinColumn } from "typeorm";
import { UserStatus } from "./userStatus";
import { ProjectMember } from "./projectMember";
import { StatusReport } from "./statusReport";
import { TimeOff } from "./timeOff";

@Entity("User", { schema: "statusone" })
@Index("StatusTxt", ["statusTxt",])
export class User {
    @Column("varchar", {
        nullable: false,
        primary: true,
        length: 50,
        name: "UserNm"
    })
    userNm: string;

    @Column("varchar", {
        nullable: false,
        length: 150,
        name: "Email"
    })
    email: string;

    @Column("varchar", {
        nullable: false,
        length: 150,
        name: "FullNm"
    })
    fullNm: string;

    @ManyToOne(() => UserStatus, UserStatus => UserStatus.users, { nullable: false, onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    @JoinColumn({ name: 'StatusTxt' })
    userStatus: Promise<UserStatus | null>;

    @OneToOne(() => ProjectMember, ProjectMember => ProjectMember.userNm, { onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    projectMember: Promise<ProjectMember | null>;

    @OneToMany(() => StatusReport, StatusReport => StatusReport.userNm, { onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    statusReports: Promise<StatusReport[]>;

    @OneToMany(() => TimeOff, TimeOff => TimeOff.userNm, { onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    timeOffs: Promise<TimeOff[]>;
}

This one is a simple example: a User table in our database with its relationships, like the user’s status, all the reports, and more.

TypeORM makes heavy use of decorators, which require some options to be enabled in the tsconfig file; don’t worry about them, since they’re explained in the installation instructions here. But if you want the easy route, here is mine.

{
  "compileOnSave": false,
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "esModuleInterop": true,
    "sourceMap": true,
    "moduleResolution": "node",
    "outDir": "dist",
    "typeRoots": ["node_modules/@types"],
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  },
  "include": ["typings.d.ts", "server/**/*.ts", "src/**/*.ts"],
  "exclude": ["node_modules"]
}

Ok, so our project has the classes from the database, and the TypeScript compiler recognizes all the decorators that TypeORM uses. So how do we use it?

I’m not going to expose the architecture I’m planning to use in the application, mostly because I haven’t completed it. But here is an example of a basic query I’ve done with TypeORM.

import { createConnection } from "typeorm";
import { User } from "../data-access/entity/User";

createConnection().then(async connection => {
    try {
        const users = await connection.manager.find(User);
        console.log("Loaded users: ", users);
    } catch (error) {
        console.log(error);
    }
}).catch(error => console.log(error));

Just as easy as it looks: you can obtain all the users from the database by creating a connection and then using the connection manager to find all the objects of the class passed as a parameter.
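You can also filter instead of fetching everything; here is a quick sketch against the same entity (the filter value is illustrative, and the code runs inside the same createConnection callback as above):

// Fetch a single user through the entity's repository
const repository = connection.getRepository(User);
const user = await repository.findOne({ where: { email: "someone@example.com" } });
console.log(user);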

Another more complex query can be the following:

import { createConnection } from "typeorm";
import { User } from "../data-access/entity/User";

createConnection().then(async connection => {
    try {
        const projectMembers = await connection
            .getRepository(User)
            .createQueryBuilder("user")
            .innerJoinAndSelect("user.projectMember", "projectMember")
            .getMany();
        console.log("Loaded members: ", projectMembers);
    } catch (error) {
        console.log(error);
    }
}).catch(error => console.log(error));

In this example, we are creating a query that retrieves the ProjectMember relationship for each user, using a syntax called QueryBuilder, which reminds me a lot of Entity Framework.

As far as query examples go, I will stop here and suggest you read more of the documentation, here or here.

Is there anything bad?

Nothing is perfect in this world, and I didn’t have to investigate a lot to find something that surprised me. Believe it or not, apparently TypeORM still doesn’t support bit types for MySQL (check line 81 in this file).

Even though this shocked me at first, I had the option of changing the design to avoid those bit types and use something different; still, it’s something that annoys me, and I hope they add that support soon.
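For reference, this is the kind of change I mean: a hypothetical flag column defined as tinyint(1) instead of bit(1):

// Hypothetical column: a boolean-ish flag stored as tinyint(1) rather than bit(1)
@Column("tinyint", { name: "IsActiveFlg", width: 1, default: 0 })
isActiveFlg: number;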

Summary

TypeORM is one more tool for our toolkit, one that will make your life a lot easier and that has the support of a big community. It allows you to abstract away a lot of the complex code that comes with connecting to a database, and helps you focus on your business instead of figuring out how to do a select on a table.

If you have any comments, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updates on every new post.