Hosting websites in Amazon - Setting up URLs and certificates

Developing an application is the part of the job that every developer enjoys the most; the name kind of implies it, right? However, the development process does not finish when the team writes the last line of code: we still have to deploy.

During most of the process, we can get by with simpler configurations. For example, we can use the default links provided when we host in an S3 bucket; if we are using an EC2 instance, we can use the direct IP; or we can even use a load balancer that points to a set of Docker instances. However, in all of these cases the URLs are not friendly, since they are autogenerated by Amazon.

This post will explain how to do a proper deployment of an application for a production-level environment. For this, we will use the following technologies:

  1. S3 bucket: where the application code will be hosted. We won’t run through this during this post, but you can check my Amplify posts to see how to do it.

  2. CloudFront: content delivery network service

  3. Route 53: DNS and domain registration service

  4. Certificate Manager: SSL certificate provider

Requesting the certificate

As a prerequisite for this part, we will need a hosted zone in Route 53 to associate the certificate with. For that, go to the Route 53 service, select Hosted zones in the left menu, and create one if you don’t have one already.

Route 53 - hosted zone

The first step that we will cover is requesting the SSL certificate. For this, we will go to the Certificate Manager service in Amazon, click on “Request a certificate”, and select a public certificate.

Certificate - Step 1

The next step will ask us to register all the domain names that we want the certificate to be used for.

Certificate - step 2

Important: we can use a wildcard to cover multiple subdomains. For example, you can type *.example.com, and it will cover cases like app.example.com or api.example.com.

After setting the domain names, it will ask us to select the verification method of the certificate. Since we are going to use only Amazon services, let’s select DNS validation.

Certificate - step 3

Finally, we will review all the selections and continue with the wizard. After the request has been created, we need to validate it by creating a DNS record for the certificate in Route 53; however, this is an automated step that the Certificate Manager can do for us.

Certificate - step 4

In the Certificate Manager dashboard, you will see the certificate that you just created, and inside of it, a prompt to create the DNS records in Route 53.

Certificate - step 5

Creating the Cloudfront distribution

Now that we have our certificate set up and ready to be used, we need to create the CloudFront distribution that will point to our S3 bucket. For that, we will go to the CloudFront service dashboard and click on “Create distribution”.

For our use case, we will select “Web” as the delivery method (this is the first screen that will show). From there, we will be presented with a big form with a lot of options; we will leave the default values for most of them, except in 3 specific areas.

First, we need to select the S3 bucket where the application is hosted as the “Origin Domain Name” (this field is a dropdown, even though it doesn’t look like it).

Cloudfront - step 1

The second change that we will make to the form is under the “Default Cache Behavior Settings” section. In there, we will select “Redirect HTTP to HTTPS” in the viewer protocol policy option. This will automatically redirect the user to the HTTPS site, even if they don’t specify it in the address bar.

Cloudfront - step 2

Finally, under the “Distribution Settings” section, we will select the certificate created previously.

Cloudfront - step 3

After all of that, click the “Create Distribution” button at the end of the screen and wait some time while Amazon provisions all the resources for it.

Important: for React or Angular applications, you need to set the default root object to the index.html file. It can be specified in the same form, a couple of fields after selecting the certificate. If you didn’t set it the first time, you can modify the distribution after its creation.

Create the actual URL

Previously, we briefly used Route 53 to validate the certificate request; this time, however, we will create the real URL that our application will use. So, go to the Route 53 service in Amazon and click on Hosted zones in the left menu.

From the hosted zone that you created before, we will add a new record set. If all the previous steps were executed successfully, there should already be one record set for the SSL certificate.

In the new record set, we will type the name of the URL that we want to use (for example, app.example.com), set the record set to be an alias, and from the input that will appear, select the CloudFront distribution created earlier.

Route 53 - record set

After finishing, we will need to wait some time while Amazon registers the domain in the DNS servers and propagates the changes. Shortly after, you will be able to use your new URL to access your website.

Bonus track: what about APIs?

As a small tip, if you want to do this for an API that is hosted in ECS or Elastic Beanstalk, instead of creating a CloudFront distribution you can create the record set in Route 53 directly and point it to the load balancer associated with those instances. The first two steps are still required though :)


Now that all sections have been covered, I hope that your application has been deployed successfully and that your customers will be happy to have a pretty URL that is easy to remember (hopefully). If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updated on every new post.

Deploying a REST API using Express to Elastic Beanstalk

A couple of weeks ago, I was trying to deploy a small service, but I didn’t want to start creating the full infrastructure suite on Fargate yet; hence, the best option was Elastic Beanstalk. For a couple of days I did the deployment manually, but after reading a little bit more about Beanstalk, I found a CLI that simplified a lot of the deployment process.

Here is some documentation if you want to read about it, but in the meantime, I will write a little bit about how I used it. It might be useful to you if, like me, you are looking for ways to do a quick deployment and don’t want to invest time in configuring a CI/CD pipeline or a more complex setup.


The express application was built using:

  1. Node version 10.15.0

  2. generator-express-no-stress-typescript version 4.2.1

Installing the CLI

To install the CLI, there are two approaches: using Homebrew or using pip. In my case, I used Brew because it’s so much simpler (and also because the other way failed for me :).

Just type the following and brew should take care of everything for you:

brew install awsebcli

Configuring the CLI

I’m going to assume you are already using AWS profiles on your machine; if not, I recommend reading about them here.

To start the configuration in your project, navigate to the root folder of it, and type the following command:

eb init --profile [YOUR PROFILE NAME]

It will ask a series of questions in the following sequence:

  1. Region: in our case, we selected us-east-1

  2. Elastic Beanstalk Application: it will prompt you to create a new app if none is already created.

  3. Platform: in our case, we selected node.js

  4. SSH configuration

  5. Keypair: to connect to the instance, you will need a key pair to use via SSH. The CLI will help you to create one as well.

Important: there is a step asking if you want to use CodeCommit; this would help you create a pipeline, but since we are using other source control tools, we skipped it.

Once the CLI finishes, you will see a new folder in your project called .elasticbeanstalk (notice the dot at the beginning). If you want to read more about configuration options, go here and here.

First deployment

Now that our app is configured (at least for starters), we need to do a deployment. For that, we need to create an environment with the command:

eb create [ENVIRONMENT NAME]

We used DEV as the environment name, and after the command finishes, we need to update some configuration in the AWS console. Specifically, we will update the node command, which is the one nginx will use whenever a new deployment is done.

Elastic Beanstalk environment configuration

Just one more thing: as you can see, the environment uses nginx as the base server, so make sure that the application is listening on port 8081 (look at the documentation here). Finally, we can run:

eb deploy

Important: I had a really weird issue when I started making changes and deploying them: it seemed like the deployment never grabbed the latest changes. After reading on the internet for some time, I found this post; in one of his IMPORTANT notes, Jared mentions that the changes must be committed to Git for them to be deployed. I don’t have the answer to why this happens, but it seems important to note.

If you are using plain JavaScript, this could be all: the CLI will give you the URL to connect to, and whatever you have built should be ready for usage. However, as in many of my other posts, I like TypeScript, and this isn’t the end of our journey.

Deploying TypeScript

With TypeScript, the deployment is not as straightforward, so we need to add one more step to our config.yml file and to the package.json.

Important: these steps are necessary because Elastic Beanstalk does not install dev dependencies on its machines, so we need to overcome that issue with this new process.

First, we need something that compiles the TypeScript into JavaScript and packages the result. For that, we will add a shell script in the root of our app (let's call it zip.sh) with the following contents:

zip dist/$npm_package_name.zip -r dist package.json package-lock.json .env

Then, we will modify the package.json’s scripts with the following:

  "scripts": {
      "compile": "ts-node build.ts && tsc && sh zip.sh",
      ...
  },

And finally, in our config.yml file under the .elasticbeanstalk folder, we will add the following content before the global declarations:

  artifact: dist/[YOUR PACKAGE NAME].zip

Now, let’s explain a little bit about what we are doing.

The scripts in the package.json file were updated to run the shell script after the compilation is finished.

The shell script grabs everything that was compiled and moves it into a zip file.

Important: the “npm_package_name” variable in the shell script refers to the name attribute in package.json. Make sure that you type the same name in the config.yml file.

Next, in the config.yml, we specify the file that we will be deploying to the Elastic Beanstalk environment, so the EB CLI will only grab the zip file and send it. Under the hood, Elastic Beanstalk will unzip it and run the command specified in the environment.

Finally, run the eb deploy command again, and now our TypeScript Express API will run in the cloud.


Now that all sections have been covered, I hope the process has been a success and you have deployed your app to the cloud. If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updated on every new post.

Google Authentication using AWS-Amplify (+ deployment)

Authentication using AWS is a process I covered in a previous post; however, this time we are going to use a tool provided by Amazon called Amplify.

For this tutorial, we are going to create a simple application using Facebook’s create-react-app. Then, we will add the authentication layer by using AWS-amplify, and finally, add the hosting in S3 buckets. But before that, let’s provide some basic concepts.

What is AWS-Amplify?

According to its own site, AWS Amplify is a library for frontend and mobile developers building cloud-based applications, providing them with the necessary tools to add multiple cloud features. For this tutorial, we will focus on two main features, storage and authentication, but Amplify provides many more, like:

  • Analytics

  • API integration

  • Push notifications

  • Cache

  • Among others

What is create-react-app and how to install it?

Create-react-app is the best tool to use whenever you want to start creating a web application with React, and for someone like me who likes TypeScript, it now has the built-in capability to create apps using it.

Installing it on your machine is like installing any global package from npm. Just type “npm install -g create-react-app“ and voilà!

There are some dependencies needed, though; for example, you must have at least Node version 6. This library also allows you to focus on creating your application instead of dealing with webpack or babel configuration.

Now, let’s start with the real deal, and work on our Google authenticated application.

Create the app

For this tutorial, I will use the following versions:

  • node: 10.15.0

  • npm: 6.4.1

  • create-react-app: 2.1.3

  • aws-amplify: 1.1.19

To create the app, run the following line in your preferred terminal: “create-react-app google-auth-tuto --typescript“. This will generate all the code you need to start working.

Running the app

To start using the application, run “npm install” in your terminal to verify that you have all the necessary packages installed. The generated package.json file includes some default scripts; this time we will use the “start” script, so simply run “npm start” and it will open a tab in your browser after the application finishes compiling your code.

npm start

Now that our application is running, we can start using AWS Amplify to add some cloud features, but first we need to configure Amplify on your machine. For that, you can follow the next video, which explains how to do it (taken from the aws-amplify main site).

Configuring amplify in the project

Now that Amplify is configured on your machine, we can add it to our application by running “amplify init” in the project root folder. It will ask you several questions, and after them it will start creating some resources in your account; here is an example of what you will see in your terminal.

amplify init configuration

At the end, if this is the first time you are running aws-amplify, it will create a new profile instead of using an existing one. In this example, I’ve used my profile named ljapp-amplify, so this section might be different for you.

Important: always create different profiles for your AWS accounts. In my case, I have to use multiple accounts for my company’s clients, so it makes my work a lot easier.

After the AWS resources have been created, let’s add the authentication layer to our app. AWS Amplify has different categories of resources, and authentication is one of them. So, let’s add it by running “amplify auth add“. Same as before, Amplify will ask for some configuration; here is a summary of what you will see.

amplify auth add

The only information that you might be wondering how to get is the Google Web Client Id. For that, please follow the instructions found here, under the “Create a client ID and client secret” section.

Finally, run “amplify push” and this will start creating all the authentication resources in your account.

amplify push

Important: AWS Amplify uses identity pools for 3rd-party integration instead of user pools. Since identity pools don’t manage groups, we can only authenticate users. So, if we need to provide specific permissions or roles, we need to use claims (or switch to user pools) and configure them manually in the AWS console.
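As a rough illustration of that claims-based approach, the sketch below checks a role claim on a decoded token payload. Note that the `custom:role` claim name is purely an assumption for this sketch — you would have to define and attach such a claim yourself in the AWS console:

```typescript
// Hypothetical shape of a decoded ID-token payload. The "custom:role"
// claim is an assumption for this sketch, not something Amplify adds.
interface TokenPayload {
  sub: string;
  email?: string;
  "custom:role"?: string;
}

// Returns true only when the token carries the expected role claim.
export function hasRole(payload: TokenPayload, role: string): boolean {
  return payload["custom:role"] === role;
}

// Example: gate an admin-only action.
const payload: TokenPayload = { sub: "abc-123", "custom:role": "admin" };
export const canManageUsers = hasRole(payload, "admin"); // true
```

In a real application the payload would come from the session that Amplify's Auth module returns after sign-in.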

Modifying React code

Up till now, we have set up all the foundation in the AWS account via Amplify, but we still need to add logic to our React application. For that, we will install two npm packages:

  • npm install aws-amplify

  • npm install aws-amplify-react

Then, we will modify our App.tsx file with the following code.

import React, { Component } from 'react';
import Amplify from 'aws-amplify';
import { withAuthenticator } from 'aws-amplify-react';

import logo from './logo.svg';
import aws_exports from './aws-exports';
import './App.css';

Amplify.configure(aws_exports);

class App extends Component {
  render() {
    return (
      <div className="App">
        <header className="App-header">
          <img src={logo} className="App-logo" alt="logo" />
          <p>
            Edit <code>src/App.tsx</code> and save to reload.
          </p>
          <a
            className="App-link"
            href="https://reactjs.org"
            target="_blank"
            rel="noopener noreferrer"
          >
            Learn React
          </a>
        </header>
      </div>
    );
  }
}

const federated = {
  google_client_id: '',
};

export default withAuthenticator(App, true, [], federated);

The second parameter of the “withAuthenticator” higher-order component will create a header for our application with some minimal information, like the name of the logged-in user, and also render the log out button.

Important: by default, aws-amplify provides some screens that can be customized, but it also allows creating our own components for login, registration, and others. This will not be covered in today’s tutorial; we will be using the default screens.

As of today, the aws-amplify-react package hasn’t been updated with a TypeScript definition, so we will need to add a file that declares it as a module (named aws-amplify-react.d.ts) to avoid TypeScript errors during development. The contents of the file are:

declare module 'aws-amplify-react';

Now that everything is set, we can run our application again, and we will see the following screen.

Amplify login screen

Then we can log in using Google’s button, and after verifying our account, we will get into the application.

User logged into the application

Hosting the application

Now that everything is set up, we can host our application in the cloud with Amplify. For that, we will add the hosting feature by running “amplify hosting add“; same as before, some configuration is required.

amplify hosting add

Finally, it will ask you to run “amplify publish”; this will create the S3 bucket if it doesn’t exist and open a browser tab right away with the application hosted on that bucket.


Now that all sections have been covered, I hope the tutorial has been a success and you have created a React application that uses Google authentication, hosted easily in S3 buckets on AWS. In an upcoming tutorial, I will talk about using Cognito User Pools to do 3rd-party authentication.

If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updated on every new post.

Companies Agility: A report by Scrum Alliance

A couple of days ago, the Scrum Alliance published a report called “The Elusive Agile Enterprise: How the Right Leadership Mindset, Workforce and Culture Can Transform Your Organization”.

In this report, Scrum Alliance and Forbes surveyed more than 1,000 executives to determine how important agility is in an organization, its degree of success in transformation efforts, and how much progress companies have made implementing these types of frameworks. Among all the respondents, 10% were from Latin America (I’m from Costa Rica), so I would have liked to read the results for this area, but the overall results are equally interesting. Also, those executives weren’t only from technology-oriented companies but also from other areas; nonetheless, I would like to comment on some aspects of the report from an IT perspective.

Geography distribution

Personal creation based on report data

One key thing to notice is that they want to measure the agility of a company, not the agile framework it is using (still, 77% of these companies leverage Scrum as their main framework). But what is agility, and what is being agile? Agility is the property of an organization to respond to market changes and still deliver value to the customers, whereas agile is an organizational approach and mindset defined by the values and principles detailed in the Agile Manifesto.

Based on this premise, the report defines several benefits that are obtained after achieving a high level of agility within the company, such as:

  1. Faster time to market

  2. Faster innovation

  3. Improved financial results

  4. Improved employee morale

Organizational changes

To achieve the mentioned benefits, the respondents affirmed that several changes needed to be applied in order to redefine the company’s processes and reflect the agile mindset. In the report, Scrum Alliance mentions the top 7 changes, but in order not to spoil the article, I would like to comment on just some of them, with experiences that I’ve seen applied, and succeed, first-hand.

  • Introduce an agile mindset

Being agile is not only about using Scrum (or any other flavor), performing some of the required ceremonies or events, and delivering software. The processes are definitely important, but the people are equally, or more, important.

Having upper management that believes in agile methodologies and promotes them facilitates the transition heavily. Implementing a change from the bottom up is nearly impossible, but when it comes from the higher grounds, it’s a quick, fluid and flexible process.

I worked for one organization that had Scrum implemented partially, but not the agile mindset. They performed some of the ceremonies when possible and defined their sprints to have incremental products but, on the other hand, had a requirements process that closely resembled the waterfall scheme, and releases were only done at the end of development. The development process was really flexible, and we were able to adapt to some changes, but there were still some gears that didn’t feel right.

The main problem was that the customers weren’t involved with the development process at all, yet they were expecting us to exceed their expectations. This wasn’t going to work, and in fact, it didn’t!

To remediate this, we tried to create backlogs that allowed customers to give their input and define what needed to be done. It worked for some things, but there still wasn’t much involvement from the customers, even though we invited them. At this point, there wasn’t a single point of authority, someone who could work as the Product Owner (something fundamental), so we had to facilitate its creation.

To do this, we talked with the managers of each area that was part of the application and explained to them what changes were needed and the possible benefits. They trusted us, and a new group was formed that worked as the Product Owner; this group consisted of a representative of each area, and even though this is not the regular Scrum process, it worked much better, and we got much more feedback than before.

The agile mindset was introduced, little by little, to obtain success.

  • Create incentives that promote agility

In another organization, the agile mindset was much better. Some processes were already defined there, customers agreed with the methodology and got involved in it, the ceremonies were executed, and the benefits were visible. Even so, this organization needed some optimization because the processes weren’t applied uniformly across all development teams.

To solve this, a group of people from the organization decided to form a team to lead all the agile efforts, and the first big task was to standardize the process across every team. Among many options, the one that won was to create a contest, but not a simple one.

The contest consisted in having all teams follow the organization’s process and the Scrum best practices. There were 4 phases, and for every phase, a common goal. Each team earned points depending on how well the practices and the process were followed. For example: in the first phase the DSUs were the main goal, and a team earned one point for every DSU done in less than 10 minutes; using the parking lot technique granted extra points. In the second phase, backlog grooming, sprint planning and sprint retrospective events were evaluated. The next phases evaluated customer involvement and product deliveries.

At the end of the contest, a winner was selected from all teams and some prizes were given, but the real outcome was that all the teams managed to follow the same process and practices.

  • Train workforce

As I mentioned before, people are the most important factor when there are changes in any organization. People will determine how quickly the change is applied, but there will always be blockers that need to be managed, for example: resistance to change, lack of communication, and ignorance.

In my opinion, training people to work with Scrum is mandatory, and there are really clever activities that embody the agile mindset, demonstrate how Scrum is supposed to work, and make people enjoy the time spent learning about it. For example, I’ve been in trainings that use Lego to create a city or build a tower with marshmallows and spaghetti, but the most recent training I had used stacks of cards to simulate a development process.

Key Findings in Report

The report presents some key findings after the survey was executed and analyzed. All of them are interesting but, similar to the organizational changes, I’ll comment on just a few.

  • “Many organizations are adopting an ad-hoc approach to agile: 21% of respondents use Agile when/where needed, and 23% use it within specific functions. However, adoption needs to be enterprise-wide (and consistent) to realize real results.”

I agree that the adoption must be enterprise-wide, and I want to believe it; however, the reality is not that. As the same survey shows, the number of companies that have adopted agile in every area of the company is less than 10%, and that’s because it’s not a simple process. Implementing an ad-hoc approach is a middle-ground solution that will reduce costs and obtain benefits.

Agile Adoption

Personal creation based on report data
  • “Not everyone eagerly embraces agility: Longtime employees (29%) are the biggest detractors of organizational agility and may stand in the way of widespread adoption. This is a prime opportunity for senior-level executives to address employee concerns and shift mindset.”

It’s true that longtime employees are one of the biggest detractors, but that’s because the resistance to change is stronger in them (Star Wars jokes aside :). However, I wouldn’t limit this to just one group of people. There was one time that I had someone assigned who didn’t believe in Scrum, and it was due to a previous bad experience where the execution was done incorrectly; for example, Sprint Plannings of 3 hours, DSUs of more than 30 minutes, and other bad practices.

  • “Many organizations eliminate hierarchy in the hopes of increasing agility: 44% of survey respondents have introduced a flatter structure to become more Agile. But that may be premature; Agile is about creating the right dynamics for teams to iterate quickly, not simply moving boxes around on organizational charts.”

For this one, I believe that changing the structure is good: simplifying it and making it easier to work with. However, I don’t believe that you need to flatten every structure. There are frameworks like LeSS (Large-Scale Scrum) that help make organizations leaner and scale Scrum across the whole company.


Moving to an agile process is not easy, as evidenced by this survey. There will always be changes required, trainings needed, and really good management. If you are interested, read the whole report from the Scrum Alliance; there are really good insights to incorporate into your own company.

If you have any comment, don't hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updated on every new post.

Node.js and ORMs? TypeORM at your service

As I’ve mentioned in other posts, I’m working with more Node.js projects than ever, and my experience with .NET applications is shrinking (not that I’m complaining). However, most of those projects have been with NoSQL databases like DynamoDB, and I haven’t had the need to use any ORM for them, even though there are options.

Recently, I was assigned a project that needed a big rescue. It’s an internal application of the company, rewritten many times, but none of those times was it completed, and the design itself wasn’t as good as you would like it to be. My assignment: design the architecture, define the technologies to be used, estimate it, and assign tasks to a group of developers to work on it. So far, so good.

I decided to use React on the front end and one main REST service in ExpressJS. Both solutions are going to be written in TypeScript. But what about the database? Well, MySQL was my choice, and if you ask me the reason, it’s because the business logic makes more sense in a relational environment, but also because I want my team (and myself) to use an ORM to connect to a relational database like MySQL.

A quick search on Google will give you many results about ORMs, but I found an interesting article that compares many of them and ranks them; take a look at it here. The only comment I would make is that Mongoose is limited to only one database, whereas TypeORM supports multiple ones, so in my mind, they should switch positions.

Ok, enough chit-chat; let’s start with the tech-y comments. First, a simple definition of an ORM.

What’s an ORM?

ORM stands for Object-Relational Mapping, and it’s a mechanism that enables developers to manipulate data directly from the source without the hassle that it normally would take. ORMs map the data sources to objects in code that can be queried, and transform the actions over those objects into the specific commands for the specific data source. In other words, they abstract the data access layer from the developers and serve a “virtual object database” to be used within the programming language.
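To make the idea concrete, here is a deliberately tiny, hypothetical sketch of the kind of translation an ORM performs; real ORMs also handle parameterization, relations, caching, and much more:

```typescript
// A plain object "entity" the developer works with.
interface User {
  id: number;
  email: string;
}

// The ORM's job in miniature: translate an object-level filter into SQL.
// NOTE: real ORMs use parameterized queries; this string building is for
// illustration only and would be unsafe in production.
function buildSelect<T>(table: string, where: Partial<T>): string {
  const conditions = Object.entries(where)
    .map(([column, value]) => `${column} = '${value}'`)
    .join(" AND ");
  return `SELECT * FROM ${table} WHERE ${conditions}`;
}

export const sql = buildSelect<User>("users", { email: "" });
// sql === "SELECT * FROM users WHERE email = ''"
export const byId = buildSelect<User>("users", { id: 1 });
```

The developer thinks in objects; the database receives SQL — that translation, generalized and made safe, is what an ORM does.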

There are many ORM tools in the community, here are some of the most famous ones per programming language:

  1. Hibernate -> Java

  2. Entity Framework -> C#

  3. CakePHP -> PHP

  4. Django -> Python

  5. ActiveRecord -> Ruby

What’s TypeORM?

As the name suggests, and as we have mentioned many times in this post, it is an ORM that runs in Node.js; however, it supports other environments like PhoneGap, React Native, NativeScript, etc.

It’s built to be used with TypeScript or the latest versions of JavaScript (from ES5 to ES8). The current version is 0.2.9, but don’t be fooled by this: it has over 3,000 commits on GitHub, more than 40 thousand downloads per week, and last but not least, over 9,000 stars!

The first version was released on December 6th, 2016, and it has had 36 releases since then. Despite being a young tool, it has been influenced by others like Hibernate and Entity Framework, so if you notice things that feel familiar, it’s because they are.

From their website, here are some of the main features it provides:

  1. Both DataMapper and ActiveRecord

  2. Eager and Lazy relations

  3. Multiple inheritance patterns

  4. Transactions

  5. Cross-database queries

  6. Query caching

  7. Support for 8 different databases

  8. And many more here

Is there a model generator?

If you have worked with Entity Framework, you know you can reverse engineer an existing database to create all the POCO classes. Well, for TypeORM there is something similar.

Kononnable’s typeorm-model-generator package solves all of this for you. It can create all the entity classes that you need to use in your application, and it supports 6 databases, leaving Mongo and sql.js out of the equation.

This package is even younger than TypeORM: its first release was in July 2017, and it has had 24 releases since then. It’s far less known in the community, since there are not many downloads per week according to npmjs (around 300).

Still, this package works like a charm, and the configuration is as simple as you can imagine. Take a look at the next line:

typeorm-model-generator -e mysql -h [HOSTNAME] -d [DATABASE] -u [USER] -x [PASSWORD] -p 3306 --noConfig -o . --cf camel --ce pascal --cp camel --lazy

With it you can specify all the parameters necessary to establish a connection to a database, plus some configuration for the generated classes, like the naming conventions and the “laziness” of the relationships between them. If you want to look at all the available configuration options, click here to view them.

Using TypeORM

After creating the classes with the typeorm-model-generator package, I ended up having classes that look something like this one:

import { Index, Entity, Column, OneToOne, OneToMany, ManyToOne, JoinColumn } from "typeorm";
import { UserStatus } from "./userStatus";
import { ProjectMember } from "./projectMember";
import { StatusReport } from "./statusReport";
import { TimeOff } from "./timeOff";

@Entity("User", { schema: "statusone" })
@Index("StatusTxt", ["statusTxt",])
export class User {
    @Column("varchar", {
        nullable: false,
        primary: true,
        length: 50,
        name: "UserNm"
    })
    userNm: string;

    @Column("varchar", {
        nullable: false,
        length: 150,
        name: "Email"
    })
    email: string;

    @Column("varchar", {
        nullable: false,
        length: 150,
        name: "FullNm"
    })
    fullNm: string;

    @ManyToOne(() => UserStatus, UserStatus => UserStatus.users, { nullable: false, onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    @JoinColumn({ name: 'StatusTxt' })
    userStatus: Promise<UserStatus | null>;

    @OneToOne(() => ProjectMember, ProjectMember => ProjectMember.userNm, { onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    projectMember: Promise<ProjectMember | null>;

    @OneToMany(() => StatusReport, StatusReport => StatusReport.userNm, { onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    statusReports: Promise<StatusReport[]>;

    @OneToMany(() => TimeOff, TimeOff => TimeOff.userNm, { onDelete: 'RESTRICT', onUpdate: 'RESTRICT' })
    timeOffs: Promise<TimeOff[]>;
}

This is a simple example of a User table in our database with its relationships, like the user’s status, all their reports, and more.

TypeORM heavily uses decorators, which require some options to be enabled in the tsconfig file; however, don’t worry about them, since they are explained in the installation instructions here. But if you want the easy route, here is mine.

{
  "compileOnSave": false,
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "esModuleInterop": true,
    "sourceMap": true,
    "moduleResolution": "node",
    "outDir": "dist",
    "typeRoots": ["node_modules/@types"],
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true
  },
  "include": ["typings.d.ts", "server/**/*.ts", "src/**/*.ts"],
  "exclude": ["node_modules"]
}

OK, so our project has the classes from the database, and the TypeScript compiler recognizes all the decorators that TypeORM uses; so, how do we use it?

I’m not going to expose the architecture I’m planning to use in the application, mostly because I haven’t completed it. But here is an example of a basic query I’ve done with TypeORM.

import {createConnection} from "typeorm";
import {User} from "../data-access/entity/User";

createConnection().then(async connection => {
    try {
        const users = await connection.manager.find(User);
        console.log("Loaded users: ", users);
    } catch (error) {
        console.log(error);
    }
}).catch(error => console.log(error));

Just as easy as it looks: you can obtain all the users from the database by creating a connection, and then using the connection manager to find all the objects of the class passed as a parameter.

Another more complex query can be the following:

import {createConnection} from "typeorm";
import {User} from "../data-access/entity/User";

createConnection().then(async connection => {
    try {
        // Build the query with QueryBuilder, joining the related entity.
        const projectMembers = await connection
            .createQueryBuilder(User, "user")
            .innerJoinAndSelect("user.projectMember", "projectMember")
            .getMany();
        console.log("Loaded members: ", projectMembers);
    } catch (error) {
        console.log(error);
    }
}).catch(error => console.log(error));

In this example, we are creating a query that retrieves the ProjectMember relationship for the users, using a syntax called QueryBuilder, which reminds me a lot of Entity Framework.

As far as query examples go, I will stop here and suggest you read more documentation here or here.

Is there anything bad?

Nothing is perfect in this world, and I didn’t have to investigate much to find something that surprised me. Believe it or not, TypeORM apparently still doesn’t support the bit type for MySQL (check line 81 in this file).

Even though this shocked me at first, I was able to change the design to avoid those bit types and use something different; however, it’s still something that annoys me, and I hope they add that support soon.


TypeORM is one more tool in our toolkit, one that will make your life a lot easier and has the support of a big community. It allows you to abstract a lot of the complex code that comes with connecting to a database, and helps you focus on your business instead of figuring out how to do a select on a table.

If you have any comments, don’t hesitate to contact me or leave a comment below. And remember to follow me on Twitter to get updated on every new post.