How to run E2E Tests with docker-compose

This guide covers using docker-compose to spin up your application, run E2E tests, and then exit with the results.

The TL;DR is:

  • docker-compose -f docker-compose.e2e.yml up --abort-on-container-exit --exit-code-from app

For the sake of an example, we’ll be testing a hypothetical API written in NodeJS that uses a Postgres database and a Redis instance. We’ll assume tests are run via jest. You can substitute any stack!

To begin, ensure that you have recent versions of docker and docker-compose installed on your machine.

Dockerfile

Ensure you have a Dockerfile in your project’s folder that specifies how to build an image for your app.

The following example is for a web API written in NodeJS.

FROM node:14.4-alpine AS example

# install build dependencies
RUN apk update && apk upgrade
RUN apk add python3 g++ make

# install packages for sending mail (msmtp = sendmail for alpine)
RUN apk add msmtp
RUN ln -sf /usr/bin/msmtp /usr/sbin/sendmail

# make target directory for assigning permissions
RUN mkdir -p /usr/src/app/node_modules
RUN chown -R node:node /usr/src/app

# use target directory
WORKDIR /usr/src/app

# set user
USER node

# copy package*.json separately to prevent re-running npm install with every code change
COPY --chown=node:node package*.json ./
RUN npm install

# copy the project code (e.g. consider: --only=production)
COPY --chown=node:node . .

# expose port 3500
EXPOSE 3500

docker-compose

Create a docker-compose.e2e.yml file.

The following example creates a service called app that runs in a container named example.

Note the command property. This should specify the command that will run your tests inside the container. In our example, this is yarn test:e2e.

version: '3.8'

services:
  app:
    container_name: example
    build:
      context: .
      target: example # only build this stage of the Dockerfile (see: '... AS example')
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules # 'hack' prevents node_modules/ in the container from being overridden
    working_dir: /usr/src/app
    command: yarn test:e2e
    environment:
      PORT: 3500
      NODE_ENV: test
      DB_HOSTNAME: postgres
      DB_PORT: 5432
      DB_NAME: example
      DB_USERNAME: postgres
      DB_PASSWORD: postgres
      REDIS_HOSTNAME: redis
      REDIS_PORT: 6379
    networks:
      - webnet
    depends_on:
      - redis
      - postgres

  redis:
    container_name: redis
    image: redis:5
    networks:
      - webnet

  postgres:
    container_name: postgres
    image: postgres:12
    networks:
      - webnet
    environment:
      POSTGRES_DB: example
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      PGDATA: /var/lib/postgresql/data
    volumes:
      # - ./seed.db.sql:/docker-entrypoint-initdb.d/db.sql <- run only once when the pgdata volume is first created (when run via docker-compose)
      - pgdata:/var/lib/postgresql/data # or specify a local folder like ./docker-volumes/pgdata:/var/lib/postgresql/data

networks:
  webnet:

volumes:
  pgdata:
  logs:

Note how each service shares the same network so they can communicate with each other.

Tip: you can use .env files and reference variables from them in a docker-compose.yml file as follows: ${VARIABLE_NAME}.
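For example (a sketch; the DB_PASSWORD variable name is hypothetical), given a .env file in the project root, the compose file can interpolate its value:

```yaml
# .env (in the project root):
#   DB_PASSWORD=postgres

# docker-compose.e2e.yml can then reference the variable:
services:
  app:
    environment:
      DB_PASSWORD: ${DB_PASSWORD}
```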

If you wish to specify a particular .env file in your docker-compose.yml file:

env_file:
  - .env

Run E2E Tests

From your project folder, you can run the following command to run your tests:

docker-compose -f docker-compose.e2e.yml up --abort-on-container-exit --exit-code-from app

The -f flag specifies a custom configuration file for docker-compose. If this is not specified, docker-compose will look for docker-compose.yml by default.

The up command tells docker-compose to bring the services and containers up.

The --abort-on-container-exit and --exit-code-from flags are an important combination.

The first flag stops all containers as soon as any container exits (i.e. when our test run is complete), and the second flag uses the exit code from the specified service (in our case the one named app) as the exit code of the overall docker-compose command.

This is a good setup if you have scripts that run tests, or if you have a continuous integration pipeline that automatically runs tests and requires a pass/fail.

Test runners such as jest will generally exit with code 0 (success) if all tests pass, and exit with a non-zero code (failure) if any tests fail.

package.json

If your project uses npm, yarn or their ilk, you can specify commands to run tests in the scripts section.

Our docker-compose.e2e.yml file requires the app service to run the command yarn test:e2e. In our hypothetical example app, this is specified as follows:

"test:e2e": "NODE_ENV=test jest --config ./test/jest-e2e.json",

Of course, you will need to specify the command that initiates running E2E tests that’s particular to your project and environment.

To spin up your application via docker-compose and run tests from your workstation (or CI environment, etc), add the following script:

"test:e2e:docker": "docker-compose -f docker-compose.e2e.yml up --abort-on-container-exit --exit-code-from app"

You could then run it via npm run test:e2e:docker or yarn test:e2e:docker.

If you are using a different package/dependency management solution, you can specify your test-related scripts there. You also have the option to define shell/bash scripts that can run your tests.

NestJS Integration and E2E Tests with TypeORM, Postgres, and JWT

This guide covers a basic setup for integration and/or end-to-end (e2e) testing on a NestJS project that uses TypeORM and Postgres for database connectivity, and JWT for authentication.

Many of the tips and example tests found in this guide are also applicable to applications that use MongoDB.

This guide assumes you wish to use a live database for testing. Some developers do not feel this step is necessary, while others strongly advocate using a database because it tests an application in a scenario that more closely reflects a production environment.

Regardless of where you stand on the question of a live database, integration and end-to-end tests are an important part of an overall testing/QA strategy.

Unit tests alone fall short in a number of areas. In particular, with NestJS there can be a lot of mocking of services, repositories, etc., and extensive mocking runs the risk of masking potential bugs.

The code covered by this guide is available at the following gist:

https://gist.github.com/firxworx/575f019c5ebd67976da164f48c2f4375

Creating a Test Database

Start by setting up a database to use for your tests that is different from your development or production databases.

How you proceed depends largely on your individual project’s setup, so this guide won’t go into too much detail on this step.

One relatively straightforward approach is to use docker to spin up a container running Postgres. I plan on writing another guide in the near future that covers using docker-compose to spin up a test environment and run e2e tests.

You should also consider how your test database is loaded with fixtures (test data) and reset between test runs. This could be done via separate scripts, or you can manage the state of your test database within your test code.

I prefer the latter approach so that everything related to spinning up tests is defined alongside the tests themselves.

I provide a tip below for getting the app’s TypeORM connection so that you can perform raw operations against your database to help set up and tear down test data.

Environment Variables and Test Database

Next, configure your project to use the test database when it is being run in a test environment.

Ideally your app should be coded to check the values of environment variables and change its behaviour depending on whether it is running in a test, development, or production scenario.

One approach is to define different TypeORM configurations that are selected based on the value of the NODE_ENV variable. For example, if NODE_ENV is ‘development’ or ‘test’, you could supply a particular configuration to TypeORM that is different from ‘production’.

A related approach is to populate the individual values of your TypeORM configuration, such as host, port, username and password from different environment variables, such as DB_HOST, DB_PORT, DB_USER, and DB_PASS.

Recall that you can reference process.env that’s built into NodeJS to access the value of any environment variable. For example, process.env.NODE_ENV can be checked in a conditional statement to see if its value is equal to ‘test’, ‘development’, or ‘production’.
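These two approaches can be combined, as in the following sketch (the variable names DB_HOST/DB_PORT and the example_test database name are assumptions, not part of any particular project):

```typescript
// Sketch: select database connection settings based on NODE_ENV.
// The env variable names and database names are hypothetical -- adapt to your project.
interface DbConfig {
  host: string
  port: number
  database: string
  synchronize: boolean
}

function dbConfigFor(env: string | undefined): DbConfig {
  return {
    host: process.env.DB_HOST ?? 'localhost',
    port: Number(process.env.DB_PORT ?? 5432),
    // point tests at a dedicated database so dev/prod data is never touched
    database: env === 'test' ? 'example_test' : 'example',
    // never auto-synchronize the schema in production
    synchronize: env !== 'production',
  }
}
```

An object like this can then be spread into the options you pass to TypeOrmModule.forRoot().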

There are many ways to work with environment variables and manage configurations in the NodeJS and NestJS ecosystems, so I won’t delve too far into the details because your project is likely to be somewhat unique.

In the scripts section of your package.json file, you can ensure that a given environment variable is set for tests by modifying the command to add an environment variable declaration. For example, the following explicitly sets NODE_ENV=test for tests invoked via the test:e2e script:

"test:e2e": "NODE_ENV=test jest --config ./test/jest-e2e.json"

This could just as easily be NODE_ENV=development or anything else that happens to suit your particular needs.

Initializing the Application in beforeAll()

The approach I take in this guide starts with the boilerplate test/jest-e2e.json supplied by the NestJS application template.

The boilerplate provides a beforeEach() implementation. For the purposes of this guide, we are going to delete that method because it is more straightforward to set up the NestJS application once via the beforeAll() method and then run our tests against it.

The following beforeAll() method initializes our Nest Application based on the root AppModule:

beforeAll(async () => {
  const moduleFixture: TestingModule = await Test.createTestingModule({
    imports: [AppModule],
  }).compile()

  app = moduleFixture.createNestApplication()
  // app.useLogger(new TestLogger()) // more on this line is below
  await app.init()
})

I am assuming a typical setup where your project’s AppModule imports TypeOrmModule.forRoot(...) to provide database connectivity to the rest of your application, and that any modules that use database entities import them via TypeOrmModule.forFeature([ EntityName, ... ]). How to use TypeORM with NestJS is covered in the docs.

Loading Test Data

As mentioned above, you could load your test data via a scripted step. For example, you could have a script defined in the scripts section of package.json that wipes your test database and loads fresh test data.

Another approach (which can also be used in combination with the above approach) is to load a set of representative test data in a beforeAll() method in your test file. You can also use jest’s beforeEach() and afterEach() methods to load/reset data for more granular control before and/or after individual tests.

A good set of test data should flex the various features of your app.

For example, if your app has a notion of active vs. inactive users, or verified/confirmed vs. unverified/unconfirmed users, these should be reflected in your test data and you can then write test cases to confirm that your application behaves as intended for each case.
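Enumerating those cases directly in the fixture data makes it easy to confirm each one is represented (a sketch; the field names and emails are hypothetical):

```typescript
// Hypothetical fixture data covering active/inactive and confirmed/unconfirmed users
interface UserFixture {
  email: string
  isActive: boolean
  isConfirmed: boolean
}

const userFixtures: UserFixture[] = [
  { email: 'active@example.com', isActive: true, isConfirmed: true },
  { email: 'inactive@example.com', isActive: false, isConfirmed: true },
  { email: 'unconfirmed@example.com', isActive: true, isConfirmed: false },
]

// sanity check: is a given combination represented in the fixture set?
const hasCase = (active: boolean, confirmed: boolean): boolean =>
  userFixtures.some((u) => u.isActive === active && u.isConfirmed === confirmed)
```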

In terms of loading or deleting data, a useful tip is that you can access the TypeORM database connection used by the app via:

const connection = app.get(Connection)

From the connection you can access the migrations or manager properties or the createQueryBuilder() method as required to perform setup or teardown operations against the database.

For example, with given entityName and data values, you could insert records into the database via:

// raw INSERT query:
await connection.createQueryBuilder().insert().into(entityName).values(data).execute()

// if you need cascades:
await connection.getRepository(entityName).save(data)

Another tip is that you can also easily start from a blank database that is loaded with your schema via:

await connection.synchronize(true)

The true value is for the dropBeforeSync option. This option drops the entire database and all of its data before synchronizing the schema, making it perfect for the start of a test run.

Case Study from Example Project

In one of my projects I have the following block of code in the beforeAll() method that launches the app, immediately following the await app.init() line:

if (process.env.NODE_ENV === 'test') {
    const connection = app.get(Connection)
    await connection.synchronize(true)
    await loadFixtures('data', connection)
}

The code synchronizes the database schema against a fresh database, and the next line runs a loadFixtures() function that I implemented that loads test fixtures (test data) from a yaml file. This approach was inspired by the discussion on typeorm issue #1550.

Note that I specifically check for the ‘test’ environment in case I accidentally run the e2e test script in my local development environment where I want to preserve my working development data.

Closing Down in afterAll()

After all your tests have finished, remember to ensure that your app is closed down with app.close():

afterAll(async () => {
  await app.close()
})

Example Tests

Suppose our test data includes a valid test user test@example.com with password password.

The following tests a hypothetical auth/login endpoint. It also saves the JWT token received in the response to a higher-scoped variable that we can then reference in subsequent tests:

// assume a higher-scoped variable definition to store the jwt token, e.g.
let jwtToken: string

it('authenticates a user and includes a jwt token in the response', async () => {
  const response = await request(app.getHttpServer())
    .post('/auth/login')
    .send({ email: 'test@example.com', password: 'password' })
    .expect(200)

  // save the access token for subsequent tests
  jwtToken = response.body.accessToken

  // ensure a JWT token is included in the response
  expect(jwtToken).toMatch(/^[A-Za-z0-9-_=]+\.[A-Za-z0-9-_=]+\.?[A-Za-z0-9-_.+/=]*$/) // jwt regex
})
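If you use the same shape-check in several tests, it can be factored into a small helper (a sketch using the same regex; the helper name is hypothetical):

```typescript
// returns true if a string has the basic header.payload.signature shape of a JWT
const JWT_REGEX = /^[A-Za-z0-9-_=]+\.[A-Za-z0-9-_=]+\.?[A-Za-z0-9-_.+/=]*$/

function looksLikeJwt(token: string): boolean {
  return JWT_REGEX.test(token)
}
```

Note that this checks the token’s shape only; it does not verify the signature.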

The next example attempts to log in our test@example.com user with an incorrect password. It confirms that the client receives an HTTP 401 response and that it does not include an accessToken:

it('fails to authenticate user with an incorrect password', async () => {
  const response = await request(app.getHttpServer())
    .post('/auth/login')
    .send({ email: 'test@example.com', password: 'wrong' })
    .expect(401)

  expect(response.body.accessToken).not.toBeDefined()
})

We can also confirm our app’s behaviour when an unrecognized user attempts to log in. Assume that the user nobody@example.com does not exist in the test database:

it('fails to authenticate user that does not exist', async () => {
  const response = await request(app.getHttpServer())
    .post('/auth/login')
    .send({ email: 'nobody@example.com', password: 'test' })
    .expect(401)

  expect(response.body.accessToken).not.toBeDefined()
})

Since we saved the jwtToken to a higher-scoped variable during the first example test, we can then use that token to make authenticated requests. Supertest includes a set() method where we can set the Authorization header:

it('gets a protected resource with an authenticated request', async () => {
  const response = await request(app.getHttpServer())
    .get('/protected')
    .set('Authorization', `Bearer ${jwtToken}`)
    .expect(200)

  const resources = response.body.data

  // write assertions that reflect your test data scenario
  // e.g. expect(resources).toHaveLength(3)
})

You can easily modify the above test to send an incorrect or malformed Bearer token and confirm the server response in that case as well.

From here you should have a good foundation to build more tests. Be sure to check the docs for NestJS, jest, and supertest for more details and examples.

If your application is more elaborate and features user roles and permissions, be sure to write tests to confirm that it behaves as intended across different scenarios.

Replacing the Logger

Running tests can generate a lot of log output from your application. One idea to eliminate superfluous log output during tests is to create a LoggerService with mock “no-op” methods and use that as your application logger for tests.

You can add the following class to your project:

class TestLogger implements LoggerService {
  log(message: string) {}
  error(message: string, trace: string) {}
  warn(message: string) {}
  debug(message: string) {}
  verbose(message: string) {}
}

In the above example for beforeAll() there was a commented-out line app.useLogger(new TestLogger()).

With the test logger added to your project and imported into your test file, you could now uncomment this line.

Credit for the mock logger idea goes to Paul Salmon who wrote this post:

https://medium.com/@salmon.3e/integration-testing-with-nestjs-and-typeorm-2ac3f77e7628

Running Tests

The boilerplate NestJS project defines a test:e2e script in package.json.

In my projects, I modify it to explicitly set NODE_ENV to test:

"test:e2e": "NODE_ENV=test jest --config ./test/jest-e2e.json",

If you use yarn, run the test via yarn test:e2e.

If you are using npm the equivalent is: npm run test:e2e.

Finally, here’s that link again to the gist that contains the code discussed in this guide:

https://gist.github.com/firxworx/575f019c5ebd67976da164f48c2f4375

Let me know in the comments if this guide helped you with testing your NestJS application :).

NestJS Generate PDF with PDFKit and Send to Client

This guide covers using PDFKit in a NestJS API to generate a PDF and send it back to a client.

There are a ton of libraries within the JavaScript/TypeScript ecosystem to generate PDF files. I chose PDFKit because it’s popular and it doesn’t require high-overhead dependencies such as a headless web browser. You can substitute a different library if you prefer.

This guide assumes that you’re using the default Express configuration for NestJS rather than Fastify.

Install Dependencies

The following examples use yarn. You are free to use your favourite package manager such as npm.

yarn add pdfkit
yarn add --dev @types/pdfkit

NestJS Service

Suppose you have a NestJS service that’s decorated with the @Injectable() decorator.

Import PDFDocument from pdfkit:

import * as PDFDocument from 'pdfkit'

Create a function within your service to generate your PDF.

The following example generates a basic “Hello World” PDF. The function wraps the PDF production step in a Promise that resolves with a Buffer containing the PDF data:

  async generatePDF(): Promise<Buffer> {
    const pdfBuffer: Buffer = await new Promise(resolve => {
      const doc = new PDFDocument({
        size: 'LETTER',
        bufferPages: true,
      })

      // customize your PDF document
      doc.text('hello world', 100, 50)
      doc.end()

      const buffer: Buffer[] = []
      doc.on('data', buffer.push.bind(buffer))
      doc.on('end', () => {
        const data = Buffer.concat(buffer)
        resolve(data)
      })
    })

    return pdfBuffer
  }

Note that there are many options you can pass to PDFDocument(), and many ways to customize the PDF beyond the text() method shown above. Please refer to the PDFKit docs for all the details.

NestJS Controller

Ensure that your controller’s constructor references your service so that it is made available via NestJS’ Dependency Injection:

// ...
  constructor(
    private exampleService: ExampleService
  ){}
// ...

Implement a function to download a generated PDF.

The following example uses the @Res() decorator (imported from @nestjs/common) to access the underlying ExpressJS Response object (Response is imported from express):

  @Get('/pdf')
  async getPDF(
    @Res() res: Response,
  ): Promise<void> {
    const buffer = await this.exampleService.generatePDF()

    res.set({
      'Content-Type': 'application/pdf',
      'Content-Disposition': 'attachment; filename=example.pdf',
      'Content-Length': buffer.length,
    })

    res.end(buffer)
  }

Rather than use NestJS’ @Header() decorator, we set the headers manually on the response object so that we can specify the Content-Length.

The generated PDF is sent back to the client by passing the buffer to res.end().

If you want the client’s browser to open the PDF by default rather than download it, change the attachment keyword in the Content-Disposition header to inline.
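The header logic can be captured in a small helper (a sketch; the pdfHeaders name is hypothetical):

```typescript
// Sketch: build response headers for a generated PDF buffer.
// Pass inline = true to ask the browser to display the PDF instead of downloading it.
function pdfHeaders(filename: string, length: number, inline = false): Record<string, string> {
  return {
    'Content-Type': 'application/pdf',
    'Content-Disposition': `${inline ? 'inline' : 'attachment'}; filename=${filename}`,
    'Content-Length': String(length),
  }
}
```

A controller could then call res.set(pdfHeaders('example.pdf', buffer.length)) before res.end(buffer).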

If you’re using a different PDF library that returns the PDF as a stream, you can return it to the client via stream.pipe(res).

Don’t forget to add cache-busting headers such as Cache-Control if you want client browsers to always fetch the latest version of your PDF file given repeated requests.

Let me know in the comments if this guide was helpful to you!

Email Module for NestJS with Bull Queue and the Nest Mailer

This guide covers creating a mailer module for your NestJS app that enables you to queue emails via a service that uses @nestjs/bull and redis; queued emails are then handled by a processor that uses the @nestjs-modules/mailer package to send them.

NestJS is an opinionated NodeJS framework for back-end apps and web services that works on top of your choice of ExpressJS or Fastify.

Redis is a popular in-memory key-value database that will serve as the backbone of our queue. Tasks to send emails will be added to the queue, and the NestJS processor will consume tasks from the queue.

The @nestjs-modules/mailer package is implemented with the popular nodemailer library. Email templates are supported via handlebars.

I wrote this guide because I couldn’t find any NestJS examples that use a queue to send emails. A queue is important to prevent your app from getting bogged down when handling labour-intensive tasks such as sending mail, processing multimedia files, or crunching data.

For simplicity’s sake, the implementation covered by this guide sends emails in the same process as they are queued. The processor will handle queued tasks when the app is idle.

An enhanced implementation could involve a separate “worker” (ideally running on a different server) that takes care of processing the queue. This way, your api is free to quickly respond to client requests without the burden of email processing.

Redis for Development

Perhaps the easiest way to get a redis instance rolling for development purposes is with Docker. Assuming you have Docker installed on your machine, you can run the following command:

docker run -p 6379:6379 --name redisqueue -d redis

Port 6379 is the default redis port. Make sure you don’t already have a conflicting service running on port 6379!

To later stop the redis instance, run the command:

docker stop redisqueue

Installing Dependencies

Install the following project dependencies.

I use yarn but you can easily change the commands to reflect npm or another favourite package manager:

yarn add @nestjs/bull bull

yarn add --dev @types/bull

yarn add @nestjs-modules/mailer

yarn add handlebars

Module Creation

Create a new module named mail in your project.

You can use the nestjs cli to scaffold the module by running the following command in the root of your project folder: nest g module mail.

In your module folder, create a new sub-folder called templates/.

Handlebars templates in your src/mail/templates folder won’t automatically be copied over into the project build folder. You can solve this by adding compilerOptions to your project’s nest-cli.json file and specifying an assets folder. An example nest-cli.json file follows:

{
  "collection": "@nestjs/schematics",
  "sourceRoot": "src",
  "compilerOptions": {
    "assets": [
      "mail/templates/**/*"
    ]
  }
}

In the root mail.module.ts file, configure the MailerModule and BullModule in the module’s imports definition.

The following example assumes the config package is being used as a configuration tool. You can substitute your own configuration package, or simply hardcode values to get things working:

import { Module } from '@nestjs/common'
import { MailService } from './mail.service'
import { MailProcessor } from './mail.processor'
import { MailerModule } from '@nestjs-modules/mailer'
import { HandlebarsAdapter } from '@nestjs-modules/mailer/dist/adapters/handlebars.adapter'
import { BullModule } from '@nestjs/bull'
import * as config from 'config'

@Module({
  imports: [
    MailerModule.forRootAsync({
      useFactory: () => ({
        transport: {
          host: config.get('mail.host'),
          port: config.get('mail.port'),
          secure: config.get<boolean>('mail.secure'),
          // tls: { ciphers: 'SSLv3', }, // gmail
          auth: {
            user: config.get('mail.user'),
            pass: config.get('mail.pass'),
          },
        },
        defaults: {
          from: config.get('mail.from'),
        },
        template: {
          dir: __dirname + '/templates',
          adapter: new HandlebarsAdapter(),
          options: {
            strict: true,
          },
        },
      }),
    }),
    BullModule.registerQueueAsync({
      name: config.get('mail.queue.name'),
      useFactory: () => ({
        redis: {
          host: config.get('mail.queue.host'),
          port: config.get('mail.queue.port'),
        },
      }),
    }),
  ],
  controllers: [],
  providers: [
    MailService,
    MailProcessor,
  ],
  exports: [
    MailService,
  ],
})
export class MailModule {}

Creating the Mail Service

Use the @InjectQueue() decorator from @nestjs/bull to inject the mailQueue (of type Queue, as imported from the ‘bull’ package):

  constructor(
    @InjectQueue(config.get('mail.queue.name'))
    private mailQueue: Queue,
  ) {}

You can now implement a function that adds a task to the queue. In the example below, the task is named ‘confirmation’ and passed a payload containing the user and confirmation code:

  /** Send email confirmation link to new user account. */
  async sendConfirmationEmail(user: User, code: string): Promise<boolean> {
    try {
      await this.mailQueue.add('confirmation', {
        user,
        code,
      })
      return true
    } catch (error) {
      // this.logger.error(`Error queueing confirmation email to user ${user.email}`)
      return false
    }
  }

Creating the Mail Processor

In order for queued tasks to be handled, we need to define a processor.

Create mail.processor.ts in your mail module folder.

Be sure to import the MailerService provided by the @nestjs-modules/mailer package:

import { MailerService } from '@nestjs-modules/mailer'

Use the @Processor() decorator to identify your class as a processor for the mail queue, and add the MailerService to the constructor to inject it via NestJS’ dependency injection:

@Processor(config.get('mail.queue.name'))
export class MailProcessor {
  private readonly logger = new Logger(this.constructor.name)

  constructor(
    private readonly mailerService: MailerService,
  ) {}

  // ...
}

The following example implements a number of decorated functions using the decorators @OnQueueActive(), @OnQueueCompleted(), and @OnQueueFailed() to provide better visibility and logging into how the processor is working.

To implement a function that handles the ‘confirmation’ task, decorate it with the @Process() decorator and pass it the task name: @Process('confirmation'). Note how the payload is received and can be used in the task.

@Processor(config.get('mail.queue.name'))
export class MailProcessor {
  private readonly logger = new Logger(this.constructor.name)

  constructor(
    private readonly mailerService: MailerService,
  ) {}

  @OnQueueActive()
  onActive(job: Job) {
    this.logger.debug(`Processing job ${job.id} of type ${job.name}. Data: ${JSON.stringify(job.data)}`)
  }

  @OnQueueCompleted()
  onComplete(job: Job, result: any) {
    this.logger.debug(`Completed job ${job.id} of type ${job.name}. Result: ${JSON.stringify(result)}`)
  }

  @OnQueueFailed()
  onError(job: Job<any>, error: any) {
    this.logger.error(`Failed job ${job.id} of type ${job.name}: ${error.message}`, error.stack)
  }

  @Process('confirmation')
  async sendWelcomeEmail(job: Job<{ user: User, code: string }>): Promise<any> {
    this.logger.log(`Sending confirmation email to '${job.data.user.email}'`)

    const url = `${config.get('server.origin')}/auth/${job.data.code}/confirm`

    if (!config.get<boolean>('mail.live')) {
      // when not configured for live mail, skip sending and return a mock result
      return 'SENT MOCK CONFIRMATION EMAIL'
    }

    try {
      const result = await this.mailerService.sendMail({
        template: 'confirmation',
        context: {
          ...plainToClass(User, job.data.user),
          url: url,
        },
        subject: `Welcome to ${config.get('app.name')}! Please Confirm Your Email Address`,
        to: job.data.user.email,
      })
      return result

    } catch (error) {
      this.logger.error(`Failed to send confirmation email to '${job.data.user.email}'`, error.stack)
      throw error
    }
  }
}

Creating the Handlebars Template

Create the file confirmation.hbs in your mail module’s templates/ subfolder:

<p>Hello {{ firstName }}</p>
<p>Please click the link below to confirm your email address:</p>
<p><a href="{{ url }}" target="_blank">Confirm Email</a></p>

Note how the context is being used to provide data to the email body.
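To illustrate how the context feeds the template, here is a naive stand-in for what the template engine does (illustration only; in the module above, real rendering is performed by the HandlebarsAdapter):

```typescript
// Naive placeholder substitution, standing in for handlebars rendering.
// Replaces {{ name }} placeholders with values from the context object.
function renderNaive(template: string, context: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (_match: string, key: string) => context[key] ?? '')
}
```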

Using the Mail Service in Another Module

Suppose another module needs to send email, such as an auth module that needs to send an email confirmation link to a new user.

Open the module definition file, e.g. src/auth/auth.module.ts then add the MailModule we created to its imports list in the @Module decorator:

// ...
import { MailModule } from '../mail/mail.module'
// ...

@Module({
  imports: [
    // ...
    MailModule
    // ...
  ]
// ...
})

You can then use the MailService provided by MailModule in your controllers and services via dependency injection.

Import the MailService (import { MailService } from '../mail/mail.service') and in your constructor, add the definition private mailService: MailService to inject it.

You can then call methods defined in your service, such as:

this.mailService.sendConfirmationEmail(user, '1234')

The mail service will add the email task to the queue, and the mail processor will “pick up” and complete the task when your app is idle.

Wrap-Up

Don’t forget to shut down your redis queue container when you are done developing!

How to use aws-sdk for NodeJS with AWS Translate

This post covers using the aws-sdk for NodeJS with AWS Translate.

The code examples are written in modern ECMAScript (ES) and transpiled with Babel.

Install the AWS SDK

First, install the aws-sdk package in your project using your favourite package manager:

yarn add aws-sdk
# OR
npm i aws-sdk

Ensure There’s a Working AWS Profile

Ensure that you have an AWS profile and configuration properly set up for your user. AWS profiles are typically stored in the ~/.aws folder in your home directory.

Suppose you have a profile named firxworx. An example entry in ~/.aws/config for that profile is:

[profile firxworx]
region = ca-central-1
output = json

A corresponding entry in the ~/.aws/credentials file that specifies credentials for the example firxworx profile looks like this:

[firxworx]
aws_access_key_id=ABCDXXXX
aws_secret_access_key=ABCDXXXX

Refer to the AWS Docs if you need to create a profile and obtain an Access Key ID and Secret Access Key.

Write Your Code

Start by importing the aws-sdk package:

import AWS from 'aws-sdk'

Next, configure AWS by specifying which profile’s credentials to use:

const credentials = new AWS.SharedIniFileCredentials({ profile: 'firxworx' })
AWS.config.credentials = credentials

Specify any other config options. The following pins the Translate client to a specific API version (the most current at the time of writing):

AWS.config.apiVersions = {
  translate: '2017-07-01',
}

Reference the AWS Translate homepage and take note of which regions AWS Translate is currently available in. If you need to specify a region that’s different from the default listed in your AWS profile, or you wish for your code to be explicit about which region it’s using, add the following lines, changing the region to the valid region you would like to use:

AWS.config.update({
  region: 'ca-central-1'
})

If you are using any Custom Terminologies, be sure to define them in the same region that you are about to use for AWS Translate. Custom Terminologies are lists of translation overrides that can be uploaded into the AWS Console. They are useful for ensuring that brand names, terms of art, trademarks, etc. are translated correctly. Custom Terminology definitions are only available within the region in which they were created and saved.

Next, create an instance of AWS Translate:

const awsTranslate = new AWS.Translate()

At this point everything is set up to write a function that can translate text.

The following implements an async function called asyncTranslate(). The function’s params include a hypothetical custom terminology named example-custom-terminology-v1. Do not specify any value in the TerminologyNames array if you are not using any custom terminologies.

A key insight here is the .promise() method in the line awsTranslate.translateText(params).promise(), which causes the SDK call to return a promise.

async function asyncTranslate(langFrom, langTo, text) {
  const params = {
    SourceLanguageCode: langFrom,
    TargetLanguageCode: langTo,
    Text: text,
    TerminologyNames: [
      'example-custom-terminology-v1'
    ]
  }

  try {
    const translation = await awsTranslate.translateText(params).promise()
    return translation.TranslatedText
  } catch (err) {
    console.log(err, err.stack)
  }
}

The langFrom and langTo must be language codes as understood by AWS Translate. Refer to the docs for a current list of supported language codes: https://docs.aws.amazon.com/translate/latest/dg/what-is.html.

If you had a hypothetical index.js entry point for your NodeJS application and wanted to use the above function, an example invocation could be:

(async () => {

  const translation = await asyncTranslate('en', 'fr', 'Hello World')
  console.log(translation)

})()