It’s been about a year since my last post in this series. Things have – surprise surprise – changed quite a lot in a year, but even if I’d like to update the chosen tech, I’ll mostly use what I originally chose. However, I’ll make a couple of major version updates for the Auth service:

  • Node 6.10.2 -> 8.11.2
  • Restify 4.3.0 -> 7.2.0

The Shippable Node image was also updated to 6.5.4 to get Node 8.11.2 support. Due to the Node version update, the Dockerfiles for all the components were updated as well.

Overall Target

As mentioned in the introduction article, the purpose of the service is to authenticate/authorize the user and return refresh and access tokens for accessing the App service features.

REST APIs

For the aforementioned functionality, three APIs are needed:

  • Login – for the user to log in with their username and password to receive a refresh token
  • Renew – for requesting new access tokens using refresh tokens
  • Logout – for the user to log out to clear tokens on the client side

Testing

Acceptance Tests

The Auth service provides only JSON endpoints, so until the App and Front services are implemented there isn’t much we can do in the way of acceptance tests. Let’s move on to the API tests.

API Tests

The following tests should cover the most important aspects of the service:

Login:

  • Accessing the endpoint with the POST method with an invalid username should return an error
  • Accessing the endpoint with the POST method with an invalid password should return an error
  • Accessing the endpoint with the POST method with valid credentials should return a new refresh token

Renew:

  • Accessing the endpoint with the GET method without a proper refresh token should return an error
  • Accessing the endpoint with the GET method with a proper refresh token should return a new access token

Logout:

  • Accessing the endpoint with the POST method without a proper refresh token should return OK and clear the token cookie
  • Accessing the endpoint with the POST method with a proper refresh token should return OK and clear the token cookie

Unit Tests

With unit tests we covered the cases not covered by the API tests. In the end, we wrote only a couple of unit tests, for testing the catch-block error handling. We would probably have used unit tests much more if there were complicated logic in the endpoint handlers or elsewhere in the application. In this article we cover the testing before the implementation, but in reality the development is an iterative process following the TDD red-green-refactor cycle.

Implementation

The implementation included the following things:

  • Updating the project structure to follow the structure mentioned in the earlier post
  • Implementing the handlers for the needed API endpoints
  • Taking into account common JWT-related and Node.js service security vulnerabilities

To keep the implementation straightforward, we won’t implement APIs for creating or managing users. Because of this, we define the one and only user in the system via Sequelize seeds – though revealing user credentials in the source code is not acceptable in any production-grade application.

New Modules

Restify, Restify-errors, Restify-async-wrap, Restify-cors-middleware & Cookie

These modules provide some core functionality needed for a Restify-based server. Cookie is used for basic cookie-related operations, and restify-async-wrap is used for catching unhandled errors from async/await usage, which the Restify framework does not fully support. Since our API services reside in their own subdomains, we’ll use restify-cors-middleware to enable CORS so that the subdomains are accessible to the Front service.

Sequelize, Sequelize-cli, Pg & Pg-hstore

For database access we’ll use the Sequelize ORM, and Sequelize-cli is needed for creating and running migrations and other database-related tasks. In addition, Sequelize needs the pg and pg-hstore modules for accessing PostgreSQL databases.

Nconf

Even in a simple system like this there are already quite a few configuration variables to define. Nconf makes it easy to read configuration variables from the command line, environment variables and config files, as well as from an in-memory store. It also supports default and required values, so that the server won’t start if a required configuration variable is missing.
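The precedence order nconf implements can be sketched in plain JavaScript. This is only an illustration of the idea – the key names (port, logLevel) are made-up examples, not the service’s actual configuration keys:

```javascript
// Not nconf's API – a stdlib sketch of the precedence nconf implements:
// command line overrides environment, which overrides the config file,
// which overrides the defaults.
const defaults = { port: 3000, logLevel: 'info' };
const fromFile = { logLevel: 'debug' };  // e.g. parsed from config.json
const fromEnv = { port: '8080' };        // e.g. picked from process.env
const fromArgv = {};                     // e.g. parsed from process.argv

const config = Object.assign({}, defaults, fromFile, fromEnv, fromArgv);
console.log(config.port);     // '8080' – the env value wins over the default
console.log(config.logLevel); // 'debug' – the file value wins over the default

// The "required values" feature boils down to a check like this:
const required = ['port', 'logLevel'];
const missing = required.filter(key => config[key] === undefined);
if (missing.length > 0) {
  throw new Error(`Missing required config: ${missing.join(', ')}`);
}
```

In the real service this hierarchy is set up with nconf’s argv(), env(), file() and defaults() methods.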

Restify-jwt-community & Jsonwebtoken

Restify-jwt-community and jsonwebtoken are used for creating and validating refresh and access tokens.

Bcryptjs

Bcryptjs is used for hashing and verifying passwords.

Express-validator

This module is used for validating login endpoint parameters.
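Under the hood such checks boil down to simple predicates. A plain-JavaScript sketch of the length rules used for the login endpoint (this mirrors, but is not, express-validator’s API; the bounds match the login route configuration):

```javascript
// Not express-validator's API – a plain sketch of the length rules the
// login route enforces via check('username') / check('password').
const isLength = (value, { min, max }) =>
  typeof value === 'string' && value.length >= min && value.length <= max;

const validateLogin = params => {
  const errors = [];
  if (!isLength(params.username || '', { min: 2, max: 50 })) {
    errors.push({ param: 'username', msg: 'invalid length' });
  }
  if (!isLength(params.password || '', { min: 8, max: 50 })) {
    errors.push({ param: 'password', msg: 'invalid length' });
  }
  return errors;
};

console.log(validateLogin({ username: 'FirstUser', password: 'Password1' }));
// → []
console.log(validateLogin({ username: 'x', password: 'short' }).length);
// → 2
```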

Database Setup

For starters, it’s nice to have a tool for manual database inspection and testing; pgAdmin was our choice. For the database, we’re using the same PostgreSQL Docker images we use in the CI and production environments.

Initialisation

In the beginning Sequelize needs to be initialised:

```
$ node_modules/.bin/sequelize init --migrations-path db/sequelize/migrations --seeders-path db/sequelize/seeders --models-path models/sequelize --config config/sequelize/config.json
```

Not all of the init parameters are mandatory, but since we customised some of the paths from their default values, we defined them with the additional parameters.

The init does quite a few things, one of them being the generation of the index.js file, which loads the defined models and provides access to them. The file works as is, but it didn’t follow our coding conventions, so we modified it slightly.

Configurations

We also need a .sequelizerc configuration file so that Sequelize-cli finds the customised paths when running migrations and other tasks:

```
const path = require('path');

module.exports = {
  'config': path.resolve('config', 'sequelize', 'config.json'),
  'models-path': path.resolve('models', 'sequelize'),
  'seeders-path': path.resolve('db', 'sequelize', 'seeders'),
  'migrations-path': path.resolve('db', 'sequelize', 'migrations'),
};
```

In addition to the above, configurations are needed for multiple environments. The configs for each environment are quite similar, but there are some differences in the logging and host settings. For production, use_env_variable is also set so that the basic connection variables are read from the DATABASE_URL environment variable instead of the config file.

```
{
  "development": {
    "username": "postgres",
    "password": null,
    "database": "database_dev",
    "host": "db-auth",
    "dialect": "postgres",
    "timezone": "utc",
    "operatorsAliases": false,
    "seederStorage": "sequelize"
  },
  "continuous_integration": {
    "username": "postgres",
    "password": null,
    "database": "database_ci",
    "host": "127.0.0.1",
    "dialect": "postgres",
    "timezone": "utc",
    "logging": false,
    "operatorsAliases": false,
    "seederStorage": "sequelize"
  },
  "production": {
    "use_env_variable": "DATABASE_URL",
    "timezone": "utc",
    "logging": false,
    "operatorsAliases": false,
    "seederStorage": "sequelize"
  }
}
```

Data Model, Migrations and Seeds

The data models are defined with model files and migrations. The needed user data model and migration can be created in the following way:

```
$ node_modules/.bin/sequelize model:generate --name User --attributes username:string,passwordHash:string
```

We modified the automatically created model and migration a bit. The resulting model:

```
module.exports = (sequelize, DataTypes) => {
  const User = sequelize.define('User', {
    username: {
      type: DataTypes.STRING,
      allowNull: false,
      unique: true,
      validate: {
        len: [2, 50],
      },
    },
    passwordHash: {
      type: DataTypes.STRING,
      allowNull: false,
      validate: {
        len: [2, 150],
      },
    },
  }, {});
  return User;
};
```

And the corresponding migration:

```
module.exports = {
  up: (queryInterface, Sequelize) => (
    queryInterface.createTable('Users', {
      id: {
        allowNull: false,
        autoIncrement: true,
        primaryKey: true,
        type: Sequelize.INTEGER,
      },
      username: {
        type: Sequelize.STRING,
        unique: true,
        allowNull: false,
      },
      passwordHash: {
        type: Sequelize.STRING,
        allowNull: false,
      },
      createdAt: {
        allowNull: false,
        type: Sequelize.DATE,
      },
      updatedAt: {
        allowNull: false,
        type: Sequelize.DATE,
      },
    })
  ),
  down: queryInterface => queryInterface.dropTable('Users'),
};
```

The migrations can be run against the database with Sequelize-cli, which will do its magic and make sure that only the pending migrations are run:

```
$ node_modules/.bin/sequelize db:migrate
```

Since the implemented Front service will be very simple and won’t contain the needed functionality for registering new users, one user is added to the database via seeds:

```
$ node_modules/.bin/sequelize seed:generate --name first-user
```

The empty template file has to be filled with proper content; this is the seed we’ll use in this case:

```
module.exports = {
  up: queryInterface => (
    queryInterface.bulkInsert('Users', [{
      username: 'FirstUser',
      passwordHash: '$2y$12$M9D5R21X.6TJR8WJ66af6eUyBfYwVgjJibcOcwhptvfp6a8.46yD6', // "Password1"
      updatedAt: new Date(),
      createdAt: new Date(),
    }], {})
  ),
  down: queryInterface => (
    queryInterface.bulkDelete('Users', {
      username: ['FirstUser'],
    })
  ),
};
```

The *FirstUser* – with the password *Password1* – is added to the database. The password is not stored in plain text, since passwords are hashed using bcrypt. There are plenty of online tools for creating bcrypt hashes for testing purposes. The following command runs the seeds against the database:

```
$ node_modules/.bin/sequelize db:seed:all
```

##### Database Management
To run the tests reliably against a similarly initialised database, we added some grunt tasks for managing the CI test workflow. The added tasks drop and create the needed database and run the migrations and seeds to make sure the database is always in a similar state before each test run. The tools provided by Sequelize are a good start, but for complicated systems more powerful tooling may be needed to allow parallel API tests with test specific custom test data sets, for instance.

```
// excerpt from Gruntfile.js
grunt.initConfig({
  // ...
  run: {
    // ...
    drop_db: {
      exec: 'sequelize db:drop || true',
    },
    create_db: {
      exec: 'sequelize db:create',
    },
    init_db: {
      exec: 'sequelize db:migrate',
    },
    seed_db: {
      exec: 'sequelize db:seed:all',
    },
  },
});

grunt.loadNpmTasks('grunt-env');
grunt.loadNpmTasks('grunt-run');
grunt.registerTask('default', [
  'env:ci',
  'run:drop_db',
  'run:create_db',
  'run:init_db',
  'run:seed_db',
  'run:test',
  'run:eslint',
]);
```

We also added *Sequelize* migrate commands at the beginning of the *start* and *start:dev* scripts to make sure the databases are always up to date before the service is started.

```
"scripts": {
    "start": "sequelize db:migrate && sequelize db:seed:all && node index.js",
    "start:dev": "sequelize db:migrate && sequelize db:seed:all && node-dev index.js",
    "test": "grunt",
    "test:unit": "grunt unit_tests_only"
  },
```

##### API Endpoints
We won’t go into all the details, but we’ll briefly describe the functionality of the login, renew and logout handlers.

##### Login
We define the parameter validation for the login handler (/login) in the route configuration:

```
module.exports = (server) => {
  server.post('/login', [
    check('username')
      .isLength({ min: 2, max: 50 }),
    check('password')
      .isLength({ min: 8, max: 50 }),
  ], restifyAsyncWrap(postLogin));
};
```

The login handler first checks the parameter validation results and returns an error if any validation errors were found. If the parameters are OK, the handler tries to find the user in the database by username and, if found, uses *bcrypt* to check whether the given password matches the stored password hash. If both the username and password match, a cookie with a refresh token is returned.

```
const postLogin = async (req, res, next) => {
  if (!validationResult(req).isEmpty()) {
    return next(new errors.UnprocessableEntityError());
  }
  let user;
  try {
    user = await db.User.findOne({
      where: { username: req.params.username },
    });
  } catch (e) {
    return next(new errors.InternalServerError(e));
  }
  if (user === null) {
    return next(new errors.InvalidCredentialsError());
  }
  if (bcrypt.compareSync(req.params.password, user.get('passwordHash'))) {
    res.setHeader(
      'Set-Cookie',
      cookie.serialize(
        refreshToken.cookieName,
        refreshToken.createRefreshToken(user),
        refreshToken.cookieConfig
      )
    );
    res.send(200);
    return next();
  }
  return next(new errors.InvalidCredentialsError());
};
```

##### Renew
The renew endpoint (/protected/renew) is protected, and the JWT parsing and validation are set up in the route config.

```
module.exports = (server) => {
  server.get(
    '/protected/renew',
    jwt({
      algorithms: ['HS256'],
      audience: nconf.get('jwt:refreshToken:audience'),
      issuer: nconf.get('jwt:refreshToken:issuer'),
      secret: nconf.get('jwt:refreshToken:secret'),
      getToken: tokenFromCookie,
    }),
    restifyAsyncWrap(getRenew));
};
```

If a cookie with a valid refresh token was received, the token content is parsed into the *req.user* object for further use. If the user is found in the database, an access token is returned in the response body.

```
const getRenew = async (req, res, next) => {
  let user;
  try {
    user = await db.User.findById(req.user.sub);
  } catch (e) {
    return next(new errors.InternalServerError(e));
  }
  if (user === null) {
    return next(new errors.InvalidCredentialsError());
  }
  res.send(200, { accessToken: accessToken.createAccessToken(user) });
  return next();
};
```

##### Logout
The logout handler (/protected/logout) is in principle JWT protected, but since the handler doesn’t do anything user-specific at this point, the endpoint is also accessible without a valid refresh token.

```
module.exports = (server) => {
  server.post(
    '/protected/logout',
    jwt({
      algorithms: ['HS256'],
      audience: nconf.get('jwt:refreshToken:audience'),
      issuer: nconf.get('jwt:refreshToken:issuer'),
      secret: nconf.get('jwt:refreshToken:secret'),
      credentialsRequired: false,
      getToken: tokenFromCookie,
    }),
    restifyAsyncWrap(postLogout));
};
```

Whether or not the request contains a proper refresh token cookie, the handler will clear the refresh token cookie and return an empty access token string in the response body.

```
const postLogout = async (req, res, next) => {
  res.setHeader(
    'Set-Cookie',
    cookie.serialize(
      refreshToken.cookieName,
      '',
      refreshToken.clearCookieConfig
    )
  );
  res.send(200, { accessToken: '' });
  return next();
};
```

##### Health check
Finally, we also added a simple health check endpoint at the root path. The endpoint could be used as [a Kontena health check HTTP endpoint](https://kontena.io/docs/using-kontena/loadbalancer.html), and we also used it with the service security scanners.

```
const db = require('../../../models/sequelize');
const getHealthCheck = async (req, res, next) => {
  await db.sequelize.authenticate();
  res.send(200);
  return next();
};
module.exports = getHealthCheck;
```

##### Test Coverage
We ended up adding a few additional tests to cover all the code execution paths while implementing the functionality and checking the code coverage. The added unit tests only cover the error handling cases not covered by the API tests. I also decided to exclude the app/lib/config/index.js file from the code coverage report, since I didn’t think it was worth the effort to test the different code paths related to the environment variables:

```
"coveragePathIgnorePatterns": [
    "<rootDir>/node_modules/",
    "<rootDir>/app/lib/config/"
  ],
```

The API tests were also written in a similar fashion as the unit tests, so that they are next to the tested endpoint handlers in the folder structure.

##### Container Management and CI/CD Updates
In the testing and acceptance testing continuous integration environments the system setup is somewhat similar to production. The database connection parameters and JWT token secrets are read from the environment variables, so we’ll write the needed variables into [Kontena vault](https://kontena.io/docs/using-kontena/vault.html) and forward them to the Auth service:

```
$ kontena vault write AUTH_DATABASE_URL postgres://<username>:<password>@db-auth:5432/database_prod
$ kontena vault write AUTH_ACCESS_TOKEN_SECRET <access token secret>
$ kontena vault write AUTH_REFRESH_TOKEN_SECRET <refresh token secret>
$ kontena vault write AUTH_ALLOWED_ORIGINS https://example.com,https://www.example.com
```

For Kontena we also took the [http-01](https://kontena.io/docs/using-kontena/vault.html#using-lets-encrpyt-certificates) Let’s Encrypt certificate challenge type into use, since it supports automatic certificate renewal. The following modifications were made for all the environments:

```
$ kontena vault write FRONT_SERVICE_DOMAIN_MAIN example.com
$ kontena vault write FRONT_SERVICE_DOMAIN_ALT www.example.com
$ kontena certificate authorize --type http-01 --linked-service article-project/lb-proxy example.com
$ kontena certificate authorize --type http-01 --linked-service article-project/lb-proxy www.example.com
$ kontena certificate authorize --type http-01 --linked-service article-project/lb-proxy app.example.com
$ kontena certificate authorize --type http-01 --linked-service article-project/lb-proxy auth.example.com
$ kontena certificate request example.com www.example.com app.example.com auth.example.com
```

The updated Kontena stack descriptors are [available in GitLab](https://gitlab.com/article-projects/deployment-scripts/tree/master/descriptors).

There were also some random failures in the acceptance test CI environment. The first issue was that the system wasn’t ready to handle the acceptance tests immediately after the deployment – probably due to database migrations and similar reasons – and sometimes the acceptance tests failed because of that. To remedy the issue, we took the *wait-on* module into use to check that the service returns HTTP 200 before running the acceptance tests.

The second issue was related to the Kontena stack setup in CI environments. If the stack was removed and installed again immediately after the removal, a Docker error about “..is already in use by container..” happened quite often. We added a delay between the Kontena stack removal and install to fix the issue, but there might be more reliable solutions available.

##### JWT Best Practices
Auth0’s article about the [IETF JWT best practices draft](https://tools.ietf.org/wg/oauth/draft-ietf-oauth-jwt-bcp/) is quite practical and useful:

* Especially in the past some [JWT libraries had critical vulnerabilities](https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/). We’re using recent versions of the actively maintained *jsonwebtoken* module, which is also used under the hood in the *restify-jwt-community* module.

* We’re using the *HS256* algorithm, so some of the attacks related to asymmetric algorithms won’t apply. Symmetric algorithms are more than fine for our simple service, even if asymmetric algorithms are cryptographically stronger.

* We’re enforcing and validating the used algorithm (HS256 in our case) to mitigate *alg: none* and similar attacks.

* We’re also validating the audience and issuer fields.

* Since we’re using *HS256*, it’s important to [use strong keys for signing the JWTs](https://auth0.com/blog/brute-forcing-hs256-is-possible-the-importance-of-using-strong-keys-to-sign-jwts/). This is something to remember when setting the keys for production-grade applications.

#### Other Security Considerations
We’re not going too deep into the security aspects, but we’ve tried to cover some basic [best practices](https://expressjs.com/en/advanced/best-practice-security.html).

##### Parameter Checking & SQL Injections
We’re using the [Sequelize ORM](http://docs.sequelizejs.com/), which provides protection against the most obvious SQL injection cases. We also implemented basic parameter validation for the login endpoint with the *express-validator* module.

##### Rate Limiting
Restify provides a throttle plugin for rate limiting. We took the plugin into use with a simple IP-based configuration, but in practice it may need more advanced configuration to be truly useful.

```
app.use(restify.plugins.throttle({
  burst: 50,
  rate: 20,
  ip: true,
}));
```
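The burst and rate parameters describe a token bucket: the bucket holds at most burst tokens, refills at rate tokens per second, and each request consumes one token. A sketch of the mechanism (not the plugin’s implementation, which also keeps a separate bucket per client IP):

```javascript
// A minimal token bucket matching the burst/rate semantics above.
class TokenBucket {
  constructor(burst, rate) {
    this.capacity = burst;  // maximum number of tokens
    this.rate = rate;       // tokens added per second
    this.tokens = burst;
    this.last = Date.now();
  }

  take(now = Date.now()) {
    const elapsed = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.rate);
    this.last = now;
    if (this.tokens < 1) return false; // reject: 429 Too Many Requests
    this.tokens -= 1;
    return true;
  }
}

const bucket = new TokenBucket(3, 20);
const start = Date.now();
// Three requests in the same instant pass, the fourth is throttled.
console.log([1, 2, 3, 4].map(() => bucket.take(start)));
// → [ true, true, true, false ]
```

With the configuration above a client can burst up to 50 requests at once, but only 20 per second sustained.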

##### Cookie Settings
To protect the refresh token cookie we set the following directives for it: *HttpOnly*, *Secure* and *SameSite=Strict*.
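Serialized into a Set-Cookie header, those directives look like the following; a plain-string sketch (the service itself builds the header with the cookie module):

```javascript
// What the Set-Cookie header produced with these directives looks like –
// a plain-string sketch, not the cookie module's implementation.
const serializeCookie = (name, value, opts) => {
  const parts = [`${name}=${encodeURIComponent(value)}`];
  if (opts.httpOnly) parts.push('HttpOnly');  // no document.cookie access
  if (opts.secure) parts.push('Secure');      // sent over HTTPS only
  if (opts.sameSite) parts.push(`SameSite=${opts.sameSite}`); // CSRF mitigation
  if (opts.maxAge !== undefined) parts.push(`Max-Age=${opts.maxAge}`);
  return parts.join('; ');
};

console.log(serializeCookie('refreshToken', 'abc123', {
  httpOnly: true, secure: true, sameSite: 'Strict',
}));
// → refreshToken=abc123; HttpOnly; Secure; SameSite=Strict

// Clearing the cookie on logout: an empty value with an immediate expiry.
console.log(serializeCookie('refreshToken', '', {
  httpOnly: true, secure: true, sameSite: 'Strict', maxAge: 0,
}));
// → refreshToken=; HttpOnly; Secure; SameSite=Strict; Max-Age=0
```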

##### Snyk
Snyk is a tool and service for finding and reviewing security vulnerabilities. We’ll skip the setup details, but it’s very easy to integrate the Snyk web app with GitLab projects. The web app will automatically scan repos for vulnerable dependencies, create pull requests with fixes and patches, flag pull requests that add new vulnerabilities and notify about new vulnerabilities via email. Snyk is free for individuals and small teams.

![alt](/content/images/2018/07/Snyk-2.png)
[The Snyk command line tool](https://snyk.io/docs/using-snyk) provides quite a few additional features, but you can get many benefits just by spending 15 minutes and taking the web app into use. There are other similar alternatives also available like the [Node Security Platform](https://nodesecurity.io/), which is most likely going to be somehow integrated [with the npm in the near future](https://medium.com/npm-inc/npm-acquires-lift-security-258e257ef639).

##### Helmet
Helmet provides an easy way to add common security related headers for Express based Node.js apps.

```
app.use(helmet({
  contentSecurityPolicy: { directives: { defaultSrc: ["'self'"] } },
  referrerPolicy: { policy: 'same-origin' },
}));
```

Some of the added headers – [X-Frame-Options](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options), [Referrer Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy) and [Content Security Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) for instance – don’t seem useful for a JSON REST API service. However, I ended up enabling them anyway just to get good scores in the security scanner results ;).

##### Security Scanners
For scanning the site we used [Mozilla Observatory](https://observatory.mozilla.org/), [securityheaders.com](https://securityheaders.com/) and [F-Secure Radar](https://www.f-secure.com/en/web/business_global/radar). After taking Helmet into use, the results regarding the headers were very good.

In addition to the headers, F-Secure Radar found that the SSL connection supported the insecure TLS 1.0 protocol. Since our system uses SSL termination, we disabled both TLS 1.0 and 1.1 in the [Kontena HAProxy-based load balancer service](https://kontena.io/docs/using-kontena/loadbalancer.html). This was done by providing some HAProxy configs via the Kontena descriptors:

```
lb-proxy:
    image: kontena/lb:latest
    ports:
      - 443:443
      - 80:80
    environment:
      - KONTENA_LB_GLOBAL_SETTINGS=ssl-default-bind-options no-tlsv10 no-tlsv11
    certificates:
      - subject: "${front_service_domain_main}"
        type: env
        name: "SSL_CERT_${front_service_domain_main}"
```

After these changes [Mozilla Observatory](https://observatory.mozilla.org/) and [securityheaders.com](https://securityheaders.com/) gave A+ ratings for the Auth service and [F-Secure Radar](https://www.f-secure.com/en/web/business_global/radar) reported only one (low level) issue.

#### Summary
The Auth service is quite simple in itself and doesn’t provide much functionality, but then again some kind of user authentication and authorisation is needed by many services. The Auth service could function as a base for various needs, in case an existing solution or service – like [Auth0](https://auth0.com/), [AWS Cognito](https://aws.amazon.com/cognito/) or [Keycloak](https://www.keycloak.org/) – isn’t suitable for the purpose.

In the next article we’ll finalise the overall service by implementing the Front and App services.

___
This blog post was originally published on [Juha Kärnä's Blog](https://www.juhakarna.fi/2018/07/04/modern-web-development-setup-auth-service/) on July 4th, 2018. It has been re-published here with permission.