Perfomatix is a Node.js development company with certified resources for offshore software development. We focus on following certain best practices to get the most out of the advantages of using Node.js.

NodeJS Coding Standards and Best Practices [Developer Guide]

Node.js has a key advantage in scalability, helping developers easily scale applications both horizontally and vertically.

It also enables full-stack JavaScript, serving both client- and server-side applications, with an open-source runtime environment that provides caching of single modules.

Here are Node.js coding standards and best practices that will be helpful in your application development process.

1. Project Structure Practices

1.1 Structure your solution by components

The worst pitfall of large applications is maintaining a huge code base with hundreds of dependencies – such a monolith slows down developers as they try to incorporate new features.

Instead, partition your code into components, each with its own folder or dedicated codebase, and ensure that each unit is kept small and simple.

The ultimate solution is to develop small software: divide the whole stack into self-contained components that don’t share files with others, each consisting of very few files (e.g. API, service, data access, test, etc.) so that it’s very easy to reason about.

1.2 Layer your components

Each component should contain ‘layers’ – a dedicated object for the web, logic, and data access code. This not only draws a clean separation of concerns but also significantly eases mocking and testing the system.

Though this is a very common pattern, API developers tend to mix layers by passing the web layer objects (Express req, res) to business logic and data layers – this makes your application dependent on and accessible by Express only.

1.3 Wrap common utilities as npm packages

In a large app with a substantial codebase, cross-cutting-concern utilities like a logger, encryption and the like should be wrapped by your own code and exposed as private npm packages.

This allows sharing them among multiple codebases and projects.


1.4 Separate Express ‘app’ and ‘server’

Avoid the habit of defining the entire Express app in a single huge file – separate your ‘Express’ definition into at least two files: the API declaration (app.js) and the networking concerns (e.g. www or server.js, where the server actually listens).

For even better structure, locate your API declarations within components.
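As a minimal sketch of this separation (the file names app.js and server.js are illustrative; express-generator uses bin/www for the latter):

// app.js – API declaration only, no networking
const express = require('express');
const app = express();
app.get('/health', (req, res) => res.send('OK'));
module.exports = app; // exported so both tests and the server can require it

// server.js – networking concerns only
const app = require('./app');
const port = process.env.PORT || 3000;
app.listen(port, () => console.log(`Listening on port ${port}`));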

1.5 Use environment aware, secure and hierarchical config

A perfect and flawless configuration setup should ensure (a) keys can be read from file AND from environment variable (b) secrets are kept outside committed code (c) config is hierarchical for easier findability.

There are a few packages that can help tick most of those boxes like rc, nconf and config.
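For instance, here is a minimal sketch using the config package (the keys shown are illustrative); it reads hierarchical keys from files under the config/ folder and can merge overrides from environment variables:

// config/default.json
{
  "dbConfig": {
    "host": "localhost",
    "port": 5432
  }
}

// somewhere in code
const config = require('config');
const dbHost = config.get('dbConfig.host'); // throws early if the key is missing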

2. Error Handling Practices

2.1 Use Async-Await or promises for async error handling

Handling async errors in callback style is probably the fastest way to the pyramid of doom.

Use a reputable promise library or async-await instead, which enables a much more compact and familiar code syntax similar to try-catch.

Code Example – using promises to catch errors

return functionA()
  .then((valueA) => functionB(valueA))
  .then((valueB) => functionC(valueB))
  .then((valueC) => functionD(valueC))
  .catch((err) => logger.error(err))
  .then(alwaysExecuteThisFunction) // pass the function reference, don't invoke it here

Code Example – using async/await to catch errors

async function executeAsyncTask () {
  try {
    const valueA = await functionA();
    const valueB = await functionB(valueA);
    const valueC = await functionC(valueB);
    return await functionD(valueC);
  } catch (err) {
    logger.error(err);
  }
}


2.2 Use only the built-in Error object

Many throw errors as a string or as some custom type – this complicates the error handling logic and the interoperability between modules.

Whether you reject a promise, throw an exception or emit an error – using only the built-in Error object will increase uniformity and prevent loss of information.

Code example – Anti Pattern

// throwing a string lacks any stack trace information and other important data properties
if (!productToAdd)
    throw ("How can I add new product when no value provided?");

Code Example – doing it right

// throwing an Error from typical function, whether sync or async
if (!productToAdd)
    throw new Error("How can I add new product when no value provided?");

// 'throwing' an Error from EventEmitter
const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));

// 'throwing' an Error from a Promise
const addProduct = async (productToAdd) => {
  try {
    const existingProduct = await DAL.getProduct(productToAdd.id);
    if (existingProduct !== null) {
      throw new Error("Product already exists!");
    }
  } catch (err) {
    // ...
  }
}

Code example – doing it even better

// centralized error object that derives from Node’s Error
function AppError(name, httpCode, description, isOperational) {
    Error.call(this);
    Error.captureStackTrace(this);
    this.name = name;
    //...other properties assigned here
}

Object.setPrototypeOf(AppError.prototype, Error.prototype);

module.exports.AppError = AppError;

// client throwing an exception
if (user === null)
    throw new AppError(commonErrors.resourceNotFound, commonHTTPErrors.notFound, "further explanation", true);

2.3 Distinguish operational vs programmer errors

Operational errors (e.g. API received an invalid input) refer to known cases where the error impact is fully understood and can be handled thoughtfully.

On the other hand, programmer errors (e.g. trying to read an undefined variable) refer to unknown code failures that dictate gracefully restarting the application.

Code Example – marking an error as operational (trusted)

// marking an error object as operational
const myError = new Error("How can I add new product when no value provided?");
myError.isOperational = true;

// or if you're using some centralized error factory
class AppError extends Error {
  constructor (commonType, description, isOperational) {
    super(description);
    Error.captureStackTrace(this, this.constructor);
    this.commonType = commonType;
    this.description = description;
    this.isOperational = isOperational;
  }
}

throw new AppError(errorManagement.commonErrors.InvalidInput, "Describe here what happened", true);

2.4 Handle errors centrally, not within an Express middleware

Error handling logic, such as mailing the admin and logging, should be encapsulated in a dedicated and centralized object that all endpoints (e.g. Express middleware, cron jobs, unit tests) call when an error comes in.
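Code example – a sketch of a centralized error handler (the logger and mail helper are illustrative assumptions):

// errorHandler.js – the single place that decides what to do with errors
module.exports.handler = {
  async handleError(error) {
    logger.error(error); // assumes a mature logger such as winston
    await sendMailToAdminIfCritical(error); // illustrative helper
    return error.isOperational === true; // tell callers whether the process can keep running
  }
};

// Express middleware, cron jobs and tests all delegate to the same object
app.use(async (err, req, res, next) => {
  const isOperational = await errorHandler.handler.handleError(err);
  res.status(500).send('Internal error');
  if (!isOperational) process.exit(1); // programmer error: restart gracefully
});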

2.5 Document API errors using Swagger or GraphQL

Let your API callers know which errors might come in return so they can handle these thoughtfully without crashing. For RESTful APIs, this is usually done with documentation frameworks like Swagger.

If you’re using GraphQL, you can utilize your schema and comments as well.

2.6 Exit the process gracefully

When an unknown error occurs, there is uncertainty about the application’s health.

Common practice suggests restarting the process carefully using a process management tool like Forever or PM2.

2.7 Use a mature logger to increase error visibility

Mature logging tools like Winston, Bunyan, Log4js or Pino will speed up error discovery and understanding. So forget about console.log.
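Code example – a minimal Winston setup (a sketch; pick transports and levels that match your environment):

const winston = require('winston');

const logger = winston.createLogger({
  level: 'info',
  format: winston.format.json(), // structured logs are easier to search and aggregate
  transports: [new winston.transports.Console()]
});

logger.info('Server started', { port: 3000 });
logger.error('Request failed', { error: new Error('whoops') });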

2.8 Test error flows using your favorite test framework

Whether through professional automated QA or plain manual developer testing, ensure that your code not only satisfies positive scenarios but also handles and returns the right errors. Testing frameworks like Mocha & Chai can handle this easily.
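Code example – asserting that the right error comes back (a sketch using Mocha and Chai; addNewProduct and addProduct are illustrative, and the async variant assumes the chai-as-promised plugin):

it('When no product name is provided, then it throws the right error', () => {
  expect(() => addNewProduct({})).to.throw('How can I add new product when no value provided?');
});

it('When the product already exists, then the promise rejects', async () => {
  await expect(addProduct({ id: 1 })).to.be.rejectedWith('Product already exists!');
});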

2.9 Discover errors and downtime using APM products

Monitoring and performance products (a.k.a APM) proactively gauge your codebase or API so they can automagically highlight errors, crashes and slow parts that you were missing.

2.10 Catch unhandled promise rejections

Any exception thrown within a promise will get swallowed and discarded unless a developer remembered to explicitly handle it – even if the code is subscribed to process.uncaughtException! Overcome this by registering to the process.unhandledRejection event.

Code example: Catching unresolved and rejected promises

process.on('unhandledRejection', (reason, p) => {
  // we just caught an unhandled promise rejection; since we already have a fallback handler for unhandled errors (see below), throw and let it handle that
  throw reason;
});

process.on('uncaughtException', (error) => {
  // we just received an error that was never handled; time to handle it and then decide whether a restart is needed
  errorManagement.handler.handleError(error);
  if (!errorManagement.handler.isTrustedError(error))
    process.exit(1);
});

2.11 Fail fast, validate arguments using a dedicated library

This should be part of your Express best practices – assert API input to avoid nasty bugs that are much harder to track later. The validation code is usually tedious unless you are using a very cool helper library like Joi.
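Code example – failing fast with Joi (a sketch; the schema fields are illustrative):

const Joi = require('joi');

const memberSchema = Joi.object({
  password: Joi.string().regex(/^[a-zA-Z0-9]{3,30}$/),
  birthyear: Joi.number().integer().min(1900).max(2020),
  email: Joi.string().email()
});

function addNewMember(newMember) {
  const { error } = memberSchema.validate(newMember); // assert input before any business logic
  if (error) throw new Error(error.details[0].message);
  // other logic here
}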

3. Code Style Practices

3.1 Use ESLint

ESLint is the de-facto standard for checking possible code errors and fixing code style, not only to identify nitty-gritty spacing issues but also to detect serious code anti-patterns like developers throwing errors without classification.

Though ESLint can automatically fix code styles, other tools like Prettier and Beautify are more powerful at formatting code and work in conjunction with ESLint.

3.2 Node.js specific plugins

On top of ESLint standard rules that cover vanilla JavaScript, add Node.js specific plugins like eslint-plugin-node, eslint-plugin-mocha and eslint-plugin-node-security.

3.3 Start a Codeblock’s Curly Braces on the Same Line

The opening curly braces of a code block should be on the same line as the opening statement.

Code Example

// Do
function someFunction() {
  // code block
}

// Avoid
function someFunction()
{
  // code block
}

3.4 Separate your statements properly

Use ESLint to gain awareness about statement separation concerns. Prettier or Standardjs can automatically resolve these issues.

Code example

// Do
function doThing() {
    // ...
}

doThing()

// Do
const items = [1, 2, 3]
items.forEach(console.log)

// Avoid — throws exception
const m = new Map()
const a = [1,2,3]
[...m.values()].forEach(console.log)
> [...m.values()].forEach(console.log)
>  ^^^
> SyntaxError: Unexpected token ...

// Avoid — throws exception
const count = 2 // it tries to run 2(), but 2 is not a function
(function doSomething() {
  // do something amazing
}())

// put a semicolon before the immediately invoked function, after the const definition, save the return value of the anonymous function to a variable or avoid IIFEs altogether

3.5 Name your functions

Name all functions, including closures and callbacks. Avoid anonymous functions.

This is especially useful when profiling a node app. Naming all functions will allow you to easily understand what you’re looking at when checking a memory snapshot.

3.6 Use naming conventions for variables, constants, functions, and classes

Use lowerCamelCase when naming constants, variables, and functions and UpperCamelCase (capital first letter as well) when naming classes.

This will help you to easily distinguish between plain variables/functions, and classes that require instantiation. Use descriptive names, but try to keep them short.

Code Example

// for class names we use UpperCamelCase
class SomeClassExample {}

// for const names we use the const keyword and lowerCamelCase
const config = {
  key: 'value'
};

// for variable and function names we use lowerCamelCase
let someVariableExample = 'value';
function doSomething() {}

3.7 Prefer const over let. Ditch the var

Using const means that once a variable is assigned, it cannot be reassigned. Preferring const will help you to not be tempted to use the same variable for different uses, and make your code clearer.

If a variable needs to be reassigned, in a for loop, for example, use let to declare it. Another important aspect of let is that a variable declared using it is only available in the block scope in which it was defined.

‘var’ is function scoped, not block scoped, and shouldn’t be used in ES6 now that you have ‘const’ and ‘let’ at your disposal.

3.8 Require modules first, not inside functions

Require modules at the beginning of each file, before and outside of any functions.

This simple best practice will not only help you easily and quickly tell the dependencies of a file right at the top but also avoids a couple of potential problems.
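Code example (a short illustration; the file names are made up):

// Do – dependencies are declared at the top, outside any function
const fs = require('fs');

function loadSettings() {
  return fs.readFileSync('./settings.json', 'utf8');
}

// Avoid – requiring inside a function hides the dependency and defers failures to run time
function loadSettingsLate() {
  const fs = require('fs');
  return fs.readFileSync('./settings.json', 'utf8');
}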

3.9 Require modules by folders, as opposed to the files directly

When developing a module/library in a folder, place an index.js file that exposes the module’s internals so every consumer will pass through it.

This serves as an ‘interface’ to your module and eases future changes without breaking the contract.

Code example

// Do
module.exports.SMSProvider = require('./SMSProvider');
module.exports.SMSNumberResolver = require('./SMSNumberResolver');

// Avoid
module.exports.SMSProvider = require('./SMSProvider/SMSProvider.js');
module.exports.SMSNumberResolver = require('./SMSNumberResolver/SMSNumberResolver.js');

3.10 Use the === operator

Prefer the strict equality operator === over the weaker abstract equality operator ==. == will compare two variables after converting them to a common type. There is no type conversion in ===, and both variables must be of the same type to be equal.

Code example

'' == '0'           // false
0 == ''             // true
0 == '0'            // true
false == 'false'    // false
false == '0'        // true
false == undefined  // false
false == null       // false
null == undefined   // true
' \t\r\n ' == 0     // true

All statements above will return false if used with ===

3.11 Use Async Await, avoid callbacks

Node 8 LTS now has full support for Async-await. This is a new way of dealing with asynchronous code which supersedes callbacks and promises.

Async-await is non-blocking, and it makes asynchronous code look synchronous.

The best gift you can give to your code is using async-await which provides a much more compact and familiar code syntax like try-catch.

3.12 Use arrow function expressions (=>)

Though it’s recommended to use async-await and avoid function parameters, when dealing with older APIs that accept promises or callbacks, arrow functions make the code structure more compact and keep the lexical context of the root function (i.e. this).

4. Testing And Overall Quality Practices

4.1 At the very least, write API (component) testing

Most projects just don’t have any automated testing due to short timetables, or because the ‘testing project’ ran out of control and was abandoned.

For that reason, prioritize and start with API testing, which is the easiest to write and provides more coverage than unit testing (you may even craft API tests without code using tools like Postman).

Afterward, should you have more resources and time, continue with advanced test types like unit testing, DB testing, performance testing, etc.

4.2 Include 3 parts in each test name

Make the test speak at the requirements level so it’s self-explanatory also to QA engineers and developers who are not familiar with the code internals.

State in the test name what is being tested (unit under test), under what circumstances and what is the expected result.

Code example: a test name that includes 3 parts

//1. unit under test
describe('Products Service', function() {
  describe('Add new product', function() {
    //2. scenario and 3. expectation
    it('When no price is specified, then the product status is pending approval', () => {
      const newProduct = new ProductService().add(...);
      expect(newProduct.status).to.equal('pendingApproval');
    });
  });
});

Code Example – Anti Pattern: one must read the entire test code to understand the intent

describe('Products Service', function() {
  describe('Add new product', function() {
    it('Should return the right status', () => {
      //hmm, what is this test checking? what are the scenario and expectation?
      const newProduct = new ProductService().add(...);
      expect(newProduct.status).to.equal('pendingApproval');
    });
  });
});

4.3 Structure tests by the AAA pattern

Structure your tests with 3 well-separated sections: Arrange, Act & Assert (AAA). The first part includes the test setup, then the execution of the unit under test and finally the assertion phase.

Following this structure guarantees that the reader spends no brain CPU on understanding the test plan.

Code example: a test structured with the AAA pattern

describe.skip('Customer classifier', () => {
  test('When customer spent more than 500$, classify as premium', () => {
    //Arrange
    const customerToClassify = {spent: 505, joined: new Date(), id: 1};
    const DBStub = sinon.stub(dataAccess, "getCustomer")
        .returns({id: 1, classification: 'regular'});

    //Act
    const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);

    //Assert
    expect(receivedClassification).toMatch('premium');
  });
});

Code Example – Anti Pattern: no separation, one bulk, harder to interpret

test('Should be classified as premium', () => {
  const customerToClassify = {spent: 505, joined: new Date(), id: 1};
  const DBStub = sinon.stub(dataAccess, "getCustomer")
      .returns({id: 1, classification: 'regular'});
  const receivedClassification = customerClassifier.classifyCustomer(customerToClassify);
  expect(receivedClassification).toMatch('premium');
});

4.4 Detect code issues with a linter

Use a code linter to check the basic quality and detect anti-patterns early.

Run it before any test and add it as a pre-commit git-hook to minimize the time needed to review and correct any issue. 

4.5 Avoid global test fixtures and seeds, add data per-test

To prevent test coupling and to easily reason about the test flow, each test should add and act on its own set of DB rows.

Whenever a test needs to pull or assume the existence of some DB data – it must explicitly add that data and avoid mutating any other records.

Code example: each test acts on its own set of data

it("When updating site name, get successful confirmation", async () => {

  //test is adding a fresh new records and acting on the records only

  const siteUnderTest = await SiteService.addSite({

    name: "siteForUpdateTest"

  });

  const updateNameResult = await SiteService.changeName(siteUnderTest, "newName");

  expect(updateNameResult).to.be(true);

});

Code Example – Anti Pattern: tests are not independent and assume the existence of some pre-configured data

before(async () => {
  // adding sites and admins data to our DB. Where is the data? outside, in some external JSON or migration framework
  await DB.AddSeedDataFromJson('seed.json');
});

it("When updating site name, get successful confirmation", async () => {
  // I know that site name "portal" exists - I saw it in the seed files
  const siteToUpdate = await SiteService.getSiteByName("Portal");
  const updateNameResult = await SiteService.changeName(siteToUpdate, "newName");
  expect(updateNameResult).to.be(true);
});

it("When querying by site name, get the right site", async () => {
  // I know that site name "portal" exists - I saw it in the seed files
  const siteToCheck = await SiteService.getSiteByName("Portal");
  expect(siteToCheck.name).to.be.equal("Portal"); // Failure! The previous test changed the name :[
});

4.6 Constantly inspect for vulnerable dependencies

Even the most reputable dependencies such as Express have known vulnerabilities.

This can get easily tamed using community and commercial tools such as npm audit and snyk.io that can be invoked from your CI on every build.

4.7 Tag your tests

Different tests must run in different scenarios: quick smoke and IO-less tests should run when a developer saves or commits a file, while full end-to-end tests usually run when a new pull request is submitted, etc.

This can be achieved by tagging tests with keywords like #cold #api #sanity so you can grep with your testing harness and invoke the desired subset.

For example, this is how you would invoke only the sanity test group with Mocha: mocha --grep 'sanity'.

4.8 Check your test coverage, it helps to identify wrong test patterns

Code coverage tools like Istanbul/NYC are great for various reasons: they help identify a decrease in testing coverage and, last but not least, they highlight testing mismatches: by looking at colored code coverage reports you may notice, for example, code areas that are never tested, like catch clauses (meaning that tests only invoke the happy paths, not how the app behaves on errors).

Set it to fail builds if the coverage falls under a certain threshold.

4.9 Inspect for outdated packages

Use your preferred tool (e.g. npm outdated or npm-check-updates) to detect installed packages which are outdated, inject this check into your CI pipeline and even make a build fail in a severe scenario.

For example, a severe scenario might be when an installed package is 5 patch commits behind (e.g. local version is 1.3.1 and repository version is 1.3.8) or it is tagged as deprecated by its author – kill the build and prevent deploying this version.

4.10 Use production-like env for e2e testing

End to end (e2e) testing which includes live data used to be the weakest link of the CI process as it depends on multiple heavy services like a DB.

Use an environment which is as close to your real production environment as possible.

4.11 Refactor regularly using static analysis tools

Using static analysis tools helps by giving objective ways to improve code quality and keeps your code maintainable.

You can add static analysis tools to your CI build to fail when it finds code smells.

Its main selling points over plain linting are the ability to inspect quality in the context of multiple files (e.g. detect duplications), perform advanced analysis (e.g. code complexity) and follow the history and progress of code issues. Two examples of tools you can use are Sonarqube and Code Climate.

4.12 Carefully choose your CI platform (Jenkins vs CircleCI vs Travis vs Rest of the world)

Your continuous integration platform (CI/CD) will host all the quality tools (e.g. test, lint) so it should come with a vibrant ecosystem of plugins.

Jenkins used to be the default for many projects as it has the biggest community along with a very powerful platform at the price of a complex setup that demands a steep learning curve.

Nowadays, it has become much easier to set up a CI solution using SaaS tools like CircleCI and others.

These tools allow crafting a flexible CI pipeline without the burden of managing the whole infrastructure. Eventually, it’s a trade-off between robustness and speed – choose your side carefully.

5. Going To Production Practices

5.1. Monitoring

At the very basic level, monitoring means you can easily identify when bad things happen at production.

For example, by getting notified by email or Slack. Start with defining the core set of metrics that must be watched to ensure a healthy state – CPU, server RAM, Node process RAM (less than 1.4GB), the number of errors in the last minute, number of process restarts, average response time.

Then go over some advanced features you might fancy and add to your wishlist.

Some examples of a luxury monitoring feature: DB profiling, cross-service measuring (i.e. measure business transaction), front-end integration, expose raw data to custom BI clients, Slack notifications and many others.

5.2. Increase transparency using smart logging

Logs can be a dumb warehouse of debug statements or the enabler of a beautiful dashboard that tells the story of your app.

Plan your logging platform from day 1: how logs are collected, stored and analyzed to ensure that the desired information (e.g. error rate, following an entire transaction through services and servers, etc) can really be extracted.

5.3. Delegate anything possible (e.g. gzip, SSL) to a reverse proxy

Node is awfully bad at doing CPU intensive tasks like gzipping, SSL termination, etc. You should use ‘real’ middleware services like nginx, HAproxy or cloud vendor services instead.

5.4. Lock dependencies

Your code must be identical across all environments, but amazingly npm lets dependencies drift across environments by default – when you install packages at various environments it tries to fetch packages’ latest patch version.

Overcome this by using npm config files, .npmrc, that tell each environment to save the exact (not the latest) version of each package.

Alternatively, for finer-grained control use npm shrinkwrap. *Update: as of NPM5, dependencies are locked by default. The new package manager in town, Yarn, also got us covered by default.
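Code example – a .npmrc entry that pins exact versions (a one-line sketch):

# .npmrc – make npm save the exact version of each installed package
save-exact=true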

5.5. Guard process uptime using the right tool

The process must go on and get restarted upon failures. For simple scenarios, process management tools like PM2 might be enough but in today’s ‘dockerized’ world, cluster management tools should be considered as well.

Running dozens of instances without a clear strategy and too many tools together (cluster management, docker, PM2) might lead to DevOps chaos.

5.6. Utilize all CPU cores

At its basic form, a Node app runs on a single CPU core while all others are left idling.

It’s your duty to replicate the Node process and utilize all CPUs – for small-to-medium apps you may use the Node cluster module or PM2.

For a larger app consider replicating the process using some Docker cluster (e.g. K8S, ECS) or deployment scripts that are based on Linux init system (e.g. systemd).
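Code example – a minimal sketch using the built-in cluster module (forks one worker per core and replaces workers that die):

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork(); // one worker per core
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died, forking a new one`);
    cluster.fork(); // keep the pool full
  });
} else {
  http.createServer((req, res) => res.end(`Handled by worker ${process.pid}`)).listen(3000);
}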

5.7. Create a ‘maintenance endpoint’

Expose a set of system-related information, like memory usage, a REPL, etc. in a secured API.

Although it’s highly recommended to rely on standard, battle-tested tools, some valuable information and operations are easier done using code.
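Code example – a sketch of a secured maintenance endpoint (the requireAdminAuth middleware is an illustrative assumption):

app.get('/maintenance/memory', requireAdminAuth, (req, res) => {
  // process.memoryUsage() reports heap and RSS figures in bytes
  res.json(process.memoryUsage());
});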

5.8. Discover errors and downtime using APM products

Application monitoring and performance products (a.k.a APM) proactively gauge codebase and API so they can auto-magically go beyond traditional monitoring and measure the overall user-experience across services and tiers.

For example, some APM products can highlight a transaction that loads too slow on the end-users side while suggesting the root cause.

5.9. Make your code production-ready

Code with the end in mind, plan for production from day 1. Following is a list of development tips that greatly affect production maintenance and stability:

  • The twelve-factor guide – get familiar with the Twelve-Factor App guide
  • Be stateless – save no data locally on a specific web server (see separate bullet – ‘Be stateless’)
  • Cache – utilize cache heavily, yet never fail because of a cache mismatch
  • Test memory – gauge memory usage and leaks as part of your development flow; tools such as ‘memwatch’ can greatly facilitate this task
  • Name functions – minimize the usage of anonymous functions (i.e. inline callbacks), as a typical memory profiler provides memory usage per method name
  • Use CI tools – use a CI tool to detect failures before sending to production. For example, use ESLint to detect reference errors and undefined variables. Use --trace-sync-io to identify code that uses synchronous APIs (instead of the async version)
  • Log wisely – include contextual information in each log statement, preferably in JSON format, so log aggregator tools such as Elastic can search upon those properties (see separate bullet – ‘Increase transparency using smart logging’). Also, include a transaction-id that identifies each request and allows correlating lines that describe the same transaction (see separate bullet – ‘Assign a transaction id to each log statement’)
  • Error management – error handling is the Achilles’ heel of Node.js production sites: many Node processes crash because of minor errors while others hang alive in a faulty state instead of crashing. Setting your error handling strategy is absolutely critical

5.10. Measure and guard the memory usage

The v8 engine has soft limits on memory usage (1.4GB) and there are known paths to leak memory in Node’s code – thus watching Node’s process memory is a must.

In small apps, you may gauge memory periodically using shell commands but in medium-large apps consider baking your memory watch into a robust monitoring system.

5.11. Get your frontend assets out of Node

Serve frontend content using dedicated middleware (nginx, S3, CDN) because Node performance really gets hurt when dealing with many static files due to its single-threaded model.

5.12. Be stateless, kill your servers almost every day

Store any type of data (e.g. user sessions, cache, uploaded files) within external data stores. Consider ‘killing’ your servers periodically or use ‘serverless’ platform (e.g. AWS Lambda) that explicitly enforces a stateless behavior.

5.13. Use tools that automatically detect vulnerabilities

Even the most reputable dependencies such as Express have known vulnerabilities (from time to time) that can put a system at risk.

This can be easily tamed using community and commercial tools that constantly check for vulnerabilities and warn (locally or at GitHub), some can even patch them immediately.

5.14. Assign a transaction id to each log statement

Assign the same identifier, transaction-id: {some value}, to each log entry within a single request.

Then when inspecting errors in logs, easily conclude what happened before and after. Unfortunately, this is not easy to achieve in Node due to its async nature.

Code example: typical Express configuration

// when receiving a new request, start a new isolated context and set a transaction Id. The following example is using the npm library continuation-local-storage to isolate requests

const { createNamespace } = require('continuation-local-storage');

var session = createNamespace('my session');

router.get('/:id', (req, res, next) => {

    session.set('transactionId', 'some unique GUID');

    someService.getById(req.params.id);

    logger.info('Starting now to get something by Id');

});

// Now any other service or components can have access to the contextual, per-request, data

class someService {

    getById(id) {

        logger.info(“Starting to get something by Id”);

        // other logic comes here

    }

}

// The logger can now append the transaction-id to each entry so that entries from the same request will have the same value

class logger {

    info (message)

    {console.log(`${message} ${session.get('transactionId')}`);}

}

5.15. Set NODE_ENV

Set the environment variable NODE_ENV to ‘production’ or ‘development’ to flag whether production optimizations should get activated – many npm packages determine the current environment and optimize their code for production.
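Code example – setting and reading the flag (a sketch):

# set it when launching the process
NODE_ENV=production node server.js

// read it in code; many libraries, Express included, check this flag internally
const isProduction = process.env.NODE_ENV === 'production';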

5.16. Design automated, atomic and zero-downtime deployments

Research shows that teams who perform many deployments lower the probability of severe production issues.

Fast and automated deployments that don’t require risky manual steps and service downtime significantly improve the deployment process.

You should probably achieve this using Docker combined with CI tools as they became the industry standard for streamlined deployment.

5.17. Use an LTS release of Node.js

Ensure you are using an LTS version of Node.js to receive critical bug fixes, security updates and performance improvements.

5.18. Don’t route logs within the app

Log destinations should not be hard-coded by developers within the application code, but instead should be defined by the execution environment the application runs in.

Developers should write logs to stdout using a logger utility and then let the execution environment (container, server, etc.) pipe the stdout stream to the appropriate destination (i.e. Splunk, Graylog, ElasticSearch, etc.).

Code Example – Anti-pattern: Log routing tightly coupled to the application

const { createLogger, transports } = require('winston');
require('winston-mongodb'); // registers the MongoDB transport on the winston transports object

// log to two different files, which the application now must be concerned with
const logger = createLogger({
  transports: [
    new transports.File({ filename: 'combined.log' })
  ],
  exceptionHandlers: [
    new transports.File({ filename: 'exceptions.log' })
  ]
});

// log to MongoDB, which the application now must be concerned with
logger.add(new transports.MongoDB(options)); // options defined elsewhere

Doing it this way, the application now handles both application/business logic AND log routing logic!

Code Example – Better log handling + Docker example

In the application:

const logger = new winston.Logger({
  level: 'info',
  transports: [
    new (winston.transports.Console)()
  ]
});

logger.log('info', 'Test Log Message with some parameter %s', 'some parameter', { anything: 'This is metadata' });

Then, in the docker container daemon.json:

{
  "log-driver": "splunk", /* just using Splunk as an example, it could be another storage type */
  "log-opts": {
    "splunk-token": "",
    "splunk-url": "",
    ...
  }
}

So this example ends up looking like log -> stdout -> Docker container -> Splunk

6. Security Best Practices

6.1. Embrace linter security rules

Make use of security-related linter plugins such as eslint-plugin-security to catch security vulnerabilities and issues as early as possible, preferably while they’re being coded.

This can help to catch security weaknesses like using eval, invoking a child process or importing a module with a string literal (e.g. user input).

6.2. Limit concurrent requests using a middleware

DOS attacks are very popular and relatively easy to conduct.

Implement rate limiting using an external service such as cloud load balancers, cloud firewalls, nginx, rate-limiter-flexible package, or (for smaller and less critical apps) a rate-limiting middleware (e.g. express-rate-limit).

Code example: Express rate limiting middleware for certain routes

Using the express-rate-limit npm package:

const RateLimit = require('express-rate-limit');

// important if behind a proxy to ensure client IP is passed to req.ip
app.enable('trust proxy');

const apiLimiter = new RateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,
});

// only apply to requests that begin with /user/
app.use('/user/', apiLimiter);

6.3 Extract secrets from config files or use packages to encrypt them

Never store plain-text secrets in configuration files or source code.

Instead, make use of secret-management systems like Vault, Kubernetes/Docker Secrets, or environment variables.

As a last resort, secrets stored in source control must be encrypted and managed (rolling keys, expiring, auditing, etc). Make use of pre-commit/push hooks to prevent committing secrets accidentally.

Code example – accessing an API key stored in an environment variable:

const azure = require('azure');
const apiKey = process.env.AZURE_STORAGE_KEY;
const blobService = azure.createBlobService(apiKey);

Using cryptr to store an encrypted secret:

const Cryptr = require('cryptr');
const cryptr = new Cryptr(process.env.SECRET);
let accessToken = cryptr.decrypt('e74d7c0de21e72aaffc8f2eef2bdb7c1');
console.log(accessToken); /* outputs the decrypted string, which was never stored in source control */

6.4. Prevent query injection vulnerabilities with ORM/ODM libraries

To prevent SQL/NoSQL injection and other malicious attacks, always make use of an ORM/ODM or a database library that escapes data or supports named or indexed parameterized queries, and takes care of validating user input for expected types.

Never just use JavaScript template strings or string concatenation to inject values into queries as this opens your application to a wide spectrum of vulnerabilities.

All the reputable Node.js data access libraries (e.g. Sequelize, Knex, mongoose) have built-in protection against injection attacks.
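Code example – a sketch using Knex (table and column names are illustrative):

const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });

async function findUsersByName(userInput) {
  // Do – pass values as bindings so the driver escapes them
  return knex('users').where({ name: userInput });

  // Avoid – concatenating user input into raw SQL invites injection:
  // return knex.raw(`SELECT * FROM users WHERE name = '${userInput}'`);
}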

6.5. Adjust the HTTP response headers for enhanced security

Your application should be using secure headers to prevent attackers from using common attacks like cross-site scripting (XSS), clickjacking and other malicious attacks. These can be configured easily using modules like helmet.
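Code example – enabling secure headers with helmet (a minimal sketch):

const express = require('express');
const helmet = require('helmet');

const app = express();
app.use(helmet()); // sets sane defaults for headers such as X-Frame-Options and Strict-Transport-Security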

6.6. Constantly and automatically inspect for vulnerable dependencies

With the npm ecosystem it is common to have many dependencies for a project.

Dependencies should always be kept in check as new vulnerabilities are found. Use tools like npm audit or snyk to track, monitor and patch vulnerable dependencies.

Integrate these tools with your CI setup so you catch a vulnerable dependency before it makes it to production.

6.7. Avoid using the Node.js crypto library for handling passwords, use Bcrypt

Passwords or secrets (e.g. API keys) should be stored using a secure hash + salt function like bcrypt; prefer the native bcrypt implementation over its pure JavaScript counterpart for performance and security reasons.
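Code example – hashing and verifying a password with bcrypt (a minimal sketch):

const bcrypt = require('bcrypt');

async function storeAndVerify(password) {
  // hash with 10 salt rounds before persisting the password
  const hash = await bcrypt.hash(password, 10);
  // later, compare a login attempt against the stored hash
  return bcrypt.compare(password, hash); // resolves to true for the right password
}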

6.8. Escape HTML, JS and CSS output

Untrusted data that is sent down to the browser might get executed instead of just being displayed, this is commonly referred to as a cross-site-scripting (XSS) attack.

Mitigate this by using dedicated libraries that explicitly mark the data as pure content that should never get executed (i.e. encoding, escaping).
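Code example – a sketch using the escape-html package inside an Express handler:

const escapeHtml = require('escape-html');

app.get('/greet', (req, res) => {
  // a payload such as <script>alert("xss")</script> is rendered as text instead of executing
  res.send(`<p>Hello ${escapeHtml(req.query.name)}</p>`);
});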

6.9. Validate incoming JSON schemas

Validate the incoming requests’ body payload and ensure it meets expectations, fail fast if it doesn’t.

To avoid tedious validation coding within each route you may use lightweight JSON-based validation schemas such as jsonschema or joi.
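Code example – a sketch using the jsonschema package in an Express route (the schema is illustrative):

const { Validator } = require('jsonschema');
const validator = new Validator();

const productSchema = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    price: { type: 'number', minimum: 0 }
  },
  required: ['name']
};

app.post('/products', (req, res, next) => {
  const result = validator.validate(req.body, productSchema);
  if (!result.valid) return res.status(400).json({ errors: result.errors.map(e => e.stack) });
  next(); // only valid payloads reach the business logic
});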

6.10. Support blacklisting JWTs

When using JSON Web Tokens (for example, with Passport.js), by default there’s no mechanism to revoke access from issued tokens.

Once you discover some malicious user activity, there’s no way to stop them from accessing the system as long as they hold a valid token.

Mitigate this by implementing a blacklist of untrusted tokens that are validated on each request.
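Code example – a sketch of a denylist check on each request (an in-memory Set is shown for brevity; a real deployment would typically use a shared store such as Redis):

const revokedTokens = new Set(); // filled whenever a token is revoked

function rejectRevokedTokens(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  if (revokedTokens.has(token)) {
    return res.status(401).send('Token has been revoked');
  }
  next();
}

app.use(rejectRevokedTokens);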

6.11. Prevent brute-force attacks against authorization

A simple and powerful technique is to limit authorization attempts using two metrics:

  1. The first is a number of consecutive failed attempts by the same user unique ID/name and IP address.
  2. The second is a number of failed attempts from an IP address over some long period of time. For example, block an IP address if it makes 100 failed attempts in one day.

Code example: count consecutive failed authorization attempts by user name and IP pair and total fails by IP address.

Using rate-limiter-flexible npm package.

Create two limiters:

  1. The first counts the number of consecutive failed attempts and allows maximum 10 by username and IP pair.
  2. The second blocks IP address for a day on 100 failed attempts per day.

// assumes the rate-limiter-flexible package and a Redis client (ioredis shown here)
const { RateLimiterRedis } = require('rate-limiter-flexible');
const Redis = require('ioredis');
const redisClient = new Redis({ enableOfflineQueue: false });

const maxWrongAttemptsByIPperDay = 100;
const maxConsecutiveFailsByUsernameAndIP = 10;

const limiterSlowBruteByIP = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_ip_per_day',
  points: maxWrongAttemptsByIPperDay,
  duration: 60 * 60 * 24,
  blockDuration: 60 * 60 * 24, // block for 1 day after 100 wrong attempts per day
});

const limiterConsecutiveFailsByUsernameAndIP = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_consecutive_username_and_ip',
  points: maxConsecutiveFailsByUsernameAndIP,
  duration: 60 * 60 * 24 * 90, // store the counter for 90 days since the first fail
  blockDuration: 60 * 60, // block for 1 hour
});

6.12. Run Node.js as non-root user

There is a common scenario where Node.js runs as a root user with unlimited permissions.

For example, this is the default behavior in Docker containers.

It’s recommended to create a non-root user and either bake it into the Docker image (examples given below) or run the process on this user’s behalf by invoking the container with the flag “-u username”.

Code example – Building a Docker image as non-root

FROM node:latest
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
USER node
CMD ["node", "server.js"]

6.13. Limit payload size using a reverse-proxy or a middleware

The bigger the body payload is, the harder your single thread works in processing it.

This is an opportunity for attackers to bring servers to their knees without tremendous amount of requests (DOS/DDOS attacks).

Mitigate this by limiting the body size of incoming requests on the edge (e.g. firewall, ELB) or by configuring the Express body parser to accept only small-size payloads.
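Code example – capping the JSON body size in Express (a minimal sketch; pick a limit that fits your payloads):

const express = require('express');
const app = express();

// requests with bodies larger than 300kb are rejected with a 413 error
app.use(express.json({ limit: '300kb' }));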

6.14. Avoid JavaScript eval statements

eval is evil as it allows executing custom JavaScript code during run time.

This is not just a performance concern but also an important security concern due to malicious JavaScript code that may be sourced from user input.

Another language feature that should be avoided is the new Function constructor. setTimeout and setInterval should never be passed dynamic JavaScript code either.

Code example

// example of malicious code which an attacker was able to input
userInput = "require('child_process').spawn('rm', ['-rf', '/'])";

// malicious code executed
eval(userInput);

6.15. Prevent evil RegEx from overloading your single thread execution

Regular Expressions, while being handy, pose a real threat to JavaScript applications at large, and the Node.js platform in particular.

A user input for text to match might require an outstanding amount of CPU cycles to process. RegEx processing might be inefficient to an extent that a single request that validates 10 words can block the entire event loop for 6 seconds and set the CPU on fire.

For that reason, prefer third-party validation packages like validator.js instead of writing your own Regex patterns, or make use of safe-regex to detect vulnerable regex patterns.
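Code example – a sketch using safe-regex to flag risky patterns:

const safe = require('safe-regex');

console.log(safe(/\bOK\b/)); // true – linear time, fine to use
console.log(safe(/(a+){10}/)); // false – nested repetition, a ReDoS risk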

6.16. Avoid module loading using a variable

Avoid requiring/importing another file with a path that was given as parameter due to the concern that it could have originated from user input.

This rule can be extended to accessing files in general (i.e. fs.readFile()) or to other sensitive resource access using dynamic variables originating from user input. The eslint-plugin-security linter can catch such patterns and warn early enough.

Code example

// insecure, as helperPath variable may have been modified by user input
const uploadHelpers = require(helperPath);

// secure
const uploadHelpers = require('./helpers/upload');

6.17. Run unsafe code in a sandbox

When tasked to run external code that is given at run-time (e.g. plugin), use any sort of ‘sandbox’ execution environment that isolates and guards the main code against the plugin.

This can be achieved using a dedicated process (e.g. cluster.fork()), serverless environment or dedicated npm packages that act as a sandbox.

As a rule of thumb, you should run only your own JavaScript files.

Theories aside, real-world scenarios demand executing JavaScript files that are passed dynamically at run-time.

For example, consider a dynamic framework like webpack that accepts custom loaders and executes them dynamically during build time.

In the presence of some malicious plugin we wish to minimize the damage and maybe even let the flow terminate successfully – this requires running the plugins in a sandbox environment that is fully isolated in terms of resources, crashes and the information we share with it.

Three main options can help in achieving this isolation:

  • a dedicated child process – this provides quick information isolation but demands taming the child process, limiting its execution time and recovering from errors
  • a cloud serverless framework – ticks all the sandbox requirements, but deploying and invoking a FaaS function dynamically is not a walk in the park
  • some npm libraries, like sandbox and vm2, allow execution of isolated code in a single line of code. Though this last option wins in simplicity, it provides limited protection

Code example – Using Sandbox library to run code in isolation

const Sandbox = require("sandbox");
const s = new Sandbox();

s.run("lol)hai", function(output) {
  console.log(output);
  //output='Syntax error'
});

// Example – restricted code
s.run("process.platform", function(output) {
  console.log(output);
  //output=Null
});

// Example – infinite loop
s.run("while (true) {}", function(output) {
  console.log(output);
  //output='Timeout'
});

6.18. Take extra care when working with child processes

Avoid using child processes when possible and validate and sanitize input to mitigate shell injection attacks if you still have to.

Prefer using child_process.execFile which by definition will only execute a single command with a set of attributes and will not allow shell parameter expansion.
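Code example – a sketch using execFile (the directory argument is an illustrative user input that should still be validated):

const { execFile } = require('child_process');

// arguments are passed as an array, so the shell never expands or interprets them
execFile('ls', ['-l', userSuppliedDirectory], (error, stdout, stderr) => {
  if (error) throw error;
  console.log(stdout);
});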

6.19. Hide error details from clients

Express’s integrated error handler hides error details by default.

However, chances are that you implement your own error handling logic with custom Error objects (considered by many to be a best practice).

If you do so, ensure not to return the entire Error object to the client, which might contain some sensitive application details.

6.20. Configure 2FA for npm or Yarn

Any step in the development chain should be protected with MFA (multi-factor authentication); npm/Yarn are a sweet opportunity for attackers who can get their hands on some developer’s password.

Using developer credentials, attackers can inject malicious code into libraries that are widely installed across projects and services – maybe even across the web, if published publicly.

Enabling 2-factor-authentication in npm leaves almost zero chances for attackers to alter your package code.

6.21. Modify session middleware settings

Each web framework and technology has its known weaknesses - telling an attacker which web framework we use is a great help for them.

Using the default settings for session middlewares can expose your app to module- and framework-specific hijacking attacks in a similar way to the X-Powered-By header. Try hiding anything that identifies and reveals your tech stack (E.g. Node.js, express).

Code example: Setting secure cookie settings

// using the express session middleware

app.use(session({  

 secret: 'youruniquesecret', /* secret string used in the signing of the session ID that is stored in the cookie*/

 name: 'youruniquename', // set a unique name to remove the default connect.sid

 cookie: {

   httpOnly: true, /* minimize risk of XSS attacks by restricting the client from reading the cookie*/

   secure: true, // only send cookie over https

   maxAge: 60000*60*24 // set cookie expiry length in ms

 }

}));

6.22. Avoid DOS attacks by explicitly setting when a process should crash

The Node process will crash when errors are not handled. Many best practices even recommend exiting even though an error was caught and handled.

Express, for example, will crash on any asynchronous error - unless you wrap routes with a catch clause.

This opens a very sweet attack spot for attackers who recognize what input makes the process crash and repeatedly send the same request.

There’s no instant remedy for this, but a few techniques can mitigate the pain: alert with critical severity anytime a process crashes due to an unhandled error, validate the input and avoid crashing the process due to invalid user input, and wrap all routes with a catch clause and consider not crashing when an error originates within a request (as opposed to what happens globally).

6.23. Prevent unsafe redirects

Redirects that do not validate user input can enable attackers to launch phishing scams, steal user credentials, and perform other malicious actions.
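Code example – validating a redirect target against a whitelist (a sketch; the allowed list is illustrative):

const allowedRedirects = ['/home', '/dashboard'];

app.get('/redirect', (req, res) => {
  const target = req.query.url;
  // only follow redirects to known, relative destinations
  if (!allowedRedirects.includes(target)) {
    return res.status(400).send('Unsupported redirect target');
  }
  res.redirect(target);
});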

6.24. Avoid publishing secrets to the npm registry

Precautions should be taken to avoid the risk of accidentally publishing secrets to public npm registries. An .npmignore file can be used to blacklist specific files or folders, or the files array in package.json can act as a whitelist.
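Code example – a package.json files whitelist (a sketch; folder names are illustrative):

{
  "files": [
    "dist",
    "README.md"
  ]
}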

Perfomatix Can Help You

Node.js provides a lot of benefits to any custom application project.

Hire Node.js developers from Perfomatix who will help you analyze your business requirements and give you the right solution for your project.

We are the most trustworthy Node.js development company, and with the help of our comprehensive experience in Node development, we will help you understand the practical aspects of your business.
