10 Things You Must Do in Node.js Before Going to Production (Or Risk DESTROYING Your App🔥!)

TLDR; Production Ready Node Apps are NO JOKE. Ignore these Features at your PERIL!

Creating a Node.js app can be quick and easy, but taking it to production requires careful consideration and planning. Without these steps, you risk destroying your app and causing countless hours of frustration. In this blog post, we'll discuss 10 critical things you must do before taking your Node.js app to production.


Asynchronous Code

Node.js is built on an event-driven architecture that is non-blocking and asynchronous. This means Node.js:

  • Performs I/O operations asynchronously without blocking the event loop.

  • Uses callbacks, promises and async/await to manage the asynchronous flow of execution.

Being comfortable with asynchronous patterns is fundamental to efficiently developing apps that can handle I/O intensive operations.

  1. Callbacks:

const fs = require('fs');

fs.readFile('file.txt', (err, data) => {
  // Callback runs once the read completes (or fails)
});

  2. Promises:

fs.promises.readFile('file.txt')
  .then(data => {
    // Executes after readFile resolves
  })
  .catch(err => {
    // Catches errors
  });

  3. async/await:

async function readFile() {
  try {
    const data = await fs.promises.readFile('file.txt');
    // await pauses here until the promise resolves
  } catch (err) {
    // Catches errors
  }
}

Benefits of asynchronous code:

  • Responsiveness: The event loop is not blocked and can handle other events.

  • Scalability: Node.js can handle a large number of concurrent connections efficiently.

  • Memory efficiency: Node handles many connections on a single thread instead of spawning a thread per connection.

However, asynchronous code can be complex and difficult to reason about. Choosing the right pattern for the task and organizing the flow properly is important.


Error Handling

By default, Node.js crashes the process on any uncaught exception or unhandled promise rejection. Make sure to implement proper error handling using try/catch blocks and error-first callbacks (a process-level safety net is sketched at the end of this section).

  1. Using try/catch:

     try {
       // Some code  
     } catch (err) {
       // Handle error
     }
    

    try/catch handles synchronous errors (and awaited promises inside async functions).

  2. Using error-first callbacks:

     const fs = require('fs');

     fs.readFile('file.txt', (err, data) => {
       if (err) {
         console.log(err);
       } else {
         console.log(data);
       }
     });
    

    The first argument to the callback is the error, followed by the data.

  3. Using .catch() with promises:

     fs.promises.readFile('file.txt')
       .then(data => {
         // Use data
       })
       .catch(err => {
         console.log(err);
       });
    

    .catch() handles errors rejected from the promise chain.

Summary:

  • By default, unhandled errors crash the Node.js process.

  • Use try/catch for synchronous errors.

  • Use error-first callbacks for asynchronous operations.

  • Promises allow catching errors using .catch().

Proper error handling is essential for:

  • Graceful degradation instead of app crashes.

  • Logging and monitoring errors.

  • Retrying failed operations.

  • Providing user feedback.
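As a last line of defense, you can also register process-level handlers to log fatal errors before the process exits. This is a minimal sketch using Node's built-in process events — not a replacement for handling errors where they occur:

process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  process.exit(1); // State may be corrupted; exit and let a process manager restart the app
});

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled promise rejection:', reason);
});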


Caching Data

Since Node.js is single-threaded, caching data in memory can help improve performance by avoiding unnecessary disk access or recomputing data. Some popular npm packages for caching are:

  • node-cache: A simple in-memory key-value store for Node.

  • memcached: A client for the Memcached in-memory object caching system.

  • redis: A client for Redis, an in-memory data structure store.

You can also build your own simple in-memory cache:

Code snippet using node-cache:

const NodeCache = require('node-cache');

const myCache = new NodeCache();

myCache.set('key', 'value');
const value = myCache.get('key');

Code snippet for a simple cache:

const cache = {};

function getFromCache(key, fetchData) {
  if (cache[key] !== undefined) {
    return cache[key];
  }

  const result = fetchData(key); // fetch from database, API, etc.

  cache[key] = result;
  return result;
}

Benefits of caching:

  • Improves performance by avoiding unnecessary I/O operations.

  • Reduces latency for subsequent requests for the same data.

  • Reduces load on databases and external APIs.

Things to consider:

  • Invalidation strategy when data changes.

  • Expiration of cache entries (see the TTL sketch after this list).

  • Memory consumption of cache.

  • Disk backup of cache.
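The expiration point above is built into node-cache as TTLs. A minimal sketch (the key names are illustrative):

const NodeCache = require('node-cache');

// stdTTL: default time-to-live in seconds; checkperiod: how often expired keys are purged
const myCache = new NodeCache({ stdTTL: 60, checkperiod: 120 });

myCache.set('user:42', { name: 'Ada' }); // expires after the default 60 seconds
myCache.set('session:7', 'abc123', 300); // per-entry TTL of 300 seconds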


Security

Validate all untrusted user input to prevent attacks like SQL injection, XSS, etc. Use TLS for encryption. Handle file uploads carefully.

For the complete security explanation, I've written a separate article.

You can tackle this part last, after working through the other items.
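As a minimal illustration, here's a hedged sketch of validating route input before it reaches a database, using a parameterized query (the Express route, users table, and pg pool are assumptions for the example, not from this article):

const express = require('express');
const { Pool } = require('pg');

const app = express();
const db = new Pool(); // connection settings come from PG* environment variables

app.get('/users/:id', async (req, res) => {
  const id = Number(req.params.id);

  // Reject anything that isn't a positive integer before touching the database
  if (!Number.isInteger(id) || id <= 0) {
    return res.status(400).send('Invalid user id');
  }

  // Parameterized query: the driver escapes the value, preventing SQL injection
  const { rows } = await db.query('SELECT * FROM users WHERE id = $1', [id]);
  res.json(rows[0]);
});

app.listen(3000);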


Clustering

Node.js uses a single-threaded event loop model. While this is efficient for I/O bound operations, long-running CPU-intensive tasks can block the event loop, impacting the responsiveness of the application.

To solve this, Node lets you spawn worker processes using the built-in cluster module. Each worker runs its own event loop, taking advantage of multi-core CPUs.

A primary (master) process spawns workers, incoming connections are distributed across them, and workers communicate with the primary over IPC. This improves scalability and handles heavy loads better.

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) { // cluster.isMaster on Node < 16

  // Primary process: fork one worker per CPU core
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker) => {
    console.log('Worker exited', worker.process.pid);
    cluster.fork(); // Replace the dead worker
  });

} else {

  // Worker processes share the same port
  http.createServer((req, res) => {
    // Some long running task...
    res.end('Hello World\n');
  }).listen(8000);

}

Benefits of clustering:

  • Scales your application by adding more workers as needed.

  • Distributes load evenly across CPU cores.

  • Restarts crashed workers to maintain availability.

However, it introduces complexity in sharing state, load balancing, and handling worker failures.


Monitoring

Monitoring Node applications is essential to detect performance bottlenecks, errors, and crashes in production. Some popular tools are:

  • PM2: A process manager that keeps applications alive, restarts them on crash and monitors logs.

  • New Relic: An APM tool that tracks application performance including response times, error rates, throughput and more.

  • Nodemon: Automatically restarts your Node application when code changes. A development tool rather than a production monitor.

  • morgan: A logging middleware that logs HTTP requests to make debugging easier (a short sketch follows the PM2 commands below).

# Install PM2 globally
npm install -g pm2

# Start the app under PM2
pm2 start app.js

# Monitor logs
pm2 logs

# Terminal metrics dashboard
pm2 monit

# Restart app
pm2 restart app
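And here's the morgan middleware mentioned above in action, as a minimal sketch assuming an Express app:

const express = require('express');
const morgan = require('morgan');

const app = express();

// Log every HTTP request in Apache-style 'combined' format
app.use(morgan('combined'));

app.get('/', (req, res) => res.send('Hello World'));

app.listen(3000);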

The benefits of monitoring are:

  • Finding performance bottlenecks before they affect users.

  • Catching errors early before they cause outages.

  • Tracking key metrics over time.

  • Automatically restarting apps on crash.

Things to monitor:

  • Response times

  • Error rates

  • Memory & CPU usage

  • Network throughput

  • Number of requests

  • Uptime

Monitoring tools can alert you via:

  • Email

  • SMS

  • Push notifications

This ensures issues are caught and addressed promptly to maintain a high quality of service.


Testing

Writing comprehensive tests is essential to ensure the quality, stability and reliability of Node.js applications. The main types of tests are:

  • Unit tests: Test individual units of code in isolation using stubs and mocks.

  • Integration tests: Test how units interact and integrate.

  • End-to-end (e2e) tests: Test the full application from end to end, typically simulating user interactions.

Popular testing libraries are:

  • Mocha - A test framework

  • Chai - An assertion library

  • Sinon - For stubs, spies and mocks

  • Supertest - For e2e testing of APIs (sketched below)

Code snippet using Mocha and Chai:

const assert = require('chai').assert;

function add(a, b) {
  return a + b;
}

describe('Testing addition function', () => {
  it('Adds two numbers', () => {
    const result = add(2, 3);
    assert.equal(result, 5);
  });
});
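Supertest drives HTTP endpoints in the same describe/it style. A minimal sketch, assuming an Express app with a /ping route:

const request = require('supertest');
const express = require('express');

const app = express();
app.get('/ping', (req, res) => res.json({ ok: true }));

describe('GET /ping', () => {
  it('responds with 200 and JSON', () => {
    return request(app)
      .get('/ping')
      .expect('Content-Type', /json/)
      .expect(200);
  });
});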

Benefits of tests:

  • Catch bugs and regressions early

  • Reduce debugging time

  • Give confidence during refactoring

  • Serve as documentation

To have a good test suite, tests should be:

  • Independent

  • Repeatable

  • Fast

  • Comprehensive

Test automation ensures tests are run continuously during development and as part of your CI/CD pipeline.


Dependencies

Node.js has a vast ecosystem of third-party packages that can be installed and used in your applications. These dependencies make development faster and easier by reusing existing, tested code for common tasks.

You can install dependencies using:

  • npm: The Node Package Manager, comes bundled with Node.

  • yarn: An alternative package manager.

npm install lodash
# or
yarn add lodash

This installs the lodash utility library and saves it as a dependency in package.json.

const _ = require('lodash');

_.reverse([1, 2, 3]);
// [3, 2, 1]

It's important to:

  • Keep dependencies up-to-date to get bug fixes and security patches.

  • Only install necessary dependencies to reduce security surface and bundle size.

  • Use lock files like package-lock.json to "lock" dependencies and their versions. This ensures reproducibility across environments.

  • Audit dependencies regularly for known vulnerabilities using:

npm audit

Benefits of dependencies:

  • Faster development

  • Reusing tested code

  • Stand on the shoulders of giants

But outdated or vulnerable dependencies can lead to security issues, so it's important to keep dependencies up-to-date and monitor for vulnerabilities.


Streams

Streams are a way to process data in chunks rather than all at once. This allows for handling large amounts of data without loading the entire data into memory.

Node.js has various stream implementations:

  • Readable streams - Provide data to be read

  • Writable streams - Write data

  • Duplex streams - Read and write

  • Transform streams - Transform data as it passes through

const fs = require('fs');

const readable = fs.createReadStream('file.txt');
const writable = fs.createWriteStream('newfile.txt');

readable.on('data', (chunk) => {
  writable.write(chunk); // Write each chunk as it arrives
});

readable.on('end', () => {
  writable.end(); // End the writable stream
});

Here we:

  1. Create a readable stream from a file

  2. Create a writable stream to a new file

  3. On each 'data' event, write the chunk to the writable stream

  4. On the 'end' event, end the writable stream

This lets us process a large file without loading the entire file into memory.
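In practice you would usually let Node wire the streams together; readable.pipe(writable), or the stream.pipeline() helper shown below, also handles backpressure and error cleanup:

const fs = require('fs');
const { pipeline } = require('stream');

// pipeline connects the streams and tears everything down on error
pipeline(
  fs.createReadStream('file.txt'),
  fs.createWriteStream('newfile.txt'),
  (err) => {
    if (err) console.error('Pipeline failed:', err);
  }
);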

Benefits of streams:

  • Efficient for handling large amounts of data

  • Low memory footprint

  • Pipelining of operations

  • Transforming data on the fly

Use cases:

  • Reading/writing files

  • Crawling websites

  • Compressing/decompressing data

  • Encrypting/decrypting data


Layered Caching

Implementing caching at multiple layers can significantly improve the performance and scalability of Node.js applications. Some common layers are:

  1. CDN caching - A content delivery network like Cloudflare caches static assets (images, CSS, JS) at edge locations. Requests for cached assets are served directly from the CDN, avoiding origin servers.

  2. Reverse proxy caching - Nginx or Varnish cache responses from the origin server and serve subsequent requests from the cache. This reduces the load on the origin server.

  3. Memory caching - Caching responses in memory (as seen earlier using node-cache, memcached, etc.) avoids hitting the disk, database or network on cache hits.

  4. Database caching - Some databases support result caching to speed up repetitive queries.

  5. Browser caching - The browser caches responses using Cache-Control headers, avoiding refetching the same resources (see the sketch after the Redis example below).

Optimizing caching at each layer based on the data's characteristics provides the most benefit.

Code snippet using redis (node-redis v4+ promise API):

const redis = require('redis');

async function cacheExample() {
  const client = redis.createClient();
  await client.connect();

  await client.set('key', 'value');
  const value = await client.get('key');

  await client.expire('key', 20); // Expire key after 20 seconds
}
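And as a sketch of the browser-caching layer, here's how Cache-Control headers might be set from an Express route (the route and file names are illustrative):

const path = require('path');
const express = require('express');

const app = express();

// Tell browsers (and intermediate caches) they may reuse this response for an hour
app.get('/logo.png', (req, res) => {
  res.set('Cache-Control', 'public, max-age=3600');
  res.sendFile(path.join(__dirname, 'logo.png'));
});

app.listen(3000);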

Benefits of layered caching:

  • Improved performance by reducing load at higher layers

  • Each layer is optimized for its use case

  • Better scalability

Things to consider:

  • Managing cache invalidation

  • Expiry policies

  • Cache efficiency based on hit rate
