
Linting, Formatting, and Type Checking Commits in an Nx Monorepo with Husky and lint-staged

One way to keep your codebase clean is to enforce linting, formatting, and type checking on every commit, and pre-commit hooks make that easy. Using Husky, you can run arbitrary commands before a commit is made. This can be combined with lint-staged, which runs commands only on the files that have been staged for commit. That's useful because you don't want to run linting, formatting, and type checking on every file in your project, only on the ones that have changed.

But if you're using an Nx monorepo for your project, things can get a little more complicated. Rather than having you use ESLint or Prettier directly, Nx provides its own commands for linting and formatting. And type checking is complicated by each app or library having its own tsconfig.json file. Setting up pre-commit hooks with Nx isn't as straightforward as in a simpler repository.

This guide will show you how to set up pre-commit hooks to run linting, formatting, and type checking in an Nx monorepo.

Configure Formatting

Nx comes with a command, nx format:write, for applying formatting to affected files, which we can give directly to lint-staged. This command uses Prettier under the hood, so it will abide by whatever rules you have in your root-level .prettierrc file. Just install Prettier, and add your preferred configuration.

npm install --save-dev prettier

Then add a .prettierrc file to the root of your project with your preferred configuration. For example, if you want to use single quotes and trailing commas, you can add the following:

{
    "singleQuote": true,
    "trailingComma": "all"
}

Configure Linting

Nx has its own plugin that uses ESLint to lint projects in your monorepo. It also has a plugin with sensible ESLint defaults, including rules specific to Nx, for your linter commands to use. To install them, run the following command:

npm install --save-dev @nrwl/linter @nrwl/eslint-plugin-nx

Then, we can create a default .eslintrc.json file in the root of our project:

{
  "root": true,
  "ignorePatterns": ["**/*"],
  "plugins": ["@nrwl/nx"],
  "overrides": [
    {
      "files": ["*.ts", "*.tsx", "*.js", "*.jsx"],
      "rules": {
        "@nrwl/nx/enforce-module-boundaries": [
          "error",
          {
            "enforceBuildableLibDependency": true,
            "allow": [],
            "depConstraints": [
              {
                "sourceTag": "*",
                "onlyDependOnLibsWithTags": ["*"]
              }
            ]
          }
        ]
      }
    },
    {
      "files": ["*.ts", "*.tsx"],
      "extends": ["plugin:@nrwl/nx/typescript"],
      "rules": {}
    },
    {
      "files": ["*.js", "*.jsx"],
      "extends": ["plugin:@nrwl/nx/javascript"],
      "rules": {}
    }
  ]
}

The above ESLint configuration will, by default, apply Nx's module boundary rules to any TypeScript or JavaScript files in your project. It also applies its recommended rules for JavaScript and TypeScript respectively, and gives you room to add your own.

You can also have ESLint configurations specific to your apps and libraries. For example, if you have a React app, you can add a .eslintrc.json file to the root of your app directory with the following contents:

{
  "extends": ["plugin:@nrwl/nx/react", "../../.eslintrc.json"],
  "rules": {
    "no-console": ["error", { "allow": ["warn", "error"] }]
  }
}
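
One note: the nx affected:lint command we'll use later only runs for projects that have a lint target. Projects generated by Nx get one automatically; if yours is missing, here's a minimal sketch of a lint target in project.json using the @nrwl/linter executor (the paths are placeholders for your own):

{
  "targets": {
    "lint": {
      "executor": "@nrwl/linter:eslint",
      "options": {
        "lintFilePatterns": ["apps/your-app/**/*.{ts,tsx,js,jsx}"]
      }
    }
  }
}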

Set Up Type Checking

Type checking with tsc is normally a very straightforward process. You can just run tsc --noEmit to check your code for type errors. But things are more complicated in Nx with lint-staged.

There are two tricky things about type checking with lint-staged in an Nx monorepo. First, different apps and libraries can have their own tsconfig.json files. When type checking each app or library, we need to make sure we're using that specific configuration. Second, lint-staged by default passes the list of staged files to every command it runs, and tsc will accept either a specific tsconfig file or a list of files to check, but not both.
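
To make the conflict concrete, here's what happens if you pass a staged file alongside a project config (the file path here is just a hypothetical example):

tsc -p tsconfig.app.json apps/my-app/src/main.ts

error TS5042: Option 'project' cannot be mixed with source files on a command line.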

We do want to use the specific tsconfig.json files, and we also only want to run type checking against apps and libraries with changes. To do this, we're going to create some Nx run commands within our apps and libraries and run those instead of calling tsc directly.

Within each app or library you want type checked, open the project.json file, and add a new run command like this one:

{
  // ...
  "targets": {
    // ...
    "typecheck": {
      "executor": "nx:run-commands",
      "options": {
        "commands": ["tsc -p tsconfig.app.json --noEmit"],
        "cwd": "apps/directory-of-your-app-goes-here",
        "forwardAllArgs": false
      }
    }
  }
}

Inside commands is our type-checking command, using the local tsconfig.app.json file for that specific Nx app. The cwd option tells Nx where to run the command from. The forwardAllArgs option tells Nx to ignore any arguments passed to the command. This is important because tsc will fail if you pass both a tsconfig file and a list of files from lint-staged.

Now if we ran nx affected --target=typecheck from the command line, we would be able to type check all affected apps and libraries that have a typecheck target in their project.json. Next we'll have lint-staged handle this for us.
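
You can try the target out directly from your terminal before wiring it into lint-staged:

npx nx affected --target=typecheck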

Installing Husky and lint-staged

Finally, we'll install and configure Husky and lint-staged. These are the two packages that will allow us to run commands on staged files before a commit is made.

npm install --save-dev husky lint-staged

In your package.json file, add the prepare script to run Husky's install command:

{
    "scripts": {
        "prepare": "husky install"
    }
}

Then, run your prepare script to set up git hooks in your repository. This will create a .husky directory in your project root with the necessary file system permissions.

npm run prepare

The next step is to create our pre-commit hook. We can do this from the command line:

npx husky add .husky/pre-commit "npx lint-staged --concurrent false --relative"

It's important to use Husky's CLI to create our hooks, because it handles file system permissions for us. Creating files manually could cause problems when we actually want to use the git hooks. After running the command, we will now have a file at .husky/pre-commit that looks like this:

#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"

npx lint-staged --concurrent false --relative

Now whenever we try to commit, Husky will run the lint-staged command. We've given it some extra options. First, --concurrent false makes sure tasks that write fixes during formatting and linting don't conflict with simultaneous type checking. Second, --relative makes lint-staged pass file paths relative to the repo root, which is what our Nx commands for formatting and linting expect; otherwise lint-staged would pass absolute paths by default.

We've got our pre-commit command ready, but we haven't actually configured lint-staged yet. Let's do that next.

Configuring lint-staged

In a simpler repository, it would be easy to add some lint-staged configuration to our package.json file. But because we need to pass functions that build our Nx commands, the configuration has to live in a JavaScript file instead. We'll call it lint-staged.config.js and put it in the root of our project.

Here is what our configuration file will look like:

module.exports = {
  '{apps,libs,tools}/**/*.{ts,tsx}': files => {
    return `nx affected --target=typecheck --files=${files.join(',')}`;
  },
  '{apps,libs,tools}/**/*.{js,ts,jsx,tsx,json}': [
    files => `nx affected:lint --files=${files.join(',')}`,
    files => `nx format:write --files=${files.join(',')}`,
  ],
};

Within our module.exports object, we've defined two globs: one that will match any TypeScript files in our apps, libraries, and tools directories, and another that also matches JavaScript and JSON files in those directories. We only need to run type checking for the TypeScript files, which is why that one is broken out and narrowed down to only those files.

Each of these globs can be given a single command or an array of commands. It's common with lint-staged to just pass a string like tsc --noEmit or eslint --fix. But we're going to pass functions instead, so we can combine the list of files provided by lint-staged with the desired Nx commands.

The nx affected and nx format:write commands both accept a --files option. And remember that lint-staged always passes in a list of staged files. That array of file paths becomes the argument to our functions; we join it into a comma-delimited string and interpolate that into the desired Nx command's --files option. This overrides Nx's normal behavior, explicitly telling it to run the commands only on the files that have changed and any other files affected by those changes.
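
For example, if two files were staged (these paths are just hypothetical), the typecheck entry would produce a command like this:

nx affected --target=typecheck --files=apps/my-app/src/main.ts,libs/shared/src/index.ts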

Testing It Out

Now that we've got everything set up, let's try it out. Make a change to a TypeScript file in one of your apps or libraries. Then try to commit that change. You should see the following in your terminal as lint-staged runs:

Preparing lint-staged...
Running tasks for staged files...
  lint-staged.config.js
        {apps,libs,tools}/**/*.{ts,tsx}
            nx affected --target=typecheck --files=apps/your-app/file-you-changed.ts
        {apps,libs,tools}/**/*.{js,ts,jsx,tsx,json}
            nx affected:lint --files=apps/your-app/file-you-changed.ts
            nx format:write --files=apps/your-app/file-you-changed.ts
Applying modifications from tasks...
Cleaning up your temporary files...

Now, whenever you try to commit changes to files that match the globs defined in lint-staged.config.js, the defined commands will run first, and verify that the files contain no type errors, linting errors, or formatting errors. If any of those commands fail, the commit will be aborted, and you'll have to fix the errors before you can commit.

Conclusion

We've now set up a monorepo with Nx and configured it to run type checking, linting, and formatting on staged files before a commit is made. This will help us catch errors before they make it into our codebase, and it will also help us keep our codebase consistent and readable. To see an example Nx monorepo with these configurations, check out this repo.

