
Utilizing Cypress Testing in a Multi-App Monorepo


For web developers, Cypress is a pretty well-understood testing library that everyone has at least come across or heard of. Getting it set up for an app is pretty straightforward, and you can be off and writing tests in a matter of minutes. But what if you have a monorepo with multiple apps? Do you set up a per-app test suite and manage multiple sets of code in multiple places? Or have you already set that up and noticed that there's a lot of redundancy with potentially shared code that you'd like to refactor into one place?

I was recently tasked with setting up such a Cypress testing structure for the Showcases section of our starter.dev project. The idea behind the Showcases is that we used each of our framework packages to build a GitHub clone as an advanced example implementation of each one. They all present the exact same UI to the user, but under the hood, each one uses a different set of technologies for the JavaScript framework, GraphQL/REST layer, and CSS libraries.

I instantly figured that there had to be a way to write one set of tests that could be utilized against each app, and all I had to do was unify the data-testid attributes across all of the apps. But how do you set it up to automate the process, and against so many different apps? Would it even be possible to start, test, and stop each app through a script? Thankfully, the answer is yes -- and this blog will document and explain a structure that I used when solving that problem.
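
To make that concrete, here's a minimal sketch of what a shared spec can look like once the data-testid attributes are unified; the profile-name test id is a hypothetical example, and the baseUrl comes from whichever app the suite is pointed at (more on that below):

describe('Profile header', () => {
  it('shows the signed-in user name', () => {
    // The baseUrl is supplied by the per-app config, so '/' resolves to whichever app is under test
    cy.visit('/')
    // The same selector works against every app because the test ids are unified
    cy.get('[data-testid="profile-name"]').should('be.visible')
  })
})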

Prerequisites

If you've already been developing in a monorepo and have everything set up for that, you likely already have all requirements necessary to install Cypress. However, if you're not and you're setting everything up from scratch, the Cypress docs list a few system requirements:

  • macOS 10.9 and above (64-bit only)
  • Linux Ubuntu 12.04 and above, Fedora 21 and Debian 8 (64-bit only)
  • Windows 7 and above (64-bit only)

If you're on a Linux distribution, pay special attention to the dependencies you'll need as well. If you're using npm, you'll need:

  • Node.js 12 or 14 and above

It's possible to download Cypress directly, but I don't recommend that approach for the purposes of this guide.

Project Structure

For this example, I will be showing the structure I used in the starter.dev GitHub Showcases repository, but hopefully it demonstrates that the approach is flexible enough to be used with any monorepo structure and any number of apps. The folder structure will look something like this when we're done (showing just two apps and only the relevant folders/files for brevity):

project/
├── app1/
│   ├── src/
│   └── package.json
├── app2/
│   ├── src/
│   └── package.json
└── tests-e2e/
    ├── package.json
    ├── yarn.lock
    └── cypress/
        ├── configs/
        ├── fixtures/
        ├── e2e/            (integration/ if under Cypress 10)
        ├── plugins/
        └── support/

Installation

In the root of your project, you'll want to make your directory where your Cypress tests will exist (replace tests-e2e with whatever you'd like your folder to be called):

mkdir tests-e2e
cd tests-e2e

Then, once inside this folder, install Cypress via npm:

npm install cypress --save-dev

Or via yarn:

yarn add cypress --dev

Next, we'll install start-server-and-test, which will be needed later on to automate starting our apps and running our test suite against them. Let's also install TypeScript:

npm install start-server-and-test --save-dev
npm install typescript

Or via yarn:

yarn add start-server-and-test --dev
yarn add typescript

In the newly created package.json in this folder, let's add a basic script to open Cypress:

{
  "scripts": {
    "cypress:open": "cypress open"
  },
  "dependencies": {
    "typescript": "^4.7.3"
  },
  "devDependencies": {
    "cypress": "^10.0.3",
    "start-server-and-test": "^1.14.0"
  }
}

Configuration

Now that we have everything installed, we can start configuring Cypress. Let's open Cypress via the newly created script in our last step, npm run cypress:open or yarn run cypress:open. Once Cypress opens, select the E2E Testing configuration and click continue at the bottom of the list to create all the default files (make sure to read what each one does if this is your first time using Cypress!).

You should see in your folder structure that Cypress created a number of files and folders automatically, but let's create a few additional folders:

mkdir cypress/configs
mkdir cypress/e2e       #mkdir cypress/integration if under Cypress 10
mkdir cypress/plugins

Inside cypress/configs, let's create a file called app1.config.js, or app1.json if you're using a Cypress version older than 10 (replace app1 with the name of one of the apps you want to test against):

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  // Custom field (not a built-in Cypress option) that our run scripts will read later
  startCommand: 'yarn dev',
  e2e: {
    setupNodeEvents(on, config) {},
    baseUrl: 'http://localhost:3000',
  },
})

For versions prior to Cypress 10, it's just regular JSON:

{
  "baseUrl": "http://localhost:3000",
  "startCommand": "yarn dev",
  "integrationFolder": "cypress/integration"
}

A couple of notes: baseUrl is the URL your app will be served on when it starts up, startCommand is the command your app uses to start (for me, it differed between a few of the apps, but if yours all use the same command, you may not need it; also note that it isn't a built-in Cypress option, just a custom field our run scripts will read later), and integrationFolder is where all the test .spec files live. The last one can be customized if you've already decided that you'd like some tests to be written separately, and/or only one of the apps will need unique tests, etc. But we can leave it alone for now.
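
For example, a second app that starts with a different command on a different port would get its own config file next to the first one. This is just a minimal sketch with hypothetical values; the actual command and port depend on your app:

const { defineConfig } = require('cypress')

// cypress/configs/app2.config.js -- hypothetical second app
module.exports = defineConfig({
  // Custom field read by our run scripts; this app happens to start differently
  startCommand: 'yarn start',
  e2e: {
    setupNodeEvents(on, config) {},
    baseUrl: 'http://localhost:4200',
  },
})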

Additionally, Cypress has quite a few configuration options, but the one I'd like to point out specifically is the env option. Just like a .env file, you can use it to pass specific parameters or options into your Cypress tests. In my case, the different apps I was getting this test suite working against handled auth differently in a few places, so I needed to visit a specific URL to fire off a redirect/auth chain to mock the authenticated state.

It looks like this (the shape is the same in any version of Cypress; in a Cypress 10+ config file, it goes inside the object passed to defineConfig):

"env": {
    "authUrl": "/"
}

This may not be something you need yourself, but you can pass in whatever your specific apps require, and it will only be injected into the Cypress state when that app's config file is used. So, for example, you could use this to pass in the name of the app you're testing.
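
Inside a spec, those values are read with Cypress.env(). Here's a minimal sketch of how the authUrl value above could be used; the describe/it names are just placeholders:

describe('Authenticated flows', () => {
  beforeEach(() => {
    // authUrl comes from the "env" block of whichever app config this run uses
    cy.visit(Cypress.env('authUrl'))
  })

  it('runs against the app once the redirect/auth chain completes', () => {
    // ...assertions against the now-authenticated app would go here
  })
})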

Next, let's add a basic test file inside cypress/e2e (or cypress/integration if under Cypress 10). Call it whatever you'd like, but I'll name it first-test.cy.ts (or first-test.spec.ts if you're using a Cypress version older than 10):

describe('My First Test', () => {
  it('Does not do much!', () => {
    expect(true).to.equal(true)
  })
})

Lastly, let's set up a few run scripts to automate running against all our apps. Update the scripts section of your package.json as such:

"scripts": {
    "cypress:open": "cypress open",
    "cypress:run": "cypress run --config-file $CYPRESS_CONFIG",
    "start": "START_COMMAND=`node -p \"require('$CYPRESS_CONFIG').startCommand\"` && cd ../$TARGET_APP && $START_COMMAND",
    "test": "CYPRESS_CONFIG=./cypress/configs/$TARGET_APP.config.ts BASE_URL=`node -p \"require('$CYPRESS_CONFIG').baseUrl\"` && CYPRESS_CONFIG=$CYPRESS_CONFIG start-server-and-test start $BASE_URL cypress:run",
    "test:watch": "yarn run cypress open --config-file cypress/configs/$TARGET_APP.config.ts"
}

For Cypress versions prior to 10:

"scripts": {
    "cypress:open": "cypress open",
    "cypress:run": "cypress run --config-file $CYPRESS_CONFIG",
    "start": "START_COMMAND=`node -p \"require('$CYPRESS_CONFIG').startCommand\"` && cd ../$TARGET_APP && $START_COMMAND",
    "test": "CYPRESS_CONFIG=./cypress/configs/$TARGET_APP.json BASE_URL=`node -p \"require('$CYPRESS_CONFIG').baseUrl\"` && CYPRESS_CONFIG=$CYPRESS_CONFIG start-server-and-test start $BASE_URL cypress:run",
    "test:watch": "yarn run cypress open --config-file cypress/configs/$TARGET_APP.json"
}

Tying it all together

If your entire project structure is set up properly, and the file names and configurations all match as described at the start of this blog, the scripts should work as is. The usage looks like this:

TARGET_APP=app1 yarn run test
TARGET_APP=app1 yarn run test:watch

The first command, test, runs through all of the test specs that the config file you set up points to (via the integrationFolder option) in the command line. This option is good for quickly verifying passing tests in the background and/or in your CI/CD pipeline. It first resolves the config file in cypress/configs/, grabs the baseUrl from it, uses start-server-and-test to start your target app, and, once the app is running, runs your test suite and tears everything down. This makes it a powerful and flexible way to chain together runs of all your Cypress test suites for all your apps, back to back, from the same place.
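
For example, a single entry point that runs every app's suite in sequence could be added to the same scripts section. This is just a sketch assuming two apps named app1 and app2; list whichever apps your monorepo actually contains:

    "test:all": "TARGET_APP=app1 yarn run test && TARGET_APP=app2 yarn run test"

Running yarn run test:all then starts and tests each app one after the other, stopping at the first suite that fails.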

The next script, test:watch, is the one you'll want to use when developing tests. All it does is open Cypress against the target app's config file you've set up; you'll still need to start your app locally in another process. The benefit is that once you change code in either the app or your Cypress spec files, both will reload automatically while everything stays open.
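
In practice, that workflow looks something like this (assuming app1 uses the yarn dev start command from the example config; substitute whatever your app actually uses):

# In one terminal, start the target app manually from its package
cd app1
yarn dev

# In another terminal, from tests-e2e/, open Cypress against that app's config
TARGET_APP=app1 yarn run test:watch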

Conclusion

The solution laid out here isn't the right one for every monorepo. However, I believe it can eliminate redundancy for certain types of monorepo structures where the apps inside are similar enough, or, more rarely, where each app is the same but the target deployments or underlying technologies are different. Instead of a per-app Cypress installation and test suite in each app package, this may be exactly the solution for abstracting, or even refactoring, them into one single place. There's also enough flexibility built into this structure to allow a unique set of tests against only one or some of the apps you need integration tests for.

Of course, this is only the first step, but hopefully it eliminates the potentially most problematic one. If you'd like to read further, we also have a great guide on writing the tests themselves with Cypress that you can check out.

