

QR Code Scanning & Generation with ZXing and Angular

QR codes provide a valuable way of sharing information with users, and many applications rely on them for various purposes. Common places where QR codes are useful include theaters, social gatherings, and other ticketing or booking services. Integrating QR codes can seem intimidating, but thankfully it isn’t too difficult with Angular and the help of a couple of robust software libraries such as ZXing.

What is ZXing?

ZXing, also known as “Zebra Crossing”, is a software library that allows developers to easily integrate scanning of many different types of 1D and 2D barcodes. The library was originally written in Java, but many ports to other languages and platforms exist, including JavaScript.

There are a couple of ways to use ZXing. You can either use the JavaScript library directly, which is framework-agnostic, or use a framework-specific wrapper. I’m going to demonstrate how to scan QR codes with the Angular wrapper, ngx-scanner, which provides a full-fledged scanner component that makes integration easy.

Let’s Scan some QR Codes with Angular!

Getting ngx-scanner up and running doesn’t take much setup at all. All you have to do is import ZXingScannerModule into the module that you will be scanning QR codes in, and add a component to your template.
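Assuming you’re installing from npm (npm install @zxing/ngx-scanner), a minimal module setup might look like the following sketch (the module and file names here are illustrative):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { ZXingScannerModule } from '@zxing/ngx-scanner';

import { ScannerComponent } from './scanner.component';

@NgModule({
  declarations: [ScannerComponent],
  imports: [CommonModule, ZXingScannerModule],
  exports: [ScannerComponent],
})
export class ScannerModule {}

With that in place, below is a real-world example of how the <zxing-scanner> component would be used.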

<zxing-scanner
  #scanner
  *ngIf="startCamera$ | async"
  class="scanner"
  [enable]="(enable$ | async) ?? false"
  [autostart]="true"
  [device]="(selectedDevice$ | async) ?? undefined"
  [formats]="allowedFormats"
  (camerasFound)="devices$.next($event)"
  (scanSuccess)="scanSuccess$.next($event)"
  (scanError)="scanError($event)"
></zxing-scanner>

You’ll notice a lot of observables being referenced here. I’ll go over them one by one. Let’s start with the devices subject:

devices$ = new BehaviorSubject<MediaDeviceInfo[]>([]);

This subject receives data whenever the scanning component identifies cameras. We want to keep track of these so that we’re able to select a camera to use. In our case, we’ll just select the first camera for simplicity’s sake, as happens in the selectedDevice$ observable that we have bound to the [device] attribute on the scanner:

selectedDevice$: Observable<MediaDeviceInfo | undefined> = this.devices$.pipe(
  map((devices) => devices[0]),
  distinctUntilChanged(),
  shareReplay(1)
);

We utilize shareReplay here so we don’t need to re-scan the devices whenever additional subscribers check what devices are available.

enable$ is a simple observable that emits true once devices have been detected:
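enable$ = this.devices$.pipe(map((devices) => devices.length > 0));

The length check matters here: an empty array is still truthy in JavaScript, so mapping with Boolean alone would always emit true, even before any camera is found.

startCamera$ is an observable that toggles between true and false whenever a value is pushed to the toggleCamera$ subject: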

// A plain Subject avoids the initial emission a BehaviorSubject would
// produce, which would immediately toggle the camera back off.
toggleCamera$ = new Subject<void>();

startCamera$ = this.toggleCamera$.pipe(
  scan((acc) => !acc, true),
  startWith(true)
);

The scan operator from RxJS is used so that we have access to the previous state and can flip the stored boolean on each emission.

Finally, we push successfully scanned codes to scanSuccess$, which we can then subscribe to in the template.
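For example, the scanned value can be rendered with the async pipe, and a button can feed the toggleCamera$ subject. The markup below is an illustrative sketch (the names match the component that follows):

<button (click)="toggleCamera$.next()">Toggle camera</button>
<p *ngIf="scanSuccess$ | async as result">Scanned: {{ result }}</p>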

All together the final component code should look something like this:

import { Component, ViewChild } from '@angular/core';
import { ZXingScannerComponent } from '@zxing/ngx-scanner';
import { BarcodeFormat } from '@zxing/library';
import {
  BehaviorSubject,
  Observable,
  Subject,
  distinctUntilChanged,
  map,
  scan,
  shareReplay,
  startWith,
} from 'rxjs';

@Component({
  selector: 'app-scanner',
  templateUrl: './scanner.component.html',
  styleUrls: ['./scanner.component.scss'],
})
export class ScannerComponent {
  @ViewChild('scanner') scanner!: ZXingScannerComponent;

  // Restrict scanning to QR codes only.
  allowedFormats = [BarcodeFormat.QR_CODE];

  // Receives the camera list via the (camerasFound) event.
  devices$ = new BehaviorSubject<MediaDeviceInfo[]>([]);

  // Select the first available camera; undefined until one is found.
  selectedDevice$: Observable<MediaDeviceInfo | undefined> = this.devices$.pipe(
    map((devices) => devices[0]),
    distinctUntilChanged(),
    shareReplay(1)
  );

  // Emits true once at least one camera has been detected.
  enable$ = this.devices$.pipe(map((devices) => devices.length > 0));

  // A plain Subject avoids the initial emission a BehaviorSubject would
  // produce, which would immediately toggle the camera back off.
  toggleCamera$ = new Subject<void>();

  startCamera$ = this.toggleCamera$.pipe(
    scan((acc) => !acc, true),
    startWith(true)
  );

  // Receives decoded strings via the (scanSuccess) event.
  scanSuccess$ = new BehaviorSubject<string>('');

  scanError(error: Error) {
    console.error(error);
  }
}

Use this component, and you should get something that looks like this:

[Screenshot: QR code scanning]

Scanning QR codes is awesome, but we can also generate our own QR codes as well!

Generating our own QR Codes

Unfortunately, the ZXing scanner component we’re using is limited to scanning QR codes. However, there is another component library called ngx-kjua that can generate them. Like the scanner, this component is based on a pre-existing framework-agnostic library; in this case, kjua.
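Assuming you’re installing from npm, it’s a single command:

npm install ngx-kjua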

Fortunately, kjua is very easy to use. You simply supply it a code, and it will render a QR code.
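As with the scanner, the component’s module needs to be imported first. Here is a minimal sketch, assuming the library exports NgxKjuaModule (the surrounding names are illustrative):

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { NgxKjuaModule } from 'ngx-kjua';

import { GeneratorComponent } from './generator.component';

@NgModule({
  declarations: [GeneratorComponent],
  imports: [CommonModule, NgxKjuaModule],
  exports: [GeneratorComponent],
})
export class GeneratorModule {}

The template usage then looks like this: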

<ngx-kjua
  class="code"
  *ngIf="code$ | async"
  [text]="code$ | async"
  [size]="256"
></ngx-kjua>

How the QR code is rendered can be changed via the optional [render] attribute, which accepts one of three values: ‘image’, ‘canvas’, or ‘svg’. The default render mode is ‘svg’. It should be noted that the ‘svg’ and ‘canvas’ render modes make copying the image more difficult, so use ‘image’ if you want to make that possible.
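For example, here is the same template as above with the render mode made explicit:

<ngx-kjua
  class="code"
  *ngIf="code$ | async"
  [text]="code$ | async"
  [size]="256"
  [render]="'image'"
></ngx-kjua>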

The component using ngx-kjua has a code observable that is updated with a new random code whenever a “Generate” button is pressed.
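The button wiring might look like this sketch (illustrative markup; generateCodeClick is defined in the component below):

<button (click)="generateCodeClick($event)">Generate</button>

The component code associated with this template is very small: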

import { Component } from '@angular/core';
import { BehaviorSubject } from 'rxjs';
import randomstring from 'randomstring';

@Component({
  selector: 'app-generator',
  templateUrl: './generator.component.html',
  styleUrls: ['./generator.component.scss'],
})
export class GeneratorComponent {
  code$ = new BehaviorSubject<string>(this.generateCode());

  generateCodeClick(event: MouseEvent) {
    this.code$.next(this.generateCode());
  }

  private generateCode(): string {
    return randomstring.generate({ length: 12, charset: 'alphabetic' });
  }
}

I’ve opted to go with the randomstring package in this case. This component could be modified to instead allow for custom codes using a textbox, if you want to encode URLs or other useful values that typically go inside of QR codes.
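A hypothetical version of that change could push the textbox value into the same code$ subject. This markup is a sketch, not part of the demo:

<input #custom type="text" placeholder="Text to encode" />
<button (click)="code$.next(custom.value)">Encode</button>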

The result looks like this:

[Screenshot: QR code scanner result]

Summary

Thanks to projects like ZXing and kjua, it is easy to integrate QR code handling into Angular applications, be it for scanning codes or creating them. I hope this guide has helped you. You can find the full example showcasing generation and scanning of QR codes in our blog-demos repository on GitHub.

