Integrating Next.js with New Relic

When it comes to application monitoring, New Relic is genuinely one of the veterans in the market. Founded in 2008, it offers a large set of tools designed for developers to help them analyze application performance, troubleshoot problems, and gain insight into user behavior through real-time analytics.

We needed to integrate New Relic's application monitoring with a modern Next.js app built on top of the app router. This blog post will explain your options if you want to do the same in your project. Please keep in mind that depending on when you read this blog post, there may be some advances in how New Relic works with Next.js. You can always refer to the official New Relic website for up-to-date documentation.

The approach also differs depending on whether you use Vercel's managed Next.js hosting (or any other lambda-based provider) or self-host Next.js on an alternative provider such as Fly.io. We'll start with the steps common to both ways of deploying, then continue with the self-hosted option, which can use New Relic's complete feature set, and finish with Vercel, where we currently recommend the OpenTelemetry integration.

Common Prerequisite Steps

First, let's install the necessary packages. These were built by New Relic and are needed whether you self-host your Next.js app or host it on Vercel.

npm install newrelic @newrelic/next
npm install -D @types/newrelic

The next thing you'll want to do is define your app name. The name will be different depending on the environment the app is running in, such as a local environment, staging, or production. This will ensure that the data you send to New Relic is always tagged with a proper app name and will not be mixed between environments.

An environment-dependent parameter, such as the app name, should always live in an environment variable. In your local environment, that would be .env.local, while staging, production, or any other hosted environment would be configured through the provider.

NEW_RELIC_APP_NAME='My App - Staging'

To get full coverage with New Relic in Next.js, we need to cover both the part running on the client (in the browser) and the part running on the server. The server part differs between Vercel's managed hosting, where all of the logic is executed within lambdas, and self-hosted variants, where the server logic runs on a Node.js server.

The client part is configured by going to the New Relic dashboard and clicking "Add Data" / "Browser Monitoring". Choose "place a JavaScript snippet in frontend code," give it the same name you used in the environment variable above, click "Continue" to the last step, and copy the part between the script tags:

<script type="text/javascript">
	// big chunk of JavaScript that you need to copy
</script>

Then paste it into a variable inside any module of your code. For example, this is how we did it:

// app/components/new-relic-scripts-per-env.ts

export const DEV_SCRIPT=`script contents`;
export const STAGING_SCRIPT=`script contents`;
export const PRODUCTION_SCRIPT=`script contents`;

Now, create a component that reads the appropriate script and passes it to Next's Script component:

// app/components/new-relic-script.tsx
import Script from "next/script";
import * as React from "react";
import { IS_PRODUCTION, IS_STAGING } from "@/utilities/environment";
import {
  DEV_SCRIPT,
  PRODUCTION_SCRIPT,
  STAGING_SCRIPT,
} from "./new-relic-scripts-per-env";

function getScriptContent() {
  if (IS_PRODUCTION) {
    return PRODUCTION_SCRIPT;
  }

  if (IS_STAGING) {
    return STAGING_SCRIPT;
  }

  return DEV_SCRIPT;
}

export default function NewRelicScript() {
  const scriptContent = getScriptContent();

  return (
    <Script
      id="nr-browser-agent"
      strategy="beforeInteractive"
    >
      {scriptContent}
    </Script>
  );
}
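You can then render the component in your root layout. Here's a minimal sketch of what that could look like (the surrounding layout markup is just an illustrative assumption; keep your existing layout and only add the component):

// app/layout.tsx (a minimal sketch; adapt to your existing layout)
import type { ReactNode } from "react";
import NewRelicScript from "./components/new-relic-script";

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {/* Renders the New Relic browser snippet via next/script */}
        <NewRelicScript />
        {children}
      </body>
    </html>
  );
}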

There are several things to note here. First, we need to set a unique ID on the Script component so that Next.js can track and optimize the script. We also need to set the strategy to beforeInteractive so that the New Relic script is added to the document’s head element. Finally, IS_PRODUCTION and IS_STAGING are helper flags that read the environment variables to determine whether we are running locally, in staging, or in production.
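These helpers aren't shown in this article, but a minimal sketch could look like the following (the APP_ENV variable name and the fallback app name are assumptions; adapt them to however your project distinguishes environments):

// utilities/environment.ts (a hypothetical sketch)
export const NEW_RELIC_APP_NAME =
  process.env.NEW_RELIC_APP_NAME ?? "My App - Local";

// APP_ENV is an assumed variable that each hosted environment sets
// (e.g. "staging" or "production").
export const IS_PRODUCTION = process.env.APP_ENV === "production";
export const IS_STAGING = process.env.APP_ENV === "staging";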

Now you can create an app router error page and report any uncaught errors to New Relic:

// app/error.tsx
"use client";

import { useEffect } from "react";

export default function ErrorPage({
  error,
}: {
  error: Error & { digest?: string };
}) {
  useEffect(() => {
    if (window.newrelic) {
      window.newrelic.noticeError(error);
    } else {
      console.error("Cannot report error to New Relic", error);
    }
  }, [error]);

  return (
    <div>
      <h2>Something went wrong!</h2>
    </div>
  );
}
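Note that TypeScript doesn't know about window.newrelic out of the box (the @types/newrelic package we installed only covers the server-side agent), so you may need an ambient declaration. A minimal sketch, covering only the call used above:

// types/new-relic-browser.d.ts (a hypothetical file name)
// Minimal ambient declaration for the browser agent; extend it as you use more of the API.
interface Window {
  newrelic?: {
    noticeError: (
      error: Error,
      customAttributes?: Record<string, string | number | boolean>
    ) => void;
  };
}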

With this in place, the client part is covered, and you should start seeing data in the New Relic dashboard under "All Entities" / "Browser Applications."

Let's now focus on the server part, with both a self-hosted option and Vercel.

Self-Hosting Next.js

If you self-host Next.js, it will typically run through a Node server (containerized or not), meaning you can fully utilize New Relic's APM agent.

Start by adding the following to the Next.js config (next.config.js) to mark the newrelic package as external; otherwise, you would not be able to import it:

{
  ...
  experimental: {
    ...
    // Without this setting, the Next.js compilation step will routinely
    // try to import files such as `LICENSE` from the `newrelic` module.
    // Note that this property will be renamed in the future: https://github.com/vercel/next.js/pull/65421
    serverComponentsExternalPackages: ["newrelic"],
  },
}

The next step that New Relic recommends is to copy the newrelic.js file from the newrelic library to the root of the project. However, we want to keep our project root clean, so we'll store the file in o11y/newrelic.js instead (“o11y” is a common abbreviation for observability). This also lets us demonstrate the configuration needed to make that work.

By default, this file will look something like this:

"use strict";

/**
 * New Relic agent configuration.
 *
 * See lib/config/default.js in the agent distribution for a more complete
 * description of configuration variables and their potential values.
 */
exports.config = {
  /**
   * Array of application names.
   */
  app_name: [process.env.NEW_RELIC_APP_NAME],
  /**
   * Your New Relic license key.
   */
  license_key: process.env.NEW_RELIC_LICENSE_KEY,
  agent_enabled: true,
  logging: {
    /**
     * Level at which to log. 'trace' is most useful to New Relic when diagnosing
     * issues with the agent, 'info' and higher will impose the least overhead on
     * production applications.
     */
    level: "info",
  },
  /**
   * When true, all request headers except for those listed in attributes.exclude
   * will be captured for all traces, unless otherwise specified in a destination's
   * attributes include/exclude lists.
   */
  allow_all_headers: true,
  attributes: {
    /**
     * Prefix of attributes to exclude from all destinations. Allows * as wildcard
     * at the end.
     *
     * NOTE: If excluding headers, they must be in camelCase form to be filtered.
     *
     * @name NEW_RELIC_ATTRIBUTES_EXCLUDE
     */
    exclude: [
      "request.headers.cookie",
      "request.headers.authorization",
      "request.headers.proxyAuthorization",
      "request.headers.setCookie*",
      "request.headers.x*",
      "response.headers.cookie",
      "response.headers.authorization",
      "response.headers.proxyAuthorization",
      "response.headers.setCookie*",
      "response.headers.x*",
    ],
  },
};

As you can see, some environment variables are involved, such as NEW_RELIC_APP_NAME and NEW_RELIC_LICENSE_KEY. These environment variables need to be available when the New Relic agent starts up. The trick here is that the agent doesn't start with your app but earlier, by preloading the library through another environment variable, NODE_OPTIONS.

On the server, that would be done by modifying the start script to this:

{
  "scripts": {
    ...
    "start": "NODE_OPTIONS='--loader newrelic/esm-loader.mjs -r @newrelic/next --no-warnings' next start",
  }
}

This ensures the agent is preloaded before the app starts, and the rest of the environment variables (such as NEW_RELIC_APP_NAME and NEW_RELIC_LICENSE_KEY) should already exist on the server. Making this work in local development is not as easy because of when environment variables are loaded: the variables from .env.local are loaded by the framework, which is too late for the New Relic agent (which is preloaded by the Node process itself). However, it is possible with a trick.

First, add the following new environment variables to .env.local:

NEW_RELIC_LICENSE_KEY=# your New Relic license key
NEW_RELIC_HOME=o11y
# NODE_OPTIONS is needed only for local development
NODE_OPTIONS='--loader newrelic/esm-loader.mjs -r @newrelic/next --no-warnings'

(We have not mentioned NEW_RELIC_HOME so far, but the New Relic agent implicitly uses it to locate the newrelic.js file. We need to specify that since we moved it to the o11y folder.)

Next, install the node-env-run package:

npm install -D node-env-run

The node-env-run package (which provides the nodenv CLI) offers a cross-platform way of preloading environment variables before we start the Node process. If we point it at the .env.local file, all of the environment variables from that file will be available to the New Relic agent!

Change the scripts in package.json:

{
  "scripts": {
    "dev": "nodenv -E .env.local --exec \"next dev --turbo -H apollo.knopmanmarks.localhost -p 3000\"",
    "start": "NODE_OPTIONS='--loader newrelic/esm-loader.mjs -r @newrelic/next --no-warnings' next start",
  }
}

And that's it. If you now start the local server using the dev command, nodenv will make the environment variables available to the Node process. The Node process will preload the New Relic agent (because we have NODE_OPTIONS in .env.local), the agent will read all the environment variables needed by newrelic.js, and everything will work as expected.

The first thing you might notice is the creation of a newrelic_agent.log file. This is just a log file written by the agent, which you can use for troubleshooting if the agent is not working correctly. You can safely add this file to .gitignore.
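For example, the corresponding .gitignore entry would simply be:

# Ignore the local New Relic agent log
newrelic_agent.log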

From this point on, the server-side part of the app can utilize New Relic's API (such as noticeError and recordLogEvent), and the agent will automatically record transactions going through the server. The data should be visible in the New Relic dashboard under "All Entities" / "Services - APM".

This is how you would log an error in your server action, for example:

import newrelic from "newrelic";

export async function createUserAction(...) {
  try {
    // create user logic
  } catch (err) {
    newrelic.noticeError(err);
  }
}
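Similarly, any server-side code (a server action, route handler, or server component) can forward a custom log line with recordLogEvent. A minimal sketch, with a hypothetical helper name and placeholder message:

import newrelic from "newrelic";

// Hypothetical helper; the name, parameter, and message are placeholders.
export function auditUserCreation(userId: string) {
  // Forward a custom log line to New Relic alongside the APM data.
  newrelic.recordLogEvent({
    message: `User created: ${userId}`,
    level: "info",
  });
}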

The full source of this type of integration is available on StackBlitz.

Hosting on Vercel

Using the New Relic APM agent on Vercel is currently not possible, as the server part of Next.js runs within lambdas and not within a typical Node server as in the self-hosted option. This may change in the future. At the moment, the only way to integrate Vercel and New Relic is through the Vercel OpenTelemetry collector, which is only available on some plans.

The first step is to enable the integration in Vercel's dashboard and then install the following packages:

npm install @vercel/otel @opentelemetry/api

Create an instrumentation.ts file in the root of your project and add the following code:

import { registerOTel } from "@vercel/otel";
import { NEW_RELIC_APP_NAME } from "@/utilities/environment";

// Next.js calls this exported function once when the server instance boots.
export function register() {
  registerOTel({ serviceName: NEW_RELIC_APP_NAME });
}

In the Next.js config (next.config.js), add:

{
  ...
  experimental: {
    instrumentationHook:
      !!process.env.VERCEL_ENV && process.env.VERCEL_ENV !== "development", // Only use instrumentation if deployed to Vercel
  },
}  

The OpenTelemetry integration does not work locally, hence the above conditional check. With the above configuration, the collected OpenTelemetry data should be visible in the New Relic dashboard under "All Entities" / "Services - OpenTelemetry".
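Although @vercel/otel instruments Next.js automatically, you can also create custom spans with the @opentelemetry/api package installed earlier; spans created this way should flow through Vercel's collector to New Relic along with the built-in ones. A minimal sketch (the tracer name, span name, and action are placeholders):

import { trace } from "@opentelemetry/api";

// Hypothetical action; adapt the tracer and span names to your own code.
export async function createUserAction() {
  const tracer = trace.getTracer("my-app");

  return tracer.startActiveSpan("createUser", async (span) => {
    try {
      // create user logic
    } finally {
      // Always end the span, even if the work throws.
      span.end();
    }
  });
}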

Optional: Logging via the New Relic API

When using the Vercel OpenTelemetry integration, there is no easy way to submit custom logs to New Relic. New Relic does offer a logging API, however, so to send custom logs from your backend, you could use a logger such as Winston, which has good support for custom transports, including one for New Relic.

Install the following packages:

npm install winston winston-nr

Then create a logger.ts file with the following code:

import {
  LOG_LEVEL,
  NEW_RELIC_APP_NAME,
  NEW_RELIC_LICENSE_KEY,
} from "./environment";
import winston from "winston";
import { NewRelicTransport } from "winston-nr";

const newRelicTransport = new NewRelicTransport({
  apiUrl: "https://log-api.newrelic.com/log/v1",
  apiKey: NEW_RELIC_LICENSE_KEY,
  compression: true,
  retries: 3,
  batchSize: 10,
  batchTimeout: 5000,
  format: winston.format.json(),
});

export const serverLogger = winston.createLogger({
  level: LOG_LEVEL,
  defaultMeta: { serviceName: NEW_RELIC_APP_NAME },
  transports: [newRelicTransport],
});

NewRelicTransport utilizes the New Relic logging API to send application logs. Now you can use the serverLogger in your backend code, from server components to server actions.
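For example, a server action could use it like this (the import path, action name, and messages are assumptions based on where you keep logger.ts):

import { serverLogger } from "@/utilities/logger";

export async function createUserAction() {
  try {
    // create user logic
    serverLogger.info("User created successfully");
  } catch (err) {
    // Attach the error as metadata so it is included in the New Relic log entry.
    serverLogger.error("Failed to create user", { error: err });
    throw err;
  }
}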

Conclusion

As you can see, various options exist for integrating New Relic with the modern, app-router-based Next.js. Sadly, at the time of writing, there is no one-size-fits-all solution that works equally well for all hosting options, as Vercel's managed Next.js platform is still relatively young, and so is the app router in Next.js.

If you liked this blog post, feel free to browse our other Next.js blog posts for more tips and tricks in other areas of the Next.js ecosystem.

