
Creating Complex UIs Using the Stripe UI Extension SDK

This article was written over 18 months ago and may contain information that is out of date. Some content may still be relevant, but please refer to the official documentation or other available resources for the latest information.

Anyone who embarks on a journey to develop apps for the Stripe dashboard will need to grasp the concepts of frontend development using the Stripe UI Extension SDK. While a backend for your Stripe app may not always be necessary, a frontend is most certainly needed. In this blog, I will provide you with some tips & tricks that you might want to consider if you plan to develop more complex UIs.

The Basics

Stripe's UI Extension SDK is React-based. This means that you will need to have some React experience if you want to create Stripe apps. While Stripe apps are web apps, you will never touch HTML or CSS directly. In fact, you're forbidden to. This experience feels almost like developing in React Native. You use the provided UI primitives or components to scaffold your UI.

When you open your app on the Stripe dashboard, you will see one of your app's views. Depending on where you are in the dashboard, you can open a view that is customer-related, payment-related, and so on. On the main dashboard page, you will open the app's default view.

A typical drawer view will have a ContextView component as its root, with any other components from the UI toolkit nested inside as necessary.

The main building block for your components is a Box. It's a plain block-level container, so you can think of it as a div. The Box, just like most other components, has a css property where you can apply styling.

As mentioned before, you can't use real CSS though. Instead, you'll need to rely on the predefined CSS properties, most of which are very similar to real CSS properties in both naming and functionality.

Just like we have a block-level container, we also have inline containers similar to span. In Stripe's case, this component is called Inline.

All other components provide some additional functionality. Some of them are meant to be containers as well, while others are usually standalone.

When you navigate through the Stripe dashboard, the dashboard will open different views in your app. On the Customers page, for example, it will open a view for customers (if such a view is defined in stripe-app.json). However, what if you are in a view, but wish to navigate somewhere else within it? Or perhaps you need functionality to go back and forth between components? A typical scenario is having a widget which, when clicked, opens more details, but within the same dashboard view.

For this scenario, you can utilize React Router, specifically its MemoryRouter. Install it first:

npm install react-router-dom@6

Since we don't have access to the DOM, and we cannot manipulate the browser location either, using MemoryRouter is good enough for our needs. It will store the current location as well as history in an internal memory array. You use it just like you would use the regular router.

Let's assume that you need to authenticate your user before using the app. Let's also assume that, once logged in, you are shown an FAQ component with multiple questions. Once you click a FAQ question, you are directed to a FAQ answer within the view, as shown below:

[Image: FAQ navigation concept]

To model such transitions, you would use the React Router. The default view could have the following implementation:

import type { ExtensionContextValue } from '@stripe/ui-extension-sdk/context';
import { ContextView } from '@stripe/ui-extension-sdk/ui';
import {
  APP_BRANDING_COLOR,
  APP_BRANDING_LOGO,
} from './constants/branding.constants';
import { Route, Routes } from 'react-router-dom';
import { MemoryRouter as Router } from 'react-router';
// ... other imports
const DefaultView = ({ userContext, environment }: ExtensionContextValue) => {
  const viewId = environment?.viewportID; // stripe.dashboard.drawer.default for default view
  return (
    <ContextView
      title=" "
      description=" "
      brandIcon={APP_BRANDING_LOGO}
      brandColor={APP_BRANDING_COLOR}
    >
      <Router basename="/" initialEntries={['/init']}>
        <Routes>
          <Route path="/init" element={<AuthInit />} />
          <Route path="/login" element={<Login />} />
          <Route path="/faq-entries/:id" element={<FaqEntry />} />
          <Route path="/" element={<Faq />} />
        </Routes>
      </Router>
    </ContextView>
  );
};
export default DefaultView;

There are several things of note here.

The ContextView is the root component in our DefaultView component. When we don't want to provide a title and description for the ContextView, we use a single space. You can't use an empty string, or leave out the title and description altogether.

The first child of the ContextView is a Router. This is where our components will render, depending on our route. Remember that we use a MemoryRouter, and the route is an internal property of the router. It's not reflected in the window's location.

When you open the view for the first time, you are presented with the AuthInit component. This component could check if you are authenticated, and if so, redirect you to /, and consequently the React Router will render the Faq component.
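The redirect decision inside a component like AuthInit can be kept as a small pure function. The helper below is a hypothetical convenience, not part of the SDK or react-router; authenticated users land on the FAQ list at `/`, everyone else is sent to `/login`:

```typescript
// Hypothetical helper: decide where AuthInit should redirect.
const initialRoute = (isAuthenticated: boolean): string =>
  isAuthenticated ? '/' : '/login';
```

Inside AuthInit you would then call something like `navigate(initialRoute(isAuthenticated), { replace: true })`, using `replace` so the `/init` entry doesn't linger in the history.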

Within the Faq component, we can use the useNavigate hook to navigate to a FaqEntry component to show the expanded question and answer for a FAQ entry. For example:

import React from 'react';
import { useNavigate } from 'react-router-dom';
export const Faq: React.FC = () => {
  const navigate = useNavigate();
  const onClickFaqEntry = (id: string) => {
    navigate(`/faq-entries/${id}`);
  };
  return <>{/* Display a list of FAQ entries here */}</>;
};

You can even try to extract the Router part into its own component, e.g. NavigationWrapper, and re-use it across views. In that case, you might consider including view IDs in the route to distinguish between different views. Following our example, this means our routes would become:

<Router basename="/" initialEntries={['/init']}>
  <Routes>
    <Route path="/init" element={<AuthInit />} />
    <Route path="/login" element={<Login />} />
    <Route path="stripe.dashboard.drawer.default/faq-entries/:id" element={<FaqEntry />} />
    <Route path="stripe.dashboard.drawer.default" element={<Faq />} />
    <Route path="stripe.dashboard.customer.detail" element={<SomeCustomerRelatedComponent />} />
  </Routes>
</Router>

The current view ID is always available in the environment?.viewportID property, should you ever need it.
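If you do share a NavigationWrapper across views, a small helper can build the viewport-prefixed paths consistently instead of hardcoding them in every navigate call. This helper is a hypothetical convenience, not part of the SDK; it mirrors the route shape used above, where paths have no leading slash:

```typescript
// Hypothetical helper: build a route path prefixed with a viewport ID,
// matching the route shape used in the shared NavigationWrapper above.
const viewRoute = (viewportID: string, subPath = ''): string =>
  subPath ? `${viewportID}/${subPath}` : viewportID;
```

For example, `viewRoute('stripe.dashboard.drawer.default', 'faq-entries/42')` yields the path matched by the `stripe.dashboard.drawer.default/faq-entries/:id` route.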

Using UserContext and Environment Everywhere

The UserContext and Environment are so useful and so commonly used that it makes sense to store them in React context. Otherwise, you'd need to pass them down from your view component all the way to the components that need them. This is a technique known as "prop drilling" in the React world.

First, define your context object:

import { createContext } from 'react';
import type { ExtensionContextValue } from '@stripe/ui-extension-sdk/context';

export const GlobalContext = createContext<{
  userContext: ExtensionContextValue['userContext'] | null;
  environment: ExtensionContextValue['environment'] | null;
}>({ userContext: null, environment: null });

Now, simply wrap your view component in a context provider, and initialize it with the userContext and environment that are passed to your view:

import type { ExtensionContextValue } from '@stripe/ui-extension-sdk/context';
import { ContextView } from '@stripe/ui-extension-sdk/ui';
import { NavigationWrapper } from './authentication/NavigationWrapper';
import { GlobalContext } from './common/global-context';
import {
  APP_BRANDING_COLOR,
  APP_BRANDING_LOGO,
} from './constants/branding.constants';
const DefaultView = ({ userContext, environment }: ExtensionContextValue) => {
  return (
    <GlobalContext.Provider value={{ userContext, environment }}>
      <ContextView
        title=" "
        description=" "
        brandIcon={APP_BRANDING_LOGO}
        brandColor={APP_BRANDING_COLOR}
      >
        <NavigationWrapper /> {/* Routes go here */}
      </ContextView>
    </GlobalContext.Provider>
  );
};
export default DefaultView;

Now, you can use the useContext hook to retrieve properties from userContext and environment anywhere in your component tree. For example, this is how we would retrieve the payment object if we were on the payments page:

const globalContext = useContext(GlobalContext);
const objectContextId = globalContext.environment?.objectContext?.id;
const objectContextType = globalContext.environment?.objectContext?.object;
useEffect(() => {
  if (!objectContextId) {
    return;
  }
  if (objectContextType === 'payment_intent') {
    // stripeApi is an initialized Stripe API client
    const request = stripeApi.paymentIntents.retrieve(
      objectContextId
    ) as Promise<Stripe.PaymentIntent>;
    // Do something with Promise<Stripe.PaymentIntent>
  }
}, [objectContextId, objectContextType]);

Updating Title and Description Dynamically

title and description are properties on the view level, but more often than not, you might want to update them dynamically, as they are very context-dependent. In one of the above examples, we might want to display the FAQ answer in the title as we navigate to individual FAQ entries.

In such cases, we can use React context again. Let's extend our GlobalContext with title and description properties, as well as setters for them.

export const GlobalContext = createContext<{
  userContext: ExtensionContextValue['userContext'] | null;
  environment: ExtensionContextValue['environment'] | null;
  title: string | null;
  description: string | null;
  setTitle: (newTitle: string) => void;
  setDescription: (newDescription: string) => void;
}>({
  userContext: null,
  environment: null,
  title: null,
  description: null,
  setTitle: (newTitle: string) => {},
  setDescription: (newDescription: string) => {},
});

The DefaultView component now becomes:

import type { ExtensionContextValue } from '@stripe/ui-extension-sdk/context';
import { ContextView } from '@stripe/ui-extension-sdk/ui';
import { NavigationWrapper } from './authentication/NavigationWrapper';
import { GlobalContext } from './common/global-context';
import {
  APP_BRANDING_COLOR,
  APP_BRANDING_LOGO,
} from './constants/branding.constants';
import { useState } from 'react';
const DefaultView = ({ userContext, environment }: ExtensionContextValue) => {
  const [title, setTitle] = useState(' ');
  const [description, setDescription] = useState(' ');
  return (
    <GlobalContext.Provider
      value={{
        userContext,
        environment,
        title,
        description,
        setTitle,
        setDescription,
      }}
    >
      <ContextView
        title={title}
        description={description}
        brandIcon={APP_BRANDING_LOGO}
        brandColor={APP_BRANDING_COLOR}
      >
        <NavigationWrapper /> {/* Routes go here */}
      </ContextView>
    </GlobalContext.Provider>
  );
};
export default DefaultView;

Now, any child component can update title and/or description, using the following snippet:

const { setTitle } = useContext(GlobalContext);
// Call from an event handler or effect, not directly during render:
setTitle('My new title');
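Since ContextView rejects empty titles and descriptions, it can be worth guarding the setters with a small fallback. The helper below is hypothetical, not part of the SDK; it substitutes the single-space placeholder used throughout this article whenever the incoming value is blank:

```typescript
// Hypothetical guard: ContextView does not accept an empty title or
// description, so fall back to a single space.
const safeTitle = (title: string): string =>
  title.trim() === '' ? ' ' : title;
```

A child component could then call `setTitle(safeTitle(entry.question))` without worrying about blank values.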

Layout Options

Before starting to model different types of layouts, it is highly recommended to learn the concept of "stacks". Stacks are Stripe's way of modelling flexbox-like layouts.

It's not exactly the same as flexbox, but once you get the hang of it, you will find it to be very similar. You can stack up elements horizontally (on the "x" axis) or vertically (on the "y" axis). Furthermore, elements can be distributed in a different way along the axis, or aligned in a certain way, giving you the ability to make any kind of layout you can imagine. You can even stack up elements on top of each other.

Whenever you want to align your content in any way, you need to use a stack.

Key-Value Two-Column Layout

Let's assume that you want to show key-value pairs in a two-column layout. This can be accomplished easily using stacks:

<Box
  css={{
    stack: 'x',
    gap: 'xxsmall',
    width: 'fill',
    marginBottom: 'xxsmall',
  }}
>
  <Box
    css={{
      keyline: 'neutral',
      padding: 'small',
      fontWeight: 'bold',
      width: '1/2',
    }}
  >
    App Version:
  </Box>
  <Box
    css={{
      keyline: 'neutral',
      padding: 'small',
      stack: 'x',
      alignX: 'end',
      width: '1/2',
    }}
  >
    1.0.0
  </Box>
</Box>

The above code is for one row of the two-column layout. To add more rows, just add more blocks like these (or better, extract the markup above into its own reusable component). This is how it would be displayed in the app:

[Image: two-column key-value layout]

The above code snippet has several things of note:

  • Whenever you set a width on a Box, make sure its parent has a gap defined. The width calculation takes the gap into account, and if the gap is 0, the calculation will produce an invalid width (the Box will simply fit its content).
  • To align the content of a Box (we used alignX: 'end'), the Box should be a stack.
  • In addition to using margin, you can also use marginTop, marginBottom, marginLeft, and marginRight to specify margin on only one (or more) sides.

Vertically Aligning Elements

Stacks are not only helpful for horizontal alignments. You can align vertically too, and this can come in handy if you want to nicely align an icon with a text label, or if you want to have form elements displayed in a straight line. For example:

<Box css={{ stack: 'x', alignY: 'center', gap: 'small' }}>
  <Select name="options">
    <option value="option-1">Option 1</option>
    <option value="option-2">Option 2</option>
    <option value="option-3">Option 3</option>
    <option value="option-4">Option 4</option>
  </Select>
  <Switch label="Switch me" checked />
</Box>
[Image: vertically aligned form elements]

Wizards

Whenever you need to have confirmation modals, or wizard-like flows, it's best to use the FocusView component. A focus view slides from the right and allows the user to have a dedicated space to perform a specific task, such as entering details to create a new entry in a database, or going through a wizard.

A wizard can be implemented using two or more FocusView components. Here is an example of a wizard with two steps:

import { Button, FocusView } from '@stripe/ui-extension-sdk/ui';
import { useState } from 'react';
const DefaultView = () => {
  const [step1Shown, setStep1Shown] = useState<boolean>(false);
  const [step2Shown, setStep2Shown] = useState<boolean>(false);
  const onPressStart = () => {
    setStep1Shown(true);
  };
  const onPressStep2 = () => {
    setStep1Shown(false);
    setStep2Shown(true);
  };
  const onPressFinish = () => {
    setStep2Shown(false);
  };
  const onPressBack = () => {
    setStep2Shown(false);
    setStep1Shown(true);
  };
  return (
    <>
      <Button type="primary" onPress={onPressStart}>
        Start Wizard
      </Button>
      <FocusView
        title="Step 1"
        shown={step1Shown}
        primaryAction={
          <Button type="primary" onPress={onPressStep2}>
            Next Step
          </Button>
        }
        secondaryAction={
          <Button onPress={() => setStep1Shown(false)}>Cancel</Button>
        }
        onClose={() => setStep1Shown(false)}
      >
        Step 1 content
      </FocusView>
      <FocusView
        title="Step 2"
        shown={step2Shown}
        primaryAction={
          <Button type="primary" onPress={onPressFinish}>
            Finish
          </Button>
        }
        secondaryAction={<Button onPress={onPressBack}>Back</Button>}
        onClose={onPressBack}
      >
        Step 2 content
      </FocusView>
    </>
  );
};
export default DefaultView;
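For wizards with more than two steps, the per-step booleans above get unwieldy. One alternative, sketched below under the assumption that a single step index drives which FocusView is shown (step 0 meaning "closed"), is a pair of pure transition helpers:

```typescript
// Sketch: drive the wizard with one step index instead of per-step booleans.
// Step 0 means "closed"; steps 1..totalSteps correspond to the FocusViews.
const nextStep = (step: number, totalSteps: number): number =>
  step < totalSteps ? step + 1 : 0; // finishing the last step closes the wizard

const prevStep = (step: number): number =>
  step > 1 ? step - 1 : 0; // going back from step 1 closes the wizard
```

In the component you would keep `const [step, setStep] = useState(0);` and render each FocusView with `shown={step === n}`, so adding a step is just one more FocusView rather than another pair of state variables.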

Conclusion

Stripe did fantastic work providing an amazing UI Extension SDK for developing custom Stripe apps. We hope this blog post provided you with some guidance on how to set solid foundations for more complex UIs built with the Stripe UI Extension SDK. Stripe is continuously upgrading the SDK, and you can definitely expect some new widgets to play with in the future!


You might also like

How to Login in to Third Party Services in Stripe Apps with OAuth PKCE cover image

How to Login in to Third Party Services in Stripe Apps with OAuth PKCE

One of the benefits of Stripe Apps is that they allow you to connect to third-party services directly from the Stripe Dashboard. There are many ways to implement the OAuth flows to authenticate with a third-party service, but the ideal one for Stripe Apps is PKCE. Unlike other OAuth flows, a Stripe app authenticating with a third-party using PKCE does not require any kind of backend. The entire process can take place in the user's browser. What is OAuth PKCE Proof Code for Key Exchange (PKCE, pronounced "pixie") is an extension of regular OAuth flows. It is designed for when you've got a client where it would be possible to access a secret key, such as a native app, or a single-page app. Because Stripe Apps are very restricted for security purposes, the OAuth PKCE flow is the only OAuth flow that works in Stripe Apps without requiring a separate backend. Not all third-party services support the PKCE authorization flow. One that does is Dropbox, and we will use that for our code examples. Using createOAuthState and oauthContext to Get an Auth Token To use the OAuth PKCE flow, you'll use createOAuthState from the Stripe UI Extension SDK to generate a state and code challenge. We will use these to request a code and verifier from Dropbox. Dropbox will then respond to a specific endpoint for our Stripe App with the code and verifier, which we'll have access to in the oauthContext. With these, we can finally get our access token. If you wish to follow along, you'll need to both create a Stripe App and a Dropbox App. We'll start by creating state to save our oauthState and challenge, and then get a code and verifier if we don't have one already. If we do have a code and verifier, we'll try to get the token, and put it in tokenData state. ` ` ` Fetch Dropbox User Data To prove to ourselves that the token works, let's fetch Dropbox user data using the token. We'll create a new function to fetch this user data, and call it from within our Stripe App's view. 
We'll store this user data in state. ` ` ` Storing Tokens with the Secret Store Currently, we're only persisting the retrieved token data in memory. As soon as we close the Stripe App, it will be forgotten and the user would have to fetch it all over again. For security reasons, we can't save it as a cookie or to local storage. But Stripe has a solution: the secret store. The secret store allows us to persist key-value data with Stripe itself. We can use this to save our token data and load it whenever a user opens our Stripe App. To make it easier to work with the secret store, we'll create a custom hook: useSecretStore. ` Once we've got our custom hook ready, we can integrate it into our App.tsx view. We will rewrite the useEffect to check for a saved token in the secret store, and use that if it's valid. Only if there is no token available do we create a new one, which will then be persisted to the secret store. We also add a Log Out button, which will reset the tokenData and secret store values to null. The Log Out button creates an issue. If we have oauthContext from logging in, and then we log out, the Stripe App still has the same oauthContext. If we tried logging in again without closing the app, we would get an error because we're re-using old credentials. To fix this, we also add a React ref to keep track of whether or not we've used our current oauthContext values. ` We've done a lot to create our authorization flow using PKCE. To see this entire example all together, check out this code sample on GitHub....

Integrating In-house Data and Workflows with Stripe Using Private Stripe Apps cover image

Integrating In-house Data and Workflows with Stripe Using Private Stripe Apps

Stripe Apps is a recently-announced platform that allows developers to embed content within Stripe's web UI, extending its functionality to allow interaction with non-Stripe services. Your immediate thought upon hearing of such a platform might be that it is useful for public services, such as customer support, to develop Stripe integrations. This is a core use-case and high-profile public integrations like those for Intercom and DocuSign have featured prominently in demonstrations of the platform's capabilities. However, you shouldn't overlook the value of private apps, developed specifically for your organization and visible only to your employees. Private apps may prove to be even more valuable, because they can specifically address your business' problems, automating domain-specific workflows, and integrate with in-house data and services. What is a private Stripe App? Stripe Apps published to the marketplace act like apps you might be familiar with from iOS or Android. They are developed for use by the public, each version of them goes through a strict review by Stripe, and they are published to the Stripe Marketplace where anyone can install them. Private Stripe Apps, in contrast, are published directly to the Stripe account that owns them. Since they are not going to be visible to users outside of the organization they are developed for, they don't have to go through the app review process, simplifying the development and maintenance process. Accessing internal services Since Stripe Apps are browser apps which execute on a user's own machine — as opposed to on Stripe's servers — private Stripe Apps can make use of resources that are only accessible from company-controlled devices. Intranet services and any internal authentication are accessible from your private Stripe App just as they are from any other browser-based internal tooling. There are only two caveats to accessing HTTP services from Stripe Apps. 
Both of them are driven by the security model of apps. The first is that services must be served over HTTPS, which is standard for internet-facing services, but might not be the case for private services on an intranet. The second one is that services must allow all cross-origin requests, since requests from Stripe Apps are made with a null origin, and therefore CORS allowlisting cannot be used to secure services against cross-site request forgery. If this is a major concern, endpoints specific to your Stripe App can be constructed that are secured through Stripe's request signing mechanism, and which proxy requests to the internal services only for requests signed with the App's secret. Enriching views with context-based data and workflows Stripe Apps are displayed on the same screen as Stripe objects like customers or invoices, and can access information about those objects and interact with them. This enables smoother workflows by operators by showing important context all in the same view. For example: - Displaying product shipping and returns information on the Invoice Details screen in order to make processing refunds more efficient. - Allowing operators to see — and maybe edit — the features that a given subscription plan includes directly in the Product Details screen. - Displaying account activity from multiple sources like access and change logs in the Customer Details screen in order to make it easier to resolve support queries. If anyone in your organization is currently working with multiple open browser tabs to manually collate information or execute workflows that cross service boundaries, a private Stripe App could help automate that process. This will free them up to do more valuable tasks, and reduce the likelihood of errors by making contextual information more reliably available, and eliminating manual steps....

The 2025 Guide to JS Build Tools cover image

The 2025 Guide to JS Build Tools

The 2025 Guide to JS Build Tools In 2025, we're seeing the largest number of JavaScript build tools being actively maintained and used in history. Over the past few years, we've seen the trend of many build tools being rewritten or forked to use a faster and more efficient language like Rust and Go. In the last year, new companies have emerged, even with venture capital funding, with the goal of working on specific sets of build tools. Void Zero is one such recent example. With so many build tools around, it can be difficult to get your head around and understand which one is for what. Hopefully, with this blog post, things will become a bit clearer. But first, let's explain some concepts. Concepts When it comes to build tools, there is no one-size-fits-all solution. Each tool typically focuses on one or two primary features, and often relies on other tools as dependencies to accomplish more. While it might be difficult to explain here all of the possible functionalities a build tool might have, we've attempted to explain some of the most common ones so that you can easily understand how tools compare. Minification The concept of minification has been in the JavaScript ecosystem for a long time, and not without reason. JavaScript is typically delivered from the server to the user's browser through a network whose speed can vary. Thus, there was a need very early in the web development era to compress the source code as much as possible while still making it executable by the browser. This is done through the process of *minification*, which removes unnecessary whitespace, comments, and uses shorter variable names, reducing the total size of the file. This is what an unminified JavaScript looks like: ` This is the same file, minified: ` Closely related to minimizing is the concept of source maps#Source_mapping), which goes hand in hand with minimizing - source maps are essentially mappings between the minified file and the original source code. Why is that needed? 
Well, primarily for debugging minified code. Without source maps, understanding errors in minified code is nearly impossible because variable names are shortened, and all formatting is removed. With source maps, browser developer tools can help you debug minified code. Tree-Shaking *Tree-shaking* was the next-level upgrade from minification that became possible when ES modules were introduced into the JavaScript language. While a minified file is smaller than the original source code, it can still get quite large for larger apps, especially if it contains parts that are effectively not used. Tree shaking helps eliminate this by performing a static analysis of all your code, building a dependency graph of the modules and how they relate to each other, which allows the bundler to determine which exports are used and which are not. Once unused exports are found, the build tool will remove them entirely. This is also called *dead code elimination*. Bundling Development in JavaScript and TypeScript rarely involves a single file. Typically, we're talking about tens or hundreds of files, each containing a specific part of the application. If we were to deliver all those files to the browser, we would overwhelm both the browser and the network with many small requests. *Bundling* is the process of combining multiple JS/TS files (and often other assets like CSS, images, etc.) into one or more larger files. A bundler will typically start with an entry file and then recursively include every module or file that the entry file depends on, before outputting one or more files containing all the necessary code to deliver to the browser. As you might expect, a bundler will typically also involve minification and tree-shaking, as explained previously, in the process to deliver only the minimum amount of code necessary for the app to function. 
Transpiling Once TypeScript arrived on the scene, it became necessary to translate it to JavaScript, as browsers did not natively understand TypeScript. Generally speaking, the purpose of a *transpiler* is to transform one language into another. In the JavaScript ecosystem, it's most often used to transpile TypeScript code to JavaScript, optionally targeting a specific version of JavaScript that's supported by older browsers. However, it can also be used to transpile newer JavaScript to older versions. For example, arrow functions, which are specified in ES6, are converted into regular function declarations if the target language is ES5. Additionally, a transpiler can also be used by modern frameworks such as React to transpile JSX syntax (used in React) into plain JavaScript. Typically, with transpilers, the goal is to maintain similar abstractions in the target code. For example, transpiling TypeScript into JavaScript might preserve constructs like loops, conditionals, or function declarations that look natural in both languages. Compiling While a transpiler's purpose is to transform from one language to another without or with little optimization, the purpose of a *compiler* is to perform more extensive transformations and optimizations, or translate code from a high-level programming language into a lower-level one such as bytecode. The focus here is on optimizing for performance or resource efficiency. Unlike transpiling, compiling will often transform abstractions so that they suit the low-level representation, which can then run faster. Hot-Module Reloading (HMR) *Hot-module reloading* (HMR) is an important feature of modern build tools that drastically improves the developer experience while developing apps. In the early days of the web, whenever you'd make a change in your source code, you would need to hit that refresh button on the browser to see the change. 
This would become quite tedious over time, especially because with a full-page reload, you lose all the application state, such as the state of form inputs or other UI components. With HMR, we can update modules in real-time without requiring a full-page reload, speeding up the feedback loop for any changes made by developers. Not only that, but the full application state is typically preserved, making it easier to test and iterate on code. Development Server When developing web applications, you need to have a locally running development server set up on something like http://localhost:3000. A development server typically serves unminified code to the browser, allowing you to easily debug your application. Additionally, a development server will typically have hot module replacement (HMR) so that you can see the results on the browser as you are developing your application. The Tools Now that you understand the most important features of build tools, let's take a closer look at some of the popular tools available. This is by no means a complete list, as there have been many build tools in the past that were effective and popular at the time. However, here we will focus on those used by the current popular frameworks. In the table below, you can see an overview of all the tools we'll cover, along with the features they primarily focus on and those they support secondarily or through plugins. The tools are presented in alphabetical order below. Babel Babel, which celebrated its 10th anniversary since its initial release last year, is primarily a JavaScript transpiler used to convert modern JavaScript (ES6+) into backward-compatible JavaScript code that can run on older JavaScript engines. Traditionally, developers have used it to take advantage of the newer features of the JavaScript language without worrying about whether their code would run on older browsers. 
esbuild

esbuild, created by Evan Wallace, co-founder and former CTO of Figma, is primarily a bundler that advertises itself as one of the fastest bundlers on the market. Unlike all the other tools on this list, esbuild is written in Go. When it was first released, it was unusual for a JavaScript bundler to be written in a language other than JavaScript, but this choice has provided significant performance benefits. esbuild supports ESM and CommonJS modules, as well as CSS, TypeScript, and JSX. Unlike traditional bundlers, esbuild creates a separate bundle for each entry-point file. Nowadays, it is used by tools like Vite and frameworks such as Angular.

Metro

Unlike the other build tools mentioned here, which are mostly web-focused, Metro's primary focus is React Native. It has been specifically optimized for bundling, transforming, and serving JavaScript and assets for React Native apps. Internally, it utilizes Babel as part of its transformation process. Metro is sponsored by Meta and actively maintained by the Meta team.

Oxc

The JavaScript Oxidation Compiler, or Oxc, is a collection of Rust-based tools. Although it is referred to as a compiler, it is essentially a toolchain that includes a parser, linter, formatter, transpiler, minifier, and resolver. Oxc is sponsored by Void Zero and is set to become the backbone of other Void Zero tools, like Vite.

Parcel

Feature-wise, Parcel covers a lot of ground (no pun intended). Largely created by Devon Govett, it is designed as a zero-configuration build tool that supports bundling, minification, tree-shaking, transpiling, compiling, HMR, and a development server. It can handle all the asset types you will need, from JavaScript to HTML, CSS, and images. Its core is mostly written in JavaScript, with a CSS transformer written in Rust, and it delegates JavaScript compilation to SWC. It also has a large collection of community-maintained plugins.
Overall, it is a good tool for quick development without requiring extensive configuration.

Rolldown

Rolldown is the future bundler for Vite, written in Rust and built on top of Oxc, currently leveraging its parser and resolver. Inspired by Rollup (hence the name), it will provide Rollup-compatible APIs and a plugin interface, but it will be closer to esbuild in scope. It is still in heavy development and not ready for production, but we should definitely be hearing more about this bundler in 2025 and beyond.

Rollup

Rollup is the current bundler for Vite. Originally created by Rich Harris, the creator of Svelte, Rollup is slowly becoming a veteran (speaking in JavaScript years) compared to the other build tools here. When it originally launched, it introduced novel ideas focused on ES modules and tree-shaking, at a time when its competitor Webpack was becoming too complex due to its extensive feature set. Rollup promised a simpler approach, with a straightforward configuration process that is easy to understand. Rolldown, mentioned previously, is hoped to become a replacement for Rollup at some point.

Rsbuild

Rsbuild is a high-performance build tool written in Rust and built on top of Rspack. Feature-wise, it has many similarities with Vite. Both Rsbuild and Rspack are sponsored by the Web Infrastructure Team at ByteDance, the parent company of TikTok. Rsbuild is built as a high-level tool on top of Rspack that adds many features Rspack itself doesn't provide, such as a better development server, image compression, and type checking.

Rspack

Rspack, as the name suggests, is a Rust-based alternative to Webpack. It offers a Webpack-compatible API, which is helpful if you are familiar with setting up Webpack configurations. However, if you are not, it might have a steep learning curve.
To address this, the same team that built Rspack also developed Rsbuild, which helps you achieve a lot with out-of-the-box configuration. Under the hood, Rspack uses SWC for compiling and transpiling. Feature-wise, it's quite robust. It includes built-in support for TypeScript, JSX, Sass, Less, CSS modules, Wasm, and more, as well as features like module federation, PostCSS, Lightning CSS, and others.

Snowpack

Snowpack was created around the same time as Vite, with both aiming to address similar needs in modern web development. Their primary focus was on faster build times and leveraging ES modules. Both Snowpack and Vite introduced a novel idea at the time: instead of bundling files while running a local development server, like traditional bundlers, they served the app unbundled. Each file was built only once and then cached indefinitely. When a file changed, only that specific file was rebuilt. For production builds, Snowpack relied on external bundlers such as Webpack, Rollup, or esbuild. Unfortunately, Snowpack is a tool you're likely to hear less and less about in the future. It is no longer actively developed, and Vite has become the recommended alternative.

SWC

SWC, which stands for Speedy Web Compiler, can be used for both compilation and bundling (with the help of SWCpack), although compilation is its primary feature. And it really is speedy, thanks to being written in Rust, like many other tools on this list. Primarily advertised as an alternative to Babel, SWC is roughly 20x faster than Babel on a single thread. SWC compiles TypeScript to JavaScript, JSX to JavaScript, and more. It is used by tools such as Parcel and Rspack, and by frameworks such as Next.js, for transpiling and minification. SWCpack is the bundling part of SWC. However, active development within the SWC ecosystem is not currently a priority.
The main author of SWC now works on Turbopack at Vercel, and the documentation states that SWCpack is presently not in active development.

Terser

Terser has the smallest scope of the tools on this list, but considering how many of them use it, it's worth a section of its own. Terser's primary role is minification. It is the successor to the older UglifyJS, with better performance and ES6+ support.

Vite

Vite is somewhat of a special beast. It's primarily a development server, but calling it just that would be an understatement, as it combines the features of a fast development server with modern build capabilities. Vite shines in different ways depending on how it's used. During development, it provides a fast server that doesn't bundle code like traditional bundlers (e.g., Webpack). Instead, it uses native ES modules, serving them directly to the browser. Since the code isn't bundled, Vite also delivers fast HMR, so any updates you make are nearly instant. Vite uses two bundlers under the hood. During development, it uses esbuild, which also allows it to act as a TypeScript transpiler. For each file you work on, it creates a file for the browser, allowing an easy separation between files that helps HMR. For production, it uses Rollup, which generates a single file for the browser. However, Rollup is not as fast as esbuild, so production builds can be a bit slower than you might expect. (This is why Rollup is being rewritten in Rust as Rolldown. Once complete, the same bundler will be used for both development and production.) Traditionally, Vite has been used for client-side apps, but with the new Environment API released in Vite 6.0, it bridges the gap between client-side and server-rendered apps.

Turbopack

Turbopack is a bundler written in Rust by the creators of Webpack and Next.js at Vercel.
The idea behind Turbopack was to do a complete rewrite of Webpack from scratch while keeping a Webpack-compatible API as much as possible. This is not an easy feat, and the task is still not finished. The enormous popularity of Next.js is also helping Turbopack gain traction in the developer community. Right now, Turbopack is used as an opt-in feature in Next.js's dev server. Production builds are not yet supported but are planned for future releases.

Webpack

And finally we arrive at Webpack, the legend among bundlers, which has held a dominant position as the primary bundler for a long time. Despite the many alternatives to Webpack now available (as we've seen in this blog post), it is still widely used, and some modern frameworks, such as Next.js, still use it as their default bundler. Initially released back in 2012, its development is still going strong. Its primary features are bundling, code splitting, and HMR, but other features are available as well thanks to its popular plugin system. Configuring Webpack has traditionally been challenging, and since it's written in JavaScript rather than a lower-level language like Rust, its performance lags behind newer tools. As a result, many developers are gradually moving away from it.

Conclusion

With so many build tools in today's JavaScript ecosystem, many of them similarly named, it's easy to get lost. Hopefully, this blog post was a useful overview of the tools that are most likely to remain relevant in 2025. Although, at the speed this ecosystem develops, we may well be looking at a completely different picture in 2026!

The simplicity of deploying an MCP server on Vercel

The current Model Context Protocol (MCP) spec is shifting developers toward lightweight, stateless servers that serve as tool providers for LLM agents. These MCP servers communicate over HTTP, with OAuth handled client-side. Vercel's infrastructure makes it easy to iterate quickly and ship agentic AI tools without overhead.

Example of Lightweight MCP Server Design

At This Dot Labs, we built an MCP server that leverages the DocuSign Navigator API. The tools, like `get_agreements`, make a request to the DocuSign API to fetch data and then respond in an LLM-friendly way.

Before the MCP server can request anything, it needs to guide the client on how to kick off OAuth. This involves exposing some MCP-spec metadata API endpoints that include the necessary information about where to obtain authorization tokens and what resources they grant access to. With these details, the client can seamlessly initiate the OAuth process, ensuring secure and efficient data access.

The OAuth flow begins when the user's LLM client makes a request without a valid auth token. In that case, it gets a 401 response from our server with a WWW-Authenticate header, and the client then uses the metadata we exposed to discover the authorization server. Next, the OAuth flow kicks off directly with DocuSign, as directed by the metadata. Once the client has the token, it passes it in the Authorization header for tool requests to the API.

This minimal set of API routes enables me to fetch DocuSign Navigator data using natural language in my agent chat interface.

Deployment Options

I deployed this MCP server two different ways: as a Fastify backend, and then with Vercel functions. Seeing how simple my Fastify MCP server was, and not really having a plan for deployment yet, I was eager to rewrite it for Vercel.
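For illustration, a tool handler along these lines can be sketched as follows. The field names, endpoint URL, and response shape here are assumptions made up for the example, not the actual This Dot Labs implementation or the real Navigator API:

```typescript
// Hypothetical shape of a DocuSign Navigator agreement (assumed fields).
interface Agreement {
  id: string;
  title: string;
  status: string;
}

// Turn raw API data into an LLM-friendly plain-text summary.
function formatAgreements(agreements: Agreement[]): string {
  if (agreements.length === 0) return "No agreements found.";
  return agreements
    .map((a) => `- ${a.title} (id: ${a.id}, status: ${a.status})`)
    .join("\n");
}

// A get_agreements-style tool handler: call the API with the caller's
// OAuth token, then respond with MCP-style text content. The URL below
// is a placeholder, not the real Navigator API route.
async function getAgreements(accessToken: string) {
  const res = await fetch("https://example.com/navigator/agreements", {
    headers: { Authorization: `Bearer ${accessToken}` },
  });
  const data = (await res.json()) as Agreement[];
  return { content: [{ type: "text", text: formatAgreements(data) }] };
}
```

The key design point is the last step: rather than returning raw JSON, the tool flattens the data into text an LLM can read directly.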
The case for Vercel:

* My own familiarity with Next.js API deployment
* Fit for the architecture
* The extremely simple deployment process
* Deploy previews (the eternal Vercel customer conversion feature, IMO)

Previews of unfamiliar territory

Did you know that the MCP spec doesn't "just work" for use as ChatGPT tooling? Neither did I, and I had to experiment to prove out requirements I was unfamiliar with. Part of moving fast, for me, was deploying Vercel previews right out of the CLI so I could test my API as a Connector in ChatGPT. This was a great workflow for me, and invaluable for the team in code review.

Stuff I'm Not Worried About

Vercel's mcp-handler package made setup effortless by abstracting away some of the complexity of implementing the MCP server. It gives you a drop-in way to define tools, set up HTTP streaming, and handle OAuth. By building on Vercel's ecosystem, I can focus entirely on shipping my product without worrying about deployment, scaling, or server management. Everything just works.

A Brief Case for MCP on Next.js

Building an API without Next.js on Vercel is straightforward. Still, I'd be happy deploying this as a Next.js app, with the frontend features serving as the documentation, or with the tools being part of your website's agentic capabilities. Overall, this lowers the barrier to building any MCP you want for yourself, and I think that's cool.

Conclusion

I'll avoid quoting Vercel documentation in this post. AI tooling is a critical component of this natural language UI, and we just want to ship. I declare Vercel is excellent for stateless MCP servers served over HTTP.
