
Making Seamless Page Transitions with the View Transitions API


At the time of writing this article (July 2024), Firefox does not support this feature, and Safari only supports it in its Technology Preview. The global reach of this feature is roughly 72% and will only get higher with the new Safari release happening soon. There is no downside or harm in adding view transitions to your application: browsers that don’t currently support them will simply skip the animation and navigate as usual.


Traditionally, web applications have offered a less polished experience, both functionally and visually, than native applications on mobile and other platforms. This has been improving over the years with the introduction of new web APIs and standards.

One such new API that bridges that gap is the View Transitions API.

What is the View Transitions API?

The View Transitions API is a new browser feature that allows developers to more easily create animated transitions between pages and views, similar to how mobile apps have transitions when navigating between pages.

Adding view transitions to your application can reduce the cognitive load on your users and make the experience feel more cohesive. One great thing about view transitions is that they can be used by SPAs (single-page applications) and MPAs (multi-page applications) alike!

Using the View Transitions API for SPAs

Let’s see how we can use view transitions ourselves. We will use examples adapted from demos created by the wonderful Jake Archibald and Bramus. There are some alterations to a few of the examples to make them work with the newest version of the View Transitions API, but otherwise, they’re mostly the same.

Getting view transitions working in an SPA is as simple as calling document.startViewTransition when a navigation starts and replacing your page’s content inside the callback you pass to it. You need to make this call in your router or in whatever event listener triggers your page navigation.

The following is an example of how you can hook into page navigation in a vanilla JavaScript application. It uses the Navigation API when it’s available. At the time of writing, only Chromium-based browsers support the Navigation API, so there is a fallback to a click listener on link elements that makes the example work in Firefox and Safari.

export async function onLinkNavigate(callback) {
  // Fallback to intercepting click events on <a> tags if the
  // Navigation API is not supported by the browser.
  //
  // For modern JavaScript frameworks with client-side routing
  // you would want to hook into the router instead as <a> tags
  // may not be the only way pages are navigated to.
  if (!self.navigation) {
    document.addEventListener('click', (event) => {
      const link = event.target.closest('a');
      if (!link) return;

      event.preventDefault();

      const toUrl = new URL(link.href);

      callback({
        toPath: toUrl.pathname,
        fromPath: location.pathname,
        isBack: false,
      });

      history.pushState({}, null, toUrl.pathname);
    });
    return;
  }

  // Use the Navigation API if it is available.
  navigation.addEventListener('navigate', (event) => {
    const toUrl = new URL(event.destination.url);

    // Do not intercept the event if we're navigating to
    // a URL on another origin.
    if (location.origin !== toUrl.origin) return;

    const fromPath = location.pathname;
    const isBack = isBackNavigation(event);

    event.intercept({
      async handler() {
        if (event.info === 'ignore') return;

        await callback({
          toPath: toUrl.pathname,
          fromPath,
          isBack,
        });
      },
    });
  });
}
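
The isBackNavigation helper used above isn’t included in the snippet. Here is a minimal sketch of what it could look like, comparing the destination’s history entry index with the current entry’s index (this is an assumption about the helper’s implementation, not the only way to do it):

function isBackNavigation(navigateEvent) {
  // Pushes and replaces always move forward in history.
  if (
    navigateEvent.navigationType === 'push' ||
    navigateEvent.navigationType === 'replace'
  ) {
    return false;
  }

  // For traversals, a destination with a lower index than the
  // current entry means we're going back.
  return (
    navigateEvent.destination.index !== -1 &&
    navigateEvent.destination.index < navigation.currentEntry.index
  );
}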

If you’re using a framework, there’s probably a better, framework-specific way to do this than this contrived example. For example, if you’re using this API in a Next.js application, you may opt for something like next-view-transitions.

For this article, we’re going to focus on the way this is done in vanilla JavaScript so that the fundamentals of using the API are understood. Knowing how it’s implemented in vanilla JavaScript should give you enough understanding to implement this in the framework of your choice if there isn’t already a library that does it for you.

Now we have a way to know when a navigation is being requested. Using the above utility function, we can start the transition and replace our content as follows:

onLinkNavigate(async ({ toPath }) => {
  const content = await getMyPageContentSomehow(toPath);

  startViewTransition(() => {
    document.body.innerHTML = content;
  });
});

function startViewTransition(callback) {
  if (!document.startViewTransition) {
    callback();
    return;
  }

  document.startViewTransition(callback);
}

getMyPageContentSomehow can be any function that returns HTML. How that is implemented will depend on your application and framework of choice.
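
As one possible sketch (assuming each path serves a full HTML document), you could fetch the destination page and pull the markup out of its body:

// Hypothetical implementation of getMyPageContentSomehow: fetch the
// destination page and return only its <body> markup.
async function getMyPageContentSomehow(toPath) {
  const response = await fetch(toPath);
  const html = await response.text();

  // Parse the full document and extract the body's inner HTML.
  const doc = new DOMParser().parseFromString(html, 'text/html');
  return doc.body.innerHTML;
}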

If you want to run these examples on your machine, the demo repository has instructions in its README. In essence, it’s as simple as hosting an HTTP server at the root of the project and visiting localhost. No fancy build system is required!

By default, the browser performs a quick cross-fade. You can configure the animation in CSS with the ::view-transition-group() pseudo-element, passing in the name of the group you want to animate. In our case that’s the entire page, so we pass in root.

::view-transition-group(root) {
  animation-duration: 5s;
}

Since these transitions are driven by standard CSS animation properties, you have a great deal of control over the animation. You can change the duration and the easing, add keyframes and gradients, and even use image masks. You can learn more about the CSS animation properties on MDN.
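
For example, here is a sketch of a slide animation for the root group (the keyframe names and timings are illustrative assumptions, not part of the API):

/* The outgoing page slides out to the left... */
::view-transition-old(root) {
  animation: 300ms ease-in both slide-to-left;
}

/* ...and the incoming page slides in from the right. */
::view-transition-new(root) {
  animation: 300ms ease-out both slide-from-right;
}

@keyframes slide-to-left {
  to { transform: translateX(-100%); }
}

@keyframes slide-from-right {
  from { transform: translateX(100%); }
}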

Using the View Transitions API for MPAs

As mentioned, view transitions also work for MPAs with no client-side routing. Setting up view transitions for MPAs is much easier than for SPAs. All you have to do is include the CSS rule on all pages where you want the animations.

@view-transition {
  navigation: auto;
}

Of course, just as with SPAs, you can customize the animations to your heart’s content using the view transition pseudo-elements.

Conclusion

I hope this article helped you understand the fundamentals of using the View Transitions API. Now that most browsers support it and it’s not too difficult to add it to existing applications, it is a great time to learn how to use it.

You can view the full list of examples (made by Jake Archibald and Bramus) in the demo repository if you want to see how advanced the animations can get. I especially like the Star Wars ones. The README explains how to get all of the examples (43 in total!) up and running.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.

