
React 18: Concurrency and Streaming SSR


React 18

Earlier this month, the React team announced the React 18 alpha, along with their plans for release. Included in that announcement is the creation of a working group to help prepare the community for the new features, nearly all of which can be adopted gradually.

Concurrency (Concurrent mode begone!)

So we finally have an answer to what we've been wondering about for the past three years! React 18 will ship with a new concurrency model, but not a completely separate concurrency "mode". Concurrency comes from a new set of features and kicks in automatically when those features are used, so it is completely opt-in.

createRoot is now the way we initialize the root of our React applications, as you can see here:

import * as ReactDOM from 'react-dom';
import App from 'App';

const container = document.getElementById('app');

const root = ReactDOM.createRoot(container);

root.render(<App />);

This new createRoot also enables all the new concurrent features, a subtle distinction from a "mode" 😉. This model allows rendering work and events to be interleaved, and lets you assign priority to various kinds of updates. This gives us better ways of keeping the UI fluid and performant.
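
For reference, here is the legacy entry point this replaces. It still works in React 18, but it keeps the old rendering behavior and doesn't enable the concurrent features:

import * as ReactDOM from 'react-dom';
import App from 'App';

// Legacy API: renders without the new concurrent features
ReactDOM.render(<App />, document.getElementById('app'));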

Priority

To achieve concurrency, React now uses a cooperative multitasking model: a single thread of rendering work that can be interrupted based on priority. Interruptions happen between rendering individual components. Since the time it takes to render a single component is usually very small, this provides a very granular way to yield when high-priority work comes through.

If you'd like to read more in depth, check out this topic in the working group.

startTransition

One of the new concurrent features is startTransition, which lets us mark updates that can be lower priority, aka "transition" updates. Marking an update as a transition allows urgent updates to interrupt it, keeping interactions responsive.

To do this, we'll use a new API:

import { startTransition } from 'react';

// Urgent: Show what was typed
setInputValue(input);

// Mark any state updates inside as transitions
startTransition(() => {
  // Transition: Show the results
  setSearchQuery(input);
});

Urgent vs Transition Updates

Urgent updates are things like clicking, hovering, scrolling, typing, or any other action where the user expects things to happen instantly, as they do natively in the browser. Transition updates are more "React-centric" things like handling and managing UI state.

Currently, all updates in React 18 and below are considered urgent by default. The way this is worded seems to imply a future where all updates are considered transitions, and we only need to specify the urgent ones. I suspect it was done this way to make adoption even more gradual, because realistically, most updates should be transitions, not urgent. However, people need time to get used to the idea of specifying priority before everything actually gets de-prioritized.
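
React also ships a useTransition hook that pairs the same startTransition function with an isPending flag, so you can show lightweight feedback while the low-priority update is in flight. Here's a minimal sketch (Spinner and SearchResults are hypothetical placeholder components, not part of React):

import { useState, useTransition } from 'react';

// Hypothetical placeholders for this sketch
const Spinner = () => <p>Updating…</p>;
const SearchResults = ({ query }) => <p>Results for "{query}"</p>;

function Search() {
  const [isPending, startTransition] = useTransition();
  const [input, setInput] = useState('');
  const [query, setQuery] = useState('');

  function handleChange(e) {
    // Urgent: keep the text field responsive
    setInput(e.target.value);

    // Transition: the expensive results update can be interrupted
    startTransition(() => {
      setQuery(e.target.value);
    });
  }

  return (
    <>
      <input value={input} onChange={handleChange} />
      {isPending ? <Spinner /> : <SearchResults query={query} />}
    </>
  );
}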

Automatic Update Batching

A small but significant change is that all updates are now automatically batched. You may have noticed, when building React apps, that any time you put multiple setState calls next to each other in something like an event handler, React combines them into one update so that only one render occurs. However, you may have also noticed that this previously did not happen inside callbacks, setTimeout, Promises, and so on.

import { useState } from 'react';

function App() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handlePrevClick() {
    // Inside event handlers, React has always batched these
    // into a single re-render (batching!)
    setCount(c => c - 1);
    setFlag(f => !f);
  }

  function handleNextClick() {
    fetchSomething().then(() => {
      // Before React 18, each of these caused its own re-render.
      // React 18 and later DOES batch these:
      setCount(c => c + 1);
      setFlag(f => !f);
      // React will only re-render once at the end (that's batching!)
    });
  }

  return (
    <div>
      <button onClick={handlePrevClick}>Prev</button>
      <button onClick={handleNextClick}>Next</button>
      <h1 style={{ color: flag ? "blue" : "black" }}>{count}</h1>
    </div>
  );
}

flushSync

However, maybe you were relying on that non-batching behavior! There's a new API to opt out of batching (although the team says this should be uncommon):

import { flushSync } from 'react-dom'; // Note: react-dom, not react

function handleClick() {
  flushSync(() => {
    setCounter(c => c + 1);
  });
  // React has updated the DOM by now
  flushSync(() => {
    setFlag(f => !f);
  });
  // React has updated the DOM by now
}

Streaming Server Render

Probably one of the most impactful changes in this set is the ability for server-side rendering (SSR) output to be streamed to the browser, enabling sections of the page to be sent before the entire page is ready. This should reduce the time it takes for the user to see anything on the page.

Streaming + Selective Hydration

What does this mean exactly? Today, SSR renders the entire HTML page and waits until it's completely finished before sending anything to the browser. The new method allows the page to be broken up into chunks using <Suspense>, and the server can start streaming HTML as soon as one of these chunks is ready.
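
To give a rough idea of what this looks like on the server, here's a minimal sketch assuming an Express-style handler and the renderToPipeableStream API from react-dom/server (the API name was still in flux during the alpha, so treat this as the general shape rather than the exact code):

import express from 'express';
import { renderToPipeableStream } from 'react-dom/server';
import App from './App';

const server = express();

server.get('/', (req, res) => {
  const { pipe } = renderToPipeableStream(<App />, {
    // Fires as soon as the shell (everything outside <Suspense>) is ready
    onShellReady() {
      res.statusCode = 200;
      res.setHeader('Content-Type', 'text/html');
      // Start streaming; Suspense chunks follow as they resolve
      pipe(res);
    },
    onError(error) {
      console.error(error);
    },
  });
});

server.listen(3000);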

Even if that HTML gets to the browser faster, it still needs to be hydrated. Previously, hydration was an all-or-nothing operation, requiring the entire page's JavaScript to be downloaded first. Now, we can use a combination of <Suspense> and React.lazy() to achieve selective hydration as well.
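
As a rough sketch, the same <Suspense> boundaries that split the HTML stream can wrap code-split components, so each chunk hydrates independently once its JavaScript arrives (Comments and Spinner here are hypothetical components for illustration):

import { lazy, Suspense } from 'react';

// Hypothetical code-split component; its JS can arrive after the shell
const Comments = lazy(() => import('./Comments'));
const Spinner = () => <p>Loading comments…</p>;

function Post() {
  return (
    <article>
      <h1>My post</h1>
      <Suspense fallback={<Spinner />}>
        <Comments />
      </Suspense>
    </article>
  );
}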

If you'd like to see an example of how the server handles this, the React team put together a nice demo.

Event Replay and Priority

If that wasn't enough already, the team also found a way to prioritize hydration for the sections users interact with. React will now record interactions with various parts of the page, replay them, and prioritize hydration based on what has been interacted with. This is really elegant because it lets the parts the user cares about become interactive as quickly as possible. 👏👏👏👏

Suspense is Now Very Important

These new features are all enabled by the <Suspense> component. Wrapping parts of the page that may take longer to load lets their work be deferred while the rest of the page gets rendered, streamed, and hydrated, making everything faster to display and quicker to interact with!

If you'd like to see an in-depth explanation, check out this topic in the working group complete with diagrams.

Conclusion

This is an extremely exciting update with lots of promise for better-performing apps and faster load times! Increased control over what gets rendered and hydrated first will lead to much better user experiences, especially on devices that struggle to load and run JavaScript efficiently.

I'm very glad we will finally get an answer on concurrency "mode" too! 😂

If you're excited to try these new changes, go ahead and install the alpha:

npm install react@alpha react-dom@alpha  

The React team is looking for feedback through the working group, so feel free to participate in discussions there.

I have to give a lot of credit to @swyx, who did a lot of investigation work pointing out key features from the release in this thread. Check it out!

Thanks!

