Next.js Rendering Strategies and How They Affect Core Web Vitals

When it comes to building fast and scalable web apps with Next.js, it’s important to understand how rendering works, especially with the App Router. Next.js organizes rendering around two main environments: the server and the client. On the server side, you’ll encounter three key strategies: Static Rendering, Dynamic Rendering, and Streaming. Each one comes with its own set of trade-offs and performance benefits, so knowing when to use which is crucial for delivering a great user experience.

In this post, we'll break down each strategy, what it's good for, and how it impacts your site's performance, especially Core Web Vitals. We'll also explore hybrid approaches and provide practical guidance on choosing the right strategy for your use case.

What Are Core Web Vitals?

Core Web Vitals are a set of metrics defined by Google that measure real-world user experience on websites. These metrics play a major role in search engine rankings and directly affect how users perceive the speed and smoothness of your site.

  • Largest Contentful Paint (LCP): This measures loading performance. It calculates the time taken for the largest visible content element to render. A good LCP is 2.5 seconds or less.
  • Interaction to Next Paint (INP): This measures responsiveness to user input. A good INP is 200 milliseconds or less.
  • Cumulative Layout Shift (CLS): This measures the visual stability of the page. It quantifies layout instability during load. A good CLS is 0.1 or less.
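The thresholds above can be summarized in a small helper. This is a sketch for reference only; the `rateMetric` name is made up here, but the "good" and "poor" cut-offs (2.5 s / 4 s for LCP, 200 ms / 500 ms for INP, 0.1 / 0.25 for CLS) follow Google's published ranges:

```typescript
// Google's Core Web Vitals thresholds: [good upper bound, poor lower bound].
// LCP and INP are in milliseconds; CLS is unitless.
const THRESHOLDS = {
  LCP: [2500, 4000],
  INP: [200, 500],
  CLS: [0.1, 0.25],
} as const;

type MetricName = keyof typeof THRESHOLDS;
type Rating = 'good' | 'needs-improvement' | 'poor';

// Classify a measured value the same way PageSpeed Insights does.
function rateMetric(name: MetricName, value: number): Rating {
  const [good, poor] = THRESHOLDS[name];
  if (value <= good) return 'good';
  if (value <= poor) return 'needs-improvement';
  return 'poor';
}
```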

If you want to dive deeper into Core Web Vitals and understand more about their impact on your website's performance, I recommend reading this detailed guide on New Core Web Vitals and How They Work.

Next.js Rendering Strategies and Core Web Vitals

Let's explore each rendering strategy in detail:

1. Static Rendering (Server Rendering Strategy)

Static Rendering is the default for Server Components in Next.js. With this approach, components are rendered at build time (or during revalidation), and the resulting HTML is reused for each request. This pre-rendering happens on the server, not in the user's browser. Static rendering is ideal for routes where the data is not personalized to the user, which makes it suitable for:

  • Content-focused websites: Blogs, documentation, marketing pages
  • E-commerce product listings: When product details don't change frequently
  • SEO-critical pages: When search engine visibility is a priority
  • High-traffic pages: When you want to minimize server load

How Static Rendering Affects Core Web Vitals

  • Largest Contentful Paint (LCP): Static rendering typically leads to excellent LCP scores (often under 1 second). Pre-rendered HTML can be cached and delivered instantly from CDNs, so the initial content, including the largest element, arrives very quickly, with no waiting on data fetching or client-side rendering.
  • Interaction to Next Paint (INP): Static rendering provides a good foundation for INP but doesn't guarantee optimal performance (typically 50-150 ms depending on implementation). While Server Components don't require hydration, any Client Components within the page still need JavaScript to become interactive. To achieve a very good INP score, keep the Client Components on the page minimal.
  • Cumulative Layout Shift (CLS): While static rendering delivers the complete page structure upfront which can be very beneficial for CLS, achieving excellent CLS requires additional optimization strategies:
    • Static HTML alone doesn't prevent layout shifts if resources load asynchronously
    • Image dimensions must be properly specified to reserve space before the image loads
    • Web fonts can cause text to reflow if not handled properly with font display strategies
    • Dynamically injected content (ads, embeds, lazy-loaded elements) can disrupt layout stability
    • CSS implementation significantly impacts CLS—immediate availability of styling information helps maintain visual stability
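Acting on the image-dimension point above usually means reserving space from the image's intrinsic aspect ratio before it loads (`next/image` does this for you when you pass `width` and `height`). A minimal sketch of the arithmetic, with hypothetical helper names:

```typescript
// Given an image's intrinsic dimensions and the width it will render at,
// compute the height to reserve so the layout doesn't shift on load.
function reservedHeight(
  renderedWidth: number,
  intrinsicWidth: number,
  intrinsicHeight: number
): number {
  const aspectRatio = intrinsicWidth / intrinsicHeight;
  return Math.round(renderedWidth / aspectRatio);
}

// The same idea expressed as a CSS style object for a placeholder element.
function placeholderStyle(intrinsicWidth: number, intrinsicHeight: number) {
  return { aspectRatio: `${intrinsicWidth} / ${intrinsicHeight}` };
}
```

A 1200x800 image rendered at 600px wide needs 400px of reserved height; modern CSS lets you delegate that math to the browser via `aspect-ratio`.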

Code Examples:

  1. Basic static rendering:
// app/page.tsx (Server Component - Static Rendering by default)
export default async function Page() {
  const res = await fetch('https://api.example.com/static-data');
  const data = await res.json();
  return (
    <div>
      <h1>Static Content</h1>
      <p>{data.content}</p>
    </div>
  );
}
  2. Static rendering with revalidation (ISR):
// app/dashboard/page.tsx
export default async function Dashboard() {
  // Static data that revalidates every day
  const siteStats = await fetch('https://api.example.com/site-stats', {
    next: { revalidate: 86400 } // 24 hours
  }).then(r => r.json());

  // Data that revalidates every hour
  const popularProducts = await fetch('https://api.example.com/popular-products', {
    next: { revalidate: 3600 } // 1 hour
  }).then(r => r.json());

  // Data with a cache tag for on-demand revalidation
  const featuredContent = await fetch('https://api.example.com/featured-content', {
    next: { tags: ['featured'] }
  }).then(r => r.json());

  return (
    <div className="dashboard">
      <section className="stats">
        <h2>Site Statistics</h2>
        <p>Total Users: {siteStats.totalUsers}</p>
        <p>Total Orders: {siteStats.totalOrders}</p>
      </section>

      <section className="popular">
        <h2>Popular Products</h2>
        <ul>
          {popularProducts.map(product => (
            <li key={product.id}>{product.name} - {product.sales} sold</li>
          ))}
        </ul>
      </section>

      <section className="featured">
        <h2>Featured Content</h2>
        <div>{featuredContent.html}</div>
      </section>
    </div>
  );
}
  3. Static path generation:
// app/products/[id]/page.tsx
export async function generateStaticParams() {
  const products = await fetch('https://api.example.com/products').then(r => r.json());

  return products.map((product) => ({
    id: product.id.toString(),
  }));
}

export default async function Product({ params }) {
  const product = await fetch(`https://api.example.com/products/${params.id}`).then(r => r.json());

  return (
    <div>
      <h1>{product.name}</h1>
      <p>${product.price.toFixed(2)}</p>
      <p>{product.description}</p>
    </div>
  );
}
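The time-based revalidation shown above boils down to a staleness check: serve the cached HTML, and once its age exceeds `revalidate`, regenerate it in the background while still serving the stale copy (stale-while-revalidate). A simplified model of that decision, as an illustration rather than Next.js internals:

```typescript
// Decide whether a cached entry is due for regeneration under
// ISR-style time-based revalidation. Times are in seconds.
function isStale(
  generatedAt: number,
  revalidateSeconds: number,
  now: number
): boolean {
  return now - generatedAt > revalidateSeconds;
}
```

With `revalidate: 3600`, a request arriving two hours after generation still gets the cached page instantly, but triggers a background rebuild so the next visitor sees fresh content.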

2. Dynamic Rendering (Server Rendering Strategy)

Dynamic Rendering generates HTML on the server for each request at request time. Unlike static rendering, the content is not pre-rendered or cached but freshly generated for each user. This kind of rendering works best for:

  • Personalized content: User dashboards, account pages
  • Real-time data: Stock prices, live sports scores
  • Request-specific information: Pages that use cookies, headers, or search parameters
  • Frequently changing data: Content that needs to be up-to-date on every request

How Dynamic Rendering Affects Core Web Vitals

  • Largest Contentful Paint (LCP): With dynamic rendering, the server must generate HTML for each request, and that work can't be fully cached at the CDN level, so time-to-first-byte (and therefore LCP) is usually higher than with static rendering. It is still faster than client-side rendering, because the browser receives complete HTML from the server.
  • Interaction to Next Paint (INP): The performance is similar to static rendering once the page is loaded. However, it can become slower if the dynamic content includes many Client Components.
  • Cumulative Layout Shift (CLS): Dynamic rendering can potentially introduce CLS if the data fetched at request time significantly alters the layout of the page compared to a static structure. However, if the layout is stable and the dynamic content size fits within predefined areas, the CLS can be managed effectively.

Code Examples:

  1. Explicit dynamic rendering:
// app/dashboard/page.tsx
export const dynamic = 'force-dynamic'; // Force this route to be dynamically rendered

export default async function Dashboard() {
  // This will run on every request
  const data = await fetch('https://api.example.com/dashboard-data').then(r => r.json());

  return (
    <div>
      <h1>Dashboard</h1>
      <p>Last updated: {new Date().toLocaleString()}</p>
      {/* Dashboard content */}
    </div>
  );
}
  2. Implicit dynamic rendering with cookies:
// app/profile/page.tsx
import { cookies } from 'next/headers';

export default async function Profile() {
  // Using cookies() automatically opts the route into dynamic rendering.
  // In Next.js 15, cookies() is async and must be awaited.
  const cookieStore = await cookies();
  const userId = cookieStore.get('userId')?.value;

  const user = await fetch(`https://api.example.com/users/${userId}`).then(r => r.json());

  return (
    <div>
      <h1>Welcome, {user.name}</h1>
      <p>Email: {user.email}</p>
      {/* Profile content */}
    </div>
  );
}
  3. Dynamic routes:
// app/blog/[slug]/page.tsx
export default async function BlogPost({ params }) {
  // It will run at request time for any slug not explicitly pre-rendered
  const post = await fetch(`https://api.example.com/posts/${params.slug}`).then(r => r.json());

  return (
    <article>
      <h1>{post.title}</h1>
      <div>{post.content}</div>
    </article>
  );
}

3. Streaming (Server Rendering Strategy)

Streaming allows you to progressively render UI from the server. Instead of waiting for all the data to be ready before sending any HTML, the server sends chunks of HTML as they become available. This is implemented using React's Suspense boundary.

React Suspense works by creating boundaries in your component tree that can "suspend" rendering while waiting for asynchronous operations. When a component inside a Suspense boundary throws a promise (which happens automatically with data fetching in React Server Components), React:

  • pauses rendering of that component and its children,
  • renders the fallback UI specified in the Suspense component,
  • continues rendering the parts of the page outside the boundary, and
  • resumes and replaces the fallback with the actual component once the promise resolves.
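The "throws a promise" mechanism can be sketched outside of React. In this hypothetical resource wrapper, `read()` throws the pending promise so a caller (React, in the real implementation) can catch it, show a fallback, and retry the render after the promise settles:

```typescript
type Status = 'pending' | 'resolved' | 'rejected';

// A tiny stand-in for React's data-fetching integration:
// read() throws the promise while pending ("suspends"),
// throws the error if it failed, or returns the value.
function createResource<T>(promise: Promise<T>) {
  let status: Status = 'pending';
  let result: T;
  let error: unknown;

  promise.then(
    (value) => { status = 'resolved'; result = value; },
    (err) => { status = 'rejected'; error = err; }
  );

  return {
    read(): T {
      if (status === 'pending') throw promise; // suspend
      if (status === 'rejected') throw error;
      return result;
    },
  };
}
```

Calling `read()` before the promise settles throws the promise itself, which is exactly the signal a Suspense boundary waits on before re-rendering.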

When streaming, this mechanism allows the server to send the initial HTML with fallbacks for suspended components while continuing to process suspended components in the background. The server then streams additional HTML chunks as each suspended component resolves, including instructions for the browser to seamlessly replace fallbacks with final content. It works well for:

  • Pages with mixed data requirements: Some fast, some slow data sources
  • Improving perceived performance: Show users something quickly while slower parts load
  • Complex dashboards: Different widgets have different loading times
  • Handling slow APIs: Prevent slow third-party services from blocking the entire page

How Streaming Affects Core Web Vitals

  • Largest Contentful Paint (LCP): Streaming can improve the perceived LCP. By sending the initial HTML content quickly, including potentially the largest element, the browser can render it sooner. Even if other parts of the page are still loading, the user sees the main content faster.
  • Interaction to Next Paint (INP): Streaming can contribute to a better INP. When used with React's <Suspense />, interactive elements in the faster-loading parts of the page can become interactive earlier, even while other components are still being streamed in. This allows users to engage with the page sooner.
  • Cumulative Layout Shift (CLS): Streaming can cause layout shifts as new content streams in. However, when implemented carefully, streaming should not negatively impact CLS. The initially streamed content should establish the main layout, and subsequent streamed chunks should ideally fit within this structure without causing significant reflows or layout shifts. Using placeholders and ensuring dimensions are known can help prevent CLS.

Code Examples:

  1. Basic Streaming with Suspense:
// app/dashboard/page.tsx
import { Suspense } from 'react';
import UserProfile from './components/UserProfile';
import RecentActivity from './components/RecentActivity';
import PopularPosts from './components/PopularPosts';

export default function Dashboard() {
  return (
    <div className="dashboard">
      {/* This loads quickly */}
      <h1>Dashboard</h1>

      {/* User profile loads first */}
      <Suspense fallback={<div className="skeleton-profile">Loading profile...</div>}>
        <UserProfile />
      </Suspense>

      {/* Recent activity might take longer */}
      <Suspense fallback={<div className="skeleton-activity">Loading activity...</div>}>
        <RecentActivity />
      </Suspense>

      {/* Popular posts might be the slowest */}
      <Suspense fallback={<div className="skeleton-posts">Loading popular posts...</div>}>
        <PopularPosts />
      </Suspense>
    </div>
  );
}
  2. Nested Suspense boundaries for more granular control:
// app/complex-page/page.tsx
import { Suspense } from 'react';

export default function ComplexPage() {
  return (
    <Suspense fallback={<PageSkeleton />}>
      <Header />

      <div className="content-grid">
        <div className="main-content">
          <Suspense fallback={<MainContentSkeleton />}>
            <MainContent />
          </Suspense>
        </div>

        <div className="sidebar">
          <Suspense fallback={<SidebarTopSkeleton />}>
            <SidebarTopSection />
          </Suspense>

          <Suspense fallback={<SidebarBottomSkeleton />}>
            <SidebarBottomSection />
          </Suspense>
        </div>
      </div>

      <Footer />
    </Suspense>
  );
}
  3. Using the Next.js loading.js convention:
// app/products/loading.tsx - This will automatically be used as a Suspense fallback
export default function Loading() {
  return (
    <div className="products-loading-skeleton">
      <div className="header-skeleton" />
      <div className="filters-skeleton" />
      <div className="products-grid-skeleton">
        {Array.from({ length: 12 }).map((_, i) => (
          <div key={i} className="product-card-skeleton" />
        ))}
      </div>
    </div>
  );
}

// app/products/page.tsx
export default async function ProductsPage() {
  // This component can take time to load
  // Next.js will automatically wrap it in Suspense
  // and use the loading.js as the fallback
  const products = await fetchProducts();

  return <ProductsList products={products} />;
}

4. Client Components and Client-Side Rendering

Client Components are defined using the React 'use client' directive. They are pre-rendered on the server and then hydrated on the client, enabling interactivity. This is different from pure client-side rendering (CSR), where rendering happens entirely in the browser. Next.js has moved away from traditional CSR (where the initial HTML is minimal and everything renders in the browser) as a default approach, but you can still achieve it by using dynamic imports with ssr: false.

// app/csr-example/page.tsx
'use client';

import { useState, useEffect } from 'react';
import dynamic from 'next/dynamic';

// Lazily load a component with no SSR
const ClientOnlyComponent = dynamic(
  () => import('../components/heavy-component'),
  { ssr: false, loading: () => <p>Loading...</p> }
);

export default function CSRPage() {
  const [isClient, setIsClient] = useState(false);

  useEffect(() => {
    setIsClient(true);
  }, []);

  return (
    <div>
      <h1>Client-Side Rendered Page</h1>
      {isClient ? (
        <ClientOnlyComponent />
      ) : (
        <p>Loading client component...</p>
      )}
    </div>
  );
}

Despite the shift toward server rendering, there are valid use cases for CSR:

  1. Private dashboards: Where SEO doesn't matter, and you want to reduce server load
  2. Heavy interactive applications: Like data visualization tools or complex editors
  3. Browser-only APIs: When you need access to browser-specific features like localStorage or WebGL
  4. Third-party integrations: Some third-party widgets or libraries that only work in the browser
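For the browser-only API case above, a common defensive pattern is to guard access so the same module can be imported during server rendering without crashing. A minimal sketch (the `readLocalStorage` helper name is hypothetical):

```typescript
// Safely read a value from localStorage, falling back when the code
// runs where `window` doesn't exist (server render, tests, Node).
function readLocalStorage(key: string, fallback: string): string {
  if (typeof window === 'undefined') return fallback;
  try {
    return window.localStorage.getItem(key) ?? fallback;
  } catch {
    // localStorage itself can throw, e.g. when disabled by the browser.
    return fallback;
  }
}
```

The same guard applies to geolocation, WebGL, and other browser globals: check for the environment first, and give the server render a sensible default.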

While these are valid use cases, using Client Components is generally preferable to pure CSR in Next.js. Client Components give you the best of both worlds: server-rendered HTML for the initial load (improving SEO and LCP) with client-side interactivity after hydration. Pure CSR should be reserved for specific scenarios where server rendering is impossible or counterproductive.

Client components are good for:

  • Interactive UI elements: Forms, dropdowns, modals, tabs
  • State-dependent UI: Components that change based on client state
  • Browser API access: Components that need localStorage, geolocation, etc.
  • Event-driven interactions: Click handlers, form submissions, animations
  • Real-time updates: Chat interfaces, live notifications

How Client Components Affect Core Web Vitals

  • Largest Contentful Paint (LCP): Initial HTML includes the server-rendered version of Client Components, so LCP is reasonably fast. Hydration can delay interactivity but doesn't necessarily affect LCP.
  • Interaction to Next Paint (INP): For Client Components, hydration can cause input delay during page load, and when the page is hydrated, performance depends on the efficiency of event handlers. Also, complex state management can impact responsiveness.
  • Cumulative Layout Shift (CLS): Client-side data fetching can cause layout shifts as new data arrives. Also, state changes might alter the layout unexpectedly. Using Client Components will require careful implementation to prevent shifts.

Code Examples:

  1. Basic Client Component:
// app/components/Counter.tsx
'use client';

import { useState } from 'react';

export default function Counter() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}
  2. Client Component with server data:
// app/products/page.tsx - Server Component
import ProductFilter from '../components/ProductFilter';

export default async function ProductsPage() {
  // Fetch data on the server
  const products = await fetch('https://api.example.com/products').then(r => r.json());

  // Pass server data to Client Component as props
  return <ProductFilter initialProducts={products} />;
}

Hybrid Approaches and Composition Patterns

In real-world applications, you'll often use a combination of rendering strategies to achieve the best performance. Next.js makes it easy to compose Server and Client Components together.

Server Components with Islands of Interactivity

One of the most effective patterns is to use Server Components for the majority of your UI and add Client Components only where interactivity is needed. This approach:

  1. Minimizes JavaScript sent to the client
  2. Provides excellent initial load performance
  3. Maintains good interactivity where needed
// app/products/[id]/page.tsx - Server Component
import AddToCartButton from '../../components/AddToCartButton';
import ProductReviews from '../../components/ProductReviews';
import RelatedProducts from '../../components/RelatedProducts';

export default async function ProductPage({ params }: {
  params: { id: string; }
}) {
  // Fetch product data on the server
  const product = await fetch(`https://api.example.com/products/${params.id}`).then(r => r.json());

  return (
    <div className="product-page">
      <div className="product-main">
        <h1>{product.name}</h1>
        <p className="price">${product.price.toFixed(2)}</p>
        <div className="description">{product.description}</div>

        {/* Client Component for interactivity */}
        <AddToCartButton product={product} />
      </div>

      {/* Server Component for product reviews */}
      <ProductReviews productId={params.id} />

      {/* Server Component for related products */}
      <RelatedProducts categoryId={product.categoryId} />
    </div>
  );
}

Partial Prerendering (Next.js 15)

Next.js 15 introduced Partial Prerendering, a new hybrid rendering strategy that combines static and dynamic content in a single route. This allows you to:

  1. Statically generate a shell of the page
  2. Stream in dynamic, personalized content
  3. Get the best of both static and dynamic rendering

Note: At the time of this writing, Partial Prerendering is experimental and not ready for production use.

// app/dashboard/page.tsx
import { unstable_noStore as noStore } from 'next/cache';
import StaticContent from './components/StaticContent';
import DynamicContent from './components/DynamicContent';

export default function Dashboard() {
  return (
    <div className="dashboard">
      {/* This part is statically generated */}
      <StaticContent />

      {/* This part is dynamically rendered */}
      <DynamicPart />
    </div>
  );
}

// This component and its children will be dynamically rendered
function DynamicPart() {
  // Opt out of caching for this part
  noStore();

  return <DynamicContent />;
}

Measuring Core Web Vitals in Next.js

Understanding the impact of your rendering strategy choices requires measuring Core Web Vitals in real-world conditions. Here are some approaches:

1. Vercel Analytics

If you deploy on Vercel, you can use Vercel Analytics to automatically track Core Web Vitals for your production site:

// app/layout.tsx
import { Analytics } from '@vercel/analytics/react';

export default function RootLayout({ children }: {
  children: React.ReactNode;
}) {
  return (
    <html lang="en">
      <body>
        {children}
        <Analytics />
      </body>
    </html>
  );
}

2. Web Vitals API

You can manually track Core Web Vitals using the web-vitals library:

// app/components/WebVitalsReporter.tsx
'use client';

import { useEffect } from 'react';
import { onCLS, onINP, onLCP } from 'web-vitals';

export function WebVitalsReporter() {
  useEffect(() => {
    // Report Core Web Vitals
    onCLS(metric => console.log('CLS:', metric.value));
    onINP(metric => console.log('INP:', metric.value));
    onLCP(metric => console.log('LCP:', metric.value));

    // In a real app, you would send these to your analytics service
  }, []);

  return null; // This component doesn't render anything
}
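To go beyond console.log, you would serialize each metric and ship it to an endpoint, ideally with navigator.sendBeacon so the request survives page unload. A sketch of the payload step; the `/api/vitals` endpoint and the CLS-to-integer scaling are assumptions, not part of the web-vitals API:

```typescript
// The fields we care about from a web-vitals Metric object.
interface VitalsMetric {
  name: string;
  value: number;
  id: string;
  rating?: string;
}

// Build the JSON body to POST (or sendBeacon) to an analytics endpoint.
// CLS is multiplied by 1000 so every metric can be stored as an integer.
function formatMetricPayload(metric: VitalsMetric, page: string): string {
  return JSON.stringify({
    metric: metric.name,
    value: Math.round(metric.name === 'CLS' ? metric.value * 1000 : metric.value),
    id: metric.id,
    rating: metric.rating,
    page,
  });
}

// In the browser, you would wire it up roughly like:
//   onLCP(m => navigator.sendBeacon('/api/vitals', formatMetricPayload(m, location.pathname)));
```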

3. Lighthouse and PageSpeed Insights

For development and testing, use Lighthouse (in Chrome DevTools or via its CLI) for lab measurements, and PageSpeed Insights, which combines lab data with real-user field data from the Chrome UX Report.

Making Practical Decisions: Which Rendering Strategy to Choose?

Choosing the right rendering strategy depends on your specific requirements. Here's a decision framework:

Choose Static Rendering when

  • Content is the same for all users
  • Data can be determined at build time
  • Page doesn't need frequent updates
  • SEO is critical
  • You want the best possible performance

Choose Dynamic Rendering when

  • Content is personalized for each user
  • Data must be fresh on every request
  • You need access to request-time information
  • Content changes frequently

Choose Streaming when

  • Page has a mix of fast and slow data requirements
  • You want to improve perceived performance
  • Some parts of the page depend on slow APIs
  • You want to prioritize showing critical UI first

Choose Client Components when

  • UI needs to be interactive
  • Component relies on browser APIs
  • UI changes frequently based on user input
  • You need real-time updates

Conclusion

Next.js provides a powerful set of rendering strategies that allow you to optimize for both performance and user experience. By understanding how each strategy affects Core Web Vitals, you can make informed decisions about how to build your application.

Remember that the best approach is often a hybrid one, combining different rendering strategies based on the specific requirements of each part of your application. Start with Server Components as your default, use Static Rendering where possible, and add Client Components only where interactivity is needed.

By following these principles and measuring your Core Web Vitals, you can create Next.js applications that are fast, responsive, and provide an excellent user experience.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.

You might also like

Next.js + MongoDB Connection Storming cover image

Next.js + MongoDB Connection Storming

Building a Next.js application connected to MongoDB can feel like a match made in heaven. MongoDB stores all of its data as JSON objects, which don’t require transformation into JavaScript objects like relational SQL data does. However, when deploying your application to a serverless production environment such as Vercel, it is crucial to manage your database connections properly. If you encounter errors like these, you may be experiencing Connection Storming: * MongoServerSelectionError: connect ECONNREFUSED &lt;IP_ADDRESS>:&lt;PORT> * MongoNetworkError: failed to connect to server [&lt;hostname>:&lt;port>] on first connect * MongoTimeoutError: Server selection timed out after &lt;x> ms * MongoTopologyClosedError: Topology is closed, please connect * Mongo Atlas: Connections % of configured limit has gone above 80 Connection storming occurs when your application has to mount a connection to Mongo for every serverless function or API endpoint call. Vercel executes your application’s code in a highly concurrent and isolated fashion. So, if you create new database connections on each request, your app might quickly exceed the connection limit of your database. We can leverage Vercel’s fluid compute model to keep our database connection objects warm across function invocations. Traditional serverless architecture was designed for quick, stateless web app transactions. Now, especially with the rise of LLM-oriented applications built with Next.js, interactions with applications are becoming more sequential. We just need to ensure that we assign our MongoDB connection to a global variable. Protip: Use global variables Vercel’s fluid compute model means all memory, including global constants like a MongoDB client, stays initialized between requests as long as the instance remains active. By assigning your MongoDB client to a global constant, you avoid redundant setup work and reduce the overhead of cold starts. 
This enables a more efficient approach to reusing connections for your application’s MongoDB client. The example below demonstrates how to retrieve an array of users from the users collection in MongoDB and either return them through an API request to /api/users or render them as an HTML list at the /users route. To support this, we initialize a global clientPromise variable that maintains the MongoDB connection across warm serverless executions, avoiding re-initialization on every request. ` Using this database connection in your API route code is easy: ` You can also use this database connection in your server-side rendered React components. ` In serverless environments like Vercel, managing database connections efficiently is key to avoiding connection storming. By reusing global variables and understanding the serverless execution model, you can ensure your Next.js app remains stable and performant....

Vercel & React Native - A New Era of Mobile Development? cover image

Vercel & React Native - A New Era of Mobile Development?

Vercel & React Native - A New Era of Mobile Development? Jared Palmer of Vercel recently announced an acquisition that spiked our interest. Having worked extensively with both Next.js and Vercel, as well as React Native, we were curious to see what the appointment of Fernando Rojo, the creator of Solito, as Vercel's Head of Mobile, would mean for the future of React Native and Vercel. While we can only speculate on what the future holds, we can look closer at Solito and its current integration with Vercel. Based on the information available, we can also make some educated guesses about what the future might hold for React Native and Vercel. What is Solito? Based on a recent tweet by Guillermo Rauch, one might assume that Solito allows you to build mobile apps with Next.js. While that might become a reality in the future, Jamon Holmgren, the CTO of Infinite Red, added some context to the conversation. According to Jamon, Solito is a cross-platform framework built on top of two existing technologies: - For the web, Solito leverages Next.js. - For mobile, Solito takes advantage of Expo. That means that, at the moment, you can't build mobile apps using Next.js & Solito only - you still need Expo and React Native. Even Jamon, however, admits that even the current integration of Solito with Vercel is exciting. Let's take a closer look at what Solito is according to its official website: > This library is two things: > > 1. A tiny wrapper around React Navigation and Next.js that lets you share navigation code across platforms. > > 2. A set of patterns and examples for building cross-platform apps with React Native + Next.js. We can see that Jamon was right - Solito allows you to share navigation code between Next.js and React Native and provides some patterns and components that you can use to build cross-platform apps, but it doesn't replace React Native or Expo. 
The Cross-Platformness of Solito So, we know Solito provides a way to share navigation and some patterns between Next.js and React Native. But what precisely does that entail? Cross-Platform Hooks and Components If you look at Solito's documentation, you'll see that it's not only navigation you can share between Next.js and React Native. There are a few components that wrap Next.js components and make them available in React Native: - Link - a component that wraps Next.js' Link component and allows you to navigate between screens in React Native. - TextLink - a component that also wraps Next.js' Link component but accepts text nodes as children. - MotiLink - a component that wraps Next.js' Link component and allows you to animate the link using moti, a popular animation library for React Native. - SolitoImage - a component that wraps Next.js' Image component and allows you to display images in React Native. On top of that, Solito provides a few hooks that you can use for shared routing and navigation: - useRouter() - a hook that lets you navigate between screens across platforms using URLs and Next.js Url objects. - useLink() - a hook that lets you create Link components across the two platforms. - createParam() - a function that returns the useParam() and useParams() hooks which allow you to access and update URL parameters across platforms. Shared Logic The Solito starter project is structured as a monorepo containing: - apps/next - the Next.js application. - apps/expo or apps/native - the React Native application. - packages/app - shared packages across the two applications: - features - providers - navigation The shared packages contain the shared logic and components you can use across the two platforms. For example, the features package contains the shared components organized by feature, the providers package contains the shared context providers, and the navigation package includes the shared navigation logic. 
One of the key principles of Solito is gradual adoption, meaning that if you use Solito and follow the recommended structure and patterns, you can start with a Next.js application only and eventually add a React Native application to the mix. Deployments Deploying the Next.js application built on Solito is as easy as deploying any other Next.js application. You can deploy it to Vercel like any other Next.js application, e.g., by linking your GitHub repository to Vercel and setting up automatic deployments. Deploying the React Native application built on top of Solito to Expo is a little bit more involved - you cannot directly use the Github Action recommended by Expo without some modification as Solito uses a monorepo structure. The adjustment, however, is luckily just a one-liner. You just need to add the working-directory parameter to the eas update --auto command in the Github Action. Here's what the modified part of the Expo Github Action would look like: ` What Does the Future Hold? While we can't predict the future, we can make some educated guesses about what the future might hold for Solito, React Native, Expo, and Vercel, given what we know about the current state of Solito and the recent acquisition of Fernando Rojo by Vercel. A Competitor to Expo? One question that comes to mind is whether Vercel will work towards creating a competitor to Expo. While it's too early to tell, it's not entirely out of the question. Vercel has been expanding its offering beyond Next.js and static sites, and it's not hard to imagine that it might want to provide a more integrated, frictionless solution for building mobile apps, further bridging the gap between web and mobile development. However, Expo is a mature and well-established platform, and building a mobile app toolchain from scratch is no trivial task. It would be easier for Vercel to build on top of Expo and partner with them to provide a more integrated solution for building mobile apps with Next.js. 
Furthermore, we need to consider Vercel's target audience. Most of Vercel's customers are focused on web development with Next.js, and switching to a mobile-first approach might not be in their best interest. That being said, Vercel has been expanding its offering to cater to a broader audience, and providing a more integrated solution for building mobile apps might be a step in that direction.

A Cross-Platform Framework for Mobile Apps with Next.js?

Imagine a future where you write your entire application in Next.js, using its routing, file structure, and dev tools, and still produce native mobile apps for iOS and Android. It's unlikely such functionality would be built from scratch; it would likely still rely on React Native and Expo to handle the actual native modules, build processes, and distribution. From the developer's point of view, however, it would still feel like writing Next.js.

While this idea sounds exciting, it's not likely to happen in the near future. Building a cross-platform framework that lets you build mobile apps with Next.js alone would require a lot of work and coordination between Vercel, Expo, and the React Native community. Furthermore, some conceptual differences between Next.js and React Native would need to be addressed, such as Next.js being primarily SSR-oriented while native mobile apps run on the client.

Vercel Building on Top of Solito?

One of the more likely scenarios is that Vercel will build on top of Solito to provide a more integrated solution for building mobile apps with Next.js. This could involve providing more components, hooks, and patterns for building cross-platform apps, as well as improving the deployment process for React Native applications built on top of Solito. A potential partnership between Vercel and Expo, or at least some kind of closer integration, could also be in the cards in this scenario.
While Expo already provides a robust infrastructure for building mobile apps, Vercel could provide complementary services or features that make it easier to build mobile apps on top of Solito.

Conclusion

Some news regarding Vercel and mobile development is very likely on the horizon. After all, Guillermo Rauch, the CEO of Vercel, has himself stated that Vercel will keep raising the quality bar of the mobile and web ecosystems. While it's unlikely we'll see a full-fledged mobile app framework built on top of Next.js, or a direct competitor to Expo, in the near future, it's not hard to imagine Vercel providing more tools and services for building mobile apps with Next.js. Solito is a step in that direction, and it's exciting to see what the future holds for mobile development with Vercel.


How to set up local cloud environment with LocalStack

Developers enjoy building applications with AWS due to the richness of its solutions. However, testing an AWS application during development without a dedicated AWS account can be challenging. This can slow down development and potentially lead to unnecessary costs if the AWS account isn't properly managed. This article will examine LocalStack, a development framework for developing and testing AWS applications: how it works and how to set it up.

Assumptions

This article assumes you have a basic understanding of:

- AWS: familiarity with S3, CloudFormation, and SQS.
- Command Line Interface (CLI): comfortable running commands in a terminal or command prompt.
- JavaScript and Node.js: basic knowledge of JavaScript and Node.js, as we will write some code to interact with AWS services.
- Docker concepts: understanding of Docker basics, such as images and containers, since LocalStack runs within a Docker container.

What is LocalStack?

LocalStack is a cloud service emulator that runs in a single container on your laptop or in your CI environment. With LocalStack, you can run your AWS applications or Lambdas entirely on your local machine without connecting to a remote cloud provider! Whether you are testing complex CDK applications or Terraform configurations, or just beginning to learn about AWS, LocalStack simplifies your testing and development workflow, relieving you of the complexity of testing AWS applications.

Prerequisites

Before setting up LocalStack, ensure you have the following:

1. Docker installed: LocalStack runs in a Docker container, so you need Docker installed on your machine. You can download and install Docker from here.
2. Node.js and npm: ensure you have Node.js and npm installed, as we will use a simple Node.js application to test AWS services. You can download Node.js from here.
3. Python: Python is required for installing certain CLI tools that interact with LocalStack.
Ensure you have Python 3 installed on your machine. You can download Python from here.

Installation

In this article, we will use the LocalStack CLI, which is the quickest way to get started with LocalStack. It lets you start LocalStack from your command line; under the hood, it spins up a Docker container. Alternative methods of managing the LocalStack container exist, and you can find them here.

To install the LocalStack CLI, you can use Homebrew by running the following command:

`

If you are not on macOS or don't have Homebrew installed, you can install the CLI using Python:

`

To confirm your installation was successful, run the following:

`

That should output the installed version of LocalStack. Now you can start LocalStack by running the following command:

`

This command starts LocalStack in Docker mode and, since this is your first installation, pulls the LocalStack image. You should see the below in your terminal.

After the image is downloaded successfully, the Docker instance is spun up, and LocalStack is running on your machine on port 4566.

Testing AWS Services with LocalStack

LocalStack lets you easily test AWS services during development. It supports many AWS products but has some limitations, and not all features are free.

- Community version: free access to core AWS products like S3, SQS, DynamoDB, and Lambda.
- Pro version: access to more AWS products and enhanced features.

Check the supported community and pro version resources for more details. We're using the community edition, and the screenshot below shows its supported products. To see the current products supported in the community edition, visit http://localhost:4566/_localstack/health.

This article will test AWS CloudFormation and SQS. Before we can start testing, we need to create a simple Node.js app. In your terminal, navigate to the desired folder and run the following command:

`

This command creates a package.json file at the root of the directory. Now we need to install aws-sdk.
Run the following command:

`

With that installed, we can now start testing various services.

AWS CloudFormation

AWS CloudFormation is a service that allows users to create, update, and delete resources in an AWS account. It can also automate the process of provisioning and configuring resources. We are going to use LocalStack to test creating a CloudFormation stack.

In the root of the folder, create a file called cloud-formation.js. This file will be used to create a CloudFormation stack that, in turn, creates an S3 bucket. Add the following code to the file:

`

In the above code, we import the aws-sdk package, which provides the tools necessary to interact with AWS services. Then, an instance of the AWS.CloudFormation class is created, configured with:

- region: the AWS region where requests are sent. In this case, us-east-1, the default region for LocalStack.
- endpoint: the URI to send requests to, set to http://localhost:4566 for LocalStack. You should configure this with environment variables to switch between LocalStack for development and the actual AWS endpoint for production, ensuring the same code can be used in both environments.
- credentials: the AWS credentials to sign requests with. We pass new AWS.Credentials("test", "test").

The params object defines the parameters needed to create the CloudFormation stack:

- StackName: the name of the stack. Here, we use 'test-local-stack'.
- TemplateBody: a JSON string representing the CloudFormation template. In this example, it defines a single resource, an S3 bucket named TestBucket.

The createStack method is called on the CloudFormation client with the params object. This method attempts to create the stack. If there is an error, we log it to the console; otherwise, we log the successful response.

Now, let's test the code by running the following command:

`

If we run the above command, we should see the JSON response in the terminal.
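The elided cloud-formation.js snippet might be reconstructed roughly as follows. This is a hedged sketch based on the description above, using the aws-sdk v2 callback API; the lazy require and the hypothetical LOCALSTACK_RUNNING guard are our own additions so the file can be inspected without LocalStack running.

```javascript
// cloud-formation.js - sketch of the stack-creation script described above.
// Assumes `npm install aws-sdk` (v2) and LocalStack listening on port 4566.

// CloudFormation template with a single S3 bucket resource named TestBucket.
const params = {
  StackName: "test-local-stack",
  TemplateBody: JSON.stringify({
    Resources: {
      TestBucket: {
        Type: "AWS::S3::Bucket",
      },
    },
  }),
};

function createStack() {
  // Required lazily so the params above stay inspectable without the SDK.
  const AWS = require("aws-sdk");
  const cloudformation = new AWS.CloudFormation({
    region: "us-east-1", // LocalStack's default region
    endpoint: "http://localhost:4566", // LocalStack edge endpoint
    credentials: new AWS.Credentials("test", "test"), // dummy credentials
  });
  cloudformation.createStack(params, (err, data) => {
    if (err) console.error(err);
    else console.log(data); // JSON response containing the new StackId
  });
}

// Hypothetical guard: only talk to LocalStack when it is actually up.
if (process.env.LOCALSTACK_RUNNING) createStack();
```

In production, you would swap the endpoint via an environment variable rather than hard-coding the LocalStack URL, as noted above.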
`

AWS SQS

The process for testing SQS with LocalStack using the aws-sdk follows the same pattern as above, except that we will introduce another CLI package, awslocal. awslocal is a thin wrapper around, and a substitute for, the standard aws command, enabling you to run AWS CLI commands against the LocalStack environment without specifying the --endpoint-url parameter or a profile.

To install awslocal, run the following command in your terminal:

`

Next, let's create an SQS queue using the following command:

`

This will create a queue named test-queue and return a queue URL like the one below:

`

Now, in our directory, let's create an sqs.js file and paste the following code into it:

`

In the above code, an instance of the AWS.SQS class is created, configured with the same parameters as the CloudFormation client. We also create a params object with the properties required to send an SQS message:

- QueueUrl: the URL of the Amazon SQS queue to which a message is sent. In our case, it is the URL we got when we created the local SQS queue. Make sure to manage this in environment variables so you can switch between LocalStack for development and the actual AWS queue URL for production, ensuring the same code can be used in both environments.
- MessageBody: the message to send.

We call the sendMessage method, passing the params object and a callback that handles the error and data, respectively. Let's run the code using the following command:

`

We should get a JSON object in the terminal like the following:

`

To test whether we can receive the SQS message we sent, let's create an sqs-receive.js file.
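The elided sqs.js code, and its sqs-receive.js counterpart, might look roughly like the sketch below. It is a hedged reconstruction from the description above: the queue URL follows LocalStack's usual format but should be replaced with the one awslocal returned, the message body is illustrative, and the LOCALSTACK_RUNNING guard plus lazy require are our own additions.

```javascript
// Sketch of sending to and receiving from the LocalStack queue created above.
// Assumes `npm install aws-sdk` (v2) and LocalStack listening on port 4566.
const QUEUE_URL = "http://localhost:4566/000000000000/test-queue"; // illustrative

function makeSqsClient() {
  // Required lazily so the constants stay inspectable without the SDK.
  const AWS = require("aws-sdk");
  return new AWS.SQS({
    region: "us-east-1",
    endpoint: "http://localhost:4566",
    credentials: new AWS.Credentials("test", "test"), // dummy credentials
  });
}

const sendParams = {
  QueueUrl: QUEUE_URL,
  MessageBody: "Hello from LocalStack!", // illustrative message
};

function sendMessage() {
  makeSqsClient().sendMessage(sendParams, (err, data) => {
    if (err) console.error(err);
    else console.log(data); // includes the MessageId
  });
}

function receiveMessage() {
  makeSqsClient().receiveMessage({ QueueUrl: QUEUE_URL }, (err, data) => {
    if (err) console.error(err);
    else console.log(data.Messages); // should contain the message sent above
  });
}

// Hypothetical guard: only talk to LocalStack when it is actually up.
if (process.env.LOCALSTACK_RUNNING) sendMessage();
```

Splitting sendMessage and receiveMessage into sqs.js and sqs-receive.js, as the article does, works the same way; only the file layout differs.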
Inside the file, we can copy over the AWS.SQS instance created earlier and add the following code:

`

Run the code using the following command:

`

We should receive a JSON object containing the message we sent previously:

`

When you are done testing, you can shut down LocalStack by running the following command:

`

Conclusion

In this article, we looked at how to set up a local cloud environment using LocalStack, a powerful tool for developing and testing AWS applications locally. We walked through the installation process of LocalStack and demonstrated how to test AWS services using the AWS SDK, including CloudFormation and SQS.

Setting up LocalStack allows you to simulate various AWS services on your local machine, which helps streamline development workflows and improve productivity. Whether you are testing simple configurations or complex deployments, LocalStack provides the environment to ensure your applications work as expected before moving to production. Using LocalStack, you can confidently develop and test your AWS applications without an active AWS account, making it an invaluable tool for developers looking to optimize their development process.


Implementing Dynamic Types in Docusign Extension Apps

In our previous blog post about Docusign Extension Apps, Advanced Authentication and Onboarding Workflows with Docusign Extension Apps, we touched on how you can extend the OAuth 2 flow to build a more powerful onboarding flow for your Extension Apps. In this blog post, we will continue explaining more advanced patterns in developing Extension Apps. For that reason, we assume at least basic familiarity with how Extension Apps work, and ideally some experience developing them.

To give a brief recap: Docusign Extension Apps are a powerful way to embed custom logic into Docusign agreement workflows. These apps are lightweight services, typically cloud-hosted, that integrate at specific workflow extension points to perform custom actions, such as data validation, participant input collection, or interaction with third-party services.

Each Extension App is configured using a manifest file. This manifest defines metadata such as the app's author, support links, and the list of extension points it uses (these are the locations in the workflow where your app's logic will be executed). The extension points relevant to this blog post are GetTypeNames and GetTypeDefinitions. Docusign uses these to retrieve the types supported by the Extension App and their definitions, and to show them in the Maestro UI.

In most apps, these types are static and rarely change. However, they don't have to be. They can also be dynamic, changing based on certain configurations in the target system the Extension App integrates with, or based on the user role assigned to the Maestro administrator in the target system.

Static vs. Dynamic Types

To explain the difference between static and dynamic types, we'll use the example from our previous blog post, where we integrated with an imaginary task management system called TaskVibe.
In that example, our Extension App enabled agreement workflows to communicate with TaskVibe, allowing tasks to be read, created, and updated. A first approach to implementing the GetTypeNames and GetTypeDefinitions endpoints for the TaskVibe Extension App might look like the following. The GetTypeNames endpoint returns a single record named task:

`

Given the type name task, the GetTypeDefinitions endpoint would return the following definition for that type:

`

As noted in the Docusign documentation, this endpoint must return a Concerto schema representing the type. For clarity, we've omitted most of the Concerto-specific properties. The above declaration states that we have a task type, and that this type has properties corresponding to task fields in TaskVibe, such as record ID, title, description, assignee, and so on.

The type definition and its properties, as described above, are static; they never change. A TaskVibe task will always have the same properties, and these are essentially set in stone.

Now, imagine a scenario where TaskVibe supports custom properties that are also project-dependent. One project in TaskVibe might follow a typical agile workflow with sprints, and the project manager might want a "Sprint" field in every task within that project. Another project might use a Kanban workflow, where the project manager wants a status field with values like "Backlog," "ToDo," and so on.

With static types, we would need to return every possible field from every project as part of the GetTypeDefinitions response, and this introduces new challenges. For example, we might be dealing with hundreds of custom field types, and showing them all in the Maestro UI might overwhelm the Maestro administrator. Or we might be returning fields that are simply not usable by the administrator because they relate to projects the administrator doesn't have access to in TaskVibe. With dynamic types, however, we can support this level of customization.
Implementing Dynamic Types

When Docusign sends a request to the GetTypeNames endpoint and the types are dynamic, the Extension App has a bit more work to do than before. As mentioned earlier, we can no longer return a generic task type. Instead, we need to look into each of the TaskVibe projects the user has access to and return the tasks as they are represented in each project, with all their custom fields. (Determining access can usually be done by querying a user information endpoint on the target system, using the same OAuth 2 token used for other calls.)

Once we find the task definitions in TaskVibe, we return them in the response of GetTypeNames, where each type corresponds to a task for a given project. This is a big difference from static types, where we would only return a single, generic task. For example:

`

The key point here is that we are now returning one task type per TaskVibe project. You can think of this as having a separate class for each type of task, in object-oriented lingo. The type name can be any string you choose, but it needs to be unique in the list, and it needs to contain the minimum information necessary to distinguish it from the other task definitions in the list. In our case, we've decided to form the ID by concatenating the string "task_" with the ID of the project on TaskVibe.

The implementation of the GetTypeDefinitions endpoint needs to:

1. Extract the project ID from the requested type name.
2. Using the project ID, retrieve the task definition from TaskVibe for that project. This definition specifies which fields are present on the project's tasks, including all custom fields.
3. Once the fields are retrieved, map them to the properties of the Concerto schema.

The resulting JSON could look like this (again, many of the Concerto properties have been omitted for clarity):

`

Now, type definitions are fully dynamic and project-dependent.
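The naming scheme just described can be sketched in a few lines. The project data below is hypothetical (a real app would fetch the user's projects from TaskVibe's API and build full Concerto definitions); only the "task_" prefix convention comes from the text above.

```javascript
// One type per TaskVibe project, named "task_<projectId>" as described above.
// The project list is illustrative stand-in data.
const projects = [
  { id: "101", name: "Agile Board" },
  { id: "202", name: "Kanban Board" },
];

// GetTypeNames: map each project the user can access to a unique type name.
function getTypeNames(projects) {
  return projects.map((p) => ({
    typeName: `task_${p.id}`,
    label: `Task (${p.name})`, // hypothetical display label
  }));
}

// GetTypeDefinitions: recover the project ID from a requested type name,
// so the task definition can then be fetched from TaskVibe for that project.
function projectIdFromTypeName(typeName) {
  const prefix = "task_";
  if (!typeName.startsWith(prefix)) {
    throw new Error(`Unexpected type name: ${typeName}`);
  }
  return typeName.slice(prefix.length);
}
```

Keeping the mapping in one place like this ensures the two endpoints stay in sync: whatever GetTypeNames emits, GetTypeDefinitions can decode.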
Caching of Type Definitions on Docusign

Docusign maintains a cache of type definitions after an initial connection. This means that changes made to your integration (particularly when using dynamic types) might not be immediately visible in the Maestro UI. To ensure users see the latest data, it's useful to inform them that they may need to refresh their Docusign connection in the App Center UI if new fields are added to their integrated system (like TaskVibe). For example, a newly added custom field on a TaskVibe project wouldn't be reflected until this refresh occurs.

Conclusion

In this blog post, we've explored how to leverage dynamic types within Docusign Extension Apps to create more flexible integrations with external systems. While static types offer simplicity, they can be constraining when working with external systems that offer a high level of customization. We hope this blog post gives you some ideas on how to tackle similar problems in your own Extension Apps.
