
Introducing the New Shopify and Next.js 13 Starter Kit


[Cover image: Shopify and Next.js 13 starter kit]

Intro

We are delighted to announce our brand new Shopify and Next.js 13 starter kit, the latest addition to our Starter.dev kits collection. This starter kit helps you and your team quickly bootstrap custom Shopify storefronts powered by the Next.js 13 App Router, and it comes with almost every feature you need for production already set up and running!

Why did we build it?

At This Dot Labs, we work with a variety of clients, from big corporations to startups. Recently, we realized that many of our clients use Shopify and want to build custom stores using Next.js to benefit from its features. Additionally, Hydrogen, Shopify's storefront framework, has been limited to Remix since Shopify acquired Remix. We had a hard time filling the gap Hydrogen left, re-implementing its features from scratch in our client projects. From there, we realized that we needed to build a starter kit to:

  • Help our teams build custom storefronts with Next.js quickly
  • Reduce costs for our clients

There were already plenty of Next.js Shopify starters out there, but they were very basic and lacked many of the features our teams needed, unlike the official Hydrogen starter example, which is complete. So, we decided to port the Hydrogen starter from Remix/Hydrogen to Next.js, taking advantage of the App Router and React Server Components.
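
To give a feel for what that looks like in practice, here is a minimal sketch of a Server Component fetching products from the Shopify Storefront API inside the App Router. The environment variable names, API version, and the `shopifyFetch` helper are illustrative assumptions, not the kit's actual exports:

```tsx
// app/products/page.tsx — a minimal, hypothetical sketch of fetching
// Shopify products in a React Server Component.
const STOREFRONT_URL = `https://${process.env.SHOPIFY_STORE_DOMAIN}/api/2023-07/graphql.json`;

async function shopifyFetch<T>(query: string): Promise<T> {
  const res = await fetch(STOREFRONT_URL, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Shopify-Storefront-Access-Token':
        process.env.SHOPIFY_STOREFRONT_TOKEN ?? '',
    },
    body: JSON.stringify({ query }),
    // Next.js 13 extends fetch with caching options; revalidate once a minute.
    next: { revalidate: 60 },
  });
  const { data } = await res.json();
  return data as T;
}

export default async function ProductsPage() {
  // Data fetching happens on the server, so no client bundle is needed here.
  const data = await shopifyFetch<{
    products: { nodes: { id: string; title: string }[] };
  }>(`{ products(first: 12) { nodes { id title } } }`);

  return (
    <ul>
      {data.products.nodes.map((product) => (
        <li key={product.id}>{product.title}</li>
      ))}
    </ul>
  );
}
```

Because the component itself is async, there is no `getServerSideProps` indirection: the query runs on the server and only the rendered HTML ships to the client.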

Tech Stack

In this project, we used:

  • Next.js 13 with the App Router and React Server Components
  • TypeScript
  • The Shopify Storefront API
  • Tailwind CSS
  • Zustand for state management
  • Storybook

Why Shopify

Shopify is a great e-commerce platform with a long track record in this field, having operated since 2006. Today, over 4.12 million e-commerce sites are built with Shopify (source: BuiltWith.com). Shopify is used by online sellers in over 175 countries around the world, with an estimated 63% of Shopify stores based in the US.

Why Next.js 13

As mentioned above, many of our clients already use Next.js, and it is a very popular framework in the market thanks to its flexibility, scalability, and diverse collection of features.

Features

Since this starter kit is based on the official Shopify Hydrogen template, it has all of its features as well, including:

  • Light/dark theme support
  • An authentication system with login, register, forgot password, and account pages
  • Responsive support for both mobile and desktop screens
  • Built-in infinite scroll pagination, built with Server Components
  • State management using Zustand (see the sketch after this list)
  • A variety of custom fonts
  • All components built with Tailwind CSS
  • Pre-configured static analysis tools, including TypeScript, ESLint, and Prettier, plus many useful extensions
  • Storybook for design system consistency
  • GitHub pull request and release templates
  • Shopify analytics integrated with the kit
  • Great performance and SEO scores
  • And finally, incomparable performance, verified with both Lighthouse and PerfBuddy
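
As one example of the above, here is a minimal sketch of what a Zustand-based cart store can look like. The state shape and names below are illustrative assumptions, not the kit's actual implementation:

```ts
// store/cart.ts — a minimal, hypothetical Zustand cart store.
import { create } from 'zustand';

type CartLine = { merchandiseId: string; quantity: number };

type CartState = {
  lines: CartLine[];
  addLine: (merchandiseId: string, quantity?: number) => void;
  clear: () => void;
};

export const useCart = create<CartState>((set) => ({
  lines: [],
  // Add a product variant to the cart, merging quantities when the
  // same variant is added twice.
  addLine: (merchandiseId, quantity = 1) =>
    set((state) => {
      const existing = state.lines.find((l) => l.merchandiseId === merchandiseId);
      if (existing) {
        return {
          lines: state.lines.map((l) =>
            l.merchandiseId === merchandiseId
              ? { ...l, quantity: l.quantity + quantity }
              : l
          ),
        };
      }
      return { lines: [...state.lines, { merchandiseId, quantity }] };
    }),
  clear: () => set({ lines: [] }),
}));
```

Any client component can then read and update the cart with `const { lines, addLine } = useCart();`, with no provider wrapper required, which is a big part of Zustand's appeal.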

Performance

This kit runs at top speed! It earns amazing performance, SEO, and accessibility scores.

We tested it using PerfBuddy, and the results are incredible on both desktop and mobile. Here is the full report.

[Screenshot: PerfBuddy performance report]

Note: PerfBuddy is an incredible free tool to analyze the performance of your web apps accurately. Check it out!

Also, the results from Lighthouse are pretty fascinating:

[Screenshot: Lighthouse scores]

When do I use this kit?

This starter kit is great for spinning up new custom storefronts rapidly, especially for startups looking to save time and money. It can also be used to migrate an existing Next.js storefront from the Pages Router to the new App Router directory, as it contains all of the examples your team will need to integrate Shopify with Next.js 13.
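
To illustrate the shape of that migration, here is a rough before-and-after sketch. The route, the `getProduct` helper, and the query are hypothetical simplifications, not code taken from the kit:

```tsx
// Before (Pages Router): pages/products/[handle].tsx fetched data in
// getServerSideProps and passed it to the page component as props.

// After (App Router): app/products/[handle]/page.tsx — the page itself
// is an async Server Component, so data fetching moves inline.
export default async function ProductPage({
  params,
}: {
  params: { handle: string };
}) {
  const product = await getProduct(params.handle);
  return <h1>{product.title}</h1>;
}

// Hypothetical helper wrapping a Storefront API call, along the lines
// of the shopifyFetch sketch earlier in this post.
async function getProduct(handle: string): Promise<{ title: string }> {
  const res = await fetch(
    `https://${process.env.SHOPIFY_STORE_DOMAIN}/api/2023-07/graphql.json`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Shopify-Storefront-Access-Token':
          process.env.SHOPIFY_STOREFRONT_TOKEN ?? '',
      },
      body: JSON.stringify({
        query: `{ product(handle: "${handle}") { title } }`,
      }),
    }
  );
  const { data } = await res.json();
  return data.product;
}
```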

Conclusion

We are very excited to share this starter kit with you, and we hope you find it useful for your projects. We look forward to hearing your feedback and suggestions for improving it and adding more features. We are also planning to add more starter kits to our Starter.dev collection, so stay tuned for more updates!

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines and keeping strategic digital initiatives on course. Check out our case studies and the clients who trust us with their engineering.

You might also like


How to build an AI assistant with OpenAI, Vercel AI SDK, and Ollama with Next.js



Keeping Costs in Check When Hosting Next.js on Vercel



Redux Toolkit 2.0 release



Incremental Hydration in Angular

