
Building a Multi-Response Streaming API with Node.js, Express, and React

Introduction

As web applications become increasingly complex and data-driven, efficient data transfer becomes critically important. A streaming API that can send multiple responses to a single request can be a powerful tool for handling large amounts of data or for delivering real-time updates. In this article, we will guide you through the process of creating such an API.

We will use video streaming as an illustrative example. With their large file sizes and the need for flexible, on-demand delivery, videos present a fitting scenario for showcasing the power of multi-response streaming APIs. The backend will be built with Node.js and Express, utilizing HTTP range requests to facilitate efficient data delivery in chunks.

Next, we'll build a React front-end to interact with our streaming API. This front-end will handle both the display of the streamed video content and its download, offering users real-time progress updates.

By the end of this walkthrough, you will have a working example of a multi-response streaming API, and you will be able to apply the principles learned to a wide array of use cases beyond video streaming.

Let's jump right into it!

Hands-On

Implementing the Streaming API in Express

In this section, we will dive into the server-side implementation, specifically our Node.js and Express application. We'll be implementing an API endpoint to deliver video content in a streaming fashion.

Assuming you have already set up your Express server with TypeScript, we first need to define our video-serving route. We'll create a GET endpoint that, when hit, will stream a video file back to the client.

Please make sure to install cors for handling cross-origin requests, dotenv for loading environment variables, and throttle for controlling the rate of data transfer. You can install these with the following command:

yarn add cors dotenv throttle @types/cors @types/dotenv @types/throttle

With the dependencies installed, here is the Express server with the video streaming endpoint:

import cors from 'cors';
import 'dotenv/config';
import express, { Request, Response } from 'express';
import fs from 'fs';
import Throttle from 'throttle';

const app = express();
const port = 8000;

app.use(cors());

app.get('/video', (req: Request, res: Response) => {
  // Video by Zlatin Georgiev from Pexels: https://www.pexels.com/video/15708449/
  // For testing purposes - add the video to your `static` folder
  const path = 'src/static/pexels-zlatin-georgiev-15708449 (2160p).mp4';
  const stat = fs.statSync(path);
  const fileSize = stat.size;
  const range = req.headers.range;
  if (range) {
    const parts = range.replace(/bytes=/, '').split('-');
    const start = parseInt(parts[0], 10);
    const end = parts[1] ? parseInt(parts[1], 10) : fileSize - 1;

    const chunksize = end - start + 1;
    const file = fs.createReadStream(path, { start, end });
    const head = {
      'Content-Range': `bytes ${start}-${end}/${fileSize}`,
      'Accept-Ranges': 'bytes',
      'Content-Length': chunksize,
      'Content-Type': 'video/mp4',
    };

    res.writeHead(206, head);
    file.pipe(res);
  } else {
    const head = {
      'Content-Length': fileSize,
      'Content-Type': 'video/mp4',
    };

    res.writeHead(200, head);
    fs.createReadStream(path).pipe(res);
  }
});

app.listen(port, () => {
  console.log(`Server listening at ${process.env.SERVER_URL}:${port}`);
});

In the code snippet above, we are implementing a basic video streaming server that responds to HTTP range requests. Here's a brief overview of the key parts:

  1. File and Range Setup: We start by determining the path to the video file and getting the file size. We also grab the range header from the request, which contains the range of bytes the client is requesting.
  2. Range Requests Handling: If a range is provided, we extract the start and end bytes from the range header, then create a read stream for that specific range. This allows us to stream a portion of the file rather than the entire thing.
  3. Response Headers: We then set up our response headers. In the case of a range request, we send back a '206 Partial Content' status along with information about the byte range and total file size. For non-range requests, we simply send back the total file size and the file type.
  4. Data Streaming: Finally, we pipe the read stream directly to the response. This step is where the video data actually gets sent back to the client. The use of pipe() here automatically handles backpressure, ensuring that data isn't read faster than it can be sent to the client.

With this setup in place, our streaming server is capable of efficiently delivering large video files to the client in small chunks, providing a smoother user experience.
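
To sanity-check the range handling, you can issue a request with a Range header yourself. The snippet below is a minimal, hypothetical test script (not part of the application code), assuming the server above is running locally on port 8000 and you are on Node 18+, which ships a global fetch:

// rangeCheck.ts - hypothetical standalone script
(async () => {
  const res = await fetch('http://localhost:8000/video', {
    headers: { Range: 'bytes=0-1023' }, // request only the first 1024 bytes
  });

  console.log(res.status); // expected: 206 (Partial Content)
  console.log(res.headers.get('content-range')); // e.g. "bytes 0-1023/<total file size>"
  console.log(res.headers.get('content-length')); // expected: "1024"
})();

If the endpoint responds with 200 and the full file instead, the range branch is not being hit and the request headers are worth double-checking.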

Implementing the Download API in Express

Now, let's add another endpoint to our Express application, which will provide more granular control over the data transfer process. We'll set up a GET endpoint for '/download', and within this endpoint, we'll handle streaming the video file to the client for download.

app.get('/download', (req: Request, res: Response) => {
  // Again, for testing purposes - add the video to your `static` folder
  const path = 'src/static/pexels-zlatin-georgiev-15708449 (2160p).mp4';
  const stat = fs.statSync(path);
  const fileSize = stat.size;

  res.writeHead(200, {
    'Content-Type': 'video/mp4',
    'Content-Disposition': 'attachment; filename=video.mp4',
    'Content-Length': fileSize,
  });

  const readStream = fs.createReadStream(path);
  const throttle = new Throttle(1024 * 1024 * 5); // throttle to 5MB/sec - simulate lower speed

  readStream.pipe(throttle);

  throttle.on('data', (chunk) => {
    console.log(`Sent ${chunk.length} bytes to client.`);
    res.write(chunk);
  });

  throttle.on('end', () => {
    console.log('File fully sent to client.');
    res.end();
  });
});

This endpoint has a similar setup to the video streaming endpoint, but it comes with a few key differences:

  1. Response Headers: Here, we include a 'Content-Disposition' header with an 'attachment' directive. This header tells the browser to present the file as a downloadable file named 'video.mp4'.
  2. Throttling: We use the 'throttle' package to limit the data transfer rate. Throttling can be useful for simulating lower-speed connections during testing, or for preventing your server from getting overwhelmed by data transfer operations.
  3. Data Writing: Instead of directly piping the read stream to the response, we attach 'data' and 'end' event listeners to the throttled stream. On the 'data' event, we manually write each chunk of data to the response, and on the 'end' event, we close the response.

This implementation provides a more hands-on way to control the data transfer process. It allows for the addition of custom logic to handle events like pausing and resuming the data transfer, adding custom transformations to the data stream, or handling errors during transfer.
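
As a sketch of what that custom logic could look like, the variant below wires the same read stream, throttle, and response together with Node's stream.pipeline(), which forwards an error from any stage (a failed read, a client that disconnects mid-download) to a single callback. The '/download-safe' route name is illustrative and not part of the original application:

import { pipeline } from 'stream';

app.get('/download-safe', (req: Request, res: Response) => {
  const path = 'src/static/pexels-zlatin-georgiev-15708449 (2160p).mp4';
  const stat = fs.statSync(path);

  res.writeHead(200, {
    'Content-Type': 'video/mp4',
    'Content-Disposition': 'attachment; filename=video.mp4',
    'Content-Length': stat.size,
  });

  // pipeline() handles backpressure like pipe(), but also cleans up every
  // stream in the chain and reports the first error to the callback.
  pipeline(
    fs.createReadStream(path),
    new Throttle(1024 * 1024 * 5), // same 5MB/sec throttle as above
    res,
    (err) => {
      if (err) {
        console.error('Download failed:', err.message);
        res.destroy();
      }
    }
  );
});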

Utilizing the APIs: A React Application

Now that we have a server-side setup for video streaming and downloading, let's put these APIs into action within a client-side React application. Note that we'll be using Tailwind CSS for quick, utility-based styling in our components.

Our React application will consist of a video player that uses the video streaming API, a download button to trigger the download API, and a progress bar to show the real-time download progress.

First, let's define the Video Player component that will play the streamed video:

import React from 'react';

const VideoPlayer: React.FC = () => {
  return (
    <div className="mb-3">
      <video controls width="860">
        <source src="http://localhost:8000/video" type="video/mp4" />
        Your browser does not support the video tag.
      </video>
    </div>
  );
};

export default VideoPlayer;

In the above VideoPlayer component, we're using an HTML5 video tag to handle video playback. The src attribute of the source tag is set to the video endpoint of our Express server. When this component is rendered, it sends a request to our video API and starts streaming the video in response to the range requests that the browser automatically makes.

Next, let's create the DownloadButton component that will handle the video download and display the download progress:

import React, { useState } from 'react';

const DownloadButton: React.FC = () => {
  const [downloadProgress, setDownloadProgress] = useState(0);

  const handleDownload = async () => {
    try {
      const response = await fetch('http://localhost:8000/download');
      const reader = response.body?.getReader();

      if (!reader) {
        return;
      }

      const contentLength = +(response.headers?.get('Content-Length') || 0);
      let receivedLength = 0;
      const chunks: Uint8Array[] = [];

      while (true) {
        const { done, value } = await reader.read();

        if (done) {
          console.log('Download complete.');

          const blob = new Blob(chunks, { type: 'video/mp4' });
          const url = window.URL.createObjectURL(blob);
          const a = document.createElement('a');
          a.style.display = 'none';
          a.href = url;
          a.download = 'video.mp4';
          document.body.appendChild(a);
          a.click();
          a.remove(); // clean up the temporary anchor element
          window.URL.revokeObjectURL(url);

          setDownloadProgress(100);
          break;
        }
        chunks.push(value);
        receivedLength += value.length;
        const progress = (receivedLength / contentLength) * 100;
        setDownloadProgress(progress);
      }
    } catch (err) {
      console.error(err);
    }
  };

  return (
    <div className="flex gap-3 items-center">
      <button
        onClick={handleDownload}
        className="py-2 px-3 bg-indigo-500 text-white text-sm font-semibold rounded-md shadow focus:outline-none"
      >
        Download Video
      </button>
      {downloadProgress > 0 && downloadProgress < 100 && (
      <p className="flex gap-3 items-center">
        Download progress: <progress value={downloadProgress} max={100} />
      </p>
      )}
      {downloadProgress === 100 && <p>Download complete!</p>}
    </div>
  );
};

export default DownloadButton;

In this DownloadButton component, when the download button is clicked, it sends a fetch request to our download API. A while loop then reads chunks of data from the response as they arrive, updating the download progress until the download is complete. This is a more controlled way of consuming a streamed response: instead of letting the browser handle the data directly, we process each chunk ourselves and assemble the result into a downloadable file.
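
One caveat worth noting: the progress calculation divides by contentLength, so if the server ever omits the Content-Length header (for example, when it responds with chunked transfer encoding), the computed progress becomes Infinity. A small, illustrative guard for the loop body could look like this:

// Illustrative replacement for the progress update inside the while loop
chunks.push(value);
receivedLength += value.length;

const progress =
  contentLength > 0
    ? Math.min((receivedLength / contentLength) * 100, 100)
    : 0; // size unknown - fall back to 0 or an indeterminate progress state

setDownloadProgress(progress);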

Bringing It All Together

Let's now integrate these components into our main application component.

import React from 'react';
import VideoPlayer from './components/VideoPlayer';
import DownloadButton from './components/DownloadButton';

function App() {
  return (
    <div className="flex flex-col items-center justify-center min-h-screen bg-gray-100 py-2">
      <h1 className="text-3xl font-bold mb-5">My Video Player</h1>
      <VideoPlayer />
      <DownloadButton />
    </div>
  );
}

export default App;

In this simple App component, we've included our VideoPlayer and DownloadButton components. It places the video player and download button on the screen in a neat, centered layout thanks to Tailwind CSS.

Here is a summary of how our system operates:

  • The video player makes a request to our Express server as soon as it is rendered in the React application. Our server handles this request, reading the video file and sending back the appropriate chunks as per the range requested by the browser. This results in the video being streamed in our player.
  • When the download button is clicked, a fetch request is sent to our server's download API. This time, the server reads the file, but instead of just piping the data to the response, it controls the data sending process. It sends chunks of data and also logs the sent chunks for monitoring purposes. The React application collects these chunks and concatenates them, displaying the download progress in real-time. When all chunks are received, it compiles them into a Blob and triggers a download in the browser.

This setup allows us to build a full-featured video streaming and downloading application with fine control over the data transmission process. To see this system in action, you can check out this video demo.

Conclusion

While the focus of this article was on video streaming and downloading, the principles we discussed here extend beyond just media files. The pattern of responding to HTTP range requests is common in various data-heavy applications, and understanding it can be a useful tool in your web development arsenal.

Finally, remember that the code shown in this article is just a simple example to demonstrate the concepts. In a real-world application, you would want to add proper error handling, validation, and possibly some form of access control depending on your use case.
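
As one concrete example of such validation, the streaming endpoint currently trusts the client-supplied range; a minimal, illustrative check (placed in the /video handler after parsing start and end, before creating the read stream) could reject out-of-bounds requests with a 416 status:

// Hypothetical addition to the /video route
if (Number.isNaN(start) || start > end || start >= fileSize || end >= fileSize) {
  res.writeHead(416, { 'Content-Range': `bytes */${fileSize}` });
  res.end();
  return;
}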

I hope this article helps you in your journey as a developer. Building something yourself is the best way to learn, so don't hesitate to get your hands dirty and start coding!
