
AUTHOR

William Mimura

Software Engineer


How to create and use custom GraphQL Scalars

Enter the world of custom GraphQL scalars. These data types go beyond the conventional, offering the power and flexibility to tailor your schema to precisely match your application's unique needs....


Efficiently Extract Object References in Shopify Storefront GraphQL API

Dive into the intricacies of Shopify's Storefront API with our blog post, 'Efficiently Extract Object References in Shopify Storefront GraphQL API'. Discover a hands-on guide to extracting complex data types, like images, from metafields and metaobjects....


Ensuring Accurate Workflow Status in GitHub for Enhanced Visibility

Master the nuances of GitHub workflows with our latest blog post. Discover key strategies to ensure your workflows accurately reflect the true status of tests and tasks, preventing misleading green checks....


TypeScript Integration with .env Variables

Learn to integrate .env variables with TypeScript effectively. Our guide covers creating an environment.d.ts file, a simple solution for improved type-checking and development clarity in TypeScript projects....


Leveraging GraphQL Scalars to Enhance Your Schema

Explore the transformative power of GraphQL scalars with the graphql-scalars library, designed to extend the default scalar types in GraphQL schemas....


Building a Multi-Response Streaming API with Node.js, Express, and React

Introduction

As web applications become increasingly complex and data-driven, efficient and effective data transfer methods become critically important. A streaming API that can send multiple responses to a single request is a powerful tool for handling large amounts of data or for delivering real-time updates. In this article, we will guide you through the process of creating such an API, using video streaming as an illustrative example. With their large file sizes and the need for flexible, on-demand delivery, videos present a fitting scenario for showcasing the power of multi-response streaming APIs.

The backend will be built with Node.js and Express, utilizing HTTP range requests to facilitate efficient data delivery in chunks. Next, we'll build a React front-end to interact with our streaming API. This front-end will handle both the display of the streamed video content and its download, offering users real-time progress updates. By the end of this walkthrough, you will have a working example of a multi-response streaming API, and you will be able to apply the principles learned to a wide array of use cases beyond video streaming. Let's jump right into it!

Hands-On: Implementing the Streaming API in Express

In this section, we will dive into the server-side implementation, specifically our Node.js and Express application, and implement an API endpoint that delivers video content in a streaming fashion. Assuming you have already set up your Express server with TypeScript, we first need to define our video-serving route: a GET endpoint that, when hit, streams a video file back to the client. Please make sure to install cors for handling cross-origin requests, dotenv for loading environment variables, and throttle for controlling the rate of data transfer; all three are available from npm.

The streaming endpoint (sketched below, together with the download endpoint) implements a basic video streaming server that responds to HTTP range requests. Here's a brief overview of the key parts:

1. File and Range Setup: We start by determining the path to the video file and getting the file size. We also grab the range header from the request, which contains the range of bytes the client is requesting.
2. Range Request Handling: If a range is provided, we extract the start and end bytes from the range header, then create a read stream for that specific range. This allows us to stream a portion of the file rather than the entire thing.
3. Response Headers: We then set up our response headers. In the case of a range request, we send back a '206 Partial Content' status along with information about the byte range and total file size. For non-range requests, we simply send back the total file size and the file type.
4. Data Streaming: Finally, we pipe the read stream directly to the response. This step is where the video data actually gets sent back to the client. The use of pipe() here automatically handles backpressure, ensuring that data isn't read faster than it can be sent to the client.

With this setup in place, our streaming server is capable of efficiently delivering large video files to the client in small chunks, providing a smoother user experience.

Implementing the Download API in Express

Now, let's add another endpoint to our Express application, one that provides more granular control over the data transfer process. We'll set up a GET endpoint for '/download', and within this endpoint, we'll handle streaming the video file to the client for download.
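The original code listings are not reproduced on this page, so here is a minimal sketch, in TypeScript, of what the two Express routes described in this walkthrough might look like. The file location ('assets/video.mp4'), the port fallback, and the roughly 1 MB/s throttle rate are illustrative assumptions, not the author's exact code:

```typescript
// Sketch of the streaming and download endpoints described in this article.
// Dependencies can be installed with e.g. `npm install express cors dotenv throttle`.
// (The `throttle` package ships without types; a small `declare module 'throttle'`
// shim may be needed in a TypeScript project.)
import express, { Request, Response } from 'express';
import cors from 'cors';
import dotenv from 'dotenv';
import fs from 'fs';
import path from 'path';
import Throttle from 'throttle';

dotenv.config();

const app = express();
app.use(cors());

// Assumed location of the video file.
const videoPath = path.resolve(__dirname, 'assets', 'video.mp4');

// Streaming endpoint: answers HTTP range requests with 206 Partial Content.
app.get('/video', (req: Request, res: Response) => {
  const { size } = fs.statSync(videoPath);
  const range = req.headers.range;

  if (range) {
    // Parse a header of the form "bytes=start-end" (end may be omitted).
    const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : size - 1;

    res.writeHead(206, {
      'Content-Range': `bytes ${start}-${end}/${size}`,
      'Accept-Ranges': 'bytes',
      'Content-Length': end - start + 1,
      'Content-Type': 'video/mp4',
    });
    // Stream only the requested byte range; pipe() handles backpressure for us.
    fs.createReadStream(videoPath, { start, end }).pipe(res);
  } else {
    res.writeHead(200, {
      'Content-Length': size,
      'Content-Type': 'video/mp4',
    });
    fs.createReadStream(videoPath).pipe(res);
  }
});

// Download endpoint: sends the whole file as an attachment, throttled,
// writing each chunk to the response by hand instead of piping directly.
app.get('/download', (_req: Request, res: Response) => {
  const { size } = fs.statSync(videoPath);

  res.writeHead(200, {
    'Content-Disposition': 'attachment; filename="video.mp4"',
    'Content-Length': size,
    'Content-Type': 'video/mp4',
  });

  const throttle = new Throttle(1024 * 1024); // ~1 MB/s, an arbitrary cap
  const throttledStream = fs.createReadStream(videoPath).pipe(throttle);

  throttledStream.on('data', (chunk: Buffer) => {
    res.write(chunk); // manually write each chunk to the response
  });
  throttledStream.on('end', () => res.end());
});

app.listen(Number(process.env.PORT) || 4000, () => console.log('server up'));
```

The '/video' route answers range requests with 206 Partial Content and pipes only the requested slice, while '/download' advertises an attachment and writes throttled chunks by hand, which is exactly the contrast discussed in the next section.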
This endpoint has a similar setup to the video streaming endpoint, but it comes with a few key differences:

1. Response Headers: Here, we include a 'Content-Disposition' header with an 'attachment' directive. This header tells the browser to present the file as a downloadable file named 'video.mp4'.
2. Throttling: We use the 'throttle' package to limit the data transfer rate. Throttling can be useful for simulating lower-speed connections during testing, or for preventing your server from being overwhelmed by data transfer operations.
3. Data Writing: Instead of directly piping the read stream to the response, we attach 'data' and 'end' event listeners to the throttled stream. On the 'data' event, we manually write each chunk of data to the response, and on the 'end' event, we close the response.

This implementation provides a more hands-on way to control the data transfer process. It allows for the addition of custom logic to handle events like pausing and resuming the data transfer, adding custom transformations to the data stream, or handling errors during transfer.

Utilizing the APIs: A React Application

Now that we have a server-side setup for video streaming and downloading, let's put these APIs into action within a client-side React application. Note that we'll be using Tailwind CSS for quick, utility-based styling in our components. Our React application will consist of a video player that uses the video streaming API, a download button that triggers the download API, and a progress bar that shows the real-time download progress.

First, the VideoPlayer component plays the streamed video. It uses an HTML5 video tag to handle playback, with the src attribute of the source tag set to the video endpoint of our Express server. When this component is rendered, it sends a request to our video API and starts streaming the video in response to the range requests that the browser automatically makes.

Next, the DownloadButton component handles the video download and displays the download progress. When the download button is clicked, it sends a fetch request to our download API, then uses a while loop to continually read chunks of data from the response as they arrive, updating the download progress until the download is complete. This is an example of more controlled handling of multi-response APIs, where we are not just directly piping the data but processing it and manually sending it as a downloadable file.

Bringing It All Together

Finally, we integrate these components into our main App component. This simple App component includes our VideoPlayer and DownloadButton components and places them on the screen in a neat, centered layout thanks to Tailwind CSS. A sketch of all three components follows.
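As with the server code, the original component listings are not shown on this page, so below is a minimal sketch of the three components described above. The API base URL, the Tailwind class names, and the 'video.mp4' download name are illustrative assumptions:

```tsx
// Sketch of the client-side pieces: VideoPlayer, DownloadButton, and App.
import React, { useState } from 'react';

const API_URL = 'http://localhost:4000'; // assumed server address

// Plays the streamed video; the browser issues range requests automatically.
export function VideoPlayer() {
  return (
    <video controls className="w-full max-w-2xl rounded shadow">
      <source src={`${API_URL}/video`} type="video/mp4" />
      Your browser does not support the video tag.
    </video>
  );
}

// Downloads the video via fetch, reading the body stream chunk by chunk
// and reporting progress as chunks arrive.
export function DownloadButton() {
  const [progress, setProgress] = useState(0);

  const handleDownload = async () => {
    const response = await fetch(`${API_URL}/download`);
    if (!response.ok || !response.body) return;

    const total = Number(response.headers.get('Content-Length')) || 0;
    const reader = response.body.getReader();
    const chunks: Uint8Array[] = [];
    let received = 0;

    // Keep reading until the stream ends, updating the progress bar as we go.
    while (true) {
      const { done, value } = await reader.read();
      if (done || !value) break;
      chunks.push(value);
      received += value.length;
      if (total) setProgress(Math.round((received / total) * 100));
    }

    // Reassemble the chunks into a Blob and trigger a browser download.
    const blob = new Blob(chunks, { type: 'video/mp4' });
    const url = URL.createObjectURL(blob);
    const link = document.createElement('a');
    link.href = url;
    link.download = 'video.mp4';
    link.click();
    URL.revokeObjectURL(url);
  };

  return (
    <div className="flex flex-col items-center gap-2">
      <button
        onClick={handleDownload}
        className="rounded bg-blue-600 px-4 py-2 text-white"
      >
        Download video
      </button>
      <div className="h-2 w-64 rounded bg-gray-200">
        <div
          className="h-2 rounded bg-blue-600"
          style={{ width: `${progress}%` }}
        />
      </div>
      <span>{progress}%</span>
    </div>
  );
}

// Ties the two components together in a centered layout.
export default function App() {
  return (
    <main className="flex min-h-screen flex-col items-center justify-center gap-8">
      <VideoPlayer />
      <DownloadButton />
    </main>
  );
}
```

Reading the response body with getReader() rather than awaiting a single blob() call is what makes the incremental progress updates possible.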
Here is a summary of how our system operates:

- The video player makes a request to our Express server as soon as it is rendered in the React application. Our server handles this request, reading the video file and sending back the appropriate chunks as per the range requested by the browser. This results in the video being streamed in our player.
- When the download button is clicked, a fetch request is sent to our server's download API. This time, the server reads the file, but instead of just piping the data to the response, it controls the data sending process. It sends chunks of data and also logs the sent chunks for monitoring purposes.

The React application collects these chunks and concatenates them, displaying the download progress in real-time. When all chunks are received, it compiles them into a Blob and triggers a download in the browser. This setup allows us to build a full-featured video streaming and downloading application with fine control over the data transmission process. To see this system in action, you can check out this video demo.

Conclusion

While the focus of this article was on video streaming and downloading, the principles we discussed here extend beyond just media files. The pattern of responding to HTTP range requests is common in various data-heavy applications, and understanding it can be a useful tool in your web development arsenal.

Finally, remember that the code shown in this article is just a simple example to demonstrate the concepts. In a real-world application, you would want to add proper error handling, validation, and possibly some form of access control depending on your use case.

I hope this article helps you in your journey as a developer. Building something yourself is the best way to learn, so don't hesitate to get your hands dirty and start coding!


Follow These Best Practices for Using Git

As a developer, working in a version control system is a basic necessity as soon as you start any project. The 2021 Stack Overflow Developer Survey shows Git as the most popular technology tool, reinforcing how important it is to understand it and use it correctly in your projects. This article compiles some tips and, hopefully, good advice for working with Git.

Never push to the main branch

> "The Git feature that really makes it stand apart from nearly every other SCM out there is its branching model." - from git-scm

Pushing directly to the main branch doesn't make much sense in Git, as it doesn't promote collaboration. Instead, make use of the merges/rebases from pull requests that most providers offer (GitHub, AWS CodeCommit, etc.).

Define patterns & standards within your team

Every team has its patterns and standards for naming conventions, tags, and commit messages. Those should be followed and pointed out in all pull-request reviews. If you don't have any starting pattern, here are a couple of suggestions:

- branch name: user_name/ticket_number,-short_description_dashed
  - example: mimurawil/1,-my-first-commit
- commit title: type(scope): short_description
  - type: feat, fix, revert, etc.
  - scope: ticket number (can be prefixed by "#")
  - example: fix(#1): fix broken build

Write useful commit messages

Try to focus more on the "why" and "what" instead of the "how". You spend a bit more time completing your commit, but it pays off when your future self or teammates revisit that piece of code.

Rebase your branch frequently

Rebasing your branch frequently ensures you are always working with the latest version of the main branch (make sure your local main branch is updated from the remote first). Your teammates will review the correct changes, and you will usually encounter no conflicts when merging to the main branch. A short example of the commands appears at the end of this post.

Make use of Git commands

A few Git commands that I have found incredibly useful in my journey:

- git revert: creates a new commit that reverts the changes from a specific commit
- git stash: stores your changes and returns you to a clean working directory
- git cherry-pick: picks one specific commit and adds it on top of your current branch
- git bisect: helps you find the specific commit in the history that introduced a bug in your codebase
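To make the advice above concrete, here is a short, hypothetical command sequence that follows the suggested conventions: update main, rebase the feature branch onto it, and commit with a type(scope) message focused on the "why". The branch name, ticket number, and file are invented for illustration.

```sh
# Hypothetical example: branch name, ticket number, and file are made up.
git checkout main
git pull origin main            # keep local main in sync with the remote
git checkout mimurawil/42,-download-progress-bar
git rebase main                 # replay the branch on top of the latest main
# ...resolve any conflicts, then `git rebase --continue`...
git add src/download.ts
git commit -m "fix(#42): report download progress while streaming" \
           -m "Progress stayed at 0% because bytes were counted before the throttled stream emitted them."
```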