
State of Node.js Wrap-up


In this State of Node.js event, our panel of Node.js maintainers, Technical Steering Committee members, and collaborators discussed the latest updates, LTS releases, APIs, and much more.

In this wrap-up, we will take a deeper look into these latest developments and explore what is on the horizon for Node.js. You can watch the full State of Node.js event on the This Dot Media YouTube Channel.

Here is a complete list of the hosts and panelists who participated in this online event.

Hosts:

  • Tracy Lee, CEO, This Dot Labs, @ladyleet
  • James Snell, Node.js Foundation Technical Steering Committee, @jasnell

Panelists:

  • Beth Griggs, Senior Software Engineer, Red Hat, Node.js TSC Member, @BethGriggs_
  • Matteo Collina, Co-Founder and CTO of Platformatic.dev, Node.js TSC member, @matteocollina
  • Michael Dawson, Node.js Lead, Red Hat and IBM, @mhdawson1

General state of Node.js

Michael kicks off the conversation by saying there is a lot happening with Node.js right now. There were over a billion downloads last year alone, and usage continues to grow.

Beth talks about the major Node.js v20 release coming out in April, and notes that Node.js 14 reaches end of life at the end of April.

Matteo talks about two Node.js micro conferences happening this year: one in Vancouver in May, and another in Bilbao in September.

Updates from specific working groups

Michael talks about spinning up a uvwasi team. WASI is the WebAssembly System Interface, and uvwasi is used not only in Node.js but also in other projects like Grain. It is a key component of WebAssembly (Wasm) support.
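For context, Node.js ships an experimental WASI binding built on top of uvwasi. The sketch below assumes a compiled demo.wasm module and shows roughly what running it looks like; option and flag names are experimental and vary between Node.js releases (older lines require the --experimental-wasi-unstable-preview1 flag).

```js
// Rough sketch of Node's experimental WASI support (built on uvwasi).
// "demo.wasm" is a hypothetical compiled WebAssembly module.
import { readFile } from 'node:fs/promises';
import { WASI } from 'node:wasi';

const wasi = new WASI({
  version: 'preview1',                   // WASI snapshot preview1
  args: process.argv,
  env: process.env,
  preopens: { '/sandbox': './sandbox' }, // directory exposed to the module
});

const wasm = await WebAssembly.compile(await readFile('demo.wasm'));
const instance = await WebAssembly.instantiate(wasm, wasi.getImportObject());
wasi.start(instance);
```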

Michael also talks about how the Node-API team has been great at building long-term contributors. If you’re interested in add-ons and native code, it is a friendly group to get involved with.

Beth talks about other ways folks can contribute to Node.js. She talks about a redesign of the website that happened recently. The main website has been migrated over to Next.js.

Matteo talks about a massive PR that is open right now for the new loader API. A lot of effort is being put into it by many contributors. This new loader API is meant to replace the existing `--` flag hack.
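To give a sense of what loader hooks look like today, here is a minimal sketch of a custom ESM loader exporting the resolve and load hooks. The registration mechanism (shown here with the --experimental-loader flag) is exactly the part the open PR is reworking, so treat it as illustrative.

```js
// my-loader.mjs — minimal sketch of ESM loader hooks.
// Illustrative usage: node --experimental-loader ./my-loader.mjs app.mjs
export async function resolve(specifier, context, nextResolve) {
  // Log each module resolution, then defer to the default resolver.
  console.log(`resolving ${specifier}`);
  return nextResolve(specifier, context);
}

export async function load(url, context, nextLoad) {
  // Defer to the default loader for the actual module source.
  return nextLoad(url, context);
}
```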

New Features

Michael talks about single executable applications, which let you bundle your code into the Node.js binary without having to build Node.js yourself. He also mentions the process permission model. These are two big new experimental features right now.
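Single executable applications involve a multi-step, still-evolving build process, so here is just a rough illustration of the other feature, the permission model. The flags and the /app/data path below are examples, and the API is experimental and subject to change.

```js
// index.js — rough illustration of the experimental permission model.
// Start the process with something like:
//   node --experimental-permission --allow-fs-read=/app/data index.js
if (process.permission && !process.permission.has('fs.write', '/app/data')) {
  console.log('fs.write not granted for /app/data; running in read-only mode');
}
```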

Beth talks about the built-in test runner. It lets you throw some scripts together and get simple tests running without having to pull in external testing dependencies and the security warnings that come with them.
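As a quick example, a minimal test file using the built-in node:test module looks something like this and runs with node --test (the file name is just an example).

```js
// math.test.mjs — minimal example using the built-in test runner
import test from 'node:test';
import assert from 'node:assert/strict';

test('adds two numbers', () => {
  assert.equal(1 + 1, 2);
});
```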

End of Event

Each panelist takes time to go over what they are currently working on. Beth is working on security for releases, and talks through that work.

Michael is working on Node-API, which is a long-term, ongoing project. James is working on standard APIs and on bringing interoperability across Node.js, Bun, and Deno.

Finally, Matteo is working on getting Platformatic going.

Conclusion

The conversation went in depth on the state of Node.js, covering what is landing in new releases as well as experimental features. The panelists were very engaged, and were great at bringing up ways to get involved with the Node.js community. You can watch the full State of Node.js event on the This Dot Media YouTube Channel.


