
D1 SQLite: Writing queries with the D1 Client API

In the previous post we defined our database schema, got up and running with migrations, and loaded some seed data into our database. In this post we will be working with our new database and seed data. If you want to participate, make sure to follow the steps in the first post.

We’ve been taking a minimal approach so far, using only wrangler and SQL scripts for our workflow. The D1 Client API has a small surface area, and thanks to the power of SQL, it gives us everything we need to construct all types of queries. Before we start writing our queries, let's touch on some important concepts.

Prepared statements and parameter binding

The first section of the docs highlights two different ways to write our SQL statements with the client API: prepared statements and static statements. Best practice is to use prepared statements: they are more performant, and they prevent SQL injection attacks. So we will write our queries using prepared statements.

We need to use parameter binding to build our queries with prepared statements. This is pretty straightforward and there are two variations.

By default, we add ?'s to our statement to represent values to be filled in. The bind method binds the parameters to each question mark by position: the first ? is tied to the first argument passed to bind, the second ? to the second argument, and so on. I would stick with this style most of the time to avoid any confusion.

const stmt = db.prepare('SELECT * FROM users WHERE name = ? AND age = ?').bind( 'John Doe', 41 );

I like the second method less, as it feels like something I can imagine messing up very innocently. You can add a number directly after a question mark to indicate which parameter it should be bound to. In this example, we reverse the previous binding:

const stmt = db.prepare('SELECT * FROM users WHERE name = ?2 AND age = ?1').bind( 41, 'John Doe' );

Reusing prepared statements

If we take the first example above and don't bind any values, we have a statement that can be reused:

const stmt = db.prepare('SELECT * FROM users WHERE name = ? AND age = ?');

const johnResults = await stmt.bind('John Doe', 41).all();
const janeResults = await stmt.bind('Jane Doe', 38).all();

Querying

For the purposes of this post, we will build example queries by writing them directly in our Worker fetch handler. If you are building an app, I would recommend creating functions or some other abstraction around your queries.

select queries

Let's write our first query against our data set to get our feet wet.

Here’s the initial worker code and a query for all authors:

import * as Schema from './schema';

export default {
	async fetch(request, env, ctx): Promise<Response> {
		let results = await env.DB.prepare('SELECT * FROM authors').all<Schema.Author>();

		return new Response(JSON.stringify(results.results), {
			headers: { 'content-type': 'application/json' },
		});
	},
} satisfies ExportedHandler<Env>;

We pass our SQL statement into prepare and use the all method to get all the rows. Notice that we can pass our row type as a generic parameter to all, which gives us a fully typed response from our query.
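For reference, here is a minimal sketch of the row types the ./schema module might export. The real definitions come from the migrations in the previous post, so treat these as assumptions inferred from the result data below:

// schema.ts — a minimal sketch of the row types assumed in this post.
// The actual schema was defined via migrations in the previous post.
export type Author = {
	id: number;
	name: string;
};

export type Post = {
	id: number;
	author_id: number;
	title: string;
	content: string;
	published_at: string;
};

export type Tag = {
	id: number;
	name: string;
};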

We can run our worker with npm run dev and access it at http://localhost:8787 by default. We'll keep this simple workflow of writing queries and returning the results as a JSON response for inspection in the browser. Opening the page, we get our author results.

joins

Not using an ORM means we have full control over our own destiny. Like anything else, though, this has tradeoffs. Let's look at a query to fetch the list of posts, including author and tags information.

type PostsWithAuthorsAndTags = Schema.Post & {
	author_name: string;
	tags: string;
};

let results = await env.DB.prepare(
	`
SELECT
	posts.*,
	authors.name AS author_name,
	COALESCE(
		JSON_GROUP_ARRAY(tags.name) FILTER (WHERE tags.name IS NOT NULL),
		'[]'
	) AS tags
FROM
	posts
JOIN
	authors ON posts.author_id = authors.id
LEFT JOIN
	posts_tags ON posts.id = posts_tags.post_id
LEFT JOIN
	tags ON posts_tags.tag_id = tags.id
GROUP BY
	posts.id
	`
).all<PostsWithAuthorsAndTags>();

Let’s walk through each part of the query and highlight some pros and cons.

SELECT
	posts.*,
	authors.name AS author_name,
	COALESCE(
		JSON_GROUP_ARRAY(tags.name) FILTER (WHERE tags.name IS NOT NULL),
		'[]'
	) AS tags
  • The query selects all columns from the posts table.
  • It also selects the name column from the authors table and renames it to author_name.
  • It aggregates the name column from the tags table into a JSON array, renamed to tags. The FILTER clause drops the NULL value the LEFT JOIN produces for posts with no tags, so those posts get an empty JSON array instead of [null].
FROM
	posts
JOIN
	authors ON posts.author_id = authors.id
LEFT JOIN
	posts_tags ON posts.id = posts_tags.post_id
LEFT JOIN
	tags ON posts_tags.tag_id = tags.id
GROUP BY
	posts.id
  • The query starts by selecting data from the posts table.
  • It then joins the authors table to include author information for each post, matching posts to authors using the author_id column in posts and the id column in authors.
  • Next, it left joins the posts_tags table to include tag associations for each post, ensuring that all posts are included even if they have no tags.
  • Next, it left joins the tags table to include tag names, matching tags to posts using the tag_id column in posts_tags and the id column in tags.
  • Finally, it groups the results by the post id so that all rows with the same post id are combined into a single row.

SQL provides a lot of power to query our data in interesting ways, and JOINs will typically be more performant than performing additional queries. You could just as easily write a simpler version of this query that fetches the post tags with separate queries and joins all the data by hand in JavaScript. This is the nice thing about writing SQL: you're free to fetch and handle your data however you please.
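For illustration, here's a sketch of what that hand-rolled approach might look like. The query shapes here are assumptions, not code from the project:

// A sketch of the alternative: fetch posts with their authors, fetch the
// tag associations separately, then stitch the two result sets together.
const posts = await env.DB.prepare(
	`SELECT posts.*, authors.name AS author_name
	 FROM posts JOIN authors ON posts.author_id = authors.id`
).all<Schema.Post & { author_name: string }>();

const tagRows = await env.DB.prepare(
	`SELECT posts_tags.post_id, tags.name
	 FROM posts_tags JOIN tags ON posts_tags.tag_id = tags.id`
).all<{ post_id: number; name: string }>();

// Group tag names by post_id, then attach them to each post.
const tagsByPost = new Map<number, string[]>();
for (const row of tagRows.results) {
	const tags = tagsByPost.get(row.post_id) ?? [];
	tags.push(row.name);
	tagsByPost.set(row.post_id, tags);
}

const postsWithTags = posts.results.map((post) => ({
	...post,
	tags: tagsByPost.get(post.id) ?? [],
}));

A nice side effect of this version is that tags comes back as a real JavaScript array rather than a JSON string.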

Our results should look similar to this:

[
  {
    "id": 1,
    "author_id": 1,
    "title": "Exploring the Alps",
    "content": "Content about exploring the Alps...",
    "published_at": "2024-07-31 18:37:21",
    "author_name": "Alice Smith",
    "tags": "[\\"Travel\\",\\"Photography\\"]"
  },
  ...
]

This brings us to our next topic.

Marshaling / coercing result data

A couple of things we notice about the format of the result data our query provides:

Rows are flat. We join the author directly onto the post and prefix its column names with author_.

"author_name": "Alice Smith"

Using an ORM we might get the data back as a child object:

{
  "id": 1,
  "title": "Exploring the Alps",
  "author": {
    "name": "Alice Smith"
  },
  ...
}

Another thing to note is that our tags data comes back as a JSON string, not a JavaScript array. This means we need to parse it ourselves:

result.tags = JSON.parse(result.tags)

This isn’t the end of the world but it is some more work on our end to coerce the result data into the format that we actually want.
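Putting both together, a small mapping step gets us the nested, parsed shape. This is just a sketch using the column names from our query above:

// Coerce each flat row into a nested object with parsed tags.
const marshaled = results.results.map((row) => ({
	id: row.id,
	author_id: row.author_id,
	title: row.title,
	content: row.content,
	published_at: row.published_at,
	author: { name: row.author_name },      // nest the joined author column
	tags: JSON.parse(row.tags) as string[], // parse the JSON string into an array
}));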

This problem is handled by most ORMs, and in my opinion it's their main selling point.

insert / update / delete

Next, let’s write a function that will add a new post to our database.
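The function takes a NewPostData argument that isn't defined in the post; here's a minimal sketch of the shape assumed by the bindings below:

// A sketch of the input type for createNewPost, inferred from its usage.
type NewPostData = {
	authorId: number;
	title: string;
	content: string;
	tags: string[];
};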

async function createNewPost(env: Env, newPostData: NewPostData): Promise<{ id: number }> {
	// Insert the new post into the posts table
	const postResult = await env.DB.prepare(
		`
INSERT INTO posts (author_id, title, content)
VALUES (?, ?, ?)
RETURNING id
					`
	)
		.bind(newPostData.authorId, newPostData.title, newPostData.content)
		.first<{ id: number }>();

	if (!postResult) {
		throw new Error('Failed to insert new post');
	}

	const postId = postResult.id;

	// Insert tags into the tags table if they don't already exist and get their IDs
	const maybeTagIds = await Promise.all(
		newPostData.tags.map(async (tag) => {
			await env.DB.prepare(
				`
INSERT OR IGNORE INTO tags (name)
VALUES (?)
				`
			)
				.bind(tag)
				.run();

			const tagResult = await env.DB.prepare(
				`
SELECT id FROM tags WHERE name = ?
				`
			)
				.bind(tag)
				.first<{ id: number }>();

			return tagResult?.id;
		})
	);

	// Filter after awaiting: calling .filter(Boolean) on the array of promises
	// themselves (rather than their results) would never remove anything.
	const tagIds = maybeTagIds.filter((id): id is number => id !== undefined);

	// Link tags to the new post in the posts_tags table
	for (const tagId of tagIds) {
		await env.DB.prepare(
			`
INSERT INTO posts_tags (post_id, tag_id)
VALUES (?, ?)
							`
		)
			.bind(postId, tagId)
			.run();
	}

	return { id: postId };
}

There are a few queries involved in our create post function:

  • first, we create the new post
  • next, we run through the tags and either create each one or fetch its existing id
  • finally, we add entries to our posts_tags join table to associate the new post with its assigned tags

We can test our new function by providing post content in query params on our index page and formatting them for our function.

const url = new URL(request.url);

const newPostData: NewPostData = {
	authorId: Number(url.searchParams.get('authorId')),
	tags: url.searchParams.get('tags')?.split(',') ?? [],
	title: url.searchParams.get('title') ?? '',
	content: url.searchParams.get('content') ?? '',
};
if (!newPostData.authorId || !newPostData.title || !newPostData.content) {
	return new Response('Missing required fields', { status: 400 });
}

const newPost = await createNewPost(env, newPostData);

I gave it a run like this: http://localhost:8787/?authorId=1&tags=Food%2CReview&title=A+review+of+my+favorite+Italian+restaurant&content=I+got+the+sausage+orchette+and+it+was+amazing.+I+wish+that+instead+of+baby+broccoli+they+used+rapini.+Otherwise+it+was+a+perfect+dish+and+the+vibes+were+great

And got a new post with the id 11.

UPDATE and DELETE operations are pretty similar to what we've seen so far. Most of the complexity in your queries will come from the same places as in the posts query: JOINing or GROUPing data in various ways.

To update the post we can write a query that looks like this:

await env.DB.prepare(
	`
UPDATE posts
SET
	author_id = COALESCE(?, author_id),
	title = COALESCE(?, title),
	content = COALESCE(?, content)
WHERE id = ?
	`
)
	.bind(updatedPostData.authorId, updatedPostData.title, updatedPostData.content, postId)
	.run();

COALESCE acts similarly to a ?? b in JavaScript: if the bound value we provide is NULL, the column falls back to its existing value. Note that fields we don't want to change should be bound as null rather than undefined.
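In JavaScript terms, the update behaves roughly like this sketch, assuming a post object holding the existing row:

// Rough JavaScript equivalent of the COALESCE fallbacks above:
// each field keeps its existing value unless a new one is provided.
post.author_id = updatedPostData.authorId ?? post.author_id;
post.title = updatedPostData.title ?? post.title;
post.content = updatedPostData.content ?? post.content;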

We can delete our new post with a simple DELETE query:

await env.DB.prepare(
`
DELETE FROM posts WHERE id = ?
`
)
.bind(postId)
.run();

Transactions / Batching

One thing to note with D1: I don't think the traditional style of SQLite transactions (explicit BEGIN/COMMIT) is supported. You can use the db.batch API to achieve similar functionality, though.

According to the docs:

Batched statements are SQL transactions. If a statement in the sequence fails, then an error is returned for that specific statement, and it aborts or rolls back the entire sequence.

await db.batch([
    db.prepare("UPDATE users SET name = ?1 WHERE id = ?2").bind( "John", 17 ),
    db.prepare("UPDATE users SET age = ?1 WHERE id = ?2").bind( 35, 19 ),
]);
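We could use this to tighten up createNewPost: the posts_tags linking loop could be issued as a single atomic batch instead of one .run() per tag. A sketch building on the function above:

// Link all tags to the new post in one batched transaction,
// reusing a single prepared statement with different bindings.
const linkStmt = env.DB.prepare('INSERT INTO posts_tags (post_id, tag_id) VALUES (?, ?)');
await env.DB.batch(tagIds.map((tagId) => linkStmt.bind(postId, tagId)));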

Summary

In this post, we took a hands-on tour of the D1 Client API, building on the schema and seed data from the previous post. We covered the basics of prepared statements and parameter binding, then moved on to writing queries: selecting data, joining related tables, and marshaling result data into a usable format. We also touched on inserting, updating, and deleting data, and on using batches to get transaction-like consistency. Working through these examples gives us a solid foundation for using the D1 Client API to build robust, data-driven applications.

