End-to-end type-safety with JSON Schema
I recently wrote an introduction to JSON Schema post. If you're unfamiliar with it, check out the post, but the TL;DR is: it's a schema specification that can be used to define the input and output data for your JSON API. In that post, I highlight many of the benefits you can reap from defining schemas for your JSON API. One of the more interesting things you can achieve with your schemas is end-to-end type safety, from your backend API to your client application(s). In this post, we will explore how this can be accomplished in a bit more depth.

Overview

The basic idea of what we want to achieve is:

* a JSON API server that validates input and output data using JSON Schema
* the JSON Schema definitions that our API uses, transformed into TypeScript types

With those pieces in place, we can achieve type safety on both our API server and the consuming client application. The server side is pretty straightforward if you're using a server like Fastify, which ships with JSON Schema support already enabled. This post will focus on the concepts more than the actual implementation details, though.

The high-level concept is that we share the schema and type declaration between the client and server. We can then make a request to an endpoint whose type and schema we know, and, assuming the server validates the data against the schema before sending it back to the client, our client can be confident about the type of the response data.

Marrying JSON Schema and TypeScript

There are a couple of different ways to accomplish this:

* Generating types from schema definitions using code generation tools
* Creating TypeBox definitions that can infer TypeScript types and be compiled to JSON Schema

I recommend considering both and figuring out which would better fit your application and workflows. Like anything else, each has its own set of trade-offs. In my experience, I've found TypeBox to be the most compelling if you want to go deep with this pattern.

Code generation

A couple of different packages are available for generating TS types from JSON Schema definitions:

* https://github.com/bcherny/json-schema-to-typescript
* https://github.com/vega/ts-json-schema-generator

They are CLI tools that you can point at a glob path where your schema files are located, and they will generate TS declaration files to a specified output path. You can set up an npm hook or a similar type of script that generates types for your development environment.

TypeBox

TypeBox is a JSON Schema type builder. With this approach, instead of JSON files, we define schemas in code using the TypeBox API. TypeBox definitions infer TypeScript types directly, which eliminates the code generation step described above. Here's a simple example of a JSON Schema definition declared with TypeBox (reconstructed in the style of the TypeBox documentation):

```typescript
import { Type, type Static } from '@sinclair/typebox'

// At runtime, this is a plain JSON Schema object.
const User = Type.Object({
  id: Type.String(),
  name: Type.String(),
})
```

This can then be inferred as a TypeScript type:

```typescript
type User = Static<typeof User>
// Equivalent to: { id: string; name: string }
```

Aside from schemas and types, TypeBox can do a lot more to help us on our type-safety journey. We will explore it a bit more in the upcoming sections.

Sharing schemas between client and server applications

Sharing our JSON Schema between our server and client app is the main requirement for end-to-end type safety. There are a couple of different ways to accomplish this, but the simplest is to set up our codebase as a monorepo that contains both the server and client app. Some popular options for TypeScript monorepos are PNPM, Turborepo, and Nx.
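However you organize the monorepo, the goal is a single schema module that both apps import from. Here is a minimal sketch of what that shared module might export (the package layout and names are hypothetical):

```typescript
// packages/schemas/src/user.ts — imported by both the API and the client
import { Type, type Static } from '@sinclair/typebox'

export const UserSchema = Type.Object({
  id: Type.String(),
  name: Type.String(),
})

export type User = Static<typeof UserSchema>
```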
If a monorepo is not an option, you can publish your schemas and types as a package that can be installed in both projects. However, this setup requires a lot more maintenance work. Ultimately, as long as you can import your schemas and types from both the client and server app, you are in good shape.

Server-to-client validation and type-safety

For the sake of simplicity, let's focus on data flowing from the server to the client for now. Generally speaking, the concepts also apply in reverse, as long as your JSON API server validates both its inputs and outputs. We'll look at the most basic version of having strongly typed data on the client from a request to our server.

Type-safe client requests

If our server application validates the /users endpoint with a shared schema, then on the client side, when we make the request to that endpoint, we know that the response data was validated using the user schema. As long as we are confident of this fact, we can use the generated type from that schema as the return type on our client fetch call. Here's some pseudocode (the shared package name is hypothetical):

```typescript
import type { User } from '@myapp/schemas'

// The server validated this payload against the shared user schema,
// so asserting the type on the client is safe by construction.
async function getUsers(): Promise<User[]> {
  const res = await fetch('/users')
  return res.json() as Promise<User[]>
}
```

Our server endpoint would look something like this (a Fastify sketch; the data-access helper is hypothetical):

```typescript
import Fastify from 'fastify'
import { Type } from '@sinclair/typebox'
import { UserSchema, type User } from '@myapp/schemas'

const app = Fastify()

// Hypothetical data-access helper.
async function getUsersFromDb(): Promise<User[]> {
  return []
}

app.get(
  '/users',
  {
    schema: {
      // Fastify validates and serializes the response against this schema.
      response: { 200: Type.Array(UserSchema) },
    },
  },
  async () => getUsersFromDb()
)
```

You could get creative and build out a map that defines all of your endpoints, their metadata, and schemas, and use that map both to define your server endpoints and to create an API client.

Transforming data over the wire

Everything looks stellar, but we can still take our efforts a bit further. To this point, we are still limited to serialized JSON data. If we have a created_at field (a number or an ISO string) tied to our user, and we want it to be a Date object when we get hold of it on the client side, additional work and consideration are required.

There are several strategies out there for deserializing JSON data. The great thing about having shared schemas between our client and server is that we can encode our type information in the schema without sending additional metadata from our server to the client.

Using format to declare type data

In my initial JSON Schema blog post, I touched on the format field of the specification. In our schema, if the actual type of our date is a string in ISO 8601 format, we can declare its format to be "date-time". We can use this information on the client to transform the field into a proper Date object:

```typescript
export const UserSchema = Type.Object({
  id: Type.String(),
  // Serialized as an ISO 8601 string on the wire.
  created_at: Type.String({ format: 'date-time' }),
})
```

Transforming serialized JSON data

This can be a little bit tricky, and again, there are many ways to accomplish it. To demonstrate the concept, we'll use TypeBox to define our schemas as discussed above. TypeBox provides a Transform type that you can use to declare encode and decode methods for your schema definition:

```typescript
import { Type } from '@sinclair/typebox'

const CreatedAt = Type.Transform(Type.String({ format: 'date-time' }))
  .Decode((value) => new Date(value)) // wire string -> Date
  .Encode((value) => value.toISOString()) // Date -> wire string
```

It even provides helpers to statically derive the decoded and encoded types for your schema:

```typescript
import type { StaticDecode, StaticEncode } from '@sinclair/typebox'

type CreatedAtDecoded = StaticDecode<typeof CreatedAt> // Date
type CreatedAtEncoded = StaticEncode<typeof CreatedAt> // string
```

If you declare decode and encode functions for your schemas, you can then use the TypeBox API to handle decoding the serialized values returned from your JSON API. Here's what the concept looks like in practice, fetching a user from our API:

```typescript
import { Value } from '@sinclair/typebox/value'
import { UserSchema } from '@myapp/schemas'

const res = await fetch('/users/1')
const raw = await res.json()

// Runs the schema's Decode transforms (assuming UserSchema uses the
// CreatedAt transform above), so created_at comes back as a Date.
const user = Value.Decode(UserSchema, raw)
```

Nice. You could use a validation library like Zod to achieve a similar result, but here we aren't actually doing any validation on the client side; that happened on the server. We just know the types based on the schema, since both ends share it. On the client, we are only transforming serialized JSON into what we want it to be in our client application.

Summary

There are a lot of pieces in play to accomplish end-to-end type safety. With the help of JSON Schema and TypeBox, though, it feels like light work for a semi-roll-your-own type of solution.
Another great thing about this approach is that it's flexible and based on core concepts: a JSON API paired with a TypeScript-based client application. The number of benefits you can reap from defining JSON Schemas for your APIs is remarkable. If you're like me and want to keep things simple by avoiding GraphQL or similar tools, this is a great approach.
Apr 17, 2024
Configure your project with Drizzle for Local & Deployed Databases
Updated April 8, 2024: Thanks to Jassi Bacha for pointing out that the migrate function is not exported from the drizzle-orm/postgres-js/migrator module for Vercel. I've updated the article to reflect the correct import path.

It was a fun Friday, and Jason Lengstorf and I both decided to try and use Drizzle on our respective projects. Jason went the SQLite route and wrote an amazing article about how he got his setup working. My approach was a bit different: I started with Vercel's Postgres + Drizzle Next.js Starter and wanted to use PostgreSQL.

If you don't know what Drizzle is, it's a type-safe ORM similar to Prisma. My colleague, Dane Grant, wrote a great intro post on it, so go check out his article if you want to learn more about Drizzle.

Getting my project off the ground took longer than I expected, especially coming from a starter kit, but I figured it out. This is the article I wish I had at the time to help get this project set up with less friction. I will focus on using local and Vercel PostgreSQL, but this same setup should work with other databases and adapters, and I'll make notes about where those places are. While I did use Next.js here, these setup instructions work on other projects, too.

Configuring Drizzle

Every project that leverages Drizzle requires a drizzle.config in the root. Because I'm leveraging TypeScript, I named mine drizzle.config.ts, and to secure secrets, I also installed dotenv. My final file appeared as follows (a reconstructed sketch; the schema and output paths are assumptions):

```typescript
import * as dotenv from 'dotenv'
import type { Config } from 'drizzle-kit'

dotenv.config()

export default {
  schema: './db/schema.ts',
  out: './db/migrations',
  driver: 'pg',
  dbCredentials: {
    // Stored in .env to keep the secret out of source control.
    connectionString: process.env.DATABASE_URL ?? '',
  },
} satisfies Config
```

The schema field is used to identify where your project's database schema is defined. Mine is in a file called schema.ts, but you can split your schema into multiple files and use glob patterns to detect all of them. The out field determines where your migration outputs will be stored. I recommend putting them in a folder in the same directory as your schema to keep all your database-related information together.

Additionally, the config requires a driver and a dbCredentials.connectionString to be specified so Drizzle knows which APIs to leverage and where your database lives. For the connectionString, I'm using dotenv to store the value in a secret and protect it. The connectionString should be in a valid connection format for your database. For PostgreSQL, this format is postgresql://<user>:<password>@<host>:<port>/<database>.

Getting your connection string

Now, you may be wondering how to get that connection string. If you're hosting on Vercel using their Postgres offering, you need to go to your Vercel dashboard, select their Postgres option, and attach it to your app. This will set environment variables for you that you can pull into your local development environment. This is all covered in their "Getting Started with Vercel Postgres" guide. If you use a different database hosting solution, they'll offer similar instructions for fetching your connection string.

However, I wanted a local database I could modify and blow away as needed for this project. Out of the box, Drizzle does not offer a database initialization command, and I needed something that could be easily and quickly replicated across different machines. For this, I pulled in Docker Compose and set up my docker-compose.yaml as follows (a reconstructed sketch; the service name and image tag are assumptions):

```yaml
version: '3.8'
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: my-local-db
    ports:
      - '5432:5432'
```

The three most important values to note here are the values under the environment key and the ports. These are what allowed me to determine my connection string. For this example, it would be: postgresql://postgres:postgres@localhost:5432/my-local-db. With the compose file set, I ran docker-compose up -d to get the container running, which also initializes the database.
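For reference, the schema file that drizzle.config.ts points at can start as small as a single table. A minimal sketch (the table and its columns are hypothetical):

```typescript
// db/schema.ts
import { pgTable, serial, text, timestamp } from 'drizzle-orm/pg-core'

export const users = pgTable('users', {
  id: serial('id').primaryKey(),
  name: text('name').notNull(),
  createdAt: timestamp('created_at').defaultNow(),
})
```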
Now, we can connect and operate on the database as needed.

Creating the database connection

To make operations in our app, we need a database connection instance. I put mine in db/drizzle.ts to keep all my related database files together. My file looks like this (a reconstructed sketch; the production check is an assumption):

```typescript
import { sql } from '@vercel/postgres'
import { drizzle as drizzleVercel } from 'drizzle-orm/vercel-postgres'
import { drizzle as drizzlePg } from 'drizzle-orm/postgres-js'
import postgres from 'postgres'
import * as schema from './schema'

// Vercel adapter in production; generic postgres-js adapter locally.
export const db =
  process.env.NODE_ENV === 'production'
    ? drizzleVercel(sql, { schema })
    : drizzlePg(postgres(process.env.DATABASE_URL ?? ''), { schema })
```

This is a bit more complicated because we're using two different Drizzle adapters depending on our environment. For local development, we're using the generic PostgreSQL adapter, but for production, we're using the Vercel adapter. While these have different initializers, they share the same output interface, which is why this works. The same wouldn't be true if you used MySQL locally and PostgreSQL in production. If we chose RDS or a similar PostgreSQL solution, we could use the same postgres adapter in both cases and change only the connection string. That's all this file does at the end: it detects the environment and uses the matching adapter.

If we go to use this exported instance now, it won't be able to find our tables or provide type safety. This is because we haven't created our database tables yet.

Creating database tables

To get our database tables created, we're going to leverage Drizzle's migrations. These allow us to make atomic changes to our database as our schema evolves. To accomplish this, we define the schema changes in our schema files as specified in our config. Then we can run npm run drizzle-kit generate:pg (or whatever script runner you use) to generate the migration SQL file, which will be located where we specified in our config. You want to check this file into source control!

By default, Drizzle doesn't allow you to override migration names _yet_ (they're working on it!), so if you want to make your migration file more descriptive, you need to take both of these steps:

1. Rename the migration file. Take note of the old name.
2. Locate _journal.json. It should be in a folder called meta inside your migration folder. From there, find the old file name and replace it with the new file name.

Now, we need to run the migrations. I had some issues with top-level awaits and tsx like the Drizzle docs recommend, so I had to go a slightly different route, and I'm still not thrilled about it. I made a file called migrate.mts that I stored next to my drizzle.ts. In theory, I should have been able to import my drizzle connection instance here and use that, but I ran out of time to figure it out and ended up repeating myself across files. Here's the file (a reconstructed sketch; note the Vercel-specific migrator import path mentioned in the update at the top):

```typescript
import * as dotenv from 'dotenv'

dotenv.config()

const migrationsFolder = './db/migrations'

if (process.env.NODE_ENV === 'production') {
  // Vercel Postgres ships its own migrator entry point.
  const { sql } = await import('@vercel/postgres')
  const { drizzle } = await import('drizzle-orm/vercel-postgres')
  const { migrate } = await import('drizzle-orm/vercel-postgres/migrator')
  await migrate(drizzle(sql), { migrationsFolder })
} else {
  const { default: postgres } = await import('postgres')
  const { drizzle } = await import('drizzle-orm/postgres-js')
  const { migrate } = await import('drizzle-orm/postgres-js/migrator')
  // max: 1 is the pool setting recommended by the Drizzle team for migrations.
  const client = postgres(process.env.DATABASE_URL ?? '', { max: 1 })
  await migrate(drizzle(client), { migrationsFolder })
  // The local connection must be closed explicitly when we're done.
  await client.end()
}
```

Here, I'm connecting to the correct database for the environment and then running the Drizzle migrate command. For local development, I set my connection pool to a max of 1. This probably isn't necessary for this use case, but when connecting to a cluster, it is a recommended best practice from the Drizzle team. For the local case, I also had to close the connection to the db when I was done. In both cases, though, I had to specify the migrations folder location. I could probably DRY this up a bit, but hopefully, the Drizzle team will eliminate this need and use the config to set this value in the future.

With the above file set and our schema generated, we can now run npm run tsx db/migrate.mts, and our database will have our latest schema. We can now use the db client to fetch and store data in our database.

Note: Jason uses the push command here. This is fine for an initial database creation, but it will override tables in the future. The migration path is the recommended pattern for non-destructive database updates.
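As a quick illustration of what using the client looks like once migrations have run (building on the hypothetical users table from the schema sketch above):

```typescript
import { db } from './db/drizzle'
import { users } from './db/schema'

// Insert a row, then read everything back; both calls are fully
// typed against the schema definition.
await db.insert(users).values({ name: 'Ada' })
const allUsers = await db.select().from(users)
```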
Conclusion

Congratulations! We can connect to our database and perform CRUD operations against our tables. We can also use Drizzle Studio to modify and inspect our data. To review, we had to:

1. Set up a local PostgreSQL server via a tool like Docker Compose
2. Configure the database adapter to work in local mode
3. Generate a schema
4. Create a script to execute migrations so our database is aligned with our schema

This was my first experience with Drizzle, and I enjoyed its SQL-like interfaces, which made it easy to quickly prototype my project. I noticed in their Discord that they're about to have a full-time maintainer, so I'm excited to see what their future looks like. I hope you enjoy it too!
Mar 8, 2024