Deploying a Vue Static Front-End to AWS

The Amazon Web Services (AWS) ecosystem is a massive field of over 200 services, capable of getting your projects up into the cloud in no time. To help introduce you to it, I want to show just how quickly you can deploy a static front-end using AWS.

Today, we will scaffold out a Vue project and deploy it to AWS with a custom domain name, secured with SSL/TLS (HTTPS), and pushed to a content delivery network (CDN). This knowledge will help you start tinkering with the many services AWS has to offer.

Dependencies

To follow this guide, you will need:

  • An AWS account
  • Node.js, with either npm or yarn installed
  • A registered domain name (for the custom domain and SSL/TLS steps)

Why deploy to AWS?

Right off the bat, I want to say that deploying a static front-end to AWS is NOT the easiest way to deploy a static website! Tools like Netlify, Vercel, and GitHub Pages can get the job done a LOT more easily than how we will be doing it today.

Even AWS has a solution to compete with the growing array of front-end deployment tools: AWS Amplify, which they describe as the "Fastest, easiest way to build mobile and web apps that scale".

We won't be using Amplify or any of those other tools, though. The goal of manually deploying to AWS is to gain a better understanding of the underlying AWS services.

ViteJS, our build tool

We can deploy any static front-end this way, so in this article, we will use ViteJS as our front-end tooling to generate a Vue project.

Vite Logo

Scaffolding out a Vue application with Vite is as simple as running a single command, and following the prompts:

For npm: npm init @vitejs/app
For yarn: yarn create @vitejs/app

Vite CLI

While Vite offers us the option to use many different front-end frameworks, we will choose Vue.

After running yarn create, cd into the created directory, run yarn to install the dependencies, and then run yarn build.
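
Taken together, the steps look like this (a sketch assuming yarn; my-vue-app is a placeholder project name you would choose during the prompts):

# scaffold a Vue project with Vite
yarn create @vitejs/app my-vue-app --template vue
cd my-vue-app

# install dependencies, then produce the production bundle in dist/
yarn
yarn build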

After doing this, we will have a production static deployment bundle that we can deploy using AWS.

Vite Static Build

AWS Services

There are 4 AWS services that we will use to deploy our static build:

  • Simple Storage Service (S3)
  • CloudFront
  • AWS Certificate Manager (ACM)
  • Route 53

Each of these services will work in some way with each other to provide the full solution we need to deploy our static web app.

Simple Storage Service (S3)

Amazon S3 will be the workhorse of our deployment setup. Amazon describes the service as "Object storage built to store and retrieve any amount of data from anywhere." We will be using S3 to store our deployment bundle in a cheap and scalable manner.

To deploy our bundle to S3, go to your AWS console, and load up the S3 service. From there, click on 'Create Bucket'.

Create Bucket Button

After that, give your bucket a name and assign it a region, be sure to unblock all public access to this bucket (we want anyone on the web to be able to see our site), and then create the new bucket.

Bucket Name
Public Access Settings
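
If you prefer the command line, the same setup can be scripted with the AWS CLI (a sketch, assuming the example bucket name easy-vue-deploy and the us-east-1 region):

# Create the bucket (bucket names are globally unique, so use your own)
aws s3 mb s3://easy-vue-deploy --region us-east-1

# Unblock public access so the bucket can serve a public website
aws s3api put-public-access-block --bucket easy-vue-deploy \
  --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false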

You will now have a new, empty bucket! Let's fill it with the production bundle ViteJS created for us earlier.

Select your bucket, click 'Upload', drag and drop all of the bundled files from the dist directory that ViteJS generated, and upload them.

Bucket Created
Bucket Upload Button
Uploaded Content
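
The console upload works fine, but for repeat deployments the AWS CLI equivalent is handy (again assuming the example bucket name):

# Sync the Vite build output to the bucket; --delete removes files that no longer exist locally
aws s3 sync dist/ s3://easy-vue-deploy --delete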

We now have content in our bucket. In order to allow the public to see this content, we need to enable static web hosting, and add a bucket policy to allow anyone to retrieve objects from our bucket.

To enable static web hosting, go to your bucket's properties, and scroll all the way down to the static hosting settings. Edit the settings to enable hosting, and set the index document to index.html.

Bucket Properties
Edit Static Host Settings
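
The same setting can also be applied from the AWS CLI (a sketch using the example bucket name):

# Enable static website hosting and set index.html as the index document
aws s3 website s3://easy-vue-deploy --index-document index.html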

To add the needed bucket policy, go to your bucket's permissions (instead of properties), and edit the bucket policy to include the following:

{
  "Id": "Policy1617109982386",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1617109981105",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::easy-vue-deploy/*",
      "Principal": "*"
    }
  ]
}

Bucket Policy

You'll need to edit the "Resource" line to use your own bucket name; this example uses the bucket name easy-vue-deploy.
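
If you save the policy to a local file (policy.json is just a name chosen for this example), the CLI equivalent is:

# Attach the bucket policy granting public read access to all objects
aws s3api put-bucket-policy --bucket easy-vue-deploy --policy file://policy.json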

After all of this is done, you will see an endpoint that you can click on to test your deployment. Just go to your bucket properties, and scroll down.

S3 Endpoint
S3 Deploy

We have now successfully deployed a ViteJS bundle to the web! You'll notice that this endpoint isn't protected via HTTPS, nor is it as fast as we want it to be. If you peek at the DevTools, you'll see that we are using http/1.1 as our protocol, and the load time averages around 360ms.

S3 Bucket Performance

We are going to fix these things by serving our content through a CDN.

CloudFront

CloudFront is AWS's global content delivery network. It can work seamlessly with any AWS origin (like the S3 bucket we made earlier) to cache content in more than 225 Points of Presence, enabling a SUPER fast user experience. Let's enable this for our deployment.

CloudFront Points of Presence

Go to the CloudFront service, and get started by creating a distribution.

Create CloudFront Distribution

We are going to populate four settings:

  • Origin Domain Name
  • Origin ID
  • Viewer Protocol Policy
  • Default Root Object

Selecting the Origin Domain Name should display a dropdown list of available AWS origins, one of them being the S3 bucket we made. Select that.

Selecting our Origin Domain Name will auto-populate the Origin ID.

CloudFront Distribution Settings

Change the Viewer Protocol Policy to redirect HTTP to HTTPS.

Viewer Protocol

Finally, set the Default Root Object to index.html, and create the distribution. This process takes a few minutes to complete, as I imagine AWS is populating its edge locations with our site's content.

After the distribution is created, we can test out the deployment by selecting our distribution and going to the CloudFront domain name it generated for us.

Deploy Info
Finished CloudFront Deploy

We can see that the site is protected by HTTPS, and the DevTools show a boost to our site's performance: the site loads about 50ms faster for me, and now uses the http/2 protocol.

CloudFront DevTools
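
One operational note: because CloudFront caches content at the edge, later redeploys won't show up until the cache is invalidated. A sketch of a typical redeploy (the distribution ID placeholder is yours to fill in from the CloudFront console):

# Push the new build to S3, then purge the CDN cache so edges fetch fresh files
aws s3 sync dist/ s3://easy-vue-deploy --delete
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"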

Now, in order to grant our site a custom domain, we will need 2 more services: AWS Certificate Manager and Route 53.

AWS Certificate Manager (ACM)

If you're familiar with working on Linux servers and using Certbot, then ACM is going to be a breeze. This service is what we will use to provision an SSL/TLS certificate for our custom domain name.

Go to the ACM service, request a public certificate, and add the domain names to the request. If you want the certificate to be valid for all subdomains, add another domain name to the request, prefixed with an asterisk (e.g., *.example.com).

Add Domains
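
One thing worth knowing: certificates used with CloudFront must be issued in the us-east-1 (N. Virginia) region. If you'd rather request the certificate from the CLI, here's a sketch (example.com stands in for your domain):

# Request a DNS-validated certificate covering the apex domain and all subdomains
aws acm request-certificate \
  --domain-name example.com \
  --subject-alternative-names "*.example.com" \
  --validation-method DNS \
  --region us-east-1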

After this, we'll need to validate that we own the domain we're requesting a certificate for. We can choose DNS or Email validation. For this example, let's use DNS validation.

DNS Validation

After that, skip adding tags for now (tags can help you organize AWS resources when you have a lot of them), and finish the request to set it in progress. You'll be greeted with a screen asking you to add a CNAME record to your domain's DNS configuration.

CNAME records to add

We can add our CNAME records in Route 53.

Route 53

AWS describes Route 53 as "a reliable and cost-effective way to route end users to Internet applications." What we'll do in Route 53 depends on where you bought your domain name.

If you bought your domain name outside of Route 53, you'll need to create a hosted zone in Route 53 to get nameservers you can point your domain at. For example, I bought my domain, matthewpagan.com, from GoDaddy, so I needed to create a hosted zone, and then edit my GoDaddy nameservers to point to the ones generated by Route 53.

If you need to create a hosted zone, go to the Route 53 service, create a hosted zone (the only information you'll need is your domain name), take note of the nameservers generated, and switch your nameservers at your domain registrar.

Hosted Zone
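
The hosted zone can also be created from the CLI (a sketch; the caller reference just needs to be a unique string, so a timestamp works):

# Create the hosted zone; the response lists the nameservers to set at your registrar
aws route53 create-hosted-zone --name example.com --caller-reference "$(date +%s)"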

After you've either created your hosted zone or bought your domain name through Route 53, you can create a CNAME record with the information provided by AWS Certificate Manager (ACM) to verify that you control your domain. Validation takes a few minutes after you create the CNAME record.

To check the status of the validation, go to ACM, and look at the status of your certificate.

When the certificate is successfully issued, the status of your certificate will change from 'Pending Validation' to 'Issued'.

Issued Certificate

Add domain to CloudFront distribution

Once we have a valid certificate for our domain name, we can apply that domain/sub-domain to CloudFront. Go to the CloudFront service, select your distribution, and edit the settings.

Set the Alternate Domain Names field to the domain/sub-domain you just acquired the valid SSL/TLS certificate for, select 'Custom SSL Certificate' instead of the default CloudFront certificate, pick your certificate from the dropdown, and save the changes.

CloudFront Domain

Create an A record in Route 53 pointing to CloudFront

Now, we can point to that CloudFront distribution in Route 53. Create an A record in Route 53 for your domain/sub-domain, and point it to the CloudFront distribution. You'll know things are configured correctly because your distribution will appear as an option when you go to select a record target.

Route 53 A Record
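
Under the hood, this is an alias A record. If you'd rather script it, here is a sketch of the change batch (example.com and d1234abcd.cloudfront.net are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for every CloudFront distribution):

{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}

Save it as record.json and apply it with aws route53 change-resource-record-sets --hosted-zone-id YOUR_ZONE_ID --change-batch file://record.json.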

Conclusion

After following the above steps, we now have a deployed Vue static front-end, protected by SSL/TLS, using a custom domain name, backed by a CDN, and hosted by AWS!

Finished Deployment

While there certainly are easier ways to deploy static content online, by manually setting up the AWS services yourself, you should now have a deeper understanding of the moving parts. You may even be ready to dive into the deep end and try deploying some back-end solutions for your front-end to consume!
