
Building Web Applications using Astro - What makes it special?

This article was written over 18 months ago and may contain information that is out of date. Some content may be relevant but please refer to the relevant official documentation or available resources for the latest information.

You might already have heard that there is a new player in the static site generator space, one that is generating a lot of excitement and asking some hard questions of the modern JavaScript ecosystem. I'm talking about Astro, a framework constructed around two simple but revolutionary concepts:

  1. Accepting components from any UI framework
  2. Partial hydration

I have recently been working on a site built from the ground up using Astro, and even in the early state it's in, I've been able to see the amazing possibilities it opens up in web development. So, let me give you a tour of what those two points mean, both conceptually and for the future of JavaScript development.

Bring your own framework

The modern JavaScript ecosystem is divided into separate and sometimes very distant camps, based on which UI library you're using: Next for React, Nuxt for Vue, SvelteKit for Svelte, and so on. It doesn't have to be this way. All of these UI libraries ultimately use the same JavaScript to output and transform the same HTML and CSS. Why should you have to replace your whole toolkit just to use a different brand of paint?

Astro uses a set of plugin renderers to support components written in different formats. Currently, there are renderers for React, Preact, Vue, and Svelte, and Astro also supports its own minimalist templating format in .astro files. However, there's no reason why this couldn't be expanded by the community in the future.
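At the time of writing (pre-1.0 Astro), renderers are enabled through the project's config file. The package names below are based on the official renderer plugins, but treat this as a sketch rather than a definitive configuration, since the exact setup may change as the project evolves:

```javascript
// astro.config.mjs (hypothetical sketch; renderer package names are
// assumptions based on the official pre-1.0 plugins)
export default {
  renderers: [
    '@astrojs/renderer-react',
    '@astrojs/renderer-preact',
    '@astrojs/renderer-vue',
    '@astrojs/renderer-svelte',
  ],
};
```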

Of course, to hydrate components on the client you need to bring the framework's runtime with you, so it's never going to be particularly efficient to mix and match components from different frameworks on the same page. But, if you absolutely have to, it's now an option. What's much more important is the ability to leverage the potential of Astro, no matter which set of tools you're most familiar with. In a future where more parts of the ecosystem plug into different libraries like this, we might even see more iteration in the UI library space without new entrants having to worry about producing an end-to-end build, serve and hydrate story.

Just a sip (of client hydration)

The framework compatibility is great, but at the end of the day, the only thing it meant for us was that using Astro with React was possible. It's all well and good that you can use it, but why would you want to? Partial hydration is the answer.

You see, most websites you will ever build have some interactivity beyond just clicking hyperlinks. It isn't the 90s anymore; rich, app-like behavior is increasingly part of user expectations. This is why frameworks like Vue and React have taken off the way they have. However, most websites you will ever build also have large sections with no interactivity beyond clicking hyperlinks. Maybe you have a blog that is totally static except for a search bar and a comment section. Or a marketing site that includes a carousel and a navigation popover. Or even a very complex, mostly interactive app that needs some highly-performant static marketing and e-commerce pages to sell it to customers.

This is the "islands of interactivity" model. Most web applications are a mix of static content and interactive widgets. However, in all of our frameworks, even those that pre-generate HTML at build time or on the server using Server-Side Rendering, if you want to use components, you have to download those components on the client. Even if they're never going to do anything! A set of paragraphs with CSS classes that won't ever change? The client has to download that twice: once in the HTML that will actually be displayed, and then again as a sluggishly-parsed JavaScript bundle that will execute during hydration but have exactly no effect on the result.

Astro says "no" to this inefficient use of resources, and gives us a rich set of tools to send the client only the JavaScript that is minimally necessary to enable interactivity. This is called "partial hydration" because only the islands of interactivity, those widgets that you mark as needing to change, get "hydrated" by loading the client-side version of their components.

Astro components

All of this is done with a new syntax for HTML templating that allows you to include components from your framework of choice, with annotations that decide whether they should hydrate, as well as JavaScript to execute at build time. The following is an Astro component extracted from a project I am currently working on, only lightly edited, showing most of the major features:

---
import '$system/src/globals/global-styles'
import { fontHeadTags } from '$system/src/globals/font-import.mjs'
import { sprinkles } from '$system/src/sprinkles/sprinkles.css'
import { reactTheme } from '$system/src/themes/themes.css'
import { Sidebar } from '$system/src/components/sidebar.tsx'
import { MobileNav } from '../components/mobile-nav.tsx'
import { fetchCategories } from '../models/category.ts'

const { title } = Astro.props

const categories = await fetchCategories()
---
<!DOCTYPE html>
<html class={reactTheme} lang="en">
  <head>
    {fontHeadTags}
    <title>react.framework.dev | {title}</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta charset="UTF-8"/>
  </head>
  <body>
    <MobileNav client:media="(max-width:1024px)" categories={categories} />
    <Sidebar>
      <div class={sprinkles({ layout: "stack", gap: 24 })}>
        {categories.map(category => (
          <a href={`/categories/${category.slug}`}>
            {category.name}
          </a>
        ))}
      </div>
    </Sidebar>
    <main class={sprinkles({ marginX: 64, marginY: 48 })}>
      <slot />
    </main>
  </body>
</html>

The section between the --- is "frontmatter" (a concept borrowed from Markdown-based site generators), but as opposed to Markdown frontmatter, it is not restricted to just declaring data. Any JavaScript code can be imported or run in frontmatter, and it will be executed whenever the template is rendered during build time. This makes it very convenient and powerful for fetching and processing data.
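For example, the fetchCategories helper imported in the frontmatter above could be an ordinary async function. The implementation below is a hypothetical sketch (the article doesn't show the real data source or field names); the point is that because it runs only at build time, none of this code or its dependencies ever reaches the client:

```javascript
// Hypothetical sketch of the fetchCategories helper used in the frontmatter.
// Frontmatter runs only at build time, so this could query a CMS or database
// directly without leaking credentials into the client bundle.

// Turn a human-readable name into a URL-safe slug.
const slugify = (name) =>
  name
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/(^-|-$)/g, '');

async function fetchCategories() {
  // Placeholder data; a real project might fetch() from a CMS endpoint here.
  const records = [{ name: 'State Management' }, { name: 'Routing' }];
  return records.map((record) => ({
    name: record.name,
    slug: slugify(record.name),
  }));
}
```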

The rest of the file is a JSX template, which hopefully looks fairly familiar. This isn't full React — no reactive state or event handlers — just a way to produce static HTML from a syntax that requires less specialized knowledge than something like Handlebars, because you can just use .map for loops and && for conditional rendering. You can include any HTML tag, any component in the framework of your choice, and one or more <slot /> tags, which will be replaced with the inner HTML — or "children" in React parlance — that the component is rendered with. The code above is for a main layout component, so our <slot /> will contain the page content.
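To illustrate how the <slot /> gets filled, here is a hypothetical page that uses a layout like the one above (the file paths, import name, and content are assumptions, not taken from the actual project):

```astro
---
// src/pages/about.astro (hypothetical page consuming the layout)
import BaseLayout from '../layouts/base-layout.astro'
---
<BaseLayout title="About">
  <!-- Everything inside the component tag replaces the layout's <slot /> -->
  <h1>About this site</h1>
  <p>This content is rendered into the layout at build time.</p>
</BaseLayout>
```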

The final special feature of Astro components is client: directives. You can see an example on the MobileNav component above. Client directives can be placed on any component authored in a framework that has a client runtime (so, currently, React, Preact, Vue, or Svelte), and when the conditions specified by the directive are met, Astro will load the framework runtime and that component's code, and hydrate it to make it interactive. There are a number of directives for different conditions:

  • client:load hydrates on page load. This mirrors how SPAs are usually loaded.
  • client:idle hydrates as soon as the main thread is free. This should theoretically get you interactivity as soon as possible without competing with more critical work during page load, but I haven't experimented with it.
  • client:media hydrates when the browser matches a media query. This is great for content that will only be shown to certain devices, like our mobile nav!
  • client:visible hydrates when the component becomes visible. This is great for content that might be below the fold. You can load the page as fast as possible, and only download Javascript if the user scrolls.
  • client:only doesn't output anything at build time and, instead of hydrating, performs from-scratch client-side rendering on load. It's almost always better to use client:load and provide a placeholder, but there can be cases where a component simply cannot be made to run outside a browser.
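Put together, a page mixing these directives might look like the following sketch (the component names and paths are hypothetical, not from the project above):

```astro
---
import SearchBar from '../components/search-bar.tsx'
import Comments from '../components/comments.tsx'
import MobileNav from '../components/mobile-nav.tsx'
---
<!-- Hydrates immediately on page load, like a traditional SPA widget -->
<SearchBar client:load />

<!-- Downloads its JavaScript only if the user scrolls it into view -->
<Comments client:visible />

<!-- Downloads its JavaScript only on small screens -->
<MobileNav client:media="(max-width: 1024px)" />
```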

The result of the above is that on desktop we have a fully-functional site with no JavaScript at all — Sidebar is a React component, but it just renders a static menu with <a> tags for navigation — and on mobile, we download React and only what is needed to make the MobileNav component work. This allows us to still use JavaScript and JavaScript UI frameworks, but our users only pay the performance price for the features that they actually see, without an extra tax just to allow us to have a unified developer experience!

I hope you now understand why Astro is exciting, and why we jumped to try it out even though it's still in its early days. If you want to learn more about it, check out their documentation and their extensive repository of examples.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.

