
Functional Programming in TypeScript Using the fp-ts Library: Deep Dive Into Option's Methods and Other Useful fp-ts Operators


Welcome back to our blog series on functional programming with fp-ts! In our previous posts, we covered the building blocks of the fp-ts library, the pipe and flow operators, and introduced one of its most useful types: the Option type. Now let's put that knowledge to work and combine all the blocks: in this post, we'll take a deep dive into fp-ts' Option type and explore its fundamental methods such as fold, fromNullable, and getOrElse. We'll then leverage the map, flatten, and chain operators, combining them with the pipe operator we already know to compose expressive and concise code.

Understanding Option

The Option type, also known as Maybe, represents values that might be absent. It is particularly useful for handling scenarios where a value could be missing, eliminating the need for explicit null checks. fp-ts equips us with a rich set of methods and operators to work with Option efficiently.
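To make the idea concrete, here's a minimal sketch (the user store and lookup function are illustrative, not part of the fp-ts API) where the possible absence of a result is made explicit in the return type instead of being signaled by null:

import { Option, some, none } from 'fp-ts/lib/Option';

// A hypothetical in-memory user store, used only for illustration.
const users: Record<string, string> = { '1': 'Ada', '2': 'Grace' };

// The possible absence of a user is now visible in the return type.
const findUser = (id: string): Option<string> =>
  id in users ? some(users[id]) : none;

findUser('1'); // Some("Ada")
findUser('42'); // None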

fold: The fold method allows us to transform an Option value into a different type by providing two functions: one for the None case, and another for the Some case. The pipe operator enhances the readability of the code by enabling a fluent and concise syntax.

import { Option, some, fold } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const value: Option<number> = some(10);

const result = pipe(
  value,
  fold(
    () => 'No value',
    (x: number) => `Value is ${x}`
  )
); // result: "Value is 10"

In this example, we have an Option value some(10), representing the presence of the number 10. We use the pipe operator from fp-ts to chain the value through the fold function, passing in two functions. The first function, () => 'No value', handles the None case when the Option is empty. The second function, (x: number) => `Value is ${x}`, handles the Some case and receives the value inside the Option (in this case, 10). The resulting value is "Value is 10".
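For completeness, the same pipeline applied to a None value takes the first branch instead; a quick sketch:

import { Option, none, fold } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const empty: Option<number> = none;

const emptyResult = pipe(
  empty,
  fold(
    () => 'No value', // this branch runs, because the Option is None
    (x: number) => `Value is ${x}`
  )
); // emptyResult: "No value"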

fromNullable: The fromNullable function converts nullable values (e.g., null or undefined) into an Option. We can leverage pipe to make the code more readable and maintainable.

import { Option, fromNullable } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const value: string | null = 'Hello, world!';

const optionValue: Option<string> = pipe(value, fromNullable);

In the example, we have a value typed as string | null that currently holds 'Hello, world!'. By using the pipe operator and passing the value through fromNullable, fp-ts checks whether the value is null or undefined. If it is, it produces a None value, indicating the absence of a value. Otherwise, it wraps the value inside Some. So, in this case, the resulting optionValue is Some("Hello, world!").
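The complementary case is just as important: when the input really is null or undefined, fromNullable produces None. A small sketch (the helper name is illustrative):

import { Option, fromNullable } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

// fromNullable maps both null and undefined to None.
const toOption = (value: string | null | undefined): Option<string> =>
  pipe(value, fromNullable);

toOption(null); // None
toOption(undefined); // None
toOption('Hello again'); // Some("Hello again")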

getOrElse: The getOrElse method allows us to extract the value from an Option or provide a default value if the Option is None. The pipe operator helps compose getOrElse seamlessly with other operations.

import { Option, some, none, getOrElse } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const optionValue: Option<number> = some(10);

const value = pipe(optionValue, getOrElse(() => 0)); // value: 10

const noneValue: Option<number> = none;

const defaultValue = pipe(noneValue, getOrElse(() => 0)); // defaultValue: 0

In the first example, we have an Option value some(10). Using the pipe operator and passing the Option through getOrElse, we provide the function () => 0 as a fallback. Since the Option is Some(10), the function is not executed, and the resulting value is 10. In the second example, we have an Option value none, representing the absence of a value. Again using the pipe operator and getOrElse, we provide a default of 0. Since the Option is None, the function () => 0 is executed, resulting in the default value of 0.
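A common real-world pattern combines fromNullable and getOrElse to read a possibly-missing value with a fallback. Here's a sketch, assuming a hypothetical config object:

import { fromNullable, getOrElse } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

// A hypothetical configuration object used only for illustration.
const config: { port?: number } = {};

// fromNullable lifts the possibly-undefined field into an Option,
// and getOrElse unwraps it with a default.
const port = pipe(
  config.port,
  fromNullable,
  getOrElse(() => 3000)
); // port: 3000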

Map, Flatten, and Chain Operators

Building upon the foundational methods of Option, fp-ts provides powerful operators like map, flatten, and chain, which enable developers to compose complex operations in a functional and expressive manner.

map: The map operator allows us to transform the value inside an Option using a provided function. It applies the function only if the Option is Some.

import { Option, some, map } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const optionValue: Option<number> = some(10);

const mappedValue: Option<string> = pipe(optionValue, map((x: number) => `Value is ${x}`)); // mappedValue: Some("Value is 10")

In this example, we have an Option value some(10). Using the pipe operator and passing the Option through map, we provide a function (x: number) => `Value is ${x}`. Since the Option is Some(10), the function is applied to the value inside the Option, resulting in a new Option Some("Value is 10").
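Conversely, map leaves a None untouched: the mapping function is simply never called. A quick sketch:

import { Option, none, map } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const empty: Option<number> = none;

// The function is never invoked; None passes straight through.
const stillNone: Option<string> = pipe(
  empty,
  map((x: number) => `Value is ${x}`)
); // stillNone: None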

flatten: The flatten operator allows us to flatten nested Options into a single Option. It simplifies the resulting structure when we have computations that may produce an Option inside another Option. The pipe operator assists in composing flatten operations seamlessly.

import { Option, some, flatten } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const nestedOption: Option<Option<number>> = some(some(10));

const flattenedOption: Option<number> = pipe(nestedOption, flatten); // flattenedOption: Some(10)

In the example, we have a nested Option some(some(10)). Using the pipe operator and passing the nested Option through flatten, fp-ts flattens the structure, resulting in a single Option Some(10).
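Nested Options typically arise when one optional lookup feeds into another. Here's a sketch with two illustrative lookup functions showing where the nesting comes from and how flatten resolves it:

import { Option, some, none, map, flatten } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

// Two illustrative optional lookups.
const findUserId = (name: string): Option<number> =>
  name === 'Ada' ? some(1) : none;
const findEmail = (id: number): Option<string> =>
  id === 1 ? some('ada@example.com') : none;

// Mapping an Option-returning function yields Option<Option<string>>...
const nested: Option<Option<string>> = pipe(findUserId('Ada'), map(findEmail));

// ...which flatten collapses back into a single Option.
const email: Option<string> = pipe(nested, flatten); // Some("ada@example.com")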

chain: The chain operator, also known as flatMap or >>=, combines the functionalities of map and flatten. It allows us to apply a function that produces an Option to the value inside an Option, resulting in a flattened Option.

import { Option, some, none, chain } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

const optionValue: Option<number> = some(42);

const chainedValue: Option<string> = pipe(
  optionValue,
  chain((x: number) => (x > 10 ? some(`Value is ${x}`) : none))
);
// chainedValue: Some("Value is 42")

const noneValue: Option<number> = none;

const noneChainedValue: Option<string> = pipe(
  noneValue,
  chain((x: number) => (x > 10 ? some(`Value is ${x}`) : none))
);
// noneChainedValue: None

In the first example, we have an Option value some(42). Using the pipe operator and passing the Option through chain, we provide a function that checks if the value is greater than 10. If it is, it returns Some(`Value is ${x}`), where x is the value inside the Option. Since the value is 42, which is greater than 10, the resulting Option is Some("Value is 42"). In the second example, we have an Option value none, representing the absence of a value. When passing it through chain with the same function as before, the function is not executed because the Option is None, resulting in None.
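Putting it all together, these operators compose naturally in a single pipe. Here's a sketch (the parsing and validation helpers are illustrative) that safely parses, validates, and formats raw input without a single null check:

import { Option, some, none, fromNullable, chain, map, getOrElse } from 'fp-ts/lib/Option';
import { pipe } from 'fp-ts/lib/function';

// Illustrative helpers: parse a string into a number, then keep it only if positive.
const parseNumber = (raw: string): Option<number> => {
  const n = Number(raw);
  return Number.isNaN(n) ? none : some(n);
};
const ensurePositive = (n: number): Option<number> => (n > 0 ? some(n) : none);

const describe = (raw: string | null): string =>
  pipe(
    raw,
    fromNullable, // null -> None
    chain(parseNumber), // 'abc' -> None
    chain(ensurePositive), // '-5' -> None
    map((n) => `Got ${n}`), // '42' -> Some("Got 42")
    getOrElse(() => 'Invalid input')
  );

describe('42'); // "Got 42"
describe('abc'); // "Invalid input"
describe(null); // "Invalid input"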

Conclusion

fp-ts provides powerful methods and operators for working with the Option type, allowing developers to embrace functional programming principles effectively. By understanding the fold, fromNullable, and getOrElse methods, as well as the map, flatten, and chain operators, and combining them with the pipe operator, developers can write expressive, maintainable, and resilient code. Explore these tools, unlock their potential, and take your functional programming skills to the next level!
