
Detect Hand Sign Languages with TensorFlow


Interested in learning how to use TensorFlow to detect hand sign languages in your apps? By the end of this read, you will know how to add TensorFlow to an application in a few simple steps. In today's example, we will be using Vue.

What is TensorFlow?

TensorFlow is an open-source, end-to-end platform for building machine learning applications, meaning it covers the entire workflow from data preparation through training to deployment. TensorFlow lets you build dataflow graphs and structures that define how data moves through a graph, taking inputs as multi-dimensional arrays called tensors. You can read more on TensorFlow here.

What is a Model?

A model is a function with learnable parameters that maps an input to an output. A well-trained model will provide an accurate mapping from the input to the desired output.
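As a toy illustration (this is not TensorFlow code; `makeLinearModel` and its parameter values are made up for this example), a model can be thought of as a function whose parameters are tuned during training:

```javascript
// A "model" is just a function with learnable parameters.
// Here the parameters (weight and bias) are hard-coded for illustration;
// training would adjust them until the mapping fits real data.
function makeLinearModel(weight, bias) {
  return (x) => weight * x + bias;
}

// Suppose training produced weight = 2 and bias = 1:
const model = makeLinearModel(2, 1);
console.log(model(3)); // maps the input 3 to the output 7
```

A well-trained model is simply one whose parameters produce outputs close to the ones you want.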

TensorFlow Models

TensorFlow provides a collection of pre-trained models, and they fall into four defined categories:

  • Vision: Analyze features in images and videos.
  • Body: Detect key points and poses on the face, hands, and body with models from MediaPipe.
  • Text: Enable NLP in your web app using the power of BERT and other Transformer encoder architectures.
  • Audio: Classify audio to detect sounds.

If you want to go into more detail, check out TensorFlow Models.

Each category is broken down further into individual models. For our use case, we will use the Body category, which includes the hand pose detection model we need in order to detect the hand signs.

Hand Pose Detection

This model predicts the keypoints of the hands as both 2D and 3D multi-dimensional arrays.

An example of a 2D array is [[1,2],[3,5],[7,8],[20,44]], while a 3D array looks like [[1,2,5],[3,5,8],[7,8,6],[20,44,100]].

As we established above, this hand pose detection model comes from MediaPipe, and it provides two model types: lite and full. Prediction accuracy increases from lite to full, while inference speed decreases, i.e. the response time gets slower as the accuracy increases.

What do we need?

There are a few dependencies we need to get things working, and I will also assume that you already have your project set up.

You will need to add these dependencies to the project:


yarn add @tensorflow-models/hand-pose-detection

# Run the below commands if you want to use TF.js runtime.
yarn add @tensorflow/tfjs-core @tensorflow/tfjs-converter
yarn add @tensorflow/tfjs-backend-webgl

# Run the below commands if you want to use MediaPipe runtime.
yarn add @mediapipe/hands

yarn add fingerpose

Above, in the commands, you will notice we added fingerpose. Let's talk a little about what we need fingerpose for.

Fingerpose

Fingerpose is a gesture classifier for hand landmarks detected by MediaPipe hand pose detection. It also allows you to define your own hand gestures, which means that a gesture that signifies the letter Z could instead signify Hello, based on your fingerpose data. We will see an example of how the data looks in a bit. You can check out fingerpose for more details.
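To give a feel for that data, here is a plain-JavaScript sketch of a gesture definition. It mirrors the general shape of fingerpose's GestureDescription API (a name plus per-finger curl and direction rules with confidence weights), but the object below is illustrative data, not actual fingerpose constants or calls:

```javascript
// Illustrative sketch of a gesture definition (not real fingerpose code).
// Each rule says: this finger should have this curl or direction, and the
// weight is how much that rule contributes to the match confidence.
const helloGesture = {
  name: "Hello",
  curls: [
    { finger: "Thumb", curl: "NoCurl", weight: 1.0 },
    { finger: "Index", curl: "NoCurl", weight: 1.0 },
  ],
  directions: [
    { finger: "Index", direction: "VerticalUp", weight: 0.75 },
  ],
};

// The same pose could be given any name you like, e.g. "Z" instead of
// "Hello" — the name is just what the gesture stands for in your app.
console.log(helloGesture.name);
```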

Get started

We are going to use Vue for this illustration. We will start by looking at the HTML first, and then we will cover the JavaScript.

Our template will be basic HTML with a video tag, so we can show the stream after getting access to our webcam.

Template

<template>
  <div class="wrapper">
    <video
      ref="videoCam"
      class="peer-video"
      preload="auto"
      autoPlay
      muted
      playsInline
    />
  </div>
</template>

The snippet above shows a div and a video tag. The video element displays the stream once we gain access to the webcam.

We will now be writing the JS required to initialize the webcam.

Script


<script setup>
import { onMounted, ref } from "vue";

const videoCam = ref();
function openCam() {
  let all_mediaDevices = navigator.mediaDevices;

  if (!all_mediaDevices || !all_mediaDevices.getUserMedia) {
    console.log("getUserMedia() not supported.");
    return;
  }
  all_mediaDevices
    .getUserMedia({
      // We only need video; requesting audio would trigger an
      // unnecessary microphone permission prompt.
      audio: false,
      video: true,
    })
    .then(function (vidStream) {
      if ("srcObject" in videoCam.value) {
        videoCam.value.srcObject = vidStream;
      } else {
        videoCam.value.src = window.URL.createObjectURL(vidStream);
      }
      videoCam.value.onloadedmetadata = function () {
        videoCam.value.play();
      };
    })
    .catch(function (e) {
      console.log(e.name + ": " + e.message);
    });
}
onMounted(() => {
  openCam();
});
</script>

We imported two helpers from Vue: onMounted and ref. The onMounted hook runs when the component is fully mounted, while ref declares a reactive value that we use to reference the video element. If you look at the video tag in the template, you will notice a ref attribute. You can check out Template refs and the onMounted lifecycle hook.

In the openCam function, we first test whether mediaDevices is available on the browser's navigator object.

The MediaDevices interface provides access to connected media input devices like cameras and microphones, as well as screen sharing. In essence, it lets you obtain access to any hardware source of media data.

This MediaDevices interface has a method, getUserMedia, which prompts the user for permission to use a media input. You can find all you need to know about getUserMedia here.

From the snippet, we can see that getUserMedia returns a promise, so we can get the media stream as a response using then(). We check whether the video element supports srcObject. If it does, we assign the media stream to srcObject; if not, we convert the media stream to a URL and assign it to the video element's src.
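That fallback can also be factored into a small helper. This is a sketch of the same logic from the snippet above (attachStream is our own name, not a browser API):

```javascript
// Assign a MediaStream to a <video> element, falling back to an
// object URL on very old browsers that lack srcObject support.
function attachStream(videoEl, stream) {
  if ("srcObject" in videoEl) {
    videoEl.srcObject = stream;
  } else {
    // Deprecated path, kept only for legacy browsers.
    videoEl.src = window.URL.createObjectURL(stream);
  }
  return videoEl;
}
```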

With this snippet and a few styles, you should have your video showing your awesome face!

(Image: the webcam feed working)

Introducing Tensorflow and Hand Detection

Now that we have our webcam working, we will update the template and the script in order to detect, predict, and display the alphabet letter based on the hand sign prediction.

The updated HTML should now look like this:

<template>
  <div class="wrapper">
    <video
      ref="videoCam"
      class="peer-video"
      preload="auto"
      autoPlay
      muted
      playsInline
    />
    <div class="alphabet">{{ sign }}</div>
  </div>
</template>

The div with class name alphabet will display the alphabet based on the hand sign prediction.

We will introduce two new functions: createDetectionInstance and handleSignDetection.

First, let's begin with createDetectionInstance, which is an integral part of the hand sign detection, and then we will introduce handleSignDetection, which predicts and displays the hand sign.


<script setup>
import { onMounted, ref } from "vue";
import * as handPoseDetection from "@tensorflow-models/hand-pose-detection";

let detector;
const videoCam = ref();

function openCam() {
   …
}

const createDetectionInstance = async () => {
  const model = handPoseDetection.SupportedModels.MediaPipeHands;
  const detectorConfig = {
    runtime: "mediapipe",
    modelType: "lite",
    solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/hands/",
  };
  detector = await handPoseDetection.createDetector(model, detectorConfig);
};

onMounted(async () => {
  openCam();
  await createDetectionInstance();
});
</script>

To be able to detect hand poses, we need to create an instance of the hand pose detector, so we created an asynchronous function, createDetectionInstance.

You can check out this Tensorflow blog to see more details.

Now that we have created an avenue to detect hand signs, let us start detecting the hand.

In that light, we will be adding a handleSignDetection function.


<script setup>
import { onMounted, ref } from "vue";
import * as handPoseDetection from "@tensorflow-models/hand-pose-detection";

let detector;
const videoCam = ref();

function openCam() {
  …
}

const createDetectionInstance = async () => {
  const model = handPoseDetection.SupportedModels.MediaPipeHands;
  const detectorConfig = {
    runtime: "mediapipe",
    modelType: "lite",
    solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/hands/",
  };
  detector = await handPoseDetection.createDetector(model, detectorConfig);
};

const handleSignDetection = () => {
  if (!videoCam.value || !detector) return;
  setInterval(async () => {
    const hands = await detector.estimateHands(videoCam.value);
    if (hands.length > 0) {
      console.log(hands)
    }
  }, 2000);
};
onMounted(async () => {
  openCam();
  await createDetectionInstance();
  handleSignDetection();
});
</script>

The handleSignDetection function runs after creating the detection instance. A setInterval runs every 2 seconds (the 2-second interval is arbitrary and can be shorter or longer) to check whether there is any hand sign. We also have a guard clause to ensure the video element exists and the detection instance was created.

The detector exposes a method, estimateHands, which predicts the hand pose by returning keypoints with values in either 2D or 3D (multi-dimensional arrays).
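To make that output less surprising, here is roughly what a single detected hand looks like. MediaPipe Hands returns 21 keypoints per hand; the coordinate values below are invented for illustration:

```javascript
// Illustrative shape of one entry in the `hands` array (values invented).
const exampleHand = {
  score: 0.97,          // detection confidence for this hand
  handedness: "Right",  // "Left" or "Right"
  keypoints: [          // 21 entries of 2D pixel coordinates
    { x: 120, y: 240, name: "wrist" },
    // … 20 more keypoints …
  ],
  keypoints3D: [        // 21 entries of 3D coordinates
    { x: 0.01, y: 0.02, z: -0.03, name: "wrist" },
    // … 20 more keypoints …
  ],
};
```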

If you check your console log, you will see an array of data if any hand pose is detected.

Now that we can detect hand poses, we will add fingerpose to help predict and display the alphabet letter based on the hand sign.


<script setup>
import { onMounted, ref } from "vue";
import * as handPoseDetection from "@tensorflow-models/hand-pose-detection";
import * as fp from "fingerpose";
import Handsigns from "@/utils/handsigns";

let detector;
const videoCam = ref();
const sign = ref(null);

function openCam() {
  …
}

const createDetectionInstance = async () => {
  const model = handPoseDetection.SupportedModels.MediaPipeHands;
  const detectorConfig = {
    runtime: "mediapipe",
    modelType: "lite",
    solutionPath: "https://cdn.jsdelivr.net/npm/@mediapipe/hands/",
  };
  detector = await handPoseDetection.createDetector(model, detectorConfig);
};

const handleSignDetection = () => {
  if (!videoCam.value || !detector) return;
  setInterval(async () => {
    const hands = await detector.estimateHands(videoCam.value);
    if (hands.length > 0) {
      const GE = new fp.GestureEstimator([
        fp.Gestures.ThumbsUpGesture,
        Handsigns.aSign,
        Handsigns.bSign,
        Handsigns.cSign,
        …
        Handsigns.zSign,
      ]);

      const landmark = hands[0].keypoints3D.map(
        (value) => [
          value.x,
          value.y,
          value.z,
        ]
      );
      const estimatedGestures = await GE.estimate(landmark, 6.5);

      if (estimatedGestures.gestures && estimatedGestures.gestures.length > 0) {
        const confidence = estimatedGestures.gestures.map((p) => p.score);
        const maxConfidence = confidence.indexOf(Math.max(...confidence));

        sign.value = estimatedGestures.gestures[maxConfidence].name;
      }
    }
  }, 2000);
};
onMounted(async () => {
  openCam();
  await createDetectionInstance();
  handleSignDetection();
});
</script>

Assuming that our detector sensed a hand, it is time to match this value against the hand signs we created with fingerpose.

The landmark variable is a 3D array pulled from the hand result's keypoints3D property. There is also a keypoints property, which holds 2D values, and either can be used for the estimation.

Now, using GE.estimate, we can generate the possible gestures that match the sign, and a score/confidence is assigned to each predicted gesture. The gesture with the highest score/confidence is selected, since it is estimated to be the closest match among the fingerpose hand signs we created.
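The highest-confidence selection can also be written as a small standalone function. This is a sketch of the same logic (pickBestGesture is our own helper name, not part of fingerpose):

```javascript
// Return the name of the gesture with the highest score, or null
// if no gestures were predicted.
function pickBestGesture(gestures) {
  if (!gestures || gestures.length === 0) return null;
  return gestures.reduce((best, g) => (g.score > best.score ? g : best)).name;
}

// e.g. pickBestGesture([{ name: "a", score: 6.2 }, { name: "b", score: 9.1 }])
// returns "b"
```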

We also imported Handsigns, and its content looks like this:

(Screenshot: the aSign gesture definition from the handsigns folder)

You can also get the handsigns folder from the 100-ms-vue repository. Looking at the screenshot, there is a GestureDescription instance that takes the string "A", which represents what the hand sign stands for. It could be anything you want the hand sign to represent.

The onMounted callback is asynchronous because we need to ensure that our detection instance is created before we start detecting hand signs.

With the updated code, you should be able to display some letters.

(Image: the webcam feed with a detected letter displayed)

Conclusion

Don't forget, you can see in detail how this was implemented in one of This Dot Labs' open-source projects, 100-ms-vue. Please note that what we did is just a basic implementation; a production-ready version would need a bigger model and more complex detection to reliably identify hand sign language.

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines or helping keep your strategic digital initiatives on course. Check out our case studies and our clients that trust us with their engineering.
