
10 Mobile AR Apps That Are Creating the “New Normal” for Your Customers

This article was written over 18 months ago and may contain information that is out of date. Some content may still be relevant, but please refer to official documentation or other available resources for the latest information.

As developers and business leaders, we know that augmented reality is no longer a far-flung, futuristic concept. In fact, a 2019 Perkins Coie survey revealed that roughly 90% of tech leaders and consultants believe the use of immersive technologies will be as ubiquitous as mobile devices by 2025. More intriguing than this, though, is the diversity of opinion when respondents were asked which sectors they believed would invest most in AR/VR technologies. Not surprisingly, 54% included gaming in their top picks. But that narrow majority is rivaled by the 43% of respondents who included healthcare, and at least 20% felt strongly about a number of other industries, including education, military and defense, and manufacturing/automotive.

Though the use of AR technologies is expected to explode across the consumer base, only 22% of consumers in a 2019 ARtillery Intelligence survey were aware of having used mobile AR themselves. It may just be one dev's opinion, but I believe this number does not reflect the true percentage of respondents who have actually used AR, just the percentage who know they've used it.

But is this number important?

I think so.

I imagine that many in that unknown percentage who claimed not to have used mobile AR apps, but who have unknowingly used them, simply stumbled upon them, never registering that the applications relied on this transformative technology and remaining completely unaware that they were using AR. That unknown should terrify, or at least motivate, executives, because it may mean that AR is integrating so quickly and seamlessly with other mobile technologies that the change is hard to articulate at best, and goes unnoticed at worst, by the average consumer.

In other words, AR may quickly be becoming the new normal, and consumers may come to see apps without it not as a different class of technology, but simply as "inferior" technology.

So let’s check out 10 mobile apps that are already preparing your future customers and users for the “new normal”:

  1. Wecasper

Launched in 2015, Wecasper is a mobile social media application that lets users store multimedia content in geolocated containers, almost like time capsules, aptly called "caps". Not only does the app let users interact with certain content while in particular locations, it also lets them search for and see the caps in their surroundings through their mobile screens!
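Under the hood, deciding when a nearby cap should appear is essentially a geofence check against the user's GPS position. Here is a minimal TypeScript sketch of that idea; the `Cap` shape, the field names, and the function names are assumptions made for illustration, not Wecasper's actual implementation.

```typescript
// Hypothetical shape of a geolocated "cap"; not Wecasper's real data model.
interface Cap {
  id: string;
  latitude: number;     // degrees
  longitude: number;    // degrees
  radiusMeters: number; // how close a user must be before the cap appears
}

// Great-circle distance between two coordinates (haversine formula), in meters.
function distanceMeters(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const R = 6_371_000; // mean Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Return only the caps close enough to the user to render in the AR view.
function nearbyCaps(userLat: number, userLon: number, caps: Cap[]): Cap[] {
  return caps.filter(
    (cap) => distanceMeters(userLat, userLon, cap.latitude, cap.longitude) <= cap.radiusMeters
  );
}
```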

  2. WallaMe

Not unlike Wecasper, WallaMe gives users the ability to hide messages in the real world and share them with their friends or the general public. Users can draw pictures or paint messages on the physical walls around them, almost like virtual graffiti, and then alert their friends to the message's location! Friends can then see the messages left for them, and even share those messages with other users!

  3. Pottery Barn's 3D Room View App

Ever been on the verge of buying that stylish chair or perfect coffee table, but just didn't know how it would look in your living room? Pottery Barn is one of a number of furniture retailers, including my favorite, IKEA, that have launched AR-powered retail platforms. With its 3D Room View app, users can place spatially accurate renderings of Pottery Barn's furniture catalog in their rooms to see how the pieces fit. Customers never need to doubt their online purchases, and Pottery Barn will undoubtedly see an increase in engagement with its ecommerce platform.
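Native apps like Pottery Barn's use platform AR frameworks for this, but the core interaction, finding a real-world surface and anchoring a true-to-scale model to it, looks similar everywhere. Below is a browser-flavored sketch of that flow using the WebXR hit-test API (types as provided by `@types/webxr`, rendering omitted); it is illustrative only and is not Pottery Barn's code.

```typescript
// Illustrative WebXR flow: detect a real surface and record where to place a 3D model.
// Requires an AR-capable device and a WebXR-enabled browser.
async function startFurniturePreview(): Promise<void> {
  const session = await navigator.xr!.requestSession("immersive-ar", {
    requiredFeatures: ["hit-test"],
  });

  // Minimal render state so the session produces animation frames; actual drawing is omitted.
  const canvas = document.createElement("canvas");
  const gl = canvas.getContext("webgl", { xrCompatible: true }) as WebGLRenderingContext;
  session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  const viewerSpace = await session.requestReferenceSpace("viewer");
  const localSpace = await session.requestReferenceSpace("local");
  const hitTestSource = await session.requestHitTestSource!({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(_time, frame) {
    const hits = frame.getHitTestResults(hitTestSource);
    if (hits.length > 0) {
      const pose = hits[0].getPose(localSpace);
      if (pose) {
        // pose.transform.position is a real-world point, in meters, where a
        // true-to-scale furniture model could be anchored and rendered.
        console.log("Place the model at", pose.transform.position);
      }
    }
    session.requestAnimationFrame(onFrame); // keep checking every frame
  });
}
```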

  4. Sherwin-Williams' ColorSnap Visualizer

It seems Pottery Barn isn't the only home decor and improvement company that has caught the AR bug. Paint manufacturer Sherwin-Williams is giving its customers the ability to try out colors without having to pick up swatches or dab unsightly (and, let's be honest, unhelpful) brush strokes onto their walls. Instead, customers can project colors onto their own walls through a mobile interface, letting them try out hundreds of different colors in just minutes.
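Color visualizers like this generally work by re-tinting the wall's pixels while preserving the original lighting, so shadows and texture still read naturally. The sketch below shows one simple version of that technique, scaling a chosen paint color by each pixel's relative brightness; it is a generic illustration, not Sherwin-Williams' algorithm.

```typescript
// Re-tint a masked wall region with a paint color while preserving per-pixel lighting.
// `pixels` is RGBA image data (as from CanvasRenderingContext2D.getImageData().data);
// `wallMask[i]` is true for pixels that belong to the wall. Illustrative only.
function paintWall(
  pixels: Uint8ClampedArray,
  wallMask: boolean[],
  paint: { r: number; g: number; b: number }
): void {
  // First pass: average brightness of the wall, so lighting is preserved relative to it.
  let sum = 0;
  let count = 0;
  for (let i = 0; i < wallMask.length; i++) {
    if (!wallMask[i]) continue;
    const p = i * 4;
    sum += 0.299 * pixels[p] + 0.587 * pixels[p + 1] + 0.114 * pixels[p + 2];
    count++;
  }
  if (count === 0) return;
  const avgLuma = sum / count;

  // Second pass: replace the wall color with the paint color, scaled by relative brightness.
  for (let i = 0; i < wallMask.length; i++) {
    if (!wallMask[i]) continue;
    const p = i * 4;
    const luma = 0.299 * pixels[p] + 0.587 * pixels[p + 1] + 0.114 * pixels[p + 2];
    const shade = luma / avgLuma; // >1 in highlights, <1 in shadows
    pixels[p] = Math.min(255, paint.r * shade);
    pixels[p + 1] = Math.min(255, paint.g * shade);
    pixels[p + 2] = Math.min(255, paint.b * shade);
    // Alpha channel (pixels[p + 3]) is left untouched.
  }
}
```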

  5. Star Walk 2

Developed by Vito Technology, Star Walk 2 is a fun educational application that guides users through the night sky. All a user has to do is point their phone toward the sky, and the app's AR feature will not only tell them what stars, constellations, and other astronomical features are above, but show them as well! No more guessing at which constellation is which, or whether that little light overhead is a star or an airplane; users can trust Star Walk 2's AR feature to tell them everything they want to know, and probably a little more! Sappy high school dates will never be the same!
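Sky-map apps lean on well-known astronomy math: given the phone's GPS position and the current time, a star's catalog coordinates (right ascension and declination) can be converted to an altitude and azimuth, which the app compares against the direction the camera is pointing. Here is a rough TypeScript sketch of that conversion using a simplified sidereal-time approximation; it is not Vito Technology's implementation.

```typescript
// Convert a star's equatorial coordinates (RA/Dec, in degrees) to the altitude and
// azimuth seen from a given location and time. Simplified approximation for illustration.
function starAltAz(
  raDeg: number,
  decDeg: number,
  latDeg: number,
  lonDeg: number, // east-positive longitude
  date: Date
): { altitudeDeg: number; azimuthDeg: number } {
  const rad = Math.PI / 180;

  // Days since the J2000.0 epoch, via the Julian date.
  const jd = date.getTime() / 86_400_000 + 2_440_587.5;
  const d = jd - 2_451_545.0;

  // Approximate Greenwich mean sidereal time, then local sidereal time (degrees).
  const gmst = (280.46061837 + 360.98564736629 * d) % 360;
  const lst = (gmst + lonDeg + 360) % 360;

  // Hour angle of the star, and everything else in radians.
  const h = (lst - raDeg) * rad;
  const dec = decDeg * rad;
  const lat = latDeg * rad;

  const sinAlt = Math.sin(dec) * Math.sin(lat) + Math.cos(dec) * Math.cos(lat) * Math.cos(h);
  const altitudeDeg = Math.asin(sinAlt) / rad;

  // Azimuth measured from north, increasing toward east.
  const az = Math.atan2(
    -Math.cos(dec) * Math.sin(h),
    Math.sin(dec) * Math.cos(lat) - Math.cos(dec) * Math.sin(lat) * Math.cos(h)
  );
  const azimuthDeg = (az / rad + 360) % 360;

  return { altitudeDeg, azimuthDeg };
}
```

From there, the app only needs to check whether that altitude and azimuth fall inside the camera's current field of view before drawing the star's label on screen.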

  6. MeasureKit

Now I know that I'm not the only one who is CONSTANTLY misplacing the one measuring tape we keep around the house. That's where MeasureKit swoops in to the rescue. MeasureKit uses advanced AR technologies to let users quickly and accurately measure lengths, distances, trajectories, and angles with nothing more than the camera on their mobile device. I would download it in a second if it didn't make me so afraid of misplacing my smartphone too!
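Once an AR framework can return real-world 3D coordinates for the points a user taps, the measuring itself is plain vector math. A minimal sketch, assuming the framework hands back points in meters:

```typescript
// A 3D point in meters, as an AR framework's hit test might return it.
interface Point3D { x: number; y: number; z: number; }

// Straight-line distance between two measured points, in meters.
function distance(a: Point3D, b: Point3D): number {
  return Math.hypot(b.x - a.x, b.y - a.y, b.z - a.z);
}

// Angle at vertex B (in degrees) formed by the segments B->A and B->C.
function angleAt(a: Point3D, b: Point3D, c: Point3D): number {
  const u = { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
  const v = { x: c.x - b.x, y: c.y - b.y, z: c.z - b.z };
  const dot = u.x * v.x + u.y * v.y + u.z * v.z;
  const cos = dot / (Math.hypot(u.x, u.y, u.z) * Math.hypot(v.x, v.y, v.z));
  return (Math.acos(Math.min(1, Math.max(-1, cos))) * 180) / Math.PI;
}
```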

  7. The Safety Compass

Now let's take a quick diversion from consumer apps and look at a mobile app being used to keep industrial work sites safer for employees. According to the app's publisher, a worker dies on a worksite every 15 seconds globally. The Safety Compass hopes to decrease that number significantly with a mobile app that lets workers scan their worksites for potential hazards and shows them relevant information on the standard safety protocols with which to approach those hazards. This takes the guesswork out of not only whether something is hazardous, but what that hazard is called, equipping workers with information that helps keep them safe.

  8. Janus Health AR

Janus Health is changing the future of dentistry and orthodontics with its mobile-compatible AR application. The app scans a patient's teeth and shows them, in real time, what their teeth could look like after a procedure. This not only boosts patient confidence, but may also expedite the consultation process for providers and increase patient engagement with the practice. I wonder if the app can make your teeth look worse, too? Maybe if my orthodontist had had this app when I was a teenager, I would have actually worn my retainer!

  9. Google Lens

Google Lens, for Android (sorry, iOS users), brings the power of advanced image recognition to the palm of its users' hands. Have you ever taken a walk through the park and wanted to identify the tree or flower in front of you? Now all you need to do is point your mobile camera at it, and Google Lens will identify it for you! But that's not all it does. Google advertises that users can also translate text from signs or packages, scan restaurant menus to find pictures and reviews of dishes, and even learn more about landmarks, just by pointing their mobile cameras!

  10. Quiver

Quiver is an AR-enhanced edutainment app that brings kids' drawings to life! Users simply print off the pages offered through the company's web platform or app, color in the pictures, and use a mobile device to see lifelike 3D renderings of their drawings right in front of them! Quiver has also expanded, releasing new product lines, including a version of the app that offers all the fun of coloring and AR with a greater focus on education!

SO WHAT'S THE TAKEAWAY?

According to Perkins Coie, the North American market is expected to see the most significant growth in AR investment over the next five years. While the language of AR is not necessarily at the forefront of consumers' minds, the pressure to provide these more natural connections between the digital world and consumers will only increase as more users come to expect AR-integrated functionality in their applications.

Now is the perfect time to invest in AR technology! Whether you are looking to engage more deeply with your customers, improve the lives of your employees, or simply create a product that enriches users' lives through play and learning, there is space for AR-integrated mobile applications within your company. By starting your transformation now, you give your company the opportunity to identify the areas of your business that can best be enhanced by this and other transformative technologies, and to take mindful, well-considered steps toward developing them yourself.

Do not wait until the pressure from your competitors eventually forces you to hurry a product to market. Begin the process now, and you will thank yourself later.

Don’t know how to start?

Contact This Dot Labs to speak with leaders in enterprise-level digital transformation. This Dot Labs is a web development firm that helps some of the world’s leading enterprises, including American Express, ING, Groupon, and many others, reach their development goals through consulting, mentorship, and training. With our diverse team of seasoned senior developers and mentors, we can help you harness the power of advanced modern tools to keep you ahead of the AR curve!

