Introduction to Directives Composition API in Angular

In version 15, Angular introduced a new directives composition API that allows developers to compose existing directives into new, more complex directives or components. This lets us encapsulate behaviors in smaller directives and reuse them across the application more easily. In this article, we will explore the new API and see how we can use it in our own components.

All the examples from this article (and more) can be found on Stackblitz here.

Starting point

In this article, I will use two simple directives as examples:

  • HighlightDirective - a directive borrowed from Angular's Getting started guide. It changes an element's background color whenever the element is hovered.
import { Directive, ElementRef, HostListener, Input } from '@angular/core';

@Directive({
  selector: '[appHighlight]',
  standalone: true
})
export class HighlightDirective {

  @Input() color = 'yellow'

  constructor(private el: ElementRef) { }

  @HostListener('mouseenter') onMouseEnter() {
    this.highlight(this.color);
  }

  @HostListener('mouseleave') onMouseLeave() {
    this.highlight('');
  }

  private highlight(color: string) {
    this.el.nativeElement.style.backgroundColor = color;
  }
}
Fig. 1
  • BorderDirective - a similar directive that applies a border of a specified color to the element whenever it is hovered
import { Directive, ElementRef, HostListener, Input, OnInit } from '@angular/core';

@Directive({
  selector: '[appBorder]',
  standalone: true,
})
export class BorderDirective implements OnInit {

  @Input() color: string = 'red'

  constructor(private el: ElementRef) {
  }

  ngOnInit() {
    this.border('')
  }

  @HostListener('mouseenter') onMouseEnter() {
    this.border(this.color);
  }

  @HostListener('mouseleave') onMouseLeave() {
    this.border('');
  }

  private border(color: string) {
    this.el.nativeElement.style.border = `2px solid ${color || 'transparent'}`;
  }
}
Fig. 2

We can now easily apply our directives to any element we want, e.g. a paragraph:

<p appHighlight>Paragraph with a highlight directive</p>
<p appBorder>Paragraph with a border directive</p>
Fig. 3

However, if we wanted to apply both the highlight and the border on hover, we would need to add both directives explicitly:

<p appBorder appHighlight>Paragraph with a highlight and border directive</p>
Fig. 4

With the new directives composition API, we can easily create another directive that composes the behaviors of our two directives.

Host Directives

Angular 15 added a new property, hostDirectives, to the @Directive and @Component decorators. In this property, we can specify an array of directives that our new component or directive should apply to its host element. We can use it as follows:

@Directive({
  selector: '[appHighlightAndBorder]',
  hostDirectives: [HighlightDirective, BorderDirective],
  standalone: true,
})
export class HighlightAndBorderDirective {}

As you can see in the above example, just by defining the hostDirectives property containing our highlight and border directives, we created a new directive that composes both behaviors. We can now achieve the same result as in Fig. 4 using a single directive:

<p appHighlightAndBorder>Paragraph with a highlight and border directive using a single composed directive</p>
Fig. 5

Passing inputs and outputs

Our newly composed directive already works nicely, but there is a problem: how do we pass inputs to the directives defined in the hostDirectives array? They are not exposed by default, but we can expose them easily by using an extended syntax:

hostDirectives: [
  {
    directive: HighlightDirective,
    inputs: ['color'],
  },
  {
    directive: BorderDirective,
    inputs: ['color'],
  },
],
Fig. 6

This syntax exposes the inner directives' color input on the HighlightAndBorderDirective and passes the value down to both the highlight and border directives.

<p appHighlightAndBorder color="blue">Paragraph with a highlight and border directive using a single composed directive</p>
Fig. 7

This works, but we ended up with both the border and the highlight color being blue. Luckily, Angular's API allows us to easily rename the exposed properties using the <innerPropName>: <outerPropName> syntax. So let's remap our properties to highlightColor and borderColor so that the names don't collide:

hostDirectives: [
  {
    directive: HighlightDirective,
    inputs: ['color: highlightColor'],
  },
  {
    directive: BorderDirective,
    inputs: ['color: borderColor'],
  },
],
Fig. 8

Now we can control both colors individually:

<p appHighlightAndBorder borderColor="blue" highlightColor="lightskyblue">Paragraph with a highlight and border directive using a single composed directive</p>
Fig. 9

We can apply the same approach to map a directive's outputs, e.g.:

outputs: ['stateChanged'],
Fig. 10

or

outputs: ['stateChanged: changed'],
Fig. 11
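
Fig. 10 and Fig. 11 show only the relevant fragment. To make the output mapping concrete, here is a minimal sketch; the ToggleDirective, its stateChanged output, and the onStateChanged handler are hypothetical names made up for this illustration — only the hostDirectives syntax itself is the API described above:

import { Directive, EventEmitter, HostListener, Output } from '@angular/core';

// Hypothetical directive that toggles a boolean state on every click
// and emits the new state through its stateChanged output.
@Directive({
  selector: '[appToggle]',
  standalone: true,
})
export class ToggleDirective {
  private state = false;

  @Output() stateChanged = new EventEmitter<boolean>();

  @HostListener('click') onClick() {
    this.state = !this.state;
    this.stateChanged.emit(this.state);
  }
}

// Composed directive that re-exposes stateChanged under the changed alias.
@Directive({
  selector: '[appComposedToggle]',
  standalone: true,
  hostDirectives: [
    {
      directive: ToggleDirective,
      outputs: ['stateChanged: changed'],
    },
  ],
})
export class ComposedToggleDirective {}

A consumer can then listen to the remapped output like any regular one:

<button appComposedToggle (changed)="onStateChanged($event)">Toggle me</button>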

Adding host directives to a component

Similarly to composing a directive out of other directives, we can use the hostDirectives API to add behavior to components. This way, we could, for example, create a more specialized component, or simply apply a directive's behavior to the whole host element:

@Component({
  selector: 'app-highlight-and-border',
  standalone: true,
  template: `
    <p>My first component with border and highlight</p>
  `,
  styles: [
    `
    :host {
      cursor: pointer;
      display: block;
      padding: 16px;
      width: 500px;
    }
  `,
  ],
  hostDirectives: [HighlightDirective, BorderDirective],
})
export class HighlightAndBorderComponent {}
Fig. 12

This component renders the paragraph and applies both directives' behavior to the host element:

<app-highlight-and-border></app-highlight-and-border>
Fig. 13

Just like we did for the directive, we can also expose and remap the directives' inputs using the extended syntax. And if we would like to access and modify the directives' inputs from within our component, we can do that too. This is where Angular's dependency injection comes in handy: we can inject the host directives via the constructor, just like we would a service. Once we have the directive instances available, we can modify them, e.g. in the ngOnInit lifecycle hook:

export class HighlightAndBorderComponent implements OnInit {
  constructor(
    public highlight: HighlightDirective,
    public border: BorderDirective
  ) {}

  ngOnInit(): void {
    this.highlight.color = 'lightcoral';
    this.border.color = 'red';
  }
}
Fig. 14

With this change, the code from Fig. 13 will use lightcoral as a background color and red as a border color.
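
Alternatively, if we would rather set the colors from the template than hard-code them in ngOnInit, the component can expose the remapped inputs using the same extended syntax we used for the directive in Fig. 8. A minimal sketch of this variant (styles omitted for brevity):

@Component({
  selector: 'app-highlight-and-border',
  standalone: true,
  template: `
    <p>My first component with border and highlight</p>
  `,
  hostDirectives: [
    {
      directive: HighlightDirective,
      inputs: ['color: highlightColor'],
    },
    {
      directive: BorderDirective,
      inputs: ['color: borderColor'],
    },
  ],
})
export class HighlightAndBorderComponent {}

The colors can then be passed in from the outside, just like in Fig. 9:

<app-highlight-and-border highlightColor="lightcoral" borderColor="red"></app-highlight-and-border>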

Performance Note

While this API gives us a powerful toolset for reusing behaviors across different components, it can impact the performance of our application if used excessively. For each instance of a composed component, Angular will create an instance of the component class itself, as well as an instance of each directive it is composed of.

If the component appears only a couple of times in the application, it won't make a significant difference. However, if we create, for example, a composed checkbox component that appears hundreds of times in the app, this may have a noticeable performance impact. Make sure you use this pattern with caution, and profile your application to find the right composition pattern for it.

Summary

As I have shown in the above examples, the directives composition API is a useful and easy-to-use tool for extracting behaviors into smaller directives and combining them into more complex ones.

In case you have any questions, you can always tweet or DM me at @ktrz. I'm always happy to help!
