Going Reactive with RxJS

RxJS is the perfect tool for bringing reactive programming paradigms into your software development. Handling errors gracefully is a fundamental part of ensuring the integrity of an application, as well as delivering the best possible user experience.

In this article, we will look at how to handle errors with RxJS, and then at how to use RxJS to build a simple yet performant application.

Handling Errors

Our general reaction to errors usually consists of exclaiming "Oh no! What went wrong?", yet errors are a common occurrence in every application. The ability to manage errors well without disrupting the user's experience, while also providing accurate error logs to allow a full diagnosis of the cause of the error, is more important than ever.

RxJS gives us the tools to do this job very well! Let's take a look at some basic error handling approaches with RxJS.

Basic Error Handling

The most basic way of detecting and reacting to an error that has occurred in an Observable stream is provided to us by the .subscribe() method.

obs$.subscribe(
    value => console.log("Received Value: ", value),
    error => console.error("Received Error: ", error)
)

Here we can set up two different pieces of logic—one to handle non-error emissions from the Observable and one to gracefully handle errors emitted by the Observable.

We could use this to show a Notification Toast or Alert to inform the user that an error has occurred:

obs$.subscribe(
    value => console.log("Received Value: ", value),
    error => showErrorAlert(error)
)

This can help us minimize disruption for the user, giving them instant feedback that something hasn't worked as expected rather than leaving them to guess.
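
As a side note, in more recent versions of RxJS the separate-callback form of .subscribe() is deprecated in favor of passing a partial observer object. The same logic reads like this:

obs$.subscribe({
    next: value => console.log("Received Value: ", value),
    error: error => showErrorAlert(error)
})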

Composing Useful Errors

Sometimes, however, we may have situations wherein we want to throw an error ourselves. For example, some data we received isn't quite correct, or maybe some validation checks failed.

RxJS provides us with throwError, which allows us to do just that. Let's take an example where we are receiving values from an API, but we encounter missing data that will cause other aspects of the app not to function correctly.

import { of, throwError } from "rxjs";
import { mergeMap } from "rxjs/operators";

obs$
  .pipe(
    mergeMap((value) =>
      !value.id ? throwError("Data does not have an ID") : of(value)
    )
  )
  .subscribe(
    (value) => console.log(value),
    (error) => console.error("error", error)
  );

If we receive a value from the Observable that doesn't contain an ID, we throw an error that we can handle gracefully.

NOTE: Using throwError will stop any further Observable emissions from being received.
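
To make the note above concrete, here is a minimal sketch (using a hard-coded of() source in place of obs$) showing that once throwError fires, later values from the source are never received:

import { of, throwError } from "rxjs";
import { mergeMap } from "rxjs/operators";

of({ id: 1 }, { id: 0 }, { id: 3 })
  .pipe(
    mergeMap((value) =>
      !value.id ? throwError("Data does not have an ID") : of(value)
    )
  )
  .subscribe(
    (value) => console.log(value), // logs { id: 1 } only
    (error) => console.error("error", error) // then logs the error; { id: 3 } is never received
  );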

Advanced Error Handling

We've learned that we can handle errors reactively to prevent too much disruption for the user.

But what if we want to do multiple things when we receive an error or even do a retry?

RxJS makes it super simple for us to retry errored Observables with its retry() operator.

Therefore, to create an even cleaner error handling setup in RxJS, we can set up an error management solution that will receive any errors from the Observable, retry them in the hopes of a successful emission, and, failing that, handle the error gracefully.

import { EMPTY, of, throwError } from "rxjs";
import { catchError, mergeMap, retry } from "rxjs/operators";

obs$
  .pipe(
    mergeMap((value) =>
      !value.id ? throwError("Data does not have an ID") : of(value)
    ),
    retry(2),
    catchError((error) => {
      // Handle the error gracefully here
      console.error("Error: ", error);
      return EMPTY;
    })
  )
  .subscribe({
    next: (value) => console.log(value),
    complete: () => console.log("completed")
  });

Once we reach an error, returning the EMPTY observable from catchError completes the stream. The output of an error emission above is:

Error:  Data does not have an ID
completed 
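
The example above only shows the failure path. To illustrate the happy path of retry(2), here is a small sketch with a hypothetical flaky source (the fetchData$ observable and attempt counter below are made up for illustration) that fails twice and then succeeds. The subscriber still receives the value, and catchError is never reached:

import { defer, of, throwError, EMPTY } from "rxjs";
import { catchError, retry } from "rxjs/operators";

let attempts = 0;

// Hypothetical flaky source: errors on the first two subscriptions, succeeds on the third
const fetchData$ = defer(() =>
  ++attempts < 3 ? throwError("Temporary failure") : of({ id: 1, title: "Learn RxJS" })
);

fetchData$
  .pipe(
    retry(2), // resubscribes up to 2 times after an error
    catchError(() => EMPTY)
  )
  .subscribe((value) => console.log(value)); // logs { id: 1, title: "Learn RxJS" }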

Usage in Frontend Development

RxJS can be used anywhere JavaScript runs; however, I'd suggest that it's most predominantly used in Angular codebases. Using RxJS correctly with Angular can massively increase the performance of your application, and also help you to maintain the Container-Presentational Component Pattern.

Let's build a super simple Todo app in Angular to see how we can use RxJS effectively.

Basic Todo App

We will have two components in this app: the AppComponent and the ToDoComponent. Let's take a look at the ToDoComponent first:

import {
  ChangeDetectionStrategy,
  Component,
  EventEmitter,
  Input,
  Output
} from "@angular/core";

export interface Todo {
  id: number;
  title: string;
}

@Component({
  selector: "todo",
  template: `
    <li>
      {{ item.title }} - <button (click)="delete.emit(item.id)">Delete</button>
    </li>
  `,
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class ToDoComponent {
  @Input() item: Todo;
  @Output() delete = new EventEmitter<number>();
}

Pretty simple, right? It takes an item input and outputs an event when the delete button is clicked. It performs no real logic itself other than rendering the correct HTML.

One thing to note is changeDetection: ChangeDetectionStrategy.OnPush. This tells the Angular Change Detection System that it should only attempt to re-render this component when the Input has changed.

Doing this can increase performance massively in Angular applications and should always be applied to pure presentational components, as they should only be rendering data.

Now, let's take a look at the AppComponent.

import { Component } from "@angular/core";
import { BehaviorSubject } from "rxjs";
import { Todo } from "./todo.component";

@Component({
  selector: "my-app",
  template: `
    <div>
      <h1>
        ToDo List
      </h1>
      <div style="width: 50%;">
        <ul>
          <todo
            *ngFor="let item of (items$ | async); trackBy: trackById"
            [item]="item"
            (delete)="deleteItem($event)"
          >
          </todo>
        </ul>
        <input #todoTitle placeholder="Add item" /><br />
        <button (click)="addItem(todoTitle.value, todoTitle)">Add</button>
      </div>
    </div>
  `,
  styleUrls: ["./app.component.css"]
})
export class AppComponent {
  private items: Todo[] = [{ id: 1, title: "Learn RxJS" }];
  items$ = new BehaviorSubject<Todo[]>(this.items);

  addItem(title: string, inputEl: HTMLInputElement) {
    const item = {
      // Generate the next ID, falling back to 1 when the list is empty
      id: this.items.length ? this.items[this.items.length - 1].id + 1 : 1,
      title
    };
    this.items = [...this.items, item];
    this.items$.next(this.items);

    inputEl.value = "";
  }

  deleteItem(idToRemove: number) {
    this.items = this.items.filter(({ id }) => id !== idToRemove);
    this.items$.next(this.items);
  }

  trackById(index: number, item: Todo) {
    return item.id;
  }
}

This is a container component, so called because it handles the logic relating to updating component state, as well as handling or dispatching side effects.

Let's take a look at some areas of interest:

private items: Todo[] = [{ id: 1, title: "Learn RxJS" }];
items$ = new BehaviorSubject<Todo[]>(this.items);

We create a basic local store to store our ToDo items; however, this could be done via a state management system or an API.
We then set up our Observable, which will stream the value of our ToDo list to anyone who subscribes to it.
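
The reason a BehaviorSubject works well as a tiny store is that it always holds the latest value and replays it to every new subscriber. Here is a minimal sketch, outside of any component, of the behaviour we are relying on:

import { BehaviorSubject } from "rxjs";
import { Todo } from "./todo.component";

const items$ = new BehaviorSubject<Todo[]>([{ id: 1, title: "Learn RxJS" }]);

// A new subscriber immediately receives the current list...
items$.subscribe((items) => console.log("render", items));

// ...and every call to next() pushes the updated list to all subscribers
items$.next([...items$.getValue(), { id: 2, title: "Learn Angular" }]);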

You may now look over the code and begin to wonder where we have subscribed to items$.

Angular provides a very convenient pipe, the async pipe, that handles this for us. We can see it in the template:

*ngFor="let item of (items$ | async); trackBy: trackById"

In particular, it's the (items$ | async) part: this takes the latest value emitted from the Observable and provides it to the template. It does much more than this, though. It also manages the subscription for us, meaning that when we destroy this container component, it will unsubscribe automatically, preventing unexpected outcomes.
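
To appreciate what the async pipe is doing for us, here is a rough sketch of the manual equivalent we would otherwise have to write (this ManualAppComponent is purely illustrative and not part of the app above):

import { Component, OnDestroy, OnInit } from "@angular/core";
import { BehaviorSubject, Subscription } from "rxjs";
import { Todo } from "./todo.component";

@Component({
  selector: "manual-app",
  template: `<todo *ngFor="let item of items" [item]="item"></todo>`
})
export class ManualAppComponent implements OnInit, OnDestroy {
  items: Todo[] = [];
  items$ = new BehaviorSubject<Todo[]>([]);
  private subscription: Subscription;

  ngOnInit() {
    // What (items$ | async) does for us automatically
    this.subscription = this.items$.subscribe((items) => (this.items = items));
  }

  ngOnDestroy() {
    // Without this, the subscription would outlive the component
    this.subscription.unsubscribe();
  }
}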

Using a pure pipe in Angular also has another performance benefit: it will only ever re-run the code in the pipe if the input to the pipe changes. In our case, that would mean items$ would need to change to a whole new Observable for the code in the async pipe to be executed again. We never have to change items$, as our values are streamed through the existing Observable.

Conclusion

Hopefully, you have learned how to handle errors effectively, as well as how to put RxJS into practice in a real-world app to improve the overall performance of your application. You should also start to see the power that using RxJS effectively can bring!

