
Using Message Events to Resize an IFrame

Over the years, I've often had to embed interactive data visualizations into other websites. The simplest way to manage this has been to use iframes. The downside to using iframes is that their height doesn't automatically adjust to the size of the child content. But this problem can be solved by using window.postMessage() to tell the parent what the child's height is, allowing it to adjust the iframe's height attribute with JavaScript.

The window.postMessage() API allows you to send data from one window to another across origins. With postMessage, the site embedded in the iframe can send data to the parent window. Scripts in the parent window can then listen for the message event and take action based on the data sent.

In this example, I'm sending a message containing the clientHeight of the child's content from a child window in an iframe to the parent window. The parent window is using that information to adjust the height of the iframe. Every time you ADD or REMOVE a box in the child window, it sends a new message to the parent with the updated height. Click here to see it in action.

Dynamically Resizing IFrame

First, a script in the child window measures the height of its own content, then sends that value to the parent window. (The origins in the code below are placeholders for your own parent and child domains.)

// Measure the embedded app's content height...
const app = document.querySelector('.resizing-app');
const height = app.clientHeight;

// ...and send it to the parent. The second argument to postMessage is the
// target origin: the origin of the *parent* page that should receive the
// message (a placeholder here).
window.parent.postMessage(
  { height },
  'https://parent-window-origin.com'
);
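
In the demo, a new message is sent every time a box is added or removed. A minimal sketch of that wiring, assuming a hypothetical addBox() helper and an .add button in the child page, is to wrap the measurement in a function and call it from each handler:

function sendHeight() {
  const app = document.querySelector('.resizing-app');
  // Post the current content height to the parent (placeholder origin).
  window.parent.postMessage(
    { height: app.clientHeight },
    'https://parent-window-origin.com'
  );
}

// Hypothetical handler for the demo's ADD button; the REMOVE handler
// would look the same with a removeBox() helper.
document.querySelector('.add').addEventListener('click', () => {
  addBox(); // assumed helper that appends a new box to the page
  sendHeight();
});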

The script in the parent window creates the iframe, adds it to the document, and then listens for the message event. If it receives a message from the expected origin, it reads the message data and updates the iframe's height.

const container = document.querySelector('#target');
const iframe = document.createElement('iframe');

// Point the iframe at the embedded child app.
iframe.src = 'https://child-window-origin.com';

window.addEventListener('load', () => {
  container.appendChild(iframe);
});

window.addEventListener('message', (e) => {
  // Only accept messages from the child's origin. An exact comparison is
  // safer than a substring match, which a hostile origin like
  // "child-window-origin.com.evil.example" would also pass.
  if (e.origin !== 'https://child-window-origin.com') {
    return;
  }

  const message = e.data;

  if (message.height) {
    // The height content attribute takes a pixel count with no unit.
    iframe.height = message.height;
  }
});

If you've got an app to embed whose height might change independently of any specific user action, you can also use setInterval to poll for height changes and send them to the parent on a regular basis. Here is an example of the same app, but the boxes are created and destroyed at random, and polling is used to check the height at regular intervals.
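
A minimal sketch of that polling approach in the child window, assuming the same .resizing-app container and placeholder origin as above, might look like this. It posts a message only when the height has actually changed, so the parent isn't flooded with redundant updates:

const app = document.querySelector('.resizing-app');
let lastHeight = 0;

// Check the content height four times a second and notify the parent
// only when it has changed since the last check.
setInterval(() => {
  const height = app.clientHeight;
  if (height !== lastHeight) {
    lastHeight = height;
    window.parent.postMessage({ height }, 'https://parent-window-origin.com');
  }
}, 250);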

If you'd like to learn more about the postMessage API, detailed documentation is available on MDN.


