
Using Lottie Animations for UI Components in React

Lottie is a library created by Airbnb that lets developers display animations created in Adobe After Effects on the web and in mobile apps. This makes it possible to use far more dynamic and detailed animations than would normally be achievable with, for example, CSS animations.

Button icons are a great example of where a Lottie file can improve the animated user experience. Here, I'll integrate an animation for toggling a hamburger menu into a React app.

To use Lottie files in my app, I'll first need to install the lottie-web package.

npm install lottie-web

Next, I'll need an animation file. Lottie files are exported from After Effects with the Bodymovin plugin and saved as JSON. If you don't have your own After Effects animations, there are many free Lottie files available online. I'm going to use this hamburger menu toggle animation by Christopher Deane.

The lottie-web library works by creating an animation object that loads and controls the animation. That object takes a container element where the animation will be rendered, and exposes methods like play and setDirection for controlling playback. To make this work in React, we'll use the useRef hook.
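To make that API concrete before bringing React into it, here's a minimal sketch of lottie-web on its own, outside any framework. The #animation element and the JSON file path are assumptions for illustration:

import lottie from 'lottie-web';
import animationData from './animationData.json';

// Create an animation instance inside an existing DOM element.
const anim = lottie.loadAnimation({
  container: document.getElementById('animation'),
  renderer: 'svg',
  loop: false,
  autoplay: false,
  animationData,
});

// Play forward; setDirection(-1) plays it in reverse.
anim.setDirection(1);
anim.play();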

First, let's create a basic button component.

import React from 'react';

const MenuToggleButton = ({open, setOpen}) => {
  return (
    <button onClick={() => setOpen(!open)} />
  );
};

export default MenuToggleButton;

This button expects to receive the open status of the menu it controls as a boolean, and a state-setting function to toggle that status on click. A parent component might wire it up as in the sketch below.
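This is a minimal, hypothetical parent for context only; the Menu component and its open state are assumptions for illustration:

import React, { useState } from 'react';
import MenuToggleButton from './MenuToggleButton';
import Menu from './Menu'; // hypothetical menu component

const Header = () => {
  const [open, setOpen] = useState(false);

  return (
    <header>
      <MenuToggleButton open={open} setOpen={setOpen} />
      {open && <Menu />}
    </header>
  );
};

export default Header;

Next, we'll add our animation.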

import React, { useState, useEffect, useRef } from 'react';
import PropTypes from 'prop-types';
import lottie from 'lottie-web/build/player/lottie_light';
import animationData from './animationData.json';

const MenuToggleButton = ({ open, setOpen }) => {
  const animationContainer = useRef(null);
  const anim = useRef(null);

  useEffect(() => {
    if (animationContainer.current) {
      // Create the animation once, on mount, inside the button element.
      anim.current = lottie.loadAnimation({
        container: animationContainer.current,
        renderer: 'svg',
        loop: false,
        autoplay: false,
        animationData,
      });

      // Destroy the animation instance when the component unmounts.
      return () => anim.current?.destroy();
    }
  }, []);

  return (
    <button
      onClick={() => setOpen(!open)}
      ref={animationContainer}
    />
  );
};

MenuToggleButton.propTypes = {
  open: PropTypes.bool.isRequired,
  setOpen: PropTypes.func.isRequired
};

export default MenuToggleButton;

We're importing lottie-web's lightweight animation player, since the full package can be large and we don't need all of it. The Lottie file we're using is rather small, so I've chosen to import the JSON file directly rather than fetch it. How you implement this may depend on the size of your animation.
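If your animation were too large to bundle, lottie-web can fetch the JSON for you when you pass a path instead of animationData. Here's a minimal sketch of how the loadAnimation call inside the useEffect would change, assuming the file is served at a placeholder URL:

anim.current = lottie.loadAnimation({
  container: animationContainer.current,
  renderer: 'svg',
  loop: false,
  autoplay: false,
  // The player fetches this JSON itself; the URL is a placeholder.
  path: '/animations/menu-toggle.json',
});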

We've also created two refs: one for the containing button element, and one for the animation object itself. We then use useEffect to create the animation object once the containing element exists, and return a cleanup function that destroys the animation when the component unmounts. Our animation object has a few important settings: it is set to neither loop nor autoplay. Because we want this animation to respond to user interaction, it wouldn't make sense to let it automatically play an endless open-and-close animation loop.

Lastly, we need to make the animation respond to clicking the button. We want the animation to play from one end to the other without looping, then play back in reverse when the button state is toggled again. Since open is true when the menu is about to close, we set the direction to -1 in that case and 1 otherwise. We'll wire this up inside the button's onClick.

<button
  onClick={() => {
    setOpen(!open);
    anim.current?.setDirection(open ? -1 : 1);
    anim.current?.play();
  }}
  ref={animationContainer}
/>

Live Demo

Our button is now fully functional, and our animation is ready to play when a user clicks it. To see it in action, check out this live example.

