
How to host a full-stack JavaScript app with AWS CloudFront and Elastic Beanstalk

Let's imagine that you have finished building your app. You have a Single Page Application (SPA) with a NestJS back-end. You are ready to launch, but what if your app is a hit and you need to be prepared to serve thousands of users? You might need to scale your API horizontally, which means running more instances behind a load balancer to serve the traffic. Serving your front-end through a CDN will also help.

Architecture

In this article, I am going to walk you through setting up a scalable distribution in AWS using S3, CloudFront, and Elastic Beanstalk. The NestJS API and the simple front-end both live inside an Nx monorepo.

The sample application

For the sake of this tutorial, we have put together a very simple HTML page that tries to reach an API endpoint and a very basic API written in NestJS.

The UI

The UI code is very simple. There is a "HELLO" button on the page which, when clicked, calls the /api/hello endpoint. If the response has a 2xx status code, the script puts an h1 tag with the response contents into the div with the id result. If the request fails, it puts an error message into the same div.

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Frontend</title>
    <base href="/" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <link rel="icon" type="image/x-icon" href="favicon.ico" />
  </head>
  <body>
    <button id="hello">HELLO</button>

    <div id="result"></div>
    <script>
      const helloButton = document.getElementById('hello');
      const resultDiv = document.getElementById('result');
      helloButton.addEventListener('click', async () => {
        // fetch returns a Response; check the status before reading the body
        const response = await fetch('/api/hello');
        if (response.ok) {
          const text = await response.text();
          console.log(text);
          resultDiv.innerHTML = `<h1>${text}</h1>`;
        } else {
          resultDiv.innerHTML = `<h1>An error occurred.</h1>`;
        }
      });
    </script>
  </body>
</html>

The API

We bootstrap the NestJS app so that every endpoint is prefixed with api.

// main.ts
import { Logger } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';

import { AppModule } from './app/app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  const globalPrefix = 'api';
  app.setGlobalPrefix(globalPrefix);
  const port = process.env.PORT || 3000;
  await app.listen(port);
  Logger.log(`🚀 Application is running on: http://localhost:${port}/${globalPrefix}`);
}

bootstrap();

We bootstrap it with the AppModule, which only contains the AppController.

// app.module.ts

import { Module } from '@nestjs/common';

import { AppController } from './app.controller';

@Module({
  imports: [],
  controllers: [AppController],
})
export class AppModule {}

The AppController sets up two very basic endpoints: a health check on the /api route, and our hello endpoint on the /api/hello route.

import { Controller, Get } from '@nestjs/common';

@Controller()
export class AppController {
  @Get()
  health() {
    return 'OK';
  }

  @Get('hello')
  hello() {
    return 'Hello';
  }
}
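
Before moving to AWS, you can sanity-check the API locally with curl. This assumes the default port 3000 from the main.ts above and that the API is running (for example with nx serve api, if your workspace uses the default Nx serve target):

curl http://localhost:3000/api        # health check, prints: OK
curl http://localhost:3000/api/hello  # prints: Hello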

Hosting the front-end with S3 and CloudFront

To serve the front-end through a CDN, we should first create an S3 bucket. Go to S3 in your AWS account and create a new bucket. Give your new bucket a meaningful name. For example, if this is going to be your production deployment, I recommend including -prod in the name so you can see at a glance that this bucket contains your production front-end and that nothing in it should get deleted accidentally.

We go with the defaults for this bucket, setting it to the us-east-1 region. Let's set the bucket to block all public access, because we are only going to allow GET requests through CloudFront to these files. We don't need bucket versioning, because the files are deleted every time a new front-end version is uploaded to this bucket. If we enabled versioning, old front-end files would be marked as deleted and kept, increasing storage costs in the long run. Let's use server-side encryption with Amazon S3-managed keys and create the bucket.

When the bucket is created, upload the front-end files to the bucket.
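You can upload the files in the S3 console, or with the AWS CLI if you have it configured. A minimal sketch, assuming the production build of the front-end ends up in dist/apps/frontend and the bucket is called my-frontend-prod (both placeholders):

# sync the built front-end into the bucket, removing stale files
aws s3 sync dist/apps/frontend s3://my-frontend-prod --delete

With the files in place, let's go to the CloudFront service and create a distribution.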

As the origin domain, choose your S3 bucket. Feel free to change the name of the origin. For Origin access, choose Origin access control settings (recommended) and create a new control setting with the defaults. I recommend adding a description to this control setting.

At the Web Application Firewall (WAF) settings, we would recommend enabling security protections, although they have cost implications. For this tutorial, we chose not to enable WAF for this CloudFront distribution.

In the Settings section, choose the Price class that best fits you. If you have a domain and an SSL certificate, you can set those up for this distribution now, but you can also do that later. As the Default root object, provide index.html and create the distribution.


When you have created the distribution, you should see a warning at the top of the page. Copy the policy and go to your S3 bucket's Permissions tab. Edit the Bucket policy and paste the policy you just copied, then save it.
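
The generated policy grants the CloudFront service principal read access to your bucket, scoped to your distribution's ARN. It will look similar to this sketch (the bucket name, account ID, and distribution ID below are placeholders; always paste the exact policy the console generated for you):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontServicePrincipal",
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-frontend-prod/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::123456789012:distribution/E2EXAMPLE123"
        }
      }
    }
  ]
}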

If you have set up a domain with your CloudFront distribution, you can open that domain and you should see your front-end deployed. If you didn't set up a domain, the Details section of your CloudFront distribution contains your distribution domain name.

Distribution domain name is in the upper left corner of the Details section

If you click the "HELLO" button on your deployed front-end, it will not be able to reach the /api/hello endpoint yet, and an error message will be displayed on the page.

Hosting the API in Elastic Beanstalk

Elastic Beanstalk prerequisites

For our NestJS API to run in Elastic Beanstalk, we need some additional setup. Inside the apps/api/src folder, let's create a Procfile with the contents web: node main.js. Then open apps/api/project.json and, under the build target, extend the production configuration with the following:

{
  "targets": {
    "build": {
      "configurations": {
        "development": {},
        "production": {
          "generatePackageJson": true,
          "assets": [
            "apps/api/src/assets",
            "apps/api/src/Procfile"
          ]
        }
      }
    }
  }
}

The above settings make sure that when we build the API with the production configuration, a package.json and a package-lock.json are generated next to the output file main.js, and the assets, including the Procfile, are copied along.

To have a production-ready API, we set up a script in the package.json file of the repository. Running this will create a dist/apps/api and a dist/apps/frontend folder with the necessary files.

{
  "scripts": {
    "build:prod": "nx run-many --target=build --projects api,frontend --configuration=production"
  }
}

After running the script, zip the production-ready api folder so we can upload it to Elastic Beanstalk later.

zip -r -j dist/apps/api.zip dist/apps/api
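
As a convenience, you can also chain the build and the zip step into a single npm script; a sketch, assuming the paths above (the package:prod name is our own):

{
  "scripts": {
    "build:prod": "nx run-many --target=build --projects api,frontend --configuration=production",
    "package:prod": "npm run build:prod && zip -r -j dist/apps/api.zip dist/apps/api"
  }
}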

Creating the Elastic Beanstalk Environment

Let's open the Elastic Beanstalk service in the AWS console and create an application. An application is a logical grouping of several environments; we usually put our development, staging, and production environments under the same application name. The first time you create an application, you will need to create an environment as well.

We are creating a Web server environment. Provide your application's name in the Application information section; you can also add some tags for your convenience. In the Environment information section, fill in the details of your environment and leave the Domain field blank for an autogenerated value.

Application and environment information

When setting up the platform, we use the managed Node.js platform, major version 18, with the latest platform version.

Platform and engines

Let's upload our application code and name the version to indicate that it was built locally. This version label is displayed on the running environment, and when we set up automatic deployments, it will let us validate that the build was successful. As a Preset, let's choose Single instance (free tier eligible).

App upload and presets

On the next screen, configure your service access. For this tutorial, we only create a new service role. You must select the aws-elasticbeanstalk-ec2-role for the EC2 instance profile.

Create new service role

If you can't select this role, you should create it in AWS IAM with the AWSElasticBeanstalkWebTier, AWSElasticBeanstalkWorkerTier, and AWSElasticBeanstalkMulticontainerDocker managed policies.

IAM ec2 role

The next step is to set up the VPC. For this tutorial, I chose the default VPC that comes with my AWS account, but you can create your own VPC and customise it. In the Instance settings section, we want our API instances to have a public IP address so they can be reached from the internet and CloudFront can route to them. Select all the instance subnets and availability zones you want your API to run in.

VPC and instance settings

For now, we are not going to set up a database. We can add one later in AWS RDS, but in this tutorial we would like to focus on setting up the distribution. Let's move forward.

Let's configure the instance traffic and scaling. This is where we set up the load balancer. In this tutorial we keep the defaults, so the EC2 instances are added to the default security group.

Set the environment type to Load balanced

In the Capacity section, we set the Environment type to Load balanced. This brings up a load balancer for this environment. Let's set it up so that when traffic grows, AWS can spin up two additional instances for us. Select your preferred tier in the Instance types section; we set this to t3.micro for this tutorial, but you might need larger tiers.

Configure the Scaling triggers to your needs; we leave them at their defaults. Set the load balancer's visibility to Public and use the same subnets you selected before.

Load Balancer network settings

At the Load Balancer Type section, choose Application load balancer and select Dedicated for exactly this environment. Let's set up the listeners to support HTTPS.

Set up a listener for port 443

Add a new listener on port 443 and attach the SSL certificate that you also set up in CloudFront. For the SSL policy, choose one that requires at least TLS 1.2, and connect this listener to the default process.

443 port setup

Now let's update the default process and set up the health check endpoint. Our API serves its health check at the /api route, so let's modify the default process accordingly and set its port to 8080.

Default process on port 8080 and /api health check endpoint

For this tutorial, we decided not to enable log file access, but if you need it, please set it up with a separate S3 bucket.

At the last step of configuring your Elastic Beanstalk environment, set up Monitoring, CloudWatch logs, and Managed platform updates to your needs. For the sake of this tutorial, we turned most of these options off. Set up email notifications to your dedicated alert address and select how you would like to roll out application deployments.

At the end, let's configure the Environment properties. We set the default process to listen on port 8080, so we need to set the PORT environment variable to 8080; the main.ts above picks it up via process.env.PORT.

Set PORT to 8080

Review your configuration, then create your environment. It might take a few minutes to set everything up. After the environment's health transitions to OK, go to AWS EC2 / Load balancers in your web console. Select the freshly created load balancer, copy its DNS name, and test that it works by appending /api/hello to the end of it.

Load balancer DNS record
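
For example, with curl (the load balancer DNS name below is a placeholder; use the one you just copied):

# should print: Hello
curl http://<your-load-balancer-dns>/api/hello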

Connect CloudFront to the API endpoint

Let's go back to our CloudFront distribution, select the Origins tab, and create a new origin. Copy your load balancer's DNS name into the Origin domain field and select the HTTPS only protocol if you set up your SSL certificate previously. If you don't have an SSL certificate, you can use HTTP only, but be aware that this is not secure and especially not recommended in production. We also renamed this origin to API. Leave everything else at the defaults and create the origin.

New CloudFront Origin

Under the Behaviors tab, create a new behavior. Set the path pattern to /api/* and select your newly created API origin. For the Viewer protocol policy, select Redirect HTTP to HTTPS and allow all HTTP methods (GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE). For this tutorial we left everything else at the defaults, but select the Cache policy and Origin request policy that suit you best.

API behaviour

Now if you visit your deployment and click the HELLO button, it should no longer attach an error message to the DOM.
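
You can verify both origins from the command line as well; a quick check, with the distribution domain as a placeholder:

curl https://<your-distribution-domain>/           # index.html, served from the S3 origin
curl https://<your-distribution-domain>/api/hello  # routed to Elastic Beanstalk, prints: Hello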


Now we have a distribution that serves the front-end static files through CloudFront, leveraging caching and the CDN, and we have our API behind a load balancer that can scale. But how do we deploy our front-end and back-end automatically when a release is merged into our main branch? For that we are going to leverage AWS CodeBuild and CodePipeline, but that is the topic of the next blog post. Stay tuned.

