Implementing a Task Scheduler in Node Using Redis

Node.js and Redis are often used together to build scalable, high-performing applications. Although Redis is primarily an in-memory data store built for fast and efficient data access, it has gained many useful features over time, and nowadays it is used for things like rate limiting, session management, and queuing. Thanks to its excellent support for sorted sets, we can add task scheduling to that list as well.

Out of the box, Node.js offers no task scheduling beyond the built-in setTimeout() and setInterval() functions, which are quite basic and have no task queuing mechanism. There are, of course, third-party packages like node-schedule and node-cron. But what if you want to understand how such a scheduler works under the hood? This blog post will show you how to build one from scratch.

Redis Sorted Sets

Redis has a powerful data structure called the sorted set, which stores members that are both ordered and unique. This is useful in many different scenarios, such as ranking, scoring, and sorting data. Since their introduction in 2009, sorted sets have become one of the most widely used data structures in Redis.

To add data to a sorted set, you use the ZADD command, which accepts three parameters: the name of the sorted set, the score, and the member to associate with that score. When a set holds multiple members, each with its own score, Redis keeps them sorted by score. This is incredibly useful for implementing leaderboard-like lists. In our case, if we use a timestamp as the score, we can order sorted set members by date, effectively implementing a queue where the members with the earliest timestamps are at the front of the list.
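
For example, in redis-cli, with hypothetical task identifiers as members and millisecond timestamps as scores:

ZADD sortedTasks 1679677560000 1
ZADD sortedTasks 1679677571675 2
ZRANGE sortedTasks 0 -1
1) "1"
2) "2"

ZRANGE returns members in ascending score order, so task 1, whose timestamp is earlier, comes first.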

If the member is a task identifier, and the score is the timestamp at which we want the task to be executed, then implementing a scheduler boils down to reading the sorted set and grabbing whatever task we find at the front!

The Algorithm

Now that we understand the capabilities of Redis sorted sets, we can draft a rough algorithm for our Node scheduler to implement.

Scheduling Tasks

The scheduling piece consists of adding the task to the sorted set and storing the task data under a plain string key derived from the task identifier. The steps are as follows (a redis-cli sketch follows the list):

  1. Generate an identifier for the submitted task using the INCR command. This command atomically increments a counter and returns the next integer in the sequence each time it's called.
  2. Use the SET command to store the task data. The SET command accepts a key and a string value. The key must be unique, therefore it can be something like task:${taskId}, while the value can be a JSON representation of the task data.
  3. Use the ZADD command to add the task to the sorted set. The name of the sorted set can be something simple like sortedTasks, with the task identifier as the member and the timestamp as the score.
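
As a quick sketch, here is what scheduling a single task looks like at the Redis level, expressed as redis-cli commands (the identifier and timestamp values are made up):

INCR taskCounter
(integer) 4
SET task:4 '{"name":"Test data"}'
OK
ZADD sortedTasks 1679677571675 4
(integer) 1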

Processing Tasks

The processing part is an endless loop that checks whether there are any tasks to process and, if there are none, waits for a predefined interval before trying again. The algorithm is as follows:

  1. Check if we are still allowed to run. We need a way to stop the loop if we want to stop the scheduler. This can be a simple boolean flag in the code.

  2. Use the ZRANGE command to get the first task in the list. ZRANGE accepts several useful arguments, such as a score range (the timestamp interval, in our case) and an offset/limit. Note that filtering by score uses the BYSCORE variant of ZRANGE, available since Redis 6.2; older versions offer the same functionality through the ZRANGEBYSCORE command. If we provide it with the following arguments, we will get the next task that is due for execution (see the redis-cli example after this list):

    • Minimum score: 0 (the beginning of time)
    • Maximum score: current timestamp
    • Offset: 0
    • Count: 1
  3. If there is a task found:

    3.1 Get the task data by executing the GET command on the task:${taskId} key.

    3.2 Deserialize the data and call the task handler.

    3.3 Remove the task data using the DEL command on the task:${taskId} key.

    3.4 Remove the task identifier from the sorted set by calling ZREM on the sortedTasks key.

    3.5 Go back to point 2 to get the next task.

  4. If there is no task found, wait for a predefined number of seconds before trying again.
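
Here is what one iteration of that loop looks like as raw redis-cli commands, reusing the hypothetical task from earlier (the maximum score is whatever the current timestamp is at the moment of polling):

ZRANGE sortedTasks 0 1679677571675 BYSCORE LIMIT 0 1
1) "4"
GET task:4
"{\"name\":\"Test data\"}"
DEL task:4
(integer) 1
ZREM sortedTasks 4
(integer) 1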

The Code

Now to the code. We will have two objects to work with. The first one is RedisApi, which is simply a façade over the Redis client. For the Redis client, we chose ioredis, a popular Redis library for Node.

const Redis = require('ioredis');

function RedisApi(host, port) {
  // Note: the ioredis constructor expects the port first and the host second
  const redis = new Redis(port, host, { maxRetriesPerRequest: 3 });

  return {
    getFirstInSortedSet: async (sortedSetKey) => {
      // The BYSCORE variant of ZRANGE requires Redis 6.2 or later
      const results = await redis.zrange(
        sortedSetKey,
        0,
        new Date().getTime(),
        'BYSCORE',
        'LIMIT',
        0,
        1
      );

      return results?.length ? results[0] : null;
    },
    addToSortedSet: (sortedSetKey, member, score) => {
      return redis.zadd(sortedSetKey, score, member);
    },
    removeFromSortedSet: (sortedSetKey, member) => {
      return redis.zrem(sortedSetKey, member);
    },
    increaseCounter: (counterKey) => {
      return redis.incr(counterKey);
    },
    setString: (stringKey, value) => {
      // A plain SET is enough here; we don't need the previous value back
      return redis.set(stringKey, value);
    },
    getString: (stringKey) => {
      return redis.get(stringKey);
    },
    removeString: (stringKey) => {
      return redis.del(stringKey);
    },
    isConnected: async () => {
      try {
        // PING is the cheapest way to verify that the connection is alive
        await redis.ping();
        return true;
      } catch (e) {
        return false;
      }
    },
  };
}

The RedisApi function returns an object that has all the Redis operations that we mentioned previously, with the addition of isConnected, which we will use to check if the Redis connection is working.

The other object is the Scheduler object and has three functions:

  • start() to start the task processing
  • stop() to stop the task processing
  • schedule() to submit new tasks

The start() and schedule() functions contain the bulk of the algorithm we wrote above. The schedule() function adds a new task to Redis, while the start() function creates a findNextTask() function internally, which it schedules recursively while the scheduler is running. When creating a new Scheduler object, you need to provide the Redis connection details, a polling interval, and a task handler function. The task handler function will be provided with task data.

function Scheduler(
  pollingIntervalInSec,
  taskHandler,
  redisHost,
  redisPort = 6379
) {
  const redisApi = new RedisApi(redisHost, redisPort);

  let isRunning = false;

  return {
    schedule: async (data, timestamp) => {
      const taskId = await redisApi.increaseCounter('taskCounter');
      console.log(
        `Scheduled new task with ID ${taskId} and timestamp ${timestamp}`,
        data
      );
      await redisApi.setString(`task:${taskId}`, JSON.stringify(data));
      await redisApi.addToSortedSet('sortedTasks', taskId, timestamp);
    },
    start: async () => {
      console.log('Started scheduler');
      isRunning = true;

      const findNextTask = async () => {
        if (!isRunning) {
          return;
        }

        const isRedisConnected = await redisApi.isConnected();
        if (isRedisConnected) {
          console.log('Polling for new tasks');

          let taskId;
          do {
            taskId = await redisApi.getFirstInSortedSet('sortedTasks');

            if (taskId) {
              console.log(`Found task ${taskId}`);
              const taskData = await redisApi.getString(`task:${taskId}`);
              try {
                console.log(`Passing data for task ${taskId}`, taskData);
                // Await the handler so that errors thrown by async handlers are caught too
                await taskHandler(JSON.parse(taskData));
              } catch (err) {
                console.error(err);
              }
              // Await the removals so that failures don't go unnoticed
              await redisApi.removeString(`task:${taskId}`);
              await redisApi.removeFromSortedSet('sortedTasks', taskId);
            }
          } while (taskId);
        }

        // Reschedule even if Redis was unreachable, so that a temporary
        // outage doesn't stop the scheduler for good
        setTimeout(findNextTask, pollingIntervalInSec * 1000);
      };

      findNextTask();
    },
    stop: () => {
      isRunning = false;
      console.log('Stopped scheduler');
    },
  };
}

That's it! The following snippet creates a scheduler with a five-second polling interval, starts it, and schedules a simple task to run ten seconds later:

const scheduler = new Scheduler(
  5,
  (taskData) => {
    console.log('Handled task', taskData);
  },
  'localhost'
);
scheduler.start();

// Submit a task to execute 10 seconds later
scheduler.schedule({ name: 'Test data' }, new Date().getTime() + 10000);

Running this produces output similar to the following:

Started scheduler
Scheduled new task with ID 4 and timestamp 1679677571675 { name: 'Test data' }
Polling for new tasks
Polling for new tasks
Polling for new tasks
Found task 4
Passing data for task 4 {"name":"Test data"}
Handled task { name: 'Test data' }
Polling for new tasks

Conclusion

Redis sorted sets are an amazing tool, and Redis provides some really useful commands for querying and updating them with ease. Hopefully, this blog post has inspired you to consider using sorted sets in your own applications. Feel free to use StackBlitz to view this project online and play with it some more.

This Dot Labs is a development consultancy that is trusted by top industry companies, including Stripe, Xero, Wikimedia, Docusign, and Twilio. This Dot takes a hands-on approach by providing tailored development strategies to help you approach your most pressing challenges with clarity and confidence. Whether it's bridging the gap between business and technology or modernizing legacy systems, you’ll find a breadth of experience and knowledge you need. Check out how This Dot Labs can empower your tech journey.
