How to automatically deploy your full-stack JavaScript app from an NX monorepo with AWS CodePipeline

In our previous blog post (How to host a full-stack JavaScript app with AWS CloudFront and Elastic Beanstalk), we set up a horizontally scalable deployment for our full-stack JavaScript app. In this article, we would like to show you how to set up AWS CodePipeline to automatically deploy changes to the application.

App structure

Our application is a simple front-end with an API back-end set up in an NX monorepo. The production-built API code is hosted in Elastic Beanstalk, while the front-end is stored in S3 and served through CloudFront. Whenever we are ready to make a new release, we want to be able to deploy the new API and front-end versions to the existing distribution.

Architecture

In this article, we will set up a CodePipeline that automatically deploys changes merged into the main branch of our connected repository.

CodePipeline

CodeBuild and the buildspec file

First and foremost, we should set up the build job that will run the deploy logic. For this, we are going to use CodeBuild. Let's go into our repository and create a build-and-deploy.buildspec.yml file. We put this file under the tools/aws/ folder.

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 18
    on-failure: ABORT
    commands:
      - npm ci
  build:
    on-failure: ABORT
    commands:
      # Build the front-end and the back-end
      - npm run build:$ENVIRONMENT_TARGET
      # TODO: Push FE to S3
      # TODO: Push API to Elastic beanstalk

This buildspec file does not do much so far, but we are going to extend it. In the install phase, it runs npm ci to install the dependencies, and in the build phase, we run the build command using the ENVIRONMENT_TARGET variable. This is useful because if you have more environments, such as development and staging, you can have different configurations and builds for them and still use the same buildspec file.
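For reference, here is a minimal sketch of the npm scripts this setup assumes. The build:prod script comes from the previous article, while the staging variant is a hypothetical illustration of the pattern:

# build:prod in package.json runs the NX production builds for both apps:
npx nx run-many --target=build --projects api,frontend --configuration=production
# A staging pipeline would set ENVIRONMENT_TARGET=staging and call an
# analogous build:staging script with a staging configuration instead.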

Let's go to the CodeBuild page in our AWS console and create a build project. Add a descriptive name, such as your-app-build-and-deploy, and provide a meaningful description for your future self. For this example, we are going to restrict the number of concurrent builds to 1.

Build project configuration

The next step is to set up the source for this job, so we can keep the buildspec file in the repository and make sure the job uses the steps declared in the YAML file. We use an access token that allows us to connect to GitHub. Here you can read more on setting up a GitHub connection with an access token. You can also connect with OAuth, or use an entirely different Git provider.

Source setup

We set the provider to GitHub and provide the repository URL. We also set the Git clone depth to 1, because that makes checking out the repository faster.
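If you would rather register the personal access token from the command line, the AWS CLI can do that as well. A minimal sketch, assuming the token is exported as GITHUB_ACCESS_TOKEN:

# Store a GitHub personal access token for CodeBuild source access
aws codebuild import-source-credentials \
  --server-type GITHUB \
  --auth-type PERSONAL_ACCESS_TOKEN \
  --token "$GITHUB_ACCESS_TOKEN"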

In the Environment section, we recommend using an AWS CodeBuild managed image. We use the Ubuntu Standard runtime with the aws/codebuild/standard:7.0 version, which ships with Node 18. We always want to use the latest image version for this runtime, and Linux EC2 is fine as the Environment type. We don't need elevated privileges, because we won't build Docker images, but we do want to create a new service role.

CodeBuild environment

In the Buildspec section, select Use a buildspec file and provide the path from your repository root as the Buildspec name. For our example, it is tools/aws/build-and-deploy.buildspec.yml. We leave the Batch configuration and the Artifacts sections as they are, and in the Logs section, we select how we want the logs to work. For this example, to reduce cost, we are going to use S3 logs and save the build logs in the aws-codebuild-build-logs bucket that we created for this purpose. We are finished, so let's create the build project.

CodeBuild logs
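The same build project can also be created from the AWS CLI. A rough sketch with a hypothetical repository URL, account ID, and role name; the console flow above remains the authoritative walkthrough:

aws codebuild create-project \
  --name your-app-build-and-deploy \
  --source "type=GITHUB,location=https://github.com/your-org/your-repo.git,gitCloneDepth=1,buildspec=tools/aws/build-and-deploy.buildspec.yml" \
  --artifacts "type=NO_ARTIFACTS" \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:7.0,computeType=BUILD_GENERAL1_SMALL" \
  --service-role "arn:aws:iam::123456789012:role/service-role/codebuild-your-app-service-role" \
  --logs-config "s3Logs={status=ENABLED,location=aws-codebuild-build-logs/build-log}" \
  --concurrent-build-limit 1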

CodePipeline setup

To set up automated deployment, we need to create a CodePipeline. Click on Create pipeline and give it a name. We also want a new service role to be created for this pipeline.

CodePipeline settings

Next, we should set up the source stage. As the source provider, we need to use GitHub (Version 2) and set up a connection. You can read about how to do it here. After the connection is set up, select your repository and the branch you want to deploy from. We also want to start the pipeline whenever the source code changes. For the sake of simplicity, we keep the Output artifact format as CodePipeline default.

Pipeline source

At the Build stage, we select AWS CodeBuild as the build provider and pick the build project that we created above. Remember that our build uses the ENVIRONMENT_TARGET variable, so let's add it to this stage with the Plaintext value prod. This way, the build will run the build:prod command from our package.json. As the Build type, we want Single build.

Pipeline build stage

We can skip the deploy stage, because we are going to set up deployment in our build job. Review the pipeline and create it. After it is created, it will run for the first time. At this point it will not deploy anything, but it should run successfully.
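You can also trigger the pipeline by hand at any time, which is handy for re-running that first execution. A one-liner, assuming the pipeline is named your-app-pipeline:

aws codepipeline start-pipeline-execution --name your-app-pipeline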

Deployment prerequisites

To be able to deploy to S3 and Elastic Beanstalk, our CodeBuild job needs to be able to interact with those services. When we created the build project, a service role was created for it. In this example, the service role is codebuild-aws-test-build-and-deploy-service-role. Let's go to the IAM page in the console, open the Roles page, search for our CodeBuild role, and add permissions to it. Click the Add permissions button and select Attach policies. We need three AWS-managed policies added to this service role: AdministratorAccess-AWSElasticBeanstalk allows us to deploy the API, AmazonS3FullAccess allows us to deploy the front-end, and CloudFrontFullAccess allows us to invalidate the caches so CloudFront serves the new front-end files as soon as the deployment is ready.

New policies
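If you prefer scripting the IAM changes, the same three policies can be attached with the AWS CLI. A minimal sketch using the service role name from above:

ROLE_NAME=codebuild-aws-test-build-and-deploy-service-role
aws iam attach-role-policy --role-name "$ROLE_NAME" \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess-AWSElasticBeanstalk
aws iam attach-role-policy --role-name "$ROLE_NAME" \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name "$ROLE_NAME" \
  --policy-arn arn:aws:iam::aws:policy/CloudFrontFullAccess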

Deployment

Upload the front-end to S3

Uploading the front-end should be pretty straightforward. We use an AWS CodeBuild managed image in our pipeline; therefore, we have access to the aws CLI. Let's update our buildspec file with the following changes:

phases:
# ...
  build:
    on-failure: ABORT
    commands:
      # Build the front-end and the back-end
      - npm run build:$ENVIRONMENT_TARGET
      # Sync the new front-end build to S3, deleting files that no longer exist
      - aws s3 sync dist/apps/frontend/ s3://$FRONT_END_BUCKET --delete
      # Invalidate cloudfront caches to immediately serve the new front-end files
      - aws cloudfront create-invalidation --distribution-id $CLOUDFRONT_DISTRIBUTION_ID --paths "/index.html"
      # TODO: Push API to Elastic beanstalk

First, we upload the fresh front-end build to the S3 bucket, and then we invalidate the caches for the index.html file, so CloudFront will immediately serve the changes. If you have more static files in your app, you might need to invalidate caches for those as well.
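For example, if your build also emits unhashed static assets, a single invalidation call can cover several paths. The extra paths below are hypothetical; adjust them to your build output:

aws cloudfront create-invalidation \
  --distribution-id "$CLOUDFRONT_DISTRIBUTION_ID" \
  --paths "/index.html" "/assets/*" "/favicon.ico"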

Before we push the above changes, we need to add the new environment variables to our CodePipeline. To do this, open the pipeline and click on the Edit button, which enables us to edit the Build stage. Edit the build step by clicking on its edit button.

Edit build step

On this screen, we add the new environment variables. For this example, we set FRONT_END_BUCKET to aws-hosting-prod and CLOUDFRONT_DISTRIBUTION_ID to E3FV1Q1P98H4EZ, both as Plaintext values.

Add new variable

Now, if we make a change to our index.html file, for example, changing the button to <button id="hello">HELLO 2</button>, and commit and push it, the change gets deployed automatically.

Deploying the API to Elastic Beanstalk

We are going to need some environment variables passed down to the build pipeline to be able to deploy to different environments, like staging or prod. We gathered these below:

  • COMMIT_ID: #{SourceVariables.CommitId} - This will contain the commit ID from the checkout step. We include it so we can always check which commit is deployed.
  • ELASTIC_BEANSTALK_APPLICATION_NAME: Test AWS App - This is the Elastic Beanstalk application that your environment is associated with.
  • ELASTIC_BEANSTALK_ENVIRONMENT_NAME: TestAWSApp-prod - This is the Elastic Beanstalk environment you want to deploy to.
  • API_VERSION_BUCKET: elasticbeanstalk-us-east-1-474671518642 - This is the S3 bucket that was created by Elastic Beanstalk.

API env variables

With the above variables, we can derive some new variables at build time to make sure that every API version is unique and gets deployed. We set this up in the install phase.

# ...

phases:
  install:
    runtime-versions:
      nodejs: 18
    on-failure: ABORT
    commands:
      - APP_VERSION=`jq '.version' -j package.json`
      - API_VERSION=$APP_VERSION-build$CODEBUILD_BUILD_NUMBER
      - API_ZIP_KEY=$COMMIT_ID-api.zip
      - 'APP_VERSION_DESCRIPTION="$APP_VERSION: $COMMIT_ID"'
      - npm ci
# ...

The APP_VERSION variable holds the version property from the package.json file; in a release process, the application's version is stored there. The API_VERSION variable contains the APP_VERSION with the build number appended as a suffix. We want the uploaded bundle to indicate the commit it was built from, so the API_ZIP_KEY includes the commit ID. The APP_VERSION_DESCRIPTION will be the description of the deployed version in Elastic Beanstalk.
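To make the moving parts concrete, here is a worked example with hypothetical values: package.json version 1.2.0, CodeBuild build number 42, and commit abc1234:

APP_VERSION=1.2.0                          # from package.json
API_VERSION=1.2.0-build42                  # $APP_VERSION-build$CODEBUILD_BUILD_NUMBER
API_ZIP_KEY=abc1234-api.zip                # $COMMIT_ID-api.zip
APP_VERSION_DESCRIPTION="1.2.0: abc1234"   # "$APP_VERSION: $COMMIT_ID"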

Finally, we are going to update the buildspec file with the actual Elastic Beanstalk deployment steps.

phases:
# ...
  build:
    on-failure: ABORT
    commands:
      # ...

      # ZIP the API
      - zip -r -j dist/apps/api.zip dist/apps/api
      # Upload the API bundle to S3
      - aws s3 cp dist/apps/api.zip s3://$API_VERSION_BUCKET/$ENVIRONMENT_TARGET/$API_ZIP_KEY
      # Create new API version in Elastic Beanstalk
      - aws elasticbeanstalk create-application-version --application-name "$ELASTIC_BEANSTALK_APPLICATION_NAME" --version-label "$API_VERSION" --description "$APP_VERSION_DESCRIPTION" --source-bundle "S3Bucket=$API_VERSION_BUCKET,S3Key=$ENVIRONMENT_TARGET/$API_ZIP_KEY"
      # Deploy new API version
      - aws elasticbeanstalk update-environment --application-name "$ELASTIC_BEANSTALK_APPLICATION_NAME" --version-label "$API_VERSION" --environment-name "$ELASTIC_BEANSTALK_ENVIRONMENT_NAME"
      # Wait until the Elastic Beanstalk environment is stable
      - aws elasticbeanstalk wait environment-updated --application-name "$ELASTIC_BEANSTALK_APPLICATION_NAME" --environment-name "$ELASTIC_BEANSTALK_ENVIRONMENT_NAME"

Let's make a change in the API, for example, to the message sent back by the /api/hello endpoint, and push up the changes.
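Once the pipeline finishes, you can verify what is live. A quick sketch; the application and environment names come from the variables above, and the CloudFront domain is a hypothetical placeholder:

# Check which version label the environment is running
aws elasticbeanstalk describe-environments \
  --application-name "Test AWS App" \
  --environment-names "TestAWSApp-prod" \
  --query "Environments[0].VersionLabel" \
  --output text

# Hit the endpoint through CloudFront
curl https://d1234abcd1234.cloudfront.net/api/hello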


Now, every time a change is merged into the main branch, it gets deployed to production. Using these guides, you can set up multiple environments, and you can configure separate CodePipeline instances to deploy from different branches. I hope this guide proves helpful to you.

