Connecting to PostgreSQL with Node.js

If you plan on writing more advanced applications in Node.js, you may find the need to store information persistently. Depending on the needs of the application, you may opt for storing data in a relational database like Postgres. In this article, I'm going to cover how you can use Node.js to connect to and execute queries against a Postgres database.

PostgreSQL

PostgreSQL is a proven relational database with more than 30 years of development behind it. It is one of the most popular, advanced, and feature-packed relational databases out in the wild, and it runs on most well-known operating systems.

node-postgres

node-postgres is a pure JavaScript library that allows you to interact with a PostgreSQL database. It supports modern features such as async / await and is well maintained. There are very few abstractions between node-postgres and the actual calls being made to the database, and this is by design.

node-postgres can be added to your project easily by installing the pg package:

npm install pg

Setting up PostgreSQL

First, you'll need a Postgres database to connect to, as node-postgres obviously won't work without one. There's more than one way to set one up, such as installing it from a package manager, but if you have Docker, it's very simple. Here's how I set up the latest version of Postgres on my machine:

$ docker run \
      --name postgres \
      -e POSTGRES_PASSWORD=yourpassword \
      -p 5432:5432 \
      -d postgres

I was able to verify everything was up and running by shelling in with:

$ docker exec -ti -u postgres postgres psql

Then I ran the following to create a database for us to connect to later:

CREATE DATABASE nodedemo;

At this point you should have a fully functional database server running on your system!

Connecting to Postgres from Node

There are a couple of different ways to connect to your database. You can use a connection pool or just instantiate a client. Here, we create both using credentials hard-coded in the file itself. You can also configure connections with environment variables instead, as shown in the sketch below.
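
For example, node-postgres reads the standard libpq environment variables (PGUSER, PGHOST, PGDATABASE, PGPASSWORD, and PGPORT) when you don't pass credentials explicitly. Here's a minimal sketch of that approach, assuming those variables are exported in your shell:

// Assumes PGUSER, PGHOST, PGDATABASE, PGPASSWORD and PGPORT are set,
// e.g. export PGDATABASE=nodedemo
const { Pool } = require("pg");

const pool = new Pool(); // credentials come from the environment

pool
  .query("SELECT NOW()")
  .then((result) => console.log(result.rows[0]))
  .finally(() => pool.end());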

This is an example index.js file that I put into a project I generated with npm init and installed node-postgres into:

const { Pool, Client } = require("pg");

const credentials = {
  user: "postgres",
  host: "localhost",
  database: "nodedemo",
  password: "yourpassword",
  port: 5432,
};

// Connect with a connection pool.

async function poolDemo() {
  const pool = new Pool(credentials);
  const now = await pool.query("SELECT NOW()");
  await pool.end();

  return now;
}

// Connect with a client.

async function clientDemo() {
  const client = new Client(credentials);
  await client.connect();
  const now = await client.query("SELECT NOW()");
  await client.end();

  return now;
}

// Use a self-calling function so we can use async / await.

(async () => {
  const poolResult = await poolDemo();
  console.log("Time with pool: " + poolResult.rows[0]["now"]);

  const clientResult = await clientDemo();
  console.log("Time with client: " + clientResult.rows[0]["now"]);
})();

The results object has more information than just the returned rows, such as row count, the command, and fields returned in the response. You can find out more about the results object in the documentation.
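
For instance, here is a quick sketch, reusing the client from the clientDemo function above, that logs some of those extra properties directly:

const result = await client.query("SELECT NOW()");

console.log(result.command);   // "SELECT"
console.log(result.rowCount);  // 1
console.log(result.fields.map((field) => field.name)); // [ "now" ]
console.log(result.rows[0]);   // { now: <current timestamp> }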

The pool is recommended for any application that has to run for an extended period of time. Each time a client is created, it has to do a handshake with the PostgreSQL server, and that can take some time. A connection pool will recycle a pre-determined number of client objects so that the handshake doesn't have to be done as often.
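
If you need several queries to run on the same connection, for example inside a single request handler, you can also check a client out of the pool and release it when you're done. Here's a rough sketch of that pattern (the checkoutDemo helper is just for illustration), using the credentials object from earlier:

async function checkoutDemo() {
  const pool = new Pool(credentials);

  // Borrow one client from the pool so both queries share a connection.
  const client = await pool.connect();
  try {
    const now = await client.query("SELECT NOW()");
    const version = await client.query("SELECT version()");
    console.log(now.rows[0], version.rows[0]);
  } finally {
    // Return the client to the pool instead of closing the connection.
    client.release();
  }

  await pool.end();
}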

Let's Write Some CRUD

I'm going to show some examples of basic CRUD (Create-Read-Update-Delete) operations. Since node-postgres is a very light abstraction, it's not much different from writing the queries yourself against the psql REPL (Read-Eval-Print Loop).

Let's start with a basic schema. You can execute this SQL using psql:

CREATE TABLE people (
    id BIGSERIAL PRIMARY KEY,
    fullname TEXT,
    gender TEXT,
    phone TEXT,
    age INTEGER,
    created_at TIMESTAMP DEFAULT NOW()
);

Here are the functions that implement CRUD. They assume a Pool created once at module scope with the same credentials object as before (const pool = new Pool(credentials);):

async function registerPerson(person) {
  const text = `
    INSERT INTO people (fullname, gender, phone, age)
    VALUES ($1, $2, $3, $4)
    RETURNING id
  `;
  const values = [person.fullname, person.gender, person.phone, person.age];
  return pool.query(text, values);
}

async function getPerson(personId) {
  const text = `SELECT * FROM people WHERE id = $1`;
  const values = [personId];
  return pool.query(text, values);
}

async function updatePersonName(personId, fullname) {
  const text = `UPDATE people SET fullname = $2 WHERE id = $1`;
  const values = [personId, fullname];
  return pool.query(text, values);
}

async function removePerson(personId) {
  const text = `DELETE FROM people WHERE id = $1`;
  const values = [personId];
  return pool.query(text, values);
}

If you know SQL, these functions will be self-explanatory. Note that parameters are passed via parameterized queries, and we never concatenate SQL text. We do this to avoid SQL injection attacks, where specially crafted input is used to execute arbitrary SQL against your server.

Here is our new init code that makes calls to the aforementioned functions:

(async () => {
  // Register a new user and get an id, which comes from the RETURNING clause.
  const registerResult = await registerPerson({
    fullname: "Jane Doe",
    gender: "F",
    phone: "5555555555",
    age: 29,
  });
  const personId = registerResult.rows[0]["id"];
  console.log("Registered a person with id: " + personId);

  // Obtain the full person object from the database.
  const getPersonResult = await getPerson(personId);
  console.log(
    "Result of SELECT query for person '" +
      personId +
      "': " +
      JSON.stringify(getPersonResult.rows[0], null, "  ")
  );

  // Update the person's full name and query for them again to verify.
  await updatePersonName(personId, "Jane Johnson");
  const getChangedPersonResult = await getPerson(personId);
  console.log(
    "Result of SELECT query for person after name change '" +
      personId +
      "': " +
      JSON.stringify(getChangedPersonResult.rows[0], null, "  ")
  );

  // Clean up the database by removing the person record.
  await removePerson(personId);

  await pool.end();
})();

This is only scratching the surface. Many more advanced features, such as transactions, are supported as well!
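
To give a rough idea, here is a sketch of a transaction with node-postgres, based on the pattern in its documentation: check a client out of the pool, issue BEGIN and COMMIT yourself, and ROLLBACK if anything throws. The helper name is just illustrative, and it reuses the people table and the module-level pool from above. The single UPDATE is trivially small, but it shows the shape of the pattern:

async function renamePersonTransactionally(personId, fullname) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query("UPDATE people SET fullname = $2 WHERE id = $1", [
      personId,
      fullname,
    ]);
    await client.query("COMMIT");
  } catch (err) {
    // Undo any work done so far if something goes wrong.
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}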

What about ORMs?

An ORM (Object-Relational Mapper) allows you to write queries using an object-oriented paradigm. This can be useful if, for example, you need to generate queries dynamically in a safe way. Some examples of ORMs include, but are not limited to: TypeORM, Sequelize, RxDB, and Objection.js.

This article is specific to node-postgres and we'll cover ORMs in depth in a future article.

Conclusion

Interfacing with relational databases is a breeze with Node.js. The libraries in this space make it easy to use async functionality so that your application keeps doing other work while it waits on the database.

