
Introducing @this-dot/rxidb


When we are working on PWAs (Progressive Web Applications), we sometimes need to implement features that require us to store data on the user's machine. One way to do that is to use IndexedDB. Using the IndexedDB browser API has its challenges, so our team at This Dot has developed an RxJS wrapper library around it. With @this-dot/rxidb, one can set up reactive database connections and manipulate their contents the RxJS way. The library also provides the ability to subscribe to changes in the database and update the UI accordingly.

In this blog post, I'd like to show you some small examples of the library in action. If you'd like to see the finished examples, please check out our OSS repository, where you can find these and more.

Storing data in chronological order

Imagine that you are working on a special text editor app, or on anything else that needs to keep data between page reloads. These kinds of apps usually need to track larger amounts of data, and using localStorage for that would be bad practice. In the following example, we will focus on how to store and delete rows in an Object Store that has autoIncrement enabled. For the sake of simplicity, every time the user presses the Add Item button, a timestamp of the event will be stored in the database.

We would also like to be able to remove items from the beginning and the end of this store. We will add two buttons to our UI to deal with that, and we would like them to be disabled if there are no entries in the store. We have a starter HTML that looks like the following:

<h1>@this-dot/rxidb autoincrement example</h1>
<br>
<button id="add-item-btn"> Add item </button>
<button id="remove-first-item-btn"> Remove first item </button>
<button id="remove-last-item-btn"> Remove last item </button>

<hr>
<div id="container"></div>

Initializing the Database

For us to be able to store data in IndexedDB, we need a database connection and an Object Store set up. We want this Object Store to increment automatically, so we don't need to keep track of the last key manually. We do want to listen to every update that happens in the database, so let's set up our listeners for the keys and the key-value pairs using the entries() and keys() operators.

import {
  addItem,
  connectIndexedDb,
  deleteItem,
  entries,
  getObjectStore,
  keys,
} from '@this-dot/rxidb';

// ...

const DATABASE_NAME = 'AUTO_INCREMENT';

const store$ = connectIndexedDb(DATABASE_NAME).pipe(
  getObjectStore('store', { autoIncrement: true })
);

const keyValues$: Observable<{ key: IDBValidKey; value: unknown }[]> =
  store$.pipe(entries());
const keys$: Observable<IDBValidKey[]> = store$.pipe(keys());

We want to display the contents of the database when they get updated in the #container div. For that, we need to subscribe to our keyValues$ observable. Whenever it emits, we want to update our div.

const containerDiv: HTMLElement = document.getElementById('container');

// ...

keyValues$.subscribe((entries) => {
  const content = entries
    .map(({ key, value }) => `<div>${key} | ${value} </div>`)
    .join('\n<br>\n');
  containerDiv.innerHTML = content;
});

Manipulating the data

We have three buttons in our UI: one for adding data to the Object Store, and two for removing data from it. Let's set up our event Observables using the fromEvent creator function from RxJS.

const removeFirstBtn: HTMLElement = document.getElementById(
  'remove-first-item-btn'
);
const removeLastBtn: HTMLElement = document.getElementById(
  'remove-last-item-btn'
);
const addItemBtn: HTMLElement = document.getElementById('add-item-btn');

const addItemBtnClick$ = fromEvent(addItemBtn, 'click');
const removeFirstItemBtnClick$ = fromEvent(removeFirstBtn, 'click');
const removeLastItemBtnClick$ = fromEvent(removeLastBtn, 'click');

We can use the addItem operator to add rows to an automatically incrementing Object Store. When the Add Item button gets clicked, we want to save a timestamp into our database.

addItemBtnClick$
  .pipe(
    map(() => new Date().getTime()),
    switchMap((timestamp) => store$.pipe(addItem(timestamp)))
  )
  .subscribe();

Removing elements from the store happens on the other two button clicks. We need the keys$ observable so we can delete the first or the last item in the store.

removeFirstItemBtnClick$
  .pipe(
    withLatestFrom(keys$),
    switchMap(([, keys]) =>
      store$.pipe(
        filter(() => !!keys.length),
        deleteItem(keys[0])
      )
    )
  )
  .subscribe();

removeLastItemBtnClick$
  .pipe(
    withLatestFrom(keys$),
    switchMap(([, keys]) =>
      store$.pipe(
        filter(() => !!keys.length),
        deleteItem(keys[keys.length - 1])
      )
    )
  )
  .subscribe();

Toggling button states

The last feature we want to implement is toggling the remove buttons' disabled state. If there are no entries in the database, we disable the buttons; if there are entries, we enable them. We can easily listen to keyValues$ with the tap operator.

const keyValues$: Observable<{ key: IDBValidKey; value: unknown }[]> =
  store$.pipe(
    entries(),
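    // side effect: keep the remove buttons' disabled state in sync with the store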
    tap(toggleRemoveButtons)
  );

// ...

function toggleRemoveButtons(
  entries: { key: IDBValidKey; value: unknown }[]
): void {
  if (entries.length) {
    removeFirstBtn.removeAttribute('disabled');
    removeLastBtn.removeAttribute('disabled');
  } else {
    removeFirstBtn.setAttribute('disabled', 'true');
    removeLastBtn.setAttribute('disabled', 'true');
  }
}

Real-world use cases for autoIncrement Object Stores

An automatically incrementing Object Store can be useful when your app needs to support offline mode, but you also need to log certain events happening on the UI to an API endpoint. Such audit logs must be stored locally and sent the next time the device comes online. While the device is offline, every outgoing request to our logging endpoint can instead put its data into this Object Store, and when the device comes back online, we just read the events and send them with their timestamps.
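Here is a minimal sketch of that idea, reusing the entries() and deleteItem() operators shown above. The sendLogs() helper and its /api/audit-log endpoint are assumptions made for the sake of the example, and since entries() keeps emitting on every change, we use take(1) to grab a single snapshot of the queue.

import {
  connectIndexedDb,
  deleteItem,
  entries,
  getObjectStore,
} from '@this-dot/rxidb';
import { concatMap, from, fromEvent, switchMap, take } from 'rxjs';

// hypothetical helper that POSTs the queued events to a logging endpoint
async function sendLogs(
  events: { key: IDBValidKey; value: unknown }[]
): Promise<void> {
  await fetch('/api/audit-log', {
    method: 'POST',
    body: JSON.stringify(events),
  });
}

const auditLog$ = connectIndexedDb('AUDIT_LOG').pipe(
  getObjectStore('store', { autoIncrement: true })
);

fromEvent(window, 'online')
  .pipe(
    // take a single snapshot of everything that queued up while offline
    switchMap(() => auditLog$.pipe(entries(), take(1))),
    concatMap((queued) =>
      from(sendLogs(queued)).pipe(
        // after a successful send, remove the flushed entries one by one
        concatMap(() => from(queued)),
        concatMap(({ key }) => auditLog$.pipe(deleteItem(key)))
      )
    )
  )
  .subscribe();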

Storing objects

Have you ever needed to fill out an extremely long form online? Maybe the form was even part of a wizard. It is a very bad user experience when you accidentally refresh or close the tab and have to start over. Of course, the unfinished form could be stored in a server-side database somehow, but that would mean storing people's sensitive PII (Personally Identifiable Information). IndexedDB can help here as well, because it stores that data on the user's machine.

In the following example, we are going to focus on how to store data under specific keys. For the sake of simplicity, we set up some listeners and automatically save the information entered into the form. We will also have two buttons: one for clearing the form, and one for submitting it. Our HTML template looks like the following:

<div id="app">
  <h1>@this-dot/rxidb key-value pair store example</h1>

  <hr />

  <h2>Address form</h2>
  <form id="example-form" method="POST">
    <label for="first-name">First Name:</label>
    <br />
    <input id="first-name" required placeholder="John" />
    <br />
    <br />
    <label for="last-name">Last Name:</label>
    <br />
    <input id="last-name" required placeholder="Doe" />
    <br />
    <br />
    <label for="city">City:</label>
    <br />
    <input id="city" required placeholder="Metropolis" />
    <br />
    <br />
    <label for="address-first">Address line 1:</label>
    <br />
    <input id="address-first" required placeholder="Example street 1" />
    <br />
    <br />
    <label for="address-second">Address line 2 (optional):</label>
    <br />
    <input id="address-second" placeholder="4th floor; 13th door" />
    <br />
    <br />
    <div style="display: flex; justify-content: space-between; width: 153px">
      <button id="clear-button" type="button">Clear form</button>
      <button id="submit-button" type="submit" disabled>Submit</button>
    </div>
  </form>
</div>

Based on the above template, we know the shape of the object we would like to store. Let's set up a type for that, and a default constant for the empty form value.

type UserFormValue = {
  firstName: string;
  lastName: string;
  city: string;
  addressFirst: string;
  addressSecond: string;
};

const EMPTY_FORM_VALUE: UserFormValue = {
  firstName: '',
  lastName: '',
  city: '',
  addressFirst: '',
  addressSecond: '',
};

Set up the Object Store and the event listeners

Setting up the database is done similarly to the previous example: we open a connection to the IndexedDB and then create a store. This time, though, we create a default store, which gives us full control over the keys. With this form, we want to write the value of the USER_INFO key in this Object Store. We also want to get notified when this value changes, so we set up the userInfo$ observable using the read() operator.

import {
  connectIndexedDb,
  deleteItem,
  setItem,
  read,
  getObjectStore,
} from '@this-dot/rxidb';
// ...

const DATABASE_NAME = 'KEY_VALUE_PAIRS';
const FORM_DATA_KEY = 'USER_INFO';

const store$ = connectIndexedDb(DATABASE_NAME).pipe(getObjectStore('store'));
const userInfo$: Observable<UserFormValue | null> = store$.pipe(
  read(FORM_DATA_KEY)
);

To be able to write values into our Object Store and update the data on our UI, we need some HTML elements. We set up constants that point to our form, the two buttons, and all of the inputs inside the form.

const exampleForm = document.getElementById('example-form') as HTMLFormElement;
const submitButton = document.getElementById(
  'submit-button'
) as HTMLButtonElement;
const clearButton = document.getElementById(
  'clear-button'
) as HTMLButtonElement;

const firstNameInput = document.getElementById(
  'first-name'
) as HTMLInputElement;
const lastNameInput = document.getElementById('last-name') as HTMLInputElement;
const cityInput = document.getElementById('city') as HTMLInputElement;
const addressFirstInput = document.getElementById(
  'address-first'
) as HTMLInputElement;
const addressSecondInput = document.getElementById(
  'address-second'
) as HTMLInputElement;

And finally, we set up some event listener Observables so we can act when an event occurs. Again, we use the fromEvent creator function from RxJS.

const formInputChange$ = fromEvent(exampleForm, 'input');
const formSubmit$ = fromEvent(exampleForm, 'submit');
const clearForm$ = fromEvent(clearButton, 'click');

Set up some helper methods

Before we set up our subscriptions, let's think through what behaviour we want with this form and the buttons.

We certainly need a way to read the current value of the form into an object that matches the UserFormValue type. We also want to be able to set the form's input fields, especially when we reload the page and there is data saved in our Object Store. If no value is provided to this setter method, it should use our predefined EMPTY_FORM_VALUE constant.

function getUserFormValue(): UserFormValue {
  return {
    firstName: firstNameInput.value,
    lastName: lastNameInput.value,
    city: cityInput.value,
    addressFirst: addressFirstInput.value,
    addressSecond: addressSecondInput.value,
  };
}

function setInputFieldValues(value: UserFormValue = EMPTY_FORM_VALUE): void {
  firstNameInput.value = value.firstName || '';
  lastNameInput.value = value.lastName || '';
  cityInput.value = value.city || '';
  addressFirstInput.value = value.addressFirst || '';
  addressSecondInput.value = value.addressSecond || '';
}

The UI should block the user from certain interactions. The submit button should be disabled while the form is invalid, and while a database write operation is still in progress. For handling the submit button state, we need two helper methods.

function disableSubmitButton(): void {
  submitButton.setAttribute('disabled', 'true');
}

function removeSubmitButtonDisabledIfFormIsValid(): void {
  const isFormValid = exampleForm.checkValidity();
  if (isFormValid) {
    submitButton.removeAttribute('disabled');
  }
}

Now we have every tool that we need to implement the logic.

Setting up our subscriptions

We would like to write the form data into the Object Store as soon as the form changes, but we don't want to start a write operation on every keystroke. To mitigate this, we use the debounceTime(1000) operator, which waits for 1 second of inactivity before the write operation starts. We use our getUserFormValue() helper method to get the actual data from the input fields, and we call the setItem() operator on the store$ observable inside a switchMap to write the values. We also want to disable the Submit button when the form changes, and re-enable it once the form is valid and the write operation has finished.

formInputChange$
  .pipe(
    tap(() => disableSubmitButton()),
    debounceTime(1000),
    map<unknown, UserFormValue>(getUserFormValue),
    switchMap((userFormValue) =>
      store$.pipe(setItem(FORM_DATA_KEY, userFormValue))
    ),
    tap(() => removeSubmitButtonDisabledIfFormIsValid())
  )
  .subscribe();

We also want to set the values of the input fields, for example when we refresh the page. Here too, we handle the submit button state, and we only set the values if there is stored data to set. We use our setInputFieldValues() method to update the UI.

userInfo$
  .pipe(
    tap(() => disableSubmitButton()),
    filter((v: UserFormValue | null): v is UserFormValue => !!v),
    tap((storedValue: UserFormValue) => {
      setInputFieldValues(storedValue);
      removeSubmitButtonDisabledIfFormIsValid();
    })
  )
  .subscribe();

When we submit the form, we will probably want to do something asynchronous, such as sending the data to a server. When that succeeds, we want to clear our Object Store so we don't keep the submitted data on the user's machine, and we want to update the UI by clearing the input fields. In this example, pressing the submit button would send a POST request, so we call event.preventDefault() on the submit event to stay on the page.

formSubmit$
  .pipe(
    // We prevent the native HTML submit event from running, so the form won't send a POST request for the sake of the example.
    tap((event: SubmitEvent) => {
      event.preventDefault();
      disableSubmitButton();
    }),
    map(getUserFormValue),
    // this is the point where we could do anything with the current form values, for example, send them to the server, etc.
    switchMap(() => store$.pipe(deleteItem(FORM_DATA_KEY))),
    tap(() => setInputFieldValues())
  )
  .subscribe();
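
If you do want to send the form values to a server before clearing the store, a minimal variation of the above could look like the following. The submitToServer() helper and its /api/address endpoint are assumptions for illustration (from comes from rxjs); we only clear the stored data once the submission has succeeded.

// hypothetical API call standing in for the real submission logic
async function submitToServer(value: UserFormValue): Promise<void> {
  await fetch('/api/address', {
    method: 'POST',
    body: JSON.stringify(value),
  });
}

formSubmit$
  .pipe(
    tap((event: SubmitEvent) => {
      event.preventDefault();
      disableSubmitButton();
    }),
    map(getUserFormValue),
    // only clear the stored data once the submission has succeeded
    switchMap((value) => from(submitToServer(value))),
    switchMap(() => store$.pipe(deleteItem(FORM_DATA_KEY))),
    tap(() => setInputFieldValues())
  )
  .subscribe();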

And when we want to clear the form, we need to do the same with the data stored in our Object Store.

clearForm$
  .pipe(
    switchMap(() => store$.pipe(deleteItem(FORM_DATA_KEY))),
    tap(() => setInputFieldValues())
  )
  .subscribe();

Real-world use cases for Key-Value pair Object Stores

Having forms persist between page refreshes is just one useful feature you can build with IndexedDB. Our example above is very simple, but the same approach works for multi-page forms: you could store the user's progress and allow them to continue with the form later. Keeping the constraints of IndexedDB in mind, storing data for offline use is another very useful application.
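As a quick sketch of the multi-page idea, each step of a wizard could be persisted under its own key using the same setItem() and read() operators. The WIZARD database name and the WIZARD_STEP_ key naming are assumptions made for illustration:

const wizard$ = connectIndexedDb('WIZARD').pipe(getObjectStore('store'));

// persist a single step's form value under its own key
function saveStep(step: number, value: unknown) {
  return wizard$.pipe(setItem(`WIZARD_STEP_${step}`, value));
}

// read a previously saved step, e.g. to restore it after a refresh
function restoreStep(step: number) {
  return wizard$.pipe(read(`WIZARD_STEP_${step}`));
}

// usage:
// saveStep(1, getUserFormValue()).subscribe();
// restoreStep(1).subscribe((value) => { /* populate the step's inputs */ });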


You might also like

Implementing Dynamic Types in Docusign Extension Apps cover image

Implementing Dynamic Types in Docusign Extension Apps

Implementing Dynamic Types in Docusign Extension Apps In our previous blog post about Docusign Extension Apps, Advanced Authentication and Onboarding Workflows with Docusign Extension Apps, we touched on how you can extend the OAuth 2 flow to build a more powerful onboarding flow for your Extension Apps. In this blog post, we will continue explaining more advanced patterns in developing Extension Apps. For that reason, we assume at least basic familiarity with how Extension Apps work and ideally some experience developing them. To give a brief recap, Docusign Extension Apps are a powerful way to embed custom logic into Docusign agreement workflows. These apps are lightweight services, typically cloud-hosted, that integrate at specific workflow extension points to perform custom actions, such as data validation, participant input collection, or interaction with third-party services. Each Extension App is configured using a manifest file. This manifest defines metadata such as the app's author, support links, and the list of extension points it uses (these are the locations in the workflow where your app's logic will be executed). The extension points that are relevant for us in the context of this blog post are GetTypeNames and GetTypeDefinitions. These are used by Docusign to retrieve the types supported by the Extension App and their definitions, and to show them in the Maestro UI. In most apps, these types are static and rarely change. However, they don't have to be. They can also be dynamic and change based on certain configurations in the target system that the Extension App is integrating with, or based on the user role assigned to the Maestro administrator on the target system. Static vs. Dynamic Types To explain the difference between static and dynamic types, we'll use the example from our previous blog post, where we integrated with an imaginary task management system called TaskVibe. In the example, our Extension App enabled agreement workflows to communicate with TaskVibe, allowing tasks to be read, created, and updated. Our first approach to implementing the GetTypeNames and GetTypeDefinitions endpoints for the TaskVibe Extension App might look like the following. The GetTypeNames endpoint returns a single record named task: ` Given the type name task, the GetTypeDefinitions endpoint would return the following definition for that type: ` As noted in the Docusign documentation, this endpoint must return a Concerto schema representing the type. For clarity, we've omitted most of the Concerto-specific properties. The above declaration states that we have a task type, and this type has properties that correspond to task fields in TaskVibe, such as record ID, title, description, assignee, and so on. The type definition and its properties, as described above, are static and they never change. A TaskVibe task will always have the same properties, and these are essentially set in stone. Now, imagine a scenario where TaskVibe supports custom properties that are also project-dependent. One project in TaskVibe might follow a typical agile workflow with sprints, and the project manager might want a "Sprint" field in every task within that project. Another project might use a Kanban workflow, where the project manager wants a status field with values like "Backlog," "ToDo," and so on. With static types, we would need to return every possible field from any project as part of the GetTypeDefinitions response, and this introduces new challenges. 
For example, we might be dealing with hundreds of custom field types, and showing them in the Maestro UI might be too overwhelming for the Maestro administrator. Or we might be returning fields that are simply not usable by the Maestro administrator because they relate to projects the administrator doesn't have access to in TaskVibe. With dynamic types, however, we can support this level of customization. Implementing Dynamic Types When Docusign sends a request to the GetTypeNames endpoint and the types are dynamic, the Extension App has a bit more work than before. As we've mentioned earlier, we can no longer return a generic task type. Instead, we need to look into each of the TaskVibe projects the user has access to, and return the tasks as they are represented under each project, with all the custom fields. (Determining access can usually be done by making a query to a user information endpoint on the target system using the same OAuth 2 token used for other calls.) Once we find the task definitions on TaskVibe, we then need to return them in the response of GetTypeNames, where each type corresponds to a task for the given project. This is a big difference from static types, where we would only return a single, generic task. For example: ` The key point here is that we are now returning one type per task in a TaskVibe project. You can think of this as having a separate class for each type of task, in object-oriented lingo. The type name can be any string you choose, but it needs to be unique in the list, and it needs to contain the minimum information necessary to be able to distinguish it from other task definitions in the list. In our case, we've decided to form the ID by concatenating the string "task_" with the ID of the project on TaskVibe. The implementation of the GetTypeDefinitions endpoint needs to: 1. Extract the project ID from the requested type name. 1. Using the project ID, retrieve the task definition from TaskVibe for that project. This definition specifies which fields are present on the project's tasks, including all custom fields. 1. Once the fields are retrieved, map them to the properties of the Concerto schema. The resulting JSON could look like this (again, many of the Concerto properties have been omitted for clarity): ` Now, type definitions are fully dynamic and project-dependent. Caching of Type Definitions on Docusign Docusign maintains a cache of type definitions after an initial connection. This means that changes made to your integration (particularly when using dynamic types) might not be immediately visible in the Maestro UI. To ensure users see the latest data, it's useful to inform them that they may need to refresh their Docusign connection in the App Center UI if new fields are added to their integrated system (like TaskVibe). As an example, a newly added custom field on a TaskVibe project wouldn't be reflected until this refresh occurs. Conclusion In this blog post, we've explored how to leverage dynamic types within Docusign Extension Apps to create more flexible integrations with external systems. While static types offer simplicity, they can be constraining when working with external systems that offer a high level of customization. We hope that this blog post provides you with some ideas on how you can tackle similar problems in your Extension Apps....

The Importance of a Scientific Mindset in Software Engineering: Part 2 (Debugging) cover image

The Importance of a Scientific Mindset in Software Engineering: Part 2 (Debugging)

The Importance of a Scientific Mindset in Software Engineering: Part 2 (Debugging) In the first part of my series on the importance of a scientific mindset in software engineering, we explored how the principles of the scientific method can help us evaluate sources and make informed decisions. Now, we will focus on how these principles can help us tackle one of the most crucial and challenging tasks in software engineering: debugging. In software engineering, debugging is often viewed as an art - an intuitive skill honed through experience and trial and error. In a way, it is - the same as a GP, even a very evidence-based one, will likely diagnose most of their patients based on their experience and intuition and not research scientific literature every time; a software engineer will often rely on their experience and intuition to identify and fix common bugs. However, an internist faced with a complex case will likely not be able to rely on their intuition alone and must apply the scientific method to diagnose the patient. Similarly, a software engineer can benefit from using the scientific method to identify and fix the problem when faced with a complex bug. From that perspective, treating engineering challenges like scientific inquiries can transform the way we tackle problems. Rather than resorting to guesswork or gut feelings, we can apply the principles of the scientific method—forming hypotheses, designing controlled experiments, gathering and evaluating evidence—to identify and eliminate bugs systematically. This approach, sometimes referred to as "scientific debugging," reframes debugging from a haphazard process into a structured, disciplined practice. It encourages us to be skeptical, methodical, and transparent in our reasoning. For instance, as Andreas Zeller notes in the book _Why Programs Fail_, the key aspect of scientific debugging is its explicitness: Using the scientific method, you make your assumptions and reasoning explicit, allowing you to understand your assumptions and often reveals hidden clues that can lead to the root cause of the problem on hand. Note: If you'd like to read an excerpt from the book, you can find it on Embedded.com. Scientific Debugging At its core, scientific debugging applies the principles of the scientific method to the process of finding and fixing software defects. Rather than attempting random fixes or relying on intuition, it encourages engineers to move systematically, guided by data, hypotheses, and controlled experimentation. By adopting debugging as a rigorous inquiry, we can reduce guesswork, speed up the resolution process, and ensure that our fixes are based on solid evidence. Just as a scientist begins with a well-defined research question, a software engineer starts by identifying the specific symptom or error condition. For instance, if our users report inconsistencies in the data they see across different parts of the application, our research question could be: _"Under what conditions does the application display outdated or incorrect user data?"_ From there, we can follow a structured debugging process that mirrors the scientific method: - 1. Observe and Define the Problem: First, we need to clearly state the bug's symptoms and the environment in which it occurs. We should isolate whether the issue is deterministic or intermittent and identify any known triggers if possible. Such a structured definition serves as the groundwork for further investigation. - 2. 
Formulate a Hypothesis: A hypothesis in debugging is a testable explanation for the observed behavior. For instance, you might hypothesize: _"The data inconsistency occurs because a caching layer is serving stale data when certain user profiles are updated."_ The key is that this explanation must be falsifiable; if experiments don't support the hypothesis, it must be refined or discarded. - 3. Collect Evidence and Data: Evidence often includes logs, system metrics, error messages, and runtime traces. Similar to reviewing primary sources in academic research, treat your raw debugging data as crucial evidence. Evaluating these data points can reveal patterns. In our example, such patterns could be whether the bug correlates with specific caching mechanisms, increased memory usage, or database query latency. During this step, it's essential to approach data critically, just as you would analyze the quality and credibility of sources in a research literature review. Don't forget that even logs can be misleading, incomplete, or even incorrect, so cross-referencing multiple sources is key. - 4. Design and Run Experiments: Design minimal, controlled tests to confirm or refute your hypothesis. In our example, you may try disabling or shortening the cache's time-to-live (TTL) to see if more recent data is displayed correctly. By manipulating one variable at a time - such as cache invalidation intervals - you gain clearer insights into causation. Tools such as profilers, debuggers, or specialized test harnesses can help isolate factors and gather precise measurements. - 5. Analyze Results and Refine Hypotheses: If the experiment's outcome doesn't align with your hypothesis, treat it as a stepping stone, not a dead end. Adjust your explanation, form a new hypothesis, or consider additional variables (for example, whether certain API calls bypass caching). Each iteration should bring you closer to a better understanding of the bug's root cause. Remember, the goal is not to prove an initial guess right but to arrive at a verifiable explanation. - 6. Implement and Verify the Fix: Once you're confident in the identified cause, you can implement the fix. Verification doesn't stop at deployment - re-test under the same conditions and, if possible, beyond them. By confirming the fix in a controlled manner, you ensure that the solution is backed by evidence rather than wishful thinking. - Personally, I consider implementing end-to-end tests (e.g., with Playwright) that reproduce the bug and verify the fix to be a crucial part of this step. This both ensures that the bug doesn't reappear in the future due to changes in the codebase and avoids possible imprecisions of manual testing. Now, we can explore these steps in more detail, highlighting how the scientific method can guide us through the debugging process. Establishing Clear Debugging Questions (Formulating a Hypothesis) A hypothesis is a proposed explanation for a phenomenon that can be tested through experimentation. In a debugging context, that phenomenon is the bug or issue you're trying to resolve. Having a clear, falsifiable statement that you can prove or disprove ensures that you stay focused on the real problem rather than jumping haphazardly between possible causes. A properly formulated hypothesis lets you design precise experiments to evaluate whether your explanation holds true. To formulate a hypothesis effectively, you can follow these steps: 1. 
Clearly Identify the Symptom(s) Before forming any hypothesis, pin down the specific issue users are experiencing. For instance: - "Users intermittently see outdated profile information after updating their accounts." - "Some newly created user profiles don't reflect changes in certain parts of the application." Having a well-defined problem statement keeps your hypothesis focused on the actual issue. Just like a research question in science, the clarity of your symptom definition directly influences the quality of your hypothesis. 2. Draft a Tentative Explanation Next, convert your symptom into a statement that describes a _possible root cause_, such as: - "Data inconsistency occurs because the caching layer isn't invalidating or refreshing user data properly when profiles are updated." - "Stale data is displayed because the cache timeout is too long under certain load conditions." This step makes your assumption about the root cause explicit. As with the scientific method, your hypothesis should be something you can test and either confirm or refute with data or experimentation. 3. Ensure Falsifiability A valid hypothesis must be falsifiable - meaning it can be proven _wrong_. You'll struggle to design meaningful experiments if a hypothesis is too vague or broad. For example: - Not Falsifiable: "Occasionally, the application just shows weird data." - Falsifiable: "Users see stale data when the cache is not invalidated within 30 seconds of profile updates." Making your hypothesis specific enough to fail a test will pave the way for more precise debugging. 4. Align with Available Evidence Match your hypothesis to what you already know - logs, stack traces, metrics, and user reports. For example: - If logs reveal that cache invalidation events aren't firing, form a hypothesis explaining why those events fail or never occur. - If metrics show that data served from the cache is older than the configured TTL, hypothesize about how or why the TTL is being ignored. If your current explanation contradicts existing data, refine your hypothesis until it fits. 5. Plan for Controlled Tests Once you have a testable hypothesis, figure out how you'll attempt to _disprove_ it. This might involve: - Reproducing the environment: Set up a staging/local system that closely mimics production. For instance with the same cache layer configurations. - Varying one condition at a time: For example, only adjust cache invalidation policies or TTLs and then observe how data freshness changes. - Monitoring metrics: In our example, such monitoring would involve tracking user profile updates, cache hits/misses, and response times. These metrics should lead to confirming or rejecting your explanation. These plans become your blueprint for experiments in further debugging stages. Collecting and Evaluating Evidence After formulating a clear, testable hypothesis, the next crucial step is to gather data that can either support or refute it. This mirrors how scientists collect observations in a literature review or initial experiments. 1. Identify "Primary Sources" (Logs, Stack Traces, Code History): - Logs and Stack Traces: These are your direct pieces of evidence - treat them like raw experimental data. For instance, look closely at timestamps, caching-related events (e.g., invalidation triggers), and any error messages related to stale reads. - Code History: Look for related changes in your source control, e.g. using Git bisect. 
In our example, we would look for changes to caching mechanisms or references to cache libraries in commits, which could pinpoint when the inconsistency was introduced. Sometimes, reverting a commit that altered cache settings helps confirm whether the bug originated there. 2. Corroborate with "Secondary Sources" (Documentation, Q&A Forums): - Documentation: Check official docs for known behavior or configuration details that might differ from your assumptions. - Community Knowledge: Similar issues reported on GitHub or StackOverflow may reveal known pitfalls in a library you're using. 3. Assess Data Quality and Relevance: - Look for Patterns: For instance, does stale data appear only after certain update frequencies or at specific times of day? - Check Environmental Factors: For instance, does the bug happen only with particular deployment setups, container configurations, or memory constraints? - Watch Out for Biases: Avoid seeking only the data that confirms your hypothesis. Look for contradictory logs or metrics that might point to other root causes. You keep your hypothesis grounded in real-world system behavior by treating logs, stack traces, and code history as primary data - akin to raw experimental results. This evidence-first approach reduces guesswork and guides more precise experiments. Designing and Running Experiments With a hypothesis in hand and evidence gathered, it's time to test it through controlled experiments - much like scientists isolate variables to verify or debunk an explanation. 1. Set Up a Reproducible Environment: - Testing Environments: Replicate production conditions as closely as possible. In our example, that would involve ensuring the same caching configuration, library versions, and relevant data sets are in place. - Version Control Branches: Use a dedicated branch to experiment with different settings or configuration, e.g., cache invalidation strategies. This streamlines reverting changes if needed. 2. Control Variables One at a Time: - For instance, if you suspect data inconsistency is tied to cache invalidation events, first adjust only the invalidation timeout and re-test. - Or, if concurrency could be a factor (e.g., multiple requests updating user data simultaneously), test different concurrency levels to see if stale data issues become more pronounced. 3. Measure and Record Outcomes: - Automated Tests: Tests provide a great way to formalize and verify your assumptions. For instance, you could develop tests that intentionally update user profiles and check if the displayed data matches the latest state. - Monitoring Tools: Monitor relevant metrics before, during, and after each experiment. In our example, we might want to track cache hit rates, TTL durations, and query times. - Repeat Trials: Consistency across multiple runs boosts confidence in your findings. 4. Validate Against a Baseline: - If baseline tests manifest normal behavior, but your experimental changes manifest the bug, you've isolated the variable causing the issue. E.g. if the baseline tests show that data is consistently fresh under normal caching conditions but your experimental changes cause stale data. - Conversely, if your change eliminates the buggy behavior, it supports your hypothesis - e.g. that the cache configuration was the root cause. Each experiment outcome is a data point supporting or contradicting your hypothesis. Over time, these data points guide you toward the true cause. 
Analyzing Results and Iterating In scientific debugging, an unexpected result isn't a failure - it's valuable feedback that brings you closer to the right explanation. 1. Compare Outcomes to the hypothesis. For instance: - Did user data stay consistent after you reduced the cache TTL or fixed invalidation logic? - Did logs show caching events firing as expected, or did they reveal unexpected errors? - Are there only partial improvements that suggest multiple overlapping issues? 2. Incorporate Unexpected Observations: - Sometimes, debugging uncovers side effects - e.g. performance bottlenecks exposed by more frequent cache invalidations. Note these for future work. - If your hypothesis is disproven, revise it. For example, the cache may only be part of the problem, and a separate load balancer setting also needs attention. 3. Avoid Confirmation Bias: - Don't dismiss contrary data. For instance, if you see evidence that updates are fresh in some modules but stale in others, you may have found a more nuanced root cause (e.g., partial cache invalidation). - Consider other credible explanations if your teammates propose them. Test those with the same rigor. 4. Decide If You Need More Data: - If results aren't conclusive, add deeper instrumentation or enable debug modes to capture more detailed logs. - For production-only issues, implement distributed tracing or sampling logs to diagnose real-world usage patterns. 5. Document Each Iteration: - Record the results of each experiment, including any unexpected findings or new hypotheses that arise. - Through iterative experimentation and analysis, each cycle refines your understanding. By letting evidence shape your hypothesis, you ensure that your final conclusion aligns with reality. Implementing and Verifying the Fix Once you've identified the likely culprit - say, a misconfigured or missing cache invalidation policy - the next step is to implement a fix and verify its resilience. 1. Implementing the Change: - Scoped Changes: Adjust just the component pinpointed in your experiments. Avoid large-scale refactoring that might introduce other issues. - Code Reviews: Peer reviews can catch overlooked logic gaps or confirm that your changes align with best practices. 2. Regression Testing: - Re-run the same experiments that initially exposed the issue. In our stale data example, confirm that the data remains fresh under various conditions. - Conduct broader tests - like integration or end-to-end tests - to ensure no new bugs are introduced. 3. Monitoring in Production: - Even with positive test results, real-world scenarios can differ. Monitor logs and metrics (e.g. cache hit rates, user error reports) closely post-deployment. - If the buggy behavior reappears, revisit your hypothesis or consider additional factors, such as unpredicted user behavior. 4. Benchmarking and Performance Checks (If Relevant): - When making changes that affect the frequency of certain processes - such as how often a cache is refreshed - be sure to measure the performance impact. Verify you meet any latency or resource usage requirements. - Keep an eye on the trade-offs: For instance, more frequent cache invalidations might solve stale data but could also raise system load. By systematically verifying your fix - similar to confirming experimental results in research - you ensure that you've addressed the true cause and maintained overall software stability. Documenting the Debugging Process Good science relies on transparency, and so does effective debugging. 
Thorough documentation guarantees your findings are reproducible and valuable to future team members. 1. Record Your Hypothesis and Experiments: - Keep a concise log of your main hypothesis, the tests you performed, and the outcomes. - A simple markdown file within the repo can capture critical insights without being cumbersome. 2. Highlight Key Evidence and Observations: - Note the logs or metrics that were most instrumental - e.g., seeing repeated stale cache hits 10 minutes after updates. - Document any edge cases discovered along the way. 3. List Follow-Up Actions or Potential Risks: - If you discover additional issues - like memory spikes from more frequent invalidation - note them for future sprints. - Identify parts of the code that might need deeper testing or refactoring to prevent similar issues. 4. Share with Your Team: - Publish your debugging report on an internal wiki or ticket system. A well-documented troubleshooting narrative helps educate other developers. - Encouraging open discussion of the debugging process fosters a culture of continuous learning and collaboration. By paralleling scientific publication practices in your documentation, you establish a knowledge base to guide future debugging efforts and accelerate collective problem-solving. Conclusion Debugging can be as much a rigorous, methodical exercise as an art shaped by intuition and experience. By adopting the principles of scientific inquiry - forming hypotheses, designing controlled experiments, gathering evidence, and transparently documenting your process - you make your debugging approach both systematic and repeatable. The explicitness and structure of scientific debugging offer several benefits: - Better Root-Cause Discovery: Structured, hypothesis-driven debugging sheds light on the _true_ underlying factors causing defects rather than simply masking symptoms. - Informed Decisions: Data and evidence lead the way, minimizing guesswork and reducing the chance of reintroducing similar issues. - Knowledge Sharing: As in scientific research, detailed documentation of methods and outcomes helps others learn from your process and fosters a collaborative culture. Ultimately, whether you are diagnosing an intermittent crash or chasing elusive performance bottlenecks, scientific debugging brings clarity and objectivity to your workflow. By aligning your debugging practices with the scientific method, you build confidence in your solutions and empower your team to tackle complex software challenges with precision and reliability. But most importantly, do not get discouraged by the number of rigorous steps outlined above or by the fact you won't always manage to follow them all religiously. Debugging is a complex and often frustrating process, and it's okay to rely on your intuition and experience when needed. Feel free to adapt the debugging process to your needs and constraints, and as long as you keep the scientific mindset at heart, you'll be on the right track....

Introducing the express-typeorm-postgres Starter Kit cover image

Introducing the express-typeorm-postgres Starter Kit

Here at This Dot, we've been working with ExpressJS APIs for a while, and we've created a starter.dev kit for ExpressJS that you can use to scaffold your next backend project. The starter kit uses many well-known npm packages, such as TypeORM or BullMQ and integrates with databases such as PostgreSQL and Redis. Kit contents The express-typeorm-postgres starter kit provides you with infrastructure for development, and integrations with these infrastructures. It comes with a working Redis instance for caching and a second Redis instance for queues. It also starts up a Postgres instance for you, which you can seed with TypeORM. The infrastructure runs on docker using docker-compose. The generated project comes with prettier and eslint set-up, so you only need to spend time on configuration if you want to tweak or change the existing rules. Unit testing is set up using Jest, and there are some example tests provided with the example controllers. How to initialise API development usually requires more infrastructure than front-end development. Before you start, please make sure you have docker and docker-compose installed on your machine. To initialize a project with the express-typeorm-postgres kit, run the following: 1. Run npx @this-dot/create-starter to run the scaffolding tool 2. Select the Express.js, TypeORM, and PostgreSQL kit from the CLI library options 3. Name your project 4. cd into your project directory, and install dependencies using the tool of your choice (npm, yarn or pnpm) 5. copy the contents of the .env.example file into a .env file With this setup, you have a working starter kit that you can modify to your needs. TypeORM and Database When we started developing the kit, we decided to use PostgreSQL as the database, because it is a powerful, open-source object-relational database system that is widely used for storing and manipulating data. It has a strong reputation for reliability, performance, and feature richness, making it a great choice for a wide range of applications. It can also handle high levels of concurrency and large amounts of data, and supports complex queries and data types. Postgres is also highly extensible because it allows developers to add custom functions and data types to the database. It has a large and active community of developers and users who contribute to its ongoing development and support. The kit uses TypeORM to connect to the database instance. We chose TypeORM because it makes it easy to manage database connections and perform common database operations, such as querying, inserting, updating and deleting data. It supports TypeScript and a wide range of databases, such as PostgreSQL, MySQL, SQLite and MongoDB, therefore if you want to be able to switch between databases, it makes it easier. TypeORM also includes features such as database migrations, which help manage changes to database schema over time, and an entity model that allows you to define your database schema using classes and decorators. Overall, TypeORM is a useful tool for improving the efficiency and reliability of database-related code, and it can be a valuable addition to any TypeScript or JavaScript project that needs to interact with a database. To seed an initial set of data into your database, run the following commands: 1. npm run infrastructure:start - this starts up the database instance 2. npm run db:seed - this leverages TypeORM to seed the database. The seed command runs the src/db/run-seeders.ts file, where you can introduce your seeders for your own needs. 
The kit uses TypeORM-extension for seeding. Please refer to the src/db/seeding/technology-seeder.ts file for an example. Caching Storing response data in caches allows subsequent requests for the same data to be served more quickly. This can improve the performance and user experience of an API by reducing the amount of time it takes to retrieve data from the server. It can reduce the load on the database or mitigate rate limiting on third-party APIs called from your back-end. It also improves the reliability of an application by providing a fallback mechanism in case the database or the server is unavailable or slow to respond. There is a Redis instance set up in the kit to be used for caching data. Under the hood, we use the cachified library to store cached data in Redis. The kit has a useCache method exported from src/cache/cache.ts, which requires a key and a callback function to be called to fetch the data. ` When you need to invalidate cache entries, you can use the clearCacheEntry method by supplying a key string to it. It will remove the cached data from Redis, and the next request that fetches from the database will cache the new values. ` Under the src/modules/technology folder, you can see a complete example of a basic CRUD REST endpoint with caching enabled. Feel free to use those handlers as examples for your development needs. Queue A message queue allows different parts of an application, or different applications, to communicate with each other asynchronously by sending and receiving messages. This can be useful in a variety of situations, such as when one part of the application needs to perform a task that could take a long time, or when different parts of the application need to be decoupled from each other for flexibility and scalability. We chose BullMQ because it is a fast, reliable, and feature-rich message queue system. It is built on top of the popular Redis in-memory data store, which makes it very performant and scalable. It has support for parallel processing, rate limiting, retries, and a variety of other features that make it well-suited for a wide range of use cases. BullMQ has a straightforward API and good documentation. The kit has a second Redis instance set up to be used with BullMQ, and there is a queue set up out of the box, so resource-intensive tasks can be offloaded to a background process. The src/queue/ folder contains all the configuration and setup steps for the queue. Both the queue and its worker is set up in the queue.ts file. The job-processor.ts file contains the function that will process the data. To run int in a separate thread, we must pass the path to this file into the worker: ` When to use this kit This kit is most optimal when you: - want to build back-end services that can be consumed by other applications and services using ExpressJS - need a flexible and scalable way to build server-side applications - need to deal with CPU-intense operations on the server and you need a messaging queue - need to build an API with relational data - would like to just jump right into API development with ExpressJS using TypeORM and Postgres Conclusion The express-typeorm-postgres starter kit can help you kickstart your development by providing you with a working preset. It has testing configured, and it comes with a complete infrastructure orchestrated by docker-compose....

Quo v[AI]dis, Tech Stack? cover image

Quo v[AI]dis, Tech Stack?

Since we've started extensively leveraging AI at This Dot to enhance development workflows and experimenting with different ways to make it as helpful as possible, there's been a creeping thought on my mind - Is AI just helping us write code faster, or is it silently reshaping what code we choose to write? Eventually, this thought led to an interesting conversation on our company's Slack about the impact of AI on our tech stack choices. Some of the views shared there included: - "The battle between static and dynamic types is over. TypeScript won." - "The fast-paced development of new frameworks and the excitement around new shiny technologies is slowing down. AI can make existing things work with a workaround in a few minutes, so why create or adopt something new?" - "AI models are more trained on the most popular stacks, so they will naturally favor those, leading to a self-reinforcing loop." - "A lot of AI coding assistants serve as marketing funnels for specific stacks, such as v0 being tailored to Next.js and Vercel or Lovable using Supabase and Clerk." All of these points are valid and interesting, but they also made me think about the bigger picture. So I decided to do some extensive research (read "I decided to make the OpenAI Deep Research tool do it for me") and summarize my findings in this article. So without further ado, here are some structured thoughts on how AI is reshaping our tech stack choices, and what it means for the future of software development. 1. LLMs as the New Developer Platform If software development is a journey, LLMs have become the new high-speed train line. Long gone are the days when we used Copilot as a fancy autocomplete tool. Don't get me wrong, it was mind-bogglingly good when it first came out, and I've used it extensively. But now, a few years later, LLMs have evolved into something much more powerful. With the rise of tools like Cursor, Windsurf, Roo Code, or Claude Code, LLMs are essentially becoming the new developer platform. They are no longer just a helper that autocompletes a switch statement or a function signature, but a full-fledged platform that can generate entire applications, write tests, and even refactor code. And it is not just a few evangelists or early adopters who are using these tools. They have become mainstream, with millions of developers relying on them daily. According to Deloitte, nearly 20% of devs in tech firms were already using generative AI coding tools by 2024, with 76% of StackOverflow respondents using or planning to use AI tools in their development process, according to the 2024 StackOverflow Developer Survey. They've become an integral part of the development workflow, mediating how code is written, reviewed, and learned. I've argued in the past that LLMs are becoming a new layer of abstraction in software development, but now I believe they are evolving into something even more powerful - a new developer platform that is shaping how we think about and approach software development. 2. The Reinforcement Loop: Popular Stacks Get Smarter As we travel this AI-guided road, we find that certain routes become highways, while others lead to narrow paths or even dead ends. AI tools are not just helping us write code faster; they are also shaping our preferences for certain tech stacks. The most popular frameworks and languages, such as React.js on the frontend and Node.js on the backend (both with 40% adoption), are the ones that AI tools perform best with. 
Their increasing popularity is not just a coincidence; it's a result of a self-reinforcing loop. AI models are trained on vast amounts of code, and the most popular stacks naturally have more data available for training, given their widespread use, leading to more questions, answers, and examples in the training data. This means that AI tools are inherently better at understanding and generating code for these stacks. As an anecdotal example, I've noticed that AI tools tend to suggest React.js even when I specify a preference for another framework. As someone working with multiple tech stacks, I can attest that AI tools are significantly more effective with React.js or Node.js than, say, Yii2 or CakePHP. This phenomenon is not limited to just one or two stacks; it applies to the entire ecosystem. The more a stack is used, the more data there is for AI to learn from, and the better it gets at generating code for that stack, resulting in a feedback loop: 1. AI performs better on popular stacks. 2. Popular stacks get more adoption as developers find them easier to work with. 3. More developers using those stacks means more data for AI to learn from. 4. The cycle continues, reinforcing the popularity of those stacks. The issue is maybe even more evident with CSS frameworks. TailwindCSS, for example, has gained immense popularity thanks to its utility-first approach, which aligns well with AI's ability to generate and manipulate styles. As more developers adopt TailwindCSS, AI tools become better at understanding its conventions and generating appropriate styles, further driving its adoption. However, the Tailwind CSS example also highlights a potential pitfall of this reinforcement loop. Tailwind CSS v4 was released in January 2025. From my experience, AI tools still attempt to generate code using v3 concepts and often need to be reminded to use Tailwind CSS v4, requiring spoon-feeding with documentation to get it right. Effectively, this phenomenon can lead to a situation where AI tools not only reinforce the popularity of certain stacks but also potentially slow down the adoption of newer versions or alternatives. 3. Frontend Acceleration: React, Angular, and Beyond Navigating the frontend landscape has always been tricky, but with AI, some paths feel like smooth expressways while others remain bumpy dirt roads. AI is particularly transformative in frontend development, where the complexity and boilerplate code can be overwhelming. Established frameworks like React and Angular, which are already popular, are seeing even more adoption due to AI's ability to generate components, tests, and optimizations. React's widespread adoption and its status as the most popular framework on the frontend make it the go-to choice for many developers, especially with AI tools that can quickly scaffold new components or entire applications. However, Angular's strict structure and type safety also make it a strong contender. Angular's opinionated nature can actually benefit AI-generated code, as it provides a clear framework for the AI to follow, reducing ambiguity and potential bugs. > Call me crazy but I think that long term Angular is going to work better with AI tools for frontend work. > > More strict rules to follow, easier to build and scale. Just like for humans. > > We just need to keep Angular opinionated enough. > > — Daniel Glejzner on X But it's not just about how the frameworks are structured; it's also the documentation they provide. 

4. Full-Stack TS/JS: The Sweet Spot

On this AI-accelerated journey, some stacks have emerged as the smoothest rides, and full-stack JavaScript/TypeScript is leading the way. The combination of React on the frontend and Node.js on the backend provides a unified language ecosystem, making the road less bumpy for developers. Shared types, common tooling, and mature libraries enable faster prototyping and reduce context switching.

AI seems to enjoy these well-paved highways too. I've observed numerous instances where AI tools default to suggesting Next.js and Tailwind CSS for new projects, even when prompted otherwise. While you can force a slight detour to something like Nuxt or SvelteKit, the road suddenly gets patchier: AI becomes less confident, requires more hand-holding, and sometimes outright stalls. So while still technically in the sweet spot of full-stack JavaScript/TypeScript, deviating from the "main highway" even slightly can lead to a much rougher ride. React-based full-stack frameworks are becoming mainstream, not necessarily because they are always the best solution, but because they are the path of least resistance for both humans and AI.
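
To illustrate the shared-types advantage mentioned above, here is a minimal three-file sketch (all names are hypothetical) of one TypeScript interface serving both sides of the stack:

// shared/types.ts - the single source of truth for the data shape
export interface Invoice {
  id: string;
  customer: string;
  amountCents: number;
}

// server.ts - the Node.js side produces the shared type
import type { Invoice } from './shared/types';

export function listInvoices(): Invoice[] {
  return [{ id: 'inv_1', customer: 'Acme', amountCents: 4200 }];
}

// client.ts - the frontend consumes the same type; a schema change
// now fails the build on both sides at once instead of at runtime
import type { Invoice } from './shared/types';

export async function renderInvoices(): Promise<void> {
  const invoices: Invoice[] = await fetch('/api/invoices').then((res) => res.json());
  console.log(invoices.map((inv) => `${inv.customer}: $${inv.amountCents / 100}`));
}

The same convenience extends to validation schemas, API clients, and test fixtures, which is part of why both developers and AI tools gravitate toward this unified ecosystem.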

5. The Polyglot Shift: AI Enables Multilingual Devs

One fascinating development on this journey is how AI is enabling more developers to become polyglots. Where switching stacks used to feel like taking detours into unknown territory, AI now acts like an on-demand guide. Whether it's switching from Laravel to Spring Boot or from Angular to Svelte, AI helps bridge those knowledge gaps quickly.

At This Dot, we've always taken pride in our polyglot approach, and we did so long before the rise of AI tooling. An experienced engineer with a strong understanding of programming concepts can adapt to different stacks and projects quickly. But AI is now enabling even junior developers to become polyglots, and it's making it easier than ever for experienced ones to switch between stacks seamlessly. AI doesn't just shorten the journey; it makes more destinations accessible.

This "AI boost" not only facilitates the job of a software consultant such as myself, who often has to switch between different projects, but it also opens the door for companies to mix and match stacks based on their needs. That is particularly useful for companies with diverse tech stacks, as it allows them to leverage the strengths of different languages and frameworks without the steep learning curve that usually comes with them.

6. AI-Generated Stack Bundles: The Trojan Horse

> Trend I'm seeing: AI app generators are a sales funnel.
>
> - Chef uses Convex.
> - V0 is optimized for Vercel.
> - Lovable uses Supabase and Clerk.
> - Firebase Studio uses Google services.
>
> These tools act like a trojan horse - they "sell" a tech stack.
>
> Choose wisely.
>
> — Cory House on X

Some roads come pre-built, but with toll booths you may not notice until you're halfway through the trip. AI-generated apps from tools like v0, Firebase Studio, or Lovable are convenience highways: fast, smooth, and easy to follow, but they quietly nudge you toward specific tech stacks, backend services, databases, and deployment platforms. It's a smart business model. These tools don't just scaffold your app; they bundle in opinions on hosting, auth providers, and database layers. The convenience is undeniable, but there's a trade-off in flexibility and long-term maintainability. Engineering leaders must stay alert, like seasoned navigators, ensuring that the allure of speed doesn't lead their teams down the alleyways of vendor lock-in.

7. From 'Buy vs Build' to 'Prompt vs Buy'

The classic dilemma used to be _"buy vs build"_; now it's becoming _"prompt vs buy"_. Why pay for a bloated tour bus of a SaaS product, packed with destinations and detours you'll never take (and priced accordingly), when you can chart a custom route with a few well-crafted prompts and have a lightweight internal tool up and running in days, or even hours?

Do you need a simple tool to track customer contacts with a few custom fields and a clean interface? In the past, you might have booked a seat on the nearest SaaS solution: one that gets you close enough to your destination but comes with unnecessary stops and baggage. With AI, you can now skip the crowded bus altogether and build a tailor-made vehicle that drives exactly where you need to go, no more, no less.
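
As a sketch of just how small that tailor-made vehicle can be, here is a minimal, dependency-free contact tracker of the kind an AI assistant can scaffold in minutes. It assumes Node 18+; all names and fields are illustrative, and a real tool would add persistence and auth:

import { createServer } from 'node:http';
import { randomUUID } from 'node:crypto';

// The "few custom fields" a SaaS product would gate behind a pricing tier.
type Contact = {
  id: string;
  name: string;
  email: string;
  accountTier: 'free' | 'paid';
  lastContactedAt: string | null;
};

// In-memory store; a real internal tool would swap in SQLite or Postgres.
const contacts = new Map<string, Contact>();

const server = createServer(async (req, res) => {
  if (req.method === 'GET' && req.url === '/contacts') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify([...contacts.values()]));
    return;
  }
  if (req.method === 'POST' && req.url === '/contacts') {
    let body = '';
    for await (const chunk of req) body += chunk;
    const input = JSON.parse(body) as Omit<Contact, 'id'>;
    const contact: Contact = { id: randomUUID(), ...input };
    contacts.set(contact.id, contact);
    res.writeHead(201, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(contact));
    return;
  }
  res.writeHead(404).end();
});

server.listen(3000, () => console.log('Contact tool running on http://localhost:3000'));

Even accounting for the persistence and auth a production version needs, the gap between this and a per-seat SaaS subscription is what makes "prompt vs buy" a genuine question.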

AI reshapes the travel map of product development. The road to MVPs has become faster, cheaper, and more direct. This shift is already rerouting the internal tooling landscape, steering companies away from bulky, one-size-fits-all platforms toward lean, AI-assembled solutions. And over time, it may change not just _how_ we build, but _where_ we build: with the smoothest highways forming around AI-friendly, modular ecosystems like Node, React, and TypeScript, while older "corporate" expressways like .NET, Java, or even Angular risk becoming the slow scenic routes of enterprise tech.

8. Strategic Implications: Velocity vs Maintainability

Every shortcut comes with trade-offs. The fast lane that AI offers boosts productivity, but it can also encourage shortcuts in architecture and design. Speeding to your destination is great, until you hit the maintenance toll booth further down the road. AI tooling makes it easier to throw together an MVP, but without experienced oversight, the resulting codebases can turn into spaghetti highways. Teams need to implement AI-era best practices: structured code reviews, prompt hygiene, and deliberate stack choices that prioritize long-term maintainability over short-term convenience. Failing to do so can lead to a "quick and dirty" mentality, where the focus is on getting things done fast rather than building robust, maintainable solutions. This is particularly concerning for companies that rely on in-house or junior teams who may not have the experience to recognize the pitfalls in AI-generated code.

9. Closing Reflection: Are We Still Choosing Our Stacks?

So, where are we heading? Looking at the current "traffic" on the pathways of modern software development, one thing becomes clear: AI isn't just a productivity tool; the roads themselves are starting to shape the journey.

What was once a deliberate process of choosing the right vehicle for the right terrain, picking our stacks based on product goals, team expertise, and long-term maintainability, now feels more like following GPS directions that constantly recalculate to the path of least resistance. AI is repaving the main routes, widening the lanes for certain tech stacks, and putting up "scenic route" signs for some frameworks while leaving others on neglected backroads.

This doesn't mean we've lost control of the steering wheel, but it does mean that the map is changing beneath us in ways that are easy to overlook. The risk is clear: we may find ourselves taking the smoothest on-ramps without ever asking whether they lead to where we actually want to go. Convenience can quietly take priority over appropriateness. Short-term productivity gains can pave over technical debt potholes that become unavoidable down the road.

But the story isn't entirely one of caution. There's a powerful opportunity here too. With AI as a co-pilot, we can explore more destinations than ever before: venturing into unfamiliar tech stacks, accelerating MVP development, or rapidly prototyping ideas that previously seemed out of reach. The key is to remain intentional about when to cruise on AI autopilot and when to take the wheel with both hands and steer purposefully.

In this new era of AI-shaped development, the question every engineering team should be asking is not just "how fast can we go?" but "are we on the right road?" and "who's really choosing our route?"

And let's not forget: some of these roads are still being built. Open-source maintainers and framework authors play a pivotal role in shaping which paths become highways. By designing AI-friendly architectures, providing structured, machine-readable documentation, and baking in patterns that are easy for AI models to learn and suggest, they can guide where AI directs traffic. Frameworks that proactively optimize for AI tooling aren't just improving developer experience; they're shaping the very flow of adoption in this AI-accelerated landscape.

If we're not mindful, we risk becoming passengers on a journey defined by default choices. But if we remain vigilant, we can use AI to draw more accurate maps: not just follow the fastest roads, but chart new ones. While the routes may be getting redrawn, the destination should always be ours to choose.

In the end, the real competitive advantage will belong to those who can harness AI's speed while keeping their hands firmly on the wheel, navigating not by ease, but by purpose. In this new era, the most valuable skill may not be prompt engineering; it might be strategic discernment.
