Component Testing in Svelte

Testing helps us trust our application, and it's a safety net for future changes. In this tutorial, we will set up our Svelte project to run tests for our components.

Starting a new project

Let's start by creating a new project:

pnpm dlx create-vite
// Project name: › testing-svelte
// Select a framework: › svelte
// Select a variant: › svelte-ts

cd testing-svelte
pnpm install

There are other ways of creating a Svelte project, but I prefer using Vite, not least because SvelteKit uses it as well. I'm also a big fan of pnpm, but you can use your preferred package manager; just make sure you follow Vite's docs on starting a new project with npm or yarn.
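For reference, the equivalent scaffolding commands with the other package managers look something like this (check Vite's docs for the exact, current syntax):

npm create vite@latest
# or
yarn create vite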

Installing required dependencies

  • Jest: I'll be using this framework for testing. It's the one I know best and feel most comfortable with. Because I'm using TypeScript, I also need to install its type definitions.
  • ts-jest: A transformer for handling TypeScript files.
  • svelte-jester: precompiles Svelte components before tests.
  • Testing Library: No matter which framework I'm using, I look for an implementation of this popular library.

pnpm install --save-dev jest @types/jest @testing-library/svelte svelte-jester ts-jest

Configuring tests

Now that our dependencies are installed, we need to configure Jest to prepare and run the tests.

A few steps are required:

  • Convert *.ts files
  • Compile *.svelte files
  • Run the tests

Create a configuration file at the root of the project:

// jest.config.js
export default {
  transform: {
    '^.+\\.ts$': 'ts-jest',
    '^.+\\.svelte$': [
      'svelte-jester',
      {
        preprocess: true,
      },
    ],
  },
  moduleFileExtensions: ['js', 'ts', 'svelte'],
};

Jest will now use ts-jest for compiling *.ts files, and svelte-jester for *.svelte files. The preprocess: true option tells svelte-jester to run the preprocessors defined in the project's svelte.config.js, which is what handles the lang="ts" script blocks in our components.
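To cover the last step on our list, running the tests, we can add a test script to package.json (the script name is just a convention) and run it with our package manager of choice:

// package.json (scripts section)
{
  "scripts": {
    "test": "jest"
  }
}

Now pnpm test (or npm test / yarn test) will pick up the configuration above and run every matching spec file.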

Creating a new test

Let's test the Counter component that was created when we started the project, but first, let's look at what the component does.

<script lang="ts">
  let count: number = 0;
  const increment = () => {
    count += 1;
  };
</script>

<button on:click={increment}>
  Clicks: {count}
</button>

<style>
  button {
    font-family: inherit;
    font-size: inherit;
    padding: 1em 2em;
    color: #ff3e00;
    background-color: rgba(255, 62, 0, 0.1);
    border-radius: 2em;
    border: 2px solid rgba(255, 62, 0, 0);
    outline: none;
    width: 200px;
    font-variant-numeric: tabular-nums;
    cursor: pointer;
  }

  button:focus {
    border: 2px solid #ff3e00;
  }

  button:active {
    background-color: rgba(255, 62, 0, 0.2);
  }
</style>

This is a very small component: a button that, when clicked, updates a count, and that count is reflected in the button text. So that's exactly what we'll be testing.

I'll create a new file ./lib/__tests__/Counter.spec.ts

/**
 * @jest-environment jsdom
 */

import { render, fireEvent } from '@testing-library/svelte';
import Counter from '../Counter.svelte';

describe('Counter', () => {
  it('it changes count when button is clicked', async () => {
    const { getByText } = render(Counter);
    const button = getByText(/Clicks:/);
    expect(button.innerHTML).toBe('Clicks: 0');
    await fireEvent.click(button);
    expect(button.innerHTML).toBe('Clicks: 1');
  });
});

We are using render and fireEvent from Testing Library. Be mindful that fireEvent returns a Promise, and we need to await it to be fulfilled. I'm using the getByText query to get the button being clicked. The comment at the top informs Jest that we need to use jsdom as the environment; this makes things like document available, and without it, render would not be able to mount the component. This can also be set up globally in the configuration file.
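As a sketch of that global setup, so the docblock comment is no longer needed in each file, we can set testEnvironment in the Jest configuration. Note that on newer Jest versions (28 and above) jsdom ships as a separate jest-environment-jsdom package that has to be installed as well:

// jest.config.js
export default {
  testEnvironment: 'jsdom',
  transform: {
    '^.+\\.ts$': 'ts-jest',
    '^.+\\.svelte$': ['svelte-jester', { preprocess: true }],
  },
  moduleFileExtensions: ['js', 'ts', 'svelte'],
};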

What if we wanted to test the increment method in our component? If it's not an exported function, I'd suggest testing it through the rendered component itself. Alternatively, we can extract that function into its own file and import it into the component.

Let's see how that works.

// lib/increment.ts
export function increment(val: number): number {
  val += 1;
  return val;
}

<!-- lib/Counter.svelte -->
<script lang="ts">
  import { increment } from './increment';
  let count: number = 0;
</script>

<button on:click={() => (count = increment(count))}>
  Clicks: {count}
</button>
<!-- ... -->

Our previous tests will still work, and we can add a test for our function.

// lib/__tests__/increment.spec.ts

import { increment } from '../increment';

describe('increment', () => {
  it('it returns value+1 to given value when called', async () => {
    expect(increment(0)).toBe(1);
    expect(increment(-1)).toBe(0);
    expect(increment(1.2)).toBe(2.2);
  });
});

In this test, there's no need to use jsdom as the test environment. We are just testing the function.

If, instead, our method were exported from the component, we could test it by accessing it directly.

<!-- lib/Counter.svelte -->
<script lang="ts">
  let count: number = 0;
  export const increment = () => {
    count += 1;
  };
</script>

<button on:click={increment}>
  Clicks: {count}
</button>
<!-- ... -->
// lib/__tests__/Counter.spec.ts

describe('Counter Component', () => {
 // ... other tests

  describe('increment', () => {
    it('it exports a method', async () => {
      const { component } = render(Counter);
      expect(component.increment).toBeDefined();
    });

    it('it increments the count when called', async () => {
      const { getByText, component } = render(Counter);
      const button = getByText(/Clicks:/);
      expect(button.innerHTML).toBe('Clicks: 0');
      await component.increment()
      expect(button.innerHTML).toBe('Clicks: 1');
    });
  });
});

When the method is exported, you can access it directly from the returned component property of the render function.

NOTE: I don't recommend exporting methods from a component just to make testing simpler if they were not meant to be exported. Doing so makes them available from the outside and callable from other components.

Events

If your component dispatches an event, you can test it using the component property returned by render.

To dispatch an event, we need to import and call createEventDispatcher, and then call the returned function, giving it an event name and an optional value.

<!-- lib/Counter.svelte -->
<script lang="ts">
  import { createEventDispatcher } from 'svelte';
  const dispatch = createEventDispatcher();

  let count: number = 0;
  export const increment = () => {
    count += 1;
    dispatch('countChanged', count);
  };
</script>

<button on:click={increment}>
  Clicks: {count}
</button>
<!-- ... -->
// lib/__tests__/Counter.spec.ts
// ...

  it('it emits an event', async () => {
    const { getByText, component } = render(Counter);
    const button = getByText(/Clicks:/);
    let mockEvent = jest.fn();
    component.$on('countChanged', function (event) {
      mockEvent(event.detail);
    });
    await fireEvent.click(button);

    // Some examples on what to test
    expect(mockEvent).toHaveBeenCalled(); // to check if it's been called
    expect(mockEvent).toHaveBeenCalledTimes(1); // to check how many times it's been called
    expect(mockEvent).toHaveBeenLastCalledWith(1); // to check the content of the event
    await fireEvent.click(button);
    expect(mockEvent).toHaveBeenCalledTimes(2);
    expect(mockEvent).toHaveBeenLastCalledWith(2);
  });

//...

For this example, I updated the component to emit a countChanged event. Every time the button is clicked, the event emits the new count. In the test, I'm using getByText to select the button to click, and the component property returned by render to subscribe to the event.

Then, I'm using component.$on(eventName) with a mocked callback function to test the emitted value (event.detail).
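As a small variation on the same test, you could also pass the Jest mock straight to $on and assert on the CustomEvent it receives, since the dispatched value lives on its detail property. A minimal sketch:

// lib/__tests__/Counter.spec.ts (alternative version of the event test)
it('it emits countChanged with the new count', async () => {
  const { getByText, component } = render(Counter);
  const handler = jest.fn();
  component.$on('countChanged', handler);

  await fireEvent.click(getByText(/Clicks:/));

  expect(handler).toHaveBeenCalledTimes(1);
  // the value passed to dispatch() ends up on the CustomEvent's detail property
  expect(handler.mock.calls[0][0].detail).toBe(1);
});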

Props

You can set initial prop values and modify them later using the client-side component API.

Let's update our component to receive the initial count value.

<!-- lib/Counter.svelte -->
<script lang="ts">
// ...
  export let count: number = 0;
// ...
</script>

<!-- ... -->

Converting count into a prop only requires exporting the variable declaration.

Then we can test:

  • default values
  • initial values
  • updating values
// lib/__tests__/Counter.spec.ts
// ...
describe('count', () => {
    it('defaults to 0', async () => {
      const { getByText } = render(Counter);
      const button = getByText(/Clicks:/);
      expect(button.innerHTML).toBe('Clicks: 0');
    });

    it('can have an initial value', async () => {
      const { getByText } = render(Counter, {props: {count: 33}});
      const button = getByText(/Clicks:/);
      expect(button.innerHTML).toBe('Clicks: 33');
    });

    it('can be updated', async () => {
      const { getByText, component } = render(Counter);
      const button = getByText(/Clicks:/);
      expect(button.innerHTML).toBe('Clicks: 0');
      await component.$set({count: 41})
      expect(button.innerHTML).toBe('Clicks: 41');
    });
});
// ...

We are using the second argument of the render method to pass an initial value to count, and we are testing it through the rendered button's text.

To update the value, we are calling the $set method on component, which will update the rendered value on the next tick. That's why we need to await it.

Wrapping up

Testing components with Jest and Testing Library can help you avoid errors during development, and it can also make you more confident when applying changes to an existing codebase. I hope this blog post is a step toward better testing.

You can find these examples in this repo.

