
Mapping Returned HTTP Data with RxJS


In frontend development, you will often make API requests to retrieve data from a backend resource. However, some backends are set up in such a way that they send back either too much data or not enough.

When we combine what we know already about Reactive Programming and RxJS, we can handle these situations in a very elegant manner using some useful operators that RxJS provides us.

In this article, we will handle both scenarios and look at four operators in particular: map, mergeMap, concatMap, and switchMap. We will also provide links to some StackBlitz examples so you can see these live.

For fun, we'll use the Star Wars API to fetch data, and we'll use Angular as our frontend framework of choice, as its HTTP library has reactive bindings out of the box.
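
All of the snippets that follow run inside an Angular component that has HttpClient injected. As a rough sketch of that assumed setup (the class and selector names here are placeholders, not the article's StackBlitz code):

import { Component } from "@angular/core";
import { HttpClient } from "@angular/common/http";

@Component({
  selector: "app-example",
  template: ""
})
export class ExampleComponent {
  // HttpClientModule needs to be imported in the application's module
  // for HttpClient to be injectable here
  constructor(private http: HttpClient) {}
}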

Handling too much data (part 1)

OK, let's set up the scenario. We want to find out some details about Luke Skywalker, such as his name, birth year, height, weight, and eye color.

With Angular, we can do this by simply making an HTTP request:

this.http.get("https://swapi.dev/api/people/1")
    .subscribe(response => console.log(response));

If we do this, we do get back the info we need, but we also get back a lot of info we don't need, such as when the API entry was created and edited, Luke's species, his vehicles, etc. Right now, we don't care about those details.

Let's see how we could use RxJS's map operator to only return the data we want.

this.http.get("https://swapi.dev/api/people/1")
    .pipe(map(response => ({
        name: response.name,
        birthYear: response.birth_year,
        height: Number(response.height),
        weight: Number(response.mass),
        eyeColor: response.eye_color
    })))
    .subscribe(luke => console.log(luke))

We can see from the example above that it's super easy to only pull the values you need from the response object!

Note

Notice how we mapped some of the fields in the response from snake_case to camelCase, which is the more common convention in JavaScript, and renamed some fields along the way (mass -> weight). We can also convert values from one type to another, such as mass as a string to weight as a number. This can be super helpful for keeping the domain language of your frontend codebase intact.
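
The later snippets cast the mapped object to a PeopleData interface, which the StackBlitz examples define. A minimal sketch of what that interface could look like, assuming only the fields we map above (the optional films field is used in the final example further down):

interface PeopleData {
  name: string;
  birthYear: string;
  height: number;
  weight: number;
  eyeColor: string;
  // Only needed for the "too little data" example at the end
  films?: { title: string; releaseDate: string }[];
}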

Live Example

You can check this code out in action over at the live example here on StackBlitz. Feel free to fork it and play with it.

Handling too much data (part 2)

Another common case of having too much data is search results. Sometimes we only want to display the first result out of a whole list of search results.

Let's set up a new scenario.

We'll create a typeahead search that only shows the most relevant search result. We want to request search results after every key press the user makes; however, we also don't want to make requests that are no longer needed, so we want to cancel any in-flight requests as the user continues typing.

For this scenario, we'll be using a mix of the map and switchMap operators.

We'll also use Angular's Reactive Forms Module to provide us with some reactive bindings.
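
For reference, the search control used below is assumed to be a plain FormControl bound to a text input, something along these lines (the exact markup in the StackBlitz example may differ):

import { Component } from "@angular/core";
import { FormControl } from "@angular/forms";

// ReactiveFormsModule must be imported in the module for [formControl] to work
@Component({
  selector: "app-search",
  template: `<input type="text" [formControl]="search" />`
})
export class SearchComponent {
  search = new FormControl("");
}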

Our code for this is very simple:

this.searchResult$ = this.search.valueChanges.pipe(
    // We get the search term the user has typed
    // And use switchMap to cancel any inflight requests
    // then create a new request and switch to that
    // Observable stream
    switchMap(term =>
      this.http.get<any>(`https://swapi.dev/api/people/?search=${term}`)
    ),
    // Next we check that there are results; if so, we pick the first one.
    // If not, we create an object to show there are no results
    map(response =>
      response.count > 0 ? response.results[0] : { name: "No results" }
    ),
    // We then map the full response data down to only the fields we 
    // care about.
    map(
      response =>
        ({
          name: response.name,
          birthYear: response.birth_year,
          height: Number(response.height),
          weight: Number(response.mass),
          eyeColor: response.eye_color
        } as PeopleData)
    )
);

It's that simple to create an eager typeahead search with Angular and RxJS.
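
To display the result, searchResult$ can be consumed directly in the template with the async pipe. A sketch of what that might look like, building on the component setup assumed above:

@Component({
  selector: "app-search",
  template: `
    <input type="text" [formControl]="search" />
    <!-- async subscribes for us and unsubscribes when the component is destroyed -->
    <div *ngIf="searchResult$ | async as person">
      {{ person.name }} ({{ person.birthYear }})
    </div>
  `
})
export class SearchComponent {
  search = new FormControl("");
  searchResult$ = this.search.valueChanges.pipe(/* operators as shown above */);
}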

You can see the live example of this here on StackBlitz.

Handling too little data

We've looked at how to handle too much data in the API response with RxJS, but how about when we have too little data?

We can build this scenario out perfectly with the Star Wars API. Let's say we want to see which films the characters we search for appear in. The character response does tell us which films they are in, but it gives us no details: just a URL for each film that we can request to get that film's data. Since this is an array of film URLs, we will want to fetch the details for every film the character appears in.
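
To make that concrete, here is an abridged, illustrative sketch of one entry in response.results for a "luke" search; note that films is just an array of URLs, not film objects:

// Abridged shape of a single search result (illustrative, not the full response)
const lukeResult = {
  name: "Luke Skywalker",
  height: "172",
  mass: "77",
  eye_color: "blue",
  birth_year: "19BBY",
  films: [
    "http://swapi.dev/api/films/1/",
    "http://swapi.dev/api/films/2/"
  ]
};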

Let's see how we can transform our search code to allow us to fetch more data related to the results and map it into the final object that we use to render the data.

this.searchResult$ = this.search.valueChanges.pipe(
    // Again we take the search term and map to a new api request
    switchMap(term =>
      this.http.get<any>(`https://swapi.dev/api/people/?search=${term}`)
    ),
    // As with the search example, we pick the first result if there is one.
    // When there are no results, we fall back to an object with an empty
    // films array so the rest of the chain still works
    map(response =>
      response.count > 0 ? response.results[0] : { name: "No results", films: [] }
    ),
    // Now we use mergeMap as we do not need cancellation
    mergeMap(response =>
        // We use from to iterate over each film for the character
        from(response.films).pipe(
            // We need to massage the url to be correct as the SW API returns http
            // rather than https in the character details
            map(
              (film: string) => film.replace(/^http:\/\//, "https://")
            ),
            // Now we use concatMap as this will force RxJS to wait for each request
            // to complete before starting the next one, ensuring we have all the 
            // data needed for each film
            concatMap((film: string) => this.http.get<any>(film)),
            // The film API also returns more data than we care about so we map it down
            // to only the fields we care about
            map(film => ({
              title: film.title,
              releaseDate: film.release_date
            })),
            // We then need to collect each of these API responses and map them back into
            // a single array of films
            reduce((films, film) => [...films, film], []),
            // Finally, we map the character data and the film data into one precise
            // object that we care about
            map(
              films =>
                ({
                  name: response.name,
                  birthYear: response.birth_year,
                  height: Number(response.height),
                  weight: Number(response.mass),
                  eyeColor: response.eye_color,
                  films
                } as PeopleData)
            )
        )
    )
);

I highly recommend reading the code and comments above to understand exactly how to achieve this scenario. This pattern is very powerful; for example, it is ideal if you need to iterate over an array of user IDs and fetch the user details associated with each ID, as sketched below.
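
A minimal sketch of that user-IDs pattern, assuming a hypothetical /api/users/:id endpoint (not part of the Star Wars API), using the same from + concatMap + reduce chain:

import { from } from "rxjs";
import { concatMap, reduce } from "rxjs/operators";

// Inside a component that injects HttpClient, as in the earlier snippets:
const userIds = [1, 2, 3];

const users$ = from(userIds).pipe(
  // Fetch each user's details one at a time, in order
  concatMap(id => this.http.get<any>(`/api/users/${id}`)),
  // Collect the individual responses back into a single array of users
  reduce((users: any[], user) => [...users, user], [])
);

users$.subscribe(users => console.log(users));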

There is also a live example of this here on StackBlitz that you can use to try it out.

Conclusion

This was a small introduction to mapping data returned from HTTP requests with RxJS, but hopefully you can use it as a reference point if you ever need to perform complex data mapping that involves additional API requests.

