Can Vercel Fix React?

Tracy Lee, Ben Lesh, and Adam Rackis discuss the current state of React and where it might be headed.

React and Vercel: A Positive Change?

One of the key topics of conversation is React's relationship with Vercel. High-profile React core team members have recently made the move to Vercel, which many see as a positive change. The move brings fresh perspectives and expertise to the project, potentially leading to exciting developments in the future.

However, the hosts also acknowledge the challenges of upgrading to React 18, as there are still many users on older versions. Upgrading can have a significant impact on tests and requires careful consideration. It's clear that React's evolution is an ongoing process, and the community plays a crucial role in shaping its future.

The Potential of Bun: A New Default Runtime for Node Development

Could the Bun runtime become the default for Node development? Ben is skeptical, highlighting the need for near-perfect execution and widespread community support. Switching to a new runtime involves extensive testing and carries real risk, especially for critical financial services.

On the other hand, Adam presents a more optimistic view, emphasizing Bun's advantages, such as its built-in TypeScript support and improved interop with Node. He believes that as Bun continues to grow and improve, it could become a compelling option for new projects.

The Future of Front-End Development: Likability and AI

They also share their thoughts on the future of front-end development and various technologies. They highlight the importance of likability and the team behind a project, as these factors can greatly influence its success. Additionally, they discuss the potential impact of AI on framework development, raising interesting questions about the role of automation in shaping the future of web development.

While React is praised for its strengths, the hosts also note its perceived limitations compared to other frameworks. They highlight Angular's advancements in state management, showcasing the continuous evolution of front-end technologies and the need for developers to stay adaptable and open to new possibilities.

Listen to the full podcast here: https://modernweb.podbean.com/e/modern-web-podcast-s11e20-can-vercel-fix-react-a-conversation-with-tracy-lee-ben-lesh-adam-rackis/

This Dot is a consultancy dedicated to guiding companies through their modernization and digital transformation journeys. Specializing in replatforming, modernizing, and launching new initiatives, we stand out by taking true ownership of your engineering projects.

We love helping teams with projects that have missed their deadlines, and keeping your strategic digital initiatives on course. Check out our case studies and the clients that trust us with their engineering.

You might also like

Communication Between Client Components in Next.js

In recent years, Next.js has become one of the most popular React frameworks for building server-rendered applications. With the introduction of the App Router in Next.js 13, the framework has taken a major leap forward by embracing a new approach to building web applications: the concept of server components and client components. This separation of concerns allows developers to strategically decide which parts of their application should be rendered on the server, and which are then hydrated in the browser for interactivity.

The Challenge: Communicating Between Client Components

While server components offer numerous benefits, they also introduce a new challenge: how can client components within different boundaries communicate with each other? For instance, consider a scenario where you have a button (a client component) and a separate client component that displays the number of times the button has been clicked. In a regular React application, this would typically be accomplished by lifting the state to a common ancestor component, allowing the button to update the state, which is then passed down to the counter display component. This traditional approach may not be as straightforward in a Next.js application that heavily relies on server components, with client components scattered across the page. This blog post explores three ways to facilitate communication between client components in Next.js, depending on where you want to store the state.

Lifting State to a Common Client Component

One approach is to lift the state to a common client component. This could be a React context provider, a state management system like Zustand, or any other solution that allows you to share state across components. The key aspect is that this wrapper component should be higher up in the tree (perhaps even in the layout) and accept server components as children. Next.js allows you to interleave client and server components as much as you want, as long as server components are passed to client components as props or as component children.

In practice, we'd first create a wrapper client component that holds the state and include it in the layout. Any server components can then be rendered within the layout, potentially nested several levels deep. Finally, we'd create two client components, one for the button and one for the counter display. The entire client and server component tree is rendered on the server, and the client components are then hydrated in the browser and initialized. From then on, the communication between the client components works just like in any regular React application. Check out the page using this pattern in the embedded Stackblitz window, or see the sketch that follows.
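Here's a minimal sketch of the pattern, with hypothetical file names and a plain React context standing in for whichever shared-state solution you choose (the original post's exact snippets aren't reproduced here):

```tsx
// counter-provider.tsx (client component) - hypothetical file name
"use client";

import { createContext, useContext, useState, type ReactNode } from "react";

const CounterContext = createContext<{
  count: number;
  increment: () => void;
} | null>(null);

export function CounterProvider({ children }: { children: ReactNode }) {
  const [count, setCount] = useState(0);
  return (
    <CounterContext.Provider
      value={{ count, increment: () => setCount((c) => c + 1) }}
    >
      {children}
    </CounterContext.Provider>
  );
}

export function useCounter() {
  const ctx = useContext(CounterContext);
  if (!ctx) throw new Error("useCounter must be used within CounterProvider");
  return ctx;
}
```

```tsx
// layout.tsx - the provider accepts server components as children
import type { ReactNode } from "react";
import { CounterProvider } from "./counter-provider";

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        <CounterProvider>{children}</CounterProvider>
      </body>
    </html>
  );
}
```

```tsx
// Two client components (e.g. counter-button.tsx / counter-display.tsx)
// sharing state through the context.
"use client";

import { useCounter } from "./counter-provider";

export function CounterButton() {
  const { increment } = useCounter();
  return <button onClick={increment}>Click me</button>;
}

export function CounterDisplay() {
  const { count } = useCounter();
  return <p>The button was clicked {count} times.</p>;
}
```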
Using Query Params for State Management

Another approach is to use query params instead of a wrapper client component, and store the state in the URL. In this scenario, you have two client components: the button and the counter display. The counter value (the state) is stored in a query param, such as counterValue. The client components can read the current counter value using the useSearchParams hook, and the useRouter hook can then update the query param, effectively updating the counter value.

There is one gotcha to this approach, however: if a route is statically rendered, calling useSearchParams will cause the client component tree up to the closest Suspense boundary to be client-side rendered, so Next.js recommends wrapping the client component that uses useSearchParams in a Suspense boundary. In this version, the button reads the current counter value and updates it on click using the router's replace function, the counter display simply reads the counter value, and a server component page hosts both of the client components. Feel free to check out the page in the embedded Stackblitz, or see the sketch below.
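A minimal sketch of this approach, again under assumed file names, with the state kept in a counterValue query param:

```tsx
// counter-button.tsx - updates the counterValue query param on click
"use client";

import { useRouter, useSearchParams } from "next/navigation";

export function CounterButton() {
  const router = useRouter();
  const searchParams = useSearchParams();
  const count = Number(searchParams.get("counterValue") ?? 0);

  return (
    <button onClick={() => router.replace(`?counterValue=${count + 1}`)}>
      Click me
    </button>
  );
}
```

```tsx
// counter-display.tsx - only reads the counter value
"use client";

import { useSearchParams } from "next/navigation";

export function CounterDisplay() {
  const count = useSearchParams().get("counterValue") ?? "0";
  return <p>The button was clicked {count} times.</p>;
}
```

```tsx
// page.tsx - a server component hosting both client components, each
// wrapped in a Suspense boundary because they call useSearchParams
import { Suspense } from "react";
import { CounterButton } from "./counter-button";
import { CounterDisplay } from "./counter-display";

export default function Page() {
  return (
    <>
      <Suspense fallback={null}>
        <CounterButton />
      </Suspense>
      <Suspense fallback={null}>
        <CounterDisplay />
      </Suspense>
    </>
  );
}
```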
Storing State on the Server

The third approach is to store the state on the server. In this case, the counter display component accepts the counter value as a prop, passed in by a parent server component that reads the counter value from the database. The button component, when clicked, calls a server action that updates the counter value and calls revalidatePath(), so that the counter value is refreshed and, consequently, the counter display component is re-rendered. It's worth noting that in this approach, unless you need some interactivity in the counter display component, it doesn't need to be a client component at all – it can be purely server-rendered.

If both components do need to be client components, the flow looks like this. First, we implement a server action that updates the counter value; we won't get into the mechanics of updating it, which in a real app would require a call to the database or an external API, so that part is only stubbed out. After that, we revalidate the path so that the Next.js caches are purged, and the counter value is retrieved again by the server components that read it. The button is a simple client component that calls the server action when clicked, the counter display reads the counter value from its parent, and the parent is a server component that reads the counter value from the database or an external API. This pattern is sketched below, and can also be seen in the embedded Stackblitz.
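A minimal sketch, with the persistence layer stubbed out and hypothetical file names:

```tsx
// actions.ts - a server action that updates the counter and revalidates
"use server";

import { revalidatePath } from "next/cache";

export async function incrementCounter() {
  // In a real app, we'd update the counter in a database or an external
  // API here; the persistence call is intentionally stubbed out.
  revalidatePath("/"); // purge caches so server components re-read the value
}
```

```tsx
// counter-button.tsx - calls the server action on click
"use client";

import { incrementCounter } from "./actions";

export function CounterButton() {
  return <button onClick={() => incrementCounter()}>Click me</button>;
}
```

```tsx
// counter-display.tsx - receives the value as a prop
"use client";

export function CounterDisplay({ counterValue }: { counterValue: number }) {
  return <p>The button was clicked {counterValue} times.</p>;
}
```

```tsx
// page.tsx - a server component reading the value and passing it down
import { CounterDisplay } from "./counter-display";

// Hypothetical helper standing in for a database or external API read.
async function readCounterValue(): Promise<number> {
  return 0;
}

export default async function Page() {
  const counterValue = await readCounterValue();
  return <CounterDisplay counterValue={counterValue} />;
}
```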
Conclusion

Next.js is a powerful framework that offers various choices for implementing communication patterns between client components. Whether you lift state to a common client component, use query params for state management, or store the state on the server, Next.js provides all the tools you need to add a communication path between two separate client component boundaries. We hope this blog post has been useful in demonstrating how to facilitate communication between client components in Next.js. Check out other Next.js blog posts we've written for more insights and best practices. You can also view the entire codebase for the above snippets on StackBlitz.

Improving INP in React and Next.js

In one of my previous articles, I explained what INP is, how it works, and how it may affect your website. I also promised to follow up with more concrete advice on how to improve your INP in your favorite framework. This is that follow-up, focused on improving your INP score in React and Next.js.

How to prepare for INP in React and Next.js?

The first thing to do is to ensure you're using the latest version of React. The React team has been working on making React more INP-friendly and has already made some improvements in the latest versions. To enhance your INP score, consider fully taking advantage of the new features introduced in React 18, such as Concurrent Rendering, Automatic Batching, and Selective Hydration. There are also some more general areas to focus on, such as SSR and SSG in Next.js, Web Workers, and optimizing your hooks and state management.

Concurrent Rendering

Concurrent Mode in React uses an algorithm that breaks rendering down into so-called "fiber nodes" and schedules the renders based on their expiration and priority. This effectively allows the user to interact with the page while rendering is still in progress. In previous React versions, all updates, such as setState calls, were treated as "urgent", and once a re-render had started, there was no way to interrupt it. Concurrent Mode changes this by prioritizing updates and allowing a non-blocking state update started with startTransition to be interrupted. For a simple explanation of concurrency in React, you can check out Dan Abramov's explanation.

As part of Concurrent Mode, React introduced several hooks and APIs that allow you to prioritize the rendering of certain parts of your UI (see the sketch after this list):

- the useTransition hook, which allows you to update the state without blocking the UI,
- the useDeferredValue hook, which allows you to defer the rendering of certain parts of your UI,
- the startTransition API which, similarly to the useTransition hook, lets you mark a state update as non-blocking, although it lacks an indication of whether the update is still pending.

Automatic Batching

Introduced in React 18, Automatic Batching reduces the number of re-renders that happen on state changes, even when they happen outside of event handlers, e.g. in a setTimeout or Promise callback. This feature comes out of the box, you don't have to do anything to enable it, and it makes a great argument for upgrading to React 18.

Selective Hydration

Selective Hydration allows you to take hydration off the main thread by wrapping your components in a Suspense boundary. This way, components can become interactive faster, as the browser can do other work on the main thread while hydration is happening. To fully take advantage of selective hydration, consider the following:

- Prioritizing above-the-fold content: use Suspense boundaries strategically around any parts of your application that may take the server longer to deliver, so they don't block critical content from becoming interactive as soon as possible.
- Hydration on interaction: implementing hydration upon user interaction for non-critical components can drastically reduce the main thread's workload, enhancing INP.

Vercel even has a small case study showing how using selective hydration improved the performance of a Next.js site. A short sketch of the transition APIs mentioned above follows.
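To make the transition APIs concrete, here's a minimal, hypothetical sketch (not code from the original article): a filter input stays responsive because the expensive list update is marked as a non-blocking transition, with isPending signaling that the update is still in flight.

```tsx
import { useState, useTransition } from "react";

export function FilterableList({ items }: { items: string[] }) {
  // Urgent state: keeps the input responsive.
  const [query, setQuery] = useState("");
  // Non-urgent state: drives the expensive filtered list.
  const [filterQuery, setFilterQuery] = useState("");
  const [isPending, startTransition] = useTransition();

  const filtered = items.filter((item) => item.includes(filterQuery));

  return (
    <>
      <input
        value={query}
        onChange={(e) => {
          const value = e.target.value;
          setQuery(value); // urgent: update the input immediately
          startTransition(() => setFilterQuery(value)); // non-blocking
        }}
      />
      {isPending && <span>Updating list...</span>}
      <ul>
        {filtered.map((item) => (
          <li key={item}>{item}</li>
        ))}
      </ul>
    </>
  );
}
```

Alternatively, useDeferredValue(query) could drive the filtered list instead of a second piece of state, deferring the expensive render rather than scheduling it as a transition.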
Server-Side Rendering (SSR) and Static Site Generation (SSG) in Next.js

Not everything has to run client-side. Next.js excels in SSR and SSG capabilities, which can significantly impact INP by delivering content to users faster. Optimizing SSR with techniques like Incremental Static Regeneration (ISR), or leveraging SSG for static pages, ensures that users can interact with content sooner, improving perceived performance.

Workers

Offloading heavy computations to Web Workers can free up the main thread, enhancing the responsiveness of React and Next.js applications. This strategy is especially useful when dealing with third-party scripts. Offloading such scripts in Next.js can be done by specifying the "worker" strategy on your Script component, although be aware that this feature is not yet stable and does not work with the app directory. If you want to take things one step further, you could use Partytown, which helps you offload any resource-intensive scripts to Web Workers. It comes with a React component that you can use to wrap your third-party scripts and offload them to a Web Worker, and it's compatible with Next.js as well.

Hooks and State Management

State management in React applications can easily get out of hand, leading to unnecessary re-renders and, effectively, an increased INP. Sometimes, using a state management library like Redux or MobX can help you consolidate your state and reduce the number of re-renders. However, they are not silver bullets and can also introduce performance issues if not used properly. If you are dealing with a lot of re-renders due to prop changes, make sure you are leveraging memoization. As of now, you may need to work with the useMemo and useCallback hooks to memoize your values and functions, respectively. The upcoming React Compiler (codenamed React Forget), however, will apparently memoize everything under the hood, making these hooks obsolete. Using memoization properly can help you reduce the number of re-renders and improve your INP. To investigate your hook dependencies and re-renders, you can leverage React Developer Tools, or use a small helper hook to trace your re-renders, like the sketch below.
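The original article embeds a specific helper hook found online; as a stand-in, here's a minimal sketch of such a hook (the name useTraceRenders and its exact shape are my own, hypothetical):

```tsx
import { useEffect, useRef } from "react";

// Logs which props changed since the previous render of a component.
// Usage inside a component: useTraceRenders("CounterDisplay", { count });
export function useTraceRenders(name: string, props: Record<string, unknown>) {
  const previous = useRef<Record<string, unknown>>(props);

  useEffect(() => {
    const changed: Record<string, { from: unknown; to: unknown }> = {};
    for (const key of Object.keys({ ...previous.current, ...props })) {
      if (!Object.is(previous.current[key], props[key])) {
        changed[key] = { from: previous.current[key], to: props[key] };
      }
    }
    if (Object.keys(changed).length > 0) {
      console.log(`[${name}] re-rendered, changed props:`, changed);
    }
    previous.current = props;
  }); // no dependency array: runs after every render on purpose
}
```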
Conclusion

Improving INP in React and Next.js is not easy and can require much investigation and fine-tuning. Still, it's worth doing to avoid being penalized by Google in its search results, and to provide a better experience for your users. Adopting React 18's new features, leveraging SSR and SSG in Next.js, utilizing Web Workers, and optimizing hooks and state management can significantly boost your INP score and deliver a faster application to your users. Remember, INP is just one among many performance metrics, emphasizing the need for a comprehensive approach to performance optimization.
The simplicity of deploying an MCP server on Vercel cover image

The simplicity of deploying an MCP server on Vercel

The current Model Context Protocol (MCP) spec is shifting developers toward lightweight, stateless servers that serve as tool providers for LLM agents. These MCP servers communicate over HTTP, with OAuth handled client-side. Vercel's infrastructure makes it easy to iterate quickly and ship agentic AI tools without overhead.

Example of Lightweight MCP Server Design

At This Dot Labs, we built an MCP server that leverages the DocuSign Navigator API. The tools, like `get_agreements`, make a request to the DocuSign API to fetch data and then respond in an LLM-friendly way.

Before the client can request anything, the MCP server needs to guide it on how to kick off OAuth. This involves exposing the metadata API endpoints defined by the MCP spec, which tell the client where to obtain authorization tokens and what resources it can access. By understanding these details, the client can seamlessly initiate the OAuth process, ensuring secure and efficient data access. The OAuth flow begins when the user's LLM client makes a request without a valid auth token: it gets a 401 response from our server with a WWW-Authenticate header, and the client then leverages the metadata we exposed to discover the authorization server. Next, the OAuth flow kicks off directly with DocuSign, as directed by the metadata. Once the client has the token, it passes it in the Authorization header for tool requests to the API. This minimal set of API routes enables me to fetch DocuSign Navigator data using natural language in my agent chat interface.

Deployment Options

I deployed this MCP server two different ways: first as a Fastify backend, and then as Vercel functions. Seeing how simple my Fastify MCP server was, and not really having a plan for deployment yet, I was eager to rewrite it for Vercel. The case for Vercel:

* My own familiarity with Next.js API deployment
* Fit for the architecture
* The extremely simple deployment process
* Deploy previews (the eternal Vercel customer conversion feature, IMO)

Previews of unfamiliar territory

Did you know that the MCP spec doesn't "just work" for use as ChatGPT tooling? Neither did I, and I had to experiment to prove out requirements I was unfamiliar with. Part of moving fast for me was deploying Vercel previews right out of the CLI so I could test my API as a Connector in ChatGPT. This was a great workflow for me, and invaluable for the team in code review.

Stuff I'm Not Worried About

Vercel's mcp-handler package made setup effortless by abstracting away some of the complexity of implementing the MCP server. It gives you a drop-in way to define tools, set up streamable HTTP transport, and handle OAuth. By building on Vercel's ecosystem, I can focus entirely on shipping my product without worrying about deployment, scaling, or server management. Everything just works; a sketch of what this looks like follows below.
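As an illustration, here's a minimal sketch of a tool definition with mcp-handler. This is not the exact code from our server: the route path, the zod schema, and the fetchAgreements helper standing in for the DocuSign Navigator API call are all assumptions.

```ts
// app/api/[transport]/route.ts
import { createMcpHandler } from "mcp-handler";
import { z } from "zod";

// Hypothetical stand-in for the DocuSign Navigator API call, which would
// forward the OAuth token from the Authorization header in a real server.
async function fetchAgreements(limit: number): Promise<unknown[]> {
  return [];
}

const handler = createMcpHandler((server) => {
  server.tool(
    "get_agreements",
    "Fetch agreements from the DocuSign Navigator API",
    { limit: z.number().default(10) },
    async ({ limit }) => {
      const agreements = await fetchAgreements(limit);
      // Respond in an LLM-friendly way: plain text content blocks.
      return {
        content: [{ type: "text" as const, text: JSON.stringify(agreements) }],
      };
    }
  );
});

export { handler as GET, handler as POST };
```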
A Brief Case for MCP on Next.js

Building an API without Next.js on Vercel is straightforward, though I'd be happy deploying this as a Next.js app, with the frontend features serving as the documentation, or with the tools being part of your website's agentic capabilities. Overall, this lowers the barrier to building any MCP server you want for yourself, and I think that's cool.

Conclusion

I'll avoid quoting Vercel documentation in this post. AI tooling is a critical component of this natural language UI, and we just want to ship. I declare: Vercel is excellent for stateless MCP servers served over HTTP.
Quo v[AI]dis, Tech Stack? cover image

Quo v[AI]dis, Tech Stack?

Since we started extensively leveraging AI at This Dot to enhance development workflows, and experimenting with different ways to make it as helpful as possible, there's been a creeping thought on my mind: is AI just helping us write code faster, or is it silently reshaping what code we choose to write? Eventually, this thought led to an interesting conversation on our company's Slack about the impact of AI on our tech stack choices. Some of the views shared there included:

- "The battle between static and dynamic types is over. TypeScript won."
- "The fast-paced development of new frameworks and the excitement around new shiny technologies is slowing down. AI can make existing things work with a workaround in a few minutes, so why create or adopt something new?"
- "AI models are more trained on the most popular stacks, so they will naturally favor those, leading to a self-reinforcing loop."
- "A lot of AI coding assistants serve as marketing funnels for specific stacks, such as v0 being tailored to Next.js and Vercel, or Lovable using Supabase and Clerk."

All of these points are valid and interesting, but they also made me think about the bigger picture. So I decided to do some extensive research (read: "I decided to make the OpenAI Deep Research tool do it for me") and summarize my findings in this article. Without further ado, here are some structured thoughts on how AI is reshaping our tech stack choices, and what it means for the future of software development.

1. LLMs as the New Developer Platform

If software development is a journey, LLMs have become the new high-speed train line. Long gone are the days when we used Copilot as a fancy autocomplete tool. Don't get me wrong, it was mind-bogglingly good when it first came out, and I've used it extensively. But now, a few years later, LLMs have evolved into something much more powerful. With the rise of tools like Cursor, Windsurf, Roo Code, or Claude Code, LLMs are essentially becoming the new developer platform. They are no longer just a helper that autocompletes a switch statement or a function signature, but a full-fledged platform that can generate entire applications, write tests, and even refactor code.

And it is not just a few evangelists or early adopters using these tools; they have become mainstream, with millions of developers relying on them daily. According to Deloitte, nearly 20% of devs in tech firms were already using generative AI coding tools by 2024, and 76% of respondents to the 2024 StackOverflow Developer Survey were using or planning to use AI tools in their development process. They've become an integral part of the development workflow, mediating how code is written, reviewed, and learned. I've argued in the past that LLMs are becoming a new layer of abstraction in software development, but now I believe they are evolving into something even more powerful: a new developer platform that is shaping how we think about and approach software development.

2. The Reinforcement Loop: Popular Stacks Get Smarter

As we travel this AI-guided road, we find that certain routes become highways, while others lead to narrow paths or even dead ends. AI tools are not just helping us write code faster; they are also shaping our preferences for certain tech stacks. The most popular frameworks and languages, such as React.js on the frontend and Node.js on the backend (both with 40% adoption), are the ones that AI tools perform best with.
Their increasing popularity is not just a coincidence; it's the result of a self-reinforcing loop. AI models are trained on vast amounts of code, and the most popular stacks naturally have more data available for training, given their widespread use, leading to more questions, answers, and examples in the training data. This means that AI tools are inherently better at understanding and generating code for these stacks. As an anecdotal example, I've noticed that AI tools tend to suggest React.js even when I specify a preference for another framework. As someone working with multiple tech stacks, I can attest that AI tools are significantly more effective with React.js or Node.js than, say, Yii2 or CakePHP.

This phenomenon is not limited to just one or two stacks; it applies to the entire ecosystem. The more a stack is used, the more data there is for AI to learn from, and the better it gets at generating code for that stack, resulting in a feedback loop:

1. AI performs better on popular stacks.
2. Popular stacks get more adoption as developers find them easier to work with.
3. More developers using those stacks means more data for AI to learn from.
4. The cycle continues, reinforcing the popularity of those stacks.

The issue is maybe even more evident with CSS frameworks. Tailwind CSS, for example, has gained immense popularity thanks to its utility-first approach, which aligns well with AI's ability to generate and manipulate styles. As more developers adopt Tailwind CSS, AI tools become better at understanding its conventions and generating appropriate styles, further driving its adoption. However, the Tailwind CSS example also highlights a potential pitfall of this reinforcement loop. Tailwind CSS v4 was released in January 2025, yet in my experience, AI tools still attempt to generate code using v3 concepts and often need to be reminded to use v4, requiring spoon-feeding with documentation to get it right. Effectively, this phenomenon can lead to a situation where AI tools not only reinforce the popularity of certain stacks, but also slow down the adoption of newer versions or alternatives.

3. Frontend Acceleration: React, Angular, and Beyond

Navigating the frontend landscape has always been tricky, but with AI, some paths feel like smooth expressways while others remain bumpy dirt roads. AI is particularly transformative in frontend development, where the complexity and boilerplate code can be overwhelming. Established frameworks like React and Angular, which are already popular, are seeing even more adoption due to AI's ability to generate components, tests, and optimizations. React's widespread adoption and its status as the most popular frontend framework make it the go-to choice for many developers, especially with AI tools that can quickly scaffold new components or entire applications. However, Angular's strict structure and type safety also make it a strong contender. Angular's opinionated nature can actually benefit AI-generated code, as it provides a clear framework for the AI to follow, reducing ambiguity and potential bugs.

> Call me crazy but I think that long term Angular is going to work better with AI tools for frontend work.
>
> More strict rules to follow, easier to build and scale. Just like for humans.
>
> We just need to keep Angular opinionated enough.
>
> — Daniel Glejzner on X

But it's not just about how the frameworks are structured; it's also about the documentation they provide.
It has recently become the norm for frameworks to have AI-friendly documentation. Angular, for instance, has an llms.txt file that you can reference in your AI prompts to get more relevant results. The best example, in my opinion, is the Nuxt UI documentation, which provides the option to copy each documentation page as markdown, or a link to its markdown version, making it easy to reference in AI prompts. Frameworks that incorporate AI-friendly documentation and tooling are likely to see increased adoption, as they make it easier for developers to leverage AI's capabilities.

4. Full-Stack TS/JS: The Sweet Spot

On this AI-accelerated journey, some stacks have emerged as the smoothest rides, and full-stack JavaScript/TypeScript is leading the way. The combination of React on the frontend and Node.js on the backend provides a unified language ecosystem, making the road less bumpy for developers. Shared types, common tooling, and mature libraries enable faster prototyping and reduced context switching. AI seems to enjoy these well-paved highways too. I've observed numerous instances where AI tools default to suggesting Next.js and Tailwind CSS for new projects, even when prompted otherwise. While you can force a slight detour to something like Nuxt or SvelteKit, the road suddenly gets patchier: AI becomes less confident, requires more hand-holding, and sometimes outright stalls. So even while technically staying in the sweet spot of full-stack JavaScript/TypeScript, deviating from the "main highway" even slightly can lead to a much rougher ride. React-based full-stack frameworks are becoming mainstream, not necessarily because they are always the best solution, but because they are the path of least resistance for both humans and AI.

5. The Polyglot Shift: AI Enables Multilingual Devs

One fascinating development on this journey is how AI is enabling more developers to become polyglots. Where switching stacks used to feel like taking detours into unknown territory, AI now acts like an on-demand guide. Whether it's switching from Laravel to Spring Boot or from Angular to Svelte, AI helps bridge those knowledge gaps quickly. At This Dot, we've always taken pride in our polyglot approach, and we did so well before the rise of AI tooling: if you are an experienced engineer with a strong understanding of programming concepts, you'll be able to adapt to different stacks and projects quickly. But AI is lowering the barriers for everyone, enabling even junior developers to become polyglots and making it easier still for experienced engineers to switch between stacks seamlessly. AI doesn't just shorten the journey; it makes more destinations accessible. This "AI boost" not only facilitates the job of a software consultant such as myself, who often has to switch between different projects, but also opens the door for companies to mix and match stacks based on their needs. That is particularly useful for companies with diverse tech stacks, as it allows them to leverage the strengths of different languages and frameworks without the steep learning curve that usually comes with it.

6. AI-Generated Stack Bundles: The Trojan Horse

> Trend I'm seeing: AI app generators are a sales funnel.
>
> - Chef uses Convex.
> - V0 is optimized for Vercel.
> - Lovable uses Supabase and Clerk.
> - Firebase Studio uses Google services.
>
> These tools act like a trojan horse - they "sell" a tech stack.
>
> Choose wisely.
> — Cory House on X

Some roads come pre-built, but with toll booths you may not notice until you're halfway through the trip. AI-generated apps from tools like v0, Firebase Studio, or Lovable are convenience highways - fast, smooth, and easy to follow - but they quietly nudge you toward specific tech stacks, backend services, databases, and deployment platforms. It's a smart business model. These tools don't just scaffold your app; they bundle in opinions on hosting, auth providers, and DB layers. The convenience is undeniable, but there's a trade-off in flexibility and long-term maintainability. Engineering leaders must stay alert, like seasoned navigators, ensuring that the allure of speed doesn't lead their teams down the alleyways of vendor lock-in.

7. From 'Buy vs Build' to 'Prompt vs Buy'

The classic dilemma used to be _"buy vs build"_; now it's becoming "prompt vs buy." Why pay for a bloated tour bus of a SaaS product, packed with destinations and detours you'll never take (and priced accordingly), when you can chart a custom route with a few well-crafted prompts and have a lightweight internal tool up and running in days, or even hours? Do you need a simple tool to track customer contacts with a few custom fields and a clean interface? In the past, you might have booked a seat on the nearest SaaS solution - one that gets you close enough to your destination but comes with unnecessary stops and baggage. With AI, you can now skip the crowded bus altogether and build a tailor-made vehicle that drives exactly where you need to go, no more, no less.

AI reshapes the travel map of product development. The road to MVPs has become faster, cheaper, and more direct. This shift is already rerouting the internal tooling landscape, steering companies away from bulky, one-size-fits-all platforms toward lean, AI-assembled solutions. And over time, it may change not just _how_ we build, but _where_ we build, with the smoothest highways forming around AI-friendly, modular ecosystems like Node, React, and TypeScript, while older "corporate" expressways like .NET, Java, or even Angular risk becoming the slow scenic routes of enterprise tech.

8. Strategic Implications: Velocity vs Maintainability

Every shortcut comes with trade-offs. The fast lane that AI offers boosts productivity but can sometimes encourage shortcuts in architecture and design. Speeding to your destination is great - until you hit the maintenance toll booth further down the road. AI tooling makes it easier to throw together an MVP, but without experienced oversight, the resulting codebases can turn into spaghetti highways. Teams need to adopt AI-era best practices: structured code reviews, prompt hygiene, and deliberate stack choices that prioritize long-term maintainability over short-term convenience. Failing to do so can lead to a "quick and dirty" mentality, where the focus is on getting things done fast rather than building robust, maintainable solutions. This is particularly concerning for companies that rely on in-house developers or junior teams who may not have the experience to recognize potential pitfalls in AI-generated code.

9. Closing Reflection: Are We Still Choosing Our Stacks?

So, where are we heading? Looking at the current "traffic" on the modern software development pathways, one thing becomes clear: AI isn't just a productivity tool - the roads themselves are starting to shape the journey.
What was once a deliberate process of choosing the right vehicle for the right terrain - picking our stacks based on product goals, team expertise, and long-term maintainability - now feels more like following GPS directions that constantly recalculate to the path of least resistance. AI is repaving the main routes, widening the lanes for certain tech stacks, and putting up "scenic route" signs for some frameworks while leaving others on neglected backroads. This doesn't mean we've lost control of the steering wheel, but it does mean that the map is changing beneath us in ways that are easy to overlook.

The risk is clear: we may find ourselves taking the smoothest on-ramps without ever asking if they lead to where we actually want to go. Convenience can quietly take priority over appropriateness. Productivity gains in the short term can pave over technical debt potholes that become unavoidable down the road.

But the story isn't entirely one of caution. There's a powerful opportunity here too. With AI as a co-pilot, we can explore more destinations than ever before - venturing into unfamiliar tech stacks, accelerating MVP development, or rapidly prototyping ideas that previously seemed out of reach. The key is to remain intentional about when to cruise with AI autopilot and when to take the wheel with both hands and steer purposefully. In this new era of AI-shaped development, the question every engineering team should be asking is not just "how fast can we go?" but "are we on the right road?" and "who's really choosing our route?"

And let's not forget: some of these roads are still being built. Open-source maintainers and framework authors play a pivotal role in shaping which paths become highways. By designing AI-friendly architectures, providing structured, machine-readable documentation, and baking in patterns that are easy for AI models to learn and suggest, they can guide where AI directs traffic. Frameworks that proactively optimize for AI tooling aren't just improving developer experience; they're shaping the very flow of adoption in this AI-accelerated landscape.

If we're not mindful, we risk becoming passengers on a journey defined by default choices. But if we remain vigilant, we can use AI to create more accurate maps, not just following the fastest roads but charting new ones. Because while the routes may be getting redrawn, the destination should always be ours to choose. In the end, the real competitive advantage will belong to those who can harness AI's speed while keeping their hands firmly on the wheel, navigating not by ease, but by purpose. In this new era, the most valuable skill may not be prompt engineering; it might be strategic discernment.

Let's innovate together!

We're ready to be your trusted technical partners in your digital innovation journey.

Whether it's modernization or custom software solutions, our team of experts can guide you through best practices and how to build scalable, performant software that lasts.

Prefer email? hi@thisdot.co