
Git Reflog: A Guide to Recovering Lost Commits


Losing data can be very frustrating. Sometimes data is lost to hardware failure, and other times it is lost by mistake. Thankfully, Git has tools that can help with the latter case at least. In this article, I will demonstrate how to use the git-reflog tool to recover lost code and commits.

What is Reflog?

Whenever you add data to your local Git repository or perform destructive operations, Git keeps track of these changes using reference logs, also known as reflogs. Each log entry contains the SHA-1 hash of the commit associated with the operation, along with any references, or refs for short. Refs are branch names, tags, and symbolic refs like HEAD, which always points to the ref or commit ID that is currently checked out.

These reflogs can prove very useful for recovering data from a Git repository when code is lost in a destructive operation. Each reflog record contains the SHA-1 hash that HEAD pointed to when the operation was performed, along with a description of the operation itself.

Here is an example of what a reflog might look like:

956eb2f (HEAD -> branch-prefix/v2-1-4, origin/branch-prefix/v2-1-4) HEAD@{0}: commit: fix: post-rebase errors

The first part, 956eb2f, is the hash of the commit that was checked out when this entry was added to the reflog. If any refs in the repository currently point to that commit ID, such as the branch-prefix/v2-1-4 branch in this case, they are printed alongside the commit ID in the reflog entry.

It should be noted that the actual refs themselves are not always stored in the entry, but are instead inferred by Git from the commit id in the entry when dumping the reflog. This means that if we were to remove the branch named branch-prefix/v2-1-4, it would no longer appear in the reflog entry here.

There is also a HEAD -> part. This tells us that HEAD currently points to the commit ID in the entry. If we were to switch to a different branch, such as main, the HEAD -> section would disappear from this specific entry.

The HEAD@{n} section is an index that specifies where HEAD was n moves ago. In this example it is zero, meaning that is where HEAD currently points. Finally, what follows is a text description of the operation that was performed; in this case, it was just a commit. Descriptions for supported operations include, but are not limited to, commit, pull, checkout, reset, rebase, and squash.
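The HEAD@{n} syntax is not limited to reflog output: it can be used anywhere Git accepts a commit. Here is a minimal sketch that builds a throwaway repository (the path and commit messages are illustrative) and then uses a reflog selector to inspect where HEAD used to be:

```shell
set -e
# Create a throwaway repo so the reflog selector below has something to point at.
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "first"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "second"
# HEAD@{1} is where HEAD was one move ago -- the "first" commit.
git show -s --format=%s HEAD@{1}   # prints "first"
```

The same selectors work with commands like git diff (e.g. git diff HEAD@{1} HEAD) to compare the previous HEAD position against the current one.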

Basic Usage

Running git reflog with no other arguments, or git reflog show, will give you a list of records showing when the tips of branches and other references in the repository were updated. Entries are listed in reverse chronological order, with the most recent operation first. The output for a fresh repository with an initial commit will look something like this.

13deb8e (HEAD -> main) HEAD@{0}: commit (initial): initial commit

Now let's create a new branch called feature with git switch -c feature and then commit some changes. Doing this will add a couple of entries to the reflog: one for checking out the branch, and one for committing some changes.

4f8d10d (HEAD -> feature) HEAD@{0}: commit: more stuff
13deb8e (main) HEAD@{1}: checkout: moving from main to feature
13deb8e (main) HEAD@{2}: commit (initial): initial commit

This log will continue to grow as we perform more operations that write data to Git.
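The steps above can be reproduced end to end. This sketch uses a throwaway repository and empty commits for brevity (the -b main flag assumes Git 2.28 or later), and also shows that each branch keeps its own reflog, separate from HEAD's:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q -b main
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "initial commit"
git switch -q -c feature
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "more stuff"
git reflog show feature   # entries for the feature branch only
git reflog --date=iso     # HEAD reflog with timestamps instead of HEAD@{n}
```

Passing --date=iso (or --date=relative) is handy when you need to find an entry by when it happened rather than by counting HEAD moves.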

A Rebase Gone Wrong

Let's do something slightly more complex. We're going to make some changes to main and then rebase our feature branch on top of it. This is the history of the feature branch once a few more commits have been added.

138afbf (HEAD -> feature) here's some more
cb72b26 even more stuff
4f8d10d more stuff
13deb8e initial commit

And this is what main looks like:

a84bdfa (HEAD -> main) add other content
13deb8e initial commit

After running git rebase main while on the feature branch, let's say some merge conflicts were resolved incorrectly and some code was accidentally lost. A Git log after such a rebase might look something like this.

be44ab0 (HEAD -> feature) here's some more
a84bdfa (main) add other content
13deb8e initial commit

Fun fact: if a commit between the merge base and the tip of the branch ends up empty after a rebase, for example because its changes were discarded during conflict resolution, Git will drop it from the rebased branch by default. In this example, I entirely discarded the contents of two commits "by mistake", and this resulted in Git dropping them from the current branch.

Alright. So we lost some code from some commits, and in this case, even the commits themselves. So how do we get them back when they are reachable from neither the main branch nor the feature branch?
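Before reaching for the reflog, it is worth knowing that an unreachable commit usually still exists in Git's object database until garbage collection removes it. This sketch simulates "losing" a commit with a hard reset in a throwaway repository, then confirms the object is still stored:

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "keep"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "lost"
lost=$(git rev-parse HEAD)
git reset -q --hard HEAD~1    # "lose" the tip commit
git cat-file -t "$lost"       # prints "commit": the object is still in the store
```

git fsck --lost-found can also enumerate dangling commits like this one if you don't have the hash handy.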

Reflog to the Rescue

Although our commits are inaccessible from all of our branches, Git did not actually delete them. If we look at the output of git reflog, we will see the following entries detailing all of the changes we have made to the repository up to this point:

be44ab0 (HEAD -> feature) HEAD@{0}: rebase (continue) (finish): returning to refs/heads/feature
be44ab0 (HEAD -> feature) HEAD@{1}: rebase (continue): here's some more
a84bdfa (main) HEAD@{2}: rebase (start): checkout main
138afbf HEAD@{3}: checkout: moving from main to feature
a84bdfa (main) HEAD@{4}: commit: add other content
13deb8e HEAD@{5}: checkout: moving from feature to main
138afbf HEAD@{6}: commit: here's some more
cb72b26 HEAD@{7}: commit: even more stuff
4f8d10d HEAD@{8}: commit: more stuff
13deb8e HEAD@{9}: checkout: moving from main to feature
13deb8e HEAD@{10}: commit (initial): initial commit

This may look like a lot, but we can see that the latest commit on our feature branch before the rebase reads 138afbf HEAD@{6}: commit: here's some more.

The SHA-1 hash associated with this entry is still stored by Git, and we can get back to it using git-reset. In this case, we can run git reset --hard 138afbf. Alternatively, git reset --hard ORIG_HEAD also works: ORIG_HEAD is a special ref that records where HEAD pointed before the last drastic operation, which includes, but is not limited to, merging and rebasing.

So if we run either of those commands, we’ll get output saying HEAD is now at 138afbf here's some more and our git log for the feature branch should look like the following.

138afbf (HEAD -> feature) here's some more
cb72b26 even more stuff
4f8d10d more stuff
13deb8e initial commit

Any code that was accidentally removed should now be accessible once again! Now the rebase can be attempted again.
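If you would rather not move your current branch with git reset --hard, an alternative sketch is to attach a new branch directly to the lost commit, which leaves the branch you are on untouched (the repository, commit messages, and the rescue branch name here are all illustrative):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "base"
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m "lost work"
lost=$(git rev-parse HEAD)
git reset -q --hard HEAD~1   # simulate losing the tip commit
git branch rescue "$lost"    # the lost commit is reachable again via "rescue"
git log --oneline rescue
```

From there you can cherry-pick or merge from the rescue branch at your leisure, then delete it when you are done.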

Reflog Pruning and Garbage Collection

One thing to keep in mind is that the reflog is not permanent; it is subject to occasional garbage collection by Git. In practice, this isn't a big deal, since most uses of the reflog involve recently created records. By default, reflog entries that are still reachable from the current tip expire after 90 days, and unreachable entries after 30 days. These durations can be controlled via the gc.reflogExpire and gc.reflogExpireUnreachable keys in your Git config.
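Adjusting these expiry windows is a one-liner per key. This sketch sets longer retention in a throwaway repository (the 180-day and 60-day values are arbitrary examples):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config gc.reflogExpire "180 days"             # reachable entries (default: 90 days)
git config gc.reflogExpireUnreachable "60 days"   # unreachable entries (default: 30 days)
git config --get gc.reflogExpire
```

Setting either value to "never" keeps those reflog entries indefinitely, at the cost of a repository that never sheds old history.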

Once reflog records expire, they become eligible for removal by git-gc. git gc can be invoked manually, but it usually isn't; commands such as git pull, git merge, git rebase, and git am can trigger git gc --auto behind the scenes.

I will abstain from going into detail about git gc, as that deserves its own article, but it is important to know about in the context of git reflog since it determines how long reflog entries survive.

Conclusion

git reflog is a very helpful tool that allows you to recover lost code and commits when used in conjunction with git reset. We learned how to use git reflog to view the changes made to a repository since it was created, and how to undo a bad rebase to recover lost commits.

