
Docker CLI Deep Dive


In my last post I covered all of the basics of getting started with Docker. In this post, I'll dive more deeply into the most common uses for the Docker CLI. I'm assuming that you've already got a working local Docker install. If not, you can refer back to my previous post.

Running Containers

docker run

This is the most important, and likely the most commonly used, Docker command. It is the command used to run container images. You can use it with images that you've built yourself, or you can use it to run images from a remote registry like Docker Hub.

docker run IMAGE This is the most basic way to use the run command. Docker will look for the named image locally first, and if it cannot find it, it will check whether it's available from Docker Hub and download it. The container runs in the foreground and can be exited by pressing Ctrl+C.
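
For example, assuming you want to try the official nginx image from Docker Hub (the image name here is just a placeholder; any image works the same way):

docker run nginx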

docker run IMAGE COMMAND [ARGS] Most Docker images will define a specific command to be executed when the container is run, but you can specify a custom command to run instead by adding it to your docker run command after the image tag. Optionally, you can also append any arguments that should be passed to your custom command. Keep in mind that the container will only run as long as the command executed continues to run. If your custom command exits for any reason, so will the container.
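
As a quick sketch (ubuntu and the echo command are only placeholders), the custom command and its arguments go after the image name:

docker run ubuntu echo "hello world"

The container exits as soon as echo finishes.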

docker run -it IMAGE By default, you cannot provide any input to a running container via STDIN. In order to respond to prompts, you need to add the --interactive option to keep STDIN open so your input reaches the container, and the --tty option to allocate a pseudo-terminal for it. You can combine both options using the shorthand option -it.
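
For example, assuming the image ships with bash (ubuntu is just a stand-in image name), you can get an interactive shell like this:

docker run -it ubuntu bash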

docker run -p HOST_PORT:CONTAINER_PORT IMAGE Often when running a container, you will want to make a connection from your local host machine into your local Docker container. This is only possible if you use the --publish or -p option to map a local host port to the internal port exposed by the container.
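
For example, assuming a web server image that listens on port 80 inside the container (nginx is a stand-in), this maps it to port 8080 on your host so you can reach it at http://localhost:8080:

docker run -p 8080:80 nginx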

docker run -d IMAGE If you don't need to interact with your container and you'd rather not block your terminal shell, you can use --detach or -d to run your container in the background.

NOTE: All of these options can be combined as desired.
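
For instance, a sketch that combines detached mode with a published port (the image name and ports are placeholders):

docker run -d -p 8080:80 nginx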

docker exec

You can use exec to run arbitrary commands inside of a running container. I use this most often when troubleshooting problems in containers that I'm building. If your container has bash or another shell available, you can use it to get an interactive shell inside of a container.

docker exec CONTAINER COMMAND [ARGS] This is similar to docker run, but instead of giving it the name of a container image, you provide the ID or name of a running container. The command you specify will run inside the specified container in the foreground of your shell. You can use the -it and -d options with exec just like you can with run.
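
For example, assuming the running container is named my-container and has bash installed (both are assumptions for illustration), you can open a shell inside it:

docker exec -it my-container bash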

Managing Containers and Images

docker ps List all of your running containers with their metadata.

docker ps -a List all containers, including stopped ones.

docker stop CONTAINER Terminate the container specified by the given ID or name via SIGTERM. This is the most graceful way to stop a container.

docker kill CONTAINER Terminate the container specified by the given ID or name via SIGKILL.

docker rm CONTAINER Delete the container specified by the given ID or name. This will completely remove it, and it will no longer appear in docker ps -a.
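
As an example of the full lifecycle (my-container is a placeholder name or ID):

docker stop my-container
docker rm my-container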

docker stats Starts a real-time display of stats like CPU and memory usage for your running containers. Press Ctrl+C to exit.

docker image list List all the container images present in your local image store.

docker image remove IMAGE_NAME[:TAG] Delete the given image from your local image store.

docker image prune -a Over time, you will accumulate a lot of images that take up disk space but are not in use. This command will bulk delete any image you have stored locally that isn't currently being used by a container (running or stopped).
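
For example, a typical cleanup might look like this (my-app is a placeholder repository name):

docker image remove my-app:1.0
docker image prune -a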

Building Images

Aside from run, docker build is the other crucial docker command. This command builds a portable container image from your Dockerfile and stores it in your local image store.

docker build PATH This is the most basic usage for build. PATH is the path to the folder containing your Dockerfile, and it also serves as the build context. The image is stored within Docker and identified by a hash derived from the image's contents.

docker build -t REPOSITORY_NAME[:VERSION_TAG] PATH The automatically generated hash image names aren't easy to remember or refer back to, so I usually add a custom tag at build time using the --tag or -t option. If you don't provide a version tag, it will default to latest.
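
For example, assuming your Dockerfile lives in the current directory and you want to call the image my-app (a placeholder name):

docker build -t my-app:1.0 .

Without -t, you would run docker build . and refer to the image by its generated ID instead.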

Publishing Images

docker tag

You may find that you need to re-tag an image after it's built. This is what docker tag is for.

docker tag SOURCE_IMAGE[:VERSION_TAG] TARGET_IMAGE[:VERSION_TAG] To use tag, you provide the repository name and version tag of the source image, followed by the repository name and version tag you want to apply to it. As always, the version tags are optional and default to latest.
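
For example, to prepare an image for a private registry (my-app and my-registry.com are placeholders):

docker tag my-app:1.0 my-registry.com/my-app:1.0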

docker login

In order to pull images from private registries, you'll need to use docker login.

docker login [REGISTRY_HOST] The registry host defaults to Docker Hub. You will be prompted for your username and password.
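
For example, to log in to a hypothetical private registry:

docker login my-registry.com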

docker push

push is used to publish docker images to a remote registry.

docker push REPOSITORY_NAME[:VERSION_TAG] Publish the specified image to a registry. If your repository name does not include a registry host, it will be published to Docker Hub. If you want to use a custom registry, you will need to use docker tag to re-tag the image such that the repository name includes the registry host name (ex: docker tag my-image-repo my-registry.com/my-image-repo). You will most likely need to use docker login to log in to your registry first.
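
Putting it all together, a sketch of publishing to a private registry (all of these names are placeholders):

docker tag my-app:1.0 my-registry.com/my-app:1.0
docker login my-registry.com
docker push my-registry.com/my-app:1.0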

Conclusion

Congratulations! You're on your way to being a Docker expert. However, it's worth noting that this list really only scratches the surface of the commands available in the Docker CLI. For more information, check out the CLI docs or simply type docker --help at your shell. You can also use --help with most other docker CLI commands.
