Docker CLI Deep Dive

In my last post I covered all of the basics of getting started with Docker. In this post, I'll dive more deeply into the most common uses for the Docker CLI. I'm assuming that you've already got a working local Docker install. If not, you can refer back to my previous post.

Running Containers

docker run

This is the most important, and likely the most commonly used, Docker command. It is the command used to run container images. You can use it with images that you've built yourself, or you can use it to run images from a remote registry like Docker Hub.

docker run IMAGE This is the most basic way to use the run command. Docker will look for the named image locally first, and if it cannot find it, it will check whether it's available from Docker Hub and download it. The container runs in the foreground and can be stopped by pressing Ctrl+C.
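
For example, a minimal sketch using the official nginx image (chosen purely as an illustration; any image name works the same way):

    # Docker pulls nginx from Docker Hub if it isn't already present locally
    docker run nginx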

docker run IMAGE COMMAND [ARGS] Most Docker images will define a specific command to be executed when the container is run, but you can specify a custom command to run instead by adding it to your docker run command after the image tag. Optionally, you can also append any arguments that should be passed to your custom command. Keep in mind that the container will only run as long as the command executed continues to run. If your custom command exits for any reason, so will the container.
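
As a quick sketch, assuming the official ubuntu image, you could override its default command like this:

    # echo runs instead of the image's default command; the container
    # exits as soon as echo finishes
    docker run ubuntu echo "hello from inside the container"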

docker run -it IMAGE By default, you cannot provide any input to a running container via STDIN. In order to respond to prompts, you need to add the --interactive option to keep STDIN open and attached to your terminal, and the --tty option to allocate a pseudo-terminal for the container. You can combine both options using the shorthand option -it.
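
For instance, assuming an image that ships with bash (the official ubuntu image is used here only as an example):

    # -i keeps STDIN open, -t allocates a pseudo-terminal
    docker run -it ubuntu bash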

docker run -p HOST_PORT:CONTAINER_PORT IMAGE Often when running a container, you will want to make a connection from your local host machine into your local Docker container. This is only possible if you use the --publish or -p option to map a port on your local host to the port exposed inside the container.
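
For example, nginx listens on port 80 inside its container, so a plausible mapping looks like this:

    # Requests to http://localhost:8080 are forwarded to port 80 in the container
    docker run -p 8080:80 nginx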

docker run -d IMAGE If you don't need to interact with your container and you'd rather not block your terminal shell, you can use --detach or -d to run your container in the background.

NOTE: All of these options can be combined as desired.
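
For instance, a common combination runs a web server in the background with its port published (again using nginx as an illustrative image):

    # Detached, with port 8080 on the host mapped to port 80 in the container
    docker run -d -p 8080:80 nginx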

docker exec

You can use exec to run arbitrary commands inside of a running container. I use this most often when troubleshooting problems in containers that I'm building. If your container has bash or another shell available, you can use it to get an interactive shell inside of a container.

docker exec CONTAINER COMMAND [ARGS] This is similar to docker run, but instead of giving it the name of a container image, you provide the ID or name of a running container. The command you specify will run inside the specified container in the foreground of your shell. You can use the -it and -d options with exec just like you can with run.
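
For example, assuming a running container named my-container (a placeholder) whose image includes bash:

    # Open an interactive shell inside the running container
    docker exec -it my-container bash

    # Or run a one-off command in the foreground (assuming ls exists in the image)
    docker exec my-container ls /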

Managing Containers and Images

docker ps List all of your running containers with their metadata

docker ps -a List all containers, including stopped ones

docker stop CONTAINER Terminate the container specified by the given ID or name via SIGTERM. This is the most graceful way to stop a container.

docker kill CONTAINER Terminate the container specified by the given ID or name via SIGKILL.

docker rm CONTAINER Delete the container specified by the given ID or name. This completely removes it, and it will no longer appear in docker ps -a.

docker stats Starts a real-time display of stats like CPU and memory usage for your running containers. Press Ctrl+C to exit.
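
Putting these together, a typical container cleanup sequence might look like this (my-container is a placeholder for your container's name or ID):

    docker ps                  # find the running container's ID or name
    docker stop my-container   # ask it to shut down gracefully
    docker ps -a               # the stopped container still appears here
    docker rm my-container     # remove it for good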

docker image list List all the container images stored locally on your machine.

docker image remove IMAGE_NAME[:TAG] Delete the given image from your local image store.

docker image prune -a Over time, you will accumulate a lot of images that take up disk space but are not in use. This command will bulk delete any image stored locally that isn't referenced by at least one container (running or stopped).
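
For example, a quick local cleanup might look like this (my-app is a placeholder repository name):

    docker image list               # see what's stored locally
    docker image remove my-app:1.0  # delete one specific image
    docker image prune -a           # remove everything not referenced by a container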

Building Images

Aside from run, docker build is the other crucial Docker command. It builds a portable container image from your Dockerfile and stores it in your local image store.

docker build PATH This is the most basic usage for build. PATH is the path to the folder containing your Dockerfile (usually the current directory, .). The resulting image is stored locally and identified by a hash derived from the image's contents.

docker build -t REPOSITORY_NAME[:VERSION_TAG] PATH The automatically generated hash image names aren't easy to remember or refer back to, so I usually add a custom tag at build time using the --tag or -t option. If you don't provide a version tag, it will default to latest.
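
For example, assuming your Dockerfile lives in the current directory and my-app is a repository name of your choosing:

    # "." is the build context; Docker looks for the Dockerfile there
    docker build -t my-app:1.0 .

    # Without a version tag, the image is tagged my-app:latest
    docker build -t my-app .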

Publishing Images

docker tag

You may find that you need to re-tag an image after it's built. This is what docker tag is for.

docker tag SOURCE_IMAGE[:VERSION_TAG] TARGET_IMAGE[:VERSION_TAG] To use tag, simply provide the repository name and version tag of the source image, followed by the repository name and version tag you want the new tag to have. As always, the version tags are optional and default to latest.
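
For example, to prepare a locally built image for a private registry (registry.example.com and my-app are placeholders):

    docker tag my-app:1.0 registry.example.com/my-app:1.0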

docker login

In order to pull images from private registries, you'll need to use docker login.

docker login [REGISTRY_HOST] The registry host defaults to Docker Hub. You will be prompted for your username and password.
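
For example (registry.example.com is a placeholder for your private registry's host name):

    # Log in to Docker Hub
    docker login

    # Log in to a private registry
    docker login registry.example.com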

docker push

push is used to publish docker images to a remote registry.

docker push REPOSITORY_NAME[:VERSION_TAG] Publish the specified image to a registry. If your repository name does not include a registry host, it will be published to Docker Hub (https://hub.docker.com). If you want to use a custom registry, you will need to use docker tag to re-tag the image such that the repository name includes the registry host name (ex: docker tag my-image-repo my-registry.com/my-image-repo). You will most likely need to use docker login to log in to your registry first.
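
Putting the publishing steps together, a sketch of pushing to a private registry might look like this (all names are placeholders):

    docker build -t my-app:1.0 .
    docker tag my-app:1.0 registry.example.com/my-app:1.0
    docker login registry.example.com
    docker push registry.example.com/my-app:1.0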

Conclusion

Congratulations! You're on your way to being a Docker expert. However, it's worth noting that this list only scratches the surface of the commands available in the Docker CLI. For more information, check out the CLI docs or simply type docker --help at your shell. You can also use --help with most other Docker CLI commands.
