# How to configure and optimize a new Serverless Framework project with TypeScript
If you’re trying to ship some serverless functions to the cloud quickly, Serverless Framework is a great way to deploy to AWS Lambda. It allows you to deploy APIs, schedule tasks, build workflows, and process cloud events through code-based configuration. Serverless Framework supports all the same language runtimes as Lambda, so you can use JavaScript, Ruby, Python, PHP, PowerShell, C#, and Go. When writing JavaScript, though, TypeScript has emerged as a popular choice: it is a superset of the language that adds static typing, which many developers have found invaluable.

In this post, we will set up a new Serverless Framework project that uses TypeScript, along with some of the optimizations I recommend for collaborating with teams. These are my preferences, but I’ll mention other great alternatives that exist as well.

## Starting a New Project

The first thing we’ll want to ensure is that we have the Serverless Framework installed locally so we can use its commands. You can install it using npm:
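A minimal sketch of the install. A global install is the quickest way to get the `serverless` command on your PATH (we’ll add a local copy to the project later):

```sh
# Install the Serverless Framework CLI globally
npm install -g serverless
```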
From here, we can initialize our project by running the `serverless` command to launch the CLI, which will prompt you to pick a starter template. We’ll use the Node.js - Starter for this demo since we’ll be doing a lot of our own customization, but you should check out the other options. At this point, give your project a name. If you use the Serverless Dashboard, select the org you want to use; we’ll skip this step since we won’t be using the dashboard. We’ll also skip deployment, but you can run that step using your default AWS profile.

You’ll now have an initialized project with 4 files:

- .gitignore
- index.js - holds the handler that is referenced in the configuration
- README.md - contains some framework commands you may find useful
- serverless.yml - the main configuration file for defining your serverless infrastructure

We’ll cover these in more depth in a minute, but this setup lacks a few things. First, I can’t write TypeScript files. I also don’t have a way to run and test things locally. Let’s solve these.

## Enabling TypeScript and Local Dev

Serverless Framework’s most significant feature is its rich plugin library. There are 2 packages I install on any project I’m working on:

- serverless-offline
- serverless-esbuild

Serverless Offline emulates AWS features so you can test your API functions locally. There aren’t any real alternatives to it for Serverless Framework, and it doesn’t handle everything AWS can do. For instance, authorizer functions don’t work locally, so offline development may not be on the table for you if that’s a must-have feature. There are some other limitations, and I’d consult the project’s issues and README for a thorough understanding, but I’ve found it excellent for 99% of the projects I’ve built with the framework.

Serverless Esbuild allows you to use TypeScript and gives you extremely fast bundling and minification. There are a few alternatives for TypeScript, but I don’t like them for a few reasons. First is Serverless Bundle, which gives you a fully configured webpack-based project with linters, loaders, and other features pre-configured. I’ve had to escape its default settings on several occasions and found the plugin less flexible than I wanted. If you need that advanced configuration but want to stay on webpack, Serverless Webpack allows you to take everything Bundle does and extend it with your own customizations. If I’m getting to that level, though, I just want a zero-configuration option, which esbuild can be, so I opt for it instead. Its bundle times are also incredibly fast. If you want just TypeScript, many people use serverless-plugin-typescript, but it doesn’t support all TypeScript features out of the box and can be hard to configure.

To configure my preferred setup, do the following (sketches of each step follow this list):

1. Install the plugins by setting up your package.json. I’m using yarn, but you can use your preferred package manager. Note: I’m also installing serverless here, so I have a local copy that can be used in package.json scripts. I strongly recommend doing this.
2. In our serverless.yml, register and configure the plugins.
3. In our newly created package.json, add a script to run the local dev server.
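The following sketches show roughly what each step produces; the package names are real, but the service name and version numbers are illustrative assumptions.

Step 1, installing the dev dependencies (serverless-esbuild also needs esbuild itself as a peer dependency):

```sh
yarn add -D serverless serverless-offline serverless-esbuild esbuild
```

Step 2, a minimal serverless.yml registering the plugins. Note that serverless-esbuild should be listed before serverless-offline so bundling happens before the offline server starts:

```yaml
service: my-serverless-demo
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs18.x

plugins:
  - serverless-esbuild
  - serverless-offline

functions:
  hello:
    handler: index.handler
```

Step 3, a dev script in package.json that starts the emulator through the local serverless install:

```json
{
  "name": "my-serverless-demo",
  "scripts": {
    "dev": "serverless offline start"
  },
  "devDependencies": {
    "serverless": "^3.38.0",
    "serverless-esbuild": "^1.50.0",
    "serverless-offline": "^13.3.0"
  }
}
```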
We can now run yarn dev in our project root and get a running server 🎉 But our handler is still in JavaScript. We’ll want to pull in some types and set up our tsconfig.json to fix this. For the types, I use @types/aws-lambda and @types/serverless, which can be installed as dev dependencies. We can then rename index.js to index.ts and type the handler. With this, we have TypeScript running and can do local development. Our function doesn’t expose an HTTP route yet, though, which makes it harder to test, so let’s expose one quickly with some configuration. Sketches of the tsconfig.json, the updated handler, and the route configuration follow:
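First, a plausible tsconfig.json for this setup (a sketch, not the post’s exact file; the important parts are strict mode and CommonJS output for Lambda, with emit left to esbuild):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "commonjs",
    "moduleResolution": "node",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "noEmit": true
  },
  "include": ["**/*.ts"],
  "exclude": ["node_modules", ".serverless", ".esbuild"]
}
```

Next, the handler renamed to index.ts and typed with the aws-lambda types:

```typescript
import type { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

// A simple healthcheck-style handler returning a JSON payload.
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: 'OK', path: event.path }),
  };
};
```

Finally, the function entry in serverless.yml exposing the handler over HTTP (serverless-offline serves http events on port 3000 by default):

```yaml
functions:
  healthcheck:
    handler: index.handler
    events:
      - http:
          path: healthcheck
          method: get
```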
So now we can point our browser to http://localhost:3000/healthcheck and see output showing our endpoint working!

## Making the Configuration DX Better

Many developers don’t love YAML because of its strict whitespace rules. Serverless Framework supports JavaScript or JSON configurations out of the box, but we want to know whether our configuration is valid as we’re writing it. Luckily, we can use TypeScript to generate a type-safe configuration file! We’ll need to add 2 more packages to our dev dependencies to make this work. Then we can change our serverless.yml to serverless.ts and rewrite it with type safety. Sketches of both steps follow:
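The post doesn’t name the two packages at this point; I’m assuming the common pairing for TypeScript-based Serverless configs: ts-node (so the framework can load a .ts file) and @serverless/typescript (which supplies the AWS config types):

```sh
yarn add -D ts-node @serverless/typescript
```

With those in place, a serverless.ts mirroring our YAML might look like this (the service name is illustrative):

```typescript
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'my-serverless-demo',
  frameworkVersion: '3',
  plugins: ['serverless-esbuild', 'serverless-offline'],
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
  },
  functions: {
    healthcheck: {
      handler: 'index.handler',
      events: [{ http: { path: 'healthcheck', method: 'get' } }],
    },
  },
};

module.exports = serverlessConfiguration;
```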
Note that we can’t use the export keyword for our configuration and need to use module.exports instead.

## Further Optimization

There are a few more settings I like to enable in serverless projects that I want to share with you. A combined sketch of all of them follows the subsections below.

### AWS Profile

In the provider section of our configuration, we can set a profile field. This refers to the locally configured AWS profile we want to use for this project. I recommend this if you’re working on multiple projects, to avoid deploying to the wrong AWS target. You can run `aws configure --profile <profile-name>` to set this up; the profile name specified should match what you put in your Serverless configuration.

### Individual Packaging

Cold starts are a big problem in serverless computing. We need to make our Lambda functions as small as possible, and one of the best ways is to package them individually. By default, Serverless Framework bundles all configured functions into a single artifact that gets uploaded to every Lambda, but you can change that by specifying that each function should be packaged on its own. This is a top-level setting in your serverless configuration, and it will help reduce your function bundle sizes, leading to better cold start performance.

### Memory & Timeout

Lambda charges you based on an intersection of memory usage and function runtime. There are limits to what you can set these values to, but out of the box they are 128MB of memory and a 3s timeout. Depending on what you’re doing, you’ll want to adjust these settings. API Gateway has a timeout window of 30s, so HTTP events can’t exceed that, but other Lambdas can run for up to 15 minutes on AWS. For memory, you can go all the way up to 10GB for your functions as needed. I tend to default to 512MB of memory and a 10s timeout window, but make sure you base your values on real-world runtime values from your monitoring.

### Monitoring

Speaking of monitoring: by default, your logs will go to CloudWatch, but AWS X-Ray is off by default. You can enable it using the tracing configuration, setting the services you want to trace to true for quick debugging.
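Here is a combined sketch of these settings in serverless.ts form; the profile name is a hypothetical placeholder, and the memory/timeout values are the defaults I discussed above, not mandates:

```typescript
import type { AWS } from '@serverless/typescript';

const serverlessConfiguration: AWS = {
  service: 'my-serverless-demo',
  // ...plugins and functions as shown earlier...
  package: {
    individually: true, // bundle each function separately for smaller cold starts
  },
  provider: {
    name: 'aws',
    runtime: 'nodejs18.x',
    profile: 'my-project', // named AWS CLI profile (assumed name)
    memorySize: 512, // MB; Lambda's default is 128
    timeout: 10, // seconds; Lambda's default is 3
    tracing: {
      apiGateway: true, // enable X-Ray tracing for API Gateway
      lambda: true, // enable X-Ray tracing for Lambda
    },
  },
};

module.exports = serverlessConfiguration;
```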
You can see all these settings in my serverless configuration in the code published to our demo repo: https://github.com/thisdot/blog-demos/tree/main/serverless-setup-demo

## Other Notes

Two other important serverless features I want to share aren’t as common in small apps, but if you’re trying to build larger applications, they’re important. First is the useDotenv feature, which I talk more about in this blog post. Many people still use the serverless-dotenv-plugin, which is no longer needed for newer projects with basic needs. The second applies if you’re using multiple cloud services: by default, your Lambdas may not have permission to access the other resources they need. You can read more about this in the official Serverless documentation about IAM permissions.

## Conclusion

If you’re starting a new serverless project, these settings should help you get up and running quickly and make your project a great developer experience for you and those working with you. If you want to avoid doing these steps yourself, check out our starter.dev serverless kit to help you get started.

Jan 26, 2024
7 mins

# Deploying apps and services to AWS using AWS Copilot CLI
Copilot has become a household name for developers, all thanks to GitHub’s popular AI tooling. But before GitHub released its Copilot, AWS already had a developer tool in the wild with the same name. I stumbled across AWS Copilot a year or two ago and found it to be a really great tool for easily deploying serverless applications and services to AWS infrastructure. Since deploying to AWS has been such a huge pain point for so many developers, Copilot CLI is one of many tools designed to make this process a lot easier.

AWS Copilot is a command-line interface (CLI) that simplifies the process of deploying and managing containerized applications on AWS. It abstracts away the complexity of managing infrastructure, allowing us to focus on writing code. This blog post will provide an overview of the AWS Copilot CLI and explore its practical use cases.

AWS Copilot CLI is designed to simplify building, releasing, and operating production-ready containerized applications on Amazon ECS and AWS Fargate. It offers an intuitive interface for developers to launch and manage environments, jobs, pipelines, and services in the cloud. Although ECS and Fargate are considered ‘serverless’, you are actually deploying to more traditional, stateful web servers on EC2 _(these are cool again)_.

## Installation

You can install the Copilot CLI using Homebrew:
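A sketch of the install, using the Homebrew tap AWS publishes for Copilot:

```sh
brew install aws/tap/copilot-cli
```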
Otherwise, use one of the scripts from the installation page.

## Credentials

You need to add AWS credentials with proper permissions to your ~/.aws/credentials file to use Copilot. See the credentials docs for more details on this.
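A sketch of what that file looks like, with placeholder values (this is the standard shared-credentials format used by all AWS tooling):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```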
## Overview

All you need is a Dockerfile that knows how to build and run your application, and Copilot will handle the rest. Copilot provides a simple declarative set of commands, including examples and guided experiences built in. These commands make it easy to start from scratch and get a containerized application running in the cloud in just a few steps. The configuration files that Copilot generates (called manifests) allow us to easily configure and adjust the amount of compute resources available to our web service _(cpu, memory, instances, etc)_.

Consider the architecture of a load-balanced web service running on AWS and deployed with Copilot. The outermost layer is the region your application is deployed to on AWS, ie: us-east-1, us-west-1, etc. Copilot handles all the networking for your application, which includes a VPC and public subnets that your server instances can be reached from. The load balancer (ALB) sits at the top, listens for requests, and directs traffic to the ECS cluster (instances of your application server). Typically, the work involved with setting all of these pieces up and getting them working together is cumbersome, to put it lightly.

## Concepts

Before we dive into the commands, let’s take a quick look at the concepts Copilot is built on so we have a clear understanding of what we can achieve with it.

### Applications

An Application in Copilot is the top-level parent of all the AWS infrastructure managed by your Copilot setup. (In this article, I generally use ‘application’ in the lowercase sense to mean any software you are deploying, like an API server.) In Copilot terms, your Application is the entire collection of environments and services you have configured. In the documentation’s example diagram, a Vote application consists of a number of different services, jobs, pipelines, etc.

### Services

In AWS Copilot, “Services” refer to the type of application (not to be confused with the capital-A Application in the previous section) that you’re deploying and the underlying AWS infrastructure that supports it. Taking the Load Balanced Web Service as an example, the Service consists of the ECS (Elastic Container Service) service, the application load balancer, and the network load balancer. These are some of the pieces of AWS infrastructure that Copilot orchestrates for you when deploying this type of “Service”. There are a few main types of services that you can deploy with Copilot:

- Internet-facing web services (Static Site, Load Balanced Web Service, etc)
- Backend services (services that can only be accessed internally)
- Worker services (pub/sub queues, etc)

### Environments

When setting up your project and your first service, you will supply Copilot with the name of the environment you want to deploy to. For example, you can initially create a production environment for your application and services, and later add additional environments like staging.

### Jobs

You might also know these as ‘crons’: tasks or code that runs on a schedule.

### Pipelines

Pipelines are for automating tests and releases. This is AWS’s CI/CD service, somewhat similar to GitHub Actions. Copilot can initialize and create new pipelines for you. We can configure a pipeline as a manifest in our codebase that declares how we build and deploy our application(s).

## Configuration

Copilot is configured through various configuration files called ‘manifests’. The documentation contains examples of how to configure your application, services, environments, jobs, and pipelines, and you can refer to it to learn what options are available. Since Copilot is a CLI tool, it makes much of the configuration process painless by providing walkthrough prompts and generating the configuration files for you.

## The CLI

Now that we have a pretty good idea of what AWS Copilot is and the types of things we can accomplish with it, let’s walk through using the CLI. The place to start is `copilot init`, which gets your application ready for deployment to AWS. The init command will prompt you with questions, like what kind of service you want to generate, and once you’re done, it will generate all the initial configuration files for you. A sketch of an example invocation follows:
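This sketch runs init with the prompts answered up front via flags; the app/service names and Dockerfile path are illustrative placeholders:

```sh
copilot init \
  --app demo \
  --name api \
  --type "Load Balanced Web Service" \
  --dockerfile ./Dockerfile \
  --deploy
```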
You’ll pick your service type, tell Copilot where your Dockerfile lives, name your environment, and let it do its magic from there. Copilot generates your manifest files, which in turn become CloudFormation stacks that set up all your infrastructure on AWS.

## Other Commands

There are commands for each concept we covered above: Create, Delete, and Deploy operations for Environments, Services, Jobs, and Pipelines. If you wanted to add a staging environment to your application, you could just run `copilot env init`. There are also commands to manage your Application or things like tasks; to see details about your Application, you can run `copilot app show -n [name]`.

## Conclusion

AWS Copilot CLI is a powerful tool that simplifies the deployment and management of containerized applications on AWS. It abstracts away the complexities of the underlying infrastructure, allowing developers to focus on writing code and delivering value to their customers. Whether you’re deploying a simple web service or managing multiple environments, AWS Copilot CLI can make your cloud development process more efficient and enjoyable.

Jan 3, 2024
6 mins