
How to Retry Failed Steps in GitHub Action Workflows


Sometimes things go wrong in your GitHub Actions workflow steps, and you may want to retry them. In this article, we'll cover two methods for doing this!

Prerequisites

  • Git: This should be installed and available in your path.
  • GitHub account: We'll need this to use GitHub Actions.

Initial setup

To follow along, here are the steps you can take to set up your GitHub Actions workflow:

Initialize your git repository

In your terminal, run git init to create an empty git repository, or skip this step if you already have an existing git repository.

Create a workflow file

GitHub workflow files are .yaml/.yml files that contain a series of jobs and steps to be executed by GitHub Actions. They must live in the .github/workflows directory, so create those directories if they do not exist yet. Then create a file named retry.yml in .github/workflows. For now, the file can contain the following:

name: "Retry action using retry step"
on:
  # This workflow runs when changes are pushed to GitHub
  push:
  # This workflow can also be triggered manually
  workflow_dispatch:
jobs:
  # This job name is up to you
  retry-job:
    runs-on: "ubuntu-latest"
    name: My Job
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Print
        run: |
          echo 'Hello'

Testing your workflow

You can test your GitHub Actions workflow by pushing your changes to GitHub and going to the Actions tab of the repository. You can also choose to test locally using act.

Retrying failed steps

Approach 1: Using the retry-step action

The retry-step action lets us retry failed shell commands. If a step, or series of steps, consists of shell commands, we can use the retry-step action to retry them.

If, however, you'd like to retry a step that uses another action, then the retry-step action will NOT work for you. In that case, you may want to try the alternative approach described below.

Modify your workflow file to contain the following:

name: "Retry action using retry step"
on:
  # This workflow runs when changes are pushed to GitHub
  push:
  # This workflow can also be triggered manually
  workflow_dispatch:
jobs:
  # This job name is up to you
  retry-job:
    runs-on: "ubuntu-latest"
    name: My Job
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Retry a flaky command
        # Use the retry action
        uses: nick-fields/retry@v2
        with:
          max_attempts: 3
          retry_on: error
          timeout_seconds: 5
          # You can specify the shell commands you want to retry here
          command: |
            echo 'some command that would potentially fail'
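Beyond the inputs shown above, the action's documentation also lists inputs such as timeout_minutes, retry_wait_seconds (seconds to wait between attempts), and on_retry_command (a command to run before each retry). Treat the exact input names as something to verify against the nick-fields/retry README for the version you pin. Here is a minimal sketch of another step that could slot into the same steps list (the script path is just a placeholder):

      - name: Retry with a pause between attempts
        uses: nick-fields/retry@v2
        with:
          max_attempts: 3
          # The action requires either timeout_minutes or timeout_seconds
          timeout_minutes: 2
          # Wait 30 seconds between attempts (input name per the action's docs)
          retry_wait_seconds: 30
          # Command to run before each retry attempt (input name per the action's docs)
          on_retry_command: echo 'Retrying...'
          command: |
            ./scripts/flaky-task.sh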

Approach 2: Duplicate steps

If you are trying to retry steps that use other actions, the retry-step action may not get the job done. In this case, you can still retry a step by duplicating it and running the duplicate conditionally, depending on whether the original step failed.

GitHub provides us with three additional attributes for our steps:

  • continue-on-error - Setting this to true means that even if the current step fails, the job will continue on to the next step (by default, a failure stops the job).
  • steps.{id}.outcome - where {id} is an id you add to the steps you want to retry. This tells us whether a step failed or succeeded; possible values include 'failure' and 'success'.
  • if - allows us to conditionally run a step.
Putting these together, modify your workflow file to contain the following:

name: "Retry action using step duplication"
on:
  # This workflow runs when changes are pushed to GitHub
  push:
  # This workflow can also be triggered manually
  workflow_dispatch:
jobs:
  # This job name is up to you
  retry-job:
    runs-on: "ubuntu-latest"
    name: My Job
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - name: Some action that can fail
        # You need to specify an id to be able to tell what the outcome of this step was
        id: myStepId1
        # This needs to be true so that the job proceeds to the next step on failure
        continue-on-error: true
        uses: actions/someaction

      # Duplicate of the step that might fail ~ manual retry
      - name: Some action that can fail
        id: myStepId2
        # Only run this step if the first attempt failed. We can tell it failed because we gave the first step an `id`
        if: steps.myStepId1.outcome == 'failure'
        continue-on-error: true
        uses: actions/someaction
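Note that because both attempts set continue-on-error: true, the job will succeed even if the retry also fails. If you want the job to fail in that case, one option (a small sketch) is to add a final step that checks both outcomes:

      - name: Fail the job if both attempts failed
        if: steps.myStepId1.outcome == 'failure' && steps.myStepId2.outcome == 'failure'
        run: exit 1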

Bonus: Retrying multiple steps

If you want to retry multiple steps at once, then you can use composite actions to group the steps you want to retry, and then use the duplicate steps approach mentioned above.
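For illustration, here is a rough sketch of what that could look like, assuming a hypothetical local composite action stored at .github/actions/flaky-steps/action.yml (the script paths are placeholders):

name: "Flaky steps"
description: "Groups the steps we want to retry as a single unit"
runs:
  using: "composite"
  steps:
    - name: First step that can fail
      shell: bash
      run: ./scripts/step-one.sh
    - name: Second step that can fail
      shell: bash
      run: ./scripts/step-two.sh

The workflow can then run the composite action and retry it with the duplicate-steps approach (the repository must already be checked out so the local action is available):

      - name: Run flaky steps
        id: flaky1
        continue-on-error: true
        uses: ./.github/actions/flaky-steps

      # Manual retry of the composite action
      - name: Retry flaky steps
        id: flaky2
        if: steps.flaky1.outcome == 'failure'
        uses: ./.github/actions/flaky-steps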

Conclusion

How do you decide which approach to use?

  • If you are retrying a step that only runs shell commands, you can use the retry-step action.

  • If you are retrying a step that uses another action, you can duplicate the step and run the duplicate conditionally to retry it manually.


