Hassan Sani


Developer Advocate


How to Create Better Test Coverage Using Cypress 10

Testing is an integral part of software development, and it is important that all developers learn best testing practices. Jordan Powell, DX Engineer at Cypress and Angular Google Developer Expert, has some tips to share with developers on how to write better tests using Cypress 10. In this article, we will take a look at Powell’s tips for writing end-to-end tests, component tests, and advanced testing patterns in Cypress. If you want to learn more from Jordan Powell, please check out his Advanced Cypress JS Drop training.

## Table of Contents

- Why is Testing important?
- Types of Testing
- A new take on the testing pyramid
- Differences between end-to-end testing and component testing
- Jordan Powell’s best practices for E2E testing
  - Don’t use HTML Native selectors
  - Use Closures
  - Independent Tests
  - Use Route Aliases
  - Setting a Global baseUrl
- Jordan Powell’s Best Practices for Component Testing
  - Default Config Mount
  - Cypress Intercept
  - Use createOutputSpy
- Jordan Powell’s Advanced Cypress Patterns
  - Session
  - Origin
- Conclusion

## Why is Testing important?

Software testing identifies issues within the application, and helps ensure that only high-quality products are shipped to the user. Here are Jordan Powell’s reasons why testing is important, and how it helps developers:

- **Documentation**: Good test coverage results in developers creating stronger test plans and better QA testing for new releases.
- **Confidence**: Writing good tests allows developers to build new features with confidence, because the application works as intended.
- **Safe refactoring**: Good test coverage leads to less refactoring down the road, and gives developers more time to work on new features.
- **Improved UX**: Good test coverage provides a better UX (User Experience) for the end user, because the application works as intended.

## Types of Testing

E2E, integration, and unit testing are three common methods for testing software applications.
**Unit test**: Unit testing serves as the foundation of the test pyramid. If you're working in a functional language, a unit will most likely be a single function. Your unit tests will call a function with different parameters and ensure that it returns the expected values. In an object-oriented language, a unit can range from a single method to an entire class.

**Integration test**: Integration testing is where individual components of your application are tested as a group.

**End-to-end test**: E2E testing is a technique that tests your app from the web browser through to the back end of your application, as well as testing integrations with third-party APIs and services. These types of tests are great at making sure your entire app is functioning as a cohesive whole.

## A new take on the testing pyramid

In Jordan Powell’s Advanced Cypress JS Drop training, he challenges developers to rethink the traditional testing pyramid, and believes that component testing should also be included. Component testing with Cypress 10 allows developers to test individual components quickly, regardless of their complexity. Component tests differ from end-to-end tests in that instead of visiting a URL to pull up an entire app, a component can be "mounted" and tested on its own.
## Differences between end-to-end testing and component testing

Here are a few key differences between end-to-end and component testing:

| End-to-end testing | Component testing |
| --- | --- |
| The entire application and all of its layers are tested | Only the independent components are tested |
| Testing can be done by developers and QA teams | Testing is done by the developers |
| Often requires a complex setup | No extra configuration for CI (Continuous Integration) environments needed |
| Initialization command: `cy.visit(url)` | Initialization command: `cy.mount()` |

## Jordan Powell’s best practices for E2E testing

### Don’t use HTML Native selectors

It is not good practice to use element selectors, id attributes, or class attributes when writing end-to-end tests. Using native HTML selectors can lead to team members refactoring tests later on, or changing attributes in ways that break the tests. Jordan Powell recommends using data attributes that inform the team that this is a test attribute. When other team members need to change anything, they will be aware of the function of the attribute and what it may affect.

### Use Closures

Jordan warns against assigning return values from Cypress assertions, and recommends using closures to access what a command yields.

### Independent Tests

It is not good practice to couple related tests. Jordan Powell suggests that tests should run independently from one another, without sharing or relying on another test's state. If test suites are related in any way, run the related setup before each of the test runs.

### Use Route Aliases

Another recommended practice is to use route aliases to guard Cypress against proceeding until an explicit condition is met.
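The E2E practices above can be sketched in a single spec. This is an illustrative example, not Powell's code: the selectors, route, and element names are hypothetical.

```javascript
// Illustrative Cypress E2E spec (names and routes are hypothetical).
describe("login form", () => {
  beforeEach(() => {
    // Independent tests: set up state before each test instead of
    // relying on a previous test having run.
    cy.visit("/login");
  });

  it("submits the form", () => {
    // Route alias: wait for the request instead of an arbitrary delay.
    cy.intercept("POST", "/api/login").as("login");

    // data-* attribute selector instead of native id/class selectors.
    cy.get("[data-cy=submit-btn]").click();
    cy.wait("@login");

    // Closure: use the yielded subject inside .then() rather than
    // assigning a Cypress command's return value to a variable.
    cy.get("[data-cy=welcome-banner]").then(($banner) => {
      expect($banner.text()).to.contain("Welcome");
    });
  });
});
```

Note how `cy.wait("@login")` keeps the test from proceeding until the aliased route has resolved, which is the guard Powell describes.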
### Setting a Global baseUrl

It is bad practice to hardcode URL strings for the baseUrl, because they will eventually fail when the environment changes. Instead, Powell recommends using a config file that holds the environment's baseUrl.

## Jordan Powell’s Best Practices for Component Testing

All of the code examples in this next section are for Angular, but Cypress component testing works for all major frontend JavaScript frameworks.

### Default Config Mount

Component testing takes two parameters: the component to mount, and a configuration object. Cypress ships with a default mount config you can use, but to manage multiple modules and imports without adding too much boilerplate, it is recommended to create a custom mount config for specific instances. A custom mount config can boost flexibility.

### Cypress Intercept

Component testing exercises only the individual component you are testing at a particular time. To handle external APIs, it is recommended to use cy.intercept. According to the docs, cy.intercept can be used to passively listen for matching routes without manipulating the request or its response in any way.

### Use createOutputSpy

When working with an EventEmitter for components that may contain forms or other elements with events, createOutputSpy will automatically create an EventEmitter and set up a spy on its .emit() method.

## Jordan Powell’s Advanced Cypress Patterns

### Session

When testing routes or components that require authentication, Cypress provides cy.session, which eliminates writing functions in your tests to check for a session on every call to the route or component. cy.session caches the session, and will skip the login process when testing components that need the session on a second call.

### Origin

Running tests for projects that require visits to multiple different domains can run into CORS errors, or a limitation on visits, due to the browser environment. Cypress provides cy.origin to help bypass some of these browser limitations.
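A component-test sketch tying the mount config and createOutputSpy together might look like the following. The component and selector names are hypothetical; the `cy.mount` and `createOutputSpy` signatures are taken from the Cypress Angular docs.

```javascript
// Illustrative Angular component test (component name is hypothetical).
import { createOutputSpy } from 'cypress/angular';
import { CounterComponent } from './counter.component';

describe('CounterComponent', () => {
  it('emits on increment', () => {
    // cy.mount takes the component plus a config object; a shared
    // custom mount helper can pre-register common modules/providers
    // so each test avoids repeating this boilerplate.
    cy.mount(CounterComponent, {
      componentProperties: {
        // createOutputSpy creates an EventEmitter and spies on .emit().
        changed: createOutputSpy('changedSpy'),
      },
    });

    cy.get('[data-cy=increment]').click();
    cy.get('@changedSpy').should('have.been.called');
  });
});
```

For authenticated flows, the same spec could wrap its login steps in `cy.session('user', setupFn)` so the cached session is reused across tests.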
## Conclusion

In this article, we looked at Jordan Powell’s tips for writing end-to-end tests, component tests, and advanced testing patterns in Cypress. I recommend watching the full video of Jordan Powell’s Advanced Cypress JS Drop training. Are you excited about Cypress component testing? Let us know on Twitter.


State of Deno: A Look at the Deno CLI, Node.js Compatibility and the Fresh Framework

In this State of Deno event, our panelists discussed the Deno CLI, Node.js compatibility for the npm ecosystem, and the Fresh framework. In this wrap-up, we will take a deeper look into these latest developments and explore what is on the horizon for Deno. You can watch the full State of Deno event on the This Dot Media YouTube Channel.

Here is a complete list of the host and panelists that participated in this online event.

Host:

- Tracy Lee, CEO, This Dot Labs, @ladyleet

Panelists:

- Colin Ihrig, Software Engineer at Deno, @cjihrig
- Luca Casonato, Software Engineer at Deno, @lcasdev
- Bartek Iwańczuk, Software Engineer at Deno, @biwanczuk
- David Sherret, Software Engineer at Deno, @DavidSherret

## Table of Contents

- Exploring the Deno CLI and its features
  - What is Deno?
  - Built-in support for TypeScript
  - Built-in toolchain
  - Deno install and upgrade commands
  - Deno permissions
  - Upcoming features
- Deno products
  - Deno Deploy
- Deno and Node.js compatibility
  - Future support for npm packages
  - The Deno to Node Transform library tool
- Fresh framework
- Conclusion

## Exploring the Deno CLI and Its Features

### What is Deno?

Deno is a server-side runtime for JavaScript that also behaves similarly to a browser, because it supports all of the same browser APIs on the server. This support provides access to existing knowledge, resources, and documentation for these browser APIs. The team at Deno works closely with browser vendors to make sure that new web APIs work well for both the server-side runtime and browsers.

### Built-in Support for TypeScript

One of the advantages of Deno is that it ships with TypeScript support by default. This removes setup and configuration time, and lowers the barrier to entry for getting started with TypeScript. Deno also type-checks your TypeScript code, so you no longer have to use tsc.
### Built-in Toolchain

The Deno CLI comes with an entire toolchain, which includes a formatter, linter, package manager, vendoring of remote dependencies, editor integrations, and more. One of those tools is the documentation generator, which annotates library function comments, types, or interfaces with JSDoc comments to easily generate documentation. For a complete list of the Deno CLI tools, please visit their documentation page.

### Deno install and upgrade commands

deno install is another feature that allows you to take a script and install it as a global command. If there is a new version of Deno, you can just run the upgrade command and it will upgrade itself, which makes it a version manager for itself.

### Deno permissions

By default, Deno will not have file, network, or environment access unless you grant those permissions by running a script with command-line flags. Even with permissions granted, you can scope them to certain directories, allowing reads and writes only in the directories of your choosing. If you run the program without permissions granted, the program will still prompt you for permission before accessing a file. To learn more about Deno's permissions, please read through the documentation.

### Upcoming features

Deno is currently working on improving performance in the areas of HTTP, FFI (Foreign Function Interface), and Node compatibility. Improvements are also being made to the documentation generator, to make sure that the docs provided are good, and to remove the need to maintain two separate sets of docs.

## Deno Products

### Deno Deploy

Deno Deploy is a hosted offering that makes it easy to go from local development to production. This service integrates well with GitHub, and provides an option to pay only when users make requests to your services. Deno Deploy has a dashboard that allows you to automate most of the processes, and shows you metrics, deployment statistics, CPU utilization, and network utilization.
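As a sketch of those commands (flag spellings per the Deno manual; the script URL and directory are placeholders):

```shell
# Install a script as a global command (URL is a placeholder).
deno install --name my_tool https://example.com/my_tool.ts

# Upgrade the Deno CLI to the latest version.
deno upgrade

# Run a script with explicit permissions, scoped to one directory.
deno run --allow-read=./data --allow-write=./data main.ts
```

Running `deno run main.ts` without the `--allow-*` flags would instead prompt interactively for each access.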
It also allows you to set up custom domains and provision certificates.

## Deno and Node.js compatibility

Deno v1.15 will introduce a Node.js compatibility mode, which will make it possible to run some Node programs in Deno. Node APIs like the http server will work in Deno as they would in Node.js. When it comes to npm ecosystem compatibility, the goal is to support the large number of packages built on Node.js. The Deno team is working on these compatibility issues, because Deno uses web APIs for most of its operations. All of these web APIs were created after Node.js was conceived, meaning that Node implements a whole different set of APIs for certain operations, like performing network requests. The Node.js compatibility layer translates those API calls to the modern underlying APIs that the runtime actually implements.

### Future support for npm packages

When it comes to supporting npm packages on Deno, the goal is to have a transpiler server that takes CommonJS code and translates it into ESM (ECMAScript module) code. The reason for this is that, just like browsers, Deno only supports ESM code. Deno uses the concept of npm specifiers to provide access to npm packages. Deno downloads the npm package and runs it from a global cache. CommonJS is also supported, and it runs the code as it is. For npm packages, Deno will create a single copy of the downloaded package instead of multiple directories or sub-directories of modules. It is one global cache, with no node_modules directory by default, and no need for a package.json by default. If a package requires a node_modules directory, then that directory can be created in the project directory, using symlinks, via a specifier flag.

### The Deno to Node Transform library tool

The Deno team built a tool to allow library authors to transform Deno packages into Node.js packages.
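A sketch of an npm specifier in action (the package is illustrative, and the node_modules flag name is taken from the Deno manual; treat both as assumptions rather than the event's exact example):

```typescript
// main.ts — import an npm package via an npm: specifier.
// Deno downloads the package into its global cache on first run.
import chalk from "npm:chalk@5";

console.log(chalk.green("hello from Deno"));

// To materialize a node_modules directory (symlinked from the cache)
// for packages that require one, run with the specifier flag:
//   deno run --node-modules-dir main.ts
```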
Deno to Node Transform (DNT) takes Deno code, builds it for Node, and distributes it as an npm package. This allows library authors to take advantage of the Deno toolchain while still shipping their package on npm for Node.js users.

## Fresh framework

Fresh is a new web framework for Deno that makes use of the Deno toolchain and ecosystem. Fresh uses JSX for templating, and it is similar to Next.js or Remix. A key difference between Fresh and Next.js or Remix is that Fresh is built to be server-side rendered on the edge, rather than server-side in a few locations. Another difference is that with Fresh, no JavaScript is shipped to the client by default, which makes it faster. Fresh handles the server-side rendering, generates the HTML, sends the file to the client, and hydrates only the parts of the page that require JavaScript on the client.

Here are some products that already use the Fresh framework:

- Deno
- merch.deno.com
- Deno Deploy

To learn more about how to build apps with the Fresh framework, please read through this helpful blog post.

## Conclusion

The team at Deno is making great progress bringing more exciting features to the community, and making the runtime easy to use for building new libraries or migrating existing ones. If you have any questions about the State of Deno, be sure to ask here. What is it you find exciting about Deno? We will be happy to hear about it on Twitter!

We look forward to seeing you at our next State of Deno!


Using a Starter Kit on Starter.dev to Kickstart Your React and Angular Projects

As a developer, deciding on the stack and dependencies to use can be a difficult choice. Once you start, configuration, installation, and making sure you have the correct versions are also chores. Starting with a prebuilt framework for some of these tasks can boost a developer's productivity and save time by eliminating repetition. In this article, I will walk you through how you can cut 40 hours of setup and boost productivity by focusing on what really matters. We will learn a simple solution for implementing project functionality without worrying about setting up different versions of dependencies and configurations, or repeating the same process for different projects.

## Introducing Starter Kits

starter.dev has a curated list of starter kits for various frameworks and libraries to help boost developer productivity.

### Why starter kits?

Our architects discovered they were repeatedly going through the same process every time they needed to start a new project. This process included initializing, installing dependencies, setting up tests, and everything else that comes with bootstrapping a new project.

## How to use a Starter Kit

Let’s walk through starting Angular and React apps with starter kits, using the Starter Kits Showcases. Every starter kit is structured with the basic requirements to initialize a project with key features:

- Routing
- Forms
- State Management
- API interactions - REST & GraphQL
- Authentication
- Test Runners

These features help developers understand how to architect and build large-scale applications. The default project directories are scaffolded with added directories, configurations, and a demo app to help quickly build or modify components.

## Using the Angular Apollo TailwindCSS Starter Kit

Every starter kit is easy to initialize, so you can start building features for your project right away.
The Angular starter kit sets you up with these tools to quickly get you started:

- Angular 13: for building frontend components
- Apollo Client: for managing both local and remote data with GraphQL
- TailwindCSS: for styling components and elements
- Storybook: component library
- Jest: test runner

### Initializing the project

To initialize a project with the starter kit, run the following:

1. Run npx @this-dot/create-starter to run the scaffolding tool
2. Select the Angular, Apollo, and TailwindCSS kit from the CLI library options
3. Name your project
4. cd into your project directory, install dependencies using the tool of your choice, and voilà!

You can start building features in your new project immediately.

### Configurations

The project is pre-configured with the recommended Angular GraphQL setup. The ApolloModule and ApolloClientOptions are imported in the App NgModule.

### Project Implementation

Starter kits aim to make your project setup fast, easy, and reusable. The initial demo project is a counter example connected to a GraphQL module, with a fetch example and routes implemented, all following industry best practices. Go ahead and add more components, or modify existing components, without the need to make any changes to the configuration.

### Extras

The preconfigured Storybook for component management gives developers the opportunity to create organized UI systems, making both the building process and documentation more efficient and easier to use.
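For reference, the standard apollo-angular wiring looks roughly like this. This is a sketch following the apollo-angular docs, not the starter kit's exact file, and the GraphQL endpoint URL is a placeholder.

```typescript
// graphql.module.ts — standard apollo-angular setup sketch.
import { NgModule } from '@angular/core';
import { APOLLO_OPTIONS, ApolloModule } from 'apollo-angular';
import { HttpLink } from 'apollo-angular/http';
import { InMemoryCache } from '@apollo/client/core';

@NgModule({
  imports: [ApolloModule],
  providers: [
    {
      // ApolloClientOptions provided via the APOLLO_OPTIONS token.
      provide: APOLLO_OPTIONS,
      useFactory: (httpLink: HttpLink) => ({
        cache: new InMemoryCache(),
        link: httpLink.create({ uri: 'https://example.com/graphql' }),
      }),
      deps: [HttpLink],
    },
  ],
})
export class GraphQLModule {}
```

Importing this module into the App NgModule makes the Apollo client injectable throughout the app.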
## React, RxJS and Styled Components

The React starter kit scaffolds these tools to quickly get you started with a React project:

- React 17: for building frontend components
- RxJS: for managing data
- Styled Components: for styling components and elements
- React Router: for matching project routes
- Storybook: component library
- Jest: test runner

### Initializing the project

To initialize a project with the starter kit, run the following:

1. Run npx @this-dot/create-starter to run the scaffolding tool
2. Select the Create React App, RxJS and Styled Components kit from the CLI library options
3. Name your project
4. cd into your project directory, install dependencies using the tool of your choice, and voilà!

You can start building features in your new project immediately.

### Configurations

The project configuration is the same as an initial create-react-app boilerplate, but with pre-configurations for Storybook and Styled Components.

## Conclusion

In this article, you were able to boost productivity by cutting out setup time and moving straight to implementing components, allowing you to focus on what matters. I hope this article was helpful. Let us know what other starter kits you would love to see on starter.dev.


How to Build Apps with Great Startup Performance Using Qwik

In this article, we will recap the JS Drops Qwik workshop with Misko Hevery. This workshop provided an overview of Qwik, its unique features, and a look at some example components. We will also address some of the questions raised at the end of the workshop. If you want to learn more about Qwik with Misko Hevery, then please check out this presentation on This Dot Media’s YouTube Channel. Also don’t forget to subscribe to get the latest on all things web development.

## Table of Contents

- What is Qwik?
- How to create a Counter component in Qwik
- Unique features of Qwik
  - Directory-based routing
  - Slots in Qwik
  - Very little JavaScript in production
  - Resumability with Qwik
  - Lazy load components by default
- Questions asked during the workshop
  - Are all these functions generated at build time, or are they generated at runtime? What's the server consideration here (if any), or are we able to put everything behind a CDN?
  - How do you access elements in Qwik?
  - Can you force a download of something out of view?
  - Why use $ on Qwik declarations?
  - Can you explain the interop story with web components and Qwik? Are any parts of the Qwik magic unavailable to us if, say, our web components are too complex?
  - Is there an ideal use case for Qwik?
  - When to use useWatch$ instead of useClientEffect$?
- Conclusion

## What is Qwik?

Qwik is a web framework that builds fast web applications with consistent performance at scale, regardless of size or complexity. To get started with Qwik, scaffold a project with the Qwik CLI's create command. The CLI will prompt options to scaffold a starter project on your local machine. To start the demo application, run npm start and navigate to http://127.0.0.1:5173/ in the browser.

## How to create a Counter Component in Qwik

Create a sub-directory in the routes directory named counter, and add an index.tsx file containing the component definition.
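A minimal counter component along those lines might look like the following. This is a sketch based on the Qwik docs, not Misko's exact workshop code.

```typescript
// src/routes/counter/index.tsx — minimal Qwik counter sketch.
import { component$, useStore } from '@builder.io/qwik';

export default component$(() => {
  // Reactive store; mutating it re-renders the affected template parts.
  const state = useStore({ count: 0 });

  return (
    // The $ suffix marks a lazy-loadable boundary: the click handler
    // is only downloaded when the user actually clicks.
    <button onClick$={() => state.count++}>
      Increment {state.count}
    </button>
  );
});
```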
Now navigate to http://127.0.0.1:5173/counter, and you should see the counter component rendered on the page.

## Unique features of Qwik

### Directory Based Routing

Qwik is a directory-based routing framework. When we initialized Qwik, it created a routes sub-directory in the src directory, and added index and layout files for route matching. The index.tsx is the base route component, and the layout.tsx is the component for handling the base page layout. The sub-directories in the routes directory serve as the application’s structure for route matching, with their index.tsx files as the route components. Every index.tsx file looks up its layout component; if one doesn’t exist in the same directory, the lookup moves up to the parent directory.

### Slots in Qwik

Qwik uses slots as a way of connecting content from the parent component to the child projection. The parent component uses the q:slot attribute to identify the source of the projection, and the Slot element to identify the destination of the projection. To learn more about slots, please check out the Qwik documentation.

### Very little JavaScript in production

In production, Qwik starts the application with no JavaScript at startup, which makes startup performance really fast. To see this in action, open the browser’s dev tools, click on the Network tab, and in the filter bar select JS. You will notice that the Vite files for hot module reloading are currently the only JavaScript files served, and these will not be shipped to production. To hide them, check the invert checkbox in the filter bar and type "Vite" in the filter input.

### Resumability with Qwik

Qwik applications do not require hydration to resume on the client. To see this in action, click on the increment button and observe the browser’s dev tools Network tab. You will notice Qwik downloads only the required amount of JavaScript.
The way Qwik attaches events to the DOM and handles component state is by serializing them into attributes, which tell the browser where to download the event handler and its state. To learn more about serialization with Qwik, read through the Qwik documentation. By default, the code associated with the click event will not download until the user triggers that event. On that interaction, Qwik downloads only the minimum code needed for the event handler to work. To learn more about Qwik events, please read through the [documentation](https://qwik.builder.io/docs/components/events/#events).


State of Web Performance: A look into the Interaction To Next Paint, Aurora, and the importance of learning web performance

In this article, we will cover the main topics discussed during this State of Web Performance event. We will recap the panelists' thoughts and address some of the questions raised during the event. Here is the complete list of hosts and panelists that participated.

Hosts:

- Rob Ocel, Team Lead & Software Architect, This Dot Labs, @robocell
- Henri Helvetica, Dev. Community Manager, WebPageTest by Catchpoint, @HenriHelvetica

Panelists:

- Katie Hempenius, Developer Programs Engineer, Web Performance at Google, @KatieHempenius
- Alex Russell, Product Manager on Microsoft Edge, @slightlylate
- Sia Karamalegos, Web Performance at Shopify, @thegreengreek
- Mel Ada, Web Performance at Etsy, @mel_melificent
- Annie Sullivan, Team Lead for Core Web Vitals metrics, Google Chrome, @anniesullie

## Table of Contents

- Updates to web performance
  - Google Chrome
  - New APIs to help with image CDNs
  - Aurora
  - Shopify
- The panelists' views on understanding performance
- Questions from the chat
  - How much should performance initiatives rely on platforms & frameworks vs. the education of engineers on best practices and performance analysis?
  - Do frameworks and abstractions influence performance?
  - As we push for new developers to learn performance, is it going to be confusing for the community to also understand Real User Monitoring (RUM)?
  - Should browsers be more aggressive in dealing with third parties?
- Conclusion

## Updates to Web Performance

### Google Chrome

Annie Sullivan started by sharing updates on web performance for the Google Chrome browser. The team at Google is working on the new Interaction to Next Paint (INP) metric, to help single-page apps detect transitions for better metric measurements, and on how those measurements help with app performance.

### New APIs to help with image CDNs

Katie Hempenius then shared an experimental API that is coming to Angular to help with image CDNs and correct image sizes.

### Aurora

Katie continued by sharing the work her team is doing with Aurora.
Aurora is a collaboration between the Google Chrome team and framework authors to build optimization into frameworks by default. It currently works with Next.js, Nuxt, and Angular to add performance features shipped by default.

### Shopify

Sia Karamalegos shared that the team at Shopify is ensuring that educational resources are available for the community, so developers can learn about best practices and web performance.

## The panelists' views on understanding performance

Alex Russell explained that handling performance is mostly cultural, as it is not an education you can get from a course: it is mostly about learning from the people you work with, and hoping that adds knowledge and values to your skillset. Annie Sullivan talked about her experience working on the frontend of Google Search. When it comes to performance, there are many best practices which all work, but the approach should be to look at the bottlenecks of your project to identify the requirements of the framework or architecture that may affect the performance of the app.

## Questions from the chat

### How much should performance initiatives rely on platforms & frameworks vs. the education of engineers on best practices and performance analysis?

Sia Karamalegos pointed out that people don’t prioritize performance, but when you start measuring performance, and have it influence other parts of the business, you start getting more motivation to fix those things. These are the incentives that make an impact; most of the time, developers are focused on pushing out features to save time. This leads to developers ignoring the need to consider performance, or to learn how to measure it, unless it is specified on the ticket they are working on. Katie added that part of the updates coming to Google Chrome aim to provide prescriptive details that point people in the right direction, towards optimizations that will help specifically for the use case of their business.
### Do frameworks and abstractions influence performance?

Alex Russell discussed that traffic, types of clients, bandwidth, and the interactions in your application will influence the trade-offs of frameworks or systems as you start to build or rebuild your application. Developers should always consider how much complexity is necessary for a project, and they should be aware that everything that comes with a framework is now owned by them. Katie Hempenius added that this type of complexity is hard to fix, and this is where you need someone on the team who understands how the app fits together, and how to make it better in terms of performance.

### As we push for new developers to learn performance, is it going to be confusing for the community to also understand Real User Monitoring (RUM)?

Katie Hempenius explained that once you have RUM in place, things become clearer: rather than going in circles trying to predict how an element will perform in the field, the RUM data can tell you the answer. It is recommended that people use web-vitals.js with debugging data to get information about which element is the Largest Contentful Paint (LCP) element, allowing them to run their own experiments.

### Should browsers be more aggressive in dealing with third parties?

Katie Hempenius pointed out that, if it is well executed, it could move the industry in a good direction. Annie Sullivan referred to the work she is doing with her team on SPA transitions to help understand third parties better, and to the potential to measure them better. Alex Russell mentioned that what you want is to build solidarity between the folks making decisions and the people implementing them, and he gave the example of the browser’s lock icon, and how it creates solidarity and causes decision-makers to prioritize a particular behavior in their technology choices.
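As a sketch of the web-vitals suggestion, logging the LCP element from the field might look like the following (API per the web-vitals library's attribution build; treat the details as an assumption, since the panel did not show code):

```javascript
// Report the LCP value and its attributed element from real users.
import { onLCP } from 'web-vitals/attribution';

onLCP((metric) => {
  // attribution.element is a selector for the LCP element; in a real
  // RUM setup this would be sent to an analytics endpoint rather
  // than logged.
  console.log('LCP:', metric.value, metric.attribution.element);
});
```

With this debugging data collected in the field, teams can see which element is actually the LCP element for real users, and run their own experiments against it.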
There is a role for browsers to play in helping surface whether or not most users are going to have a good or bad time on a site, and in using that as a way to signpost to everyone in the industry whether the choices made in that particular place were appropriate. Sia Karamalegos mentioned that browsers can do more, but that does not take away the responsibility of an organization or team to keep an eye on this. It is common for an organization that has not done web performance work to use 20 to 30 different third parties, ignoring the need to either optimize or remove them. Even though browsers can do so much, the organization still needs to review those.

## Conclusion

I hope you enjoyed this recap of the State of Web Performance event. Are you excited about learning web performance and improving your apps? Tell us what excites you on Twitter!


A Look At Bun.sh: the Modern JavaScript Runtime

Bun is a modern JavaScript runtime, like Node or Deno, focused on speed and performance. It is an all-in-one tool: runtime, bundler, package manager, and transpiler. In this article, we will look at the excitement behind it, and dive into some of its features.

## Overview

Bun is developed from scratch in the Zig programming language. It uses the JavaScriptCore engine (the same engine as the Safari browser), unlike Node.js and Deno, which use Chrome’s V8 engine. Bun natively implements hundreds of Node.js and Web APIs, including ~90% of Node-API functions (native modules), fs, path, buffer, and more. Plus, it supports TypeScript and JSX out of the box.

## Getting started

To install Bun on your machine (Mac, Linux, and Windows Subsystem for Linux), run the install script, then run bun --version to verify that it is correctly installed.

## First Bun Script

Create a JavaScript file called http.js with a small HTTP server, run it with bun, and open http://localhost:3000 in your browser. You can create the same file in TypeScript as http.ts and run it the same way. Without modification or extra installation, we now have scripts running in both JavaScript and TypeScript.

## Features

Let's dive into some of the features of Bun.

### Packages

Bun supports Node packages, and provides some integration with the latest React ecosystem through the create command. Bun uses node_modules.bun to house all the imported dependencies; after adding React to a new project, Bun will generate the node_modules.bun file in the directory. For an existing application, similar to yarn or npm, simply run bun install in the project directory to install dependencies. It will use the existing package.json, in combination with the lock file when present, to add dependencies.

### Scaffolding an App

To scaffold or create a new project from a template, a framework (React), or a blank project, use the create command.
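A minimal http.js along the lines described above might look like this (a sketch following the Bun docs: Bun serves a default-exported object with a fetch handler; treat the exact shape as an assumption):

```javascript
// http.js — minimal Bun HTTP server sketch; run with `bun run http.js`.
export default {
  port: 3000,
  fetch(request) {
    // Bun uses the standard web Request/Response APIs.
    return new Response("Welcome to Bun!");
  },
};
```

Renaming the file to http.ts and adding type annotations would work unchanged, since Bun transpiles TypeScript out of the box.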
` SQLite3 Out of the box With Bun, you don’t have to install SQLite, as it’s built in out of the box. ` Run the code with bun run db.js, and you should see the inserted records logged in the terminal. Environment Variables Bun automatically loads environment variables from .env files. No more require("dotenv").config(): you can simply access process.env without installing any packages. Create a .env file with the following: ` Create a file http.js ` Run the code with bun run http.js, and you should see NotReallyA_Key logged in the terminal. Conclusion Hopefully this article showed you how easy it is to get started with Bun, and the features you should be excited about. Bun's approach to performance is truly a big win for the community. Bun is still young, and seems to have a promising future. Will Bun replace Deno or Node? It is too early to call, as both existing runtimes have been around for a while, are actively maintained, and gain new features often, with constant improvements to their existing APIs. Bun does not yet have a stable release (as of this writing) and is still very early in development. What’s your opinion on Bun, and what exciting projects might you build or migrate with it?...

Building a Lightweight Component with Lit cover image

Building a Lightweight Component with Lit

Component-based apps make it much easier to build independent components and share them across projects. With broken-down units as small as a button or user interaction, project iterations tend to become faster and more developer-friendly. For this article, we will use Lit to build web components for our front-end apps. What is Lit? According to the docs, Lit is a simple library for building fast, lightweight web components. Using Lit, you can build highly reusable components with the following features: - Native components for the browser, aka Web Components - Framework agnostic, shareable and usable across multiple web frameworks - Lightweight, not requiring too much JavaScript code to implement - Flexible, and can easily be plugged into any future project and still work as expected. Prerequisites To follow along with this article, I recommend you have: - Basic knowledge of JavaScript and a frontend framework (React, Angular or Vue) - A code editor For this, we will be using the Lit JavaScript starter kit. Clone the repo to your local machine ` And change directory ` Install dependencies. ` Open the project with a code editor, and navigate to the file list-element.js: ` Our web component ListElement is a class that extends the LitElement base class, with the styles() and properties() methods. The styles() method returns the CSS for the component, using the css method imported from Lit to define component-scoped CSS in a template literal. The properties() method returns the properties which expose the component's reactive variables. We declare an internal state of todoList to hold the list of todos. Then, we assign default values to properties in the constructor method. todoList is an empty array. The render() method returns the view, using the html method imported from Lit to define the HTML elements in a template literal. 
In the rendered html, we added an input element with an id of fullname, and a button with a click event triggering the custom method _pushTodo to push a new todo to the todoList array. Then, we map through the list of todos with the map directive imported from Lit, and attach a button to each item with an event to remove the item when clicked, calling a custom method _removeTodo that filters through the array, and returns items that satisfy our condition. The getter method _input() returns the input element we created, giving access to the input value so we can pass it to the _pushTodo method. Now, to render the component, we will modify the index.html head section. ` We will also need to modify the body element of our index.html file. ` Now, to see our modifications, run the app: ` From the browser, navigate to localhost:8000, and you should see the element rendered below. Benefits of Lit Components Now we have a running app with a reusable component. Let's explore the Lit benefits of this component, and what differentiates it from other tools for building components. Native: Lit is built on top of the Web Components standards, and you are able to extend just what you need for productivity: reactivity, declarative templates, and a handful of thoughtful features to reduce boilerplate and make your job easier. In our ListElement code, we defined the custom HTML element with window.customElements.define, which defines and registers a new custom element with the browser DOM and associates it with the element name list-element, making our component native. Every Lit component is built with web platform evolution in mind. Fast and Small: Lit's minified bundle is about 5 KB compressed, which allows your application to load faster without the need to rebuild a virtual tree (there is no virtual DOM). Lit batches updates to maximize performance and efficiency. Setting multiple properties at once triggers only one update, performed asynchronously at microtask timing. 
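Stripped of the Lit specifics, the _pushTodo/_removeTodo logic described above boils down to producing new arrays rather than mutating the old one, which is what lets a reactive property assignment trigger a re-render. A framework-agnostic sketch (the function names mirror the article's hypothetical methods):

```javascript
// Return a new array instead of mutating, so reassigning the reactive
// property (e.g. this.todoList = pushTodo(this.todoList, item)) is
// detected as a change and triggers a re-render.
function pushTodo(todoList, item) {
  return [...todoList, item];
}

// Keep every todo that is not the one being removed.
function removeTodo(todoList, item) {
  return todoList.filter((todo) => todo !== item);
}
```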
Interoperable & future-ready: Every Lit component is a native web component with the superpower of interoperability. Web components work anywhere you use HTML, with any framework or none at all. This makes Lit ideal for building shareable components, design systems, or maintainable, future-ready sites and apps. Here is how you can quickly make our ListElement component usable and shareable across any project, built with any framework or without one. In the dev/index.html head, we simply imported the JavaScript module of list-element.js, and added the custom HTML element to the body: ` Reactivity with Lit: Lit components receive input and store their state as JavaScript class fields or properties. Reactive properties are properties that can trigger the reactive update cycle when changed, re-rendering the component, and can optionally be read from or written to attributes. ` All JavaScript properties in Lit are defined inside this properties object, and then we used the constructor method in our component class to set initial values for the properties we created. Events: In addition to the standard addEventListener API, Lit introduces a declarative way to add event listeners. You will notice we used @ expressions in our template to add event listeners to elements in our component's template. We added the @click event and bound it in the template. Declarative event listeners are added when the template is rendered. ` Lifecycle: Lit components use the standard custom element lifecycle methods. In addition, Lit introduces a reactive update cycle that renders changes to the DOM when reactive properties change. Lit components are standard custom elements, and inherit the custom element lifecycle methods. In our ListElement class, we have the constructor() method, which is called when an element is created. It’s also invoked when an existing element is upgraded, which happens when the definition for a custom element is loaded after the element is already in the DOM. 
You will notice in our _pushTodo() method we initiated a lifecycle update by calling the requestUpdate() method to perform an update immediately. Lit saves any properties already set on the element. This ensures values set before the upgrade are maintained, and correctly override defaults set by the component. This is so we can push new items to the array and immediately re-render the component. Conclusion Congratulations! You've learned how to build lightweight components with Lit.dev, and explored some of the benefits of using native web components, which can be used across framework and non-framework apps. What exciting thing will you be building with Lit? Let us know on Twitter!...

Building a Bot to Fetch Discord Scheduled Events with 11ty and Netlify cover image

Building a Bot to Fetch Discord Scheduled Events with 11ty and Netlify

If you haven’t heard of 11ty (Eleventy) yet, we are here to share why you should be excited about this new-ish static site generator on the block! For clarification, a static site generator (SSG) is a tool that generates a static HTML website based on raw data and a set of templates. 11ty is an SSG with a small bundle size and no runtime, which means it only ships your code to the browser. It supports multiple templating engines, providing flexibility to use templating languages like njk, .html, .md, and more. For this article, we will be using Nunjucks (.njk) templating. It’s worth pointing out that 11ty is NOT a JavaScript framework. This article is a recap of the JS Drop training by Domitrius Clark where he showed us how to build a Discord Scheduled Event List with 11ty and Netlify. In this workshop, you will learn how to build a statically generated site for a list of scheduled events fetched from a Discord server, using 11ty and Netlify. A Notion document outlining the steps for this workshop. (Thanks Dom!) 👋 Before getting into the steps, make sure you have: - A GitHub account and a repo created to house your workshop code - A Discord account & a server for us to connect your bot to - A Netlify account to deploy your application at the end Initial Project Scaffold and Install Dependencies - Run npm init -y to init a package.json with default values - Run yarn add -D @11ty/eleventy @netlify/functions tailwindcss node-fetch@2 dotenv npm-run-all - Open the new project in your favorite code editor - Edit the script property of the package.json file: ` The above scripts are the build commands for the 11ty production build and the Tailwind CSS build, and also run the dev server for testing our application. Now that we have our packages and scripts defined, let’s scaffold out the folders and files we’ll need for the site. 
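The script edits above are elided; one plausible shape for them, using the packages just installed, might look like the following (the exact script names and command flags are assumptions, not the workshop's originals):

```json
{
  "scripts": {
    "build:eleventy": "eleventy",
    "build:css": "tailwindcss -i ./src/styles.css -o ./_site/styles.css",
    "build": "npm-run-all build:css build:eleventy",
    "dev:eleventy": "eleventy --serve",
    "dev:css": "tailwindcss -i ./src/styles.css -o ./_site/styles.css --watch",
    "dev": "npm-run-all --parallel dev:*"
  }
}
```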
First edit the .gitignore: ` Next, define the 11ty configs: - The types of templating files (Nunjucks) - The directories to use: the build output, and where to find components, layouts, and includes - Plugins Edit the _eleventy/config.js file with the following: ` Next, we edit the Netlify config file netlify.toml to configure the Netlify deployment with the following: - Build commands - The path to Netlify functions ` Creating base layout and setting up Tailwind We created the _includes folder with two sub-folders: components (for our components, or macros) and layouts, which is where we’re going to be focused in this lesson. 11ty exposes the _includes folders so that we have access to our layouts and components inside of our pages and inside of each other (for example, using macros inside of other macros). Let’s go ahead and create the HTML scaffold for our pages. Inside of /src/_includes/layouts/ we’ll create the base.njk file. ` The layout will be used to wrap all of our pages. We can also create sub-layouts and new layouts depending on the needs of the page. For this tutorial we will need only this base layout. For base.njk: - We made the Tailwind styles visible to our page by adding a link tag for /styles.css - We are using the title variable: because of 11ty’s data cascade, we’re able to pull variables in from our page's frontmatter. In our files, we’ll need to define a title to ensure our layout doesn’t break. - Notice the {{ content | safe }}. The content variable is the page content itself. So, in our case, this will be our .njk page and components. The safe filter is built into Nunjucks, and makes sure the content will not be HTML-escaped. 
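The _eleventy/config.js contents mentioned earlier are elided; a minimal hedged sketch of such a config is below. The directory names, template formats, and passthrough path are assumptions chosen to match the folders described in this article:

```javascript
// Hypothetical _eleventy/config.js (in the real file this function would
// be assigned to module.exports).
function eleventyConfig(config) {
  // Copy the compiled stylesheet straight through to the build output.
  config.addPassthroughCopy("src/styles.css");

  return {
    // Templating files 11ty should process.
    templateFormats: ["njk", "html", "md"],
    // Input, includes, and build-output directories.
    dir: {
      input: "src",
      includes: "_includes",
      output: "_site",
    },
  };
}
```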
Next, we will modify tailwind.config.js to make our Tailwind work as expected: ` And modify the styles.css file to import Tailwind utilities, base, and components: ` Then we edit the index.njk file with the default content and frontmatter: ` Now, to test that everything works, start the dev server: ` Everything should work! Now navigate to http://localhost:8080 in your browser. Creating a Navbar component in Nunjucks Let's create a Navbar component for our layout with Nunjucks. In src/_includes/components/, add a navbar.njk: ` Next, we modify the index.njk file to include the navbar in our pages and add: ` Now the final document should look like this: ` Initialize a Discord bot from the Discord server Now that we have the template and base file set up, next we should connect the Discord bot to the page. Before we initialize a Discord bot, we need to put some things in place, so head over to your Discord. Go to the User Settings, navigate to the Advanced tab, and enable Developer Mode. Head over to the Discord Developer Portal and click on New Application to create an application. Fill in the necessary details and click on Create. On the sidebar menu, navigate to Bot and click on Add Bot. We will need a few details for our app to connect with the Discord server. Let’s copy them to the .env file. Add environment variables DISCORD_BOT_TOKEN & DISCORD_GUILD_ID, and assign the value for the Discord token from the Bot page by clicking Reset Token. For the DISCORD_GUILD_ID, head over to a Discord server that you manage, or create one for this tutorial, right-click on the server, and click on Copy ID. Now paste the ID to set the value for the DISCORD_GUILD_ID environment variable. Next, add the bot to your server: https://discordapp.com/api/oauth2/authorize?scope=bot&client_id=YOUR_CLIENT_ID Find the client ID on the OAuth2 tab and click on Copy. Now we are all set and connected to the server. 
Using global data files in 11ty to fetch scheduled events from Discord In 11ty, data is merged from multiple different sources before the template is rendered. The data is merged in what 11ty calls the Data Cascade. We will be fetching data from Discord in a JavaScript function in a global data file. Inside of src/_data, create a new file named events.js. Previously, we created environment variables called DISCORD_BOT_TOKEN & DISCORD_GUILD_ID. Now, we can fetch our events endpoint, grab our events, and inject them into our templates. Our file will look like this: ` Creating the events page In the src directory, create an events.njk file: ` Currently, we’ve just got a page rendering some static content. Let’s use Nunjucks loops to render a card for each of our events. The data we care about right now, from the large event object coming back, are: - creator - name - scheduled start time - description - and, if it’s not inside of Discord, where it is We also need to make sure we check the event for any metadata that could point us toward an external link for this event. Thankfully, this is another quick fix with Nunjucks if blocks. Our final card should end up looking something like below. ` Before we test the application, schedule a test event on Discord, restart the dev server, then click on the events tab in the navbar: You should see your newly scheduled events. Pushing to GitHub and deploying to Netlify Pushing to GitHub Let’s initialize a repo so we can track our changes and deploy live to the web as we go. Start off with a quick command to initialize the repo: ` Then let’s get all of our current changes added and pushed to main, so we can create a repo. ` Using the GitHub CLI, create a repo and push it: ` This will create your repo, name it, and push up the commits, all in one command. To confirm that the repo is up, run: ` Deploy to Netlify To create a new project on Netlify with the new repo as the base, run: ` Fill in the prompts. 
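The events.js global data file above is elided, and the fetch itself needs a real bot token. The transformation step, though — mapping Discord's scheduled-event JSON onto just the fields the cards need — can be sketched on its own. Field names follow Discord's Guild Scheduled Events API; the wrapper function and fallbacks are assumptions:

```javascript
// Map one raw Discord scheduled-event object onto the fields our
// Nunjucks event card cares about.
function toEventCard(event) {
  return {
    name: event.name,
    description: event.description,
    startTime: event.scheduled_start_time,
    creator: event.creator ? event.creator.username : "Unknown",
    // External events carry their location in entity_metadata;
    // otherwise the event happens inside Discord itself.
    location: event.entity_metadata?.location ?? "Discord",
  };
}
```

In a data file, the array returned from the Discord API would be run through `events.map(toEventCard)` before being handed to the template.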
You should be asked the following: - Choosing to create and configure a new site - Choose your team - Set your unique site name Now, you should have an admin URL and base URL link in the console. There will be a few more prompts: - Authenticate GitHub through Netlify - Leave the build command blank - Leave the Netlify functions folder blank Once all that is done, we’re going to want to run a few commands: - git push - netlify open If something was wrong with your initial linking of your code, try to run a new production deploy using: - netlify deploy --prod The Netlify CLI will deploy the local project to the Netlify server and generate a random URL, which you can visit to see the live app. Conclusion In this workshop, you learned to use 11ty to fetch and display your scheduled events from a Discord server, and to deploy an app to Netlify. That was pretty easy! Did you run into any issues? There is more! Watch the full training on the This Dot YouTube Channel Are you excited about 11ty? What are you building using it? Tell us what excites you!...

Building Your First Application with AWS Amplify cover image

Building Your First Application with AWS Amplify

AWS (Amazon Web Services) is popular for the cloud solutions it provides across the globe, in various regions with data centers. In this article, we will be looking at a particular AWS platform for frontend developers: AWS Amplify. AWS Amplify is a set of tools and features that let web and mobile developers quickly and easily build full-stack applications on AWS. This article is a summary of JavaScript Marathon: AWS for Frontend Developers with Michael Liendo. If you want a more detailed explanation of building and deploying frontend apps with AWS Amplify, I recommend you go and check out the video! Application User Flow Most applications need certain key features to be created for users. Let’s explore a few of them. - User Login: This can be created by spinning up an ExpressJS application with authentication, handling things like password hashing, password policy, and forgot-password flows. - API integration: This is another common need, as we typically need to handle user data with a backend application. - Database: Most applications store user information. This is key in creating an interactive user experience in an application. Bringing these services together can be a lot for many developers. Developers also have to consider application scalability as the number of users increases. AWS Amplify AWS Amplify is built specifically to handle scale for frontend developers, and provides the opportunity for an application to scale as the application and its user base grow. With scalability handled, developers can focus on providing value for their users, rather than having to worry about scalability at every stage. AWS Amplify Tools AWS Amplify tools for building and deploying frontend applications include: - CLI: To connect the frontend with AWS backend cloud resources. - UI Components: The AWS UI components library is an open-source design system with cloud-connected components and primitives that simplify building accessible, responsive, and beautiful applications. 
- Hosting Solution: For deploying frontend applications, static sites, and server-side apps, with a CI/CD pipeline. - Amplify Studio: A GUI that lets you plug in a Figma component and automatically convert it into a ReactJS component. Walking back through how AWS can help manage the user journey we listed above, and make developers' lives easier, here are some of the services provided by AWS that help spin up applications with ease: - User Login: For user login, we can use Amazon Cognito, AWS’s user directory service, to handle user authentication, password policies, forgot password, and more. - API: For API access, we can use AWS AppSync, a serverless GraphQL and Pub/Sub API service. - Database: For the database, we can use Amazon DynamoDB, which is a fully managed, serverless, key-value NoSQL database. - Storage: For asset storage, we can use Amazon Simple Storage Service (Amazon S3). Building a Project & Project Setup Now that you’re familiar with a few of the services we can use to build an application easily, let’s get started and build one together! Before we start, let’s install the AWS Amplify CLI. Run: ` This will give us access to Amplify’s commands for our application. The Application We will be building a Next.js application. This application will be a collection of pictures of dogs. To scaffold a Next.js application, run: ` Now cd into the application directory. ` Install the packages and dependencies we will be using from AWS: ` Now, open the project in your code editor. We will be using VS Code. First, we will wrap the root component in an AmplifyProvider component. Open _app.js and replace the code: ` This is to make the application aware of Amplify. We also imported the style library from the React Amplify library. We will use the installed Amplify CLI tool to initialize the Amplify configuration in our project. 
To do this, run: ` You can modify the properties as well, but for this demo, we will leave them as default, and when it asks Initialize the project with the above configuration?, we will choose NO. This is because we will replace the src directory with the . directory, and the build directory with the .next directory. If you don’t already have AWS credentials set up, Amplify will walk you through setting up new credentials. For this demo, we will be accepting the default credentials settings provided, but we recommend you follow along with the required information for your project. Check out the documentation to learn more. AWS will add a few cloud functions and create a configuration file in the project directory, aws-exports.js. You can add services to your Amplify project by running the amplify add command. For example, to add the authentication service (Amazon Cognito), run: ` This will ask for the type of security configuration you want for the project. Next, it asks how you want users to authenticate. This will add authentication to your application. To test it out, let's edit the index.js file and replace the content: ` Now, run the application in the dev environment: ` Navigate to the dev localhost URL in the browser, http://localhost:3000/. The landing URL is now authenticated, and requires a username and password to log in. The application now has full authentication with the ability to sign in: There is a registration function and user detail fields: There is also a forgotten password function that emails the user a code to reset the password, all from just a few lines of code: This is a fully functioning application with authentication included locally. To use the full authentication, we will need to push the application to the AWS service. To do that, run: ` This will list the services created in the application, and prompt you on whether you want to continue with the command. 
Upon accepting, it will push the application to the cloud and update the aws-exports.js configuration file with the cloud configuration and the AWS services that we enabled in our application. Now, let's modify _app.js to apply the Amplify configuration. Add the Amplify and config imports as follows: ` The authentication configuration handles form validation out of the box, including password policy and email or phone number verification, depending on what you choose. You can view aws-exports.js to confirm the configuration options available for the project. Now, to add an API to the application, run: ` For this demo, we will choose GraphQL for the API service, an API key for authentication, and Amazon Cognito. Everything else will be the default. Amplify will auto-generate the GraphQL schema for the project, which you can modify to fit your use case. Push the API updates to AWS: ` Amplify will offer to generate code for your GraphQL API. We suggest you accept the default options. Storage We’ll add a storage service to our project to allow users to upload favorite dog images. Run: ` You can apply the default settings, or modify them to fit your use case. Building a Demo app Now that we have prepared the project, let’s modify index.js to implement file upload for authenticated users. ` Walk Through First, we created a state to hold a list of dogs’ data from the API. We then declared an async function to handle the form submission. Using the AWS component library, we loop through the dogItems, rendering each item to display the uploaded image and details of the dog. We imported the Storage module from amplify, passed dogPicFile to dogPicName for upload, and set the level to protected to give the user access only to read and update data. Then, we imported the API module from amplify, and destructured the data property. 
Using the GraphQL code generated for us by Amplify when we ran amplify add api, we imported createDogs from mutations so we can post form data to the database with GraphQL. We set a new state with the returned data from the database. With React’s useEffect, we declared an async function to fetch data from the database with a GraphQL query, set the state with the returned data, and called fetchDogData. To test the application, run: ` Conclusion In this article, we learned how to use AWS Amplify to implement authentication, integrate an API with your frontend application, connect with a database, and store files in AWS storage. This can all be accomplished within a short time, using very few lines of code. If you want a more detailed explanation of the content covered in this write-up, I recommend you watch the video JavaScript Marathon: AWS for Frontend Developers with Michael Liendo on This Dot’s YouTube Channel. What are you planning on building with AWS Amplify?...

Choosing Remix: The JavaScript Fullstack Framework and Building Your First Application cover image

Choosing Remix: The JavaScript Fullstack Framework and Building Your First Application

Remix is a React framework for building server-side rendered, full-stack applications. It focuses on using web standards to build modern user interfaces. Remix was created by the team behind React Router, and was recently open-sourced for the community. Remix extends most of the core functionality of React and React Router with server-side rendering, server requests, and backend optimization. But why start using Remix? Remix comes with a number of advantages: - Nested routes: nested routes help eliminate nearly every loading state by using React Router’s outlet to render nested routes from directories and subdirectories. - Setup is easy! Spinning up a Remix project takes just a few minutes and gets you productive immediately. - Remix uses server-side progressive enhancement, which means only necessary JavaScript, JSON, and CSS content is sent to the browser. - Remix focuses on server-side rendering. - File-system-based routing automatically generates the router configuration based on your file directory. - Remix is built on React, which makes it easy to use if you know React. Key Features Let’s highlight a few key features of Remix you should be aware of. Nested Routes Routes in Remix are file-system-based. Any component in the routes directory is handled as a route and rendered into its parent outlet component. If you define a parent component inside the routes directory, and then different routes inside a directory with the same name as the parent component, the latter will be nested within the former. Error boundaries Error handling within applications is critical. In many cases, a single error can affect the entire application. With Remix, when you get an error in a Remix component or a nested route, the error is limited to that component: the component will fail to render, or will display the error, without disrupting the entire application’s functionality. 
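As a concrete picture of the nested routing just described (the file names are assumptions, mirroring the blog routes built later in this article):

```
app/
└── routes/
    ├── posts.jsx           -> parent route, renders an outlet
    └── posts/
        ├── index.jsx       -> matched at /posts
        └── $postSlug.jsx   -> matched at /posts/:slug
```

Because posts.jsx shares its name with the posts/ directory, every route inside that directory renders into its outlet.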
Loading State Remix handles loading states in parallel on the server, and sends the fully formed HTML document to the browser; this eliminates the need to use a loading skeleton or spinner when fetching data or submitting form data. Loaders and Actions Among the most exciting features of Remix are Loaders and Actions. These are special functions: Loaders are server functions that retrieve dynamic data from the database or an API using the native fetch API. You can access a loader’s data in a component by calling the useLoaderData() hook. Actions are functions used to mutate data. Actions are mostly used to send form data to an API or database, to make changes through an API, or to perform a delete action. Building Your First Remix App The next portion of this blog post will show you how to build your first Remix app! We will be building a small blog app with Prisma’s SQLite to store the posts. To start a Remix project, the prerequisites are: - Familiarity with JavaScript and React - Node.js installed - A code editor, e.g. VS Code Open your system terminal and run: ` You can accept the default prompts or use your own preferences. Remix will install all the dependencies it requires, and scaffold the application with directories and files. In the project’s directory, let’s install the other dependencies we will be using. Run: ` You should have something like the directory in the image. File structure walk through The app directory contains the main building files for our Remix application. The routes directory holds all the routes that expose the exported default function as the route handler from the file. entry.client.jsx and entry.server.jsx are core Remix files. Remix uses entry.client.jsx as the entry point for the browser bundle, and uses entry.server.jsx to generate the HTTP response when rendering on the server. 
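The loader/action pattern described earlier can be sketched in plain terms: a loader reads data for a GET request, an action mutates data on a form submission. This is a simplification, not Remix's actual module shape — the database client is injected here for clarity, and standard Response objects stand in for Remix's json() helper; real route modules export these functions directly:

```javascript
// Sketch of a Remix-style route module's data functions.

// Loader: runs on the server and returns the data the view needs.
async function loader(db) {
  const posts = await db.post.findMany({ take: 20 });
  return new Response(JSON.stringify({ posts }), {
    headers: { "Content-Type": "application/json" },
  });
}

// Action: runs for form submissions and mutates data.
async function action(db, formData) {
  return db.post.create({
    data: { title: formData.get("title"), body: formData.get("body") },
  });
}
```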
root.jsx is the root component of a Remix application; the default export is the layout component that renders the rest of the app in an Outlet. These are the files we really want to touch in our project. To learn more about the file directory, check out Remix’s API conventions. Project set up Open the root.jsx file and replace the code with: ` Since we are using Bootstrap for styling, we imported the minified library. Remix uses the Route Module links export to add stylesheets at the component level. The links function defines which elements to add to the page when the user visits the route. Visit Remix Styles to learn more about stylesheets in Remix. Similar to the stylesheet, Remix can add meta tags to the route header through the Meta Module. The meta function defines which meta elements to add to the routes. This is a huge plus for your application’s SEO. The app file has three components, with a default export of the App component. Here, we declared a Document component for the HTML document template, a Layout component to further improve the template layout for rendering components, and added {process.env.NODE_ENV === "development" ? <LiveReload /> : null} for hot reload when changing things in the file during development. One important note: do not confuse Links with Link. The latter, Link, is a router component for matching routes in Remix apps. We will handle the components to match our routes below. To test out the application: ` You should have a similar app as shown below: Let’s configure the database we will use for the app. We will use Prisma SQLite to store the posts. Install the Prisma packages in your project: ` Now let’s initialize Prisma SQLite in the project: ` This will add a prisma directory to the project. Open prisma/schema.prisma and add the following code at the end of the file: ` This is the schema for the blog database. In short, the schema specifies that we will need a slug, which will be a unique string used for each blog post's unique route. 
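The schema block itself is elided above; given the description — a unique string slug, string title and body, and created/updated dates — a plausible Prisma model might look like this (the model name is an assumption):

```prisma
model Post {
  slug      String   @id
  title     String
  body      String
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}
```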
The title property is also a string type, for the title of our blog post. The body property, of type string, will hold the body, and finally we want to have the created and updated dates. Now, let’s push the schema to Prisma: ` Prisma will create a dev.db in the prisma directory. Now let's seed our database with a couple of posts. Create prisma/seed.js and add this to the file. ` Now edit the package.json file, just before the scripts property, and add this: ` Now, to seed the db with Prisma, run: ` Our database setup is now done. There is just one more thing for the database connection: we need to create a TypeScript file, app/utils/db.server.ts. We specify a server file by appending .server to the end of the file name; Remix will compile and deploy this file on the server. Add the code below to the file: ` Now, go back to the code editor and create posts.jsx in the routes directory, and add the following code: ` All we are doing is rendering our nested routes of posts here, nothing crazy. Now create routes/posts/index.jsx ` Here, we are declaring the loader and fetching all the posts from the database with a limit of 20, destructuring the posts from the useLoaderData hook, then finally looping through the returned posts. Dynamic Routes Remix dynamic routes are defined with a $ sign followed by the parameter key. We will want to use the slug property as our dynamic route for the blog posts. To do this, create app/routes/posts/$postSlug.jsx ` Here, the loader is fetching from the database using the unique slug we provided, and rendering the article. To add a place to post new blogs, create app/routes/posts/new.jsx, and add the following code: ` We created an action function to handle the form post request; Remix will catch all the formData by passing the request. And there you have it! You’ve created a functional blog application. Conclusion In this article, we learned why Remix is a great choice for building your next application, and why it’s becoming a more popular framework amongst developers. 
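The $postSlug.jsx loader above is elided; its core lookup logic can be sketched with the Prisma client injected so it stands on its own (the function and model names are assumptions; a real loader would receive Remix's params and import the db client from db.server.ts):

```javascript
// Sketch of the lookup inside $postSlug.jsx's loader.
async function findPostBySlug(db, slug) {
  const post = await db.post.findUnique({ where: { slug } });
  if (!post) {
    // Remix lets loaders throw a Response to short-circuit into a 404.
    throw new Response("Post not found", { status: 404 });
  }
  return post;
}
```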
We also explored the core features of Remix and learned how to build a sample blog application using a database.


How to Build Stripe Apps with React: Learning Once, Writing Everywhere

Have you had a chance to check out the newly released Stripe App Marketplace? In this article, we will learn what Stripe Apps are, and how we can build applications using the Stripe App CLI and plugins. This article is a summary of JavaScript Marathon: Building Stripe Apps with React: Learning Once, Writing Everywhere with Rob Ocel. You may read this article or watch the video to learn how to build an app on Stripe.

What Are Stripe Apps?

Stripe Apps allow developers to extend the Stripe platform by building React-based applications that can be embedded directly in the Stripe Dashboard and orchestrate the Stripe API. To build your app, you'll benefit from having a basic understanding of React, a Stripe account, and the Stripe CLI. Stripe Apps then adds supporting libraries and tools, including:

- The Stripe CLI plugin for Stripe Apps, for scaffolding a new app with a manifest, views, and components
- The Stripe ui-extension-sdk, a utility for accessing Stripe APIs
- The Stripe App UI Toolkit, a component library for building Stripe UI components

The key difference between Stripe Apps and React apps is that an app on Stripe is rendered in a viewport but hosted inside a sandboxed iframe, isolating it from the actual Stripe page in the browser. Stripe Apps use a set of custom components, found in the UI toolkit, that are required for building frontend views. Traditional HTML elements like h1, span, and div are not supported. Some of these custom Stripe components include Box, Inline, ContextView, and more, which we will use as we build a demo application. You can check out the [Stripe UI components docs](https://stripe.com/docs/stripe-apps/components) for more.

In this tutorial, we'll be building a LeaderBoard of a donor app using the Stripe API. Our Stripe App will… To start, we'll install the Stripe CLI plugin for building a Stripe App.
To install the Stripe Apps CLI plugin, run: `

To scaffold a new Stripe App, run: `

The Stripe CLI will prompt for the following:

- ID: An auto-generated application ID; a unique name for Stripe to identify your application.
- Display name: Enter a display name. This is the name your Dashboard displays for your app. For this tutorial, we'll name it JavaScript Marathon.

A directory with the app name will be created. Run: `

Let's run the application to preview the default UI: `

You will be prompted to preview the application in the Stripe Dashboard. The Stripe Dashboard will provide a dropdown of accounts you can choose from. Best of luck, and feel free to reach out to me if you have any further questions!
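For reference, the scaffolding step above generates a JSON manifest (stripe-app.json) in the app directory. The sketch below is a hedged illustration only: the field names follow Stripe's app manifest format, but the exact shape can differ by CLI version, and the id, name, and viewport values here are hypothetical.

```json
{
  "id": "com.example.javascript-marathon",
  "version": "0.0.1",
  "name": "Javascript Marathon",
  "permissions": [],
  "ui_extension": {
    "views": [
      {
        "viewport": "stripe.dashboard.customer.detail",
        "component": "App"
      }
    ]
  }
}
```

Each entry in views pairs a Dashboard viewport with the component the app renders there.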


How to Discover JavaScript Libraries in React, Angular, and Vue

Navigating the JavaScript ecosystem to find tools and resources is often an overwhelming and time-consuming task for many developers. But that makes sense, because the ecosystem is already so vast, and changes by the day! Most libraries and frameworks have packages that help integrate with other development workflows like build tools, state management, styling, data-fetching, and form building. This both makes it even more challenging to understand the ecosystem, and demands that developers continually learn with help from quality resources.

There are a few criteria to look out for when choosing tools and resources, including:

- Community adoption: Confirming the community's adoption of a project helps validate that others have tried a particular technology and have vetted it. You can infer the degree of adoption by checking the number of downloads reported by npm.
- Community support: GitHub has a feature that allows the community to "star" or sponsor a repository. This helps the community support and promote the project, and generally means that the community is using the project and providing feedback for improvements.
- The project maintainer: Knowing the person, company, or community behind a project gives you an idea of the investment and commitment backing it. You wouldn't want to use a project that may be swiftly abandoned. The maintainer of a project is typically the host of the project repository on GitHub.
- Other criteria: The number of pull requests and open issues. Take note of not only the quantity of pull requests and issues, but also when the most recent pull request was merged. This helps identify the health of the project and whether it is still actively maintained.

Finding relevant technologies doesn't need to be this hard, which is why framework.dev is such a great open source tool for discovering tools, technologies, and learning resources!
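The download check in the criteria above can also be scripted. The helper below is hypothetical (not part of framework.dev); it builds the URL for npm's public downloads endpoint at api.npmjs.org, which reports per-package download counts.

```javascript
// Hypothetical helper for gauging a package's adoption via npm's
// public downloads API (api.npmjs.org). Not part of framework.dev.
function npmWeeklyDownloadsUrl(pkg) {
  // Scoped packages like @ngrx/store must be URL-encoded.
  return `https://api.npmjs.org/downloads/point/last-week/${encodeURIComponent(pkg)}`;
}

console.log(npmWeeklyDownloadsUrl("@ngrx/store"));
// https://api.npmjs.org/downloads/point/last-week/%40ngrx%2Fstore

// Usage (requires network access; Node 18+ has global fetch):
// fetch(npmWeeklyDownloadsUrl("@ngrx/store"))
//   .then((res) => res.json())
//   .then((data) => console.log(data.downloads));
```

A quick script like this makes it easy to compare weekly downloads across candidate libraries before committing to one.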
Framework.dev

Framework.dev helps developers choose the appropriate tools for their projects by comparing libraries, frameworks, and tools, with individual platforms for React, Angular, and Vue. The platform considers all the criteria identified above, such as open source community support, maintainers, and adoption, and compares them to help developers make important decisions for their projects. Since it is an open source project, you can add, edit, or suggest any project you feel should be added by submitting a PR.

Using Framework.dev

Let me walk you through how to use Framework.dev! Visit https://www.framework.dev/ and click on one of the available technologies. For this example, I will select Angular. Here, you can search and compare tools, libraries, blogs, courses, and more for the Angular framework, or simply browse the collection by clicking on the categories in the sidebar.

Click on the Library dropdown in the sidebar and select State Management. This will curate a list of every resource in the collection that has "state management" in its name, description, or tag. To filter the list with advanced options, click on the Advanced button and type the filter criteria you want to apply to the list.

In the list, you will notice each item displays a description, including the number of stars, downloads, and the size of the project. In the top right corner of each item, there is a plus icon. Click on the plus icon of two items to compare them; you will see that the two selected items are highlighted. For this article, I selected Akita and NgRx Store. Go to the footer of the page and click "Compare". Here, you will see a side-by-side comparison of the libraries, with details on authors, weekly downloads, overall health, and stars, to help you make decisions for your project. You can also search for a library or tool if it is not listed in the menu.
Here, I searched for "State Management", which curated a result with every resource in the collection that has "state management" in its name, description, or tag. You can navigate to other categories of blogs, books, courses, communities, and podcasts to browse and compare these resources.

Conclusion

What will you use Framework.dev for next? Let us know how it has helped you! Framework.dev is committed to helping developers navigate the often confusing landscape of JavaScript technologies and educational resources. One of the best parts of this project is that it welcomes contributions from the greater web development community. If there is any content you want to see added to the platform, you can submit a PR!