
Declarative Canvas with Svelte


The <canvas> element and the Canvas API let us draw graphics via JavaScript. However, its imperative API can be wrapped in a declarative one using Svelte.
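
By the end of this article, instead of issuing drawing commands by hand, we'll be able to describe what to draw directly in the template, roughly like this (a preview of the components we'll build below):

<Canvas>
  <Line start={[10, 20]} end={[150, 100]} />
  <Line start={[10, 40]} end={[150, 120]} />
</Canvas>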

The technique to achieve this relies on what are sometimes called renderless components.

Renderless Components

In Svelte, all the sections of a .svelte file are optional, including the template.

This allows us to create a component that will not be rendered, but can contain some logic in the <script> section.

Let's create a new project. I'll be using Vite and Svelte for this tutorial.

npm init vite

✔ Project name: canvas-svelte
✔ Select a framework: › svelte
✔ Select a variant: › svelte-ts

cd canvas-svelte
npm i

Now that our project is ready, let's create a new component.

<!-- src/lib/Renderless.svelte -->
<script>
    console.log("No template");
</script>

We will be printing a message to the console when the component is initialized.

Let's see how it works by making some changes to the entry point of our application.

// src/main.ts
// import App from './App.svelte'
import Renderless from './lib/Renderless.svelte'

const app = new Renderless({
  target: document.getElementById('app')
})

export default app

If we start our server and open the developer tools in our browser, we will see the message printed.

Screenshot: the browser console showing the "No template" message.

It's working.

Note that even though this component has no template, it still behaves like a regular component instance, and you still have access to the component lifecycle functions.

Let's test it.

<!-- src/lib/Renderless.svelte -->
<script>
    import { onMount } from "svelte";
    console.log("No template");
    onMount(() => {
        console.log("Component mounted");
    });
</script>

We added a second message to be shown after the component is mounted.

Both messages are now printed in the expected order.

Screenshot: the console showing "No template" followed by "Component mounted".

This means that we can use our Renderless Component just as any other Svelte Component.

Let's revert the changes to the main.ts file, and "render" the component inside the App component.

// src/main.ts
import App from './App.svelte'

const app = new App({
  target: document.getElementById('app')
})

export default app

<!-- src/App.svelte -->
<script lang="ts">
  import { onMount } from "svelte";

  import Renderless from "./lib/Renderless.svelte";
  console.log("App: initialized");
  onMount(() => {
    console.log("App: mounted");
  });
</script>

<main>
  <Renderless />
</main>

Finally, let's also modify our Renderless component to log more meaningful messages.

<!-- src/lib/Renderless.svelte -->
<script>
    import { onMount } from "svelte";
    console.log("Renderless: initialized");
    onMount(() => {
        console.log("Renderless: mounted");
    });
</script>

Screenshot: the console showing the order of the initialization and mount messages from App and Renderless.

Note the order in which the components are initialized and mounted. This will matter when we create our Canvas and renderless components.

There's a third way to mount our component: passing it as a child of another component. This is also called content projection, and in Svelte we do it using slots.

Let's create a container component that renders its children in a slot. I will also add a few elements of its own that will live alongside the slotted content.

<!-- src/lib/Container.svelte -->
<script>
    import { onMount } from "svelte";
    console.log("Container: initialized");
    onMount(() => {
        console.log("Container: mounted");
    });
</script>

<h1>The container of things</h1>
<slot />
<p>invisible things</p>

Let's also add a prop to the Renderless component to add some kind of identifier to it.

<!-- src/lib/Renderless.svelte -->
<script lang="ts">
    import { onMount } from "svelte";
    export let id:string = "NoId"
    console.log(`Renderless ${id}: initialized`);
    onMount(() => {
        console.log(`Renderless ${id}: mounted`);
    });
</script>

Finally, in our App, we update the template to use the container, and pass multiple instances of Renderless to it.

<!-- src/App.svelte -->
<script lang="ts">
  import { onMount } from "svelte";
  import Container from "./lib/Container.svelte";

  import Renderless from "./lib/Renderless.svelte";
  console.log("App: initialized");
  onMount(() => {
    console.log("App: mounted");
  });
</script>

<main>
  <Container>
    <Renderless id="Foo"/>
    <Renderless id="Bar"/>
    <Renderless id="Baz"/>
  </Container>
</main>

Now, we can see the rendered Container and the renderless components logging when they are initialized and mounted.

Screenshot: the rendered Container content, with the console logging each Renderless instance (Foo, Bar, Baz) as it's initialized and mounted.

Now that we've learned about renderless components, let's use them with the <canvas> element.

<canvas> and the Canvas API

The canvas element cannot contain any children, except for fallback content that's shown when canvas isn't supported. Anything you want to render inside the canvas must be drawn using its imperative API.
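
For example, any markup placed inside <canvas> is only used as fallback content for browsers without canvas support; it is never drawn onto the canvas itself:

<canvas width="200" height="100">
  <p>Your browser does not support the canvas element.</p>
</canvas>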

Let's create a new Canvas component and render an empty canvas.

<!-- src/lib/Canvas.svelte -->
<script>
    import { onMount } from "svelte";

    console.log("Canvas: initialized");
    onMount(() => {
        console.log("Canvas: mounted");
    });
</script>

<canvas />

Update the App component to import and use Canvas.

<!-- src/App.svelte -->
<script lang="ts">
  import { onMount } from "svelte";
  import Canvas from "./lib/Canvas.svelte";

  console.log("App: initialized");
  onMount(() => {
    console.log("App: mounted");
  });
</script>

<main>
 <Canvas />
</main>

If we open the browser dev tools, we should see a canvas element rendered now.

Screenshot: the dev tools Elements panel showing the rendered (empty) canvas element.

Rendering elements inside canvas

As mentioned previously, we cannot add elements inside our canvas to draw them. We have to use the Canvas API instead.

To get a reference to the element, we will use the bind:this directive. It's important to understand that, to use the API, we need the element to be available. This means that we will have to draw after the component is mounted.

<script lang="ts">
    import { onMount } from "svelte";
    let canvasElement: HTMLCanvasElement
    console.log("1", canvasElement) // undefined!!!
    console.log("Canvas: initialized");
    onMount(() => {
        console.log("2", canvasElement) // OK!!!
        console.log("Canvas: mounted");
    });
</script>

<canvas bind:this={canvasElement}/>

Now let's draw a line (I'm removing all the logging from the component for clarity).

<script lang="ts">
    import { onMount } from "svelte";
    let canvasElement: HTMLCanvasElement
    onMount(() => {
        // get canvas context
        let ctx = canvasElement.getContext("2d")

        // draw line
        ctx.beginPath();
        ctx.moveTo(10, 20); // line will start here
        ctx.lineTo(150, 100); // line ends here
        ctx.stroke(); // draw it
    });
</script>

<canvas bind:this={canvasElement}/>

To draw, we need the canvas context. So we must do it after mounting the component. Then, we can start drawing using the canvas API.

Screenshot: the canvas with a single diagonal line drawn on it.

If we want to add a second line, we would have to add a new block of code.

<script lang="ts">
    import { onMount } from "svelte";
    let canvasElement: HTMLCanvasElement
    onMount(() => {
        // get canvas context
        let ctx = canvasElement.getContext("2d")

        // draw first line
        ctx.beginPath();
        ctx.moveTo(10, 20); // line will start here
        ctx.lineTo(150, 100); // line ends here
        ctx.stroke(); // draw it

        // draw second line
        ctx.beginPath();
        ctx.moveTo(10, 40); // line will start here
        ctx.lineTo(150, 120); // line ends here
        ctx.stroke(); // draw it
    });
</script>

We can see that we're adding more and more code to our component just to draw simple shapes. This can get out of hand quickly, so let's create a helper function to draw the lines.

<script lang="ts">
    import { onMount } from "svelte";
    let canvasElement: HTMLCanvasElement;
    onMount(() => {
        // get canvas context
        let ctx = canvasElement.getContext("2d");

        // draw first line
        drawLine(ctx, [10, 20], [150, 100]);

        // draw second line
        drawLine(ctx, [10, 40], [150, 120]);
    });

    type Point = [number, number];
    function drawLine(ctx: CanvasRenderingContext2D, start: Point, end: Point) {
        ctx.beginPath();
        ctx.moveTo(...start); // line will start here
        ctx.lineTo(...end); // line ends here
        ctx.stroke(); // draw it
    }
</script>

<canvas bind:this={canvasElement} />

The code becomes more readable, but we are still putting all the responsibility on the Canvas component, which will quickly turn into a very complex component.

We can avoid this by using renderless components and the Context API.

We know a few things so far:

  • We require the Canvas context to draw.
  • We can get the context after the component is mounted.
  • Child components are mounted before the parent component.
  • Parent components are initialized before child components.
  • We can use slots to mount child components.

We want to split our component into multiple components. For this example, we want a Line component that knows how to draw itself.

Canvas and Line are coupled: a Line cannot be drawn without a Canvas, and it needs the canvas rendering context. The problem is that this context is not available yet when the child component is mounted (Line is mounted before Canvas), so we need a different approach.

Instead of passing the rendering context down so each child can draw itself, we will let the parent component know that a child component needs to be drawn.

We'll let the Canvas and Line components communicate using Svelte's Context API.

Context is a way for two or more components to communicate. Context can only be set or retrieved during initialization, which is what we need in our case. Remember that Canvas is initialized before our Line component.
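
The basic shape of the API looks like this (a minimal sketch; "canvas" is the context key we'll use below):

// in the parent component (must run during initialization)
import { setContext } from "svelte";
setContext("canvas", { /* whatever we want to expose to children */ });

// in a child component (also during initialization)
import { getContext } from "svelte";
const canvasContext = getContext("canvas");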

Let's start by moving the line rendering to its own component. I will also move some types to their own file to be shared across components.

// src/types.ts
export type Point = [number, number];
export type DrawFn = (ctx: CanvasRenderingContext2D) => void;
export type CanvasContext = {
  addDrawFn: (fn: DrawFn) => void;
  removeDrawFn: (fn: DrawFn) => void;
};

<!-- src/lib/Line.svelte -->
<script lang="ts">
    import type { Point } from "./types";

    export let start: Point;
    export let end: Point;

    function draw(ctx: CanvasRenderingContext2D) {
        ctx.beginPath();
        ctx.moveTo(...start);
        ctx.lineTo(...end);
        ctx.stroke();
    }
</script>

This is very similar to what we had in our Canvas component, but abstracted into a reusable component. Now we need a way for the Canvas and Line components to communicate.

Our Canvas will work as the orchestrator of all the rendering.

It will initialize all the Child components, gather the rendering functions, and draw them when required.

<script lang="ts">
  import { onMount, setContext } from "svelte";
  import type { DrawFn } from "./types";

  let canvasElement: HTMLCanvasElement;
  let fnsToDraw = [] as DrawFn[];

  setContext("canvas", {
    addDrawFn: (fn: DrawFn) => {
      fnsToDraw.push(fn);
    },
    removeDrawFn: (fn: DrawFn) => {
      let index = fnsToDraw.indexOf(fn);
      if (index > -1) {
        fnsToDraw.splice(index, 1);
      }
    },
  });

  onMount(() => {
    // get canvas context
    let ctx = canvasElement.getContext("2d");
    draw(ctx);
  });

  function draw(ctx: CanvasRenderingContext2D) {
    fnsToDraw.forEach((fn) => fn(ctx));
  }
</script>

<canvas bind:this={canvasElement} />
<slot />

The first thing to note is that our template has changed: we now have a <slot> element beside our canvas. It will be used to mount any children passed into our Canvas component, in our case, the Line components. These will not add any HTML elements.

In the script section, we added an array to hold all the render functions to draw.

We also set a new context, which has to be done during initialization (remember, our Canvas is initialized before Line). The context exposes two methods: one to add a draw function to our array and one to remove it. Any child component can then access this context and call those methods.

That's exactly what we'll do next in the Line component.

<script lang="ts">
  import { getContext, onDestroy, onMount } from "svelte";
  import type { Point, CanvasContext } from "./types";

  export let start: Point;
  export let end: Point;

  let canvasContext = getContext("canvas") as CanvasContext;

  onMount(() => {
    canvasContext.addDrawFn(draw);
  });

  onDestroy(() => {
    canvasContext.removeDrawFn(draw);
  });

  function draw(ctx: CanvasRenderingContext2D) {
    ctx.beginPath();
    ctx.moveTo(...start);
    ctx.lineTo(...end);
    ctx.stroke();
  }
</script>

When this component is mounted, we register the draw function using the context previously set by Canvas. We could do it during initialization too, because we know the context will be available by then, but I prefer doing it after the component is mounted. And when the component is destroyed, it removes its draw function from the list of rendering functions.

Finally, let's update our App to use the new Canvas and Line components.

<script lang="ts">
  import Canvas from "./lib/Canvas.svelte";
  import Line from "./lib/Line.svelte";
</script>

<main>
  <Canvas>
    <Line start={[10, 20]} end={[150, 100]} />
    <Line start={[10, 40]} end={[150, 120]} />
  </Canvas>
</main>

Screenshot: the canvas showing both lines, now drawn by the declarative Line components.

We've successfully updated our Canvas component to use a declarative approach. A few things are missing, though: we are only drawing once, when the Canvas component is mounted.

We need to make the canvas redraw continuously so it updates when changes happen (unless you only want to render once). Note that we would need to do this with or without the approach we've taken; it's a common way of keeping canvas contents up to date.

<script lang="ts">
  // NOTE: some code removed for readability 
  // ...
  let frameId: number

  // ...

  onMount(() => {
    // get canvas context
    let ctx = canvasElement.getContext("2d");
    frameId = requestAnimationFrame(() => draw(ctx));
  });

  onDestroy(() => {
    if (frameId){
        cancelAnimationFrame(frameId)
    }
  })

  function draw(ctx: CanvasRenderingContext2D) {
	if (clearFrames) {
	    ctx.clearRect(0,0,canvasElement.width, canvasElement.width)
	}
    fnsToDraw.forEach((fn) => fn(ctx));
    frameId = requestAnimationFrame(() => draw(ctx))
  }
</script>

We achieve this re-rendering of the canvas using the requestAnimationFrame method. The callback passed to it runs before the browser's next repaint. First, we create a new variable to hold the current frameId (required for canceling the animation). Then, when we mount the component, we invoke requestAnimationFrame and assign the returned id to our variable. So far, the end result is the same as before. The difference is now in our draw function, which requests a new animation frame each time after drawing.

We also clear our canvas by default. Otherwise, when we are animating, each frame would be drawn on top of the previous one (this might be the desired effect; in that case, the clearFrames prop can be set to false). Our canvas will keep updating every frame until we destroy the component, at which point we cancel any pending animation frame using the id we stored.
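
Putting the pieces together, the complete Canvas component now looks roughly like this (a sketch assembled from the snippets above, including the clearFrames prop described in the previous paragraph):

<!-- src/lib/Canvas.svelte -->
<script lang="ts">
  import { onDestroy, onMount, setContext } from "svelte";
  import type { DrawFn } from "./types";

  export let clearFrames = true;

  let canvasElement: HTMLCanvasElement;
  let fnsToDraw = [] as DrawFn[];
  let frameId: number;

  setContext("canvas", {
    addDrawFn: (fn: DrawFn) => {
      fnsToDraw.push(fn);
    },
    removeDrawFn: (fn: DrawFn) => {
      let index = fnsToDraw.indexOf(fn);
      if (index > -1) {
        fnsToDraw.splice(index, 1);
      }
    },
  });

  onMount(() => {
    // the 2d context is only available once the canvas element is in the DOM
    let ctx = canvasElement.getContext("2d");
    frameId = requestAnimationFrame(() => draw(ctx));
  });

  onDestroy(() => {
    if (frameId) {
      cancelAnimationFrame(frameId);
    }
  });

  function draw(ctx: CanvasRenderingContext2D) {
    if (clearFrames) {
      ctx.clearRect(0, 0, canvasElement.width, canvasElement.height);
    }
    fnsToDraw.forEach((fn) => fn(ctx));
    frameId = requestAnimationFrame(() => draw(ctx));
  }
</script>

<canvas bind:this={canvasElement} />
<slot />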

Adding more features

The basic functionality for the components is working, but we may want to add more features.

For this example, we will expose two DOM events: mousemove and mouseleave. To do this, we need to forward them from the canvas element inside our Canvas component. In the template, change the canvas element to this:

<canvas on:mousemove on:mouseleave bind:this={canvasElement} />

Now, the events can be handled in our App:

<script lang="ts">
  import Canvas from "./lib/Canvas.svelte";
  import Line from "./lib/Line.svelte";
  import type { Point } from "./lib/types";

  function followMouse(e) {
    let rect = e.target.getBoundingClientRect();
    end = [e.clientX - rect.left, e.clientY - rect.top];
  }
  let start = [0, 0] as Point;
  let end = [0, 0] as Point;
</script>

<main>
  <Canvas
    on:mousemove={(e) => followMouse(e)}
    on:mouseleave={() => {
      end = [0, 0];
    }}
  >
    <Line {start} {end} />
  </Canvas>
</main>

Svelte is responsible for updating the end position of the line, while our Canvas component takes care of redrawing the canvas contents on every frame (using requestAnimationFrame).

Animation: the line's end point follows the mouse inside the canvas and resets when the mouse leaves.
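
Because the Line props are just reactive values, you could also drive them with any Svelte store. For example, a hypothetical variation of the App using the spring store from svelte/motion would smooth out the movement (a sketch, not part of the original example; the spring parameters are arbitrary):

<!-- src/App.svelte (hypothetical variation) -->
<script lang="ts">
  import { spring } from "svelte/motion";
  import Canvas from "./lib/Canvas.svelte";
  import Line from "./lib/Line.svelte";
  import type { Point } from "./lib/types";

  let start = [0, 0] as Point;
  // the spring store eases towards the latest value we set
  const end = spring<Point>([0, 0], { stiffness: 0.1, damping: 0.4 });

  function followMouse(e: MouseEvent) {
    let rect = (e.target as HTMLCanvasElement).getBoundingClientRect();
    end.set([e.clientX - rect.left, e.clientY - rect.top]);
  }
</script>

<main>
  <Canvas on:mousemove={(e) => followMouse(e)} on:mouseleave={() => end.set([0, 0])}>
    <Line {start} end={$end} />
  </Canvas>
</main>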

Wrapping up

I hope this tutorial serves as an introduction to using canvas in Svelte, and also helps you understand how we can turn a library with an imperative API into a more declarative one.

There are more complex libraries built on similar ideas using a similar approach, like svelte-cubed or svelte-leaflet.

From the svelte-cubed docs:

This ...

import * as THREE from 'three';

function render(element) {
	const scene = new THREE.Scene();
	const camera = new THREE.PerspectiveCamera(
		45,
		element.clientWidth / element.clientHeight,
		0.1,
		2000
	);

	const renderer = new THREE.WebGLRenderer();
	renderer.setSize(element.clientWidth, element.clientHeight);
	element.appendChild(renderer.domElement);

	const geometry = new THREE.BoxGeometry();
	const material = new THREE.MeshNormalMaterial();
	const box = new THREE.Mesh(geometry, material);
	scene.add(box);

	camera.position.x = 2;
	camera.position.y = 2;
	camera.position.z = 5;

	camera.lookAt(new THREE.Vector3(0, 0, 0));

	renderer.render(scene, camera);
}

becomes...

<script>
	import * as THREE from 'three';
	import * as SC from 'svelte-cubed';
</script>

<SC.Canvas>
	<SC.Mesh geometry={new THREE.BoxGeometry()} />
	<SC.PerspectiveCamera position={[1, 1, 3]} />
</SC.Canvas>

We just scratched the surface of the Canvas API, but you can extend it for your own needs, or even create a library for it!
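
For instance, a hypothetical Circle component (not part of this tutorial; the center and radius props are my own naming) could follow exactly the same pattern as Line, registering its own draw function with the Canvas context:

<!-- src/lib/Circle.svelte (hypothetical) -->
<script lang="ts">
  import { getContext, onDestroy, onMount } from "svelte";
  import type { Point, CanvasContext } from "./types";

  export let center: Point;
  export let radius: number;

  let canvasContext = getContext("canvas") as CanvasContext;

  onMount(() => {
    canvasContext.addDrawFn(draw);
  });

  onDestroy(() => {
    canvasContext.removeDrawFn(draw);
  });

  function draw(ctx: CanvasRenderingContext2D) {
    ctx.beginPath();
    ctx.arc(...center, radius, 0, Math.PI * 2); // full circle around `center`
    ctx.stroke();
  }
</script>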
