
Web App Deployment with Firebase in 2 minutes


In this tutorial, I'm going to show you how to deploy your web app to Firebase in 2 minutes! (that's the goal 😏)

Setting Up Firebase

  1. Go to https://firebase.google.com/, and sign in with your Google account.
  2. Go to Console
  3. Click on Add project
  4. Enter the name of your project
  5. Disable Google Analytics
  6. Click on Create project
  7. Click on Hosting in the left sidebar
  8. On the main banner, click on Get started
  9. Install the Firebase CLI on your machine by running:
npm install -g firebase-tools
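
If the installation succeeded, the firebase command is now available globally. You can confirm this by printing the CLI version:

firebase --version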

Building Your Project

  1. Before you can deploy your project, you have to build it. If you are using React, you can do this by running:
npm run build
If you are using the Angular CLI, build for production with:
ng build --prod

Note: Depending on the tech stack you are using (React, Vue, Angular, etc.), a folder will be created after running the build command. This folder will contain your compiled HTML, CSS, JS, and other assets.
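
For example, a Create React App build usually produces a build folder that looks roughly like this (folder and file names vary by framework and version):

build/
├── index.html
├── favicon.ico
├── asset-manifest.json
└── static/
    ├── css/
    ├── js/
    └── media/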

Firebase App Setup

  1. Inside your project, open the command line and run:
firebase login
  2. Then run:
firebase init
  3. Select Hosting
  4. Select Use an existing project, and choose the project you just created in Firebase
  5. When asked about the public directory, enter the folder that was created when you ran your build command. For example, in React it's build, and in Angular it's dist/Your-project-name
  6. When asked whether to configure as a single-page app, enter Y
  7. When asked about overwriting index.html, enter N (the firebase.json this generates is sketched just below)
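
After firebase init finishes, it writes a firebase.json file (and a .firebaserc with your project id) to the root of your project. As a rough sketch, assuming you chose build as the public directory and answered Y to the single-page app question, the generated config will look something like this:

{
  "hosting": {
    "public": "build",
    "ignore": [
      "firebase.json",
      "**/.*",
      "**/node_modules/**"
    ],
    "rewrites": [
      {
        "source": "**",
        "destination": "/index.html"
      }
    ]
  }
}

If you answered N to the single-page app question, the rewrites block is simply left out.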

App Deployment

  1. Run the following command:
firebase deploy
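
If your project later grows to include other Firebase features (Functions, Firestore rules, and so on), you can restrict the deploy to Hosting with the --only flag:

firebase deploy --only hosting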

After a successful deployment, the CLI output ends with a Hosting URL for your app.

Clicking on that Hosting URL should open the web app you just deployed.

  2. You can also find the Hosting URL in your Firebase console, where you can set up a custom domain as well.
