
AUTHOR

William Hutt

Software Engineer

I'm a software engineer with an interest in web development.


Setting Up a Shopify App: Updating Customer Orders with Tracking Info

Today, we are wrapping up our adventure! Last time, we learned about retrieving fulfillment IDs, but this time, we encounter the final boss: updating customer orders with tracking information. Now, if we have any new adventurers with us, I recommend heading over here to prep yourself for the encounter ahead. If you just need a recap of our last session, you can head over here. Alternatively, if you just want the code from last time, that can be found here. If you want to skip ahead and look at the finished code, it can be found here. With that all said, we’re off to battle with updating customer orders with tracking information!

Body

We’re gonna start by heading over to our app/routes/app._index.jsx file, and grabbing the code found in the loader function. We’ll be moving that to our action function so we can add our post call. We’ll completely replace the existing action function code, and because of that, we need to make a couple of tweaks to the code base. We’re going to remove anything that has a reference to actionData?.product or productId.

Now we need to add the call to admin.rest.resources.Fulfillment, which will allow us to update customer orders with tracking information. We’ll be placing it under our fulfillment ID loop. Here is a general example of what that call will look like. ` This is a good start, as we now have our fulfillment information and get to add a few things to it. We’ll start off by adding our fulfillment ID and then our fulfillment tracking info. ` Awesome! Now we have given the fulfillment ID and tracking info we need to the fulfillment object, but we need to do one more thing for that to update. Thankfully, it’s a small thing, and that’s to save it. ` Now, the above will work wonderfully for a single order, but based on our prior adventures, we had multiple IDs for orders that needed to be completed. So our next step is to loop over our fulfillment object.
Though before we do that, here is what the current code should look like: ` Before we loop over this, we’re going to make a small change to fulfillmentIds. We’re going to create a new variable and add the company and tracking number information. So above the fulfillment variable, we will add this: ` Perfect! Now for the looping, we’ll just wrap it in a for...of loop: ` Now that the loop is set up, we will be able to go through all of the orders and update them with the shipping company and a tracking number. So we’ll run a yarn dev, and navigate to the app page by pressing the p button. We should see our template page, and be able to click on the Generate a product button. Now we’ll navigate to our order page, and we should see all open orders set to fulfilled.

Conclusion

Here we are. At the end of our three-part saga, we covered a fair share of things in order to get our customer orders' tracking information added, and can now take a long rest to rejuvenate....
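To recap, here is a runnable sketch of the finished action logic: loop over the fulfillment IDs and save tracking info on each. The Shopify admin client is stubbed out below so the shape is self-contained; in the real app, `admin` and `session` come from `authenticate.admin(request)`, and the carrier name and tracking number are placeholder values.

```javascript
// Record of saves made by the stub, so the flow is observable.
const saved = [];

// Stub with the same surface as admin.rest.resources.Fulfillment.
const admin = {
  rest: {
    resources: {
      Fulfillment: class {
        constructor({ session }) {
          this.session = session;
        }
        async save({ update }) {
          // The real save() issues the REST call; here we just record it.
          saved.push({ update, tracking_info: this.tracking_info });
        }
      },
    },
  },
};

const session = { shop: "example.myshopify.com" }; // placeholder session
const fulfillmentIds = ["1046000777", "1046000778"]; // gathered in Part 2
const trackingInfo = { company: "UPS", number: "1Z999AA10123456784" }; // placeholder values

async function updateTracking() {
  for (const id of fulfillmentIds) {
    const fulfillment = new admin.rest.resources.Fulfillment({ session });
    fulfillment.line_items_by_fulfillment_order = [{ fulfillment_order_id: id }];
    fulfillment.tracking_info = trackingInfo;
    await fulfillment.save({ update: true }); // persist the tracking info
  }
  return saved.length;
}

updateTracking();
```

Swapping the stub for the real `admin` object from the Remix template leaves the loop body unchanged.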


Setting Up a Shopify App: Retrieving Fulfillment IDs

This is Part 2 of an ongoing series showing developers how to set up and deploy their own Shopify App! Find Part 1 here. Today, we are going to continue our adventure! When we last left off, we made it through quite a bit, from setting up our Shopify App to retrieving customer orders. Today's adventure will have us encounter retrieving fulfillment IDs. These will be needed for our grand finale, which will be updating customer orders with tracking information. Now, if we have any new adventurers with us, or you just need a recap of our last session, you can head over here. Alternatively, if you just want the code from last time, that can be found here. If you want to skip ahead and look at the code, it can be found here. With that all said, let us start retrieving those fulfillment IDs!

We’re going to start off by heading on over to our ` file, and making changes to the loader function. Here, we need to retrieve our session, which we’ll do by adding: ` between our const orders and before our return json, and it should look like this afterward. ` We’re going to need this to retrieve our fulfillment order information. With that out of the way, we’ll start by calling ` to retrieve our fulfillment orders. This will require the session we added and the order IDs from the orders we retrieved in our previous blog post. Here is a rough look at what the call will be. ` We need to make a few changes, though, to get this up and running. First, our order.id lives inside of an orders array, so we’ll need to loop over this call. Second, order.id also contains a bunch of unneeded information like `, so we’ll need to remove that in order to get the string of numbers we need. With those conditions outlined, we’ll go ahead and wrap it in a for loop, and use .replace to resolve the order.id issue, which gives us: ` Now that we are able to loop over our call and have gotten the order IDs properly sorted, we still have a couple of issues. We need a way to use this data.
So we’re going to set a variable to store the call, and then we’ll need a place to store the fulfillment ID(s). To store the fulfillment ID(s), we’ll create an array called fulfillmentIds. ` For the call, we’ll label it as fulfillmentOrder. ` We should now have something that looks like this: ` We’re now almost there, and we just need to figure out how we want to get the fulfillment ID(s) out of the fulfillmentOrder. To do this, we’ll map over fulfillmentOrder, check for the open status, and push the IDs found into our fulfillmentIds array. ` Awesome! Now we have our fulfillment ID(s)! We can now return them in our return json() section, which should look like this. ` Our code should now look like this: ` We can now look at our terminal and see the fulfillmentIds array returning the fulfillment ID(s).

Conclusion

And there we have it! Part 2 is over, and we can catch our breath. While it was a shorter adventure than the last one, a good chunk was still accomplished. We’re now set up with our fulfillment ID(s) to push on into the last part of the adventure, which will be updating customer orders with tracking information....
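To recap the two fiddly pieces from this part in one runnable sketch: trimming the gid prefix off each order ID, and collecting only the "open" fulfillment orders. The data below is hand-made; in the app it comes back from `admin.rest.resources.FulfillmentOrder.all({ session, order_id })`, and the push is done with `forEach` since it is purely a side effect.

```javascript
// Orders as returned by the Admin API: IDs carry a GraphQL gid prefix.
const orders = [
  { id: "gid://shopify/Order/450789469" },
  { id: "gid://shopify/Order/450789470" },
];

// Stand-in for the per-order API responses, keyed by the numeric order ID.
const fulfillmentOrdersByOrder = {
  450789469: [{ id: 1046000777, status: "open" }],
  450789470: [{ id: 1046000778, status: "closed" }],
};

const fulfillmentIds = [];
for (const order of orders) {
  const orderId = order.id.replace("gid://shopify/Order/", ""); // keep just the digits
  const fulfillmentOrder = fulfillmentOrdersByOrder[orderId] ?? [];
  fulfillmentOrder.forEach((fo) => {
    if (fo.status === "open") fulfillmentIds.push(fo.id); // only open orders
  });
}

console.log(fulfillmentIds); // [ 1046000777 ]
```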


Setting Up a Shopify App and Getting Order Data

Today, we are going on an adventure! We’re starting a three-part guide on creating a Shopify app and updating a customer's order with tracking information. For this article, it's assumed that you already have a Shopify store. If you want to skip ahead and look at the code, it can be found here. To start us off, we’ll use the Shopify Create app, and then follow it up with retrieving customer orders.

Shopify Create app

Getting started with the Shopify Create app will be a quick and easy process. We’ll start by navigating to a directory where we want to create our app and running the following ` We’ll be greeted by a few different prompts asking for the project name and how we want to build our app. Success! Now let's navigate into our new directory and do a yarn dev, and we’ll get a few options. We’ll choose to create it as a new app, add an app name, config name, and select the store we want to use. With that out of the way, we’ll open the preview by pressing the p button. It should automatically open it up in a window and show us an app install screen, where we will click the Install app button. This should redirect us to our Remix app template screen. *Install Template* Perfect! We now have a basic Shopify Create app up and running. Next, we will move on to adding in the ability to retrieve our customer orders.

Orders query

Alright, it’s customer order time! We’re going to be leaving the template mostly as is. We are going to focus on adding the call and verifying the data comes back. We’ll navigate over to our app/routes/app._index.jsx file and start changing the loader function. Start by removing: ` And replacing it with: ` Next, swap { shop } in: ` With ` Follow that up with changing ` To ` Then, we’ll remove the View product button that has the old shop variable in it. When you go back and look at your application, you should see the Error: Access denied for fulfillmentOrders field. This is due to scopes that we haven’t updated.
To fix this, we’ll head over to our shopify.app.toml file and replace ` with ` Here is what you should now have: ` We’ll now do another yarn dev, which will tell us that our scopes inside the TOML don’t match the scopes in our Partner Dashboard. To fix this, we simply need to run: ` And then we’ll be prompted to confirm our changes with the difference shown. It will give us a success response, and now we can do another yarn dev to look at our application. Doing so brings us back to our old friend, the app install page. Only this time, it’s telling us to update the app, and it redirects us back to the application page. Huh, seems like we’re getting a new error this time. For some reason, the app is not approved to access the FulfillmentOrder object. No worries, follow the link in that message. It should be https://partners.shopify.com/[partnerId]/apps/[appId]/customer_data Here, we select Protected customer data, toggle Store management, and save. After this, go down to the Protected customer fields (optional) section and do the same thing for Name, Email, Phone, and Address. With that all said and done, we’ll exit the page, go back to our application, and refresh it. Tada! It works! We will see a toast notification at the bottom of the screen that says Orders received, and in our terminal console, we will see our orders returning.

Conclusion

That was an exciting start to our three-part adventure. We covered a lot, from setting up a Shopify app to getting our orders back and everything up and running! Next time, we’ll be digging into how to get our fulfillment IDs, which will be needed to update a customer's order with tracking information....
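For reference, the scopes entry in shopify.app.toml after a change like the one described above looks roughly like this. The exact scope names are an assumption for illustration; use whatever scopes your orders and fulfillment queries actually require.

```toml
# shopify.app.toml (fragment) — scope names below are illustrative placeholders
[access_scopes]
scopes = "write_products,read_orders,read_assigned_fulfillment_orders,read_merchant_managed_fulfillment_orders"
```

After editing the file, the scope sync step (the command elided above) pushes these values to the Partner Dashboard.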


How to Integrate Mailchimp Forms in a React Project

Intro

Today we will cover how to set up an email signup form using React and Mailchimp. This blog will be using the starter.dev cra-rxjs-styled-components template to expedite the process. This article assumes you have a basic understanding of React, and that you have set up a Mailchimp account. Here is the code repo if you want to review it while reading, or just skip ahead. We will start with setting up our React project using starter.dev for simplicity, and then finish it up by integrating the two for our signup form.

To start, we will be using the command yarn create @this-dot/starter --kit cra-rxjs-styled-components, which can be found here. We’ll go ahead and give the project a name. I will be calling mine react-mailchimp. Now we will navigate into the project and do a yarn install. Then we can run yarn run dev to get it up and running locally on localhost:3000. This should land us on the React App, RxJS, and styled-components starter kit page. With that all set, we’ll also need to install jsonp by using yarn add jsonp. We’ll be using jsonp instead of fetch to avoid any CORS issues we may run into. This also makes for an easy and quick process by not relying on the Mailchimp API, which can’t be utilized by the client.

Now that we have our project set up, we will go ahead and grab our form action URL from Mailchimp. This can be found by going to your Audience > Signup Forms > Embedded Forms > Continue and then grabbing the form action URL found in the Embedded Form Code. We need to make a small change to the URL and swap /post? with /post-json?. We can now start setting up our form input and our submit function. I will add a simple form input and follow it up with a submit function. Inside the submit function, we will use our imported jsonp to invoke our action URL. ` We’ll also add a quick alert to let the user know that it was successful, and that’s it! We’ve now successfully added the email to our Mailchimp account.
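A sketch of the submit flow described above. The action URL is a placeholder — use your own embedded-form URL with /post-json? swapped in. The jsonp call itself (from the jsonp npm package) is shown in a comment, since it needs a browser; the URL-building part is runnable as-is.

```javascript
// Placeholder action URL — replace with your own Mailchimp embedded-form URL.
const ACTION_URL =
  "https://example.us1.list-manage.com/subscribe/post-json?u=abc123&id=def456";

// Build the full JSONP URL for a given email address.
function buildSubscribeUrl(email) {
  return `${ACTION_URL}&EMAIL=${encodeURIComponent(email)}`;
}

// Inside the form's submit handler, it would look roughly like:
// import jsonp from "jsonp";
// jsonp(buildSubscribeUrl(email), { param: "c" }, (err, data) => {
//   if (!err) alert("Thanks for signing up!");
// });

console.log(buildSubscribeUrl("ada@example.com"));
```

The `{ param: "c" }` option names the JSONP callback query parameter, which Mailchimp's /post-json endpoint expects to be `c`.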
Conclusion

Today, we covered how to integrate Mailchimp with a React app using the cra-rxjs-styled-components template from starter.dev. I highly recommend using starter.dev to get your project up and running quickly. Here is the code repo again for you to check out....


Class and Enum Typings for Handling Data with GraphQL

Intro

Today we’ll talk about class and enum typings for handling more complex data with GraphQL. The typings will be used on class objects to make them easier to read and understand. This blog builds on the concepts of another blog found here, in which I discussed how to create a GraphQL REST API wrapper and how to enhance your data. This article assumes you have a basic understanding of classes, typing, and enums. Here is the code repo if you want to review it while reading, or just skip ahead. With that said, we will look at the class structure we’re going to be using, the enums, and the type lookup for our customField enum with our mapper.

Class setup

The class we’re going to set up will be strictly for typing. We’re going to make a RestaurantGuest class, as this data will be restaurant themed. So in our restaurant.ts file, we will name it RestaurantGuest, and include a few different items. *Basic starting class example* ` After setting that up, we will add a type that will reference the above class. *Type example* ` This type will be used later when we do the type lookup in conjunction with our mapper function. With the above finished, we can now move on to handling our enums.

Handling enums

We’ll now create our enum to make dealing with our complex data easier. Since the above 3249258 represents FoodReservation, we will create an enum for it. Now you might be wondering why 3249258 represents FoodReservation. Unfortunately, this is an example of how data can be returned to us from a call. It could be due to the field id established in a CMS such as Contentful, or come from another source that we don’t control. This isn’t readable, so we’re creating an enum for the value. *Enum example* ` This will be used later during our type look-up.

Type lookup

Now we can start combining the enum from above with our class type, and a look-up type that we’re about to create.
So let's make another type called RestaurantGuestFieldLookup, which will have an id and value. *Look-up type example* ` Perfect, now we’ll swap out ` for ` We can now move on to creating and using our mapper. In a separate file called mapper.ts, we will create our restaurantGuestMapper function. ` Tada! Thanks to all the work above, we can easily understand and get back the data we need.

Conclusion

Today's article covered setting up a typing class, creating an enum, and doing a type lookup with a mapper for more complex data handling with GraphQL. Hopefully, the class structure was straightforward and helpful, along with the enum, type lookup, and formatting examples. If you want to learn more about GraphQL, read up on our resources available at graphql.framework.dev....
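To recap the whole setup in one runnable sketch with placeholder restaurant data: the field id 3249258 mirrors the article's opaque CMS id, the names RestaurantGuest, RestaurantGuestFieldLookup, and restaurantGuestMapper match the article, and the remaining field shapes are assumptions.

```typescript
// Enum that gives the opaque CMS field id a readable name.
enum RestaurantGuestField {
  FoodReservation = 3249258,
}

// Look-up type: the id/value shape stored under a custom field (shape assumed).
type RestaurantGuestFieldLookup = {
  id: RestaurantGuestField;
  value: "Yes" | "No";
};

// Class used strictly for typing the raw guest data.
class RestaurantGuest {
  name = "";
  email = "";
  [RestaurantGuestField.FoodReservation]: RestaurantGuestFieldLookup = {
    id: RestaurantGuestField.FoodReservation,
    value: "No",
  };
}

// Mapper that turns the raw shape into something readable.
function restaurantGuestMapper(guest: RestaurantGuest) {
  return {
    name: guest.name,
    email: guest.email,
    foodReservation: guest[RestaurantGuestField.FoodReservation].value === "Yes",
  };
}

const mapped = restaurantGuestMapper({
  name: "Ada",
  email: "ada@example.com",
  [RestaurantGuestField.FoodReservation]: {
    id: RestaurantGuestField.FoodReservation,
    value: "Yes",
  },
});
console.log(mapped);
```

The computed class property keyed by the enum member is what lets call sites write `guest[RestaurantGuestField.FoodReservation]` instead of the unreadable numeric id.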


How to Create a GraphQL Rest API Wrapper and Enhance Your Data

Intro

Today we will talk about wrapping a REST API with a GraphQL wrapper, which means that the REST API will be accessible via a different GraphQL API. We’ll be using Apollo Server for the implementation. This article assumes you have a basic understanding of REST API endpoints, and some knowledge of GraphQL. Here is the code repo if you want to review it while reading. With that said, we will be looking at why you would wrap a REST API, how to wrap an existing REST API, and how you can enhance your data using GraphQL.

Why wrap a REST API with GraphQL

There are a couple of different reasons to wrap a REST API. The first is migrating from an existing REST API, which you can learn about in detail here, and the second is creating a better wrapper for existing data. Granted, this can be done using REST. But for this article, we will focus on a GraphQL version. A reason for creating a better wrapper would be using a CMS that provides custom fields. For instance, you get a field that is listed as C_435251, and it has a value of 532. This doesn’t mean anything to us. But when looking at the CMS, these values could indicate something like “Breakfast Reservation” is set to “No”. So, with our wrapper, we can return a more readable value. Another example is connecting related types. For instance, in the code repo for this blog, we have a type Person with a connection to the type Planet. *Connection example* `

How to Wrap a REST API

Alright, you have your REST API, and you might wonder how to wrap it with GraphQL. First, you will call your REST API endpoint, which is inside your rest-api-sources file inside your StarwarsAPI class. *REST API example* ` This class will then be imported and used in the server/index file to set up your new Apollo server. *Apollo server example* ` Now, in your GraphQL resolver, you will make a person query and retrieve your starWarsAPI from it, which contains the information you want to call.
*GraphQL resolver* ` With the above done, let's move on to how to enhance your data in the resolver.

Enhancing your data

With our resolver up and running, we’ll now use it to enhance some of our data. For now, we’ll return the name we get back in a first name, last initial format. To do so, above our Query, we’ll create a Person object and put the variable name inside it. We’ll then grab the name from our Query and proceed to tweak it into the format we want. *Enhancing in resolver* ` Tada! Now, when we call our GraphQL API, the name will come back formatted as a first name and last initial.

Conclusion

Today's article covered why you would want to wrap a REST API with GraphQL for migration or to provide a better API layer, how to wrap an existing REST API with GraphQL, and how you can use the resolver to enhance your data for things like name formatting. I hope it was helpful, and will give others a good starting point. If you want to learn more about GraphQL and REST API wrappers, read up on our resources available at graphql.framework.dev....
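To make the enhancement step concrete, here is the name formatting pulled out as a plain function: "Luke Skywalker" becomes "Luke S.". How it wires into the resolver is sketched in the comment; the StarwarsAPI data-source details are assumptions based on the repo description.

```javascript
// Format a full name as first name plus last initial.
function formatName(fullName) {
  const [first, ...rest] = fullName.trim().split(/\s+/);
  if (rest.length === 0) return first; // single-word names pass through unchanged
  const lastInitial = rest[rest.length - 1][0].toUpperCase();
  return `${first} ${lastInitial}.`;
}

// In the resolver, it would be used roughly like:
// Query: {
//   person: async (_parent, { id }, { dataSources }) => {
//     const person = await dataSources.starwarsAPI.getPerson(id);
//     return { ...person, name: formatName(person.name) };
//   },
// },

console.log(formatName("Luke Skywalker")); // "Luke S."
```

Keeping the formatting in a small pure function makes it easy to unit test independently of Apollo Server.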


Remix Deployment with Architecture

Intro

Today’s article will give a brief overview of the Architect framework and how to deploy a Remix app with it. I’ll cover a few different topics, such as what Architect is, why it’s good to use, and the issues I ran into while using it. It is a straightforward process, and I recommend using it with the Grunge Stack offered by Remix. So let’s jump on in and start talking about Architect.

Prerequisites

There are a few prerequisites, and also some basic understanding that is expected going into this. The first is to have a GitHub account, then an AWS account, and finally some basic understanding of how to deploy. I also recommend checking out the *Grunge Stack* here if you run into any issues when we progress further.

What is Architect?

First off, Architect is a simple framework for Functional Web Apps (FWAs) on AWS. Now you might be wondering, "why Architect?" It offers a great developer experience, works locally, has infrastructure as code, is secured to the least privilege by default, and is open-source. Architect prioritizes speed, smart configurable defaults, and flexible infrastructure. It also allows users to test things and debug locally. It defines a high-level manifest, and turns a complex cloud infrastructure into a build artifact. It compiles the manifest into AWS CloudFormation and deploys it. Since it’s open-source, Architect prioritizes a regular release schedule and backwards compatibility.

Remix deployment

Deploying Remix with Architect is rather straightforward, and I recommend using the Grunge Stack to do it. First, we’re going to head on over to the terminal and run *npx create-remix --template remix-run/grunge-stack*. That will get you a Remix template with almost everything you need. *Generating remix template* For the next couple of steps, you need to add your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to your repo’s secrets. You’ll need to create the secrets on your AWS account, and then add them to your repo.
*AWS secrets* *GitHub secrets* The last steps before deploying include giving CloudFormation a SESSION_SECRET of its own for both staging and production environments. Then you need to add an ARC_APP_SECRET. *Adding env variables* ` With all of that out of the way, you need to run *npx arc deploy*, which will deploy your build to the staging environment. You could also do *npx arc deploy --dry-run* beforehand to verify everything is fine.

Issues I Had in My Own Project

Now let's cover some issues I had, and my solutions, in my own project. I had a project far into development, and while looking at the Grunge Stack, I was scratching my head, trying to figure out what was necessary to bring over to my existing project. I had several instances of trial and error as I plugged in the Architect-related items. For instance: the arc file, which has the app name, HTTP routes, static assets for S3 buckets, and the AWS configuration. I also had to change things like the *remix.config* file, add a *server.ts* file, and add Architect-related dependencies to the package.json. During this process, I would get errors about missing files or missing certain function dirs(), and I dumped a good chunk of time into it. While continuing to get those errors, I concluded that I would follow the Grunge Stack and its instructions for a new project. Then I would merge my old project with the new project, and that resolved my issues. While not the best way, it did get me going without wasting any more time. That resolved my immediate issue of getting my Remix app deployed to AWS, but then I had to figure out what got pushed. That was easy, since it created a lambda, API gateway, and bucket based on the items in my arc file. Then I realized, while testing it out live, that my environment variables hadn’t carried over, so I had to tweak those on AWS. My project also used a GitHub OAuth app, so that needed to be tweaked with the new URL. Then it was up and running.
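For reference, the arc manifest mentioned above (app name, HTTP routes, static assets, AWS configuration) looks roughly like this. The app name, static folder, and region are placeholders; check the app.arc generated by the Grunge Stack for the real contents.

```
@app
my-remix-app

@http
/*
  method any
  src server

@static
folder public

@aws
region us-east-1
```

Architect compiles this manifest into a CloudFormation stack, which is why the deploy produced a Lambda, an API Gateway, and an S3 bucket matching these sections.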
Conclusion

Today’s article covered a brief overview of what Architect is, why it’s good to use, how to deploy a Remix app with it, and the issues you might run into. I hope it was useful and will give others a good starting point....


Git Basics: Diff and Stash

Getting started with Git

Today’s article will cover some of the basics of Git. This article is written under the assumption that you have already made a GitHub repository or have access to one, and that you also have a basic understanding of the command line. If you haven’t opened up a GitHub account, I recommend going to GitHub, creating an account, setting up a repository, and following this guide before continuing on. Now, we’ll move on to a brief rundown of the Git commands that will be used in this article, and then follow it up with how to use each of them.

The Rundown

*Git diff* - This command is used to show changes between commits and the working tree.
*Git stash* - This command is used to stash or remove the changes made to your working directory (no worries, these haven’t gone up in smoke).
*Git stash pop* - This command is used to retrieve your most recent stash by popping it from your stash stack.
*Git stash list* - This command is used to display a list of your current stash entries.
*Git stash apply* - This command is used to reapply a git stash, but also keep it in your stash.

Git Diff

Alright, now we’re going to move on to how to do a *git diff*. I’m going to my console, and heading over to the blog repo I used last time. From here, I’m going to open up my README file with nano and edit it. After saving, I’ll use a *git status* to verify that the changes are showing up. Now, we can see that the file is edited, but say we don’t know or remember what was changed. In this instance, we can use a *git diff*, and it’ll show us the changes that were made. *Git status* *Git diff*

Git Stash

Say we decide we don’t want or need those README changes at the moment. We can use a *git stash*. With that done, we’ll use *git status*, and we can see that those changes are gone. While the changes do appear to be gone, we can easily retrieve them by doing *git stash pop*. Once again, we’ll use *git status* and verify that the changes are back.
*Git stash* *Git status* *Git stash pop* *Git status again*

Git stash list & apply

Alright, so we’re going to do a *git stash* again to get rid of our current changes. Then we’re going to edit the README with nano again, and run another *git status* to verify the changes were made. After that, we’re going to do another *git stash* to get rid of those changes. Now, with a couple of changes stashed, we’re going to do a *git stash list* to see our list of stashed changes. *Git stash* *Nano changes and git status* *Git stash again* *Git stash list* Now we want our initial changes. In order to get those, we’ll use *git stash apply 1*. This keeps the entry in your stash list, which is useful if you want to apply the same changes in multiple branches. *Git stash apply n (in our case n === 1)*

Conclusion

We made it to the end! I hope this article was helpful, and that you were able to learn about, and be more comfortable with, Git and GitHub. This article covered some of the basics of Git to try and help people starting out, or those who might need a refresher....
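To recap the whole session, here is a self-contained replay you can run in a throwaway repository, so nothing touches a real project. The file names and edit text are placeholders.

```shell
# Replay of the diff/stash/stash-list/stash-apply steps in a scratch repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name Demo
echo "hello" > README.md
git add README.md
git commit -q -m "add README"

echo "first edit" >> README.md    # change the working tree
git diff                          # shows the unstaged change
git stash                         # stash it; git status is clean again

echo "second edit" >> README.md   # a different change
git stash                         # stash that one too
git stash list                    # two entries: stash@{0} (newest) and stash@{1}

git stash apply 1                 # reapply the first edit, keeping it in the stash
git diff --stat                   # the first edit is back in README.md
```

Note that `git stash pop` would instead apply *and* drop the entry, which is the difference the article highlights.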


Getting Started with Git

Getting Started with Git

Today’s article will cover some of the basics of Git. This article is written under the assumption that you have already made a GitHub repository or have access to one, and that you have a basic understanding of the command line. If you haven’t, I recommend going to https://github.com/, creating an account, and setting up a repository.

The Rundown of Common Git Commands

- git clone *This command is used on an existing repository and creates a copy of the target repository*
- git status *This command is used to show the state of your working directory and staging area*
- git checkout *This command is used to switch between different branches or versions in a repository*
- git add *This command is used to add changes to the staging area of your current directory*
- git commit *This command is used to save your changes to your local repository*
- git push *This command is used to upload your local repository to a remote repository*
- git pull *This command is used to download the newest updates from the remote repository*

Cloning a Repo

Now that we’ve covered some of the basic Git commands, we’ll go through a few examples of how they get used. We’ll start with cloning, or copying, a remote repository for our local use. To start, you’ll navigate to your repository and find the clone button (now called Code). For this article, we’ll be using HTTPS as our cloning option. Now that you’ve copied that to your clipboard, we’ll open up the terminal and navigate to a folder that you want to clone your repository in. We’ll be using *git clone * so in this case, it will be *git clone https://github.com/WillHutt/blog.git*.

Git Status

Now, let's navigate to your new folder (it’s the name of the repo you cloned) and run a *git status* to see what’s going on.

Git Checkout

Looks like everything went okay. I’m now sitting in the master branch of my cloned repository. From here, we will leave the master branch and move to a new branch.
The general rule of thumb is to always work in a separate branch; that way, if anything goes wrong, you’re not causing errors on the master branch. We’re now going to run *git checkout -b *. *Huzzah! We are now on a new branch.*

Git Add

Now, let's make some changes in your new branch. I’m going to edit my README file. I’ll be using *nano README.md* to edit it. *Making some changes* Now that we’ve saved those changes, let's run a *git status* and see what has been changed. Sweet! It shows that we’ve made some changes. Let's add those changes to our branch. We’ll be using git add to add them to the staging area. With git add, we can either use *git add .* to add all of our changes, or we can be more specific with *git add *. After that git add, I ran a *git status*, and you can see that the text is now green, indicating it has been added.

Git Commit

Now we want to commit the newest changes to your branch. We’ll start by using *git commit*, which brings up our handy commit message prompt. Here we will give a short description of what changed. We won’t go into detail about commit message standards in this article. We’ll keep it super simple and mention that we are updating our README documentation. Awesome! Everything went as planned, and we’ve now committed our changes.

Git Push

Finally, with all of that done, we can move on to pushing the local changes to your remote repository. We’ll do this with *git push origin *, and in this case, it will be *git push origin new-branch-yay*. Here, it will ask for your username and password for GitHub, so go ahead and enter those. *Tada! We have pushed to your repository branch*

Merging

We now need to make a pull request to get that change into the master branch. To do that, go back over to GitHub and head to your repository. Once there, we will select Pull requests and click on the New pull request button. Sweet! Now that we’ve done that, make sure the branch you want to merge is selected.
That branch will be your compare, while the place you want to merge into will be the base. Woot! Now that everything is selected properly, go ahead and click the Create pull request button. This will take us to the last step of creating a pull request. As you can see in the image above, you have a variety of options. We’re just going to focus on hitting the Create pull request button. Now we have ourselves a pull request! Before we click that Merge pull request button, we’re going to select the white arrow to the right of it. This shows a few different merging options that we can choose from. For now, select Squash and merge and confirm it. This will keep things simple and clean, allowing our history of commits to be nice and orderly. That is super useful when you need to go back and see what changes were made without seeing a ton of merges associated with one pull request. Success! We have merged your pull request and updated master with your latest changes. Now we have one last thing left before we’re done with your now-merged pull request. We need to click the Delete branch button to help keep your repository branches nice and clean. Navigate back to the homepage of the remote repository on GitHub, and we can see that the master branch has been updated.

Git Pull

Now, we'll do a *git pull* to get the latest updates from our remote repository. To do that, we’ll change branches in our terminal. In my case, I’ll be moving from my new-branch-yay back to my master branch. In order to do so, we’ll use *git checkout master*. Once on master, we’ll use a *git pull* to get the latest updates to master.

Conclusion

We made it to the end! I hope this article was helpful, and that you were able to learn and be more comfortable with Git and GitHub. This article covered some of the basics of Git to try and help people starting out, or those who might need a refresher....
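To recap, the whole clone → branch → add → commit → push loop can be replayed against a local "remote" (a bare repository standing in for GitHub), so it is safe to experiment with. Paths, the branch name, and the edit text are placeholders.

```shell
# End-to-end replay of the workflow using a local bare repo as the remote.
set -e
work=$(mktemp -d)
git init -q --bare "$work/remote.git"           # stand-in for the GitHub repo

git clone -q "$work/remote.git" "$work/blog"    # git clone <url>
cd "$work/blog"
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "initial commit" # give the repo a first commit
git push -q origin HEAD                         # publish the default branch

git checkout -q -b new-branch-yay               # work on a branch, not master
echo "docs update" >> README.md                 # edit the README
git status                                      # shows README.md as a new change
git add README.md                               # stage the change
git commit -q -m "Update README documentation"
git push -q origin new-branch-yay               # push the branch to the remote
```

The merge and pull steps then happen through the GitHub UI and `git pull`, exactly as described above.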


Intro to Google DevTools: Console Panel

Intro to Google DevTools: Console Panel

Today's article will give an overview of the Console panel in Google DevTools. The Console panel allows you to perform a number of cool things, from logging output in your code with a console.log to returning recently inspected elements. Also, the Console is technically a JavaScript REPL, in which you can directly execute code and interact with the opened page. We’ll cover a number of console inputs, and how to see recently inspected elements from the Console panel.

Console methods

We’ll start this section off by going over a very simple console method called console.log. We will follow it up with a few other methods, such as .error, .table, .time, and .warn, and finish up with recently inspected elements. These console methods should prove useful for future projects.

Console.log

A console.log is used to log a message to the console panel. This can be a number of things, from a simple string like ‘Hello’ up to more complex things like outputting data from arrays and objects. In order to use this method, all you have to do is type console.log("insert message you want"). Afterwards, the message will be displayed in the console panel. *Here we have a simple string* *Next is an array example* *Finally we will show an object example*

Console.error

The console.error method is useful for returning an error message. It is very similar to the console.log mentioned above. It is very easy to use, and the message can be something as simple as ‘This case has failed’. Just like console.log, all you have to do is type console.error("insert message you want"). Also, this method does not stop code execution.

Console.table

A console.table method is pretty cool, as it can log arrays and objects as a table. This can be very helpful if you are not quite sure what the array looks like. It will output an index column, and follow it up with the information stored within the object.
I will include a couple of examples, from small to big.

*Small*

*Big*

Console.time

The console.time method is nice to use if you’re curious how long an operation takes. To use it, start by calling console.time(), and then put console.timeEnd() where you want the timer to stop. I will show a couple of examples using different loops, which gives a good indicator of which operation is quicker.

*For loop*

*While loop*

In this scenario, the for loop was a little quicker than the while loop.

Console.warn

The console.warn method will return a warning. It is similar to console.log and console.error, but it outputs your message as a warning. This method does not interrupt code execution.

$0-$4

These expressions return the five most recently inspected elements: $0 grabs the most recent, $1 the one before that, and so on. They are very useful when you don’t want to go back to find the inspected elements. I will show a couple of examples using them.

*Google example*

*Youtube example*

Conclusion

Today's article covered an overview of the Console panel in Google DevTools. Hopefully, after reading this, you will have learned something new and useful. I personally find the Console panel to be extremely helpful. I use it all the time to help verify code pieces and to quickly grab previously inspected elements.
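To make the timing comparison above concrete, here is a minimal console.time sketch. The loop bound is arbitrary, and the label passed to console.timeEnd must match the one given to console.time for the timer to stop:

```javascript
// Time a for loop and a while loop doing the same work:
console.time('for loop');
let forSum = 0;
for (let i = 0; i < 1_000_000; i++) forSum += i;
console.timeEnd('for loop'); // stops the 'for loop' timer and prints the elapsed time

console.time('while loop');
let whileSum = 0;
let j = 0;
while (j < 1_000_000) {
  whileSum += j;
  j++;
}
console.timeEnd('while loop');

// console.warn, for completeness: logged as a warning, execution continues.
console.warn('Heads up: this is only a warning');
```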

Intro to Google DevTools: Network Panel cover image

Intro to Google DevTools: Network Panel

Today's article will give an overview of the Network panel in Google DevTools. The Network panel allows you to perform a number of cool things. We’ll cover a variety of details on this tab: we’ll start by explaining a bit about the network columns, then move on to filters, follow that up with throttling, and finally wrap up by talking about call details.

Using the Network panel

I’ll be using my old D&D site as an example here and navigating to the Grand Heroes page. From here, we will open up Google DevTools and click on the Network panel. The panel should currently be empty of any data. We will reload the page with the Network panel open, which will cause data to appear in the panel. The data, or resources, shown here will be ordered by time of execution. You will also note that the resources are broken up by name, status, type, initiator, size, time, and waterfall.

*Empty Network panel*

*Resource filled Network panel*

Network columns

As mentioned above, the resources are broken up into a few different columns. The Name column is the name of the resource. Status is the HTTP response code. Type is the resource type. Initiator is the cause of the request; clicking on it will take you to the code that triggered the request. Size is how big the resource is. Time is how long the request took. Finally, Waterfall shows the different stages of the request; you can hover over it to see a general breakdown.

Filter

The Network panel can have a lot of resources shown, depending on the site. This is where filtering comes in handy. There are a few different ways to filter in the Network panel. First, we’ll make sure we’re on the Grand Heroes page from before and open up the Network panel. We’ll reload the page from here so we can pump the panel full of resources. Now we’ll find the filter options by looking for a text box that says Filter.
This option can be found on the second row of the Network panel. Here you can enter text, and anything that doesn't match will be filtered out. To the right of this, you can see other filter options like XHR, JS, and CSS, among many others. Clicking on one will filter out the other types.

*Filter bar*

*Filtering with the text box*

*Filtering by type option*

Throttling

Looking at the row above the filter section, we can see a throttling option. Here we can find different throttling settings such as No throttling, Fast 3G, Slow 3G, Offline, and even custom options. This is an amazing tool, as it allows you to test how fast your page loads. With most people having a mobile device and viewing things on the go, it’s important to know whether your site can load quickly on a mobile connection.

*Throttling toggle*

Speaking of mobile devices, let us try a Slow 3G throttle. I suspect this page will load fairly slowly.

*Slow 3G*

Well, that went about as well as I thought. The load time never bothered me before, but with throttling it was really slow. The tool shows me that I need to improve the load time of that page, which is great to know and will improve the overall user experience of the site.

Call details

We’ll now look into a resource call and see what we can find out. On the Grand Heroes page, with the Network panel open, we’ll click on the 10 Roll button. This will trigger a new resource to be called and displayed in the Network panel. We’ll click on the resource, then click on Headers, and see all kinds of information.

*Headers*

We can see a few different sections: General, Response Headers, Request Headers, and Query String Parameters. These sections hold useful information, from the Request URL and Request Method to many others. Navigating to the Preview tab of the resource shows the information retrieved from the call. In this case, you should see an array of objects.

*Preview*

Conclusion

Today's article covered an overview of the Network panel in Google DevTools. Hopefully, after reading this, you will have learned something new and useful. I personally find the Network panel to be extremely helpful.
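The columns the Network panel displays can also be reproduced programmatically, which helps connect what the panel shows to the underlying request. Below is a hypothetical sketch using fetch; the function name is mine, and the URL you pass is a placeholder, not an endpoint from the Grand Heroes page:

```javascript
// Build an object mirroring the Network panel's columns for one request.
// Works in the browser console or in Node 18+ (both provide fetch).
async function inspectRequest(url) {
  const started = Date.now();                   // rough stand-in for the Time column
  const response = await fetch(url);
  const body = await response.text();
  return {
    name: url,                                  // Name column
    status: response.status,                    // Status column (HTTP response code)
    type: response.headers.get('content-type'), // Type column
    size: body.length,                          // Size column (uncompressed body here)
    time: Date.now() - started,                 // Time column, in milliseconds
  };
}
```

Calling inspectRequest('/some/endpoint') from the Console while on a page returns the same kind of summary the panel shows, though the panel's Size and Waterfall data are more precise.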

Intro to Google DevTools: Sources Panel cover image

Intro to Google DevTools: Sources Panel

Today's article will give an overview of the Sources panel in Google DevTools. The Sources panel allows you to easily debug and see what is going on with your code. First, we’re going to cover some basics on how to use breakpoints. They are very useful, and we’ll use them by stepping through a function call.

Breakpoints

I’ll be using my old D&D site as an example here and navigating to the Grand Heroes page. After that, we’ll need to navigate to the Sources panel in Google DevTools. Once there, you should select the grandheroes.php file and see some code in the window. We will now cover how to add, remove, and toggle breakpoints on and off.

*Sources panel*

Adding breakpoints is fairly straightforward. In the grandheroes.php file on the Sources panel, you should see numbered lines. All you have to do is click on the line numbers where you would like to add breakpoints.

*Adding a breakpoint*

Removing breakpoints is just as easy. Looking at the same grandheroes.php file, we should have some blue breakpoints (which appear as red dots if you have the latest version of Chrome installed). All we have to do now is click on those breakpoints, and they will be removed. Another way of removing them is to go to the right side of the Sources panel and open up the Breakpoints tab. From here, you can right-click and select “Remove breakpoint” or “Remove all breakpoints”.

*Removing a breakpoint*

*Removing a breakpoint with Breakpoint tab*

Toggling breakpoints on and off is pretty straightforward. After setting breakpoints on the grandheroes.php page again, we can navigate over to the Breakpoints tab on the right. From here, we can click on the checkboxes to enable or disable a breakpoint. We can also right-click on the Breakpoints tab and choose “*Deactivate breakpoints*” or “*Disable all breakpoints*”.
Finally, above the Breakpoints tab at the very top, you should see an arrow pointing to the right with a diagonal line through it. Clicking this arrow also toggles the breakpoints on and off.

*Toggling a breakpoint from Breakpoints tab*

*Toggling a breakpoint with the arrow*

Following a function call

Now that we know how to set, remove, and toggle breakpoints, we will use them to follow a function call. We’ll start by making sure we’re on the Grand Heroes page, and then we'll open up our Sources panel. After that, we’ll navigate to the grandheroes.php file in the Sources panel and click on lines 63 and 65.

*Line 63 and 65*

From here, we’ll click on the *1 Roll* button, which will trigger our function call, *oneRoll()*. This should hit our first breakpoint on line 63, where we have an *if* case. We can now verify that the *if* case is true and the data does in fact have a length greater than 0.

*Breakpoint 1*

We’ll continue to the next breakpoint by resuming script execution. You can do this by pressing F8 or clicking the bar-and-right-arrow symbol in the top right corner above the Watch tab. Here we can see that it enters the *if* case and will start to output the data.

*Breakpoint 2*

After seeing this, we’ll resume script execution again, which will move us past our last breakpoint. We will see that the page has now updated with the information received from our function call.

*Data loaded from function call*

Conclusion

Today's article covered an overview of the Sources panel in Google DevTools. Hopefully, after reading this, you will have learned something new and useful. I personally find the Sources panel to be extremely helpful. I use it daily for work, and it always helps me get through my code problems.
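The guard paused on at the first breakpoint can be sketched in plain JavaScript. This is a hypothetical reconstruction for illustration only, not the actual code from grandheroes.php; the function and variable names are assumptions:

```javascript
// Hypothetical sketch of the kind of check paused on at breakpoint 1:
// only render results when the fetched data is non-empty.
function renderRolls(data) {
  // A `debugger;` statement here would pause execution just like a
  // breakpoint clicked in the Sources panel (when DevTools is open):
  // debugger;
  if (data.length > 0) {
    // breakpoint 2 lands inside the if case, where the data is output
    return data.map((hero) => `Rolled: ${hero}`);
  }
  return [];
}
```

With breakpoints set on the *if* line and the line inside it, calling renderRolls(['Fighter']) pauses at each in turn, mirroring the walkthrough above.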