
How to Create a GraphQL Rest API Wrapper and Enhance Your Data

Intro

Today we will talk about wrapping a REST API with GraphQL, meaning we will expose an existing REST API through a new GraphQL API. We’ll be using Apollo Server for the implementation. This article assumes you have a basic understanding of REST API endpoints and some knowledge of GraphQL. Here is the code repo if you want to review it while reading. With that said, we will look at why you would wrap a REST API, how to wrap an existing REST API, and how you can enhance your data using GraphQL.

Why wrap a REST API with GraphQL

There are a couple of different reasons to wrap a REST API. The first is migrating from an existing REST API, which you can learn about in detail here, and the second is creating a better wrapper for existing data.

Granted, this can be done using REST, but for this article we will focus on the GraphQL version. One reason for creating a better wrapper is a CMS that exposes custom fields. For instance, you might get a field listed as C_435251 with a value of 532. On its own, that means nothing to us, but in the CMS those values could indicate that “Breakfast Reservation” is set to “No”. With our wrapper, we can translate that into a more readable value; a minimal sketch of such a mapping follows. Another example is connecting related types: in the code repo for this blog, we have a type Person with a connection to the type Planet, shown in the connection example after the sketch.
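Here is how a resolver could translate that raw CMS field into a readable one. Note that the Reservation type, the breakfastReservation field name, and the 531 id for “Yes” are all hypothetical, invented for illustration; only the 532 = “No” pairing comes from the example above.

Custom field mapping example

const BREAKFAST_RESERVATION_VALUES = {
    531: "Yes", // hypothetical id for "Yes"
    532: "No",  // matches the CMS example above
};

const resolvers = {
    Reservation: {
        // Expose the cryptic CMS field under a readable name.
        breakfastReservation: (entry) =>
            BREAKFAST_RESERVATION_VALUES[entry.C_435251] ?? null,
    },
};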

Connection example

type Person {
    """The name of this person."""
    name: String

    """A planet that this person was born on or inhabits."""
    homeworld: Planet
}

type Planet {
    """The name of this planet."""
    name: String
}

How to Wrap a REST API

Alright, you have your REST API, and you might be wondering how to wrap it with GraphQL. First, you will call your REST API endpoints from the StarwarsAPI class, which lives in your rest-api-sources file.

REST API example

const axios = require('axios');

class StarwarsAPI {
    constructor() {
        // SWAPI, the public Star Wars REST API we are wrapping
        this.axios = axios.create({
            baseURL: 'https://swapi.dev/api/',
        });
    }

    async getPerson(id) {
        const { data } = await this.axios.get(`people/${id}`);
        return data;
    }

    async getHomeworld(id) {
        const { data } = await this.axios.get(`planets/${id}`);
        return data;
    }
}

module.exports = StarwarsAPI;
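As a quick sanity check, you can use the class directly; at the time of writing, SWAPI’s people/1 endpoint returns Luke Skywalker.

StarwarsAPI usage example

const api = new StarwarsAPI();
api.getPerson(1).then((person) => {
    console.log(person.name); // "Luke Skywalker"
    console.log(person.homeworld); // a planet URL such as "https://swapi.dev/api/planets/1/"
});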

The class above is then imported and used in the server/index file to set up your new Apollo server.

Apollo server example

const { ApolloServer } = require('apollo-server');

const StarwarsAPI = require('./rest-api-sources/starwars-rest-api');
// The typeDefs and resolvers paths are illustrative; see the repo for the actual layout.
const typeDefs = require('./type-defs');
const resolvers = require('./resolvers');

const server = new ApolloServer({
    typeDefs,
    resolvers,
    dataSources: () => ({}), // unused here; the API instance is passed via context instead
    context: () => {
        return {
            starwarsAPI: new StarwarsAPI(),
        };
    },
});
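To actually start listening for requests, here is a minimal sketch; the port is an assumption, and Apollo Server defaults to 4000 if you omit it.

server.listen({ port: 4000 }).then(({ url }) => {
    console.log(`GraphQL server ready at ${url}`);
});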

Now, in your GraphQL resolvers, you will create a person query that pulls the starwarsAPI out of the context and uses it to fetch the person you want.

GraphQL resolver

const resolvers = {
    Query: {
        person: async (_, { id }, { starwarsAPI }) => {
            return starwarsAPI.getPerson(id);
        },
    },
};
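This is also where the Person-to-Planet connection from earlier can be wired up. Here is a sketch that assumes SWAPI’s response shape, where a person’s homeworld field comes back as a URL such as https://swapi.dev/api/planets/1/.

Homeworld connection example

Person: {
    homeworld: ({ homeworld }, _, { starwarsAPI }) => {
        if (!homeworld) {
            return null;
        }
        // Pull the numeric id off the end of the SWAPI planet URL.
        const id = homeworld.split("/").filter(Boolean).pop();
        return starwarsAPI.getHomeworld(id);
    },
},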

With the above done, let's look at how to enhance your data in the resolver.

Enhancing your data

With our resolver up and running, we’ll now use it to enhance some of our data. For now, we’ll format the name we get back as a first name and last initial. To do so, we’ll add a Person object above our Query in the resolvers and handle the name field inside it. We’ll grab the name from the queried data and tweak it into the format we want.

Enhancing in resolver

Person: {
    name: ({ name }) => {
        if (!name) {
            return null;
        }
        const [first, last] = name.split(" ");

        if (last === undefined) {
            return first;
        }
        return `${first} ${last[0].toUpperCase()}.`;
    },
},

Tada! Now, when we call our GraphQL API, the name will come back formatted as a first name and last initial.
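For example, a query like this one, a sketch against the schema above, would return "Luke S." for SWAPI’s person 1, Luke Skywalker.

Query example

query {
    person(id: 1) {
        name
        homeworld {
            name
        }
    }
}

The homeworld field here relies on the connection resolver sketched earlier; with it in place, you would also get back the planet’s name (Tatooine, in Luke’s case).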

Conclusion

Today's article covered why you would wrap a REST API with GraphQL, whether to migrate off of it or to provide a better API layer; how to wrap an existing REST API with Apollo Server; and how you can use resolvers to enhance your data with things like name formatting. I hope it was helpful and gives you a good starting point.

If you want to learn more about GraphQL and REST API wrappers, read up on our resources available at graphql.framework.dev.

