
Integrating Apollo GraphQL into a Node server with JSON Web Tokens - Part 2

February 19, 2018

This is part 2 of a 3 part post about a recent project of mine: a Preact boilerplate with JWT authentication using Apollo GraphQL. Instead of patching together different concepts and technologies on a per-project basis, I’ve made this helpful starter pack which is great for testing and prototyping JWT based auth with Apollo.

Full code can be found in a link at the bottom of this post.

  • Part 1: Preact using Preact CLI, with Node based JWT authentication - view here
  • Part 2: Integrating Apollo’s GraphQL for server communication - [you are here]
  • Part 3: Integrating Apollo’s GraphQL for client communication and data management - view here

Post overview

In the previous post I wrote about how I set up the Preact front end of the boilerplate with Redux, and connected it to a back end server which handled authentication using JSON Web Tokens (JWTs). Redux is great for managing the global application state and handled the JWTs really well; however, the goal of this boilerplate is to handle the authentication using Apollo’s GraphQL.

This post will cover:

  • Setting up a basic Apollo GraphQL server.
  • How an existing Node server architecture is updated to accommodate GraphQL authentication.
  • Protecting the /graphql endpoint with JWTs.

Technologies used

Apollo is a “family of technologies” which helps you consume a GraphQL API in an application. GraphQL is pretty amazing once you try it out. It lets front end code easily request information in any desired format using a single API endpoint, removing the need for a verbose REST setup and the complexity involved in updating front end code due to back end API changes.

There are alternatives to Apollo, such as Relay, but I decided to go with Apollo as it has a larger following, which means more and, in my opinion, better documentation.

Apollo requires setup on both the server and client side. This part covers the server:

Apollo Server

The server side implementation of Apollo sits alongside a Node server, and is called by the Apollo client. The Apollo client sends a GraphQL query to the Apollo server, the server gathers the data from existing data services (external sources or MongoDB, for example), and the data is sent back in the format requested by the client.

Setup of GraphQL server

Apollo Server can be integrated into many Node frameworks. I’m using the Express framework in the boilerplate, along with the relevant Apollo packages:

npm install graphql graphql-tools apollo-server-express

The server can be set up and tested independently of the front end, as it can be queried separately using GraphiQL. I’ll cover the server side completely first, then move on to the client side setup with Preact.

Similar to having a Mongoose (or similar) schema set up to define data structures, GraphQL also requires that the data structures are pre-defined, and each data point is given a type. This can be a scalar type (GraphQL lingo for Strings, numbers, Booleans etc.) or a custom type, defined by database entries such as Users or Refresh Tokens. The schema for this boilerplate looks like this:

// models/gql/gqlSchema.js

const { makeExecutableSchema } = require("graphql-tools")
const resolvers = require("./resolvers")

const typeDefs = `
# Entry points
type Query {
	user(email: String): User # returns type User
	allUsers: [User] # returns array of type User
	allLoginActivity: [LoginActivity] # returns array of type LoginActivity
}

# Custom types
type User {
	id: String
	first: String
	last: String
	email: String
	password: String
	refreshToken: String
	thisLoginActivity: [LoginActivity]
}

type LoginActivity {
	thisUser: User
	activityType: String
	time: String
}
`

const schema = makeExecutableSchema({
  typeDefs,
  resolvers,
})
module.exports = schema

The typeDefs constant above is written in the actual GraphQL schema language - it’s not JavaScript (note that comments inside it use #). It defines the query and the types which the data will conform to when it’s queried from the relevant places (the existing MongoDB database in this case). The type Query {} block is the “entry point” which the query will be built from, and the custom types below form the schema (structure) of the data which will be queried.

You’ll notice the makeExecutableSchema() function from the graphql-tools module takes an object with two properties - the typeDefs constant and the resolvers import. In this boilerplate, the resolvers are located in a directory called gql which holds the GraphQL related files.

The resolvers are functions associated with the types defined above, and they return the data for a query - in this case from MongoDB:

// models/gql/resolvers.js

const Users = require("../Users") //MongoDB model
const LoginActivity = require("../LoginActivity") //MongoDB model

const resolvers = {
  Query: {
    user(root, args) {
      return Users.findOne({
        email: args.email,
      })
    },
    allUsers() {
      return Users.find()
    },
    allLoginActivity() {
      return LoginActivity.find()
    },
  },
  User: {
    thisLoginActivity(user) {
      return LoginActivity.find({
        userID: user._id,
      })
    },
  },
  LoginActivity: {
    thisUser(activity) {
      return Users.findOne({
        _id: activity.userID,
      })
    },
  },
}

module.exports = resolvers

The Apollo documentation on resolvers is a great place to learn this in depth. In my opinion it was the most complex part of GraphQL to understand, especially how the resolvers relate to the query. I’ll touch on it here, but the official documentation and other sources cover it in much more detail than the scope of this post allows.

The above may look confusing at first, but it makes more sense when you look at the structure of a GraphQL query:

// the GraphQL query

query {
  user(email: "user@example.com"){
    id
    first
    last
    email
    password
    refreshToken
    thisLoginActivity{
      time
    }
  }
}

The last 3 code blocks above show the 3 parts of server side GraphQL: the schema (the structure of the data), the resolvers (returning the data from the database based on the query) and the GraphQL query itself (sent from the client).

Let’s start with the GraphQL query. We are querying a user whose email matches the value passed as the parameter:

user(email: "user@example.com")

This user() in the GraphQL query is specified in the resolver in the Query block, alongside all other potential queries, such as allUsers(). In the resolver, user() simply returns a Mongoose query to find a user by email.

Based on the schema in the gqlSchema.js file, the user() query inside the type Query block returns a User type, which is defined just a few lines below under the “Custom types” comment. This means that when querying the user, based on the schema, we are able to access all of the defined properties in the User type such as id, first name, last name, email etc. which are returned from the resolver - but the GraphQL query defines what is actually returned to the client. Perhaps we only want the user’s name, for example; then we could write the following query:

// the GraphQL query

query {
  user(email: "user@example.com"){
    first
    last
  }
}
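
For illustration, the response to that query contains only the requested fields. A sketch of the shape (the values here are made up):

// example GraphQL response (illustrative values)

{
  "data": {
    "user": {
      "first": "Jane",
      "last": "Doe"
    }
  }
}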

So we are retrieving basic information from the resolver via the GraphQL query, based on the schema. Taking this a step further in our example, we’re able to query data about the user from a different MongoDB collection (and even a different data source if we wanted) all within the same query. This is the powerful part of GraphQL. In our example we are querying LoginActivity data related to the user, which is held in its own MongoDB collection.

The user(email: "user@example.com") query returns a User type, and inside the User type defined in gqlSchema.js is the following schema definition:

thisLoginActivity: [LoginActivity]

This means the user will have an associated array of type LoginActivity, which is also defined in the gqlSchema.js file. The question here is: how does the thisLoginActivity field in the GraphQL query know to get data about that particular user? The answer is in how the GraphQL query is structured.

// the GraphQL query

query {
  user(email: "user@example.com"){
    id
    first
    last
    email
    password
    refreshToken
    thisLoginActivity{ # User object from the parent resolver is passed to this
      time
    }
  }
}

The query we’re using in our example shows that the “root value” passed on from the original user query is a User object, containing the user details such as id, name etc. This is then passed as the first argument to thisLoginActivity in the User part of the resolver, where it’s used to get the specific data from MongoDB - in this case the login activity of that user.
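
To make that link explicit, here’s the relevant part of the resolver from earlier again, with comments showing where the parent User object comes in (the comments are the only addition):

// models/gql/resolvers.js (excerpt)

User: {
  // "user" is the parent value: the User document returned by the user() query resolver
  thisLoginActivity(user) {
    // the parent user's _id is used to look up that user's login activity
    return LoginActivity.find({
      userID: user._id,
    })
  },
},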

Aside from basic queries which simply retrieve information based on the given arguments, GraphQL also accepts mutations. These are very similar to queries, except data is updated or created instead of just read. Here’s a simple example of what a mutation resolver may look like for registering a user in the system:

// models/gql/resolvers.js
...

const resolvers = {
  Query: {
    user(root, args) {
      return Users.findOne({
        email: args.email
      });
    },
    allUsers() {
      return Users.find();
    },
    allLoginActivity() {
      return LoginActivity.find();
    }
  },
  Mutation: {
    registerUser(root, {
      first,
      last,
      email,
      password
    }) {
      ... //Mongoose queries to register user
    }
  }
  ...
};

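One thing worth noting: for the registerUser resolver above to be reachable, the schema also needs a Mutation entry point alongside the Query type. The exact definition in the boilerplate may differ, but a minimal sketch of the addition to typeDefs (assuming the mutation returns the newly created User) would be:

# addition to typeDefs in models/gql/gqlSchema.js

type Mutation {
	registerUser(first: String, last: String, email: String, password: String): User
}

The mutation itself is then written much like a query, passing the new user’s details as arguments and selecting the fields to return.
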
In order to test the server once the GraphQL layer is in place as your API, you’re able to use the fantastic GraphiQL tool to send queries to your server and get the response just as you would from the client.
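
The apollo-server-express package used here (the 1.x release that provides graphqlExpress) also exports a graphiqlExpress helper for serving the tool from your own server. A minimal sketch, assuming the API itself is mounted at /graphql as it is later in this post:

// sketch: serving GraphiQL in development (app is the Express instance)

const { graphiqlExpress } = require("apollo-server-express")

if (process.env.NODE_ENV !== "production") {
  // GraphiQL IDE available at /graphiql, pointed at the GraphQL endpoint
  app.use("/graphiql", graphiqlExpress({ endpointURL: "/graphql" }))
}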

Migrating over existing functionality

The above section covered the most basic setup of a GraphQL/Apollo server, but as you can imagine, the resolver file will become huge if all the Mongoose queries are added into that one file. The entire old REST API logic (or new code if you’re starting from scratch) may end up being consumed by the Apollo server, so structuring this well is important.

If you’ve read my first post in this series and the accompanying Github branch, you’ll know that this project started off with a Node/Express oriented architecture, with much of the Express router responding to API requests using Express’s req, res and next parameters:

// controllers/auth.api.js

router.post(
  "/signup",
  (req, res, next) => {
    Users.find({
      email: req.body.email,
    })
      .then((user) => {
        let passwordHash = bcrypt.hashSync(req.body.password.trim(), 12)
        let newUser = _.pick(req.body, "first", "last", "email")
        newUser.password = passwordHash
        return Users.create(newUser)
      })
      .then((newUser) => {
        req.user = newUser
        req.activity = "signup"
        next()
      })
  },
  auth.createToken,
  auth.createRefreshToken,
  auth.logUserActivity,
  (req, res) => {
    res.status(201).send({
      success: true,
      authToken: req.authToken,
      refreshToken: req.refreshToken,
    })
  }
)

I’ve left out things like error handling from the above, but the important part is that the data flows through the middleware chain via the next() calls, accumulating new data on the req object each time, and eventually sending the response back to the user with the user’s auth and refresh tokens. The problem is that this pattern can’t be used (efficiently, or in the way you’d probably like it to be used) with GraphQL.

Instead, all of the back end functions such as createToken() and createRefreshToken() must be refactored to work independently in a functional way with no side effects, relying only upon the parameters passed to them rather than the Express request object as in the above snippet.
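
For example, a parameter-driven createToken() might look something like the sketch below. The payload fields and expiry time are assumptions for illustration; the boilerplate’s actual implementation lives in controllers/auth.js:

// controllers/auth.js (sketch)

const jwt = require("jsonwebtoken")
const config = require("../config") // assumed location of the JWT secret

// Creates an auth token from a user record, relying only on the parameter passed in
const createToken = user => {
  return Promise.resolve(
    jwt.sign({ id: user._id, email: user.email }, config.secret, {
      expiresIn: "15m", // assumed expiry
    })
  )
}

module.exports = { createToken }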

Once refactored, the above snippet from the old style Node API may look something like this:

// controllers/auth.api.js

router.post("/signup", (req, res) => {
  auth
    .registerUser(
      req.body.first,
      req.body.last,
      req.body.email,
      req.body.password
    )
    .then((user) => {
      let authToken = auth.createToken(user)
      let refreshToken = auth.createRefreshToken(user)
      let userActivityLog = auth.logUserActivity(user, "signup")
      return Promise.all([authToken, refreshToken, userActivityLog]).then(
        (tokens) => {
          return {
            user,
            authToken: tokens[0],
            refreshToken: tokens[1],
          }
        }
      )
    })
    .then(() => {
      res.send({
        success: true,
      })
    })
    .catch((err) => {
      res.send(errors.errorHandler(err))
    })
})

This is now in a better position to be used with Apollo resolvers.

I’ve left the old style Node API calls in the boilerplate for reference.

Rather than having all these function calls and logic inside the resolvers, a common approach is to abstract this information into separate functions which are imported/required in the resolver file. As described on the Apollo GraphQL website, these can be referred to as models and connectors.

I have split mine up slightly differently though. The above snippet (the refactored signup flow) is abstracted into its own model-type file called modelAuth.js. This means I can include the above process with a single function call in the resolver:

// models/gql/resolvers.js
...
const {
  registerUserModel
} = require('./modelAuth'); //include the register user function

const resolvers = {
  Query: {
    user(root, args) {
      return Users.findOne({
        email: args.email
      });
    },
    allUsers() {
      return Users.find();
    },
    allLoginActivity() {
      return LoginActivity.find();
    }
  },
  Mutation: {
    registerUser(root, {
      first,
      last,
      email,
      password
    }) {
      return registerUserModel(first, last, email, password); // register abstraction here
    }
  }
  ...
};

This included function (as shown above) relies on a few functions which are imported from another file called controllers/auth.js. As mentioned, this file has also been updated to remove the dependency on the Express res and req objects, and it now forms what could be seen as a connector, handling the actual queries and being responsible for connecting to whichever back end service holds the data (MySQL, MongoDB, Postgres etc).
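
Putting that together, a sketch of what the registerUserModel function in models/gql/modelAuth.js might look like, reusing the same refactored functions as the signup route above (the boilerplate’s version includes more validation and error handling):

// models/gql/modelAuth.js (sketch)

const auth = require("../../controllers/auth")

// Registers a user and returns the user along with new auth and refresh tokens
const registerUserModel = (first, last, email, password) => {
  return auth.registerUser(first, last, email, password).then(user => {
    return Promise.all([
      auth.createToken(user),
      auth.createRefreshToken(user),
      auth.logUserActivity(user, "signup"),
    ]).then(([authToken, refreshToken]) => ({
      user,
      authToken,
      refreshToken,
    }))
  })
}

module.exports = { registerUserModel }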

It’s easy to overthink the file structure of applications with many moving parts like this. The structure outlined here suits a smaller application, but as an app grows, a different directory structure will inevitably make sense. This will come naturally depending on when and how your app grows.

Protecting GraphQL endpoint

Everything mentioned above can be tested without a client-side system by using GraphiQL, but when it comes to implementing the client side we need to secure the connection to GraphQL.

In the same way that our API endpoints in controllers/auth.api.js are protected by using jsonwebtoken to verify the auth token in the header of the request, we can (and must) protect the GraphQL part of the server-side system. Without protecting this, anyone would be able to submit GraphQL requests and alter the database.

I’ve abstracted the entry point to GraphQL away from the main server code so it’s in a file config/graphql.js. In order to keep the code clean and separated, most of the authorisation logic will be held here.

This is a basic implementation of the graphqlExpress middleware from Apollo which is used to connect Apollo’s GraphQL instance to the /graphql endpoint:

// config/graphql.js

const {
  graphqlExpress
} = require('apollo-server-express');
const bodyParser = require('body-parser'); // needed for bodyParser.json() below
const schema = require('../models/gql/gqlSchema'); // the executable schema defined earlier

const connect = app => {
  app.use(
    '/graphql',
    bodyParser.json(),
    graphqlExpress({
      schema
    }),
    ...
  );
};
...

This allows all queries to pass through the middleware, but to secure it using JWTs we must update a few bits. Luckily this is pretty easy with Express, as there’s a popular module called express-jwt which protects the endpoint by looking for a Bearer token in the authorisation header, attaching a req.user object after a successful authorisation. The token in the header is verified, as usual, against the secret key:

// config/graphql.js

const {
  graphqlExpress
} = require('apollo-server-express');
const ejwt = require('express-jwt');

const connect = app => {
  app.use(
    '/graphql',
    bodyParser.json(),
    ejwt({
      secret: config.secret
    }),
    graphqlExpress({
      schema
    }),
    ...
  );
};
...

Now this does protect the endpoint as expected, because unless a valid token is passed, an error is thrown when accessing the /graphql endpoint. But what if users without a token need to access the endpoint and perform a query or mutation - for example, logging in or registering? Again, this is easily implemented:

// config/graphql.js

...

const connect = app => {
  app.use(
    '/graphql',
    bodyParser.json(),
    ejwt({
      secret: config.secret,
      credentialsRequired: false
    }),
    graphqlExpress(req => ({
      schema,
      context: {
        user: req.user
      }
    })),
    ...
  );
};
...

express-jwt has an optional parameter called credentialsRequired which, when set to false, allows requests without a JWT in the header (such as login and registration) through instead of rejecting them. If the header does contain a JWT, its contents - traditionally the identification of a user via their ID or email address - are passed to the rest of the application on the req.user object. This can then be used in the next middleware function, graphqlExpress(), to set a context object, which can be accessed in every resolver.

You may be thinking “why would you not want to require a JWT at all?”. Well, if a JWT is not supplied, we can still protect the GraphQL resolvers by returning an error whenever there’s no user set in the context. But if a JWT is supplied to the /graphql endpoint, it must be a valid one (the secret must be able to verify its authenticity), so we know that the data inside the JWT has not been tampered with.

The context object is now the key to keeping GraphQL safe. Register, login and other resolvers which don’t require authorisation can be used as normal without any checks, and all other resolvers can have a check such as:

// models/gql/resolvers.js

const resolvers = {
  Query: {
    allUsers(_, {}, context) {
      if (!context.user) {
        return Promise.reject('Unauthorized');
      }
      return Users.find();
    }
    ...
  }
}

In a production application it’s likely that specific users should only be able to access certain features of your application, and this logic can now be implemented by using the user’s ID within the context object. Of course, having all this logic means resolvers will be littered with some pretty ugly checks, so it can be abstracted into the models discussed earlier:

// models/gql/resolvers.js

...

const resolvers = {
  Query: {
    allUsers(_, {}, context) {
      return checkUser(context).then(authedUser => {
        return Users.find();
      });
    }
  }
	...
}

// models/gql/modelAuth.js

let checkUser = context => {
  if (!context.user) {
    return Promise.reject('Unauthorized');
  }
  return Promise.resolve(context.user);
};

Full code can now be found on Github for my preact-jwt-apollo-boilerplate.

I have split the project into Git branches which show each stage of development.

Final thoughts and improvements

You’ll see the boilerplate code has much more checking and logic around the bits discussed above as I’ve left out some verbose but important points, so do check that out if you’re unsure.

Also, the entire boilerplate really is just a starting point. As your application grows, so may the ideas mentioned in this post, and the way the code is structured and used. Specifically, I have not included any information about Apollo/GraphQL subscriptions, which are another key part of using GraphQL. There are many other resources which cover this in enough detail.

If you have any questions, please get in touch!

