Technical Guide

Building Authenticated MCP Servers

Written by

Ayushman

24th Sept, 2025
5 min read
[Cover image: Building Authenticated MCP Servers]

Table of Contents

  • The Beginning: A Real Problem
  • The Architecture
  • Authentication
  • Setting Up the Auth Server
  • Bringing It All Together
  • Deployment Options
  • Configuration Examples
  • Result


The Beginning: A Real Problem

Recently, I had the opportunity to work on a proof of concept where we needed to let LLM chatbots fetch courses from our catalog. With that context, the bots could recommend options tailored to each person’s activity, goals, and learning history.

That's when I came across MCP - the Model Context Protocol. It’s a simple but powerful way to personalise LLMs by giving them access to your own data or functionality whenever they need it.

In ChatGPT and Claude, all the services I use daily are connected through what they call connectors. Under the hood, those connectors are powered by MCP. Thanks to that, ChatGPT can now pull insights directly from my Notion pages or Google Calendar whenever I ask.

And just a few weeks ago, ChatGPT introduced developer mode for custom MCP servers, which means you can now add your own tools to GPT as well.

The Architecture

MCP standardises how LLMs receive context and function-calling capabilities, giving every host a single interface to build against. It defines three roles:

  • MCP Host: The chatbot with LLM capabilities. It can be ChatGPT, Claude, or even our own custom chatbot powered by an LLM.
  • MCP Client: The component that manages one-to-one connections to MCP servers and retrieves context for the host.
  • MCP Server: Our program that actually provides the context and tools.
Fig.1 - MCP architecture: the Host (LLM), the Client (connection manager), and the Server (tools + context)

The basic setup is refreshingly straightforward. You register your tools and provide context - a title, description, parameters, and a callback function. Here's what that looks like in practice:
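A minimal sketch of that shape. Real servers use the official MCP SDK (@modelcontextprotocol/sdk); to keep this standalone, the sketch below models registration with plain objects plus the kind of dispatcher a client drives, and the course-catalog tool is illustrative:

```javascript
// Sketch of tool registration: a name, a title, a description, a parameter
// schema, and a callback. The real SDK's callbacks are async and return
// structured content; this is simplified to stay self-contained.
const tools = new Map();

function registerTool(name, { title, description, parameters }, callback) {
  tools.set(name, { title, description, parameters, callback });
}

// Hypothetical tool mirroring the course-catalog proof of concept above.
registerTool(
  "search_courses",
  {
    title: "Search Courses",
    description: "Find courses in the catalog matching a query",
    parameters: { query: { type: "string" } },
  },
  ({ query }) => {
    // In a real server this would hit the catalog API.
    const catalog = ["Intro to TypeScript", "OAuth 2.1 in Practice"];
    return catalog.filter((c) => c.toLowerCase().includes(query.toLowerCase()));
  }
);

// On a tools/call request, the server looks up the tool and runs its callback.
function callTool(name, args) {
  const tool = tools.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.callback(args);
}
```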

Authentication

The example above is neat, but it isn’t realistic. We don’t want just anyone sending Slack messages or pulling data from our servers - proper authentication is a must.

MCP uses OAuth 2.1 with PKCE for authentication, but with some extra rules that make it different from what you might expect. Specifically, MCP requires discovery documents, resource indicators, and strict audience checks to keep tokens bound to the right server.

Setting Up the Auth Server

To secure our MCP server, we need an authorisation layer. Without it, anyone could hit our endpoints and start pulling data or sending requests. MCP builds this on top of OAuth 2.1, so the flow might look familiar if you’ve worked with modern auth systems.

Fig.2 - MCP authentication flow: OAuth tokens, authorization server, and secure access to tools

1. Expose a .well-known Endpoint

The first step is to let clients know how to talk to our authorization server. MCP follows the OAuth 2.0 Protected Resource Metadata (RFC 9728) and Authorization Server Metadata (RFC 8414) standards, which means clients look for metadata at predictable paths under .well-known.

For example, a request to:

https://our-domain.com/.well-known/oauth-authorization-server

should return JSON describing metadata for our auth server’s capabilities:
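The exact fields depend on your auth server; a representative response, with illustrative values, might look like:

```json
{
  "issuer": "https://our-domain.com",
  "authorization_endpoint": "https://our-domain.com/authorize",
  "token_endpoint": "https://our-domain.com/token",
  "jwks_uri": "https://our-domain.com/.well-known/jwks.json",
  "response_types_supported": ["code"],
  "grant_types_supported": ["authorization_code", "refresh_token"],
  "code_challenge_methods_supported": ["S256"],
  "scopes_supported": ["openid", "profile"]
}
```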

This document tells the MCP client where to send users for login (/authorize), where to swap codes for tokens (/token), and which scopes and flows are supported.

2. /authorize – Starting Authentication

The /authorize endpoint is where the login flow begins. When the MCP host starts that flow, this endpoint redirects the user to our IDP (Google, Notion, GitHub, etc.).
We need to generate a PKCE challenge for security and include the resource=https://our-domain.com/mcp parameter so the issued token is bound to our MCP server only.

3. /idp/callback – Handling Redirects

After the user logs into the IDP (Identity Provider), they’re redirected back to our server at a callback endpoint, such as /idp/callback. This is where we exchange the authorization code for an access token. That token is proof of who the user is and what they’re allowed to do.
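The exchange itself is a form-encoded POST to the IDP's token endpoint. A sketch of just the request construction (client_id is illustrative; with PKCE, the saved code_verifier takes the place of a client secret):

```javascript
// Builds the token-exchange request the /idp/callback handler sends.
function buildTokenExchangeRequest({ code, codeVerifier, redirectUri }) {
  const body = new URLSearchParams({
    grant_type: "authorization_code",
    code,
    redirect_uri: redirectUri,       // must match the /authorize redirect_uri
    client_id: "example-client",     // hypothetical
    code_verifier: codeVerifier,     // proves we started this flow (PKCE)
  });
  return {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: body.toString(),
  };
}

// Used with fetch against the IDP's token endpoint, e.g.:
// const res = await fetch(idpTokenUrl, buildTokenExchangeRequest({ ... }));
```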

4. /token – Issuing and Refreshing Tokens

Finally, our /token endpoint issues tokens (access + refresh) and handles renewals. Every request the MCP client makes to your server will carry:

Authorization: Bearer <access-token>

Our server must validate:

  • Signature (against the keys published at jwks.json)
  • Issuer (https://our-domain.com)
  • Audience (https://our-domain.com/mcp)

If the token is invalid or expired, return 401.
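A sketch of the claim checks alone, assuming the signature has already been verified against jwks.json (in practice a library such as jose handles that part):

```javascript
// Decodes a JWT payload and checks issuer, audience, and expiry.
// Signature verification is deliberately out of scope here.
function checkClaims(token, { issuer, audience, now = Date.now() / 1000 }) {
  const parts = token.split(".");
  if (parts.length !== 3) return { ok: false, reason: "malformed token" };

  const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));

  if (payload.iss !== issuer) return { ok: false, reason: "wrong issuer" };

  // aud may be a string or an array per the JWT spec.
  const aud = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
  if (!aud.includes(audience)) return { ok: false, reason: "wrong audience" };

  if (typeof payload.exp === "number" && payload.exp < now) {
    return { ok: false, reason: "expired" };
  }
  return { ok: true, payload };
}
```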

5. /mcp – Protecting the MCP Endpoint

We need to protect our MCP endpoint with middleware that checks the token before connecting the client.
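A framework-agnostic sketch of such middleware; verifyToken stands in for the full signature, issuer, and audience check:

```javascript
// Wraps the /mcp handler: no valid bearer token, no connection.
function requireAuth(verifyToken, handler) {
  return (req, res) => {
    const header = req.headers["authorization"] || "";
    const token = header.startsWith("Bearer ") ? header.slice(7) : null;

    if (!token || !verifyToken(token)) {
      res.statusCode = 401;
      // Points clients at our protected-resource metadata (RFC 9728).
      res.setHeader(
        "WWW-Authenticate",
        'Bearer resource_metadata="https://our-domain.com/.well-known/oauth-protected-resource"'
      );
      res.end("Unauthorized");
      return;
    }
    return handler(req, res);
  };
}
```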

Bringing It All Together

Now that we’ve got authentication and the /mcp endpoint in place, let’s look at how to actually use this setup.

Deployment Options

We’ve got multiple options for deploying this service. Since it’s plain JavaScript, we can spin it up with Node, Deno, or any other JS runtime - either locally or on a remote server.

  • Local Development: Run with node http.js (MCP Server) and node auth.ts (Auth Server), or use Docker. In this setup, you’ll usually have http://localhost:3000 for the MCP server and http://localhost:4000 for the auth server.
  • Cloud Deployments: Vercel, Railway, Fly.io, or even a bare EC2 box. Just make sure everything is served over HTTPS. In this case, instead of localhost, you’ll point to your domain, e.g. https://mcp.your-domain.com and https://auth.your-domain.com.

The only hard requirement: your .well-known endpoints and /mcp must be reachable over HTTPS.

Configuration Examples

Here are the different ways you can configure and connect to your MCP server depending on your deployment scenario:


Local HTTP (dev)

mcp-remote http://localhost:3000/mcp 9696 --transport http-only

MCP on localhost:3000, Auth on localhost:4000. Great for testing.

Remote HTTPS

mcp-remote https://mcp.your-domain.com/mcp 9696 --transport http-only

Use when server is deployed to Vercel, Railway, EC2, etc. Requires HTTPS + .well-known.

Pure STDIO (no remote)

node ./server.js --transport=stdio

Direct connection to your server's stdio. Fastest dev loop. No HTTP bridge.

LangGraph / Agentic AI

new MultiServerMCPClient({ mcpServers: { ... }})

Works with both mcp-remote (HTTP bridge) and pure stdio. Auto-discovers tools.
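For hosts that read a JSON config file (Claude Desktop, Cursor, and similar), the mcp-remote command above usually lives in an entry like this; the server name is illustrative:

```json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "npx",
      "args": ["mcp-remote", "https://mcp.your-domain.com/mcp", "9696", "--transport", "http-only"]
    }
  }
}
```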

Result

What started as a simple proof of concept - letting a chatbot recommend courses - has now become a pattern I use everywhere. By building an authenticated MCP server, I’ve been able to give LLMs secure access to tools and data that actually matter: Slack for communication, Notion for notes, Google Calendar for scheduling, or even custom APIs inside a company.

The amazing thing about MCP is its plug-and-play capability. Write once, and your server works across Claude, ChatGPT, Cursor, or even your own in-house chatbot. Instead of reinventing integrations, you simply register tools and let the LLM call them securely.

Of course, it wasn’t all smooth sailing. While testing with Cursor, I kept running into an annoying issue: every time I changed a tool on my server, I had to restart both Cursor and the server before the updates would show up. It really slowed me down in the early stages. I eventually solved this by adding nodemon to automatically restart the server whenever files changed, and then triggering the tools update method from the MCP SDK so that changes were detected without having to reconnect or restart Cursor.
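For reference, the watch part of that fix boils down to a package.json script along these lines (the watched paths and file name are illustrative); on each restart the server then emits the SDK's tools/list_changed notification so connected clients refresh their tool list:

```json
{
  "scripts": {
    "dev": "nodemon --watch . --ext js,ts --exec \"node http.js\""
  }
}
```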

And the possibilities are still wide open. Imagine LLMs booking meetings directly into your calendar, summarizing Notion or Obsidian notes, or managing deployment pipelines — all gated by OAuth and standardized by MCP. It’s also a great way for products, services, and apps to extend their knowledge and functionality into the world of LLMs.