
ADR-010: Use Model Context Protocol (MCP) for LLM Integration

Date: 2025-10-06
Status: Accepted
Deciders: Development Team
Tags: llm, integration, protocol

Context

The IDE requires standardized integration with multiple LLM providers:

  • LM Studio (local models)
  • Claude Code (Anthropic)
  • Future LLM services

Requirements:

  • Unified API across providers
  • Tool/function calling support
  • Resource access (files, databases)
  • Prompt templates
  • Security and permissions
  • OAuth 2.1 authentication
  • Real-time streaming

Decision

We will use Anthropic's Model Context Protocol (MCP) as the primary integration standard for all LLM interactions.

Rationale

Why MCP

  1. Industry Standard: Adopted by Anthropic, OpenAI (March 2025), and others
  2. Three Core Primitives:
    • Tools: Model-controlled functions
    • Resources: App-controlled data access
    • Prompts: User-controlled templates
  3. Security: OAuth 2.1, strict permissions model
  4. Streaming: Streamable HTTP Transport (March 2025)
  5. Open Source: Open standard, growing ecosystem
  6. Pre-built Servers: Google Drive, Slack, GitHub, Git, Postgres

2025 Enhancements

OAuth 2.1 Security (March 2025):

  • Secure agent-server communication
  • Token-based authentication
  • Granular permissions

Streamable HTTP Transport:

  • Real-time bidirectional data flow
  • Keeps connection open for streaming
  • Server-sent events support
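
The streaming bullets above can be made concrete: with server-sent events, each event is a blank-line-separated block of `data:` lines that the client must split apart. A minimal, hypothetical parser for such frames (our own helper for illustration, not part of the MCP SDK):

```typescript
// Hypothetical SSE frame parser: split a raw chunk into events (separated by
// a blank line) and extract the payload of each event's "data:" lines.
export function parseSSE(chunk: string): string[] {
  return chunk
    .split(/\r?\n\r?\n/)
    .map(event =>
      event
        .split(/\r?\n/)
        .filter(line => line.startsWith('data:'))
        .map(line => line.slice(5).trim())
        .join('\n')
    )
    .filter(data => data.length > 0);
}
```

In practice the transport layer of the SDK handles this framing; the sketch only illustrates the wire format the bullets describe.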

Architecture

MCP Primitives Usage

1. Tools (Model-Controlled)

AI decides when to call these:

```typescript
// LM Studio MCP Server - tools
const tools = [
  {
    name: 'lmstudio_chat',
    description: 'Chat with LM Studio model',
    inputSchema: {
      type: 'object',
      properties: {
        model: { type: 'string' },
        messages: { type: 'array' },
        temperature: { type: 'number' }
      },
      required: ['model', 'messages']
    }
  },
  {
    name: 'lmstudio_list_models',
    description: 'List available models in LM Studio',
    inputSchema: {
      type: 'object',
      properties: {}
    }
  }
];
```
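
One practical use of these schemas: the `required` array lets the client reject an ill-formed `tools/call` before it reaches the server. A small hypothetical helper (the names are ours, not the SDK's):

```typescript
// Hypothetical pre-dispatch check: report which properties listed in a tool's
// inputSchema `required` array are absent from the supplied arguments.
interface ToolSchema {
  type: 'object';
  properties: Record<string, unknown>;
  required?: string[];
}

export function missingRequiredArgs(
  schema: ToolSchema,
  args: Record<string, unknown>
): string[] {
  return (schema.required ?? []).filter(key => !(key in args));
}
```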

2. Resources (App-Controlled)

Application decides when to provide:

```typescript
// File System MCP Server - resources
const resources = [
  {
    uri: 'file:///workspace/src/main.ts',
    name: 'Main TypeScript file',
    mimeType: 'text/typescript',
    description: 'Application entry point'
  },
  {
    uri: 'opfs:///session-{id}/editor-state',
    name: 'Editor state',
    mimeType: 'application/json',
    description: 'Current editor session state'
  }
];
```

3. Prompts (User-Controlled)

User triggers these templates:

```typescript
// LLM MCP Server - prompts
const prompts = [
  {
    name: 'code-review',
    description: 'Review code for issues',
    arguments: [
      {
        name: 'code',
        description: 'Code to review',
        required: true
      }
    ]
  },
  {
    name: 'explain-code',
    description: 'Explain what code does',
    arguments: [
      {
        name: 'code',
        description: 'Code to explain',
        required: true
      },
      {
        name: 'level',
        description: 'Explanation level (beginner/advanced)',
        required: false
      }
    ]
  }
];
```

Implementation

MCP Client

```typescript
// src/services/mcp-client.ts
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import type { Tool } from '@modelcontextprotocol/sdk/types.js';

// Shape of one entry in .mcp.json (see Configuration below)
interface ServerConfig {
  command: string;
  args?: string[];
  env?: Record<string, string>;
}

class MCPClient {
  private clients: Map<string, Client> = new Map();

  async connectToServer(serverName: string, config: ServerConfig): Promise<void> {
    const transport = new StdioClientTransport({
      command: config.command,
      args: config.args,
      env: config.env
    });

    const client = new Client(
      {
        name: 'az1ai-ide',
        version: '0.1.0'
      },
      {
        capabilities: {
          tools: {},
          resources: {},
          prompts: {}
        }
      }
    );

    await client.connect(transport);
    this.clients.set(serverName, client);
  }

  async callTool(serverName: string, toolName: string, args: any): Promise<any> {
    const client = this.clients.get(serverName);
    if (!client) throw new Error(`Server ${serverName} not connected`);

    return await client.callTool({
      name: toolName,
      arguments: args
    });
  }

  async listTools(serverName: string): Promise<Tool[]> {
    const client = this.clients.get(serverName);
    if (!client) throw new Error(`Server ${serverName} not connected`);

    const result = await client.listTools();
    return result.tools;
  }

  async getResource(serverName: string, uri: string) {
    const client = this.clients.get(serverName);
    if (!client) throw new Error(`Server ${serverName} not connected`);

    // readResource returns { contents: [...] }; let TypeScript infer the type
    return await client.readResource({ uri });
  }

  async getPrompt(serverName: string, promptName: string, args?: Record<string, string>): Promise<string> {
    const client = this.clients.get(serverName);
    if (!client) throw new Error(`Server ${serverName} not connected`);

    const result = await client.getPrompt({
      name: promptName,
      arguments: args
    });

    // Message content is a typed object ({ type: 'text', text }), not a string
    return result.messages
      .map(m => ('text' in m.content ? m.content.text : ''))
      .join('\n');
  }
}

export const mcpClient = new MCPClient();
```

LM Studio MCP Server

```typescript
// mcp-lmstudio/index.js (already implemented)
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
  ListToolsRequestSchema,
  CallToolRequestSchema
} from '@modelcontextprotocol/sdk/types.js';

const LM_STUDIO_BASE_URL = `http://${process.env.LM_STUDIO_HOST}:${process.env.LM_STUDIO_PORT}/v1`;

const server = new Server(
  {
    name: 'lmstudio-mcp-server',
    version: '1.0.0'
  },
  {
    capabilities: {
      tools: {}
    }
  }
);

// Handlers are registered with request schemas, not raw method strings
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: 'lmstudio_list_models',
      description: 'List available models in LM Studio',
      inputSchema: {
        type: 'object',
        properties: {}
      }
    },
    {
      name: 'lmstudio_chat',
      description: 'Chat with LM Studio model',
      inputSchema: {
        type: 'object',
        properties: {
          model: { type: 'string' },
          messages: { type: 'array' },
          temperature: { type: 'number', default: 0.7 },
          max_tokens: { type: 'number', default: 1000 }
        },
        required: ['model', 'messages']
      }
    }
  ]
}));

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // Implementation as previously created
});
```
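
The `tools/call` handler above is stubbed. One plausible shape is to forward `lmstudio_chat` arguments to LM Studio's OpenAI-compatible `/v1/chat/completions` endpoint; the pure request-building part could look like this (a sketch under that assumption, with the same defaults as the schema, not the actual implementation):

```typescript
// Hypothetical request builder for the tools/call handler: maps lmstudio_chat
// arguments onto LM Studio's OpenAI-compatible chat completions endpoint.
// The handler itself would fetch(req.url, req.init) and wrap the response.
interface ChatArgs {
  model: string;
  messages: unknown[];
  temperature?: number;
  max_tokens?: number;
}

export function buildChatRequest(baseUrl: string, args: ChatArgs) {
  return {
    url: `${baseUrl}/chat/completions`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: args.model,
        messages: args.messages,
        // Apply the schema defaults when the caller omits these fields
        temperature: args.temperature ?? 0.7,
        max_tokens: args.max_tokens ?? 1000
      })
    }
  };
}
```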

Configuration

```json
// .mcp.json
{
  "mcpServers": {
    "lmstudio": {
      "type": "stdio",
      "command": "node",
      "args": ["/workspace/PROJECTS/t2/mcp-lmstudio/index.js"],
      "env": {
        "LM_STUDIO_HOST": "host.docker.internal",
        "LM_STUDIO_PORT": "1234"
      }
    },
    "filesystem": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/workspace/PROJECTS/t2"]
    },
    "git": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-git", "/workspace/PROJECTS/t2"]
    }
  }
}
```
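
The service layer below calls a `loadMCPConfig()` helper that is not shown. A hedged sketch of its parsing/validation half, assuming `.mcp.json` has the shape above (the type names are ours, not the SDK's):

```typescript
// Hypothetical typed view of .mcp.json plus a validating parser. A real
// loadMCPConfig() would read the file from disk and pass the text here.
export interface ServerEntry {
  type: 'stdio';
  command: string;
  args?: string[];
  env?: Record<string, string>;
}

export interface MCPConfig {
  mcpServers: Record<string, ServerEntry>;
}

export function parseMCPConfig(jsonText: string): MCPConfig {
  const raw = JSON.parse(jsonText) as MCPConfig;
  // Reject entries that cannot be spawned as a stdio server
  for (const [name, entry] of Object.entries(raw.mcpServers ?? {})) {
    if (!entry.command) {
      throw new Error(`MCP server "${name}" is missing a command`);
    }
  }
  return raw;
}
```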

Service Layer Integration

```typescript
// src/services/llm-service.ts
import { mcpClient } from './mcp-client';

// Minimal local shapes; adjust to the actual LM Studio response types
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface LLMModel {
  id: string;
}

class LLMService {
  async initialize(): Promise<void> {
    // Connect to all MCP servers listed in .mcp.json (see Configuration)
    const config = await loadMCPConfig();

    for (const [name, serverConfig] of Object.entries(config.mcpServers)) {
      await mcpClient.connectToServer(name, serverConfig);
    }
  }

  async chatWithLMStudio(
    model: string,
    messages: Message[],
    temperature: number = 0.7
  ): Promise<string> {
    const result = await mcpClient.callTool('lmstudio', 'lmstudio_chat', {
      model,
      messages,
      temperature
    });

    return result.content[0].text;
  }

  async listLMStudioModels(): Promise<LLMModel[]> {
    const result = await mcpClient.callTool('lmstudio', 'lmstudio_list_models', {});
    return result.models;
  }

  async getFileContext(filePath: string): Promise<string> {
    const resource = await mcpClient.getResource('filesystem', `file://${filePath}`);
    return resource.contents[0].text;
  }

  async usePromptTemplate(name: string, args: any): Promise<string> {
    return await mcpClient.getPrompt('lmstudio', name, args);
  }
}

export const llmService = new LLMService();
```

Security Considerations

Authentication (OAuth 2.1)

```typescript
// OAuth 2.1 for secure MCP connections
interface MCPAuthConfig {
  clientId: string;
  clientSecret: string;
  authorizationEndpoint: string;
  tokenEndpoint: string;
  scopes: string[];
}

class SecureMCPClient extends MCPClient {
  async authenticateServer(
    serverName: string,
    config: ServerConfig,
    authConfig: MCPAuthConfig
  ): Promise<void> {
    // getOAuthToken performs the OAuth 2.1 token exchange (not shown here)
    const token = await this.getOAuthToken(authConfig);

    // Connect with the token exposed to the server process
    await this.connectToServer(serverName, {
      ...config,
      env: {
        ...config.env,
        OAUTH_TOKEN: token
      }
    });
  }
}
```
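
`getOAuthToken()` above is left abstract. Assuming a client-credentials flow against `tokenEndpoint`, the token request could be assembled like this (a sketch, not a full OAuth 2.1 client; PKCE, refresh, and error handling are omitted):

```typescript
// Hypothetical helper for the getOAuthToken() step: build a client-credentials
// token request. The caller would POST req.init to req.url and read
// access_token from the JSON response.
export interface MCPAuthConfig {
  clientId: string;
  clientSecret: string;
  tokenEndpoint: string;
  scopes: string[];
}

export function buildTokenRequest(auth: MCPAuthConfig) {
  const body = new URLSearchParams({
    grant_type: 'client_credentials',
    client_id: auth.clientId,
    client_secret: auth.clientSecret,
    scope: auth.scopes.join(' ')
  });
  return {
    url: auth.tokenEndpoint,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: body.toString()
    }
  };
}
```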

Tool Permissions

```typescript
// Restrict tool access per session
interface ToolPermissions {
  allowedTools: string[];
  deniedTools: string[];
  requireConfirmation: string[];
}

async function callToolWithPermissions(
  toolName: string,
  args: any,
  permissions: ToolPermissions
): Promise<any> {
  if (permissions.deniedTools.includes(toolName)) {
    throw new Error(`Tool ${toolName} is not permitted`);
  }

  if (permissions.requireConfirmation.includes(toolName)) {
    // confirmToolExecution is a UI hook that asks the user to approve the call
    const confirmed = await confirmToolExecution(toolName, args);
    if (!confirmed) throw new Error('Tool execution cancelled by user');
  }

  return await mcpClient.callTool('lmstudio', toolName, args);
}
```
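
The gate above can be factored into a pure decision function, which makes the deny-over-confirm-over-allow precedence easy to unit-test (a hypothetical refactor for illustration, not part of the MCP SDK):

```typescript
// Pure decision helper mirroring callToolWithPermissions:
// deny wins over confirmation, confirmation wins over plain allow.
export interface ToolPermissions {
  allowedTools: string[];
  deniedTools: string[];
  requireConfirmation: string[];
}

export type PermissionDecision = 'deny' | 'confirm' | 'allow';

export function decideToolPermission(
  toolName: string,
  permissions: ToolPermissions
): PermissionDecision {
  if (permissions.deniedTools.includes(toolName)) return 'deny';
  if (permissions.requireConfirmation.includes(toolName)) return 'confirm';
  return 'allow';
}
```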

References