
Theia AI Research Findings

Date: 2025-10-08 · Theia Version: 1.65.0 (September 26, 2025) · Status: ✅ PRODUCTION-READY FRAMEWORK


🎯 Executive Summary

Eclipse Theia 1.65 includes a complete, production-ready AI framework with:

  • ✅ MCP Protocol Support - Anthropic's Model Context Protocol fully integrated
  • ✅ Multi-LLM Support - OpenAI, Anthropic, Ollama, Hugging Face, Azure, DeepSeek
  • ✅ Chat UI Widgets - Complete chat interface with history, agents, context
  • ✅ AI Code Completion - IntelliSense powered by LLMs
  • ✅ Custom Agent System - Build domain-specific AI agents
  • ✅ Prompt Templates - User-customizable prompts via .prompttemplate files
  • ✅ Context Management - Attach files, symbols, and domain data to AI requests
  • ✅ Full Transparency - See all data sent to LLMs, inspect prompts and responses

Bottom Line: We have a mature, actively-maintained AI IDE framework. We should leverage it, not rebuild it.


📦 Theia AI Architecture

Core Components (All Included in 1.65)

| Component | Package | Purpose |
|---|---|---|
| AI Core Framework | @theia/ai-core | Agent system, tool calling, prompt management |
| AI Chat UI | @theia/ai-chat-ui | ChatViewWidget, chat history, agent switcher |
| AI Chat Service | @theia/ai-chat | ChatService, ChatSession, conversation management |
| MCP Integration | @theia/ai-mcp | Model Context Protocol client and server |
| OpenAI Provider | @theia/ai-openai | GPT-3.5/4, o1, o3-mini support |
| Ollama Provider | @theia/ai-ollama | Local LLM support (Llama, Mistral, etc.) |
| Code Completion | @theia/ai-code-completion | AI-powered IntelliSense |
| Editor AI | @theia/ai-editor | Inline chat, code actions |
| History | @theia/ai-history | Conversation persistence |
| IDE Integration | @theia/ai-ide | Workspace-aware AI tools |

🔥 Key Features (Released 2024-2025)

1. Model Context Protocol (MCP) Integration

Announced: December 19, 2024 · Blog: Theia IDE and Theia AI support MCP

What It Does:

  • Connects AI to external tools and services (Git, GitHub, databases, testing frameworks, internet search)
  • The LLM decides when to call MCP tools based on function descriptions
  • Theia provides an MCP server management UI (start/stop servers, view status)

How It Works:

  1. Configure MCP servers in the AI Configuration view
  2. The LLM receives function descriptions from the MCP servers
  3. The LLM triggers tool calls when needed
  4. Theia executes the tool and returns results to the LLM
  5. The LLM incorporates the results into its response
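The loop above can be sketched in TypeScript. This is a simplified illustration of the round-trip only, not Theia's actual implementation; `callModel` and `runTool` are hypothetical stand-ins for the LLM provider and the MCP client:

```typescript
// Simplified sketch of the MCP tool-call loop described above.
// `callModel` and `runTool` are hypothetical stand-ins; Theia's real
// implementation routes through its LLM provider and MCP client.
interface ToolCall { name: string; args: Record<string, unknown> }
interface ModelReply { text?: string; toolCalls?: ToolCall[] }

type CallModel = (messages: string[]) => ModelReply
type RunTool = (call: ToolCall) => string

function chatWithTools(prompt: string, callModel: CallModel, runTool: RunTool): string {
  const messages = [prompt]
  for (let i = 0; i < 5; i++) {                 // bound the loop defensively
    const reply = callModel(messages)
    if (!reply.toolCalls || reply.toolCalls.length === 0) {
      return reply.text ?? ''                   // final answer, no tools needed
    }
    for (const call of reply.toolCalls) {       // execute each requested tool
      messages.push(`tool:${call.name}=${runTool(call)}`)
    }
  }
  return ''
}
```

The tool results are appended to the conversation and the model is called again, which is the essence of steps 3-5.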

Example MCP Servers:

```jsonc
// .theia/mcp-servers.json
{
  "servers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_TOKEN": "${GITHUB_TOKEN}"
      }
    },
    "git": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-git"]
    }
  }
}
```

V5 Opportunity: Our LM Studio MCP server (mcp-lmstudio/index.js) can be registered as a Theia MCP server, making LM Studio models available to Theia AI.
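The `"${GITHUB_TOKEN}"` entry above is an environment-variable placeholder. A hypothetical helper showing how such `${VAR}` placeholders are typically resolved before the server process is spawned (not Theia's actual loader):

```typescript
// Hypothetical helper: resolve "${VAR}" placeholders like
// "${GITHUB_TOKEN}" in mcp-servers.json against the environment.
function expandEnv(value: string, env: Record<string, string>): string {
  // Replace each ${NAME} with env[NAME]; leave unknown variables intact.
  return value.replace(/\$\{(\w+)\}/g, (match, name) => env[name] ?? match)
}
```

Unresolved variables are left intact here; a stricter loader might throw instead.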


2. LLM Freedom - Support for ANY LLM

Announced: September 20, 2024 (Sneak Preview) · Blog: Why Theia supports any LLM!

Supported Providers (as of 1.65):

  • OpenAI: GPT-3.5, GPT-4, GPT-4.5, o1, o3-mini
  • Anthropic: Claude 3.5 Sonnet, Claude 3.7 Sonnet
  • Azure OpenAI: Enterprise OpenAI deployments
  • Ollama: Local open-source models (Llama, Mistral, Qwen, DeepSeek)
  • Hugging Face API: 100K+ models
  • Llama-File: Fully local, no server required
  • DeepSeek: DeepSeek-Coder, DeepSeek-V2
  • OpenAI-compatible: Any API following OpenAI spec (e.g., LM Studio!)

Configuration (Simple):

```jsonc
// .theia/settings.json
{
  "ai.openai.api.baseUrl": "http://localhost:1234/v1", // LM Studio
  "ai.openai.api.key": "not-needed",
  "ai.openai.models": [
    "meta-llama-3.3-70b-instruct",
    "qwen/qwq-32b",
    "deepseek-coder-v2"
  ]
}
```

V5 Opportunity: Our 16+ LM Studio models are immediately usable in Theia AI with zero custom code.
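Because LM Studio mirrors the OpenAI API, the model list for the settings above can be pulled from its `GET /v1/models` endpoint rather than typed by hand. A sketch; the response shape (`{ data: [{ id }] }`) follows the OpenAI spec that LM Studio implements:

```typescript
// Map an OpenAI-compatible GET /v1/models response to the
// "ai.openai.models" array used in the settings above.
interface ModelsResponse { data: { id: string }[] }

function toModelSettings(resp: ModelsResponse): string[] {
  return resp.data.map(m => m.id)
}

// In practice, fetched from the local server:
// const resp = await fetch('http://localhost:1234/v1/models').then(r => r.json())
// settings['ai.openai.models'] = toModelSettings(resp)
```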


3. Chat Context Management

Announced: March 2025 (Theia 1.59) · Blog: Eclipse Theia 1.59 Release

What It Does:

  • Attach files, symbols, or domain-specific components to AI chat requests
  • The LLM receives full context about the selected code/files
  • Improves accuracy of code suggestions and explanations

How It Works:

  1. User selects files/code in editor
  2. User clicks "Add to Chat Context"
  3. Context appears as attachment in chat input
  4. The LLM receives the file content with the prompt
  5. Response is contextualized to the provided code
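Conceptually, the attached context is inlined with the user's prompt before the request goes out. A minimal sketch of that assembly step (illustrative only; Theia's real context resolution is richer, covering symbols and variables):

```typescript
// Illustrative only: inline attached files into the prompt text.
// Theia's actual context handling is more sophisticated.
interface Attachment { path: string; content: string }

function buildPrompt(userPrompt: string, attachments: Attachment[]): string {
  const context = attachments
    .map(a => `// File: ${a.path}\n${a.content}`)   // label each attachment
    .join('\n\n')
  return context ? `${context}\n\n${userPrompt}` : userPrompt
}
```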

V5 Opportunity: Theia already handles context management. We don't need to build file selection or context tracking.


4. Custom Agent System

Announced: November 2024 (Theia 1.56) · Blog: Eclipse Theia 1.56 Release

What It Does:

  • Create domain-specific AI agents (e.g., "Code Generator", "UI Designer", "Security Reviewer")
  • Each agent has custom prompts, a preferred LLM, and tools
  • Agents can request specific inputs during interactions
  • Support for multiple prompts per agent with seamless switching

How to Define Agents:

```typescript
// Custom agent via a Theia extension (illustrative shape - see the
// Agent interface in @theia/ai-core for the exact contract)
import { Agent } from '@theia/ai-core'

const codeGeneratorAgent: Agent = {
  id: 'code-generator',
  name: 'Code Generator',
  description: 'Generates production-ready TypeScript code',
  model: 'meta-llama-3.3-70b-instruct',
  prompt: `You are an expert TypeScript engineer.
Generate clean, well-documented, production-ready code.
Follow mobile-first design principles.
Use Chakra UI for styling.`,
  tools: ['lmstudio_chat', 'file_read', 'file_write'],
}
```

V5 Opportunity: We can register custom agents for our workflow modes (Single, Parallel, Sequential, Consensus) as Theia agents.


5. User-Customizable Prompts

Announced: March 2025 (Theia 1.57, in the 2025-02 community release) · Blog: The Eclipse Theia Community Release 2025-02

What It Does:

  • Users can add custom variants to agent prompts via .prompttemplate files
  • Simplifies customization without modifying agent code
  • Template variables for dynamic prompt generation

Example Prompt Template:

```markdown
<!-- .theia/prompts/code-review.prompttemplate -->
---
agent: code-reviewer
variant: security-focused
---

# Security-Focused Code Review

Review the following code for security vulnerabilities:
- SQL injection risks
- XSS vulnerabilities
- Authentication bypasses
- Data exposure

Code to review:
{{selectedCode}}
```

V5 Opportunity: Users can customize AI behavior without touching our code.
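The `{{selectedCode}}` placeholder above is filled in at request time. A toy renderer showing the substitution idea (hypothetical sketch; Theia's actual template engine may differ):

```typescript
// Toy {{variable}} renderer for .prompttemplate-style placeholders.
// Hypothetical sketch; Theia's actual template engine may differ.
function renderTemplate(template: string, vars: Record<string, string>): string {
  // Replace each {{name}} with its value; leave unknown placeholders intact.
  return template.replace(/\{\{(\w+)\}\}/g, (match, name) => vars[name] ?? match)
}
```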


6. Full Transparency and Control

Announced: March 2025 (AI-powered Theia IDE launch) · Blog: Introducing the AI-powered Theia IDE

What It Does:

  • See exactly what data is sent to LLMs
  • Inspect prompts, functions, variables
  • View complete communication history
  • Understand AI decision-making process

Transparency Features:

  • Prompt inspection: see the final prompt sent to the LLM
  • Context visualization: see attached files/code
  • Tool call logging: see when the LLM calls tools (MCP)
  • Response metadata: see token count, model used, latency
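The same kind of metadata can be captured in our own wrappers. A sketch of a logging decorator around a chat call (our code under stated assumptions, not a Theia API):

```typescript
// Our own sketch (not a Theia API): wrap a chat call and record the
// metadata the transparency features expose - prompt, model, latency.
interface RequestLog { prompt: string; model: string; latencyMs: number; response: string }

function loggedChat(
  prompt: string,
  model: string,
  send: (prompt: string, model: string) => string,  // the actual provider call
  log: RequestLog[]
): string {
  const start = Date.now()
  const response = send(prompt, model)
  log.push({ prompt, model, latencyMs: Date.now() - start, response })
  return response
}
```

A real wrapper would be async and would also record token counts when the provider returns them.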

V5 Opportunity: Privacy-first AI with full visibility into data flow. Critical for enterprise users.


🏗️ Theia AI Architecture (Inversify DI)

How Theia Widgets Work

Theia uses Inversify (a DI container) for all wiring:

```typescript
// Theia extension module
import { ContainerModule } from '@theia/core/shared/inversify'
import { ChatViewWidget } from '@theia/ai-chat-ui/lib/browser/chat-view-widget'
import { CommandContribution } from '@theia/core'

export default new ContainerModule(bind => {
  // Register widget
  bind(ChatViewWidget).toSelf().inSingletonScope()

  // Register command contribution (AIChatCommandContribution is our own
  // CommandContribution implementation, defined elsewhere)
  bind(CommandContribution).to(AIChatCommandContribution)
})
```

Accessing Widgets from React

Challenge: Theia uses DI; React uses props/hooks

Solution: Create a bridge service

```typescript
// src/services/theia-container.ts
import { Container } from '@theia/core/shared/inversify'

class TheiaContainerService {
  private container: Container | null = null

  setContainer(container: Container) {
    this.container = container
  }

  getWidget<T>(identifier: any): T {
    return this.container!.get<T>(identifier)
  }
}

export const theiaContainer = new TheiaContainerService()

Usage in React:

```tsx
// src/components/theia/theia-chat-widget.tsx
import { useEffect, useRef } from 'react'
import { ChatViewWidget } from '@theia/ai-chat-ui/lib/browser/chat-view-widget'
import { theiaContainer } from '../../services/theia-container'

// Component name must be capitalized so JSX treats it as a component
export const TheiaChatWidget = () => {
  const containerRef = useRef<HTMLDivElement>(null)

  useEffect(() => {
    const widget = theiaContainer.getWidget<ChatViewWidget>(ChatViewWidget)
    containerRef.current!.appendChild(widget.node)

    return () => widget.dispose()
  }, [])

  return <div ref={containerRef} style={{ height: '100%' }} />
}
```

🔌 MCP Server Integration Strategy

Theia's MCP Architecture

```text
┌─────────────────────────────────────┐
│ ChatViewWidget (User Interface)     │
│ - User types prompt                 │
│ - Attaches context (files)          │
└────────────────┬────────────────────┘
                 ▼
┌─────────────────────────────────────┐
│ ChatService (Theia AI Core)         │
│ - Processes prompt                  │
│ - Resolves variables/context        │
└────────────────┬────────────────────┘
                 ▼
┌─────────────────────────────────────┐
│ LLM Provider (OpenAI/Ollama/etc.)   │
│ - Receives prompt + tool schemas    │
│ - Decides to call tools             │
│ - Returns tool call requests        │
└────────────────┬────────────────────┘
                 ▼
┌─────────────────────────────────────┐
│ MCP Client (Theia AI MCP)           │
│ - Routes tool calls to MCP servers  │
│ - Returns tool results to LLM       │
└────────────────┬────────────────────┘
                 ▼
┌─────────────────────────────────────┐
│ MCP Servers (External Processes)    │
│ - GitHub, Git, Database, etc.       │
│ - LM Studio (our custom server!)    │
└─────────────────────────────────────┘
```

Registering Our LM Studio MCP Server

Current: Standalone Node.js server (mcp-lmstudio/index.js)

Strategy: Register it as a Theia MCP server

```jsonc
// .theia/mcp-servers.json
{
  "servers": {
    "lmstudio": {
      "command": "node",
      "args": ["${workspaceFolder}/mcp-lmstudio/index.js"],
      "env": {
        "LM_STUDIO_HOST": "localhost",
        "LM_STUDIO_PORT": "1234"
      }
    }
  }
}
```

Result: Theia AI can call LM Studio tools via the MCP protocol

Tools Available:

  • lmstudio_list_models - Get available models
  • lmstudio_chat - Chat completion
  • lmstudio_completion - Text completion
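Inside the server, those tool names map to handlers. A simplified dispatch sketch with placeholder handler bodies; the real mcp-lmstudio server registers its handlers through the MCP SDK, so this only shows the routing idea:

```typescript
// Simplified routing idea only; the real mcp-lmstudio server wires
// these handlers up through the MCP SDK. Handler bodies are placeholders.
type ToolHandler = (args: Record<string, unknown>) => string

const handlers: Record<string, ToolHandler> = {
  lmstudio_list_models: () => JSON.stringify(['meta-llama-3.3-70b-instruct']),
  lmstudio_chat: args => `chat(${String(args.prompt)})`,
  lmstudio_completion: args => `complete(${String(args.prompt)})`,
}

function dispatch(tool: string, args: Record<string, unknown>): string {
  const handler = handlers[tool]
  if (!handler) throw new Error(`Unknown tool: ${tool}`)
  return handler(args)
}
```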

📊 Theia AI vs Custom Build Comparison

| Feature | Custom Build (V4 approach) | Theia AI 1.65 (V5 approach) |
|---|---|---|
| Chat UI | ~2,000 lines React | ✅ ChatViewWidget (built-in) |
| Chat History | ~500 lines custom | ✅ Built-in persistence |
| Agent System | ~1,500 lines | ✅ Agent interface + registration |
| MCP Protocol | ~1,000 lines | ✅ Fully implemented |
| LLM Providers | OpenAI only | ✅ 7+ providers out-of-box |
| Context Management | Manual file selection | ✅ Auto file/symbol attachment |
| Prompt Templates | Hardcoded | ✅ .prompttemplate files |
| Tool Calling | Custom implementation | ✅ MCP standard |
| Code Completion | Not implemented | ✅ AI IntelliSense |
| Transparency | No visibility | ✅ Full prompt/data inspection |
| Total Code | ~15,000 lines | ~500 lines (wrappers only) |
| Maintenance | We own all bugs | Theia team maintains |
| Updates | Manual | Automatic with Theia updates |

Time Savings: 3-4 weeks → 3-4 days


🚀 V5 Integration Strategy

Phase 1: Setup (1 day)

  1. Create Theia Container Bridge

    ✅ src/services/theia-container.ts
    ✅ Initialize container in app.tsx
    ✅ Test widget instantiation
  2. Configure LM Studio as LLM Provider

    ```jsonc
    // .theia/settings.json
    {
      "ai.openai.api.baseUrl": "http://localhost:1234/v1",
      "ai.openai.models": ["meta-llama-3.3-70b-instruct", ...]
    }
    ```

Phase 2: Wrap Theia Widgets (2 days)

  1. TheiaChatWidget - AI chat interface

    <TheiaChatWidget sessionId={sessionId} />
  2. TheiaTerminalWidget - Integrated terminal

    <TheiaTerminalWidget sessionId={sessionId} />
  3. TheiaMonacoWidget - Code editor

    <TheiaMonacoWidget file={activeFile} />
  4. TheiaFileExplorerWidget - File tree

    <TheiaFileExplorerWidget sessionId={sessionId} />

Phase 3: Integrate into V5 UI (2 days)

  1. AI Studio Tab

    <SplitPane>
      <TheiaChatWidget sessionId={tabId} /> {/* Replaces custom chat */}
      <CodePreview sessionId={tabId} />     {/* Keep custom preview */}
    </SplitPane>
  2. Workspace Tab

    <Flex>
      <TheiaFileExplorerWidget /> {/* Replaces custom explorer */}
      <TheiaMonacoWidget />       {/* Replaces custom editor */}
      <TheiaTerminalWidget />     {/* Replaces custom terminal */}
    </Flex>
  3. Mobile-First Wrappers

    <TouchFriendlyCard title="AI Chat">
      <TheiaChatWidget />
    </TouchFriendlyCard>

Phase 4: Register Custom Agents (1 day)

Define V5-specific agents:

```typescript
// src/browser/ai/custom-agents-module.ts
import { Agent } from '@theia/ai-core'

const agents: Agent[] = [
  {
    id: 'v5-code-generator',
    name: 'Code Generator',
    model: 'meta-llama-3.3-70b-instruct',
    prompt: 'You are an expert TypeScript engineer...',
  },
  {
    id: 'v5-ui-designer',
    name: 'UI Designer',
    model: 'qwen/qwq-32b',
    prompt: 'You are a mobile-first UI/UX expert...',
  },
  {
    id: 'v5-code-reviewer',
    name: 'Code Reviewer',
    model: 'deepseek-coder-v2',
    prompt: 'You are a senior code reviewer...',
  },
]
```

Phase 5: Test End-to-End (1 day)

  • AI chat with LM Studio models
  • Context attachment (files, code)
  • Agent switching
  • Tool calls (MCP)
  • Mobile responsive (320px)
  • Multi-session isolation

Total Timeline: 7 days (vs 4+ weeks custom build)


💡 Key Insights from Research

1. Theia AI is Production-Ready

Evidence:

  • Released March 2025 (AI-powered Theia IDE)
  • Active development (monthly releases)
  • Used by Gitpod, Google Cloud Shell, Arduino IDE
  • MCP protocol support (Anthropic standard)
  • Extensive LLM provider support

Conclusion: We can trust Theia AI as a foundation. It's not experimental.


2. MCP is the Standard

Evidence:

  • Developed by Anthropic
  • Adopted by Theia, Cline, and other AI IDEs
  • Growing ecosystem of MCP servers (Git, GitHub, databases, etc.)
  • Official spec: https://modelcontextprotocol.io/

Conclusion: Our MCP server approach is correct. Theia's MCP support makes integration trivial.


3. Theia Supports ANY LLM

Evidence:

  • OpenAI-compatible API support
  • Ollama for local models
  • Multiple LLM providers (7+)
  • Simple configuration

Conclusion: Our LM Studio integration (16+ models) works out-of-box. No custom code needed.


4. Extensibility via Inversify DI

Evidence:

  • Everything is a service bound to DI container
  • Extensions register via ContainerModule
  • Widgets are injectable

Conclusion: The React-Theia bridge is the right approach. We can extract and wrap any Theia widget.


5. User Customization Built-In

Evidence:

  • .prompttemplate files for custom prompts
  • MCP server configuration
  • Agent registration
  • Settings UI

Conclusion: Users can customize AI behavior without code changes. Self-service.


🎯 Strategic Recommendations

✅ DO: Leverage Theia AI

  1. Use ChatViewWidget for all AI chat interfaces
  2. Use Theia's MCP client instead of custom tool calling
  3. Register custom agents via Theia's Agent interface
  4. Configure LM Studio as an OpenAI-compatible provider
  5. Wrap Theia widgets in mobile-first React components

❌ DON'T: Rebuild Theia Features

  1. Don't build a custom chat UI - use ChatViewWidget
  2. Don't build a custom agent system - use Theia's Agent interface
  3. Don't build a custom MCP client - use @theia/ai-mcp
  4. Don't build a custom terminal - use TerminalWidget
  5. Don't build a custom editor - use the Monaco editor

🎨 DO: Add V5 Unique Value

  1. Mobile-first wrappers - TouchFriendlyCard around Theia widgets
  2. Multi-tab modalities - AI Studio / Workspace / Theia IDE routing
  3. V5 backend integration - JWT auth, FoundationDB sessions
  4. Custom branding - Header/Footer, theme
  5. Progressive disclosure - Accordion navigation

📚 Resources

Official Documentation

Key Blog Posts

GitHub


✅ Conclusion

Theia AI 1.65 is a mature, production-ready AI IDE framework with:

  • Complete chat UI system
  • Multi-LLM support (including LM Studio via an OpenAI-compatible API)
  • MCP protocol integration
  • Custom agent system
  • User-customizable prompts
  • Full transparency and control

V5 Strategy:

  1. Wrap Theia widgets in React components
  2. Configure LM Studio as the LLM provider
  3. Register custom agents for our workflow modes
  4. Add mobile-first UI around Theia components
  5. Integrate V5 backend (JWT, FoundationDB)

Result: Production-ready AI IDE in 1-2 weeks instead of 3-4 months.


Document Version: 1.0.0 · Research Date: 2025-10-08 · Status: ✅ APPROVED STRATEGY - PROCEED WITH THEIA AI INTEGRATION