CODITECT GPT API Executor

Multi-model AI execution framework - GPT (OpenAI) provider implementation. Provides a standardized interface for executing prompts against GPT models, with streaming support, token usage tracking, and error handling.

Features:

  • GPT-4o, GPT-4 Turbo, GPT-3.5 Turbo support
  • Streaming and batch response modes
  • Token usage tracking and cost estimation
  • Automatic retry with exponential backoff
  • Rate limiting and quota management
  • Function calling support
  • JSON mode for structured outputs
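The automatic retry with exponential backoff can be sketched as a decorator. This is a minimal illustration, not the module's actual implementation; the name `retry_with_backoff` and the parameter defaults are assumptions:

```python
import random
import time


def retry_with_backoff(max_retries=3, base_delay=1.0, max_delay=30.0):
    """Retry a function on exception, doubling the delay each attempt."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise  # out of retries: surface the error
                    # Exponential backoff with a little jitter: ~1s, 2s, 4s, ...
                    delay = min(base_delay * (2 ** attempt), max_delay)
                    time.sleep(delay + random.uniform(0, 0.1))
        return wrapper
    return decorator
```

Jitter spreads retries out so many clients hitting the same rate limit do not all retry at the same instant.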

Usage:

  from core.execute_gpt import GPTExecutor

  executor = GPTExecutor(model="gpt-4o")
  response = executor.execute("Explain quantum computing")

  # Streaming
  for chunk in executor.stream("Write a poem"):
      print(chunk, end="")

Environment Variables:

  • OPENAI_API_KEY - API key (required)
  • OPENAI_MODEL - Default model (optional)
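The default-model lookup presumably follows the usual precedence of explicit argument over environment variable over a built-in fallback; a sketch of that resolution (the helper name `resolve_model` is illustrative, not part of the module's API):

```python
import os


def resolve_model(explicit=None, fallback="gpt-4o"):
    """Pick the model: explicit argument > OPENAI_MODEL env var > fallback."""
    return explicit or os.environ.get("OPENAI_MODEL") or fallback
```

OPENAI_API_KEY has no fallback: the OpenAI client raises at construction time if no key is available.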

Author: AZ1.AI INC.
Framework: CODITECT
Copyright: © 2025 AZ1.AI INC. All rights reserved.

File: execute_gpt.py

Classes

ExecutionResult

Result from GPT execution.
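A plausible shape for this class is a small dataclass bundling the response text with token counts; the field names below are assumptions for illustration, not the module's actual attributes:

```python
from dataclasses import asdict, dataclass


@dataclass
class ExecutionResult:
    """Illustrative result shape; field names are assumptions."""
    content: str
    model: str
    prompt_tokens: int = 0
    completion_tokens: int = 0
    cost_usd: float = 0.0

    def to_dict(self):
        """Serialize the result for logging or JSON output."""
        return asdict(self)


# Example instance, as execute() might return:
result = ExecutionResult(content="Hello", model="gpt-4o",
                         prompt_tokens=12, completion_tokens=3)
```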

GPTExecutor

GPT API executor with standardized interface.

Functions

main()

CLI interface for GPT executor.

to_dict()

Convert the execution result to a dictionary.

execute(prompt, system, messages, max_tokens, temperature, json_mode, functions)

Execute prompt against GPT model.
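With json_mode enabled the model is constrained to emit valid JSON, so the response text can be parsed directly. A hedged sketch; the commented-out call and the sample response below are illustrative, not real API output:

```python
import json

# Hypothetical call (requires OPENAI_API_KEY and network access):
# result = executor.execute(
#     'List three primes as JSON like {"primes": [...]}',
#     system="Reply only with JSON.",
#     json_mode=True,
# )

# Whatever text comes back in JSON mode parses directly:
raw = '{"primes": [2, 3, 5]}'  # stand-in for the model's response text
data = json.loads(raw)
```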

stream(prompt, system, messages, max_tokens)

Stream response from GPT model.
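stream() presumably yields text deltas as they arrive, so a consumer can display output incrementally while also accumulating the full reply. A sketch using `fake_stream` as a stand-in for the real generator:

```python
def fake_stream(text, size=4):
    """Stand-in for executor.stream(): yields the reply in small chunks."""
    for i in range(0, len(text), size):
        yield text[i:i + size]


parts = []
for chunk in fake_stream("Roses are red, violets are blue"):
    parts.append(chunk)  # in real use: print(chunk, end="", flush=True)
full_reply = "".join(parts)
```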

Usage

python execute_gpt.py