CODITECT Gemini API Executor
Multi-model AI execution framework - Gemini (Google) provider implementation. Provides a standardized interface for executing prompts against Gemini models, with streaming support, token tracking, and error handling.
Features:
- Gemini 1.5 Pro, Gemini 1.5 Flash support
- Streaming and batch response modes
- Token usage tracking and cost estimation
- Automatic retry with exponential backoff
- Safety settings configuration
- Multimodal input support (text + images)
- Context window management (1M+ tokens)
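The automatic retry behavior listed above can be sketched as a small decorator. This is an illustrative sketch only; the names `retry_with_backoff`, `base_delay`, and `max_delay` are assumptions for this example, not the module's actual API.

```python
import random
import time
from functools import wraps


def retry_with_backoff(max_retries=3, base_delay=1.0, max_delay=30.0):
    """Retry a callable with exponential backoff (illustrative sketch)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == max_retries:
                        raise  # out of retries: surface the last error
                    # Delay doubles each attempt, capped, with small jitter
                    # to avoid synchronized retries across clients.
                    delay = min(base_delay * (2 ** attempt), max_delay)
                    time.sleep(delay + random.uniform(0, 0.1))
        return wrapper
    return decorator
```

A decorator like this would wrap the API call site, so transient failures (rate limits, timeouts) are retried while permanent errors still propagate after the final attempt.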
Usage:

```python
from core.execute_gemini import GeminiExecutor

executor = GeminiExecutor(model="gemini-1.5-pro")
response = executor.execute("Explain quantum computing")

# Streaming
for chunk in executor.stream("Write a poem"):
    print(chunk, end="")
```
Environment Variables:
- GOOGLE_API_KEY - Required API key
- GEMINI_MODEL - Default model (optional)
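Resolution of these variables might look like the following sketch. The function name `resolve_config` and the fallback model are assumptions for illustration; the module's actual defaults may differ.

```python
import os


def resolve_config(env=None):
    """Read API key and default model from the environment (sketch).

    GOOGLE_API_KEY is required; GEMINI_MODEL is optional and falls back
    to a hypothetical default here.
    """
    env = os.environ if env is None else env
    api_key = env.get("GOOGLE_API_KEY")
    if not api_key:
        # Fail fast with a clear message rather than at the first API call.
        raise RuntimeError("GOOGLE_API_KEY is not set")
    model = env.get("GEMINI_MODEL", "gemini-1.5-flash")
    return {"api_key": api_key, "model": model}
```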
Author: AZ1.AI INC.
Framework: CODITECT
Copyright: © 2025 AZ1.AI INC. All rights reserved.
File: execute_gemini.py
Classes
ExecutionResult
Result from Gemini execution.
GeminiExecutor
Gemini API executor with standardized interface.
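The ExecutionResult class described above might be a simple dataclass. The field names below are assumptions inferred from the feature list (token tracking, cost estimation), not the module's actual schema.

```python
from dataclasses import asdict, dataclass


@dataclass
class ExecutionResult:
    """Result from a Gemini execution (illustrative field set)."""
    text: str
    model: str
    input_tokens: int
    output_tokens: int
    estimated_cost_usd: float

    def to_dict(self):
        # Serialize all fields, plus a derived total for convenience.
        d = asdict(self)
        d["total_tokens"] = self.input_tokens + self.output_tokens
        return d
```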
Functions
main()
CLI interface for Gemini executor.
to_dict()
Convert the execution result to a dictionary.
execute(prompt, system, history, max_tokens, temperature)
Execute a prompt against the Gemini model and return the full response.
stream(prompt, system, max_tokens)
Stream the response from the Gemini model chunk by chunk.
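Internally, stream() would adapt the provider's chunk objects into plain text pieces for the caller. The sketch below assumes each chunk exposes a `.text` attribute; the helper name `stream_text` and the optional `stats` tally are illustrative, not the module's actual implementation.

```python
def stream_text(chunks, stats=None):
    """Yield the text of each streamed chunk (illustrative sketch).

    `chunks` stands in for the provider's streaming iterator. If a
    `stats` dict is supplied, a rough running tally is kept so the
    caller can inspect usage after the stream is exhausted.
    """
    for chunk in chunks:
        text = getattr(chunk, "text", "") or ""
        if stats is not None:
            stats["chunks"] = stats.get("chunks", 0) + 1
            stats["chars"] = stats.get("chars", 0) + len(text)
        yield text
```

A caller can join the yielded pieces for the full response, or print them as they arrive, as in the usage example above.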
Usage
python execute_gemini.py