AI Content Moderation

In today's digital landscape, content moderation is crucial for maintaining safe and appropriate user experiences. Whether you're building a social platform, forum, or any user-generated content system, you need intelligent moderation that can scale with your user base while maintaining human oversight for complex decisions.

This comprehensive guide explores how to build a production-ready content moderation system using Motia's event-driven architecture. We'll cover:

  1. AI-Powered Analysis: Using OpenAI for text toxicity detection and image safety analysis
  2. Confidence-Based Routing: Automatically handling clear cases while flagging uncertain content for human review
  3. Slack Integration: Creating interactive moderation workflows within existing team communication tools
  4. Human-in-the-Loop: Seamlessly integrating human decision-making into automated processes

Let's build a content moderation system that scales intelligently.


The Power of Intelligent Content Moderation

![AI Content Moderation Workflow](pathname://img/ai-content-moderation-workflow.png)

At its core, our content moderation system solves a fundamental challenge: how do you efficiently moderate user-generated content at scale while maintaining human oversight for complex decisions? Traditional approaches often involve either fully manual processes that don't scale or fully automated systems that lack nuance.

Our Motia-powered solution combines the best of both worlds through intelligent routing:

  • OpenAI Integration: Advanced AI analysis for text toxicity and image safety detection
  • Confidence-Based Routing: Automatic handling of clear cases, human review for uncertain content
  • Slack Integration: Interactive moderation workflows within existing team communication tools
  • Motia Framework: Event-driven orchestration with built-in state management and error handling

Instead of a monolithic moderation system, we get a flexible architecture where each component can be scaled, modified, or replaced independently.


The Anatomy of Our Content Moderation System

Our application consists of six specialized steps, each handling a specific part of the moderation workflow. Let's explore the complete architecture.

<Tabs items={['content-submit', 'content-analyzer', 'content-router', 'slack-notifier', 'slack-webhook', 'action-executor']}>

The entry point for content moderation. This API endpoint receives user-generated content (text and/or images) and initiates the moderation workflow.

```typescript
import { z } from "zod";
import { ApiRouteConfig, Handlers } from "motia";

const ContentSubmitInputSchema = z.object({
  text: z.string().optional(),
  imageUrl: z.string().optional(),
  userId: z.string(),
  platform: z.string(),
});

export const config: ApiRouteConfig = {
  type: "api",
  name: "ContentSubmitAPI",
  description: "Receives user-generated content for moderation",
  path: "/content/submit",
  method: "POST",
  bodySchema: ContentSubmitInputSchema,
  emits: ["content.submitted"],
  flows: ["content-moderation"],
};

export const handler: Handlers["ContentSubmitAPI"] = async (
  req,
  { logger, emit }
) => {
  const { text, imageUrl, userId, platform } = req.body;
  const submissionId = `sub_${Date.now()}_${Math.random()
    .toString(36)
    .slice(2, 11)}`;

  logger.info("Content submitted for moderation", {
    submissionId,
    hasText: !!text,
    hasImage: !!imageUrl,
    userId,
    platform,
  });

  await emit({
    topic: "content.submitted",
    data: {
      submissionId,
      text,
      imageUrl,
      userId,
      platform,
      timestamp: new Date().toISOString(),
    },
  });

  return {
    status: 200,
    body: {
      message: "Content submitted for moderation",
      submissionId,
    },
  };
};
```
The AI analysis engine that processes both text and image content using OpenAI's advanced models to determine content safety and risk levels.
```typescript
import { z } from "zod";
import { EventConfig, Handlers } from "motia";
import OpenAI from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

export const config: EventConfig = {
  type: "event",
  name: "ContentAnalyzer",
  description: "Analyzes content using OpenAI for toxicity and safety",
  subscribes: ["content.submitted"],
  emits: ["content.analyzed"],
  input: z.object({
    submissionId: z.string(),
    text: z.string().optional(),
    imageUrl: z.string().optional(),
    userId: z.string(),
    platform: z.string(),
    timestamp: z.string(),
  }),
  flows: ["content-moderation"],
};

export const handler: Handlers["ContentAnalyzer"] = async (
  input,
  { logger, emit }
) => {
  const { submissionId, text, imageUrl, userId } = input;

  logger.info("Starting content analysis", {
    submissionId,
    hasText: !!text,
    hasImage: !!imageUrl,
  });

  let textScore = 0;
  let imageScore = 0;
  let textAnalysis = "";
  let imageAnalysis = "";

  // Analyze text content if present
  if (text) {
    try {
      const textResponse = await openai.chat.completions.create({
        model: "gpt-4",
        messages: [
          {
            role: "system",
            content: `You are a content moderation AI. Analyze the following text for toxicity, hate speech, violence, harassment, or inappropriate content.
Respond with a JSON object containing:
- "score": a number between 0-1 where 0 is completely safe and 1 is extremely harmful
- "analysis": a brief explanation of your assessment
- "categories": array of detected issues (e.g., ["hate_speech", "violence"])`,
          },
          {
            role: "user",
            content: text,
          },
        ],
        temperature: 0.1,
      });

      const textResult = JSON.parse(
        textResponse.choices[0]?.message?.content || "{}"
      );
      textScore = textResult.score || 0;
      textAnalysis = textResult.analysis || "";
    } catch (error) {
      logger.error("Text analysis failed", { error, submissionId });
    }
  }

  // Analyze image content if present
  if (imageUrl) {
    try {
      const imageResponse = await openai.chat.completions.create({
        model: "gpt-4-vision-preview",
        messages: [
          {
            role: "system",
            content: `You are a content moderation AI. Analyze the following image for inappropriate content, violence, nudity, or harmful material.
Respond with a JSON object containing:
- "score": a number between 0-1 where 0 is completely safe and 1 is extremely harmful
- "analysis": a brief explanation of your assessment
- "categories": array of detected issues (e.g., ["violence", "inappropriate"])`,
          },
          {
            role: "user",
            content: [
              {
                type: "text",
                text: "Analyze this image for content moderation:",
              },
              {
                type: "image_url",
                image_url: { url: imageUrl },
              },
            ],
          },
        ],
        temperature: 0.1,
      });

      const imageResult = JSON.parse(
        imageResponse.choices[0]?.message?.content || "{}"
      );
      imageScore = imageResult.score || 0;
      imageAnalysis = imageResult.analysis || "";
    } catch (error) {
      logger.error("Image analysis failed", { error, submissionId });
    }
  }

  // Calculate overall risk score: the riskier modality wins
  const overallScore = Math.max(textScore, imageScore);

  const analysisResult = {
    submissionId,
    userId,
    textScore,
    imageScore,
    overallScore,
    textAnalysis,
    imageAnalysis,
    analyzedAt: new Date().toISOString(),
  };

  logger.info("Content analysis completed", {
    submissionId,
    overallScore,
    textScore,
    imageScore,
  });

  await emit({
    topic: "content.analyzed",
    data: analysisResult,
  });
};
```
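A note on parsing: the analyzer trusts `JSON.parse` on the raw model reply, and GPT models sometimes wrap JSON in prose or code fences. If you hit that in practice, a defensive extractor along these lines can help (a sketch; `extractJson` is a hypothetical helper, not part of the example above):

```typescript
// Hypothetical helper: pull the first JSON object out of a model response
// that may wrap it in prose or a code fence.
function extractJson(raw: string): { score: number; analysis: string } {
  const fallback = { score: 0, analysis: "" };
  // Take the outermost {...} span in the response.
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1 || end <= start) return fallback;
  try {
    const parsed = JSON.parse(raw.slice(start, end + 1));
    return {
      // Clamp the score into [0, 1] so a misbehaving model can't skew routing.
      score: Math.min(1, Math.max(0, Number(parsed.score) || 0)),
      analysis: typeof parsed.analysis === "string" ? parsed.analysis : "",
    };
  } catch {
    return fallback;
  }
}

console.log(extractJson('Here you go:\n{"score": 0.87, "analysis": "hate speech"}'));
// → { score: 0.87, analysis: 'hate speech' }
```

On a failed parse the helper falls back to a score of 0, which matches the analyzer's existing behavior of treating unanalyzable content as safe; you may prefer a conservative default that forces human review instead.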
The intelligent routing system that determines whether content should be automatically approved/rejected or sent for human review based on confidence scores.
```typescript
import { z } from "zod";
import { EventConfig, Handlers } from "motia";

export const config: EventConfig = {
  type: "event",
  name: "ContentRouter",
  description: "Routes content based on AI analysis confidence scores",
  subscribes: ["content.analyzed"],
  emits: [
    "content.auto.approved",
    "content.auto.rejected",
    "content.needs.review",
  ],
  input: z.object({
    submissionId: z.string(),
    userId: z.string(),
    textScore: z.number(),
    imageScore: z.number(),
    overallScore: z.number(),
    textAnalysis: z.string(),
    imageAnalysis: z.string(),
    analyzedAt: z.string(),
  }),
  flows: ["content-moderation"],
};

export const handler: Handlers["ContentRouter"] = async (
  input,
  { logger, emit }
) => {
  const { submissionId, userId, overallScore, textAnalysis, imageAnalysis } =
    input;

  logger.info("Routing content based on analysis", {
    submissionId,
    overallScore,
    userId,
  });

  let decision: string;
  const routingData = {
    submissionId,
    userId,
    overallScore,
    textAnalysis,
    imageAnalysis,
    routedAt: new Date().toISOString(),
  };

  // Decision logic based on confidence scores
  if (overallScore <= 0.05) {
    // Very low risk - auto approve
    decision = "approved";
    logger.info("Content auto-approved", { submissionId, overallScore });

    await emit({
      topic: "content.auto.approved",
      data: {
        ...routingData,
        decision,
        reason: "Low risk score - automatically approved",
      },
    });
  } else if (overallScore >= 0.95) {
    // Very high risk - auto reject
    decision = "rejected";
    logger.info("Content auto-rejected", { submissionId, overallScore });

    await emit({
      topic: "content.auto.rejected",
      data: {
        ...routingData,
        decision,
        reason: "High risk score - automatically rejected",
      },
    });
  } else {
    // Medium risk - needs human review
    decision = "review";
    logger.info("Content needs human review", { submissionId, overallScore });

    await emit({
      topic: "content.needs.review",
      data: {
        ...routingData,
        decision,
        reason: "Medium risk score - requires human review",
        priority:
          overallScore >= 0.7 ? "high" : overallScore >= 0.5 ? "medium" : "low",
      },
    });
  }
};
```
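Because the 0.05 and 0.95 cut-offs are the part you will tune most, it can help to factor the decision into a pure function that mirrors the handler above; `routeByScore` is an illustrative name, not something the project defines, but a function like it is trivially unit-testable:

```typescript
type Decision = "approved" | "rejected" | "review";
type Priority = "low" | "medium" | "high";

// Same cut-offs as the router step: auto-approve at <=5% risk,
// auto-reject at >=95%, everything in between goes to a human.
function routeByScore(score: number): { decision: Decision; priority?: Priority } {
  if (score <= 0.05) return { decision: "approved" };
  if (score >= 0.95) return { decision: "rejected" };
  return {
    decision: "review",
    priority: score >= 0.7 ? "high" : score >= 0.5 ? "medium" : "low",
  };
}

console.log(routeByScore(0.87)); // → { decision: 'review', priority: 'high' }
```

The handler then only emits the event corresponding to the returned decision, which keeps the threshold logic and the event plumbing separately testable.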
Creates interactive Slack messages for human moderators with approve/reject/escalate buttons and contextual information.
```typescript
import { z } from "zod";
import { EventConfig, Handlers } from "motia";
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

export const config: EventConfig = {
  type: "event",
  name: "SlackNotifier",
  description: "Sends interactive Slack messages for human review",
  subscribes: ["content.needs.review"],
  emits: ["slack.notification.sent"],
  input: z.object({
    submissionId: z.string(),
    userId: z.string(),
    overallScore: z.number(),
    textAnalysis: z.string(),
    imageAnalysis: z.string(),
    decision: z.string(),
    reason: z.string(),
    priority: z.enum(["low", "medium", "high"]),
    routedAt: z.string(),
  }),
  flows: ["content-moderation"],
};

export const handler: Handlers["SlackNotifier"] = async (
  input,
  { logger, emit }
) => {
  const { submissionId, userId, overallScore, textAnalysis, imageAnalysis, priority } =
    input;

  logger.info("Sending Slack notification for review", {
    submissionId,
    priority,
    userId,
  });

  // Determine channel based on priority
  let channel: string;
  switch (priority) {
    case "high":
      channel = process.env.SLACK_CHANNEL_URGENT!;
      break;
    case "medium":
      channel = process.env.SLACK_CHANNEL_ESCALATED!;
      break;
    default:
      channel = process.env.SLACK_CHANNEL_MODERATION!;
  }

  // Create interactive message with buttons
  const message = {
    channel,
    text: "Content Moderation Review Required",
    blocks: [
      {
        type: "header",
        text: {
          type: "plain_text",
          text: `🚨 Content Review - ${priority.toUpperCase()} Priority`,
        },
      },
      {
        type: "section",
        fields: [
          {
            type: "mrkdwn",
            text: `*Submission ID:*\n${submissionId}`,
          },
          {
            type: "mrkdwn",
            text: `*User ID:*\n${userId}`,
          },
          {
            type: "mrkdwn",
            text: `*Risk Score:*\n${(overallScore * 100).toFixed(1)}%`,
          },
          {
            type: "mrkdwn",
            text: `*Priority:*\n${priority.toUpperCase()}`,
          },
        ],
      },
      {
        type: "section",
        text: {
          type: "mrkdwn",
          text: `*AI Analysis:*\n${textAnalysis || imageAnalysis || "No analysis available"}`,
        },
      },
      {
        type: "actions",
        elements: [
          {
            type: "button",
            text: {
              type: "plain_text",
              text: "✅ Approve",
            },
            style: "primary",
            action_id: "approve_content",
            value: submissionId,
          },
          {
            type: "button",
            text: {
              type: "plain_text",
              text: "❌ Reject",
            },
            style: "danger",
            action_id: "reject_content",
            value: submissionId,
          },
          {
            type: "button",
            text: {
              type: "plain_text",
              text: "⚠️ Escalate",
            },
            action_id: "escalate_content",
            value: submissionId,
          },
        ],
      },
    ],
  };

  try {
    const result = await slack.chat.postMessage(message);

    logger.info("Slack notification sent successfully", {
      submissionId,
      channel,
      messageTs: result.ts,
    });

    await emit({
      topic: "slack.notification.sent",
      data: {
        submissionId,
        userId,
        channel,
        messageTs: result.ts,
        priority,
        sentAt: new Date().toISOString(),
      },
    });
  } catch (error) {
    logger.error("Failed to send Slack notification", {
      error,
      submissionId,
      channel,
    });
    throw error;
  }
};
```
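The notifier rethrows on failure so the framework can handle it. If you would rather attempt a few local retries before giving up, a small generic backoff wrapper is one option; this is a sketch under the assumption that the failures are transient, and `withRetry` is not something Motia or the Slack SDK provides:

```typescript
// Retry an async operation with exponential backoff before giving up.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 200
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff: 200ms, 400ms, 800ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}

// Hypothetical usage inside the notifier:
// const result = await withRetry(() => slack.chat.postMessage(message));
```

Keep the retry count low: the event bus will typically redeliver on a thrown error anyway, and stacking retries multiplies duplicate-message risk.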
Handles interactive button responses from Slack, processing approve/reject/escalate decisions from human moderators.
```typescript
import { ApiRouteConfig, Handlers } from "motia";
import { createHmac, timingSafeEqual } from "crypto";

export const config: ApiRouteConfig = {
  type: "api",
  name: "SlackWebhook",
  description: "Handles Slack interactive button responses",
  path: "/slack/webhook",
  method: "POST",
  emits: ["slack.decision.received"],
  flows: ["content-moderation"],
};

export const handler: Handlers["SlackWebhook"] = async (
  req,
  { logger, emit }
) => {
  // Slack signs the *raw* urlencoded request body, so verification assumes
  // the framework delivers it here as an unparsed string.
  const signature = req.headers["x-slack-signature"] as string;
  const timestamp = req.headers["x-slack-request-timestamp"] as string;
  const rawBody = req.body as unknown as string;

  if (!verifySlackSignature(signature, timestamp, rawBody)) {
    logger.error("Invalid Slack signature");
    return { status: 401, body: { error: "Unauthorized" } };
  }

  // Interactivity payloads arrive as a urlencoded form with a `payload` field.
  const payload = JSON.parse(new URLSearchParams(rawBody).get("payload") || "{}");
  const { actions, user, message } = payload;

  if (!actions || actions.length === 0) {
    return { status: 200, body: { text: "No action received" } };
  }

  const action = actions[0];
  const submissionId = action.value;
  // Map button action_ids to the decision values the ActionExecutor expects.
  const decisionMap: Record<string, string> = {
    approve_content: "approved",
    reject_content: "rejected",
    escalate_content: "escalated",
  };
  const decision = decisionMap[action.action_id] ?? action.action_id;
  const moderatorId = user.id;
  const moderatorName = user.name;

  logger.info("Slack decision received", {
    submissionId,
    decision,
    moderatorId,
    moderatorName,
  });

  await emit({
    topic: "slack.decision.received",
    data: {
      submissionId,
      decision,
      moderatorId,
      moderatorName,
      messageTs: message.ts,
      decidedAt: new Date().toISOString(),
    },
  });

  // Acknowledge in-channel so moderators can see the recorded decision
  const responseMessage = `✅ Decision recorded: ${decision.toUpperCase()} by ${moderatorName}`;

  return {
    status: 200,
    body: {
      text: responseMessage,
      replace_original: false,
    },
  };
};

function verifySlackSignature(
  signature: string,
  timestamp: string,
  body: string
): boolean {
  const signingSecret = process.env.SLACK_SIGNING_SECRET!;
  // Reject stale requests (Slack recommends a five-minute window)
  // to defend against replay attacks.
  if (Math.abs(Date.now() / 1000 - Number(timestamp)) > 60 * 5) {
    return false;
  }
  const baseString = `v0:${timestamp}:${body}`;
  const expectedSignature = `v0=${createHmac("sha256", signingSecret)
    .update(baseString)
    .digest("hex")}`;
  // Constant-time comparison avoids leaking signature bytes via timing.
  const a = Buffer.from(expectedSignature);
  const b = Buffer.from(signature || "");
  return a.length === b.length && timingSafeEqual(a, b);
}
```
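Signature verification is easy to get subtly wrong, so it is worth exercising in isolation. The standalone sketch below regenerates a Slack-style `v0=` signature from a known secret and checks it with a constant-time comparison; the secret and body here are made-up test values:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

// Build a Slack-style signature for a raw request body.
function sign(secret: string, timestamp: string, rawBody: string): string {
  const base = `v0:${timestamp}:${rawBody}`;
  return `v0=${createHmac("sha256", secret).update(base).digest("hex")}`;
}

// Verify a signature against the expected one in constant time.
function verify(secret: string, timestamp: string, rawBody: string, signature: string): boolean {
  const expected = sign(secret, timestamp, rawBody);
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b);
}

const ts = "1234567890";
const body = "payload=%7B%22type%22%3A%22block_actions%22%7D";
const good = sign("test-secret", ts, body);
console.log(verify("test-secret", ts, body, good)); // → true
console.log(verify("wrong-secret", ts, body, good)); // → false
```

Pairing `sign` and `verify` like this gives you a round-trip unit test for the webhook's verification logic without ever calling Slack.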
Executes the final moderation decisions, handling both automated and human-reviewed content with comprehensive logging and state management.
```typescript
import { z } from "zod";
import { EventConfig, Handlers } from "motia";

export const config: EventConfig = {
  type: "event",
  name: "ActionExecutor",
  description: "Executes final moderation decisions",
  subscribes: [
    "content.auto.approved",
    "content.auto.rejected",
    "slack.decision.received",
  ],
  emits: ["content.moderation.completed"],
  input: z.object({
    submissionId: z.string(),
    decision: z.enum(["approved", "rejected", "escalated"]),
    // Auto-moderation events carry a reason; Slack decisions carry a
    // decidedAt timestamp, so both fields are optional here.
    reason: z.string().optional(),
    moderatorId: z.string().optional(),
    moderatorName: z.string().optional(),
    decidedAt: z.string().optional(),
  }),
  flows: ["content-moderation"],
};

export const handler: Handlers["ActionExecutor"] = async (
  input,
  { logger, emit, state }
) => {
  const { submissionId, decision, moderatorId, moderatorName } = input;
  const reason = input.reason ?? "Human moderator decision";
  const decidedAt = input.decidedAt ?? new Date().toISOString();

  logger.info("Executing moderation decision", {
    submissionId,
    decision,
    moderatorId,
    moderatorName,
  });

  // Store the final decision in state
  const moderationRecord = {
    submissionId,
    decision,
    reason,
    moderatorId,
    moderatorName,
    decidedAt,
    executedAt: new Date().toISOString(),
  };

  await state.set("moderation", submissionId, moderationRecord);

  // Execute the appropriate action based on decision
  switch (decision) {
    case "approved":
      logger.info("Content approved", { submissionId });
      // Here you would typically:
      // - Make content visible to users
      // - Send approval notification to user
      // - Update content status in database
      break;

    case "rejected":
      logger.info("Content rejected", { submissionId });
      // Here you would typically:
      // - Hide or remove content
      // - Send rejection notification to user
      // - Log for potential user action
      break;

    case "escalated":
      logger.info("Content escalated", { submissionId });
      // Here you would typically:
      // - Send to higher-level moderators
      // - Create support ticket
      // - Flag for additional review
      break;
  }

  await emit({
    topic: "content.moderation.completed",
    data: moderationRecord,
  });

  logger.info("Moderation decision executed successfully", {
    submissionId,
    decision,
    moderatorId,
  });
};
```
</Tabs>

Explore the Workbench

The Motia Workbench provides a visual representation of your content moderation pipeline, making it easy to understand the flow and monitor moderation decisions in real-time.

![AI Content Moderation Workflow](pathname://img/ai-content-moderation-workflow.png)

You can monitor real-time content analysis, view Slack notifications, and trace the execution of each moderation decision directly in the Workbench interface. This makes development and debugging significantly easier compared to traditional monolithic moderation systems.

Human-in-the-Loop Workflow Demo

Let's see the complete human-in-the-loop process in action using a real example. We'll submit problematic content and watch it flow through the moderation pipeline.

Step 1: Submit Content for Moderation

Submit the sample content that should trigger human review:

```bash
curl -X POST http://localhost:3000/content/submit \
  -H "Content-Type: application/json" \
  -d '{
    "text": "I hate this stupid garbage, it'\''s complete trash and makes me want to hurt someone",
    "userId": "user456",
    "platform": "web"
  }'
```

Step 2: AI Analysis & Routing

The system will:

  1. Analyze the content using OpenAI's GPT-4 for toxicity detection
  2. Calculate risk scores based on detected harmful content
  3. Route for human review since the content contains hate speech and violence references

You'll see logs like:

```
Content submitted for moderation: submissionId=sub_123, hasText=true, userId=user456
Starting content analysis: submissionId=sub_123, hasText=true
Content analysis completed: submissionId=sub_123, overallScore=0.87, textScore=0.87
Content needs human review: submissionId=sub_123, overallScore=0.87
```

Step 3: Slack Notification for Human Review

The system automatically sends an interactive message to your moderation team in Slack:

![AI Content Moderation Slack Output](pathname://img/ai-content-moderation-slack-output.png)

The Slack message includes:

  • Risk score: 87% estimated likelihood of harmful content
  • Priority level: HIGH (since score ≥ 70%)
  • AI analysis: Detailed breakdown of detected issues
  • Interactive buttons: Approve, Reject, or Escalate options

Step 4: Human Decision & Execution

When a moderator clicks a button in Slack:

  1. Decision is recorded with moderator attribution
  2. Content is processed according to the decision
  3. User is notified of the moderation outcome
  4. Audit trail is maintained for compliance

The complete workflow demonstrates how AI handles the initial analysis while humans provide the final judgment for nuanced decisions.


Key Features & Benefits

🤖 AI-Powered Analysis

Advanced OpenAI integration for both text toxicity detection and image safety analysis with confidence scoring.

🎯 Intelligent Routing

Confidence-based decision making that automatically handles clear cases while flagging uncertain content for human review.

💬 Slack Integration

Interactive moderation workflows within existing team communication tools - no custom dashboard required.

👥 Human-in-the-Loop

Seamless integration of human decision-making with approve/reject/escalate buttons and contextual information.

📊 Priority-Based Routing

Content is routed to different Slack channels based on risk level and urgency.

🔒 Security & Compliance

Built-in signature verification, audit trails, and comprehensive logging for compliance requirements.


Getting Started

Ready to build your own intelligent content moderation system? Here's how to set it up and run it.

1. Install Dependencies

Install the necessary npm packages and set up the development environment.

```bash
npm install
```

2. Configure Environment Variables

Create a .env file with your API keys and Slack configuration:

```bash
# Required: OpenAI API key for content analysis
OPENAI_API_KEY="sk-..."

# Required: Slack bot configuration
SLACK_BOT_TOKEN="xoxb-your-bot-token"
SLACK_SIGNING_SECRET="your-signing-secret"

# Required: Slack channels for different priority levels
SLACK_CHANNEL_MODERATION="C1234567890"   # Normal priority
SLACK_CHANNEL_URGENT="C0987654321"       # High priority
SLACK_CHANNEL_ESCALATED="C1122334455"    # Escalated content
```

3. Set Up Slack Integration

  1. Create a Slack app with the following permissions:
    • chat:write - Send messages to channels
    • channels:read - Access channel information
  2. Enable Interactive Components and set webhook URL to: https://your-domain.com/slack/webhook
  3. Install the app to your workspace
  4. Copy the bot token and signing secret to your .env file

4. Run the Moderation System

Start the Motia development server to begin processing content.

```bash
npm run dev
```

Advanced Configuration

Adjusting Confidence Thresholds

Modify the decision thresholds in the content router step:

```typescript
// In 03-content-router.step.ts
if (overallScore <= 0.05) {
  decision = "approved"; // Auto-approve threshold (5%)
} else if (overallScore >= 0.95) {
  decision = "rejected"; // Auto-reject threshold (95%)
} else {
  decision = "review"; // Human review range (5-95%)
}
```
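If you would rather tune thresholds without redeploying code, they can be read from the environment with sane defaults. This is a sketch: `APPROVE_THRESHOLD` and `REJECT_THRESHOLD` are hypothetical variable names, not ones the project defines:

```typescript
// Read a numeric threshold from the environment, falling back to a default
// and clamping into [0, 1] so a typo can't disable moderation entirely.
function thresholdFromEnv(name: string, fallback: number): number {
  const raw = process.env[name];
  const parsed = raw === undefined ? NaN : Number(raw);
  const value = Number.isFinite(parsed) ? parsed : fallback;
  return Math.min(1, Math.max(0, value));
}

const approveAt = thresholdFromEnv("APPROVE_THRESHOLD", 0.05);
const rejectAt = thresholdFromEnv("REJECT_THRESHOLD", 0.95);
console.log(approveAt, rejectAt);
```

The router step would then compare `overallScore` against `approveAt` and `rejectAt` instead of the hard-coded 0.05 and 0.95.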

Custom Channel Routing

Implement custom routing logic based on content type or user behavior:

```typescript
// Route based on user history or content type
const channel = getChannelForContent(contentType, userHistory, riskScore);
```

Integration with External Systems

Extend the action executor to integrate with your existing systems:

```typescript
// In 06-action-executor.step.ts
case "approved":
  await publishContent(submissionId);
  await notifyUser(userId, "Your content has been approved");
  break;
```

💻 Dive into the Code

Want to explore the complete content moderation implementation? Check out the full source code, including all steps, Slack integration, and production-ready configuration:

Complete Content Moderation System

Access the full implementation with AI analysis, Slack integration, and human-in-the-loop workflows.


Conclusion: Intelligent Content Moderation at Scale

This content moderation system demonstrates the power of combining AI analysis with human oversight in an event-driven architecture. By breaking down moderation into discrete, specialized components, we've created a system that's not only intelligent but also flexible and maintainable.

The human-in-the-loop approach means you can:

  • Scale efficiently: Automatically handle 80-90% of content while maintaining quality
  • Adapt quickly: Adjust thresholds and routing logic without system changes
  • Maintain oversight: Human moderators focus on complex cases that require judgment
  • Integrate seamlessly: Use existing team communication tools like Slack

Key architectural benefits:

  • Intelligent routing: Confidence-based decisions reduce human workload
  • Flexible integration: Works with any team communication platform
  • Audit compliance: Complete decision trails and moderator attribution
  • Scalable architecture: Each component can be scaled independently

From here, you can extend the system by:

  • Adding support for video content moderation
  • Implementing custom AI models for specific content types
  • Building analytics dashboards for moderation insights
  • Integrating with user management and content management systems
  • Adding escalation policies and moderator workflows

The event-driven architecture makes all of these extensions straightforward to implement without disrupting the existing moderation pipeline.

Ready to build content moderation that scales with your platform? Start building with Motia today!