Instructions Module (IM) Plan¶
Overview¶
The Instructions Module directs the Content Production Module by generating customized prompts and directives. It incorporates client-specific strategies and ensures high-quality, ethical content direction following 2025 best practices.
Key Features¶
- Prompt Generation: Create tailored prompts based on client strategies and content types
- Strategy Customization: Per-client templates with industry-specific knowledge
- Ethical/SEO Integration: Enforce E-E-A-T signals, keyword usage, and proper formatting
- Job Queuing: Lightweight background tasks for prompt refinement
- Template Management: Store and retrieve templates from Supabase
Technology Stack¶
- Language: Python 3.12+
- Framework: FastAPI
- LLM Libraries: OpenAI SDK (for optional prompt refinement)
- Database: Supabase (for template storage)
- Deployment: Railway
API Endpoints¶
| Endpoint | Method | Description | Request Body | Response |
|---|---|---|---|---|
| /generate-prompt | POST | Generate customized prompt | { "topic": string, "content_type": string, "client_id": string, "keywords": array<string> } | { "job_id": string } |
| /prompt/{job_id} | GET | Retrieve generated prompt | None | { "prompt": string, "metadata": object }, or { "status": string } while pending |
| /templates/{client_id} | GET | Get client templates (debug) | None | { "templates": array<object> } |
| /health | GET | Service health check | None | { "status": "ok" } |
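For illustration, a minimal client-side sketch of the submit-then-poll flow (assuming the service runs at http://localhost:8001, matching the PORT value in Environment Variables below):

import time

import httpx

IM_URL = "http://localhost:8001"  # assumed local deployment

# Submit a prompt-generation job
job = httpx.post(f"{IM_URL}/generate-prompt", json={
    "topic": "Winter driveway maintenance",
    "content_type": "blog",
    "client_id": "paving",
    "keywords": ["asphalt sealing", "frost heave"]
}).json()

# Poll until the job completes; completed jobs return {"prompt", "metadata"}
for _ in range(30):
    result = httpx.get(f"{IM_URL}/prompt/{job['job_id']}").json()
    if "prompt" in result:
        print(result["prompt"])
        break
    if result.get("status") == "failed":
        raise RuntimeError(result.get("error"))
    time.sleep(1)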
Implementation Code Structure¶
Main Application (app.py)¶
from fastapi import FastAPI, BackgroundTasks, HTTPException
from pydantic import BaseModel
from typing import Dict, List
import uuid
import os
from datetime import datetime, timezone
from supabase import create_client, Client
from openai import AsyncOpenAI
from dotenv import load_dotenv

# Load variables from a local .env file when present
# (Railway injects env vars directly, so this is a no-op in production)
load_dotenv()
# Initialize FastAPI
app = FastAPI(title="Instructions Module", version="1.0.0")
# Initialize Supabase
supabase_url = os.getenv("SUPABASE_URL")
supabase_key = os.getenv("SUPABASE_KEY")
supabase: Client = create_client(supabase_url, supabase_key)
# Initialize OpenAI for prompt refinement (optional)
openai_client = AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))
# Request Models
class PromptRequest(BaseModel):
topic: str
content_type: str # blog/social/local
client_id: str
keywords: List[str] = []
class PromptResponse(BaseModel):
prompt: str
metadata: Dict
# Template configurations per client
TEMPLATE_CONFIGS = {
"PASCO": {
"role": "expert science educator",
"style": "educational, accurate, engaging",
"constraints": [
"Cite scientific sources",
"Use appropriate terminology",
"Include practical examples"
]
},
"heaviside": {
"role": "digital marketing expert",
"style": "professional, conversion-focused",
"constraints": [
"Focus on local SEO",
"Include call-to-action",
"Optimize for snippets"
]
},
"paving": {
"role": "paving industry specialist",
"style": "authoritative, practical",
"constraints": [
"Use industry terminology",
"Include cost considerations",
"Address common concerns"
]
},
"garage": {
"role": "garage door expert",
"style": "helpful, trustworthy",
"constraints": [
"Focus on safety",
"Include maintenance tips",
"Address seasonal concerns"
]
},
"electrician": {
"role": "licensed electrician consultant",
"style": "safety-conscious, professional",
"constraints": [
"Emphasize safety first",
"Include code compliance",
"Provide practical advice"
]
}
}
# Background task for prompt generation
async def build_prompt(job_id: str, params: Dict):
try:
# Update job status
supabase.table("prompt_jobs").update({
"status": "in_progress",
"updated_at": datetime.utcnow().isoformat()
}).eq("id", job_id).execute()
# Fetch base template from Supabase
template_result = supabase.table("prompt_templates").select("*").eq(
"client_id", params["client_id"]
).eq(
"content_type", params["content_type"]
).execute()
if template_result.data:
base_template = template_result.data[0]["template"]
else:
# Fallback to default template
base_template = get_default_template(params["content_type"])
# Get client config
client_config = TEMPLATE_CONFIGS.get(params["client_id"], TEMPLATE_CONFIGS["heaviside"])
# Build the prompt
prompt = construct_prompt(
base_template=base_template,
topic=params["topic"],
keywords=params["keywords"],
client_config=client_config,
content_type=params["content_type"]
)
        # Optional: refine the prompt with an LLM
        refinement_enabled = os.getenv("ENABLE_PROMPT_REFINEMENT", "false").lower() == "true"
        if refinement_enabled:
            prompt = await refine_prompt_with_llm(prompt, params)
# Store result
result = {
"prompt": prompt,
"metadata": {
"client_id": params["client_id"],
"content_type": params["content_type"],
"keywords": params["keywords"],
"template_version": "1.0",
"refined": os.getenv("ENABLE_PROMPT_REFINEMENT", "false").lower() == "true"
}
}
supabase.table("prompt_jobs").update({
"status": "completed",
"result": result,
"updated_at": datetime.utcnow().isoformat()
}).eq("id", job_id).execute()
except Exception as e:
supabase.table("prompt_jobs").update({
"status": "failed",
"error": str(e),
"updated_at": datetime.utcnow().isoformat()
}).eq("id", job_id).execute()
def construct_prompt(base_template: str, topic: str, keywords: List[str],
client_config: Dict, content_type: str) -> str:
"""Construct a detailed prompt based on inputs and configuration"""
# Content type specific instructions
content_instructions = {
"blog": {
"format": "comprehensive blog post",
"length": "800-2000 words",
"structure": "introduction, main sections with headers, conclusion",
"seo": "optimize for featured snippets and long-tail keywords"
},
"social": {
"format": "engaging social media post",
"length": "100-300 words",
"structure": "hook, value proposition, call-to-action",
"seo": "optimize for local search and engagement"
},
"local": {
"format": "location-specific content",
"length": "500-1000 words",
"structure": "local context, service details, area-specific information",
"seo": "include local landmarks and geo-specific keywords"
}
}
content_spec = content_instructions.get(content_type, content_instructions["blog"])
# Build the prompt
prompt_parts = [
f"You are a {client_config['role']} creating a {content_spec['format']} about {topic}.",
f"\nWriting style: {client_config['style']}",
f"\nContent specifications:",
f"- Length: {content_spec['length']}",
f"- Structure: {content_spec['structure']}",
f"- SEO focus: {content_spec['seo']}",
f"\nKey constraints:"
]
for constraint in client_config['constraints']:
prompt_parts.append(f"- {constraint}")
if keywords:
prompt_parts.append(f"\nIncorporate these keywords naturally: {', '.join(keywords)}")
prompt_parts.append(f"\n{base_template}")
# Add chain-of-thought instruction
prompt_parts.append("\nThink step by step:")
prompt_parts.append("1. Research and gather relevant information")
prompt_parts.append("2. Create an outline")
prompt_parts.append("3. Write the content")
prompt_parts.append("4. Review for accuracy and SEO optimization")
return "\n".join(prompt_parts)
async def refine_prompt_with_llm(prompt: str, params: Dict) -> str:
"""Optional: Use LLM to refine the prompt"""
refinement_prompt = f"""
Improve this content generation prompt to be more specific and effective.
Maintain the core requirements but enhance clarity and add relevant examples.
Original prompt:
{prompt}
Enhanced prompt:
"""
response = await openai_client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": refinement_prompt}],
max_tokens=2000,
temperature=0.3
)
return response.choices[0].message.content
def get_default_template(content_type: str) -> str:
"""Get default template based on content type"""
templates = {
"blog": "Write a comprehensive blog post that provides valuable information to readers. Include relevant examples and actionable insights.",
"social": "Create an engaging social media post that captures attention and encourages interaction. Be concise and impactful.",
"local": "Write location-specific content that resonates with the local community. Include relevant local references and context."
}
return templates.get(content_type, templates["blog"])
# API Endpoints
@app.post("/generate-prompt", response_model=Dict[str, str])
async def generate_prompt(request: PromptRequest, background_tasks: BackgroundTasks):
# Create job record
job_id = str(uuid.uuid4())
job_data = {
"id": job_id,
"status": "pending",
"params": request.dict(),
"created_at": datetime.utcnow().isoformat(),
"updated_at": datetime.utcnow().isoformat()
}
supabase.table("prompt_jobs").insert(job_data).execute()
# Queue background task
    background_tasks.add_task(build_prompt, job_id, request.model_dump())
return {"job_id": job_id}
@app.get("/prompt/{job_id}")
async def get_prompt(job_id: str):
result = supabase.table("prompt_jobs").select("*").eq("id", job_id).execute()
if not result.data:
raise HTTPException(status_code=404, detail="Job not found")
job = result.data[0]
if job["status"] == "completed":
return job["result"]
else:
return {"status": job["status"], "error": job.get("error")}
@app.get("/templates/{client_id}")
async def get_templates(client_id: str):
# Check authorization here in production
templates = supabase.table("prompt_templates").select("*").eq(
"client_id", client_id
).execute()
return {"templates": templates.data}
@app.get("/health")
async def health():
return {"status": "ok", "service": "IM", "version": "1.0.0"}
Requirements (requirements.txt)¶
fastapi==0.111.0
uvicorn==0.30.1
pydantic==2.7.1
supabase==2.5.0
openai==1.35.0
python-dotenv==1.0.1
Railway Configuration (railway.json)¶
{
"$schema": "https://railway.app/railway.schema.json",
"build": {
"builder": "NIXPACKS"
},
"deploy": {
"startCommand": "uvicorn app:app --host 0.0.0.0 --port $PORT",
"restartPolicyType": "ON_FAILURE",
"restartPolicyMaxRetries": 3
}
}
Database Schema (Supabase)¶
Prompt Jobs Table¶
CREATE TABLE prompt_jobs (
id UUID PRIMARY KEY,
status TEXT NOT NULL CHECK (status IN ('pending', 'in_progress', 'completed', 'failed')),
params JSONB NOT NULL,
result JSONB,
error TEXT,
created_at TIMESTAMPTZ NOT NULL,
updated_at TIMESTAMPTZ NOT NULL
);
-- Indexes
CREATE INDEX idx_prompt_jobs_status ON prompt_jobs(status);
CREATE INDEX idx_prompt_jobs_created_at ON prompt_jobs(created_at DESC);
Prompt Templates Table¶
CREATE TABLE prompt_templates (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
client_id TEXT NOT NULL,
content_type TEXT NOT NULL,
template TEXT NOT NULL,
description TEXT,
version TEXT DEFAULT '1.0',
active BOOLEAN DEFAULT true,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
-- Indexes
CREATE UNIQUE INDEX idx_templates_client_content ON prompt_templates(client_id, content_type) WHERE active = true;
CREATE INDEX idx_templates_client ON prompt_templates(client_id);
Sample Template Data¶
-- PASCO Scientific Templates
INSERT INTO prompt_templates (client_id, content_type, template, description) VALUES
('PASCO', 'blog', 'Create an educational blog post that explains complex scientific concepts in an accessible way. Use analogies and real-world applications. Include experiments or demonstrations when relevant.', 'Science education blog template'),
('PASCO', 'social', 'Write an engaging social media post about a scientific concept. Make it exciting and shareable while maintaining accuracy. Include a fun fact or surprising insight.', 'Science social media template');
-- Agency Templates
INSERT INTO prompt_templates (client_id, content_type, template, description) VALUES
('heaviside', 'local', 'Create location-specific content that highlights local expertise and community connection. Reference local landmarks, events, or characteristics that resonate with area residents.', 'Local marketing template'),
('paving', 'blog', 'Write an authoritative blog post about paving services. Include technical details, cost factors, and maintenance advice. Address common customer concerns and seasonal considerations.', 'Paving industry blog template');
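Because the partial unique index above permits only one active template per (client_id, content_type), publishing a new version means deactivating the current row first. A minimal sketch using the Supabase Python client (the publish_template helper is illustrative, not part of the module):

import os

from supabase import create_client

supabase = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])

def publish_template(client_id: str, content_type: str, template: str, version: str):
    """Deactivate the active template, then insert the new version as active."""
    supabase.table("prompt_templates").update({"active": False}).eq(
        "client_id", client_id
    ).eq("content_type", content_type).eq("active", True).execute()
    supabase.table("prompt_templates").insert({
        "client_id": client_id,
        "content_type": content_type,
        "template": template,
        "version": version,
        "active": True
    }).execute()

The two steps are not atomic; if concurrent publishes are a concern, wrapping them in a Postgres function would be safer.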
Environment Variables¶
# Supabase
SUPABASE_URL=your_supabase_url
SUPABASE_KEY=your_supabase_anon_key
# OpenAI (for optional refinement)
OPENAI_API_KEY=your_openai_key
# Set to "true" to enable LLM refinement
ENABLE_PROMPT_REFINEMENT=false
# Railway
PORT=8001
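create_client fails with an unhelpful error when the Supabase variables are unset, so a fail-fast check near the top of app.py can help (a small sketch; the variable names match the list above):

import os

REQUIRED_ENV = ["SUPABASE_URL", "SUPABASE_KEY"]
missing = [name for name in REQUIRED_ENV if not os.getenv(name)]
if missing:
    raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")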
Testing Strategy¶
Unit Tests (test_app.py)¶
import pytest
from fastapi.testclient import TestClient
from unittest.mock import patch, MagicMock
from app import app, construct_prompt, get_default_template
client = TestClient(app)
def test_health_endpoint():
response = client.get("/health")
assert response.status_code == 200
assert response.json()["status"] == "ok"
def test_construct_prompt():
prompt = construct_prompt(
base_template="Write about {topic}",
topic="solar panels",
keywords=["renewable", "energy"],
client_config={
"role": "energy expert",
"style": "informative",
"constraints": ["Be accurate"]
},
content_type="blog"
)
assert "solar panels" in prompt
assert "renewable" in prompt
assert "energy expert" in prompt
@patch('app.supabase')
def test_generate_prompt_endpoint(mock_supabase):
mock_supabase.table.return_value.insert.return_value.execute.return_value = None
response = client.post("/generate-prompt", json={
"topic": "Test Topic",
"content_type": "blog",
"client_id": "PASCO",
"keywords": ["test"]
})
assert response.status_code == 200
assert "job_id" in response.json()
def test_get_default_template():
blog_template = get_default_template("blog")
assert "comprehensive" in blog_template
social_template = get_default_template("social")
assert "engaging" in social_template
Integration with CPM¶
How CPM Calls IM¶
import asyncio
import os
from typing import Dict

import httpx

IM_SERVICE_URL = os.getenv("IM_SERVICE_URL", "http://localhost:8001")  # IM base URL

# In CPM's generate_content function:
async def get_prompt_from_im(params: Dict) -> str:
async with httpx.AsyncClient() as client:
# Start prompt generation
response = await client.post(
f"{IM_SERVICE_URL}/generate-prompt",
json={
"topic": params["topic"],
"content_type": params["content_type"],
"client_id": params["client_id"],
"keywords": params.get("keywords", [])
}
)
job_data = response.json()
job_id = job_data["job_id"]
# Poll for completion (with timeout)
max_attempts = 30 # 30 seconds timeout
for _ in range(max_attempts):
status_response = await client.get(f"{IM_SERVICE_URL}/prompt/{job_id}")
result = status_response.json()
if "prompt" in result:
return result["prompt"]
elif result.get("status") == "failed":
raise Exception(f"Prompt generation failed: {result.get('error')}")
await asyncio.sleep(1)
raise TimeoutError("Prompt generation timed out")
V2 Upgrade Path¶
1. Advanced Prompt Engineering with LangChain¶
from typing import Dict

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.memory import ConversationSummaryMemory
from langchain_openai import ChatOpenAI

class AdvancedPromptBuilder:
    def __init__(self):
        # ConversationSummaryMemory requires an LLM to generate its summaries
        self.memory = ConversationSummaryMemory(llm=ChatOpenAI(model="gpt-4o-mini"))
async def build_iterative_prompt(self, params: Dict):
# Multi-step prompt refinement
stages = [
"research_prompt",
"outline_prompt",
"content_prompt",
"optimization_prompt"
]
for stage in stages:
# Build and refine at each stage
pass
2. RAG-Enhanced Prompts¶
from langchain.vectorstores import SupabaseVectorStore
from langchain.embeddings import OpenAIEmbeddings
class RAGPromptEnhancer:
def __init__(self):
self.vectorstore = SupabaseVectorStore(
client=supabase,
embedding=OpenAIEmbeddings(),
table_name="knowledge_base"
)
async def enhance_with_context(self, base_prompt: str, client_id: str):
        # Retrieve relevant knowledge (asimilarity_search is the async variant)
        relevant_docs = await self.vectorstore.asimilarity_search(
base_prompt,
k=5,
filter={"client_id": client_id}
)
# Incorporate into prompt
context = "\n".join([doc.page_content for doc in relevant_docs])
return f"{base_prompt}\n\nRelevant context:\n{context}"
3. Dynamic Strategy Loading¶
class StrategyManager:
    def __init__(self, cache):
        self.cache = cache  # async cache client (e.g. a Redis wrapper) exposing get_or_set

    async def load_strategy(self, client_id: str) -> Dict:
# Pull from Supabase with caching
strategy = await self.cache.get_or_set(
f"strategy:{client_id}",
lambda: supabase.table("strategies").select("*").eq("client_id", client_id).execute(),
ttl=3600
)
return strategy.data[0]
Monitoring and Metrics¶
Key Metrics¶
- Prompt generation time
- Template cache hit rate
- Refinement impact on content quality
- Error rates by client
- Most used templates
Logging¶
import structlog
logger = structlog.get_logger()
# Log prompt generation
logger.info("prompt_generated",
client_id=params["client_id"],
content_type=params["content_type"],
duration_ms=duration
)
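To feed the prompt-generation-time metric above, one option is a timing middleware on the FastAPI app (a sketch, assuming the app and structlog logger shown earlier; background-job duration can instead be derived from the created_at/updated_at columns):

import time

from fastapi import Request

@app.middleware("http")
async def log_request_duration(request: Request, call_next):
    start = time.perf_counter()
    response = await call_next(request)
    logger.info("request_completed",
        path=request.url.path,
        status_code=response.status_code,
        duration_ms=round((time.perf_counter() - start) * 1000, 1)
    )
    return response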
Cost Estimation¶
MVP (Monthly)¶
- Railway hosting: ~$5
- Supabase: Free tier
- OpenAI (refinement disabled): $0
- Total: ~$5/month
V2 with Refinement (Monthly)¶
- Railway hosting: ~$10
- Supabase: Free tier
- OpenAI refinement: ~$5-10
- Total: ~$15-20/month
Success Metrics¶
MVP Goals¶
- <5s prompt generation time
- 95% uptime
- No fabricated facts or sources in generated prompts
- Template coverage for all clients
V2 Goals¶
- <2s prompt generation with caching
- 99% uptime
- Dynamic prompt optimization
- A/B testing capability