Environment Variables Reference

This document provides a comprehensive reference for all environment variables used across the HG Content Generation System. Variables are organized by service and deployment context.

Quick Reference

Required Variables (Minimum Setup)

# Database
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-key

# At least one LLM provider
OPENAI_API_KEY=your-openai-key

# Next.js Frontend
NEXTAUTH_SECRET=your-secret-key
NEXTAUTH_URL=http://localhost:3000

Production Checklist

  • All database credentials configured
  • At least 2 LLM providers configured
  • Redis credentials for caching
  • Monitoring and logging configured
  • Security tokens rotated and secured

Global Variables

Database Configuration

Supabase (Primary Database)

# Required
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...

# Optional
SUPABASE_DB_PASSWORD=your-db-password  # For direct DB connections
SUPABASE_JWT_SECRET=your-jwt-secret    # For custom JWT validation

Description: Supabase provides the primary database, authentication, and real-time features.
Default: None (required)
Used by: All services

Database Pool Configuration

# Connection pool settings
DATABASE_POOL_SIZE=20                  # Maximum connections per service
DATABASE_POOL_TIMEOUT=30               # Connection timeout in seconds
DATABASE_MAX_OVERFLOW=10               # Additional connections beyond pool size
DATABASE_POOL_RECYCLE=3600            # Recycle connections every hour

Default: Pool size 10, timeout 30s
Used by: CPM, External API, SMM
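These settings can be collected into a plain dict at service startup. A minimal sketch (the function name is illustrative; the keyword names happen to match SQLAlchemy's `create_engine` pool arguments, but any pooling layer could consume the same values):

```python
import os

def load_pool_settings() -> dict:
    """Read connection pool settings from the environment, falling back
    to the documented defaults (pool size 10, timeout 30s)."""
    return {
        "pool_size": int(os.getenv("DATABASE_POOL_SIZE", "10")),
        "pool_timeout": int(os.getenv("DATABASE_POOL_TIMEOUT", "30")),
        "max_overflow": int(os.getenv("DATABASE_MAX_OVERFLOW", "10")),
        "pool_recycle": int(os.getenv("DATABASE_POOL_RECYCLE", "3600")),
    }
```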

LLM Provider Configuration

OpenAI

OPENAI_API_KEY=sk-your-openai-key
OPENAI_ORG_ID=org-your-org-id         # Optional: for organization billing
OPENAI_BASE_URL=https://api.openai.com/v1  # Optional: for proxy setups

Description: OpenAI GPT models (GPT-4, GPT-3.5-turbo)
Default: No default (required for OpenAI usage)
Used by: CPM, External API

Anthropic Claude

ANTHROPIC_API_KEY=sk-ant-your-key
ANTHROPIC_BASE_URL=https://api.anthropic.com  # Optional: for proxy setups

Description: Anthropic Claude models (Claude-3-Sonnet, Claude-3-Haiku)
Default: No default
Used by: CPM, External API

Google Gemini

GOOGLE_API_KEY=AIzaSy-your-google-key
GOOGLE_PROJECT_ID=your-project-id     # For Vertex AI
GOOGLE_LOCATION=us-central1           # For Vertex AI

Description: Google Gemini and Vertex AI models
Default: Location us-central1
Used by: CPM

Groq

GROQ_API_KEY=gsk_your-groq-key
GROQ_BASE_URL=https://api.groq.com/openai/v1  # Optional: for custom endpoints

Description: Groq high-speed inference models
Default: Standard Groq endpoint
Used by: CPM

Ollama (Local Models)

OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_API_KEY=your-api-key           # Optional: for secured Ollama instances
OLLAMA_TIMEOUT=300                    # Request timeout in seconds

Description: Local Ollama model serving
Default: localhost:11434, 300s timeout
Used by: CPM, External API

Caching and Storage

Redis Configuration

REDIS_URL=redis://localhost:6379
REDIS_PASSWORD=your-redis-password     # For secured Redis
REDIS_DB=0                            # Database number
REDIS_MAX_CONNECTIONS=50              # Connection pool size
REDIS_SOCKET_TIMEOUT=5                # Socket timeout in seconds

Description: Redis for caching, session storage, and job queues
Default: localhost:6379, no password, db 0
Used by: CPM, Frontend, External API
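One way these variables might be combined into connection parameters, with `REDIS_PASSWORD` and `REDIS_DB` taking precedence over values embedded in the URL (a sketch of plausible wiring, not the services' actual code):

```python
import os
from urllib.parse import urlparse

def redis_connection_kwargs() -> dict:
    """Derive Redis connection parameters from REDIS_URL and friends.
    Explicit REDIS_PASSWORD / REDIS_DB override anything in the URL."""
    url = urlparse(os.getenv("REDIS_URL", "redis://localhost:6379"))
    return {
        "host": url.hostname or "localhost",
        "port": url.port or 6379,
        "db": int(os.getenv("REDIS_DB", (url.path or "/0").lstrip("/") or "0")),
        "password": os.getenv("REDIS_PASSWORD", url.password),
        "max_connections": int(os.getenv("REDIS_MAX_CONNECTIONS", "50")),
        "socket_timeout": int(os.getenv("REDIS_SOCKET_TIMEOUT", "5")),
    }
```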

File Storage (Optional)

# AWS S3 (for file uploads/storage)
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
AWS_REGION=us-east-1
AWS_S3_BUCKET=hgcontent-uploads

# Or Cloudflare R2
CLOUDFLARE_R2_ACCESS_KEY_ID=your-access-key
CLOUDFLARE_R2_SECRET_ACCESS_KEY=your-secret-key
CLOUDFLARE_R2_BUCKET=hgcontent-uploads
CLOUDFLARE_R2_ENDPOINT=your-account-id.r2.cloudflarestorage.com

Description: Object storage for file uploads and assets
Default: No file storage (optional feature)
Used by: Frontend, CPM

Service-Specific Variables

Content Production Module (CPM)

Core Configuration

# Service identification
CPM_SERVICE_NAME=content-production-module
CPM_VERSION=1.0.0
CPM_ENVIRONMENT=production            # production, staging, development

# API Configuration
CPM_HOST=0.0.0.0                     # Bind address
CPM_PORT=8000                        # Port number
CPM_DEBUG=false                      # Debug mode
CPM_LOG_LEVEL=INFO                   # DEBUG, INFO, WARNING, ERROR

Default: Host 0.0.0.0, port 8000, INFO logging
Used by: CPM service only
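A common pattern is to gather these into a typed settings object at startup. A minimal sketch under that assumption (the class and field names are illustrative, not the service's actual code):

```python
import os
from dataclasses import dataclass, field

@dataclass
class CPMSettings:
    """Service configuration resolved once from the environment."""
    host: str = field(default_factory=lambda: os.getenv("CPM_HOST", "0.0.0.0"))
    port: int = field(default_factory=lambda: int(os.getenv("CPM_PORT", "8000")))
    debug: bool = field(
        default_factory=lambda: os.getenv("CPM_DEBUG", "false").lower() == "true"
    )
    log_level: str = field(default_factory=lambda: os.getenv("CPM_LOG_LEVEL", "INFO"))
```

Using `default_factory` means the environment is read when the settings object is created, not when the module is imported.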

Processing Configuration

# Job processing
MAX_CONCURRENT_JOBS=10               # Maximum concurrent content generation jobs
JOB_TIMEOUT=600                      # Job timeout in seconds
JOB_RETRY_ATTEMPTS=3                 # Number of retry attempts for failed jobs
JOB_CLEANUP_INTERVAL=3600            # Cleanup completed jobs every hour

# Content limits
MAX_CONTENT_LENGTH=10000             # Maximum content length in characters
MIN_CONTENT_LENGTH=100               # Minimum content length
DEFAULT_TARGET_LENGTH=1200           # Default target length for content

Default: 10 concurrent jobs, 600s timeout, 3 retries
Used by: CPM service
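A sketch of how the content limits might be enforced when a request supplies its own target length (the function name is hypothetical):

```python
import os
from typing import Optional

MAX_CONTENT_LENGTH = int(os.getenv("MAX_CONTENT_LENGTH", "10000"))
MIN_CONTENT_LENGTH = int(os.getenv("MIN_CONTENT_LENGTH", "100"))
DEFAULT_TARGET_LENGTH = int(os.getenv("DEFAULT_TARGET_LENGTH", "1200"))

def resolve_target_length(requested: Optional[int]) -> int:
    """Fall back to the default, then clamp into the configured bounds."""
    target = DEFAULT_TARGET_LENGTH if requested is None else requested
    return max(MIN_CONTENT_LENGTH, min(target, MAX_CONTENT_LENGTH))
```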

LLM Provider Configuration

# Provider selection and fallback
PRIMARY_LLM_PROVIDER=openai          # Primary provider (openai, anthropic, google, groq)
FALLBACK_LLM_PROVIDER=anthropic      # Fallback provider
LLM_PROVIDER_TIMEOUT=120             # Provider request timeout in seconds
LLM_RATE_LIMIT_BUFFER=0.8            # Use 80% of rate limits for safety

# Model configuration
OPENAI_DEFAULT_MODEL=gpt-4           # Default OpenAI model
ANTHROPIC_DEFAULT_MODEL=claude-3-sonnet-20240229  # Default Anthropic model
GOOGLE_DEFAULT_MODEL=gemini-pro      # Default Google model
GROQ_DEFAULT_MODEL=mixtral-8x7b-32768 # Default Groq model

Default: OpenAI primary, Anthropic fallback, 120s timeout
Used by: CPM service
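The primary/fallback selection above can be sketched as a small lookup: try the primary, then the fallback, then any configured provider. This is an illustrative reconstruction, not the CPM's actual routing code:

```python
import os
from typing import Dict

def pick_provider(available: Dict[str, bool]) -> str:
    """Choose an LLM provider. `available` maps provider name to whether
    its API key is configured (e.g. {"openai": True, "groq": False})."""
    primary = os.getenv("PRIMARY_LLM_PROVIDER", "openai")
    fallback = os.getenv("FALLBACK_LLM_PROVIDER", "anthropic")
    for candidate in (primary, fallback):
        if available.get(candidate):
            return candidate
    for name, configured in available.items():
        if configured:  # last resort: any provider with a key
            return name
    raise RuntimeError("No LLM providers configured")
```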

External API

API Configuration

# Service configuration
EXTERNAL_API_HOST=0.0.0.0
EXTERNAL_API_PORT=8001
EXTERNAL_API_VERSION=1.0.0
EXTERNAL_API_TITLE="HG Content External API"

# Rate limiting
EXTERNAL_API_RATE_LIMIT=100          # Requests per hour per API key
EXTERNAL_API_BURST_LIMIT=10          # Requests per minute burst limit
EXTERNAL_API_DAILY_JOB_LIMIT=50      # Daily job creation limit per API key

# Processing
EXTERNAL_WORKER_ENABLED=true         # Enable background worker
EXTERNAL_JOB_POLL_INTERVAL=5         # Poll CPM for job updates every N seconds

Default: Port 8001, 100 requests/hour limit
Used by: External API service
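The hourly and burst limits imply per-key counters. A toy in-memory fixed-window sketch of the idea (the production service would presumably keep these counters in Redis so they survive restarts and are shared across workers):

```python
import time
from collections import defaultdict
from typing import Optional

class FixedWindowLimiter:
    """Allow at most `limit` requests per key per window."""
    def __init__(self, limit: int = 100, window_seconds: int = 3600):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)

    def allow(self, api_key: str, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        bucket = (api_key, int(now // self.window))  # one counter per window
        if self.counts[bucket] >= self.limit:
            return False
        self.counts[bucket] += 1
        return True
```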

Job Processing

# Job management
EXTERNAL_JOB_RETENTION_HOURS=168     # Keep completed jobs for 7 days
EXTERNAL_JOB_CLEANUP_INTERVAL=3600   # Cleanup old jobs every hour
EXTERNAL_MAX_CONCURRENT_REQUESTS=5   # Max concurrent CPM requests

# CPM Integration
CPM_INTERNAL_URL=http://cpm:8000     # Internal CPM service URL
CPM_SERVICE_TOKEN=service-secret-123  # Service-to-service authentication

Default: 7 day retention, hourly cleanup
Used by: External API service
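The retention window translates directly into a cleanup cutoff: jobs completed before `now - EXTERNAL_JOB_RETENTION_HOURS` are eligible for deletion. A sketch (the function name is hypothetical):

```python
import os
from datetime import datetime, timedelta, timezone

RETENTION_HOURS = int(os.getenv("EXTERNAL_JOB_RETENTION_HOURS", "168"))

def retention_cutoff(now: datetime) -> datetime:
    """Jobs completed before this instant may be cleaned up."""
    return now - timedelta(hours=RETENTION_HOURS)
```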

Instructions Module (IM)

Service Configuration

# IM service settings
IM_SERVICE_NAME=instructions-module
IM_HOST=0.0.0.0
IM_PORT=8001
IM_ENVIRONMENT=production

# Template management
TEMPLATE_CACHE_TTL=3600              # Cache templates for 1 hour
TEMPLATE_VERSION_LIMIT=10            # Keep last 10 versions of each template

Default: Port 8001, 1 hour cache TTL
Used by: Instructions Module
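A minimal expiring cache illustrating how `TEMPLATE_CACHE_TTL` would be applied (a sketch only; the module's real cache may be Redis-backed):

```python
import os
import time
from typing import Any, Optional

TEMPLATE_CACHE_TTL = int(os.getenv("TEMPLATE_CACHE_TTL", "3600"))

class TTLCache:
    """Entries expire `ttl` seconds after being set."""
    def __init__(self, ttl: int = TEMPLATE_CACHE_TTL):
        self.ttl = ttl
        self._store = {}

    def set(self, key: str, value: Any, now: Optional[float] = None) -> None:
        now = time.time() if now is None else now
        self._store[key] = (value, now + self.ttl)  # value + expiry time

    def get(self, key: str, now: Optional[float] = None) -> Optional[Any]:
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry is None or now >= entry[1]:
            return None  # missing or expired
        return entry[0]
```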

Template Processing

# Generation settings
DEFAULT_INSTRUCTION_LENGTH=500       # Default instruction length
MAX_INSTRUCTION_LENGTH=2000          # Maximum instruction length
INSTRUCTION_GENERATION_TIMEOUT=30    # Timeout for instruction generation

# Performance monitoring
TEMPLATE_PERFORMANCE_TRACKING=true   # Track template performance metrics
INSTRUCTION_QUALITY_SCORING=true    # Enable quality scoring

Default: 500 default length, 30s timeout
Used by: Instructions Module

Strategy Management Module (SMM)

Service Configuration

# SMM service settings
SMM_SERVICE_NAME=strategy-management-module
SMM_HOST=0.0.0.0
SMM_PORT=8002
SMM_LOG_LEVEL=INFO

# Caching
STRATEGY_CACHE_TTL=1800              # Cache strategies for 30 minutes
CLIENT_HIERARCHY_CACHE_TTL=3600      # Cache client hierarchy for 1 hour

Default: Port 8002, 30 minute strategy cache
Used by: Strategy Management Module

Frontend (Next.js)

Next.js Configuration

# Basic Next.js settings
NEXTAUTH_SECRET=your-nextauth-secret-key
NEXTAUTH_URL=http://localhost:3000   # Base URL for authentication callbacks
NODE_ENV=production                  # production, development, test

# Database
DATABASE_URL=${SUPABASE_URL}         # Same as Supabase URL

Description: Next.js authentication and basic configuration
Default: Development mode, localhost:3000
Used by: Frontend application

API Integration

# Internal service URLs
NEXT_PUBLIC_CPM_API_URL=https://cpm.hgcontent.com
NEXT_PUBLIC_EXTERNAL_API_URL=https://api.hgcontent.com
NEXT_PUBLIC_SMM_API_URL=https://smm.hgcontent.com

# Public configuration (exposed to client)
NEXT_PUBLIC_APP_VERSION=1.0.0
NEXT_PUBLIC_ENVIRONMENT=production

Description: Service URLs and public configuration
Used by: Frontend application

Feature Flags

# Feature toggles
NEXT_PUBLIC_ENABLE_ANALYTICS=true    # Enable analytics dashboard
NEXT_PUBLIC_ENABLE_OLLAMA=true       # Show Ollama provider option
NEXT_PUBLIC_ENABLE_WEBSOCKET=true    # Enable real-time updates
NEXT_PUBLIC_ENABLE_BILLING=false     # Enable billing features (coming soon)

Default: Analytics and WebSocket enabled
Used by: Frontend application

Deployment-Specific Variables

Railway Deployment

# Railway-specific
RAILWAY_ENVIRONMENT_NAME=production
RAILWAY_PUBLIC_DOMAIN=hgcontent.railway.app

# Port binding (Railway injects PORT automatically)
PORT=8000                            # Local fallback; Railway overrides this value

Description: Railway platform-specific variables
Used by: All services when deployed on Railway

Vercel Deployment (Frontend)

# Vercel-specific
VERCEL_ENV=production                # production, preview, development
VERCEL_URL=hgcontent.vercel.app      # Assigned by Vercel

# Build configuration
NEXT_PUBLIC_VERCEL_URL=${VERCEL_URL} # Public URL for client-side

Description: Vercel platform-specific variables
Used by: Frontend when deployed on Vercel

Docker Deployment

# Container configuration
DOCKER_BUILDKIT=1                    # Enable BuildKit for faster builds
COMPOSE_PROJECT_NAME=hgcontent       # Docker Compose project name

# Health check configuration
HEALTH_CHECK_INTERVAL=30             # Health check every 30 seconds
HEALTH_CHECK_TIMEOUT=10              # 10 second timeout for health checks
HEALTH_CHECK_RETRIES=3               # 3 retries before marking unhealthy

Description: Docker and Docker Compose specific settings
Used by: All services in Docker deployment

Security and Monitoring

Authentication and Security

# JWT Configuration
JWT_SECRET_KEY=your-jwt-secret-key   # For signing JWT tokens
JWT_EXPIRATION_HOURS=24              # JWT token expiration time
JWT_REFRESH_EXPIRATION_DAYS=30       # Refresh token expiration

# API Security
API_KEY_PREFIX=hgc_                  # Prefix for generated API keys
API_KEY_LENGTH=32                    # Length of generated API keys
CORS_ALLOWED_ORIGINS=*               # CORS allowed origins (comma-separated)

# Rate limiting
RATE_LIMIT_REDIS_URL=${REDIS_URL}    # Redis URL for rate limiting storage
RATE_LIMIT_SKIP_FAILED_REQUESTS=false # Skip counting failed requests

Default: 24 hour JWT expiration, 30 day refresh
Used by: All services
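A sketch of how keys matching `API_KEY_PREFIX` and `API_KEY_LENGTH` could be generated (illustrative only; in practice, store only a hash of issued keys, never the plaintext):

```python
import os
import secrets

API_KEY_PREFIX = os.getenv("API_KEY_PREFIX", "hgc_")
API_KEY_LENGTH = int(os.getenv("API_KEY_LENGTH", "32"))

def generate_api_key() -> str:
    """Prefix plus a cryptographically random hex token of the
    configured length (token_hex yields 2 hex chars per byte)."""
    return API_KEY_PREFIX + secrets.token_hex(API_KEY_LENGTH // 2)
```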

Monitoring and Logging

Logging Configuration

# Logging
LOG_LEVEL=INFO                       # DEBUG, INFO, WARNING, ERROR, CRITICAL
LOG_FORMAT=json                      # json, text
LOG_FILE_PATH=/var/log/hgcontent.log # Optional: log to file
STRUCTURED_LOGGING=true              # Enable structured logging

# Error tracking
SENTRY_DSN=https://your-sentry-dsn   # Sentry error tracking (optional)
SENTRY_ENVIRONMENT=production        # Environment tag for Sentry

Default: INFO level, JSON format, structured logging
Used by: All services

Metrics and Monitoring

# Prometheus metrics
ENABLE_METRICS=true                  # Enable Prometheus metrics endpoint
METRICS_PORT=9090                    # Port for metrics endpoint (separate from main app)

# Health checks
HEALTH_CHECK_ENABLED=true            # Enable health check endpoints
HEALTH_CHECK_PATH=/health            # Health check endpoint path

# Performance monitoring
ENABLE_PROFILING=false               # Enable performance profiling (dev only)
PROFILING_SAMPLE_RATE=0.1            # Sample rate for profiling

Default: Metrics enabled on port 9090
Used by: All services

Development and Testing

Development Environment

# Development mode
DEVELOPMENT_MODE=true                # Enable development features
DEBUG_MODE=true                      # Verbose debugging output
HOT_RELOAD=true                      # Enable hot reload (frontend)

# Testing
TESTING=false                        # Disable external calls during tests
TEST_DATABASE_URL=postgresql://test-db # Separate test database
MOCK_LLM_RESPONSES=false             # Use mock responses instead of real LLM calls

Description: Development and testing configuration
Used by: All services in development

Local Development Overrides

# Local overrides (typically in .env.local)
NEXT_PUBLIC_CPM_API_URL=http://localhost:8000
NEXT_PUBLIC_EXTERNAL_API_URL=http://localhost:8001
REDIS_URL=redis://localhost:6379
SUPABASE_URL=http://localhost:54321  # Local Supabase instance

Description: Local development URL overrides
Used by: Development environment only

Environment File Templates

Production Template (.env.production)

# === Production Environment Configuration ===

# Database (Required)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-key

# LLM Providers (At least one required)
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-key
GOOGLE_API_KEY=AIzaSy-your-google-key

# Caching
REDIS_URL=redis://your-redis-instance
REDIS_PASSWORD=your-redis-password

# Authentication
NEXTAUTH_SECRET=your-production-secret
JWT_SECRET_KEY=your-jwt-secret

# Monitoring
SENTRY_DSN=https://your-sentry-dsn
LOG_LEVEL=INFO

# Security
CORS_ALLOWED_ORIGINS=https://app.hgcontent.com,https://hgcontent.com
API_KEY_PREFIX=hgc_prod_

Development Template (.env.development)

# === Development Environment Configuration ===

# Database
SUPABASE_URL=http://localhost:54321
SUPABASE_ANON_KEY=your-local-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-local-service-key

# LLM Providers (Optional for development)
OPENAI_API_KEY=sk-your-openai-key

# Local services
REDIS_URL=redis://localhost:6379

# Authentication
NEXTAUTH_SECRET=dev-secret-key
NEXTAUTH_URL=http://localhost:3000

# Development settings
DEVELOPMENT_MODE=true
DEBUG_MODE=true
LOG_LEVEL=DEBUG
MOCK_LLM_RESPONSES=true

Testing Template (.env.test)

# === Test Environment Configuration ===

# Test database
SUPABASE_URL=http://localhost:54321
SUPABASE_ANON_KEY=test-anon-key
SUPABASE_SERVICE_ROLE_KEY=test-service-key

# Disable external services
TESTING=true
MOCK_LLM_RESPONSES=true
REDIS_URL=redis://localhost:6379/1  # Separate Redis DB for tests

# Test configuration
NODE_ENV=test
LOG_LEVEL=ERROR
RATE_LIMITING_ENABLED=false

Validation and Security

Variable Validation

The system validates environment variables on startup:

# Example validation rules
import os

REQUIRED_VARS = [
    "SUPABASE_URL",
    "SUPABASE_ANON_KEY",
    "SUPABASE_SERVICE_ROLE_KEY"
]

LLM_PROVIDER_VARS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY", 
    "GOOGLE_API_KEY",
    "GROQ_API_KEY"
]

# At least one LLM provider must be configured
assert any(os.getenv(var) for var in LLM_PROVIDER_VARS), \
    "At least one LLM provider API key must be configured"

Security Best Practices

  1. Never commit secrets: Use .env files and .gitignore
  2. Rotate keys regularly: Especially API keys and JWT secrets
  3. Use least privilege: Grant minimum required permissions
  4. Separate environments: Different keys for dev/staging/production
  5. Monitor usage: Track API key usage and set up alerts

Variable Naming Conventions

  • NEXT_PUBLIC_: Exposed to client-side code (Next.js frontend)
  • SECRET_: Sensitive values (never log these)
  • _URL: Service endpoints and URLs
  • _KEY: API keys and authentication tokens
  • _TIMEOUT: Timeout values in seconds
  • _ENABLED: Boolean feature flags
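These conventions lend themselves to small typed accessors. A sketch of helpers for the `_ENABLED` and `_TIMEOUT` patterns (function names are illustrative, not part of the system):

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def env_flag(name: str, default: bool = False) -> bool:
    """Parse an *_ENABLED-style boolean flag."""
    raw = os.getenv(name)
    return default if raw is None else raw.strip().lower() in TRUTHY

def env_timeout(name: str, default: int) -> int:
    """Parse a *_TIMEOUT-style value, in seconds."""
    raw = os.getenv(name)
    return default if raw is None else int(raw)
```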

Troubleshooting

Common Issues

Missing Required Variables

Error: SUPABASE_URL environment variable is required
Solution: Check that all required variables are set in your environment

Invalid Service URLs

Error: Failed to connect to http://cpm:8000
Solution: Verify service URLs are correct for your deployment environment

LLM Provider Errors

Error: No LLM providers configured
Solution: Set at least one LLM provider API key

Debugging Environment Variables

# List all environment variables (be careful with secrets!)
env | grep -E "(SUPABASE|OPENAI|REDIS)" | head -10

# Check specific variables
echo "Database URL: ${SUPABASE_URL}"
echo "Redis configured: ${REDIS_URL:+Yes}"

# Validate environment in Python
python -c "
import os
required = ['SUPABASE_URL', 'SUPABASE_ANON_KEY']
missing = [var for var in required if not os.getenv(var)]
print(f'Missing variables: {missing}' if missing else 'All required variables set')
"

For additional support, see the Troubleshooting Guide or Developer Guide.