Technical Architecture
BlackOps Center is a multi-tenant SaaS platform built on modern web technologies. This document explains our architecture, design decisions, and how the system works under the hood.
High-Level Architecture
┌─────────────────────────────────────────────────────┐
│                    CLIENT LAYER                     │
│  ┌──────────┐ ┌──────────┐ ┌─────────────────┐      │
│  │ Web App  │ │  Mobile  │ │     Browser     │      │
│  │ (Next.js)│ │  (PWA)   │ │    Extension    │      │
│  └─────┬────┘ └─────┬────┘ └────────┬────────┘      │
└────────┼────────────┼───────────────┼───────────────┘
         │            │               │
         └────────────┼───────────────┘
                      │
        ┌─────────────▼────────────────┐
        │     API LAYER (Next.js)      │
        │  ┌────────────────────────┐  │
        │  │    REST API Routes     │  │
        │  │         /api/*         │  │
        │  └────────────────────────┘  │
        │  ┌────────────────────────┐  │
        │  │    Webhook Handlers    │  │
        │  │    /api/webhooks/*     │  │
        │  └────────────────────────┘  │
        └─────────────┬────────────────┘
                      │
        ┌─────────────▼────────────────┐
        │    DATA & SERVICES LAYER     │
        │  ┌──────────┐ ┌──────────┐   │
        │  │ Supabase │ │ External │   │
        │  │(Postgres)│ │ Services │   │
        │  │  + RLS   │ │(AI, etc.)│   │
        │  └──────────┘ └──────────┘   │
        └─────────────┬────────────────┘
                      │
        ┌─────────────▼────────────────┐
        │        INFRASTRUCTURE        │
        │  ┌────────┐ ┌───────────┐    │
        │  │ Vercel │ │    CDN    │    │
        │  │ (Edge) │ │ (Images)  │    │
        │  └────────┘ └───────────┘    │
        └──────────────────────────────┘

Tech Stack
Frontend
- Framework: Next.js 14 (App Router)
- Language: TypeScript
- Styling: Tailwind CSS
- UI Components: Custom + Radix UI primitives
- State Management: React Server Components + client state
- Forms: React Hook Form + Zod validation
Backend
- API: Next.js API Routes (serverless)
- Database: PostgreSQL (via Supabase)
- Auth: Supabase Auth (magic links, OAuth)
- Storage: Supabase Storage (images, files)
- Email: Resend (transactional + inbound)
- Payments: Stripe (subscriptions + one-time)
AI & ML
- LLM: OpenAI GPT-4 / Claude 3.5 Sonnet
- Embeddings: OpenAI text-embedding-3-small
- Vector Search: pgvector (Postgres extension)
Infrastructure
- Hosting: Vercel (Next.js + Edge Functions)
- Database: Supabase (managed Postgres)
- CDN: Vercel Edge Network
- Monitoring: Vercel Analytics + Sentry
Multi-Tenancy Architecture
Row-Level Security (RLS)
BlackOps Center uses PostgreSQL's Row-Level Security to enforce tenant isolation at the database level. Every table has a site_id column, and RLS policies automatically filter queries.
-- Example RLS policy
CREATE POLICY "Users see only their site's posts"
ON posts
FOR ALL
USING (site_id = current_setting('app.current_site_id')::uuid);
-- Every query is automatically filtered
SELECT * FROM posts;
-- Becomes: SELECT * FROM posts WHERE site_id = 'abc-123'
-- This prevents cross-tenant data leaks at the DB level

Site Context
Site context flows through the request chain:
1. Request arrives at middleware
   → Domain/subdomain identifies site
   → Site ID added to request headers
2. API route extracts site context
   → Validates user has access to site
   → Sets Supabase RLS context
3. Database queries automatically filtered
   → All queries scoped to site_id
   → Cross-tenant access impossible
4. Response returned
   → Only site-specific data included
Data Isolation
Multi-tenant isolation strategy:
- Shared Schema: All tenants use same tables (cost-efficient)
- RLS Enforcement: Database-level security (bulletproof)
- API Validation: Additional checks in application layer (defense in depth)
- Tenant Context: Passed via headers, validated every request
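The middleware step that derives tenant context from the incoming hostname can be sketched as a pure function. The root domain, the `www` special case, and the custom-domain fallback below are illustrative assumptions, not the actual implementation:

```typescript
// Illustrative sketch: derive a site slug from the request's Host header.
// Subdomains of the root domain map directly to a site slug; custom
// domains would instead be resolved via a database lookup in middleware.
const ROOT_DOMAIN = 'blackopscenter.com'

function resolveSiteSlug(hostname: string): string | null {
  // Bare root domain and www serve the marketing site (no tenant).
  if (hostname === ROOT_DOMAIN || hostname === `www.${ROOT_DOMAIN}`) {
    return null
  }
  // Subdomain tenant: strip ".blackopscenter.com" to get the slug.
  if (hostname.endsWith(`.${ROOT_DOMAIN}`)) {
    return hostname.slice(0, -(ROOT_DOMAIN.length + 1))
  }
  // Anything else is a custom domain → resolved via DB lookup instead.
  return null
}
```

The resolved slug (or the site ID looked up from it) is what gets attached to request headers; the client never supplies it directly.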
🔒 Security Considerations
Multi-tenancy requires vigilance. We've audited 128+ API routes to ensure proper site_id filtering. Key principle: Never trust client-supplied site_id. Always derive from authenticated user context.
Database Schema
Core Tables
-- Sites (tenants)
sites
├─ id (uuid, PK)
├─ owner_id (uuid, FK → auth.users)
├─ slug (text, unique)
├─ domain (text, unique)
├─ subdomain (text, unique)
├─ settings (jsonb)
└─ created_at (timestamptz)

-- Site users (team members)
site_users
├─ id (uuid, PK)
├─ site_id (uuid, FK → sites)
├─ user_id (uuid, FK → auth.users)
├─ role (text: owner|admin|editor|viewer)
└─ created_at (timestamptz)

-- Content reservoirs
content_reservoirs
├─ id (uuid, PK)
├─ site_id (uuid, FK → sites) ← RLS
├─ name (text)
├─ description (text)
├─ email_address (text, unique)
├─ settings (jsonb)
└─ created_at (timestamptz)

-- Reservoir items
reservoir_items
├─ id (uuid, PK)
├─ reservoir_id (uuid, FK → content_reservoirs)
├─ site_id (uuid, FK → sites) ← RLS
├─ title (text)
├─ content (text)
├─ url (text)
├─ source_type (text)
├─ embedding (vector(1536)) ← pgvector
├─ tags (text[])
└─ created_at (timestamptz)

-- Blog posts
posts
├─ id (uuid, PK)
├─ site_id (uuid, FK → sites) ← RLS
├─ slug (text)
├─ title (text)
├─ content (text)
├─ status (text: draft|published)
├─ published_at (timestamptz)
└─ created_at (timestamptz)

-- Newsletters
newsletters
├─ id (uuid, PK)
├─ site_id (uuid, FK → sites) ← RLS
├─ name (text)
├─ from_email (text)
└─ created_at (timestamptz)

-- Newsletter subscribers
newsletter_subscribers
├─ id (uuid, PK)
├─ newsletter_id (uuid, FK → newsletters)
├─ site_id (uuid, FK → sites) ← RLS
├─ email (text)
├─ status (text: active|unsubscribed|bounced)
└─ subscribed_at (timestamptz)

-- Subscriptions (billing)
subscriptions
├─ id (uuid, PK)
├─ user_id (uuid, FK → auth.users)
├─ site_id (uuid, FK → sites, nullable)
├─ stripe_subscription_id (text)
├─ plan (text)
├─ sites_count (int)
├─ status (text)
└─ created_at (timestamptz)
Vector Search with pgvector
Reservoir items are embedded for semantic search:
-- Add pgvector extension
CREATE EXTENSION vector;

-- Embedding column
ALTER TABLE reservoir_items
ADD COLUMN embedding vector(1536);

-- Cosine similarity search
SELECT id, title,
  1 - (embedding <=> query_embedding) AS similarity
FROM reservoir_items
WHERE site_id = 'abc-123'
ORDER BY embedding <=> query_embedding
LIMIT 10;

-- Index for performance
CREATE INDEX ON reservoir_items
USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
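For intuition: pgvector's `<=>` operator computes cosine distance, and the query converts it to similarity as `1 - distance`. The same math, sketched in TypeScript:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|). This is what the SQL above
// reports as `similarity`; pgvector's `<=>` returns 1 minus this value.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// Equivalent of the `<=>` cosine-distance operator.
function cosineDistance(a: number[], b: number[]): number {
  return 1 - cosineSimilarity(a, b)
}
```

Identical vectors have similarity 1 (distance 0); orthogonal vectors have similarity 0 (distance 1), which is why `ORDER BY embedding <=> query_embedding` ascending returns the closest matches first.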
Authentication & Authorization
Authentication Flow
1. User visits /admin
   ↓
2. Middleware checks session
   ↓
3. If not authenticated:
   → Redirect to /signin
   ↓
4. User signs in (Supabase Auth):
   → Magic link (email)
   → OAuth (Google, GitHub)
   → Password (optional)
   ↓
5. Session created (JWT in cookie)
   ↓
6. Redirect back to /admin
   ↓
7. User sees their sites
Authorization Model
// Authorization check in API route
export async function POST(request: NextRequest) {
  // 1. Verify user is authenticated
  const user = await getUser(request)
  if (!user) {
    return NextResponse.json(
      { error: 'Unauthorized' },
      { status: 401 }
    )
  }

  // 2. Get site context (set by middleware from the domain,
  //    never trusted directly from the client)
  const siteId = request.headers.get('x-site-id')

  // 3. Verify user has access to site
  const { data: siteAccess } = await supabase
    .from('site_users')
    .select('role')
    .eq('user_id', user.id)
    .eq('site_id', siteId)
    .single()

  if (!siteAccess) {
    return NextResponse.json(
      { error: 'Forbidden' },
      { status: 403 }
    )
  }

  // 4. Check role permissions
  if (siteAccess.role === 'viewer') {
    return NextResponse.json(
      { error: 'Insufficient permissions' },
      { status: 403 }
    )
  }

  // 5. Proceed with request
  // ...
}

Role Permissions
const PERMISSIONS = {
  owner: ['*'], // Full access
  admin: [
    'posts:read',
    'posts:write',
    'posts:publish',
    'reservoirs:read',
    'reservoirs:write',
    'team:manage',
    'settings:write'
  ],
  editor: [
    'posts:read',
    'posts:write',
    'posts:publish',
    'reservoirs:read',
    'reservoirs:write'
  ],
  viewer: [
    'posts:read',
    'reservoirs:read'
  ]
}

Content Generation Pipeline
AI Content Generation Flow
1. User triggers generation
   → "Generate blog post from reservoir"
   ↓
2. API route validates request
   → User has access to site?
   → Sufficient credits?
   ↓
3. Fetch relevant reservoir items
   → Vector search for semantic relevance
   → Filter by tags, date, score
   ↓
4. Construct AI prompt
   → System prompt (tone, style)
   → Reservoir context (research)
   → User instructions (topic, angle)
   ↓
5. Call LLM API (streaming)
   → OpenAI or Anthropic
   → Stream response to client
   ↓
6. Post-process output
   → Markdown formatting
   → Link insertion
   → SEO optimization
   ↓
7. Save as draft
   → Store in posts table
   → Deduct credits
   ↓
8. User reviews and publishes
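Step 4's reservoir context has to fit inside the model's context window. A minimal sketch of that packing step, using a character budget as a crude stand-in for real token counting (the helper name and budget approach are illustrative, not the production code):

```typescript
// Hypothetical helper: pack reservoir items into the prompt's research
// section until a rough size budget is exhausted. A real implementation
// would count tokens with the model's tokenizer, not characters.
interface ReservoirItem {
  title: string
  content: string
}

function buildResearchContext(items: ReservoirItem[], maxChars: number): string {
  const lines: string[] = []
  let used = 0
  for (const item of items) {
    const line = `- ${item.title}: ${item.content}`
    // Stop once the next line (plus its newline) would exceed the budget.
    if (used + line.length + 1 > maxChars) break
    lines.push(line)
    used += line.length + 1
  }
  return lines.join('\n')
}
```

Items should arrive pre-sorted by relevance (the vector-search score from step 3), so truncation drops the least relevant research first.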
Prompt Engineering
// System prompt (defines voice/style)
const systemPrompt = `
You are Ben Newton, a technical content creator who writes
about web development, React, and content strategy. Your
writing is:
- Direct and practical (no fluff)
- Technically accurate but accessible
- Conversational yet authoritative
- Heavy on examples and code snippets
Write a blog post based on the research provided.
`
// User prompt (specific request)
const userPrompt = `
Topic: React Server Components Performance
Research context:
${reservoirItems.map(item => `
- ${item.title}: ${item.content}
`).join('\n')}
Structure:
1. Hook: Why RSC performance matters
2. Common performance mistakes
3. Optimization techniques (with code)
4. Real-world benchmark results
5. Best practices
Target: 2,500 words, intermediate developers
`
// Call LLM
const stream = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt }
  ],
  stream: true
})

RSS & Content Monitoring
RSS Processing Pipeline
1. Cron job runs every hour
   ↓
2. Fetch active RSS feeds
   → WHERE is_active = true
   → WHERE last_fetched < now() - interval
   ↓
3. For each feed:
   a. Fetch feed XML
   b. Parse items
   c. Check for duplicates (by GUID)
   d. Filter by language preferences
   e. Extract metadata (title, content, image)
   ↓
4. Enrich items:
   a. Fetch missing images (OpenGraph)
   b. Calculate keyword relevance
   c. Analyze sentiment
   ↓
5. AI reservoir scoring (async):
   a. Generate embedding for item
   b. Compare to all reservoirs
   c. Calculate match scores
   d. Store scores in item metadata
   ↓
6. Save to database
   → INSERT INTO rss_feed_items
   → UPDATE feed last_fetched
   ↓
7. User sees scored items in feed reader
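The duplicate check in step 3c can be sketched as a pure function. In production the `seen` set would be a database lookup against previously imported GUIDs rather than in-memory state:

```typescript
// Sketch of GUID-based dedup: keep only feed items whose GUID has not
// been seen before, and record new GUIDs as we go.
interface FeedItem {
  guid: string
  title: string
}

function dedupeByGuid(items: FeedItem[], seen: Set<string>): FeedItem[] {
  const fresh: FeedItem[] = []
  for (const item of items) {
    if (seen.has(item.guid)) continue // already imported, skip
    seen.add(item.guid)               // also dedupes within this batch
    fresh.push(item)
  }
  return fresh
}
```

Adding to `seen` inside the loop means duplicates *within* a single fetch are dropped too, not just items already in the database.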
AI Reservoir Scoring
// Score content against reservoirs
async function scoreContent(item, reservoirs) {
  // 1. Generate embedding for item
  const embedding = await openai.embeddings.create({
    model: 'text-embedding-3-small',
    input: `${item.title}\n\n${item.content}`
  })

  // 2. Vector search against each reservoir
  const scores = await Promise.all(
    reservoirs.map(async (reservoir) => {
      // Semantic similarity via vector search
      const { data } = await supabase.rpc('match_reservoir_items', {
        query_embedding: embedding.data[0].embedding,
        reservoir_id: reservoir.id,
        match_threshold: 0.7,
        match_count: 5
      })

      // Keyword matching
      const keywordScore = calculateKeywordMatch(
        item,
        reservoir.keywords
      )

      // Combined score (60% semantic, 40% keyword)
      const score = (data.avg_similarity * 0.6) + (keywordScore * 0.4)

      return {
        reservoirId: reservoir.id,
        score: score * 10, // Scale to 0-10
        reasoning: generateReasoning(data, keywordScore)
      }
    })
  )

  // 3. Return best match + all scores
  return {
    bestMatch: scores.sort((a, b) => b.score - a.score)[0],
    allMatches: scores
  }
}

Email-to-Reservoir
Inbound Email Flow
1. Email sent to reservoir address
   → react-tips@blackopscenter.com
   ↓
2. Resend receives email
   → Validates inbound domain
   ↓
3. Resend webhook fires
   → POST /api/webhooks/resend/inbound
   → Payload: email metadata
   ↓
4. Webhook handler:
   a. Verify signature (Svix)
   b. Rate limit check (10/min per reservoir)
   c. Validate payload size (<2MB)
   d. Find reservoir by email address
   ↓
5. Fetch full email content
   → GET from Resend API
   → Extract text/HTML
   ↓
6. Process email:
   a. Sanitize HTML (prevent XSS)
   b. Extract metadata (from, subject)
   c. Check for duplicates (prevent re-import)
   ↓
7. Create reservoir item
   → INSERT INTO reservoir_items
   → Link to reservoir
   → Track in reservoir_inbound_emails
   ↓
8. Return 200 OK (acknowledge receipt)
   ↓
9. User sees item in reservoir
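The rate-limit check in step 4b (10 emails per minute per reservoir) can be sketched as a sliding-window counter. This is an illustration of the idea; a real deployment would likely back it with Redis or a database table rather than in-memory state:

```typescript
// Sliding-window rate limiter sketch: allow at most MAX_PER_WINDOW
// events per WINDOW_MS, tracked as a per-reservoir timestamp list.
const WINDOW_MS = 60_000
const MAX_PER_WINDOW = 10

function allowInbound(timestamps: number[], now: number): boolean {
  // Evict timestamps that have fallen out of the window.
  while (timestamps.length > 0 && now - timestamps[0] >= WINDOW_MS) {
    timestamps.shift()
  }
  // Reject if the window is already full.
  if (timestamps.length >= MAX_PER_WINDOW) return false
  timestamps.push(now)
  return true
}
```

Rejected emails still get a 2xx acknowledgement to the webhook provider (so Resend stops retrying); they are simply not imported.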
Billing & Subscriptions
Subscription Model
Base Subscription: $297/month
├─ 1 site included
├─ Core features
└─ Monthly credits

Additional Sites: $100/site/month
├─ Billed as separate line items in Stripe
├─ Added/removed dynamically
└─ Prorated billing

Credit Packs (One-Time Purchase):
├─ Starter: $29 → 3,000 credits
├─ Standard: $49 → 6,000 credits
└─ Pro: $99 → 15,000 credits
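The pricing above reduces to a simple formula: the base price covers the first site, and each additional site adds $100/month. As a sketch:

```typescript
// Monthly subscription price per the model above: $297 base (includes
// one site) plus $100 for each additional site. Proration aside.
const BASE_PRICE = 297
const EXTRA_SITE_PRICE = 100

function monthlyPrice(siteCount: number): number {
  if (siteCount < 1) throw new Error('subscription requires at least one site')
  return BASE_PRICE + (siteCount - 1) * EXTRA_SITE_PRICE
}
```

So one site is $297/month, and three sites are $297 + 2 × $100 = $497/month; Stripe handles the mid-cycle proration when sites are added or removed.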
Stripe Webhook Flow
subscription.created:
1. Parse subscription from Stripe
2. Calculate site count (base + add-ons)
3. Create subscription record in DB
4. Initialize monthly credits
5. Send welcome notification

subscription.updated:
1. Detect changes (site count, tier, period)
2. Update subscription record
3. If billing period renewed → reset credits
4. If tier changed → adjust credit limits

subscription.deleted:
1. Mark subscription as canceled
2. Send cancellation notification
3. Sites go read-only (data retained)

checkout.session.completed:
1. Identify purchase type (subscription vs. credit pack)
2. If credit pack → add credits to user
3. Record transaction
4. Send receipt
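Routing these events might look like the dispatcher sketched below. Note that Stripe's full event names are `customer.subscription.created/updated/deleted` and `checkout.session.completed`; the handler names here are illustrative placeholders for the database work described above:

```typescript
// Hypothetical webhook dispatcher: map a Stripe event type to the
// handler responsible for it. Unknown events are acknowledged but
// not processed, so Stripe does not retry them.
function routeStripeEvent(type: string): string {
  switch (type) {
    case 'customer.subscription.created':
      return 'handleSubscriptionCreated'
    case 'customer.subscription.updated':
      return 'handleSubscriptionUpdated'
    case 'customer.subscription.deleted':
      return 'handleSubscriptionDeleted'
    case 'checkout.session.completed':
      return 'handleCheckoutCompleted'
    default:
      return 'ignore'
  }
}
```

Handlers should be idempotent, since Stripe may deliver the same event more than once.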
Performance & Scaling
Caching Strategy
- Static Pages: Blog posts cached at CDN (Vercel)
- API Responses: Cached with stale-while-revalidate
- Database Queries: React Server Components cache
- Images: Optimized and cached by Vercel Image Optimization
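The stale-while-revalidate policy is expressed as a `Cache-Control` header: the CDN serves the cached copy for `s-maxage` seconds, then keeps serving it stale for a grace period while refetching in the background. The values below are illustrative, not the production configuration:

```typescript
// Build a CDN caching header for the stale-while-revalidate strategy.
// s-maxage applies to shared caches (the CDN), not the browser.
function cacheControl(maxAgeSec: number, staleSec: number): string {
  return `public, s-maxage=${maxAgeSec}, stale-while-revalidate=${staleSec}`
}
```

For example, `cacheControl(60, 300)` caches a response at the edge for a minute and tolerates up to five further minutes of staleness while a fresh copy is fetched, so readers almost never wait on origin latency.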
Database Optimization
-- Indexes for performance
CREATE INDEX idx_posts_site_slug ON posts(site_id, slug);
CREATE INDEX idx_reservoir_items_site ON reservoir_items(site_id);
CREATE INDEX idx_rss_items_published ON rss_feed_items(published_at DESC);
CREATE INDEX idx_rss_items_score ON rss_feed_items(ai_reservoir_score DESC);

-- Vector index for similarity search
CREATE INDEX ON reservoir_items
USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
Scaling Considerations
- Serverless Functions: Auto-scale with traffic (Vercel)
- Database Connection Pooling: Supabase handles pooling
- Rate Limiting: Per-API-key and per-endpoint limits
- Background Jobs: Offload heavy work (RSS, AI) to async queues
- CDN: Static assets served from edge
Monitoring & Observability
Logging
// Structured logging
console.log('[API]', {
  method: 'POST',
  path: '/api/admin/reservoirs',
  userId: user.id,
  siteId: siteId,
  duration: 152,
  status: 200
})

// Error logging with context
console.error('[ERROR]', {
  error: error.message,
  stack: error.stack,
  context: {
    userId: user.id,
    siteId: siteId,
    endpoint: '/api/admin/posts'
  }
})

Metrics & Alerts
- Vercel Analytics: Performance, Core Web Vitals
- Sentry: Error tracking, performance monitoring
- Supabase Dashboard: DB performance, query times
- Stripe Dashboard: Payment success rate, MRR
Security Measures
Defense in Depth
- RLS Policies: Database-level tenant isolation
- API Authorization: Every route validates site access
- Input Validation: Zod schemas for all inputs
- Rate Limiting: Prevent abuse and DDoS
- CSRF Protection: Built into Next.js
- Content Sanitization: HTML sanitized before storage
- Webhook Signatures: Verify all webhooks (Stripe, Resend)
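The webhook-signature principle can be illustrated with a generic HMAC-SHA256 check: the provider signs the raw payload with a shared secret, and the handler recomputes and compares in constant time. In practice the Stripe SDK and the Svix library perform this verification (including timestamp checks against replay attacks) and should be used directly; this sketch only shows the underlying idea:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto'

// Generic HMAC-SHA256 webhook verification sketch. The raw request body
// (not a re-serialized copy) must be signed, and comparison must be
// constant-time to avoid leaking the signature byte by byte.
function verifySignature(payload: string, secret: string, signature: string): boolean {
  const expected = createHmac('sha256', secret).update(payload).digest('hex')
  const a = Buffer.from(expected)
  const b = Buffer.from(signature)
  // timingSafeEqual throws on length mismatch, so guard first.
  return a.length === b.length && timingSafeEqual(a, b)
}
```

A request that fails verification is rejected before any parsing or database work happens.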
Data Encryption
- In Transit: TLS 1.3 for all connections
- At Rest: Database encrypted by Supabase
- Secrets: Environment variables (not in code)
- API Keys: Hashed before storage
Disaster Recovery
Backup Strategy
- Database Backups: Daily automated backups (Supabase)
- Point-in-Time Recovery: 7-day retention
- Storage Backups: Images replicated to multiple regions
Incident Response
- Alert triggers (error rate spike, latency increase)
- On-call engineer notified
- Assess impact and severity
- Mitigate (rollback, scale up, disable feature)
- Post-incident review
- Update runbooks
Future Architecture
Planned Improvements
- Edge Compute: Move more logic to edge for lower latency
- Real-Time Collaboration: Websockets for live editing
- Background Job Queue: Dedicated queue system (BullMQ)
- Read Replicas: DB replicas for read-heavy queries
- GraphQL API: Alternative to REST for flexible querying
Developer Resources
- API Reference: Full API documentation
- Webhooks: Webhook configuration guide
- Building Integrations: Integration examples
- GitHub: Open-source components
Good architecture is invisible. It just works—reliably, securely, and at scale.