Conversation Videos
Turn a real Claude Code session into a published video — every step runs as a single MCP tool call, no web UI clicks required.
The Pipeline
Save the verbatim transcript, generate a chat-demo video that auto-scrolls through every turn, render it to MP4, and attach + post — five MCP calls end to end.
extract JSONL  →  save_conversation  →  create_conversation_video
  (local CLI)          (MCP)                     (MCP)
                                                   │
                                                   ▼
attach_media_to_tweet  ←  poll get_video  ←  render_video
        (MCP)              (MCP, async)
          │
          ▼
  post_tweets (MCP)

Step 1 — Extract the verbatim transcript
Replies reconstructed from in-session memory get paraphrased, so a note saved that way ends up wrong. The extractor instead reads the on-disk session JSONL that Claude Code writes to ~/.claude/projects/<cwd>/<session>.jsonl and emits a clean array of messages.
bun scripts/extract-claude-session.ts \
  --since-text "<first user message of this conversation>" \
  > /tmp/transcript.json
--since-text — slice from the first user message containing this substring (the JSONL holds prior sessions too)
--markdown — preview the rendered output before saving
--no-tools — strip _[tool: NAME]_ annotations
--session — pin to a specific JSONL file
--project-dir — override project root (default: cwd)
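The slicing behind --since-text can be sketched as follows. This is a minimal sketch: the real session JSONL carries more fields per line, and sliceSince / parseJsonl and the two-field entry shape are illustrative assumptions, not the extractor's actual code.

```typescript
type SessionEntry = { role: "user" | "assistant"; content: string };

// Parse raw JSONL (one JSON object per line) into entries.
function parseJsonl(raw: string): SessionEntry[] {
  return raw
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SessionEntry);
}

// Slice from the first user message containing `needle`; everything
// before it is treated as an earlier session in the same JSONL file.
function sliceSince(entries: SessionEntry[], needle: string): SessionEntry[] {
  const start = entries.findIndex(
    (e) => e.role === "user" && e.content.includes(needle)
  );
  return start === -1 ? [] : entries.slice(start);
}
```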
Step 2 — save_conversation
Stores the conversation as a structured note. Renders mobile-readable markdown (H3 role labels, --- rule between turns, optional inline italic timestamps) and auto-tags the note with conversation and emulator-source so it's discoverable downstream.
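That note layout can be sketched as below. Only the H3 role labels, the --- rule between turns, and the optional inline italic timestamps come from the description above; renderMessages and the exact label casing are assumptions.

```typescript
type Message = { role: string; content: string; timestamp?: string };

// Render messages in the shape save_conversation stores: an H3 role
// label per turn, an optional inline italic timestamp, and a --- rule
// separating consecutive turns.
function renderMessages(messages: Message[]): string {
  return messages
    .map((m) => {
      const label = m.role.charAt(0).toUpperCase() + m.role.slice(1);
      const stamp = m.timestamp ? ` _${m.timestamp}_` : "";
      return `### ${label}${stamp}\n\n${m.content}`;
    })
    .join("\n\n---\n\n");
}
```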
save_conversation({
domain: "blackopscenter.com",
title: "Build session — chat video pipeline",
messages: [
{ role: "user", content: "look at blackops note 935eea08" },
{ role: "assistant", content: "Pulled the note..." }
]
})
→ returns note id

Step 3 — create_conversation_video
Looks up the saved note, parses its markdown back into messages, and builds a single chat-demo scene with the full conversation as the script slot. Each message fades + slides in sequentially and the track auto-scrolls to keep the latest bubble in view, so a long conversation reads cleanly inside a fixed viewport.
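The parse-back step can be sketched as the inverse of step 2's layout — split on --- rules, read the H3 role label, keep the rest as the body. parseNote is a hypothetical name and the tool's real parser surely handles more edge cases.

```typescript
type Message = { role: string; content: string };

// Invert the step-2 note layout: split blocks on --- rules, read the
// H3 role label (dropping any inline italic timestamp), and treat the
// remainder of each block as the message content.
function parseNote(markdown: string): Message[] {
  return markdown
    .split(/\n---\n/)
    .map((block) => block.trim())
    .filter((block) => block.startsWith("### "))
    .map((block) => {
      const headerEnd = block.indexOf("\n");
      const header = headerEnd === -1 ? block.slice(4) : block.slice(4, headerEnd);
      const role = header.replace(/_[^_]*_/g, "").trim().toLowerCase();
      const content = headerEnd === -1 ? "" : block.slice(headerEnd).trim();
      return { role, content };
    });
}
```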
create_conversation_video({
domain: "blackopscenter.com",
note_id: "<uuid or slug from step 2>",
format: "1920x1080", // optional; "1080x1920" for vertical
duration: 30, // optional; auto ~3.2s/turn, capped 12-60s
voiceover_text: "...", // optional narration over the chat
voice_id: "...", // optional ElevenLabs voice id
max_messages: 12 // optional cap
})
→ returns video id

The chat-demo template auto-scales per-message dwell time to text length so the full script fits the scene duration, with reveal/settle padding.
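The stated timing rules (~3.2 s per turn, clamped to the 12–60 s window, dwell proportional to text length) can be sketched as below; both function names are illustrative, and the real template adds the reveal/settle padding this sketch omits.

```typescript
// Auto duration: roughly 3.2 seconds per turn, clamped to 12-60s.
function autoDuration(turnCount: number): number {
  return Math.min(60, Math.max(12, turnCount * 3.2));
}

// Split a scene duration across messages proportionally to text
// length, so longer bubbles hold the viewport longer.
function dwellTimes(lengths: number[], duration: number): number[] {
  const total = lengths.reduce((a, b) => a + b, 0);
  return lengths.map((len) => (duration * len) / total);
}
```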
Step 4 — render_video (async)
Triggers the render. Returns immediately — poll get_video for generation_progress / generation_stage. When status becomes rendered, the MP4 lives at video.exports[0].output_url.
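The poll loop can be sketched as below. pollUntilRendered and the getVideo callback shape are assumptions standing in for repeated get_video MCP calls; only the status value, the progress/stage fields, and the exports[0].output_url location come from the description above.

```typescript
type VideoStatus = {
  status: string;
  generation_progress?: number;
  generation_stage?: string;
  exports?: { output_url: string }[];
};

// Poll until status becomes "rendered", then return the MP4 URL.
// `getVideo` stands in for a get_video MCP call for this video id.
async function pollUntilRendered(
  getVideo: () => Promise<VideoStatus>,
  intervalMs = 5000,
  maxAttempts = 60
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const video = await getVideo();
    if (video.status === "rendered" && video.exports?.length) {
      return video.exports[0].output_url;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("render timed out");
}
```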
render_video({
id: "<video id>",
domain: "blackopscenter.com",
quality: "standard" // draft | standard | high
})

Step 5 — Attach & post
Feed the rendered MP4 URL to attach_media_to_tweet (or attach_media_to_post), then publish with post_tweets. Both tools already exist in the MCP surface — see their built-in descriptions for parameters.
Why this matters
The raw material is the work itself — not a recreation of it. A real working session captured verbatim is more authentic than any scripted demo, and faster to produce.
Because every step is an MCP tool call, the same chain works from Claude Code, the Chrome extension, or any other MCP client. Saying “show this conversation in a video and post it” triggers the full pipeline.
See also: Scene Templates · Rendering Pipeline · Voiceover