2026-02-02
better_changelog — Vercel Deployment (RESOLVED)
The saga:
- Original error: "No Next.js version detected" → Set Root Directory to `apps/web`
- 404 NOT_FOUND (Vercel platform): `output: "standalone"` in next.config → commented out
- Still 404: Missing env vars (Clerk etc) → Adam confirmed they were set
- Build completing in 53ms: Vercel "Detected Turbo" and skipped the entire build pipeline
- Turbo override attempt: Changed Build Command to `next build`, Install Command to `pnpm install` → Turbo detection STILL hijacked the build
- Removed turbo signals: Stripped turbo.json and the turbo dep from the personal fork → pushed, but Vercel kept building old commit 5495ce8
- Git webhook broken: Vercel wasn't picking up new commits despite a correct repo connection. Disconnect/reconnect didn't fix it.
- CLI deploy: Installed the Vercel CLI, deployed directly → build succeeded but created a new project without env vars (500 MIDDLEWARE_INVOCATION_FAILED)
- Env var newlines: Used `echo` to pipe env vars → trailing newlines broke NEXT_PUBLIC_APP_DOMAIN matching in the middleware → all requests treated as tenant subdomains → rewritten to /changelog → 404 (illustrated in the sketch after this list)
- Fixed with `printf`: Re-added all env vars without newlines, redeployed → 200 OK 🎉
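The newline failure mode is easy to reproduce. A minimal sketch (the exact comparison in the middleware is an assumption, and the domain is a placeholder, but any strict equality against the env var fails the same way):

```ts
// Hypothetical illustration: a value added via `echo "value" | vercel env add` picks up
// a trailing "\n", so a strict host check against NEXT_PUBLIC_APP_DOMAIN never matches.
const appDomain = "example.com\n"; // what `echo` stored (newline included)
const requestHost = "example.com"; // what the middleware sees on the request

console.log(requestHost === appDomain);        // false → request treated as a tenant subdomain
console.log(requestHost === appDomain.trim()); // true → trim, or add the var with `printf` instead
```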
New Vercel setup:
- Project: adams-projects-a157b046/web
- URL: https://web-five-phi-53.vercel.app
- Deploy method: Vercel CLI from the server (`vercel deploy --prod --yes --token=$(cat /root/.vercel-token)`)
- Token: saved at /root/.vercel-token
- Root Directory: apps/web (via CLI link, not Git integration)
- Framework: Next.js (auto-detected)
- Old project: better-changelog-web.vercel.app (abandoned, Git integration was broken)
Lessons learned:
- Vercel’s Turbo auto-detection hijacks builds even with command overrides — strip turbo.json for non-Turbo deploys
echo "value" | vercel env addadds trailing newline — useprintfinstead- Vercel “Redeploy” replays the same commit — won’t pick up new pushes if webhook is dead
output: "standalone"in next.config breaks Vercel (it’s for Docker/Render only)
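For reference, a minimal next.config.ts along the lines of the fix. This is a sketch, not the project's actual file, which has more options; it only shows the one line that had to go for Vercel:

```ts
// next.config.ts (sketch): keep `output: "standalone"` only for Docker/Render-style hosts.
// Vercel manages its own build output format, so the option stays commented out there.
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  // output: "standalone", // breaks Vercel; re-enable only for self-hosted Docker builds
};

export default nextConfig;
```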
Agent Visualizer / Dashboard Research
Adam wants better real-time visibility into what his agent and sub-agents are doing. Here’s what exists.
1. Built-in: Clawdbot Control UI (Gateway Dashboard)
URL: http://127.0.0.1:18789/ (local) or via Tailscale
Docs: https://docs.openclaw.ai/web/control-ui
The Gateway already ships a Vite + Lit single-page app that connects via WebSocket. It provides:
- Chat with the agent (send, stream tool calls + live tool output cards, abort)
- Sessions list with per-session thinking/verbose overrides
- Live log tailing with filter/export (`logs.tail` RPC)
- Cron jobs management (list/add/run/enable/disable + run history)
- Nodes list with capabilities
- Skills status, enable/disable, install
- Exec approvals for gateway/node allowlists
- Config view/edit with validation
- Debug panel: status/health/models snapshots + event log + manual RPC calls
What it does well:
- Already installed, zero-config for local use
- Live streaming of tool calls and agent events
- Sessions list shows all active sessions (main, group, cron, sub-agents)
- Log tailing with filters
What it lacks for Adam’s use case:
- No visual tree/graph of parent → sub-agent relationships
- No real-time “activity feed” showing all concurrent agent runs at once
- Sessions list is flat — doesn’t show spawn hierarchy
- No timeline or waterfall view of tool executions
- No token/cost dashboard aggregation
2. Built-in: CLI Tools for Monitoring
Several CLI commands provide real-time monitoring:
- `clawdbot status --all`: Full diagnosis with log tail
- `clawdbot sessions --json --active 60`: Active sessions
- `clawdbot logs --follow`: Live log stream
- `clawdbot gateway call sessions.list`: RPC query for sessions
- `clawdbot gateway call logs.tail --params '{"sinceMs": 60000}'`
- `/status` in chat: Session context, token usage, model info
- `/context list`: What's in the system prompt
3. Built-in: Gateway WebSocket Event Stream
The Gateway emits structured events on the WS protocol that any custom dashboard could consume:
- `agent` events: lifecycle start/end/error + streaming assistant/tool deltas
- `chat` events: delta messages, final messages
- `presence` events: connected clients/nodes
- `health` events: gateway health snapshots
- `cron` events: job runs
The sub-agent system uses session keys like agent:<agentId>:subagent:<uuid>, so a custom listener could reconstruct the spawn tree by watching sessions.list and matching subagent: patterns.
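A sketch of that reconstruction, assuming session keys follow the `agent:<agentId>:subagent:<uuid>` pattern described above (the example keys are hypothetical):

```ts
// Group session keys into a parent → sub-agent tree based on the key naming convention.
// Assumed formats: "agent:<agentId>" for parents, "agent:<agentId>:subagent:<uuid>" for spawns.
type SpawnTree = Map<string, string[]>; // agentId → list of sub-agent session keys

function buildSpawnTree(sessionKeys: string[]): SpawnTree {
  const tree: SpawnTree = new Map();
  for (const key of sessionKeys) {
    const match = key.match(/^agent:([^:]+)(?::subagent:([0-9a-f-]+))?$/i);
    if (!match) continue; // ignore non-agent sessions (cron, group chats, etc.)
    const [, agentId, subagentId] = match;
    if (!tree.has(agentId)) tree.set(agentId, []);
    if (subagentId) tree.get(agentId)!.push(key);
  }
  return tree;
}

// Example with made-up keys:
const tree = buildSpawnTree([
  "agent:main",
  "agent:main:subagent:3f2a9c1e-7b4d-4e2a-9c1e-7b4d4e2a9c1e",
  "agent:main:subagent:0c8d2b4a-1e3f-4a5b-8c9d-0e1f2a3b4c5d",
]);
console.log(tree); // Map { "main" => [ ...two sub-agent session keys ] }
```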
4. Canvas — Build Your Own Dashboard
The Canvas system (A2UI or plain HTML) can be used to build a custom real-time dashboard:
- Agent can push HTML/JS to the Canvas panel (macOS app, iOS, Android)
- A2UI supports component-based UI updates
- Canvas host: http://<gateway>:18793/__openclaw__/a2ui/
- The agent itself could build and maintain a monitoring dashboard as a Canvas skill
Idea: Write a skill that periodically polls sessions.list + logs.tail and renders a live tree view in Canvas. This would be the most “native” Clawdbot approach.
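A rough sketch of the polling half, leaning on the `clawdbot gateway call sessions.list` CLI already listed above. The JSON shape it returns is an assumption; field names would need adjusting once a real payload is inspected.

```ts
// Poll the gateway for sessions via the existing CLI and print a crude spawn tree.
// Assumes `clawdbot gateway call sessions.list` prints a JSON array with a `key` per session.
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

async function listSessionKeys(): Promise<string[]> {
  const { stdout } = await run("clawdbot", ["gateway", "call", "sessions.list"]);
  const sessions = JSON.parse(stdout) as Array<{ key?: string }>; // assumed payload shape
  return sessions.map((s) => s.key ?? "").filter(Boolean);
}

async function tick() {
  const keys = await listSessionKeys();
  const parents = keys.filter((k) => !k.includes(":subagent:"));
  for (const parent of parents) {
    console.log(parent);
    for (const child of keys.filter((k) => k.startsWith(`${parent}:subagent:`))) {
      console.log(`  └─ ${child}`);
    }
  }
}

setInterval(() => tick().catch(console.error), 5_000); // refresh every 5s
```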
5. External: Langfuse (Open Source LLM Observability)
URL: https://github.com/langfuse/langfuse | https://langfuse.com
Stars: Very popular, YC W23
Full LLM engineering platform with:
- Distributed trace visualization (nested spans, tree view)
- Session/conversation tracking
- Token + cost analytics
- Prompt management + versioning
- Self-hostable via Docker Compose
Pros:
- Beautiful trace visualization — exactly the tree/waterfall view Adam wants
- Self-hostable, open source
- Great for debugging complex multi-step agent runs
- Tracks costs, latency, tokens per step
Cons:
- Python/JS SDK integration — would need a Clawdbot plugin/hook to emit traces
- No native Clawdbot integration exists yet
- Adds infrastructure (Postgres + ClickHouse or their cloud)
- Designed for LLM API calls, not the full Clawdbot session model
6. External: Helicone (Open Source LLM Observability)
URL: https://github.com/Helicone/helicone | https://helicone.ai
Stars: Popular, YC W23
AI Gateway + observability platform:
- Agent tracing with step-by-step execution graphs
- Cost + latency tracking per request
- Session replay
- Self-hostable via Docker
Pros:
- Session replays show full agent execution flow
- Works as a proxy gateway (intercepts API calls)
- 10k free requests/month on cloud tier
Cons:
- Works as API proxy — would need to route Clawdbot’s model calls through Helicone
- Heavier infrastructure if self-hosting
- Less suited for visualizing the Clawdbot session/sub-agent hierarchy specifically
7. External: AgentOps (Open Source Agent Monitoring)
URL: https://github.com/AgentOps-AI/agentops | https://agentops.ai
Purpose-built for AI agent observability:
- Session replays with step-by-step execution graphs
- Agent debugging with nested span visualization
- Cost tracking
- Self-hostable (MIT license)
- Decorator-based Python SDK
Pros:
- Most “agent-native” of the external tools
- Session replay is very close to what Adam wants
- Open source + self-hostable
Cons:
- Python SDK — would need a bridge/plugin for Clawdbot (TypeScript/Node)
- Designed for Python agent frameworks (CrewAI, LangChain, etc.)
- Integration effort would be significant
8. External: Pydantic Logfire
URL: https://github.com/pydantic/logfire
OpenTelemetry-based observability with LLM-specific features:
- Beautiful trace visualization
- SQL query interface for data
- Python-centric
Cons: Python-only SDK, closed-source backend — not practical for Clawdbot.
9. External: Traccia (OpenTelemetry for AI)
URL: https://github.com/traccia-ai/traccia
Lightweight OpenTelemetry SDK for AI agents:
- Auto-patches OpenAI, Anthropic clients
- Exports to Grafana Tempo, Jaeger, Zipkin
- Cost + token tracking
Pros: Standards-based (OTel), can use any OTel backend for visualization
Cons: Python-only, would need a TypeScript equivalent + Clawdbot hooks
🏆 Recommendation for Adam
Short-term (today, zero effort):
Use the built-in Control UI at http://127.0.0.1:18789/. It already streams tool calls, shows sessions, and tails logs. Open it alongside your Telegram chat to see what’s happening.
Medium-term (best ROI): Build a custom Canvas dashboard skill. This would:
- Use the Gateway WS API to subscribe to `agent` events across all sessions
- Reconstruct the sub-agent spawn tree from session keys (`subagent:<uuid>`)
- Show a live tree view: main agent → sub-agents with status indicators
- Display current tool being executed, tokens used, elapsed time
- Render in Canvas (visible on macOS/iOS) or as a standalone web page
This is the most “Clawdbot-native” approach and doesn’t require external infrastructure.
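A minimal subscription sketch using the `ws` package. The gateway port comes from the notes above, but the message envelope and subscribe handshake below are assumptions and would need to match the actual WS protocol:

```ts
// Subscribe to the Gateway WebSocket and surface agent lifecycle events as they stream in.
// Port 18789 is from the Control UI notes; the event envelope here is assumed, not documented.
import WebSocket from "ws";

interface GatewayEvent {
  type?: string;       // e.g. "agent" (assumed)
  sessionKey?: string; // e.g. "agent:main:subagent:<uuid>" (assumed)
  phase?: string;      // e.g. "start" | "tool" | "end" | "error" (assumed)
}

const ws = new WebSocket("ws://127.0.0.1:18789/");

ws.on("open", () => {
  // Hypothetical subscribe handshake; replace with whatever the gateway actually expects.
  ws.send(JSON.stringify({ method: "events.subscribe", params: { topics: ["agent"] } }));
});

ws.on("message", (raw) => {
  const event = JSON.parse(raw.toString()) as GatewayEvent;
  if (event.type !== "agent") return;
  const isSubAgent = event.sessionKey?.includes(":subagent:") ?? false;
  console.log(`${isSubAgent ? "  └─" : "●"} ${event.sessionKey} ${event.phase ?? ""}`);
});
```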
Long-term (if needed): Self-host Langfuse and write a Clawdbot plugin hook that emits OpenTelemetry traces:
- Hook into `before_agent_start`, `after_tool_call`, `agent_end`
- Emit spans to Langfuse for each agent run, tool call, and sub-agent spawn
- Get Langfuse’s beautiful trace tree visualization + cost analytics
The plugin hook system (`before_tool_call`, `after_tool_call`, `agent_end`, `session_start`, `session_end`) is perfectly suited for this: each hook point maps cleanly to an OpenTelemetry span boundary.
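A sketch of that mapping using the vanilla OpenTelemetry API. The hook argument shapes are assumptions, and `@opentelemetry/api` alone exports nothing until an SDK/exporter (e.g. OTLP pointed at Langfuse) is registered:

```ts
// Map plugin hook boundaries onto OpenTelemetry spans: one span per tool call,
// opened in before_tool_call and closed in after_tool_call. Hook names are from the
// notes above; their argument shapes are hypothetical and only for illustration.
import { trace, type Span } from "@opentelemetry/api";

const tracer = trace.getTracer("clawdbot-agent");
const openSpans = new Map<string, Span>(); // callId → span

// Hypothetical hook: called right before the agent executes a tool.
export function before_tool_call(ctx: { callId: string; toolName: string; sessionKey: string }) {
  const span = tracer.startSpan(`tool:${ctx.toolName}`, {
    attributes: { "clawdbot.session_key": ctx.sessionKey },
  });
  openSpans.set(ctx.callId, span);
}

// Hypothetical hook: called after the tool returns (or throws).
export function after_tool_call(ctx: { callId: string; error?: Error }) {
  const span = openSpans.get(ctx.callId);
  if (!span) return;
  if (ctx.error) span.recordException(ctx.error);
  span.end();
  openSpans.delete(ctx.callId);
}
```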
Key insight: The Gateway WS protocol already emits all the events needed. The gap is a visualization layer that shows the parent→child spawn tree and concurrent activity in a visual way. A Canvas skill polling sessions.list + subscribing to agent events is the fastest path to what Adam wants.
better_changelog — Post-Deploy Fixes (Overnight, Adam sleeping)
What was done:
- Upgraded Next.js 16.0.7 → 16.1.6 (latest stable)
  - Fixes CVE-2025-66478 security vulnerability
  - Resolves baseline-browser-mapping outdated warnings (bundled in 16.1.6)
  - 20MB smaller install, Turbopack FS caching stable, new bundle analyzer
- Migrated middleware.ts → proxy.ts (minimal sketch after this list)
  - `middleware` file convention deprecated in Next.js 16, renamed to `proxy`
  - Just a file rename; the API is identical (clerkMiddleware, config.matcher all unchanged)
  - Clerk docs already reference `proxy.ts` as the convention
  - Build now shows `ƒ Proxy (Middleware)` instead of a deprecation warning
- Updated next.config.ts
  - Removed stale "Experimental features for Next.js 15" comment
  - tsconfig auto-updated by Next.js 16.1.6 (jsx: preserve → react-jsx, added .next/dev/types)
- Deployed and verified
  - Build: clean, no warnings, 18.5s compile on Vercel
  - `/` → 200 ✅ (landing page renders fully)
  - `/sign-in` → 200 ✅ (Clerk auth form renders)
  - `/dashboard` → 404 with `x-clerk-auth-reason: protect-rewrite` ✅ (correct for an unauthenticated request)
  - Browser screenshot confirms full UI rendering
- Pushed to personal fork: `git push personal vercel-fix:main` (commit 4eca275)
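The proxy.ts referenced above is, per the notes, just the old middleware.ts under a new name. A minimal sketch of the shape; the project's real matcher and any custom protection logic may differ:

```ts
// apps/web/proxy.ts (sketch): Next.js 16 renamed the `middleware` file convention to `proxy`,
// but the Clerk API is unchanged: same clerkMiddleware() export, same config.matcher.
import { clerkMiddleware } from "@clerk/nextjs/server";

export default clerkMiddleware();

export const config = {
  // Run on everything except static assets and _next internals (typical Clerk matcher).
  matcher: ["/((?!_next|.*\\..*).*)", "/(api|trpc)(.*)"],
};
```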
What was NOT changed (and why):
- Origin/personal fork sync: Origin (project-shovels) still has turbo.json; personal fork has it stripped. This divergence is intentional — Vercel deploys from personal fork which needs turbo stripped. Origin can keep turbo for local dev. Syncing would break one or the other.
- Clerk “Development mode” badge visible on sign-in — this is Clerk’s dev instance config, not a code issue. Adam would need to switch to production keys when ready.
- Missing pages (pricing, features, docs) — these nav links 404. They’re placeholder links in the landing page, not bugs. Will be built later.
Git state:
- Branch `vercel-fix` is 4 commits ahead of `origin/main` (turbo strip + trigger builds + this fix)
- Personal fork `main` synced to `vercel-fix` head (4eca275)
- Deploy command: `cd /root/clawd/better_changelog/apps/web && vercel deploy --prod --yes --token=$(cat /root/.vercel-token)`