Reggie Self-Improvement Projects – Implementation Plan
Created: 2026-02-03 | Author: Reggie (sub-agent: project-planning)
Project 1: Agent Visualizer Dashboard
Priority: HIGH | Estimated Effort: 2–3 days | Dependencies: None (all infrastructure exists)
What It Is
A custom single-page web dashboard that connects to the Clawdbot gateway WebSocket and provides real-time observability into Reggie's operations: live sessions, activity streams, spawn trees, logs, and cost tracking.
Why It Matters
- Visibility: Adam can't currently see what Reggie is doing in real time without reading Telegram messages or SSH-ing in. This gives him a live control tower.
- Debugging: When things go wrong (stuck sub-agents, excessive token burn, errors), there's no single place to see what's happening.
- Trust: Transparency builds confidence in autonomy. If Adam can always peek at the dashboard, he'll be more comfortable letting Reggie run independently.
- Exists partially: Clawdbot already ships a built-in control UI at http://127.0.0.1:18789/ (the control-ui directory), but it's a Lit-based compiled app. A custom vanilla HTML/JS dashboard gives us full control and can be tailored to Reggie-specific needs.
Technical Context (Discovered)
- Gateway: Running on port 18789, loopback-only, accessible via Tailscale Serve at https://clawdbot-test.tailfc9c40.ts.net/
- Built-in control UI: Exists at /usr/lib/node_modules/clawdbot/dist/control-ui/ (Lit web components, single JS bundle). Already has sessions, chat, config, logs, cron views.
- Gateway WS protocol: JSON frames with req/res/event types. Uses id-based request-response plus event streaming.
- Key RPC methods (confirmed in source):
  - sessions.list – lists all sessions with metadata
  - sessions.patch – update session properties
  - chat.send – send a message to a session
  - chat.history – get chat history for a session
  - chat.abort – abort a running agent turn
  - logs.tail – tail the gateway log file (cursor-based pagination)
  - cron.list / cron.status – cron job management
  - status / health – system status
  - config.get / config.schema – configuration
  - channels.status – channel status
  - models.list – available models
  - node.list – paired nodes
  - skills.status – skills status
- Key events:
  - AgentEvent – tool calls, LLM responses, errors (fields: runId, seq, stream, ts, data)
  - ChatEvent – session activity (fields: runId, sessionKey, seq, state, message)
- Auth: Gateway supports device auth with tokens and an optional password
Implementation Steps
Step 1: Scaffold the HTML page
- Create /root/clawd/dashboard/index.html – single file, vanilla HTML/CSS/JS
- Dark theme matching Clawdbot's design tokens (discovered CSS vars from the built-in UI)
- CSS Grid layout: sidebar nav + main content area
- Responsive design for desktop use
Step 2: WebSocket connection layer
- JavaScript class GatewayClient that:
  - Connects to ws://127.0.0.1:18789 (or via the Tailscale URL)
  - Handles the JSON frame protocol: { type: "req", id, method, params } → { type: "res", id, result/error }
  - Auto-reconnects with exponential backoff
  - Listens for AgentEvent and ChatEvent via an event-listener pattern
  - Correlates requests and responses via the id field
  - Supports an optional auth token (from URL param or input field)
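The request/response correlation described above can be sketched as a small standalone layer. The class and helper names here are illustrative (not from the Clawdbot source); the transport is injected as a plain `send(frame)` function so the logic works without a live gateway:

```javascript
// Sketch of the request/response correlation layer for GatewayClient.
// The WebSocket transport is injected as send(frame) so this stays testable.
class FrameCorrelator {
  constructor(send) {
    this.send = send;
    this.nextId = 1;
    this.pending = new Map(); // id -> { resolve, reject }
  }

  // Emit a { type: "req" } frame; resolves when the matching "res" arrives.
  request(method, params = {}) {
    const id = String(this.nextId++);
    const promise = new Promise((resolve, reject) => {
      this.pending.set(id, { resolve, reject });
    });
    this.send({ type: "req", id, method, params });
    return promise;
  }

  // Feed every incoming frame here; "res" frames settle their request,
  // event frames are handled elsewhere.
  handleFrame(frame) {
    if (frame.type !== "res") return;
    const entry = this.pending.get(frame.id);
    if (!entry) return;
    this.pending.delete(frame.id);
    if (frame.error) entry.reject(new Error(frame.error.message ?? "gateway error"));
    else entry.resolve(frame.result);
  }
}

// Exponential backoff with a cap, for the auto-reconnect loop.
function backoffMs(attempt, base = 500, cap = 30000) {
  return Math.min(cap, base * 2 ** attempt);
}
```

The real `GatewayClient` would wrap this around a `WebSocket` and re-run `request` queues after reconnect.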
Step 3: Live Sessions Panel
- Call sessions.list on connect and refresh every 5s
- Display table/cards with:
- Session key, label, agent ID
- Status (active/idle), model in use
- Token count, cost if available
- Last activity timestamp
- Active run indicator (spinning icon)
- Color-code: green = active, gray = idle, red = error
- Click to view session details / chat history
Step 4: Agent Activity Stream
- Subscribe to AgentEvent and ChatEvent via WS events
- Real-time feed showing:
  - 🔧 Tool calls (tool name, arguments preview)
  - 💬 LLM responses (truncated preview)
  - ❌ Errors (full error message)
  - 🤖 Session state changes (started, completed, aborted)
- Each entry shows: timestamp, session key, event type, content
- Auto-scroll to bottom (with "pause auto-scroll" on manual scroll up)
- Filter by session key, event type, or search text
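The filter controls reduce to one predicate applied per feed entry. The entry field names (`sessionKey`, `type`, `text`) are assumptions about how the dashboard would shape each event, not gateway fields:

```javascript
// Filter predicate for the activity stream; an empty filter matches all.
// Entry shape { ts, sessionKey, type, text } is this dashboard's own.
function matchesFilter(entry, { sessionKey, type, search } = {}) {
  if (sessionKey && entry.sessionKey !== sessionKey) return false;
  if (type && entry.type !== type) return false;
  if (search && !entry.text.toLowerCase().includes(search.toLowerCase())) return false;
  return true;
}
```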
Step 5: Session Spawn Tree
- Parse session keys to extract hierarchy (format: agent:main:subagent:UUID)
- Visualize as an indented tree or simple DAG:

  ├─ agent:main:main (ACTIVE)
  │  ├─ agent:main:subagent:abc123 (COMPLETED)
  │  └─ agent:main:subagent:def456 (RUNNING)
  └─ cron:morning-check (IDLE)

- Use CSS for tree lines, no external library needed
- Show spawn time, duration, status badge
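The grouping rule can be a small pure function. The rule itself is an assumption inferred from the example tree above: `agent:<name>:subagent:<uuid>` hangs under `agent:<name>:main`, and everything else (e.g. `cron:*`) is a root:

```javascript
// Group session keys into a spawn tree (root key -> child keys).
// Assumed rule: "agent:<name>:subagent:<uuid>" belongs to "agent:<name>:main".
function buildSpawnTree(keys) {
  const roots = new Map();
  for (const key of keys) {
    const m = key.match(/^agent:([^:]+):subagent:/);
    const root = m ? `agent:${m[1]}:main` : key;
    if (!roots.has(root)) roots.set(root, []);
    if (m) roots.get(root).push(key);
  }
  return roots;
}
```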
Step 6: Log Tail
- Call logs.tail with cursor-based pagination
- Display in a monospace log viewer:
  - Timestamp | Level | Subsystem | Message
  - Color-coded levels (trace=gray, info=blue, warn=yellow, error=red)
- Poll every 2s for new entries (matching built-in UI behavior)
- Filter controls: level dropdown, subsystem text filter, search
- Limit display to last 500 lines (configurable)
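The 500-line cap is just a bounded append on each poll tick; a minimal sketch:

```javascript
// Merge newly fetched log entries into the display buffer, keeping only
// the most recent `limit` lines (the plan's default cap is 500).
function appendTail(buffer, newEntries, limit = 500) {
  const merged = buffer.concat(newEntries);
  return merged.length > limit ? merged.slice(merged.length - limit) : merged;
}
```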
Step 7: Cost/Token Tracking
- Extract from session metadata (if available in the sessions.list response)
- Display per-session: input tokens, output tokens, estimated cost
- Show totals across all active sessions
- Simple bar chart using CSS (no chart library)
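The totals row is a simple reduction over whatever usage fields `sessions.list` exposes. The field names here (`inputTokens`, `outputTokens`, `cost`) are assumptions; the real response shape needs to be confirmed against the gateway:

```javascript
// Sum per-session usage into dashboard totals; missing fields count as 0.
// Field names are assumed, pending inspection of the sessions.list response.
function sumUsage(sessions) {
  return sessions.reduce(
    (acc, s) => ({
      inputTokens: acc.inputTokens + (s.inputTokens ?? 0),
      outputTokens: acc.outputTokens + (s.outputTokens ?? 0),
      cost: acc.cost + (s.cost ?? 0),
    }),
    { inputTokens: 0, outputTokens: 0, cost: 0 }
  );
}
```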
Step 8: Deployment
- Option A (preferred): Serve from gateway itself
- Gateway serves static files from its control-ui directory
- We could add our dashboard alongside or at a sub-path
- Investigate if gateway supports custom static dirs
- Option B: Serve via simple HTTP server
  - python3 -m http.server 8080 from /root/clawd/dashboard/
  - Add a Tailscale Serve rule for the port
- Option C: Standalone HTML file
- Open directly in browser via Tailscale, with WS URL configured
File Structure
/root/clawd/dashboard/
├── index.html          # Single-page app (HTML + CSS + JS inline)
├── README.md           # How to access and use
└── (optional) styles/  # If CSS grows too large for inline
Key Design Decisions
- No build step: Vanilla HTML/JS. No npm, no bundler, no framework.
- No external dependencies: Zero CDN links. Pure browser APIs.
- WebSocket only: All data flows over the gateway WS RPC; the only recurring poll is logs.tail, which is cursor-based.
- Progressive enhancement: Dashboard works even if some RPC methods fail (shows βunavailableβ rather than crashing).
Project 2: Knowledge Base / Second Brain
Priority: MEDIUM | Estimated Effort: 3–4 days | Dependencies: SQLite with FTS5
What It Is
A searchable, structured knowledge system that indexes everything Reggie learns – decisions, research, conversation insights, project context – making it queryable by keyword, tag, date, or semantic similarity.
Why It Matters
- Memory files are flat: memory/YYYY-MM-DD.md files grow linearly. Finding "what did we decide about the CalWizz pricing strategy?" requires reading multiple files.
- MEMORY.md gets stale: It's manually curated and can't scale. Important context gets buried or lost.
- Search is primitive: The existing memory_search tool (if it exists) likely does simple grep. We need structured search with relevance ranking.
- Pattern recognition: A structured KB lets Reggie say "we've discussed competitor X three times; here's what we concluded each time."
Technical Context
- Current state: 7 daily memory files (2026-01-27 through 2026-02-03) + MEMORY.md
- SQLite: libsqlite3 is installed (v3.45.1). Need to install the sqlite3 CLI and Node.js bindings.
- No existing memory_search tool found in config: Would need to be built or integrated.
- FTS5: Available in the installed SQLite version. Provides full-text search with ranking.
Implementation Steps
Phase 1: SQLite Database Schema (Day 1)
-- Core notes table
CREATE TABLE notes (
id INTEGER PRIMARY KEY AUTOINCREMENT,
source TEXT NOT NULL, -- 'daily', 'memory', 'conversation', 'research', 'decision'
source_file TEXT, -- Original file path if from markdown
title TEXT,
content TEXT NOT NULL,
created_at TEXT NOT NULL, -- ISO 8601
updated_at TEXT NOT NULL,
project TEXT, -- 'calwizz', 'someshovels', 'better-changelog', etc.
importance INTEGER DEFAULT 0 -- 0=normal, 1=notable, 2=critical
);
-- Tags for categorization
CREATE TABLE tags (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT UNIQUE NOT NULL
);
CREATE TABLE note_tags (
note_id INTEGER REFERENCES notes(id) ON DELETE CASCADE,
tag_id INTEGER REFERENCES tags(id) ON DELETE CASCADE,
PRIMARY KEY (note_id, tag_id)
);
-- Links between notes (bidirectional)
CREATE TABLE note_links (
from_note_id INTEGER REFERENCES notes(id) ON DELETE CASCADE,
to_note_id INTEGER REFERENCES notes(id) ON DELETE CASCADE,
relation TEXT, -- 'references', 'supersedes', 'contradicts'
PRIMARY KEY (from_note_id, to_note_id)
);
-- Full-text search index
CREATE VIRTUAL TABLE notes_fts USING fts5(
title, content, source, project,
content='notes',
content_rowid='id'
);
-- Triggers to keep FTS in sync
CREATE TRIGGER notes_ai AFTER INSERT ON notes BEGIN
INSERT INTO notes_fts(rowid, title, content, source, project)
VALUES (new.id, new.title, new.content, new.source, new.project);
END;
-- (similar triggers for UPDATE and DELETE)

Location: /root/clawd/knowledge/reggie.db
Phase 2: Ingestion Scripts (Day 1-2)
- ingest-daily.js: Parse memory/YYYY-MM-DD.md files
  - Split by ## headers into individual notes
  - Extract date from filename
  - Auto-tag based on content keywords (calwizz, changelog, marketing, etc.)
  - Track which files have already been ingested (avoid duplicates)
- ingest-memory.js: Parse MEMORY.md
  - Split by ## sections
  - Mark as importance=2 (curated long-term memory)
  - Link related notes via keyword matching
- Auto-extraction: Run as part of heartbeat or cron
  - After each daily file update, re-ingest new content
  - Incremental: only process new/changed sections
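The split-by-header and auto-tag steps are small pure functions; a sketch with an illustrative keyword→tag map (the real map would live in config):

```javascript
// Keyword -> tag map; entries here are illustrative examples only.
const TAG_KEYWORDS = { calwizz: "calwizz", changelog: "better-changelog", marketing: "marketing" };

// Split a daily markdown file into { title, content } notes at "## " headers.
function splitSections(markdown) {
  const sections = [];
  let current = null;
  for (const line of markdown.split("\n")) {
    if (line.startsWith("## ")) {
      if (current) sections.push(current);
      current = { title: line.slice(3).trim(), content: "" };
    } else if (current) {
      current.content += line + "\n";
    }
  }
  if (current) sections.push(current);
  return sections;
}

// Case-insensitive keyword scan producing the tags for one note.
function autoTags(text) {
  const lower = text.toLowerCase();
  return Object.keys(TAG_KEYWORDS).filter(k => lower.includes(k)).map(k => TAG_KEYWORDS[k]);
}
```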
Phase 3: Search API (Day 2-3)
- Node.js module: /root/clawd/knowledge/search.js
  - search(query, options) → ranked results
  - Options: project, source, dateRange, tags, limit
  - Uses FTS5 MATCH with bm25() ranking
  - Returns: [{ id, title, snippet, source, score, tags, created_at }]
- Integration with agent tools:
  - Custom tool or enhance the existing memory_search
  - Accept natural language queries
  - Return formatted markdown snippets
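The query `search.js` would run can be built as a string against the schema above. Only the `MATCH` + `bm25()` shape comes from the plan; the snippet call and filter clauses are illustrative (note FTS5's `bm25()` returns lower-is-better, so the sort is ascending):

```javascript
// Build the FTS5 query for the given filters; the caller binds the query
// text and filter values as placeholders. Filters shown are illustrative.
function buildSearchSQL({ project, source, limit = 20 } = {}) {
  let sql =
    "SELECT n.id, n.title, snippet(notes_fts, 1, '[', ']', '…', 12) AS snip, " +
    "n.source, bm25(notes_fts) AS score " +
    "FROM notes_fts JOIN notes n ON n.id = notes_fts.rowid " +
    "WHERE notes_fts MATCH ?";
  if (project) sql += " AND n.project = ?";
  if (source) sql += " AND n.source = ?";
  return sql + " ORDER BY score LIMIT " + Number(limit);
}
```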
Phase 4: Automatic Extraction (Day 3)
- Decision extractor: Scan conversations for decision patterns
  - "We decided to…", "Going with…", "The plan is…"
  - Auto-create notes with source='decision' and high importance
- Research indexer: When Reggie does web research, capture summaries
  - Store search queries, key findings, URLs
  - Tag by project and topic
- Insight tracker: Mark notable observations from conversations
  - Competitor moves, market insights, user feedback patterns
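The decision extractor reduces to pattern matching over sentences. The sentence-splitting heuristic here is a rough sketch; real conversation text would need more care:

```javascript
// Phrases from the plan that signal a decision was made.
const DECISION_PATTERNS = [/\bwe decided to\b/i, /\bgoing with\b/i, /\bthe plan is\b/i];

// Return sentences that look like decisions. The split on sentence-ending
// punctuation is a naive heuristic, fine for a first pass.
function extractDecisions(text) {
  return text
    .split(/(?<=[.!?])\s+/)
    .filter(sentence => DECISION_PATTERNS.some(p => p.test(sentence)));
}
```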
Phase 5: Future – Semantic Search (Later)
- If/when we get embedding capabilities:
- Generate embeddings for each note using an API
- Store in a vector column or a separate vector DB (SQLite vec0 extension or similar)
- Enable "find similar notes" and semantic queries
- For now, FTS5 is sufficient and zero-dependency
File Structure
/root/clawd/knowledge/
├── reggie.db          # SQLite database
├── search.js          # Search module
├── ingest-daily.js    # Daily file ingester
├── ingest-memory.js   # MEMORY.md ingester
├── schema.sql         # Database schema
└── README.md          # Documentation
Key Design Decisions
- SQLite over Postgres: Zero infrastructure. File-based. Portable. Already on the system.
- FTS5 over regex: Proper ranking, phrase search, prefix matching, boolean operators.
- Incremental ingestion: Don't re-process unchanged content.
- Backward compatible: Existing markdown files remain the source of truth. DB is a read index.
Project 3: Proactive Monitoring
Priority: MEDIUM | Estimated Effort: 2–3 days | Dependencies: Cron system (already exists)
What It Is
Automated background checks that catch problems before Adam notices: site uptime, deploy health, competitor changes, social mentions, and project activity.
Why It Matters
- Catching downtime fast: If app.calwizz.com goes down at 2 AM, Reggie should know before the first user complaint.
- Competitive awareness: Tracking competitors (Flowtrace, Clockwise, Reclaim) for pricing or feature changes.
- Proactive > reactive: The PM workflow says Reggie should execute independently. Monitoring is the most natural autonomous task.
- Reduces Adam's cognitive load: Instead of manually checking sites and socials, Reggie reports only when something needs attention.
Implementation Steps
Step 1: Deploy Health Checks (Day 1)
Sites to monitor:
| URL | Expected | Check |
|---|---|---|
| https://app.calwizz.com | 200 OK | HTTP GET, verify response |
| https://calwizz.com | 200 OK | Landing page |
| https://changelog.someshovels.com | 200 OK | Changelog site |
| https://someshovels.com | 200 OK | Company site |
Implementation:
// /root/clawd/monitoring/health-check.js
async function checkSite(url, name) {
  const start = Date.now();
  try {
    const res = await fetch(url, {
      // fetch has no `timeout` option; abort via AbortSignal instead
      signal: AbortSignal.timeout(10000),
      headers: { 'User-Agent': 'Reggie-Monitor/1.0' }
    });
    const latency = Date.now() - start;
    return {
      name, url, status: res.status,
      ok: res.ok, latency,
      timestamp: new Date().toISOString()
    };
  } catch (err) {
    return {
      name, url, status: 0,
      ok: false, error: err.message,
      latency: Date.now() - start,
      timestamp: new Date().toISOString()
    };
  }
}

Cron schedule: Every 15 minutes via the Clawdbot cron system
Alert threshold: 2 consecutive failures before alerting Adam
Alert channel: Telegram message to Adam
Step 2: Uptime State Tracking (Day 1)
// /root/clawd/monitoring/state.json
{
  "sites": {
    "app.calwizz.com": {
      "lastOk": "2026-02-03T10:00:00Z",
      "lastCheck": "2026-02-03T10:15:00Z",
      "consecutiveFailures": 0,
      "alertSent": false,
      "uptimePercent7d": 99.8
    }
  }
}

- Track 7-day rolling uptime percentage
- Include in daily/weekly briefings
- Recovery alerts when a previously-down site comes back
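The 2-consecutive-failures threshold and recovery alerts from the steps above can be one state-transition function over the per-site record in state.json:

```javascript
// Apply one check result to a site's state record and decide whether to
// alert: "down" after 2 consecutive failures (once), "recovery" when a
// previously-alerted site comes back, null otherwise.
function updateSiteState(state, result) {
  const next = { ...state };
  let alert = null;
  if (result.ok) {
    next.lastOk = result.timestamp;
    if (next.alertSent) alert = "recovery";
    next.consecutiveFailures = 0;
    next.alertSent = false;
  } else {
    next.consecutiveFailures = (next.consecutiveFailures ?? 0) + 1;
    if (next.consecutiveFailures >= 2 && !next.alertSent) {
      alert = "down";
      next.alertSent = true;
    }
  }
  next.lastCheck = result.timestamp;
  return { state: next, alert };
}
```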
Step 3: Competitor Tracking (Day 2)
Competitors to watch:
- Flowtrace (flowtrace.co) – Closest competitor, calendar analytics
- Clockwise (getclockwise.com) – Calendar optimization
- Reclaim.ai (reclaim.ai) – Smart calendar management
What to track:
- Pricing page changes (hash the page content, alert on change)
- Blog/changelog for new feature announcements
- Social media mentions and announcements
Implementation:
- Weekly cron job that fetches and hashes pricing pages
- Compare hash to stored value; if different, fetch diff and alert
- Store page snapshots in /root/clawd/monitoring/competitors/
- Use the web_fetch tool during cron execution
Step 4: Social Mention Scanning (Day 2-3)
Channels to monitor:
- Twitter/X: Mentions of @CalWizzApp, @ShippingShovels, "calwizz"
- Reddit: Mentions in relevant subreddits (r/productivity, r/googlecalendar)
- Hacker News: Any CalWizz or SomeShovels mentions
Implementation:
- Use web_search during heartbeat/cron to search for recent mentions
- Track seen mentions in a state file to avoid duplicate alerts
- Alert Adam only for notable mentions (not our own posts)
- Frequency: 2-3 times per day (to avoid being annoying)
Step 5: Project Repo Activity (Day 3)
Repos to monitor:
- project-shovels/time-insights-app (CalWizz backend)
- Chaoticonomist/calwizz-landing (CalWizz landing)
- project-shovels/better-changelog (Changelog tool)
What to track:
- New issues opened (especially by external users)
- PR activity
- Failed CI runs
- Dependabot/security alerts
Implementation:
- GitHub API (via personal access token if available, or web scraping)
- Cron job every 2 hours
- Only alert for external issues/PRs (not our own commits)
- Batch non-urgent notifications into daily digest
Alert Philosophy: Don't Be Annoying
URGENCY LEVELS:
- 🔴 CRITICAL (immediate): Site down, security alert → Telegram immediately
- 🟡 NOTABLE (same day): Competitor change, external issue → Include in next briefing
- 🟢 FYI (batch): Repo activity, mention tracking → Weekly digest
File Structure
/root/clawd/monitoring/
├── health-check.js       # Site health checker
├── competitor-track.js   # Competitor page monitor
├── social-scan.js        # Social mention scanner
├── repo-activity.js      # GitHub repo monitor
├── state.json            # Persistent state
├── competitors/          # Page snapshots
│   ├── flowtrace-pricing.md
│   └── clockwise-pricing.md
└── README.md
Cron Job Configuration
# Health checks: every 15 minutes
- name: health-check
  schedule: "*/15 * * * *"
  payload: "Run site health checks for all monitored URLs"

# Competitor tracking: weekly on Mondays
- name: competitor-scan
  schedule: "0 9 * * 1"
  payload: "Check competitor pricing pages and changelogs for changes"

# Social mentions: 3x daily
- name: social-scan
  schedule: "0 9,14,20 * * *"
  payload: "Scan for CalWizz and SomeShovels mentions on social media"

# Repo activity: every 2 hours during business hours
- name: repo-check
  schedule: "0 8-22/2 * * *"
  payload: "Check GitHub repos for new issues, PRs, and alerts"

Project 4: Voice Capabilities
Priority: LOWER | Estimated Effort: 0.5–1 day | Dependencies: TTS tool, ElevenLabs API key
What It Is
ElevenLabs TTS integration for verbal briefings, reading blog posts aloud, and storytelling. Making Reggie capable of "speaking" via audio messages on Telegram.
Why It Matters
- Engagement: Audio briefings are more personal than walls of text. "Good morning Adam, here's what happened overnight" as a voice message hits different.
- Accessibility: Sometimes Adam is driving, walking, or just doesn't want to read a long report.
- Personality: A voice gives Reggie more character. SOUL.md mentions being warm and genuine – voice amplifies that.
- Storytelling: AGENTS.md specifically mentions using voice for "stories, movie summaries, and 'storytime' moments."
Technical Context (Discovered)
- TTS tool already exists: Clawdbot has a built-in tts tool that converts text to speech and returns a MEDIA: path.
- No ElevenLabs config found: The clawdbot.json config doesn't show any TTS/voice/ElevenLabs configuration. This needs to be set up.
- Telegram supports voice messages: The message tool has an asVoice parameter for sending audio as voice messages.
- AGENTS.md reference: Mentions sag (ElevenLabs TTS) as a skill for voice storytelling.
Implementation Steps
Step 1: Check TTS Skill Status (30 min)
- Run clawdbot skills list or check skills status via gateway RPC
- Determine if ElevenLabs is a built-in skill or needs installation
- Check if the sag skill exists and what it requires
- Identify what API key / configuration is needed
Step 2: Configure ElevenLabs (30 min)
- Adam needs to provide an ElevenLabs API key
- Add to Clawdbot config (likely under skills or a TTS section):
  { "skills": { "tts": { "provider": "elevenlabs", "apiKey": "sk-...", "voice": "..." } } }
- Choose a voice that fits Reggie's personality (warm, slightly energetic, friendly)
- Test with a simple phrase
Step 3: Voice Selection (30 min)
Recommended voices to test (ElevenLabs pre-made):
- "Josh" – Warm, natural, conversational male voice
- "Adam" – Deep, clear narrator voice (but might be confusing since the user is also Adam)
- "Charlie" – Young, energetic, friendly
- Custom clone: If Adam wants Reggie to have a truly unique voice
Document the chosen voice in TOOLS.md:
### TTS
- Provider: ElevenLabs
- Voice: [chosen voice name] (ID: [voice_id])
- Use for: Morning briefings, blog post narration, storytelling
- Default speaker: Telegram voice message

Step 4: Usage Patterns (Day 1)
Define when Reggie should use voice:
- Morning briefing: "Good morning! Here's your day…" → voice message
- Blog post narration: When a new blog post is published, record it as audio
- Storytelling: When asked to tell a story or explain something fun
- Alerts: Critical monitoring alerts could optionally be voice
- Summaries: "Here's what happened while you were away" as audio
Implementation:
// In agent context, use the tts tool:
const audio = await tts({ text: briefingText });
// Then send via message tool:
await message({ action: "send", target: "adam", filePath: audio.path, asVoice: true });

Step 5: Cost Awareness
- ElevenLabs pricing: ~$0.30/1000 characters (Starter plan)
- Average briefing: ~500 characters = ~$0.15
- Daily cost if used 3x: ~$0.45/day
- Monthly: ~$13.50
- Set a daily character limit to avoid surprises
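The budget math above and a daily character limit fit in two tiny helpers. The rate is the plan's quoted estimate (~$0.30 per 1,000 characters); actual ElevenLabs pricing varies by plan, so treat this as a guard, not an invoice:

```javascript
// Back-of-envelope TTS spend, using the plan's estimated rate.
const USD_PER_CHAR = 0.30 / 1000;

function ttsCost(chars) {
  return chars * USD_PER_CHAR;
}

// Simple daily cap check; 5000 chars/day (~$1.50) is an illustrative default.
function withinDailyBudget(charsUsedToday, nextChars, dailyLimitChars = 5000) {
  return charsUsedToday + nextChars <= dailyLimitChars;
}
```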
What Adam Needs To Do
- Sign up for ElevenLabs (if not already): https://elevenlabs.io
- Get an API key from the dashboard
- Share the API key with Reggie (securely via Telegram)
- Reggie configures it in Clawdbot
File Structure
# No new files needed β it's configuration + using existing tools
# Just update TOOLS.md with voice preferences
Priority Matrix & Execution Order
                 IMPACT
            HIGH ◄─────► LOW
           ┌───────────┬──────────┐
    HIGH   │ Dashboard │          │
           │   (P1)    │          │
EFFORT     ├───────────┼──────────┤
    MED    │ KB/Brain  │ Monitor  │
           │   (P2)    │   (P3)   │
           ├───────────┼──────────┤
    LOW    │           │  Voice   │
           │           │   (P4)   │
           └───────────┴──────────┘
Recommended Execution Timeline
| Week | Project | Milestones |
|---|---|---|
| Week 1 | P1: Dashboard | WS connection, sessions panel, activity stream |
| Week 1 | P4: Voice | Configure ElevenLabs (if API key available) |
| Week 2 | P1: Dashboard | Spawn tree, log tail, cost tracking, deploy |
| Week 2 | P3: Monitoring | Health checks, uptime tracking, cron jobs |
| Week 3 | P2: Knowledge Base | Schema, ingestion, FTS5 search |
| Week 3 | P3: Monitoring | Competitor tracking, social scanning |
| Week 4 | P2: Knowledge Base | Auto-extraction, agent tool integration |
Quick Wins (Can Start Today)
- Voice: Just needs an API key from Adam. 30 min to configure once we have it.
- Health checks: Simple HTTP pings as a cron job. 1 hour to implement.
- Dashboard scaffold: HTML file with WS connection. 2-3 hours for basic version.
Open Questions for Adam
- Dashboard access: Should it be password-protected beyond Tailscale? (Currently gateway is loopback-only, exposed via Tailscale Serve)
- ElevenLabs: Do you have an account? Want to set one up? Budget preference for TTS usage?
- Monitoring scope: Any other sites/services to monitor beyond CalWizz and SomeShovels?
- Competitors: Are Flowtrace, Clockwise, and Reclaim the right competitors to track? Any others?
- Knowledge Base: Should it index conversation history from Telegram too, or just the markdown files?