# OpenClaw Setup Guide
How Ron set up Jarvis — a personal AI assistant running 24/7 on OpenClaw, communicating via Telegram.
## What This Is
A persistent AI assistant that:
- Lives on a cloud server (Railway), always on
- Communicates via Telegram (could also be WhatsApp, Discord, Signal)
- Has a workspace with files, scripts, memory
- Runs background jobs (cron) for monitoring, backups, reminders
- Manages GitHub repos autonomously via tiny PRs
- Tracks attention budget to avoid overwhelming the human
## Architecture

```
You (Telegram) ↔ OpenClaw Gateway ↔ Claude (Anthropic API)
                      │
                      ▼
        Workspace (/data/workspace)
        ├── SOUL.md    (personality)
        ├── MEMORY.md  (long-term memory)
        ├── HOT.md     (critical rules, read every reply)
        ├── scripts/   (guards, polling, credits)
        ├── hooks/     (message validation)
        └── memory/    (daily logs)
                      │
        Cron Jobs (9 scheduled tasks)
        Backup (every 30 min → GitHub)
```
## Before You Start
### Cost
- LLM API: Claude Opus ~$15/MTok input, $75/MTok output. Budget $5-20/day for active use.
- Hosting: Railway free tier initially, ~$5-10/month for always-on.
Optimize for quality, not cost. Your time is worth more than API tokens.
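As a rough sanity check, daily spend follows directly from the per-megatoken pricing above. The token counts here are hypothetical placeholders; substitute your own usage:

```python
# Daily-cost estimate from Opus pricing (~$15/MTok in, $75/MTok out).
# The example token counts below are assumptions, not measured usage.
INPUT_PRICE_PER_MTOK = 15.00   # USD per million input tokens
OUTPUT_PRICE_PER_MTOK = 75.00  # USD per million output tokens

def daily_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one day of usage."""
    return ((input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK
            + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK)

# e.g. 400k input + 100k output tokens in a day:
print(f"${daily_cost(400_000, 100_000):.2f}/day")  # $13.50/day
```

At that (hypothetical) volume you land inside the $5-20/day budget; output tokens dominate, so verbose replies are the main cost lever.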
### Security
- Treat the server like a personal device — it has your API keys and tokens
- Use fine-grained tokens with minimal permissions
- Don't store secrets in workspace files. Use environment variables.
### Model Choice
- Claude Opus 4 for main interactions (highest quality)
- Consider Sonnet for background cron jobs (cheaper, still good)
- The model matters more than you think. Don't cheap out on your main agent.
### Persistence
- Agents wake up fresh each session. Files are your memory.
- Design a memory architecture early: daily logs, long-term memory, hot rules
- Back up your workspace to GitHub. Losing it means losing your assistant's identity.
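A minimal sketch of the daily-log half of that memory architecture, assuming the `memory/YYYY-MM-DD.md` layout described later (the function name and default path are illustrative, not OpenClaw API):

```python
# Append an entry to today's daily log; one file per day keeps
# recent context small while MEMORY.md holds curated long-term facts.
from datetime import date
from pathlib import Path

def append_daily_log(entry: str,
                     base: Path = Path("/data/workspace/memory")) -> Path:
    """Append a bullet to today's YYYY-MM-DD.md log and return its path."""
    base.mkdir(parents=True, exist_ok=True)
    log = base / f"{date.today().isoformat()}.md"
    with log.open("a", encoding="utf-8") as f:
        f.write(f"- {entry}\n")
    return log
```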
## Step-by-Step Setup

### 1. Infrastructure (Railway)
- Fork the OpenClaw Railway template
- Deploy to Railway
- Add a persistent volume at `/data`
- Set environment variables: `ANTHROPIC_API_KEY`, `OPENCLAW_GATEWAY_TOKEN`
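A fail-fast startup check for those variables saves a confusing half-deployed state. This is a sketch, not part of OpenClaw; the function names are my own:

```python
# Verify required environment variables before starting the gateway.
import os
import sys

REQUIRED = ("ANTHROPIC_API_KEY", "OPENCLAW_GATEWAY_TOKEN")

def missing_env(names=REQUIRED) -> list:
    """Return the required variables that are unset or empty."""
    return [n for n in names if not os.environ.get(n)]

def check_or_exit() -> None:
    """Abort with a clear message instead of failing mid-request later."""
    missing = missing_env()
    if missing:
        sys.exit(f"Missing environment variables: {', '.join(missing)}")
```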
### 2. Telegram Channel
- Create a bot via @BotFather
- Add the bot token to the OpenClaw config
- Set `allowFrom` to your Telegram user ID
- Start chatting
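The `allowFrom` check amounts to a sender allowlist. A minimal sketch, assuming Telegram's standard update shape (`message.from.id`); the user ID here is a placeholder:

```python
# Drop any update whose sender isn't the allow-listed Telegram user.
ALLOW_FROM = {123456789}  # placeholder: your numeric Telegram user ID

def is_allowed(update: dict) -> bool:
    """True only for messages from an allow-listed sender ID."""
    sender = update.get("message", {}).get("from", {}).get("id")
    return sender in ALLOW_FROM
```

Anything that fails the check should be ignored silently; the bot token is public knowledge to anyone who finds the bot's username.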
### 3. Identity Bootstrap
On first message, the assistant reads BOOTSTRAP.md and walks through choosing a name, defining personality, and learning about you. Delete the file after — the assistant is now "born."
### 4. Workspace Structure

```
/data/workspace/
├── AGENTS.md     — operating instructions
├── SOUL.md       — personality and principles
├── USER.md       — info about you
├── IDENTITY.md   — name, emoji, vibe
├── HOT.md        — critical rules (checked every reply)
├── MEMORY.md     — long-term curated memory
├── HEARTBEAT.md  — periodic check tasks
├── memory/       — daily logs (YYYY-MM-DD.md)
├── scripts/      — automation (guards, polling)
└── hooks/        — message validation
```
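The layout can be scaffolded in one idempotent pass. A sketch under the structure above; `scaffold` is my name, not an OpenClaw command:

```python
# Create the workspace skeleton; safe to re-run (touch/mkdir are idempotent).
from pathlib import Path

FILES = ["AGENTS.md", "SOUL.md", "USER.md", "IDENTITY.md",
         "HOT.md", "MEMORY.md", "HEARTBEAT.md"]
DIRS = ["memory", "scripts", "hooks"]

def scaffold(root: str = "/data/workspace") -> None:
    """Create the standard workspace files and directories."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    for d in DIRS:
        (base / d).mkdir(exist_ok=True)
    for f in FILES:
        (base / f).touch(exist_ok=True)
```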
### 5. Cron Jobs

| Job | Schedule | Purpose |
| --- | --- | --- |
| Morning message | 5:00 AM | Daily briefing, credit reset |
| Workspace backup | Every 30 min | Push to GitHub |
| GitHub polling | Every 1 min | Check for PR comments |
| Blocked reminder | 9:00 AM | Nudge on pending items |
| Nightly summary | 9:30 PM | Day recap with credits |
| Memory review | 1:30 AM | Consolidate daily logs |
| Weekly summary | Sun 11 AM | Week in review |
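The schedule above maps onto crontab entries like these (times assume the server's local timezone; the script paths are placeholders for your own scripts, not OpenClaw commands):

```
# min hour dom mon dow  command
0    5    *   *   *     /data/workspace/scripts/morning_message.sh
*/30 *    *   *   *     /data/workspace/scripts/backup.sh
*    *    *   *   *     /data/workspace/scripts/github_poll.sh
0    9    *   *   *     /data/workspace/scripts/blocked_reminder.sh
30   21   *   *   *     /data/workspace/scripts/nightly_summary.sh
30   1    *   *   *     /data/workspace/scripts/memory_review.sh
0    11   *   *   0     /data/workspace/scripts/weekly_summary.sh
```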
### 6. GitHub Integration
- Create a bot GitHub account
- Fork target repos to the bot account
- Generate fine-grained PATs for push and PR creation
- Build a deterministic write guard (Python, not LLM)
Never give the bot direct write access to your repos. PRs only.
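The write guard itself can be very small because it is deterministic: the LLM proposes a change, code decides whether it ships. A sketch only; the repo name and the exact limits are illustrative assumptions:

```python
# Deterministic write guard: every GitHub mutation the agent requests
# passes through this check, never through LLM judgment alone.
ALLOWED_REPOS = {"bot-account/tiny-pr-bot"}  # forks only, never upstream
MAX_FILES = 1          # tiny PRs: one file
MAX_CHANGED_LINES = 2  # tiny PRs: 1-2 lines

def check_pr(repo: str, files: dict) -> tuple:
    """files maps path -> changed line count. Returns (ok, reason)."""
    if repo not in ALLOWED_REPOS:
        return False, f"repo {repo!r} not allow-listed"
    if len(files) > MAX_FILES:
        return False, "PR touches more than one file"
    if sum(files.values()) > MAX_CHANGED_LINES:
        return False, "PR changes more than 2 lines"
    return True, "ok"
```

The point is that the limits live in code the agent cannot edit without a PR of its own, so "tiny PRs only" is enforced, not merely requested.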
### 7. Safety Layers
- Write guard: all GitHub mutations go through deterministic Python. The LLM forgets rules; code doesn't.
- Message guard: Post-send hook validates outgoing messages
- Attention budget: Daily credit system prevents overwhelm
- Capture mode: When credits hit zero, assistant reacts with 👍 only
- No process killing: Never let the assistant restart its own gateway
- Backup everything: 30-min auto-backup to private GitHub repo
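The attention budget and capture mode fit in a few lines. A minimal sketch; the daily cap, per-message cost, and class name are assumptions:

```python
# Attention budget: each outgoing message spends a credit. At zero the
# assistant switches to capture mode (👍 reactions only, no messages).
DAILY_CREDITS = 20  # assumed cap; reset by the 5 AM morning cron

class AttentionBudget:
    def __init__(self, credits: int = DAILY_CREDITS):
        self.credits = credits

    def spend(self, cost: int = 1) -> str:
        """Return 'send' if a message may go out, else 'capture'."""
        if self.credits >= cost:
            self.credits -= cost
            return "send"
        return "capture"

    def reset(self) -> None:
        """Restore the full daily budget (morning cron job)."""
        self.credits = DAILY_CREDITS
```

Capture mode keeps the assistant listening and logging without claiming any more of your attention that day.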
## Lessons Learned
- ChatGPT said 10 minutes. It took 4 hours. Expect setup to take a full day.
- Text rules leak. Code enforcement doesn't. Put safety at side-effect boundaries, not in markdown.
- Tiny PRs always. 1-2 lines, 1 file, one idea. The human reviews in 10 seconds.
- The assistant will break things. Design for recovery, not prevention.
- Attention is the scarcest resource. Not tokens, not compute. Your attention.
- Memory architecture > prompt engineering. HOT.md (20 lines, every reply) beats MEMORY.md (big file, loaded once).
- Back up from day one. Your workspace IS your assistant.
## Current Stack
- Host: Railway (Docker)
- LLM: Claude Opus 4 via Anthropic API
- Channel: Telegram
- Repos: 3 monitored (tiny-pr-bot, know-thyself, jarvis-workspace)
- Cron jobs: 9 scheduled tasks
- Guard: Deterministic Python write guard
- Backup: Private GitHub repo, every 30 minutes
v0.2-staging