Key Takeaways
- OpenClaw's install is the friction. Hundreds of thousands of stars on GitHub, but the README walks through nvm, Node v22.16.0 specifically, a 380MB global npm install, an undocumented JSON config that fails silently, and a ~90-second plugin cold-load. Most people give up on step five.
- Hosted version skips all of it. Subscribe to Plus on voltagegpu.com/confidential-agent, send /start <token> to @VoltageGPUPersonalBot, start texting. About four minutes from sign-up to first reply.
- Same OpenClaw, different surface. The 92 default plugins ship: web search, summarisation, document analysis, persistent memory. Inference is routed to a Qwen3-32B-TEE model inside an Intel TDX enclave — the operator (us) cannot read prompts.
- Honest about what you lose: no custom plugins, no local file access, single agent identity. If those matter, install locally and accept the install pain.
I tried to install OpenClaw three times before I gave up and shipped a hosted version on Telegram for $20/mo.
If you have stared at the OpenClaw README and felt the dread settle in, this post is for you. I am going to walk through (1) the exact friction that kills most installs, with the receipts, (2) what I built to skip it, (3) the architecture including the parts I am not proud of, (4) what you lose by not running it locally, and (5) why I think the hosted angle is the right answer for most people.
If you would rather just try it: voltagegpu.com/confidential-agent. Same price as ChatGPT Plus. Sealed in Intel TDX in the EU. The operator (me) literally cannot read your messages — there is a side-by-side at /vs/chatgpt-plus if that claim sounds suspicious.
The install nobody finishes
OpenClaw is a beast of a project. Hundreds of thousands of stars on GitHub. A plugin ecosystem that makes most agent frameworks look anaemic. A maintainer with an actual point of view about what an agent should be.
It is also unusable for ~99% of the humans who star it.
Here is what the README asks of you, in order:
# 1. Install nvm (you've heard of it, never installed it)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.0/install.sh | bash
# 2. Install Node v22 specifically — not v20, not v21, not v22.0.0,
# not the v22 already in your homebrew. v22.16.0 or it segfaults.
nvm install 22.16.0
nvm use 22.16.0
# 3. Global npm install of a ~380MB package
npm install -g openclaw
# 4. Get an API key from a model provider you've never heard of
# OR from OpenAI but configure the right base URL
# OR from Chutes/Targon/etc. — README lists 14 options unranked
# 5. Edit ~/.openclaw/openclaw.json — JSON, no schema validation, fails silently
{
"providers": [...],
"agents": [...],
"gateway": { "mode": "local" } // miss this, exit code 78, no error message
}
# 6. Install the gateway as a systemd user service
openclaw daemon install
openclaw daemon start
# 7. Run your first agent
openclaw agent main --local --prompt "hello world"
# (waits ~100 seconds — yes really, plugin load — returns three lines of JSON)

Each step is fine on its own. Stack them and you have a 30-minute setup that fails on step five for half the people who start it, because the JSON config rejects fields the README examples show. I lost an hour on a single ignored field before figuring out the schema was stricter than the docs.
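A few lines of preflight would have saved me that hour. A hedged sketch in TypeScript: it checks only the fields the README examples show (the real schema is stricter, and this is my guess at the minimum, not documented behaviour):

// preflight.ts — sanity-check ~/.openclaw/openclaw.json before starting the daemon.
// Covers only the fields shown above; treat anything beyond that as unknown.
import { readFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const path = join(homedir(), ".openclaw", "openclaw.json");
const config = JSON.parse(readFileSync(path, "utf8")); // throws loudly on malformed JSON

for (const key of ["providers", "agents", "gateway"]) {
  if (!(key in config)) throw new Error(`missing top-level key: ${key}`);
}
// The silent exit-78 failure: gateway.mode must be "local".
if (config.gateway.mode !== "local") {
  throw new Error('gateway.mode must be "local"');
}
console.log(`${path} looks sane`);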
The maintainer has been blunt about this. Paraphrasing a recent issue thread: "if you do not know how to use a terminal, this project is too dangerous for you to run."
Fair. But that filter throws out a lot of people who actually need an agent — solo professionals, journalists, EU residents who want a private assistant on their phone, the kinds of users who star the repo, run git clone, and never come back.
So I paid myself to host it
The shortcut, once you have eaten enough of these errors, is this:
What if the install just… was not your problem?
That is the entire pitch. Run OpenClaw on a server I control. Wire the input/output to a surface every adult already has on their phone. Charge for it.
The surface I picked is Telegram. Not Slack (work). Not WhatsApp (no usable bot API). Not iMessage (Apple will not let you). Telegram's bot API is mature, the UX is identical to texting, and people already have it installed.
The flow ends up being four steps:
- Subscribe at voltagegpu.com/confidential-agent — Stripe, $20/mo.
- Dashboard shows you a one-time link token.
- Open Telegram, message @VoltageGPUPersonalBot, send /start <token>.
- Start texting it like you would text a person.
Total time, sign-up to first reply: about four minutes, most of which is Stripe checkout.
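Under the hood, the /start step is just a one-time token lookup keyed off Telegram's deep-link convention. A minimal sketch of that handler, assuming a Next.js route; db, linkTokens, and users are hypothetical stand-ins for the real storage layer (the full pipeline is in the next section):

// app/api/telegram/webhook/route.ts — sketch of the /start linking step only.
// `db`, `linkTokens`, and `users` are hypothetical stand-ins.
import { NextResponse } from "next/server";
import { db } from "@/lib/db"; // hypothetical storage layer

export async function POST(req: Request) {
  const update = await req.json();
  const text: string | undefined = update.message?.text;
  const chatId: number | undefined = update.message?.chat?.id;

  // Telegram delivers "/start <token>" when the user pastes the dashboard token.
  const match = text?.match(/^\/start (\S+)$/);
  if (match && chatId) {
    // Consume is one-time: the token row is deleted on first successful read.
    const userId = await db.linkTokens.consume(match[1]);
    if (userId) await db.users.setChatId(userId, chatId);
  }
  // Always return 200; Telegram retries the webhook on non-2xx responses.
  return NextResponse.json({ ok: true });
}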
What is actually running
Here is the architecture, no glossing:
Telegram client
│
▼
┌──────────────────────────────────────────┐
│ Next.js app on Vercel │
│ /api/telegram/webhook │
│ ├─ verifies bot token │
│ ├─ resolves chatId → userId │
│ └─ inserts AgentJob row in Postgres │
└──────────────────────────────────────────┘
│
▼
┌──────────────────────────────────────────┐
│ Postgres (Neon) │
│ AgentJob: { userId, chatId, prompt, │
│ status, result } │
└──────────────────────────────────────────┘
│ polled
▼
┌──────────────────────────────────────────┐
│ Worker on OVH VPS (systemd unit) │
│ voltage-personal-agent.service │
│ ├─ pulls pending AgentJob │
│ ├─ spawns: openclaw agent main --local │
│ │ --prompt <user message> │
│ ├─ openclaw loads 92 plugins (~90s │
│ │ cold, the part I'm not proud of) │
│ ├─ extracts payloads[0].text │
│ ├─ writes result back to AgentJob │
│ └─ sends to Telegram via bot API │
└──────────────────────────────────────────┘
│ inference
▼
┌──────────────────────────────────────────┐
│ Chutes TEE inference │
│ https://llm.chutes.ai/v1 │
│ model: Qwen/Qwen3-32B-TEE │
│ Intel TDX-sealed, EU-hosted │
└──────────────────────────────────────────┘

A few things worth calling out, because they hurt to debug:
OpenClaw 2026.5.x changed the response shape without bumping major. It used to return {output: "..."}. It now returns {payloads: [{text, mediaUrl}], meta: {...}}. If you grep for .output in your worker code, you will get empty replies forever and the JSON will look fine in logs because meta is populated.
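The defensive fix is cheap: accept both shapes and fail loudly when neither has text. A sketch, with the interface inferred from observed responses rather than an official type:

// Tolerates both the legacy {output} and the 2026.5.x {payloads} shapes.
// The interface is inferred from responses I have seen, not published types.
interface OpenClawResponse {
  output?: string;                                   // pre-2026.5
  payloads?: { text?: string; mediaUrl?: string }[]; // 2026.5.x
  meta?: Record<string, unknown>;
}

function extractText(stdout: string): string {
  const res = JSON.parse(stdout) as OpenClawResponse;
  const text = res.payloads?.[0]?.text ?? res.output;
  // Fail loudly: meta alone being populated is how empty replies hide in logs.
  if (!text) throw new Error(`no text in agent response: ${stdout.slice(0, 200)}`);
  return text;
}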
--local mode loads 92 plugins on every cold call. That is the ~90–100 second floor I keep hitting. The gateway daemon (openclaw daemon start) keeps plugins warm on port 18789, but the worker currently spawns fresh per job because I have not figured out a clean way to multiplex jobs through a single warm gateway without leaking state between users. So users wait ~100 seconds. JOB_TIMEOUT_MS=240_000 absorbs this and a "thinking…" message at t+2s keeps it from feeling dead.
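Concretely, the per-job path looks roughly like this. A sketch assuming Node's child_process; onThinking is whatever sends the placeholder message, and is a name I made up:

import { execFile } from "node:child_process";

const JOB_TIMEOUT_MS = 240_000; // generous: absorbs the ~90–100s cold plugin load

function runAgent(prompt: string, onThinking: () => void): Promise<string> {
  // Placeholder at t+2s so the chat does not feel dead during plugin load.
  const placeholder = setTimeout(onThinking, 2_000);
  return new Promise<string>((resolve, reject) => {
    execFile(
      "openclaw",
      ["agent", "main", "--local", "--prompt", prompt],
      { timeout: JOB_TIMEOUT_MS },
      (err, stdout) => (err ? reject(err) : resolve(stdout)),
    );
  }).finally(() => clearTimeout(placeholder));
}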
Telegram's sendMessage returns {ok: false} on a bad chatId instead of throwing. So a typo in chat resolution silently swallows the agent's reply. I learned this by inserting an AgentJob with chatId 999999999, watching the worker complete successfully, and finding the answer in the database but never on my phone. Lesson: assert ok === true and re-queue if not.
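The guard is two lines once you know to write them. A sketch of the send step; the caller catches the throw and flips the AgentJob back to pending instead of marking it done:

async function sendTelegram(botToken: string, chatId: number, text: string): Promise<void> {
  const res = await fetch(`https://api.telegram.org/bot${botToken}/sendMessage`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text }),
  });
  const body = await res.json();
  // Telegram reports bad chat ids in the body, not via HTTP status or a throw.
  if (body.ok !== true) {
    throw new Error(`sendMessage failed: ${body.description ?? "unknown"}`);
  }
}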
What you lose by not running it locally
Be honest with yourself. The hosted version is not strictly equivalent to running OpenClaw on your laptop. Specifically:
- No custom plugins. You get the 92 that ship by default. Want to add the GitHub plugin with your PAT? Local only, for now.
- No local file access. OpenClaw on your laptop can read ~/Documents/. The hosted bot cannot reach into your filesystem (and should not).
- Single agent identity. The runtime configures --agent main only. You cannot define --agent code-reviewer and --agent legal-research with separate system prompts on Plus today.
- Inference model is fixed. Qwen/Qwen3-32B-TEE. You do not get to swap in GPT-5 or Claude. Deliberate choice for the hardware-sealed story, but still a constraint.
If any of those are dealbreakers, install OpenClaw locally. Genuinely. The README is hostile but the project is good.
What you gain
The reasons people actually use the hosted version, ranked by what shows up in support emails:
- Memory persistence across devices. Local OpenClaw stores conversation memory on disk. The hosted version stores it server-side, so the bot remembers your context whether you message from your phone, Telegram Web on a laptop, or your tablet.
- Mobile. Local OpenClaw is laptop-only unless you SSH from your phone, which nobody does.
- No installation entropy. No nvm conflicts when you upgrade macOS. No "works on my machine, fails on yours" when teaching a colleague.
- EU + TDX privacy posture. This one needs its own paragraph.
The privacy angle, briefly
Local OpenClaw is private to you in the sense that the agent runs on your laptop. But the moment you point it at OpenAI or Anthropic, your prompts go to a US-hosted commercial provider that holds plaintext logs and can be subpoenaed.
The hosted version routes inference to an Intel TDX-sealed VM in France. TDX is a hardware confidentiality feature: the VM's memory is encrypted with a per-VM key the host (us) cannot extract. Our SREs cannot read your prompts. A subpoena to us yields ciphertext we cannot decrypt. The inference model never sees plaintext outside the enclave.
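Mechanically, the enclave hop is one HTTPS call from the worker. A sketch that assumes the endpoint speaks the OpenAI-compatible chat-completions dialect its /v1 path suggests; the post does not pin down the exact wire format:

async function inferSealed(apiKey: string, prompt: string): Promise<string> {
  // Assumption: standard OpenAI-style chat completions at the /v1 base URL.
  const res = await fetch("https://llm.chutes.ai/v1/chat/completions", {
    method: "POST",
    headers: {
      authorization: `Bearer ${apiKey}`,
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "Qwen/Qwen3-32B-TEE",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`inference failed: ${res.status}`);
  const body = await res.json();
  return body.choices[0].message.content;
}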
This is the "GDPR Article 28(3)(b) confidentiality, hardware-enforced" story, and it is why a couple of solo lawyers and notaries have started using it for client-sensitive drafting they used to handle in ChatGPT and quietly regret.
If you want the long version, there is a comparison page at /vs/chatgpt-plus — same $20/mo as ChatGPT Plus, different threat model.
The price anchor
I picked $20/mo for a reason. ChatGPT Plus is $20. Claude Pro is $20. There is an unwritten consumer expectation that "premium AI = $20/mo," and I am not interested in fighting it.
What is included on Plus:
- 2,000 inference requests per month (covers normal daily use comfortably)
- Persistent conversation memory across devices
- All 92 default OpenClaw plugins (web search, summarisation, file analysis on Telegram-attached docs, persistent memory)
- Telegram delivery on @VoltageGPUPersonalBot
- EU-hosted, GDPR DPA auto-signed at checkout, attestation on request
If you blow past 2,000, the dashboard offers metered top-ups. If you do not, you do not pay extra.
Try it, or fork the bridge
If you want to use it: voltagegpu.com/confidential-agent.
If you want to host your own Telegram bridge to OpenClaw on your own VPS, the architecture diagram above is roughly all of it. The painful bits, in order of how much time they cost me:
- Handle the payloads[0].text extraction shape change
- Do not trust the sendMessage ok-status — assert ok === true
- Cold plugin load is ~90s; either keep a warm gateway or set user expectations with a "thinking…" placeholder
- The gateway.mode=local config field is required and the failure mode is exit code 78 with no message
Whichever you pick: stop trying to install OpenClaw cold on a fresh machine and expecting it to work first try. It will not. The maintainer was right about the terminal warning. The fix is either commit to the install pain, or pay someone else to wear it.
I picked option three: become the someone else.
FAQ
Is this just a wrapper around OpenClaw?
Yes, deliberately. Same OpenClaw runtime, same 92 default plugins, run on a server I control and wired to Telegram. The product is the absence of the install.

Why does the local install fail so often?
The config is JSON with no schema validation and silent failure modes: miss gateway.mode=local and you get exit code 78 with no message. Add the Node v22.16.0 pin and a 380MB global install and there are plenty of places to stall.

Can I bring my own plugins?
Not on the hosted version today. You get the 92 defaults; custom plugins mean installing locally.

What model does the hosted runtime use?
Qwen/Qwen3-32B-TEE, served from an Intel TDX-sealed VM in the EU. The model is fixed; you cannot swap in GPT-5 or Claude.

How does this compare to ChatGPT Plus at $20/mo?
Same price, different threat model: inference runs inside a hardware enclave the operator cannot read. The side-by-side is at /vs/chatgpt-plus.

Why Telegram and not a standalone app?
Telegram's bot API is mature, the UX is identical to texting, and people already have it installed. Nothing to build, ship, or update on the client side.

Will I lose anything compared to running OpenClaw locally?
Yes: custom plugins, local file access, multiple agent identities, and model choice. If any of those are dealbreakers, install locally.
Skip the install. Start texting.
$20/mo. Same price as ChatGPT Plus. OpenClaw on Telegram, sealed in Intel TDX, EU-hosted. Four minutes from sign-up to your first private message.