Your server.
Your AI.
Your control.

Deploy secured OpenClaw AI agents on your own infrastructure. Container isolation, reverse proxy, trust-minimized auth — zero vendor lock-in.

$ curl -fsSL https://getbot.run/install.sh | bash
$ getbot setup

(or, without curl: wget -qO- https://getbot.run/install.sh | bash)

v0.3.0 · Used in production by its builder · Early access open for technical users

Works with Claude, ChatGPT, Gemini, DeepSeek, and local models.

⚡ How it works

From install to a running AI agent in three steps.

1. Install the CLI

One curl command. Downloads a single Go binary — no dependencies, no Docker on your laptop, no Node.js.

curl -fsSL https://getbot.run/install.sh | bash
2. Point it at your server

The setup wizard connects via SSH, checks server readiness, and picks your LLM provider. You bring the server — any VPS, bare metal, or cloud VM.

getbot setup
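As a sketch, the kind of readiness checks a setup wizard might run over SSH look like this (these are illustrative, not getbot's actual checks):

```shell
#!/bin/sh
# Hypothetical server-readiness probe, similar in spirit to what a
# setup wizard would run over SSH. Not getbot's actual commands.
set -eu

arch=$(uname -m)                                     # CPU architecture
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)  # total RAM in kB

echo "arch=$arch mem_kb=$mem_kb"

# A container host typically wants a 64-bit CPU and some memory headroom.
case "$arch" in
  x86_64|aarch64) echo "arch: ok" ;;
  *)              echo "arch: unsupported" ;;
esac

if [ "$mem_kb" -ge 1048576 ]; then
  echo "memory: ok (>= 1 GiB)"
else
  echo "memory: low"
fi
```

In a real run the wizard would execute checks like these on the remote host (e.g. via `ssh user@host 'sh -s' < probe.sh`) rather than on your laptop.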
3. Your AI agent is live

Deployed inside an Incus container, behind Caddy with HTTPS, authenticated via Google SSO. Your API keys never leave your server.

✓ Bot deployed at https://ai.yourcompany.com

🛡️ Architecture at a glance

Three layers of isolation between the internet and your AI agent.

🌐 Internet
HTTPS requests
🔒 Caddy Reverse Proxy
TLS termination + forward_auth
🔐 Auth Layer
Google SSO → JWT minting on your VPS
📦 Incus Container
Isolated environment per organization
🤖 OpenClaw AI Agent
Your keys, your model, your data
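The proxy and auth layers above can be sketched in a few lines of Caddy configuration (hostnames and ports here are hypothetical, not getbot's generated config):

```
ai.yourcompany.com {
    # Caddy provisions and terminates TLS automatically.
    forward_auth localhost:9091 {
        # Every request must be approved by the auth layer
        # (Google SSO -> JWT) before it reaches the agent.
        uri /verify
        copy_headers X-User-Email
    }
    # Only authenticated traffic is proxied into the container.
    reverse_proxy localhost:8080
}
```

When the auth service answers with a non-2xx status, forward_auth relays that response to the client — so unauthenticated visitors can be redirected into the SSO flow without ever touching the agent.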

🛠️ Built for trust

Every decision serves one principle: minimize trust boundaries.

🔒

Container Isolation

Each organization runs in its own Incus container with Docker-in-Incus. No shared runtimes, no container escapes to the host.
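Running Docker inside an Incus container hinges on one setting, security.nesting. A minimal sketch of a per-organization profile (name and values hypothetical):

```
# Hypothetical Incus profile for one organization's container.
name: org-acme
config:
  security.nesting: "true"       # allow Docker to run inside the container
  security.privileged: "false"   # keep the container unprivileged on the host
```

Applied with something like incus launch images:debian/12 acme --profile org-acme. A process that escapes Docker still lands inside the unprivileged Incus guest, not on the host.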

🛡️

Trust-Minimized Auth

Google OAuth via auth.getbot.run, JWT minting on your VPS, Caddy forward_auth — your AI agent never sees raw credentials.

🤖

Any LLM Provider

Choose your AI provider during setup. Switch between Claude, ChatGPT, Gemini, or self-hosted models without redeploying.

🖥️

SSH-First Deploy

Point getbot at any server with SSH access. No cloud provider accounts required — works on bare metal, VPS, or existing infra.

🔐

Your Keys Stay Home

API keys are injected directly into your container via SSH. They never touch getbot servers, never appear in logs, never leave your VPS.
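The hand-off principle can be simulated locally (filenames hypothetical; not getbot's actual mechanism): the secret travels via stdin rather than as a command-line argument, so it never shows up in ps output or shell history, and the file it lands in is owner-readable only.

```shell
#!/bin/sh
# Local simulation of the injection principle. In a real deployment the
# redirection below would run on the remote end of an ssh session, e.g.:
#   printf '%s' "$KEY" | ssh vps "incus exec bot -- sh -c 'cat > ...'"
set -eu

API_KEY="${API_KEY:-example-key}"   # stand-in secret for the demo

umask 077                           # new files are created owner-only (0600)
printf 'ANTHROPIC_API_KEY=%s\n' "$API_KEY" > bot.env

ls -l bot.env                       # mode is -rw-------
```

Because the write happens inside the SSH session, the key crosses the network only inside SSH's encrypted channel.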

Zero Lock-in

Everything is open source. Uninstall getbot and your server is unchanged. No proprietary agents, no phone-home telemetry.


📖 Documentation

Step-by-step guides covering every stage of a getbot deployment.

Browse all docs →

What early access means today

CLI and docs are public

Install the CLI, read the docs, and explore the architecture. No approval needed.

🔒

Deployment access is approved manually

getbot setup requires approved access. We review requests and reply within a few days.

🔐

Your agent and API keys stay on your server

Keys are injected into your container via SSH. They never touch getbot infrastructure.

🛡️

Auth runs through getbot-managed infrastructure

Google OAuth flows through auth.getbot.run. Early users may use shared auth and DNS patterns while we harden per-org setup.

📝 How we built getbot's security architecture

A deep dive into the isolation, auth, and hardening decisions behind getbot.

Read the post →

Request early access

getbot is usable today for technical users deploying on their own servers. The CLI and docs are public. Deployment access is approved manually.

We'll only email you about your access request.