Your server.
Your AI.
Your control.
Deploy secured OpenClaw AI agents on your own infrastructure in minutes. Container isolation, reverse proxy, trust-minimized auth — zero vendor lock-in.
$ curl -fsSL https://getbot.run/install.sh | bash
$ getbot setup
Or, with wget:
$ wget -qO- https://getbot.run/install.sh | bash
$ getbot setup
Works with Claude, ChatGPT, Gemini, DeepSeek, and local models. See all install options →
⚡ How it works
From install to running AI agent in three steps.
Install the CLI
One curl command. Downloads a single Go binary — no dependencies, no Docker on your laptop, no Node.js.
curl -fsSL https://getbot.run/install.sh | bash
Point it at your server
The setup wizard connects via SSH, checks server readiness, and picks your LLM provider. You bring the server — any VPS, bare metal, or cloud VM.
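The wizard handles the connection itself, but a quick manual pre-flight can save a failed run. An illustrative sketch (the hostname, user, and check command are placeholders, not getbot requirements):

```shell
# Placeholder host and user: confirm key-based SSH works
# before the wizard tries the same connection
ssh -o BatchMode=yes deploy@your-server.example 'uname -a'

# Then hand the same server to the wizard
getbot setup
```

BatchMode=yes makes ssh fail fast instead of prompting for a password, which is the same non-interactive access the wizard needs.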
getbot setup
Your AI agent is live
Deployed inside an Incus container, behind Caddy with HTTPS, authenticated via Google SSO. Your API keys never leave your server.
✓ Bot deployed at https://ai.yourcompany.com
🛡️ Architecture at a glance
Three layers of isolation between the internet and your AI agent.
🛠️ Built for trust
Every decision serves one principle: minimize trust boundaries.
Container Isolation
Each organization runs in its own Incus container with Docker-in-Incus. No shared runtimes, no container escapes to the host.
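As a rough illustration of the pattern (the container name and image are made up, and getbot's actual provisioning may differ), Docker-in-Incus relies on Incus's nesting support being enabled on the container:

```shell
# Hypothetical example: launch a nested-capable container for one organization
incus launch images:ubuntu/24.04 org-acme -c security.nesting=true

# Install Docker inside it; the host never runs these workloads directly
incus exec org-acme -- sh -c 'apt-get update && apt-get install -y docker.io'
```

Each tenant gets its own kernel namespaces and filesystem from Incus, so a breakout from Docker still lands inside the Incus container, not on the host.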
Trust-Minimized Auth
Google OAuth via auth.getbot.run, JWT minting on your VPS, Caddy forward_auth — your AI agent never sees raw credentials.
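A minimal Caddyfile sketch of that last hop (the upstream ports and verify path are assumptions, not getbot's actual config): Caddy's forward_auth directive asks a local verifier to validate the JWT on every request before anything reaches the agent.

```
ai.yourcompany.com {
    # Every request is checked against the local auth verifier first
    forward_auth localhost:9091 {
        uri /verify
        copy_headers X-User-Email
    }
    # Only authenticated traffic reaches the agent container
    reverse_proxy localhost:8080
}
```

The agent behind reverse_proxy only ever sees the identity header Caddy forwards, never the OAuth flow or the token itself.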
Any LLM Provider
Choose your AI provider during setup. Switch between Claude, ChatGPT, Gemini, or self-hosted models without redeploying.
SSH-First Deploy
Point getbot at any server with SSH access. No cloud provider accounts required — works on bare metal, VPS, or existing infra.
Your Keys Stay Home
API keys are injected directly into your container via SSH. They never touch getbot servers, never appear in logs, never leave your VPS.
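The underlying pattern is simple to sketch locally (the file name and variable are illustrative, not getbot's actual layout): write the key to an owner-only file, which getbot does over the SSH channel rather than through any third-party API.

```shell
# Illustrative only: variable name and path are assumptions.
# Set owner-only permissions before the secret is written, not after.
umask 077
printf 'PROVIDER_API_KEY=%s\n' "${PROVIDER_API_KEY:-example-key}" > getbot.env

# Over SSH, the same pattern might look like (placeholder host):
#   ssh deploy@your-vps 'umask 077; cat > /etc/getbot/env' < getbot.env
```

Because the secret travels inside the encrypted SSH session and lands in a mode-600 file, it never transits an intermediary service or shows up in process logs.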
Zero Lock-in
Everything is open source. Uninstall getbot and your server is unchanged. No proprietary agents, no phone-home telemetry.
🛡️ The getbot security model
Every deployment decision serves one principle: minimize trust boundaries.
Your server, your keys
API keys stay on your VPS. They never touch getbot servers, never leave your infrastructure, never appear in logs.
Process isolation via Incus
Each organization gets its own Incus container with Docker inside. A compromised agent cannot reach the host or other tenants.
No ambient credentials
Auth flows through Google OAuth → JWT minting on your VPS → Caddy forward_auth. The AI agent never handles authentication tokens.
Zero vendor lock-in
Switch LLM providers without redeploying. Uninstall getbot and your server is unchanged. No proprietary agents, no phone-home telemetry.
📖 Documentation
Step-by-step guides covering every stage of a getbot deployment.
Getting Started
First-run experience, what you need, and how to begin.
Setup Wizard
SSH connection, provider selection, and bot deployment in one command.
Container Isolation
Incus containers, Docker-in-Incus, and why your bots can't escape.
CI Automation
Non-interactive deploys with --yes for GitHub Actions and pipelines.
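Only the --yes flag comes from the line above; everything else in this workflow sketch (job layout, step names) is an assumption to show where a non-interactive deploy fits:

```yaml
# Hypothetical GitHub Actions job; adapt the trigger and names to your repo
name: deploy-bot
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Install getbot CLI
        run: curl -fsSL https://getbot.run/install.sh | bash
      - name: Deploy without prompts
        run: getbot setup --yes
```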
📝 Read the security blog series
Five posts covering the architecture decisions, attack surface analysis, and hardening measures behind getbot.
Read the blog →
Get notified when getbot is ready
We'll send one email when getbot is available for public use. No spam, no drip campaigns.
One email. No spam. Unsubscribe anytime.