Most AI agent platforms ask you to hand over your API keys, trust their infrastructure, and hope for the best. We built getbot to eliminate that trust requirement entirely.
This post is the first in a five-part series covering the security decisions behind getbot. We’ll start with the big picture: the three-tier architecture that keeps your AI agents isolated, authenticated, and under your control.
The problem with hosted AI agents
When you deploy an AI agent on someone else’s platform, you’re trusting them with:
- Your LLM API keys — which have billing implications and access to your conversation history
- Your data — every document, message, and file your agent processes
- Your availability — if their platform goes down, your agents go down
- Your costs — they set the markup on top of LLM provider pricing
getbot takes a different approach: your server, your keys, your control. The AI agent runs on infrastructure you own, behind authentication you configure, with API keys that never leave your machine.
Three tiers of isolation
getbot’s security model has three distinct layers, each with its own trust boundary:
Tier 1: Central auth (auth.getbot.run)
The central auth server handles exactly one thing: Google OAuth. It never sees your API keys, never touches your data, and never communicates with your AI agents directly.
When a user visits yourorg.getbot.run, Caddy sends a forward_auth subrequest to the local getbot-auth service on your VPS. If the user doesn’t have a valid session, they’re redirected to auth.getbot.run for Google OAuth. After authenticating with Google, auth.getbot.run issues a short-lived authorization code (30 seconds, single-use) and redirects back to your VPS. Your VPS exchanges the code, validates the email against your allowlist, and mints a local JWT. The central server never learns whether authentication succeeded or failed on your side.
Tier 2: VPS host (getbot-auth)
The getbot-auth service runs on your VPS at 127.0.0.1:9099 — localhost only, inaccessible from the network. It handles:
- JWT minting: HS256 with 256-bit keys generated via `crypto/rand`, 24-hour expiry
- Forward auth: Every request to your AI agent passes through `/auth/verify`, which checks the JWT cookie, validates the org claim, and verifies the email allowlist
- Key rotation: Maintains both current and previous signing keys, so you can rotate without invalidating active sessions
- Rate limiting: 10 requests per minute per IP using a sliding window
The JWT cookies are HttpOnly, Secure, and SameSite=Lax. No JavaScript can read them, they’re only sent over HTTPS, and they’re not sent on cross-origin requests.
Tier 3: Incus containers (your bots)
Each organization gets its own Incus container — not a Docker container, but a full system container with its own init process, filesystem, and network stack. Docker runs inside the Incus container (Docker-in-Incus), giving you the convenience of Docker Compose for bot management with the isolation guarantees of a system-level boundary.
Containers sit on a private bridge network (10.199.0.0/24) with NAT for outbound access. They cannot reach the host’s localhost, which means a compromised container cannot access the getbot-auth service, the Caddy admin API, or any other host-level service.
Identity without a database
getbot derives identity from email addresses. When you run `getbot install --email alice@acme.com --team marketing`, the system:
- Extracts the org from the email domain: `acme.com` → `acme`
- Creates or reuses the Incus container named `acme`
- Deploys the bot at `acme.getbot.run/marketing`
- Adds `alice@acme.com` to the org's email allowlist
There’s no user registration, no org creation flow, no invite system. The email domain is the org. This eliminates an entire class of authorization bugs — there’s no way to accidentally grant access to the wrong org because the org boundary is the email domain boundary.
The request flow
Here’s what happens when Alice visits acme.getbot.run/marketing/chat:
1. Caddy receives the request and matches the bot route
2. Forward auth: Caddy sends a subrequest to `127.0.0.1:9099/auth/verify` with the original headers
3. getbot-auth extracts the org from the Host header (`acme`), validates the JWT cookie, checks that the token's org claim matches, and verifies Alice's email is in the allowlist
4. If valid: returns `200` with `X-Forwarded-Email: alice@acme.com`
5. Caddy strips the `/marketing` prefix and proxies to the container at its assigned port
6. The bot receives the request with the authenticated email in a header — it never handles auth itself
If Alice doesn’t have a valid session, step 3 returns a redirect to the OAuth flow. The bot never sees unauthenticated requests.
What this means in practice
- API keys stay on your VPS. They’re injected as environment variables into the Docker container inside the Incus container. They never pass through getbot’s central infrastructure.
- Auth is per-request. Every single request goes through forward_auth. There’s no “once authenticated, always trusted” assumption.
- Containers are disposable. Each org’s container can be destroyed and recreated without affecting other orgs.
- The central server is optional. If auth.getbot.run went down, existing sessions would continue to work (JWTs are validated locally). New sessions would require the OAuth flow, but you could swap in your own OAuth provider.
Up next
In the next post, we’ll dig into the specific attack surface that AI agent deployments create — WebSocket connections, streaming responses, and why the forward_auth pattern matters more than you might think.