Self-Hosted Proxy
Self-hosted mode runs the same `knoxcall/proxy` Docker image on your infrastructure. Live API traffic — request bodies, response bodies, injected secrets — stays entirely within your VPC. Only routing metadata, tenant config, and re-encrypted secrets flow from the control plane over TLS.
Requires a Pro or Enterprise subscription with `byoAgentEnabled` (the same flag gates the Go Agent). Reach out if you need to pilot — we can provision a sandbox agent.

Why Self-Host
- Data-residency regulation: HIPAA, FedRAMP-adjacent, EU-residency, sovereign-cloud contracts that require payload data to stay on-prem.
- Air-gapped environments: systems without outbound internet except to a single control-plane endpoint.
- Latency-sensitive workloads: proxy sits in the same VPC as your apps and upstream APIs.
- Compliance audits: every byte of customer data remains within your own audit perimeter.
How It Works
- Control plane at knoxcall.com holds route config and MASTER_KEY-wrapped secrets.
- Your container pulls a signed session bundle every hour.
- Request flow: your app → `knoxcall/proxy` (your infra) → upstream API. The control-plane hop happens only during session-bundle refresh.
Prerequisites
- Credentials: create a server-mode agent under Automation → Agents with `mode: server`. Copy the `kc_agent_xxx` ID and secret (shown only once).
- Local Postgres + Redis: one of each, reachable from the container. SQLite is not supported.
- Local MASTER_KEY_B64: generate a 32-byte base64 key locally (`openssl rand -base64 32`). Do NOT share this with KnoxCall; it stays on your infrastructure and wraps integration credentials at rest.
- Outbound HTTPS to knoxcall.com (for session-bundle refresh + metrics reporting).
- Public HTTPS endpoint (typically `https://proxy.yourdomain.com`) behind your preferred TLS terminator.
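The key-generation step above can be paired with a quick sanity check that the value really decodes to 32 bytes (the shell variable name here is just an example):

```shell
# Generate the master key locally; it never leaves your infrastructure.
MASTER_KEY_B64=$(openssl rand -base64 32)

# Sanity check: the base64 value must decode to exactly 32 bytes.
printf '%s' "$MASTER_KEY_B64" | base64 -d | wc -c
```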
Install
Docker Compose
Before bringing anything up, generate two strong secrets and place them in a `.env` file alongside the compose file (never commit it):
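As a sketch, the `.env` and compose files might look like the following. The image name, port, `MASTER_KEY_B64`, and the `kc_agent_xxx` ID format come from this guide; every other variable name, service name, and tag is an assumption, not the product's documented schema:

```
# .env — never commit this file
KNOXCALL_AGENT_ID=kc_agent_xxx
KNOXCALL_AGENT_SECRET=<secret shown once at agent creation>
MASTER_KEY_B64=<output of: openssl rand -base64 32>
POSTGRES_PASSWORD=<strong random password>
```

```yaml
# docker-compose.yml — a sketch; service layout and env var names are assumptions
services:
  proxy:
    image: knoxcall/proxy:latest
    ports:
      - "8080:8080"
    env_file: .env
    environment:
      DATABASE_URL: postgres://knoxcall:${POSTGRES_PASSWORD}@db:5432/knoxcall
      REDIS_URL: redis://redis:6379
    depends_on:
      - db
      - redis
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: knoxcall
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: knoxcall
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7
volumes:
  pgdata:
```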
Run `docker compose up -d`, then point your TLS terminator (Caddy, Nginx, Cloudflare Tunnel, ALB) at `:8080`.
Health Checks
Once running, the container exposes:

- `GET /healthz` — readiness probe. Returns 503 until the first control-plane sync succeeds, then 200 with sync stats. Returns 503 again if the bundle goes stale (control plane unreachable for >1 hour).
- `GET /livez` — liveness probe. Always 200 while the process is responsive.

Point your load balancer's health check at `/healthz` so a container with a dead control-plane link is depooled automatically.
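Wired into an orchestrator, the two endpoints map naturally onto readiness and liveness probes. A Kubernetes-style sketch (the port comes from this guide; the timings are assumptions):

```yaml
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 15
  failureThreshold: 3
livenessProbe:
  httpGet:
    path: /livez
    port: 8080
  periodSeconds: 10
  failureThreshold: 3
```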
Session Bundles
Every hour the container POSTs to `/agent/v1/session` with its agent credentials and receives a bundle containing:
- Routes merged with their production-environment config
- Secrets re-encrypted with an HKDF-derived session key (decrypts locally)
- Tenant metadata (slug, subdomain hash)
- Environments and API keys
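Illustratively, a decoded bundle could carry fields along these lines. Every field name below is an assumption for the sake of the example; the real schema is defined by the control plane:

```json
{
  "routes": [
    { "path": "/v1/charges", "upstream": "https://api.example.com", "env": "production" }
  ],
  "secrets": { "example_integration": "<re-encrypted with the HKDF-derived session key>" },
  "tenant": { "slug": "acme", "subdomain_hash": "<opaque>" },
  "environments": ["production"],
  "api_keys": ["<opaque>"]
}
```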
Local Integrations
The control plane never pushes KnoxCall’s own integration credentials (SMTP, Twilio, S3, Anthropic) to a self-hosted container. You configure your own under Settings → Integrations. Required minimum for a functional self-hosted deployment:

- Email provider — otherwise alerts, invitations, and password resets fail silently.
- S3 — for response-body archival
- Anthropic — for AI anomaly detection on your alerts
- Twilio — for SMS alerts
Tamper Seal
Every `knoxcall/proxy` image is built with a two-pass `build_sig`:

- Compile the image with a placeholder signature.
- Hash the compiled `dist/` contents.
- HMAC-SHA256 of `hash:version` with a CI-only secret.
- Rebuild with the real signature baked in.
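The steps above can be sketched in shell. The hashing strategy, variable names, and signature format are all assumptions for illustration; only the `hash:version` HMAC construction and the `dist/` hash come from this guide:

```shell
# Illustrative build_sig computation (names and formats assumed).
mkdir -p dist && printf 'compiled output' > dist/app.js   # stand-in for a real build
VERSION="1.0.0"
CI_SECRET="ci-only-secret"   # in reality, held only in CI

# Hash the compiled dist/ contents in a stable order.
DIST_HASH=$(find dist -type f | LC_ALL=C sort | xargs cat | sha256sum | awk '{print $1}')

# HMAC-SHA256 over "hash:version" with the CI-only secret.
BUILD_SIG=$(printf '%s:%s' "$DIST_HASH" "$VERSION" \
  | openssl dgst -sha256 -hmac "$CI_SECRET" | awk '{print $NF}')

echo "$BUILD_SIG"   # baked into the second build pass
```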
In strict mode (`KNOXCALL_STRICT_TAMPER_CHECK=true`), an unknown signature means the session is refused. The default mode logs a tamper event but still issues the session, so you have time to investigate.
Next Steps
- Read the migration guide to cut over from cloud to self-hosted with zero downtime
- Configure email and other integrations on your self-hosted container
- Run shadow-mode tests before flipping live traffic