Shakti
FAQ

Can I run Shakti fully air-gapped?

Yes. Shakti ships as a single Rust binary with no external SaaS dependencies at runtime, and the self-hosted Ollama / vLLM adapters close the LLM loop.

Shakti Server is a single Rust binary; once you deploy it into your Kubernetes cluster (or onto bare metal), it needs only Postgres and Redis, both of which can run inside the same cluster.
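For local evaluation, the three services can be sketched as one Compose stack. This is a hypothetical sketch: the `shakti/server` image name, port, and environment variable names are illustrative assumptions, not official artifacts.

```yaml
# Hypothetical Compose sketch: Shakti Server + Postgres + Redis in one stack.
# Image name, port, and env var names below are assumptions for illustration.
services:
  shakti:
    image: shakti/server:latest        # assumed image name
    ports:
      - "8080:8080"                    # assumed listen port
    environment:
      DATABASE_URL: postgres://shakti:shakti@postgres:5432/shakti
      REDIS_URL: redis://redis:6379
    depends_on: [postgres, redis]
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: shakti
      POSTGRES_PASSWORD: shakti
      POSTGRES_DB: shakti
  redis:
    image: redis:7
```

In a production air-gapped cluster the same shape holds, with the Compose services replaced by in-cluster Postgres and Redis deployments.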

For the LLM layer, the Ollama / vLLM / LM Studio adapters treat a local inference endpoint as a first-class provider. Point the adapter at http://ollama.internal:11434 (or wherever your inference service lives) and the pipeline never makes a call outside your network.
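As a sketch, an adapter entry might look like the following. The file layout and every key name here are assumptions for illustration, not documented Shakti configuration; only the endpoint URL comes from the answer above.

```toml
# Hypothetical provider config: all key names are illustrative assumptions.
[llm.provider]
adapter  = "ollama"                        # or "vllm", "lmstudio"
base_url = "http://ollama.internal:11434"  # local inference endpoint from the FAQ
model    = "llama3"                        # any model your endpoint serves locally
```

The key point the config illustrates: the provider is just a URL, so swapping a hosted API for an in-cluster endpoint is a one-line change.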

Air-gapped deployments are part of the Enterprise tier because they typically come with custom Axon parsers, a longer procurement cycle, and a signed offline-installer process. The Server Pro tier covers the standard self-hosted case — you deploy inside your cluster with outbound access to your chosen LLM provider.

Talk to the founding team.

30-minute working session scoped to your stack. No slide decks.