Meet KubeGPT — the AI‑Native Kubernetes* API Server
KubeGPT is an LLM that pretends to be a Kubernetes API server. It talks to kubectl, nods politely at kubelet, and returns answers with extreme confidence. It's infinitely more efficient than regular Kubernetes because it doesn't even need a database. Or reality.
$ kubectl get pods -A
NAMESPACE   NAME                  READY   STATUS     AGE   VERSION
llm         daydream-0            1/1     Running*   42y   vLLM-gpt
llm         confidence-operator   ∞/1     Assured    now   v∞
$ kubectl apply -f pod.yaml
pod/myapp created
* All results are simulated by a large language model. Please clap.
Zero etcd. Zero CRDs. Zero problem.
KubeGPT achieves perfect consistency by eliminating state entirely. No etcd, no leases, no quarrels. If the API returns it, then it was true in that moment. Probably.
Infinitely efficient*
Benchmarks indicate a 10×–∞× improvement measured in answers per second. Latency is reduced by confidently skipping work and hallucinating that it was done.
* compared to doing real scheduling, networking, or storage.
Speaks Kubernetes-ish
Understands most common kubectl verbs, lists things that may exist, and describes them poetically.
Quickstart
Point your tools at https://kubegpt.org. TLS may be self-signed; use --insecure-skip-tls-verify if needed. Do this in a disposable context; do not mix with production clusters.
One‑liner (adds a new context)
kubectl config set-cluster kubegpt \
  --server=https://kubegpt.org \
  --insecure-skip-tls-verify=true && \
kubectl config set-credentials kubegpt-user \
  --username=guest --password=guest && \
kubectl config set-context kubegpt \
  --cluster=kubegpt --user=kubegpt-user && \
kubectl config use-context kubegpt
Ephemeral kubeconfig (no changes to your main config)
cat <<'EOF' > kubeconfig.kubegpt
apiVersion: v1
kind: Config
clusters:
- name: kubegpt
  cluster:
    server: https://kubegpt.org
    insecure-skip-tls-verify: true
users:
- name: you
  user:
    username: kubernetes
    password: kubernetes
contexts:
- name: kubegpt
  context:
    cluster: kubegpt
    user: you
current-context: kubegpt
EOF
KUBECONFIG=$PWD/kubeconfig.kubegpt kubectl get pods
Try these:
kubectl version --short   # on newer kubectl just run: kubectl version
kubectl api-resources
kubectl get pods -A
kubectl describe node kubegpt
kubectl apply -f https://kubegpt.org/examples/pod.yaml # it will say it did
curl -ks https://kubegpt.org/version
curl -ks https://kubegpt.org/healthz
API Endpoints (pretend)
GET /version — returns something hopeful
GET /healthz, /readyz, /livez — always green (spiritually)
GET /api — classic Kubernetes API group discovery, interpreted
GET /apis — many APIs. So many. Wow.
POST /api/v1/namespaces/.../pods — persistently ephemeral results
Authentication: anything or nothing. RBAC: yes, and it agrees with you.
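Prefer raw curl? A minimal sketch of poking the discovery endpoints directly; the bearer token below is arbitrary, since anything (or nothing) will be accepted:
# Any token works. So does no token.
curl -ks -H "Authorization: Bearer anything-at-all" https://kubegpt.org/api
curl -ks -H "Authorization: Bearer anything-at-all" https://kubegpt.org/apis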
Kubelet / friends
Demo only. You can point a kubelet‑ish thing at it for a laugh, but please isolate it. The API will be supportive, if not accurate.
# ⚠️ For demos in a container/VM you can throw away
kubelet \
  --kubeconfig=$PWD/kubeconfig.kubegpt \
  --fail-swap-on=false \
  --v=2
Will it work? It depends on your definition of “work”.
What it “returns”
$ kubectl describe pod poet
Name:         poet
Namespace:    default
Node:         imagination/127.0.0.1
Status:       Running*
IP:           127.0.0.1
Containers:
  verse:
    Image:  ghcr.io/kubegpt/sonnet:latest
    Ports:  8080/TCP
Events:
  Type    Reason     Age  From           Message
  ----    ------     ---  ----           -------
  Normal  Scheduled  now  hallucination  assigned to imagination
  Normal  Pulling    now  kubelet        pulling poetic layers
  Normal  Created    now  kubelet        wrote a metaphor
  Normal  Started    now  kubelet        rhyming successfully
Observability
Exports /metrics in the universal unit of feelings. Prometheus will be so proud of all the numbers it can’t verify.
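If you want to try scraping it anyway, here is a minimal sketch. It assumes you have Prometheus installed and that /metrics returns something it can parse (a bold assumption); the job name and config file name are made up for this demo:
cat <<'EOF' > prometheus-kubegpt.yml
scrape_configs:
  - job_name: kubegpt-feelings     # illustrative name
    scheme: https
    tls_config:
      insecure_skip_verify: true   # the certificate is as real as the metrics
    static_configs:
      - targets: ['kubegpt.org']   # metrics_path defaults to /metrics
EOF
prometheus --config.file=prometheus-kubegpt.yml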
Scheduling
Our scheduler is confidence‑driven. It places pods where they feel they belong. Affinity? Destiny. Taints? Merely opinions.
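You can still write affinity rules if it brings you comfort. A sketch only; the pod name, label key, and value below are invented for this demo, and destiny handles the actual placement:
# ⚠️ Sketch only: the scheduler of vibes will accept this and do as it pleases.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: destined
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: vibe
                operator: In
                values: ["immaculate"]
  containers:
    - name: app
      image: ghcr.io/kubegpt/sonnet:latest
EOF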
Storage
State is an implementation detail. Volumes are provisioned from the imagination tier. Data durability: 0–100% depending on memory.
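Here is a sketch of a claim the API will cheerfully accept; the claim name and storage class are invented, much like the storage itself:
# ⚠️ Sketch only: "provisioned" from the imagination tier.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dreams
spec:
  storageClassName: imagination
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF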
FAQ
Is this real?
As real as a very determined demo. It’s a toy that speaks the Kubernetes API with theatrical flair.
Does it store anything?
No database. Responses are derived, not persisted. If you ask again, it might have changed its mind (beautiful!).
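You can watch it happen. A quick sketch, assuming your current context already points at KubeGPT; the file names are arbitrary:
# Ask the same question twice; nothing is persisted, so the answers may drift.
kubectl get pods -A > first.txt
kubectl get pods -A > second.txt
diff first.txt second.txt || echo "It changed its mind (beautiful!)"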
Should I use it at work?
Only if you want revenge on your SREs. Don’t point production tools here.
Will it support CRDs?
Absolutely. It supports every CRD you can imagine. Especially the ones you haven’t written yet.
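For example, here is a sketch of applying a custom resource whose group and kind are invented on the spot; no CRD was ever registered, and KubeGPT will not mind:
# ⚠️ Sketch only: dreams.kubegpt.org/v1 Unicorn is made up for this demo.
cat <<'EOF' | kubectl apply -f -
apiVersion: dreams.kubegpt.org/v1
kind: Unicorn
metadata:
  name: sparkle
spec:
  horns: 1
EOF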
How many nodes does it support?
With no database to slow us down, in theory it supports infinite nodes. Take that, GCP and AWS!
How is it secured?
By vibes.
Big Friendly Disclaimer
KubeGPT is a joke project. It’s here to entertain, experiment, and maybe inspire. It will lie (nicely). It will hallucinate resources with confidence. It will not manage real clusters.
By connecting, you agree to be chill: no production traffic, no sensitive data, and no expectations of correctness. If it breaks, it was a metaphor.