
We piloted OrbOps AI on a hybrid setup: GCP (GKE) + on-prem K8s, GitHub Actions for CI, and Terraform (remote state in GCS). Setup was a few clicks—connect GitHub, grant read-only cloud access, point to clusters. In ~15 minutes it understood our repos and opened PRs with Docker/K8s/Terraform updates. First staging deploy landed in under 45 minutes.
The big unlock was the propose-first, human-in-the-loop flow—approvals show diffs, policy context, and risk so our lead can approve in seconds. OrbOps didn’t try to replace our stack; it acted like an agentic control plane around it.
On the FinOps side, cost signals showed up before merges: it identified an oversized GKE node pool, suggested rightsizing, and nudged us to run a batch job on spot/preemptible capacity. It also TTL’d idle preview environments, which we usually forget to clean up. We liked the “cost delta per PR” summary and the option to block risky cost jumps via policy.
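The "cost delta per PR" gate described above can be sketched generically. This is a minimal illustration of the idea, not OrbOps AI's actual API; the function name, thresholds, and dollar figures are all assumptions:

```python
# Hypothetical sketch of a cost-delta policy gate: given estimated monthly
# cost before and after a PR, allow the merge or block it per policy.
# Names and threshold values are illustrative, not OrbOps AI's real API.

def cost_gate(before_usd: float, after_usd: float,
              max_increase_usd: float = 200.0,
              max_increase_pct: float = 10.0) -> dict:
    """Return a decision dict: allowed flag plus a human-readable reason."""
    delta = after_usd - before_usd
    pct = (delta / before_usd * 100.0) if before_usd else float("inf")
    if delta > max_increase_usd or pct > max_increase_pct:
        return {"allowed": False,
                "reason": f"cost delta +${delta:.2f} ({pct:.1f}%) exceeds policy"}
    return {"allowed": True,
            "reason": f"cost delta +${delta:.2f} ({pct:.1f}%) within policy"}
```

A gate like this would run in CI alongside the cost summary, so a risky cost jump fails the check instead of surfacing after merge.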
The AI insights were useful, not noisy: it flagged a flaky test, suggested adding readiness probes, recommended a small memory-limit bump before rollout, and proposed a canary instead of an all-at-once deploy, complete with a ready rollback plan.
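The canary-over-big-bang suggestion boils down to a simple analysis step: compare the canary's error rate to the stable baseline and decide whether to promote, keep observing, or roll back. A minimal sketch of that logic, with assumed thresholds (this is not OrbOps AI's implementation):

```python
# Illustrative canary-analysis decision: promote if the canary's error rate
# is within tolerance of the baseline, hold for more data if it is modestly
# worse, roll back otherwise. Tolerance values are assumptions.

def canary_decision(baseline_error_rate: float,
                    canary_error_rate: float,
                    tolerance: float = 0.005) -> str:
    """Return 'promote', 'hold', or 'rollback' for the current canary."""
    budget = baseline_error_rate + tolerance
    if canary_error_rate <= budget:
        return "promote"
    if canary_error_rate <= 2 * budget:
        return "hold"  # keep the current traffic split and observe longer
    return "rollback"
```

In practice the same decision would usually weigh latency and saturation too, but a single-metric gate already makes the rollback plan mechanical rather than ad hoc.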
Verdict: less orchestration toil, safer rollouts, and immediate cost hygiene—without changing our tools. We’re expanding the pilot to more services.
