Context
We're running OpenSandbox in production on Kubernetes using the batchsandbox workload provider with a Pool for pre-warmed Python sandboxes. As we scale our AI workloads, sandbox density has become a critical concern: today each sandbox occupies a full pod, which makes the cost per sandbox high and caps warm-pool capacity by node count rather than by node resources.
What we found
We discovered OSEP-0007 (fast-sandbox runtime support) which proposes exactly what we need: a maxSandboxesPerPod field on a SandboxPool CRD, enabling multiple isolated sandbox containers inside a single Agent Pod. The design is well-specified and @fengcone has already built the controller side at fengcone/fast-sandbox.
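For illustration, a pool using the proposed field might look like the sketch below. The apiVersion, kind, and field placement are assumptions based on our reading of the OSEP, not a confirmed schema:

```yaml
# Hypothetical sketch based on OSEP-0007; apiVersion and field names are assumptions.
apiVersion: opensandbox.io/v1alpha1
kind: SandboxPool
metadata:
  name: python-warm-pool
spec:
  replicas: 5              # pre-warmed sandboxes kept in the pool (assumed field)
  maxSandboxesPerPod: 5    # proposed OSEP-0007 field: pack up to 5 sandboxes per Agent Pod
```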
We also noticed that the OpenSandbox server side (fastsandbox_provider.py and fastsandbox_client.py) does not exist yet, and that the OSEP status is still provisional, with no implementation PR open.
Questions
- Is there an official plan to implement OSEP-0007 in the OpenSandbox server? We're looking for this as a first-class feature in the controller and server, not a separate deployment.
- Is there a target milestone or timeline?
- Would a community contribution of fastsandbox_provider.py + fastsandbox_client.py be accepted? We've done a detailed analysis of the existing batchsandbox_provider.py interface and the fast-sandbox gRPC proto, and the scope is well understood (~2 new files plus minor changes to config.py and provider_factory.py).
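To make that scope concrete, here is a minimal sketch of how the two new files might divide responsibilities. Every class and method name below is a guess modeled on the batchsandbox pattern; it is not the actual OpenSandbox interface, and the gRPC call is stubbed out:

```python
# Hypothetical sketch only: names are assumptions mirroring the existing
# batchsandbox_provider.py pattern, not the real OpenSandbox API.

class FastSandboxClient:
    """Would wrap the fast-sandbox gRPC proto; stubbed here for illustration."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def create(self, sandbox_id: str) -> dict:
        # Real code would issue a gRPC CreateSandbox call to the agent pod.
        return {"id": sandbox_id, "endpoint": self.endpoint}


class FastSandboxProvider:
    """Would be registered in provider_factory.py alongside the batchsandbox provider."""

    def __init__(self, client: FastSandboxClient):
        self.client = client

    def create_sandbox(self, sandbox_id: str) -> dict:
        return self.client.create(sandbox_id)


provider = FastSandboxProvider(FastSandboxClient("agent-pod:50051"))
print(provider.create_sandbox("sbx-1")["id"])  # prints: sbx-1
```

The point of the split is that the client isolates the wire protocol while the provider keeps the same surface the factory already expects, so config.py changes stay minimal.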
Why this matters
Multi-sandbox per pod is a fundamental efficiency feature for anyone running AI agent workloads at scale. With the current 1-sandbox-per-pod model, a pool of 5 pre-warmed sandboxes consumes 5 pods. With maxSandboxesPerPod: 5, the same pool fits in 1 pod. This directly impacts infrastructure cost and warm-pool capacity for every production OpenSandbox deployment on Kubernetes.
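The capacity math above generalizes to any pool size; a one-line sketch of the packing model (function name is ours, for illustration):

```python
import math

def pods_needed(pool_size: int, max_sandboxes_per_pod: int) -> int:
    """Warm-pool pods required when sandboxes are packed N-per-pod."""
    return math.ceil(pool_size / max_sandboxes_per_pod)

print(pods_needed(5, 1))  # current 1-sandbox-per-pod model: 5
print(pods_needed(5, 5))  # with maxSandboxesPerPod: 5 -> 1
```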
We're happy to contribute the implementation if there's maintainer appetite to review and merge it; @fengcone has already done the hard architectural work on the controller side.
Tags: feature, component/k8s