Component: shared/sandbox/
Gateway router: services/gateway/routers/sandbox.py
Status: ✅ Production Ready
The Purple8 Sandbox provides fully isolated, multi-container Docker environments for running generated projects in real time. Each session gets its own dedicated container set, an automated 8-stage launch pipeline, and a live preview proxy routed through the API gateway.
┌────────────────────────────────────────────────────────────────────┐
│ SANDBOX SESSION LIFECYCLE │
│ │
│ 1. POST /api/sandbox/sessions/from-project/{project_id} │
│ ├─ Auth: verify_token (JWT required) │
│ ├─ ProjectValidator auto-fixes vite.config.ts / package.json │
│ ├─ Detects required runtimes (Python, Node, Postgres, Redis) │
│ ├─ Acquires containers from pool (multi-container) │
│ └─ Deploys files to /workspace via tar │
│ │
│ 2. POST /api/sandbox/sessions/{id}/launch (SSE stream) │
│ └─ Runs SandboxPipeline (8 stages, events streamed to client) │
│ │
│ 3. GET /api/sandbox/sessions/{id}/preview/... (proxy) │
│ └─ Transparently proxies to container's live dev server │
│ │
│ 4. DELETE /api/sandbox/sessions/{id} │
│ └─ Terminates all containers, cleans /workspace │
└────────────────────────────────────────────────────────────────────┘
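The four lifecycle calls above can be expressed as a small client-side helper. This is an illustrative sketch only (the endpoint paths come from the diagram; the helper itself is not part of the codebase):

```python
# Illustrative sketch: the session lifecycle endpoints from the diagram above,
# in call order. Not a shipped client; paths are taken verbatim from the docs.

def lifecycle_endpoints(project_id: str, session_id: str) -> list[tuple[str, str]]:
    """Return (method, path) pairs in the order a client calls them."""
    base = f"/api/sandbox/sessions/{session_id}"
    return [
        ("POST", f"/api/sandbox/sessions/from-project/{project_id}"),  # 1. create
        ("POST", f"{base}/launch"),                                    # 2. SSE launch stream
        ("GET", f"{base}/preview/"),                                   # 3. live preview proxy
        ("DELETE", base),                                              # 4. teardown
    ]

steps = lifecycle_endpoints("proj-1", "sess-1")
```

Every call except the preview requires the JWT from verify_token in the Authorization header.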
File: shared/sandbox/container_pool.py
The pool pre-warms containers on startup and reuses them across sessions.
```python
class RuntimeType(str, Enum):
    PYTHON_312 = "python312"   # python:3.12-slim
    NODE_20 = "node20"         # node:20-alpine
    POSTGRES_15 = "postgres15" # postgres:15
    REDIS_7 = "redis7"         # redis:7
```
| Runtime | Pool | Network |
|---|---|---|
| python:3.12-slim | 4 containers | purple8-platform_8-network |
| node:20-alpine | 4 containers | purple8-platform_8-network |
| postgres:15 | 2 containers | purple8-platform_8-network |
| redis:7 | 2 containers | purple8-platform_8-network |
Note: Vector search and knowledge graph operations are handled by the platform’s Purple8 Graph Engine (port 8100), not per-session containers. Purple8 Graph replaced all legacy vector databases (Qdrant) and provides vector similarity search (
/v1/search/vector), hybrid vector+graph search (/v1/search/hybrid), and graph traversal (/v1/traverse) as platform-level services shared across all sessions.
All containers share the internal Docker bridge network. Containers communicate by IP address (container.attrs["NetworkSettings"]["Networks"][...]["IPAddress"]).
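The IP lookup above operates on the attrs dict that docker-py exposes (mirroring docker inspect). A minimal sketch, using a hand-built attrs fragment in place of a live container:

```python
# Sketch of the IP lookup described above. In production the attrs dict comes
# from a docker-py Container object; here a hand-built fragment stands in.

NETWORK = "purple8-platform_8-network"  # shared bridge from the pool table

def container_ip(attrs: dict, network: str = NETWORK) -> str:
    """Extract a container's bridge-network IP from its inspect attrs."""
    return attrs["NetworkSettings"]["Networks"][network]["IPAddress"]

# Example attrs fragment shaped like `docker inspect` output:
attrs = {"NetworkSettings": {"Networks": {NETWORK: {"IPAddress": "172.18.0.5"}}}}
ip = container_ip(attrs)
```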
File: shared/sandbox/manager.py
```python
class ContainerRole(str, Enum):
    PRIMARY = "primary"    # Main execution container
    FRONTEND = "frontend"  # Frontend dev server
    BACKEND = "backend"    # Backend API server
    DATABASE = "database"  # PostgreSQL
    CACHE = "cache"        # Redis
```
Each session tracks multiple containers:
```python
@dataclass
class SandboxSession:
    session_id: str
    containers: Dict[ContainerRole, ContainerInfo]  # all containers
    status: SessionStatus
    ...

    # Backward-compat properties (return first/primary container):
    @property
    def container_id(self) -> str: ...
    @property
    def container_ip(self) -> str: ...

@dataclass
class ContainerInfo:
    role: ContainerRole
    container_id: str
    container_ip: str
    runtime_type: RuntimeType
    status: str           # "running", "stopped", "error"
    port: Optional[int]   # exposed port (if applicable)
```
| Method | Description |
|---|---|
| create_session(runtime) | Single-container session (simple projects) |
| create_multi_container_session(roles_config) | Multi-container session |
| get_session(session_id) | Look up session by ID |
| terminate_session(session_id) | Stop all containers, clean /workspace |
| _cleanup_container_files(container_id) | Remove /workspace contents |
File: shared/sandbox/Sandbox_pipeline.py
The pipeline is SSE-streamed to the frontend via POST /api/sandbox/sessions/{id}/launch.
Stage 1: DetectEntryPoint
→ Inspects /workspace for: package.json, main.py, app.py, manage.py,
server.py, index.js, index.ts, requirements.txt, pyproject.toml
→ Sets ctx.entry_file, ctx.project_type ("node" | "python")
Stage 2: MaterializeFiles
→ Uploads all project files to /workspace via docker cp / tar stream
→ Detects and uses subdirectories: frontend_code/, backend_code/,
frontend/, backend/, client/, web/, app/, ui/
Stage 3: GenerateEnvVars
→ Injects PORT, BACKEND_PORT, DATABASE_URL, REDIS_URL
→ Sets CORS_ORIGINS to include the preview proxy URL
→ Writes .env file into /workspace
Stage 4: InstallDependencies
→ Node: npm install (uses package-lock.json if present)
→ Python: pip install -r requirements.txt (or pyproject.toml)
→ Timeout: 120 s
Stage 5: PostInstallChecks
→ Node: verifies node_modules/ exists and is non-empty
→ Python: verifies pip list succeeds
Stage 6: SyntaxCheck
→ Node: node --check <entry_file>
→ Python: python -m py_compile <entry_file>
→ Fails fast on syntax errors before attempting to start
Stage 7: LaunchBackend
→ Runs the backend start command (uvicorn, flask run, etc.)
→ Polls http://{container_ip}:{backend_port}/health (or /) for up to 30 s
→ Marks backend as ready when HTTP 200 received
Stage 8: LaunchFrontend
→ Patches vite.config.ts: sets base = "/api/sandbox/sessions/{sid}/preview/"
→ Runs npm run dev (or npm start)
→ Sets CHOKIDAR_USEPOLLING=true (Docker filesystem watch)
→ Polls http://{container_ip}:{fe_port}/ for up to 45 s
→ Reports ready with preview URL
Each stage emits an SSE event:
```
data: {"stage": "InstallDependencies", "status": "running", "log": "npm install..."}
data: {"stage": "InstallDependencies", "status": "complete", "duration_ms": 8234}
```
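A client consuming this stream only needs to decode the data: payloads. A minimal sketch (real SSE streams also carry event:/id: fields and blank-line delimiters, which this parser ignores):

```python
import json

# Minimal parser for the pipeline's SSE data lines. A full SSE parser would
# also handle event:/id: fields and multi-line data; the pipeline's events
# shown above fit on one line each.

def parse_sse_events(raw: str) -> list[dict]:
    events = []
    for line in raw.splitlines():
        if line.startswith("data: "):
            events.append(json.loads(line[len("data: "):]))
    return events

stream = (
    'data: {"stage": "InstallDependencies", "status": "running", "log": "npm install..."}\n'
    'data: {"stage": "InstallDependencies", "status": "complete", "duration_ms": 8234}\n'
)
events = parse_sse_events(stream)
```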
Every pipeline run is tracked as a journey in the Purple8 Graph Engine via JourneyBridge (same integration pattern used by phase_based_executor.py):
| Lifecycle Event | JourneyBridge Call | Data Captured |
|---|---|---|
| Pipeline starts | bridge.start(session_id, project_name) | Session + project metadata |
| Each stage completes | bridge.advance(phase_id, status) | Stage name, status, elapsed time |
| Pipeline finishes | bridge.finish(final_status, metadata) | Total time, stages succeeded/failed |
This creates a full execution graph in Purple8 Graph that can be queried for analytics, debugging, and pipeline optimization.
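The call shape in the table above can be mirrored with a toy stand-in. This class is purely illustrative; the real JourneyBridge writes to the Purple8 Graph Engine rather than an in-memory dict:

```python
import time

# Toy stand-in mirroring the JourneyBridge call pattern from the table above.
# The real bridge persists a journey graph in Purple8 Graph; this version only
# accumulates the same lifecycle data in a dict to show the call shape.

class ToyJourneyBridge:
    def start(self, session_id: str, project_name: str) -> None:
        self.journey = {
            "session_id": session_id,
            "project": project_name,
            "phases": [],
            "t0": time.monotonic(),
        }

    def advance(self, phase_id: str, status: str) -> None:
        self.journey["phases"].append({"phase": phase_id, "status": status})

    def finish(self, final_status: str, metadata: dict) -> dict:
        self.journey["final_status"] = final_status
        self.journey["metadata"] = metadata
        return self.journey

bridge = ToyJourneyBridge()
bridge.start("sess-1", "demo-app")
bridge.advance("InstallDependencies", "complete")
journey = bridge.finish("success", {"stages_failed": 0})
```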
File: services/gateway/routers/sandbox.py — GET /api/sandbox/sessions/{sid}/preview/{path:path}
The Vite dev server (and other frameworks) use a base path config so that all asset URLs, HMR WebSocket connections, and API calls go through /api/sandbox/sessions/{sid}/preview/.
Browser → GET /api/sandbox/sessions/{sid}/preview/src/main.js
→ Proxy rewrites to:
GET http://{container_ip}:{port}/api/sandbox/sessions/{sid}/preview/src/main.js
← Returns file with correct MIME type
The proxy:
- Forwards requests with the full /api/sandbox/sessions/{sid}/preview/ prefix to the upstream container (the Vite base path expects it)
- Preserves Content-Type headers from the upstream response

Stage 8 (LaunchFrontend) patches vite.config.ts before starting:
```ts
// Injected automatically by Sandbox_pipeline.py Stage 8
export default defineConfig({
  base: '/api/sandbox/sessions/{session_id}/preview/',
  server: {
    host: '0.0.0.0',
    port: 5173,
    hmr: {
      overlay: false,
    },
  },
})
```
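The proxy's URL rewrite shown in the request flow keeps the preview prefix intact, since the patched Vite base path matches it. A minimal sketch (the function name is illustrative, not the gateway's actual code):

```python
# Sketch of the preview proxy's URL rewrite: the preview prefix is preserved
# on the upstream request because the Vite base path is set to the same value.
# Illustrative only; not the gateway router's actual implementation.

def upstream_url(container_ip: str, port: int, sid: str, path: str) -> str:
    prefix = f"/api/sandbox/sessions/{sid}/preview/"
    return f"http://{container_ip}:{port}{prefix}{path}"

url = upstream_url("172.18.0.5", 5173, "abc", "src/main.js")
```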
File: services/project_validator.py
Runs automatically before files are deployed to a container. Validates and auto-fixes:
| Issue | Auto-Fix |
|---|---|
| vite.config.ts missing host: '0.0.0.0' | Injects server config block |
| vite.config.ts missing hmr.overlay: false | Injects HMR config |
| No CHOKIDAR_USEPOLLING set | Adds to .env and Docker run env |
| package.json missing start or dev script | Reports error (cannot auto-fix) |
| requirements.txt missing | Reports warning |
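One of the auto-fixes above (injecting host: '0.0.0.0' into vite.config.ts) can be sketched as a string splice. This is an assumption about the approach, not project_validator.py's actual code:

```python
import re

# Illustrative sketch of the "missing host" auto-fix from the table above:
# splice host: '0.0.0.0' into vite.config.ts if absent. The real validator's
# implementation may parse the config differently.

def ensure_host(config: str) -> str:
    if "host:" in config:
        return config  # already configured, leave untouched
    if "server:" in config:
        # Add host as the first key of the existing server block.
        return re.sub(r"server:\s*\{", "server: {\n    host: '0.0.0.0',", config, count=1)
    # No server block at all: append one before the closing of defineConfig.
    return config.replace("})", "  server: { host: '0.0.0.0' },\n})", 1)

fixed = ensure_host("export default defineConfig({\n})")
```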
File: shared/sandbox/dev_server.py
Handles framework-specific start commands:
| Framework | Detection | Start Command |
|---|---|---|
| Vite / React / Vue | vite in package.json | npm run dev |
| Next.js | "next" in dependencies | npm run dev |
| Express / Node | "express" or node entry | npm start |
| FastAPI | fastapi in requirements | uvicorn main:app --host 0.0.0.0 --port {port} |
| Flask | flask in requirements | flask run --host 0.0.0.0 --port {port} |
| Django | django in requirements | python manage.py runserver 0.0.0.0:{port} |
The dev server stores the detected working directory in self._detected_working_dir, which is checked against common frontend subdirectories (frontend_code/, frontend/, client/, web/, app/, ui/).
Security & isolation:
- All endpoints require authentication (verify_token)
- Access is scoped by session_id; no cross-session access
- /workspace is cleaned before the container is returned to the pool

Troubleshooting:

| Symptom | Likely Cause | Fix |
|---|---|---|
| Stage 4 (Install) times out | Large node_modules or slow network | Increase INSTALL_TIMEOUT_S env var |
| Stage 7 (LaunchBackend) times out | App doesn't expose /health or / | Add a /health endpoint or ensure the app binds to 0.0.0.0 |
| Preview shows blank page | Vite base path mismatch | Check Stage 8 log; confirm vite.config.ts was patched |
| HMR not working | CHOKIDAR_USEPOLLING not set | Ensure project_validator.py ran successfully |
| "Cannot connect to sandbox app on port 5173" | Frontend container stale | Rebuild frontend container: docker compose build frontend |