Purple8 Sandbox Architecture

Component: shared/sandbox/
Gateway router: services/gateway/routers/sandbox.py
Status: ✅ Production Ready


Overview

The Purple8 Sandbox provides fully isolated, multi-container Docker environments for running generated projects in real time. Each session gets its own dedicated container set, an automated 8-stage launch pipeline, and a live preview proxy routed through the API gateway.


Architecture

┌──────────────────────────────────────────────────────────────────────┐
│                      SANDBOX SESSION LIFECYCLE                       │
│                                                                      │
│  1. POST /api/sandbox/sessions/from-project/{project_id}             │
│     ├─ Auth: verify_token (JWT required)                             │
│     ├─ ProjectValidator auto-fixes vite.config.ts / package.json     │
│     ├─ Detects required runtimes (Python, Node, Postgres, Redis)     │
│     ├─ Acquires containers from pool (multi-container)               │
│     └─ Deploys files to /workspace via tar                           │
│                                                                      │
│  2. POST /api/sandbox/sessions/{id}/launch  (SSE stream)             │
│     └─ Runs SandboxPipeline (8 stages, events streamed to client)    │
│                                                                      │
│  3. GET  /api/sandbox/sessions/{id}/preview/...  (proxy)             │
│     └─ Transparently proxies to container's live dev server          │
│                                                                      │
│  4. DELETE /api/sandbox/sessions/{id}                                │
│     └─ Terminates all containers, cleans /workspace                  │
└──────────────────────────────────────────────────────────────────────┘

Container Pool

File: shared/sandbox/container_pool.py

The pool pre-warms containers on startup and reuses them across sessions.

RuntimeType Enum

class RuntimeType(str, Enum):
    PYTHON_312  = "python312"    # python:3.12-slim
    NODE_20     = "node20"       # node:20-alpine
    POSTGRES_15 = "postgres15"   # postgres:15
    REDIS_7     = "redis7"       # redis:7

Pool Sizes

| Runtime | Pool | Network |
|---------|------|---------|
| python:3.12-slim | 4 containers | purple8-platform_8-network |
| node:20-alpine | 4 containers | purple8-platform_8-network |
| postgres:15 | 2 containers | purple8-platform_8-network |
| redis:7 | 2 containers | purple8-platform_8-network |

Note: Vector search and knowledge graph operations are handled by the platform’s Purple8 Graph Engine (port 8100), not per-session containers. Purple8 Graph replaced all legacy vector databases (Qdrant) and provides vector similarity search (/v1/search/vector), hybrid vector+graph search (/v1/search/hybrid), and graph traversal (/v1/traverse) as platform-level services shared across all sessions.

All containers share the internal Docker bridge network. Containers communicate by IP address (container.attrs["NetworkSettings"]["Networks"][...]["IPAddress"]).
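The acquire/release cycle and the IP lookup can be sketched as follows. The ContainerPool class here is illustrative, not the real container_pool.py API (which manages one pool per RuntimeType and creates containers via the Docker SDK):

```python
from collections import deque
from typing import Any, Callable

NETWORK = "purple8-platform_8-network"

def container_ip(attrs: dict, network: str = NETWORK) -> str:
    # The same lookup the doc describes:
    # container.attrs["NetworkSettings"]["Networks"][...]["IPAddress"]
    return attrs["NetworkSettings"]["Networks"][network]["IPAddress"]

class ContainerPool:
    """Pre-warm `size` containers via `create`, hand them out, reuse them."""

    def __init__(self, create: Callable[[], Any], size: int):
        self._create = create
        self._idle = deque(create() for _ in range(size))  # pre-warmed on startup

    def acquire(self) -> Any:
        # Reuse an idle container; create a fresh one if the pool is drained.
        return self._idle.popleft() if self._idle else self._create()

    def release(self, container: Any) -> None:
        self._idle.append(container)
```

In the real pool, `create` would wrap something like `docker.from_env().containers.run(image, "sleep infinity", network=NETWORK, detach=True)` so each container idles until a session claims it.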


Session Management

File: shared/sandbox/manager.py

ContainerRole Enum

class ContainerRole(str, Enum):
    PRIMARY    = "primary"     # Main execution container
    FRONTEND   = "frontend"    # Frontend dev server
    BACKEND    = "backend"     # Backend API server
    DATABASE   = "database"    # PostgreSQL
    CACHE      = "cache"       # Redis

SandboxSession

Each session tracks multiple containers:

@dataclass
class SandboxSession:
    session_id: str
    containers: Dict[ContainerRole, ContainerInfo]   # all containers
    status: SessionStatus
    ...

    # Backward-compat properties (return first/primary container):
    @property
    def container_id(self) -> str: ...
    @property
    def container_ip(self) -> str: ...

ContainerInfo

@dataclass
class ContainerInfo:
    role: ContainerRole
    container_id: str
    container_ip: str
    runtime_type: RuntimeType
    status: str         # "running", "stopped", "error"
    port: Optional[int] # exposed port (if applicable)

Key Methods

| Method | Description |
|--------|-------------|
| create_session(runtime) | Single-container session (simple projects) |
| create_multi_container_session(roles_config) | Multi-container session |
| get_session(session_id) | Look up session by ID |
| terminate_session(session_id) | Stop all containers, clean /workspace |
| _cleanup_container_files(container_id) | Remove /workspace contents |
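The elided backward-compat properties on SandboxSession can be filled in with a small self-contained sketch (illustrative; the real manager.py may resolve the "first" container differently):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict

class ContainerRole(str, Enum):
    PRIMARY = "primary"
    FRONTEND = "frontend"
    BACKEND = "backend"

@dataclass
class ContainerInfo:
    role: ContainerRole
    container_id: str
    container_ip: str

@dataclass
class SandboxSession:
    session_id: str
    containers: Dict[ContainerRole, ContainerInfo] = field(default_factory=dict)

    def _first(self) -> ContainerInfo:
        # Prefer the PRIMARY container; otherwise the first one registered.
        if ContainerRole.PRIMARY in self.containers:
            return self.containers[ContainerRole.PRIMARY]
        return next(iter(self.containers.values()))

    @property
    def container_id(self) -> str:
        return self._first().container_id

    @property
    def container_ip(self) -> str:
        return self._first().container_ip
```

Single-container callers keep using session.container_id / session.container_ip unchanged, while multi-container code addresses session.containers[role] directly.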

8-Stage Launch Pipeline

File: shared/sandbox/Sandbox_pipeline.py

The pipeline is SSE-streamed to the frontend via POST /api/sandbox/sessions/{id}/launch.

Stage 1: DetectEntryPoint
  → Inspects /workspace for: package.json, main.py, app.py, manage.py,
    server.py, index.js, index.ts, requirements.txt, pyproject.toml
  → Sets ctx.entry_file, ctx.project_type ("node" | "python")

Stage 2: MaterializeFiles
  → Uploads all project files to /workspace via docker cp / tar stream
  → Detects and uses subdirectories: frontend_code/, backend_code/,
    frontend/, backend/, client/, web/, app/, ui/

Stage 3: GenerateEnvVars
  → Injects PORT, BACKEND_PORT, DATABASE_URL, REDIS_URL
  → Sets CORS_ORIGINS to include the preview proxy URL
  → Writes .env file into /workspace

Stage 4: InstallDependencies
  → Node: npm install (uses package-lock.json if present)
  → Python: pip install -r requirements.txt (or pyproject.toml)
  → Timeout: 120 s

Stage 5: PostInstallChecks
  → Node: verifies node_modules/ exists and is non-empty
  → Python: verifies pip list succeeds

Stage 6: SyntaxCheck
  → Node: node --check <entry_file>
  → Python: python -m py_compile <entry_file>
  → Fails fast on syntax errors before attempting to start

Stage 7: LaunchBackend
  → Runs the backend start command (uvicorn, flask run, etc.)
  → Polls http://{container_ip}:{backend_port}/health (or /) for up to 30 s
  → Marks backend as ready when HTTP 200 received

Stage 8: LaunchFrontend
  → Patches vite.config.ts: sets base = "/api/sandbox/sessions/{sid}/preview/"
  → Runs npm run dev (or npm start)
  → Sets CHOKIDAR_USEPOLLING=true (Docker filesystem watch)
  → Polls http://{container_ip}:{fe_port}/ for up to 45 s
  → Reports ready with preview URL
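Stage 3 (GenerateEnvVars) can be sketched as a pure function. The credentials, database name, and default ports below are assumptions for illustration, not the pipeline's actual values:

```python
def generate_env(session_id: str, db_ip: str, redis_ip: str,
                 port: int = 5173, backend_port: int = 8000) -> str:
    # Builds the .env contents Stage 3 writes into /workspace.
    preview_base = f"/api/sandbox/sessions/{session_id}/preview/"
    lines = [
        f"PORT={port}",
        f"BACKEND_PORT={backend_port}",
        f"DATABASE_URL=postgresql://postgres:postgres@{db_ip}:5432/app",
        f"REDIS_URL=redis://{redis_ip}:6379/0",
        f"CORS_ORIGINS={preview_base}",   # include the preview proxy URL
        "CHOKIDAR_USEPOLLING=true",       # Docker filesystem watch (Stage 8)
    ]
    return "\n".join(lines) + "\n"
```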

Each stage emits an SSE event:

data: {"stage": "InstallDependencies", "status": "running", "log": "npm install..."}
data: {"stage": "InstallDependencies", "status": "complete", "duration_ms": 8234}
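Events in that wire format can be produced by a small generator like this sketch (the stage interface is an assumption; in the gateway, the generator would feed a text/event-stream response):

```python
import json
import time

def sse_event(stage: str, status: str, **extra) -> str:
    # One pipeline event in SSE wire format: a `data:` line plus blank line.
    payload = {"stage": stage, "status": status, **extra}
    return f"data: {json.dumps(payload)}\n\n"

async def run_pipeline(stages, ctx):
    # Emit a "running" event, execute the stage, then a "complete" event
    # carrying the elapsed time, mirroring the sample events above.
    for stage in stages:
        yield sse_event(stage.name, "running")
        started = time.monotonic()
        await stage.run(ctx)
        yield sse_event(stage.name, "complete",
                        duration_ms=int((time.monotonic() - started) * 1000))
```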

Purple8 Graph Journey Tracking

Every pipeline run is tracked as a journey in the Purple8 Graph Engine via JourneyBridge (same integration pattern used by phase_based_executor.py):

| Lifecycle Event | JourneyBridge Call | Data Captured |
|-----------------|--------------------|---------------|
| Pipeline starts | bridge.start(session_id, project_name) | Session + project metadata |
| Each stage completes | bridge.advance(phase_id, status) | Stage name, status, elapsed time |
| Pipeline finishes | bridge.finish(final_status, metadata) | Total time, stages succeeded/failed |

This creates a full execution graph in Purple8 Graph that can be queried for analytics, debugging, and pipeline optimization.
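The lifecycle table maps onto the pipeline roughly as follows (a sketch; JourneyBridge's real signatures live in the Purple8 Graph integration and may differ):

```python
def run_with_journey(bridge, stages, session_id: str, project_name: str) -> str:
    # Pipeline starts -> bridge.start(...)
    bridge.start(session_id, project_name)
    failed = 0
    for stage in stages:
        try:
            stage.run()
            bridge.advance(stage.name, "complete")  # each stage completes
        except Exception:
            failed += 1
            bridge.advance(stage.name, "failed")
    final = "success" if failed == 0 else "failed"
    # Pipeline finishes -> bridge.finish(...)
    bridge.finish(final, {"stages_failed": failed})
    return final
```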


Live Preview Proxy

File: services/gateway/routers/sandbox.py
Endpoint: GET /api/sandbox/sessions/{sid}/preview/{path:path}

How It Works

The Vite dev server (like the other supported frameworks' dev servers) uses a base path configuration so that all asset URLs, HMR WebSocket connections, and API calls go through /api/sandbox/sessions/{sid}/preview/.

Browser  →  GET /api/sandbox/sessions/{sid}/preview/src/main.js
         →  Proxy rewrites to:
            GET http://{container_ip}:{port}/api/sandbox/sessions/{sid}/preview/src/main.js
         ←  Returns file with correct MIME type

The proxy forwards each request to the container's dev server at http://{container_ip}:{port}, preserving the full request path, and returns the response with its original status code, headers, and MIME type.
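The key invariant is that the path is forwarded verbatim, which a small helper makes explicit (illustrative; the actual router builds the URL inline):

```python
from typing import Optional
from urllib.parse import urlencode

def upstream_url(container_ip: str, port: int, sid: str, path: str,
                 query: Optional[dict] = None) -> str:
    # The dev server runs with base='/api/sandbox/sessions/{sid}/preview/',
    # so the browser's full path is forwarded verbatim, not stripped.
    full_path = f"/api/sandbox/sessions/{sid}/preview/{path}"
    qs = f"?{urlencode(query)}" if query else ""
    return f"http://{container_ip}:{port}{full_path}{qs}"
```

In the gateway, this URL would be fetched with an async HTTP client and the body streamed back to the browser with its original Content-Type.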

Vite Config Injection

Stage 8 (LaunchFrontend) patches vite.config.ts before starting:

// Injected automatically by Sandbox_pipeline.py Stage 8
export default defineConfig({
  base: '/api/sandbox/sessions/{session_id}/preview/',
  server: {
    host: '0.0.0.0',
    port: 5173,
    hmr: {
      overlay: false,
    },
  },
})
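The patch step can be sketched as a string rewrite (illustrative; Stage 8's real patcher may handle more config shapes than this regex does):

```python
import re

def patch_vite_base(config_src: str, session_id: str) -> str:
    base = f"/api/sandbox/sessions/{session_id}/preview/"
    if re.search(r"\bbase\s*:", config_src):
        # Replace an existing base option in place.
        return re.sub(r"base\s*:\s*['\"][^'\"]*['\"]",
                      f"base: '{base}'", config_src)
    # Otherwise inject base as the first option of defineConfig.
    return config_src.replace("defineConfig({",
                              f"defineConfig({{\n  base: '{base}',", 1)
```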

Project Validator

File: services/project_validator.py

Runs automatically before files are deployed to a container. Validates and auto-fixes:

| Issue | Auto-Fix |
|-------|----------|
| vite.config.ts missing host: '0.0.0.0' | Injects server config block |
| vite.config.ts missing hmr.overlay: false | Injects HMR config |
| No CHOKIDAR_USEPOLLING set | Adds to .env and Docker run env |
| package.json missing start or dev script | Reports error (cannot auto-fix) |
| requirements.txt missing | Reports warning |
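The non-fixable check in the table can be sketched like this (the message text is illustrative, not project_validator.py's real output):

```python
import json

def validate_package_json(raw: str) -> list:
    # Mirrors the "missing start or dev script" row: reported as an
    # error rather than auto-fixed, since the right command is unknown.
    scripts = json.loads(raw).get("scripts", {})
    if not ({"start", "dev"} & set(scripts)):
        return ["package.json has no 'start' or 'dev' script (cannot auto-fix)"]
    return []
```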

Dev Server Abstraction

File: shared/sandbox/dev_server.py

Handles framework-specific start commands:

| Framework | Detection | Start Command |
|-----------|-----------|---------------|
| Vite / React / Vue | vite in package.json | npm run dev |
| Next.js | "next" in dependencies | npm run dev |
| Express / Node | "express" or node entry | npm start |
| FastAPI | fastapi in requirements | uvicorn main:app --host 0.0.0.0 --port {port} |
| Flask | flask in requirements | flask run --host 0.0.0.0 --port {port} |
| Django | django in requirements | python manage.py runserver 0.0.0.0:{port} |

The dev server stores the detected working directory in self._detected_working_dir, which is checked against common frontend subdirectories (frontend_code/, frontend/, client/, web/, app/, ui/).
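The detection table can be condensed into a single function as a sketch (illustrative; dev_server.py's real detection also inspects entry files and resolves the working directory):

```python
import json
from typing import Optional

def detect_start_command(package_json: Optional[str],
                         requirements: Optional[str],
                         port: int = 8000) -> str:
    if package_json:
        pkg = json.loads(package_json)
        deps = {**pkg.get("dependencies", {}), **pkg.get("devDependencies", {})}
        if "vite" in deps or "next" in deps:
            return "npm run dev"          # Vite / React / Vue / Next.js
        return "npm start"                # Express / plain Node
    reqs = (requirements or "").lower()
    if "fastapi" in reqs:
        return f"uvicorn main:app --host 0.0.0.0 --port {port}"
    if "flask" in reqs:
        return f"flask run --host 0.0.0.0 --port {port}"
    if "django" in reqs:
        return f"python manage.py runserver 0.0.0.0:{port}"
    raise ValueError("could not detect framework")
```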


Security


Troubleshooting

| Symptom | Likely Cause | Fix |
|---------|--------------|-----|
| Stage 4 (Install) times out | Large node_modules or slow network | Increase INSTALL_TIMEOUT_S env var |
| Stage 7 (LaunchBackend) times out | App doesn't expose /health or / | Add a /health endpoint or ensure the app binds to 0.0.0.0 |
| Preview shows blank page | Vite base path mismatch | Check the Stage 8 log and confirm vite.config.ts was patched |
| HMR not working | CHOKIDAR_USEPOLLING not set | Ensure project_validator.py ran successfully |
| "Cannot connect to sandbox app on port 5173" | Frontend container stale | Rebuild the frontend container: docker compose build frontend |