Synaps Architecture

A control plane for serious agent work. PostgreSQL is the source of truth. NATS is the fast transport. Workers execute inside your cluster.

System Context

Synaps sits between operators, model providers, and your object store. All durable state lives in PostgreSQL. NATS JetStream carries wakeups and control invalidations only.

flowchart TB
    subgraph External["External Systems"]
        Users["Users / Operators"]
        CI["CI / CD Pipelines"]
        Providers["Model Providers: OpenAI, Kimi"]
        S3["Object Store: S3-compatible"]
    end
    subgraph Synaps["Synaps Control Plane"]
        API["Control API: Fastify / Node"]
        Worker["Worker Runtime: Go"]
        Hasura["Hasura: GraphQL API"]
    end
    subgraph Data["Data & Messaging"]
        PG[("PostgreSQL: Source of Truth")]
        NATS["NATS JetStream: Wakeups & Control"]
    end
    subgraph K8s["Kubernetes"]
        KubeAPI["K8s API"]
        Pods["Agent Pods"]
    end
    Users -->|"HTTP / REST"| API
    Users -->|"GraphQL"| Hasura
    CI -->|"HTTP / REST"| API
    API -->|"SQL"| PG
    API -->|"Publish"| NATS
    Worker -->|"Subscribe"| NATS
    Worker -->|"SQL"| PG
    Worker -->|"HTTP"| Providers
    Worker -->|"S3 API"| S3
    Worker -->|"Read-only"| KubeAPI
    Hasura -->|"SQL"| PG
    KubeAPI -->|"Status"| Pods
            

Data Flow

A command becomes a run. A run becomes tasks. Tasks become queue items. Workers claim, execute, and complete — appending events to the ledger.

sequenceDiagram
    autonumber
    participant Client as Client
    participant API as Control API
    participant PG as PostgreSQL
    participant NATS as NATS JetStream
    participant Worker as Worker
    participant Provider as Model Provider
    participant S3 as Object Store
    Client->>API: POST /v1/commands/submit
    API->>PG: INSERT command, thread, run, task, queue_item, outbox
    API->>Client: 201 Created (runId)
    API->>NATS: Publish outbox wakeup
    NATS->>Worker: Deliver wakeup
    Worker->>PG: claim_queue_items()
    PG-->>Worker: Claimed task
    Worker->>PG: SELECT control snapshot
    Worker->>PG: SELECT execution plan
    Worker->>Provider: Send prompt
    Provider-->>Worker: Generated content
    Worker->>S3: Upload artifact
    Worker->>PG: INSERT artifact, artifact_version
    Worker->>PG: INSERT completion event
    Worker->>PG: UPDATE queue_item completed
    Worker->>PG: UPDATE run status
    Client->>API: GET /v1/runs/{runId}
    API->>PG: SELECT run + tasks + artifacts
    API-->>Client: Run with full history
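
The INSERT-then-publish steps above are the transactional-outbox pattern. A minimal Go sketch, assuming a hypothetical submit_command() SQL helper that wraps the five inserts and an assumed outbox.wakeup subject:

package synaps

import (
    "context"
    "database/sql"

    "github.com/nats-io/nats.go"
)

// submitCommand sketches the write path from the sequence above.
// submit_command() is a hypothetical SQL helper standing in for the
// inserts (command, thread, run, task, queue_item, outbox); the
// point is that they commit in ONE transaction, so a crash before
// the NATS publish loses nothing -- the outbox relay replays it.
func submitCommand(ctx context.Context, db *sql.DB, nc *nats.Conn, payload []byte) (string, error) {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return "", err
    }
    defer tx.Rollback() // no-op after a successful Commit

    var runID string
    if err := tx.QueryRowContext(ctx,
        `SELECT submit_command($1)`, payload).Scan(&runID); err != nil {
        return "", err
    }
    if err := tx.Commit(); err != nil {
        return "", err
    }

    // Best-effort wakeup after commit; subject name is an assumption.
    // If this publish is lost, the relay or an idle poll recovers it.
    _ = nc.Publish("outbox.wakeup", []byte(runID))
    return runID, nil
}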
            

Control Plane

Operators change prompts, skills, tools, routes, and arbiter policies through the API. Changes propagate to workers via NATS without restart.

flowchart LR
    subgraph Registry["Control Registry"]
        Prompts["Prompts"]
        Skills["Skills / Bundles"]
        Tools["Tools"]
        Routes["Routing Rules"]
        Arbiters["Arbiter Policies"]
    end
    subgraph Runtime["Runtime"]
        API["Control API"]
        Worker["Worker"]
    end
    subgraph Storage["Storage"]
        PG[("PostgreSQL")]
        NATS["NATS JetStream"]
    end
    Operator["Operator"] -->|"POST /v1/control/..."| API
    API -->|"INSERT control_change"| PG
    API -->|"Publish ctrl.*"| NATS
    NATS -->|"Subscribe ctrl.*"| Worker
    Worker -->|"SELECT snapshot"| PG
    Worker -->|"Reload control"| Worker
    PG -->|"SELECT effective snapshot"| Worker
            

Deployment Topology

Synaps deploys as a single Helm release into a Kubernetes namespace. PostgreSQL and NATS run as StatefulSets with persistent volumes.

flowchart TB
    subgraph NS["Namespace: synaps-runtime"]
        subgraph API["Control API"]
            APIDepl["Deployment: synaps-control-api<br/>Replicas: 1"]
            APISvc["Service: synaps-control-api<br/>Port: 8081"]
        end
        subgraph Workers["Workers"]
            WorkerDepl["Deployment: synaps-demo-worker<br/>Replicas: 1"]
        end
        subgraph DataLayer["Data Layer"]
            PGSts["StatefulSet: synaps-postgresql<br/>PVC: 100Gi"]
            NATSSts["StatefulSet: synaps-nats<br/>PVC: 60Gi"]
            HasuraDepl["Deployment: synaps-hasura<br/>Replicas: 1"]
        end
    end
    Ingress["Ingress / Traefik"] --> APISvc
    APIDepl --> APISvc
    PGSts --> APIDepl
    NATSSts --> APIDepl
    PGSts --> WorkerDepl
    NATSSts --> WorkerDepl

Component Responsibilities

Control API

Fastify/Node service. Command submission, queue claim/renew/complete, outbox relay to NATS, live control mutations, run inspection, health probes.
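
The outbox relay deserves a sketch. The real relay is part of the Fastify/Node service; this Go version, kept in Go for consistency with the other examples and with an assumed outbox schema and subject name, only shows the shape of one pass:

package synaps

import (
    "context"
    "database/sql"

    "github.com/nats-io/nats.go"
)

// relayOutboxOnce publishes one pending outbox row as a wakeup.
func relayOutboxOnce(ctx context.Context, db *sql.DB, nc *nats.Conn) error {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // no-op after Commit

    var id int64
    var payload []byte
    err = tx.QueryRowContext(ctx, `
        SELECT id, payload FROM outbox
        WHERE published_at IS NULL
        ORDER BY id
        FOR UPDATE SKIP LOCKED
        LIMIT 1`).Scan(&id, &payload)
    if err == sql.ErrNoRows {
        return nil // nothing pending
    }
    if err != nil {
        return err
    }

    // Publish before marking. A crash between the two replays the
    // wakeup on the next pass, which is harmless: wakeups are hints.
    if err := nc.Publish("outbox.wakeup", payload); err != nil {
        return err
    }
    if _, err := tx.ExecContext(ctx,
        `UPDATE outbox SET published_at = now() WHERE id = $1`, id); err != nil {
        return err
    }
    return tx.Commit()
}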

Worker

Go runtime. Claims queue items, executes against model providers, generates and uploads artifacts, reports progress, reloads control snapshots on change.
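
A skeleton of that loop. claim_queue_items() is taken from this document, but its arguments, returned columns, and the execute/upload/record helpers are assumptions for the sketch:

package synaps

import (
    "context"
    "database/sql"
    "time"
)

// runWorker claims one item at a time and drives it to completion.
func runWorker(ctx context.Context, db *sql.DB, workerID string) error {
    for ctx.Err() == nil {
        var itemID int64
        var task []byte
        err := db.QueryRowContext(ctx,
            `SELECT item_id, task FROM claim_queue_items($1, $2)`,
            workerID, 1).Scan(&itemID, &task)
        if err == sql.ErrNoRows {
            time.Sleep(time.Second) // idle backoff; wakeups cut this short in practice
            continue
        }
        if err != nil {
            return err
        }
        artifact := execute(task)         // call the model provider
        uploadArtifact(ctx, artifact)     // push the result to the object store
        recordCompletion(ctx, db, itemID) // completion event, queue_item, run status
    }
    return ctx.Err()
}

// Stand-ins for the real steps.
func execute(task []byte) []byte                                 { return task }
func uploadArtifact(ctx context.Context, artifact []byte)        {}
func recordCompletion(ctx context.Context, db *sql.DB, id int64) {}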

PostgreSQL

Source of truth for all durable state. Core tables: commands, runs, tasks, queue items, ledger events, artifacts, control registry. Queue engine via claim_queue_items().
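
A plausible core for claim_queue_items(), shown as SQL in a Go constant. The table and column names are assumptions; the load-bearing clause is FOR UPDATE SKIP LOCKED, which lets many workers claim concurrently without blocking on each other's candidate rows:

package synaps

// claimSQL sketches the claim step: lock a batch of pending rows,
// skip anything another worker already holds, and stamp a lease.
const claimSQL = `
UPDATE queue_items
SET    status      = 'claimed',
       claimed_by  = $1,
       lease_until = now() + interval '30 seconds'
WHERE  id IN (
    SELECT id
    FROM   queue_items
    WHERE  status = 'pending'
    ORDER  BY created_at
    FOR UPDATE SKIP LOCKED
    LIMIT  $2
)
RETURNING id, task_id;
`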

NATS JetStream

Wakeups only — not a queue. Workers poll PostgreSQL. NATS broadcasts outbox wakeups and control invalidations. JetStream provides at-least-once delivery.
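
A sketch of the worker-side subscription with the nats.go JetStream API; the durable consumer name is an assumption. The handler is safe under at-least-once delivery because a duplicate wakeup only triggers one more PostgreSQL poll:

package synaps

import "github.com/nats-io/nats.go"

// consumeWakeups nudges the claim loop whenever a wakeup arrives.
func consumeWakeups(nc *nats.Conn, poke func()) (*nats.Subscription, error) {
    js, err := nc.JetStream()
    if err != nil {
        return nil, err
    }
    return js.Subscribe("outbox.wakeup", func(m *nats.Msg) {
        poke()  // the message body is irrelevant; state lives in PostgreSQL
        m.Ack() // explicit ack; redelivery on a missed ack is harmless
    }, nats.Durable("worker-wakeups"), nats.ManualAck())
}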

Hasura

Auto-generated GraphQL over PostgreSQL. Real-time subscriptions and JWT-based authorization for UI clients.
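
A minimal query from Go against Hasura's standard /v1/graphql endpoint; the service URL and the runs fields are assumptions about this deployment's schema:

package synaps

import (
    "bytes"
    "context"
    "encoding/json"
    "net/http"
)

// queryRuns fetches recent runs through Hasura. The JWT is verified
// by Hasura, which applies row-level permissions to the query.
func queryRuns(ctx context.Context, token string) (*http.Response, error) {
    body, err := json.Marshal(map[string]string{
        "query": `{ runs(limit: 10, order_by: {created_at: desc}) { id status } }`,
    })
    if err != nil {
        return nil, err
    }
    req, err := http.NewRequestWithContext(ctx, http.MethodPost,
        "http://synaps-hasura:8080/v1/graphql", bytes.NewReader(body))
    if err != nil {
        return nil, err
    }
    req.Header.Set("Content-Type", "application/json")
    req.Header.Set("Authorization", "Bearer "+token)
    return http.DefaultClient.Do(req)
}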

Security Boundaries

  • Worker RBAC — dedicated ServiceAccount with read-only access to pods and events in its namespace. No write access to the K8s API; see the sketch after this list.
  • Secret isolation — provider API keys and object store credentials live in Kubernetes Secrets, never in PostgreSQL.
  • Network privacy — external access is only through Ingress to the Control API and Hasura. All internal traffic stays inside the cluster.
  • Audited control changes — every prompt, skill, tool, route, and arbiter change is versioned in control_change. Workers compute effective snapshots from the registry.
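
The first boundary is easy to see in code. A sketch of the only kind of call the worker's ServiceAccount permits, using the standard client-go in-cluster flow (the function itself is illustrative; nothing here is Synaps-specific):

package synaps

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

// podPhases reads pod status in the worker's namespace. With the
// RBAC above, list and watch succeed, while any write call is
// rejected by the API server with a 403.
func podPhases(ctx context.Context, namespace string) (map[string]string, error) {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        return nil, err
    }
    pods, err := clientset.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
    if err != nil {
        return nil, err
    }
    phases := make(map[string]string, len(pods.Items))
    for _, p := range pods.Items {
        phases[p.Name] = string(p.Status.Phase)
    }
    return phases, nil
}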