
Sanctum Runtime

Runtime trust infrastructure for autonomous AI

Sanctum provides runtime trust between AI reasoning and real-world execution — for agents, robotics, automation, and embodied systems.

v0.1 — open source (MIT). Self-host from GitHub: copy .env.example to .env, then run npm run dev:runtime. Hosted console: console.sanctumruntime.com. Enterprise boundary: open core.

Introduction

Artificial intelligence is beginning to move beyond screens.

The next generation of AI will not simply answer questions or generate content. It will:

  • move through homes
  • operate machinery
  • interact with families
  • access physical environments
  • assist workers
  • make decisions in real time
  • and increasingly influence the physical world

As AI becomes embodied, cybersecurity becomes physical security.

This changes everything.

The current AI ecosystem was largely built around cloud-first architectures designed for software assistants and web-based models. But physical AI introduces a different class of risk:

  • unauthorized actions
  • unsafe autonomy
  • prompt injection
  • remote manipulation
  • cloud dependency
  • privacy exposure
  • and unverified real-world execution

Sanctum exists to solve this problem.

We are building runtime trust infrastructure for autonomous AI systems.

Why Sanctum Exists

Most AI infrastructure today focuses on:

  • intelligence
  • reasoning
  • model performance
  • generation quality
  • and scalability

Very few systems focus on trust architecture.

Yet the moment AI gains the ability to:

  • unlock a door
  • move through a building
  • operate equipment
  • interact with vulnerable individuals
  • access sensitive systems
  • or make physical decisions

trust becomes the most important layer in the stack.

Sanctum was created around a simple principle:

Powerful AI systems require trustworthy runtime infrastructure.

Not fear. Not restriction. Not anti-AI narratives. Trust.

What Sanctum Is

Sanctum is a local-first runtime trust layer for autonomous AI systems — agents, robotics, automation, and embodied platforms.

It acts as a secure control layer between:

  • AI models
  • physical devices
  • users
  • cloud services
  • and real-world actions

Sanctum is designed to help developers build AI systems that are:

  • observable
  • permission-aware
  • auditable
  • resilient
  • and safer to deploy

Phase 1

AI Behavioral Firewall + Action Permission SDK

Rather than attempting to solve every problem in synthetic infrastructure immediately, Sanctum begins with a focused and practical foundation:

  • behavioral monitoring
  • action verification
  • runtime observability
  • local-first protection
  • and offline-capable execution

This allows developers and companies to:

  • ship real deployments
  • build safer agents
  • establish trust with users
  • reduce operational risk
  • and prepare for the next generation of robotics systems

This first phase is intentionally designed to:

  • accelerate developer adoption
  • support enterprise security requirements
  • enable robotics integrations
  • and establish the foundation for long-term synthetic infrastructure

The Problem

AI Can Think. But Can You Trust It?

Modern AI systems can already:

  • access APIs
  • write code
  • browse the web
  • operate applications
  • control devices
  • and chain actions autonomously

As these systems become connected to robotics platforms, humanoids, drones, industrial systems, and smart environments, new categories of risk emerge.

Cloud Dependence

Many AI systems rely heavily on remote infrastructure.

This creates:

  • availability risks
  • latency concerns
  • privacy exposure
  • and centralized points of failure

In physical environments, unstable connectivity should never compromise safety.

Prompt Injection and Behavioral Manipulation

AI systems can be manipulated through:

  • indirect instructions
  • malicious prompts
  • hostile external inputs
  • or unsafe command chaining

Without runtime verification, AI may execute actions outside expected behavioral boundaries.

Unsafe Physical Actions

Physical AI introduces real-world consequences.

An AI system should not, without context-aware authorization, automatically:

  • unlock secured areas
  • control machinery
  • access sensitive infrastructure
  • initiate payments
  • or interact with vulnerable individuals

Lack of Observability

Most AI systems today lack sufficient runtime transparency.

Developers and enterprises need:

  • audit trails
  • execution visibility
  • approval flows
  • behavioral logs
  • and runtime verification

Without observability, trust becomes difficult to establish.

The Sanctum Runtime

Sanctum Runtime is the operational trust layer that sits between AI reasoning and physical execution.

It intercepts actions before execution and evaluates:

  • permissions
  • behavioral anomalies
  • execution context
  • policy rules
  • offline state
  • and risk conditions

The result is a safer and more controllable AI runtime environment.

Core Runtime Components

Action Verification

Before an AI system performs a sensitive action, Sanctum evaluates whether the action should proceed.

Example

AI request:

Unlock front door.

Sanctum evaluates:

  • permission level
  • requesting identity
  • execution context
  • device trust state
  • anomaly score
  • and local policy

The action is then approved, denied, escalated, or held for verification.
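
The evaluation above can be sketched as a single decision function. This is a minimal illustration, not Sanctum's actual engine: the factor names, the thresholds, and the HELD and ESCALATED decision values are assumptions for this sketch.

```typescript
// Illustrative decision function for an action request (not the real engine).
// Factor names, thresholds, and decision values are assumptions for this sketch.
type Decision = "APPROVED" | "DENIED" | "ESCALATED" | "HELD";

interface ActionRequest {
  actor: string;
  action: string;
  permissionLevel: number;   // 0 = none .. 3 = full
  deviceTrusted: boolean;
  anomalyScore: number;      // 0.0 (normal) .. 1.0 (highly anomalous)
  policyAllows: boolean;
}

function evaluate(req: ActionRequest): Decision {
  if (!req.policyAllows) return "DENIED";          // local policy wins outright
  if (!req.deviceTrusted) return "HELD";           // hold until the device re-attests
  if (req.anomalyScore > 0.8) return "ESCALATED";  // suspicious: require a human
  if (req.permissionLevel < 2) return "DENIED";    // insufficient permission
  return "APPROVED";
}

// "Unlock front door" from a trusted device, normal behavior, policy allows:
const decision = evaluate({
  actor: "home-agent",
  action: "unlock_door",
  permissionLevel: 2,
  deviceTrusted: true,
  anomalyScore: 0.1,
  policyAllows: true,
});
// decision === "APPROVED"
```

The ordering matters: hard policy denials are checked before softer signals, so an anomalous request can never be approved by accident.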

Behavioral Monitoring

Sanctum continuously evaluates runtime behavior patterns.

The platform can detect:

  • unusual command sequences
  • unsafe escalation patterns
  • suspicious remote instructions
  • prompt injection attempts
  • policy violations
  • and anomalous execution behavior

The goal is not to block intelligence. The goal is to establish trusted autonomy.
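
One simple pattern in this family, a sensitive action chained shortly after untrusted external input, can be sketched as a toy detector. The rule, window size, and action names are illustrative assumptions; the real detectors are part of the private intelligence layer.

```typescript
// Toy behavioral monitor: flags unsafe escalation patterns in a command stream.
// Action sets, the window size, and the rule itself are illustrative only.
const SENSITIVE = new Set(["unlock_door", "initiate_payment", "disable_alarm"]);
const EXTERNAL_INPUT = new Set(["browse_web", "read_email", "fetch_url"]);

interface Finding { index: number; reason: string; }

function scanSequence(actions: string[]): Finding[] {
  const findings: Finding[] = [];
  let taintedSince = -1; // index of the most recent untrusted external input
  actions.forEach((action, i) => {
    if (EXTERNAL_INPUT.has(action)) taintedSince = i;
    // A sensitive action shortly after untrusted input is a classic
    // prompt-injection signature: external content steering execution.
    if (SENSITIVE.has(action) && taintedSince >= 0 && i - taintedSince <= 3) {
      findings.push({ index: i, reason: `"${action}" follows untrusted input` });
    }
  });
  return findings;
}

const findings = scanSequence(["read_email", "summarize", "unlock_door"]);
// findings.length === 1: the unlock is flagged for verification
```

A flagged action is not necessarily blocked; in the model above it would typically be escalated or held for verification rather than denied outright.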

Offline Runtime Protection

Sanctum is designed with local-first principles.

AI systems should continue operating safely even when disconnected from cloud infrastructure.

Sanctum supports:

  • local execution
  • offline verification
  • local policy enforcement
  • local audit storage
  • and disconnected fallback modes

This architecture improves resilience, latency, reliability, and privacy.
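
The disconnected-fallback mode can be sketched as follows, assuming a stand-in remoteVerify and a hypothetical offline allow-list; the SDK's real offline behavior may differ.

```typescript
// Sketch of a disconnected-fallback pattern (illustrative, not the SDK's code).
// remoteVerify and the offline allow-list below are stand-ins for this example.
type Decision = "APPROVED" | "DENIED";

// Conservative local policy used when the cloud is unreachable:
// anything not explicitly allowed offline is denied, never the reverse.
const OFFLINE_ALLOWED = new Set(["read_sensor", "stop_motion", "status_report"]);

async function remoteVerify(action: string): Promise<Decision> {
  throw new Error("network unreachable"); // simulate a dropped connection
}

async function verifyWithFallback(action: string): Promise<Decision> {
  try {
    return await remoteVerify(action);
  } catch {
    // Fail safe, not open: local enforcement keeps the system operable
    // while restricting it to the pre-approved offline action set.
    return OFFLINE_ALLOWED.has(action) ? "APPROVED" : "DENIED";
  }
}

(async () => {
  console.log(await verifyWithFallback("stop_motion")); // safety action stays available
  console.log(await verifyWithFallback("unlock_door")); // sensitive action is denied
})();
```

The key design choice is that losing connectivity narrows what the system may do instead of widening it.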

Audit Logs

Every significant runtime event can be logged and verified.

Audit trails may include:

  • action requests
  • execution decisions
  • policy evaluations
  • model confidence
  • approval states
  • runtime anomalies
  • and execution timestamps

For enterprise deployments, observability is essential.
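
An audit record might look like the following. The field names mirror the list above but are assumptions for this sketch, not the SDK's actual schema.

```typescript
// Illustrative audit record shape, mirroring the fields listed above.
// Field names are assumptions, not the SDK's actual schema.
interface AuditEvent {
  timestamp: string;          // execution timestamp (ISO 8601)
  actor: string;              // who requested the action
  action: string;             // what was requested
  decision: "APPROVED" | "DENIED" | "ESCALATED" | "HELD";
  policyId: string;           // which policy evaluation produced the decision
  modelConfidence?: number;   // model confidence at request time, if reported
  anomalyScore?: number;      // runtime anomaly signal, if any
}

// Append-only local log: entries are never mutated after being recorded.
const auditLog: AuditEvent[] = [];

function record(event: Omit<AuditEvent, "timestamp">): AuditEvent {
  const entry = { timestamp: new Date().toISOString(), ...event };
  auditLog.push(entry);
  return entry;
}

const entry = record({
  actor: "assistant-agent",
  action: "unlock_door",
  decision: "DENIED",
  policyId: "night-lockdown",
  anomalyScore: 0.2,
});
// auditLog.length === 1; entry.decision === "DENIED"
```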

Why Local-First Matters

As AI systems become more integrated into daily life, users will increasingly expect:

  • private interactions
  • secure execution
  • predictable behavior
  • and control over sensitive data

Local-first architecture helps reduce:

  • external dependency
  • unnecessary data transmission
  • exposure to cloud compromise
  • and centralized control risks

Not every AI decision should require remote infrastructure.

Especially when physical systems are involved.

Example Use Cases

Humanoids

Future humanoids may:

  • assist families
  • support healthcare environments
  • operate in warehouses
  • or interact with children and elderly individuals

These systems require permission boundaries, trusted runtime monitoring, and transparent execution behavior.

AI Assistants

Advanced AI assistants increasingly interact with:

  • APIs
  • calendars
  • communications
  • finances
  • and connected devices

Sanctum helps enforce runtime safety policies before sensitive actions occur.

Industrial Robotics

Industrial systems require low latency, local reliability, and operational resilience.

Sanctum enables policy-aware runtime verification while maintaining local execution capability.

Smart Environments

Connected homes and facilities contain access systems, sensors, automation layers, and operational infrastructure.

Sanctum helps establish trusted AI interaction boundaries across physical environments.

Autonomous Systems

Drones, mobile robots, and autonomous systems require execution validation, behavioral observability, and runtime resilience.

These systems cannot rely entirely on uninterrupted cloud communication.

Architecture Philosophy

Sanctum is designed around several foundational principles.

Local-First

Sensitive runtime logic should remain operational even without cloud connectivity.

Permission-Aware Execution

Physical actions should require explicit policy validation.

Observable Runtime Behavior

Trust requires visibility.

Developers and enterprises should understand what happened, why it happened, and how decisions were evaluated.

Interoperability

Sanctum is being designed to support robotics systems, AI agents, local inference platforms, edge devices, and future embodied systems.

Human-Aligned Infrastructure

The objective is not to limit AI capability. The objective is to help AI systems operate within trustworthy and understandable boundaries.

Technology Stack

Sanctum is intentionally designed with practical, scalable, and developer-friendly infrastructure.

Frontend

Marketing Platform

  • Next.js
  • TailwindCSS
  • Framer Motion
  • shadcn/ui

Dashboard

  • Next.js App Router
  • TypeScript
  • Zustand
  • TanStack Query

Dashboard capabilities include runtime sessions, anomaly logs, action approvals, trust monitoring, offline state visibility, and permission management.

Backend

Core Infrastructure

  • Node.js
  • TypeScript
  • Fastify

Database

  • PostgreSQL
  • Prisma ORM

Realtime Layer

  • WebSockets
  • NATS planned for future distributed runtime messaging

AI Runtime Layer

Sanctum is designed around local inference compatibility.

Supported local runtime integrations include:

  • Ollama
  • llama.cpp
  • DeepSeek local models
  • and Qwen local models

This architecture prioritizes local execution, low latency, privacy, and runtime resilience.

Security Layer

Planned security directions include:

  • JWT authentication
  • device keys
  • Rust-based runtime services
  • and hardware trust integrations

Robotics Compatibility

Phase 1

  • Python SDK
  • Node SDK

Phase 2

  • ROS2 integration
  • NVIDIA Jetson support
  • Raspberry Pi support
  • Unitree support

Architecture

How You Access Sanctum

Sanctum is infrastructure you install into systems — like Stripe, Docker, or Supabase — not an app end users open.

A — Local SDK (Phase 1, primary)

Runs inside your agent, backend, or robotics stack. This is the core experience.

```bash
npm install @sanctum-runtime/sdk
```

B — Local runtime service (roadmap)

Optional daemon on a device: intercepts actions, connects to Ollama, enforces policies — think “Docker daemon for AI trust.”

```bash
npx sanctum init   # coming soon
```

C — Cloud dashboard (optional)

Policy management, logs, and threat views. Visibility and control — not the runtime itself.

Execution flow

  • Before: AI → executes action directly
  • After: AI → Sanctum Runtime → Decision → Execution
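
That flow can be made concrete with a minimal gate wrapper. This is a self-contained stand-in for the SDK's interception (the real calls are protectAgent and verifyAction, shown in the Quick Start), so the verify functions here are hypothetical.

```typescript
// Minimal stand-in for the runtime gate, to make the flow concrete.
// The real SDK calls are protectAgent / verifyAction; this is a local sketch.
type Decision = "APPROVED" | "DENIED";

async function gate<T>(
  action: string,
  verify: (action: string) => Promise<Decision>,
  execute: () => Promise<T>,
): Promise<T | null> {
  const decision = await verify(action);   // AI → Sanctum Runtime → Decision
  if (decision !== "APPROVED") return null; // blocked before any side effect
  return execute();                         // Decision → Execution
}

// Before: the agent would call its effect directly.
// After: the same effect only runs if verification approves it.
const allowAll = async (_: string): Promise<Decision> => "APPROVED";
const denyAll = async (_: string): Promise<Decision> => "DENIED";

(async () => {
  const sent = await gate("send_email", allowAll, async () => "sent");
  const blocked = await gate("send_email", denyAll, async () => "sent");
  console.log(sent, blocked); // the denied call never executes
})();
```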

What Sanctum is not

  • Not a chatbot or standalone consumer app
  • Not a robot controller
  • Not dashboard-first — the runtime is embedded middleware

Quick Start

Goal: first verified action in under five minutes. Install the SDK from npm or use the monorepo workspace — see open core.

1. Start the runtime

```bash
git clone https://github.com/Matik103/sanctum-runtime.git
cd sanctum-runtime
cp .env.example .env   # edit HOST, PORT, DASHBOARD_*, OLLAMA_URL, SITE_*
npm install
npm run dev:runtime
npm run smoke
```

2. Install the SDK

```bash
npm install @sanctum-runtime/sdk @sanctum-runtime/adapter-agent-runtime
```

3. Run the agent example (monorepo)

```bash
npm run example:agent
```

4. Initialize and gate actions

```typescript
import { SanctumRuntime } from "@sanctum-runtime/sdk"
import { protectAgent, AgentActions } from "@sanctum-runtime/adapter-agent-runtime"

const sanctum = new SanctumRuntime({ baseUrl: process.env.SANCTUM_API_URL! })

await protectAgent(sanctum, {
  actor: "workflow-agent",
  action: AgentActions.SEND_EMAIL,
  context: { to: "user@example.com" },
  offlineMode: true,
  execute: async () => sendEmail(),
})

await sanctum.policy("unlock_door", "verify")
```

Direct verify (no execute wrapper)

```typescript
const result = await sanctum.verifyAction({
  actor: "assistant-agent",
  action: "unlock_door",
  context: { time: "02:13 AM", owner_sleeping: true },
})

if (result.decision === "APPROVED") executeAction()
```

Strategy

Open Source & Enterprise

Sanctum uses an open-core model: open infrastructure for adoption, private intelligence and enterprise features for moat and revenue.

Public (open infrastructure)

  • Core runtime SDK — interception, verification, audit, events
  • Basic policy engine — Approve, Verify, Block
  • Local integrations — Ollama, llama.cpp, offline mode
  • Category adapters — agents, ROS2 starters, examples
  • CLI, examples, and this documentation
  • Community dashboard — logs, actions, runtime status

Private (enterprise & intelligence)

  • Advanced threat intelligence and proprietary risk models
  • Fleet orchestration and organization-wide policy sync
  • Advanced analytics, trust scoring, behavioral intelligence
  • Hosted Sanctum Cloud, compliance, and device attestation

Public: how Sanctum works. Private: how Sanctum becomes smarter than competitors.

License: MIT for this repository. Enterprise features may be dual-licensed later. Canonical boundary doc: OPEN_CORE.md.

Runtime Philosophy

Sanctum is not designed around fear.

It is designed around responsibility.

We believe advanced AI systems can become deeply valuable tools across:

  • healthcare
  • manufacturing
  • research
  • home assistance
  • logistics
  • and infrastructure

But intelligence without runtime trust creates operational risk.

The future of AI will not be determined only by model capability.

It will also be determined by transparency, reliability, resilience, observability, and trust.

Long-Term Direction

Sanctum begins with runtime protection and action authorization.

Over time, the broader infrastructure vision expands toward:

  • synthetic identity
  • secure memory systems
  • portable trust layers
  • local cognition infrastructure
  • and trusted autonomy standards

The long-term goal is to help establish the foundational trust architecture required for embodied intelligence.

Not just smarter AI. Trusted AI.

Current Phase

Our immediate focus is clear:

  • build working runtime infrastructure
  • support developers
  • enable local-first deployments
  • establish behavioral observability
  • and create practical trust systems for physical AI

Execution matters.

The future infrastructure layer for embodied intelligence must begin with systems that work reliably today.

Final Principle

The future will not only ask whether AI is intelligent.
It will ask whether AI can be trusted.

Sanctum exists to help answer that question.