Veilfire
Open Source AI Agent Runtime

EmberSpark — Open Source AI Agent Framework


EmberSpark is an open-source, local-first AI agent runtime built on bounded autonomy. Deploy with Docker in minutes — Apache 2.0 licensed, with LangGraph execution, BudgetGuard limits, and four LLM providers including local Ollama.

“Open by default. Bounded by design.”

Get EmberSpark

One docker compose up. Apache 2.0. Open source on GitHub.


EmberSpark

by Veilfire

Apache 2.0 · Docker Compose · Linux · macOS · Windows

Run with Docker

git clone https://github.com/Veilfire/EmberSpark.git
cd EmberSpark
docker compose up

Runs in the foreground; the first run builds the image (~5–10 min). The web UI binds to 0.0.0.0:7777 — open http://localhost:7777 in your browser.

Credentials Are Displayed Once

On first start, EmberSpark prints a one-time credentials banner to the console for the web UI sign-in. Save the credentials immediately. If you miss them, re-display them with docker compose logs -f spark.

Lifecycle

docker compose up -d            # background
docker compose logs -f spark    # tail logs / re-show credentials
docker compose down             # stop, KEEP volumes
docker compose down --volumes   # stop + WIPE state

Persistent Volumes

Two named volumes: spark-state for credentials and logs; spark-data for SQLite, Chroma, and deliverables. Bind address and IP allowlist are configured via deploy/docker/spark.yaml.
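As a rough illustration, a bind-address and allowlist stanza in deploy/docker/spark.yaml might look like the sketch below. The key names here are hypothetical, not taken from the shipped file — check the repository for the real schema:

```yaml
# hypothetical sketch — consult deploy/docker/spark.yaml for the actual keys
server:
  bind: 0.0.0.0        # interface the web UI listens on
  port: 7777
  ip_allowlist:        # only these client addresses may reach the console
    - 127.0.0.1
    - 192.168.1.0/24
```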

Agentic AI Risk Notice

Agentic AI operates with varying levels of autonomy and carries unique risks comparable to allowing an unknown user to perform tasks on your computer. By downloading and using EmberSpark, you acknowledge these risks and accept full responsibility for any actions taken by the agent. Veilfire provides no warranty and assumes no liability for damages arising from autonomous AI operations. Use autonomy levels at your own risk.

Features

What Makes EmberSpark Different

Most open source AI agent frameworks treat safety as a sticker. EmberSpark treats it as a design constraint.

Bounded Autonomy

Closed by default. Every tool declares its permissions, inputs, outputs, and sensitivity. BudgetGuard caps iterations, model calls, tool calls, tokens, wall-clock, and cost.
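The cap-everything idea behind BudgetGuard can be sketched in a few lines of Python. This is an illustrative stand-in, not EmberSpark's actual API — the class name, fields, and default caps are all assumptions:

```python
import time
from dataclasses import dataclass, field

class BudgetExceeded(RuntimeError):
    """Raised when any cap is hit; the run halts instead of drifting."""

@dataclass
class BudgetGuard:
    # hypothetical caps mirroring the ones listed above
    max_iterations: int = 25
    max_tool_calls: int = 50
    max_cost_usd: float = 1.00
    max_wall_secs: float = 300.0
    iterations: int = 0
    tool_calls: int = 0
    cost_usd: float = 0.0
    started: float = field(default_factory=time.monotonic)

    def charge(self, *, iterations=0, tool_calls=0, cost_usd=0.0):
        """Record spend after each step; raise the moment any cap is crossed."""
        self.iterations += iterations
        self.tool_calls += tool_calls
        self.cost_usd += cost_usd
        if (self.iterations > self.max_iterations
                or self.tool_calls > self.max_tool_calls
                or self.cost_usd > self.max_cost_usd
                or time.monotonic() - self.started > self.max_wall_secs):
            raise BudgetExceeded("budget cap reached; halting agent run")

guard = BudgetGuard(max_iterations=3)
guard.charge(iterations=1)  # within budget
guard.charge(iterations=1)  # still within budget
```

The point of the design is that every resource dimension is charged through one choke point, so a runaway loop trips a cap no matter which resource it burns first.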

Mandatory OS Sandbox

Tool execution runs inside Bubblewrap, sandbox-exec, or nsjail. Defense in depth, not best effort. Bundled into the Docker image so you don't have to install or configure it separately.
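To make "OS sandbox" concrete, here is roughly what launching a tool under Bubblewrap looks like. The flags are real bwrap options, but the exact profile EmberSpark applies is an assumption on my part:

```python
def bwrap_argv(tool_cmd, workdir="/work"):
    """Build a Bubblewrap command line: read-only root, no network,
    a private /tmp, and a single writable working directory.
    (Illustrative profile — not EmberSpark's shipped configuration.)"""
    return [
        "bwrap",
        "--ro-bind", "/", "/",        # whole FS visible, but read-only
        "--bind", workdir, workdir,   # only the task dir is writable
        "--tmpfs", "/tmp",            # fresh private /tmp per run
        "--unshare-net",              # no network unless a tool declares it
        "--die-with-parent",          # kill the tool if the runtime exits
        "--",
    ] + list(tool_cmd)

argv = bwrap_argv(["python3", "tool.py"])
# subprocess.run(argv) would then execute the tool inside the sandbox
```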

Local-First by Design

Native Ollama support means you can run agents end-to-end without sending data to any cloud provider. Perfect for regulated environments, privacy-sensitive workflows, and offline use.

17 Built-in Plugins

Filesystem, HTTP, shell, SQLite, web search, PDF reader, Git, and more. All declarative, all sandboxed, all opt-in. Extend with your own plugins using a simple Python interface.
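The declarative, opt-in plugin model might look something like the sketch below. The base class and metadata fields are hypothetical — see the repository for EmberSpark's real plugin interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PluginSpec:
    # hypothetical metadata — every plugin declares what it touches
    name: str
    category: str        # "I/O", "Network", "Execution", "Storage", ...
    sensitivity: str     # e.g. "low", "medium", "high"
    permissions: tuple   # capabilities the runtime must explicitly grant

class Plugin:
    spec: PluginSpec

    def run(self, **kwargs):
        raise NotImplementedError

class PdfReader(Plugin):
    spec = PluginSpec(
        name="pdf_reader",
        category="I/O",
        sensitivity="low",
        permissions=("fs:read",),   # local file ingestion only
    )

    def run(self, path: str) -> str:
        # a real implementation would parse the PDF inside the sandbox
        return f"text extracted from {path}"
```

Because the runtime can read the spec before ever loading the plugin body, it can refuse to grant a permission the operator has not opted into.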

3-Tier Memory

Task memory (ephemeral), session memory (SQLite), and long-term memory (Chroma vector store). Tenant-isolated, policy-checked, and searchable.
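A three-tier lookup can be sketched as falling through from the cheapest store to the most durable one. This is an illustrative model only — EmberSpark's real tiers sit on SQLite and Chroma rather than plain dicts:

```python
class TieredMemory:
    """Check task memory first, then session, then long-term."""

    def __init__(self):
        self.task = {}       # ephemeral, cleared when the task ends
        self.session = {}    # stand-in for the SQLite-backed session store
        self.longterm = {}   # stand-in for the Chroma vector store

    def get(self, key):
        for tier in (self.task, self.session, self.longterm):
            if key in tier:
                return tier[key]
        return None

    def remember(self, key, value, tier="task"):
        {"task": self.task, "session": self.session,
         "longterm": self.longterm}[tier][key] = value

mem = TieredMemory()
mem.remember("user_tz", "UTC", tier="session")
mem.get("user_tz")  # miss in task memory, hit in session → "UTC"
```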

Privacy-by-Default Redaction

detect-secrets and Microsoft Presidio scrub API keys, credentials, PII, and named entities from agent context before any data reaches the LLM provider.
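As a simplified stand-in for the detect-secrets / Presidio pipeline, redaction amounts to scanning the outbound context and replacing matches with typed placeholders before the provider call. The two patterns below are illustrative, not the real rule set:

```python
import re

# illustrative patterns — the real pipeline uses detect-secrets and Presidio
PATTERNS = {
    "API_KEY": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def redact(text: str) -> str:
    """Replace each match with a typed placeholder before the LLM call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

redact("contact ops@example.com, key sk-abcdefghijklmnopqrstu")
# → "contact <EMAIL>, key <API_KEY>"
```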

Providers

Choose Your Model

Four LLM providers, including local Ollama for fully offline agents.

OpenAI

GPT-4 series, GPT-5 series, etc.

Anthropic

Claude Opus, Sonnet, Haiku

OpenRouter

Unified access to 100+ models

Ollama (local)

Run agents fully offline. No cloud dependency for inference.

Operator Console

Web UI Built In

Run docker compose up and get a full operator console at localhost:7777 with one-time generated credentials.

Scheduler

Cron-style task scheduling with cycle detection

Chat

Interactive agent conversations from the browser

Cost Tracking

Per-task budgets and live cost telemetry

Security Center

Audit log review, redaction config, and run replay
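The scheduler's cycle detection above keeps task A from depending on B, which depends back on A. The standard approach is a depth-first search over the dependency graph; the dependency-map shape here is a hypothetical stand-in for whatever the scheduler stores internally:

```python
def has_cycle(deps):
    """deps maps task -> list of tasks it depends on.
    DFS with three colors: a GRAY node seen again is a back edge."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {task: WHITE for task in deps}

    def visit(task):
        color[task] = GRAY
        for dep in deps.get(task, []):
            state = color.get(dep, WHITE)
            if state == GRAY:                  # back edge → cycle
                return True
            if state == WHITE and visit(dep):
                return True
        color[task] = BLACK
        return False

    return any(visit(t) for t in deps if color[t] == WHITE)

has_cycle({"digest": ["fetch"], "fetch": []})  # acyclic schedule
has_cycle({"a": ["b"], "b": ["a"]})            # a ↔ b cycle
```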

Open Source vs. Enterprise

How EmberSpark Fits with Veilfire

EmberSpark and the Veilfire Platform share the same safety philosophy. EmberSpark is the open-source proving ground; the platform is what ships to enterprise production.

Capability           | EmberSpark (Open Source)                      | Veilfire Platform (Enterprise)
Agent runtime        | Single-agent, local-first                     | Multi-agent fleets via Ember
Sandboxing           | OS-level (Bubblewrap / sandbox-exec / nsjail) | OS + container + cluster isolation
Policy enforcement   | Sandbox + BudgetGuard limits                  | Lens runtime enforcement (<2 ms p95)
Safety evaluation    | DIY via tests                                 | Insight 5-pillar evaluation framework
Audit trail          | Local JSONL                                   | Cryptographic Merkle chain with KMS signing
Compliance reporting | Self-managed                                  | EU AI Act, NIST AI RMF, ISO 42001, SOC 2, AIDA
License              | Apache 2.0                                    | Commercial
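The open-source audit trail in the table is local JSONL: one JSON object appended per line, per event. A minimal sketch — the field names are assumptions, not EmberSpark's actual schema:

```python
import json
import time

def audit_append(path, event):
    """Append one audit event as a single JSON line.
    (Illustrative; the enterprise tier replaces this with a Merkle chain.)"""
    record = {"ts": time.time(), **event}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

audit_append("/tmp/audit.jsonl",
             {"tool": "http", "action": "GET", "allowed": True})
```

JSONL keeps the log append-only and greppable, at the cost of offering no tamper evidence — which is exactly the gap the Merkle-chain column addresses.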
Getting Started

Quick Start Guide

From docker compose up to your first running agent task in five steps.

1

Clone the repository

Pull the EmberSpark repo and switch into the project directory.

git clone https://github.com/Veilfire/EmberSpark.git
cd EmberSpark
2

Start with Docker Compose

Builds the image (~5–10 min on first run) and brings up the runtime, sandbox, and web UI in one command.

docker compose up
3

Save the one-time credentials

On first start, EmberSpark prints a credentials banner to the console — DISPLAYED ONCE. Copy them now. If you miss them, re-show with: docker compose logs -f spark

# look for the banner in the docker compose output
# format: "DISPLAYED ONCE; save them now"
4

Open the web UI

The operator console binds to 0.0.0.0:7777 by default; from the host machine, visit http://localhost:7777. Sign in with the credentials from step 3.

open http://localhost:7777   # or visit in your browser
5

Configure your AI provider

From the web UI, add an API key for OpenAI, Anthropic, or OpenRouter — or point to a local Ollama instance for fully offline inference. Then run your first agent task from the included examples.

# in the web UI:
# Settings → Providers → add API key
# Tasks → Run example → weekly-digest
Plugins

Plugin Reference

Every plugin declares its sensitivity, sandbox requirements, and permissions. Below is a representative slice of the 17 built-ins.

Plugin     | Category  | Sandbox Notes
filesystem | I/O       | Path allowlist, read-only by default
http       | Network   | Domain allowlist, HMAC where supported
shell      | Execution | Sandboxed via Bubblewrap / sandbox-exec
sqlite     | Storage   | Per-task DB scope
web_search | Network   | Provider-pluggable
pdf_reader | I/O       | Local file ingestion only
git        | Execution | Repo allowlist required
+10 more   | Various   | See repository for the full list

Who Is EmberSpark For?

  • Developers and researchers exploring agentic AI without enterprise overhead
  • Teams that need local-first inference for privacy, regulated environments, or offline work
  • Engineers who want bounded autonomy as a design constraint, not a sticker
  • Open source contributors who want to study or extend a real agent runtime