EmberSpark — Open Source AI Agent Framework
EmberSpark is an open-source, local-first AI agent runtime built on bounded autonomy. Deploy with Docker in minutes — Apache 2.0 licensed, with LangGraph execution, BudgetGuard limits, and four LLM providers including local Ollama.
“Open by default. Bounded by design.”
Get EmberSpark
One docker compose up. Apache 2.0. Open source on GitHub.
EmberSpark
by Veilfire
Run with Docker
git clone https://github.com/Veilfire/EmberSpark.git
cd EmberSpark
docker compose up
Runs in the foreground; the first run builds the image (~5–10 min). The web UI is then available at http://localhost:7777.
Credentials Are Displayed Once
On first start, EmberSpark prints a one-time credentials banner to the console for the web UI sign-in. Save the credentials immediately. If you miss them, re-display the banner with docker compose logs -f spark.
Lifecycle
docker compose up -d # background
docker compose logs -f spark # tail logs / re-show credentials
docker compose down # stop, KEEP volumes
docker compose down --volumes # stop + WIPE state
Persistent Volumes
Two named volumes: spark-state for credentials and logs; spark-data for SQLite, Chroma, and deliverables. Bind address and IP allowlist are configured via deploy/docker/spark.yaml.
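As a sketch of what that configuration might look like (the key names below are illustrative only, not the actual schema — check deploy/docker/spark.yaml in the repository for the real keys):

```yaml
# deploy/docker/spark.yaml — illustrative keys only; see the repo for the real schema
server:
  bind: 0.0.0.0        # listen address for the web UI
  port: 7777
  ip_allowlist:
    - 127.0.0.1        # local operator access
    - 192.168.1.0/24   # example LAN range
```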
Agentic AI Risk Notice
Agentic AI operates with varying levels of autonomy and carries unique risks comparable to allowing an unknown user to perform tasks on your computer. By downloading and using EmberSpark, you acknowledge these risks and accept full responsibility for any actions taken by the agent. Veilfire provides no warranty and assumes no liability for damages arising from autonomous AI operations. Use autonomy levels at your own risk.
What Makes EmberSpark Different
Most open source AI agent frameworks treat safety as a sticker. EmberSpark treats it as a design constraint.
Bounded Autonomy
Closed by default. Every tool declares its permissions, inputs, outputs, and sensitivity. BudgetGuard caps iterations, model calls, tool calls, tokens, wall-clock, and cost.
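The idea behind BudgetGuard-style caps can be sketched in a few lines of plain Python. The names below are illustrative, not EmberSpark's actual API — just the shape of the mechanism: every unit of work is charged against declared limits, and exceeding any limit halts the run.

```python
from dataclasses import dataclass

@dataclass
class Budget:
    """Illustrative caps in the spirit of BudgetGuard; not EmberSpark's real API."""
    max_iterations: int = 25
    max_tool_calls: int = 100
    max_cost_usd: float = 5.00
    iterations: int = 0
    tool_calls: int = 0
    cost_usd: float = 0.0

    def charge(self, iterations=0, tool_calls=0, cost_usd=0.0):
        """Record work done and halt the run if any cap is exceeded."""
        self.iterations += iterations
        self.tool_calls += tool_calls
        self.cost_usd += cost_usd
        if (self.iterations > self.max_iterations
                or self.tool_calls > self.max_tool_calls
                or self.cost_usd > self.max_cost_usd):
            raise RuntimeError("budget exhausted: halting agent run")

budget = Budget(max_iterations=2)
budget.charge(iterations=1)   # fine
budget.charge(iterations=1)   # fine: exactly at the cap
# a third charge(iterations=1) would raise RuntimeError
```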
Mandatory OS Sandbox
Tool execution runs inside Bubblewrap, sandbox-exec, or nsjail. Defense in depth, not best effort. Bundled into the Docker image so you don't have to install or configure it separately.
Local-First by Design
Native Ollama support means you can run agents end-to-end without sending data to any cloud provider. Perfect for regulated environments, privacy-sensitive workflows, and offline use.
17 Built-in Plugins
Filesystem, HTTP, shell, SQLite, web search, PDF reader, Git, and more. All declarative, all sandboxed, all opt-in. Extend with your own plugins using a simple Python interface.
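To give a feel for the declarative model, here is a sketch of what a plugin declaration could look like. The field names and run signature are hypothetical — consult the repository for EmberSpark's real plugin interface — but the point stands: capabilities are data the runtime can inspect before any code executes.

```python
from dataclasses import dataclass, field

@dataclass
class PluginSpec:
    """Hypothetical declaration shape; EmberSpark's real interface may differ."""
    name: str
    category: str
    sensitivity: str                        # e.g. "low", "medium", "high"
    permissions: list = field(default_factory=list)
    sandboxed: bool = True                  # runs inside the OS sandbox

def run(spec: PluginSpec, args: dict) -> dict:
    # A real plugin would validate args against its declared inputs here.
    return {"plugin": spec.name, "ok": True}

pdf_reader = PluginSpec(
    name="pdf_reader",
    category="I/O",
    sensitivity="low",
    permissions=["fs:read"],
)
```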
3-Tier Memory
Task memory (ephemeral), session memory (SQLite), and long-term memory (Chroma vector store). Tenant-isolated, policy-checked, and searchable.
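A minimal sketch of the first two tiers, using a plain dict for ephemeral task memory and an in-memory SQLite table for session memory (the Chroma long-term tier is elided; table and key names are illustrative, not EmberSpark's schema):

```python
import sqlite3

# Tier 1: task memory — plain dict, discarded when the task ends
task_memory = {"current_step": "draft_outline"}

# Tier 2: session memory — SQLite, survives across tasks in a session
session_db = sqlite3.connect(":memory:")
session_db.execute("CREATE TABLE memory (tenant TEXT, key TEXT, value TEXT)")
session_db.execute(
    "INSERT INTO memory VALUES (?, ?, ?)",
    ("tenant-a", "user_name", "Alice"),
)

# Tenant isolation: every query is scoped by tenant id
row = session_db.execute(
    "SELECT value FROM memory WHERE tenant = ? AND key = ?",
    ("tenant-a", "user_name"),
).fetchone()

# Tier 3 (long-term) would embed the value and upsert it into the vector store.
```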
Privacy-by-Default Redaction
detect-secrets and Microsoft Presidio scrub API keys, credentials, PII, and named entities from agent context before any data reaches the LLM provider.
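A simplified stand-in for that redaction pass (the real pipeline uses detect-secrets and Presidio, which cover far more detectors; this regex-only sketch just illustrates the scrub-before-send idea):

```python
import re

# Illustrative patterns only; detect-secrets/Presidio detect far more.
PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9]{20,}"), "<API_KEY>"),   # OpenAI-style keys
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "<SSN>"),     # US SSN shape
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
]

def redact(text: str) -> str:
    """Scrub sensitive spans before the context reaches an LLM provider."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

clean = redact("contact alice@example.com, key sk-abcdefghijklmnopqrstuv")
```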
Choose Your Model
Four LLM providers, including local Ollama for fully offline agents.
OpenAI
GPT-4 series, GPT-5 series, etc.
Anthropic
Claude Opus, Sonnet, Haiku
OpenRouter
Unified access to 100+ models
Ollama (local)
Run agents fully offline. No cloud dependency for inference.
Web UI Built In
Run docker compose up and get a full operator console at localhost:7777 with one-time generated credentials.
Scheduler
Cron-style task scheduling with cycle detection
Chat
Interactive agent conversations from the browser
Cost Tracking
Per-task budgets and live cost telemetry
Security Center
Audit log review, redaction configuration, and run replay
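The scheduler's cycle detection amounts to finding a cycle in the task dependency graph. A minimal depth-first sketch (plain Python, not EmberSpark's internals):

```python
def has_cycle(deps):
    """Return True if the task dependency graph contains a cycle.

    deps maps a task name to the list of tasks it depends on;
    every task appears as a key.
    """
    visiting, done = set(), set()

    def visit(task):
        if task in done:
            return False
        if task in visiting:
            return True  # back edge: we depend on something already on the stack
        visiting.add(task)
        if any(visit(dep) for dep in deps.get(task, [])):
            return True
        visiting.remove(task)
        done.add(task)
        return False

    return any(visit(task) for task in deps)
```

Rejecting cyclic schedules up front means a cron task can never deadlock waiting on its own output.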
How EmberSpark Fits with Veilfire
EmberSpark and the Veilfire Platform share the same safety philosophy. EmberSpark is the open-source proving ground; the platform is what ships to enterprise production.
| Capability | EmberSpark (Open Source) | Veilfire Platform (Enterprise) |
|---|---|---|
| Agent runtime | Single-agent, local-first | Multi-agent fleets via Ember |
| Sandboxing | OS-level (Bubblewrap / sandbox-exec / nsjail) | OS + container + cluster isolation |
| Policy enforcement | Sandbox + BudgetGuard limits | Lens runtime enforcement (<2ms p95) |
| Safety evaluation | DIY via tests | Insight 5-pillar evaluation framework |
| Audit trail | Local JSONL | Cryptographic Merkle chain with KMS signing |
| Compliance reporting | Self-managed | EU AI Act, NIST AI RMF, ISO 42001, SOC 2, AIDA |
| License | Apache 2.0 | Commercial |
Quick Start Guide
From docker compose up to your first running agent task in five steps.
Clone the repository
Pull the EmberSpark repo and switch into the project directory.
git clone https://github.com/Veilfire/EmberSpark.git
cd EmberSpark
Start with Docker Compose
Builds the image (~5–10 min on first run) and brings up the runtime, sandbox, and web UI in one command.
docker compose up
Save the one-time credentials
On first start, EmberSpark prints a credentials banner to the console — DISPLAYED ONCE. Copy them now. If you miss them, re-show with: docker compose logs -f spark
# look for the banner in the docker compose output
# format: "DISPLAYED ONCE; save them now"
Open the web UI
The operator console is bound to 0.0.0.0:7777 by default. Sign in with the credentials from step 3.
open http://localhost:7777 # or visit in your browser
Configure your AI provider
From the web UI, add an API key for OpenAI, Anthropic, or OpenRouter — or point to a local Ollama instance for fully offline inference. Then run your first agent task from the included examples.
# in the web UI:
# Settings → Providers → add API key
# Tasks → Run example → weekly-digestPlugin Reference
Every plugin declares its sensitivity, sandbox requirements, and permissions. Below is a representative slice of the 17 built-ins.
| Plugin | Category | Sandbox Notes |
|---|---|---|
| filesystem | I/O | Path allowlist, read-only by default |
| http | Network | Domain allowlist, HMAC where supported |
| shell | Execution | Sandboxed via Bubblewrap / sandbox-exec |
| sqlite | Storage | Per-task DB scope |
| web_search | Network | Provider-pluggable |
| pdf_reader | I/O | Local file ingestion only |
| git | Execution | Repo allowlist required |
| +10 more | Various | See repository for the full list |
Who Is EmberSpark For?
- Developers and researchers exploring agentic AI without enterprise overhead
- Teams that need local-first inference for privacy, regulated environments, or offline work
- Engineers who want bounded autonomy as a design constraint, not a sticker
- Open source contributors who want to study or extend a real agent runtime