Introduction to OmniDaemon
Welcome to OmniDaemon! This page will help you understand what OmniDaemon is, how it works, and whether it’s the right tool for your use case.
What is OmniDaemon?
OmniDaemon is a universal event-driven runtime engine specifically designed for AI agents. Think of it as “Kubernetes for AI Agents”: it provides the infrastructure layer that makes AI agents autonomous, observable, and scalable.
The Simple Explanation
Imagine you have AI agents that need to:
- Run continuously in the background (not just respond to HTTP requests)
- React to events happening across your system
- Work together with other agents
- Process tasks reliably (with retries if something fails)
- Scale up when there’s more work to do
Core Concepts (5-Minute Read)
1. Event-Driven Architecture
Traditional AI systems follow a request/response model: nothing happens until someone asks. OmniDaemon inverts this with an event-driven architecture:
- Agents run autonomously (don’t need someone to ask them)
- Multiple agents can react to the same event
- System is more resilient (failures don’t break everything)
- Easy to add new agents without changing existing ones
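The benefits above come from decoupling publishers from subscribers. Here is a minimal sketch of the idea in plain Python; this is an illustration of the pattern, not OmniDaemon’s actual API:

```python
from collections import defaultdict

# Minimal event bus: multiple handlers can subscribe to the same topic,
# and publishers never need to know who is listening.
class MiniBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Every subscriber reacts to the same event independently;
        # adding a new agent is just another subscribe() call.
        for handler in self._subs[topic]:
            handler(event)

bus = MiniBus()
seen = []
bus.subscribe("doc.uploaded", lambda e: seen.append(("summarizer", e)))
bus.subscribe("doc.uploaded", lambda e: seen.append(("indexer", e)))
bus.publish("doc.uploaded", {"id": 42})
```

Note how the publisher is unchanged when a second agent subscribes — that is exactly why new agents can be added without touching existing ones.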
2. Topics and Subscriptions
Agents subscribe to topics (like email distribution lists).
3. Agent Runners
An agent runner is your Python script that:
- Registers one or more agents
- Starts listening for events
- Runs until you stop it (Ctrl+C)
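The three steps above can be sketched as a simple loop. The class and method names here are illustrative stand-ins, not OmniDaemon’s real API; a real runner would block on the event bus and stop on Ctrl+C rather than on a sentinel message:

```python
import queue

# Sketch of an agent-runner loop: register one or more agents,
# then consume events until told to stop.
class Runner:
    STOP = ("__stop__", None)

    def __init__(self):
        self.agents = {}             # topic -> handler function
        self.events = queue.Queue()  # stand-in for the real event bus

    def register(self, topic, handler):
        self.agents[topic] = handler

    def run(self):
        while True:
            item = self.events.get()
            if item == self.STOP:    # real runners stop on Ctrl+C instead
                break
            topic, payload = item
            if topic in self.agents:
                self.agents[topic](payload)

results = []
runner = Runner()
runner.register("task.created", lambda p: results.append(p.upper()))
runner.events.put(("task.created", "hello"))
runner.events.put(Runner.STOP)
runner.run()
```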
4. The Event Bus
The event bus is like a highway for messages. It:
- Delivers events from publishers to agents
- Ensures messages aren’t lost
- Handles retries if agents fail
- Load balances across multiple agent instances
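The retry behavior is worth seeing concretely. Below is a sketch of retry-then-give-up delivery; the function name and retry accounting are illustrative (the document states a default of 3 retries before a message goes to the DLQ, covered in section 7 below):

```python
# Sketch of retry-then-DLQ delivery. max_retries=3 mirrors the documented
# default; whether the first attempt counts toward it is an implementation
# detail, so treat this as the shape of the logic, not the exact semantics.
def deliver(handler, message, max_retries=3):
    dlq = []
    for attempt in range(1, max_retries + 1):
        try:
            return handler(message), dlq
        except Exception:
            if attempt == max_retries:
                dlq.append(message)  # give up: park it for inspection
    return None, dlq

calls = []
def flaky(msg):
    calls.append(msg)
    raise RuntimeError("agent failed")

result, dlq = deliver(flaky, {"task": "summarize"})
```

After the retries are exhausted, the message is not lost — it sits in the DLQ where an operator can inspect and replay it.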
5. Storage
OmniDaemon stores:
- Agent Registry: Which agents are registered
- Results: Outputs from your agents (kept for 24 hours)
- Metrics: How many tasks processed, failed, timing info
- Configuration: System settings
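The 24-hour retention of results amounts to time-to-live expiry. Here is an in-memory sketch of that behavior; it is a stand-in for illustration, not OmniDaemon’s real storage backend:

```python
import time

# Sketch of result storage with a 24-hour TTL. The injectable clock
# lets the example (and tests) advance time deterministically.
class ResultStore:
    TTL_SECONDS = 24 * 60 * 60

    def __init__(self, clock=time.time):
        self._clock = clock
        self._results = {}

    def put(self, task_id, result):
        self._results[task_id] = (result, self._clock() + self.TTL_SECONDS)

    def get(self, task_id):
        entry = self._results.get(task_id)
        if entry is None:
            return None
        result, expires_at = entry
        if self._clock() >= expires_at:  # expired: behave as if gone
            del self._results[task_id]
            return None
        return result

now = [0.0]
store = ResultStore(clock=lambda: now[0])
store.put("task-1", {"summary": "ok"})
fresh = store.get("task-1")
now[0] += ResultStore.TTL_SECONDS + 1    # jump past the 24h window
expired = store.get("task-1")
```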
6. Consumer Groups
When you run multiple instances of the same agent (for scaling), they form a consumer group.
7. Dead Letter Queue (DLQ)
If an agent fails repeatedly (default: 3 retries), the message goes to the DLQ.
When to Use OmniDaemon
✅ Great For
1. Background AI Processing
❌ Not Great For
1. Simple HTTP APIs
How OmniDaemon Compares
| Feature | Celery | AWS Lambda | Temporal | OmniDaemon |
|---|---|---|---|---|
| Purpose | Task queues | Serverless | Workflows | AI Agents |
| AI-First | ❌ No | ❌ No | ❌ No | ✅ Yes |
| Event-Driven | ✅ Yes | ⚠️ Partial | ⚠️ Partial | ✅ Yes |
| Setup Complexity | 🔴 High | 🟡 Medium | 🔴 High | 🟢 Low |
| Framework Agnostic | ✅ Yes | ✅ Yes | ⚠️ Partial | ✅ Yes |
| Horizontal Scaling | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Agent Abstraction | ❌ No | ❌ No | ❌ No | ✅ Yes |
| Pluggable Backends | ⚠️ Limited | ❌ No | ❌ No | ✅ Yes |
| Built-in Metrics | ⚠️ Basic | ✅ CloudWatch | ✅ Yes | ✅ Yes |
| DLQ | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| Cold Starts | N/A | 🔴 Yes | N/A | 🟢 No |
| Vendor Lock-in | 🟢 No | 🔴 Yes | ⚠️ Partial | 🟢 No |
System Requirements
Minimum Requirements
For Development:
- Python 3.9 or higher
- 4 GB RAM
- Redis (can run in Docker)
For Production:
- Python 3.9 or higher
- 8+ GB RAM (depends on number of agents)
- Redis (recommended: 16+ GB RAM for production)
- Linux (Ubuntu 20.04+, CentOS 7+, or similar)
Supported Platforms
- ✅ Linux (Ubuntu, CentOS, Debian, Fedora, etc.)
- ✅ macOS (Intel and Apple Silicon)
- ✅ Windows (via WSL2)
- ✅ Docker (any platform)
Event Bus Backends
Supported today (✅) or in progress (🚧):
- ✅ Redis Streams (6.0+)
- 🚧 Apache Kafka (2.8+)
- 🚧 RabbitMQ (3.8+)
- 🚧 NATS JetStream (2.9+)
Storage Backends
Supported today (✅) or in progress (🚧):
- ✅ JSON (file-based, for development)
- ✅ Redis (6.0+, for production)
- 🚧 PostgreSQL (12+)
- 🚧 MongoDB (4.4+)
- 🚧 Amazon S3 (for results storage)
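Supporting several interchangeable backends like the lists above usually comes down to a registry pattern: each implementation registers under a name, and configuration selects one at startup. The sketch below shows the generic pattern; the class names and backend keys are illustrative, not OmniDaemon’s actual configuration:

```python
# Generic backend-registry pattern: pick an implementation by config string,
# so swapping storage is a configuration change, not a code change.
class JsonStorage:
    def save(self, key, value):
        return f"json:{key}"

class RedisStorage:
    def save(self, key, value):
        return f"redis:{key}"

BACKENDS = {"json": JsonStorage, "redis": RedisStorage}

def make_storage(name):
    try:
        return BACKENDS[name]()
    except KeyError:
        raise ValueError(f"unknown storage backend: {name}")

dev = make_storage("json").save("result", {})
prod = make_storage("redis").save("result", {})
```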
Architecture Overview
Here’s how OmniDaemon fits into your system.
What Makes OmniDaemon Different?
1. AI-First Design
OmniDaemon was built specifically for AI agents, not adapted from general task queues. This means:
- First-class support for any AI framework
- Built-in patterns for agent collaboration
- Optimized for long-running AI tasks
- Metrics and observability for AI workloads
2. Pluggable Everything
Swap backends without changing code.
3. Framework Agnostic
Use ANY AI framework:
- OmniCore Agent
- Google ADK
- LangChain
- AutoGen
- CrewAI
- LlamaIndex
- Or plain Python functions!
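Framework agnosticism usually means normalizing everything to a callable behind one interface. The adapter sketch below is hypothetical — the function and class names are invented for illustration and do not come from OmniDaemon or any of the frameworks listed:

```python
# Framework-agnostic adapter sketch: anything callable (or anything we can
# wrap into a callable) can serve as an agent.
def as_agent(obj):
    if callable(obj):
        return obj                 # plain Python function: use it directly
    if hasattr(obj, "run"):
        return obj.run             # framework-style agent object
    raise TypeError("cannot adapt object into an agent")

def plain_function_agent(payload):
    return payload["text"].upper()

class FrameworkishAgent:           # stand-in for a framework's agent class
    def run(self, payload):
        return payload["text"][::-1]

agents = [as_agent(plain_function_agent), as_agent(FrameworkishAgent())]
outputs = [agent({"text": "abc"}) for agent in agents]
```

Once adapted, the runtime treats both the same way — it only ever sees a callable that takes a payload.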
4. Production Ready
Built-in:
- ✅ Automatic retries
- ✅ Dead letter queue
- ✅ Metrics tracking
- ✅ Health checks
- ✅ Horizontal scaling
- ✅ Beautiful CLI
- ✅ REST API
- ✅ Graceful shutdown
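Of the features above, graceful shutdown is the least obvious: the point is to finish the in-flight task before exiting rather than dying mid-task. A minimal sketch of that logic (names invented for illustration; a real service would wire `request_shutdown` to SIGINT/SIGTERM via the `signal` module):

```python
# Graceful-shutdown sketch: a flag is set on shutdown request and checked
# only *between* tasks, so the current task always completes.
class Worker:
    def __init__(self):
        self.shutting_down = False
        self.processed = []

    def request_shutdown(self, signum=None, frame=None):
        # Signature matches a signal handler; here we call it directly
        # so the example stays deterministic.
        self.shutting_down = True

    def run(self, tasks):
        for task in tasks:
            if self.shutting_down:
                break
            self.processed.append(task)
            if task == "stop-after-me":   # simulate Ctrl+C arriving mid-run
                self.request_shutdown()

worker = Worker()
worker.run(["a", "stop-after-me", "never-runs"])
```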
5. Developer Experience
- 📖 Clear documentation (you’re reading it!)
- 🎨 Beautiful CLI with Rich
- 🔍 Easy debugging
- 📊 Real-time metrics
- 🚀 Quick to get started
Next Steps
Ready to dive in?
- Quick Start Tutorial - Build your first agent in 10 minutes
- Core Concepts - Deep dive into EDA
- Complete Examples - See real-world implementations