orbitsuite-inc/orbitsuite-core

🌌 OrbitSuite

The framework behind what works.
Clarity near clairvoyance.


OrbitSuite is a modular, fault-tolerant AI agent runtime—purpose-built for developer automation, memory continuity, and self-correcting execution loops. It's the backbone powering intelligent, persistent agent workflows with zero handoff and full auditability.


Complete Feature Overview (Enterprise Package features are not included in the Open Core release)

Multi-Modal AI Runtime

  • Universal Inference Layer — Any local model (CPU / GPU / accelerator) or remote provider via pluggable adapter
  • Policy-Based Adaptive Model Router — Cost / latency / reliability / compliance aware routing & automatic failover (utils/model_router.py)
  • Hardware Offload Support — Optional GPU / multi-GPU / accelerator utilization with adaptive partitioning (hardware-dependent)
  • Remote Compute Tunneling — Transparently dispatch heavy workloads to external clusters / federated datacenters
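
The policy-aware routing described above can be sketched as a weighted scoring function. This is a hypothetical illustration, not the actual logic in utils/model_router.py; the Backend fields and weight values are invented for the example.

```python
# Hypothetical sketch of policy-based model routing. The real router in
# utils/model_router.py may use different policies, fields, and weights.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    cost_per_1k: float      # USD per 1k tokens
    avg_latency_ms: float
    success_rate: float     # rolling reliability, 0.0-1.0

def route(backends: list, policy: str = "balanced") -> Backend:
    """Pick a backend by minimizing a policy-weighted score (lower is better)."""
    weights = {
        "cost":        (1.0, 0.0, 0.0),
        "latency":     (0.0, 1.0, 0.0),
        "reliability": (0.0, 0.0, 1.0),
        "balanced":    (0.4, 0.3, 0.3),
    }[policy]
    wc, wl, wr = weights

    def score(b: Backend) -> float:
        # Unreliability (1 - success_rate) enters as a penalty term.
        return wc * b.cost_per_1k + wl * (b.avg_latency_ms / 1000) + wr * (1 - b.success_rate)

    return min(backends, key=score)
```

Automatic failover then reduces to re-running route() over the remaining backends after a failure.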

Specialized Agent System

  • Diverse Agent Types — CodeGen, Tester, Engineer (Core/Professional/Enterprise), Patcher, Security Guard, Task Linguist (Core/Professional/Enterprise), RAG Parser, Entropy Monitor, Sandbox Enforcement, Designer
  • Agent Registry — Dynamic agent discovery and execution with async interfaces (agents/registry/)
  • BaseAgent Framework — Async foundation with MTS context mixing and comprehensive logging (agents/base.py)
  • Orchestration Engine — Multi-agent coordination with task routing and dependency management (agents/orchestrator/)

Mnemonic Token System (MTS)

  • 3-Tiered Memory Architecture — Cache (context_cache) → Buffer (context_buffer) → Pool (context_pool) with automatic lifecycle promotion
  • MemCube Structure — Rich memory units with metadata, versioning, lineage tracking, and usage analytics (memory/memcube.py)
  • TokenCube Spatialization — 3D memory organization by tier/topic/time for efficient retrieval (shared_memory/token_cube.py)
  • Memory Lifecycle Management — Automated promotion, demotion, and pruning with configurable TTL (memory/memory_manager.py)
  • Vector Embeddings — Semantic search using sentence-transformers with CPU-optimized embedding models
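
A minimal sketch of the Cache → Buffer promotion step, assuming a usage-count threshold; the class, field, and method names here are illustrative stand-ins, not the memory_manager.py API.

```python
# Illustrative tier promotion: cubes that are accessed often enough
# graduate from the cache to the buffer. Names are hypothetical.
import time

PROMOTION_THRESHOLD = 5   # accesses before a cube leaves the cache

class TieredMemory:
    def __init__(self):
        self.cache, self.buffer, self.pool = {}, {}, {}

    def add(self, cube_id: str, payload) -> None:
        self.cache[cube_id] = {"payload": payload, "usage_count": 0,
                               "last_accessed": time.time()}

    def touch(self, cube_id: str) -> None:
        """Record an access; promote hot cache entries to the buffer."""
        cube = self.cache.get(cube_id)
        if cube is None:
            return
        cube["usage_count"] += 1
        cube["last_accessed"] = time.time()
        if cube["usage_count"] >= PROMOTION_THRESHOLD:
            self.buffer[cube_id] = self.cache.pop(cube_id)
```

Buffer-to-pool maturation would follow the same shape, keyed on age rather than usage.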

Production Infrastructure

  • FastAPI Backend — RESTful API with memory system endpoints and CORS support (backend/api.py)
  • Next.js Dashboard — React/TypeScript frontend with real-time monitoring (my-orbitsuite-dashboard/)
  • Multi-Database Support — PostgreSQL, Redis, Neo4j, SQLite with connection pooling
  • Environment Management — Standardized .env loading and validation (utils/env_loader.py)

Intelligent Orchestration

  • Supervisor Agent — Self-managing execution loop with retry logic and error recovery (supervisor_class.py)
  • Task Queue System — JSON-based task ingestion with priority scheduling and deduplication
  • MTS Conductor — Memory lifecycle orchestration with background processing (conductor.py)
  • Git Integration — Auto-commits, Gist sync, and diff-based change detection (utils/gist.py)
  • Discord Webhooks — Live notifications for task completion and system alerts (utils/webhook_server.py)

Security & Monitoring

  • Sandbox Enforcement — Isolated execution environments for code generation (agents/sandbox_enforcement.py)
  • Security Guards — Multi-layer security validation for generated code (agents/security_guard.py)
  • Entropy Monitoring — System health tracking and anomaly detection (agents/entropy_monitor.py)
  • Code Protection Policy — Built-in protection against destructive edits with enforcement
  • Comprehensive Logging — Structured logs with agent-specific loggers and performance monitoring
  • Global Log Rotation & Archival — 1MB rotation threshold, aggregated per-agent archives with 10MB rollover & retention (90d)
  • Verbose Policy Audit Trail — Detailed code-protection enforcement events (see config/code_protection_policy.py)
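
The 1MB rotation threshold above can be approximated with the standard library; the project's own pipeline (utils/logrotate.py, archives/archiver.py) is not shown here and the backupCount value below is illustrative.

```python
# Per-agent rotating logger sketch using only the standard library.
import logging
from logging.handlers import RotatingFileHandler

def make_agent_logger(name: str, log_path: str) -> logging.Logger:
    handler = RotatingFileHandler(
        log_path,
        maxBytes=1 * 1024 * 1024,  # rotate at 1MB, matching the stated threshold
        backupCount=5,             # illustrative retention depth
    )
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```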

System Architecture

graph TB
  UI[Next.js Dashboard] --> API[FastAPI Backend]
  API --> SUP[Supervisor Agent]
  SUP --> ORCH[Agent Orchestrator]
    
  ORCH --> CG[CodeGen Agent]
  ORCH --> TEST[Tester Agent]
  ORCH --> ENG[Engineer Agent]
  ORCH --> SEC[Security Guard]
  ORCH --> TL[Task Linguist]
  ORCH --> PATCH[Patcher Agent]
    
  SUP --> COND[MTS Conductor]
  COND --> MEM[Memory System]
  MEM --> CACHE[Context Cache<br/>Fast Memory]
  MEM --> BUF[Context Buffer<br/>Mid-term Queue]
  MEM --> POOL[Context Pool<br/>Long-term Storage]
  MEM --> SCRATCH[Scratchpad<br/>Session Temp]
    
  SUP --> DB[(Multi-DB Layer)]
  DB --> PG[(PostgreSQL)]
  DB --> REDIS[(Redis)]
  DB --> NEO4J[(Neo4j)]
  DB --> SQLITE[(SQLite)]
    
  SUP --> LLM[LLM Router]
  LLM --> LOCAL[Local Runtime<br/>Any Model Engine]
  LLM --> CLOUD[Remote Providers<br/>Any API Adapter]
  LLM --> CLUSTER[Remote Cluster<br/>Tunneled / Federated]
    
  SUP --> SYNC[Git/Gist Sync]
  SUP --> HOOKS[Discord Webhooks]

Memory Architecture (MTS)

┌─────────────────────────────────────────────────────────┐
│                 MemCube Lifecycle                       │
├─────────────────────────────────────────────────────────┤
│  🔥 Context Cache    │ Fast working memory            │
│  📦 Context Buffer   │ Mid-term processing queue      │
│  🌊 Context Pool     │ Long-term semantic storage     │
│  📝 Scratchpad       │ Session-based temp storage     │
└─────────────────────────────────────────────────────────┘
                           ↕️
              Automatic promotion/demotion
                via MemoryManager lifecycle

Quick Start

Prerequisites

  • Python 3.11+ with Poetry package management
  • Node.js 18+ with npm for Next.js dashboard
  • PostgreSQL 14+ (optional, for production database)
  • Redis 6+ (optional, for caching layer)
  • Git (for version control integration)
  • NVIDIA GPU (CUDA package & SDK recommended, for local LLM acceleration)

Installation

# 1. Clone the repository
git clone https://github.com/SyntacticLuster/OrbitSuite.git
cd OrbitSuite/C_O_A/PRODUCTION

# 2. Install Python dependencies
poetry install

# 3. Install Next.js dashboard dependencies
cd my-orbitsuite-dashboard
npm install
cd ..

# 4. Set up environment configuration
cp .env.template .env
# Edit .env with your configuration

Environment Configuration

Create your .env file with the following configuration:

# === LLM CONFIGURATION (Vendor / Model Agnostic) ===
LLM_MODE=auto                       # auto | local | remote
PREFERRED=your_local_model_name     # symbolic name for primary model
FALLBACK=your_remote_model_alias    # symbolic alias for remote provider

# Local / Edge / On-Prem Runtime
LLM_MODEL_PATH=/path/to/model/weights    # GGUF / ONNX / safetensors / TensorRT / custom
# LLM_PARTITION=auto                      # auto | full | <strategy>
# LLM_DEVICE_MAP=auto                     # auto | gpu0,gpu1 | shard spec
LLM_CTX_SIZE=8192                         # Adjust per use case
# LLM_THREADS=<cpu_core_hint>             # Uncomment for CPU inference
# LLM_PARALLELISM=tensor                  # tensor | pipeline | none
# CUDA_VISIBLE_DEVICES=0,1                # Example multi-GPU selection

# Remote / Cloud Provider (Adapter Neutral)
REMOTE_ENDPOINT_URL=https://provider.example/v1
API_KEY_PROVIDER_X=your_api_key           # Repeat pattern per provider
ALLOW_REMOTE_LLM=1
ROUTING_POLICY=balanced                   # balanced | cost | latency | reliability | throughput | compliance
MODEL_ADAPTERS=local,remote               # Enabled adapters

# === MEMORY CONFIGURATION ===
MEMORY_CACHE_TTL=900
MEMORY_BUFFER_TTL=7200
MEMORY_POOL_TTL=86400
PROMOTION_THRESHOLD=5
MAX_POOL_SIZE=100000

# === DATABASE CONFIGURATION ===
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_DB=orbitsuite
POSTGRES_USER=orbitsuite_user
POSTGRES_PASSWORD=your_secure_password

REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password

# === INTEGRATION SETTINGS ===
DISCORD_WEBHOOK_URL=your_discord_webhook_url
DISCORD_POST_SUCCESS=1

# Git/Gist Integration
GIST_TOKEN=your_github_token
TASKS_GIST_URL=https://gist.github.com/your_username/gist_id
DONE_GIST_URL=https://gist.github.com/your_username/done_gist_id

# === FRONTEND CONFIGURATION ===
PYTHON_BACKEND_URL=http://localhost:8000
CORS_ORIGINS=http://localhost:3000,http://localhost:3001
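
Loading and validating these settings can be sketched as below. This is a hedged illustration in the spirit of utils/env_loader.py, whose real API is not shown here; the REQUIRED list and returned keys are assumptions for the example.

```python
# Minimal env loading/validation sketch. Variable names mirror the
# .env template above; the helper itself is hypothetical.
import os

REQUIRED = ["LLM_MODE", "POSTGRES_HOST", "POSTGRES_DB"]

def load_settings(environ=None) -> dict:
    env = environ if environ is not None else os.environ
    missing = [k for k in REQUIRED if not env.get(k)]
    if missing:
        raise RuntimeError(f"Missing required settings: {', '.join(missing)}")
    return {
        "llm_mode": env["LLM_MODE"],
        "ctx_size": int(env.get("LLM_CTX_SIZE", "8192")),
        "allow_remote": env.get("ALLOW_REMOTE_LLM", "0") == "1",
    }
```

Failing fast on missing keys at startup is cheaper than discovering a bad config mid-task.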

Running OrbitSuite

# Option 1: Main system entry point (recommended)
poetry run python main.py

# Option 2: FastAPI backend server
cd backend
poetry run uvicorn api:app --reload --port 8000

# Option 3: Next.js dashboard (separate terminal)
cd my-orbitsuite-dashboard
npm run dev
# Dashboard available at http://localhost:3000

# Option 4: Model router proxy (separate terminal)
poetry run python utils/model_router.py

Agent System

Available Agents

| Agent | Purpose | Status | Location |
|-------|---------|--------|----------|
| BaseAgent | Abstract base class for all agents | ✅ Stable | agents/base.py |
| Agent Generator | Dynamic agent creation & lifecycle management | ✅ Beta | agents/agent_generator.py |
| Ops Agent | System operations & cross-agent commands | ✅ Beta | agents/ops_handler.py |
| Orchestrator | Multi-agent coordination & task routing | ✅ Stable | agents/orchestrator_agent.py |
| Task Linguist | Natural language understanding & task parsing | ✅ Stable | agents/task_linguist.py |
| Engineer | System architecture (Core/Pro/Enterprise) | ✅ Stable | agents/engineer_*.py |
| CodeGen | Code generation and refactoring | ✅ Stable | agents/codegen.py |
| Patcher | Code patching and automated fixes | ✅ Stable | agents/patcher.py |
| Tester | Automated testing and QA | ✅ Stable | agents/tester.py |
| Designer | UI/UX analysis & asset generation | ✅ Beta | agents/designer.py |
| RAG Parser | Document processing & retrieval | ✅ Stable | agents/rag_parser.py |
| Security Guard | Code security analysis & validation | ✅ Stable | agents/security_guard.py |
| Sandbox Enforcer | Isolated execution environment management | ✅ Stable | agents/sandbox_enforcement.py |
| Entropy Monitor | System health tracking & anomaly detection | ✅ Stable | agents/entropy_monitor.py |

Engineer Agent Enhancements

The EngineerAgent now integrates three advanced analysis helpers (see agents/analysis/):

| Helper | File | Key Capabilities | Returned Fields |
|--------|------|------------------|-----------------|
| ComplexityAnalyzer | complexity_analyzer.py | Weighted multi-factor scoring (requirements, features, constraints, domain keywords); cross-cutting concern bonus | level, score, summary, factors[], category_breakdown{} |
| RiskAssessor | risk_assessor.py | Probability × impact matrix, sorted risks, distribution aggregation, top 5 | overall_risk_level, total_risk_score, risks[], top_risks[], risk_distribution, impact_distribution |
| TechStackSelector | tech_stack_selector.py | Rule-ordered customization (expertise → constraints → requirements); confidence & conflict detection | stack, rationale[], alternatives{}, confidence, constraint_conflicts[] |

Example engineer command payloads:

{ "command": "analyze", "project_type": "web_application", "spec": { "features": ["auth", "reporting"], "constraints": ["budget"], "scalability": "100k users year 1" } }
{ "command": "technology_stack", "project_type": "api_service", "team_expertise": ["python", "javascript"], "constraints": ["enterprise"], "requirements": [{"description": "real-time updates"}] }
{ "command": "assess_risk", "scope": "large", "timeline": "3 months", "team_size": 2, "budget": "low", "technology_maturity": "bleeding_edge" }

Each response augments legacy fields with: complexity_details, enriched risk_assessment, and stack confidence / constraint_conflicts.
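
The probability × impact scoring that RiskAssessor performs can be sketched as follows. The field names mirror the documented returned fields, but the scoring thresholds and the helper itself are assumptions, not the real risk_assessor.py implementation.

```python
# Illustrative probability × impact risk scoring. Thresholds for the
# overall level (4 and 10) are invented for this example.
def assess_risks(risks: list) -> dict:
    """Score each risk, sort descending, and surface the top 5."""
    scored = [dict(r, score=r["probability"] * r["impact"]) for r in risks]
    scored.sort(key=lambda r: r["score"], reverse=True)
    total = sum(r["score"] for r in scored)
    level = "high" if total > 10 else "medium" if total > 4 else "low"
    return {
        "overall_risk_level": level,
        "total_risk_score": total,
        "risks": scored,
        "top_risks": scored[:5],
    }
```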

Task Submission

Submit tasks via memory/tasks.json, Gist sync, or the dashboard:

{
  "tasks": [
    {
      "id": "task_001",
      "priority": 5,
      "agent": "codegen",
      "function": "generate_module",
      "arguments": {
        "description": "Create a REST API for user management",
        "language": "python",
        "framework": "fastapi",
        "features": ["authentication", "CRUD operations", "validation"]
      },
      "context": {
        "related_files": ["models/user.py", "schemas/user.py"],
        "requirements": ["async support", "JWT tokens"]
      }
    }
  ]
}
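
Priority scheduling with deduplication over this file format can be sketched as below; the supervisor's actual queue implementation is not shown in this README, so the helper is illustrative.

```python
# Parse a tasks.json payload, drop duplicate task ids (first occurrence
# wins), and order by priority, highest first. Hypothetical helper.
import json

def load_queue(raw: str) -> list:
    tasks = json.loads(raw)["tasks"]
    seen, unique = set(), []
    for t in tasks:
        if t["id"] in seen:
            continue  # deduplicate on task id
        seen.add(t["id"])
        unique.append(t)
    return sorted(unique, key=lambda t: t.get("priority", 0), reverse=True)
```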

Execution Flow

  1. Task Ingestion → Supervisor reads from tasks.json or Gist sync
  2. Context Hydration → MTS system provides relevant memory context
  3. Agent Dispatch → Orchestrator routes to appropriate agent via registry
  4. LLM Inference → Model router handles local/cloud LLM calls
  5. Security Validation → Security guards validate all outputs
  6. Memory Storage → Results stored in MTS memory tiers
  7. Completion Archival → Results saved to tasks.done.json
  8. Sync & Notify → Git commit, Gist upload, Discord notifications
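
The retry/recovery portion of this flow can be sketched as a simple loop. This is a hedged stand-in, not the actual supervisor_class.py logic, which also handles context hydration and archival.

```python
# Dispatch a task with bounded retries; report attempts and errors.
def run_with_retries(task: dict, dispatch, max_retries: int = 3) -> dict:
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            result = dispatch(task)
            return {"success": True, "result": result, "attempts": attempt}
        except Exception as exc:  # supervisor-style: catch everything, retry
            last_error = str(exc)
    return {"success": False, "error": last_error, "attempts": max_retries}
```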

Dashboard & Monitoring

Next.js Dashboard Features

  • Memory System Visualization — Real-time view of cache/buffer/pool tiers
  • Agent Status Monitoring — Live agent health and performance metrics
  • Task Queue Management — Interactive task submission and tracking
  • Log Streaming — Live log output with filtering and search
  • System Statistics — Memory usage, model status, and performance metrics

FastAPI Endpoints

# Memory System
GET /api/memory/stats
POST /api/memory/search
GET /api/memory/cache
GET /api/memory/buffer
GET /api/memory/pool

# Task Management
GET /api/tasks
POST /api/tasks/submit
GET /api/tasks/{task_id}

# Agent Control
GET /api/agents/status
POST /api/agents/{agent_id}/execute

# System Health
GET /api/health
GET /api/metrics

Memory System (MTS)

MemCube Structure

from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List, Union

@dataclass
class MemCube:
    id: str                    # UUID identifier
    payload: Union[str, Dict]  # Content data
    mem_type: str              # 'plaintext', 'parametric'
    
    created_at: datetime       # Creation timestamp
    updated_at: datetime       # Last modification
    last_accessed: datetime    # Last access time
    
    usage_count: int           # Access frequency
    version: int               # Version number
    provenance: str            # Origin tracking
    tags: List[str]            # Semantic tags
    lineage: List[str]         # Parent relationships

Memory Lifecycle

# Memory Manager handles automatic transitions
memory_manager = MemoryManager()
memory_manager.promote_cache_to_buffer()    # High-usage promotion
memory_manager.mature_buffer_to_pool()      # Age-based maturation
memory_manager.expire_cache_cubes()         # TTL cleanup
memory_manager.trim_context_pool()          # Size management

TokenCube 3D Organization

# Spatial memory organization (token_bath is a TokenBath batch; see shared_memory/token_bath.py)
token_cube = TokenCube()
token_cube.insert(
    bath=token_bath,
    x="cache",           # Memory tier
    y="codegen",         # Semantic cluster
    z="2025-W03"         # Temporal bin
)

# Query with criteria
results = token_cube.query({
    "tier": "buffer",
    "topic": "summarize",
    "limit": 10
})
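
The 3D organization above amounts to indexing cubes by a (tier, topic, time-bin) key. The mini class below illustrates that scheme under assumed names; the real token_cube.py API may differ.

```python
# Hypothetical 3D keying behind TokenCube-style spatialization.
from collections import defaultdict

class MiniTokenCube:
    def __init__(self):
        self.cells = defaultdict(list)   # (x, y, z) -> list of baths

    def insert(self, bath, x: str, y: str, z: str) -> None:
        self.cells[(x, y, z)].append(bath)

    def query(self, tier=None, topic=None, limit: int = 10) -> list:
        hits = []
        for (x, y, _z), baths in self.cells.items():
            if tier is not None and x != tier:
                continue
            if topic is not None and y != topic:
                continue
            hits.extend(baths)
        return hits[:limit]
```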

Security & Compliance

Security Features

  • Code Protection Policy — Built-in protection against destructive modifications
  • Sandboxed Execution — Isolated environments for code generation and testing
  • Input Validation — Comprehensive sanitization of all user inputs
  • Output Filtering — Security scanning of all generated code
  • Agent Isolation — Memory and execution isolation between agents
  • Audit Logging — Complete audit trail with structured logging
  • Policy Verbose Logs — High-fidelity event logging for protected asset checks
  • Archive Pipeline — Rotated logs aggregated & lifecycle managed (utils/logrotate.py, archives/archiver.py)

Memory Security

  • MemCube Immutability — Versioned memory with lineage tracking
  • Access Control — Usage-based access patterns and rate limiting
  • Data Encryption — Secure storage of sensitive memory content
  • Backup & Recovery — Automated memory snapshots and rollback capability

Production Deployment

Docker Support

# Build and run with Docker Compose
docker-compose build
docker-compose up -d

# Scale services
docker-compose up -d --scale backend=3

Cloud Deployment

Supports deployment on:

  • AWS (EC2, RDS, ElastiCache, ECS)
  • Azure (VM, SQL Database, Redis Cache)
  • Google Cloud (Compute Engine, Cloud SQL)
  • On-premises (Docker, Kubernetes)

Performance Optimization

# Inference / Acceleration (Hardware Agnostic)
export LLM_PARTITION=auto              # Adaptive offload / sharding
export LLM_DEVICE_MAP=auto             # Or explicit: gpu0,gpu1,gpu2
export CUDA_VISIBLE_DEVICES=0,1        # Example; omit if scheduler-managed
# export LLM_PARALLELISM=tensor         # tensor | pipeline | none

# Memory / Context (Illustrative Only)
export MEMORY_CACHE_SIZE=1000
export MEMORY_BUFFER_SIZE=5000
export MEMORY_POOL_SIZE=10000
export PROMOTION_THRESHOLD=5

# Database Pooling (Tune for concurrency)
export POSTGRES_POOL_SIZE=20
export REDIS_POOL_SIZE=50

Guideline (illustrative, not prescriptive):

  • Workstation (≤24GB VRAM): quantized or partial offload
  • High-memory GPU (≥80GB): full or mixed precision
  • Multi-GPU single node: enable tensor/pipeline parallel
  • Multi-node / cluster: use remote tunneling or distributed adapter

Development & Testing

Running Tests

# Run all tests
poetry run pytest

# Run specific test categories
poetry run pytest tests/unit/
poetry run pytest tests/integration/
poetry run pytest tests/agents/

# Generate coverage report
poetry run pytest --cov=. --cov-report=html

Development Setup

# Install development dependencies
poetry install --with dev

# Run code formatting
black .
isort .

# Run type checking
mypy .

# Run linting
flake8 .

Custom Agent Development

from agents.base import BaseAgent
from agents.mixins.mts_context_mixin import MTSContextMixin

class CustomAgent(MTSContextMixin, BaseAgent):
    def __init__(self):
        super().__init__(name="custom")
        self.version = "1.0.0"
    
    async def run(self, task_data: dict) -> dict:
        # Access MTS context
        context = await self.hydrate_context(task_data.get("context"))
        
        # Your agent logic here
        result = await self.process_task(task_data, context)
        
        # Store results in memory
        await self.store_result(result)
        
        return {"success": True, "result": result}
    
    def validate_input(self, data: dict) -> bool:
        return "description" in data

# Register agent in orchestrator
from agents.orchestrator.loader import register_agent
register_agent("custom", CustomAgent)

Architecture Highlights

Model & Infrastructure Agnosticism

OrbitSuite is vendor, model, hardware, and cloud agnostic:

  • Pluggable Adapters — Load any local engine (llama.cpp, vLLM, TensorRT, ONNX Runtime, custom) or remote API (OpenAI-compatible, Anthropic, Azure, self-hosted gateways) via a common abstraction.
  • Policy-Based Adaptive Model Router — Optimize for latency, cost, reliability, throughput, or compliance constraints.
  • Seamless Scaling — Run on consumer silicon, edge devices, single GPUs, multi-GPU rigs, or tunneled multi-datacenter clusters without changing task definitions.
  • Remote Compute Tunneling — Heavy workloads can transparently execute on external or federated infrastructure; results stream back into the same orchestration context.
  • Acceleration Neutral — CPU-only, GPU, multi-GPU, or accelerator (TPU / ASIC) paths when adapters expose them.
  • Data & Compliance Hooks — Future policy integration for locality / residency-aware routing.
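
The "common abstraction" over local and remote engines can be sketched as a structural interface. Everything below is hypothetical, not the project's adapter API: any object exposing generate() slots in, regardless of whether it wraps llama.cpp, vLLM, or a remote gateway.

```python
# Minimal pluggable-adapter sketch using a structural Protocol.
from typing import Protocol

class ModelAdapter(Protocol):
    name: str
    def generate(self, prompt: str, max_tokens: int = 256) -> str: ...

class EchoLocalAdapter:
    """Stand-in for a local engine; real adapters would call a runtime."""
    name = "local-echo"

    def generate(self, prompt: str, max_tokens: int = 256) -> str:
        return prompt[:max_tokens]

def infer(adapter: ModelAdapter, prompt: str) -> str:
    # Local or remote, the orchestration layer sees the same call.
    return adapter.generate(prompt)
```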

Disclaimers: All configuration values here are illustrative. No specific vendor, model family, or hardware profile is required or assumed.

Further reading: see MODEL_INTEGRATION.md and SCALING_GUIDE.md for adapter patterns & deployment topologies.

Memory-First Design

OrbitSuite is built around persistent memory as a first-class citizen. The MTS (Mnemonic Token System) provides:

  • Semantic Continuity — Context preserved across sessions and restarts
  • Intelligent Retrieval — Vector-based similarity search for relevant context
  • Lifecycle Management — Automated promotion/demotion based on usage patterns
  • Provenance Tracking — Full lineage and versioning for all memory content
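
Vector-based retrieval ultimately reduces to nearest-neighbour search over embeddings. The pure-Python cosine sketch below stands in for the sentence-transformers pipeline; the corpus layout is an assumption for illustration.

```python
# Cosine-similarity top-k retrieval over (cube_id, embedding) pairs.
import math

def cosine(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_k(query_vec: list, corpus: list, k: int = 3) -> list:
    """Return cube ids ranked by similarity to the query embedding."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [cube_id for cube_id, _ in ranked[:k]]
```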

Async Agent Runtime

All agents inherit from BaseAgent and implement async execution:

  • Non-blocking Execution — Coroutine-based agent dispatch
  • Memory Integration — Automatic MTS context injection
  • Error Recovery — Built-in retry logic and fallback mechanisms
  • Performance Monitoring — Comprehensive logging and telemetry

Modular Orchestration

The orchestrator system provides intelligent task routing:

  • Agent Registry — Dynamic agent discovery and registration
  • Task Classification — Automatic routing based on task analysis
  • Dependency Management — Task ordering and workflow coordination
  • Load Balancing — Intelligent distribution across available agents

System Status

| Component | Status | Features | Performance |
|-----------|--------|----------|-------------|
| Supervisor Agent | 🟢 Production | Task management, error recovery | Excellent |
| MTS Memory System | 🟢 Production | 3-tier architecture, lifecycle mgmt | Excellent |
| Agent Orchestrator | 🟢 Production | Multi-agent coordination | Excellent |
| LLM Router | 🟢 Production | Local/cloud failover | Good |
| FastAPI Backend | 🟢 Production | RESTful API, CORS support | Good |
| Next.js Dashboard | 🟢 Production | Real-time monitoring | Good |
| Security System | 🟢 Production | Multi-layer validation | Excellent |
| Git/Gist Sync | 🟢 Production | Auto-commit, diff-based | Good |
| Discord Integration | 🟢 Production | Live notifications | Good |
| Database Layer | 🟢 Production | Multi-DB support, pooling | Excellent |

Current Architecture

Project Structure

OrbitSuite/C_O_A/PRODUCTION/
├── agents/                    # Agent implementations
│   ├── base.py               # BaseAgent async foundation
│   ├── orchestrator/         # Multi-agent coordination
│   ├── mixins/              # MTS context mixins
│   └── *.py                 # Specialized agents
├── memory/                   # MTS memory system
│   ├── memcube.py           # Core memory unit
│   ├── memory_manager.py    # Lifecycle management
│   ├── context_cache/       # Fast memory tier
│   ├── context_buffer/      # Mid-term queue
│   └── context_pool/        # Long-term storage
├── shared_memory/           # Memory coordination
│   ├── token_cube.py        # 3D memory organization
│   ├── token_bath.py        # Memory batching
│   └── memory_lifecycle.py  # Lifecycle coordination
├── utils/                   # Core utilities
│   ├── env_loader.py        # Environment management
│   ├── model_router.py      # LLM routing
│   ├── agent_logger.py      # Structured logging
│   └── *.py                 # Various utilities
├── backend/                 # FastAPI backend
│   ├── api.py              # Main API server
│   └── routers/            # API route modules
├── my-orbitsuite-dashboard/ # Next.js frontend
│   ├── pages/api/          # API routes
│   ├── src/components/     # React components
│   └── utils/              # Frontend utilities
├── data/                   # Data management
├── conductor.py           # MTS conductor
├── supervisor_class.py    # Main supervisor
└── main.py               # System entry point

Contact & Support

Aaron McCarthy
Founder & Chief Architect
OrbitSuite
aaron@orbitsuite.cloud
@SyntacticLuster
LinkedIn

OrbitSuite, Inc.

📍 Headquarters: Buffalo, NY
🌐 Website: orbitsuite.cloud
📞 Phone: +1 (716) 254-8282
📧 Business: cockpit@orbitsuite.cloud


© 2025 OrbitSuite, Inc. All rights reserved.

We're building this to last.


About

Agentic AI runtime. Open-core, local or cloud LLM-powered agents.
