REST API · 240+ API routes

API Reference

Complete reference for all LYDOS REST API endpoints. Locally the server listens on port 8888; in production it is served at https://lydos.ailydian.com.

Base URL

Local development: http://localhost:8888
Production: https://lydos.ailydian.com

Authentication

All API requests require an API key passed as a Bearer token in the Authorization header. Keys can be scoped to specific capabilities: agents, chat, memory, engines, or admin.

POST /api/auth/keys

Create a new API key with granular scope permissions.

Request body
{
  "name": "my-application",
  "scopes": ["agents", "chat", "memory"],
  "expires_in_days": 365  // optional, null = never expires
}
Example response
{
  "key": "lyd_sk_a1b2c3d4e5f6...",
  "id": "key_01HQ4X...",
  "name": "my-application",
  "scopes": ["agents", "chat", "memory"],
  "created_at": "2026-03-23T10:00:00Z",
  "expires_at": "2027-03-23T10:00:00Z"
}
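Before attempting a scoped call, a client can check a stored key record locally. A minimal sketch; the rule that `admin` implies every other scope is an assumption here, not documented server behavior:

```python
def has_scopes(key_info: dict, required: set) -> bool:
    """Return True if the key record grants every scope in `required`.

    Assumption: `admin` is treated as implying all other scopes;
    verify against actual server behavior.
    """
    scopes = set(key_info.get("scopes", []))
    return "admin" in scopes or required <= scopes

key = {"name": "my-application", "scopes": ["agents", "chat", "memory"]}
# has_scopes(key, {"chat", "memory"}) → True
# has_scopes(key, {"engines"})        → False
```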
authenticated-request.sh
# All requests require Authorization: Bearer <key>
curl -H "Authorization: Bearer lyd_sk_your_key_here" \
     -H "Content-Type: application/json" \
     https://lydos.ailydian.com/api/health
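The same authenticated request can be prepared from Python with only the standard library. A sketch; the key value is a placeholder:

```python
import urllib.request

def build_request(base_url: str, path: str, api_key: str) -> urllib.request.Request:
    """Build a GET request carrying the Bearer key, ready for urlopen()."""
    return urllib.request.Request(
        base_url + path,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request("https://lydos.ailydian.com", "/api/health", "lyd_sk_your_key_here")
# urllib.request.urlopen(req) would perform the call
```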

Core Endpoints

The core endpoints provide health monitoring, system status, agent management, task execution, and LLM access. All are served under the root /api prefix.

Method  Path                        Description
GET     /api/health                 29-module health check — returns score, module statuses, uptime, and provider availability
GET     /api/status                 Full system status: all module states, active task queue, engine catalogue, and LLM provider chain
GET     /api/agents                 List all 109 registered agents with capabilities, categories, and parameters
GET     /api/agents/{id}            Get full details for a specific agent by ID
POST    /api/agents/{type}/run      Execute an agent — returns a task ID for async polling
GET     /api/tasks/{id}             Poll an async task's status, progress, and result
GET     /api/tasks                  List all tasks with optional status, agent, and date filters
POST    /api/groq/chat              Direct Groq LLM chat (llama-3.3-70b-versatile) — streaming supported
POST    /api/llm/chat               Multi-provider LLM chat with automatic failover (Groq → Z.AI → Claude)
POST    /api/memory/store           Store a key/value pair in semantic memory with optional tags and TTL
GET     /api/memory/search          FTS5-powered semantic memory search — returns ranked results
POST    /api/harika/analyze         HARiKA agent analysis — deep code + architecture review
POST    /api/harika/ultra/analyze   8-agent UltraTeam deep analysis — parallel multi-perspective review
GET     /api/auth/keys              List API keys for the authenticated user
POST    /api/auth/keys              Create a new API key with scoped permissions

GET /api/health

Returns a full 29-module health check with scores, module statuses, LLM provider availability, and server uptime.

Example response
{
  "status": "operational",
  "score": 98,
  "uptime_seconds": 172800,
  "version": "11.2.0",
  "modules": 29,
  "agents_available": 109,
  "engines_active": 43,
  "latency_ms": 12,
  "providers": {
    "groq":       { "status": "ok",   "model": "llama-3.3-70b-versatile" },
    "zai":        { "status": "ok",   "model": "glm-4.5-air" },
    "claude":     { "status": "ok",   "model": "claude-sonnet-4-6" },
    "cloudbrain": { "status": "ok",   "model": "qwen3-32b" }
  },
  "module_statuses": {
    "kernel_loader":     "healthy",
    "agent_manager":     "healthy",
    "llm_router":        "healthy",
    "semantic_memory":   "healthy",
    "security_hardening":"healthy"
  }
}
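A monitoring script might reduce this payload to provider availability. A minimal sketch over the fields shown above; status values other than "ok" are illustrative:

```python
def summarize_providers(health: dict) -> dict:
    """Group provider names by their reported status."""
    summary: dict = {}
    for name, info in health.get("providers", {}).items():
        summary.setdefault(info.get("status", "unknown"), []).append(name)
    return summary

health = {
    "providers": {
        "groq": {"status": "ok"},
        "zai": {"status": "ok"},
        "claude": {"status": "degraded"},
    }
}
# summarize_providers(health) → {"ok": ["groq", "zai"], "degraded": ["claude"]}
```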

POST /api/llm/chat

Multi-provider LLM chat with automatic 4-tier failover. Supports streaming via Server-Sent Events.

Request body
{
  "messages": [
    { "role": "user", "content": "Explain multi-agent orchestration" }
  ],
  "model": "llama-3.3-70b-versatile",  // optional — uses primary if omitted
  "max_tokens": 1024,                   // optional
  "temperature": 0.7,                   // optional
  "stream": false                       // optional — SSE if true
}
Example response
{
  "content": "Multi-agent orchestration is the coordination of...",
  "model": "llama-3.3-70b-versatile",
  "provider": "groq",
  "tokens": { "prompt": 18, "completion": 256, "total": 274 },
  "latency_ms": 1840,
  "cost_usd": 0.000082
}
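With "stream": true the endpoint responds with Server-Sent Events instead of a single JSON body. A minimal parser sketch for the data: lines of such a stream; the [DONE] terminator follows the common OpenAI-style convention and is an assumption here, as is the chunk shape:

```python
import json

def parse_sse_lines(lines):
    """Yield decoded JSON payloads from `data:` lines of an SSE stream.

    Assumption: the stream ends with a `data: [DONE]` sentinel; LYDOS
    may use a different terminator.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)
```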

POST /api/agents/{type}/run

Execute any registered agent by its type ID. Returns immediately with a task ID for async polling.

Request body
{
  "task": "Scan the repository for top-10 security vulnerabilities",
  "params": {
    "path": "./src",
    "depth": "comprehensive",
    "output_format": "json"
  },
  "timeout_seconds": 120,  // optional
  "priority": "normal"     // optional: low | normal | high
}
Example response
{
  "task_id": "task_01HR7X2NB8KM4...",
  "agent":   "security-scanner",
  "status":  "pending",
  "created_at": "2026-03-23T10:00:00Z",
  "estimated_seconds": 45
}
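Since execution is asynchronous, clients poll GET /api/tasks/{id} until the task settles. A sketch with the HTTP call injected as a callable; the terminal status names beyond the documented "pending" are assumptions:

```python
import time

def wait_for_task(fetch_task, task_id: str, interval: float = 1.0,
                  timeout: float = 120.0) -> dict:
    """Poll `fetch_task(task_id)` until the task leaves the
    pending/running states or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while True:
        task = fetch_task(task_id)
        if task["status"] not in ("pending", "running"):
            return task
        if time.monotonic() + interval > deadline:
            raise TimeoutError(f"task {task_id} still {task['status']}")
        time.sleep(interval)
```

In real use `fetch_task` would wrap an authenticated GET to /api/tasks/{id}; injecting it keeps the loop testable and transport-agnostic.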

POST /api/memory/store

Persist a key/value pair in the semantic memory store with optional tags and TTL.

Request body
{
  "key": "project:auth-analysis",
  "value": "JWT tokens expire in 24h, refresh tokens in 30d...",
  "tags": ["auth", "jwt", "security"],
  "ttl_seconds": 86400  // optional — null = persist forever
}
Example response
{
  "id": "mem_01HR...",
  "key": "project:auth-analysis",
  "stored_at": "2026-03-23T10:00:00Z",
  "expires_at": "2026-03-24T10:00:00Z"
}
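The expires_at in the response is simply stored_at plus ttl_seconds; the same arithmetic client-side, as a sketch:

```python
from datetime import datetime, timedelta

def expiry_from_ttl(stored_at_iso: str, ttl_seconds):
    """Compute expires_at from stored_at + ttl_seconds.
    Returns None for ttl_seconds=None (persist forever)."""
    if ttl_seconds is None:
        return None
    stored = datetime.fromisoformat(stored_at_iso.replace("Z", "+00:00"))
    expires = stored + timedelta(seconds=ttl_seconds)
    return expires.isoformat().replace("+00:00", "Z")

# expiry_from_ttl("2026-03-23T10:00:00Z", 86400) → "2026-03-24T10:00:00Z"
```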

Q-Engine Endpoints

Each Q-Series engine exposes its own set of endpoints under the /api/qN/ prefix. The table below lists representative endpoints across 25 key engines. See the full Q-Engine Catalog for all 205 engines and their complete endpoint surfaces.

Method  Path                              Description
POST    /api/q25/plugins/install          Install a plugin via PluginManager with lifecycle hooks and ToolRegistry integration
POST    /api/q26/workflow/create          Create and validate a workflow DAG using Kahn topological sort
POST    /api/q26/workflow/{id}/run        Execute a workflow DAG — returns execution trace with VariablePool states
POST    /api/q27/search                   FTS5 full-text search across JSONL session transcripts with secret redaction
POST    /api/q28/geo/score                CitabilityScorer — returns word count, AI crawler detection, and structured data validation
POST    /api/q29/agent/run                Agent Hub — run with configurable strategy: FC, ReAct, Plan & Execute, or Chain of Thought
GET     /api/q30/skills/search            Search 1254+ skills by keyword, category, domain, or agent-match criteria
POST    /api/q32/hunt/plan                Bug Bounty — generate a hunt plan across 20 VulnClass categories with industry standard alignment
POST    /api/q32/report                   Generate a professional bug bounty report for HackerOne or Bugcrowd
POST    /api/q34/deerflow/delegate        Delegate research tasks to deep research agent with SSE streaming
POST    /api/q40/tunnel/create            Create a WireRift tunnel — returns public URL for localhost service exposure
POST    /api/q41/chatterbox/synthesize    Text-to-speech in 23 languages with optional voice cloning
POST    /api/q42/evaluate                 Evaluate LLM output quality with rubric-based scoring and benchmark comparison
POST    /api/q45/goals/execute            Execute TELOS goals autonomously — dependency resolution and progress tracking
POST    /api/q54/debate                   Multi-agent structured debate — 5 agents, configurable rounds, consensus extraction
GET     /api/q62/dashboard/agents         Real-time WebSocket agent monitoring — live logs, metrics, pause/kill/redirect controls
POST    /api/q63/users/register           Multi-user registration with JWT Bearer auth, RBAC roles (ADMIN/ENGINEER/VIEWER)
POST    /api/q141/audit/scan              Production Audit Engine — A-Z scan: code quality, security, infra, compliance
POST    /api/q155/sentinel/orchestrate    Sentinel Orchestrator — coordinate all 14 sentinel engines in priority swarm mode
POST    /api/q159/llm/chat                Universal LLM Gateway — litellm with circuit breaker, per-model budget, and retry
GET     /api/q169/kernel/status           Agent OS Kernel — AIOS status with 7 subsystems: scheduler, IPC, VFS, memory, net, security, telemetry
POST    /api/q188/constitutional/check    Constitutional Guard — check against 10 safety policies with dual-path verification
POST    /api/q193/security/analyze        Security Fortress — request fingerprinting, threat scoring, and top-10 vulnerability protection
POST    /api/q198/factory/generate        Skill Factory — 5-phase pipeline to generate and publish new skills from spec
POST    /api/q204/auth/google             Google OAuth 2.0 authentication with JWT HS256, CSRF protection, and refresh tokens

Rate limiting

Rate limits apply per API key and vary by endpoint category. The Retry-After header is set on 429 responses.

Endpoint category     Limit      Window
Health & Status       600 req    per minute
Chat & LLM            120 req    per minute
Agent execution       60 tasks   per minute
Memory operations     300 req    per minute
Q-Engine endpoints    240 req    per minute
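A client can honor Retry-After on 429 responses before retrying. A sketch with the transport injected for testability; the response here is a plain dict standing in for a real HTTP response object:

```python
import time

def call_with_backoff(send, max_attempts: int = 3, sleep=time.sleep) -> dict:
    """Call `send()` and, on a 429, sleep for the advertised
    Retry-After duration before trying again."""
    for _ in range(max_attempts):
        resp = send()
        if resp["status"] != 429:
            return resp
        sleep(float(resp.get("retry_after", 1.0)))
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```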

Error handling

All errors return a JSON body with a detail field. Validation errors (422) include a structured array of field-level messages.

error-response.json
// Standard error response
{
  "detail": "Agent 'security-scanner' failed: invalid path './nonexistent'",
  "error_code": "AGENT_EXECUTION_ERROR",
  "task_id": "task_01HR7X2NB8KM4",
  "timestamp": "2026-03-23T10:00:00Z"
}

// Validation error (422) response
{
  "detail": [
    {
      "type": "missing",
      "loc": ["body", "task"],
      "msg": "Field required",
      "input": {}
    }
  ]
}
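The structured 422 detail array can be flattened into readable messages. A minimal sketch matching the shape above:

```python
def format_validation_errors(detail: list) -> list:
    """Render each Pydantic-style error as 'dotted.location: message'."""
    return [
        ".".join(str(part) for part in err["loc"]) + ": " + err["msg"]
        for err in detail
    ]

# [{"loc": ["body", "task"], "msg": "Field required", ...}]
#   → ["body.task: Field required"]
```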
Status  Meaning                 Description
200     OK                      Request succeeded
201     Created                 Resource created (POST responses)
400     Bad Request             Invalid request body or missing required fields
401     Unauthorized            Missing or invalid API key
403     Forbidden               Valid key but insufficient scope/role
404     Not Found               Agent, task, or resource ID not found
422     Unprocessable Entity    Pydantic validation error — see detail array
429     Too Many Requests       Rate limit exceeded — see Retry-After header
500     Internal Server Error   Unexpected engine failure — logged to observability
503     Service Unavailable     LLM provider unavailable — failover in progress

OpenAPI specification

The full OpenAPI 3.1 specification is generated automatically by the Q196 API Documentation engine and is available at the following endpoints on any running LYDOS server:

Swagger UI (interactive browser): http://localhost:8888/docs
ReDoc (readable reference): http://localhost:8888/redoc
JSON spec (raw OpenAPI 3.1): http://localhost:8888/openapi.json